\section{Introduction} The exchange bias (EB) effect has been extensively investigated over the past 60 years due to its applicability in high-density magnetic recording, giant magnetoresistance and spin valve devices \cite{Dagotto}. It is a non-equilibrium phenomenon caused by the uncompensated coupling at the interface between different magnetic components \cite{Nogues}, and is thus observed in magnetic heterostructures such as ferromagnetic (FM)-antiferromagnetic (AFM), FM-spin glass (SG), FM-ferrimagnetic (FIM), AFM-FIM, FIM-SG-AFM, etc. The EB effect manifests as a shift of the magnetic hysteresis loop along the field axis, conventionally observed after cooling the system in the presence of a static magnetic field ($H$) from above its magnetic transition temperature ($T$). For many decades, the cooling $H$ was understood as a necessary protocol to set the unidirectional anisotropy (UA) by breaking the symmetry of the interface moments, and any shifted hysteresis loop observed after zero-field-cooling (ZFC) the system was naturally attributed to experimental artifacts and/or minor loop effects \cite{Geshev}. In 2007, J. Saha \textit{et al.} proposed a model predicting UA in a ZFC FM-AFM system, induced by the application of the first field during the magnetization as a function of field [$M(H)$] measurement (the so-called virgin curve) \cite{Saha}. Since then, robust ZFC EB (ZEB) has been experimentally observed in different classes of materials such as polycrystalline oxides \cite{Giri,CoIr_PRB}, nanocomposites \cite{Maity}, alloys \cite{Wang,Nayak} and films \cite{Prieto}. The presence of SG-like phases, concomitant with other conventional phases such as FM/AFM, seems to be a common feature of many of these materials, and we have recently shown the significant influence of glassy magnetism on the ZEB effect observed in double-perovskite (DP) compounds \cite{ZEBmodel}.
Due to their intrinsic inhomogeneity, DP materials usually present structural and magnetic disorder \cite{Vasala,Serrate}, which may lead to competing magnetic interactions and frustration. These characteristics are key ingredients for the appearance of SG-like behavior, and it is not surprising that the great majority of ZEB materials have perovskite structure \cite{Giri,CoIr_PRB,Maity,Huang,Xie,Murthy,CaCoMn_JMMM}. Among the ZEB materials discovered so far, La$_{1.5}$Sr$_{0.5}$CoMnO$_{6}$ (LSCMO) presents the largest shift of the $M(H)$ curve \cite{Murthy}. Interestingly, we showed that replacing Sr by Ca also gives rise to a ZEB material, but the smaller Ca ionic radius leads to a shift of the $M(H)$ loop that is one order of magnitude smaller than that of the Sr-based compound \cite{CaCoMn_JMMM}. This exemplifies the strong correlation between structural, electronic and magnetic properties in these DP systems. Although it is well established that re-entrant SG (RSG) behavior is necessary for the observation of the ZEB effect in these materials \cite{ZEBmodel}, the microscopic mechanisms responsible for the very different magnetic properties observed in similar compounds are not well understood. In order to gain further insight into this question, we thoroughly investigated La$_{1.5}$A$_{0.5}$CoMnO$_{6}$ (A = Ba, Ca, Sr) polycrystalline samples by means of synchrotron X-ray powder diffraction (SXRD), Raman spectroscopy, muon spin rotation and relaxation ($\mu$SR), AC and DC magnetization, X-ray absorption spectroscopy (XAS) at the Co- and Mn-$K$ and $L_{3,2}$ edges, as well as X-ray magnetic circular dichroism (XMCD) at the Co- and Mn-$L_{3,2}$ edges. We show that La$_{1.5}$Ba$_{0.5}$CoMnO$_{6}$ (LBCMO) exhibits a giant ZEB effect, although not as large as that observed for LSCMO. The parent compound La$_{2}$CoMnO$_{6}$ (LCMO), which is a non-RSG and non-ZEB material, was also investigated for comparison.
For the SXRD measurements, we carried out anomalous scattering with energy $E$ = 6500 eV, in order to maximize the difference between the Co and Mn scattering factors and investigate whether these ions are ordered along the lattice or not. Our results indicate disorder of Co and Mn in all samples, and the Ba-, Ca- and Sr-doped samples present a small amount of phase segregation. Since the ZEB effect is observed at low $T$, we also performed SXRD at several $T$ with $E$ = 9000 eV, and the results indicate that the larger ZEB effect observed for LBCMO and LSCMO is related to their more symmetric crystal structures. These results are corroborated by the Raman spectroscopy measurements. The XAS results indicate mixed valence states Co$^{2+}$/Co$^{3+}$ and Mn$^{4+}$/Mn$^{3+}$ in all samples, and also that Ba$^{2+}$/Ca$^{2+}$/Sr$^{2+}$ partial substitution at the La$^{3+}$ site mainly increases the Co average valence, while the increase of the Mn formal valence is very subtle. Oxygen vacancies (OV) are present in all samples and may play an important role in their magnetic properties. All samples present two FM transitions, attributed to Co$^{2+}$--O--Mn$^{4+}$ and Co$^{3+}$--O--Mn$^{3+}$ couplings. For the Ba-, Ca- and Sr-doped compounds we observed anomalies in the temperature dependence of the magnetization [$M(T)$], which the $\mu$SR results confirm to be related to the onset of a third magnetic transition to a spin-glassy state, possibly caused by the Co$^{3+}$--O--Mn$^{4+}$ AFM coupling. Our XMCD results show that the shift of the hysteresis curves is related to uncompensated AFM coupling between Co and Mn. The strong decrease of the Co magnetic moment observed for the Sr-based compound, induced by the increased proportion of low-spin Co$^{3+}$, would augment this uncompensation and result in its largest ZEB effect. For La$_{1.5}$Ca$_{0.5}$CoMnO$_{6}$ (LCCMO) this AFM coupling is nearly compensated, resulting in a small $H_{EB}$.
\section{Experimental details} Polycrystalline samples of LCMO, LBCMO, LCCMO and LSCMO were synthesized by conventional solid state reaction, as described in the Supplementary Material (SM) \cite{SM}. The SXRD data were recorded at the XPD beamline of the Brazilian Synchrotron Light Laboratory (LNLS) using the Bragg-Brentano geometry. The anomalous scattering measurements were performed at room temperature using wavelength $\lambda$ = 1.9074 $\textrm{\AA}$, and the XRD patterns were obtained with a one-dimensional Mythen-1K detector (Dectris). In order to further investigate the Co/Mn cationic order in the LCMO sample, we also performed room temperature SXRD with $\lambda$ = 0.6525 $\textrm{\AA}$ at the x-ray diffraction and spectroscopy (XDS) beamline of LNLS \cite{XDS}. The low-$T$ measurements were performed at $\lambda$ = 1.3776 $\textrm{\AA}$ using a DE-202 cryostat (ARS Cryo), and the XRD patterns were obtained with a HOPG(002) analyser. The Rietveld refinements were performed using the program GSAS+EXPGUI \cite{GSAS}. The AC and DC magnetic measurements were carried out using a Quantum Design PPMS-VSM magnetometer. In order to prevent trapped currents in the magnet and ensure a reliable ZFC process, from one measurement to another the samples were warmed up to the paramagnetic state and the coil was demagnetized in the oscillating mode. Unpolarized Raman scattering measurements were performed at several $T$ on a Jobin Yvon T64000 triple 1800 mm$^{-1}$ grating spectrometer equipped with a liquid N$_{2}$-cooled multichannel CCD detector. The excitation was achieved with a 488 nm Ar$^{+}$ laser line in a quasi-backscattering configuration. Muon spin rotation and relaxation ($\mu$SR) experiments were performed at the Swiss Muon Source of the Paul Scherrer Institut, Switzerland, using the nearly 100$\%$ spin-polarized positive muon beam at the GPS instrument.
The measurements were performed on the LBCMO, LCCMO and LSCMO powder samples in zero field (ZF) and weak transverse field (wTF, field applied perpendicular to the initial muon spin direction) modes. The data were acquired at several $T$ between 5 and 300 K. The wTF experiments were performed under a field of $H$ = 50 Oe. XAS measurements at the Co- and Mn-$K$ edges were performed at room temperature at the dispersive X-ray absorption (DXAS) beamline of LNLS \cite{DXAS}, and the Co- and Mn-$L_{3,2}$ XAS and XMCD spectra were recorded at room- and low-$T$ at the Planar Grating Monochromator (PGM) beamline of LNLS \cite{PGM}, in total electron yield (TEY) mode. The XAS and XMCD measurements were performed on powdered samples, but prior to powdering the bulk, the pellets' surfaces were scraped in order to avoid the influence of surface redox effects on the results. The circular polarization rate for the XMCD was approximately 80\%. \section{Results and discussions} \subsection{Synchrotron X-ray diffraction} The LCMO compound has been the subject of great academic interest since 1955, when J. B. Goodenough predicted FM insulating behavior for it \cite{Goodenough}. More recently, the interest was renewed with the discovery of room temperature magnetodielectric behavior \cite{Singh}, which is generally attributed to charge order of Co$^{2+}$ and Mn$^{4+}$ \cite{Chen,Blasco}. However, the presence of at least a small fraction of Co$^{3+}$ and Mn$^{3+}$ is frequently observed in bulk LCMO samples \cite{Dass,Burnus}, and there is an open debate on whether the Co and Mn ions are ordered along the lattice or not \cite{Chen,Singh,Blasco,Goodenough,Burnus,Joy,Fournier}.
The controversy arises because the Co and Mn scattering factors are very similar, and consequently the conventional XRD results can be interpreted in terms of the monoclinic $P2_1/n$ space group, where Co and Mn alternate along the lattice, as well as in terms of the orthorhombic $Pnma$ space group, corresponding to a disordered population of the B site. To address this issue we performed anomalous scattering measurements on LCMO using energy $E$ = 6500 eV, chosen to maximize the difference between the Co and Mn scattering factors. The small differences between the $P2_1/n$ and $Pnma$ diffraction patterns correspond to very weak reflections that are expected for the monoclinic symmetry, but not for the orthorhombic one. Long-counting measurements were carried out around the 2$\theta$ regions where these peaks are expected to appear, and no such reflections were observed. To investigate this further, we also performed room temperature SXRD with $E$ = 19 keV, and again no trace of the Bragg peaks associated with the $P2_1/n$ space group was observed (see SM \cite{SM}), thus giving strong evidence that our LCMO sample crystallizes in the $Pnma$ space group, corresponding nominally to a La(Co$_{0.5}$Mn$_{0.5}$)O$_{3}$ perovskite. Accordingly, all subsequent analysis of this sample assumes orthorhombic symmetry. SXRD measurements with $E$ = 6500 eV were also performed for the Ba-, Ca- and Sr-doped compounds. For these samples the presence of two crystallographic phases is clear, as already proposed for similar compounds \cite{Vashook,Taraphder,Xu,Berger}. For LCCMO, as observed for LCMO, the weak reflections expected for the $P2_1/n$ space group were not found. Therefore, the XRD pattern could be successfully refined with two $Pnma$ phases (94\% and 6\%) with almost equal cell volumes.
The fact that both LCMO and LCCMO belong to the same space group is expected, since the Ca$^{2+}$ and La$^{3+}$ ionic radii are very close in XII coordination (1.34 and 1.36 \AA, respectively \cite{Shannon}). For LSCMO the SXRD data could be successfully refined with mixed rhombohedral $R\bar{3}c$ (91\%) and orthorhombic $Pnma$ (9\%) phases, while for LBCMO the Ba doping brought the whole system to the rhombohedral phase, the crystal structure being successfully refined with two $R\bar{3}c$ phases (96\% and 4\%). The phase segregation may be related to the formation of different domains rich in La-Ba/Ca/Sr and/or Co-Mn. Since the EB effect is generally attributed to the magnetic coupling at the interface of different magnetic phases, the presence of distinct crystallographic phases may be related to the ZEB observed in such materials. Since the ZEB effect is only observed at temperatures far below the magnetic transitions, we investigated the structural evolution of each sample with $T$ ranging from 300 down to 16 K, using $E$ = 9000 eV. Fig. \ref{Fig_SXRD} shows the evolution of the normalized average unit cell volume as a function of $T$. Although no structural transition was observed for any compound, there are changes in the slope of the curves at temperatures close to the materials' magnetic transitions, as will be discussed next. We can also note that the curves of LCMO and LCCMO are roughly similar, as are those of LSCMO and LBCMO. These results are not surprising, since the Ca$^{2+}$ and La$^{3+}$ ionic radii are close, whereas those of Ba$^{2+}$ and Sr$^{2+}$ are larger \cite{Shannon}. \begin{figure} \begin{center} \includegraphics[width=0.47 \textwidth]{Fig_SXRD.pdf} \end{center} \caption{Normalized average unit cell volume (V) as a function of $T$ for La$_{2-x}$A$_{x}$CoMnO$_{6}$ samples. The dashed lines are guides for the eye.
The upper and bottom insets show respectively the average B-O bond length and B-O-B' bond angle as a function of the average A-site ionic radius, $r_A$, obtained from the 16 K SXRD results.} \label{Fig_SXRD} \end{figure} The insets of Fig. \ref{Fig_SXRD} show the mean (Co/Mn)--O bond length and Co--O--Mn bond angle obtained from the 16 K SXRD measurements. As expected, the Co--O--Mn bond angle increases as the A-site average ionic radius ($r_A$) increases \cite{Vasala,Woodward}. Interestingly, the mean B--O bond length presents the smallest value for LSCMO. Both the B--O--B' angle and the B--O length are strongly correlated with the magnetic coupling of the transition metal (TM) ions in DP compounds \cite{Vasala,Serrate}. The closer the B--O--B' angle is to 180$^{\circ}$, the more symmetric the crystal structure and the stronger the exchange interactions. On the other hand, in general the superexchange interaction gets stronger as the TM ions get closer. These results are certainly related to the magnetic properties observed for La$_{2-x}$A$_{x}$CoMnO$_{6}$. Although the B--O--B' angle is not the largest for LSCMO, the Sr ionic radius is large enough to ensure a nearly symmetric structure for this sample, and the smaller B--O length ensures a strong coupling between Co and Mn, leading to the largest ZEB observed. This and other magnetic results will be discussed below. Although care must be taken in interpreting these data, since the oxygen scattering factor is small, the trends observed here are those expected for DP compounds and explain very well the magnetic properties of the system, being thus very plausible. \subsection{AC and DC magnetization} Fig. \ref{Fig_MxT}(a) shows the $M(T)$ curves for LCMO, measured with $H$ = 100 Oe at 2 K$\leq$$T$$\leq$400 K. Two anomalies can be clearly seen, at $\sim$230 and $\sim$155 K.
Early reports for this compound proposed a Co$^{2+}$ and Mn$^{4+}$ configuration \cite{Goodenough}, but it is now established that the Co and Mn valences are very sensitive to the conditions in which the sample is prepared \cite{Dass,Burnus}, the presence of Co$^{3+}$ and Mn$^{3+}$ being commonly observed in LCMO. The mixed valences Co$^{2+}$/Co$^{3+}$ and Mn$^{3+}$/Mn$^{4+}$ are confirmed by our XAS measurements (Section E) which, together with the Goodenough-Kanamori-Anderson (GKA) rules \cite{GKA}, can plausibly explain the two anomalies observed in Fig. \ref{Fig_MxT}(a). According to the GKA rules, the Co$^{2+}$--O--Mn$^{4+}$ and Co$^{3+}$--O--Mn$^{3+}$ interactions are expected to be FM; the former coupling would correspond to the $T_{C1}$ = 230 K transition while the latter would correspond to $T_{C2}$ = 155 K \cite{Chen,Burnus}. The $M(T)$ curves of the Ba-/Ca-/Sr-doped samples are displayed in Figs. \ref{Fig_MxT}(b)-(d). It can be seen that the two peaks related to the FM transitions are present, although $T_{C1}$ is reduced. The replacement of 25\% of La$^{3+}$ by Ca$^{2+}$/Sr$^{2+}$/Ba$^{2+}$ changes the electronic configuration of the TM ions. For the doped compounds an increase of the average Co and/or Mn valence in relation to LCMO is expected, signifying a decrease in the concentration of Co$^{2+}$--O--Mn$^{4+}$ couplings, and this may be directly related to the decrease of their ordering temperatures. \begin{figure} \begin{center} \includegraphics[width=0.5 \textwidth]{Fig_MxT.pdf} \end{center} \caption{(a) ZFC and FC $M(T)$ curves for LCMO, measured at $H$ = 100 Oe. The inset shows a magnified view of the ZFC curve, evidencing $T_{C1}$ and $T_{C2}$.
(b), (c) and (d) show the ZFC $M(T)$ curves for LCCMO, LSCMO and LBCMO, respectively.} \label{Fig_MxT} \end{figure} By fitting the magnetic susceptibility curves in the paramagnetic region with the Curie-Weiss law we obtained the Curie-Weiss temperature, $\theta_{CW}$, and the effective magnetic moment, $\mu_{eff}$ (see Table \ref{Tmag}). For all investigated samples $\theta_{CW}$ is positive and fairly close to $T_{C1}$, giving further evidence of the prominent FM coupling. For LCMO, $\mu_{eff}$ = 6.7 $\mu_{B}$/f.u., in agreement with previous reports \cite{Blasco,Dass,Joy}. For the Ba- and Sr-doped samples it decreases to 6.6 and 6.2 $\mu_{B}$/f.u., respectively, which could be related to the increased proportion of Mn$^{4+}$ and/or of Co$^{3+}$ in the low spin (LS) configuration, as will be discussed in Section E. Interestingly, for the Ca-based sample there is no significant decrease of $\mu_{eff}$ in relation to the parent compound, which could indicate that this system favors the emergence of high spin (HS) or intermediate spin (IS) Co$^{3+}$. In any case, these results are certainly related to the different ZEB effects observed in these samples. \begin{table} \renewcommand{\arraystretch}{1.2} \caption{Main results obtained from the AC and DC $M(T)$ curves, and from the ZFC $M(H)$ measurements.} \label{Tmag} \resizebox{\columnwidth}{!}{ \begin{tabular}{ccccc} \hline \hline Sample & LCMO & LCCMO & LSCMO & LBCMO \\ \hline $T_{C1}$ (K) & 230 & 158 & 180 & 186 \\ $T_{C2}$ (K) & 157 & 141 & 157 & 155 \\ $T_{N}$ (K) & - & 62 & 74 & 45 \\ $\theta_{CW}$ (K) & 213 & 185 & 220 & 187 \\ $\mu_{eff}$ ($\mu_B$/f.u.)
& 6.7 & 6.7 & 6.2 & 6.6 \\ $T_g$ (K) & - & 46.7 & 72.9 & 71.7 \\ $\tau_0$ (s) & - & 3.9$\times$10$^{-7}$ & 3.9$\times$10$^{-7}$ & 2.8$\times$10$^{-8}$ \\ \textit{z}$\nu$ & - & 8.9 & 5.8 & 6.5 \\ $\delta T_{f}$ & - & 0.07 & 0.09 & 0.08 \\ $H_C$ (Oe) & 7694 & 8437 & 7147 & 5178 \\ $H_{EB}$ (Oe) & - & 253 & 3128 & 1605 \\ \hline \hline \end{tabular}} \end{table} It is important to note the appearance of subtle anomalies in the low-$T$ regions of the $M(T)$ curves of the Ba-/Ca-/Sr-doped samples. This may be related to the emergence of non-negligible portions of other magnetic interactions such as Co$^{3+}$--O--Mn$^{4+}$ and Co$^{2+}$--O--Co$^{3+}$, which are predicted by the GKA rules to be AFM. The presence of competing magnetic phases, as well as disorder, are key ingredients for the emergence of SG-like behavior. In order to verify the presence of an SG-like phase, which is believed to play an important role in the ZEB effect of DP systems \cite{ZEBmodel}, we performed AC magnetic susceptibility measurements ($\chi_{AC}$) at several frequencies ($f$) in the range 25-10000 Hz, using $H_{AC}$ = 5 Oe. For LCMO no evidence of glassy magnetism was found \cite{SM}, as expected; this compound has been exhaustively investigated over the last decades and SG-like behavior is generally not reported. For the Ba-/Ca-/Sr-doped samples one can clearly observe a monotonic increase of the freezing $T$ ($T_f$) with increasing $f$ (see SM \cite{SM}), suggesting SG-like behavior.
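Such a frequency-dependent freezing temperature can be analyzed quantitatively by fitting the dynamic-scaling power law $\tau/\tau_0 = [(T_f - T_g)/T_g]^{-z\nu}$ and computing the relative shift of $T_f$ between the outermost frequencies. The following is only a minimal analysis sketch with synthetic $T_f(f)$ data generated from illustrative parameter values of the order of those in Table \ref{Tmag}; it is not the fitting code actually used in this work.

```python
# Sketch of the dynamic-scaling analysis of the frequency-dependent
# freezing temperature T_f(f). The "measured" T_f values below are
# synthetic, generated from assumed (illustrative) parameters.
import numpy as np
from scipy.optimize import curve_fit

def tf_model(tau, log10_tau0, Tg, znu):
    """T_f implied by tau/tau0 = [(T_f - Tg)/Tg]^(-z*nu)."""
    tau0 = 10.0 ** log10_tau0
    return Tg * (1.0 + (tau / tau0) ** (-1.0 / znu))

f = np.array([25.0, 100.0, 500.0, 2500.0, 10000.0])   # drive frequencies (Hz)
tau = 1.0 / (2.0 * np.pi * f)                          # relaxation times (s)

# synthetic freezing temperatures from assumed tau0, Tg and z*nu
Tf = tf_model(tau, np.log10(3.9e-7), 46.7, 8.9)

popt, _ = curve_fit(tf_model, tau, Tf, p0=(-7.0, 45.0, 8.0), maxfev=20000)
log10_tau0_fit, Tg_fit, znu_fit = popt

# relative shift of T_f between the two outermost frequencies
delta_Tf = (Tf[-1] - Tf[0]) / (Tf[0] * np.log10(f[-1] / f[0]))
```

With noiseless synthetic input the fit recovers the assumed parameters, and $\delta T_f$ falls in the $0.01$-$0.1$ window typical of cluster-glass systems.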
The $T_f$ as a function of $f$ curves of each sample could be well fitted by the power-law equation of the dynamic scaling theory, commonly used to investigate SG-like systems \cite{Mydosh,Souletie} \begin{equation} \frac{\tau}{\tau_{0}}=\left[\dfrac{(T_{f} - T_{g})}{T_{g}}\right]^{-z\nu} \label{Eq1} \end{equation} where $\tau$ is the relaxation time corresponding to the measured frequency, $\tau_{0}$ is the characteristic relaxation time of spin flip, $T_{g}$ is the SG-like transition temperature ($T_f$ as $f$ tends to zero), $z$ is the dynamical critical exponent and $\nu$ is the critical exponent of the correlation length. The main results obtained from the fittings are displayed in Table \ref{Tmag}, where it can be noted that the $\tau_{0}$ and $z\nu$ values are those typically found for cluster spin glass (CG) systems \cite{Souletie,Malinowski,Murthy2}. Usually, a formula computing the relative shift in $T_{f}$ between the two outermost frequencies is used to classify the material as SG, CG or superparamagnet (SP) \begin{equation} \delta T_{f}=\frac{\triangle T_{f}}{T_{f} \triangle \log f}. \label{Eq2} \end{equation} The $\delta T_{f}$ values obtained for the doped samples (see Table \ref{Tmag}) lie in the range $0.01\lesssim\delta T_{f}\lesssim0.1$ usually found for CG systems. For canonical SG, the usual value is $\delta T_{f}\lesssim0.01$, while for SP $\delta T_{f}\gtrsim0.1$ \cite{Souletie,Malinowski,Murthy2,Anand2}. These results, together with the conventional magnetic transitions observed at higher $T$, confirm the RSG state of the doped La$_{1.5}$A$_{0.5}$CoMnO$_{6}$ compounds that present the ZEB effect, while the parent LCMO compound shows neither RSG nor ZEB behavior. In order to verify the ZEB effect in the La$_{2-x}$A$_{x}$CoMnO$_{6}$ system, we performed $M(H)$ measurements after ZFC each sample. Fig. \ref{Fig_MxH} shows the curves obtained at $T$ = 5 K, using a maximum applied field $H_{max}$ = 90 kOe.
For all samples we observe closed loops, fairly symmetric with respect to the $M$ axis. For LCMO there is no EB, \textit{i.e.}, the $M(H)$ curve is also symmetric with respect to the $H$ axis. As mentioned, this compound has been extensively investigated and, to the best of our knowledge, an EB effect was never reported for LCMO. It is interesting to note the lack of complete saturation, and the $M$ $\sim$4.1 $\mu_{B}$/f.u. observed at $H_{max}$ = 90 kOe is fairly below the value expected for a FM system containing Co$^{2+}$/Co$^{3+}$ and Mn$^{3+}$/Mn$^{4+}$. These results were previously explained in terms of the appearance of Mn--O--Mn and Co--O--Co AFM bonds, the presence of LS Co$^{3+}$ and the formation of antiphase boundaries, which lead to the antiparallel coupling of neighboring domains \cite{Blasco,Dass,Fournier}. \begin{figure*} \begin{center} \includegraphics[width= \textwidth]{Fig_MxH.pdf} \end{center} \caption{ZFC $M(H)$ curves of (a) LCMO, (b) LCCMO, (c) LSCMO and (d) LBCMO, measured at $T$ = 5 K and $H_{max}$ = 90 kOe. The insets show magnified views of the curves close to the $M$ = 0 region, evidencing the ZEB effect in the doped samples. (e) $H_C$ as a function of $r_A$.} \label{Fig_MxH} \end{figure*} The $M(H)$ curves of the doped compounds are shifted to the left along the $H$ axis, characterizing the ZEB effect. The EB and coercive fields are defined as $H_{EB}=|H^{+}+H^{-}|/2$ and $H_{C}=(H^{+}-H^{-})/2$, where $H^{+}$ and $H^{-}$ are the positive and negative coercive fields, respectively. The Sr-based sample presents the largest ZEB effect reported so far, $H_{EB}$ $\sim$ 3100 Oe at 5 K. The Ba-based compound also exhibits a pronounced shift, $H_{EB}$ $\sim$ 1600 Oe, while the Ca-based sample, on the other hand, presents a very small shift, $H_{EB}$ $\sim$ 250 Oe. These results are certainly related to the structural and electronic configuration of each sample.
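For concreteness, the extraction of $H^{+}$ and $H^{-}$, and hence $H_{EB}$ and $H_{C}$, from a measured loop can be sketched as below. The tanh-shaped loop is purely illustrative (with LBCMO-like field values from Table \ref{Tmag}); it is not our measured data, and the function names are ours.

```python
# Sketch: extract the exchange-bias and coercive fields from the two
# zero crossings of a hysteresis loop, H_EB = |H+ + H-|/2, H_C = (H+ - H-)/2.
import numpy as np

def coercive_fields(H, M):
    """Return (H_plus, H_minus): fields where each branch crosses M = 0.
    Assumes H traces one full loop: +H_max -> -H_max -> +H_max."""
    turn = np.argmin(H)                       # index of most negative field
    branches = (slice(0, turn + 1), slice(turn, len(H)))
    crossings = []
    for s in branches:
        h, m = H[s], M[s]
        i = np.where(np.diff(np.sign(m)) != 0)[0][0]   # first sign change
        # linear interpolation of the M = 0 crossing
        crossings.append(h[i] - m[i] * (h[i + 1] - h[i]) / (m[i + 1] - m[i]))
    h_minus, h_plus = min(crossings), max(crossings)
    return h_plus, h_minus

# hypothetical loop shifted to the left by 1605 Oe with H_C = 5178 Oe
H_down = np.linspace(90000.0, -90000.0, 2001)
H_up = np.linspace(-90000.0, 90000.0, 2001)
H = np.concatenate([H_down, H_up[1:]])
M = np.concatenate([np.tanh((H_down + 1605 + 5178) / 8000.0),
                    np.tanh((H_up[1:] + 1605 - 5178) / 8000.0)])

h_plus, h_minus = coercive_fields(H, M)
H_EB = abs(h_plus + h_minus) / 2.0   # ~1605 Oe for this synthetic loop
H_C = (h_plus - h_minus) / 2.0       # ~5178 Oe
```

The descending branch crosses zero at $H^{-}$ and the ascending one at $H^{+}$; for a loop shifted to the left, $|H^{-}| > H^{+}$ and the two definitions above return the loop shift and half-width directly.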
As discussed above, LSCMO presents a large B--O--B' angle and the smallest B--O length. It is also interesting to note the decrease of the ``saturation'' magnetization ($M_{sat}$) of the doped samples in relation to the parent compound. This can be understood in terms of the increased proportion of LS Co$^{3+}$ induced by the Ca$^{2+}$/Ba$^{2+}$/Sr$^{2+}$-for-La$^{3+}$ substitution, as will be discussed in Section E. This decrease is more pronounced for the Sr- and Ba-based samples than for the Ca-based one, being certainly related to the small $H_{EB}$ observed for the latter compound. The $M(H)$ curves also show an interesting decrease of $H_{C}$ with the increase of the average A-site ionic radius ($r_A$), Fig. \ref{Fig_MxH}(e). Previous studies of chemical pressure on DP compounds report an increase of $H_{C}$ with decreasing $r_A$, which was attributed to the enhancement of the orbital magnetic moment of the TM ions caused by the increased lattice distortion \cite{Sikora,Haskel}. Those results bear resemblance to the ones obtained for our system. As will be shown in Section E, the Co relative orbital-to-spin moment ratio, $m_L$/$m_S$, is greatly reduced in LSCMO in comparison to LCCMO. Although the $m_L$/$m_S$ ratio is even larger for LCMO than for LCCMO, the average $r_A$ is smaller for the latter. Recent investigations of applied external pressure on DP compounds have shown that, even when the physical pressure does not significantly alter the orbital moment, it drastically changes the coercivity, this being related to pressure-induced changes in the crystal field \cite{Haskel}. In the case of La$_{2-x}$A$_{x}$CoMnO$_{6}$, the combined effect of crystal field and spin-orbit interaction seems to play a role in the magnetic anisotropy.
\subsection{Muon spin rotation and relaxation} As discussed in the previous section, the presence of two FM couplings in CoMn-based DPs is well known, being attributed to the Co$^{2+}$--O--Mn$^{4+}$ and Co$^{3+}$--O--Mn$^{3+}$ interactions \cite{GKA,Chen,Burnus}. However, the ZFC $M(T)$ experiments revealed the presence of a third anomaly at lower $T$ for the Ba-, Ca- and Sr-doped compounds. For further clarification of the magnetic properties we performed $\mu$SR experiments on these samples. Representative spectra taken in wTF mode at several temperatures for LBCMO are shown in Fig. \ref{Fig_muSR_TF}(a). \begin{figure} \begin{center} \includegraphics[width=0.42 \textwidth]{Fig_muSR_TF.pdf} \end{center} \caption{(a) $\mu$SR rotation patterns of LBCMO in a weak applied transverse field (wTF) of 50 Oe at various temperatures; (b) variation of the weakly damped paramagnetic fraction with temperature, derived from the wTF spectra. The marks indicated for $T_{C1}$, $T_{C2}$ and $T_g$ are taken from the magnetization and susceptibility data.} \label{Fig_muSR_TF} \end{figure} In wTF experiments, the onset of magnetic ordering causes an apparent loss of asymmetry of the initially 100$\%$ polarized muon spins, because strong local magnetic fields add randomly to the applied magnetic field, causing a severe damping of the signal. Comparing the spectra taken at 210 K and 180 K, one can clearly observe changes in the initial asymmetry of the weakly damped rotation signal, confirming that the first magnetic transition $T_{C1}$ lies within this $T$ range, as found in the $M(T)$ curves. Already at 190 K one can detect a small contribution from a spontaneous rotation seen at very early times, indicating a partial onset of magnetic order. The dominant signal represents a rotation of the muon spins with a frequency $\omega_{\mu}$.
This signal can be fitted to \begin{equation} A(t) = A(t=0)\cdot A_{0}(t)\cdot G_{par}(t)\cdot \cos(\omega_{\mu}t), \label{Eq3} \end{equation} where $\omega_{\mu}$ = $\gamma_{\mu}B_{\mu}$, with $\gamma_{\mu}$ = 2$\pi\cdot$135.54 $\mu$s$^{-1}T^{-1}$ being the muon gyromagnetic ratio and $B_{\mu}$ the local field at the muon site. $G_{par}(t)$ = $e^{-\lambda t}$ is the depolarization function, with a damping factor $\lambda$ due to field inhomogeneity and relaxation. Notably, $B_{\mu}$ is slightly larger than the applied field and increases upon lowering $T$. This is due to a Knight shift adding to the applied field, related to a $T$-dependent susceptibility. The rotating signal comes from a paramagnetic (PM) fraction still existing below the FM transitions. In Fig. \ref{Fig_muSR_TF}(b) we have plotted the variation of this PM fraction with $T$. From $T_{C2}$ down to $T_g$ there is a nearly constant PM fraction of about 35\%; at lower $T$ the PM fraction vanishes. The wTF data are supported by the ZF $\mu$SR spectra, Fig. \ref{Fig_muSR_ZF}(a) (see also SM \cite{SM}). Fits to these spectra suggest the presence of several superimposed signals with different temperature dependent partial asymmetries summing up to a total observed asymmetry $A(t)$: \begin{equation} \begin{split} A(t) & = A(t=0)\{A_{PM1}\cdot G_{PM1}(t)+A_{PM2}\cdot G_{PM2}(t) \\ & + A_{int}\cdot G_{int}(t)+A_{KT}\cdot G_{KT}(t)\}. \label{Eq4} \\ \end{split} \end{equation} As will be discussed below in more detail, $A_{PM1}$, $A_{PM2}$, $A_{int}$, and $A_{KT}$ are the partial asymmetry contributions from a slowly exponentially damped paramagnetic, a stretched-exponentially damped paramagnetic, an internal field and a Gaussian Kubo-Toyabe signal, respectively. The $G(t)$ are the corresponding depolarization functions, given explicitly in SM \cite{SM}.
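For reference, the model functions entering these fits, namely the wTF rotation signal, the stretched-exponential paramagnetic term, and the static Gaussian Kubo-Toyabe function discussed below, can be written compactly as in the sketch here; the parameter values are purely illustrative and the function names are ours, not the actual analysis code.

```python
# Sketch of the model functions entering the muSR fits: the wTF rotation
# signal and the ZF depolarization components (exponential, stretched
# exponential, static Gaussian Kubo-Toyabe). Illustrative parameters only.
import numpy as np

GAMMA_MU = 2.0 * np.pi * 135.54  # muon gyromagnetic ratio (rad us^-1 T^-1)

def wtf_signal(t, A0, B_mu, lam):
    """Weakly damped rotation of the PM fraction in a local field B_mu (T)."""
    return A0 * np.exp(-lam * t) * np.cos(GAMMA_MU * B_mu * t)

def stretched_exp(t, lam2, beta):
    """Stretched-exponential depolarization, typical of a gradually
    freezing spin glass with a wide distribution of fluctuation rates."""
    return np.exp(-(lam2 * t) ** beta)

def gaussian_kubo_toyabe(t, sigma):
    """Static Gaussian Kubo-Toyabe:
    1/3 + 2/3 * (1 - (sigma t)^2) * exp(-(sigma t)^2 / 2)."""
    x = (sigma * t) ** 2
    return 1.0 / 3.0 + (2.0 / 3.0) * (1.0 - x) * np.exp(-x / 2.0)

t = np.linspace(0.0, 10.0, 1001)  # time window in microseconds
g_kt = gaussian_kubo_toyabe(t, sigma=2.0)
```

The Kubo-Toyabe function starts at 1, dips, and recovers to the 1/3 tail at long times, which is the behavior invoked for the randomly frozen fraction below $T_g$.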
\begin{figure} \begin{center} \includegraphics[width=0.42 \textwidth]{Fig_muSR_ZF.pdf} \end{center} \caption{(a) ZF $\mu$SR of LBCMO for various $T$; (b) $T$ dependence of the partial asymmetries. The marks indicated for $T_{C1}$, $T_{C2}$ and $T_g$ are taken from the magnetization and susceptibility data.} \label{Fig_muSR_ZF} \end{figure} In Fig. \ref{Fig_muSR_ZF}(b) we have plotted the $T$ dependence of the different partial contributions to the spectra. The higher sensitivity of the ZF spectra to the different damping behaviors of the subspectra allows a more detailed analysis. The onset of spontaneous rotations below $T_{C1}$ and the variation of the local field at the muon sites can be clearly traced, especially at the early times of the spectra (see SM \cite{SM}). Down to about $T_g$ we see the spontaneously rotating signal with a frequency $\omega_{int}$ = $\gamma_{\mu}B_{int}$ having a spectral contribution $A_{int}$ of about 50\% of the total asymmetry (third term of Eq. \ref{Eq4}); this fraction is long-range ordered. The $T$-dependent magnetic field at the muon site, $B_{int}$, is plotted in SM \cite{SM}. In correspondence with the wTF spectra, there is a slowly exponentially damped paramagnetic signal (PM1) having a spectral fraction of about 35\% (first term in Eq. \ref{Eq4}). In addition there is, however, a contribution of about 10\% that is best fitted with a stretched exponential depolarization function $e^{-(\lambda_{2}t)^{\beta}}$ (second term in Eq. \ref{Eq4}), typical for a wide distribution of fluctuation frequencies as expected for a still dynamically fluctuating, but gradually freezing, spin glass. Values of the exponent $\beta$ vary from 1 to about 0.4 between $T_{C2}$ and $T_g$. The damping parameter $\lambda_2$ is much larger than for PM1, indicating longer correlation times for PM2; for this reason it was not possible to resolve this contribution in wTF. Below $T_g$ the relative contributions of the subspectra change.
The PM1 fraction vanishes and the spontaneously rotating signal decreases. About 80\% of the spectrum can be described by a so-called static Gaussian Kubo-Toyabe function (last term in Eq. \ref{Eq4}) \begin{equation} G_{KT}(t) = \frac{1}{3} + \frac{2}{3}[1-(\sigma t)^2]e^{-(\sigma t)^{2}/2}, \label{Eq5} \end{equation} which is typical for a randomly frozen spin system producing a field distribution with a Gaussian width $\sigma$ at the muon site. For $T$ $\geq$ 80 K the time dependent asymmetries in Fig. \ref{Fig_muSR_ZF}(a) reveal a continuous decrease up to long times due to relaxational damping, and a levelling off at about 10\% of the initial asymmetry. This corresponds to the muon spins in an internal field oriented parallel or antiparallel to the initial polarization of the muon beam. These muon spins therefore do not undergo spontaneous rotation in the internal field; instead they give rise to a so-called 1/3 tail (see SM \cite{SM}). In our case this tail is only very weakly damped, \textit{i.e.}, magnetic fluctuations are nearly absent at these muon sites. Note that this 1/3 tail increases below $T_g$, since there is then an additional 1/3 tail from the static Kubo-Toyabe contribution (first term in Eq. \ref{Eq5}). For the Ca- and Sr-doped samples we found very similar results (see SM \cite{SM}). Since the $M(T)$ curves did not reveal a low-$T$ anomaly for the parent compound LCMO and, as will be shown in Section E, the Ba/Ca/Sr doping leads to an enhancement of Co$^{3+}$, the vanishing of the PM1 fraction at 50 K may be associated with the onset of a third magnetic transition, related most probably to the Co$^{3+}$--O--Mn$^{4+}$ AFM coupling. Frustration leads to an SG-like state, as clearly seen by ZF $\mu$SR, in agreement with the magnetic susceptibility data. \subsection{Raman spectroscopy} Raman spectroscopy is a sensitive and powerful tool to investigate charge/orbital ordering and spin-lattice interactions in perovskite compounds.
Unpolarized Raman spectra were taken at several $T$ in the range 24-400 K for the four investigated samples. As can be seen in Fig. \ref{Fig_Raman}(a), the observed spectra are consistent with previous works reported for LCMO and related compounds, where a broad peak at $\sim650$ cm$^{-1}$ corresponding to the symmetric stretching mode and another at $\sim500$ cm$^{-1}$ associated with either anti-stretching or bending mode vibrations of the (Co/Mn)O$_{6}$ octahedra can be noticed \cite{Fournier,Murthy2,Iliev,Murthy3,Silva}. Due to the possible overlapping of Raman-active modes below 600 cm$^{-1}$, we focus here on the $\sim650$ cm$^{-1}$ stretching mode. A detailed description of the anti-stretching/bending mode can be found in SM \cite{SM}. \begin{figure} \begin{center} \includegraphics[width=0.5 \textwidth]{Fig_Raman.pdf} \end{center} \caption{(a) Low-$T$ Raman spectra of La$_{2-x}$A$_{x}$CoMnO$_{6}$ samples. The $T$-dependences of the relative shifts of the stretching mode are displayed for (b) LCMO, (c) LCCMO, (d) LBCMO and (e) LSCMO, where the solid circles show the relative unit cell volume variations extracted from the SXRD data. The dotted lines are guides to the eye. (f) shows $\triangle\omega_{mag-phon}(T)$ for all investigated samples (see text).} \label{Fig_Raman} \end{figure} To obtain the phonon parameters as a function of $T$, the observed peaks were fitted with Lorentzian lineshapes, which are good approximations for the classical Raman response of damped harmonic oscillators \cite{Cardona}. Figs. \ref{Fig_Raman}(b)-(e) show the $T$-dependence of the Raman shift of the stretching mode. The resulting curves show characteristic changes in the Raman shift that are closely related to each material's magnetic transitions.
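The Lorentzian lineshape used in the peak fits can be sketched as below. This is an illustrative Python sketch of ours (the function and its parameter values are placeholders, not fitted values); its width parameter is the FWHM quoted for each mode, and in practice the fit itself would be done with a least-squares routine such as scipy.optimize.curve\_fit.

```python
import numpy as np

def lorentzian(w, amp, w0, fwhm, bg=0.0):
    """Lorentzian (damped-harmonic-oscillator) lineshape: peak position w0,
    full width at half maximum fwhm, amplitude amp, flat background bg."""
    hw = fwhm / 2.0
    return amp * hw**2 / ((w - w0)**2 + hw**2) + bg

# illustrative stretching-mode peak near 650 cm^-1 with FWHM ~ 40 cm^-1
w = np.linspace(550.0, 750.0, 401)
y = lorentzian(w, amp=1.0, w0=650.0, fwhm=40.0)

# sanity checks of the lineshape: maximum at w0, half maximum at w0 +/- fwhm/2
print(y.max())                              # 1.0, reached at w = 650
print(lorentzian(670.0, 1.0, 650.0, 40.0))  # 0.5 at the half-width point
```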
The figures also show a large softening of the phonon frequencies for LCMO and LCCMO below $T_C$, while for LBCMO and LSCMO this tendency is not so clear, although changes in the slope of the curves still seem to be present at $T$ $\sim$ $T_C$. Both the magnetic couplings and the structural changes observed for these samples are expected to affect the phonon frequency. The change in the phonon frequency due to thermal lattice expansion can be expressed by the following equation \cite{Gervais} \begin{equation} \delta \omega_{n}(T)= -\omega_{n}\int_{0}^{T}g_{n}(T)\alpha_{V}(T)dT, \label{Eq6} \end{equation} where $n$ refers to the $n$-th phonon mode, $\omega_{n}$ is its frequency, $g_{n}(T)$ is its Gr\"{u}neisen parameter and $\alpha_{V}(T)$ is the volumetric thermal expansion coefficient. For the $T$-range investigated here, $g_{n}$ can be assumed to be nearly $T$-independent, resulting in \begin{equation} \triangle \omega_{n}(T)= -\omega_{n}g_{n}\frac{\triangle V}{V}. \label{Eq7} \end{equation} Figs. \ref{Fig_Raman}(b)-(e) compare the evolution of -$\Delta$V/V obtained from the SXRD data with the phonon frequencies, confirming that the tendency of softening of the phonon frequency at low $T$ is most likely related to the magnetic orderings. In previous reports on LCMO this softening was interpreted as a signature of spin-lattice coupling, which in turn was associated with the magnetodielectric effect observed for this compound \cite{Iliev,Murthy3}. In order to further investigate this possible spin-lattice coupling, we have plotted in Fig.
\ref{Fig_Raman}(f) the changes in the phonon frequency of the stretching mode with the contribution of the structural variations discounted, \textit{i.e.}, we have used the following equation \begin{equation} \triangle\omega_{mag-phon}(T)= \triangle \omega_{n}-\triangle \omega(T), \label{Eq8} \end{equation} where $\triangle\omega(T)$ represents the shift obtained from the Raman spectra and $\triangle \omega_{n}$ corresponds to the change in the phonon frequency expected from the structural changes alone, extracted from Eq. \ref{Eq6} (see also Ref. \citenum{Granado}). The resulting curves indicate a strong spin-phonon coupling for LCMO below $T_C$ and a smaller, but still significant, effect for LCCMO. For LBCMO and LSCMO the effect is greatly reduced, but non-negligible $\triangle\omega_{mag-phon}$ values can be noticed at low $T$. A detailed investigation of the materials' dielectric properties is necessary to verify the possible magnetodielectric effect in the Ba-, Ca- and Sr-doped compounds. \begin{table} \caption{FWHM obtained from the Lorentzian fits of the Raman spectra carried out at 300 K and 25 K for all samples, except for LCMO, whose spectra were taken at 296 K and 24 K.} \label{Traman} \begin{tabular}{c|c|c|c|c} \hline \hline Sample & \multicolumn{4}{c}{FWHM (cm$^{-1}$)} \\ \hline & \multicolumn{2}{c|}{Room temperature} & \multicolumn{2}{c}{Low temperature}\\ \hline & stretching & anti-stretching & stretching & anti-stretching \\ \hline LCMO & 40.3(2) & 80.1(26) & 26.7(1) & 79.5(20) \\ LCCMO & 63.8(9) & 113.6(87) & 52.8(6) & 88.6(68) \\ LBCMO & 97.5(20) & 70.9(77) & 70.2(8) & 75.1(30) \\ LSCMO & 81.6(13) & 126.0(73) & 74.1(9) & 100.3(30) \\ \hline \hline \end{tabular} \end{table} Comparing the Raman spectra of Fig. \ref{Fig_Raman}(a), it is interesting to observe that the anti-stretching/bending peaks of the doped ZEB samples are shifted to higher frequencies in relation to that of non-ZEB LCMO.
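The lattice contribution of Eq. \ref{Eq7} and the subtraction of Eq. \ref{Eq8} amount to simple arithmetic, sketched below in Python with illustrative numbers (the Grüneisen parameter and volume change are assumptions for the sketch, not values extracted from our data).

```python
def lattice_shift(omega_n, g_n, dV_over_V):
    """Eq. (7): phonon shift expected from the lattice volume change alone,
    assuming a T-independent Grueneisen parameter g_n."""
    return -omega_n * g_n * dV_over_V

def mag_phonon_shift(domega_lattice, domega_raman):
    """Eq. (8): residual (spin-phonon) shift after discounting the
    structural contribution from the measured Raman shift."""
    return domega_lattice - domega_raman

# illustrative numbers: 650 cm^-1 stretching mode, g_n ~ 1,
# and a 0.3% volume contraction on cooling
dw_lat = lattice_shift(650.0, 1.0, -0.003)   # ~ +1.95 cm^-1 (hardening)
dw_mag = mag_phonon_shift(dw_lat, -1.0)      # with a measured -1 cm^-1 softening
print(dw_lat, dw_mag)
```

A nonzero residual like the one above is what Fig. \ref{Fig_Raman}(f) quantifies: the part of the phonon shift that thermal contraction alone cannot account for.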
This monotonic shift of the anti-stretching/bending peak position follows the same trend as the increase of $H_{EB}$. A pronounced broadening of the stretching mode of the doped samples in relation to the parent compound can also be noticed (see also Table \ref{Traman}), most likely related to the increase of disorder and/or the presence of a biphasic crystal structure in the samples \cite{Fournier,Murthy2}. In the case of our Ba-, Ca- and Sr-doped samples there are both cationic disorder and the presence of two structural phases. Table \ref{Traman} shows that LSCMO presents the broadest peaks at low $T$. It is known that disorder and competing magnetic phases are key ingredients for the appearance of SG-like behavior, which in turn is necessary for the emergence of the ZEB effect \cite{ZEBmodel}. The higher inhomogeneity observed for LSCMO in the SXRD and Raman data may be related to its larger ZEB. \subsection{X-ray absorption spectroscopy} XAS spectra at the $L_{2,3}$ edges of TM ions are very sensitive to valence states. In the case of first-row TM ions, different valences produce clearly differentiated final states in the 2$p^{6}$3$d^{n}$ to 2$p^{5}$3$d^{n+1}$ absorption process, which translate into shifts in the energy position of the absorption peaks of the spectra. In order to determine the Co and Mn valence states in the La$_{2-x}$A$_{x}$CoMnO$_{6}$ samples, we performed XAS measurements at the Co- and Mn-$L_{2,3}$ edges. We used CoO, LaCoO$_3$, LaMnO$_3$ and CaMnO$_3$ as reference samples for the Co$^{2+}$, Co$^{3+}$, Mn$^{3+}$ and Mn$^{4+}$ configurations, respectively. Fig. \ref{Fig_XAS_L}(a) shows the Co-$L_{2,3}$ edge curves of all investigated compounds. Comparing the spectra of LCMO and CoO, it can be seen that the spectral features are quite similar at the low-energy side of the $L_3$ white line, indicating a large concentration of Co$^{2+}$ in this sample.
However, at the high-energy side the spectral weight of LCMO is increased in relation to that of CoO, clearly indicating that Co$^{3+}$ is also present in this compound. The spectral features of LCMO are very similar to those observed for this compound in previous works, with the presence of both Co$^{2+}$ and Co$^{3+}$ valence states \cite{Burnus,Mir}. The partial substitution of Ba$^{2+}$/Ca$^{2+}$/Sr$^{2+}$ for La$^{3+}$ is expected to increase the Co and/or Mn mean valence in order to fulfill charge neutrality. For the Ca- and Sr-based samples, Fig. \ref{Fig_XAS_L}(a) shows the increase of the peak at about 780 eV, indicative of an increase of the amount of Co$^{3+}$. For LBCMO it was not possible to precisely determine such changes due to the proximity between the Co-$L_{2,3}$ and Ba-$M_{4,5}$ edges (see SM \cite{SM}). \begin{figure} \begin{center} \includegraphics[width=0.5 \textwidth]{Fig_XAS_L.pdf} \end{center} \caption{(a) Co-$L_3$ and (b) Mn-$L_3$ spectra of La$_{2-x}$A$_{x}$CoMnO$_{6}$ samples at 300 K. The spectra of CoO, LaCoO$_3$, LaMnO$_3$ and CaMnO$_3$ are also displayed as references for Co$^{2+}$, Co$^{3+}$, Mn$^{3+}$ and Mn$^{4+}$, respectively.} \label{Fig_XAS_L} \end{figure} In order to further investigate whether the increase of the Co$^{3+}$ concentration in the doped samples is also accompanied by changes in the Mn valence, we measured XAS spectra around the Mn-$L_{2,3}$ edges. Fig. \ref{Fig_XAS_L}(b) shows the Mn-$L_{3}$ spectra of all investigated samples. The main peak of the La$_{2-x}$A$_{x}$CoMnO$_{6}$ curves lies in between those of the reference spectra for Mn$^{3+}$ and Mn$^{4+}$, indicating that Mn is also in a mixed valence state in all investigated samples. Moreover, the spectral features of all samples are very similar, indicating that any doping-induced Mn valence change may be subtle.
In order to get quantitative estimates of the Co and Mn formal valences, we compared the position of the ``center of gravity'' of each sample's Co- and Mn-$L_3$ white lines with those of the reference spectra for Co$^{2+}$, Co$^{3+}$, Mn$^{3+}$ and Mn$^{4+}$. Assuming a linear variation from Co$^{2+}$ to Co$^{3+}$ in these systems, we obtained approximately 2.1+, 2.4+ and 2.5+ for the average Co valences of LCMO, LSCMO and LCCMO, respectively. The same procedure for the Mn-$L_3$ white lines resulted in approximately 3.8+ for LCMO and 3.9+ for all doped samples. These rough estimates indicate that the charge compensation caused by A$^{2+}$ doping at the La$^{3+}$ site is manifested mainly in the Co valence. Several studies have revealed that Co$^{3+}$ is present in the non-magnetic LS configuration in La$_{2-x}$A$_{x}$CoMnO$_{6}$ systems \cite{Dass,Burnus,Mir}. In the case of the samples investigated here, the doping-induced increase of Mn$^{4+}$ and LS Co$^{3+}$ is supported by the $K$-edge results (see SM \cite{SM}) and by the decrease of $\mu_{eff}$ and $m_{sat}$ observed in the $M(T)$ and $M(H)$ measurements. These changes are expected to remarkably affect the materials' magnetic anisotropy and, consequently, the EB effect. However, it must be noted that these rough estimates of the Co and Mn valences would lead to OV in all samples. Assuming complete stoichiometry of La, Ca/Sr, Co and Mn in the compounds, one would get $\delta$ = 0.1 for LSCMO and 0.05 for LCMO and LCCMO. The OV is known to affect the structure and magnetization of perovskite compounds \cite{Vasala}, and thus the presence of oxygen holes may directly impact the EB in La$_{2-x}$A$_{x}$CoMnO$_{6}$. Having established the valences of the Co and Mn ions, we now focus our attention on their magnetic properties. We carried out XMCD measurements at the Co- and Mn-$L_{2,3}$ edges for all investigated samples, at $T$ = 14 K and $H$ = 40 kOe. Fig. \ref{Fig_XMCD} displays the XMCD spectra of LCMO, LCCMO and LSCMO.
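The two arithmetic steps behind the valence discussion above (linear interpolation of the white-line center of gravity between the 2+ and 3+ references, and the oxygen non-stoichiometry $\delta$ implied by charge neutrality) can be sketched as follows. This is an illustrative Python sketch of ours: the center-of-gravity energies are hypothetical placeholders, and a doping level $x$ = 0.5 for the A-substituted samples is assumed.

```python
def valence_from_cog(cog, cog_2plus, cog_3plus):
    """Formal valence by linear interpolation of the white-line
    'center of gravity' between the 2+ and 3+ reference spectra."""
    return 2.0 + (cog - cog_2plus) / (cog_3plus - cog_2plus)

def oxygen_delta(x, v_co, v_mn):
    """Oxygen deficiency delta in La(2-x)A(x)CoMnO(6-delta) from charge
    neutrality, assuming full cation stoichiometry and a divalent A ion."""
    cation_charge = 3.0 * (2.0 - x) + 2.0 * x + v_co + v_mn
    return (12.0 - cation_charge) / 2.0

# a white line halfway between the references -> valence 2.5+ (toy energies)
print(valence_from_cog(780.0, 779.5, 780.5))

# the XAS-estimated valences imply oxygen vacancies:
print(oxygen_delta(0.0, 2.1, 3.8))  # LCMO:  delta ~ 0.05
print(oxygen_delta(0.5, 2.4, 3.9))  # LSCMO: delta ~ 0.10
print(oxygen_delta(0.5, 2.5, 3.9))  # LCCMO: delta ~ 0.05
```

With the valences quoted in the text, this bookkeeping reproduces the $\delta$ values stated above.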
The results for LBCMO were omitted due to the presence of the Ba-$M_{4,5}$ edges. The black solid and red dashed curves stand, respectively, for $\mu^+$ and $\mu^-$, \textit{i.e.}, for parallel and antiparallel alignment between the photon spin and the magnetic field. The difference spectra, $\triangle \mu$ = $\mu^+$-$\mu^-$, correspond to the blue lines, and the integral curves of the XMCD and XAS spectra are the green and red lines, respectively. \begin{figure*} \begin{center} \includegraphics[width= \textwidth]{Fig_XMCD.pdf} \end{center} \caption{(Color online) Co-$L_{2,3}$ spectra of (a) LCMO, (b) LCCMO, (c) LSCMO, and Mn-$L_{2,3}$ spectra of (d) LCMO, (e) LCCMO and (f) LSCMO, taken with circularly polarized x-rays at 14 K. The photon spin was aligned parallel ($\mu^+$, black solid) and antiparallel ($\mu^-$, red dashed) to the 40 kOe magnetic field, and the difference spectra are shown in blue.} \label{Fig_XMCD} \end{figure*} As can be noted, the XMCD curves are largely negative at the Co-$L_3$ edge, but nearly zero at the Co-$L_2$ edge. This is a clear indication of a non-negligible orbital contribution to the Co magnetic moment. Despite the different Co average valences observed for these samples, their XMCD signals are spectrally rather similar, indicating that a significant portion of the extra Co$^{3+}$ emerging in the LCCMO and LSCMO doped samples may be in the LS configuration, since this non-magnetic ion is not expected to alter the XMCD results of the HS Co$^{2+}$ ions remaining in the system.
For a quantitative analysis of the Co-$L_{2,3}$ XMCD spectra, we have used the sum rules to derive the orbital and spin contributions to the magnetization \cite{Thole,Carra} \begin{equation} m_{orb}=- \frac{4\int_{L_3+L_2}(\mu^+-\mu^-)d\omega}{3\int_{L_3+L_2}(\mu^++\mu^-)d\omega}N_{h}, \label{Eq9} \end{equation} \begin{equation} \begin{split} m_{spin} & = -\frac{6\int_{L_3}(\mu^+-\mu^-)d\omega-4\int_{L_3+L_2}(\mu^+-\mu^-)d\omega}{\int_{L_3+L_2}(\mu^++\mu^-)d\omega}\times \\ & N_{h}\left(1+ \frac{7\langle T_z\rangle}{2\langle S_z \rangle}\right)^{-1}, \label{Eq10}\\ \end{split} \end{equation} where $m_{orb}$ and $m_{spin}$ are the orbital and spin magnetic moments in units of $\mu_B$/atom, $L_z$ and $S_z$ denote the projections along $z$ of the orbital and spin angular momenta, respectively, $N_h$ represents the number of empty 3$d$ states and $T_z$ denotes the magnetic dipole moment. For $N_h$ we used, as an approximation, the atomistic value of 3 holes at the 3$d$ level of Co$^{2+}$. Moreover, $T_z$ has been estimated to be negligible compared to $S_z$ for TM ions in octahedral symmetry \cite{Teramura,Groot}. With these considerations, Eqs. \ref{Eq9} and \ref{Eq10} become \begin{equation} m_{orb}= \frac{4Q}{R}; \qquad m_{spin}= \frac{18P-12Q}{R},\label{Eq11} \end{equation} where $P$, $Q$, and $R$ represent the integrals $\int_{L_3}(\mu^+ - \mu^-)d\omega$, $\int_{L_3+L_2}(\mu^+ - \mu^-)d\omega$ and $\int_{L_3+L_2}(\mu^+ + \mu^-)d\omega$, respectively. Nonetheless, a correction to the spin sum rule must be used for the Co spin moments, associated with the relatively weak spin-orbit coupling of the Co 2$p$ core holes. Here the Co$^{2+}$ spin moments extracted from the sum rule were divided by 0.921 to correct the deviation due to the core-hole Coulomb interaction \cite{Teramura}. In addition, corrections for the 80\% circular polarization must also be included in the calculations. The values so determined are listed in Table \ref{Txmcd}.
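Eq. \ref{Eq11} together with the two corrections quoted above can be evaluated directly. The Python sketch below is our illustration (the order in which the polarization and core-hole corrections are applied is our reading of the text); with the listed $P$, $Q$, $R$ integrals it reproduces the LCMO entries of Table \ref{Txmcd}.

```python
def xmcd_moments(P, Q, R, pol=0.80, spin_corr=0.921):
    """Co moments from Eq. (11) with N_h = 3, corrected for the 80% circular
    polarization (both moments) and for the core-hole Coulomb interaction
    (spin moment only, division by 0.921)."""
    m_orb = (4.0 * Q / R) / pol
    m_spin = ((18.0 * P - 12.0 * Q) / R) / spin_corr / pol
    return m_orb, m_spin

# LCMO integrals: (P, Q, R) = (0.48, 0.44, 4.49)
m_orb, m_spin = xmcd_moments(0.48, 0.44, 4.49)
print(round(m_orb, 2), round(m_spin, 2))   # 0.49 and 1.02, as in the table
print(round(m_orb / m_spin, 2))            # orbital-to-spin ratio 0.48
```

The same call with the LSCMO integrals (0.16, 0.12, 5.00) returns 0.12 and 0.39, also matching the table, which supports this reading of the correction scheme.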
\begin{table} \caption{Results obtained from the Co-$L_{2,3}$ XMCD.} \label{Txmcd} \centering \begin{tabularx}{\columnwidth}{>{\centering}X|>{\centering}X>{\centering}X>{\centering}X>{\centering}X>{\centering}X} \hline & P & Q & R & $m_{orb}(\mu_B)$ & $m_{spin}(\mu_B)$ \tabularnewline \hline LCMO & 0.48 & 0.44 & 4.49 & 0.49 & 1.02 \tabularnewline \hline LCCMO &0.15 &0.12 & 4.56 & 0.16 & 0.38 \tabularnewline \hline LSCMO & 0.16& 0.12& 5.00 & 0.12 & 0.39 \tabularnewline \hline \end{tabularx} \end{table} Although one can note from Table \ref{Txmcd} a clear decrease of $m_{orb}$ and $m_{spin}$ of the doped samples in relation to LCMO, the values obtained here are smaller than those usually found for Co$^{2+}$ \cite{Burnus}, and thus care must be taken in the interpretation of these data. One possible reason for the deviations lies in the fact that the estimates of $m_{orb}$ and $m_{spin}$ depend on the value of $R$. Even though the presence of non-magnetic LS Co$^{3+}$ is not expected to alter the XMCD signal of HS Co$^{2+}$, it may contribute to the XAS signal. Moreover, we used $N_h$ = 3 for all samples, \textit{i.e.}, we assumed that all Co$^{3+}$ ions are in the LS configuration, but it is not possible to know a priori whether at least a part of these ions is in the HS or even the IS state. Finally, it is also possible that the $H$ = 40 kOe used was not sufficient to achieve a complete magnetization of Co due to the strong magnetocrystalline anisotropy of these polycrystalline materials. An interesting way to avoid the possible deviations caused by $R$ and $N_h$ is to compute the $m_{orb}$/$m_{spin}$ ratio. For LCMO, $m_{orb}$/$m_{spin}$ = 0.48, in good agreement with previous results found for this compound \cite{Burnus,Mir}. This result is a direct indication of the presence of HS Co$^{2+}$, whose orbital magnetic moment may play an important role in the system's magnetic and structural properties.
For the LCCMO and LSCMO doped samples the $m_{orb}$/$m_{spin}$ ratio is reduced to 0.42 and 0.31, respectively, which could mean the appearance of at least a small fraction of Co$^{3+}$ ions in the HS state, since the transition from HS Co$^{2+}$ ($t_{2g}^{5}e_g^{2}$, $S$ = 3/2, pseudo-orbital moment $\tilde{L}$ = 1) to HS Co$^{3+}$ ($t_{2g}^{4}e_g^{2}$, $S$ = 2, $\tilde{L}$ = 1) leads to an increase of the spin moment \cite{Hollmann}. However, the presence of the IS state, already proposed for Sr-doped LCMO compounds \cite{Taraphder}, cannot be completely ruled out, despite the expected decrease of the spin moment of IS Co$^{3+}$ ($t_{2g}^{5}e_g^{1}$, $S$ = 1) in relation to HS Co$^{2+}$. The reduction of the orbital contribution may be even greater due to the ordering of the split $e_g$ orbitals and to their strong hybridization with the O 2$p$ orbitals. The decrease of the Co moment observed in the XMCD, as well as of the net moment observed in the macroscopic magnetization results, indicates that IS Co$^{3+}$, if present at all, may occur in only a minor part of the Co ions, possibly in proximity to Mn$^{3+}$ ions, which would result in local Jahn-Teller deformations \cite{Dass}. The Co$^{3+}$ HS/IS-state conjectures are speculative, and other mechanisms may play a role in the decrease of $m_{orb}$/$m_{spin}$ observed in the doped samples. The OV, for instance, may remarkably affect the Co magnetic moment \cite{Vasala,Miao}. Fig. \ref{Fig_XMCD} also displays the XMCD spectra at the Mn-$L_{2,3}$ edges. In general, the sum-rule analysis is not fully valid for systems with small 2$p$ core-hole spin-orbit coupling such as Mn, for which the error in the calculated effective spin moment is very large \cite{Groot}. Nevertheless, it is important to note that the XMCD spectra are negative at both the Mn- and Co-$L_3$ edges, confirming that the FM coupling between Co and Mn is present in all samples.
\begin{figure} \begin{center} \includegraphics[width=0.47 \textwidth]{Fig_XMCDloop.pdf} \end{center} \caption{XMCD hysteresis loops of Co and Mn $L_3$ edges for (a) LCMO, (b) LCCMO and (c) LSCMO, measured at 14 K. The upper insets display the curves resulting from the sum of Co and Mn signals, and the bottom ones show magnified views of the loops close to the coercive field regions.} \label{Fig_XMCDloop} \end{figure} To get deeper insight into the role played by Co and Mn in the ZEB effect, we carried out element-specific hysteresis loops for all samples by monitoring the Co- and Mn-$L_3$ edge XMCD signals as a function of the applied magnetic field. Fig. \ref{Fig_XMCDloop}(a) shows the hysteresis loops for Co (solid circles) and Mn (open circles) in LCMO, for which the XMCD signals are normalized by their isotropic XAS signals at the $L_3$ peak. The data were taken at $T$ = 14 K and using $H_{max}$ = 40 kOe, after zero-field cooling the sample. As expected for this non-ZEB material, there is no shift of the Co or Mn curves along the $H$ axis. The inset shows the loop resulting from the sum of the Co and Mn signals, which is very similar to the $M(H)$ curve obtained in Fig. \ref{Fig_MxH}(a). Very interesting results were found for the doped samples. Fig. \ref{Fig_XMCDloop}(b) displays the hysteresis loops obtained for LCCMO, where it can be seen that the curve for Mn is shifted to the left, while that for Co is shifted to the right. An overall decrease of the XMCD signal in relation to the LCMO parent compound can also be noted, resembling the results found in the macroscopic $M(H)$ curves. The fact that the reduction is not as large as that observed in the macroscopic magnetization results is probably related to the different $T$ and $H$ at which the investigations were performed. These results are indicative of the emergence of AFM coupling, with the decrease of the Co signal in relation to Mn being caused by the increased amount of LS Co$^{3+}$.
The uncompensated shifts of the Co and Mn curves result in the ZEB effect, as already observed in the macroscopic $M(H)$ curve. A similar result was found for LSCMO, as can be seen in Fig. \ref{Fig_XMCDloop}(c). The XMCD loops obtained for LBCMO were omitted from this figure due to the proximity between the Ba-$M_{5}$ and Co-$L_3$ edges. Nevertheless, the Mn-$L_3$ XMCD loop of LBCMO is shifted to the left, in similarity to LCCMO and LSCMO (see SM \cite{SM}). We are aware of the importance of conducting ligand-field multiplet calculations to quantitatively describe our results. Such analysis is being conducted and will be published elsewhere. The ZEB effect observed here for the doped samples can be understood in terms of the AFM coupling between Co and Mn in the low-$H$ regions of the $M(H)$ curves, \textit{i.e.}, the regions encompassing the positive and negative coercive fields. During the $M(H)$ cycle, the initial magnetization process induces the alignment of Mn ions toward the $H$ (positive) direction. A part of these ions is AFM coupled to Co ions and, during the $H$ cycling, some of them become pinned toward the positive field direction while some Co ions are consequently pinned in the negative direction. Since the Mn$^{3+}$/Mn$^{4+}$ magnetic moment is larger than that of LS Co$^{3+}$, this AFM coupling is uncompensated, leading to the shift of the $M(H)$ curve toward the left side of the $H$ axis. As the proportion of LS Co$^{3+}$ ions increases, the uncompensation increases, which corroborates the largest $H_{EB}$ being observed for LSCMO. The XMCD results indicated a smaller Co magnetic moment for this compound. However, the possibility that the changes in the ZEB effect are related to OV cannot be excluded, since LSCMO also presents the largest $\delta$.
It is also possible that the SG-like behavior observed for the doped compounds comes from Mn ions surrounded by non-magnetic LS Co$^{3+}$, since these ions would be subject to weaker magnetic interactions, which may also lead to frustration. Increasing the amount of LS Co$^{3+}$ ions would result in the observed reduction of the system's overall magnetization, but it could lead to an increase of the amount of the SG-like phase. \section{Summary} In summary, in this work we thoroughly investigated the structural, electronic and magnetic properties of the LCMO, LBCMO, LCCMO and LSCMO compounds. Our SXRD results indicate more symmetrical crystal structures for the Ba- and Sr-doped samples. It was also found that all samples present cationic disorder of Co and Mn, and that the doped compounds present a small amount of phase segregation. These results are corroborated by the Raman spectroscopy measurements, and are believed to be related to the larger ZEB effect observed for LBCMO and LSCMO. In addition to the two FM transitions observed for LCMO, a third anomaly was found to emerge at low $T$ in the $M(T)$ measurements of the doped samples, which is related to the formation of CG behavior, as concluded from the AC magnetic susceptibility and $\mu$SR experiments. This third magnetic transition is most likely related to the Co$^{3+}$--O--Mn$^{4+}$ AFM coupling. XAS measurements indicate mixed valence states Co$^{2+}$/Co$^{3+}$ and Mn$^{4+}$/Mn$^{3+}$ in all samples, and also that the Ba$^{2+}$/Ca$^{2+}$/Sr$^{2+}$ partial substitution at the La$^{3+}$ site leads to a large increase of the Co average valence, with subtle changes of the Mn formal valence. The XAS data also suggest the presence of OV in the samples, which may also play an important role in their magnetic properties. Our XMCD results indicate that the ZEB effect observed for the doped samples is related to the uncompensated AFM coupling between Co and Mn.
The reduction of the Co magnetic moment observed for LSCMO, induced by the increased proportion of Co$^{3+}$ and/or OV, augments this uncompensation and results in its large ZEB effect. A similar result may hold for LBCMO. \begin{acknowledgements} This work was supported by Conselho Nacional de Desenvolvimento Cient\'{i}fico e Tecnol\'{o}gico (CNPq) [No. 400134/2016-0], Funda\c{c}\~{a}o Carlos Chagas Filho de Amparo \`{a} Pesquisa do Estado do Rio de Janeiro (FAPERJ), Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de Goi\'{a}s (FAPEG), Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP) and Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{i}vel Superior (CAPES). The authors thank the DXAS, PGM, XDS and XPD staffs of LNLS for technical support and LNLS for the concession of beam time (proposals No. 20160546, 20160578, 20170057, 20170310, 20170612 and 20180745). E.S., F.J.L. and E.B.S. acknowledge support by a joint DFG-FAPERJ project Li 244/12. F.J.L. is grateful for a fellowship by FAPERJ. \end{acknowledgements}
\section{\label{introduction} Introduction} \pst Spin states are usually described by spinors (pure states) or density matrices associated with a finite-dimensional Hilbert space. On the other hand, in the tomographic-probability representation, spin states (qudit states) can be described by fair probability distributions or points on the simplex (probability vectors)~\cite{dodonovPLA,oman'ko-jetp,serg-inverse-spin}. The maps of qudit states onto different quasidistribution functions defined on a finite number of points are discussed in~\cite{vourdas04,vourdas06,wootters,klimov06,klimov07}. All these maps, including the tomographic-probability map~\cite{serg-spin,serg-chebyshev,ibort}, can be formulated in terms of star-product schemes~\cite{oman'ko-JPA,oman'ko-vitale}. These schemes are analogous to the known scheme developed for the star product on a phase space~\cite{stratonovich,garcia-bondia}. The analogues of the Wigner function on a finite set of points are studied in~\cite{chaturvedi}. Among the possible probability descriptions of qudit states one can point out symmetric informationally complete (SIC) positive operator-valued measures (POVMs) studied in~\cite{caves,renes,fuchs-2010}. These maps are associated with the existence of specific bases in finite-dimensional Hilbert spaces which can also be considered from the star-product point of view~\cite{serg-sic}. Another kind of specific basis in finite-dimensional Hilbert spaces is the so-called mutually unbiased bases (MUBs)~\cite{ivanovic,wootters87,wootters-fields,albouy}. MUBs, too, can be considered by using the star-product approach (see, e.g., remarks in~\cite{serg-mub}). Some experimental aspects related to SIC-POVMs and MUBs are considered in~\cite{steinberg}. The aim of our article is to demonstrate the possibility of constructing specific bases in finite-dimensional Hilbert spaces by using properties of unitary and non-unitary matrices.
The $d^2\times N$ matrices are built by considering $N$ operators acting on a $d$-dimensional Hilbert space as $d^2$-dimensional vectors. Since each $d^2\times N$ matrix corresponds to a star-product scheme, a classification of star-product schemes with respect to the associated $d^2\times N$ matrices is given. In particular, unitary matrices are shown to be responsible for self-dual schemes. It turns out that there exists no minimal self-dual star-product scheme with dequantizers in the form of POVM effects. Also, we prove that the Hermitian dequantizers and quantizers of a self-dual scheme must contain negative eigenvalues. The article is organized as follows. In Sec. 2, we present a review of Hilbert spaces as well as the representation of matrices by vectors and vice versa. In Sec. 3, we review the star-product scheme following~\cite{oman'ko-JPA,oman'ko-vitale}. In Sec. 4, we relate properties of unitary and non-unitary matrices to self-dual and other star-product schemes. In this section, we also present the star-product picture of qubit state bases, and review the known results on constructing different bases for qubit states studied in~\cite{caves,renes,fuchs-2010,livine}. The conclusions and prospects are given in Sec. 5. \section{\label{sec-Concise-Review-Hilbert-Spaces} Concise Review of Hilbert Spaces} \pst In this Section we review the construction of star products of functions of discrete variables following \cite{oman'ko-JPA,oman'ko-vitale,ibort}. Let $\mathcal{H}_d$ be a $d$-dimensional Hilbert space of complex vectors $|\psi\rangle$ with a standard inner product $\langle \phi | \psi \rangle$ that is antilinear in the first argument and linear in the second one. The normalized vectors ($\langle \psi | \psi\rangle = 1$) describe pure states of a $d$-dimensional quantum system (qudit). Denote by $\mathcal{B}(\mathcal{H}_d)$ the set of linear operators acting on $\mathcal{H}_d$.
Since $\dim\mathcal{H}_d=d<\infty$, any operator $\hat{A}\in\mathcal{B}(\mathcal{H}_d)$ is bounded and completely described by the $d\times d$ matrix $A$ with complex matrix elements $A_{ij} = \langle e_i | \hat{A} | e_j \rangle = {\rm Tr}\big[ \hat{E}_{(i,j)}^{\dag} \hat{A} \big]$, where $\{|e_k\rangle\}_{k=1}^{d}$ is an orthonormal basis in $\mathcal{H}_d$ and $\hat{E}_{(i,j)} = | e_i \rangle \langle e_j |$ is a matrix unit. We have just introduced the inner product of operators $\hat{X}$ and $\hat{Y}$ in the following manner: ${\rm Tr}\big[ \hat{X}^{\dag} \hat{Y} \big] \equiv {\rm Tr}\big[ {X}^{\dag} {Y} \big]$, where the matrix $X^{\dag} = (X^{\ast})^{\rm tr} = (X^{\rm tr})^{\ast}$ determines the adjoint operator $\hat{X}^{\dag}$. The matrix units $\hat{E}_{(i,j)}$, $1 \le i,j \le d$, form a basis in $\mathcal{B}(\mathcal{H}_d)$. The above arguments allow drawing the conclusion that $\mathcal{B}(\mathcal{H}_d)$ is a $d^2$-dimensional Hilbert space. \subsection{\label{subsec-Matrices-Vectors}Matrices as Vectors and Vectors as Matrices} \pst Let us consider the linear space of $m\times n$ matrices and choose the set of $mn$ matrix units $E_{(i,j)}$, $1 \le i \le m$, $1 \le j \le n$, as a basis in this space: \begin{equation} \label{matrix-unit} \underset{m\times n}{E_{(i,j)}} = \bordermatrix{ & & & \underset{\downarrow}{j} & \cr & 0 & 0 & 0 & 0 & 0 \cr & 0 & \cdots & 0 & \cdots & 0 \cr i \rightarrow & 0 & 0 & 1 & 0 & 0 \cr & 0 & \cdots & 0 & \cdots & 0 \cr & 0 & 0 & 0 & 0 & 0 \cr }. \end{equation} We use a known map (see, e.g.,~\cite{mmsz-03}) of an $m\times n$ matrix $Z$ onto an $mn$-dimensional vector $|Z\rangle$ and vice versa. For successive $i=1,2,\ldots,m$, take the $i$th row and transpose it. Then join all the obtained $n$-columns step by step to obtain the $mn$-dimensional column. This column is nothing else but the coordinate representation of the vector $|Z\rangle$ in some orthogonal basis.
For instance, in the case $m=n=2$ we have \begin{equation} \label{matrices-vectors} Z = \left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) \longrightarrow |Z\rangle = \left( \begin{array}{c} a \\ b \\ c \\ d \\ \end{array} \right). \end{equation} Thus, thanks to this rule, any rectangular $m\times n$ matrix can be considered as an $mn$-dimensional vector. Clearly, there exists an inverse operation which provides the inverse map of an $N$-dimensional vector $|Z\rangle$ onto a matrix $Z$ if the number of vector elements is a composite number $N=mn$. Such a composite number $N=mn$ provides two rectangular matrices, of dimension $m\times n$ and $n\times m$, with the matrices depending on how we split the vector into components and then collect them in columns and rows. This map provides the possibility to consider any column vector of composite dimension as the $m\times n$ matrix of an operator $\mathcal{H}_n \rightarrow \mathcal{H}_m$. Conversely, the other matrix (of dimension $n\times m$) yields the map of a vector from $\mathcal{H}_m$ onto a vector in $\mathcal{H}_n$. The feature of a prime number $N$ is that an $N$-dimensional vector cannot be bijectively mapped (without extension) onto an $m\times n$ matrix with $m,n>1$. This characteristic property of prime numbers can shed some light on proving the nonexistence of a full set of mutually unbiased bases in Hilbert spaces of non-prime-power dimensions. \textbf{Remark 1}. A square matrix $Z$ ($m=n=d$) is represented by a $d^2$-vector with components ${\rm Tr}\big[ E_{(i,j)}^{\dag} Z \big]$. However, instead of the matrix units $E_{(i,j)}$, one can use another orthonormal (in the trace sense) basis of matrices in $\mathcal{B}(\mathcal{H}_d)$. For example, if $d=2$ one can use the conventional matrices of the operators $\frac{1}{\sqrt{2}} (\hat{I}_2, \hat{\sigma}_x, \hat{\sigma}_y, \hat{\sigma}_z)$, where $\hat{I}_2\in\mathcal{B}(\mathcal{H}_2)$ is the identity operator and $(\hat{\sigma}_x,\hat{\sigma}_y,\hat{\sigma}_z)$ is the set of Pauli operators.
Then \begin{equation} \label{matrices-vectors-Pauli} Z = \left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) \longrightarrow |\widetilde{Z}\rangle = \frac{1}{\sqrt{2}}\left( \begin{array}{c} a+d \\ b+c \\ i(b-c) \\ a-d \\ \end{array} \right). \end{equation} \subsection{\label{subsec-Hierarchy-Operators}Hierarchy of Operators} \pst Applying the above consideration to $d\times d$ matrices $X$ and $Y$ of operators $\hat{X},\hat{Y}\in\mathcal{B}(\mathcal{H}_d)$ results in $d^2$-dimensional complex vectors $|X\rangle$ and $|Y\rangle$ such that $\langle X | Y \rangle = {\rm Tr}\big[ \hat{X}^{\dag} \hat{Y} \big]$. In other words, the trace operation applied to the product of two matrices is equivalent to the standard scalar product of the column vectors constructed from the initial matrices. It follows easily that $\mathcal{B}(\mathcal{H}_d)$ is isomorphic to $\mathcal{H}_{d^2}$, i.e. $\mathcal{B}(\mathcal{H}_d) \Longleftrightarrow \mathcal{H}_{d^2}$. On obtaining this crucial result, one can readily repeat the development of this Section by substituting $d^2$ for $d$. Similarly, one can construct a $d^4$-dimensional Hilbert space $\mathcal{B}(\mathcal{B}(\mathcal{H}_d))$ of operators acting on the space of operators $\mathcal{B}(\mathcal{H}_d)$, which in turn act on vectors from $\mathcal{H}_d$. We will refer to the space $\mathcal{B}(\mathcal{B}(\mathcal{H}_d))$ as the space of superoperators on $\mathcal{H}_d$. Evidently, $\mathcal{B}(\mathcal{B}(\mathcal{H}_d)) \Longleftrightarrow \mathcal{H}_{d^4}$ and this consideration can be continued ad infinitum.
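As a brief numerical aside, both vectorization rules (\ref{matrices-vectors}) and (\ref{matrices-vectors-Pauli}) are straightforward to check. The NumPy sketch below (illustrative code; all variable names are ours) verifies the row-stacking map, the equality of the trace inner product with the vector scalar product, and the Pauli-basis representation.

```python
import numpy as np

# Row-stacking vectorization: a 2x2 matrix (a b; c d) -> column (a, b, c, d).
Z = np.array([[1 + 2j, 3 + 0j],
              [4j, 5 + 0j]])
vec_Z = Z.reshape(-1)

# The trace inner product Tr[Y^dag Z] equals the scalar product <Y|Z>.
Y = np.array([[2 + 0j, 1j],
              [0j, 1 + 0j]])
assert np.isclose(np.trace(Y.conj().T @ Z), np.vdot(Y.reshape(-1), vec_Z))

# Vectorization in the trace-orthonormal Pauli basis (I, sx, sy, sz)/sqrt(2).
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
vec_Z_pauli = np.array([np.trace(P.conj().T @ Z) for P in paulis]) / np.sqrt(2)

# Compare with the closed form (a+d, b+c, i(b-c), a-d)/sqrt(2).
a, b, c, d = vec_Z
expected = np.array([a + d, b + c, 1j * (b - c), a - d]) / np.sqrt(2)
assert np.allclose(vec_Z_pauli, expected)
```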
This leads to the following hierarchy of spaces: \begin{equation} \mathcal{H}_d \Longrightarrow \mathcal{B}(\mathcal{H}_d) \Longleftrightarrow \mathcal{H}_{d^2} \Longrightarrow \mathcal{B}(\mathcal{B}(\mathcal{H}_d)) \Longleftrightarrow \mathcal{B}(\mathcal{H}_{d^2}) \Longleftrightarrow \mathcal{H}_{d^4} \Longrightarrow \ldots \end{equation} \section{\label{section-generic-star-product-scheme} Star Product for Discrete Variables} \pst In this Section, following the ideas of~\cite{oman'ko-JPA,oman'ko-vitale,ibort}, we review a construction of the star product for functions depending on discrete variables. Let us consider the Hilbert space $\mathcal{B}(\mathcal{H}_d)$. {\bf Definition}. The function $f_{A}(k)$ on a discrete set $\{k\}$, $k=1,\ldots,N<\infty$, defined by the relation \begin{equation} \label{dequantizer} f_A(k) = {\rm Tr} \big[\hat{U}_k^{\dag} \hat{A}\big] \end{equation} \noindent is called the symbol of an operator $\hat{A}\in\mathcal{B}(\mathcal{H}_d)$, and the operator $\hat{U}_k\in\mathcal{B}(\mathcal{H}_d)$ is called the dequantizer operator of the star-product scheme. Note that the symbols $f_A(k)$ can be considered as elements of a column $\boldsymbol{f}_A = \left( \begin{array}{ccc} f_A(1) & \cdots & f_A(N) \\ \end{array} \right)^{\rm tr}$. For example, if we choose the $d\times d$ matrix units $\hat{E}_{(i,j)}$ as dequantizers $\hat{U}_k$, where the index $k=1,\ldots,d^2$ is parameterized by $k=d(i-1)+j$, then $\boldsymbol{f}_A = |A\rangle \in \mathcal{H}_{d^2}$. If the symbol $f_A(k)$ contains full information about the operator $\hat{A}$, then such a star-product scheme is called tomographic (informationally complete). In other words, knowledge of the symbol $f_A(k)$ is sufficient to find an explicit form of the operator $\hat{A}$, namely, \begin{equation} \label{quantizer} \hat{A} = \sum_{k=1}^{N} f_A(k) \hat{D}_k.
\end{equation} \noindent The operator $\hat{D}_k \in \mathcal{B}(\mathcal{H}_d)$ is referred to as the quantizer and is connected with the dequantizer $\hat{U}_{k'}$ by means of the relation \begin{equation} \label{delta} {\rm Tr} \big[ \hat{U}_{k}^{\dag} \hat{D}_{k'} \big] = \delta(k,k'), \end{equation} \noindent where the function $\delta(k,k')$ of two discrete variables plays the role of a delta-function on the set of tomographic symbols of all operators. In other words, \begin{equation} \label{delta-check} \sum_{k'=1}^{N} f_A(k') \delta(k,k') = f_A(k). \end{equation} \subsection{\label{subsec-Tomographic-Star-Product-Scheme} Tomographic Star-Product Scheme} \pst It is shown in~\cite{manko-marmo-simoni-etal,manko-marmo-simoni-vent,mms-sudarshan-vent} that the star-product scheme (\ref{dequantizer}), (\ref{quantizer}) is tomographic if and only if \begin{equation} \label{tomogr-s-p-require} \sum_{k=1}^{N} | D_k \rangle \langle U_k | = \hat{I}_{d^2}, \end{equation} \noindent where $|D_k \rangle,|U_k \rangle\in\mathcal{H}_{d^2}$ are vectors constructed from the quantizer $\hat{D}_k$ and the dequantizer $\hat{U}_k$, respectively, by the higher-dimensional analog of the rule (\ref{matrices-vectors}), $\langle U_k | = |U_k \rangle^{\dag}$, and $\hat{I}_{d^2}$ is the identity operator in $\mathcal{B}(\mathcal{H}_{d^2})$. It is worth noting that condition (\ref{delta-check}) is then automatically met because $\delta(k,k') = \langle U_k | D_{k'} \rangle$, $f_A(k') = \langle U_{k'} | A \rangle$, and $\sum_{k'=1}^{N} \langle U_k | D_{k'} \rangle \langle U_{k'} | A \rangle = \langle U_k | \hat{I}_{d^2} | A \rangle = \langle U_k | A \rangle$. An evident requirement for (\ref{tomogr-s-p-require}) to be fulfilled is $N\ge d^2$, because a sum of $N$ rank-1 operators has rank at most $N$, while the right-hand side is a full-rank operator.
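The dequantizer-quantizer pair (\ref{dequantizer}), (\ref{quantizer}) can be illustrated with the simplest scheme, in which the matrix units serve as both dequantizers and quantizers. The following NumPy sketch (our own illustrative naming) computes the symbols of an operator and reconstructs the operator from them.

```python
import numpy as np

d = 2
# Matrix units E_(i,j) serve as both dequantizers U_k and quantizers D_k;
# the index k runs over the pairs (i, j) in row-major order.
units = []
for i in range(d):
    for j in range(d):
        E = np.zeros((d, d), dtype=complex)
        E[i, j] = 1.0
        units.append(E)

A = np.array([[1, 2j], [3, 4]], dtype=complex)

# Symbols f_A(k) = Tr[U_k^dag A]; for matrix units these are just the A_ij.
f = np.array([np.trace(U.conj().T @ A) for U in units])

# Reconstruction A = sum_k f_A(k) D_k recovers the original operator.
A_rec = sum(fk * D for fk, D in zip(f, units))
assert np.allclose(A, A_rec)
```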
For the inverse map (\ref{quantizer}): $\mathbb{C}^{N} \rightarrow \mathcal{B}(\mathcal{H}_d)$ to exist, it is necessary and sufficient that the set of dequantizers $\{\hat{U}_k\}_{k=1}^{N}$ contains $d^2$ linearly independent operators. If we combine the corresponding $d^2$-dimensional columns $|U_k\rangle$ into a single $d^2\times N$ dequantization matrix $\mathscr{U}$ of the form \begin{equation} \label{dequantization-matrix} \underset{d^2\times N}{\mathscr{U}} = \left( \Bigg| U_1 \Bigg\rangle \Bigg| U_2 \Bigg\rangle \cdots \Bigg| U_N \Bigg\rangle \right) = \left( \begin{array}{cccc} |U_1\rangle_1 & |U_2\rangle_1 & \cdots & |U_N\rangle_1\\ |U_1\rangle_2 & |U_2\rangle_2 & \cdots & |U_N\rangle_2\\ \cdots & \cdots & \cdots & \cdots\\ |U_1\rangle_{d^2} & |U_2\rangle_{d^2} & \cdots & |U_N\rangle_{d^2}\\ \end{array} \right), \end{equation} \noindent then this criterion can be rewritten as ${\rm rank} \mathscr{U} = d^2$. Once this condition is met, a set of quantizers $\{\hat{D}_k\}_{k=1}^{N}$ exists and can also be written in terms of a single quantization matrix \begin{equation} \label{quantization-matrix} \underset{d^2\times N}{\mathscr{D}} = \left( \Bigg| D_1 \Bigg\rangle \Bigg| D_2 \Bigg\rangle \cdots \Bigg| D_N \Bigg\rangle \right) = \left( \begin{array}{cccc} |D_1\rangle_1 & |D_2\rangle_1 & \cdots & |D_N\rangle_1\\ |D_1\rangle_2 & |D_2\rangle_2 & \cdots & |D_N\rangle_2\\ \cdots & \cdots & \cdots & \cdots\\ |D_1\rangle_{d^2} & |D_2\rangle_{d^2} & \cdots & |D_N\rangle_{d^2}\\ \end{array} \right). \end{equation} In Section \ref{sec-Dequantization-Matrix-and-SPScheme}, we will reveal a relation between matrices $\mathscr{U}$, $\mathscr{D}$ and properties of the star-product scheme. \textbf{Remark 2}. Exploiting the notation (\ref{dequantization-matrix})--(\ref{quantization-matrix}), the criterion (\ref{tomogr-s-p-require}) takes the form $\mathscr{D}\mathscr{U}^{\dag} = I_{d^2}$. 
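To make the matrix formulation concrete, the sketch below (a NumPy illustration with our own naming) assembles the dequantization matrix $\mathscr{U}$ column by column for the trace-orthonormal Pauli dequantizers, checks the rank criterion, and verifies the criterion of Remark 2 with $\mathscr{D}=\mathscr{U}$.

```python
import numpy as np

d = 2
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
# Trace-orthonormal dequantizers: Tr[U_k^dag U_k'] = delta_{kk'}.
dequantizers = [P / np.sqrt(2) for P in paulis]

# Columns |U_k> of the dequantization matrix are the row-stacked dequantizers.
U = np.column_stack([Uk.reshape(-1) for Uk in dequantizers])

# Informational completeness: rank of U equals d^2.
assert np.linalg.matrix_rank(U) == d**2

# For a trace-orthonormal set the quantizers coincide with the dequantizers,
# and the criterion of Remark 2 holds: D U^dag = I_{d^2}.
D = U
assert np.allclose(D @ U.conj().T, np.eye(d**2))
```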
\subsubsection{\label{subsubsec-Search-Quantization-Matrix}Search of Quantization Matrix} \pst Given the dequantization matrix $\mathscr{U}$, ${\rm rank}\,\mathscr{U} = d^2$, a quantization matrix (\ref{quantization-matrix}) can be found via the following pseudoinverse operation \begin{equation} \label{D-from-U} \mathscr{D} = (\mathscr{U}\mathscr{U}^{\dag})^{-1} \mathscr{U}. \end{equation} \noindent Indeed, it can be easily checked that $\sum_{k=1}^{N}|U_k\rangle\langle U_k| = \mathscr{U}\mathscr{U}^{\dag}$. Hence, \begin{equation} \sum_{k=1}^{N} | D_k \rangle \langle U_k | = \sum_{k=1}^{N} (\mathscr{U}\mathscr{U}^{\dag})^{-1} |U_k\rangle\langle U_k| = (\mathscr{U}\mathscr{U}^{\dag})^{-1} \mathscr{U}\mathscr{U}^{\dag} = \hat{I}_{d^2}, \end{equation} \noindent i.e. the requirement (\ref{tomogr-s-p-require}) holds true. It is worth mentioning that the matrix $\mathscr{D}$ does not have to be of the form (\ref{D-from-U}) if $N>d^2$. In fact, in this case the vectors $\{|U_k\rangle\}_{k=1}^{N}$ are linearly dependent, so there exists a nontrivial linear combination $\sum_{k=1}^{N}c_k |U_k\rangle = 0$. The transformation $\delta(k,k') \rightarrow \delta(k,k')+c_{k'}^{\ast}$ then leaves equality (\ref{delta-check}) intact. Such a transformation is easily achieved by the following transformation of the quantization matrix: $\mathscr{D} \rightarrow \mathscr{D} + (\cdot)\,{\rm diag}(c_1^{\ast}, c_2^{\ast}, \ldots, c_N^{\ast})$, where $(\cdot)$ is an arbitrary $d^2\times N$ matrix. This means that some ambiguity of the quantization matrix (\ref{quantization-matrix}) is allowed, and formula (\ref{D-from-U}) covers only one of many possibilities. \subsubsection{\label{subsubsec-MinimalTSPS} Minimal Tomographic Star-Product Scheme} \pst The special case $N=d^2$ is important and leads to a \textit{minimal} tomographic star-product scheme. The condition ${\rm rank}\,\mathscr{U} = d^2$ is then equivalent to $\det\mathscr{U} \ne 0$, i.e.
to the existence of the inverse matrix $\mathscr{U}^{-1}$. Formula (\ref{delta-check}) is valid for any symbol $f_A(k)$, $k=1,\ldots,d^2$, if and only if $\delta(k,k')$ reduces to the Kronecker delta-symbol $\delta_{k,k'}$. Taking into account relation (\ref{delta}), we obtain \begin{equation} \label{D-from-U-square-matrix} \mathscr{U}^{\dag}\mathscr{D}=I_{d^2} \quad \Longleftrightarrow \quad \mathscr{D} = (\mathscr{U}^{\dag})^{-1}. \end{equation} \textbf{Example 1}. It is easily seen that if we choose the $d\times d$ matrix units $\hat{E}_{(i,j)}$ as dequantizers $\hat{U}_k$, $k=d(i-1)+j$, then $\mathscr{U}=\mathscr{D}=I_{d^2}$ and the requirement (\ref{tomogr-s-p-require}) is satisfied. Such a tomographic procedure results in the proper reconstruction formula (\ref{quantizer}) with $\hat{D}_k = \hat{E}_{(i,j)}$. However, in physics one is usually interested in reconstructing the Hermitian density operator $\hat{\rho}$ by measuring physical quantities associated with Hermitian dequantizer operators $\hat{U}_{k}=\hat{U}_{k}^{\dag}$ (in contrast to the matrix units, for which $\hat{E}_{(i,j)}^{\dag} = \hat{E}_{(j,i)} \ne \hat{E}_{(i,j)}$ whenever $i \ne j$). The most general case of measurements associated with positive operator-valued measures is considered in Section \ref{subsec-square-matrix}.\hfill$\scriptstyle\blacksquare$ \subsection{\label{subsec-Star-Product-Kernel} Star-Product Kernel} \pst The symbol $f_{AB}(k)$ of the product of two operators $\hat{A},\hat{B}\in\mathcal{B}(\mathcal{H}_d)$ equals the star product of the symbols $f_A$ and $f_B$ determined by the formula \begin{equation} (f_A \star f_B) (k) \equiv f_{AB}(k) = \sum_{k',k''=1}^{N} f_A(k') f_B(k'') K(k,k',k''), \end{equation} \noindent where the kernel $K$ is expressed in terms of the dequantizer and quantizer operators as follows: \begin{equation} K(k,k',k'') = {\rm Tr} \big[ \hat{U}_{k}^{\dag} \hat{D}_{k'} \hat{D}_{k''} \big].
\end{equation} \noindent Since the star product is associative (associativity is inherited from the operator product), its kernel necessarily satisfies the nonlinear equation \begin{equation} K^{(3)}(k,k',k'',k''') = \sum_{l=1}^{N} K(k,l,k''') K(l,k',k'') = \sum_{l=1}^{N} K(k,k',l) K(l,k'',k'''), \end{equation} \noindent which is an immediate consequence of the relation $f_A \star f_B \star f_C = (f_A \star f_B) \star f_C = f_A \star (f_B \star f_C)$. \subsection{\label{subsec-Interteining-Kernels} Intertwining Kernels Between Two Star-Product Schemes} \pst Let us assume that we are given two different discrete sets $\{k\}_{k=1}^{N}$ and $\{\kappa\}_{\kappa=1}^{M}$ as well as two different sets of the corresponding dequantizers and quantizers, $\{\hat{U}_k,\hat{D}_k\}_{k=1}^{N}$ and $\{\hat{\mathfrak{U}}_{\kappa},\hat{\mathfrak{D}}_{\kappa}\}_{\kappa=1}^{M}$, respectively, with operators from both sets acting on the same Hilbert space $\mathcal{H}_d$. In view of this, one can construct two different star-product schemes for two different kinds of symbols, $f_A(k)$ and $\mathfrak{f}_A(\kappa)$.
The symbols are related by intertwining kernels \begin{eqnarray} \label{intertw-symbols} f_A(k) = \sum_{\kappa=1}^{M} K_{\mathfrak{f} \rightarrow f} (k,\kappa) \mathfrak{f}_A(\kappa), \qquad \boldsymbol{f}_A = K_{\mathfrak{f} \rightarrow f} ~ \boldsymbol{\mathfrak{f}}_A, \nonumber\\ \mathfrak{f}_A(\kappa) = \sum_{k=1}^{N} K_{f \rightarrow \mathfrak{f}} (\kappa,k) f_A(k), \qquad \boldsymbol{\mathfrak{f}}_A = K_{f \rightarrow \mathfrak{f}} ~ \boldsymbol{f}_A, \end{eqnarray} \noindent where the intertwining kernels are represented as rectangular matrices expressed through dequantizers and quantizers as follows: \begin{eqnarray} \label{intertw-kernels} && K_{\mathfrak{f} \rightarrow f} (k,\kappa) = {\rm Tr} \big[ \hat{U}_k^{\dag} \hat{\mathfrak{D}}_{\kappa}\big], \qquad K_{f \rightarrow \mathfrak{f}} (\kappa,k) = {\rm Tr} \big[ \hat{\mathfrak{U}}_{\kappa}^{\dag} \hat{D}_{k}\big],\\ && K_{\mathfrak{f} \rightarrow f} = \mathscr{U}_{\{k\}}^{\dag} \mathscr{D}_{\{\kappa\}} = \underset{N\times M}{\left( \begin{array}{cc} \vdots & \vdots \\ \end{array} \right)}, \qquad K_{f \rightarrow \mathfrak{f}} = \mathscr{U}_{\{\kappa\}}^{\dag} \mathscr{D}_{\{k\}} = \underset{M\times N}{\left( \begin{array}{c} \cdots\\ \cdots\\ \end{array} \right)}. \end{eqnarray} \textbf{Example 2}. Given a unitary $d^2\times d^2$ matrix $u$, we construct two star-product schemes: the first one exploits columns of the matrix $u$ as dequantizers $|U_k\rangle$ (i.e. $\mathscr{U}_{\{k\}}=\mathscr{D}_{\{k\}}=u$), the second one utilizes rows of the matrix $u$ as dequantizers $|\mathfrak{U}_k\rangle$ (i.e. $\mathscr{U}_{\{\kappa\}}=\mathscr{D}_{\{\kappa\}}=u^{\rm tr}$). Using formulas (\ref{intertw-symbols}), (\ref{intertw-kernels}) and decomposing row matrix elements in terms of column matrix elements, we get the cubic relation $u = (uu^{\ast})u^{\rm tr}$.
\hfill$\scriptstyle\blacksquare$ One can consider the particular case $\{k\} \equiv \{\kappa\}$, $\hat{\mathfrak{U}}_{\kappa} = \hat{D}_{k}$, and $\hat{\mathfrak{D}}_{\kappa} = \hat{U}_{k}$, which is called the dual star-product quantization scheme. \subsection{\label{subsec-Self-Dual} Self-Dual Star-Product Scheme} \pst \textbf{Definition}. The star-product scheme (\ref{dequantizer}), (\ref{quantizer}) is called self-dual if there exists $c\in\mathbb{R}$, $c>0$, such that $\hat{U}_k = c \hat{D}_k$ for all $k=1,\ldots,N$. We will refer to the factor $c$ as the coefficient of skewness. A self-dual star-product scheme is completely equivalent to the scheme with coincident dequantizer and quantizer operators $\hat{\widetilde{U}}_k = \hat{\widetilde{D}}_k = \frac{1}{\sqrt{c}} \hat{U}_k = \sqrt{c} \hat{D}_k$. \textbf{Example 3}. The matrix units $\hat{E}_{(i,j)}$ form a self-dual scheme with $c=1$.\hfill$\scriptstyle\blacksquare$ \textbf{Example 4}. A description of the qubit ($d=2$) phase space proposed in the paper~\cite{livine} implies a self-dual star-product scheme with the following dequantizers and quantizers: \begin{eqnarray} \label{qubit-phase} && \hat{U}_1 = \frac{1}{2}\hat{D}_1 = \frac{1}{4}\left( \hat{I}_2 + \hat{\sigma}_x + \hat{\sigma}_y + \hat{\sigma}_z \right), \nonumber\\ && \hat{U}_2 = \frac{1}{2}\hat{D}_2 = \frac{1}{4}\left( \hat{I}_2 + \hat{\sigma}_x - \hat{\sigma}_y - \hat{\sigma}_z \right), \nonumber\\ && \hat{U}_3 = \frac{1}{2}\hat{D}_3 = \frac{1}{4}\left( \hat{I}_2 - \hat{\sigma}_x + \hat{\sigma}_y - \hat{\sigma}_z \right), \nonumber\\ && \hat{U}_4 = \frac{1}{2}\hat{D}_4 = \frac{1}{4}\left( \hat{I}_2 - \hat{\sigma}_x - \hat{\sigma}_y + \hat{\sigma}_z \right).
\end{eqnarray}\hfill$\scriptstyle\blacksquare$ \section{\label{sec-Dequantization-Matrix-and-SPScheme} Type of Dequantization Matrix and Properties of Star-Product Scheme} \pst In this Section, we establish a relation between the type of the dequantization matrix $\mathscr{U}$ (quantization matrix $\mathscr{D}$) and particular properties of the star-product scheme. Unless stated otherwise, we deal with the $d^2$-dimensional space of operators $\mathcal{B}(\mathcal{H}_d)$. \subsection{Rectangular Matrix} We start with the most general rectangular $d^2\times N$ matrix $\mathscr{U}$. As was shown in Section \ref{subsec-Tomographic-Star-Product-Scheme}, if $N<d^2$ then ${\rm rank}\,\mathscr{U} \le N < d^2$, the set of dequantizers $\{\hat{U}_k\}_{k=1}^{N}$ is underfilled, and the quantization matrix $\mathscr{D}$ is not defined. In the opposite case $N\ge d^2$, the scheme is again underfilled if ${\rm rank}\,\mathscr{U} < d^2$, and it is overfilled if ${\rm rank}\,\mathscr{U} = d^2$. Underfilled schemes reveal only partial information about the system: the greater ${\rm rank}\,\mathscr{U}$, the more information can be extracted from the symbols (\ref{dequantizer}). Under this circumstance, the closer $N$ is to ${\rm rank}\,\mathscr{U}$, the less resource-intensive the procedure. An overfilled set of dequantizers provides a tomographic star-product scheme and allows calculating the quantization matrix $\mathscr{D}$, e.g. according to formula (\ref{D-from-U}). For an overfilled scheme, the smaller the difference $N-d^2$, the less redundant information is contained in the tomographic symbols. \textbf{Example 5}. Consider a full set of mutually unbiased bases (MUBs) $\{|a\alpha\rangle\}$, $a=0,\ldots,d$ (basis number), $\alpha=0,\ldots,d-1$ (vector index inside a basis), in a power-prime-dimensional Hilbert space $\mathcal{H}_d$.
Dequantizers of the form $|a\alpha\rangle \langle a\alpha | \in \mathcal{B}(\mathcal{H}_d)$ lead to an overfilled scheme with a $d^2\times d(d+1)$ rectangular dequantization matrix $\mathscr{U}$, ${\rm rank}\,\mathscr{U} = d^2$. The case $d=2$ is illustrated in Table \ref{table}.\hfill$\scriptstyle\blacksquare$ \subsection{\label{subsec-square-matrix}Square Matrix} \pst An arbitrary square $d^2\times d^2$ matrix $\mathscr{U}$ with $\det \mathscr{U} \ne 0$ defines a minimal tomographic star-product scheme, and vice versa. The quantization matrix $\mathscr{D}$ is given by formula (\ref{D-from-U-square-matrix}). The symbols (\ref{dequantizer}) completely determine the desired operator $\hat{A}\in\mathcal{B}(\mathcal{H}_d)$. The density operator $\hat{\rho}$ of a physical system is of special interest. All informationally complete positive operator-valued measures (POVMs) are nothing else but either overfilled or minimal tomographic star-product schemes (see, e.g.,~\cite{weigert}), where the POVM effects are regarded as dequantizers. If this is the case, the symbols can, in principle, be measured experimentally. Assuming a non-zero error bar of the measured symbols, the smaller the condition number of the matrix $\mathscr{U}$, the smaller the error of the reconstructed density operator (in a desired basis). \textbf{Example 6}. A symmetric informationally complete POVM (SIC-POVM) of the Weyl-Heisenberg form is conjectured to exist for an arbitrary finite dimension $d=\dim\mathcal{H}_d$ (although this is not proven yet). A SIC-POVM consists of $d^2$ effects $\hat{U}_k = \frac{1}{d} \hat{\Pi}_k = \frac{1}{d}|\psi_k\rangle\langle\psi_k|\in\mathcal{B}(\mathcal{H}_d)$ such that ${\rm Tr}\big[ \hat{\Pi}_k \hat{\Pi}_{k'} \big] = (d\delta_{kk'}+1)/(d+1)$. This means that the scalar product $\langle U_k | U_{k'} \rangle$ of any two different columns of the matrix $\mathscr{U}$ equals the same number $1/[d^2(d+1)]$. The qubit example is given in Table \ref{table}.
\hfill$\scriptstyle\blacksquare$ \begin{figure} \begin{center} \includegraphics{figure} \caption{\label{figure} One-to-one correspondence between the type of dequantization matrix $\mathscr{U}$, ${\rm rank}\,\mathscr{U} = d^2$, and the type of star-product scheme in $\mathcal{H}_d$. The matrix $\mathscr{U}$ is constructed by higher-dimensional analogues of formulas (\ref{matrices-vectors}), (\ref{dequantization-matrix}).} \end{center} \end{figure} \subsection{Unitary Matrix} \pst To begin with, let us recall some properties of unitary matrices. A unitary $d^2\times d^2$ matrix $\mathscr{U}$ satisfies the condition $\mathscr{U} \mathscr{U}^{\dag} = \mathscr{U}^{\dag} \mathscr{U} = I_{d^2}$. This property implies the orthonormality of the columns of this matrix, \begin{equation} \sum_{p=1}^{d^2}\mathscr{U}_{pq}^{\ast} \mathscr{U}_{pq'} = \delta_{qq'}. \end{equation} \noindent It can be easily checked that the rows are also orthonormal, i.e. $\sum_{q=1}^{d^2} \mathscr{U}_{pq}^{\ast} \mathscr{U}_{p'q} = \delta_{pp'}$. This property means that the columns (rows) of the matrix $\mathscr{U}$ can be chosen as orthonormal basis vectors in the $d^2$-dimensional Hilbert space $\mathcal{H}_{d^2}$ and, consequently, in the space $\mathcal{B}(\mathcal{H}_d)$ via the higher-dimensional analogue of the map inverse to (\ref{matrices-vectors}). It means that all bases and sets of operators in $\mathcal{B}(\mathcal{H}_{d})$ can be represented as linear combinations of the operators $\hat{U}_k$ obtained from the columns $|U_k\rangle$ of the matrix $\mathscr{U}$. Now, we proceed to the analysis of the relation between a unitary dequantization matrix $\mathscr{U}$ and features of the star-product scheme. \textbf{Proposition 1}. A star-product scheme is minimal self-dual with coefficient of skewness $c$ if and only if the corresponding dequantization matrix is $\mathscr{U}=\sqrt{c}\,\tilde{\mathscr{U}}$, where $\tilde{\mathscr{U}}$ is a unitary $d^2\times d^2$ matrix. \textbf{Proof}.
As stated in Section \ref{subsec-Self-Dual}, a self-dual star-product scheme is equivalent to the scheme with coincident quantizers and dequantizers, i.e. $\tilde{\mathscr{U}} = \tilde{\mathscr{D}} = \frac{1}{\sqrt{c}}\mathscr{U}$. On the other hand, from (\ref{D-from-U-square-matrix}) it follows that $\tilde{\mathscr{U}}^{\dag} = \tilde{\mathscr{U}}^{-1}$. Now the statement of the Proposition is clearly seen. \hfill$\scriptstyle\blacksquare$ For many applications it is important to be aware of the relation between POVMs (primarily used for performing tomography of the system) and self-dual schemes (usually exploited while considering the phase space of the system). The following Propositions reveal an incompatibility of these two approaches. \textbf{Proposition 2}. There exists no minimal tomographic star-product scheme with dequantizers in the form of POVM effects and Hermitian semi-positive quantizers. \textbf{Proof}. Assume the converse, namely, $\sum_{k=1}^{d^2}\hat{U}_k = \hat{I}_{d}$, $\hat{U}_k = \hat{U}_k^{\dag} \ge 0$, and $\hat{D}_k = \hat{D}_k^{\dag} \ge 0$ for all $k=1,\ldots,d^2$. From Eq. (\ref{D-from-U-square-matrix}) it follows that ${\rm Tr}\big[ \hat{U}_k \hat{D}_{k'} \big] = \delta_{kk'}$ and $\sum_{k=1}^{d^2}{\rm Tr}\big[ \hat{U}_k \hat{D}_{k'} \big] = {\rm Tr}\big[\hat{D}_{k'} \big] = 1$. This implies that $\{\hat{D}_{k}\}_{k=1}^{d^2}$ is a set of density operators. Since $0\le \hat{U}_k \le \hat{I}_{d}$, the equality ${\rm Tr}\big[ \hat{U}_k \hat{D}_{k} \big] = 1$ can only be achieved if $\hat{U}_k=\hat{D}_{k}=| \psi_k \rangle\langle \psi_k |$, $| \psi_k \rangle\in\mathcal{H}_d$, or $\hat{U}_k = \hat{I}_d$. The latter case is inconsistent with the POVM requirement $\sum_{k=1}^{d^2}\hat{U}_k = \hat{I}_{d}$, and the former case implies $\langle \psi_k | \psi_{k'} \rangle = \delta_{kk'}$ for all $k,k'=1,\ldots,d^2$, which is impossible since there can be no more than $d$ orthonormal vectors in $\mathcal{H}_d$.
This contradiction concludes the proof. \hfill$\scriptstyle\blacksquare$ This proposition has immediate consequences. \textbf{Corollary 1}. There exists no minimal self-dual star-product scheme with dequantizers in the form of POVM effects. \textbf{Proof}. If such a scheme existed, then the quantizers would be Hermitian semi-positive in view of self-duality. This contradicts Proposition 2. \hfill$\scriptstyle\blacksquare$ \textbf{Corollary 2}. If the dequantizers $\{\hat{U}_k\}_{k=1}^{d^2}$ form a POVM, then the dequantization and quantization matrices $\mathscr{U}$ and $\mathscr{D}$ are not proportional to any unitary matrix. \textbf{Corollary 3}. Hermitian dequantizers and quantizers of a self-dual scheme must contain negative eigenvalues. The result of Corollary 1 indicates a slight error in the paper~\cite{livine}, where the dequantizers of the self-dual scheme (\ref{qubit-phase}) are treated as POVM effects, which is incorrect but harmless to the rest of that article. The paper~\cite{appleby-arbitrary-rank} uses the notation ``Wigner POVM'' because of an observed connection of the Wigner function with POVM probabilities rescaled by a constant amount and then shifted by a constant amount. This very shift makes the scheme non-self-dual (as it should be according to Corollary 1). Taking into account Proposition 2, we can predict the negative sign of this shift. The obtained results seem to be valid not only in finite-dimensional Hilbert spaces but also in the infinite-dimensional case. For instance, Corollary 3 is illustrated by the following example. \textbf{Example 7}.
The Weyl star-product scheme is defined through the dequantizers $\hat{U}(q,p) = 2 \hat{\mathcal{D}}(\alpha) \hat{\mathcal{I}} \hat{\mathcal{D}}(-\alpha)$ and quantizers $\hat{D}(q,p) = \frac{1}{2\pi}\hat{U}(q,p)$, where $\alpha=(q+ip)/\sqrt{2}$, $\hat{\mathcal{D}}(\alpha) = \exp\big[ \alpha \hat{a}^{\dag} - \alpha^{\ast} \hat{a} \big]$ is the displacement operator, $\hat{a}^{\dag}$ and $\hat{a}$ are the creation and annihilation operators, respectively, and $\hat{\mathcal{I}}$ is the inversion operator. The scheme is obviously self-dual (with coefficient of skewness $c=2\pi$). Since the displacement operator is unitary, the dequantizers and quantizers are Hermitian and inherit the spectrum of the inversion operator, ${\rm Sp}_{\mathcal{I}}=\{\pm 1\}$, i.e. they exhibit negative eigenvalues. \hfill$\scriptstyle\blacksquare$ \subsection{Unitary Matrix $u\otimes u^{\ast}$} \pst The dequantization matrix of the form $u\otimes u^{\ast}$ arises when performing a unitary rotation of the matrix units $\hat{E}_{(i,j)}$, $i,j=1,\ldots,d$. Indeed, $u E_{(i,j)} u^{\dag} = | u_i \rangle \langle u_j |$, where $| u_i \rangle$ is the $i$th column of a unitary $d\times d$ matrix $u$ and $\langle u_j | = | u_j \rangle^{\dag}$. The vector representation (\ref{matrices-vectors}) of the matrix $| u_i \rangle \langle u_j |$ is $| u_i \rangle \otimes (\langle u_j |)^{\rm tr} = | u_i \rangle \otimes | u_j \rangle^{\ast}$. Stacking these vectors by the rule (\ref{dequantization-matrix}) yields $\mathscr{U}=u\otimes u^{\ast}$. It means that such a matrix $\mathscr{U}$ defines dequantizers and quantizers of the form $\hat{U}_k = \hat{D}_k = \hat{u} \hat{E}_{(i,j)} \hat{u}^{\dag} = \hat{u} | e_i \rangle \langle e_j | \hat{u}^{\dag} = | \psi_i \rangle \langle \psi_j |$ for all $k=1,\ldots,d^2$. It is worth noting that $\langle \psi_i | \psi_j \rangle = \delta_{ij}$, so the star-product scheme is matrix-unit-like, with all dequantizers and quantizers being rank-1 operators.
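These statements admit a direct numerical check. The NumPy sketch below (illustrative code, our own naming) draws a random unitary $u$, forms $\mathscr{U}=u\otimes u^{\ast}$, and confirms that $\mathscr{U}$ is unitary with columns equal to the row-stacked operators $uE_{(i,j)}u^{\dag}$.

```python
import numpy as np

d = 2
rng = np.random.default_rng(0)
# A random unitary u via QR decomposition of a complex Gaussian matrix.
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
u, _ = np.linalg.qr(X)

# Dequantization matrix for rotated matrix units: scr_U = u kron u*.
U = np.kron(u, u.conj())
assert np.allclose(U @ U.conj().T, np.eye(d**2))  # unitary, hence self-dual

# Its columns are the row-stacked operators u E_(i,j) u^dag = |u_i><u_j|.
for k in range(d**2):
    i, j = divmod(k, d)
    E = np.zeros((d, d), dtype=complex)
    E[i, j] = 1.0
    op = u @ E @ u.conj().T
    assert np.allclose(U[:, k], op.reshape(-1))
```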
The results of this Section concerning tomographic star-product schemes are depicted in Figure \ref{figure}. We also provide a summary Table \ref{table} of examples for qubits. \begin{table}[t] \caption{\label{table} Examples of $4 \times N$ matrices $\mathscr{U}$ and the corresponding bases (sets of vectors) in $\mathcal{H}_2$} \begin{tabular}{|c|c|c|} \hline Dequantizers & Dequantization matrix $\mathscr{U}$ & Dequantization matrix $\mathscr{U}$\\ $\{\hat{U}_k\}_{k=1}^{N}$ & constructed by rules (\ref{matrices-vectors}), (\ref{dequantization-matrix}) & constructed by rules (\ref{matrices-vectors-Pauli}), (\ref{dequantization-matrix}) \\ \hline $\begin{array}{c} {\rm Matrix~units} \\ \hat{E}_{(i,j)},~ {\rm Eq.}~(\ref{matrix-unit}) \\ \end{array}$ & $\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right)$ & $\frac{1}{\sqrt{2}}\left( \begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & i & -i & 0 \\ 1 & 0 & 0 & -1 \\ \end{array} \right)$\\ \hline $\frac{1}{\sqrt{2}}(\hat{I}_2, \hat{\sigma}_x, \hat{\sigma}_y, \hat{\sigma}_z)$ & $\frac{1}{\sqrt{2}}\left( \begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 1 & -i & 0 \\ 0 & 1 & i & 0 \\ 1 & 0 & 0 & -1 \\ \end{array} \right)$ & $\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right)$ \\ \hline $\frac{1}{\sqrt{2}}(\hat{I}_2, \hat{\sigma}_x, i\hat{\sigma}_y, \hat{\sigma}_z)$ & $\frac{1}{\sqrt{2}}\left( \begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 1 & 0 & 0 & -1 \\ \end{array} \right)$ & $\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & i & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right)$\\ \hline $\begin{array}{c} {\rm Eqs.~(\ref{qubit-phase})} \\ {\rm (Ex.~4)} \\ \end{array}$ & $\frac{1}{2}\left( \begin{array}{cccc} 2 & 0 & 0 & 2 \\ 1-i & 1+i & -1-i & -1+i \\ 1+i & 1-i & -1+i & -1-i \\ 0 & 2 & 2 & 0 \\ \end{array} \right)$ & $\frac{1}{2}\left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \\ \end{array} \right)$\\ \hline $\begin{array}{c} {\rm SIC\!-\!POVM} \\ {\rm (Ex.~6)} \\ \end{array}$ & $\frac{1}{2\sqrt{3}}\left( \begin{array}{cccc} \sqrt{3}+1 & \sqrt{3}-1 & \sqrt{3}-1 & \sqrt{3}+1 \\ 1-i & 1+i & -1-i & -1+i \\ 1+i & 1-i & -1+i & -1-i \\ \sqrt{3}-1 & \sqrt{3}+1 & \sqrt{3}+1 & \sqrt{3}-1 \\ \end{array} \right)$ & $\frac{1}{2\sqrt{3}}\left( \begin{array}{cccc} \sqrt{3} & \sqrt{3} & \sqrt{3} & \sqrt{3} \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \\ \end{array} \right)$\\ \hline MUBs (Ex. 5) & $\frac{1}{\sqrt{2}}\left( \begin{array}{cccccc} \sqrt{2} & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & -1 & i & -i\\ 0 & 0 & 1 & -1 & -i & i\\ 0 & \sqrt{2} & 1 & 1 & 1 & 1\\ \end{array} \right)$ & $\frac{1}{2\sqrt{2}}\left( \begin{array}{cccccc} \sqrt{2} & \sqrt{2} & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & -1 & 0 & 0\\ 0 & 0 & 0 & 0 & -1 & 1\\ \sqrt{2} & -\sqrt{2} & 0 & 0 & 0 & 0\\ \end{array} \right)$\\ \hline \end{tabular} \end{table} \section{\label{section-conclusions}Conclusions and Prospects} \pst To conclude, we summarize the main results of the paper. A bijective map $\{N$ operators in $\mathcal{B}(\mathcal{H}_d)\} ~ \longleftrightarrow ~ \{ d^2\times N$ matrix $\mathscr{U}\}$ is constructed and associated with the star-product formalism. Conditions on the matrix $\mathscr{U}$ are derived for these $N$ operators to form a basis in $\mathcal{B}(\mathcal{H}_d)$. A classification of possible matrices $\mathscr{U}$ and the related star-product schemes $\{\hat{U}_k,\hat{D}_k\}_{k=1}^{N}$ is accomplished. This gives rise to a new approach to introducing bases in $\mathcal{B}(\mathcal{H}_d)$ with desired properties: one chooses a class of matrices and imposes additional limitations; once the matrix $\mathscr{U}$ is built, a corresponding basis (set of operators) in $\mathcal{B}(\mathcal{H}_d)$ with the expected properties appears. The development of the paper is complemented by illustrative examples.
Another substantial result is a series of Propositions and Corollaries which demonstrate peculiarities of dequantizers and quantizers, especially in self-dual star-product schemes. Namely, it is proved that there exists no minimal tomographic star-product scheme with dequantizers in the form of POVM effects and Hermitian semi-positive quantizers. Applying this argument to self-dual schemes, we have proved that (i) there exists no minimal self-dual star-product scheme with dequantizers in the form of POVM effects and (ii) Hermitian dequantizers and quantizers of a self-dual scheme must contain negative eigenvalues. The results obtained can be useful for the analysis of the following problems, which are of great interest for further consideration: symmetric but not informationally complete structures of arbitrary rank, the relation between symmetric bases in spaces of different dimensions, and specific bases in multipartite systems. \section*{Acknowledgments} \pst The authors thank the Russian Foundation for Basic Research for partial support under Projects Nos. 09-02-00142 and 10-02-00312. S.N.F. is grateful to the Russian Science Support Foundation for support under the Project ``Best Postgraduates of the Russian Academy of Sciences 2010''. S.N.F. thanks the Ministry of Education and Science of the Russian Federation for support under Project Nos. 2.1.1/5909, $\Pi$558, and 14.740.11.0497. V.I.M. was supported by the NIX Computer Company in the form of a gift (computer) provided by the organizers of the Seminars on quantum physics and informatics at the Landau Institute for Theoretical Physics, where V.I.M. was happy to deliver two lectures.
\section{Introduction and preliminaries} Well-known examples of ideals of a Leavitt path algebra are the so-called primary colours. They correspond, respectively, to the ideal of $L_K(E)$ generated by the set of line points $\mathop{\hbox{\rm P}_{l}}$, the ideal generated by the set $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}$ of vertices that lie on cycles without exits, and the ideal generated by the set $\mathop{\hbox{\rm P}_{ec}}$ of vertices in extreme cycles. These sets constitute an essential ingredient in the structure of Leavitt path algebras. Firstly, the ideal generated by $\mathop{\hbox{\rm P}_{l}}$ is precisely the socle of $L_K(E)$ \cite{AMMS1, AMMS2}. What is more, in \cite{CGKS} the ideal generated by $\mathop{\hbox{\rm P}_{l}}$ has recently been proved to be the largest locally left/right artinian ideal inside $L_K(E)$ and, respectively, $I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}})$ the largest locally left/right noetherian ideal without minimal idempotents. On the other hand, the ideal generated by $\mathop{\hbox{\rm P}_{ec}}$ is purely infinite \cite{CGKS,CMMS}. Another important fact is that all of them have been proved to be invariant under ring isomorphisms of Leavitt path algebras: the ideal generated by $\mathop{\hbox{\rm P}_{l}}$ in \cite{AMMS1}, $I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}})$ in \cite{ABS} and the ideal generated by $\mathop{\hbox{\rm P}_{ec}}$ in \cite{CGKS}. For an arbitrary graph $E$, the union of the three sets mentioned above, which give us the primary colours, together with the set $\mathop{\hbox{\rm P}_{b^\infty}}$, generates an ideal of $L_K(E)$ which is dense \cite{CGKS}. In this work we take a step forward in studying the invariance of this other key piece of $L_K(E)$.
Although we will see that, in general, the ideal generated by $\mathop{\hbox{\rm P}_{b^\infty}}$ is not invariant, we will determine a subset of vertices inside $\mathop{\hbox{\rm P}_{b^\infty}}$ for which the answer is positive: the set $\mathop{\hbox{\rm P}_{b_p^{\infty}}}$ of vertices with pure infinite bifurcations. Furthermore, the main goal of this paper is to develop a machinery that produces invariant ideals for Leavitt path algebras. In order to do that, we introduce a topology on the set of vertices of a graph $E$ that we will call the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology. Basically, the closed sets of this topology are the sets of vertices that connect to a given set. On the one hand, we will establish graph-theoretic notions for Leavitt path algebras in topological terms. On the other hand, via category theory, we will think of the hereditary and saturated sets of a graph as operators (actually functors). Roughly speaking, we prove that if $H$ is a hereditary and saturated invariant functor, then the functor associated to the set of vertices which do not connect with $H$ is also invariant. Using these tools, we will prove that the ideal generated by the subset of vertices of $\mathop{\hbox{\rm P}_{b^\infty}}$ which do not connect to $\mathop{\hbox{\rm P}_{l}} \cup \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}} \cup \mathop{\hbox{\rm P}_{ec}}$ is invariant (the so-called set $\mathop{\hbox{\rm P}_{b_p^{\infty}}}$). In addition, given a hereditary and saturated functor $H$, we will construct a chain of hereditary and saturated functors $H=H^{(1)}\subset H^{(2)}\subset\cdots\subset H^{(n)}\subset\cdots$ such that each $H^{(i)}$, $i\ge 1$, is invariant when $H$ is. As extra motivation, we would like to link, in some future development, the discovery of invariant ideals to the \lq\lq rigidity\rq\rq\ of the automorphism group of a Leavitt path algebra.
If the collection of invariant ideals is large enough, the degrees of freedom of the automorphisms are under control. As a general rule, we can guess that the number of invariant ideals is directly proportional to the rigidity of the automorphism group. \vskip 0.2cm This paper is organized as follows. In Section \ref{dcc} we introduce the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology on the set of vertices of a graph. This topological setting is the counterpart of the algebraic one provided by annihilators. The motivation that guides us is that the exterior (in the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology) of an invariant hereditary and saturated set $H$ is again invariant. In Subsection \ref{cati} we use some tools borrowed from the theory of categories and functors which are substantial for our work. We prove that the ideal generated by $\mathop{\hbox{\rm P}_{b^\infty}}$ is not invariant (we also check that the ideal generated by $\mathop{\hbox{\rm P}_{ec}}\cup\mathop{\hbox{\rm P}_{b^\infty}}$ is not invariant). In Section \ref{anni} we use annihilators to produce invariant ideals. In terms of functors, the main point here is that for every hereditary and saturated invariant functor $H$, its exterior $\mathop{\hbox{\rm ext}}(H)$ is again invariant. Theorem \ref{enjundia} proves that $\mathop{\hbox{\rm P}_{b_p^{\infty}}}$ is invariant. Then, in Section \ref{rosa} we show how to construct chains of functors $H=H^{(1)}\subset\cdots\subset H^{(i)}\subset H^{(i+1)}$ which are invariant when $H$ is. As a motivation we apply this construction to $\mathop{\hbox{\rm P}_{l}}$ in Section \ref{soclechain} and we characterize graphically the functors $\mathop{\hbox{\rm P}_{l}}^{(n)}$ (Theorem \ref{encasa}).
Similarly, the same idea also applies to $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}$ (Theorem \ref{enbarco2}), and the possibilities are broad enough because we can also perform a kind of composition $H_1*H_2$ in Subsection \ref{colada} which turns out to be associative (Theorem \ref{associative}). Thus, conveniently restricting the universe, it appears possible to construct a monoid of isomorphism classes of invariant hereditary and saturated functors. We briefly recall concepts which will be used throughout the paper. The basic definitions on graphs and Leavitt path algebras can be seen in the book \cite{AAS}. Let $E=(E^0,E^1,r_E,s_E)$ be a \emph{directed graph}. Then we define $\op{E}$ as the graph $\op{E}=((\op{E})^0,(\op{E})^1,r_{\op{E}},s_{\op{E}})$ where $(\op{E})^0=E^0$, $(\op{E})^1=E^1$, $s_{\op{E}}=r_E$ and $r_{\op{E}}=s_E$. As usual, we will drop the subscript of the source and target maps when no ambiguity can arise. We denote by $\hat{E}$ the \emph{extended graph} of $E$; concretely, $\hat{E}=(E^0,E^1 \cup (E^1)^*,r',s')$ where $(E^1)^*=\{e^* \; \vert \; e \in E^1\}$, ${r'|}_{E^1}=r$, ${s'|}_{E^1}=s$, $r'(e^*)=s(e)$ and $s'(e^*)=r(e)$ for all $e \in E^1$. A graph $E$ is {\it row-finite} if $s^{-1}(v)=\{e \in E^1 \; \vert \; s(e)=v\}$ is a finite set for all $v \in E^0$. In this article we will consider row-finite graphs unless otherwise specified. The set of regular vertices (those which are neither sinks nor infinite emitters) is denoted by ${\rm Reg}(E^0)$. The set of all paths of a graph $E$ is denoted by ${\rm Path}(E)$. If there is a path from a vertex $u$ to a vertex $v$, we write $u\geq_{E} v$, and if $v \in H \subset E^{0}$, we write $u\geq_{E} H$ (we eliminate the subscript $E$ when there is no ambiguity about the graph). A subset $H$ of $E^{0}$ is called \textit{hereditary} if, whenever $v\in H$ and $w\in E^{0}$ satisfy $v\geq w$, then $w\in H$.
A set $X$ is \textit{saturated} if for any vertex $v$ which is neither a sink nor an infinite emitter, $r(s^{-1}(v))\subseteq X$ implies $v\in X$. We will denote the set of all subsets of $E^0$ which are hereditary and saturated by ${\mathcal{H}}_E$. Given a nonempty subset $X$ of vertices, we define the \emph{tree} of $X$, denoted by $T_E(X)$, as the set $$T_E(X):=\{u\in E^0 \ \vert \ x\geq u \ \text{for some} \ x\in X\}.$$ When there is no possible confusion we simply write $T(X)$. This is a hereditary subset of $E^0$. The notation $\overline{X}$ ($\overline{X}^E$ if we want to emphasize the graph $E$) will be used for the hereditary and saturated closure of a nonempty set $X$, which is built, for example, in \cite[Lemma 2.0.7]{AAS} in the following way: let $\Lambda^0(X) := T(X)$ and \begin{equation}\label{onion} \Lambda^{n+1}(X):=\{v \in \text{Reg}(E^0): r(s^{-1}(v))\subseteq \Lambda^n(X)\}\cup \Lambda^n(X). \end{equation} Then $\overline{X}=\cup_{n\geq 0} \Lambda^n(X)$. If there is no confusion with respect to the set $X$ we are considering, we simply write $\Lambda^{n}$. A vertex $u$ in a graph $E$ is a {\it bifurcation} (or {\it there is a bifurcation at} $u$) if $s^{-1}(u)$ has at least two elements. A vertex $v$ in a graph $E$ will be called a {\it line point} if there are neither bifurcations nor cycles at any vertex $w \in T(v)$. We will denote by $\mathop{\hbox{\rm P}_{l}}{(E)}$ the set of all line points of $E^0$. An {\it exit} for a path $\mu = e_1 \ldots e_n$ with $n \in \mathbb{N}$ is an edge $e$ such that $s(e)=s(e_i)$ for some $i$ and $e \ne e_i$. We say that $E$ satisfies \emph{Condition} (L) if every cycle in $E$ has an exit. We denote by $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}(E)$ the set of vertices of the graph lying on cycles without exits.
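For a finite row-finite graph, the tree $T(X)$ and the closure $\overline{X}=\cup_{n\geq 0}\Lambda^n(X)$ of \eqref{onion} are directly computable. The following is a minimal Python sketch; the successor-list encoding and the function names are ours, not taken from the paper.

```python
# Hedged illustration: T(X) and the hereditary-saturated closure of a
# set of vertices in a finite row-finite graph, encoded as successor
# lists {vertex: [ranges of the edges it emits]}.

def tree(succ, X):
    """T(X) = {u : x >= u for some x in X}, by depth-first search."""
    seen, stack = set(X), list(X)
    while stack:
        v = stack.pop()
        for w in succ.get(v, ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def hs_closure(succ, X):
    """Closure of X: Lambda^0 = T(X), and Lambda^{n+1} adds every
    regular vertex v with r(s^{-1}(v)) contained in Lambda^n."""
    closure = tree(succ, X)
    changed = True
    while changed:
        changed = False
        for v, outs in succ.items():
            # a regular vertex emits at least one (and finitely many) edges
            if outs and v not in closure and set(outs) <= closure:
                closure.add(v)
                changed = True
    return closure

# a -> b -> c (a sink), plus d -> b and d -> c
succ = {'a': ['b'], 'b': ['c'], 'c': [], 'd': ['b', 'c']}
```

On this graph $T(\{b\})=\{b,c\}$, while the closure of $\{c\}$ is the whole vertex set, since every regular vertex eventually emits only into it.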
A cycle $c$ in a graph $E$ is an {\it extreme cycle} if $c$ has exits and for every $\lambda\in \hbox{Path}(E)$ starting at a vertex in $c^0$ there exists $\mu \in \hbox{Path}(E)$ such that $0\ne\lambda\mu$ and $r(\lambda\mu)\in c^0$. We will denote by $\mathop{\hbox{\rm P}_{ec}}(E)$ the set of vertices which belong to extreme cycles. Besides, the set of all vertices $v \in E^0$ whose tree $T(v)$ contains infinitely many bifurcation vertices or at least one infinite emitter is denoted by $\mathop{\hbox{\rm P}_{b^\infty}}{(E)}$. Again, we will drop $E$ from these sets if there is no ambiguity about the graph we are considering. If $H$ is a hereditary subset of $E^0$, then we can define $I(H)$ as in \cite[Lemma 2.4.1]{AAS}. The set of natural numbers (including $0$) will be denoted by ${\mathbb{N}}$. For a given set $X$, we will denote by $\mathop{\mathcal P}(X)$ the power set of $X$. In an algebra $A$, the ideal generated by an element $z\in A$ will be denoted $\mathop{\hbox{\rm ideal}}(z)$. \section{A graph topology}\label{dcc} In this section we define the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology, which makes sense in any graph. We prove some results relating the topology to algebraic properties of the associated Leavitt path algebra. Since density in the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology implies density of the related ideal, we have included the term \lq\lq connection\rq\rq\ in the name of the topology. We will see that certain properties of subsets of $E^0$, for instance \lq\lq being hereditary\rq\rq, are topological properties, and so are preserved under homeomorphisms. A corollary of the existence of a homeomorphism between two graphs is the preservation of certain elements (for instance, the cardinals of the sets of initial and of terminal vertices). In certain types of graphs this induces a preservation of the number of sinks and/or sources (see Remark \ref{pollo}).
\remove{The optimum manifestation of these conservation laws is when the graph is acyclic and satisfies Condition \hbox{SING}. In this case, homeomorphisms between the sets of vertices of two graphs induce isomorphisms between the lattices of ideals of the corresponding Leavitt path algebras (paragraph just before \eqref{eqiv}).} \newline \indent However, the main reason to consider this topology is (roughly speaking) that when an ideal $I(H)$ is invariant under isomorphism, then $I(\mathop{\hbox{\rm ext}}(H))$ is also invariant (here $\mathop{\hbox{\rm ext}}(H)$ is the exterior of $H$ in the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology). This is proved in Proposition \ref{dormit}(\ref{leo}). Thus the exterior operation $H\mapsto\mathop{\hbox{\rm ext}}(H)$ is one of the relevant tools in the construction of invariant ideals. We prove that the shift process induces a $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ continuous map between the sets of vertices (Theorem \ref{shiftcont}). \begin{definition}{\rm Let $E$ be a graph and define $c\colon\mathop{\mathcal P}(E^0)\to\mathop{\mathcal P}(E^0)$ to be the map such that for any $A\subset E^0$ we have $c(A):=\{v\in E^0 \; \vert \; v\ge A\}$. Then this map defines a Kuratowski closure operator, that is: \begin{itemize} \item[(i)] $c(\small\text{\O})=\small\text{\O}$; \item[(ii)] $A\subset c(A)$; \item[(iii)] $c(A)=c(c(A))$, and \item[(iv)] $c(A\cup B)=c(A)\cup c(B)$, for any $A,B\subset E^0$. \end{itemize} Consequently there is a topology on $E^0$ whose closed sets are those $A\subset E^0$ such that $A=c(A)$ (see \cite[Chapter III, Section 5, Theorem 5.1]{DU}). We will call this the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ {\it topology} (for $\mathop{\hbox{\mdseries{\small\bf DCC}}}=\hbox{Directed Connection Closure}$). } \end{definition} The open sets are $E^0\setminus c(A)$ for $A\subset E^0$, so an open set $O$ is one for which there is a subset $A\subset E^0$ such that $O=\{v\in E^0\colon v\not\ge A\}$.
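On a finite graph the operator $c$ is just reverse reachability, and the four Kuratowski axioms can be verified exhaustively on small examples. A sketch in Python (the encoding and the function names are our own, not part of the paper):

```python
# Hedged illustration: the closure operator c(A) = {v : v >= A} on a
# finite graph given by successor lists, and an exhaustive check of
# the four Kuratowski axioms over all pairs of subsets.
from itertools import chain, combinations

def closure(succ, A):
    """c(A): vertices admitting a path into A (every v satisfies v >= v)."""
    pred = {v: set() for v in succ}          # reverse the edges
    for v, outs in succ.items():
        for w in outs:
            pred[w].add(v)
    seen, stack = set(A), list(A)
    while stack:
        v = stack.pop()
        for u in pred[v]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def is_kuratowski(succ):
    V = list(succ)
    subsets = [set(s) for s in chain.from_iterable(
        combinations(V, k) for k in range(len(V) + 1))]
    ok = closure(succ, set()) == set()                             # (i)
    for A in subsets:
        ok &= A <= closure(succ, A)                                # (ii)
        ok &= closure(succ, A) == closure(succ, closure(succ, A))  # (iii)
        for B in subsets:
            ok &= closure(succ, A | B) == closure(succ, A) | closure(succ, B)  # (iv)
    return ok

# a small test graph: a loop at v, plus edges v -> u and v -> w
succ_E = {'v': ['v', 'u', 'w'], 'u': [], 'w': []}
```

On this graph the closed sets $A=c(A)$ are exactly $\varnothing$, $\{v\}$, $\{v,u\}$, $\{v,w\}$ and the whole vertex set.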
We will use the notation $A'$ for the set of all vertices $v$ such that $v\not\ge A$ (in fact $A'$ is the exterior of $A$ in the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology, as we will see later). \begin{remark}\rm For a hereditary subset $H\subset E^0$ we always have $\overline{H}\subset c(H)$ (see \cite[Lemma 1.2]{CMMSS}). In particular, if $\overline{H}=E^0$, the set $H$ is dense in the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology. Furthermore, the characterization of the (topological) density of a hereditary set $H$ can be given in terms of the density of the ideal $I(H)$. \end{remark} \begin{lemma}\label{romeo} Let $H\subset E^0$ be a hereditary subset. Then $H$ is dense in the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology if and only if the ideal $I(H)$ is dense in $L_K(E)$. \end{lemma} \begin{proof} If $c(H)=E^0$ then for any $v\in E^0$ we have $v\ge H$, hence applying \cite[Proposition 1.10]{CMMSS} the ideal $I(H)$ is dense in $L_K(E)$. Conversely, if $I(H)$ is dense, then any vertex $v$ connects to $H$, hence $E^0=c(H)$. \end{proof} Some graph properties are invariant under graph homeomorphisms. We list some of them in the following propositions. Recall that a \emph{clopen} set in a topology is a set which is both open and closed. We will also use the notation $A^c$ for the complement of $A$ (if the ambient universe is clear). \begin{proposition} Let $E$ be an arbitrary graph. Then $E$ is a connected graph if and only if $E$ is connected in the sense of the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology. \end{proposition} \begin{proof} First suppose that $E$ is a connected graph. Let $A$ be a subset of $E^0$ which is clopen in the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology. We will see that $A=\small\text{\O}$ or $A=E^0$. Suppose $A \neq \small\text{\O}$ and take $v \in A$. Now for every $w \in E^0$ we have two possibilities: $w \ge v$ or $v \ge w$.
If $w \ge v$ then $w \ge A$ and therefore $w \in c(A)=A$. In the second case, $v \ge w$: if we had $w \notin A$, then $v \ge A^c$, hence $v\in c(A^c)=A^c$, that is, $v \notin A$, which is a contradiction. In short, $w \in E^0$ implies $w \in A$, so $E^0 = A$. For the converse, suppose $E$ is connected in the sense of the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology. On the contrary, assume $E$ is not a connected graph: $E^0= {\bigsqcup}_{i\in I} E_i^0$ with each $E_i$ a connected graph. We claim that every $E_i^0$ is closed because $c(E_i^0)=\{ v \in E^0 \; \vert \; v \ge E_i^0\} \subseteq E_i^0 \subseteq c(E_i^0)$. Also $E_i^0$ is open. In order to prove this claim, write $E_i^0 = (\bigsqcup_{j \neq i} E_j^0)^c$. Now $\bigsqcup_{j \neq i} E_j^0$ is closed since $v \ge \bigsqcup_{j\ne i} E_j^0$ implies $v \in \bigsqcup_{j\ne i} E_j^0$. To sum up, $E_i^0$ is clopen and, since $E$ is connected in the sense of the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology, there exists a unique $i$ such that $E_i^0 \neq \small\text{\O}$ and $E^0=E_i^0$. \end{proof} Some purely graph-theoretic notions can be formalized in topological terms. Recall that for a subset $S$ of a topological space $X$, the exterior of $S$, denoted $\mathop{\hbox{\rm ext}}(S)$, is the complement of the closure of $S$, that is, $\mathop{\hbox{\rm ext}}(S)=c(S)^c$. \begin{proposition} A subset $H\subset E^0$ is hereditary if and only if $H=\cap_{w\notin H}\mathop{\hbox{\rm ext}}(w)$. \end{proposition} \begin{proof} Assume that $H$ is hereditary. If $v\in H$ and $w\notin H$ then $v\not\ge w$, hence $v\in\mathop{\hbox{\rm ext}}(w)$. So $H\subset\cap_{w\notin H}\mathop{\hbox{\rm ext}}(w)$. On the other hand, if $v\in \cap_{w\notin H}\mathop{\hbox{\rm ext}}(w)$ and $v\notin H$, we have $v\in\mathop{\hbox{\rm ext}}(v)$, which is a contradiction. So far we have proved that if $H$ is hereditary, then the equality $H=\cap_{w\notin H}\mathop{\hbox{\rm ext}}(w)$ holds.
Conversely, if $H=\cap_{w\notin H}\mathop{\hbox{\rm ext}}(w)$ then $H$ is hereditary. Indeed, taking $v\in H$ and $v\ge w$, if $w\notin H$ then $v\in\mathop{\hbox{\rm ext}}(w)$, which means $v\not\ge w$, a contradiction. \end{proof} Consequently, being hereditary is a topological property. Since the intersection of a family of hereditary subsets is a hereditary subset, the hereditary closure of a subset $X\subset E^0$ is also a topological construction: the intersection of all the hereditary subsets containing $X$. \medskip Recall that a subset $H\subset E^0$ is said to be saturated if for every $v\in \mathop{\hbox{\rm Reg}}(E^0)$, the condition $r(s^{-1}(v))\subset H$ implies $v\in H$. We can extend the source function $s\colon E^1\to E^0$ to a function ${\mathop{\hat{s}}}\colon\mathop{\hbox{\rm Path}}(E)\setminus E^0\to E^0$ where ${\mathop{\hat{s}}}(\lambda)=s(f_1)$ for $\lambda=f_1\cdots f_n\in\mathop{\hbox{\rm Path}}(E)$. When $H$ is hereditary the following are equivalent: \begin{equation}\label{odinurg} r(s^{-1}(v))\subset H \Leftrightarrow r({\hat {s}}^{-1}(v))\subset H. \end{equation} Indeed, $s^{-1}(v)\subset{\mathop{\hat{s}}}^{-1}(v)$, hence $r(s^{-1}(v))\subset r({\mathop{\hat{s}}}^{-1}(v))$, implying the right-to-left implication. Now, if $r(s^{-1}(v))\subset H$ and $\lambda\in{\mathop{\hat{s}}}^{-1}(v)$, then writing $\lambda=f_1\cdots f_n$ we have $r(f_1)\in H$, hence $H$ being hereditary implies $r(\lambda)\in H$. Thus $r({\mathop{\hat{s}}}^{-1}(v))\subset H$. The equivalence given in \eqref{odinurg} allows a reformulation of the definition of saturated hereditary subset: a hereditary subset $H$ is saturated if for any regular vertex $v$ one has the equivalence \begin{equation}\label{bbva} r({\mathop{\hat{s}}}^{-1}(v))\subset H \Leftrightarrow v\in H. \end{equation} Being saturated is not a topological construction, as the following example shows.
\begin{example}\label{nublado}\rm\par In the graph $E$ of Figure \ref{soleado}, \begin{figure}[ht] \hbox{\hskip 3cm \begin{tikzpicture}[->] \draw[fill=black] (-0.1,0) circle (1pt); \draw[fill=black] (1.1,0.85) circle (1pt); \draw[fill=black] (1.3,-.5) circle (1pt); \node at (-1.3,0){$E$:}; \node at (-0.4,-0.1) {\tiny $v$}; \node at (1.5,-0.4) {\tiny $w$}; \node at (1.3,1){\tiny $u$}; \node at (0.4,0.6) {\tiny $f$}; \node at (0.4,-0.5) {\tiny $g$}; \draw [->] (-0.15,-0.1) arc (360:40:9pt); \draw[] (0,0.1) -- (1,0.8); \draw[] (0,-0.1) -- (1.2,-.5); \end{tikzpicture} \hskip 1cm \begin{tikzpicture}[->] \draw[fill=black] (-0.1,0) circle (1pt); \draw[fill=black] (1.1,0.85) circle (1pt); \draw[fill=black] (1.3,-.6) circle (1pt); \node at (-1,0){$F$:}; \node at (-0.4,0) {\tiny $v'$}; \node at (1.55,-0.4) {\tiny $w'$}; \node at (1.3,1){\tiny $u'$}; \node at (0.4,0.6) {\tiny $f'$}; \node at (0.4,-0.6) {\tiny $g'$}; \draw[] (0,0.1) -- (1,0.8); \draw[] (0,-0.1) -- (1.2,-.6); \end{tikzpicture} }\caption{}\label{soleado} \end{figure} \noindent the closed subspaces of the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology are $\small\text{\O}$, $\{v\}$, $\{v,u\}$, $\{v,w\}$ and $E^0$. Meanwhile in the graph $F$ the closed ones are $\small\text{\O}$, $\{v'\}$, $\{v',u'\}$, $\{v',w'\}$ and $F^0$. We can define a homeomorphism $\tau: E^0 \rightarrow F^0$ by $\tau(a)=a'$ for every $a \in E^0$. Observe that $\overline{\{u,w\}}=\{u,w\} \neq E^0$ but $\overline{\{\tau(u),\tau(w)\}}=\overline{\{u',w'\}}=F^0$. This shows that hereditary and saturated subsets are not preserved under homeomorphisms.
\end{example} In general, in an arbitrary graph $E$ one has $$\big(\bigcup_{v\in c(w)} \{w\}\big)\setminus\{v\}=r({\hat s}^{-1}(v))\setminus\{v\}.$$ However, if $E$ is an acyclic graph, for any vertex $v$ (not necessarily regular) we have $$\big(\bigcup_{v\in c(w)} \{w\}\big)\setminus\{v\}=r({\hat s}^{-1}(v)).$$ This implies that a hereditary $H$ is saturated if and only if $\big(\bigcup_{v\in c(w)} \{w\}\big)\setminus\{v\}\subset H$ implies $v\in H$. Thus, for acyclic graphs, hereditary and saturated subsets are described in topological terms. So if $E^0$ and $F^0$ are homeomorphic as topological spaces and acyclic, the sets $\mathop{\mathcal H}_E$ and $\mathop{\mathcal H}_F$ are in bijection: more precisely, if $f\colon E^0\to F^0$ is a homeomorphism, then the map $f^*\colon\mathop{\mathcal H}_E\to \mathop{\mathcal H}_F$ such that $f^*(H)=f(H)$ is bijective. \begin{definition}\rm Let $E$ be an arbitrary graph. A vertex $v$ of $E^0$ is called an \emph{initial vertex} if \begin{equation}\label{eqiv} s(r^{-1}(v)) \subseteq\{v\}. \end{equation} If $v$ is initial, then any edge in $r^{-1}(v)$ is a loop. Let $n\in{\mathbb{N}}$; an \emph{initial $n$-looped vertex} $v$ is an initial vertex such that $\vert r^{-1}(v)\vert=n$. For instance, any source is an initial $0$-looped vertex. A vertex $v$ of $E^0$ is called a \emph{terminal vertex} if \begin{equation}\label{eqtv} r(s^{-1}(v))\subseteq \{v\}. \end{equation} If $v$ is terminal, then any edge in $s^{-1}(v)$ is a loop. A \emph{terminal $n$-looped vertex} $v$ is a terminal vertex such that $\vert s^{-1}(v)\vert=n$. For example, any sink is a terminal $0$-looped vertex.\medskip For instance, in the graph $E$ below, the vertex $v$ is an initial $3$-looped vertex and in the graph $\op{E}$, the vertex $v$ is a terminal $3$-looped vertex.
\[ \xygraph{!{(0.25,0.5)}*+{E\colon} !{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::} !{(2,0)}*+{\cdots}="d" !{(3,0)}*+{\bullet}="g" !{(3,0.25)}*+{\hbox{\tiny $v$}}="v" "g":"d" "g" :@(ul,ur) "g" "g" :@(dl,dr) "g" "g" :@(ur,dr) "g" }\hskip2cm \xygraph{!{(0.25,0.5)}*+{E^{\rm op}\colon} !{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::} !{(2,0)}*+{\cdots}="d" !{(3,0)}*+{\bullet}="g" !{(3,0.25)}*+{\hbox{\tiny $v$}}="v" "d":"g" "g" :@(ul,ur) "g" "g" :@(dl,dr) "g" "g" :@(ur,dr) "g" }\] \end{definition} \begin{remark}\rm The formulas \eqref{eqiv} and \eqref{eqtv} are equivalent to the corresponding formulas in which $r,s$ are the natural extensions $r,s: \mathop{\hbox{\rm Path}}{(E)} \rightarrow E^0$. \end{remark} \begin{proposition} Let $v\in E^0$, then: \begin{enumerate} \item The following are equivalent: \begin{enumerate} \item\label{iv} $v$ is initial. \item\label{fuente} $c(v)=\{v\}$. \item\label{alberca} $v$ is an initial $n$-looped vertex for some $n\in{\mathbb{N}}$. \end{enumerate} \item Analogously, these are equivalent: \begin{enumerate} \item\label{tv} $v$ is terminal. \item\label{s_1} $\bigcup_{v\in c(w)}\{w\}=\{v\}$. \item\label{s_2} $v$ is a terminal $n$-looped vertex for some $n \in {\mathbb{N}}$. \end{enumerate} \hskip -1.7cm In particular: \item\label{initial} If $v$ is not an initial $n$-looped vertex for any $n\ge 1$, then $v$ is a source if and only if $c(v)=\{v\}$. \item\label{terminal} If $v$ is not a terminal $n$-looped vertex for any $n\ge 1$, then $v$ is a sink if and only if $\left(\bigcup_{v\in c(w)}\{w\}\right)\setminus\{v\}=\small\text{\O}$. \end{enumerate} \end{proposition} \begin{proof} First, for proving \eqref{iv}$\Rightarrow$\eqref{fuente} suppose that $v$ is initial. Take $w \in c(v)$, so that there is a path $\lambda$ such that $s(\lambda)=w$ and $r(\lambda)=v$. If $\lambda$ is a trivial path, we have $w=v$. Otherwise, $\lambda=f_1\ldots f_n$ with $r(f_n)=v$, and since $f_n \in r^{-1}(v)$ we get $s(f_n)=v$, so that $f_n$ is a loop based at $v$.
In general, if $s(f_i)=v$ then applying the same argument we obtain $s(f_{i-1})=v$, and finally $f_i$ is a loop based at $v$ for every $i \in \{1,\ldots,n\}$. Therefore $w=v$. For \eqref{fuente}$\Rightarrow$\eqref{iv}, assume $c(v)=\{v\}$. Consider $w \in s(r^{-1}(v))$. If $w = v$ we are done. If $w \neq v$, then there exists an edge $f$ with $s(f)=w$ and $r(f)=v$. This implies $w \ge v$ and, by hypothesis, $w = v$, a contradiction. In conclusion, $s(r^{-1}(v)) \subseteq \{v\}$, that is, $v$ is an initial vertex. Also observe that \eqref{iv}$\Leftrightarrow$\eqref{alberca} is straightforward. For proving \eqref{tv}$\Rightarrow$\eqref{s_1}, assume that $v$ is terminal and take $w\in E^0$ such that $v\ge w$; then either $v=w$, in which case we are done, or there is a nontrivial path $\lambda$ from $v$ to $w$. Then the first arrow $f$ of $\lambda$ is in $s^{-1}(v)$, hence $r(f)\in r(s^{-1}(v))\subseteq\{v\}$ and we have $r(f)=v$. Thus the first arrow of $\lambda$ is a loop. Applying this argument repeatedly, we get that $w=r(\lambda)=v$. For \eqref{s_1}$\Rightarrow$\eqref{tv}, consider a vertex $u \in r(s^{-1}(v))$; then there exists an edge $e$ such that $s(e)=v$ and $r(e)=u$. So $u \in \bigcup_{v\in c(w)}\{w\}=\{v\}$, giving $u=v$. Observe that \eqref{tv} is equivalent to \eqref{s_2}. Finally, (\ref{initial}) and (\ref{terminal}) are direct consequences of the previously proved items. \end{proof} \begin{remark}\label{pollo}{\rm We have the equality of the number of initial (respectively terminal) vertices in homeomorphic graphs $E$ and $F$; more precisely, homeomorphisms between graphs induce bijections between the sets of initial (resp. terminal) vertices of the corresponding graphs. Moreover, if $E$ and $F$ are graphs without initial (respectively terminal) $n$-looped vertices for $n\ge 1$, any homeomorphism between them induces a bijection between the sets of sources (respectively sinks) of $E$ and $F$.
} \end{remark} Let $E$ be a graph, $u,v\in E^0$, and assume that there is an injective map $\theta\colon s^{-1}(u)\to s^{-1}(v)$ such that $r(f)=r(\theta(f))$ for any $f\in s^{-1}(u)$. Let $F=E(u\hookrightarrow v)$ be the \emph{shift graph} associated to $\theta$, that is, $F^0:=E^0$ and $F^1:=\{g\}\sqcup (E^1\setminus\hbox{Im}(\theta))$ with $g \notin E^1$, where $s_F(g)=v$, $r_F(g)=u$, and for any other arrow $f\in F^1$ we have $s_F(f)=s_E(f)$ and $r_F(f)=r_E(f)$. For more information about the shift graph see \cite[Definition 2.1]{AALP}. Then define the map $\varphi\colon E^0\to F^0$ given by $\varphi(w)=w$ for any $w\in E^0$. \begin{theorem}\label{shiftcont} In the previous conditions, $\varphi$ is continuous for the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topologies of $E^0$ and $F^0$. \end{theorem} \begin{proof} First we claim that the two following statements are equivalent: \begin{itemize} \item[(a)] Every closed subset of $F^0$ is a closed subset of $E^0$. \item[(b)] For all $u \in E^0$ and for all closed subsets $S$ of $F^0$, if $u \ge_{E} S$ then $u \ge_{F} S$. \end{itemize} Indeed, for proving $(a) \Rightarrow (b)$, let $u \in E^0$ and let $S$ be a closed subset of $F^0$. Since $S$ is also a closed subset of $E^0$ by hypothesis, if $u \ge_{E} S$ then $u \in S$, and hence trivially $u \ge_{F} S$. Conversely, for $(b) \Rightarrow (a)$, consider a closed subset $S$ of $F^0$. We know that if $u \ge_{E} S$ then $u \ge_{F} S$. We check that $S$ is closed in $E^0$: if $v \ge_{E} S$ then $v \ge_{F} S$, implying that $v \in S$. Since $v \ge_{E} S \Rightarrow v \in S$, we have that $S$ is closed in $E^0$. In the next step, we prove the statement given in the theorem. We have to check that the set of closed subsets of $F^0$ is contained in the set of closed subsets of $E^0$. Let $S$ be a closed subset of $F^0$.
To prove that $S$ is closed in the topology of $E^0$ it suffices to check that $$\forall u\in F^0, ((\ u\ge_E S) \Rightarrow (u\ge_F S )).$$ We will prove something slightly stronger: that if $u\ge_E v$, then $u\ge_F v$. Indeed, the only arrows of $E$ that have been eliminated in $F$ are those in $\hbox{Im}(\theta)$. Assume $s^{-1}(u)=\{f_1,\ldots, f_k\}$ and $s^{-1}(v)=\{h_1,\ldots, h_k,\ldots, h_n\}$ where $k\le n$ and $\hbox{Im}(\theta)=\{h_1,\ldots,h_k\}$. Also $r(f_i)=r(h_i)$ for $i=1,\ldots,k$. Then $F^1=\{g\}\sqcup (E^1\setminus\{h_1,\ldots,h_k\})$ where $s(g)=v$ and $r(g)=u$. However, the elimination of the edges $h_1,\ldots, h_k$ does not eliminate connections, since $v$ connects in $F$ with $r(f_i)$ through the path $gf_i$ (for $i=1,\ldots,k$). \end{proof} \begin{example}\rm Consider the graphs $E$ and $F$ given below. In $E$ the closed subsets of the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology are $\small\text{\O}$, $\{u_1\}$, $\{u_1,u_2\}$ and $E^0$. On the other hand, in $F$ we have $\small\text{\O}$, $\{v_1\}$ and $F^0$.
\[ \xygraph{!{(0.25,0.5)}*+{E\colon} !{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::} !{(1,0)}*+{\bullet}="c" !{(2,0)}*+{\bullet}="d" !{(3,0)}*+{\bullet}="g" !{(3,0.25)}*+{\hbox{\tiny $u_3$}}="v" !{(2,0.25)}*+{\hbox{\tiny $u_2$}} !{(1,0.25)}*+{\hbox{\tiny $u_1$}} !{(1.5,-0.25)}*+{\hbox{\tiny $f_1$}} !{(2.5,-0.25)}*+{\hbox{\tiny $f_2$}} !{(4,0)}*+{\hbox{\tiny $f_3$}} "c":"d" "d":"g" "g" :@(ur,dr) "g" }\hskip2cm \xygraph{!{(0.25,0.5)}*+{F\colon} !{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::} !{(2,0)}*+{\bullet}="d" !{(3,0)}*+{\bullet}="g" !{(4,0)}*+{\bullet}="h" !{(2,0.25)}*+{\hbox{\tiny $v_1$}} !{(2.5,0.25)}*+{\hbox{\tiny $g_1$}} !{(3,0.25)}*+{\hbox{\tiny $v_2$}}="v" !{(3.5,0.5)}*+{\hbox{\tiny $g_2$}} !{(4.2,0.25)}*+{\hbox{\tiny $v_3$}} !{(3.5,-0.5)}*+{\hbox{\tiny $g_3$}} "d":"g" "g" :@(ur,dr) "h" "h" :@(ur,dr) "g" }\] \medskip The map $\varphi\colon E^0\to F^0$ such that $u_i\mapsto v_i$ for $i=1,2,3$ is continuous but not a homeomorphism, since the image of the closed subset $\{u_1,u_2\}$ of $E^0$ is not a closed subset of $F^0$. However, the canonical extension $\varphi\colon E\to F$ such that $\varphi(f_i)=g_i$ and $\varphi(f_i^*)=g_i^*$ for $i=1,2,3$ induces an isomorphism of Leavitt path algebras from $L_K(E)$ to $L_K(F)$ (in fact a shift move). \end{example} Before finishing this section, we introduce some formalities about categories in the next subsection. \subsection{Graph Categories}\label{cati} This subsection arises from the need to define \lq\lq operators\rq\rq\ which can be applied to any graph and produce certain sets. For instance, the assignment $E\mapsto{\mathcal{H}}_E$, mapping any graph to its set of hereditary and saturated sets, is an example. Also, one can map any graph $E$ to its set of line points: $E\mapsto \mathop{\hbox{\rm P}_{l}}(E)$. So we can think of $\mathop{\hbox{\rm P}_{l}}$ as an operator acting on the class of all graphs.
The way to formalize these examples is by using functors, that is, category theory. As usual, for a category $\mathcal C$, the notation $X\in\mathcal C$ means that $X$ is an object of the category. When defining functors among categories, we will usually define only the object function when the morphism one is clear. Denote by $\mathcal{Grph}$ the category whose objects are the directed graphs and, for $E,F\in\mathcal{Grph}$, define $\hom_{\tiny\mathcal{Grph}}(E,F)$ as the set of all isomorphisms (if any) $E\to F$. Denote by $\mathcal{Set}$ the category of sets. We will have occasion to deal with functors $\mathcal{Grph}\to\mathcal{Set}$, for instance ${\mathcal{H}}\colon\mathcal{Grph}\to\mathcal{Set}$ such that $E\mapsto{\mathcal{H}}_E$, the set of all hereditary and saturated subsets of $E^0$. We will also use the functors $\mathcal{E}^0,\mathcal{E}^1\colon \mathcal{Grph}\to\mathcal{Set}$ such that $\mathcal{E}^0(E)=E^0$ and $\mathcal{E}^1(E)=E^1$. We also define functors $\mathop{\hbox{\rm P}_{l}},\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}},\mathop{\hbox{\rm P}_{ec}},\mathop{\hbox{\rm P}_{b^\infty}}\colon\mathcal{Grph}\to \mathcal{Set}$ by writing $\mathop{\hbox{\rm P}_{l}}(E):=\{\hbox{line points of}\ E\}$, $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}(E):=\{\hbox{vertices in cycles without exits of}\ E\}$, $\mathop{\hbox{\rm P}_{ec}}(E):=\{\hbox{vertices in extreme cycles of}\ E\}$, and $$\mathop{\hbox{\rm P}_{b^\infty}}(E):=\{\hbox{vertices of $E$ whose tree contains infinitely many bifurcations}\}.$$ \begin{definition}\rm A functor $H\colon\mathcal{Grph}\to\mathcal{Set}$ is said to be a \emph{point functor} if $H(E)\subset E^0$ (in other words, if it is a subfunctor of $\mathcal{E}^0$ defined above).
\end{definition} Denote by $\mathfrak{F}(\mathcal{Grph},\mathcal{Set})$ the category whose objects are the functors $\mathcal{Grph}\to\mathcal{Set}$, where for $H,J\in\mathfrak{F}(\mathcal{Grph},\mathcal{Set})$ a morphism from $H$ to $J$ in $\mathfrak{F}(\mathcal{Grph},\mathcal{Set})$ is a natural transformation $\tau\colon H\to J$. Thus $\tau=(\tau_E)_{E\in\mathcal{Grph}}$ where $\tau_E\colon H(E)\to J(E)$ for any graph $E$, and the squares \begin{center} \begin{tikzcd} H(E) \arrow[r, "\tau_E"] \arrow[d, "H(\alpha)"'] & J(E) \arrow[d, "J(\alpha)"] \\ H(E') \arrow[r, "\tau_{E'}"'] & J(E') \end{tikzcd} \end{center} \noindent commute when $\alpha\in \hom_{\mathcal{Grph}}(E,E')$. We also define the category $\mathfrak{F}^\sharp(\mathcal{Grph},\mathcal{Set})$ as the full subcategory of $\mathfrak{F}(\mathcal{Grph},\mathcal{Set})$ whose objects are the point functors. \begin{definition}\rm A point functor $H\colon\mathcal{Grph}\to\mathcal{Set}$ is said to be \emph{hereditary} in case $H(E)$ is a hereditary subset of $E^0$ for any graph $E$. Similarly we can define \emph{hereditary saturated} point functors $\mathcal{Grph}\to\mathcal{Set}$. \end{definition} Given a point functor $H\colon\mathcal{Grph}\to\mathcal{Set}$ we define its hereditary closure, denoted $\overline{H}^h$, as the new point functor $\overline{H}^h\colon\mathcal{Grph}\to\mathcal{Set}$ given by $\overline{H}^h(E):=\hbox{hereditary closure of } H(E)$ in $E^0$. Similarly we can define the hereditary and saturated closure of a point functor $H$ (which we will denote by $\overline{H}$). We have the usual relations $H\subset \overline{H}^h\subset \overline{H}$ in the sense of subfunctors. We can think of point functors as if they were ordinary subsets of vertices in a graph. So given two point functors $H_1,H_2$ we can construct in an obvious way the Boolean operations $H_1\cup H_2$, $H_1\cap H_2$, $H_1 \setminus H_2$.
In particular, if $H$ is a point functor, we can construct a new point functor ${\mathcal E}^0 \setminus H\colon\mathcal{Grph}\to\mathcal{Set}$ given by $({\mathcal E}^0 \setminus H)(E):=E^0\setminus H(E)$. \begin{definition}{\rm A point functor $H\colon\mathcal{Grph}\to\mathcal{Set}$ is said to be {\em closed (respectively open)} if $H(E)$ is closed in the $\mathop{\hbox{\mdseries{\small\bf DCC}}}$ topology of $E$ (respectively open). Also, given $H$, we can define new functors $c(H)$, $\accentset{\circ}{H}$, $\mathop{\hbox{\rm ext}}(H)$, $\partial H\colon\mathcal{Grph}\to\mathcal{Set}$, given by $$c(H)(E):=c(H(E)),\ \accentset{\circ}{H}(E):=\accentset{\circ}{\aoverbrace[L1R]{H(E)}}, \ \mathop{\hbox{\rm ext}}(H)(E):=\mathop{\hbox{\rm ext}}(H(E)),\ (\partial H)(E):=\partial(H(E)).$$ These new functors may be referred to by their usual names: closure of $H$ denoted $c(H)$, interior of $H$ denoted $\accentset{\circ}{H}$, exterior of $H$ denoted $\mathop{\hbox{\rm ext}}(H)$, and boundary of $H$ denoted $\partial H$.} \end{definition} Observe that $\mathop{\hbox{\rm ext}}(H)$ can be described in terms of the connection relation between vertices by \begin{equation}\label{babero} \mathop{\hbox{\rm ext}}(H)(E)=\{v\in E^0 \ \vert \ v\not\ge H(E)\}. \end{equation} We will see later on that when the functor $H$ is hereditary and saturated then so is $\mathop{\hbox{\rm ext}}(H)$ (see Proposition \ref{here}). We also have the following: \begin{definition}\label{hervor}\rm Let $f\colon L_K(E)\to L_K(F)$ be a ring isomorphism. Given two point functors $H_i\colon\mathcal{Grph}\to\mathcal{Set}$ ($i=1,2$), we will say that $H_1$ is \emph{$f$-related} to $H_2$ if and only if $f(I(H_1(E)))=I(H_2(F))$. We will say that a point functor $H$ is \emph{$f$-invariant} if and only if $H$ is $f$-related to itself, that is, $f(I(H(E)))=I(H(F))$. Finally, a point functor $H$ is said to be {\em invariant under isomorphism} if and only if it is $f$-invariant for any isomorphism $f$.
\end{definition} Note that for $A,B\in\mathop{\mathcal H}_E$ one has $I(A\cup B)=I(A)+I(B)$ (the idea of the proof is in \cite[Proposition 1.6]{CMMSS}). Also $I(A\cap B)=I(A)\cap I(B)$: the inclusion $I(A\cap B)\subset I(A)\cap I(B)$ is straightforward and, for the reverse one, $I(A)\cap I(B)=I(C)$ for a suitable $C\in\mathop{\mathcal H}_E$. Then $C\subset A\cap B$, hence $I(A)\cap I(B)=I(C)\subset I(A\cap B)$. \begin{proposition}\label{union} Let $H_i\colon\mathcal{Grph}\to\mathcal{Set}$ ($i=1,2$) be hereditary and saturated point functors and let $f\colon L_K(E)\to L_K(F)$ be an isomorphism. If $H_1$ and $H_2$ are $f$-invariant, then $H_1\cup H_2$ and $H_1\cap H_2$ are $f$-invariant point functors. \end{proposition} \begin{proof} For the union, first observe that $I(H_1(E) \cup H_2(E))= I(H_1(E))+I(H_2(E))$. So, applying $f$ to both sides of the last equality and using our hypothesis, we have: $f(I(H_1(E) \cup H_2(E)))= f(I(H_1(E))+I(H_2(E)))= f(I(H_1(E)))+f(I(H_2(E))) = I(H_1(F)) + I(H_2(F)) = I(H_1(F) \cup H_2(F))$ as desired. Now for the intersection, take into account that $I(H_1(E) \cap H_2(E))= I(H_1(E)) \cap I(H_2(E))$; repeating the same argument we get $f(I(H_1(E) \cap H_2(E)))= I(H_1(F) \cap H_2(F))$. \end{proof} It has been proved that certain ideals associated to remarkable hereditary and saturated subsets of vertices are invariant under isomorphism of Leavitt path algebras.
Among these ideals we have: \begin{itemize} \item[(1)] the ideal generated by $\mathop{\hbox{\rm P}_{l}}$, the set of line points, since $I(\mathop{\hbox{\rm P}_{l}})$ is the socle of the Leavitt path algebra (\cite[Theorem 4.2]{AMMS1}); \item[(2)] the ideal generated by $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}$, the set of vertices in cycles with no exits (\cite[Theorem 6.11]{ABS}); \item[(3)] the ideal generated by $\mathop{\hbox{\rm P}_{ec}}$, the set of vertices in extreme cycles (\cite[Corollary 5.10]{CGKS}); \item[(4)] the ideal generated by $P_{ppi}$, the set of vertices which generate the largest purely infinite ideal of the Leavitt path algebra (\cite[Corollary 4.14]{CGKS}); and \item[(5)] the ideal generated by $P_{ex}$, the set of vertices which generate the largest exchange ideal of a Leavitt path algebra (\cite[Corollary 6.3]{CGKS}). \end{itemize} The point functor $\mathop{\hbox{\rm P}_{b^\infty}}$ is not invariant: the ideal generated by $\mathop{\hbox{\rm P}_{b^\infty}}(E)$ (vertices whose tree contains infinite bifurcations) is not preserved under isomorphism in general, as the following example shows.
\begin{example}\label{ohcac} \rm Consider the graphs $E$ and $F$ in Figure \ref{lluvioso}: \begin{figure}[ht] \centering \hbox{\hskip 3cm \begin{tikzpicture}[->] \draw[fill=black] (-0.1,0) circle (1pt); \draw[fill=black] (1.1,0.85) circle (1pt); \draw[fill=black] (1.1,-.5) circle (1pt); \draw[fill=black] (2,-.8) circle (1pt); \draw[fill=black] (2,-.1) circle (1pt); \draw[fill=black] (2.6,0.2) circle (0.5pt); \draw[fill=black] (2.8,0.2) circle (0.5pt); \draw[fill=black] (3,0.2) circle (0.5pt); \draw[fill=black] (2.6,-0.2) circle (0.5pt); \draw[fill=black] (2.8,-0.2) circle (0.5pt); \draw[fill=black] (3,-0.2) circle (0.5pt); \draw[fill=black] (2.6,-0.6) circle (0.5pt); \draw[fill=black] (2.8,-0.6) circle (0.5pt); \draw[fill=black] (3,-0.6) circle (0.5pt); \draw[fill=black] (2.6,-1) circle (0.5pt); \draw[fill=black] (2.8,-1) circle (0.5pt); \draw[fill=black] (3,-1) circle (0.5pt); \node at (-1,0){$E$:}; \node at (-0.4,0) {\tiny $v$}; \node at (1.1,-0.2) {\tiny $w_1$}; \node at (2,-1.1) {\tiny $w_2$}; \node at (2,0.2) {\tiny $w_3$}; \node at (1.3,1){\tiny $u$}; \node at (0.4,0.6) {\tiny $f$}; \node at (0.4,-0.5) {\tiny $g$}; \draw[] (0,0.1) -> (1,0.8); \draw[] (0,-0.1) -> (1,-.5); \draw[ ] (1.2,-0.4)--(1.9,-0.1); \draw[ ] (1.2,-0.6)--(1.9,-0.8); \draw[ ] (2.1,-0.8)--(2.5,-0.6); \draw[ ] (2.1,-0.8)--(2.5,-1); \draw[ ] (2.1,-0.1)--(2.5,-0.2); \draw[ ] (2.1,-0.1)--(2.5,0.2) ; \end{tikzpicture} \hskip 1cm \begin{tikzpicture}[->] \draw[fill=black] (-0.1,0) circle (1pt); \draw[fill=black] (-0.1,1) circle (1pt); \draw[fill=black] (1.1,1) circle (1pt); \draw[fill=black] (1.1,-.5) circle (1pt); \draw[fill=black] (2,-.8) circle (1pt); \draw[fill=black] (2,-.1) circle (1pt); \draw[fill=black] (2.6,0.2) circle (0.5pt); \draw[fill=black] (2.8,0.2) circle (0.5pt); \draw[fill=black] (3,0.2) circle (0.5pt); \draw[fill=black] (2.6,-0.2) circle (0.5pt); \draw[fill=black] (2.8,-0.2) circle (0.5pt); \draw[fill=black] (3,-0.2) circle (0.5pt); \draw[fill=black] (2.6,-0.6) circle (0.5pt); 
\draw[fill=black] (2.8,-0.6) circle (0.5pt); \draw[fill=black] (3,-0.6) circle (0.5pt); \draw[fill=black] (2.6,-1) circle (0.5pt); \draw[fill=black] (2.8,-1) circle (0.5pt); \draw[fill=black] (3,-1) circle (0.5pt); \node at (-1.3,0){$F$:}; \node at (-0.4,1) {\tiny $v_1$}; \node at (-0.4,0) {\tiny $v_2$}; \node at (1.1,-0.1) {\tiny $w_1'$}; \node at (2,-1.1) {\tiny $w_2'$}; \node at (2,0.2) {\tiny $w_3'$}; \node at (1.3,1.1){\tiny $u'$}; \node at (0.4,1.2) {\tiny $f'$}; \node at (0.4,-0.5) {\tiny $g'$}; \draw[] (0,1) -> (1,1); \draw[] (0,-0.05) -> (1,-.5); \draw[ ] (1.2,-0.4)--(1.9,-0.1); \draw[ ] (1.2,-0.6)--(1.9,-0.8); \draw[ ] (2.1,-0.8)--(2.5,-0.6); \draw[ ] (2.1,-0.8)--(2.5,-1); \draw[ ] (2.1,-0.1)--(2.5,-0.2); \draw[ ] (2.1,-0.1)--(2.5,0.2) ; \end{tikzpicture} } \caption{The graphs $E$ and $F$ of Example \ref{ohcac}.} \label{lluvioso} \end{figure} \bigskip We assume that $T(w_1)=\{w_i\}_{i\ge 1}$, that each $w_i$ is a bifurcation with two edges in the graph $E$, and similarly for the graph $F$. Thus $\mathop{\hbox{\rm P}_{b^\infty}}(E)=\{v\}\cup\{w_i\}_{i\ge 1}$ and $I(\mathop{\hbox{\rm P}_{b^\infty}}(E))=L_K(E)$. On the other hand, $\mathop{\hbox{\rm P}_{b^\infty}}(F)=\{v_2\}\cup\{w_i'\}_{i\ge 1}$ and the ideal $I(\mathop{\hbox{\rm P}_{b^\infty}}(F))$ is not $L_K(F)$. In fact $L_K(F)/I(\mathop{\hbox{\rm P}_{b^\infty}}(F))\cong M_2(K)$. \end{example} However we have: \begin{proposition} Let $E$ and $F$ be the graphs considered in Example \ref{ohcac} above. There is a graded $*$-isomorphism of $K$-algebras $\theta\colon L_K(E)\to L_K(F)$ such that $\theta(v)=v_1+v_2$ and the images under $\theta$ of the other vertices and edges are the homonymous vertices and edges of $F$ (and the same applies to ghost edges). \end{proposition} \begin{proof} The existence of the isomorphism is based upon the \lq\lq out-split\rq\rq\ move (see \cite{AALP}). Nevertheless, we describe the construction of the isomorphism.
We define first the linear map $\psi\colon K \hat E\to L_K(F)$ such that $\psi(v)=v_1+v_2$ and the images under $\psi$ of the other vertices and edges (real or ghost) are the homonymous vertices and edges of $F$ (as elements of $L_K(F)$). Also the image of a nontrivial path $x_1\cdots x_n$ in $K\hat E$ is defined to be $x_1'\cdots x_n'$. Then we prove that for $t\in\hbox{reg}(E)$, each difference $t-\sum hh^*$ (sum extended to edges $h$ with $s(h)=t$) maps to $0$ under $\psi$. This induces, by passing to the quotient, a homomorphism of $K$-algebras $\theta$ from $L_K(E)$ to $L_K(F)$. This homomorphism is an epimorphism since all the generators of $L_K(F)$ are in the image of $\theta$: for instance $v_1=\theta(ff^*)$ and $v_2=\theta(gg^*)$. To see that $\theta$ is a monomorphism, observe that $E$ satisfies Condition (L) and we apply the Cuntz-Krieger Uniqueness Theorem (see \cite[Theorem 2.2.16]{AAS}). The given isomorphism is actually a $*$-isomorphism by construction and it is also a graded isomorphism. \end{proof} \begin{remark}\rm Note that, according to \cite[Proposition 2.6]{CGKS}, $I(\mathop{\hbox{\rm P}_{ec}} \cup \mathop{\hbox{\rm P}_{b^\infty}})$ is invariant under any ring isomorphism. However, this is not true in general: the proof there makes essential use of the assumption that the ideal $I(\mathop{\hbox{\rm P}_{ec}} \cup \mathop{\hbox{\rm P}_{b^\infty}})$ does not contain any primitive idempotents. For instance, in the graph below consisting of one \lq\lq fiber\rq\rq\ \hskip 7cm \xygraph{ !{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.2cm>::} !{(0,0) }*+{\bullet_{u}}="a" !{(1.5,0) }*+{\bullet_{v}}="b" !{(0,1.5)}*+{} "a":^{f_n}_{(\infty)}"b" } \vskip .5cm \noindent there is a sink $v$ which is a primitive idempotent and belongs to the ideal $I(\mathop{\hbox{\rm P}_{ec}} \cup \mathop{\hbox{\rm P}_{b^\infty}})$. Such primitive idempotents (belonging to $I(\mathop{\hbox{\rm P}_{b^\infty}})$) may also be present in row-finite graphs (see the graph $E$ in Example \ref{ohcac}).
\end{remark} In this work we deal with suitable sets of vertices which define invariant ideals. We will prove in a forthcoming section that for any isomorphism $f\colon L_K(E)\to L_K(F)$, and for any $f$-invariant hereditary and saturated functor $H$, the exterior $\mathop{\hbox{\rm ext}}(H)$ is again $f$-invariant. However, the other functors (interior, closure, etc.) are not necessarily $f$-invariant. \begin{example}{\rm The following example shows that $\partial H = c(H) \cap c(H^c)$ is not invariant under isomorphism. Consider the graphs given in Example \ref{ohcac}. In the graph $E$ take $H(E)=\mathop{\hbox{\rm P}_{l}}(E)=\{u\}$. We have that $c(H(E))=\{u,v\}$ and $c(H(E)^c)=\{v\}\cup \{ w_i\}_{i \geq 1}$, and so $\partial H(E)=\{v\}$. On the other hand, $H(F)=\mathop{\hbox{\rm P}_{l}}(F)=\{v_1, u'\}$, $c(H(F))=\{v_1,u'\}$ and $c(H(F)^c)=\{v_2\} \cup \{w_i'\}_{i \geq 1}$. Finally $\partial H(F)=\emptyset$. } \end{example} \section{Annihilators}\label{anni} For an arbitrary algebra $A$ (not necessarily associative) and an ideal $I\triangleleft A$, we can consider the {\it annihilator} ${\rm{Ann}}(I):=\{a\in A\colon aI=Ia=0\}$. This is an ideal of $A$ and we have $I\subset{\rm{Ann}}({\rm{Ann}}(I))$. Also it is easy to see that ${\rm{Ann}}({\rm{Ann}}({\rm{Ann}}(I)))={\rm{Ann}}(I)$ for any ideal $I$ of $A$. Let us denote $\Tilde I:={\rm{Ann}}({\rm{Ann}}(I))\supset I$ for any ideal $I$ of $A$. Now, we consider the definition of regular ideal in the sense of \cite{Hamana}. These ideals have recently been studied in \cite{DanielDanilo} in the context of Leavitt path algebras. \begin{definition}\rm Let $A$ be a $K$-algebra. An ideal $I\triangleleft A$ satisfying $\Tilde{I}=I$ is called a {\it regular ideal}. \end{definition} It is easy to see that the ideals of the form $I={\rm{Ann}}(J)$ (for another ideal $J$) are regular. After writing Proposition \ref{here} below and Corollary \ref{qite}, we learned of the work \cite{DanielDanilo}, whose Proposition 3.5 contains a similar result.
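Before going on, we record two elementary illustrations of these notions (standard examples, not taken from \cite{Hamana} or \cite{DanielDanilo}). \begin{example}\rm In $A=K\times K$ the ideal $I=K\times 0$ satisfies ${\rm{Ann}}(I)=0\times K$, hence $\Tilde I={\rm{Ann}}(0\times K)=K\times 0=I$ and $I$ is a regular ideal. By contrast, in the polynomial algebra $A=K[t]$ the ideal $I=(t)$ satisfies ${\rm{Ann}}(I)=0$ because $A$ is a domain, so $\Tilde I={\rm{Ann}}(0)=A\supsetneq I$ and $I$ is not regular. \end{example}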
\begin{proposition}\label{here} Let $H \in \mathcal{H}_E$ and define $H'=\{v \in E^0 \; | \; v \not\geq H \}$. Then: \begin{enumerate} \item $H'$ is a hereditary and saturated subset of $E^0$, that is, $H' \in \mathcal{H}_E$. \item ${\rm Ann}(I(H))= I(H')$. \end{enumerate} \end{proposition} \begin{proof} For the first part, take $v\in H'$ and assume $v\ge v'\in E^0$. If $v'\ge H$ then $v\ge H$, a contradiction. So $H'$ is hereditary. To prove that it is saturated, consider a regular vertex $v$ such that $r(s^{-1}(v))\subset H'$. Let $\lambda\in\hbox{\rm path}(E)$ with $s(\lambda)=v$ and $r(\lambda)\in H$. If $\lambda$ is trivial then $v\in H$ and, $H$ being hereditary, $r(f)\in H$ for every $f\in s^{-1}(v)$, which contradicts $r(f)\in H'$. So we may write $\lambda=f_1\cdots f_n$ with $n\ge 1$, and we have $r(f_1)\in H'$, hence $r(f_1)\not\ge H$. But on the other hand $r(f_1)\ge r(\lambda)\in H$, a contradiction. This proves that any path whose source is $v$ has range out of $H$. Whence $v\not\ge H$, so that $v\in H'$. Let us prove now the second item. Take $u\in H'$ and let us check that $u I(H)=0$. If $z\in I(H)$ we can write $z=\sum_i k_i\alpha_i\beta_i^*$ with $k_i \in K$ and $\alpha_i,\beta_i$ paths whose ranges are in $H$. In case $uz\ne 0$ there must be some $i$ such that $u\alpha_i\beta_i^*\ne 0$. Then $u\ge r(\alpha_i)\in H$, a contradiction. Hence $H'I(H)=0$ and, applying the canonical involution, $I(H)H'=0$. Consequently $H'\subset{\rm{Ann}}(I(H))$, implying $I(H')\subset{\rm{Ann}}(I(H))$. Conversely, let $z\in {\rm{Ann}}(I(H))$ be a homogeneous element. We will prove first that for any vertex $u$ such that $u\in \mathop{\hbox{\rm ideal}}(z)$ one has $u\in H'$. Indeed: $\mathop{\hbox{\rm ideal}}(z)\subset {\rm{Ann}}(I(H))$, whence $uI(H)=I(H)u=0$. If $u\ge H$ there is a path $\lambda$ with $s(\lambda)=u$ and $r(\lambda)\in H$. But then $\lambda=u\lambda\in u I(H)=0$, a contradiction. Thus $u\in H'$. So far we have $H_1:=E^0\cap \mathop{\hbox{\rm ideal}}(z)\subset H'$. So $I(H_1)\subset I(H')$. Moreover, $\mathop{\hbox{\rm ideal}}(z)=I(H_1)$, so we deduce that $\mathop{\hbox{\rm ideal}}(z)\subset I(H')$.
But this is true for any homogeneous element $z\in {\rm{Ann}}(I(H))$, hence for any element of ${\rm{Ann}}(I(H))$. So ${\rm{Ann}}(I(H))\subset I(H')$. \end{proof} \begin{corollary}\label{qite} Let $H \in {\mathcal{H}}_E$ and $H'=\{v \in E^0 \; | \; v \not\geq H \}$. Define $H''=\{v \in E^0 \; | \; v \not\geq H' \}$. Then: \begin{enumerate} \item $H'' \in \mathcal{H}_E$ and ${\rm Ann}({\rm Ann}(I(H)))=I(H'')$. \item $H'' \subseteq \{v \in E^0 \; \vert \; v \geq H \}$. \item $H \subset H''$. \item $H'' \subset H$ if and only if, for any $v\in E^0$, the fact that $v\not\ge w$ for every $w\not\ge H$ implies $v \in H$. \end{enumerate} \end{corollary} \begin{proof} The first item is straightforward from Proposition \ref{here}. For the second, if $v\in H''$ then $v \not\ge H'$, which implies $v \notin H'$, so $v \ge H$. For proving (3), if $v\in H$ and $v \notin H''$ then $v \ge H'$, so there is a path from $v$ to some $w\in H'$; since $H$ is hereditary, $w\in H$, hence $w\ge H$, contradicting $w\in H'$. To prove (4), suppose first that $H'' \subset H$. Let $v$ be such that $v\not\ge w$ for every $w\not\ge H$. Then $v \not\ge H'$, which implies $v \in H''$. Because of the assumption $H'' \subset H$, we have $v \in H$. For the converse, consider $v \in H''$. Then $v \not\ge H'$, that is, $v\not\ge w$ for every $w\not\ge H$. By hypothesis, $v \in H$. \qedhere \end{proof} \begin{remark}\rm Observe that by Proposition \ref{here} it is easy to check that an ideal $I(H)$ is regular if and only if $H''\subset H$, if and only if, for any $v\in E^0$, the fact that $v\not\ge w$ for every $w\not\ge H$ implies $v \in H$. \end{remark} If $f\colon A\to B$ is a ring isomorphism we know that for any ideal $I\triangleleft A$ one has $f({\rm{Ann}}(I))={\rm{Ann}}(f(I))$. Therefore for any $I\triangleleft A$ one has $f(\Tilde I)=\widetilde{f(I)}$, implying that $f$ transforms regular ideals into regular ideals. \begin{proposition} Let $f\colon L_K(E)\to L_K(F)$ be a ring isomorphism and let $H_1\in{\mathcal{H}}_E$, $H_2\in{\mathcal{H}}_F$ be such that $f(I(H_1))=I(H_2)$.
Then (following the notation in Corollary \ref{qite}) we have $f(I(H_1'))=I(H_2')$. \end{proposition} \begin{proof} We know that $I(H_i')={\rm{Ann}}(I(H_i))$ for $i=1,2$. So $$f(I(H_1'))=f({\rm{Ann}}(I(H_1)))={\rm{Ann}}(f(I(H_1)))={\rm{Ann}}(I(H_2))=I(H_2').$$ \end{proof} The results in this section are restated in terms of point functors in the proposition below. We highlight that our main interest is in invariant point functors, so item \eqref{leo} is an essential result for our purposes. \begin{proposition}\label{dormit} Let $H\colon\mathcal{Grph}\to\mathcal{Set}$ be a hereditary and saturated point functor. Then: \begin{enumerate} \item $\mathop{\hbox{\rm ext}}(H)$ is a hereditary and saturated point functor. \item ${\rm{Ann}}(I(H(E)))=I(\mathop{\hbox{\rm ext}}(H)(E))$. \item $\mathop{\hbox{\rm ext}}(\mathop{\hbox{\rm ext}}(H))$ is a hereditary and saturated point functor and ${\rm{Ann}}({\rm{Ann}}(I(H(E))))= I(\mathop{\hbox{\rm ext}}(\mathop{\hbox{\rm ext}}(H))(E))$. \item $\mathop{\hbox{\rm ext}}(\mathop{\hbox{\rm ext}}(H))(E) \subseteq \{v \in E^0 \; \vert \; v \geq H(E) \}$. \item $H(E) \subset \mathop{\hbox{\rm ext}}(\mathop{\hbox{\rm ext}}(H))(E)$. \item $\mathop{\hbox{\rm ext}}(\mathop{\hbox{\rm ext}}(H))(E) \subset H(E)$ if and only if, for any $v\in E^0$, the fact that $v\not\ge w$ for every $w\not\ge H(E)$ implies $v \in H(E)$. \item\label{leo} If $H_i\colon\mathcal{Grph}\to\mathcal{Set}$ ($i=1,2$) are point functors and $H_1$ is $f$-related to $H_2$ then $\mathop{\hbox{\rm ext}}{(H_1)}$ is $f$-related to $\mathop{\hbox{\rm ext}}{(H_2)}$. In particular, if a point functor $H$ is $f$-invariant, then $\mathop{\hbox{\rm ext}}{(H)}$ is also $f$-invariant.
\end{enumerate} \end{proposition} Since the functors $\mathop{\hbox{\rm P}_{l}}$, $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}$ and $\mathop{\hbox{\rm P}_{ec}}$ are invariant, the functors $\mathop{\hbox{\rm ext}}(\mathop{\hbox{\rm P}_{l}})$, $\mathop{\hbox{\rm ext}}(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}})$ and $\mathop{\hbox{\rm ext}}(\mathop{\hbox{\rm P}_{ec}})$ are also invariant. Furthermore, since $\mathop{\hbox{\rm P}_{lce}}:=\mathop{\hbox{\rm P}_{l}}\cup\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\cup\mathop{\hbox{\rm P}_{ec}}$ is invariant by Proposition \ref{union}, we have that $\mathop{\hbox{\rm ext}}(\mathop{\hbox{\rm P}_{lce}})$ is also invariant. Observe that $\mathop{\hbox{\rm ext}}(\mathop{\hbox{\rm P}_{lce}})$ is a subfunctor of $\mathop{\hbox{\rm P}_{b^\infty}}$. Concretely, $$\mathop{\hbox{\rm ext}}(\mathop{\hbox{\rm P}_{lce}})(E)=\{v\in E^0 \; \vert \; v\not\ge \mathop{\hbox{\rm P}_{lce}}(E)\}.$$ \begin{definition}{\rm For a graph $E$ we define the set of {\it vertices with pure infinite bifurcations} $\mathop{\hbox{\rm P}_{b_p^{\infty}}}(E):=\{v\in E^0 \; \vert \; v\not\ge\mathop{\hbox{\rm P}_{lce}}(E)\}$ and the point functor $\mathop{\hbox{\rm P}_{b_p^{\infty}}}\colon\mathcal{Grph}\to\mathcal{Set}$ such that $E\mapsto \mathop{\hbox{\rm P}_{b_p^{\infty}}}(E)$.} \end{definition} \begin{theorem}\label{enjundia} The functor $\mathop{\hbox{\rm P}_{b_p^{\infty}}}\colon\mathcal{Grph}\to \mathcal{Set}$ is invariant. \end{theorem} \begin{proof} By Proposition \ref{dormit}\eqref{leo}, it suffices to realize that $\mathop{\hbox{\rm P}_{b_p^{\infty}}}=\mathop{\hbox{\rm ext}}(\mathop{\hbox{\rm P}_{lce}})$ and $\mathop{\hbox{\rm P}_{lce}}$ is invariant.
\end{proof} For instance, in the graph $E$ of Example \ref{ohcac}, we have $\mathop{\hbox{\rm P}_{l}}(E)=\{u\}$, $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}(E)=\emptyset$, $\mathop{\hbox{\rm P}_{ec}}(E)= \emptyset$ and $\mathop{\hbox{\rm P}_{b_p^{\infty}}}(E)=\{w_i\}_{i\ge 1}$; hence the ideals $I(\{u\})$ and $I(\{w_i\}_{i\ge 1})$ are invariant under ring isomorphisms. \section{Socle chain in Leavitt path algebras}\label{soclechain} If $R$ is a ring and $M$ an $R$-module, one can define the series of socles of $M$ in the usual way: it is an ascending chain of $R$-submodules $\{\mathop{\hbox{\rm Soc}}^{(i)}(M)\}_{i\ge 1}$ where $\mathop{\hbox{\rm Soc}}^{(1)}(M):=\mathop{\hbox{\rm Soc}}(M)$ and $\mathop{\hbox{\rm Soc}}^{(n+1)}(M)/\mathop{\hbox{\rm Soc}}^{(n)}(M)=\mathop{\hbox{\rm Soc}}(M/\mathop{\hbox{\rm Soc}}^{(n)}(M))$. In particular this can be applied to an algebra $A$, so that the socle series defined in \cite{ARS} is a sequence of ideals $$\mathop{\hbox{\rm Soc}}(A)\subset\cdots\subset\mathop{\hbox{\rm Soc}}\nolimits^{(n)}(A)\subset\mathop{\hbox{\rm Soc}}\nolimits^{(n+1)}(A)\subset\cdots \quad (n\in{\mathbb{N}})$$ such that $$\frac{\mathop{\hbox{\rm Soc}}^{(n+1)}(A)}{\mathop{\hbox{\rm Soc}}^{(n)} (A)}=\mathop{\hbox{\rm Soc}}\left(\frac{A}{\mathop{\hbox{\rm Soc}}^{(n)}(A)}\right).$$ We will focus on $n\in{\mathbb{N}}$ to avoid dealing with infinite cardinals. One of our goals in this section is to check that the different ideals $\mathop{\hbox{\rm Soc}}^{(n)}(A)$ associated to a Leavitt path algebra $A$ are invariant under isomorphism and to characterize them graphically. The other purpose is more ambitious: since the socle is the ideal generated by a point functor, namely $\mathop{\hbox{\rm P}_{l}}$, we would like to prove that there is an ascending chain of point functors (starting at $\overline{\mathop{\hbox{\rm P}_{l}}}$) all of which are invariant.
From this point, we want to extrapolate so that we can apply this circle of ideas to other point functors (for instance $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}$) and, even further, to any invariant hereditary and saturated functor (see Section \ref{rosa}). Consider Example 2.7 of \cite{ARS}: \begin{equation} \xymatrix{ E: & {\bullet}^{v_{1,1}} \ar[r] & {\bullet}^{v_{1,2}} \ar[r] & {\bullet}^{v_{1,3}} \ar[r] & {\bullet}^{v_{1,4}} \ar@{.>}[r] & \\ & {\bullet}^{v_{2,1}} \ar[r] \ar[u] & {\bullet}^{v_{2,2}} \ar[r] \ar[ul] & {\bullet}^{v_{2,3}} \ar[r] \ar[ull] & {\bullet}^{v_{2,4}}\ar@{.>}[r] \ar[ulll] & \\ & {\bullet}^{v_{3,1}} \ar[r] \ar[u] & {\bullet}^{v_{3,2}} \ar[r] \ar[ul] & {\bullet}^{v_{3,3}} \ar[r] \ar[ull] & {\bullet}^{v_{3,4}}\ar@{.>}[r] \ar[ulll] & \\ }\label{ahora} \end{equation} \medskip Let $A$ be the Leavitt path algebra $L_K(E)$ where $E$ is the graph in \eqref{ahora}. By \cite[Theorem 5.2]{AMMS2} we have $\mathop{\hbox{\rm Soc}}(A)=I(P_l)$, with $P_l=\{v_{1,i}\}_{i\ge 1}$. Then $\frac{A}{\mathop{\hbox{\rm \tiny Soc}}(A)}\cong L_K(F)$ where $F$ is the graph in \eqref{ahora2}: \begin{equation} \xymatrix{ F: & {\bullet}^{v_{2,1}} \ar[r] & {\bullet}^{v_{2,2}} \ar[r] & {\bullet}^{v_{2,3}} \ar[r] & {\bullet}^{v_{2,4}}\ar@{.>}[r] & \\ & {\bullet}^{v_{3,1}} \ar[r] \ar[u] & {\bullet}^{v_{3,2}} \ar[r] \ar[ul] & {\bullet}^{v_{3,3}} \ar[r] \ar[ull] & {\bullet}^{v_{3,4}}\ar@{.>}[r] \ar[ulll] & \\ }\label{ahora2} \end{equation} \medskip Thus $\mathop{\hbox{\rm Soc}}\left(\frac{A}{\mathop{\hbox{\rm \tiny Soc}}(A)}\right)\cong\mathop{\hbox{\rm Soc}}(L_K(F))$ and, since $\mathop{\hbox{\rm P}_{l}}(F)=\{ v_{2,i}\}_{i\ge 1}$, we have $\mathop{\hbox{\rm Soc}}^{(2)}(A)= I(H)$, where $H=\{v_{1,i}\}_{i\ge 1}\cup \{v_{2,i}\}_{i\ge 1}$.
Let $B=\frac{A}{\mathop{\hbox{\rm \tiny Soc}}^{(2)}(A)}$; then $B\cong L_K(G)$, where $G$ is the graph: \begin{equation} \xymatrix{ G: & {\bullet}^{v_{3,1}} \ar[r] & {\bullet}^{v_{3,2}} \ar[r] & {\bullet}^{v_{3,3}} \ar[r] & {\bullet}^{v_{3,4}}\ar@{.>}[r] & \\ }\label{ahora3} \end{equation} \medskip Since $$\frac{\mathop{\hbox{\rm Soc}}^{(3)}(A)}{\mathop{\hbox{\rm Soc}}^{(2)}(A)}=\mathop{\hbox{\rm Soc}}\left(\frac{A}{\mathop{\hbox{\rm Soc}}^{(2)}(A)}\right)=\mathop{\hbox{\rm Soc}}(B)=B=\frac{A}{\mathop{\hbox{\rm Soc}}^{(2)}(A)},$$ we have $\mathop{\hbox{\rm Soc}}^{(3)}(A)=A=I(E^0)$. In this example the series of socles is $\mathop{\hbox{\rm Soc}}(A)\subsetneq\mathop{\hbox{\rm Soc}}^{(2)}(A)\subsetneq\mathop{\hbox{\rm Soc}}^{(3)}(A)=A$, inducing a series of hereditary and saturated subsets $$\{v_{1,i}\}_{i\ge 1}\subsetneq \{v_{1,i}\}_{i\ge 1}\cup \{v_{2,i}\}_{i\ge 1}\subsetneq E^0 $$ and each of the hereditary saturated subsets in this series induces an ideal invariant under isomorphisms. This example illustrates the general phenomenon that we analyze in the following paragraphs. \bigskip If $A=L_K(E)$ is a Leavitt path algebra then each ideal $\mathop{\hbox{\rm Soc}}^{(n)}(A)$ is graded by \cite[Theorem 3.2]{ARS}. Applying \cite[Theorem 2.4.8]{AAS} we get $\mathop{\hbox{\rm Soc}}^{(n)}(A)=I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E))$ for a certain hereditary and saturated subset denoted $\mathop{\hbox{\rm P}_{l}}^{(n)}(E)$. Also, since $\mathop{\hbox{\rm Soc}}^{(n)}(A)\subset\mathop{\hbox{\rm Soc}}^{(n+1)}(A)$ we have $$\mathop{\hbox{\rm P}_{l}}\nolimits^{(n)}(E)\subset \mathop{\hbox{\rm P}_{l}}\nolimits^{(n+1)}(E),$$ for any graph $E$.
Looking at $\mathop{\hbox{\rm P}_{l}}^{(n)}$ as point functors $\mathop{\hbox{\rm P}_{l}}^{(n)}\colon\mathcal{Grph}\to\mathcal{Set}$, we have a sequence of hereditary and saturated functors $$\overline{\mathop{\hbox{\rm P}_{l}}}=\mathop{\hbox{\rm P}_{l}}\nolimits^{(1)}\subset\mathop{\hbox{\rm P}_{l}}\nolimits^{(2)}\subset\cdots$$ It is easy to see that if $f\colon A\to B$ is a ring isomorphism, then $f(\mathop{\hbox{\rm Soc}}^{(n)}(A))=\mathop{\hbox{\rm Soc}}^{(n)}(B)$. Summarizing, we derive the following proposition. \begin{proposition}\label{invariante} The functors $\mathop{\hbox{\rm P}_{l}}^{(n)}$ ($n\ge 1$) are invariant under isomorphism in the sense of Definition \ref{hervor}. \end{proposition} \begin{remark}\label{tita Antonia} {\rm In general, for a row-finite graph $E$ and $H(E) \in \mathcal{H}_E$, let $\theta$ be the isomorphism given in \cite[Corollary 2.4.13 (i)]{AAS}, that is, $\theta: L_K(E)/I(H(E)) \rightarrow L_K(E/H(E))$. Recall that, in this situation, $\theta^{-1}$ is defined as follows: for $v \in (E/H(E))^0$ and $e \in (E/H(E))^1$, $\theta^{-1}(v) = v + I(H(E))$, $\theta^{-1}(e) = e + I(H(E))$ and $\theta^{-1}(e^{\ast}) = e^{\ast} + I(H(E))$. For short we will identify (without further mention) an element in $L_K(E)/I(H(E))$ with its corresponding image through $\theta$ inside $L_K(E/H(E))$.} \end{remark} Given that the hereditary saturated functors $P_l^{(n)}$ induce invariant ideals, we now consider the problem of describing, in purely graph-theoretic terms, the sets $\mathop{\hbox{\rm P}_{l}}^{(n)}(E)$ ($n\ge 1$). Consider the following diagram, where $i_E\colon E^0\to L_K(E)$ is the canonical injection and $\pi$ the canonical projection $\pi\colon L_K(E)\to L_K(E)/I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E))\cong L_K(E/\mathop{\hbox{\rm P}_{l}}^{(n)}(E))$ (up to identification). The elements of $L_K(E)/I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E))$ will be denoted $x+I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E))$ as usual.
Denote $F:=E/\mathop{\hbox{\rm P}_{l}}^{(n)}(E)$. We will need to take into account that $F^0=(E/\mathop{\hbox{\rm P}_{l}}^{(n)}(E))^0=\pi i_E(E^0)$ and $i_F=\pi i_E\vert_{F^0}$. The commutativity of the square below is contained in the proof of \cite[Theorem 2.4.12]{AAS}. \[ \begin{tikzcd}[column sep=small] E^0 \arrow{r}{i_E} & L_K(E) \arrow{d}{\pi} \\ F^0\arrow{r}{i_F}\arrow[u,hook] & L_K(F) \end{tikzcd} \] \begin{proposition} \label{aislu} Let $L_K(E)$ and $L_K(F)$ be the Leavitt path algebras associated to the graphs $E$ and $F=E/\mathop{\hbox{\rm P}_{l}}^{(n)}(E)$, respectively. Then $\mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)=\{ v \in E^0 \; | \; v + I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E)) \in \overline{\mathop{\hbox{\rm P}_{l}}(F)}^F\}$. \end{proposition} \begin{proof} Throughout this proof we will shorten the notation $\overline{\mathop{\hbox{\rm P}_{l}}(F)}^F$ to $\overline{\mathop{\hbox{\rm P}_{l}}(F)}$. We know $\mathop{\hbox{\rm Soc}}^{(n)}(L_K(E))=I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E))$ and $$\mathop{\hbox{\rm Soc}}(L_K(F))=\mathop{\hbox{\rm Soc}}\left(\frac{L_K(E)}{\mathop{\hbox{\rm Soc}}^{(n)}(L_K(E))}\right)= \frac{\mathop{\hbox{\rm Soc}}^{(n+1)}(L_K(E))}{\mathop{\hbox{\rm Soc}}^{(n)}(L_K(E))}.$$ For the first containment, consider $v \in \mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)$. Thus $v + I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E)) \in \mathop{\hbox{\rm Soc}}(L_K(F))$, which implies $v+ I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E)) \in I(\mathop{\hbox{\rm P}_{l}}(F)) \cap F^0$. Then $v + I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E)) \in \overline{\mathop{\hbox{\rm P}_{l}}(F)}$. For the converse, let $v$ be such that $v + I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E)) \in \overline{\mathop{\hbox{\rm P}_{l}}(F)}=\cup_{i\ge 0}\Lambda^i$ (see \eqref{onion}). Recall that $\Lambda^0=\mathop{\hbox{\rm P}_{l}}(F)$.
We prove that for any $i$, one has $$v+I(\mathop{\hbox{\rm P}_{l}}\nolimits^{(n)}(E))\in \Lambda^i\implies v\in \mathop{\hbox{\rm P}_{l}}\nolimits^{(n+1)}(E).$$ For $i=0$ we need to prove that if $v+I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E))\in \mathop{\hbox{\rm P}_{l}}(F)$ then $v\in \mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)$. In this case $v + I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E)) \in \mathop{\hbox{\rm Soc}}(L_K(F))$. Then $v \in \mathop{\hbox{\rm Soc}}^{(n+1)}(L_K(E)) \cap E^0 = I(\mathop{\hbox{\rm P}_{l}}^{(n+1)}(E))\cap E^0$, that is, $v\in \mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)$. Assume now that for some $i$ we have the implication: $$v+I(\mathop{\hbox{\rm P}_{l}}\nolimits^{(n)}(E))\in \Lambda^i\implies v\in \mathop{\hbox{\rm P}_{l}}\nolimits^{(n+1)}(E).$$ Now we prove that $$v+I(\mathop{\hbox{\rm P}_{l}}\nolimits^{(n)}(E))\in \Lambda^{i+1}\implies v\in \mathop{\hbox{\rm P}_{l}}\nolimits^{(n+1)}(E).$$ So we consider $v+I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E))\in\Lambda^{i+1}\setminus\Lambda^i$. We know that $v$ is a regular vertex and, since we are considering the row-finite case, we have $r_F(s_F^{-1}(v))=\{w_1,\ldots,w_m\}$. Since $w_j+I(\mathop{\hbox{\rm P}_{l}}\nolimits^{(n)}(E))\in\Lambda^i$, applying the induction hypothesis we have that $w_j\in\mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)$ for $1\le j\le m$. On the other hand, we may have $r_E({s_E}^{-1}(v))=\{w_1,\ldots,w_m,w_{m+1},\ldots, w_k\}$. But then, for $j\ge 1$, one has $w_{m+j}\in \mathop{\hbox{\rm P}_{l}}^{(n)}(E)$, hence these elements are also in $\mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)$. In conclusion $w_j\in\mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)$ for any index $j$. So relation (CK2) applied to the vertex $v$ of $E$ gives $v\in\mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)$. \end{proof} \begin{notation}{\rm For two subsets $E_1^0, E_2^0$ of vertices of $E^0$ we write $E_1^0 \subseteq^{1} E_2^0$ if all the vertices of $E_1^0$ are contained in $E_2^0$ except at most one.} \end{notation} For the next result we will need a preliminary lemma.
\begin{lemma}\label{oportuno} Let $L_K(E)$ be the Leavitt path algebra associated to a graph $E$. Then $\mathop{\hbox{\rm P}_{l}}^{(n)}(E)$ does not contain vertices that are the base of a cycle in $E$. \end{lemma} \begin{proof} We proceed by induction on $n$. The case $n=1$ is clear since $\mathop{\hbox{\rm P}_{l}}^{(1)}(E)=\overline{\mathop{\hbox{\rm P}_{l}}(E)}$. Suppose the condition holds for $\mathop{\hbox{\rm P}_{l}}^{(k)}(E)$ for $k < n$. Let $w \in \mathop{\hbox{\rm P}_{l}}^{(n)}(E)$ be the base of a cycle $c$ in $E$. By Proposition \ref{aislu}, we have that $w + I(\mathop{\hbox{\rm P}_{l}}^{(n-1)}(E)) \in \overline{\mathop{\hbox{\rm P}_{l}}(F)}^F$, where $F=E/\mathop{\hbox{\rm P}_{l}}^{(n-1)}(E)$. Write $\overline{\mathop{\hbox{\rm P}_{l}}(F)}^F=\cup_{i\ge 0}\Lambda^i$. If $w + I(\mathop{\hbox{\rm P}_{l}}^{(n-1)}(E)) \in \Lambda^0=\mathop{\hbox{\rm P}_{l}}(F)$ then the cycle $c$ cannot survive in $F$, so $c^0 \cap \mathop{\hbox{\rm P}_{l}}^{(n-1)}(E) \ne \emptyset$ and, by heredity, $w \in \mathop{\hbox{\rm P}_{l}}^{(n-1)}(E)$; but by the induction hypothesis $\mathop{\hbox{\rm P}_{l}}^{(n-1)}(E)$ does not contain vertices based at cycles, so we get a contradiction. Next, we assume that $w + I(\mathop{\hbox{\rm P}_{l}}^{(n-1)}(E)) \in \Lambda^i \setminus \Lambda^{i-1}$. So $r_F(s_F^{-1}(w)) \subseteq \Lambda^{i-1}$. We have two cases. First, suppose the cycle $c$ based at $w$ is such that $c \in \mathop{\hbox{\rm Path}} (F)$; then $w \in \Lambda^{i-1}$, which is not possible. Otherwise $c \notin \mathop{\hbox{\rm Path}} (F)$, that is, there exists $u \in c^0$ with $u \in \mathop{\hbox{\rm P}_{l}}^{(n-1)}(E)$, hence $w \in \mathop{\hbox{\rm P}_{l}}^{(n-1)}(E)$, a contradiction. \end{proof} \begin{theorem}\label{encasa} Let $L_K(E)$ be the Leavitt path algebra associated to a graph $E$.
Then $\mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)$ is the saturated closure of \begin{equation}\label{chungon} \{ v \in E^0 \; | \; T_E(v) \text{ is acyclic and }\forall w\in T_E(v), r_E(s_E^{-1}(w)) \subseteq^1 \mathop{\hbox{\rm P}_{l}}\nolimits^{(n)}(E)\} \end{equation} for $n \ge 1$. \end{theorem} \begin{proof} It is straightforward to check that (\ref{chungon}) is a hereditary set. We denote $F:=E/\mathop{\hbox{\rm P}_{l}}^{(n)}(E)$. Consider $ v \in \mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)$. Identify the vertices of $F$ with the corresponding vertices of $E$, i.e. $ v + I(\mathop{\hbox{\rm P}_{l}}^{(n)}(E))$ as a vertex of $F$ is identified with the vertex $v$ of $E$. Notice that $T_E(v)$ is acyclic by Lemma \ref{oportuno}. Next we prove that for any $w\in T_E(v)$, $r_E(s_E^{-1}(w))\subseteq^{1} \mathop{\hbox{\rm P}_{l}}^{(n)}(E)$. By Proposition \ref{aislu}, we have $v\in\overline{\mathop{\hbox{\rm P}_{l}}(F)}^F=\cup_{i\ge 0}\Lambda^i$. If $v\in\Lambda^0=\mathop{\hbox{\rm P}_{l}}(F)$, then $T_F(v)$ contains no bifurcations of $F$. So, take $w\in T_E(v)$; if $s^{-1}(w)=\{g_t\}_{t\in I}$ in $E$, then either all these edges disappear when passing to $F$, or exactly one of them, say $g_1$, survives. In this way, in the graph $F$ we have $\vert r_F(s_F^{-1}(w))\vert\le 1$. Whence $w$ is either a sink of $F$ or $r_F(s_F^{-1}(w))$ has cardinality $1$. This proves our claim for $i=0$. Now, assume that the property holds for any $\Lambda^k$ with $k \in\{0,1, \ldots, i\}$. Take $v\in\Lambda^{i+1}\setminus\Lambda^i$. Let $r_F(s_F^{-1}(v))=\{w_1,\ldots,w_m\}$. Since $T_E(w_j)\subset T_E(v)$, each tree $T_E(w_j)$ is acyclic, and since $w_j\in \Lambda^i$, any vertex $v'\in T_E(w_j)$ satisfies $r_E(s_E^{-1}(v'))\subseteq^{1} \mathop{\hbox{\rm P}_{l}}^{(n)}(E)$. Thus each $w_j$ for $j=1,\ldots,m$ is in the set \eqref{chungon}. We may have $r_E({s_E}^{-1}(v))=\{w_1,\ldots,w_m,w_{m+1},\ldots, w_q\}$.
But then, for $j\geq 1$, one has $w_{m+j}\in \mathop{\hbox{\rm P}_{l}}^{(n)}(E)$. We know $T_E(w_{m+j})$ is acyclic and, on the other hand, for every $z \in T_E(w_{m+j})$ in fact $r_E(s_E^{-1}(z))\subseteq \mathop{\hbox{\rm P}_{l}}^{(n)}(E)$, so in particular $r_E(s_E^{-1}(z))\subseteq^{1} \mathop{\hbox{\rm P}_{l}}^{(n)}(E)$, hence $w_{m+1},\ldots, w_q$ belong to (\ref{chungon}). Finally, applying (CK2) to $v$ we have that $v$ is in the ideal generated by the set \eqref{chungon}. Applying \cite[Corollary 2.4.16 (i)]{AAS} we have that $v$ is in the saturated closure of the set in \eqref{chungon}. To prove the converse it suffices to see that the set in \eqref{chungon} is contained in $\mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)$. Assume that $v$ is a vertex such that $T_E(v)$ is acyclic and for any $w\in T_E(v)$ we have $r(s^{-1}(w)) \subseteq^{1} \mathop{\hbox{\rm P}_{l}}^{(n)}(E)$. Then, in $F$, every such $w$ satisfies $s_F^{-1}(w)=\small\text{\O}$ or $\vert s_F^{-1}(w)\vert=1$. Thus $v$ is a line point of $F$ and, taking into account Proposition \ref{aislu}, we get $v\in\mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)$. \end{proof} \begin{remark} \label{aisld}\rm Let $L_K(E)$ be the Leavitt path algebra associated to a graph $E$. Taking into account Theorem \ref{encasa} for $n=1$, we have that $\mathop{\hbox{\rm P}_{l}}^{(2)}(E)$ is the saturated closure of \begin{equation}\label{chungo} \{ v \in E^0 \; | \; T_E(v) \text{ is acyclic and }\forall w\in T_E(v), r(s^{-1}(w)) \subseteq^1 \overline{\mathop{\hbox{\rm P}_{l}}(E)} \}. \end{equation} So, a Leavitt path algebra $L_K(E)$ verifies $\mathop{\hbox{\rm Soc}}^{(2)}({L_K(E)})=L_K(E)$ if and only if $E^0=\mathop{\hbox{\rm P}_{l}}^{(2)}(E)$.
\end{remark} \begin{example} \rm In order to illustrate Theorem \ref{encasa} for $n=1$ we compute the set $\mathop{\hbox{\rm P}_{l}}^{(2)}(E)$ for the following graph $E$: \begin{equation} \xymatrix{ E: & {\bullet}^{v_{1,1}} \ar[r] & {\bullet}^{v_{1,2}} \ar[r] & {\bullet}^{v_{1,3}} \ar[r] & {\bullet}^{v_{1,4}} \ar@{.>}[r] & \\ & {\bullet}^{v_{2,1}} \ar[r] \ar[u] & {\bullet}^{v_{2,2}} \ar[r] \ar[u] & {\bullet}^{v_{2,3}} \ar[r] \ar[u] & {\bullet}^{v_{2,4}}\ar@{.>}[r] \ar[u] & \\ & {\bullet}^{v_{3,1}} \ar[r] \ar[u] & {\bullet}^{v_{3,2}} \ar[r] \ar[u] & {\bullet}^{v_{3,3}} \ar[r] \ar[u] & {\bullet}^{v_{3,4}}\ar@{.>}[r] \ar[u] & \\ }\label{despues} \end{equation} \smallskip In this case we have $\mathop{\hbox{\rm P}_{l}}^{(1)}(E)=\{v_{1,i}\}_{i \ge 1}$ and by Theorem \ref{encasa} we get the equality $\mathop{\hbox{\rm P}_{l}}^{(2)}(E)=\{v_{1,i}\}_{i \ge 1} \cup \{v_{2,j}\}_{j \ge 1}$. According to Proposition \ref{invariante}, the ideals generated by these sets are invariant under isomorphism and of course we have $\mathop{\hbox{\rm Soc}}(L_K(E))=I(\{v_{1,i}\}_{i\ge 1})$ and $\mathop{\hbox{\rm Soc}}^{(2)}(L_K(E))=I(\{v_{1,i}\}_{i \ge 1} \cup \{v_{2,j}\}_{j \ge 1})$. Also the quotient graph $F = E/\mathop{\hbox{\rm P}_{l}}^{(1)}(E)$ is \begin{equation} \xymatrix{ F: & {\bullet}^{v_{2,1}} \ar[r] & {\bullet}^{v_{2,2}} \ar[r] & {\bullet}^{v_{2,3}} \ar[r] & {\bullet}^{v_{2,4}}\ar@{.>}[r] & \\ & {\bullet}^{v_{3,1}} \ar[r] \ar[u] & {\bullet}^{v_{3,2}} \ar[r] \ar[u] & {\bullet}^{v_{3,3}} \ar[r] \ar[u] & {\bullet}^{v_{3,4}}\ar@{.>}[r] \ar[u] & \\ }\label{despues2} \end{equation} \smallskip \noindent so that $L_K(E)/\mathop{\hbox{\rm Soc}}^{(2)}(L_K(E))=L_K(G)$, where $G$ is the graph below, which is simple and coincides with its socle. This implies $\mathop{\hbox{\rm Soc}}^{(3)}(L_K(E))=L_K(E)$. Consequently $\mathop{\hbox{\rm P}_{l}}^{(3)}(E)=E^0$.
\begin{equation} \xymatrix{ G: & {\bullet}^{v_{3,1}} \ar[r] & {\bullet}^{v_{3,2}} \ar[r] & {\bullet}^{v_{3,3}} \ar[r] & {\bullet}^{v_{3,4}}\ar@{.>}[r] & \\ }\label{despues3} \end{equation} \smallskip \end{example} \begin{example}\rm The following example illustrates the general case of Theorem \ref{encasa}. Let $E$ be the following graph and denote $A:=L_K(E)$. We have $\mathop{\hbox{\rm Soc}}^{(n)}(A)\subsetneq\mathop{\hbox{\rm Soc}}^{(n+1)}(A)$ for any $n \in {\mathbb{N}}$. \begin{equation} \xymatrix{ E: & {\bullet}^{v_{1,1}} \ar[r] & {\bullet}^{v_{1,2}} \ar[r] & {\bullet}^{v_{1,3}} \ar[r] & {\bullet}^{v_{1,4}} \ar@{.>}[r] & \\ & {\bullet}^{v_{2,1}} \ar[r] \ar[u] & {\bullet}^{v_{2,2}} \ar[r] \ar[u] & {\bullet}^{v_{2,3}} \ar[r] \ar[u] & {\bullet}^{v_{2,4}}\ar@{.>}[r] \ar[u] & \\ & {\bullet}^{v_{3,1}} \ar[r] \ar[u] & {\bullet}^{v_{3,2}} \ar[r] \ar[u] & {\bullet}^{v_{3,3}} \ar[r] \ar[u] & {\bullet}^{v_{3,4}}\ar@{.>}[r] \ar[u] & \\ & {\bullet}^{v_{n,1}} \ar[r] \ar@{.>}[u] & {\bullet}^{v_{n,2}} \ar[r] \ar@{.>}[u] & {\bullet}^{v_{n,3}} \ar[r] \ar@{.>}[u] & {\bullet}^{v_{n,4}}\ar@{.>}[r] \ar@{.>}[u] & \\ & \ar@{.>}[u] & \ar@{.>}[u] & \ar@{.>}[u] & \ar@{.>}[u] & \\ }\label{manana} \end{equation} \medskip In this case $\mathop{\hbox{\rm P}_{l}}^{(n)}(E)=\{v_{i,j} \; \vert \; i=1, \ldots ,n \text{ and }j\ge 1\}$ for $n \in {\mathbb{N}} \setminus \{0\}$. In particular, $\mathop{\hbox{\rm P}_{l}}^{(n)}(E)\subsetneq\mathop{\hbox{\rm P}_{l}}^{(n+1)}(E)$ for any $n$. \end{example} \section{The series of functors of a hereditary and saturated one}\label{rosa} Let $H\colon\mathcal{Grph}\to\mathcal{Set}$ be a hereditary and saturated point functor. Fix a graph $E$ and define $H^{(1)}:=H$.
Assuming that $H^{(1)},\ldots, H^{(n)}$ are defined, we define $H^{(n+1)}$ applied to a graph $E$ as the hereditary and saturated subset of $E^0$ such that the ideal $I(H(E/ H^{(n)}(E)))$, which is an ideal in $L_K(E/ H^{(n)}(E))\cong L_K(E)/ I(H^{(n)}(E))$, satisfies \begin{equation}\tiny I\left(H\left(\frac{E}{H^{(n)}(E)}\right)\right)=\frac{I(H^{(n+1)}(E))}{I(H^{(n)}(E))}. \end{equation} \begin{remark}\label{gatoLeo}\rm Observe that, as a consequence of \cite[Corollary 2.9.11]{AAS} and \cite[Proposition 2.4.9]{AAS}, we have that graded ideals of a quotient algebra are quotients of graded ideals (though this fact seems to be more general and does not need the setting of Leavitt path algebras). \end{remark} By construction we have $H=H^{(1)}\subset H^{(2)}\subset\cdots\subset H^{(n)}\subset\cdots $, with each $H^{(n)}$ hereditary and saturated. \begin{proposition}\label{hinv} If $H$ is invariant under isomorphism, then the functors $H^{(n)}$ ($n\ge 1$) are invariant under isomorphism in the sense of Definition \ref{hervor}. \end{proposition} \begin{proof} Assume that $H^{(n)}$ is invariant under isomorphism. To prove that $H^{(n+1)}$ is also invariant, take any isomorphism $f\colon L_K(E)\to L_K(F)$. Then it induces, by passing to the quotient, an isomorphism \begin{equation}\tiny\bar f\colon L_K\left(\frac{E}{H^{(n)}(E)}\right)\to L_K\left(\frac{F}{ H^{(n)}(F)}\right) \end{equation} and consequently $\bar f(I(H(E/ H^{(n)}(E))))= I(H(F/ H^{(n)}(F)))$. Thus \begin{equation} \tiny \bar f\left(\frac{I(H^{(n+1)}(E))}{I(H^{(n)}(E))}\right)=\frac{I(H^{(n+1)}(F))}{I(H^{(n)}(F))} \Rightarrow f(I(H^{(n+1)}(E)))=I(H^{(n+1)}(F)). \end{equation} \end{proof} The following result is analogous to Theorem \ref{encasa}, which concerned the functor $\mathop{\hbox{\rm P}_{l}}^{(n+1)}$. Now we describe graphically $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n+1)}$.
\begin{theorem}\label{enbarco2} Let $L_K(E)$ be the Leavitt path algebra associated to a graph $E$. Then $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n+1)}(E)$ is the hereditary and saturated closure of \begin{equation}\label{och} \mathfrak{S}_n= \{ v \in E^0 \; | \; v \in c^0, c \text{ is a cycle and }\forall f \text{ exit of } c, r(f) \in \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E)\} \end{equation} for $n \ge 1$. \end{theorem} \begin{proof} Firstly we prove the formula: \begin{equation}\label{tutu} \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E) \subseteq \overline{\mathfrak{S}_n}^E\quad (\hbox{ in the sequel } \overline{\mathfrak{S}_n}^E \hbox{ will be shortened }\overline{\mathfrak{S}_n}). \end{equation} For $n=1$, it suffices to prove that $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}(E)\subset\mathfrak{S}_1$, which is trivial (because there are no exits of the cycles involved). Assume that $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(k)}(E)\subset\overline{\mathfrak{S}_k}$ for $k<n$. Take $v\in\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E)$ with $v\notin\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n-1)}(E)$. Then $$v+I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n-1)}(E))\in \frac{I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E))}{I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n-1)}(E))}\cong I[\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}(E/\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n-1)}(E))]$$ so that $v$ is in the hereditary and saturated closure of $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}(E/\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n-1)}(E))$ which is (by \eqref{onion}) $\cup_{i\ge 0}\Lambda^i$ (closure in the quotient graph). We will prove by induction that each $\Lambda^i$ is contained in $\overline{\mathfrak{S}_n}$. If $v\in\Lambda^0$ then $v$ is in a cycle of $E$. If this cycle has no exits then $v \in \mathfrak{S}_n$.
If $v \in c^0$ and $c$ has an exit $f$ then, since $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E)$ is hereditary, $r(f) \in \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E)$. Consequently in this case $v\in\mathfrak{S}_n$. On the other hand, assume $\Lambda^j\subset\overline{\mathfrak{S}_n}$ for $j<i$. Take now $v\in\Lambda^i$ with $i>0$ (but $v\notin\Lambda^{i-1}$). Let $s_E^{-1}(v)=\{g_1,\ldots,g_r\}$; then we may assume (reordering if necessary) that for the graph $G:=E/\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n-1)}(E)$, we have $s_G^{-1}(v)=\{g_1,\ldots, g_l\}$, while the others $\{g_{l+1},\ldots, g_r\}$ satisfy $r_E(g_t)\in \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n-1)}(E)\subset \overline{\mathfrak{S}_{n-1}} \subset \overline{\mathfrak{S}_n}$ for $t=l+1,\ldots,r$. Note that $r_G(s_G^{-1}(v))\subseteq\Lambda^{i-1}\subset\overline{\mathfrak{S}_n}$. So $r_E(s_E^{-1}(v))\subset\overline{\mathfrak{S}_n}$ hence $v\in\overline{\mathfrak{S}_n}$. So far we have proved formula \eqref{tutu}. Let us prove now that $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n+1)}(E)$ is contained in the saturated closure of the set $\mathfrak{S}_n$. If $v\in\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n+1)}(E)\setminus \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E)$ then $v+I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E))\in I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n+1)}(E))/I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E))\triangleleft L_K(E)/I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E))$. Denote by $F:=E/\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E)$ the quotient graph.
Let $\theta\colon L_K(E)/I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E))\to L_K(F)$ be as explained in Remark \ref{tita Antonia}, that is, $\theta$ is the canonical isomorphism such that $v+I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E))\buildrel{\theta}\over{\mapsto} v$ (as element of the graph $F$). Restricting $\theta$, we have an isomorphism $\theta\colon I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n+1)}(E))/I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E))\to I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(1)}(F))$. Consequently, given that $v+I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E))\in I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n+1)}(E))/I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E))$, applying $\theta$ we have $v\in \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(1)}(F)$ (in the graph $F$). Write now $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(1)}(F)=\cup_{i\ge 0}\Lambda^i(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}(F))$ (again \eqref{onion}) and let us prove by induction that \begin{equation}\label{kjurg} T_F(v)\cap\Lambda^i\subset\overline{\mathfrak{S}_n}. \end{equation} For $i=0$ we must check that $T_F(v)\cap\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}(F)\subset\overline{\mathfrak{S}_n}$. So, if $w\in T_F(v)\cap \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}(F)$ then $w\in c^0$, where the cycle $c$ has no exit in $F$ (although it may have exits in $E$). If $f$ is any exit of $c$ then $r(f)\in E^0\setminus F^0$ hence $r(f)\in \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E)$. Whence $w\in\mathfrak{S}_n$. Assuming $T_F(v)\cap\Lambda^i\subset \overline{\mathfrak{S}_n}$, we prove $T_F(v)\cap\Lambda^{i+1}\subset \overline{\mathfrak{S}_n}$: if $u\in T_F(v)\cap\Lambda^{i+1}$, then $r_F(s_F^{-1}(u))\subseteq T_F(v)\cap \Lambda^{i}\subset \overline{\mathfrak{S}_n}$.
However, in order to conclude that $u\in\overline{\mathfrak{S}_n}$ we must see that $r_E(s_E^{-1}(u))\subset \overline{\mathfrak{S}_n}$. But $r_E(s_E^{-1}(u))\setminus r_F(s_F^{-1}(u))\subset \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E)\subset \overline{\mathfrak{S}_n}$ by \eqref{tutu}. Thus $r_E(s_E^{-1}(u))\subset \overline{\mathfrak{S}_n}$, so $u\in\overline{\mathfrak{S}_n}$. This completes the induction proof of formula \eqref{kjurg}. Now, since $T_F(v)\subset \overline{\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}(F)}=\cup_{i\ge 0}\Lambda^i(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}(F))$, we have $T_F(v)=T_F(v)\cap\overline{\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}(F)}=\cup_{i\ge 0}(T_F(v)\cap \Lambda^i)\subset\overline{\mathfrak{S}_n}$, implying $v\in\overline{\mathfrak{S}_n}$. For the converse relation it suffices to see that $\mathfrak{S}_n$ is contained in $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n+1)}(E)$. So consider $v$ a vertex in $\mathfrak{S}_n$. If $v$ is in a cycle without exits of $E$ then $v\in\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(1)}(E)\subset\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n+1)}(E)$ so we are done (note that the sets $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E)$ form an ascending chain: $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(k)}(E)\subseteq \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(k+1)}(E)$ for any $k$). If $v$ is in a cycle $c$ of $E$ with exits, then for any exit $f$ of $c$ we know $r_E(f)\in\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E)$. Thus, relative to the graph $F:=E/\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E)$ we have that the exit $f$ is not an edge of $F$ because its target is not in $F^0$. So the cycle $c$ has no exit in $F$.
Whence $$v\in I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(1)}(F))=\theta\left(\frac{I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n+1)}(E))}{I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E))}\right)$$ hence $v=\theta(w+I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E)))$ for some $w\in I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n+1)}(E))$. Since $\theta(v+I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E)))=v$, we get $w+I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E))=v+I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E))$, which implies $v\in I(\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n+1)}(E)) \cap E^0$, i.e., $v \in \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n+1)}(E)$. \end{proof} \begin{example} \rm In order to illustrate Theorem \ref{enbarco2} we compute the sets $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}\nolimits^{(n)}(E)$ for the following graph. We have $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E)= \{v_1, \ldots, v_n\}$ for $n \ge 1$. Also observe that $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E) \subsetneq \mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n+1)}(E)$ for any $n$. \begin{equation} E: \xymatrix{ \ar@{.>}[r] & {\bullet}_{v_{4}}\ar@(ul,ur) \ar[r] & {\bullet}_{v_{3}} \ar[r] \ar@(ul,ur) & {\bullet}_{v_{2}} \ar[r] \ar@(ul,ur) & {\bullet}_{v_{1}}\ar@(ur,dr) & \\ }\label{conica} \end{equation} \smallskip \end{example} \subsection{Mixed point-functors}\label{colada} Finally, we define a kind of composition of functors which gives new hereditary and saturated functors when applied to hereditary and saturated ones. Furthermore, if the starting functors are invariant, then the composite is also invariant. Assume that for $i=1,2$ we have hereditary and saturated point functors $H_i$. Then we can construct a new point-functor $H_2* H_1$ by the following procedure: for any graph $E$ consider the ideal $I[H_2(E/H_1(E))]\triangleleft L_K(E/H_1(E))\buildrel{\theta}\over\cong L_K(E)/I(H_1(E))$.
Thus there exists a unique hereditary and saturated subset of $E$ (see Remark \ref{gatoLeo}), denoted $(H_2\ast H_1)(E)$, such that \begin{equation} I[(H_2\ast H_1)(E)]/I(H_1(E)) \buildrel{\theta}\over{\cong} I[H_2(E/H_1(E))]. \end{equation} \begin{example}{\rm For instance, in the graph $E$ below we can consider the functors $\overline{\mathop{\hbox{\rm P}_{l}}}$ and $\overline{\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}}$. \[ \xygraph{!{(0.25,0)}*+{E\colon} !{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::} !{(2,0)}*+{\bullet}="d" !{(3,0)}*+{\bullet}="g" !{(3,-0.25)}*+{\hbox{\tiny $v$}}="v" !{(2,-0.25)}*+{\hbox{\tiny $u$}}="u" !{(4,0)}*+{\bullet}="h" !{(4,-0.25)}*+{\hbox{\tiny $w$}}="e" "d" :@(ul,ur) "d" "d":"g" "g":"h" "g" :@(ul,ur) "g" }\] Then $\overline{\mathop{\hbox{\rm P}_{l}}}(E)=\{w\}$ and $\overline{\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}}(E)=\small\text{\O}$. However $(\overline{\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}}\ast \overline{\mathop{\hbox{\rm P}_{l}}})(E)=\{v,w\}$ and $(\overline{\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}}\ast (\overline{\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}}\ast \overline{\mathop{\hbox{\rm P}_{l}}}))(E)=E^0$. This example also shows that the operation $*$ is in general not commutative, since $(\overline{\mathop{\hbox{\rm P}_{l}}}\ast \overline{\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}})(E)=\{w\}$. } \end{example} The empty set point functor $\small\text{\O}\colon\mathcal{Grph}\to\mathcal{Set}$ mapping any graph to the empty set is an identity element for the $*$-operation: $$\frac{I((H*\small\text{\O})(E))}{I(\small\text{\O}(E))}{\buildrel{\hbox{\tiny def}}\over\cong} I(H(E/\small\text{\O}(E)))\cong I(H(E))$$ whence $H*\small\text{\O}\cong H$. On the other hand $$\frac{I((\small\text{\O}* H)(E))}{I(H(E))}{\buildrel{\hbox{\tiny def}}\over\cong} I(\small\text{\O}(E/H(E)))=0$$ implying $\small\text{\O}*H\cong H$.
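The computation in the example above can be checked mechanically. The following is a minimal Python sketch (all names, and the adjacency-list encoding of a finite graph, are ad hoc and not taken from the text): it computes line points, vertices on cycles without exits, the hereditary and saturated closure, the quotient graph, and the mixed functor $H_2*H_1$. Since the pullback of a hereditary and saturated subset of $E/H_1(E)$ together with $H_1(E)$ is again hereditary and saturated, the union below needs no further closure.

```python
# Sketch: point functors on a finite graph {vertex: [targets]} (parallel edges
# repeated).  Names (pl, pc, star, ...) are ad hoc, not from the text.

def reachable(g, v):
    seen, stack = set(), [v]
    while stack:
        w = stack.pop()
        if w not in seen:
            seen.add(w)
            stack.extend(g[w])
    return seen

def line_points(g):
    # v is a line point: its tree T(v) has no bifurcations and no cycles
    out = set()
    for v in g:
        if any(len(g[w]) > 1 for w in reachable(g, v)):
            continue                  # bifurcation somewhere in T(v)
        seen, w = set(), v
        while g[w]:                   # out-degree <= 1: walk the unique chain
            if w in seen:
                break                 # revisited a vertex: cycle in T(v)
            seen.add(w)
            w = g[w][0]
        else:
            out.add(v)                # ended in a sink: line point
    return out

def cycle_no_exit(g):
    # vertices on a cycle all of whose vertices have out-degree exactly 1
    out = set()
    for v in g:
        w, steps = v, 0
        while len(g[w]) == 1 and steps <= len(g):
            w, steps = g[w][0], steps + 1
            if w == v:
                out.add(v)
                break
    return out

def hs_closure(g, s):
    s = set(s)
    for v in list(s):
        s |= reachable(g, v)          # hereditary closure
    while True:                       # saturation
        extra = {v for v in g if v not in s and g[v] and set(g[v]) <= s}
        if not extra:
            return s
        s |= extra

def quotient(g, h):                   # the quotient graph E/H
    return {v: [w for w in g[v] if w not in h] for v in g if v not in h}

pl = lambda g: hs_closure(g, line_points(g))    # closure of line points
pc = lambda g: hs_closure(g, cycle_no_exit(g))  # closure of no-exit cycles

def star(h2, h1):                     # the mixed point-functor H2 * H1
    def h(g):
        s1 = h1(g)
        return s1 | h2(quotient(g, s1))
    return h

E = {'u': ['u', 'v'], 'v': ['v', 'w'], 'w': []}   # the graph of the example
assert pl(E) == {'w'} and pc(E) == set()
assert star(pc, pl)(E) == {'v', 'w'}
assert star(pc, star(pc, pl))(E) == {'u', 'v', 'w'}
assert star(pl, pc)(E) == {'w'}      # * is not commutative on this graph
```

The assertions reproduce the values stated in the example, including the failure of commutativity of $*$.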
It is also remarkable that: \begin{proposition} If $H_i$ are invariant hereditary and saturated point functors for $i\in \{1,2\}$, then $H_2\ast H_1$ is also invariant. \end{proposition} \begin{proof} We know that for any isomorphism $f\colon L_K(E)\to L_K(E')$ one has $f(I(H_i(E)))=I(H_i(E'))$ for $i=1,2$. Consider the induced isomorphisms $$\begin{matrix} \omega_i\colon L_K(E)/I(H_i(E))\cong L_K(E')/I(H_i(E'))\cr { x+I(H_i(E))\buildrel{\omega_i}\over {\mapsto} f(x)+I(H_i(E'))} \end{matrix}$$ together with the canonical isomorphisms $$\theta_i\colon L_K(E)/I(H_i(E))\to L_K(E/H_i(E)),\quad \theta_i'\colon L_K(E')/I(H_i(E'))\to L_K(E'/H_i(E')).$$ Then define $\bar f_i$ as the unique isomorphism $\bar f_i\colon L_K(E/H_i(E))\to L_K(E'/H_i(E'))$ making the following diagram commutative \begin{center} \begin{tikzcd} \frac{L_K(E)}{I(H_i(E))} \arrow[r, "\omega_i"] \arrow[d, "\theta_i"'] & \frac{L_K(E')}{I(H_i(E'))} \arrow[d, "\theta_i'"] \\ L_K(\frac{E}{H_i(E)}) \arrow[r, "\bar f_i"] & L_K(\frac{E'}{H_i(E')}) \end{tikzcd} \end{center} Then $\theta_i$ restricts to an isomorphism $\theta_i\colon I((H_j\ast H_i)(E))/I(H_i(E))\cong I(H_j(E/H_i(E)))$ so that $\bar f_i\theta_i$ is an isomorphism and {\tiny$$ \bar f_i \theta_i \left[I((H_j\ast H_i)(E))/I(H_i(E))\right]\cong \bar f_i[I(H_j(E/H_i(E)))]=I(H_j(E'/H_i(E')))\cong I((H_j\ast H_i)(E'))/I(H_i(E'))$$} implying that $f[I((H_j\ast H_i)(E))]=I((H_j\ast H_i)(E'))$. \end{proof} The $*$ operation has a kind of associativity property which can be formalized in terms of natural isomorphisms of functors: \begin{proposition}\label{associative} Let $H_i$ be hereditary and saturated invariant point functors for $i\in \{1,2,3\}$, then there is a natural isomorphism of functors $(H_1*H_2)*H_3\cong H_1*(H_2*H_3)$.
\end{proposition} \begin{proof} {\tiny $$L_K\left(\frac{E}{(H_2*H_3)(E)}\right)\cong\frac{L_K(E)}{I((H_2*H_3)(E))}\cong\frac{L_K(E)/I(H_3(E))}{I((H_2*H_3)(E))/I(H_3(E))}\cong \frac{L_K(E/H_3(E))}{I(H_2(E/H_3(E)))}\cong L_K\left(\frac{E/H_3(E)}{H_2(E/H_3(E))}\right).$$} Let $f$ be the resulting isomorphism from $L_K\left(\frac{E}{(H_2*H_3)(E)}\right)$ to $L_K\left(\frac{E/H_3(E)}{H_2(E/H_3(E))}\right)$. Since $H_1$ is invariant, we have {\tiny $$ \frac{I((H_1*(H_2*H_3))(E))}{I((H_2*H_3)(E))}\cong I\left(H_1\left(\frac{E}{(H_2*H_3)(E)}\right)\right){\buildrel{f}\over{\cong}} \ I\left(H_1\left(\frac{E/H_3(E)}{H_2(E/H_3(E))}\right)\right)\cong \frac{I((H_1*H_2)(E/H_3(E)))}{I(H_2(E/H_3(E)))}\cong$$ $$\frac{I(((H_1*H_2)*H_3)(E))}{I((H_2*H_3)(E))}.$$} But then $I((H_1*(H_2*H_3))(E))\cong I(((H_1*H_2)*H_3)(E))$, which induces the natural isomorphism of functors $(H_1*H_2)*H_3\cong H_1*(H_2*H_3)$. \end{proof}
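As a closing illustration, the series $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}$ of the example in \eqref{conica} can be checked mechanically on the finite truncation to $v_1,\ldots,v_4$ (the dotted tail is dropped; this does not affect the sets $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E)$ for $n\le 4$). The Python sketch below, with ad-hoc names and the graph encoded as an adjacency list, iterates the quotient-and-closure step described by Theorem \ref{enbarco2}.

```python
# Sketch: the series Pc^(n) on a finite truncation v1..v4 of the graph (conica):
# loops at every vertex plus the chain v4 -> v3 -> v2 -> v1.  Names are ad hoc.

def reachable(g, v):
    seen, stack = set(), [v]
    while stack:
        w = stack.pop()
        if w not in seen:
            seen.add(w)
            stack.extend(g[w])
    return seen

def cycle_no_exit(g):
    # vertices on a cycle all of whose vertices have out-degree exactly 1
    out = set()
    for v in g:
        w, steps = v, 0
        while len(g[w]) == 1 and steps <= len(g):
            w, steps = g[w][0], steps + 1
            if w == v:
                out.add(v)
                break
    return out

def hs_closure(g, s):
    s = set(s)
    for v in list(s):
        s |= reachable(g, v)          # hereditary closure
    while True:                       # saturation
        extra = {v for v in g if v not in s and g[v] and set(g[v]) <= s}
        if not extra:
            return s
        s |= extra

def quotient(g, h):                   # the quotient graph E/H
    return {v: [w for w in g[v] if w not in h] for v in g if v not in h}

E = {'v1': ['v1'], 'v2': ['v2', 'v1'], 'v3': ['v3', 'v2'], 'v4': ['v4', 'v3']}

pc_n, series = set(), []
for _ in range(4):   # Pc^(n+1)(E) = Pc^(n)(E) together with Pc-bar of E/Pc^(n)(E)
    F = quotient(E, pc_n)
    pc_n = pc_n | hs_closure(F, cycle_no_exit(F))
    series.append(set(pc_n))

assert series == [{'v1'}, {'v1', 'v2'},
                  {'v1', 'v2', 'v3'}, {'v1', 'v2', 'v3', 'v4'}]
```

Each pass removes the current $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}$, after which the loop at the next vertex loses its exit, reproducing $\mathop{\hbox{\rm P}_{\hbox{\rm\tiny c}}}^{(n)}(E)=\{v_1,\ldots,v_n\}$.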
\section*{Introduction} Let us take three classical particles in $d$-dimensional space with potential depending on mutual distances alone. Assume that all the initial particle positions lie on the same plane. If the initial velocities are chosen parallel to this plane then the motion will be planar: all trajectories are in the plane. Thus, after separation of the center-of-mass motion the trajectories are defined by the evolution of the relative (mutual) distances. The old question is to find equations for the trajectories which depend on relative distances only. The aim of the present paper is to construct the Hamiltonian which depends on relative distances and also describes such a planar dynamics for the three-particle case. Our strategy is to study the quantum problem and then take the classical limit, making {\it de-quantization}. The quantum Hamiltonian for three $d$-dimensional particles with translation-invariant potential, which depends on relative (mutual) distances between particles only, is of the form \begin{equation} \label{Hgen} {\cal H}\ =\ -\sum_{i=1}^3 \frac{1}{2 m_i} \De_i^{(d)}\ +\ V(r_{12},\,r_{13},\,r_{23})\ ,\ \end{equation} with the coordinate vector of the $i$th particle ${\bf r}_i \equiv {\bf r}^{(d)}_i=(x_{i,1}\,, x_{i,2}\,, x_{i,3}\,,\ldots \,,x_{i,d})$\,, where \begin{equation} \label{rel-coord} r_{ij}=|{\bf r}_i - {\bf r}_j|\ ,\quad i,j=1,2,3\ , \end{equation} is the (relative) distance between particles $i$ and $j$, $r_{ij}=r_{ji}$, sometimes called the Jacobi distances. \begin{figure}[h] \centering \includegraphics[width=13.0cm]{System.eps} \caption{Three-particle system: the coordinate vectors ${\bf r}_i$ mark positions of vertices of the triangle of interaction with sides $r_{ij}$. The center-of-mass (the barycenter of the triangle) is marked by a (red) bubble.} \label{Fig} \end{figure} The number of relative distances is equal to the number of edges of the triangle which is formed by taking the particle positions as vertices.
We call this triangle the {\it triangle of interaction}, see for illustration Fig.\ref{Fig}. Here, $\De_i^{(d)}$ is the $d$-dimensional Laplacian, \[ \De_i^{(d)}\ =\ \frac{\pa^2}{\pa{{\bf r}_i} \pa{{\bf r}_i}}\ , \] associated with the $i$th body. For simplicity all masses in (\ref{Hgen}) are assumed to be equal: $m_i=m=1$. The configuration space for ${\cal H}$ is ${\bf R}^{3d}$. The center-of-mass motion, described by the vectorial coordinate \[ {\bf R}_{_0} \ =\ \frac{1}{{\sqrt 3}}\,\sum_{k=1}^{3} {\bf r}_{_k}\ , \] can be separated out; this motion is described by a $d$-dimensional plane wave, $\sim e^{i\, {\bf k} \cdot {\bf R}_{_0}}$. The spectral problem is formulated in the space of relative motion ${\bf R}_r \equiv {\bf R}^{2d}$; it is of the form \begin{equation} \label{Hrel} {\cal H}_r\,\Psi(x)\ \equiv \ \bigg(- \frac{1}{2}\,\De_r^{(2d)} + V(r_{12},\,r_{13},\,r_{23})\bigg)\, \Psi(x)\ =\ E \,\Psi(x)\ ,\ \Psi \in L_2 ({\bf R}_r)\ , \end{equation} where $\De_r^{(2d)}$ is the flat-space Laplacian in the space of relative motion. If the space of relative motion ${\bf R}_r$ is parameterized by two $d$-dimensional vectorial Jacobi coordinates \begin{equation} \label{rj} {\bf q}_{j} \ = \ \frac{1}{\sqrt{j(j+1)}}\sum_{k=1}^j k\,({\bf r}_{k+1} - {\bf r}_{{k}})\ , \qquad\qquad j=1,2\ , \end{equation} the flat-space $2d$-dimensional Laplacian in the space of relative motion becomes diagonal \begin{equation} \label{Dflat} \De_r^{(2d)}\ =\ \sum_{j=1,2} \frac{\pa^2}{\pa{{\bf q}_j} \pa{{\bf q}_j}}\ . \end{equation} Thus, ${\bf q}_{j}$ plays the role of a Cartesian coordinate vector in the space of relative motion. The case $d=1$ (three bodies on a line) is special. The triangle of interaction degenerates into an interval with a marked point inside: the vertices of the triangle correspond to the endpoints and to the marked point, and its area is equal to zero.
Moreover, for $d=1$ the relative distances obey the constraint (hyperplane condition) \begin{equation} \label{rel3} r_{12} + r_{23} + r_{31}\ =\ 0\ , \end{equation} where the relative distances are now taken with signs, $r_{ij}=-r_{ji}$. Hence, the three relative distances are related and only two of them can serve as independent variables. Therefore, the Laplacian in the space of relative variables becomes \begin{equation} \label{Drel3-1} \De_r^{(2)}\ =\ 2\,\bigg(\frac{\pa^{2}}{\pa r_{12}^{2}}\ +\ \frac{\pa^{2}}{\pa r_{23}^{2}} - \frac{\pa^{2}}{\pa r_{12} \,\pa r_{23}}\bigg) \ , \end{equation} cf. (\ref{Dflat}). The configuration space ${\bf R}_r$ is the quadrant, $r_{12}, r_{23} \geq 0$. \bigskip {\large \it Observation} \cite{Ter}\,: \begin{quote} There exists a family of eigenstates of the Hamiltonian (\ref{Hgen}), including the ground state, which depend on the three relative distances $\{r_{ij}\}$ alone\,. The same is correct for the $n$-body problem: there exists a family of eigenstates, including the ground state, which depend on relative distances alone. \end{quote} This observation is presented for the case of scalar particles, bosons. It can be generalized to the case of fermions: \bigskip {\large \it Conjecture}\,: \begin{quote} In the case of three fermions there exists a family of the eigenstates of the Hamiltonian (\ref{Hgen}), including the ground state, in which the coordinate functions depend on the three relative distances $\{r_{ij}\}$ only\,. The same is correct for the $n$-body problem: there exists a family of the eigenstates, including the ground state, in which the coordinate functions depend on relative distances only. \end{quote} \bigskip \noindent Our primary goal is to find the differential operator in the space of relative distances $\{r_{ij}\}$ for which these states are eigenstates. In other words, to find a differential equation depending only on $\{r_{ij}\}$ for which these states are solutions.
This implies a study of the evolution of the triangle of interaction with fixed barycenter (center-of-mass). We consider the case of three spinless particles. In our previous paper \cite{TMA:2016} the physically important case $d=3$ was studied in detail. In this paper it will be shown that the generalization to arbitrary $d$ is straightforward. Except for $d=1$, almost all formulas remain conceptually unchanged; the presentation (and even the wording) remains almost the same, and most conclusions are unaltered. \section{Generalities} As a first step, let us change variables in the space of relative motion ${\bf R}_r$: $$({\bf q}_{j}) \lrar (r_{ij}, \Om)\ ,$$ where for $d>1$ the number of (independent) relative distances $r_{ij}$ is equal to 3 and $\Om$ is a collection of $(2d-3)$ angular variables. Thus, we split ${\bf R}_r$ into a combination of the space of relative distances ${\bf \tilde R}$ and a space parameterized by angular variables, essentially those on the sphere $S^{2d-3}$. Several ways to introduce variables in ${\bf R}_r$ are known: the perimetric coordinates by Hylleraas \cite{Hylleraas}, the scalar products of the vectorial Jacobi coordinates ${\bf q}_i$ \cite{Gu} and the three relative (mutual) distances $r_{ij}$ (see e.g. \cite{Loos}). We follow the last approach. In turn, the angular variables are introduced as the Euler angles on the $S^{2d-4}$ sphere defining the normal to the interaction plane (triangle) and the azimuthal angle of rotation of the interaction triangle around its barycenter, see e.g. \cite{Gu}.
A key observation is that in the new coordinates $(r_{ij}, \Om)$ the flat-space Laplace operator (the kinetic energy operator) in the space of relative motion ${\bf R}_r$ takes the form of the sum of two second-order differential operators \begin{equation} \label{addition} \frac{1}{2}\De_r^{(2d)}\ =\ {\De_R}(r_{ij}, \pa_{ij}) + {\tilde \De} (r_{ij}, \Om, \pa_{ij}, \pa_{\Om}) \ ,\quad \pa_{ij} \equiv \frac{\pa}{\pa r_{ij}}\ , \end{equation} where the first operator depends on relative distances {\it only} (hence, the coefficient functions do not depend on angles), while the second operator depends on angular derivatives in such a way that it annihilates any angle-independent function, \[ {\tilde \De} (r_{ij}, \Om, \pa_{ij}, \pa_{\Om})\, \Psi(r_{ij})\ =\ 0\ . \] This observation holds for the $n$-body case as well \cite{MTA:2017}. For $d=1$ the operator ${\tilde \De}$ is absent (no angular variables occur) and we have \[ {\De_R}(r_{ij}, \pa_{ij})\ =\ \De_r^{(2)}\ , \] see (\ref{Drel3-1}). In general, for $d>1$ the commutator $$[{\De_R}(r_{ij})\ ,\ {\tilde \De} (r_{ij}, \Om, \pa_{\Om})] \neq 0\ .$$ If we look for angle-independent solutions of (\ref{Hrel}), then, due to the decomposition (\ref{addition}), the general spectral problem reduces to the particular three-body spectral problem \begin{equation} \label{Hrel-Mod} {\tilde {\cal H}}_R\,\Psi(r_{ij})\ \equiv \ \bigg(- {\De_R}(r_{ij}, \pa_{ij}) + V(r_{ij})\bigg)\, \Psi(r_{ij})\ =\ E\,\Psi(r_{ij})\ ,\quad \Psi \in L_2 ({\bf \tilde R})\ , \end{equation} where ${\bf \tilde R}$ is the space of relative distances. Clearly, we can write \begin{equation} \label{gmunu} {\De_R}(r_{ij}, \pa_{ij})\ =\ g^{\mu \nu}(r) \pa_{\mu} \pa_{\nu}\ +\ b^{\mu} \pa_{\mu}\ , \end{equation} where $g^{\mu \nu}(r)$ is the matrix made out of the coefficients in front of the second derivatives and $b^{\mu}$ is a column vector.
Surprisingly, for any $d>1$ one can find the $d$-dependent gauge factor $\Gamma(r_{ij})$ such that the operator ${\De_R}(r_{ij}, \pa_{ij})$ takes the form of the Schr\"odinger operator, \begin{equation} \label{DLB} \Gamma^{-1}\,{\De_R}\,(r_{ij}, \pa_{ij})\, \Gamma\ =\ {\De_{LB}}(r) - {\tilde V}(r)\ \equiv -{\tilde \De}_R \ , \end{equation} where $\De_{LB}(r)$ is the Laplace-Beltrami operator with contravariant, $d$-independent metric $g^{\mu \nu}(r)$, defined, in general, on some non-flat manifold of non-constant curvature. It plays the role of the kinetic energy. Here ${\tilde V}(r)$ is the effective potential. The potential ${\tilde V}$ becomes singular at the boundary of the configuration space, where the determinant $D(r)=\det g^{\mu \nu}(r)$ vanishes. The operator ${\tilde \De}_R$ is Hermitian with measure $D(r)^{-\frac{1}{2}}$. Eventually, we arrive at the spectral problem for the Hamiltonian \begin{equation} \label{Hrel-final} {H}_{LB}(r)\ \equiv\ -{\De_{LB}}(r) + V(r) + {\tilde V}(r)\ , \end{equation} with $d$-independent kinetic energy ${\De_{LB}}(r)$. Again the case $d=1$ is special: the gauge factor is trivial, $\Gamma=1$, and \[ {\De_{LB}}(r) \ =\ {\De_R}(r)\ =\ \De_r^{(2)}\ . \] Following the {\it de-quantization} procedure of replacing the quantum momentum (derivative) with the classical momentum \[ -i\,\pa\ \rar\ P\ , \] one can get a classical analogue of (\ref{Hrel-final}), \begin{equation} \label{Hrel-Cl-final} {H}^{(c)}_{LB}\ =\ g^{\mu \nu}(r) P_{\mu} P_{\nu} + V(r) + {\tilde V}(r)\ . \end{equation} It describes the internal motion of a 3-dimensional body with tensor of inertia $(g^{\mu \nu})^{-1}$ with its center of mass fixed. The Hamiltonians (\ref{Hrel-final}), (\ref{Hrel-Cl-final}) are the main objects of study of this paper.
\section{Three-body case: $d=1$, concrete results } In the one-dimensional case $d=1$ the Laplace-Beltrami operator in (\ref{Hrel-final}) becomes \[ {\De_{LB}}(r)\ =\ 2\bigg(\frac{\pa^{2}}{\pa r_{12}^{2}}\ +\ \frac{\pa^{2}}{\pa r_{23}^{2}} - \frac{\pa^{2}}{\pa r_{12} \,\pa r_{23}}\bigg) \ , \] see (\ref{Drel3-1}). It corresponds to the two-dimensional flat-space Laplacian and is evidently an algebraic operator. Formally, unlike the original $3d$-dimensional Laplacian (the kinetic energy) in (\ref{Hgen}), it is not $S_3$ invariant, although it remains $S_2$ invariant. Note that in the variables \[ r_{12}^2\ =\ \rho_{12}\ ,\ r_{13}^2\ =\ \rho_{13}\ ,\ r_{23}^2\ =\ \rho_{23}\ , \] see below, the emerging flat-space Laplacian ${\De_{LB}}(\rho)$ is no longer algebraic. To realize the $S_2$ invariance of (\ref{Drel3-1}) explicitly, let us introduce the $S_2$ invariants \begin{equation} \label{d1-xi} \xi_1 = r_{12} + r_{23}\quad ,\quad \xi_2 = r_{12} \, r_{23}\ , \end{equation} as new variables; this is a polynomial change of variables, and then \begin{equation} \label{Drel3-1-xi} {\De_{LB}}(\xi)\ =\ 2\,\bigg(\frac{\pa^{2}}{\pa \xi_{1}^{2}}\ +\ (\xi_{1}^2 - 3 \xi_{2}) \frac{\pa^{2}}{\pa \xi_{2}^{2}} + \xi_{1} \frac{\pa^{2}}{\pa \xi_{1} \,\pa \xi_{2}} - \frac{\pa}{\pa \xi_{2}} \bigg) \ . \end{equation} This is an algebraic operator which can be rewritten in terms of the generators of the maximal affine subalgebra $b_2$ of the algebra $sl(3,{\bf R})$ in $\xi$-variables, cf. (\ref{sl4R}) below, see \cite{RT:1995,Turbiner:1998}.
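The passage from (\ref{Drel3-1}) to (\ref{Drel3-1-xi}) is easy to confirm symbolically. Below is a minimal sympy sketch (the helper names are ours, not the paper's); since a second-order operator is fixed by its action on monomials of degree up to two, checking on such monomials pins down every coefficient function:

```python
# Hedged check (sympy; helper names are ours): the flat operator (Drel3-1)
# becomes the algebraic form (Drel3-1-xi) in the S2 invariants
# xi1 = r12 + r23, xi2 = r12*r23.
import sympy as sp

r12, r23 = sp.symbols('r12 r23', positive=True)
xi1, xi2 = sp.symbols('xi1 xi2')
sub = {xi1: r12 + r23, xi2: r12*r23}

def flat(F):
    # 2(F_{r12 r12} + F_{r23 r23} - F_{r12 r23}), Eq. (Drel3-1)
    return sp.expand(2*(sp.diff(F, r12, 2) + sp.diff(F, r23, 2)
                        - sp.diff(F, r12, 1, r23, 1)))

def alg_xi(G):
    # the claimed algebraic form (Drel3-1-xi) acting on G(xi1, xi2)
    return 2*(sp.diff(G, xi1, 2) + (xi1**2 - 3*xi2)*sp.diff(G, xi2, 2)
              + xi1*sp.diff(G, xi1, 1, xi2, 1) - sp.diff(G, xi2))

# monomials up to degree 2 fix all six coefficient functions
for p, q in [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1), (2, 2)]:
    G = xi1**p * xi2**q
    assert sp.expand(flat(G.subs(sub)) - alg_xi(G).subs(sub)) == 0
print("(Drel3-1-xi) confirmed")
```

The same monomial strategy applies verbatim to any of the polynomial changes of variables discussed in this section.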
There exists another polynomial $S_2$-symmetric change of variables \cite{Turbiner:1998} \begin{equation} \label{d1-si} \si_2 = - r_{12} r_{23} - r_{12}^2 - r_{23}^2\ =\ \xi_2 - \xi_1^2\ ,\ \si_3 = - r_{12} \, r_{23}(r_{12} + r_{23}) = - \xi_1 \xi_2\ , \end{equation} which leaves the operator ${\De_{LB}}$ algebraic, \begin{equation} \label{Drel3-1-si} {\De_{LB}}(\si)\ =\ -2\,\bigg(3 \si_2\,\frac{\pa^{2}}{\pa \si_{2}^{2}}\ -\ \si_{2}^2 \frac{\pa^{2}}{\pa \si_{3}^{2}} + 9\, \si_{3} \frac{\pa^{2}}{\pa \si_{2} \,\pa \si_{3}} + 3 \frac{\pa}{\pa \si_{2}} \bigg) \ . \end{equation} In fact, the variables (\ref{d1-si}) coincide with the $S_3$ invariants \begin{equation} \label{d-si} \si_1 = r_{12} + r_{23} + r_{31}\ ,\ \si_2 = r_{12} r_{23} + r_{12} r_{13} + r_{23} r_{13}\quad ,\quad \si_3 = r_{12} \, r_{23} \, r_{13}\ , \end{equation} subject to the condition that the 1st invariant $\si_1$ vanishes, \[ \si_1\ =\ 0\ . \] Hence, the operator (\ref{Drel3-1-si}) is, in fact, $S_3$ permutationally invariant. It can be immediately seen that the operator (\ref{Drel3-1-si}) describes the kinetic energy of relative motion of the 3-body $(A_2)$ rational Calogero model \cite{Calogero:1969} with potential \begin{equation} \label{Vcalogero} V_{A_2}\ =\ g \bigg(\frac{1}{r_{12}^2} + \frac{1}{r_{23}^2} + \frac{1}{r_{13}^2}\bigg)\ =\ g \bigg(\frac{1}{\rho_{12}} + \frac{1}{\rho_{23}} + \frac{1}{\rho_{13}}\bigg)\ , \end{equation} in algebraic form, see \cite{Turbiner:1998}. It is easy to check that the potential $V_{A_2}$ is a rational function in $\si_{2,3}$ (\ref{d1-si}). It was shown in \cite{RT:1995,Turbiner:1998} that the Hamiltonian of relative motion of the 3-body $(A_2)$ rational Calogero model (even with the potential modified by adding a harmonic oscillator term), gauge-rotated with its ground state function and written in the variables $\si_{2,3}$, is an algebraic operator as well.
This operator can be rewritten in terms of the generators of the maximal affine subalgebra $b_2$ of the algebra $sl(3,{\bf R})$ \begin{eqnarray} \label{sl3R} {\cal J}_i^- &=& \frac{\pa}{\pa u_i}\ ,\qquad \quad i=1,2\ , \non \\ {{\cal J}_{ij}}^0 &=& u_i \frac{\pa}{\pa u_j}\ , \qquad i,j=1,2 \ , \\ {\cal J}^0(N) &=& \sum_{i=1}^{2} u_i \frac{\pa}{\pa u_i}-N\, , \non \\ {\cal J}_i^+(N) &=& u_i {\cal J}^0(N)\ =\ u_i\, \left( \sum_{j=1}^{2} u_j\frac{\pa}{\pa u_j}-N \right)\ , \quad i=1,2\ , \end{eqnarray} where $N$ is a parameter and \[ u_1\ =\ \si_2\ ,\ u_2\ =\ \si_3\ . \] Another polynomial change of variables \begin{equation} \label{d1-la} \la_1 = \si_2 = - r_{12} r_{23} - r_{12}^2 - r_{23}^2\ =\ \xi_2 - \xi_1^2\ ,\ \la_2 = \si_3^2 = r_{12}^2 \, r_{23}^2(r_{12} + r_{23})^2 = \xi_1^2 \xi_2^2\ , \end{equation} leaves the operator ${\De_{LB}}$ algebraic, \begin{equation} \label{Drel3-1-la} {\De_{LB}}(\la)\ =\ -2\,\bigg(3 \la_1\,\frac{\pa^{2}}{\pa \la_{1}^{2}}\ -\ 4\la_{1}^2 \la_2 \frac{\pa^{2}}{\pa \la_{2}^{2}} + 18\, \la_{2} \frac{\pa^{2}}{\pa \la_{1} \,\pa \la_{2}} + 3 \frac{\pa}{\pa \la_{1}} - 2 \la_{1}^2 \frac{\pa}{\pa \la_{2}} \bigg) \ . \end{equation} It can be immediately seen that the operator (\ref{Drel3-1-la}) describes the kinetic energy of relative motion of the 3-body $(G_2)$ rational Wolfes model \cite{Wolfes:1974} with potential \[ V_{G_2}\ =\ g \bigg(\frac{1}{r_{12}^2} + \frac{1}{r_{23}^2} + \frac{1}{r_{13}^2}\bigg)\ +\ g_1 \bigg(\frac{1}{(r_{12}-r_{23})^2} + \frac{1}{(r_{23}-r_{31})^2} + \frac{1}{(r_{12}-r_{31})^2}\bigg)\ , \] in algebraic form, see \cite{Turbiner:1998}. It can be rewritten in terms of the generators of the algebra $g^{(2)}$, see \cite{Turbiner:1998}. It is easy to check that the potential $V_{G_2}$ is a rational function in $\la_{1,2}$ (\ref{d1-la}). 
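Both algebraic forms (\ref{Drel3-1-si}) and (\ref{Drel3-1-la}) can be confirmed with the same symbolic strategy as above; a hedged sympy sketch (helper names are ours), again acting on monomial test functions:

```python
# Hedged sympy check: the flat operator (Drel3-1) takes the algebraic forms
# (Drel3-1-si) and (Drel3-1-la) in the variables (d1-si) and (d1-la).
import sympy as sp

x, y = sp.symbols('r12 r23', positive=True)
s2, s3 = sp.symbols('sigma2 sigma3')
l1, l2 = sp.symbols('lambda1 lambda2')
sub_s = {s2: -x*y - x**2 - y**2, s3: -x*y*(x + y)}
sub_l = {l1: -x*y - x**2 - y**2, l2: x**2*y**2*(x + y)**2}

def flat(F):                      # operator (Drel3-1)
    return sp.expand(2*(sp.diff(F, x, 2) + sp.diff(F, y, 2)
                        - sp.diff(F, x, 1, y, 1)))

def alg_sigma(G):                 # claimed form (Drel3-1-si)
    return -2*(3*s2*sp.diff(G, s2, 2) - s2**2*sp.diff(G, s3, 2)
               + 9*s3*sp.diff(G, s2, 1, s3, 1) + 3*sp.diff(G, s2))

def alg_lambda(G):                # claimed form (Drel3-1-la)
    return -2*(3*l1*sp.diff(G, l1, 2) - 4*l1**2*l2*sp.diff(G, l2, 2)
               + 18*l2*sp.diff(G, l1, 1, l2, 1) + 3*sp.diff(G, l1)
               - 2*l1**2*sp.diff(G, l2))

for p, q in [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]:
    G = s2**p * s3**q
    assert sp.expand(flat(G.subs(sub_s)) - alg_sigma(G).subs(sub_s)) == 0
    H = l1**p * l2**q
    assert sp.expand(flat(H.subs(sub_l)) - alg_lambda(H).subs(sub_l)) == 0
print("(Drel3-1-si) and (Drel3-1-la) confirmed")
```

Note that the $\la$-form follows from the $\si$-form by the chain rule, since $\la_1=\si_2$ and $\la_2=\si_3^2$.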
It was shown in \cite{RT:1995,Turbiner:1998} that the Hamiltonian of relative motion of the 3-body $(G_2)$ rational Wolfes model (even with the potential modified by adding a harmonic oscillator term), gauge-rotated with its ground state function and written in the variables $\la_{1,2}$, is an algebraic operator. This operator can also be rewritten in terms of the generators of the algebra $g^{(2)}$. The most general polynomial change of variables known so far, which leaves the operator ${\De_{LB}}$ algebraic, has led to the discovery of the so-called TTW model \cite{TTW:2009}, the most general superintegrable and exactly-solvable model on the plane. Following Calogero \cite{Calogero:1969}, let us introduce polar coordinates in the space of relative distances, \[ q_1\,=\,\frac{1}{\sqrt{2}}\,r_{12} = r \cos \varphi\ ,\ q_2\,=\,\sqrt{\frac{2}{3}}\,(r_{13}+r_{23}) = r \sin \varphi\ , \] see (\ref{rj}). Now we define new variables \[ t\ =\ r^2\ ,\ u\ =\ r^{2k} \sin^2 {k\varphi}\ , \] which are the invariants of the dihedral group $I_2(k)$ for integer $k$\, (and even rational $k$). The operator (\ref{Drel3-1}) takes an amazingly simple algebraic form, \[ {\De_{LB}}^{(k)} \ =\ -4 t \pa^2_t - 8k u \pa^2_{tu} - 4k^2 t^{k-1} u \pa^2_u - 4\pa_t - 2 k^2 t^{k-1}\pa_u \ , \] for $k=1,2,\ldots$\,. This operator can be rewritten in terms of the generators of the algebra $g^{(k)}$, see \cite{TTW:2009}\,. Forming the Schr\"odinger operator \begin{equation} \label{TTW} {\cal H}_{TTW}^{(k)}\ =\ -{\De_{LB}}^{(k)} + \om^2 t + \frac{k^2\al\, t^{k-1}}{t^k-u} + \frac{k^2 \beta\, t^{k-1}}{u}\ , \end{equation} we arrive at the Tremblay-Turbiner-Winternitz (TTW) model; for $k=3$ and $\al=0$ the 3-body $(A_2)$ rational Calogero model occurs, while for $\al \neq 0$ it is the 3-body $(G_2)$ rational Wolfes model, both after separating out the center-of-mass motion. Interestingly, for $k=1$ we get the Smorodinsky-Winternitz model, while for $k=2$ it is the $BC_2$ rational model.
Both models describe the relative motion of the 3-body problem. The TTW model is exactly-solvable and integrable for any real $k$, with a 2nd order integral (the square of a 1st order symmetry operator), while for rational $k=p/q$ the model is superintegrable, with an integral of order $2(p+q)-1$ \cite{KKM:2010}. For integer $k$, by a gauge rotation of the Hamiltonian (\ref{TTW}) with its ground state function, one can transform it into an algebraic operator. The same is true for both integrals: gauge rotation with the ground state eigenfunction brings them to the form of algebraic operators in the variables $t,u$\,. All these algebraic operators can be rewritten in terms of the generators of the algebra $g^{(k)}$. It must be emphasized that there are non-polynomial changes of the variables $(r_{12}\,,\, r_{23})$ (trigonometric and elliptic) in (\ref{Drel3-1}) which can still lead to algebraic operators. In the case of a trigonometric change of variables there occurs the kinetic energy of the relative motion of the 3-body $(A_2)$ trigonometric Sutherland model \cite{RT:1995}, or of the $(G_2)$ trigonometric Sutherland model \cite{Turbiner:1998}, or the kinetic energy of the $BC_2$ trigonometric model \cite{Brink:1997}. For a discussion see \cite{Turbiner:2013}. In the case of an elliptic change of variables there occurs the kinetic energy of the relative motion of the 3-body $(A_2)$ elliptic Calogero model, or of the $(G_2)$ elliptic model \cite{ST:2015}, or the kinetic energy of the $BC_2$ elliptic model \cite{DGU:2001}.
\section{Three-body case: $d > 1$, concrete results } \subsection{$r$-representation} Assuming $d > 1$, after straightforward calculations the operator ${\De_R}(r_{ij}, \pa_{ij})$ (in decomposition (\ref{addition})) can be found to be \[ 2{\De_R}(r_{ij}, \pa_{ij})\ =\ \bigg[\ 2\,(\pa^{2}_{r_{12}} +\pa^{2}_{r_{23}}+\pa^{2}_{r_{13}}) + \frac{2(d-1)}{r_{12}}\,\pa_{r_{12}} + \frac{2(d-1)}{r_{23}}\,\pa_{r_{23}} + \frac{2(d-1)}{r_{13}}\,\pa_{r_{13}} \] \begin{equation} \label{addition3-3r} + \frac{r_{12}^2-r_{13}^2+r_{23}^2}{r_{12} r_{23}}\,\pa_{r_{12}}\pa_{r_{23}} + \frac{r_{12}^2+r_{13}^2-r_{23}^2}{r_{12} r_{13}}\,\pa_{r_{12}}\pa_{r_{13}} + \frac{r_{13}^2+r_{23}^2-r_{12}^2}{r_{13} r_{23}}\,\pa_{r_{23}}\pa_{r_{13}} \ \bigg]\ , \end{equation} cf. e.g. \cite{Loos}. Note that at $d=1$ the operator (\ref{addition3-3r}) becomes degenerate. This can be seen by calculating the determinant of the metric $g^{\mu \nu}$ of the operator (\ref{addition3-3r}), see (\ref{gmunu}) for the definition, \[ D(r)=\det g^{\mu \nu}(r)\ =\ \frac{12}{r_{12}^2\,r_{23}^2\,r_{13}^2}(r_{12}^2+r_{23}^2+r_{13}^2)\,S^2_{\triangle} \ , \] where $S_{\triangle}$ is the area of the interaction triangle, see below. For $d=1$ the interaction triangle shrinks to an interval with a marked point; its area vanishes, $S_{\triangle}=0$, and the determinant is identically zero, $D(r)=0$. This implies that the original three-dimensional configuration space at $d>1$, given by $S_{\triangle}\geq 0$, shrinks to the boundary $S_{\triangle}=0$ and effectively becomes two-dimensional. In general, the operator (\ref{addition3-3r}) does not depend on the choice of the angular variables $\Om$.
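As an illustration, (\ref{addition3-3r}) can be checked directly at $d=2$ with sympy. The sketch below assumes unit masses and the normalization in which $\De_r^{(2d)}$ is the plain sum of one-particle Laplacians; since the center-of-mass part annihilates translation-invariant functions, the full $6$-dimensional Laplacian acting on a function of the mutual distances must reproduce $2\,{\De_R}$. An even test polynomial keeps the computation radical-free:

```python
# Hedged sympy check of Eq. (addition3-3r) at d = 2, assuming unit masses and
# that the relative Laplacian is the sum of one-particle Laplacians acting on
# translation-invariant functions.
import sympy as sp

x1, y1, x2, y2, x3, y3 = sp.symbols('x1 y1 x2 y2 x3 y3')
coords = [x1, y1, x2, y2, x3, y3]
D12 = (x1 - x2)**2 + (y1 - y2)**2     # squared mutual distances
D13 = (x1 - x3)**2 + (y1 - y3)**2
D23 = (x2 - x3)**2 + (y2 - y3)**2

r12, r13, r23 = sp.symbols('r12 r13 r23', positive=True)
d = 2
F = r12**2*r13**2 + r23**4 + r12**2*r23**2          # even => no radicals

twoDR = (2*(sp.diff(F, r12, 2) + sp.diff(F, r23, 2) + sp.diff(F, r13, 2))
         + 2*(d - 1)*(sp.diff(F, r12)/r12 + sp.diff(F, r23)/r23
                      + sp.diff(F, r13)/r13)
         + (r12**2 - r13**2 + r23**2)/(r12*r23)*sp.diff(F, r12, 1, r23, 1)
         + (r12**2 + r13**2 - r23**2)/(r12*r13)*sp.diff(F, r12, 1, r13, 1)
         + (r13**2 + r23**2 - r12**2)/(r13*r23)*sp.diff(F, r23, 1, r13, 1))
rsub = {r12: sp.sqrt(D12), r13: sp.sqrt(D13), r23: sp.sqrt(D23)}
twoDR = sp.expand(twoDR).subs(rsub)

Fc = F.subs(rsub)                                   # same function of positions
laplacian = sum(sp.diff(Fc, c, 2) for c in coords)  # full 6-dim Laplacian

assert sp.expand(laplacian - twoDR) == 0
print("Eq. (addition3-3r) confirmed at d = 2")
```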
In turn, the operator ${\tilde \De} (r_{ij}, \pa_{ij}, \Om, \pa_{\Om})$ in (\ref{addition}), for example at $d=2$, where there is a single angular variable $\Om=\tha$, is equal to \[ {\tilde \De} \ =\ -\ \frac{12\,S_{\triangle}}{r_{12}^2\,r_{23}^2\,r_{13}^2}\,\bigg({r_{12}^3}\,\pa_{r_{12}}\, +\, {r_{23}^3}\,\pa_{r_{23}}\, +\,{r_{13}^3}\,\pa_{r_{13}}\, -\,\frac{r_{12}^4+r_{23}^4+r_{13}^4}{S_{\triangle}}\,\pa_{\tha} \bigg)\pa_{\tha}\ , \] where $\tha$ is the azimuthal angle of rotation around the barycenter of the interaction triangle, see Fig.~1, and $S_{\triangle}$ is its area, see Eq.(\ref{CFrho}) below. It is evident that ${\tilde \De}$ annihilates any angle-independent function. The variable $\tha$ is not separated in ${\tilde \De}$ and, thus, in $\De_r^{(2d)}$ (\ref{addition}): the eigenfunctions in (\ref{Hrel}) are {\it not} factorizable to the form $R(r_{ij})\, A (\tha)$. The configuration space in the space of relative distances is \begin{equation} \label{CFr} 0 < r_{12},r_{13},r_{23} < \infty\,,\quad r_{23} < \ r_{12} + r_{13}\,,\quad r_{13}< r_{12}+r_{23}\,,\quad r_{12}< r_{13}+r_{23}\ , \end{equation} equivalent to $S_{\triangle}>0$. In the space with Cartesian coordinates $(x,y,z)=(r_{12},r_{13},r_{23})$ the configuration space lies in the first octant and is the interior of the inverted tetrahedral-shaped object with base at infinity, vertex at the origin and edges $(t,t,2t)$, $(t,2t,t)$ and $(2t,t,t)$, $0\leq t<\infty$. \subsection{$\rho$-representation} Formally, the operator (\ref{addition3-3r}) is invariant under reflections $Z_2 \oplus Z_2 \oplus Z_2$, \[ r_{12} \rightarrow -r_{12} \ ,\qquad r_{13} \rightarrow -r_{13} \ ,\qquad r_{23} \rightarrow -r_{23} \ , \] and w.r.t. the $S_3$-group action.
If we introduce new variables, \begin{equation} \label{rho} r_{12}^2\ =\ \rho_{12}\ ,\ r_{13}^2\ =\ \rho_{13}\ ,\ r_{23}^2\ =\ \rho_{23}\ , \end{equation} the operator (\ref{addition3-3r}) becomes algebraic, \[ {\De_R}(\rho)\ =\ 4(\rho_{12} \pa^2_{\rho_{12}} + \rho_{13} \pa^2_{\rho_{13}} +\rho_{23} \pa^2_{\rho_{23}})\ +\ \] \[ 2 \bigg((\rho_{12} + \rho_{13} - \rho_{23})\pa_{\rho_{12}}\pa_{\rho_{13}}\ + (\rho_{12} + \rho_{23} - \rho_{13})\pa_{\rho_{12}}\pa_{\rho_{23}}\ + (\rho_{13} + \rho_{23} - \rho_{12})\pa_{\rho_{13}}\pa_{\rho_{23}} \bigg)\ + \] \begin{equation} 2d (\pa_{\rho_{12}} + \pa_{\rho_{13}}+ \pa_{\rho_{23}}) \ , \label{addition3-3rho} \end{equation} c.f. \cite{TMA:2016} at $d=3$. Note that the operator (\ref{addition3-3rho}) is of Lie-algebraic nature: it can be rewritten in terms of the generators of the algebra $sl(4,{\bf R})$ realized by the first order differential operators, see below. It acts as a filtration for the flag of finite-dimensional representation spaces of this algebra \[ {\mathcal P}^{(1,2,3)}_{N}\ =\ \langle \rho_{12}^{p_1} \rho_{13}^{p_2} \rho_{23}^{p_3} \vert \ 0 \le p_1 + p_2+ p_3 \le N \rangle\ ,\ N=0,1,2, \ldots\ . \] From (\ref{CFr}) and (\ref{rho}) it follows that the corresponding configuration space in $\rho$ variables is given by the conditions \[ 0 < \rho_{12},\rho_{13},\rho_{23} < \infty\ ,\ {\rho}_{23} < (\sqrt{{\rho}_{12}} + \sqrt{{\rho}_{13}})^2,\ {\rho}_{13} < (\sqrt{{\rho}_{12}} + \sqrt{{\rho}_{23}})^2,\ {\rho}_{12} < \ (\sqrt{{\rho}_{13}} + \sqrt{{\rho}_{23}})^2\ , \] equivalent to $S_{\triangle}>0$. The boundary of the configuration space is given by $S(\rho)=S^2_{\triangle}=0$. We remark that \begin{equation} \label{CFrho} \quad -16\,S \ \equiv \ \rho _{12}^2+\rho _{13}^2+\rho _{23}^2 -2 (\rho _{12} \rho _{13}+ \rho _{12} \rho _{23}+ \rho _{13} \rho _{23}) \ \leq \ 0 \ , \end{equation} because the left-hand side (l.h.s.) 
is equal to $$-(r_{12}+r_{13}-r_{23})(r_{12}+r_{23}-r_{13})(r_{13}+r_{23}-r_{12})(r_{12}+r_{13}+r_{23})$$ and conditions (\ref{CFr}) should hold. Therefore, by Heron's formula, the l.h.s. is proportional to the square of the area of the interaction triangle, $S^2_{\triangle} = S$\,. The associated contravariant metric for the operator ${\De_R}(\rho)$, defined by the coefficients in front of the second derivatives, is remarkably simple \begin{equation} \label{gmn33-rho} g^{\mu \nu}(\rho)\ = \left| \begin{array}{ccc} 4\rho_{12} & \ \rho_{12} + \rho_{13} - \rho_{23} & \ \rho_{12} + \rho_{23} - \rho_{13} \\ & & \\ \rho_{12} + \rho_{13} - \rho_{23} & \ 4\rho_{13} & \ \rho_{13} + \rho_{23} - \rho_{12} \\ & \ & \\ \rho_{12} + \rho_{23} - \rho_{13} & \ \rho_{13} + \rho_{23} - \rho_{12} & 4\rho_{23} \end{array} \right| \ , \end{equation} it is linear in $\rho$-coordinates(!) with factorized determinant \begin{equation} \label{gmn33-rho-det} \det g^{\mu \nu}(\rho)\ =\ - 6\left(\rho _{12}+\rho _{13}+\rho _{23}\right) \left(\rho _{12}^2+\rho _{13}^2+\rho _{23}^2 -2 (\rho _{12}\, \rho _{13}+ \rho _{12}\, \rho _{23}+ \rho _{13} \,\rho _{23})\,\right) \equiv D(\rho) \geq 0\ , \end{equation} and is positive definite. It does not depend explicitly on the dimension $d$, cf. \cite{TMA:2016}. However, at $d=1$ this determinant vanishes identically(!) (see below). It is worth noting a remarkable factorization property of the determinant \[ D(\rho)\ =\ 6\,(r_{12}^2+r_{13}^2+r_{23}^2)\ \times \] \[ (r_{12}+r_{13}-r_{23})(r_{12}+r_{23}-r_{13})(r_{13}+r_{23}-r_{12})(r_{12}+r_{13}+r_{23})\ = \] \[ =\ 96\, P \ S^2_{\triangle}\ , \] where $P=r_{12}^2+r_{13}^2+r_{23}^2$ is the sum of the squares of the sides of the interaction triangle, equal to one quarter of the trace of the metric tensor, $P = \frac{1}{4}\,\mbox{Tr}\, g^{\mu \nu}(\rho)\,$. The operator (\ref{addition3-3rho}) is $S_3$ permutationally-invariant.
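Both the algebraic form (\ref{addition3-3rho}) and the factorization of the determinant (\ref{gmn33-rho-det}) can be confirmed symbolically; a hedged sympy sketch (helper names are ours, $d$ kept symbolic), in line with the $S_3$ invariance just noted:

```python
# Hedged sympy check: (i) Delta_R(r) of Eq. (addition3-3r) becomes the
# algebraic operator (addition3-3rho) under rho_ij = r_ij^2, for symbolic d;
# (ii) det g^{mu nu}(rho) factorizes as in (gmn33-rho-det), i.e. as 96 P S^2.
import sympy as sp

r12, r13, r23 = sp.symbols('r12 r13 r23', positive=True)
a, b, c = sp.symbols('rho12 rho13 rho23', positive=True)
d = sp.Symbol('d')
rsub = {a: r12**2, b: r13**2, c: r23**2}

def DR_r(F):      # Delta_R of (addition3-3r), i.e. half the bracket
    return sp.Rational(1, 2)*(
        2*(sp.diff(F, r12, 2) + sp.diff(F, r23, 2) + sp.diff(F, r13, 2))
        + 2*(d - 1)*(sp.diff(F, r12)/r12 + sp.diff(F, r23)/r23
                     + sp.diff(F, r13)/r13)
        + (r12**2 - r13**2 + r23**2)/(r12*r23)*sp.diff(F, r12, 1, r23, 1)
        + (r12**2 + r13**2 - r23**2)/(r12*r13)*sp.diff(F, r12, 1, r13, 1)
        + (r13**2 + r23**2 - r12**2)/(r13*r23)*sp.diff(F, r23, 1, r13, 1))

def DR_rho(G):    # claimed algebraic form (addition3-3rho)
    return (4*(a*sp.diff(G, a, 2) + b*sp.diff(G, b, 2) + c*sp.diff(G, c, 2))
            + 2*((a + b - c)*sp.diff(G, a, 1, b, 1)
                 + (a + c - b)*sp.diff(G, a, 1, c, 1)
                 + (b + c - a)*sp.diff(G, b, 1, c, 1))
            + 2*d*(sp.diff(G, a) + sp.diff(G, b) + sp.diff(G, c)))

for G in [a, a*b, a**2*c, a*b*c, a**2*b**2]:
    assert sp.expand(DR_r(G.subs(rsub)) - DR_rho(G).subs(rsub)) == 0

g = sp.Matrix([[4*a, a + b - c, a + c - b],
               [a + b - c, 4*b, b + c - a],
               [a + c - b, b + c - a, 4*c]])
assert sp.expand(g.det()) == sp.expand(
    -6*(a + b + c)*(a**2 + b**2 + c**2 - 2*(a*b + a*c + b*c)))
# Heron: with rho = r^2 the second factor equals -16 S_triangle^2
heron = -(r12 + r13 - r23)*(r12 + r23 - r13)*(r13 + r23 - r12)*(r12 + r13 + r23)
assert sp.expand((a**2 + b**2 + c**2
                  - 2*(a*b + a*c + b*c)).subs(rsub) - heron) == 0
print("(addition3-3rho) and (gmn33-rho-det) confirmed")
```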
Hence, it can be rewritten in terms of the elementary symmetric polynomials $\si_{1,2,3}$, \begin{equation} \label{taus} \begin{aligned} & \ta_1\ = \ \si_1(\rho _{12},\,\rho _{13},\,\rho _{23}) \ = \ \rho _{12}+\rho _{13}+\rho _{23}\ , \\ & \ta_2 \ = \ \si_2(\rho _{12},\,\rho _{13},\,\rho _{23}) \ = \ \rho _{12} \,\rho _{13}+ \rho _{12} \,\rho _{23}+ \rho _{13}\, \rho _{23} \ , \\ & \ta_3\ = \ \si_3(\rho _{12},\,\rho _{13},\,\rho _{23}) \ = \ \rho _{12}\,\rho _{13}\,\rho _{23}\ . \end{aligned} \end{equation} The determinant $D(\rho)$ takes the very simple form, \begin{equation} \label{gmn33-rho-det-gamma} D(\rho)\ =\ 6\, \ta_1\ (4\ta_2-\ta_1^2)\ , \end{equation} with $$16\, S^2_{\triangle} \ = \ 4\,\ta_2-\ta_1^2\ ,$$ where only the elementary symmetric polynomials $\ta_{1,2}$ are involved. The determinant $\det g^{\mu \nu}(\rho)$ vanishes when either $\ta_1=0$ or $\ta_1^2 = 4 \ta_2$; this defines the boundary of the configuration space, see (\ref{CFrho}). \subsection{$\tau$-representation} The operator (\ref{addition3-3rho}), rewritten in terms of the elementary symmetric polynomials $\si_{1,2,3}$ of the $\rho$-variables (\ref{taus}), remains algebraic \[ {\De_R}(\tau) \ = \ 6\,\ta_1\pa_{\ta_1}^2\ +\ 2\ta_1(7\ta_2-\ta_1^2)\pa_{\ta_2}^2\ +\ 2\ta_3(6\ta_2-\ta_1^2)\pa_{\ta_3}^2 \ +\ 24\,\ta_2\pa_{{\ta_1},{\ta_2}}^2\ +\ 36\ta_3\pa_{{\ta_1},{\ta_3}}^2 \] \begin{equation} \label{addition3-3tau} \ +\ 2\,[9\ta_3\ta_1 + \ta_2 (4\,\ta_2 - \ta_1^2)]\pa_{{\ta_2},{\ta_3}}^2\ +\ 6\,d\,\pa_{\ta_1}\ +\ 2\,(2d+1)\ta_1\,\pa_{\ta_2}\ +\ 2\,[(d+4)\ta_2-\ta_1^2]\,\pa_{\ta_3}\ , \end{equation} with metric \begin{equation} \label{gmn33-tau} g^{\mu \nu}(\tau)\ = \left| \begin{array}{ccc} 6\,\ta_1 & 12\,\ta_2 & 18\,\ta_3 \\ & & \\ 12\,\ta_2 & 2\,\ta_1\,(7\,\ta_2-\ta_1^2) &\ 9\,\ta_3\,\ta_1 + 4\,\ta_2^2 - \ta_2 \,\ta_1^2 \\ & & \\ 18\,\ta_3 &\ 9\,\ta_3\,\ta_1 + 4\,\ta_2^2 - \ta_2 \ta_1^2\ & 2\,\ta_3\,(6\,\ta_2-\ta_1^2) \end{array} \right| \ , \end{equation} see below (\ref{hQES-N-tau}), (\ref{hES-N-tau}).
Its determinant, \[ \det g^{\mu \nu}(\tau)\ =\ 6\, \ta_1\, \left(4\ta_2-\ta_1^2 \right) \left[\, 2\, \ta_1\, (9\,\ta_2 - 2 \,\ta_1^2)\,\ta_3\ -\ \ta_2^2\,(4\,\ta_2-\ta_1^2)\ -\ 27 \,\ta_3^2 \,\right] \ , \] cf. (\ref{gmn33-rho-det-gamma}), contains, besides the factor (\ref{gmn33-rho-det-gamma}), the discriminant of the cubic with roots $\rho_{12}, \rho_{13}, \rho_{23}$. Again the determinant vanishes if $16\,S^2_{\triangle}=4\ta_2-\ta_1^2=0$. \subsection{Geometrical variables representation} It is important to point out that the operator ${\De_R}(\tau)$ can be rewritten in terms of the {\it geometrical} variables \begin{equation} \label{G} \begin{aligned} & P \ = \ \ta_1 \ = \ \rho _{12} \ + \ \rho _{13}\ + \ \rho _{23} \ , \\ & S \ = \ S^2_{\triangle} \ = \ \frac{4\,\ta_2-\ta_1^2}{16} \ = \ \frac{ 2 (\rho _{12} \,\rho _{13}+ \rho _{12} \,\rho _{23}+ \rho _{13}\, \rho _{23}) - (\rho _{12}^2+\rho _{13}^2+\rho _{23}^2) }{16}\ , \\ & T=\ta_3 \ = \ \rho _{12} \, \rho _{13} \, \rho _{23} \ , \end{aligned} \end{equation} namely, \begin{equation} \label{addition3-3tauS} \begin{aligned} \De_R \ = & \ 6\,P\,\pa^2_P + \frac{1}{2}\,P\,S\,\pa_{S}^2 + T\,(48\,S + P^2)\,\pa_{T}^2 + 36\,T\,\pa_{P,{T}} + 24\,S\,\pa_{P,S} + 2\,S (16\,S + P^2)\,\pa_{S,{T}} \\ &\ +\ 6\,d\,\pa_P\ +\ \frac{1}{4}\,(d-1)\,P\,\pa_{S}\ +\ [8\,(d+4)\,S + \frac{d}{2}\,P^2]\,\pa_{T} \ , \end{aligned} \end{equation} where the metric is of the form \begin{equation} g^{\mu \nu}\ = \left| \begin{array}{ccc} 6\,P & \ 12\,S & \ 18\,T \\ 12\,S & \ \frac{1}{2}P\,S & \ S(16\,S+P^2) \\ 18\,T & \ S(16\,S+P^2) & \ T\,(48\,S+P^2) \\ \end{array} \right| \ , \end{equation} with determinant \[ \det g^{\mu \nu}\ = \ -3\, P\, S\, \left(\ 54\, T^2 \ - \ T\,P \left(P^2\,+\,144\, \,S\right)\ +\ 2 \,S\, \left(P^2+16\, S\right)^2\ \right) \ . \] Note that the operator (\ref{addition3-3tauS}) is of Lie-algebraic nature: it can be rewritten in terms of the generators of the algebra $h^{(3)}$ realized by the differential operators, see below.
It acts as a filtration for the flag of finite-dimensional representation spaces of this algebra \[ {\mathcal P}^{(1,2,3)}_{N}\ =\ \langle P^{p_1} S^{p_2} T^{p_3} \vert \ 0 \le p_1+2p_2+3p_3 \le N \rangle\ ,\ N=0,1,2, \ldots\ . \] This operator will be instrumental in the construction of (quasi)-exactly-solvable problems in Case III. \subsection{Towards $d=1$} For $d=1$, when the area of the interaction triangle vanishes and hence $S \equiv \frac{1}{16}(4\ta_2-\ta_1^2)=0$, the metric becomes degenerate, $\det g^{\mu \nu}(\rho)=\det g^{\mu \nu}(\ta)=0$. Effectively, this leads to a reduction of the dimension of the space of relative distances from 3 to 2: the configuration space $4\ta_2-\ta_1^2 \geq 0$ shrinks to the boundary $4\ta_2-\ta_1^2=0$. In order to see this dimensional reduction explicitly, let us change variables in (\ref{addition3-3rho}), \[ (\rho_{12}, \rho_{13}, \rho_{23}) \rar (\rho_{12}, \rho_{13}, S)\ . \] It follows that \[ {\De_R}(\rho)\ =\ 4(\rho_{12} \pa^2_{\rho_{12}} + \rho_{13} \pa^2_{\rho_{13}}) + \frac{1}{2} \,S\, (\rho_{12} + \rho_{13} + \rho_{23}) \pa^2_{S}\ +\ 2 (\rho_{12} + \rho_{13} - \rho_{23})\pa_{\rho_{12}}\pa_{\rho_{13}}\ + \] \begin{equation} 8\, S\, (\pa_{\rho_{12}}\pa_{S}\ +\ \pa_{\rho_{13}}\pa_{S})\ +\ 2d (\pa_{\rho_{12}} + \pa_{\rho_{13}})+ \frac{1}{4} (d-1)(\rho_{12} + \rho_{13} + \rho_{23}) \pa_{S} \ , \label{addition3-3rho-A} \end{equation} with metric \begin{equation} g^{\mu \nu}\ = \left| \begin{array}{ccc} 4\,\rho_{12} & \ 4\,S & \ \rho_{12}+\rho_{13}-\rho_{23} \\ 4\,S & \ \frac{1}{2}\,S\,(\rho_{12}+\rho_{13}+\rho_{23}) & \ 4\,S \\ \rho_{12}+\rho_{13}-\rho_{23} & \ 4\,S & \ 4\,\rho_{13} \\ \end{array} \right| \ , \end{equation} where ${\rho}_{23} = {\rho}_{23}({\rho}_{12}, {\rho}_{13}, S)$, and the determinant \[ \det g^{\mu \nu}\ = \ -12\,S\,(4\,S-\rho_{12}\,\rho_{13})\,{\rho}_{23}\ , \] which vanishes identically if $S=0$.
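The $\tau$- and $(P, S, T)$-representations above can be cross-checked with sympy: under a change of variables the contravariant metric transforms as $g_{\rm new} = J\, g(\rho)\, J^{T}$ with $J$ the Jacobian, which must reproduce (\ref{gmn33-tau}) and the metric in the geometrical variables, together with the stated determinant. A hedged sketch (helper names are ours):

```python
# Hedged sympy check: push the metric (gmn33-rho) forward to the tau- and
# (P, S, T)-variables and compare with the matrices printed in the text.
import sympy as sp

a, b, c = sp.symbols('rho12 rho13 rho23', positive=True)
g_rho = sp.Matrix([[4*a, a + b - c, a + c - b],
                   [a + b - c, 4*b, b + c - a],
                   [a + c - b, b + c - a, 4*c]])
t1e, t2e, t3e = a + b + c, a*b + a*c + b*c, a*b*c

t1, t2, t3 = sp.symbols('tau1 tau2 tau3')
J = sp.Matrix([t1e, t2e, t3e]).jacobian([a, b, c])
g_tau = sp.expand(J*g_rho*J.T)
claim_tau = sp.Matrix(
    [[6*t1, 12*t2, 18*t3],
     [12*t2, 2*t1*(7*t2 - t1**2), 9*t3*t1 + 4*t2**2 - t2*t1**2],
     [18*t3, 9*t3*t1 + 4*t2**2 - t2*t1**2, 2*t3*(6*t2 - t1**2)]]
    ).subs({t1: t1e, t2: t2e, t3: t3e})
assert sp.expand(g_tau - claim_tau) == sp.zeros(3, 3)

P, S, T = sp.symbols('P S T')
psub = {P: t1e, S: (4*t2e - t1e**2)/16, T: t3e}
K = sp.Matrix([t1e, (4*t2e - t1e**2)/16, t3e]).jacobian([a, b, c])
g_PST = sp.expand(K*g_rho*K.T)
claim_PST = sp.Matrix([[6*P, 12*S, 18*T],
                       [12*S, P*S/2, S*(16*S + P**2)],
                       [18*T, S*(16*S + P**2), T*(48*S + P**2)]]).subs(psub)
assert sp.expand(g_PST - claim_PST) == sp.zeros(3, 3)

claim_det = (-3*P*S*(54*T**2 - T*P*(P**2 + 144*S)
                     + 2*S*(P**2 + 16*S)**2)).subs(psub)
assert sp.expand(g_PST.det() - claim_det) == 0
print("(gmn33-tau) and the P,S,T metric confirmed")
```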
Imposing the conditions $d=1$ and $S=0$ on $\De_R$ (\ref{addition3-3rho-A}), we get \begin{equation} {{\tilde \De}_R}(\rho)\ =\ 4(\rho_{12} \pa^2_{\rho_{12}} + \rho_{13} \pa^2_{\rho_{13}})\ +\ 2 (\rho_{12} + \rho_{13} - \rho_{23})\pa_{\rho_{12}}\pa_{\rho_{13}}\ +\ 2 (\pa_{\rho_{12}} + \pa_{\rho_{13}}) \ , \label{addition3-3rho-1} \end{equation} where ${\rho}_{23} = (\sqrt{{\rho}_{12}} \pm \sqrt{{\rho}_{13}})^2$\,. The operator (\ref{addition3-3rho-1}) is no longer algebraic. However, by calculating the curvature it can be easily checked that it is the Laplace-Beltrami operator of a flat space. By changing variables $\rho \rar r$ in (\ref{addition3-3rho-1}), \[ r_{12}^2\ =\ \rho_{12}\ ,\ r_{13}^2\ =\ \rho_{13}\ , \] we arrive at the algebraic operator (\ref{Drel3-1}), which is nothing but the flat Laplacian. There are also numerous changes of the coordinates $\rho_{12}, \rho_{13}$, see Section II, into $\xi_{1,2}$ (\ref{d1-xi}), $\si_{2,3}$ (\ref{d1-si}), $\la_{1,2}$ (\ref{d1-la}), $(t, u)$ {\it etc.}; in all of them the operator (\ref{addition3-3rho-1}) becomes algebraic. It is important to emphasize that, unlike (\ref{addition3-3rho-A}), in the limit $d\rightarrow 1$ and $S\rightarrow 0$ the operator (\ref{addition3-3tauS}) remains algebraic, \begin{equation} \label{addition3-3tauS2} \begin{aligned} \De_R \ = & \ 6\,P\,\pa^2_P \ + \ T\, P^2\,\pa_{T}^2 \ + \ 36\,T\,\pa_{P,{T}} \ +\ 6\,\pa_P\ +\ \frac{1}{2}\,P^2\,\pa_{T} \ . \end{aligned} \end{equation} The metric of the operator (\ref{addition3-3tauS2}) is given by \begin{equation} \label{gmunu-d1} g^{\mu \nu}\ = \ \left| \begin{array}{cc} \ 6\,P & \ 18\,T \\ \ 18\,T & \ T\,P^2 \\ \end{array} \right| \ , \end{equation} with determinant \[ \det g^{\mu \nu}\ = \ 6\,T\,(P^3 - 54\,T)\ . \] Its curvature is zero; thus $\De_R$ is the Laplace-Beltrami operator of a flat space.
In fact, the geometrical coordinates $P, T$ correspond to $\la_{1,2}$ (\ref{d1-la}), in which the 3-body $(G_2)$ rational Wolfes model becomes algebraic. Later the geometrical coordinates $P, S, T$ will play an important role in the construction of (quasi)-exactly-solvable 3-body problems. \subsection{Integral} It can be shown that for the operator (\ref{addition3-3rho}) there exists a 1st order symmetry operator, written in $\rho$-variables as \begin{equation} L_1 \ = \ (\rho_{13}-\rho_{23})\pa_{\rho_{12}} + (\rho_{23}-\rho_{12})\pa_{\rho_{13}} + (\rho_{12}-\rho_{13})\pa_{\rho_{23}} \ , \label{integral} \end{equation} such that \[ [{\De_R}(\rho)\ ,\ L_1]=0\ . \] Here, $L_1$ is an algebraic operator, which is anti-invariant under the $S_3$-group action. The existence of the symmetry $L_1$ implies that in the space of relative distances one variable can be separated out in (\ref{addition3-3rho}). Set \begin{equation} \label{wcoords1} w_1\ =\ \rho_{12}+\rho_{13}+\rho_{23}=P\ ,\quad w_2\ =\ 2\, \sqrt{\rho_{12}^2+\rho_{13}^2+\rho_{23}^2-\rho_{12}\rho_{13}-\rho_{12}\rho_{23} -\rho_{13}\rho_{23}} \quad , \end{equation} where in geometrical variables $w_2^2=4(\ta_1^2 - 3 \ta_2)=P^2-48\,S$; both $w_1$ and $w_2$ are invariant under the action of $L_1$. In addition, set \[ w_3=\frac{\sqrt{3}}{9}\left({\rm sgn}\,(\rho_{23}-\rho_{13})\arcsin(\frac{2\rho_{12}-\rho_{23}-\rho_{13}}{w_2}) +{\rm sgn}(\rho_{13}-\rho_{12})\,\arcsin(\frac{2\rho_{23}-\rho_{13}-\rho_{12}}{w_2})\right. \] \begin{equation} \label{wcoords2} \left. + {\rm sgn}(\rho_{12}-\rho_{23})\,\arcsin(\frac{2\rho_{13}-\rho_{23}-\rho_{12}}{w_2})-\frac{3\pi}{4}\right), \end{equation} with ${\rm sgn}(x)=\frac{x}{|x|}$ for nonzero $x$. These coordinates are invariant under a cyclic permutation of the indices on the $\rho_{jk}$: $1\to 2\to 3\to 1$. Under a transposition of exactly two indices, e.g. $(12)(3)$\,, we see that $w_1,w_2$ remain invariant, and $w_3 \to -w_3-\frac{\sqrt{3}\pi}{6}$.
(For the method used to compute $w_3$ see \cite{Ince}.) Expressions for $w_3$ vary, depending on which of the 6 non-overlapping regions of $(\rho_{12}, \rho_{13}, \rho_{23})$ space we choose to evaluate them: \begin{enumerate} \item \[ (a):\ \rho_{23}>\rho_{13}>\rho_{12}\ ,\quad (b):\ \rho_{13}>\rho_{12}>\rho_{23}\ ,\quad (c):\ \rho_{12}>\rho_{23}>\rho_{13}\ ,\] \item \[(d):\ \rho_{13}>\rho_{23}>\rho_{12}\ ,\quad (e):\ \rho_{12}>\rho_{13}>\rho_{23}\ ,\quad (f):\ \rho_{23}>\rho_{12}>\rho_{13}\ ,\] \end{enumerate} The regions in class 1 are related by cyclic permutations, as are the regions in class 2. We map between regions by a transposition. Thus it is enough to evaluate $w_3$ in the region $(a):\ \rho_{23}>\rho_{13}>\rho_{12}$. The other 5 expressions will then follow from the permutation symmetries. In this case we have \[ (a):\ w_3=-\frac{\sqrt{3}}{9}\arcsin\left[\frac{2\sqrt{2}}{w_2^3}((2-\sqrt{3})\rho_{13}-\rho_{23}+(\sqrt{3}-1)\rho_{12})\times\right.\] \[\left.(2\rho_{23} -(1+\sqrt{3})\rho_{13}+(\sqrt{3}-1)\rho_{12})((2+\sqrt{3})\rho_{12}-(1+\sqrt{3})\rho_{13}-\rho_{23})\right].\] (The special cases where exactly two of the $\rho_{jk}$ are equal can be obtained from these results by continuity. Here, $w_3$ is a single-valued differentiable function of $\rho_{12},\rho_{13},\rho_{23}$ everywhere in the physical domain (configuration space), except for the points $\rho_{12}=\rho_{13}=\rho_{23}$ where it is undefined.) In these coordinates, the operators (\ref{integral}) and (\ref{addition3-3rho}) take the form \[ L_1(w) \ = \pa_{w_3}\ , \] \[ \frac{1}{6}\De_R(w) \ =\ \ w_1\pa_{w_1}^2\ +\ w_1\pa_{w_2}^2\ +\ \frac{w_1}{3w_2^2}\pa_{w_3}^2 \ +\ 2\,w_2\pa_{w_1w_2}^2\ +\ d\,\pa_{w_1}\ +\ \frac{w_1}{w_2}\pa_{w_2} \ , \] respectively. 
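Both the commutativity $[{\De_R}(\rho), L_1]=0$ and the reduced form of $\De_R$ on $w_3$-independent functions just displayed can be confirmed with sympy; a hedged sketch (helper names are ours; even powers of $w_2$ keep the second check radical-free, and the $w_3$-sector is not touched):

```python
# Hedged sympy check: (i) [Delta_R(rho), L_1] = 0 on an arbitrary function;
# (ii) on w3-independent functions Delta_R reduces to the operator displayed
# above (checked on even monomials in w2 to avoid radicals).
import sympy as sp

a, b, c = sp.symbols('rho12 rho13 rho23', positive=True)
d = sp.Symbol('d')

def DR(F):    # Eq. (addition3-3rho)
    return (4*(a*sp.diff(F, a, 2) + b*sp.diff(F, b, 2) + c*sp.diff(F, c, 2))
            + 2*((a + b - c)*sp.diff(F, a, 1, b, 1)
                 + (a + c - b)*sp.diff(F, a, 1, c, 1)
                 + (b + c - a)*sp.diff(F, b, 1, c, 1))
            + 2*d*(sp.diff(F, a) + sp.diff(F, b) + sp.diff(F, c)))

def L1(F):    # Eq. (integral)
    return (b - c)*sp.diff(F, a) + (c - a)*sp.diff(F, b) + (a - b)*sp.diff(F, c)

f = sp.Function('f')(a, b, c)
assert sp.simplify(DR(L1(f)) - L1(DR(f))) == 0

w1, w2 = sp.symbols('w1 w2', positive=True)
w1e = a + b + c
Qe = a**2 + b**2 + c**2 - a*b - a*c - b*c        # (w2/2)^2

def DRw(G):   # the claimed w3-independent reduction, times 6
    return 6*(w1*sp.diff(G, w1, 2) + w1*sp.diff(G, w2, 2)
              + 2*w2*sp.diff(G, w1, 1, w2, 1) + d*sp.diff(G, w1)
              + w1/w2*sp.diff(G, w2))

for p, q in [(1, 1), (2, 1), (0, 2), (2, 2)]:
    G = w1**p * w2**(2*q)
    Gr = sp.expand(G.subs({w1: w1e, w2: 2*sp.sqrt(Qe)}))
    lhs = sp.expand(DR(Gr))
    rhs = sp.expand(sp.expand(DRw(G)).subs({w1: w1e, w2: 2*sp.sqrt(Qe)}))
    assert sp.expand(lhs - rhs) == 0
print("L_1 commutes with Delta_R; the w-reduction is confirmed")
```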
It is evident that for the $w_3$-independent potential \[ V(w_1,\,w_2; w_3) \ = \ 6\,g(w_1,\,w_2) \ , \] where the factor 6 is introduced for convenience, the operator $L_1$ is still an integral for an arbitrary function $g$: \[ [L_1(w) , -\De_R(w) + 6\,g(w_1,\,w_2)] = 0\ . \] This property of integrability permits separation of the variable $w_3$ in the spectral problem \[ [-\De_R(w) + 6\,g(w_1,\,w_2)]\Psi \ =\ E \Psi\ , \] where $\Psi = \psi(w_1,\,w_2)\,\xi(w_3)$ is defined by the differential equations, \begin{equation} \label{eq-xi} \pa_{w_3} \xi\ =\ i\,p\, \xi\ ,\quad \xi\,=\,e^{i p w_3}\ ,\quad p=0,\pm 1,\pm 2\ \ldots \end{equation} \[ \left(w_1\pa_{w_1}^2\ +\ w_1\pa_{w_2}^2 \ +\ 2\,w_2\pa_{w_1w_2}^2\ +\ d\,\pa_{w_1}\ +\ \frac{w_1}{w_2}\pa_{w_2} \ -\ p^2\,\frac{w_1}{3\,w_2^2}\ -\ g(w_1,\,w_2) \right) \psi\ \] \begin{equation} \label{eq-psi} =\ -E\, \psi\ . \end{equation} Note that $L_1$ is a genuine integral for the three-dimensional quantum problem (\ref{Hrel-final}). For the original 3-body problem (\ref{Hrel}), however, it is a {\it particular} integral \cite{Turbiner:2013p}: it commutes with the Hamiltonian (\ref{Hrel}) over the space of relative distances ${\bf \tilde R}$ only, \[ [{\cal H}_r , L_1]\, :\, {\bf \tilde R}\ \rar \ \{0\}\ . \] The operators ${\cal H}_r$ and $L_1$ themselves do not commute. \subsection{The Representations of $sl(4,{\bf R})$} Both operators (\ref{addition3-3rho}) and (\ref{integral}) are $sl(4,{\bf R})$-Lie algebraic: they can be rewritten in terms of the generators of the maximal affine subalgebra $b_4$ of the algebra $sl(4,{\bf R})$, see e.g.
\cite{Turbiner:1988,Turbiner:2016} \begin{eqnarray} \label{sl4R} {\cal J}_i^- &=& \frac{\pa}{\pa u_i}\ ,\qquad \quad i=1,2,3\ , \non \\ {{\cal J}_{ij}}^0 &=& u_i \frac{\pa}{\pa u_j}\ , \qquad i,j=1,2,3 \ , \\ {\cal J}^0(N) &=& \sum_{i=1}^{3} u_i \frac{\pa}{\pa u_i}-N\, , \non \\ {\cal J}_i^+(N) &=& u_i {\cal J}^0(N)\ =\ u_i\, \left( \sum_{j=1}^{3} u_j\frac{\pa}{\pa u_j}-N \right)\ , \quad i=1,2,3\ , \end{eqnarray} where $N$ is a parameter and \[ u_1\equiv\rho_{12}\ ,\qquad u_2\equiv\rho_{13}\ , \qquad u_3\equiv\rho_{23} \ . \] If $N$ is a non-negative integer, a finite-dimensional representation space occurs, \begin{equation} \label{P3} {\cal P}_{N}\ =\ \langle u_1^{p_1} u_2^{p_2} u_3^{p_3} \vert \ 0 \le p_1+p_2+p_3 \le N \rangle\ . \end{equation} It is easy to check that the space ${\cal P}_{N}$ is invariant with respect to projective transformations, \[ u_i \rar \frac{a_i u_1 + b_i u_2 + c_i u_3 + d_i}{\al u_1 + \beta u_2 + \gamma u_3 + \delta}\ ,\ i=1,2,3 \ , \] where $a_i, b_i, c_i, d_i, \al , \beta, \gamma, \delta$ are real parameters; taking them as the rows of the $4 \times 4$ matrix $G$, we arrive at the condition $G \in GL(4,{\bf R})$. Explicitly, the above-mentioned operators $\De_R, L_1$ take the form \begin{equation} \label{HRex} \frac{1}{2}\, \De_R({\cal J}) \ = \ 2(\, {\cal J}_{11}^0\,{\cal J}_1^- + {\cal J}_{22}^0\,{\cal J}_2^- + {\cal J}_{33}^0\,{\cal J}_3^- \,)\ +\ d \,({\cal J}_1^- + {\cal J}_2^- + {\cal J}_3^-)\ + \end{equation} \[ \bigg({\cal J}_{11}^0\,({\cal J}_2^- + {\cal J}_3^- ) + {\cal J}_{22}^0\,({\cal J}_1^- + {\cal J}_3^- ) + {\cal J}_{33}^0\,({\cal J}_1^- + {\cal J}_2^- ) - {\cal J}_{31}^0\,{\cal J}_2^- - {\cal J}_{23}^0\,{\cal J}_1^- - {\cal J}_{12}^0\,{\cal J}_3^- \bigg) \ , \] and \begin{equation} L_1 \ = \ {\cal J}_{21}^0\,-\,{\cal J}_{31}^0\, + \,{\cal J}_{32}^0\,-{\cal J}_{12}^0\, + \, {\cal J}_{13}^0\,-\,{\cal J}_{23}^0\, \ . \label{integral-J} \end{equation} in terms of the $sl(4,{\bf R})$ generators.
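The assembly (\ref{HRex}), (\ref{integral-J}) is easy to verify symbolically; a hedged sympy sketch with the generators realized as first-order operators (only ${\cal J}^-$ and ${\cal J}^0$ enter, so the parameter $N$ is not involved):

```python
# Hedged sympy check: the combinations (HRex) and (integral-J) of the b4
# generators reproduce (1/2) Delta_R(rho) of (addition3-3rho) and L_1 of
# (integral), with u1 = rho12, u2 = rho13, u3 = rho23.
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3')
u = [u1, u2, u3]
d = sp.Symbol('d')
f = sp.Function('f')(u1, u2, u3)

def Jm(i, F):                 # J_i^- = d/du_i
    return sp.diff(F, u[i])

def J0(i, j, F):              # J_ij^0 = u_i d/du_j
    return u[i]*sp.diff(F, u[j])

half_DR = (2*(J0(0, 0, Jm(0, f)) + J0(1, 1, Jm(1, f)) + J0(2, 2, Jm(2, f)))
           + d*(Jm(0, f) + Jm(1, f) + Jm(2, f))
           + J0(0, 0, Jm(1, f) + Jm(2, f)) + J0(1, 1, Jm(0, f) + Jm(2, f))
           + J0(2, 2, Jm(0, f) + Jm(1, f))
           - J0(2, 0, Jm(1, f)) - J0(1, 2, Jm(0, f)) - J0(0, 1, Jm(2, f)))

direct = sp.Rational(1, 2)*(
    4*(u1*sp.diff(f, u1, 2) + u2*sp.diff(f, u2, 2) + u3*sp.diff(f, u3, 2))
    + 2*((u1 + u2 - u3)*sp.diff(f, u1, 1, u2, 1)
         + (u1 + u3 - u2)*sp.diff(f, u1, 1, u3, 1)
         + (u2 + u3 - u1)*sp.diff(f, u2, 1, u3, 1))
    + 2*d*(sp.diff(f, u1) + sp.diff(f, u2) + sp.diff(f, u3)))
assert sp.simplify(half_DR - direct) == 0

L1_J = (J0(1, 0, f) - J0(2, 0, f) + J0(2, 1, f) - J0(0, 1, f)
        + J0(0, 2, f) - J0(1, 2, f))
L1_direct = ((u2 - u3)*sp.diff(f, u1) + (u3 - u1)*sp.diff(f, u2)
             + (u1 - u2)*sp.diff(f, u3))
assert sp.simplify(L1_J - L1_direct) == 0
print("(HRex) and (integral-J) confirmed")
```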
\subsection{The Laplace-Beltrami operator, underlying geometry} A remarkable property of the algebraic operator ${\De_R}(\rho)$ (\ref{addition3-3rho}) is its gauge-equivalence to the Schr\"odinger operator. Making, for $d \neq 1$, the gauge transformation with the factor built from the determinant $D(\rho)$\ (\ref{gmn33-rho-det}), (\ref{gmn33-rho-det-gamma}), \[ \Gamma \ = \ D^{-\frac{1}{4}}(4 \ta_2-\ta_1^2)^{\frac{3-d}{4}} \ \sim \ \frac{1}{{\ta_1^{\frac{1}{4}}(4 \ta_2-\ta_1^2)}^{\frac{d-2}{4}}} \ \sim \ P^{-\frac{1}{4}}\ S^{\frac{2-d}{2}}_{\triangle} \ , \] see also (\ref{taus}), we find that \begin{equation} \Gamma^{-1}\, {\De_R}(\rho)\,\Gamma \ = \ \De_{LB}(\rho) - \tilde V(\rho) \ , \label{HLB3} \end{equation} where the effective potential is of the form \[ \tilde V(\rho)\ = \ \frac{9}{8 \left(\rho_{12}+\rho_{13}+\rho_{23}\right)}\ -\ \frac{(d-2)(d-4)}{2} \, \frac{\left(\rho_{12}+\rho_{13}+\rho_{23}\right)} {\left(\rho_{12}^2+\rho_{13}^2+\rho_{23}^2 -2 \rho_{12} \rho_{13}- 2 \rho_{12} \rho_{23}-2 \rho_{13} \rho_{23}\right)} \] \begin{equation} =\ \frac{9}{8\,P}\ +\ \frac{(d-2)(d-4)}{32} \frac{P}{S^2_{\triangle}} \ , \label{Veff} \end{equation} hence, it is of geometric nature; its second term vanishes at $d=2$ and $d=4$.
In turn, \[ \De_{LB}(\rho) \ =\ 4(\rho_{12} \pa^2_{\rho_{12}} + \rho_{13} \pa^2_{\rho_{13}} +\rho_{23} \pa^2_{\rho_{23}}) \] \[ + \ 2\, \bigg((\rho_{12} + \rho_{13} - \rho_{23})\pa_{\rho_{12}}\pa_{\rho_{13}}\ + (\rho_{12} + \rho_{23} - \rho_{13})\pa_{\rho_{12}}\pa_{\rho_{23}}\ + (\rho_{13} + \rho_{23} - \rho_{12})\pa_{\rho_{13}}\pa_{\rho_{23}} \bigg) \] \begin{equation} \label{LB3} - 3\, \bigg(\frac{\rho_{12}\pa_{\rho_{12}}+\rho_{13}\pa_{\rho_{13}}+\rho_{23}\pa_{\rho_{23}}} {\rho _{12}+\rho _{13}+\rho _{23}} \bigg) + 4\, (\pa_{\rho_{12}}+\pa_{\rho_{23}}+ \pa_{\rho_{13}})\ , \end{equation} is the $d$-independent Laplace-Beltrami operator, \[ \De_{LB}(\rho)\ =\ {\sqrt {D(\rho)}}\, \pa_{\mu} \frac{1}{\sqrt {D(\rho)}}\, g^{\mu \nu} \pa_{\nu}\ ,\quad \pa_{\nu}\equiv \frac{\pa}{\pa{\rho_{\nu}}}\ , \] see (\ref{gmn33-rho}), (\ref{gmn33-rho-det}). Eventually, taking into account the gauge rotation (\ref{HLB3}) we arrive at the three-dimensional Hamiltonian \begin{equation*} \label{H-3-3r-r} {H}_{LB} (r) \ =\ -\De_{LB}(r) + \tilde V (r) + V(r) \ , \end{equation*} in the space of $r$-relative distances, or \begin{equation*} \label{H-3-3r-rho} {H}_{LB} (\rho) \ =\ -\De_{LB}(\rho) + \tilde V (\rho) + V(\rho)\ , \end{equation*} in $\rho$-space, see (\ref{rho}), or \begin{equation*} \label{H-3-3r-tau} {H}_{LB} (\tau) \ =\ -\De_{LB}(\tau) + \tilde V (\tau) + V(\tau)\ , \end{equation*} in $\tau$-space, see (\ref{taus}). These Hamiltonians describe the three-dimensional quantum particle moving in the curved space with metric $g^{\mu \nu}$ with kinetic energy $\De_{LB}$, in particular, in $\rho$-space with metric $g^{\mu \nu}(\rho)$ (\ref{gmn33-rho}) with kinetic energy $\De_{LB}(\rho)$ (\ref{LB3}) and effective potential $\tilde V(\rho)$ (\ref{Veff}). The Ricci scalar, see e.g. 
\cite{Eis}, in $\rho$-space is equal to \[ Rs \ = \ -\frac{41\,{(\rho_{12}+\rho_{13}+\rho_{23})}^2 -84\,(\rho_{12}\,\rho_{13}+\rho_{12}\,\rho_{23}+\rho_{23}\,\rho_{13})}{12\,(\rho_{12}+\rho_{13}+\rho_{23}) \left({(\rho_{12}+\rho_{13}+\rho_{23})}^2-4\,(\rho_{12}\,\rho_{13}+\rho_{12}\,\rho_{23} +\rho_{23}\,\rho_{13})\right)} \] \[ \ = \ \frac{-84\,\tau_2\ +\ 41\,{\tau_1}^2 }{12\,\tau_1(4\,\tau_2\ -\ \tau_1^2)} \ =\ \frac{5 P^2 - 84 S}{48\,P\, S}\ =\ -\frac{7}{4\,P} + \frac{5\,P}{48\,S}\ , \] interestingly, its structure is similar to that of the effective potential (\ref{Veff}). It is singular at the boundary of the configuration space. (For $d=1$ the configuration space degenerates to the boundary, $4\,\tau_2 = \tau_1^2$, and becomes flat.) The Cotton tensor, see e.g. \cite{Eis}, for the metric (\ref{gmn33-rho}) is nonzero, so the space is {\it not} conformally flat. Making the de-quantization of (\ref{H-3-3r-rho}) we arrive at a three-dimensional classical system which is characterized by the Hamiltonian, \begin{equation} \label{H-3-3r-rho-class} {H}_{LB}^{(c)} (\rho) \ =\ g^{\mu \nu}(\rho)\,P_{\mu}\, P_{\nu} \ + \ \tilde V (\rho) \ + \ V(\rho)\ , \end{equation} where $P_{\mu}\,,\ \mu=12,23,13$, are the classical canonical momenta in $\rho$-space and $g^{\mu \nu}(\rho)$ is given by (\ref{gmn33-rho}). Here the underlying manifold (zero-potential case) admits an $so(3)$ algebra of constants of the motion linear in the momenta, i.e., Killing vectors. Thus, the free Hamilton-Jacobi equation is integrable. However, it admits no separable coordinate system. The classical kinetic energy $T=g^{\mu \nu}(\rho)\,P_{\mu}\, P_{\nu}$ Poisson-commutes with \[ L_1^{(c)} \ = \ (\rho_{13}-\rho_{23})P_{12} + (\rho_{23}-\rho_{12})P_{13} + (\rho_{12}-\rho_{13})P_{23}\ .
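The Poisson commutativity of the kinetic energy with $L_1^{(c)}$ can be checked directly; a minimal sketch in sympy (our notation), with $g^{\mu\nu}(\rho)$ read off from the second-order part of (\ref{LB3}):

```python
# Sketch (sympy, our notation): {T, L1} = 0 for the classical kinetic
# energy T = g^{mu nu}(rho) P_mu P_nu, with g^{mu nu} read off from the
# second-order part of the Laplace-Beltrami operator (LB3).
import sympy as sp

a, b, c = sp.symbols('rho12 rho13 rho23')
P1, P2, P3 = sp.symbols('P12 P13 P23')

T = (4*(a*P1**2 + b*P2**2 + c*P3**2)
     + 2*((a + b - c)*P1*P2 + (a + c - b)*P1*P3 + (b + c - a)*P2*P3))
L1 = (b - c)*P1 + (c - a)*P2 + (a - b)*P3

def poisson(F, G):
    return sum(sp.diff(F, q)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, q)
               for q, p in zip((a, b, c), (P1, P2, P3)))

bracket = sp.expand(poisson(T, L1))
assert bracket == 0   # L1 generates a symmetry of the free motion
```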
\] \section{{(Quasi)}-exact-solvability} \subsection{QES in $\rho,\,\ta$-variables, $d \neq 1$} {\bf (I).\ \it Quasi-Exactly-Solvable problem in $\rho$-variables.} Let us take the $d$-independent function \begin{equation} \label{psi_cal-r-d3} \Psi_0(\rho_{12},\,\rho _{13},\,\rho _{23}) \ = \ \ta_1^{1/4} {(4\,\ta_2-\ta_1^2)}^{\frac{\gamma}{2}}\,e^{-\om\,\ta_1 - \frac{A}{2}\,\ta_1^2} \ \equiv \Psi_0(\ta_1,\ta_2)\ , \end{equation} assuming $d \neq 1$, where $\gamma,\,\om > 0$ and $A \geq 0$ (with $A>0$ if $\om=0$) are constants and the $\ta$'s are given by (\ref{taus}). We look for the potential for which the function (\ref{psi_cal-r-d3}) is the ground state function for the Hamiltonian ${H}_{LB}(\rho)$ of the 3-dimensional quantum particle. This potential can be found immediately by calculating the ratio \[ \frac{\De_{LB}(\rho) \Psi_0 }{ \Psi_0}\ =\ V_0 - E_0 \ , \] where $\De_{LB}(\rho)$ is given by (\ref{LB3}) with metric (\ref{gmn33-rho}). The result is \[ V_0(\ta_1,\,\ta_2)\ = \ \frac{9}{8 \ta_1} + \,\gamma(\gamma-1) \left ( \frac{2\ta_1}{4\ta_2-\ta_1^2} \right )\ + \] \begin{equation} 6\,\om^2\,\ta_1\, +\, 6\,A\,\ta_1\,(2\,\om\,\ta_1\, -\, 2\gamma - 3)\, +\, 6\, A^2\ta_1^3 \ , \label{VQES-0} \end{equation} which is $d$-independent; it includes the effective potential $\tilde V$ and the many-body potential $V$, with the ground state energy \begin{equation} E_0\ =\ 12\,\om\,(1+\,\gamma) \ . \label{EQES-0} \end{equation} Now, let us take the Hamiltonian ${H}_{LB,0} \equiv -\De_{LB}(\rho) + V_0$ with potential (\ref{VQES-0}), subtract $E_0$ (\ref{EQES-0}) and make the gauge rotation with $\Psi_0$ (\ref{psi_cal-r-d3}).
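The logic here is the standard inverse construction: a nodeless, normalizable function is declared the ground state and the potential is read off from the ratio above. As a one-dimensional illustration (ours, not part of the model itself): for $\psi_0 = e^{-\om x^2/2}$,

```latex
\[
 -\frac{\psi_0''}{\psi_0}\ =\ \om\ -\ \om^2 x^2\ ,
\]
```

so $\psi_0$ is the ground state of the potential $V=\om^2 x^2$ with $E_0=\om$; the calculation of $\De_{LB}(\rho)\,\Psi_0/\Psi_0$ is the same procedure carried out in the curved $\rho$-space.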
As a result we obtain the $sl(4, {\bf R})$-Lie-algebraic operator with additional potential $\De V_N$, \cite{Turbiner:1988,Turbiner:2016} \[ \Psi_0^{-1}\,(-{\De_{LB}}(\rho) + V_0 - E_0)\,\Psi_0\ =\ -{\De_R}({\cal J})\ +\ 2(d - 2 - 2\,\gamma)\,({\cal J}_1^- + {\cal J}_2^- + {\cal J}_3^-)\ + \] \begin{equation} 12\,\om\,({\cal J}_{11}^0 +{\cal J}_{22}^0 + {\cal J}_{33}^0) + 12\,A\,\left({\cal J}_1^+(N) + {\cal J}_2^+(N) + {\cal J}_3^+(N) \right)\ +\ 12 \,A\,N \ta_1 \label{HQES-0-Lie} \end{equation} \[ \equiv h^{(qes)}(J)\ +\ \De V_N\ , \] see (\ref{HRex}), where \[ \De V_N\ =\ 12 \,A\,N \ta_1\ . \] It is evident that for integer $N$ the $d$-independent operator $h^{(qes)}(J)$ has a finite-dimensional invariant subspace ${\cal P}_{N}$, (\ref{P3}), with $\dim {\cal P}_{N} \sim N^3$ at large $N$. Finally, we arrive at the quasi-exactly-solvable, $d$-independent, single particle Hamiltonian in the space of relative distances $\rho$, \begin{equation} \label{HQES-3-3} {H}_{LB}^{(qes)}(\rho) \ =\ -\De_{LB}(\rho) \ + \ V_N^{(qes)}(\rho)\ , \end{equation} cf. (\ref{Hrel-final}), where \[ V^{(qes)}_N(\ta_1,\,\ta_2)\ = \ \frac{9}{8 \ta_1} + \,\gamma(\gamma-1)\left ( \frac{2\ta_1}{4\ta_2-\ta_1^2}\right )\ + \] \begin{equation} 6\,\om^2\,\ta_1\ +\ 6\,A\,\ta_1\,(2\,\om\,\ta_1\,-\,2\gamma\ -\ 2 N\, -\, 3)\ +\ 6\, A^2\ta_1^3 \ . \label{VQES-N} \end{equation} is a two-variable QES potential. Its configuration space is $\ta_1 \geq 0$ and $4\ta_2 \geq \ta_1^2$. For this potential $\sim N^3$ eigenstates can be found by algebraic means. They have the factorized form of a polynomial in $\rho$ multiplied by $\Psi_0$ (\ref{psi_cal-r-d3}), \[ \mbox{Pol}_N (\rho_{12}, \rho_{13},\rho_{23})\ \Psi_0 (\ta_1, \ta_2)\ .
\] These polynomials are the eigenfunctions of the quasi-exactly-solvable algebraic operator \begin{equation} \label{hQES-N-alg} \frac{1}{2}\,h^{(qes)}(\rho) \ = \ \end{equation} \[ -2(\rho_{12} \pa^2_{\rho_{12}} + \rho_{13} \pa^2_{\rho_{13}} +\rho_{23} \pa^2_{\rho_{23}}) \] \[ - \bigg( (\rho_{12} + \rho_{13} - \rho_{23})\pa_{\rho_{12}}\pa_{\rho_{13}} +(\rho_{12} + \rho_{23} - \rho_{13})\pa_{\rho_{12}}\pa_{\rho_{23}} +(\rho_{13} + \rho_{23} - \rho_{12})\pa_{\rho_{13}}\pa_{\rho_{23}}\bigg) \] \[ -\ 2\,(1\,+\,\gamma)(\pa_{\rho_{12}} + \pa_{\rho_{13}}+ \pa_{\rho_{23}}) + 6\,\om(\rho_{12}\pa_{\rho_{12}}+\rho_{13}\pa_{\rho_{13}}+\rho_{23}\pa_{\rho_{23}}) \] \[ - 6\,A\,(\rho_{12}+ \rho_{13}+\rho_{23})(\rho_{12}\,\pa_{\rho_{12}} + \rho_{13}\,\pa_{\rho_{13}}+ \rho_{23}\,\pa_{\rho_{23}}-N ) \] which is the quasi-exactly-solvable $sl(4,\,{\bf R})$-Lie-algebraic operator \begin{equation} \frac{1}{2}\,h^{(qes)}(J) \ = \ -2(\, {\cal J}_{11}^0\,{\cal J}_1^- + {\cal J}_{22}^0\,{\cal J}_2^- + {\cal J}_{33}^0\,{\cal J}_3^- \,) \label{hQES-N-Lie} \end{equation} \[ - \bigg({\cal J}_{11}^0\,({\cal J}_2^- + {\cal J}_3^- ) + {\cal J}_{22}^0\,({\cal J}_1^- + {\cal J}_3^- ) + {\cal J}_{33}^0\,({\cal J}_1^- + {\cal J}_2^- ) - {\cal J}_{31}^0\,{\cal J}_2^- - {\cal J}_{23}^0\,{\cal J}_1^- - {\cal J}_{12}^0\,{\cal J}_3^- \bigg) \] \[ -\ 2\,(1\, +\,\gamma)\,( {\cal J}_1^- + {\cal J}_2^- + {\cal J}_3^- ) + 6\,\om\,({\cal J}_{11}^0 + {\cal J}_{22}^0 + {\cal J}_{33}^0) \] \[ +\, 6 \,A\,(\, J_1^+(N)+J_2^+(N)+J_3^+(N)\, ) \ , \] cf. (\ref{HQES-0-Lie}). 
As for the original many-body problem (\ref{Hrel-Mod}) in the space of relative motion, \[ {\tilde {\cal H}}_R\,\Psi(r) \equiv \ \bigg(- {\De_R}(r)\ + \ V(r)\bigg)\, \Psi(r)\ =\ E\,\Psi(r)\ ,\ \Psi \in L_2 ({\bf \tilde R})\ , \] quasi-exactly-solvable solutions occur in the factorized form \[ \mbox{Pol}_N (\rho_{12}, \rho_{13},\rho_{23})\ \Gamma \ \Psi_0 (\ta_1, \ta_2) \ , \] where $\Gamma \sim D^{-1/4}(\rho)\,(4 \ta_2-\ta_1^2)^{\frac{3-d}{4}}$, see (\ref{gmn33-rho-det-gamma}). The corresponding three-body potential is given by \[ V_{relative, N}^{(qes)}(\ta) \ = \ V^{(qes)}_N \ - \ \tilde{V}\ = \] \begin{equation} \frac{4\,\gamma(\gamma-1)-(d-2)(d-4)}{2}\left (\frac{\ta_1}{4\ta_2-\ta_1^2}\right ) +\ 6\,\om^2\,\ta_1\ +\ 6\,A\,\ta_1\,(2\,\om\,\ta_1\,-\,2\,{\gamma}\ -\ 2 N\, -\, 3)\ +\ 6\, A^2\ta_1^3 \ , \label{VQES-N-rel} \end{equation} cf. (\ref{VQES-N}); it does not depend on the $\ta_3$-variable and does not contain a singular term $\sim 1/\ta_1$. \bigskip {\bf (II).\ \it Exactly-Solvable problem in $\rho$-variables.} If the parameter $A$ vanishes in (\ref{psi_cal-r-d3}), (\ref{VQES-N}) and (\ref{HQES-0-Lie}), (\ref{hQES-N-Lie}) we arrive at an exactly-solvable problem, where $\Psi_0$ (\ref{psi_cal-r-d3}) at $A=0$ plays the role of the ground state function, \begin{equation} \label{psi_cal-r-d2exact} \Psi_0(\rho_{12},\,\rho _{13},\,\rho _{23}) \ = \ \ta_1^{1/4} {(4\,\ta_2-\ta_1^2)}^{\frac{\gamma}{2}}\,e^{-\om\,\ta_1} \ . \end{equation} The $sl(4, {\bf R})$-Lie-algebraic operator (\ref{hQES-N-Lie}) contains no raising generators $\{{\cal J}^+(N)\}$ and becomes \[ h^{(exact)} = -{\De_R}({\cal J}) + 2(d - 2 -2\,\gamma)\,({\cal J}_1^- + {\cal J}_2^- + {\cal J}_3^-) + 12\,\om\,({\cal J}_{11}^0 +{\cal J}_{22}^0 + {\cal J}_{33}^0)\ , \] see (\ref{HRex}), and, hence, preserves the infinite flag of finite-dimensional invariant subspaces ${\cal P}_{N}$ (\ref{P3}) at $N=0,1,2\ldots$\,.
The single particle potential (\ref{VQES-N}) becomes \begin{equation} V^{(es)}(\ta_1,\,\ta_2)\ = \ \frac{9}{8 \ta_1} + \,\gamma(\gamma-1)\left ( \frac{2\ta_1}{4\ta_2-\ta_1^2}\right )\ + \ 6\,\om^2\,\ta_1\ \label{VES} \end{equation} \[ =\ \frac{9}{8 \left(\rho _{12}+\rho _{13}+\rho _{23}\right)}\ +\ 6\om^2 \left(\rho _{12}+\rho _{13}+ \rho _{23}\right) \] \[ -\ \gamma(\gamma-1) \left ( \frac{2(\rho_{12}+\rho_{13}+\rho_{23})}{\rho_{12}^2+\rho_{13}^2+\rho_{23}^2-2\rho_{12}\rho_{13}-2\rho_{12}\rho_{23} -2\rho_{13}\rho_{23}} \right )\ . \] Eventually, we arrive at the exactly-solvable single particle Hamiltonian in the space of relative distances \begin{equation} \label{HES-3-2} {H}_{LB}^{(es)}(\rho) \ =\ -\De_{LB}(\rho) \ + \ V^{(es)}(\rho)\ , \end{equation} where the spectrum of energies \[ E_{N=n_1+n_2+n_3}\ =\ 12\, \om\, (N + \gamma + 1)\ ,\quad N=0,1,2,\ldots \] is equidistant. Its degeneracy is equal to the number of partitions of $N=n_1 + n_2 + n_3$. All eigenfunctions have the factorized form of a polynomial in $\rho$ multiplied by $\Psi_0$ (\ref{psi_cal-r-d2exact}), \[ \mbox{Pol}_N (\rho_{12}, \rho_{13},\rho_{23})\ \Psi_0 (\ta_1, \ta_2)\ ,\quad N=0,1,\ldots\ .
\] These polynomials are eigenfunctions of the exactly-solvable, $d$-independent, algebraic operator \[ \frac{1}{2}\,h^{(exact)}(\rho)\ = \] \[ \ - 2 (\rho_{12} \pa^2_{\rho_{12}} + \rho_{13} \pa^2_{\rho_{13}} +\rho_{23} \pa^2_{\rho_{23}}) -\ 2\,(1\,+\,\gamma)(\pa_{\rho_{12}} + \pa_{\rho_{13}}+ \pa_{\rho_{23}}) + 6\,\om(\rho_{12}\pa_{\rho_{12}}+\rho_{13}\pa_{\rho_{13}}+\rho_{23}\pa_{\rho_{23}}) \] \begin{equation} \label{hES-rho} -\, (\rho_{12} + \rho_{13} - \rho_{23})\pa_{\rho_{12}}\pa_{\rho_{13}} -\, (\rho_{12} + \rho_{23} - \rho_{13})\pa_{\rho_{12}}\pa_{\rho_{23}} -\, (\rho_{13} + \rho_{23} - \rho_{12})\pa_{\rho_{13}}\pa_{\rho_{23}} \ , \end{equation} or, equivalently, of the exactly-solvable $sl(4,{\bf R})$-Lie-algebraic operator \[ \frac{1}{2}\,h^{(exact)}(J) \ = \ -2(\, {\cal J}_{11}^0\,{\cal J}_1^- + {\cal J}_{22}^0\,{\cal J}_2^- + {\cal J}_{33}^0\,{\cal J}_3^- \,) \] \[ - \,\bigg({\cal J}_{11}^0\,({\cal J}_2^- + {\cal J}_3^- ) + {\cal J}_{22}^0\,({\cal J}_1^- + {\cal J}_3^- ) + {\cal J}_{33}^0\,({\cal J}_1^- + {\cal J}_2^- ) - {\cal J}_{31}^0\,{\cal J}_2^- - {\cal J}_{23}^0\,{\cal J}_1^- - {\cal J}_{12}^0\,{\cal J}_3^- \bigg) \] \begin{equation} \label{hES-N-Lie} -\ 2\,(1\,+\,\gamma)\,( {\cal J}_1^- + {\cal J}_2^- + {\cal J}_3^- ) + 6\,\om\,({\cal J}_{11}^0 + {\cal J}_{22}^0 + {\cal J}_{33}^0)\ . \end{equation} Those polynomials $\mbox{Pol}_N$ are orthogonal w.r.t. $\Psi_0^2$\,, (\ref{psi_cal-r-d3}) at $A=0$; their domain is given by (\ref{CFrho}). Being written in the variables $w_{1,2,3}$, see above, they are factorisable, $F(w_1,w_2)\, f(w_3)$. To the best of our knowledge these orthogonal polynomials have not been studied in the literature. The Hamiltonian with potential (\ref{VES}) can be considered as a $d$-dimensional generalization of the 3-body Calogero model \cite{Calogero:1969}, see also \cite{RT:1995}, \cite{ST:2015}, with the loss of the property of pairwise interaction and the absence of singular interaction terms $\sim \frac{\tau_2}{\tau_3}$.
Now the potential of interaction contains two- and three-body interaction terms. If $\gamma=0,1$ in (\ref{VES}) we arrive at the celebrated harmonic oscillator potential in the space of relative distances, see e.g. \cite{Green}. In turn, in the space of relative motion this potential contains no singular terms at all and becomes the harmonic oscillator potential, \[ V \ =\ 6\,\om^2\, \ta_1\ =\ 6\,\om^2\,(\rho_{12} + \rho_{13} + \rho_{23})\ =\ 6\,\om^2\,(r^2_{12} + \ r^2_{13} + \ r^2_{23})\ , \] see \cite{Green}. Therefore, the potential (\ref{VES}) is a $d$-dimensional generalization of the harmonic oscillator in the space of relative motion rather than of the 3-body (rational) Calogero model. An attempt to construct a true $d$-dimensional generalization of the 3-body (rational) Calogero model will be made below. \bigskip {\bf (III).\ \it (Quasi)-Exactly-Solvable problem in $\ta$-variables.} The quasi-exactly-solvable $sl(4,\,{\bf R})$-Lie-algebraic operator $h^{(qes)}(J)$\,, (\ref{hQES-N-Lie}), as well as the exactly-solvable operator $h^{(es)}(J)$ (\ref{hES-N-Lie}), its degeneration at $A=0$, originally act in the $\rho$ variables (\ref{hQES-N-alg}). They are characterized by an {\it accidental} permutation symmetry $S_3$ in the $\rho$ variables and, hence, they can be rewritten in the $\ta$ variables (\ref{taus}). Surprisingly, the operator (\ref{hQES-N-alg}) remains algebraic (!) \begin{equation} \label{hQES-N-tau} h^{(qes)}(\tau) \ = \ -6\,\ta_1\pa_{\ta_1}^2 -2\ta_1(7\ta_2-\ta_1^2)\pa_{\ta_2}^2 -2\ta_3(6\ta_2-\ta_1^2)\pa_{\ta_3}^2 -\,24\,\ta_2\pa_{\ta_1,\ta_2}^2 - 36\,\ta_3\pa_{\ta_1,\ta_3}^2\ - \end{equation} \[ 2\,(4\ta_2^2+9\ta_1\ta_3-\ta_1^2\ta_2)\pa_{\ta_2,\ta_3}^2 -18\,\pa_{\ta_1} -14\,\ta_1\pa_{\ta_2}-2(7\ta_2-\ta_1^2)\pa_{\ta_3}\ \] \[ -\ 4\,(1\,+\,\gamma)\,(3\pa_{\ta_1}+2\ta_1\pa_{\ta_2}+\ta_2\pa_{\ta_3}) +12\om\,(\ta_1\pa_{\ta_1}+2\ta_2\pa_{\ta_2}+3\ta_3\pa_{\ta_3})\ + \] \[ 12\,A\ta_1(\ta_1\pa_{\ta_1} + 2\ta_2\pa_{\ta_2} + 3\ta_3\pa_{\ta_3} - N) \ .
\] Evidently, it remains algebraic at $A=0$ as well, \begin{equation} \label{hES-N-tau} h^{(es)}(\tau) \ = \ -6\,\ta_1\pa_{\ta_1}^2 -2\ta_1(7\ta_2-\ta_1^2)\pa_{\ta_2}^2 -2\ta_3(6\ta_2-\ta_1^2)\pa_{\ta_3}^2 -\,24\,\ta_2\pa_{\ta_1,\ta_2}^2 - 36\,\ta_3\pa_{\ta_1,\ta_3}^2\ - \end{equation} \[ 2\,(4\ta_2^2+9\ta_1\ta_3-\ta_1^2\ta_2)\pa_{\ta_2,\ta_3}^2 -18\,\pa_{\ta_1} -14\,\ta_1\pa_{\ta_2}-2(7\ta_2-\ta_1^2)\pa_{\ta_3}\ \] \[ -\ 4\,(1\,+\,\gamma)\,(3\pa_{\ta_1}+2\ta_1\pa_{\ta_2}+\ta_2\pa_{\ta_3}) +12\om\,(\ta_1\pa_{\ta_1}+2\ta_2\pa_{\ta_2}+3\ta_3\pa_{\ta_3})\ , \] becoming the exactly-solvable one. Note that both the quasi-exactly-solvable operator (\ref{hQES-N-tau}) and the exactly-solvable operator (\ref{hES-N-tau}) admit the integral \begin{equation} \label{integral-tau} -L_1^2 \ =\ \left(27\tau_3^2 - 18\tau_3\tau_2\tau_1 + 4\tau_3\tau_1^3 + 4\tau_2^3 - \tau_2^2 \tau_1^2 \right) \pa^2_{\tau_3}\ +\ \left(27\tau_3 - 9\tau_1\tau_2 + 2\tau_1^3 \right) \pa_{\tau_3}\ , \end{equation} cf. (\ref{integral}), $[h^{(qes)}(\tau), L_1^2]=0$. This integral is an algebraic operator. It involves derivatives w.r.t. $\ta_3$ only. It can be immediately checked that the quasi-exactly-solvable operator (\ref{hQES-N-tau}) has a finite-dimensional invariant subspace of polynomials in $\ta$, \begin{equation} \label{P3-tau} {\mathcal P}^{(1,2,3)}_{N}\ =\ \langle \ta_1^{p_1} \ta_2^{p_2} \ta_3^{p_3} \vert \ 0 \le p_1+2p_2+3p_3 \le N \rangle\ , \end{equation} cf. (\ref{P3}), with characteristic vector $(1,2,3)$; hence, the Newton pyramid has sides 1,2,3, associated with a solid angle of $90^o$, for a discussion see e.g. \cite{Turbiner:2013}. This finite-dimensional space appears as a finite-dimensional representation space of the algebra of differential operators $h^{(3)}$ which was discovered in relation with the $H_3$ (non-crystallographic) rational Calogero model as its hidden algebra \cite{GT}.
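The coefficients in (\ref{integral-tau}) can be verified directly: $L_1$ annihilates $\ta_1$ and $\ta_2$, so acting on functions of the $\ta$'s only $\ta_3$-derivatives survive. A sympy sketch of this check (our notation, with $L_1$ taken in the $u$-variables from (\ref{integral-J})):

```python
# Sketch (sympy, our notation): the vector field
# L1 = (u2-u3) d/du1 + (u3-u1) d/du2 + (u1-u2) d/du3
# annihilates tau1, tau2; acting on functions of tau3 its square
# reproduces the algebraic operator (integral-tau).
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3')
t1, t2, t3 = u1 + u2 + u3, u1*u2 + u1*u3 + u2*u3, u1*u2*u3

def L1(f):
    return ((u2 - u3)*sp.diff(f, u1) + (u3 - u1)*sp.diff(f, u2)
            + (u1 - u2)*sp.diff(f, u3))

assert sp.expand(L1(t1)) == 0 and sp.expand(L1(t2)) == 0

A = 27*t3**2 - 18*t3*t2*t1 + 4*t3*t1**3 + 4*t2**3 - t2**2*t1**2
B = 27*t3 - 9*t1*t2 + 2*t1**3

# on f = tau3 and f = tau3^2 the operator must give B and 2A + 2 tau3 B
check1 = sp.expand(-L1(L1(t3)) - B)
check2 = sp.expand(-L1(L1(t3**2)) - (2*A + 2*t3*B))
assert check1 == 0 and check2 == 0
```

Note that $L_1\ta_3$ equals the Vandermonde factor $(u_1-u_2)(u_1-u_3)(u_2-u_3)$, whose square is minus the first coefficient above, i.e., the discriminant.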
Note that the space ${\mathcal P}^{(1,2,3)}_{N}$ is invariant with respect to the quasi-projective transformation, \begin{equation} \label{QPT-tau} \ta_1 \rar \ta_1\ ,\ \ta_2 \rar \ta_2 + A \ta_1^2\ ,\ \ta_3 \rar \ta_3 + B \ta_1 \ta_2 + C \ta_1^3\ , \end{equation} where $A, B, C$ are parameters. (AVT thanks M. Kontsevich for bringing attention to this property). The algebra $h^{(3)}$ is infinite-dimensional but finitely-generated; for a discussion see \cite{GT}. Its generating elements can be split into two classes. The generators of the first class (lowering and Cartan operators) act in $\mathcal{P}^{(1,2,3)}_N$ for any $N$ and therefore preserve the flag $\mathcal{P}^{(1,2,3)}$. The operators of the second class (raising operators) act on the space $\mathcal{P}^{(1,2,3)}_N$ with fixed $N$ only. Let us introduce the following notation for the derivatives: \[ \pa_i\equiv\frac{\pa}{\pa\tau_i}\ ,\quad \pa_{ij}\equiv\frac{\pa^2}{\pa\tau_{i}\pa\tau_{j}}\ ,\quad\pa_{ijk}\equiv\frac{\pa^3}{\pa\tau_{i}\pa\tau_{j}\pa\tau_{k}}\ .
\] The first class of generating elements consists of 22 generators: 13 of them are first-order operators \begin{equation} \begin{aligned} \label{ops_1} & T_0^{(1)}=\pa_1\,, && T_0^{(2)}=\pa_2\,, && T_0^{(3)}=\pa_3\,,\\ & T_1^{(1)}=\tau_1\pa_1\,, && T_2^{(2)}=\tau_2\pa_2\,, && T_3^{(3)}=\tau_3\pa_3\,,\\ & T_1^{(3)}=\tau_1\pa_3\,, && T_{11}^{(3)}=\tau_1^2\pa_3\,, && T_{111}^{(3)}=\tau_1^3\pa_3\,,\\ & T_1^{(2)}=\tau_1\pa_2\,, && T_{11}^{(2)}=\tau_1^2\pa_2\,, && T_2^{(3)}=\tau_2\pa_3\,,\\ & &&T_{12}^{(3)}=\tau_1\tau_2\pa_3\ ,&& \end{aligned} \end{equation} 6 are of second order \begin{equation} \begin{aligned} \label{ops_2} & T_2^{(11)}=\tau_2\pa_{11}\,, && T_{22}^{(13)}=\tau_2^2\pa_{13}\,, && T_{222}^{(33)}=\tau_2^3\pa_{33}\,,\\ & T_3^{(12)}=\tau_3\pa_{12}\,, && T_3^{(22)}=\tau_3\pa_{22}\,, && T_{13}^{(22)}=\tau_1\tau_3\pa_{22}\ , \end{aligned} \end{equation} and 2 are of third order \begin{equation} \begin{aligned} \label{ops_3} & T_3^{(111)}=\tau_3\pa_{111}\,, && T_{33}^{(222)}=\tau_3^2\pa_{222}\ . \end{aligned} \end{equation} The second class consists of 8 generators, of which a single one is of first order \begin{equation} \label{R1} T_1^+ = \ta_1 T_0\ , \end{equation} 4 are of second order \begin{equation} \begin{aligned} \label{R2} & T_{2,-1}^+=\tau_2\pa_1T_0\,, && T_{3,-2}^+=\tau_3\pa_2T_0\,, && T_{22,-3}^+ = \ta_2^2\pa_3T_0\,, && T_2^+ = \tau_2T_0(T_0+1)\ , \end{aligned} \end{equation} and 3 are of third order \begin{equation} \begin{aligned} \label{R3} & T_{3,-11}^{+}=\tau_3\pa_{11}T_0\ , && T_{3,-1}^+=\tau_3\pa_1T_0(T_0+1)\ , && T_3^+=\tau_3T_0(T_0+1)(T_0+2)\ , \end{aligned} \end{equation} where, for convenience, we have introduced the diagonal operator (the Euler-Cartan generator) \begin{equation} \label{jo} T_0=\tau_1\pa_1+2\tau_2\pa_2+3\tau_3\pa_3 - N\ . \end{equation} This operator acts diagonally on monomials, it is of zeroth order and, hence, it belongs to the first class.
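That the second-class operators act within ${\mathcal P}^{(1,2,3)}_{N}$ only for the value of $N$ entering $T_0$ can be seen on monomials: $T_0$ is diagonal with eigenvalue $p_1+2p_2+3p_3-N$, which cancels the would-be overflow of the weighted degree. A sympy sketch (our notation) for the generator $T_2^+=\tau_2\,T_0(T_0+1)$:

```python
# Sketch (sympy, our notation): the raising generator T2+ = tau2 T0 (T0+1),
# T0 = tau1 d1 + 2 tau2 d2 + 3 tau3 d3 - N, preserves the weighted space
# P^{(1,2,3)}_N = span{ tau1^p1 tau2^p2 tau3^p3 : p1 + 2 p2 + 3 p3 <= N }.
import sympy as sp
from itertools import product

t = sp.symbols('tau1 tau2 tau3')
N = 4

def T0(f):
    return sp.expand(sum((i + 1)*t[i]*sp.diff(f, t[i]) for i in range(3)) - N*f)

def T2_plus(f):
    g = T0(f)
    return sp.expand(t[1]*(T0(g) + g))   # tau2 T0 (T0 + 1) f

def wdeg(f):
    if f == 0:
        return 0
    return max(p1 + 2*p2 + 3*p3 for p1, p2, p3 in sp.Poly(f, *t).monoms())

basis = [t[0]**p1 * t[1]**p2 * t[2]**p3
         for p1, p2, p3 in product(range(N + 1), repeat=3)
         if p1 + 2*p2 + 3*p3 <= N]
assert all(wdeg(T2_plus(m)) <= N for m in basis)
```

On a monomial of weighted degree $w$ the result is $(w-N)(w-N+1)\,\tau_2$ times the monomial, which vanishes at $w=N$ and $w=N-1$.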
It is not surprising that the algebraic operator $h^{(qes)}(\tau)$, (\ref{hQES-N-tau}), can be rewritten in terms of generators of the $h^{(3)}$-algebra, \begin{equation} \begin{aligned} h^{(qes)}(T) \ = & \ - \bigg[6\,T_1^{(1)}\,T_0^{(1)} + 2\,(7\,T_2^{(2)} - T_{11}^{(2)})\,T_1^{(2)} + T_3^{(3)}(6\,T_2^{(3)} - T_{11}^{(3)}) \\ & \ +\ 12 T_0^{(1)}\,(2\,T_2^{(2)} +3\,T_3^{(3)}) + 2\,(4\,T_2^{(3)}\,T_2^{(2)}+9\,T_1^{(2)}\,T_3^{(3)}-\,T_{11}^{(3)}\,T_2^{(2)}) \\ & + 2\,(9\,T_0^{(1)}+7\,T_1^{(2)}) + 2\,(7\,T_2^{(3)}-T_{11}^{(3)}) \bigg] \label{hj} \end{aligned} \end{equation} \[ -\ 4\,(1\, + \,\gamma)\,(T_2^{(3)} + 2\,T_1^{(2)} + 3\,T_0^{(1)})\ +\ 12\,\om\,(T_0+N)\ +\ 12\,A\,T_1^+\ , \] as well as the algebraic operator $h^{(es)}(\tau)$, (\ref{hES-N-tau}), which occurs at $A=0$, can be rewritten in terms of generators of the $h^{(3)}$-algebra, \begin{equation} \begin{aligned} h^{(es)}(T) \ = & \ -\bigg[6\,T_1^{(1)}\,T_0^{(1)} +2\,(7\,T_2^{(2)} - T_{11}^{(2)})\,T_1^{(2)} + T_3^{(3)}(6\,T_2^{(3)} - T_{11}^{(3)}) \\ & \ +\ 12 T_0^{(1)}\,(2\,T_2^{(2)}\, + \,3\,T_3^{(3)}) + 2\,(4\,T_2^{(3)}\,T_2^{(2)}+9\,T_1^{(2)}\,T_3^{(3)}-\,T_{11}^{(3)}\,T_2^{(2)}) \\ & + 2\,(9\,T_0^{(1)}+7\,T_1^{(2)}) + 2\,(7\,T_2^{(3)}-T_{11}^{(3)}) \bigg] \label{hj1} \end{aligned} \end{equation} \[ -\ 4\,(1\, + \,\gamma)\,(T_2^{(3)} + 2\,T_1^{(2)} + 3\,T_0^{(1)})\ +\ 12\,\om\,T_0\ , \] where without loss of generality we put $N=0$. The integral (\ref{integral-tau}) can be rewritten in terms of generators of the $h^{(3)}$-algebra as well, \begin{equation}\label{integral-h3} -L_1^2 \ =\ 27 T_3^{(3)} T_3^{(3)} - 18 T_3^{(3)} T_{12}^{(3)} + 4 T_3^{(3)}T_{111}^{(3)} + 4 T_{222}^{(33)} - T_{12}^{(3)} T_{12}^{(3)} - 9 T_{12}^{(3)} + 2 T_{111}^{(3)}\ . \end{equation} It involves the generators of the first class only: (\ref{ops_1}), (\ref{ops_2}). Hence, it preserves the infinite flag of polynomials ${\mathcal P}^{(1,2,3)}_{N}$, see (\ref{P3-tau}), $N=0,1,2, \ldots$.
It can be immediately verified that with respect to the action of the operator (\ref{hQES-N-tau}) the finite-dimensional invariant subspace (\ref{P3-tau}) is reducible: it preserves \begin{equation} \label{P2-tau} {\mathcal P}^{(1,2)}_{N}\ \equiv \ \langle \ta_1^{p_1} \ta_2^{p_2} \vert \ 0 \le p_1+2p_2 \le N \rangle\ \subset {\mathcal P}^{(1,2,3)}_{N}\ . \end{equation} The operator which acts on ${\mathcal P}^{(1,2)}_{N}$ has the form, \begin{equation} \label{hQES-N-tau-2} h^{(qes)}(\ta_1,\ta_2) \ = \ -6\,\ta_1\pa_1^2 -2\ta_1(7\ta_2-\ta_1^2)\pa_2^2 -\,24\,\ta_2\pa_{1,2}^2\ -\ 6\,(5\, +\, 2\gamma)\pa_1 - 2(11\,+\,4\gamma)\ta_1\pa_2\ \end{equation} \[ +\ 12\om\,(\ta_1\pa_1+2\ta_2\pa_2)\ +\ 12\,A\ta_1(\ta_1\pa_1 + 2\ta_2\pa_2 - N) \ , \] cf. (\ref{eq-psi}). It has $\sim {N}^2$ polynomial eigenfunctions which depend on the two variables $\ta_{1,2}$ only. The space ${\mathcal P}^{(1,2)}_{N}$ is a finite-dimensional representation space of the non-semi-simple Lie algebra $g^{(2)} \subset gl(2,{\bf R}) \oplus R^3$ realized by first-order differential operators, \cite{gko1} (see also \cite{Turbiner:1994}, \cite{ghko}, \cite{Turbiner:1998}), \[ t_1\ =\ \pa_{\ta_1} \ , \] \[ t_2 ({ N})\ =\ {\ta_1} \pa_{\ta_1}\ -\ \frac{{ N}}{3} \ , \ t_3 ({ N})\ =\ 2 {\ta_2}\pa_{\ta_2}\ -\ \frac{{ N}}{3}\ ,\] \[ t_4 ({ N})\ =\ {\ta_1}^2 \pa_{\ta_1} \ +\ 2 {\ta_1} {\ta_2} \pa_{\ta_2} \ - \ { N} {\ta_1} \ ,\] \begin{equation} \label{gr} r_{i}\ = \ {\ta_1}^{i}\pa_{\ta_2}\ ,\quad i=0, 1, 2\ . \end{equation} The operator (\ref{hQES-N-tau-2}) can be rewritten in terms of $gl(2,{\bf R}) \oplus R^3$ operators alone, \[ h^{(qes)}(t,r)\ =\ -6\,r_1 t_1 - 14 (t_3 + \frac{{ N}}{3}) r_1 + 2 r_2 r_1 - 24 t_1 (t_3 + \frac{{ N}}{3}) \] \[ - 6(5 \,+\, 2\gamma)t_1 - 2(11\,+\,4\gamma) r_1 + 12\,\om (t_2 + t_3 + N)\ +\ 12 A t_4\ .
\] The space (\ref{P3-tau}) is reducible further: the operator (\ref{hQES-N-tau}) (and also the operator (\ref{hQES-N-tau-2})) preserves \begin{equation} \label{P1-tau} {\mathcal P}^{(1)}_{N}\ \equiv \ \langle \ta_1^{p_1} \vert \ 0 \le p_1 \le N \rangle\ \subset {\mathcal P}^{(1,2)}_{N}\ \subset {\mathcal P}^{(1,2,3)}_{N}\ , \end{equation} as well. The operator which acts on ${\mathcal P}^{(1)}_{N}$ has the form, \begin{equation} \label{hQES-N-tau-1} \frac{1}{6}h^{(qes)}(\ta_1) \ = \ -\,\ta_1\pa_1^2 + \bigg(2\,A\ta_1^2 + 2\om\,\ta_1 - (5\, +\, 2\gamma)\bigg)\pa_1\ - 2\,A N \ta_1 \ . \end{equation} It can be rewritten in terms of $sl(2, {\bf R})$ algebra generators, \begin{equation} \label{sl2} {\cal J}^+(N)\ =\ \ta_1^2\pa_{\ta_1} - N \ta_1\ ,\ {\cal J}^0(N)\ =\ 2\ta_1\pa_{\ta_1} - N\ ,\ {\cal J}^-(N)\ =\ \pa_{\ta_1}\ . \end{equation} It can be immediately recognized that the spectrum of polynomial eigenfunctions of (\ref{hQES-N-tau-1}) corresponds to that of the QES sextic polynomial potential with a singular term $\sim 1/\ta_1$, see \cite{Turbiner:2016}, Case VII. Eventually, it can be stated that among $\sim N^3$ polynomial eigenfunctions in $\ta$ variables of the quasi-exactly-solvable operator (\ref{hQES-N-tau}) there are $\sim N^2$ polynomial eigenfunctions of the quasi-exactly-solvable operator (\ref{hQES-N-tau-2}) and $\sim N$ polynomial eigenfunctions of the quasi-exactly-solvable operator (\ref{hQES-N-tau-1}). A similar situation occurs for the exactly-solvable operator (\ref{hES-N-tau}), see (\ref{hQES-N-tau}) at $A=0$, for which there exist infinitely-many polynomial eigenfunctions in $\ta$ variables.
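The preservation of ${\mathcal P}^{(1)}_{N}$ by (\ref{hQES-N-tau-1}) is elementary: acting on $\ta_1^k$, the only degree-raising coefficient is $2A(k-N)$, which vanishes at $k=N$. A sympy sketch (our notation):

```python
# Sketch (sympy, our notation): the operator (hQES-N-tau-1),
# (1/6) h = -t d2/dt2 + (2 A t^2 + 2 w t - (5 + 2 g)) d/dt - 2 A N t,
# maps span{1, t, ..., t^N} into itself: on t^k the coefficient of
# t^{k+1} is 2 A (k - N), vanishing at k = N.
import sympy as sp

t = sp.Symbol('tau1')
A, w, g = sp.symbols('A omega gamma', positive=True)
N = 5

def h(f):
    return sp.expand(-t*sp.diff(f, t, 2)
                     + (2*A*t**2 + 2*w*t - (5 + 2*g))*sp.diff(f, t)
                     - 2*A*N*t*f)

# the image of every basis monomial stays of degree <= N
assert all(sp.degree(h(t**k), t) <= N for k in range(N + 1))
```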
Among these eigenfunctions there exists an infinite family of polynomial eigenfunctions in the $\ta_{1,2}$ variables, which are eigenfunctions of the operator \begin{equation} \label{hES-N-tau-2} h^{(es)}(\ta_1,\ta_2) \ = \ -6\,\ta_1\pa_1^2 -2\ta_1(7\ta_2-\ta_1^2)\pa_2^2 -\,24\,\ta_2\pa_{1,2}^2\ -\ 30\,\pa_1 - 22\,\ta_1\pa_2\ -\ \end{equation} \[ 4\gamma(3\pa_1+2\ta_1\pa_2) +12\om\,(\ta_1\pa_1+2\ta_2\pa_2)\ . \] Besides that there exists an infinite family of polynomial eigenfunctions in the $\ta_{1}$ variable, which are eigensolutions of the operator \begin{equation} \label{hES-N-tau-1} h^{(es)}(\ta_1) \ = \ -6\,\ta_1\pa_1^2\ +\ 6\,(2\,\om\,\ta_1\,-\,2\, \gamma\, -\, 5)\pa_1\ ; \end{equation} they are nothing but the Laguerre polynomials. The spectrum of polynomial eigenfunctions is equidistant, \[ E_N\ =\ 12 \,\om\, N\ , \] and it corresponds to the spectrum of a harmonic oscillator (with a singular term $\sim 1/\ta_1$ in the potential). Finally, we emphasize that both above-described QES problems, in $\rho$ and in $\ta$ variables, conceptually exclude the limit $d=1$: the determinants of the metrics $g^{\mu\nu}(\rho)$ and $g^{\mu\nu}(\ta)$ are identically zero at $d=1$, since the area of the triangle of interaction shrinks to zero, and the operator ${\De_{LB}}$ becomes singular. \subsection{QES in geometrical variables for arbitrary $d$} We consider the $n$-body Hamiltonian (\ref{Hrel-Mod}) \[ {\tilde {\cal H}}_R\ = \ - {\De_R} + V_G\ , \] written in geometrical variables $P, S, T$ and look for potentials $V_G$ for which there exists an (in)finite number of polynomial eigenfunctions for any positive $d > 0$. This problem can be reduced to a search for a square-integrable function $\Psi_0(P,\,S,\,T)$ such that the gauge-rotated operator $(\Psi_0)^{-1} \De_R \Psi_0$ remains algebraic (up to an additive function) and can be rewritten in terms of the generators of the algebra $h^{(3)}$ (acting on functions of the variables $P, S, T$).
\subsubsection{Exactly-solvable problem} Let us take the operator (\ref{addition3-3tauS}) \[ \De_R \ = \ 6\,P\,\pa^2_P + \frac{1}{2}\,P\,S\,\pa_{S}^2 + T\,(48\,S + P^2)\,\pa_{T}^2 + 36\,T\,\pa_{P,T} + 24\,S\,\pa_{P,S} + 2\,S (16\,S + P^2)\,\pa_{S,T} \] \[ +\ 6\,d\,\pa_P\ +\ \frac{1}{4}\,(d-1)\,P\,\pa_{S}\ +\ \frac{1}{2}\,[16\,(d+4)\,S + d\,P^2]\,\pa_{T} \ , \] and gauge-rotate it with a $T$-independent function \begin{equation} \label{psi_cal-d3-1} \Psi_0(P,\,S,\,T=0) \ = \ {S}^{\tilde \gamma}\,e^{-\om\,P} \ \equiv \Psi_0(P,S)\ ,\ \tilde \gamma\ \geq \ 0\ . \end{equation} As a result we get additional terms in $\De_R$, \[ (\Psi_0)^{-1} \De_R \Psi_0\ =\ \De_R - 12\om (P\pa_P + 2 S \pa_S + 3 \,T\, \pa_{T}) + 24 \tilde \gamma \pa_P + \tilde \gamma P \pa_S + 2 \tilde\gamma (16S + P^2) \pa_{T} \] \[ + 6 \om^2 P + \frac{\tilde\gamma (2\tilde\gamma -3 + d)P}{4S} - 6\om(d + 4\tilde\gamma) \equiv h^{(exact)} + V^{(exact)} - E_0\ , \] where evidently \begin{equation} \label{exact-III} h^{(exact)}\ =\ \ 6\,P\,\pa^2_P + \frac{1}{2}\,P\,S\,\pa_{S}^2 + T\,(48\,S + P^2)\,\pa_{T}^2 + 36\,T\,\pa_{P,T} + 24\,S\,\pa_{P,S} + 2\,S (16\,S + P^2)\,\pa_{S,T} \end{equation} \[ -12\om (P\pa_P + 2 S \pa_S + 3 T \pa_{T}) \] \[ +\ 6\,(d+4 \tilde\gamma)\,\pa_P\ +\ \frac{1}{4}\,(d-1+4 \tilde\gamma)\,P\,\pa_{S}\ +\ \frac{1}{2}\,[16\,(d+4+4 \tilde\gamma)\,S + (d+4 \tilde\gamma)\,P^2]\,\pa_{T} \ , \] is an exactly-solvable, algebraic operator, see below, and the potential \begin{equation} \label{V0-III} V^{(exact)}\ =\ 6 \om^2 P + \frac{\tilde\gamma (2\tilde\gamma -3 + d)}{4} \frac{P}{S}\ , \end{equation} is the exactly-solvable many-body potential, which at $\tilde \gamma=0$ can be identified with a harmonic oscillator potential, see e.g. \cite{Green}. The second term plays the role of a centrifugal potential due to the rotation of the interaction plane (triangle) around the center-of-mass.
Here \begin{equation} \label{E0-III} E_0\ =\ 6\om(d + 4\tilde\gamma)\ , \end{equation} is the ground state energy. The function $\Psi_0(P,S)$ is nothing but the ground state function for the potential (\ref{V0-III}); it is positive in the configuration space $S_{\triangle}>0$. It can be immediately checked that the exactly-solvable operator (\ref{exact-III}) has infinitely-many finite-dimensional invariant subspaces of polynomials in the variables $P, S, T$, \begin{equation} \label{P3-tauS} {\mathcal P}^{(1,2,3)}_{N}\ =\ \langle P^{p_1} S^{p_2} T^{p_3} \vert \ 0 \le p_1+2p_2+3p_3 \le N \rangle\ , \end{equation} cf. (\ref{P3-tau}), with characteristic vector $(1,2,3)$, which form an infinite flag. The spectrum of $-h^{(exact)}$ coincides with the spectrum of the Hamiltonian; it is \begin{equation} \label{E3-tauS} E_{p_1,p_2,p_3}\ =\ 12 \om (p_1 + 2p_2 + 3p_3 ) + E_0\ , \end{equation} where $p_{1,2,3}=0,1,\ldots$ are quantum numbers; the degeneracy of a level equals the number of triples $(p_1,p_2,p_3)$ with fixed \[ M\ =\ p_1 + 2 p_2 + 3 p_3\ . \] The operator (\ref{exact-III}) acts on (\ref{P3-tauS}) reducibly. It maps \[ h^{(exact)}: {\mathcal P}^{(1,2,0)}_{N} \rar {\mathcal P}^{(1,2,0)}_{N} = \langle P^{p_1} S^{p_2} \vert \ 0 \le p_1+2p_2 \le N \rangle \equiv {\mathcal P}^{(1,2)}_{N} \subset {\mathcal P}^{(1,2,3)}_{N}\ . \] and \[ h^{(exact)}: {\mathcal P}^{(1,0,0)}_{N} \rar {\mathcal P}^{(1,0,0)}_{N} = \langle P^{p_1} \vert \ 0 \le p_1 \le N \rangle \equiv {\mathcal P}^{(1)}_{N} \subset {\mathcal P}^{(1,2,3)}_{N}\ . \] Therefore, $h^{(exact)}$ preserves the subflag of spaces of polynomials made of ${\mathcal P}^{(1,2)}_{N},\ N=0,1,2,\ldots$ as well as polynomials ${\mathcal P}^{(1)}_{N},\ N=0,1,2,\ldots$. This implies the existence of a (sub)-family of eigenpolynomials of the form $\mbox{Pol}_N (P, S)$ as well as another (sub)-family $\mbox{Pol}_N (P)$\,.
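The spectrum (\ref{E3-tauS}) reflects the triangular action of $h^{(exact)}$ on the flag: all terms except the $\om$-term strictly lower the weighted degree $p_1+2p_2+3p_3$, while the $\om$-term is diagonal. A sympy sketch (our notation; the coefficient of $\pa_{P,T}$ is taken as $36\,T$, as in the expression for $\De_R$ above):

```python
# Sketch (sympy, our notation): triangularity of h^{(exact)} (exact-III)
# on the flag: h(m) + 12 w (p1 + 2 p2 + 3 p3) m has strictly lower
# weighted degree for every monomial m = P^p1 S^p2 T^p3.
import sympy as sp
from itertools import product

P, S, T = sp.symbols('P S T')
w, d, g = sp.symbols('omega d gamma_t', positive=True)

def h(f):
    return sp.expand(
        6*P*sp.diff(f, P, 2) + sp.Rational(1, 2)*P*S*sp.diff(f, S, 2)
        + T*(48*S + P**2)*sp.diff(f, T, 2)
        + 36*T*sp.diff(f, P, T) + 24*S*sp.diff(f, P, S)
        + 2*S*(16*S + P**2)*sp.diff(f, S, T)
        - 12*w*(P*sp.diff(f, P) + 2*S*sp.diff(f, S) + 3*T*sp.diff(f, T))
        + 6*(d + 4*g)*sp.diff(f, P)
        + sp.Rational(1, 4)*(d - 1 + 4*g)*P*sp.diff(f, S)
        + sp.Rational(1, 2)*(16*(d + 4 + 4*g)*S + (d + 4*g)*P**2)*sp.diff(f, T))

def wdeg(f):
    if f == 0:
        return -1
    return max(a + 2*b + 3*c for a, b, c in sp.Poly(f, P, S, T).monoms())

for p1, p2, p3 in product(range(3), repeat=3):
    m = P**p1 * S**p2 * T**p3
    rest = sp.expand(h(m) + 12*w*(p1 + 2*p2 + 3*p3)*m)
    assert wdeg(rest) < p1 + 2*p2 + 3*p3
```

For instance, $h^{(exact)} P = -12\,\om\,P + 6(d+4\tilde\gamma)$, so the diagonal entry at weighted degree one is $-12\,\om$.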
The first sub-family leads to the eigenfunctions of the reduced operator (\ref{exact-III}), \begin{equation} \label{exact-III-1} h^{(exact)}_r \ =\ \ 6\,P\,\pa^2_P + \frac{1}{2}\,P\,S\,\pa_{S}^2 + 24\,S\,\pa_{P,S} -12\om (P\pa_P + 2 S \pa_S) \end{equation} \[ +\ 6\,(d+4 \tilde\gamma)\,\pa_P\ +\ \frac{1}{4}\,(d-1+4 \tilde\gamma)\,P\,\pa_{S}\ , \] namely, \[ h^{(exact)}_r: {\mathcal P}^{(1,2)}_{N} \rar {\mathcal P}^{(1,2)}_{N} \ , \] while the second sub-family leads to the eigenfunctions of another reduced operator (\ref{exact-III}), \begin{equation} \label{exact-III-2} h^{(exact)}_{rr} \ =\ 6\,P\,\pa^2_P\ -\ 12\om \,P\pa_P \ +\ 6\,(d+4 \tilde\gamma)\,\pa_P\ , \end{equation} namely, \[ h^{(exact)}_{rr}: {\mathcal P}^{(1)}_{N} \rar {\mathcal P}^{(1)}_{N} \ . \] One can recognize that (\ref{exact-III-2}) is the Laguerre operator. In general, the eigenfunctions of the algebraic sector of ${\tilde {\cal H}}_R\ = \ - {\De_R} + V^{(exact)}$, which correspond to the eigenvalues (\ref{E3-tauS}), factorize into the product of a polynomial and $\Psi_0 (P, S)$, thus, they are of the form \[ \mbox{Pol}_N (P, S, T)\ \Psi_0 (P, S)\ . \] However, among them there exist two particular forms of eigenfunctions, \[ \mbox{Pol}_N (P, S)\ \Psi_0 (P, S)\ , \] and \[ \mbox{Pol}_N (P)\ \Psi_0 (P, S)\ . \] It is evident that they form an infinite family of eigenstates of the reduced $n$-body Hamiltonian ${\tilde {\cal H}}_R$, hence, this problem is exactly solvable. We do not know whether this spectrum is complete. However, from the point of view of the original problem (\ref{Hrel}), $${\cal H}_r =\ -\sum_{i=1}^3 \frac{1}{2 m_i} \De_i^{(d)}\ +\ V^{(exact)}\ ,$$ it is quasi-exactly-solvable, since it has infinitely-many angle-dependent eigenfunctions which are likely of non-algebraic nature. The limit $d=1$ corresponds to vanishing area of the interaction triangle, hence, $S=0$, and also $\tilde\gamma=0$. 
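The identification of (\ref{exact-III-2}) with the Laguerre operator can be made concrete: in the variable $x=2\om P$ it becomes $12\om\,[\,x\,\pa_x^2+(d+4\tilde\gamma-x)\,\pa_x\,]$, so its polynomial eigenfunctions are the associated Laguerre polynomials $L_n^{(d+4\tilde\gamma-1)}(2\om P)$ with eigenvalues $-12\om n$. A minimal symbolic check (using {\tt sympy}, with illustrative parameter values):

```python
# Verify h^(exact)_{rr} L_n^{(alpha)}(2*omega*P) = -12*omega*n * L_n^{(alpha)}(2*omega*P)
# with alpha = d + 4*gamma_t - 1, for sample parameter values.
import sympy as sp

P = sp.symbols('P', positive=True)
d, w, g, n = sp.Rational(3), sp.Rational(1), sp.Rational(1, 2), 3  # illustrative

def h_rr(f):
    return 6*P*sp.diff(f, P, 2) - 12*w*P*sp.diff(f, P) + 6*(d + 4*g)*sp.diff(f, P)

alpha = d + 4*g - 1
f = sp.assoc_laguerre(n, alpha, 2*w*P)
assert sp.simplify(h_rr(f) + 12*w*n*f) == 0
```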
The ground state function (\ref{psi_cal-d3-1}) becomes \begin{equation} \label{psi_cal-d3-d1} \Psi_0(P,\,S=0,\,T=0) \ =\ e^{-\om\,P} \ \equiv\ \Psi_0(P)\ ,\ \om > 0\ , \end{equation} the operator $\De_R$ remains algebraic, see (\ref{addition3-3tauS2}), and so does the operator (\ref{exact-III}), \begin{equation} \label{exact-III-d1} h^{(exact)}_{d=1}\ =\ \ 6\,P\,\pa^2_P + T\,P^2\,\pa_{T}^2 + 36\,T\,\pa_{P,T} - 12\om (P\pa_P + 3 T \pa_{T}) +\ 6\,\pa_P\ +\ \frac{P^2}{2}\,\pa_{T} \ . \end{equation} It is easy to check that at $\om=0$ the operator $h^{(exact)}_{d=1}$ is the flat Laplace-Beltrami operator with metric (\ref{gmunu-d1}). The potential contains no singular part, \begin{equation} \label{V0-III-d1} V^{(exact)}_{d=1}\ =\ 6 \om^2 P \ , \end{equation} and coincides with the regular part of the 3-body $G_2$ rational (Wolfes) model. The geometrical coordinates $P, T$ correspond to $\la_{1,2}$ (\ref{d1-la}), in which the 3-body $G_2$ rational (Wolfes) model becomes algebraic, and \begin{equation} \label{E0-III-d1} E_0{(d=1)}\ =\ 6\om\ . \end{equation} The spectrum of the operator $h^{(exact)}_{d=1}$ (\ref{exact-III-d1}) is equidistant, \begin{equation} \label{P3-tauS-d0} E_{p_1,0,p_3}\ =\ 12 \om (p_1 + 3p_3 ) + 6 \om\ , \end{equation} cf.~(\ref{E3-tauS}), where $p_{1,3}=0,1,\ldots$ are quantum numbers, with multiplicity \[ M\ =\ p_1 + 3 p_3\ . \] It can be immediately checked that the exactly-solvable operator (\ref{exact-III-d1}) has infinitely-many finite-dimensional invariant subspaces of polynomials in the variables $P, T$, \begin{equation} \label{P3-tauS-d1} {\mathcal P}^{(1,0,3)}_{N}\ =\ \langle P^{p_1} T^{p_3} \vert \ 0 \le p_1+3p_3 \le N \rangle\ ,\ {\mathcal P}^{(1,0,3)}_{N} \subset {\mathcal P}^{(1,2,3)}_{N} \end{equation} cf. (\ref{P3-tauS}), with characteristic vector $(1,3)$, which form an infinite flag. It is easy to check that the operator (\ref{exact-III-d1}) acts on (\ref{P3-tauS-d1}) reducibly. 
It maps \[ h^{(exact)}_{d=1}: {\mathcal P}^{(1,0,0)}_{N} \rar {\mathcal P}^{(1,0,0)}_{N} = \langle P^{p_1} \vert \ 0 \le p_1 \le N \rangle \equiv {\mathcal P}^{(1)}_{N} \subset {\mathcal P}^{(1,0,3)}_{N}\ . \] It leads to the eigenfunctions of a reduced operator (\ref{exact-III-d1}), \begin{equation} \label{exact-III-d1-1} h^{(exact)}_{d=1,r} \ =\ 6\,P\,\pa^2_P\ -\ 12\om \,P\pa_P \ +\ 6\,\pa_P\ , \end{equation} cf. (\ref{exact-III-2}). It is again the Laguerre operator. In general, the eigenfunctions of the algebraic sector of ${\tilde {\cal H}}_R\ = \ - {\De_R} + V^{(exact)}_{d=1}$, which correspond to the eigenvalues (\ref{P3-tauS-d0}), factorize as the product of a polynomial and $\Psi_0 (P)$, thus, they are of the form \[ \mbox{Pol}_N (P, T)\ \Psi_0 (P)\ . \] However, among them there exists a particular form of eigenfunctions, \[ \mbox{Pol}_N (P)\ \Psi_0 (P)\ . \] \subsubsection{Quasi-Exactly-solvable problem} Let us take the function \begin{equation} \label{psi_cal-d3-2} \Psi_0(P,\,S,\,T=0) \ = \ {S}^{\tilde \gamma}\,e^{-\om\,P - \frac{A}{2} P^2} \ \equiv \Psi_0(P,S)\ ,\ \tilde \gamma\ \geq \ 0\ ,\ A \geq 0\ , \end{equation} cf.~(\ref{psi_cal-d3-1}), and make the gauge rotation of $\De_R$ (\ref{addition3-3tauS}) with $\Psi_0$. 
As a result we obtain $\De_R$ plus additional first-order terms; overall, the result can be split into three parts, \[ (\Psi_0)^{-1} \De_R \Psi_0\ =\ \De_R - 12\om (P\pa_P + 2 S \pa_S + 3 T \pa_{T}) + 24 \tilde \gamma \pa_P + \tilde \gamma P \pa_S + 2 \tilde\gamma (16S + P^2) \pa_{T} \] \[ + 6 \om^2 P + \frac{\tilde\gamma (2\tilde\gamma -3 + d)P}{4S} - 6\om(d + 4\tilde\gamma) \equiv h^{(qes)} + V^{(qes)} - E_0\ , \] where \begin{equation} \label{qes-III} h^{(qes)}\ =\ \ 6\,P\,\pa^2_P + \frac{1}{2}\,P\,S\,\pa_{S}^2 + T\,(48\,S + P^2)\,\pa_{T}^2 + 36\,T\,\pa_{P,T} + 24\,S\,\pa_{P,S} + 2\,S (16\,S + P^2)\,\pa_{S,T} \end{equation} \[ -12\om (P\pa_P + 2 S \pa_S + 3 T \pa_{T}) \] \[ +\ 6\,(d+4 \tilde\gamma)\,\pa_P\ +\ \frac{1}{4}\,(d-1+4 \tilde\gamma)\,P\,\pa_{S}\ +\ \frac{1}{2}\,[16\,(d+4+4 \tilde\gamma)\,S + (d+4 \tilde\gamma)\,P^2]\,\pa_{T} \] \[ - 12 A P (P\pa_P + 2 S \pa_S + 3 T \pa_{T} - N) \ , \] is an algebraic operator, which is quasi-exactly-solvable if $N$ is a non-negative integer, see below, and the potential \begin{equation} \label{V0-qes-III} V^{(qes)}\ =\ 6 [\om^2 - A (4 \tilde\gamma + 2 N + d + 1)]\,P + 12 \om A P^2 + 6 A^2 P^3 + \frac{\tilde\gamma (2\tilde\gamma -3 + d)}{4} \frac{P}{S}\ , \end{equation} is the quasi-exactly-solvable many-body sextic potential, which at $A=0$ reduces to the exactly-solvable potential (\ref{V0-III}), identified at $A=\tilde \gamma=0$ with the (non-singular) harmonic oscillator potential, see e.g. \cite{Green}, with $E_0$ given by (\ref{E0-III}). The last term in $V^{(qes)}$ plays the role of a centrifugal potential due to rotation of the interaction plane (triangle) around the center-of-mass (barycenter). Note that the term $12\,A\,N\,P$ is added to (\ref{qes-III}) and subtracted in (\ref{V0-qes-III}). For a general value of the parameter $N$ the operator (\ref{qes-III}) is $h^{(3)}$ Lie-algebraic: it can be rewritten in terms of the generators of the algebra $h^{(3)}$ (\ref{ops_1})-(\ref{ops_2}), (\ref{R1}). 
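The splitting above, including the bookkeeping of the $12\,A\,N\,P$ term, can be verified symbolically: the zero-derivative part of $(\Psi_0)^{-1}\De_R\Psi_0-\De_R$ must equal $V^{(qes)}-E_0$ plus the zero-order piece $12\,A\,N\,P$ of $h^{(qes)}$. A minimal {\tt sympy} sketch:

```python
# Check (Psi_0)^{-1} Delta_R Psi_0 - Delta_R = (V^(qes) - E_0) + 12*A*N*P
# at zero derivative order, for Psi_0 = S^g exp(-w*P - A*P^2/2).
import sympy as sp

P, S, T = sp.symbols('P S T', positive=True)
d, w, g, A, N = sp.symbols('d omega gamma_t A N', positive=True)

def Delta_R(f):
    return (6*P*sp.diff(f, P, 2) + sp.Rational(1, 2)*P*S*sp.diff(f, S, 2)
            + T*(48*S + P**2)*sp.diff(f, T, 2) + 36*T*sp.diff(f, P, T)
            + 24*S*sp.diff(f, P, S) + 2*S*(16*S + P**2)*sp.diff(f, S, T)
            + 6*d*sp.diff(f, P) + sp.Rational(1, 4)*(d - 1)*P*sp.diff(f, S)
            + sp.Rational(1, 2)*(16*(d + 4)*S + d*P**2)*sp.diff(f, T))

Psi0 = S**g*sp.exp(-w*P - A*P**2/2)
ratio = sp.simplify(Delta_R(Psi0)/Psi0)  # zero-order part of the rotated operator

E0 = 6*w*(d + 4*g)
V_qes = (6*(w**2 - A*(4*g + 2*N + d + 1))*P + 12*w*A*P**2 + 6*A**2*P**3
         + g*(2*g - 3 + d)*P/(4*S))
# the zero-order piece of h^(qes) is +12*A*N*P
assert sp.simplify(ratio - (V_qes - E0 + 12*A*N*P)) == 0
```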
However, it can be immediately checked that for integer $N$ the operator (\ref{qes-III}) has a single finite-dimensional invariant subspace of polynomials in the variables $P, S, T$, \[ {\mathcal P}^{(1,2,3)}_{N}\ =\ \langle P^{p_1} S^{p_2} T^{p_3} \vert \ 0 \le p_1+2p_2+3p_3 \le N \rangle\ , \] cf. (\ref{P3-tauS}), with characteristic vector $(1,2,3)$. Thus, it is a quasi-exactly-solvable operator for which $\sim N^3/6$ eigenstates can be found algebraically. In particular, one can construct an algebraic secular equation of degree $\sim N^3/6$, with real roots alone, whose roots are the eigenvalues. A simple analysis of the operator (\ref{qes-III}) shows that among the $\sim N^3/6$ polynomial eigenfunctions in the three variables $P, S, T$ there exist $\sim N^2/2$ polynomial eigenfunctions in the two variables $P, S$ and $(N+1)$ polynomial eigenfunctions in the single variable $P$. These latter eigenfunctions are the eigenfunctions of the operators \begin{equation} \label{III-qes-1} h^{(qes)}_r \ =\ \ 6\,P\,\pa^2_P + \frac{1}{2}\,P\,S\,\pa_{S}^2 + 24\,S\,\pa_{P,S} -12\om (P\pa_P + 2 S \pa_S) \end{equation} \[ +\ 6\,(d+4 \tilde\gamma)\,\pa_P\ +\ \frac{1}{4}\,(d-1+4 \tilde\gamma)\,P\,\pa_{S} - 12 A P (P\pa_P + 2 S \pa_S - N) \ , \] cf. (\ref{exact-III-1}), and \begin{equation} \label{III-qes-2} h^{(qes)}_{rr} \ =\ 6\,P\,\pa^2_P\ -\ 12\om \,P\pa_P \ +\ 6\,(d+4 \tilde\gamma)\,\pa_P- 12 A P (P\pa_P - N) \ , \end{equation} cf. (\ref{exact-III-2}), respectively. It is easy to check that the quasi-exactly-solvable operator $h^{(qes)}_r$ (\ref{III-qes-1}) can be rewritten in terms of the generators of the algebra $g^{(2)} \supset gl(2, {\bf R}) \ltimes {\cal R}^{(2)}$, see e.g. \cite{Turbiner:1998},\cite{TTW:2009}, while (\ref{III-qes-2}) can be rewritten in terms of the generators of the algebra $sl(2)$, see \cite{Turbiner:1988}. 
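That $h^{(qes)}_{rr}$ indeed preserves ${\mathcal P}^{(1)}_{N}$ for integer $N$ can be checked directly: the only degree-raising term, $-12AP(P\pa_P-N)$, annihilates the top monomial $P^N$. A minimal {\tt sympy} sketch with illustrative parameter values:

```python
# Check that h^(qes)_{rr} maps the space of polynomials of degree <= N in P
# into itself for integer N (sample parameter values).
import sympy as sp

P = sp.symbols('P')
d, w, g, A, N = sp.Rational(3), sp.Rational(1), sp.Rational(1, 2), sp.Rational(2), 4

def h_qes_rr(f):
    return (6*P*sp.diff(f, P, 2) - 12*w*P*sp.diff(f, P) + 6*(d + 4*g)*sp.diff(f, P)
            - 12*A*P*(P*sp.diff(f, P) - N*f))

for k in range(N + 1):
    image = sp.expand(h_qes_rr(P**k))
    assert sp.degree(image, P) <= N
```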
The quasi-exactly-solvable operator $h^{(qes)}_{rr}$ corresponds to Case VII in the classification of one-dimensional QES operators \cite{Turbiner:2016} and describes the algebraic sector of the one-dimensional QES singular sextic polynomial potential. Note that the ground state function of the QES $n$-body Hamiltonian (\ref{Hrel-Mod}) with potential (\ref{V0-qes-III}), \[ {\tilde {\cal H}}_R\ = \ - {\De_R}\ +\ V^{(qes)}\ , \] is of the form \[ \Psi_{ground\, state}\ =\ P_N(P) \Psi_0(P,S)\ =\ P_N(P)\,{S}^{\tilde \gamma}\,e^{-\om\,P - \frac{A}{2} P^2} \ , \] see (\ref{psi_cal-d3-2}), where $P_N(P)$ is the eigenfunction of the operator $h^{(qes)}_{rr}$ which is positive at $P > 0$. In the limit $d=1$ the area of the interaction triangle vanishes, $S=0$, and $\tilde\gamma=0$ as well, to ensure that the ground state function (\ref{psi_cal-d3-2}) remains finite, \begin{equation} \label{psi_qes-d1} \Psi_0(P,\,S=0,\,T=0) \ =\ e^{-\om\,P\ -\ \frac{A}{2}\, P^2} \ \equiv\ \Psi_0(P)\ ,\ \om \geq 0\ ,\ A \geq 0\ . \end{equation} The operator $h^{(qes)}$ remains algebraic, \begin{equation} \label{qes-III-d1} h^{(qes)}_{d=1}\ =\ \ 6\,P\,\pa^2_P + T\,P^2\,\pa_{T}^2 + 36\,T\,\pa_{P,T} +\ 6\,\pa_P\ +\ \frac{P^2}{2}\,\pa_{T} \end{equation} \[ -12\om (P\pa_P + 3 T \pa_{T}) - 12 A P (P\pa_P + 3 T \pa_{T} - N) \ . \] It is easy to check that at $\om=0, A=0$ the operator $h^{(qes)}_{d=1}$ is the flat Laplace-Beltrami operator with metric (\ref{gmunu-d1}). The operator $h^{(qes)}_{d=1}$ (\ref{qes-III-d1}) can be rewritten in terms of the generators of the algebra $g^{(3)} \supset gl(2, {\bf R}) \ltimes {\cal R}^{(3)}$, see e.g. \cite{TTW:2009}. If $N$ is integer, the operator $h^{(qes)}_{d=1}$ has the finite-dimensional invariant subspace \[ {\mathcal P}^{(1,3)}_{N}\ =\ \langle P^{p_1} T^{p_3} \vert \ 0 \le p_1+3p_3 \le N \rangle\ , \] its $\sim N^2/2$ eigenfunctions are polynomials in the variables $P, T$ of degree not higher than $N$. 
Interestingly, among these eigenfunctions there are $(N+1)$ eigenfunctions in the form of polynomials of degree $N$ in the variable $P$. They are the eigenfunctions of the operator \begin{equation} \label{III-qes-2-d1} h^{(qes)}_{d=1,r} \ =\ 6\,P\,\pa^2_P\ -\ 12\om \,P\pa_P \ +\ 6\,\pa_P - 12 A P (P\pa_P - N) \ , \end{equation} cf.~(\ref{qes-III-d1}). The quasi-exactly-solvable operator $h^{(qes)}_{d=1,r}$ corresponds to Case VI in the classification of one-dimensional QES operators \cite{Turbiner:2016} and describes the algebraic sector of the one-dimensional QES (non-singular) sextic polynomial potential. It can be rewritten in terms of the generators of the algebra $sl(2, {\bf R} )$. The potential of the QES $n$-body Hamiltonian (\ref{Hrel-Mod}) at $d=1$ contains no singular part, \begin{equation} \label{V0-qes-III-d1} V^{(qes)}\ =\ 6 [\om^2 - 2 A (N + 1)]\,P + 12 \om A P^2 + 6 A^2 P^3 \ . \end{equation} Its ground state has the form \[ \Psi_{ground \,state,\, d=1}\ =\ P_N(P)\,e^{-\om\,P - \frac{A}{2} P^2} \ , \] where $P_N(P)$ is the lowest eigenfunction of the operator $(-h^{(qes)}_{d=1,r})$ with the property $P_{N}(P) > 0$ at $P>0$. \subsection{Primitive QES problems} {\bf (a)} Let us take the $S_3$-permutationally symmetric function \begin{equation} \label{psi_cal} \Psi_a (r_{12},\,r_{13},\,r_{23}) \ = \ {(r_{12}\,r_{13}\,r_{23})}^{\gamma}\,e^{-\frac{\om}{2}(r_{12}^2\, + \, r_{13}^2\, + \, r_{23}^2)} \ = \ {\ta_3}^{\frac{\gamma}{2}}\,e^{-\frac{\om}{2}\,\ta_1} \ , \end{equation} where $\gamma,\,\om > 0$ are constants and the $\ta$'s are given by (\ref{taus}). If $d=1$, then, for the ordering $r_1 \leq r_2 \leq r_3$, \begin{equation} \label{con} r_{23}=|r_{12}-r_{13}| \ , \end{equation} and (\ref{psi_cal}) becomes the 3-body Calogero ground state function (the Wigner-Dyson distribution). Thus, (\ref{psi_cal}) is a natural generalization of it to arbitrary $d$. 
Now we look for the potential for which the expression (\ref{psi_cal}) is the ground state function for the Hamiltonian ${\cal H}_{r}$, see (\ref{Hrel}), (\ref{Hrel-Mod}). This potential can be found immediately by calculating the ratio \[ \frac{\De_{R}(r) \Psi_a}{ \Psi_a}\ =\ V_a - E_a \ , \] where $\De_{R}(r)$ is given by (\ref{addition3-3r}). The result is \[ V_a^{(d)} \ = \ 2\,\gamma\,(d+2\,\gamma-2)\bigg[\frac{1}{r_{12}^2} + \frac{1}{r_{13}^2} + \frac{1}{r_{23}^2} \bigg] - \gamma^2\,\bigg[ \frac{r_{12}^2}{r_{13}^2\,r_{23}^2} + \frac{r_{13}^2}{r_{12}^2\,r_{23}^2} + \frac{r_{23}^2}{r_{12}^2\,r_{13}^2} \bigg] \] \begin{equation} + 3\,\om^2\,(r_{12}^2+r_{13}^2+r_{23}^2)\ , \label{VQES2-0} \end{equation} with the energy of the ground state \begin{equation} E_a\ =\ 6\,\om\,(d+3\,\gamma) \ . \label{EQES2-0} \end{equation} It can be checked that for $d=1$, imposing (\ref{con}), the potential (\ref{VQES2-0}) becomes the familiar 3-body Calogero potential \cite{Calogero:1969}, \[ V_a^{(d=1)} \ = \ 2\,\gamma\,(\gamma-1)\bigg[ \frac{1}{r_{12}^2} + \frac{1}{r_{13}^2} + \frac{1}{{(r_{13}-r_{12})}^2} \bigg] + 6\,\om^2\,(r_{12}^2+r_{13}^2-r_{12}\,r_{13})\ . 
\] Let us define the Hamiltonian \[ {\cal H}^{(a)}_{r} \ =\ -{ \De_R} + V_a^{(d)} \ , \] and make a gauge rotation $\Psi_a^{-1}\,{\cal H}^{(a)}_{r}\,\Psi_a\,=\,-\De^{\prime}_R $, \begin{equation} \label{DRr} \De^{\prime}_R(r)\ =\ \ 2\,(\pa^{2}_{r_{12}} +\pa^{2}_{r_{23}}+\pa^{2}_{r_{13}}) \end{equation} \[ \ +\ \frac{r_{12}^2-r_{13}^2+r_{23}^2}{r_{12} r_{23}}\,\pa_{r_{12}}\pa_{r_{23}}\ +\ \frac{r_{12}^2+r_{13}^2-r_{23}^2}{r_{12} r_{13}}\,\pa_{r_{12}}\pa_{r_{13}}\ + \ \frac{r_{13}^2+r_{23}^2-r_{12}^2}{r_{13} r_{23}}\,\pa_{r_{23}}\pa_{r_{13}} \ \] \[ + \ \frac{2(d-1)(r_{13}^2\,r_{23}^2)+\gamma\,(6r_{13}^2\,r_{23}^2+r_{12}^2(r_{13}^2+r_{23}^2)-r_{13}^4-r_{23}^4)\, -6\,\om\,r_{12}^2r_{13}^2r_{23}^2}{r_{12}\,r_{13}^2\,r_{23}^2}\,\pa_{r_{12}} \] \[ + \ \frac{2(d-1)(r_{13}^2\,r_{12}^2)+\gamma\,(6r_{13}^2\,r_{12}^2+r_{23}^2(r_{13}^2+r_{12}^2)-r_{13}^4-r_{12}^4)\, -6\,\om\,r_{12}^2r_{13}^2r_{23}^2}{r_{23}\,r_{13}^2\,r_{12}^2}\,\pa_{r_{23}} \] \[ + \ \frac{2(d-1)(r_{12}^2\,r_{23}^2)+\gamma\,(6r_{12}^2\,r_{23}^2+r_{13}^2(r_{12}^2+r_{23}^2)-r_{12}^4-r_{23}^4)\, -6\,\om\,r_{12}^2r_{13}^2r_{23}^2}{r_{13}\,r_{12}^2\,r_{23}^2}\,\pa_{r_{13}}\ . \] By construction the operator $\De^{\prime}_R(r)$ (\ref{DRr}) has a single one-dimensional invariant subspace $\langle 1 \rangle$ in the space of polynomials, which it maps onto itself. Hence, the Hamiltonian ${\cal H}^{(a)}_{r}$ is a primitive QES problem where only the ground state is known. {\bf (b)} Let us take another $S_3$-permutationally symmetric function \begin{equation} \label{psi_cal-b} \Psi_b (r_{12},\,r_{13},\,r_{23}) \ = \ {(|r_{12}-r_{13}|\,|r_{13}-r_{23}|\,|r_{12}-r_{23}|)}^{\gamma}\,e^{-\frac{\om}{2}(r_{12}^2\, + \, r_{13}^2\, + \, r_{23}^2)} \ , \end{equation} cf. (\ref{psi_cal}), where $\gamma,\,\om > 0$ are constants. If $d=1$, then, for the ordering $r_1 \leq r_2 \leq r_3$, \begin{equation} r_{23}=|r_{12}-r_{13}| \ , \end{equation} and (\ref{psi_cal-b}) becomes the 3-body Calogero ground state function, as (\ref{psi_cal}) does. 
Also (\ref{psi_cal-b}) is a natural generalization to arbitrary $d$. The potential for which the expression (\ref{psi_cal-b}) is the ground state function for the Hamiltonian ${\cal H}_{r}$, see (\ref{Hrel}) and (\ref{Hrel-Mod}), is given by \[ V_b^{(d)} \ = \ \frac{\gamma ^2 \left({\si}_1^7-9 {\si}_1^5 {\si}_2 + 33 {\si}_1^4 {\si}_3 +20 {\si}_1^3 {\si}_2^2 -153 {\si}_1^2 {\si}_2 {\si}_3 +162 {\si}_1 {\si}_3^2 +54 {\si}_2^2 {\si}_3\right)}{{\si}_3 \left( 18 {\si}_1 {\si}_2 {\si}_3 +{\si}_1^2 {\si}_2^2 -4 {\si}_1^3 {\si}_3 -4 {\si}_2^3-27 {\si}_3^2 \right)}\ +\ 3\om^2\, \left({\si}_1^2-2 {\si}_2\right) \ + \] \begin{equation} \frac{\gamma [54 (d-2) {\si}_1 {\si}_3^2+{\si}_3 \left((8d-25){\si}_1^4-9 (4 d-13) {\si}_2 {\si}_1^2-54 {\si}_2^2\right)-{\si}_1 ({\si}_1^2-4 {\si}_2) (2 (d+1) {\si}_2^2+{\si}_1^4-5 {\si}_2 {\si}_1^2)]}{{\si}_3 ( 18 {\si}_1 {\si}_2 {\si}_3 +{\si}_1^2 {\si}_2^2 -4 {\si}_1^3 {\si}_3 -4 {\si}_2^3-27 {\si}_3^2)} \ , \label{VQES3-0} \end{equation} where \begin{equation*} \begin{aligned} &{\si}_1 \ = \ r_{12} + r_{13} + r_{23} \ , \\ & {\si}_2 \ = \ r_{12}\,r_{13} + r_{12}\,r_{23} +r_{13}\,r_{23} \ , \\ & {\si}_3 \ = \ r_{12}\,r_{13}\,r_{23} \ , \end{aligned} \end{equation*} are $S_3$ permutationally-symmetric, relative $r$-coordinate polynomials (elementary symmetric polynomials in $r_{ij}$), see (\ref{d-si}), with the energy of the ground state \begin{equation} E_b\ =\ 6\,\om\,(d+3\,\gamma) \ . \label{EQES3-0} \end{equation} It can be checked that for $d=1$, imposing (\ref{con}), the potential (\ref{VQES3-0}) becomes the familiar 3-body Calogero potential \cite{Calogero:1969}, \[ V_b^{(d=1)} \ = \ 2\,\gamma\,(\gamma-1)\bigg[ \frac{1}{r_{12}^2} + \frac{1}{r_{13}^2} + \frac{1}{{(r_{13}-r_{12})}^2} \bigg] + 6\,\om^2\,(r_{12}^2+r_{13}^2-r_{12}\,r_{13})\ . 
\] Let us define the Hamiltonian \[ {\cal H}^{(b)}_{r} \ =\ -{ \De_R} + V_b^{(d)} \ , \] and make a gauge rotation $\Psi_b^{-1}\,{\cal H}^{(b)}_{r}\,\Psi_b\,=\,-\De^{\prime}_R$, \begin{equation} \label{DRb} \De^{\prime}_R(r)\ =\ \ 2\,(\pa^{2}_{r_{12}} +\pa^{2}_{r_{23}}+\pa^{2}_{r_{13}}) \end{equation} \[ \ +\ \frac{r_{12}^2-r_{13}^2+r_{23}^2}{r_{12} r_{23}}\,\pa_{r_{12}}\pa_{r_{23}}\ +\ \frac{r_{12}^2+r_{13}^2-r_{23}^2}{r_{12} r_{13}}\,\pa_{r_{12}}\pa_{r_{13}}\ + \ \frac{r_{13}^2+r_{23}^2-r_{12}^2}{r_{13} r_{23}}\,\pa_{r_{23}}\pa_{r_{13}} \ \] \[ -\ \bigg[ \gamma\,\frac{ 8r_{12}r_{13}r_{23}(r_{13}+r_{23}) + r_{12}^4 +{(r_{13}+r_{23})}^4 -5r_{13}r_{23}( r_{12}^2 +{(r_{13}+r_{23})}^2 ) -2r_{12}^2{(r_{13}+r_{23})}^2 }{r_{12}\,r_{13}\,r_{23}\,(r_{12}-r_{13})(r_{12}-r_{23})} \] \[ - \frac{2\,(d-1)}{r_{12}}+6\,\om\,r_{12} \bigg] \,\pa_{r_{12}} \] \[ -\ \bigg[ \gamma\,\frac{ 8r_{12}r_{13}r_{23}(r_{12}+r_{23}) + r_{13}^4 +{(r_{12}+r_{23})}^4 -5r_{12}r_{23}( r_{13}^2 +{(r_{12}+r_{23})}^2 ) -2r_{13}^2{(r_{12}+r_{23})}^2 }{r_{12}\,r_{13}\,r_{23}\,(r_{13}-r_{12})(r_{13}-r_{23})} \] \[ - \frac{2\,(d-1)}{r_{13}}+6\,\om\,r_{13} \bigg] \,\pa_{r_{13}} \] \[ -\ \bigg[ \gamma\,\frac{ 8r_{12}r_{13}r_{23}(r_{13}+r_{12}) + r_{23}^4 +{(r_{13}+r_{12})}^4 -5r_{13}r_{12}( r_{23}^2 +{(r_{13}+r_{12})}^2 ) -2r_{23}^2{(r_{13}+r_{12})}^2 }{r_{12}\,r_{13}\,r_{23}\,(r_{23}-r_{13})(r_{23}-r_{12})} \] \[ - \frac{2\,(d-1)}{r_{23}}+6\,\om\,r_{23} \bigg] \,\pa_{r_{23}} \ . \] The operator (\ref{DRb}) has no invariant subspaces except for $\langle 1 \rangle$. Hence, the Hamiltonian ${\cal H}^{(b)}_{r}$ is also a primitive QES problem where only the ground state is known. 
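The reduction of the potentials to the Calogero form at $d=1$ can be confirmed symbolically; a minimal {\tt sympy} sketch for $V_a^{(d)}$ of (\ref{VQES2-0}), taking $r_{13}>r_{12}$ so that (\ref{con}) reads $r_{23}=r_{13}-r_{12}$:

```python
# Check that V_a^{(d)} at d=1 with r23 = r13 - r12 reduces to the
# 3-body Calogero potential (all r23 occurrences are even powers,
# so the sign of r13 - r12 is immaterial).
import sympy as sp

r12, r13, g, w = sp.symbols('r12 r13 gamma omega', positive=True)
d = 1
r23 = r13 - r12   # ordering r1 <= r2 <= r3

Va = (2*g*(d + 2*g - 2)*(1/r12**2 + 1/r13**2 + 1/r23**2)
      - g**2*(r12**2/(r13**2*r23**2) + r13**2/(r12**2*r23**2)
              + r23**2/(r12**2*r13**2))
      + 3*w**2*(r12**2 + r13**2 + r23**2))

Calogero = (2*g*(g - 1)*(1/r12**2 + 1/r13**2 + 1/(r13 - r12)**2)
            + 6*w**2*(r12**2 + r13**2 - r12*r13))

assert sp.simplify(Va - Calogero) == 0
```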
\section*{Conclusions} In this paper, for the 3-body problem with equal masses in $d$-dimensional space, we have found a Schr\"odinger-type equation in the space ${\bf \tilde R}$ of relative distances $\{ r_{ij} \}$, \begin{equation} \label{H3} {H}_{LB}\, \Psi(r_{12},\,r_{13},\,r_{23}) \ =\ E\, \Psi(r_{12},\,r_{13},\,r_{23})\ ,\qquad {H}_{LB} \ =\ -\De_{LB}(r_{ij}) + \tilde V(r_{ij};d) + V(r_{ij}) \ , \end{equation} where the Laplace-Beltrami operator $\De_{LB}$, see e.g. (\ref{LB3}), is $d$-independent and makes sense as the kinetic energy of a three-dimensional particle in curved space with metric (\ref{gmn33-rho}) for $d>1$, while for $d=1$ it degenerates to the kinetic energy of a two-dimensional particle in flat space, and $\tilde V(r_{ij};d)$ is an effective potential. The equation (\ref{H3}) describes the angle-independent solutions of the original 3-body problem (\ref{Hgen}), including the ground state. Hence, finding the ground state (and some other states) involves the solution of a differential equation in three variables only, in contrast to the original $(2d)$-dimensional Schr\"odinger equation of the relative motion. Since the Hamiltonian ${H}_{LB}$ is Hermitian, the variational method can be employed with only three-dimensional integrals involved. The generalization to the case of three bodies of arbitrary masses is straightforward and is done in the Appendix. The classical analogue of the Hamiltonian in (\ref{H3}) was presented as well, in (\ref{Hrel-Cl-final}). Note that in this case the operator $\De_R$ is algebraic in the $\rho$-representation but not in the $\tau$-representation or in the geometric variables representation. The gauge-rotated Laplace-Beltrami operator, with the $d$-independent determinant $D$ of the metric, raised to a certain power, as the gauge factor, appears as an algebraic operator both in the variables which are the squares of the relative distances and in the variables which are the elementary symmetric polynomials of those squares. 
The former algebraic operator has the hidden algebra $sl(4, \bf R)$, while the latter has the hidden algebra $h^{(3)}$; thus, both become Lie-algebraic operators. Both operators can be extended to (quasi)-exactly-solvable operators with potentials in the form of rational functions in either set of variables. Interestingly, both (quasi)-exactly-solvable operators (with hidden algebra $sl(4, \bf R)$ and $h^{(3)}$, respectively) lead to the {\it same} (quasi)-exactly-solvable potentials in the space of relative distances. Naturally, these (quasi)-exactly-solvable potentials in the space of relative distances are quasi-exactly-solvable potentials in the space of relative motion. We have shown that there exists a special (quasi)-exactly-solvable problem in the geometric variables $P,S,T$ which admits the limit $d=1$ while preserving the polynomiality of the eigenfunctions. The ground state function always depends on the single variable $P$. The exactly-solvable problem appears as a singular harmonic oscillator in the space of relative distances, while the quasi-exactly-solvable problem appears as a singular sextic anharmonic oscillator. Both problems will be discussed in detail elsewhere. \section*{Acknowledgments} A.V.T. is thankful to the University of Minnesota, USA, where this work was initiated, and to IHES, France, where it was mostly completed, for the kind hospitality extended to him. He is deeply grateful to I.E.~Dzyaloshinsky, T.~Damour and M.~Kontsevich for useful discussions and important remarks at the early stage of the work. A.V.T. is supported in part by the PAPIIT grant {\bf IN108815}. W.M. was partially supported by a grant from the Simons Foundation (\# 412351 to Willard Miller, Jr.). M.A.E.R. is grateful to ICN UNAM, Mexico, for the kind hospitality during his visit, where a part of the research was done, as well as to CRM, Montreal, where it was completed; he was supported in part by DGAPA grant {\bf IN108815} (Mexico) and, in general, by CONACyT grant {\bf 250881}~(Mexico) for postdoctoral research. 
\section*{Appendix: non-equal masses} Consider the general case of the particles located at points ${\bf r}_1,{\bf r}_2,{\bf r}_3$ of masses $m_1,m_2,m_3$, respectively. Then the operator (\ref{addition3-3rho}) becomes (in terms of the relative coordinates $\rho_{ij}=r_{ij}^2$): \[ {\De_R}'(\rho_{ij})\ =\ \frac{2}{\mu_{13}} \rho_{13}\, \pa_{\rho_{13}}^2 + \frac{2}{\mu_{23}} \rho_{23}\, \pa_{\rho_{23}}^2 + \frac{2}{\mu_{12}} \rho_{12}\,\pa_{\rho_{12}}^2 + \] \[ \frac{2(\rho_{13} + \rho_{12} - \rho_{23})}{m_1}\pa_{\rho_{13}\rho_{12}} + \frac{2(\rho_{13} + \rho_{23} - \rho_{12})}{m_3}\pa_{\rho_{13}\rho_{23}} + \frac{2(\rho_{23} + \rho_{12} - \rho_{13})}{m_2}\pa_{\rho_{23}\rho_{12}} + \] \begin{equation} \label{addition3-3r-M} \frac{d}{\mu_{13}} \pa_{\rho_{13}} + \frac{d}{\mu_{23}} \pa_{\rho_{23}} + \frac{d}{\mu_{12}} \pa_{\rho_{12}}\ , \end{equation} where \[ \frac{1}{\mu_{ij}}\ =\ \frac{m_i+m_j}{m_i m_j}\ , \] is the reduced mass of particles $i$ and $j$; it is in agreement with (\ref{addition3-3rho}) for $m_1=m_2=m_3=1$. At $d=3$ it coincides with Eq.~(68) of \cite{TMA:2016}. This operator has the same algebraic structure as ${\De_R}(\rho_{ij})$ but lives on a different manifold in general. It can be rewritten in terms of the generators of the maximal affine subalgebra $b_4$ of the algebra $sl(4,{\bf R})$, see (\ref{sl4R}), cf. (\ref{HRex}). The contravariant metric tensor does not depend on $d$ and its determinant is \[ D_m\ =\ \det g^{\mu \nu}\ =\ 2\,\frac{m_1+m_2+m_3}{m_1^2m_2^2m_3^2} \times \] \begin{equation} \label{gmn33-rho-det-M} \left(m_1m_2\rho_{12}+m_1m_3\rho_{13}+m_2m_3\rho_{23}\right) \left(2\rho_{12}\rho_{13} + 2 \rho_{12}\rho_{23} + 2 \rho_{13}\rho_{23}-\rho_{12}^2- \rho_{13}^2 - \rho_{23}^2\right) \ , \end{equation} and is positive definite. 
It is worth noting a remarkable factorization property of the determinant \[ D_m\ =\ 2\frac{m_1+m_2+m_3}{m_1^2m_2^2m_3^2} \,(m_1 m_2 r_{12}^2+m_1 m_3 r_{13}^2+m_2 m_3 r_{23}^2)\ \times \] \[ (r_{12}+r_{13}-r_{23})(r_{12}+r_{23}-r_{13})(r_{13}+r_{23}-r_{12})(r_{12}+r_{13}+r_{23})\ = \] \[ =\ 32\, \frac{m_1+m_2+m_3}{m_1^2m_2^2m_3^2}\, P_m \ S^2_{\triangle}\ , \] where $P_m=m_1 m_2 r_{12}^2+m_1 m_3 r_{13}^2+m_2 m_3 r_{23}^2$ is the weighted sum of the squared sides of the interaction triangle and ${S}_{\triangle}$ is its area. Hence, $D_m$ is still proportional to ${S}_{\triangle}^2$, cf. (\ref{gmn33-rho-det-gamma}). Making the gauge transformation of (\ref{addition3-3r-M}) with determinant (\ref{gmn33-rho-det-M}) as the factor, \[ \Gamma \ = \ D_m^{-1/4} (4 \ta_2-\ta_1^2)^{\frac{3-d}{4}}\ =\ D_m^{-1/4}\,(16 S^2_{\triangle})^{\frac{3-d}{4}} \sim (P_m)^{-1/4} (S^2_{\triangle})^{\frac{2-d}{4}} \ , \] we find that \begin{equation} \Gamma^{-1}\, {\De_R}'(\rho_{ij})\,\Gamma \ = \ \De'_{LB}(\rho_{ij}) - \tilde V_m(\rho_{ij}) \ , \label{HLB3M} \end{equation} is the Laplace-Beltrami operator with the effective potential \[ {\tilde V_m} \ =\ \frac{3}{8}\ \frac{(m_1+m_2+m_3)}{\left(m_1 m_2 \rho_{12}+m_1 m_3 \rho_{13}+m_2 m_3 \rho_{23}\right)}\ - \] \[ \frac{(d-2)(d-4)}{2}\ \frac{\left(m_1 m_2 \rho_{12}+m_1 m_3 \rho_{13}+m_2 m_3 \rho_{23}\right)} { m_1 m_2 m_3\left(\rho_{12}^2+\rho_{13}^2+\rho_{23}^2 -2 \rho_{12} \rho_{13}- 2 \rho_{12} \rho_{23}-2 \rho_{13} \rho_{23}\right)}\ , \] or in geometrical terms, using $16\,S^2_{\triangle}=2 \rho_{12} \rho_{13}+ 2 \rho_{12} \rho_{23}+2 \rho_{13} \rho_{23}-\rho_{12}^2-\rho_{13}^2-\rho_{23}^2$, \[ {\tilde V_m} \ =\ \frac{3}{8}\ \frac{(m_1+m_2+m_3)}{P_m}\ +\frac{(d-2)(d-4)}{32}\ \frac{P_m}{m_1 m_2 m_3\,S^2_{\triangle}}\ , \] where the second term is absent for $d=2,4$. The Laplace-Beltrami operator plays the role of the kinetic energy of a three-dimensional quantum particle moving in curved space. The existence of (quasi)-exactly-solvable problems with such a kinetic energy seems evident, see e.g. 
\cite{Crandall:1985} for an example of an exactly-solvable problem at $d=3$. \subsection*{The symmetry operator} For unequal masses, the symmetry operator for both (\ref{addition3-3r-M}) and (\ref{HLB3M}) is \[ L_{1,m} = \frac{\rho_{12}(m_1-m_2)+(\rho_{13}-\rho_{23})(m_1+m_2)}{m_1m_2}\pa_{\rho_{12}} + \frac{\rho_{13}(m_3-m_1)+(\rho_{23}-\rho_{12})(m_1+m_3)}{m_1m_3}\pa_{\rho_{13}} \] \begin{equation} \label{Lop} +\ \frac{\rho_{23}(m_2-m_3)+(\rho_{12}-\rho_{13})(m_2+m_3)}{m_2m_3}\pa_{\rho_{23}}\ , \end{equation} which commutes with ${\De_R}'(\rho_{ij})$, \[ [{\De_R}'(\rho_{ij})\ ,\ L_{1,m}]\ =\ 0\ . \] Invariants under this action are the functions \[ W_1=\frac{\rho_{12}}{m_3}+\frac{\rho_{13}}{m_2}+\frac{\rho_{23}}{m_1}\ , \] \[ W_2=2\sqrt{2\rho_{12}\rho_{13}+2\rho_{12}\rho_{23}+2\rho_{13}\rho_{23} -\rho_{12}^2-\rho_{13}^2-\rho_{23}^2}\ , \] \[ W_4= \rho_{23}^2-\frac{2m_1}{m_1+m_3}\rho_{13}\rho_{23}-\frac{2m_1}{m_1+m_2}\rho_{12}\rho_{23}+\frac{m_1}{m_1+m_3}\rho_{13}^2+\frac{m_1m_3}{m_2(m_1+m_3)}\rho_{13}^2\] \[-\frac{2m_1m_2}{(m_1+m_2)(m_1+m_3)}\rho_{12}\rho_{13} -\frac{2m_1m_3}{(m_1+m_2)(m_1+m_3)}\rho_{12}\rho_{13}+\frac{m_1m_2}{(m_1+m_2)m_3}\rho_{12}^2 + \frac{m_1}{m_1+m_2}\rho_{12}^2\ . \] These invariants are related by \[ W_4+\frac{m_1(m_1+m_2+m_3)}{4(m_1+m_2)(m_1+m_3)}W_2^2-\frac{m_1^2m_2m_3} {(m_1+m_2)(m_1+m_3)}W_1^2=0\ . \] Furthermore, all are nonnegative. In particular, \[ W_4=(a_1\rho_{12}-a_2\rho_{13})^2+(a_3\rho_{12}-a_4\rho_{23})^2+(a_5\rho_{13}-a_6\rho_{23})^2\ , \] where \[ a_1=\frac{(m_2+m_3)m_1}{\sqrt{(m_2+m_3)\frac{m_1m_3}{m_2}}(m_1+m_2)},\ a_2=\frac{\sqrt{(m_2+m_3)\frac{m_1m_3}{m_2}}}{m_1+m_3},\ a_3=\frac{m_1\sqrt{(m_2+m_3)\frac{1}{m_3}}}{m_1+m_2}\ , \] \[ a_4=\frac{1}{\sqrt{(m_2+m_3)\frac{1}{m_3}}},\ a_5=\frac{m_1}{\sqrt{\frac{m_2}{m_2+m_3}}(m_1+m_3)},\ a_6=\sqrt{\frac{m_2}{m_2+m_3}}\ . \] Now we make a change of variables from $\rho_{12},\rho_{13},\rho_{23}$ to $W_1,W_3,W_4$ so that $L_{1,m}=\pa_{W_3}$ in the new coordinates. We have already defined $W_1$ and $W_4$. 
We define $W_3$ by the equations \begin{equation} \label{W_3} \sqrt{W_4}\cos{\left(\frac{2\Omega\, W_3}{m_1 m_2m_3}\right)}=A(\rho_{12},\rho_{13},\rho_{23}),\ \sqrt{W_4}\sin{\left(\frac{2\Omega\, W_3}{m_1 m_2m_3}\right)}=B(\rho_{12},\rho_{13},\rho_{23})\ , \end{equation} where \[ A= -\frac{\left(\rho_{12}m_2(m_1+m_3)-\rho_{13}m_3(m_1+m_2)\right)\Omega}{m_2m_3(m_1+m_2)(m_1+m_3)}\ ,\ \Omega=\sqrt{m_1m_2m_3(m_1+m_2+m_3)}\ , \] \[ B=-\frac{\rho_{23}(m_1+m_2)(m_1+m_3)-\rho_{12}m_1(m_1+m_3)-\rho_{13}m_1(m_1+m_2)}{(m_1+m_2)(m_1+m_3)}\ . \] Due to the easily verified identity \[ W_4=A^2+B^2 \] we see that equations (\ref{W_3}) have a locally unique solution for an angle $W_3$. In terms of these new variables we find \begin{align*} {\De_R}'(\rho_{ij})\ =& \ \frac{2(m_1+m_2+m_3)W_1}{m_1m_2m_3}\pa_{W_1}^2\ +\ \frac{8m_1(m_1+m_2+m_3)W_1W_4}{(m_1+m_2)(m_1+m_3)} \pa_{W_4}^2\ +\ \\ & \frac{m_1^2m_2m_3W_1}{2(m_1+m_2)(m_1+m_3)W_4}\pa_{W_3}^2\ +\ \frac{8(m_1+m_2+m_3)W_4}{m_1m_2m_3}\pa_{W_1W_4}^2\\ &\ +\ \frac{8m_1(m_1+m_2+m_3)W_1}{(m_1+m_2)(m_1+m_3)}\pa_{W_4} +\ \frac{2d(m_1+m_2+m_3)}{m_1m_2m_3}\pa_{W_1}\ , \\ L_{1,m}=&\,\partial_{W_3}\ . \end{align*} \newpage
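The identity $W_4=A^2+B^2$, which underlies the definition (\ref{W_3}), can be confirmed symbolically for arbitrary positive masses; a minimal {\tt sympy} sketch:

```python
# Symbolic check of the identity W_4 = A^2 + B^2 for arbitrary positive masses.
import sympy as sp

r12, r13, r23 = sp.symbols('rho12 rho13 rho23', positive=True)
m1, m2, m3 = sp.symbols('m1 m2 m3', positive=True)

W4 = (r23**2 - 2*m1/(m1 + m3)*r13*r23 - 2*m1/(m1 + m2)*r12*r23
      + m1/(m1 + m3)*r13**2 + m1*m3/(m2*(m1 + m3))*r13**2
      - 2*m1*m2/((m1 + m2)*(m1 + m3))*r12*r13
      - 2*m1*m3/((m1 + m2)*(m1 + m3))*r12*r13
      + m1*m2/((m1 + m2)*m3)*r12**2 + m1/(m1 + m2)*r12**2)

Omega = sp.sqrt(m1*m2*m3*(m1 + m2 + m3))
A = -(r12*m2*(m1 + m3) - r13*m3*(m1 + m2))*Omega/(m2*m3*(m1 + m2)*(m1 + m3))
B = -(r23*(m1 + m2)*(m1 + m3) - r12*m1*(m1 + m3)
      - r13*m1*(m1 + m2))/((m1 + m2)*(m1 + m3))

assert sp.simplify(W4 - A**2 - B**2) == 0
```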
The paper is organized as follows: in Sec.~\ref{coupledDE} we describe the main features of the interacting DE models under investigation; in Sec.~\ref{lenscoupled} we discuss gravitational lensing in the context of interacting DE models, and in Sec.~\ref{Simulations} we describe the methods used to compute the necessary nonlinear power spectra. In Sec.~\ref{Results} we present the results of our analysis, giving forecasts for forthcoming lensing surveys; we draw our conclusions in Sec.~\ref{Conclusions}. \section{Coupled dark energy models}\label{coupledDE} Coupled DE (cDE) models have been widely investigated in the literature concerning their cosmological background evolution as well as the behaviour of linear and nonlinear density perturbations in these models \citep[see \eg][and references therein]{Amendola_2000,Amendola_2004,Pettorino_Baccigalupi_2008, DiPorto_Amendola_2008,Baldi_etal_2010,Li_Barrow_2011,Baldi_2011a}. Here we only briefly introduce the definitions and the notation that will be assumed throughout the paper for the different cDE models; we refer the interested reader to the literature above for a thorough discussion of cDE scenarios. 
In the present work, we will consider cDE models defined by the following set of background dynamic equations: \begin{eqnarray} \label{klein_gordon} \ddot{\phi } + 3H\dot{\phi } +\frac{dV}{d\phi } &=& \sqrt{\frac{2}{3}}\beta _{c}(\phi ) \frac{\rho _{c}}{M_{{\rm Pl}}} \,, \\ \label{continuity_cdm} \dot{\rho }_{c} + 3H\rho _{c} &=& -\sqrt{\frac{2}{3}}\beta _{c}(\phi )\frac{\rho _{c}\dot{\phi }}{M_{{\rm Pl}}} \,, \\ \label{continuity_baryons} \dot{\rho }_{b} + 3H\rho _{b} &=& 0 \,, \\ \label{continuity_radiation} \dot{\rho }_{r} + 4H\rho _{r} &=& 0\,, \\ \label{friedmann} 3H^{2} &=& \frac{1}{M_{{\rm Pl}}^{2}}\left( \rho _{r} + \rho _{c} + \rho _{b} + \rho _{\phi} \right)\,, \end{eqnarray} where an overdot represents a derivative with respect to the cosmic time $t$, $H\equiv \dot{a}/a$ is the Hubble function, $V(\phi )$ is the scalar field self-interaction potential, $M_{\rm Pl}\equiv 1/\sqrt{8\pi G}$ is the reduced Planck Mass, and the subscripts $b\,,c\,,r$, indicate baryons, CDM, and radiation, respectively. The function $\beta _{c}(\phi )$ sets the direction and the strength of the energy-momentum flow between the DE scalar field $\phi $ and the CDM fluid, while the function $V(\phi )$ determines the dynamical evolution of the DE density. 
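As a quick numerical cross-check (not part of the paper's pipeline), the CDM continuity equation above can be recast in e-folds $N=\ln a$: for a constant coupling $\beta _{c}=\beta _{0}$ it yields the analytic scaling $\rho _{c}\propto a^{-3}\,e^{-\sqrt{2/3}\,\beta _{0}\,\Delta \phi /M_{\rm Pl}}$, so coupled CDM dilutes slightly faster than $a^{-3}$ when the field rolls down its potential. The sketch below verifies this against a direct integration; the constant scalar-field speed $d\phi /dN$ and the units $M_{\rm Pl}=1$ are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

beta0 = 0.1        # constant coupling, as in the EXP002 model
dphi_dN = 0.05     # assumed constant scalar-field speed dphi/dN (illustrative, M_Pl = 1)

def rhs(N, y):
    # d(rho_c)/dN = -3 rho_c - sqrt(2/3) beta_0 rho_c dphi/dN,
    # i.e. the CDM continuity equation rewritten in e-folds N = ln a
    return [-3.0 * y[0] - np.sqrt(2.0 / 3.0) * beta0 * y[0] * dphi_dN]

N_end = 5.0
sol = solve_ivp(rhs, (0.0, N_end), [1.0], rtol=1e-10, atol=1e-12)

rho_num = sol.y[0, -1]
# analytic solution: rho_c = exp(-3 N) * exp(-sqrt(2/3) beta_0 dphi/dN * N)
rho_ana = np.exp(-3.0 * N_end - np.sqrt(2.0 / 3.0) * beta0 * dphi_dN * N_end)
print(rho_num, rho_ana)
```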
In the present work we will consider two possible choices for each of these two functions, namely an exponential \citep[][]{Lucchin_Matarrese_1984,Wetterich_1988} and a SUGRA \citep[][]{Brax_Martin_1999} potential, \begin{eqnarray} {\rm EXP:}&\quad & V(\phi ) = Ae^{-\alpha \phi } \,,\\ {\rm SUGRA:}&\quad & V(\phi ) = A\phi ^{-\alpha }e^{\phi ^{2}/2} \,, \end{eqnarray} where $\alpha $ is a positive constant and where for simplicity the field $\phi $ has been expressed in units of the reduced Planck mass $M_{\rm Pl}$, as well as both a constant and an exponentially growing coupling function $\beta _{c}(\phi )$: \begin{equation} \beta _{c}(\phi ) = \beta _{0}e^{\beta _{1}\phi }\,, \end{equation} characterized by $\beta _{1}=0$ and $\beta _{1}>0$, respectively. The most relevant difference between the exponential potential and the SUGRA potential relies on the fact that the latter features a global minimum at finite scalar field values; this allows for a change of direction of the scalar field motion, which is the main feature of the recently proposed ``Bouncing cDE" scenario \citep[][]{Baldi_2011c}. One should also notice that the notation introduced in Eqs.~(\ref{klein_gordon}-\ref{friedmann}) corresponds to the original convention proposed by \citet{Amendola_2000} and has been adopted by several other studies, including the {\small CoDECS} project considered in the present work, but it differs by a constant factor $\sqrt{2/3}$ from what is used in another part of the related literature \citep[as \eg][]{Pettorino_Baccigalupi_2008,Baldi_etal_2010}. The specific models considered in the present work have been described in full detail by \citet{Baldi_2011c} and \citet{CoDECS}; we summarize them in Table~\ref{tab:models}, where the features and the specific parameters of each model are outlined. 
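The global minimum of the SUGRA potential can be located analytically: $dV/d\phi = A\,e^{\phi ^{2}/2}\,\phi ^{-\alpha -1}\left(\phi ^{2}-\alpha \right)$ vanishes at $\phi _{\rm min}=\sqrt{\alpha }$. A minimal numerical check (the normalisation $A=1$ is arbitrary and irrelevant for the minimum's position):

```python
import numpy as np
from scipy.optimize import minimize_scalar

alpha = 2.15   # SUGRA003 slope from Table 1
A = 1.0        # overall normalisation, irrelevant for the minimum's position

def V_sugra(phi):
    # SUGRA potential V(phi) = A phi^{-alpha} exp(phi^2 / 2), phi in units of M_Pl
    return A * phi ** (-alpha) * np.exp(phi ** 2 / 2.0)

# dV/dphi = A exp(phi^2/2) phi^{-alpha-1} (phi^2 - alpha) vanishes at phi = sqrt(alpha)
res = minimize_scalar(V_sugra, bounds=(0.1, 5.0), method="bounded")
phi_min = res.x
print(phi_min, np.sqrt(alpha))
```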
\begin{table*} \begin{tabular}{llccccc} \hline Model & Potential & $\alpha $ & $\beta _{0}$ & $\beta _{1}$ & $w_{\phi }(z=0)$ & $\sigma _{8}(z=0)$\\ \\ \hline $\Lambda $CDM & $V(\phi ) = A$ & -- & -- & -- & $-1.0$ & $0.809$ \\ EXP001 & $V(\phi ) = Ae^{-\alpha \phi }$ & 0.08 & 0.05 & 0 & $-0.997$ & $0.825$ \\ EXP002 & $V(\phi ) = Ae^{-\alpha \phi }$ & 0.08 & 0.1 & 0 & $-0.995$ & $0.875$ \\ EXP003 & $V(\phi ) = Ae^{-\alpha \phi }$ & 0.08 & 0.15 & 0 & $-0.992$ & $0.967$\\ EXP008e3 & $V(\phi ) = Ae^{-\alpha \phi }$ & 0.08 & 0.4 & 3 & $-0.982$ & $0.895$ \\ SUGRA003 & $V(\phi ) = A\phi ^{-\alpha }e^{\phi ^{2}/2}$ & 2.15 & -0.15 & 0 & $-0.901$ & $0.806$ \\ \hline \end{tabular} \caption{Interacting dark energy models considered in this work. In addition to the concordance $\Lambda$CDM model, we consider the exponential potential with three interaction strengths; the exponential potential with a time-varying strength; and the SUGRA potential.} \label{tab:models} \end{table*} \ \\ The evolution equations for linear density perturbations in the context of a cDE cosmology have been derived in the literature \citep[see \eg][]{Amendola_2004,Pettorino_Baccigalupi_2008}, and in the Newtonian limit of sub-horizon scales can be expressed as follows: \begin{eqnarray} \label{gf_c} \ddot{\delta }_{c} &=& -2H\left[ 1 - \beta _{c}\frac{\dot{\phi }}{H\sqrt{6}}\right] \dot{\delta }_{c} + 4\pi G \left[ \rho _{b}\delta _{b} + \rho _{c}\delta _{c}\Gamma _{c}\right] \,, \\ \label{gf_b} \ddot{\delta }_{b} &=& - 2H \dot{\delta }_{b} + 4\pi G \left[ \rho _{b}\delta _{b} + \rho _{c}\delta _{c}\right]\,, \end{eqnarray} where $\delta _{c,b}\equiv \delta \rho _{c,b}/\rho _{c,b}$ are the relative density perturbations of the coupled CDM and uncoupled baryonic fluids, respectively, and where the scalar field dependence of the coupling function $\beta _{c}(\phi )$ has been omitted for simplicity.
In Eq.~(\ref{gf_c}), the factor $\Gamma _{c}\equiv 1 + 4\beta _{c}^{2}(\phi )/3$ represents an additional fifth force mediated by the DE scalar field $\phi $ for CDM perturbations, while the second term in the first square bracket on the right-hand side of Eq.~(\ref{gf_c}) is an extra friction term on CDM fluctuations arising as a consequence of momentum conservation \citep[see e.g.][for a derivation of Eqs.~(\ref{klein_gordon}-\ref{friedmann},\ref{gf_c},\ref{gf_b}) and for a detailed discussion of the extra friction and fifth force corrections to the evolution of linear perturbations]{Amendola_2004, Pettorino_Baccigalupi_2008,Baldi_etal_2010,Baldi_2011b}. As a consequence of these two additional terms in the perturbed dynamic equations, CDM fluctuations will grow faster in cDE models with respect to a standard $\Lambda $CDM cosmology, thereby reaching a higher $\sigma _{8}$ normalization at $z=0$ if starting from the same amplitude at the last scattering surface $z_{\rm CMB}\approx 1100$, as shown in the last column of Table~\ref{tab:models}. However, in the nonlinear regime the interplay between the friction term and the fifth force is not as straightforward as for linear perturbations, because, as a consequence of virialization processes, the local velocity field will not necessarily be aligned with the local gradient of the gravitational potential, as one can see from the three-dimensional generalization of Eq.~(\ref{gf_c}) to a system of point-like massive particles: \begin{equation} \dot{\vec{v}}_{c} = \beta _{c}(\phi )\frac{\dot{\phi }}{\sqrt{6}}\vec{v}_{c} - \vec{\nabla }\left[ \sum_{c}\frac{GM_{c}(\phi )\Gamma _{c}}{r_{c}} + \sum_{b}\frac{GM_{b}}{r_{b}}\right] \,, \end{equation} where $r_{c,b}$ is the physical distance of the target coupled particle from the other CDM and baryonic particles, respectively.
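The faster linear growth induced by $\Gamma _{c}$ can be illustrated by integrating Eqs.~(\ref{gf_c}-\ref{gf_b}) in e-folds on a matter-dominated (Einstein-de Sitter) background. The sketch below neglects the extra friction term and the coupling's effect on the background expansion (illustrative simplifications, not the full cDE solution) and compares the CDM growing mode with and without a constant coupling:

```python
import numpy as np
from scipy.integrate import solve_ivp

f_b, f_c = 0.17, 0.83    # baryon and CDM fractions of the total matter (illustrative)

def growth_rhs(N, y, beta):
    # y = [delta_c, delta_c', delta_b, delta_b'] with ' = d/dN, N = ln a;
    # Einstein-de Sitter background; extra friction term neglected
    dc, dcp, db, dbp = y
    Gamma_c = 1.0 + 4.0 * beta ** 2 / 3.0    # fifth-force factor on CDM-CDM gravity
    dcpp = -0.5 * dcp + 1.5 * (f_b * db + f_c * Gamma_c * dc)
    dbpp = -0.5 * dbp + 1.5 * (f_b * db + f_c * dc)
    return [dcp, dcpp, dbp, dbpp]

y0 = [1.0, 1.0, 1.0, 1.0]    # start on the standard growing mode, delta ~ a
N_end = 5.0
sol0 = solve_ivp(growth_rhs, (0.0, N_end), y0, args=(0.0,), rtol=1e-10)
sol1 = solve_ivp(growth_rhs, (0.0, N_end), y0, args=(0.15,), rtol=1e-10)
print(sol0.y[0, -1], sol1.y[0, -1])   # CDM perturbation without / with coupling
```

For $\beta =0$ the uncoupled growing mode $\delta \propto a=e^{N}$ is recovered, while the coupled run grows faster, in line with the higher $\sigma _{8}$ values of Table~\ref{tab:models}.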
The effect of the friction term in the nonlinear regime has been shown to induce a suppression of small-scale power in the cDE models with respect to the nonlinear power that would be inferred based on the large-scale $\sigma _{8}$ normalization in the context of a $\Lambda $CDM universe \citep[][]{Baldi_2011b,CoDECS}. This suppression has important consequences for the weak lensing constraints on cDE models that we address in the present work. Therefore, although it is possible to estimate the full matter power in cDE scenarios by applying nonlinear corrections (calibrated on $\Lambda $CDM simulations) to the re-normalized linear power spectrum \citep[as recently done e.g. by][]{Amendola_etal_2011}, in order to reach high accuracy at scales relevant for present and future large lensing surveys it is necessary to rely on a fully nonlinear treatment of cDE scenarios via specific N-body simulations. A discussion of the comparison between these two approaches is presented in Section \ref{comparison}. \begin{figure} \hspace{7mm} \psfig{figure=powerspectraConstant.ps,width=76mm} \caption{Power spectrum for $\Lambda$CDM and cDE models with constant coupling at $z=0$.} \label{fig:PowerspectrumConstant} \end{figure} Figure \ref{fig:PowerspectrumConstant} shows the power spectra for each of the constant coupling ($\beta_1=0$) models normalised by WMAP7. These coupling values were chosen because cDE models with $\beta_0 \leq 0.15$ can fit the angular diameter distance to decoupling measured by WMAP7; they are therefore of particular interest, being consistent with current observations of the background while potentially still affecting the growth of structure. It can be seen that there is a 2--7\% difference in the $z=0$ power spectrum between $\Lambda$CDM and EXP001, the lowest of the couplings investigated here, and a 25--65\% difference between $\Lambda $CDM and the highest of the couplings, EXP003.
In this analysis we do not use the simulated power spectrum directly but instead use the ratio between the $\Lambda$CDM and cDE power spectra to find the difference in the growth of modes for different couplings with the same initial conditions. Using this method reduces the error associated with the limited number of independent $k$-modes that enter the computation of the power in each $k$ bin to only the error on the $\Lambda$CDM power spectrum. \section{Lensing in coupled dark energy cosmologies}\label{lenscoupled} Now we present the framework for calculating the gravitational lensing signal in the cDE scenario. The way that light is deflected along the path from its source to an observer is determined by the mass distribution and the geometry of the Universe. The deflections of light lead to distortions of the observed image of the source. The mapping between the original source shape and the observed image is given by \begin{equation} {\cal A}= \left(\begin{array}{cc} 1-\kappa-\gamma_1 & -\gamma_2 \\ -\gamma_2 & 1-\kappa+\gamma_1 \end{array}\right), \end{equation} \citep[][]{Bartelmann:1999yn} where the convergence, $\kappa$, causes an isotropic dilation and the shear, $\gamma=\gamma_1+i\gamma_2$, changes the ellipticity. $\kappa$ is challenging to measure, as the original size of the source is unknown; equally $\gamma$ cannot be measured for a single source as the intrinsic ellipticity of the source is unknown. However if the shear of a large number of sources is correlated, then the lensing signal can be measured as a correlation function (insofar as the intrinsic ellipticities are not themselves correlated; see discussion in section \ref{Results} below). 
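A small worked example of this mapping (with illustrative values of $\kappa $ and $\gamma $, not survey data): the eigenvalues of ${\cal A}$ are $1-\kappa \mp |\gamma |$, so a circular source is observed as an ellipse with axis ratio $(1-\kappa -|\gamma |)/(1-\kappa +|\gamma |)$:

```python
import numpy as np

kappa, g1, g2 = 0.05, 0.02, 0.01     # illustrative convergence and shear components
gabs = np.hypot(g1, g2)              # |gamma|

A = np.array([[1.0 - kappa - g1, -g2],
              [-g2, 1.0 - kappa + g1]])

# eigenvalues of the symmetric distortion matrix are 1 - kappa -+ |gamma|
evals = np.sort(np.linalg.eigvalsh(A))

# a circular source is therefore observed as an ellipse with axis ratio
axis_ratio = (1.0 - kappa - gabs) / (1.0 - kappa + gabs)
print(evals, axis_ratio)
```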
Therefore we will be interested in the shear correlation function $C_\gamma$ in order to quantify our predictions, given by \citep[][]{Bartelmann:1999yn} \begin{equation} C_\gamma(\theta) = \int ^\infty _0 dl\frac{l}{2\pi}P_\kappa (l)J_0(l\theta) \label{eq:ckappa}, \end{equation} where $\theta$ is the angular distance between the correlated sources, $l$ is the angular wavenumber and the lensing power spectrum $P_\kappa$ is given by \citep[][]{Bacon:2004ht,Massey:2007gh} \begin{equation} \label{eq:Pkappa} P_\kappa (l) = \frac{9}{4} \left(\frac{H_0}{c}\right)^4 \int ^{\chi _{\rm H}} _0 d \chi W_1(\chi) W_2(\chi) a^4 \Omega_{\rm m}(a)^2 P_\delta \left(\frac{l}{\chi},\chi\right), \end{equation} with the weight functions \begin{equation} W(\chi)=\int^{\chi _{\rm H}} _\chi d\chi' G(\chi')\left(1-\frac{\chi}{\chi'}\right), \label{eq:w} \end{equation} where $\chi$ is comoving distance, $\chi _{\rm H}$ is the comoving distance to the horizon and $G(\chi)$ is the normalised distribution of the sources in comoving distance, corresponding to a redshift distribution for the sources. We use two weight functions in Equation \ref{eq:Pkappa} since we are using tomographic weak lensing. Equation (\ref{eq:w}) is valid for flat cosmologies, which are all that are considered in this paper. Usually Eq.~(\ref{eq:Pkappa}) is written with the assumption $\Omega_{\rm m}(a)=\Omega_{\rm m}/a^3$; however the form above does not include such an assumption, as coupling CDM and DE means that $\Omega_{\rm m}$ has a different dependence on time, as shown in Eqs.~(\ref{klein_gordon}-\ref{continuity_radiation}). 
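The chain from $P_{\delta }$ to $C_{\gamma }$ in Eqs.~(\ref{eq:ckappa}-\ref{eq:w}) can be sketched numerically. The example below is a deliberately simplified stand-in for our actual pipeline: it assumes a single source plane rather than a full $G(\chi )$, an Einstein-de Sitter distance-redshift relation, the standard combination $a^{4}\Omega _{\rm m}(a)^{2}=\Omega _{\rm m}^{2}/a^{2}$, and a toy matter power spectrum in place of the simulated {\small CoDECS} spectra:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

c_H0 = 2997.9    # Hubble radius c/H0 in Mpc/h
Om0 = 0.27       # present-day matter density parameter
chi_s = 2000.0   # single source plane at comoving distance chi_s [Mpc/h] (assumption)

def P_delta(k):
    # toy matter power spectrum in (Mpc/h)^3: illustrative shape only,
    # standing in for the tabulated simulation spectra
    return 2.0e4 * k / (1.0 + (k / 0.1) ** 3.5)

def a_of_chi(chi):
    # Einstein-de Sitter distance-redshift relation, chi = 2 (c/H0) (1 - sqrt(a))
    return (1.0 - chi / (2.0 * c_H0)) ** 2

def P_kappa(ell):
    # Limber integral for a single source plane in a flat universe,
    # with a^4 Om_m(a)^2 reduced to Om0^2 / a^2
    def integrand(chi):
        a = a_of_chi(chi)
        W = 1.0 - chi / chi_s            # lensing efficiency for one source plane
        return W * W / a ** 2 * P_delta(ell / chi)
    val, _ = quad(integrand, 1.0, chi_s, limit=200)
    return 2.25 * Om0 ** 2 / c_H0 ** 4 * val

def C_gamma(theta_rad, l_max=5.0e3):
    # shear correlation function: int dl l/(2 pi) P_kappa(l) J0(l theta)
    val, _ = quad(lambda l: l / (2.0 * np.pi) * P_kappa(l) * j0(l * theta_rad),
                  1.0, l_max, limit=400)
    return val

print(C_gamma(np.radians(10.0 / 60.0)))   # C_gamma at 10 arcmin
```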
We have modified the COSMOS CosmoMC code \citep[][]{Lesgourgues:2007te,Lewis:2002ah,Massey:2007gh}, which calculates the combined shear correlation function from the theoretical power spectrum prediction given by CosmoMC, to include cross-correlation of redshift bins and to calculate the predicted weak lensing signal directly from the cDE model power spectra, according to Eqs.~(\ref{eq:ckappa}-\ref{eq:w}). We will now use these results to estimate the discriminatory power from lensing between different coupled DE models. \section{Simulations}\label{Simulations} For our analysis we will rely on the public nonlinear power spectrum data computed from the {\small CoDECS} simulations \citep[][]{CoDECS}, the largest suite of cosmological N-body simulations for cDE models to date, carried out with the modified version by \citet{Baldi_etal_2010} of the widely used Tree-PM parallel N-body code {\small GADGET} \citep[][]{gadget-2}. In particular we will consider the {\small H-CoDECS} suite that includes hydrodynamical simulations of all the cDE models summarized in Table~\ref{tab:models} on relatively small scales. More specifically, the {\small H-CoDECS} runs follow the evolution of $512^{3}$ CDM and $512^{3}$ gas particles in a cosmological comoving box of $80$ Mpc$/h$ a side, with a mass resolution at $z=0$ of $m_{c}=2.39\times 10^{8}$ M$_{\odot }/h$ for CDM and $m_{b}=4.78\times 10^{7}$ M$_{\odot }/h$ for baryons, and a force resolution set by the gravitational softening $\epsilon _{g} = 3.5$ kpc$/h$. Adiabatic hydrodynamical forces on the gas particles are computed by means of the entropy conserving formulation of {\em Smoothed Particle Hydrodynamics} \citep[SPH,][]{Springel_Hernquist_2002} and other radiative processes such as gas cooling, star formation, or feedback mechanisms are not included in the simulations. 
Initial conditions are generated at $z_{i} = 99$ by rescaling, with the appropriate growth factor for each specific model, the displacements obtained for a particular random field realization of the linear power spectrum $P_{\rm lin}(k)$ at $z_{\rm CMB}$. This power spectrum is computed by the publicly available Boltzmann code {\small CAMB} \citep[][]{camb} for a $\Lambda $CDM cosmology with parameters consistent with the latest ``CMB only Maximum Likelihood'' constraints from WMAP7 \citep[][]{wmap7}, which are summarized in Table~\ref{tab:parameters}. This means that all the different simulations have exactly the same initial conditions at $z_{\rm CMB}$, and their different features at low redshifts depend uniquely on the different cosmology in place between last scattering and the present time. \begin{table} \begin{center} \begin{tabular}{cc} \hline Parameter & Value\\ \hline $H_{0}$ & 70.3 km s$^{-1}$ Mpc$^{-1}$\\ $\Omega _{\rm CDM} $ & 0.226 \\ $\Omega _{\rm DE} $ & 0.729 \\ ${\cal A}_{s}(\sigma_8)$ & $2.42 \times 10^{-9}$ (0.801 for $\Lambda$CDM)\\ $ \Omega _{b} $ & 0.0451 \\ $n_{s}$ & 0.966\\ \hline \end{tabular} \end{center} \caption{The set of cosmological parameters at $z=0$ assumed for all the models included in the {\small CoDECS} project, consistent with the latest results of the WMAP collaboration for CMB data alone \citep[][]{wmap7}.} \label{tab:parameters} \end{table} The {\small H-CoDECS} matter power spectra have been computed by evaluating the density of the different matter components on a grid of the same size as the PM grid (\ie $512^{3}$ grid nodes) through a Cloud-in-Cell mass assignment of the different matter species and of the total matter distribution. This procedure allows us to compute the power spectrum up to scales corresponding to the Nyquist frequency of the grid, \ie $k_{\rm Ny} = \pi N/L \approx 20.1\, h/$Mpc.
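The quoted resolution figures can be verified directly from the box size and particle number: $k_{\rm Ny}=\pi N/L$, and the CDM particle mass is $m_{c}=\Omega _{\rm CDM}\,\rho _{{\rm crit},0}\,(L/N)^{3}$ with $\rho _{{\rm crit},0}\simeq 2.775\times 10^{11}\,h^{2}\,{\rm M}_{\odot }/{\rm Mpc}^{3}$:

```python
import numpy as np

L = 80.0               # comoving box side in Mpc/h
N = 512                # particles (and PM grid nodes) per dimension
Omega_cdm = 0.226      # from the cosmological parameter table
rho_crit0 = 2.775e11   # critical density today in h^2 M_sun / Mpc^3

k_ny = np.pi * N / L                          # Nyquist frequency of the 512^3 grid
m_c = Omega_cdm * rho_crit0 * (L / N) ** 3    # CDM particle mass in M_sun/h
print(k_ny, m_c)   # ~20.1 h/Mpc and ~2.39e8 M_sun/h
```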
Beyond this limiting frequency, the power spectrum has been computed with the folding method of \citet{Jenkins_etal_1998,Colombi_etal_2008}, and the two estimations have been smoothly interpolated around $k_{\rm Ny}$. Finally, the combined power spectrum has been truncated at scales where the shot-noise reaches $10\%$ of the measured power. With the power spectra computed with the procedure just described, we have investigated how future weak lensing probes could perform in constraining cDE cosmologies, as discussed in the next Section. \section{Results}\label{Results} \begin{table} \centering \begin{tabular}{cccc} \hline & $n$/ & Area/ \\ Survey & galaxy arcmin$^{-2}$ & degree$^2$\\ \hline DES & 13 & 5000 \\ Euclid & 30 & 15000 \\ \hline \end{tabular} \caption{Galaxy density, $n$, and area assumed for our fiducial DES and Euclid surveys.} \label{table:galDensity} \end{table} We calculated the combined shear correlation function for each of our models using equations \ref{eq:ckappa}-\ref{eq:w}. We consider two types of survey: a ground-based survey modelled on DES, and a space-based survey, Euclid; the adopted galaxy density and survey area are shown in Table \ref{table:galDensity}. In calculating the shear correlation function for these surveys we therefore use a DES-like redshift distribution given by \begin{equation} n(z)=(z^a+z^{ab})/(z^b+c) \end{equation} where $a=0.612$, $b=8.125$, $c=0.62$, and a space survey redshift distribution for Euclid given by \begin{equation} n(z)=\alpha\Sigma_0\frac{z^2}{z_0^3}\exp(-(z/z_0)^\beta) \end{equation} where $\alpha=2$, $\beta=3/2$, $z_0=0.63$ and $\Sigma_0=27$ as used in \citet{Beynon:2009yd}. 
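For concreteness, the two source redshift distributions can be evaluated and normalised as follows; the normalisation range $0<z<3$ is an illustrative choice:

```python
import numpy as np
from scipy.integrate import quad

def n_des(z, a=0.612, b=8.125, c=0.62):
    # DES-like fit: n(z) = (z^a + z^{a b}) / (z^b + c)
    return (z ** a + z ** (a * b)) / (z ** b + c)

def n_euclid(z, z0=0.63, beta=1.5):
    # Euclid-like distribution (unnormalised): z^2 / z0^3 * exp(-(z/z0)^beta)
    return z ** 2 / z0 ** 3 * np.exp(-((z / z0) ** beta))

# normalise both to unit integral over 0 < z < 3 (illustrative range)
norm_des, _ = quad(n_des, 0.0, 3.0)
norm_euc, _ = quad(n_euclid, 0.0, 3.0)

zgrid = np.linspace(0.01, 3.0, 300)
p_euc = n_euclid(zgrid) / norm_euc
z_peak_euc = zgrid[np.argmax(p_euc)]   # analytically at z0 * (4/3)^(2/3) ~ 0.76
print(z_peak_euc)
```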
We also calculated simulated covariance matrices including sample variance and shape noise in a similar way to that calculated in \citet{Beynon:2009yd} using the Horizon simulation \citep[][]{Teyssier:2009zd}; here we used 81 patches of 3.4 square degrees to estimate cosmological sample variance, and assumed an intrinsic shape noise of $\sigma_\gamma=0.2$ on each component of the shear. In order to examine whether interacting dark energy models can be detected by forthcoming space and ground-based missions, we can assess the difference in $\chi^2$ between the best-fit $\Lambda$CDM and best-fit interacting DE model for a given dataset. One could choose a fiducial $\Lambda$CDM shear correlation function with realistic error-bars, and find the best-fit interacting DE model for this; but it is more convenient computationally to choose a fiducial interacting DE model and vary parameters of the easily obtained $\Lambda$CDM models to find the best standard cosmology fit. The difference in $\chi^2$ between the two best-fit models is the same whichever way round we choose, and is a measure of our ability to distinguish between the two types of model. We ran CosmoMC to find the best fit $\Lambda$CDM models for each of the cDE models with different CDM couplings. We used the following parameter space: $0\leq\Omega_m\leq0.5$, $0.5\leq\sigma_8\leq1$, $0.4\leq h\leq1$, $-2\leq w \leq0$ and $0.01\leq\Omega_b\leq0.15$. The tomographic lensing results were studied for 3 cross-correlated redshift bins of equal size between $z=0.3$ and $z=1.5$ and $1' \leq \theta \leq 90'$. 
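The $\Delta \chi ^{2}$ statistic used here is the standard Gaussian form $\Delta \chi ^{2}=\mathbf{d}^{\rm T}{\rm C}^{-1}\mathbf{d}$, with $\mathbf{d}$ the difference between the two best-fit correlation functions and ${\rm C}$ the covariance matrix. A toy sketch (all numbers illustrative) showing how positive off-diagonal covariance terms reduce the significance relative to a purely diagonal error budget:

```python
import numpy as np

n_bins = 12
bins = np.arange(n_bins)
model_cde = 1.0e-4 * np.exp(-bins / 4.0)   # fiducial cDE correlation function (toy)
model_lcdm = 0.96 * model_cde              # best-fit LCDM, 4 per cent lower (toy)

sigma = 0.02 * model_cde                   # 2 per cent statistical error per bin
corr = 0.3 ** np.abs(bins[:, None] - bins[None, :])   # mild bin-to-bin correlation
cov = corr * np.outer(sigma, sigma)

d = model_cde - model_lcdm
delta_chi2 = float(d @ np.linalg.solve(cov, d))
diag_chi2 = float(np.sum((d / sigma) ** 2))   # value if the bins were independent
print(delta_chi2, diag_chi2)
```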
\subsection{Constant coupling models with an exponential potential} \begin{figure*} \centering \hspace{3mm} \mbox{ {\psfig{figure=DES_Ctheta.ps,width=76mm}} } \hspace{7 mm} \mbox{ {\psfig{figure=Euclid_Ctheta.ps,width=76mm}} } \caption{Correlation function predicted for cDE models with error estimates for DES (left) and Euclid (right) surveys using WMAP7 best fit parameters.} \label{fig:correlation} \end{figure*} \begin{table} \centering \begin{tabular}{cccc} \hline & & DES & Euclid \\ Model & $\beta_0$ & $\Delta\chi^2$ & $\Delta\chi^2$\\ \hline EXP001&0.05 & 3 & 30 \\ EXP002&0.1 & 48 & 480\\ EXP003&0.15 & 340 & 3300 \\ \hline \end{tabular} \caption{Best fit $\Delta\chi^2$ for different couplings, using errors calculated for DES and Euclid surveys.} \label{chi sq table} \end{table} \begin{figure*} \centering \mbox{ \subfigure[$\Lambda$CDM]{\psfig{figure=LCDM_contours.ps,width=110mm} \label{fig:lcdm}} \hspace{-22 mm} \subfigure[EXP001 ($\beta_0=0.05$)]{\psfig{figure=EXP001_contours.ps,width=110mm} \label{fig:exp001}} } \mbox{ \subfigure[EXP002 ($\beta_0=0.1$)]{\psfig{figure=EXP002_contours.ps,width=110mm} \label{fig:exp002}} \hspace{-22 mm} \subfigure[EXP003 ($\beta_0=0.15$)]{\psfig{figure=EXP003_contours.ps,width=110mm} \label{fig:exp003}} } \caption{Constraints on $\Omega_m$, $\sigma_8$, $n_s$, $w$ and $H_0$. The light grey contours show the 68\% and 95\% confidence limits for DES, while the dark grey contours show the 68\% and 95\% confidence limits for Euclid.} \label{fig:contours} \end{figure*} In this section we look at how introducing a constant coupling between DM and DE (models EXP001-3 in Table \ref{tab:models}) affects the weak lensing signal. The shear correlation functions, with WMAP7 initial conditions, are shown in Figure \ref{fig:correlation}. Note that $\beta_0$ primarily changes the amplitude of the correlation function, with an additional slight alteration in slope. 
The difference in $\chi^2$ for each of the different constant couplings is shown in Table \ref{chi sq table}, and we see that lensing with Euclid should be able to discriminate between $\beta_0\geq0.05$ and $\Lambda$CDM at a confidence level of $5\sigma$, while DES should be able to discriminate between $\beta_0\geq0.1$ and $\Lambda $CDM at a confidence level of $4\sigma$. Figure \ref{fig:contours} shows that the best fit $\Lambda$CDM models for each of the couplings occupy quite different parameter regions, especially for Euclid. The discrepancies between DES and Euclid predictions in these plots are found to be due to the off-diagonal covariance matrix terms; this can be seen by examining the best fit models for DES and Euclid along with the cDE model we are trying to fit. The best fit for our DES survey appears to be a worse fit at small $\theta$ and a better fit at large $\theta$ than the Euclid best fit. This is due to the covariance being largest for large angles and high redshifts: DES has a larger contribution from shape noise at small $\theta$, allowing a worse fit on small scales, while Euclid is more sensitive to the covariance on large scales. This discrepancy between the DES and Euclid best fit $\Lambda$CDM models increases as $\beta_0$ increases. These results show that if dark energy and dark matter truly do interact in the way described by our class of models, and we attempt to fit a $\Lambda$CDM cosmology to the observations, then we will infer increased values of $H_0$ and $\sigma_8$, and a decrease in $w$ and $\Omega_m$ as $\beta_0$ increases. It should be noted that \cite{Kirk:2011sw} and \cite{Laszlo:2011sv} have recently shown that the effects of modified gravity and alternative dark energy models can be degenerate with systematics due to intrinsic alignments.
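One common way to translate the $\Delta \chi ^{2}$ values of Table~\ref{chi sq table} into confidence levels depends on the adopted number of degrees of freedom; for a single extra parameter (the coupling $\beta _{0}$) the two-sided mapping reduces to $\sigma =\sqrt{\Delta \chi ^{2}}$. A minimal sketch of this convention (one common choice, not uniquely defined):

```python
import numpy as np
from scipy.stats import chi2, norm

def significance(delta_chi2, dof=1):
    # convert a chi^2 difference into an equivalent Gaussian number of sigmas;
    # dof is the number of extra parameters (two-sided convention)
    p = chi2.sf(delta_chi2, dof)
    return norm.isf(p / 2.0)

# with one extra parameter this reduces to sqrt(delta_chi2)
for dchi2 in (3.0, 30.0, 48.0):
    print(dchi2, np.sqrt(dchi2), significance(dchi2, dof=1))
```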
Baryonic physics has also been shown to have possibly large effects on the matter power spectrum from wavenumbers as low as $k=0.3\,h/$Mpc \citep[][]{vanDaalen:2011xb,Semboloni_etal_2011}. In this paper we do not include these effects, as we are seeking to present the pure shear signal predictions. Our results should therefore be considered best-case predictions which will be diluted by the impact of systematic and baryonic effects. \begin{table*} \begin{tabular}{llccccc} \hline Survey & Model & $w$ & $H_0 $ & $\sigma_8$ & $\Omega_m$ & $n_s$ \\ \hline \multirow{5}{*}{DES} &EXP001 & $-0.974 \pm 0.020$ & $69.2\pm3.5$ & $0.834\pm0.005$ & $0.264\pm0.003$ & $0.952\pm0.013$\\ & EXP002 & $-1.012\pm0.047$ & $82.7\pm9.9$ & $0.881\pm0.010$ & $0.259\pm0.005$ & $0.973\pm0.012$\\ & EXP003 & $-1.110\pm0.045$ & $95.1\pm2.8$ & $0.946\pm0.008$ & $0.258\pm0.004$ & $0.947\pm0.009$ \\ & EXP008e3 & $-0.981\pm0.048$ & $77.3\pm10.0$ & $0.889\pm0.010$ & $0.262\pm0.005$ & $0.954\pm0.014$ \\ & SUGRA003 & $-0.755\pm0.044$ & $81.1\pm6.1$ & $0.760\pm0.013$ & $0.305\pm0.008$ & $0.760\pm0.013$ \\ \hline \multirow{5}{*}{Euclid} &EXP001 & $-0.974\pm0.020$ & $69.2\pm3.5$ & $0.834\pm0.005$ & $0.264\pm0.003$ & $0.952\pm0.013$ \\ & EXP002 & $-0.888\pm0.020$ & $66.1\pm1.5$ & $0.918\pm0.004$ & $0.251\pm0.002$ & $0.956\pm0.018$ \\ & EXP003 & $-1.004\pm0.020$ & $73.3\pm1.3$ & $1.060\pm0.002$ & $0.218\pm0.001$ & $1.009\pm0.007$ \\ & EXP008e3 & $-0.881\pm0.020$ & $65.6\pm0.5$ & $0.935\pm0.004$ & $0.247\pm0.002$ & $0.922\pm0.016$ \\ & SUGRA003 & $-0.804\pm0.020$ & $85.4\pm2.2 $ & $0.745\pm0.004$ & $0.314\pm0.004$ & $1.092\pm0.007$ \\ \hline \end{tabular} \caption{Marginalised parameters for the $\Lambda$CDM fits to each model, for the DES and Euclid surveys, with $1\sigma$ errors.} \label{tab:margparams} \end{table*} \subsection{Other potentials and coupling} Although in the previous section we restricted ourselves to constant coupling models with an exponential potential, the cDE framework also allows for
different potentials and an evolving coupling. Two of the {\small CoDECS} simulations explore this freedom: EXP008e3, which has the same potential as the models in the previous section but with an evolving coupling, and SUGRA003, which has a SUGRA potential with a constant coupling. Since there is not yet a suite of simulations of this type exploring the full range of parameter space, we have included them as lone examples simply to demonstrate the range of the cDE model. The power spectrum for these models is shown in Figure \ref{fig:PowerspectrumEvolve}, where we can see that for the EXP008e3 model the differences between the cDE model and $\Lambda$CDM are similar to those found for the larger constant couplings (EXP002/3). On the other hand, the SUGRA003 model differs less from $\Lambda$CDM at large scales but much more at small scales (almost 100\% at $k=10\,h/$Mpc), demonstrating how important it is to carry out full simulations of these models in order to obtain small-scale predictions. We again found a best fit $\Lambda$CDM model using CosmoMC; the $\chi^2$ of the best fit, shown in Table \ref{chi sq table other}, demonstrates that we would be able to exclude these particular models at $\geq2\sigma$ with DES and at $>7\sigma$ with Euclid. Further investigation of these types of model would allow constraints to be placed on the parameters characterizing the coupling and the potential.
\begin{table} \centering \begin{tabular}{ccc} \hline & DES & Euclid \\ Model & $\Delta\chi^2$ & $\Delta\chi^2$\\ \hline EXP008e3& 64 & 570 \\ SUGRA003& 16 & 100 \\ \hline \end{tabular} \caption{Best fit $\Delta\chi^2$ for EXP008e3 and SUGRA003 using errors calculated for DES and Euclid.} \label{chi sq table other} \end{table} \begin{figure} \hspace{7mm} \psfig{figure=powerspectraEvolve.ps,width=76mm} \caption{Power spectrum for an evolving coupling model with an exponential potential (EXP008e3), and a constant coupling model with a SUGRA potential.} \label{fig:PowerspectrumEvolve} \end{figure} \subsection{Comparison of simulations and Halofit} \label{comparison} In Section \ref{coupledDE} we discussed the importance of using N-body simulations rather than $\Lambda$CDM non-linear fitting formulas such as Halofit \citep[][]{Smith:2002dz} to estimate the non-linear power spectrum for cDE models. In Figure \ref{fig:correlation_halo} we show that the use of Halofit to estimate the non-linear power spectrum results in errors in the shear correlation function that exceed the statistical errors, for each of the surveys and for all of the models considered. This demonstrates the importance of using N-body simulations to predict the non-linear matter power spectrum for cDE models, and that further simulations for a variety of cDE models are required to make accurate weak lensing forecasts using non-linear scales. \begin{figure*} \centering \hspace{3mm} \mbox{ {\psfig{figure=DES_halofit.ps,width=76mm}} } \hspace{7 mm} \mbox{ {\psfig{figure=Euclid_halofit.ps,width=76mm}} } \caption{Difference between the shear correlation function calculated using simulations and that calculated using Halofit.
Also shown is the measurement error (from sample variance and shape noise) for DES (left) and Euclid (right) using WMAP7 best fit parameters.} \label{fig:correlation_halo} \end{figure*} \section{Conclusions}\label{Conclusions} In this paper we have presented weak lensing predictions for cDE models using the non-linear power spectrum calculated by the {\small CoDECS} simulations. We have calculated the total shear power spectrum for each of the models, and used CosmoMC to find the best fit $\Lambda$CDM model; we have demonstrated the discriminatory power of future lensing surveys such as DES and Euclid, where it should be possible to tightly constrain constant coupling models with exponential potentials to $\beta_0<0.05$ with Euclid, or $\beta_0<0.1$ with DES. However, this should be considered a best-case scenario, since the inclusion of intrinsic alignments and baryonic physics may impact the constraining power; this will be the subject of future work. We have shown that for cDE models with larger coupling there is a clear difference between the best fit $\Lambda$CDM for the same model but different surveys. This difference is due to the dominance of the off-diagonal covariance matrix terms over the diagonal for larger surveys, and shows the importance of including these off-diagonal terms in weak lensing predictions. We have also calculated the expected signal for a non-constant coupling model and a non-exponential potential model. These models could be excluded by $\geq2\sigma$ for a DES-like survey and $>7\sigma$ for Euclid. However we have not obtained constraints on the parameters of these types of model, since currently N-body simulations for these models have only been run with one parameter set. A substantial set of simulations would be required in order to properly sample the parameter space of these more complex scenarios. 
This will be a worthwhile task, as the effects of these cosmologies appear to be more difficult to detect in the background and in the linear regime than those of standard interacting dark energy models, making non-linear N-body simulations vital for realistic lensing predictions. We have also shown the size of the error on weak lensing predictions if a $\Lambda$CDM non-linear fitting formula, such as Halofit, is used to estimate the matter power spectrum instead of simulations. We find that this Halofit error is larger than the statistical error for both the DES and Euclid surveys, and for all of the models considered here. This demonstrates the importance of using a full N-body code to estimate the non-linear power spectrum. \section*{Acknowledgements} DB is supported by an RCUK Academic Fellowship. KK is supported by the STFC (grant no. ST/H002774/1), a European Research Council Starting Grant and the Leverhulme Trust. EB is funded by an STFC PhD studentship. MB acknowledges support by the DFG Cluster of Excellence ``Origin and Structure of the Universe'' and by the TRR33 Transregio Collaborative Research Network on the ``Dark Universe''.
\usepackage{pifont} \usepackage{natbib} \usepackage{geometry} \usepackage{graphicx} \usepackage{hyperref} \RequirePackage{lineno} \usepackage{subfig} \usepackage{amssymb} \usepackage{multirow} \usepackage{setspace} \usepackage{url} \usepackage{color} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}{Corollary} \newtheorem{rmk}{Remark} \newtheorem{defi}{Definition} \newtheorem{assu}{Assumption} \newproof{proof}{Proof} \begin{document} \begin{frontmatter} \title{Generalized Canonical Correlation Analysis for Classification} \author[jhuams]{Cencheng Shen} \ead{[email protected]} \author[jhuece]{Ming Sun} \ead{[email protected]} \author[jhuams]{Minh Tang} \ead{[email protected]} \author[jhuams]{Carey E. Priebe\corref{cor1}} \ead{[email protected]} \address[jhuams]{Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD 21218} \address[jhuece]{Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218} \cortext[cor1]{Corresponding author} \begin{abstract} For multiple multivariate datasets, we derive conditions under which Generalized Canonical Correlation Analysis (GCCA) improves classification performance of the projected datasets, compared to standard Canonical Correlation Analysis (CCA) using only two datasets. We illustrate our theoretical results with simulations and a real data experiment. \end{abstract} \begin{keyword} generalized canonical correlation analysis, classification, low-dimensional projection, Stiefel manifold \end{keyword} \end{frontmatter} \section{Introduction} With the advent of big data acquisition technology, collected datasets have grown faster than our understanding of how to make optimal use of them. It is common to find collections/measurements of related objects, such as the same article in different languages, similar talks given by different presenters, similar weather patterns in different years, etc. 
It remains to determine how much the available big data helps us in statistical analysis; simply throwing every collected dataset into the mix may not yield an optimal output. Thus it is natural and important to understand theoretically when and how additional datasets improve the performance of various statistical analysis tasks such as regression, clustering, classification, etc. This is our motivation to explore the following classification problem. Let $(X,Y) \sim F_{XY}$ be an $\mathbb{R}^{m} \times \{1,\ldots,K\}$ random pair, where $X$ is the feature vector and $Y$ is the class label. In statistical pattern recognition (see, e.g., \cite{DGLPatternRecognitionBook}, \cite{DHSPatternRecognitionBook}) one seeks a classifier $g: \mathbb{R}^{m} \rightarrow \{1,\ldots,K\}$ such that the probability of misclassification $L(g)=P\{g(X)\neq Y\}$ is acceptably small. Because modern datasets are often multi-dimensional, the feature vector $X$ is assumed to be a multivariate random variable of dimension $m$. Since $m$ is usually large, it is often beneficial to carry out the classification in some lower dimension $d$ ($1 \leq d < m$); therefore dimension reduction is first applied to embed $X$ from $\mathbb{R}^{m}$ into $\mathbb{R}^{d}$, prior to subsequent classification. Herein we consider only linear projections, which are commonly used and are the foundation for many nonlinear methods. We denote a linear projection from $\mathbb{R}^{m}$ to $\mathbb{R}^{d}$ by an $m \times d$ matrix $A$; then $A^{'}X$ (the $'$ sign denotes transpose) is the projected feature vector in $\mathbb{R}^{d}$. It follows that the classification error for a given classifier $g$ (whose domain is $\mathbb{R}^{d}$ from now on) is $L_{A} = P\{g(A^{'}X) \neq Y\}$. 
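To make the projection-then-classify setup concrete, the following minimal sketch estimates $L_{A}$ empirically for a toy two-class distribution; the Gaussian class-conditional model, the orthonormal projection, and the nearest-centroid stand-in for the classifier $g$ are all our own illustrative assumptions, not part of the theory:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, n = 10, 2, 2000

# Toy F_XY (our own assumption): two classes whose means differ
# along the first coordinate of R^m.
Y = rng.integers(0, 2, size=n)
X = rng.standard_normal((n, m))
X[:, 0] += 1.5 * Y

# An arbitrary m x d projection A with orthonormal columns.
A = np.linalg.qr(rng.standard_normal((m, d)))[0]
XA = X @ A  # the projected feature vector A'X, now in R^d

# A simple stand-in for the classifier g: nearest class centroid in R^d.
mu0, mu1 = XA[Y == 0].mean(axis=0), XA[Y == 1].mean(axis=0)
pred = (np.linalg.norm(XA - mu1, axis=1)
        < np.linalg.norm(XA - mu0, axis=1)).astype(int)
L_A = np.mean(pred != Y)  # empirical estimate of P{g(A'X) != Y}
print(XA.shape, L_A)
```

Different choices of $A$ give different errors $L_{A}$, which is precisely why the choice of projection matters.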
Given a distribution $F_{XY}$, a classifier $g$, and a non-empty set of linear projections $\mathcal{A}$, we define an optimal projection $A^{*} \in \arg\min_{A\in \mathcal{A}} \{ L_{A} \} $ and denote the corresponding minimum error as $L_{A^{*}}$. The set $\mathcal{A}$ and the existence of $A^{*}$ are discussed in Section~\ref{preliminaries} and Assumption~\ref{assum}. Roughly speaking, $L_{A^{*}}$ is the minimum error one can hope to achieve by choosing $A$ cleverly among linear projections. Assuming that the classifier $g$ is specified, the crucial step is to choose the dimension reduction method. If we have only $X$ available as the feature vector, then PCA (Principal Component Analysis) \cite{JolliffePCABook} is a natural choice, which is applied for classification in \cite{YangYangPCALDA2003}. On the other hand, if there is an auxiliary feature $Z_{1}$ of dimension $m_{1}$ available, that is, $(X, Z_{1},Y) \sim F_{XZ_{1}Y}$ on $\mathbb{R}^{m} \times \mathbb{R}^{m_{1}} \times \{1,\ldots,K\}$, then CCA (Canonical Correlation Analysis) \cite{HotellingCCA1936} is applicable on the pair $(X,Z_{1})$ to derive the projection $A$, which is used in \cite{CCAOverview}. In general, if there are $S$ auxiliary features $\{Z_{s} \in \mathbb{R}^{m_{s}}, s=1,\ldots,S \}$ (we always assume $1 \leq d \leq \min{\{m,m_{1},\ldots,m_{S}\}}$), then GCCA (Generalized Canonical Correlation Analysis) \cite{Kettenring1971CCA} is applicable on $(X, Z_{1},\cdots,Z_{S})$ to derive $A$ based on $X$ and the auxiliary features $\{ Z_{s} \}$. Note that our classification task remains the same, so that at the classification step we observe only $X$ but not $\{Z_{s}\}$; and so by ``GCCA/CCA is applicable'' we mean ``GCCA/CCA can be used to derive the projection matrix $A$ for use in the classifier $g(A^{'}X)$''. Furthermore, although CCA is a special case of GCCA, for clarity purposes we shall assume that GCCA uses at least two auxiliary features whenever GCCA is compared to CCA. 
If we view the auxiliary features as extra datasets available for use, GCCA can exploit more of them than CCA, but it is not obvious whether these additional datasets allow GCCA to outperform CCA. We should also point out that another popular approach incorporates GCCA/CCA into the supervised learning step explicitly as a classification rule \cite{HastieBujaTibshirani1995}, \cite{SunJiYeCCA2011}, \cite{TenenhausTenenhausRGCCA2011}, which is empirically more suitable when classification is the only goal. In our setting, by contrast, we first apply GCCA/CCA to project the data and then carry out supervised learning on the projected data with the known labels; this is a more general and more classical view of exploring given data, and the projected data can also serve other inference tasks such as testing and clustering. The two approaches are not in conflict: one may first apply GCCA/CCA to project the data without the labels, followed by classification using supervised CCA (which is in fact equivalent to linear discriminant analysis in the two-class case \cite{HastieBujaTibshirani1995}). The above setting leads to the following questions. Does GCCA perform better than CCA in classification when using additional auxiliary features? From an application point of view, do additional datasets help the subsequent classification task, and what type of dataset should be included as an auxiliary feature in deriving the projection? It turns out the answer is not simple. We address these questions theoretically, by deriving conditions on the auxiliary features that imply the superiority of GCCA. Write the joint feature as $(X,Z_{1},\cdots,Z_{S}) \sim F_{S+1}$, and denote by $A_{s+1}$ a projection matrix derived from GCCA/CCA using $X$ and $s$ auxiliary features. 
Our main objective is to derive sufficient conditions on $F_{3}$ such that if $\max{\{L_{A_{2}}\}}=L_{A^{*}}$, then $L_{A_{3}}=L_{A^{*}}$, as well as sufficient conditions such that $L_{A^{*}} = L_{A_{3}} < \min{\{L_{A_{2}}\}}$; and their generalizations to $F_{S+1}$ with arbitrary $s \geq 2$. (Note that when there are two auxiliary features, $A_{2}$ may come from applying CCA to either $(X,Z_{1})$ or $(X,Z_{2})$; hence the `max' and `min'.) Equivalently, the objective is to demonstrate that additional datasets can be useful for the classification task when the conditions are satisfied. The necessary prerequisites are discussed in Section~\ref{preliminaries}. The sufficient conditions and the resulting theorems are given in Section~\ref{results}. Section~\ref{discuss} offers some discussion relating the results to practical scenarios such as high-dimensional data and functional data, beyond the classical multivariate setting on which the theorems are based. Our theoretical results are illustrated via simulations, as well as a real data experiment on Wikipedia documents, in Section~\ref{numer}. All proofs are collected in Section~\ref{append}, together with brief comments elaborating on the sufficient conditions. \section{Preliminaries} \label{preliminaries} Given two auxiliary features $Z_{1}$ and $Z_{2}$, the joint distribution of $(X, Z_{1}, Z_{2})$ is denoted by $F_{3} \in \Omega_{3}$, where $\Omega_{3}$ is a family of multivariate distributions on $\mathbb{R}^{(m+m_{1}+m_{2})}$. The overall covariance matrix of $F_{3}$ is denoted by \[ \Sigma_{F_{3}}= \left [ \begin{array}{ccc} \Sigma_{X} & \Sigma_{XZ_{1}} & \Sigma_{XZ_{2}} \\ \Sigma_{XZ_{1}}^{'} & \Sigma_{Z_{1}} & \Sigma_{Z_{1}Z_{2}} \\ \Sigma_{XZ_{2}}^{'} & \Sigma_{Z_{1}Z_{2}}^{'} & \Sigma_{Z_{2}} \end{array} \right ] \in \mathbb{R}^{(m+m_{1}+m_{2}) \times (m+m_{1}+m_{2})}. 
\] The overall covariance matrix and the individual $\Sigma_{X}$, $\Sigma_{Z_{1}}$ and $\Sigma_{Z_{2}}$ are all assumed finite and positive semi-definite with rank no less than $d$. We can consider GCCA/CCA either with the population covariances or with the sample covariances. For our theoretical analysis we consider the population covariances directly, while in the numerical section we use the sample covariances, which are asymptotically equivalent in the classical multivariate setting under standard regularity conditions \cite{AndersonBook}. Identifying the CCA projection $A_{2}=A_{2}(X,Z_{1})$ can be approached as the problem of finding two sets of unit-length canonical vectors $\{a_{i}\}$ and $\{b_{i}\}$ to maximize the correlation between $a_{i}^{'}X$ and $b_{i}^{'}Z_{1}$ for each $i=1,\ldots,d$. (The size of $a_{i}$ is $m \times 1$ and the size of $b_{i}$ is $m_{1} \times 1$.) That is, we wish to identify \begin{equation} \label{ccaCond} \arg\max_{a_{i},b_{i}} \rho_{\{a_{i}^{'}X,b_{i}^{'}Z_{1}\}}=\frac{a_{i}^{'}\Sigma_{XZ_{1}}b_{i}}{\sqrt{a_{i}^{'}\Sigma_{X}a_{i}}\sqrt{b_{i}^{'}\Sigma_{Z_{1}}b_{i}}}, \end{equation} \begin{center}subject to the \textit{uncorrelated constraints} \end{center} \begin{align*} \rho_{\{a_{i}^{'}X,a_{j}^{'}X\}} = \frac{a_{i}^{'}\Sigma_{X}a_{j}}{\sqrt{a_{i}^{'}\Sigma_{X}a_{i}}\sqrt{a_{j}^{'}\Sigma_{X}a_{j}}}=0 \mbox{\ and \ }\rho_{\{b_{i}^{'}Z_{1},b_{j}^{'}Z_{1}\}} = \frac{b_{i}^{'}\Sigma_{Z_{1}}b_{j}}{\sqrt{b_{i}^{'}\Sigma_{Z_{1}}b_{i}}\sqrt{b_{j}^{'}\Sigma_{Z_{1}}b_{j}}}=0, \forall j <i. \end{align*} Then the $m \times d$ matrix $A_{2}=[a_{1},\ldots,a_{d}]$ is the CCA projection matrix for $X$, and $A_{2}^{'}X \in \mathbb{R}^{d}$ is the projected feature vector. Alternatively, a different $A_{2}=A_{2}(X,Z_{2})$ can be identified. Note that the arguments to $A_{2}$ -- $(X,Z_{1})$ or $(X,Z_{2})$ -- represent the choice of auxiliary features, and will be suppressed if the choice is clear or irrelevant in the context. 
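The maximization in Equation~(\ref{ccaCond}) admits the standard closed-form solution via a singular value decomposition of the whitened cross-covariance $\Sigma_{X}^{-1/2}\Sigma_{XZ_{1}}\Sigma_{Z_{1}}^{-1/2}$. The sketch below illustrates this with sample covariances on synthetic data; the data-generating model and all parameter values are our own illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, m1, d = 5000, 6, 5, 2

# Synthetic correlated pair (X, Z1) driven by a shared latent signal
# (our own toy model for illustration).
Lat = rng.standard_normal((n, 3))
X  = Lat @ rng.standard_normal((3, m))  + 0.5 * rng.standard_normal((n, m))
Z1 = Lat @ rng.standard_normal((3, m1)) + 0.5 * rng.standard_normal((n, m1))

def cov(P, Q):  # sample cross-covariance
    return (P - P.mean(0)).T @ (Q - Q.mean(0)) / (len(P) - 1)

def inv_sqrt(S):  # symmetric inverse square root of a covariance matrix
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

Sx, Sz, Sxz = cov(X, X), cov(Z1, Z1), cov(X, Z1)
U, s, Vt = np.linalg.svd(inv_sqrt(Sx) @ Sxz @ inv_sqrt(Sz))

A2 = inv_sqrt(Sx) @ U[:, :d]        # canonical directions a_1, ..., a_d
A2 /= np.linalg.norm(A2, axis=0)    # unit-length columns, as in the text
print(s[:d])                        # canonical correlations
print(np.round(A2.T @ Sx @ A2, 3))  # off-diagonals ~ 0: uncorrelated constraints
```

The singular values are the canonical correlations, and by construction the columns of $A_{2}$ satisfy the uncorrelated constraints $a_{i}^{'}\Sigma_{X}a_{j}=0$ for $i \neq j$.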
To identify the GCCA projection $A_{3}$ based on $(X,Z_{1},Z_{2})$, we are looking for three sets of unit-length canonical vectors $\{a_{i}\}$, $\{b_{i}\}$ and $\{c_{i}\}$ as follows: \begin{equation} \label{gccaCond} \begin{split} &\arg\max_{a_{i}, b_{i}, c_{i} } (\rho_{\{a_{i}^{'}X,b_{i}^{'}Z_{1}\}}^{r}+\rho_{\{b_{i}^{'}Z_{1},c_{i}^{'}Z_{2}\}}^{r}+\rho_{\{a_{i}^{'}X,c_{i}^{'}Z_{2}\}}^{r}) \\ \mbox{ subject to } & \rho_{\{a_{i}^{'}X,a_{j}^{'}X\}}=\rho_{\{b_{i}^{'}Z_{1},b_{j}^{'}Z_{1}\}}=\rho_{\{c_{i}^{'}Z_{2},c_{j}^{'}Z_{2}\}}=0, \ \forall j <i, \end{split} \end{equation} where the exponent $r$ in the GCCA formulation~(\ref{gccaCond}) indicates the specific GCCA criterion. A common practice is to set $r=1$ or $2$, which maximizes either the sum of correlations or the sum of squared correlations \cite{Kettenring1971CCA}. Then $A_{3}=[a_{1},\ldots,a_{d}]$ is the desired GCCA projection. In general, given $F_{S+1}$ we can derive the GCCA projection $A_{s+1}$ for any $1 \leq s \leq S$, and CCA is merely a special case for $s=1$. Because our results are shown to hold for any $r \geq 1$, we implicitly take $r=1$ unless mentioned otherwise. Given $\Sigma_{X}$, we shall call an $m \times d$ matrix $A=[a_{1},\ldots,a_{d}]$ a ``potential" GCCA projection if and only if its columns $\{a_{i}\}$ are of unit-length and satisfy the uncorrelated constraints. The set containing all potential GCCA projections is denoted by $\mathcal{A}=\{A | \ \rho_{\{a_{i}^{'}X,a_{j}^{'}X\}}=0 \ \forall i \neq j \mbox{ and } \|a_{i}\|=1 \ \forall i\}$. As a different choice of auxiliary features yields a different projection, we denote the set containing the GCCA projections $A_{3}$ by $\mathcal{A}_{3}$ and the set containing all CCA projections $A_{2}$ by $\mathcal{A}_{2}$, as well as the set $\mathcal{A}_{s+1}$ in general. Clearly the elements of $\mathcal{A}_{s+1}$ as well as $\mathcal{A}$ depend on $\Sigma_{X}$. 
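For the $r=1$ criterion in (\ref{gccaCond}), the first canonical triple $(a_{1},b_{1},c_{1})$ can be computed by block-coordinate ascent, in the spirit of the RGCCA iterations used later in the numerical section. The toy sketch below uses the equivalent unit-variance normalization $a^{'}\Sigma_{X}a=1$; the data model, seed, and iteration count are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
# three blocks sharing a two-dimensional latent signal (our own toy example)
Lat = rng.standard_normal((n, 2))
X  = Lat @ rng.standard_normal((2, 6)) + rng.standard_normal((n, 6))
Z1 = Lat @ rng.standard_normal((2, 5)) + rng.standard_normal((n, 5))
Z2 = Lat @ rng.standard_normal((2, 4)) + rng.standard_normal((n, 4))

def cov(P, Q):
    return (P - P.mean(0)).T @ (Q - Q.mean(0)) / (len(P) - 1)

Sx, S1, S2 = cov(X, X), cov(Z1, Z1), cov(Z2, Z2)
Sx1, Sx2, S12 = cov(X, Z1), cov(X, Z2), cov(Z1, Z2)

def unit_var(v, S):  # rescale v so that v'Sv = 1
    return v / np.sqrt(v @ S @ v)

a = unit_var(rng.standard_normal(6), Sx)
b = unit_var(rng.standard_normal(5), S1)
c = unit_var(rng.standard_normal(4), S2)
for _ in range(200):  # block-coordinate ascent on the r=1 objective
    a = unit_var(np.linalg.solve(Sx, Sx1 @ b + Sx2 @ c), Sx)
    b = unit_var(np.linalg.solve(S1, Sx1.T @ a + S12 @ c), S1)
    c = unit_var(np.linalg.solve(S2, Sx2.T @ a + S12.T @ b), S2)

# each term is a correlation because the projected variables have unit variance
obj = a @ Sx1 @ b + a @ Sx2 @ c + b @ S12 @ c
print(round(obj, 3))
```

Each block update solves its own constrained maximization in closed form, so the sum-of-correlations objective is non-decreasing over the iterations and is bounded above by $3$.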
Note that the PCA projection is also an element of $\mathcal{A}$, but this is not of our concern in this paper. An important special case: $\mathcal{A}$ represents the Stiefel manifold \cite{ChikuseBook} (containing all orthogonal projections onto dimension $d$ linear subspaces) when $\Sigma_{X}$ is a multiple of the identity. Note that the original GCCA/CCA algorithm does not require the norm of $a_{i}$ to be the same for all $i$. We choose them to be unit-length consistently in order to avoid scaling issues in the classification step (alternatively, it is a common practice to set $a_{i}^{'}\Sigma_{X}a_{i}=1$ for all $i$, which is equivalent for our purposes). Also note that the choice of the GCCA/CCA projections can be arbitrary. For example, let $\Sigma_{X}$ and $\Sigma_{Z_{1}}$ be identity matrices and all the singular values of $\Sigma_{XZ_{1}}$ be the same; then $A_{2}(X,Z_{1})$ can be chosen arbitrarily in the Stiefel manifold $\mathcal{V}_{d,m}$. In this case $A_{2}$ has $md-\frac{d^2+d}{2}$ degrees of freedom, where $md$ comes from the dimension freedom by repeating singular values and $\frac{d^2+d}{2}$ comes from the unit-length requirement and uncorrelated constraints. But if $\Sigma_{XZ_{1}}$ does not have repeating singular values, $A_{2}$ represents a fixed subspace and has $\frac{d^2-d}{2}$ degrees of freedom, which is implied by the fact that two $m \times d$ matrices $A$ and $B$ represent the same subspace if and only if $AA^{'}=BB^{'}$. The same phenomenon applies for any GCCA projection $A_{s+1}$. Returning to the classification problem: given a classifier $g: \mathbb{R}^{d} \rightarrow \{1,\ldots,K\}$ for the low-dimensional feature vector $A^{'}X$, the error $L_{A}$ may differ for different $A \in \mathcal{A}$. Clearly $\mathcal{A}$ is compact for finite $\Sigma_{X}$ and $\{L_{A}|A \in \mathcal{A}\}$ is bounded between $[0,1]$, but an optimal low-dimensional projection (with respect to the classification error) is not guaranteed to exist. 
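The subspace identification fact just used — two $m \times d$ matrices with orthonormal columns represent the same subspace if and only if $AA^{'}=BB^{'}$ — is easy to verify numerically; the construction below is our own toy example:

```python
import numpy as np

rng = np.random.default_rng(3)
m, d = 9, 3

A = np.linalg.qr(rng.standard_normal((m, d)))[0]  # orthonormal m x d basis
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]  # random d x d rotation
B = A @ Q                                         # same subspace, new basis

# B B' = A Q Q' A' = A A', so the projection matrices coincide.
print(np.allclose(A @ A.T, B @ B.T))

C = np.linalg.qr(rng.standard_normal((m, d)))[0]  # an unrelated subspace
print(np.allclose(A @ A.T, C @ C.T))              # False (almost surely)
```

This is why a projection with distinct singular values has only the $\frac{d^2-d}{2}$ rotational degrees of freedom within its fixed subspace.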
We make the following assumption to avoid non-existence: \begin{assu} \label{assum} Given a classifier $g$, we assume for the theory in the sequel that an optimal projection $A^{*}=\arg\min_{A\in \mathcal{A}} \{ L_{A} \}$ exists for any finite $\Sigma_{X}$ of rank at least $d$. \end{assu} For example, if the class-conditional distributions $F_{X|Y=k}$ admit probability density functions $f_{X|Y=k}$ for $k=1,\ldots,K$, then the assumption always holds. (In this case $L_{A}$ is continuous with respect to $A$, and thus $\{L_{A}|A \in \mathcal{A}\}$ is compact and admits a minimum.) By this assumption, the minimum error $L_{A^{*}}$ always exists and it follows that $L_{A_{s+1}} \geq L_{A^{*}}$ always holds for any $s$. Note that the optimal projection $A^{*}$ need not be unique, since existence suffices for our purposes. Now we are able to define the notion that GCCA improves CCA using $L_{A^{*}}$. \begin{defi} \label{improve} Assuming the existence of $A^{*}$, we say GCCA improves CCA within a family of distributions $\Omega_{3}$ if and only if $\{F_{3} \in \Omega_{3}|L_{A_{2}}=L_{A^{*}}, \ \forall A_{2} \in \mathcal{A}_{2}\} \subset \{F_{3} \in \Omega_{3}|L_{A_{3}}=L_{A^{*}}, \ \forall A_{3} \in \mathcal{A}_{3}\}$. In general, we say the set of GCCA projections $\mathcal{A}_{s+1}$ improves the set of GCCA projections $\mathcal{A}_{t+1}$ within $\Omega_{S+1}$ ($1 \leq s,t \leq S$) if and only if $\{F_{S+1} \in \Omega_{S+1}|L_{A_{t+1}}=L_{A^{*}}, \ \forall A_{t+1} \in \mathcal{A}_{t+1}\} \subset \{F_{S+1} \in \Omega_{S+1}|L_{A_{s+1}}=L_{A^{*}}, \ \forall A_{s+1} \in \mathcal{A}_{s+1}\}$. (Here the notation ``$\subset$'' indicates proper subset.) \end{defi} In words, suppose GCCA improves CCA within $\Omega_{3}$. Then the optimality of the CCA projections implies the optimality of the GCCA projection, and there exists $F_{3}$ such that the GCCA projection is optimal while at least one of the CCA projections is not. 
Such improvement implies that additional datasets should be used, though it is not equivalent to $L_{A_{3}} \leq L_{A_{2}}$. If $\Omega_{3}$ includes every possible multivariate distribution, then GCCA fails to improve CCA. For example, if $Z_{1}$ and $Z_{2}$ are both positively correlated to $X$ but $Z_{1}$ and $Z_{2}$ are negatively correlated, then it might happen that $A_{2}$ is optimal while $A_{3}$ is not. Hence it is not always a good idea to incorporate additional auxiliary features, and we shall look for a family $\Omega_{3}$ imposing certain relationships among $X$ and $\{Z_{s}\}$ such that GCCA is guaranteed to improve CCA. First, we transform $X$ by centering and whitening, so that the population mean is zero and the population covariance matrix becomes the identity matrix. Then $\mathcal{A}$ consists of orthogonal projections onto dimension $d$ linear subspaces, and there exists an orthogonal matrix such that the feature vector can be rotated to guarantee $A^{*}$ is equivalent to the subspace $\mathbb{R}^{d}$ spanned by the first $d$ coordinate axes. We denote the transformed random variable by $\tilde{X}=H_{X}(X-E(X))$, where $E(X)$ is the expectation for centering and $H_{X}$ is a non-singular $m \times m$ matrix for whitening and rotation. Since the optimal projection for $\tilde{X}$ is spanned by the first $d$ coordinate axes, the form of $\tilde{X}$ based on the class label $Y=\{1,\ldots,K\}$ can be expressed as: \begin{equation} \label{X} \tilde{X} = H_{X}(X-E(X)) \stackrel{law}{=} \left[ {\begin{array}{cc} U_{1}\bold{1}_{1}+U_{2}\bold{1}_{2}+\cdots+U_{K}\bold{1}_{K} \\ \\ W \\ \end{array} } \right], \end{equation} where $\bold{1}_{k}$ is the class label indicator taking value $k$ with probability $p_{k}$ and $\sum_{k=1}^{K}p_{k}=1$, each $U_{k} \in \mathbb{R}^{d}$ is the marginal distribution of $\tilde{X}$ under class $k$, and $W \in \mathbb{R}^{m-d}$ is the ``irrelevant" marginal of $\tilde{X}$. 
By the above transformation it holds that $E(W)=0_{(m-d) \times 1}$ and $E(WW^{'})=I_{(m-d) \times (m-d)}$, where $I$ denotes the identity matrix. Clearly $H_{X}$ always exists, and there are multiple choices for $H_{X}$ if $A^{*}$ is not unique. Now we impose our conditions on $F_{S+1}$ and define what we call the similar family. \section{Main Results} \label{results} \begin{defi} \label{XYZ} We say the family of distributions $\Omega_{S+1}^{*}$ is \emph{the similar family} if and only if it includes every $F_{S+1}$ such that $(X, Z_{1}, \cdots, Z_{S}) \sim F_{S+1}$ satisfies the following conditions: Condition (1): For each $A^{*}$, there exist non-singular matrices $H_{X} \in \mathbb{R}^{m \times m}$ and $H_{Z_{s}} \in \mathbb{R}^{m_{s} \times m_{s}}$ for all $s=1,\ldots,S$, such that Equation~(\ref{X}) holds and there exist non-negative scalars $q_{sk}$ with \begin{equation} \label{YZZ} \tilde{Z}_{s} = H_{Z_{s}}(Z_{s}-E(Z_{s})) \stackrel{law}{=} \left[ {\begin{array}{cc} q_{s1}U_{1}\bold{1}_{1}+q_{s2}U_{2}\bold{1}_{2}+\cdots+q_{sK}U_{K}\bold{1}_{K}+e_{s} \\ \\ W_{s} \\ \end{array} } \right], \end{equation} where $e_{s}$ represents independent noise and $W_{s} \in \mathbb{R}^{m_{s}-d}$. Note that unlike $H_{X}$, $H_{Z_{s}}$ need only be non-singular, and $Z_{s}$ is not necessarily whitened and rotated. Condition (2): $E(U_{k}U_{k}^{'})=I$, and $U_{k}$ is uncorrelated with $W$ and $W_{s}$, for all $k=1,\ldots,K$ and $s=1,\ldots,S$. Condition (3): $\sigma_{1}(E(W_{s}W_{t}^{'})) \leq \sigma_{1}(E(WW_{s}^{'}))\sigma_{1}(E(WW_{t}^{'}))$ for all $1 \leq s \neq t \leq S$, where $\sigma_{i}(\Sigma)$ henceforth denotes the $i$th largest singular value of a matrix $\Sigma$. Condition (4): $(q_{sk_{1}}-q_{sk_{2}})(q_{tk_{1}}-q_{tk_{2}}) > 0$ for all $1 \leq s < t \leq S$ and $1 \leq k_{1} \neq k_{2} \leq K$; namely, the ordering of the coefficients $q_{sk}$ is consistent across the $Z_{s}$. 
\end{defi} The purpose of condition (1) is to guarantee that the marginal distribution restricted to $A^{*}$ of every transformed auxiliary feature under each class is a scalar multiple of the corresponding marginal of $\tilde{X}$ plus error. The possible non-uniqueness of $A^{*}$ is (mostly) avoided by requiring (1) to hold for any $A^{*}$, though the transformation matrices and respective scalars may differ under different $A^{*}$. Condition (2) is to simplify the analysis, without which the proof is much more complex. Given conditions (1) and (2), conditions (3) and (4) are technical conditions used in the proof, implying certain relationships among the features. In words, condition (3) implies that the ``noisy'' dimensions (where $W$ and $W_{s}$ live) among the auxiliary features should be less related, while condition (4) implies that the ``signal'' dimensions (where $U_{k}$ lives) among the auxiliary features should be more related. In this case GCCA is more likely to extract information from the ``signal'' dimensions, for which utilizing additional datasets is likely to improve the classification error. As we will see in the numerical experiments, this interpretation is useful for judging qualitatively whether additional datasets should be included, even if $A^{*}$ is unknown or condition (2) is not satisfied. We also provide additional comments at the end of the proof section discussing the magnitude of $q_{sk}$ and its potential impact on the sufficient conditions and model selection. \begin{thm} \label{main} GCCA improves CCA in the similar family $\Omega_{3}^{*}$. \end{thm} Therefore it is beneficial to use the GCCA projection $A_{3}$ within the similar family $\Omega_{3}^{*}$, whose conditions are sufficient but not necessary for GCCA to improve CCA. Equivalently, deriving the projection using additional datasets helps the classification task when the sufficient conditions are satisfied. 
Furthermore, the similar family can be decomposed into three disjoint subsets as follows: $\Omega_{3}^{*} = \{F_{3}\in \Omega_{3}^{*}| \max{\{L_{A_{2}}\}}=L_{A_{3}}=L_{A^{*}}\} \cup \{F_{3}\in \Omega_{3}^{*}| \max{\{L_{A_{2}}\}} > L_{A_{3}}=L_{A^{*}}\} \cup \{F_{3}\in \Omega_{3}^{*}| \max{\{L_{A_{2}}\}} > L_{A^{*}} \textup{ and } L_{A_{3}}>L_{A^{*}}\}$, with all the subsets shown to be non-empty and proper in the proof (we can also replace all the `max' by `min'). Specifically, if the optimal $A^{*}$ is known (which may be difficult in practice), then one can check which subset a given $F_{3} \in \Omega_{3}^{*}$ belongs to according to Inequality~(\ref{cond1}) and Inequality~(\ref{cond2}) in the proof below. When the distribution lies in the first or the second subset above, the GCCA projection performs no worse than the CCA projections, and adding a ``qualified'' additional dataset yields a better classification result. It is natural to consider a generalization to $\Omega_{S+1}^{*}$ because there may be many additional datasets satisfying the conditions. Indeed we have an easy generalization of the above theorem. \begin{cor} \label{main2} For any $S \geq S^{'} \geq 2$, the set of GCCA projections $\mathcal{A}_{S^{'}+1}$ improves the set of CCA projections $\mathcal{A}_{2}$ in the similar family $\Omega_{S+1}^{*}$. \end{cor} Under a simplified setting, we can also show that the set of GCCA projections continues to improve as additional auxiliary features are included in deriving the projections. This means that, in the context of the similar family, additional datasets will always improve the performance of the classification task. \begin{cor} \label{main3} Let us replace condition (4) by a simplifying condition (4'): $W_{s}=W_{t}$ and $q_{sk}=q_{tk}$ for all $1 \leq s,t \leq S$. Namely, the auxiliary features follow the same distribution for $s=1,\ldots,S$. 
Then for any $S \geq S^{'} \geq 2$, the set of GCCA projections $\mathcal{A}_{S^{'}+1}$ always improves the set of GCCA projections $\mathcal{A}_{S^{'}}$ in the similar family $\Omega_{S+1}^{*}$. \end{cor} \section{Discussions} \label{discuss} Since our analysis is carried out on the population covariance rather than the sample covariance, while dimension reduction methods including GCCA/CCA are in practice applied to sample data, our results so far rely on the convergence of the sample covariance to the population covariance. Let us provide some justification for the high-dimensional case, where the dimension $m$ is large compared to the number of training observations $n'$, so that this convergence is not guaranteed. For high-dimensional data, if the sample covariance is still close to the true covariance with high probability, as discussed in \cite{VershyninClosenessCovariance2012} and \cite{VershyninCovariance2013}, then our results still apply and GCCA improves CCA in the similar family with high probability. Otherwise our conditions in Definition~\ref{XYZ} cannot be directly used to justify the GCCA/CCA behavior on sample covariances of high-dimensional data. However, one may heuristically claim that if GCCA is better than CCA in the population model for the classification task, then GCCA is expected to be better than CCA for the sample data: since the classification error is a function of the data, if $L_{A_{3}} < L_{A_{2}}$ for $A_{2}$ and $A_{3}$ derived from the population model, then at a suitable level of $n'/m$ we have $Prob\{L_{A_{3}} < L_{A_{2}}\} > 0.5$ for $A_{2}$ and $A_{3}$ derived from the sample data, because this probability converges to $1$ in the classical multivariate setting where $n'/m \rightarrow \infty$. (A point of interest is to derive the minimum level of $n'/m$, which may depend on the classifier we use. 
For our simulations on the synthetic data generated within the similar family, it seems the minimum level is no larger than 1 in order for GCCA to be better than CCA.) In practice one rarely applies CCA directly on data of very high dimension with $m > n$. Often one opts to use kernel CCA \cite{HardoonKernelCCA2007}, \cite{LaffertyCCA2012}, sparse CCA \cite{HardoonSparseCCA2011}, \cite{WittenTibshiraniHastie2009} or functional CCA \cite{HwangJungTakaneCCA2012}, \cite{HeMullerWangCCA2003} to deal with noisy high-dimensional data, assuming that the data intrinsically live in some low-dimensional linear subspace. For example, instead of working on $(X,Y) \in \mathbb{R}^{m}$ where $m$ is very large, kernel/functional CCA works on $(f(X),g(Y))$ by assuming appropriate $f$ and $g$ exist for nonlinear/functional data. The analysis of sparse/functional CCA, however, is quite different and more difficult once penalty terms are introduced into the constraints: the resulting problems must be solved numerically, and they yield GCCA/CCA transformations that cannot be expressed efficiently in matrix notation. Another aspect worth noting is that a similar conclusion may be reached for clustering. This is because GCCA makes it easier to find the optimal subspace than CCA under the same conditions, as long as one is able to define an optimal subspace $A^{*}$ in terms of some clustering algorithm with respect to a specific performance index. However, we do not pursue this direction here because it is more challenging to evaluate clustering performance than classification performance. 
Furthermore, since GCCA/CCA does not make use of label information in the dimension reduction step, it is natural to compare with some existing algorithms such as p-LDA (penalized linear discriminant analysis) \cite{HastieBujaTibshirani1995}, \cite{WittenTibshirani2011} and $\ell_{1}$-SVM ($1$-norm support vector machine) \cite{ZhuRossetHastieTibshirani2003}, \cite{fm2004}, which make use of labels and may work for data of high/unknown dimensions. Even though we include their classification results in the numerical section for benchmark purposes, our target is not to find the best method for a given dataset. In addition to being more appropriate for an exploratory task, there are other reasons that applying unsupervised dimension reduction first is preferable to doing supervised dimension reduction directly: it is easier and faster to use unsupervised dimension reduction for real data; it may be slow and difficult to choose a suitable penalty term in p-LDA; and the data before dimension reduction may not have access to the labels, or may differ from the data on which we perform classification, as in the transfer learning task \cite{PanYang2010}. Finally, the choice of projection dimension $d$ is crucial for the classification (or any inference) performance, especially when working with real data of unknown true dimension. There are a number of papers on dimension choice for projecting a single dataset \cite{ZhuGhodsiAutomaticDimensionSelection2006}, \cite{HoyleAutomaticDimensionSelection2008} but not for multiple correlated datasets, which may be an interesting point to pursue. Still, our results are valid regardless of the choice of $d$: GCCA improves CCA for any $d$ when the conditions are satisfied. \section{Numerical Experiments} \label{numer} To investigate the performance of the GCCA/CCA projections in classification, we present both numerical simulations and a real data experiment. 
We use sample covariances to derive the GCCA projections with the GCCA algorithm implemented according to \cite{TenenhausTenenhausRGCCA2011} (though no covariance matrix regularization is required in our experiments, in contrast to their RGCCA algorithm; and we apply Gram-Schmidt to all output vectors in the iteration of the algorithm to enforce the uncorrelated constraints of all the canonical vectors), and the usual LDA as our main classification rule for the subsequent supervised learning. Whenever applicable, we also include p-LDA and $\ell_{1}$-SVM classification results based on the single dataset to compare with the LDA classification results based on the GCCA/CCA projected dataset. Note that our previous numerical work illustrating GCCA improvement under the kNN (k-nearest neighbor) classifier is available in \cite{sptGCCA}. \subsection{Numerical Simulations} We start with four random variables $U_{1}, U_{2}\in \mathbb{R}^3$ and $V_1,V_2\in \mathbb{R}^6$, all independently normally distributed. The parameters are set as follows: $E(U_{1}U_{1}')=E(U_{2}U_{2}')=I_{3 \times 3}$, $E(U_{1})=-E(U_{2})=0.2_{3 \times 1}$, $E(V_1 V'_1) = E(V_2 V'_2) = 0.5I_{6 \times 6}$, $E(V_1)=E(V_2)=0_{6 \times 1}$. The three random variables $X, Z_{1}, Z_{2}\in\mathbb{R}^9$ are constructed as follows: \begin{equation} X \stackrel{law}{=} {U_{1}\bold{1}_1+U_{2}\bold{1}_2 \brack V_1+V_2}, \ \ Z_{1} \stackrel{law}{=} {0.6U_{1}\bold{1}_1+0.4U_{2}\bold{1}_2+e_1 \brack V_1+e_3}, \ \ Z_{2} \stackrel{law}{=} {0.6U_{1}\bold{1}_1+0.4U_{2}\bold{1}_2+e_2 \brack V_2+e_4}, \end{equation} where $e_1, e_2\stackrel{\text{i.i.d.}}{\sim} N(0, 0.75I_{3 \times 3})$, $e_3, e_4\stackrel{\text{i.i.d.}}{\sim} N(0, 0.5I_{6 \times 6})$, and $\bold{1}_{1}$ and $\bold{1}_{2}$ are class-label indicators, each class occurring with probability $1/2$. Using LDA, it is clear that at $d=3$ the ideal optimal projection $A^{*}$ uniquely represents the subspace spanned by the first $d$ coordinate axes.
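As a concrete illustration, the sampling scheme above can be sketched in a few lines of numpy. This is a loose sketch with hypothetical names; for simplicity each coordinate of $U_{k}$ is drawn with unit variance, which matches the stated means exactly and the stated second moments only approximately.

```python
import numpy as np

def sample_xzz(n, rng):
    """Draw n observations of (X, Z1, Z2) and class labels from the toy model."""
    labels = rng.integers(1, 3, size=n)         # class 1 or 2, equal probability
    u1 = 0.2 + rng.standard_normal((n, 3))      # U1, mean 0.2 per coordinate
    u2 = -0.2 + rng.standard_normal((n, 3))     # U2, mean -0.2 per coordinate
    v1 = rng.normal(0.0, np.sqrt(0.5), (n, 6))  # V1 ~ N(0, 0.5 I)
    v2 = rng.normal(0.0, np.sqrt(0.5), (n, 6))  # V2 ~ N(0, 0.5 I)
    e1 = rng.normal(0.0, np.sqrt(0.75), (n, 3))
    e2 = rng.normal(0.0, np.sqrt(0.75), (n, 3))
    e3 = rng.normal(0.0, np.sqrt(0.5), (n, 6))
    e4 = rng.normal(0.0, np.sqrt(0.5), (n, 6))
    is1 = (labels == 1)[:, None]
    u = np.where(is1, u1, u2)                   # signal block of X
    umix = np.where(is1, 0.6 * u1, 0.4 * u2)    # attenuated signal block of Z1, Z2
    x = np.hstack([u, v1 + v2])
    z1 = np.hstack([umix + e1, v1 + e3])
    z2 = np.hstack([umix + e2, v2 + e4])
    return x, z1, z2, labels
```

Given the samples, the remaining pipeline (learning the projections from sample covariances, then running LDA on the projected target data) is standard.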
Hence we can fit the joint distribution into Definition~\ref{XYZ} with $d=3$, such that $q_{11}=q_{21}=0.6$, $q_{12}=q_{22}=0.4$, $W=V_{1}+V_{2}$, $W_{1}=V_1+e_3$, $W_{2}=V_2+e_4$, etc. This joint distribution satisfies the required conditions, so it belongs to $\Omega_{3}^{*}$. Further, by checking Inequality~(\ref{cond1}) and Inequality~(\ref{cond2}) in the proof, the joint distribution is actually an element of the subset $\{F_{3}\in \Omega_{3}^{*}| \max{\{L_{A_{2}}\}} > L_{A_{3}}=L_{A^{*}}\} \subset \Omega_{3}^{*}$. So we expect GCCA to outperform CCA when projected onto $\mathbb{R}^{3}$. Note that in this case we can explicitly calculate $L^{*}$ for the population model, which is $36.45\%$. For each Monte Carlo replicate, $n=1500$ observations are generated for each random variable. That is, $\{x^{(1)},\ldots,x^{(1500)}\}$ for $X$, $\{z_{1}^{(1)},\ldots,z_{1}^{(1500)}\}$ for $Z_{1}$ and $\{z_{2}^{(1)},\ldots,z_{2}^{(1500)}\}$ for $Z_{2}$. All data points are used to learn the GCCA/CCA projections respectively for $d=3$. (One may instead derive the projections based on the training data only, which is asymptotically equivalent to deriving the projections from all the available data if the testing data follows the same distribution as the training data.) Then the first 1000 points generated from $X$ are projected and used to train the classifier; the remaining 500 points are projected and used for classification error testing. The classification error is recorded separately for the CCA projections $A_{2}(X,Z_{1})$ and $A_{2}(X,Z_{2})$ and for the GCCA projections $A_{3}$, using both the sum of correlations ($r=1$) and the sum of squared correlations ($r=2$) criteria. The above is done for 500 Monte Carlo replications, and we show in Table \ref{table:simu1} the average classification error and the average difference between the derived GCCA/CCA subspace and the optimal subspace for each projection (we use the Hausdorff distance \cite{QZLMetrics2005} as the distance between subspaces).
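For the subspace difference, one convenient surrogate (a sketch; the metric actually used follows \cite{QZLMetrics2005}) is the spectral norm of the difference of the orthogonal projectors, which for two subspaces of equal dimension equals the sine of the largest principal angle between them:

```python
import numpy as np

def subspace_distance(A, B):
    """Distance between the column spaces of full-rank m x d matrices A and B:
    spectral norm of the difference of their orthogonal projectors."""
    Qa, _ = np.linalg.qr(A)   # orthonormal basis of col(A)
    Qb, _ = np.linalg.qr(B)   # orthonormal basis of col(B)
    return np.linalg.norm(Qa @ Qa.T - Qb @ Qb.T, 2)
```

The value is $0$ when the two projections represent the same subspace, and $1$ when some direction of one subspace is orthogonal to the whole of the other.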
The average GCCA classification error is lower than that of CCA, as expected, and is fairly close to the optimal error $L^{*}$. In this case the average errors using p-LDA and $\ell_{1}$-SVM are $37.37\%$ and $36.50\%$ respectively (the penalty terms are always chosen by cross-validation for best performance, for benchmark purposes). Note that the standard deviations for the average errors of all the methods are within $0.3\%$, and those for the distances of the subspaces are within $0.002$; the same bounds hold for all the later simulations. Also note that the distances of the subspaces are not expected to be $0$, because the $A^{*}$ we use is the ideal optimal subspace for the population model, which differs from the optimal subspace for the sample data; even so, the classification error appears positively correlated with the distance of the subspaces. To investigate the effect of higher dimension and smaller sample size, we repeat the same procedure three times, for $m=20$ with $n=1500$, $m=50$ with $n=1500$, and $m=50$ with $n=75$ ($50$ points used for training and the remaining $25$ used for testing). The settings are otherwise the same with $d=3$ fixed, e.g., the dimensions of $U_{i}$ stay at $3$ but the dimensions of $V_{i}$ are increased as $m$ increases. The results are shown in Table \ref{table:simu2}, Table \ref{table:simu3} and Table \ref{table:simu4}. A higher dimension or a smaller training size means the sample covariance does a worse job of approximating the population covariance, possibly making the differences between the derived GCCA/CCA subspace and the optimal subspace larger as $m$ increases and/or $n$ decreases; but GCCA still beats CCA for the classification task in all the tables, reflecting our heuristic argument in the discussion section.
This time the average errors using p-LDA and $\ell_{1}$-SVM are $39.01\%$ and $39.27\%$ at $m=20$ with $n=1500$, $38.91\%$ and $38.81\%$ at $m=50$ with $n=1500$, and $47.43\%$ and $45.76\%$ at $m=50$ with $n=75$, most of which turn out to be slightly better than using LDA on the GCCA projected data throughout these simulations. We also present another simulation to show that GCCA does not necessarily improve CCA at $m=9$ with $n=1500$, by replacing the auxiliary feature $Z_{2}$ by $Z_{2'} \stackrel{law}{=} {0.6U_{1}\bold{1}_1+0.4U_{2}\bold{1}_2+e_2 \brack V_1+e_4}$. We re-generate all observations and carry out the same simulation steps. Although the auxiliary feature $Z_{2'}$ looks reasonably ``similar'' to $X$ (differing from $Z_{1}$ only by noise), the joint distribution of $(X,Z_{1},Z_{2'})$ does not satisfy condition (3), and by checking the covariance structure explicitly one finds that GCCA does not improve CCA. In words, $Z_{1}$ and $Z_{2'}$ are too correlated in the ``noisy'' dimensions, hindering GCCA from recognizing the correct ``signal'' dimensions. The average simulated classification errors are shown in Table \ref{table:simu5}. In this case GCCA performs worse than CCA, which demonstrates that simply adding more datasets does not automatically yield a better result.
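The failure can be seen directly in the noise blocks' cross second moment: with the original $W_{2}=V_{2}+e_{4}$ we have $E(W_{1}W_{2}')=0$, while the modified $W_{2'}=V_{1}+e_{4}$ shares $V_{1}$ with $W_{1}=V_{1}+e_{3}$, so $E(W_{1}W_{2'}')=0.5I$. A quick empirical check (a sketch, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
v1 = rng.normal(0.0, np.sqrt(0.5), (n, 6))
v2 = rng.normal(0.0, np.sqrt(0.5), (n, 6))
e3 = rng.normal(0.0, np.sqrt(0.5), (n, 6))
e4 = rng.normal(0.0, np.sqrt(0.5), (n, 6))

w1 = v1 + e3    # noise block of Z_1
w2 = v2 + e4    # noise block of the original Z_2 (independent of w1)
w2p = v1 + e4   # noise block of the modified Z_{2'} (shares V_1 with w1)

def top_singular_value(a, b):
    """Largest singular value of the sample cross second moment E(a b')."""
    return np.linalg.svd(a.T @ b / len(a), compute_uv=False)[0]

s_orig = top_singular_value(w1, w2)    # near 0 in population
s_mod = top_singular_value(w1, w2p)    # near 0.5 in population
```

Here $\sigma_{1}(E(WW_{1}'))=\sigma_{1}(E(WW_{2'}'))=0.5$, so condition (3) would require $\sigma_{1}(E(W_{1}W_{2'}'))\leq 0.25$, while the actual value is $0.5$.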
\begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \centering {\begin{tabular}{|c||c|c|c|c|} \hline projections & CCA on $(X,Z_{1})$ & CCA on $(X,Z_{2})$ & GCCA $(r=1)$ & GCCA $(r=2)$ \\ \hline average error ($L_{A}$) & $42.03\%$ & $41.89\%$ & $37.00\%$ & $38.16\%$ \\ \hline $\|A-A^{*}\|$ & $1.688$ & $1.591$ & $0.714$ & $0.989$\\ \hline \end{tabular} \caption{GCCA Improves CCA in simulation at $m=9, n=1500$} \label{table:simu1} } \end{table*} \begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \centering {\begin{tabular}{|c||c|c|c|c|} \hline projections & CCA on $(X,Z_{1})$ & CCA on $(X,Z_{2})$ & GCCA $(r=1)$ & GCCA $(r=2)$ \\ \hline average error ($L_{A}$) & $47.02\%$ & $46.18\%$ & $42.84\%$ & $44.19\%$ \\ \hline $\|A-A^{*}\|$ & $2.161$ & $2.037$ & $1.364$ & $1.825$ \\ \hline \end{tabular} \caption{GCCA Improves CCA in simulation at $m=20, n=1500$} \label{table:simu2} } \end{table*} \begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \centering {\begin{tabular}{|c||c|c|c|c|} \hline projections & CCA on $(X,Z_{1})$ & CCA on $(X,Z_{2})$ & GCCA $(r=1)$ & GCCA $(r=2)$ \\ \hline average error ($L_{A}$) & $47.58\%$ & $46.02\%$ & $42.41\%$ & $44.31\%$ \\ \hline $\|A-A^{*}\|$ & $2.197$ & $2.161$ & $1.643$ & $1.895$ \\ \hline \end{tabular} \caption{GCCA Improves CCA in simulation at $m=50, n=1500$} \label{table:simu3} } \end{table*} \begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \centering {\begin{tabular}{|c||c|c|c|c|} \hline projections & CCA on $(X,Z_{1})$ & CCA on $(X,Z_{2})$ & GCCA $(r=1)$ & GCCA $(r=2)$ \\ \hline average error ($L_{A}$) & $51.98\%$ & $51.60\%$ & $45.76\%$ & $49.98\%$ \\ \hline $\|A-A^{*}\|$ & $2.256$ & $2.236$ & $2.179$ & $2.203$ \\ \hline \end{tabular} \caption{GCCA Improves CCA in simulation at $m=50, n=75$} \label{table:simu4} } \end{table*} \begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \centering {\begin{tabular}{|c||c|c|c|c|} \hline projections & CCA on $(X,Z_{1})$ & CCA on $(X,Z_{2'})$ & GCCA $(r=1)$ & GCCA $(r=2)$ \\ \hline
average error ($L_{A}$) & $41.34\%$ & $41.33\%$ & $46.86\%$ & $46.90\%$\\ \hline $\|A-A^{*}\|$ & $1.545$ & $1.537$ & $2.009$ & $2.018$ \\ \hline \end{tabular} \caption{GCCA Fails to Improve CCA in simulation} \label{table:simu5} } \end{table*} \subsection{Wikipedia Documents} The real-data experiment applies GCCA/CCA to text document classification. The dataset is obtained from Wikipedia, an open-source multilingual web-based encyclopedia with millions of articles in more than 280 languages. In Wikipedia each article can be related to others in the same language, or to articles in other languages on the same subject. Articles on the same subject in different languages are not necessarily exact translations of one another; it is very likely they are written by different people, and their contents might differ significantly. English articles within a 2-neighborhood of the English article ``Algebraic Geometry'' are collected, and the corresponding French articles of those English documents are also collected, which totals $n=1382$ pairs of articles in English and French. Let $a^{e}_1,\ldots, a^{e}_{1382}$ denote the English articles and $a^{f}_1,\ldots, a^{f}_{1382}$ denote the French articles. All articles are manually labeled into $5$ disjoint classes ($1$--$5$) based on their topics, as shown in Table \ref{tbl:wikitpc}. \begin{table*}[!t] \centering \begin{tabular}{|c||c|c|c|c|c|} \hline topic & category & people & locations & date & math \\ \hline class label & 1 & 2 & 3 & 4 & 5 \\ \hline article number & 119 & 372 & 270 & 191 & 430 \\ \hline \end{tabular} \caption{Wikipedia Dataset Topics} \label{tbl:wikitpc} \end{table*} For the purposes of GCCA/CCA, we first need to embed each article into the Euclidean space $\mathbb{R}^{m}$ by multidimensional scaling (MDS) \cite{TorgersonBook1}, \cite{CoxBook}, \cite{BorgBook}.
MDS strives to give a Euclidean representation while approximately preserving the dissimilarities of the original data: given an $n \times n$ dissimilarity matrix $\Delta = [\delta_{ij}]$ for $n$ observations, with $\delta_{ij}$ being the dissimilarity between the $i$th and $j$th observations, MDS generates an embedding $x_{i} \in \mathbb{R}^{m}$ for the $i$th data point so as to preserve the dissimilarities among object pairs, i.e., $\|x_i - x_j\| \approx \delta_{ij}$. For our work two different types of dissimilarity measures are considered for the English and French articles, giving four dissimilarity matrices of dimension $1382 \times 1382$: the graph topology dissimilarity matrices $\bar{\Delta}^{e}, \bar{\Delta}^{f}$ and the text content dissimilarity matrices $\hat{\Delta}^{e}, \hat{\Delta}^{f}$. For the graph dissimilarities, $\bar{\Delta}^{e}$ and $\bar{\Delta}^{f}$ are constructed based on an undirected graph $G(V, E)$, where $V$ represents the set of vertices of the $1382$ Wikipedia documents, and $E$ is the set of edges connecting those articles. There is an edge between two vertices if they are linked in Wikipedia. The entry $\bar{\Delta}^{e}(i,j)$ is then calculated as the number of steps on the shortest path from document $i$ to document $j$ in $G$. For the English articles, $\bar{\Delta}^e(i,j) \in \{0, \ldots, 4, 6\}$, where $4$ is the cap on the number of steps and any larger value is set to $6$. For the French articles $\bar{\Delta}^{f}(i,j)$ depends on the French graph connections, so it is possible that $\bar{\Delta}^{f}(i,j) \neq \bar{\Delta}^{e}(i,j)$. In the extreme case, $\bar{\Delta}^{f}(i,j) = \infty$ when $a^f_i$ and $a^f_j$ are not connected, and we set $\bar{\Delta}^{f}(i,j) = 6$ whenever $\bar{\Delta}^{f}(i,j) > 4$. For the text dissimilarities, $\hat{\Delta}^{e}$ and $\hat{\Delta}^{f}$ are based on the text processing features for the documents $\{a^{e}_{i}\}$ and $\{a^{f}_{i}\}$.
Suppose $\mathbf{z}_i, \mathbf{z}_j$ are the feature vectors for the $i$th and $j$th English articles. Then $\hat{\Delta}^{e}(i,j)$ is given by the cosine dissimilarity $\hat{\Delta}^{e}(i,j) = 1 - \frac{\mathbf{z}_i \cdot \mathbf{z}_j}{\|\mathbf{z}_i\|_2 \|\mathbf{z}_j\|_2}$. For the experiment we consider the latent semantic indexing (LSI) features \cite{deerwester90}. \begin{table*}[!t] \centering \begin{tabular}{|c||c|c|} \hline & Graph Topology Dissimilarity & Text Content Dissimilarity \\ \hline English articles $\{a^{e}_i\}$ & $\{\bar{x}^e_i\} (GE)$ & $\{\hat{x}^e_i\} (TE)$ \\ \hline French articles $\{a^{f}_i\}$ & $\{\bar{x}^f_i\} (GF)$ & $\{\hat{x}^f_i\} (TF)$ \\ \hline \end{tabular} \caption{Euclidean Embeddings ($\mathbb{R}^{m}$) for Wikipedia Articles} \label{tbl:artemb} \end{table*} Once the different dissimilarity matrices are constructed, the Euclidean space embeddings with $m=50$ are obtained via MDS. The articles' embeddings are summarized in Table \ref{tbl:artemb}. First, the English graph dissimilarity (GE) is the classification target, and the others (GF, TE, TF) are treated as auxiliary features: all data points are used to learn the GCCA/CCA projections from $\mathbb{R}^{m}$ to $\mathbb{R}^{d}$ based on GE and a given choice of auxiliary features, and the data points of GE are projected by the learned projections. Then $600$ observations are randomly picked to train the classifier, with the remaining $782$ documents used for classification error testing. We repeat this 500 times to calculate the average classification error, for every possible GCCA/CCA projection and various choices of $d$. The same procedure is repeated with the French graph dissimilarity (GF) as the classification target and the remaining embeddings as the auxiliary features. The full results for every possible projection are shown in Figure \ref{fig1} for the classification of GE.
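The pipeline from raw articles to Euclidean embeddings has three mechanical steps, capped shortest-path dissimilarities, cosine dissimilarities, and MDS, each of which can be sketched in a few lines (hypothetical helper names; the MDS variant below is classical/Torgerson scaling, while other variants instead optimize a stress criterion):

```python
import numpy as np
from collections import deque

def graph_dissimilarity(adj, cap=4, far=6):
    """All-pairs capped shortest-path dissimilarity matrix (nested lists).
    adj: dict vertex -> iterable of neighbors (undirected graph).
    Path lengths above `cap` (including unreachable pairs) are set to `far`."""
    verts = sorted(adj)
    index = {v: i for i, v in enumerate(verts)}
    delta = [[far] * len(verts) for _ in verts]
    for src in verts:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # breadth-first search from src
            v = queue.popleft()
            if dist[v] >= cap:
                continue                  # do not expand beyond the cap
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        for v, d in dist.items():
            delta[index[src]][index[v]] = d
    return delta

def cosine_dissimilarity(Z):
    """n x n matrix of 1 - cosine similarity between the rows of Z."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return 1.0 - Zn @ Zn.T

def classical_mds(delta, m):
    """Embed n objects into R^m from an n x n dissimilarity matrix."""
    delta = np.asarray(delta, dtype=float)
    n = delta.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ (delta ** 2) @ J         # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1][:m]     # m largest eigenvalues
    lam = np.clip(evals[order], 0.0, None)  # drop negative (non-Euclidean) parts
    return evecs[:, order] * np.sqrt(lam)
```

For truly Euclidean dissimilarities, classical MDS recovers the pairwise distances exactly once $m$ is large enough; for the graph and cosine dissimilarities the recovery is only approximate.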
For illustration purposes, two simplified plots are shown in Figure~\ref{fig2} for the classification of GE/GF, in which we omit most projections in order to better quantify the effect of increasing $s$ (the number of chosen auxiliary features), i.e., only the best $A_{2}$ and $A_{3}$ are shown. Note that for comparison purposes the PCA projections are also included, and all the classification errors have standard deviations within $0.2\%$. \begin{figure}[htbp] \centering \includegraphics[width=1.10\textwidth]{GE} \caption{Classification Error for GE} \label{fig1} \end{figure} \begin{figure}[htbp] \centering \subfloat[]{ \includegraphics[width=0.70\textwidth]{GES} } \hfil \subfloat[]{ \includegraphics[width=0.70\textwidth]{GFS} } \caption{Classification Error for GE/GF (simplified)} \label{fig2} \end{figure} Based on Figure~\ref{fig2}, we observe that for most choices of $d$ the best GCCA projection $A_{3}$ admits a lower error than the best CCA projection $A_{2}$, and both are better than the PCA projection. The figure also illustrates the last paragraph of our discussion section, i.e., GCCA is expected to be better than CCA regardless of the projection dimension. However, it turns out that the GCCA projection $A_{4}$ does not yield the lowest error for classifying the Wikipedia data. This is not a surprise: it indicates that not every dataset should be included in this example, since, as one can judge from Figure~\ref{fig1} and our previous simulations, the choice of auxiliary features is crucial for the classification error. For benchmark purposes, the average classification errors using p-LDA on the MDS-embedded data are $48.40\%$ for GE and $56.65\%$ for GF, which are slightly better than the average LDA errors using the PCA projected data but worse than the average LDA errors using multiple datasets and the best GCCA/CCA projections at $d=20$ in this experiment.
Unfortunately, one cannot easily check the joint distribution against Definition~\ref{XYZ} as in the simulation part, because the optimal projection $A^{*}$ is unknown for the Wikipedia datasets. Therefore, in a real-world application one must be cautious in adding a new dataset and/or choosing the best dimension. Both are difficult model selection problems in practice, which can be addressed by cross-validation as in this experiment. Still, the interpretation after Definition~\ref{XYZ} is useful from a qualitative perspective. On the one hand, the graph dissimilarities GE and GF are of questionable value because they depend on the Internet links, which may be erroneous. On the other hand, the text dissimilarities TE and TF are much more faithful because they are extracted from the document contents, and thus more likely to share commonality in certain ``signal'' dimensions. Therefore it is reasonable to believe that choosing a text dissimilarity is better than choosing a graph dissimilarity, which explains why the best $A_{2}$ and $A_{3}$ do not choose any graph dissimilarity as the auxiliary variable and why $A_{4}$ performs worse. \section{Proofs} \label{append} \subsection{Proof of Theorem~\ref{main} when $K=2$ and $r=1$} \begin{proof} We consider $K=2$ and $r=1$ here (and generalize in the next proof), so the number of classes is two and the GCCA criterion is the sum of correlations. If a projection $A$ represents the same subspace as the optimal projection $A^{*}$ (i.e., $AA^{'}=A^{*}A^{*'}$), then $A$ is optimal for classification such that $L_{A}=L_{A^{*}}$. For most of the proof it suffices to assume that $A^{*}$ is unique (in the sense of representing the same subspace), which is justified towards the end of the proof. In addition to the uniqueness of $A^{*}$, we also assume that $H_{X}, H_{Z_{s}}, \Sigma_{Z_{s}}$ are all identity matrices for $s=1,2$. This is also justified later, as we will show the theorem is invariant under proper transformations.
Further, the expectations $E(X)$ and $E(Z_{s})$ are treated as zeros throughout all proofs because the GCCA/CCA projections and the classification task are not affected. Under the above assumptions, we have the following: the optimal projection $A^{*}$ is spanned by the first $d$ coordinate axes; any potential projection $A \in \mathcal{A}$ must be orthonormal and equivalent to an orthogonal projection onto a dimension $d$ linear subspace; and the GCCA/CCA projections $A_{s+1}$ are optimal if and only if $A_{s+1}A_{s+1}^{'}=A^{*}A^{*'}$. Because all the pre-multiplication matrices are assumed to be identity matrices, together with conditions (1) and (2) in Definition~\ref{XYZ} we have the covariance matrices \begin{align*} \Sigma_{XZ_{1}} &= \left[ {\begin{array}{cc} p q_{11}E(U_{1}U_{1}^{'})+(1-p)q_{12}E(U_{2}U_{2}^{'}) & p E(U_{1}W_{1}^{'})+ (1-p) E(U_{2}W_{1}^{'}) \\ \\ p q_{11}E(WU_{1}^{'})+ (1-p) q_{12}E(WU_{2}^{'}) & E(WW_{1}^{'}) \\ \end{array} } \right]\\ &= \left[ {\begin{array}{cc} (p q_{11}+(1-p)q_{12})I_{d \times d} & 0\\ \\ 0 & E(WW_{1}^{'}) \\ \end{array} } \right], \end{align*} \begin{align*} \Sigma_{XZ_{2}} &= \left[ {\begin{array}{cc} p q_{21}E(U_{1}U_{1}^{'})+(1-p)q_{22}E(U_{2}U_{2}^{'}) & p E(U_{1}W_{2}^{'})+ (1-p) E(U_{2}W_{2}^{'}) \\ \\ p q_{21}E(WU_{1}^{'})+ (1-p) q_{22}E(WU_{2}^{'}) & E(WW_{2}^{'}) \\ \end{array} } \right]\\ &=\left[ {\begin{array}{cc} (p q_{21}+(1-p)q_{22})I_{d \times d} & 0 \\ \\ 0 & E(WW_{2}^{'}) \\ \end{array} } \right], \end{align*} where we denote $p_{1}=p$ and $p_{2}=1-p$ in case of two classes. To derive the CCA projection $A_{2}=A_{2}(X,Z_{1})$, the two $m \times d$ orthonormal matrices $A_{2}$ and $B_{2}$ shall maximize the singular values of $A_{2}^{'}\Sigma_{XZ_{1}}B_{2}$ (we take $B_{2}=[b_{1},\ldots,b_{d}]$ as in Equation~(\ref{ccaCond}), similarly to how we define $A_{2}$) \cite{HornJohnsonBook}. 
Because $A^{*}$ represents the dimension $d$ subspace spanned by the first $d$ coordinate axes, $A_{2}(X,Z_{1})$ is optimal if and only if $A_{2}$ consists of the first $d$ left singular vectors of $\Sigma_{XZ_{1}}$. Due to the form of $\Sigma_{XZ_{1}}$, in this case $B_{2}$ must consist of the first $d$ right singular vectors, and the respective correlations are maximized to the decreasingly ordered singular values of the $d \times d$ leading principal sub-matrix of $\Sigma_{XZ_{1}}$. Therefore $A_{2}A_{2}^{'}=A^{*}A^{*'}$ if and only if $A_{2}$ is spanned by the first $d$ coordinate axes, or equivalently the largest $d$ singular values of $\Sigma_{XZ_{1}}$ all come from the $d \times d$ leading principal sub-matrix. Put into inequalities, the CCA projections $A_{2}(X,Z_{s})$ are optimal if and only if \begin{equation} \label{cond1} h_{s}=p q_{s1}+(1-p)q_{s2} - \sigma_{1}(E(WW_{s}^{'})) > 0. \end{equation} When either CCA projection is not optimal, at least one $h_{s}$ is non-positive and represents the ``singular value loss'' of using CCA. To derive the GCCA projection $A_{3}$ based on $(X,Z_{1},Z_{2})$, the covariance matrix between $Z_{1}$ and $Z_{2}$ also comes into play: \begin{align*} \Sigma_{Z_{1}Z_{2}} &=\left[ {\begin{array}{cc} p q_{11}q_{21}E(U_{1}U_{1}^{'})+ (1-p) q_{12}q_{22}E(U_{2}U_{2}^{'}) & p q_{11}E(U_{1}W_{2}^{'})+ (1-p) q_{12}E(U_{2}W_{2}^{'}) \\ \\ p q_{21}E(W_{1}U_{1}^{'})+ (1-p) q_{22}E(W_{1}U_{2}^{'}) & E(W_{1}W_{2}^{'}) \\ \end{array} } \right]\\ &=\left[ {\begin{array}{cc} (p q_{11}q_{21}+ (1-p) q_{12}q_{22})I_{d \times d} & 0 \\ \\ 0 & E(W_{1}W_{2}^{'}) \\ \end{array} } \right]. \end{align*} Arguing in a similar manner, the GCCA projection is optimal if and only if $A_{3}$ is spanned by the first $d$ coordinate axes. The necessary and sufficient condition for that is \begin{equation} \label{cond2} h+ h_{1}+h_{2} > 0, \end{equation} where we define $h=p q_{11}q_{21}+ (1-p) q_{12}q_{22} -\sigma_{1}(E(W_{1}W_{2}^{'}))$.
In words, if both CCA projections are already optimal, it is sufficient that the largest $d$ singular values of $\Sigma_{Z_{1}Z_{2}}$ all come from the $d \times d$ leading principal sub-matrix; otherwise, if either CCA projection is not optimal, the ``singular value gain'' from $\Sigma_{Z_{1}Z_{2}}$ has to compensate for the possible ``singular value loss'' from $\Sigma_{XZ_{1}}$ and $\Sigma_{XZ_{2}}$ in order for the GCCA projection to be optimal. An important step is to prove that if $h_{s} \geq 0$ for $s=1,2$, then $h > 0$. This is true because \begin{align*} h &= p q_{11}q_{21}+ (1-p) q_{12}q_{22} - \sigma_{1}(E(W_{1}W_{2}^{'})) \\ &\geq p q_{11}q_{21}+ (1-p) q_{12}q_{22} - \sigma_{1}(E(WW_{1}^{'}))\sigma_{1}(E(WW_{2}^{'})) \\ &\geq p q_{11}q_{21}+ (1-p) q_{12}q_{22} - (p q_{11}+(1-p)q_{12})(p q_{21}+(1-p)q_{22}) \\ &= p(1-p)(q_{11}-q_{12})(q_{21}-q_{22}) \\ & > 0, \end{align*} where the first inequality uses condition (3) in Definition~\ref{XYZ}, the second inequality follows from $h_{s} \geq 0$, and the last inequality uses condition (4). By the above derivation, if both CCA projections are optimal such that $h_{s} > 0$ for $s=1,2$, then Inequality~(\ref{cond2}) automatically holds and the GCCA projection $A_{3}$ is also optimal. This shows that any $F_{3} \in \Omega_{3}^{*}$ satisfying Inequality~(\ref{cond1}) for $s=1,2$ is an element of the subset $\{F_{3}\in \Omega_{3}^{*}| \max{\{L_{A_{2}}\}}=L_{A_{3}}=L_{A^{*}}\}$. Next we show there exists $F_{3} \in \Omega_{3}^{*}$ such that Inequality~(\ref{cond2}) holds while Inequality~(\ref{cond1}) fails for at least one $s$. A trivial example: if $h_{1} = h_{2} = 0$, then the GCCA projection is still optimal. Furthermore, fixing $h$, $p$ and all the $q_{sk}$, the left-hand side of Inequality~(\ref{cond2}) is clearly continuous with respect to $\sigma_{1}(E(WW_{s}^{'}))$ for each $s$.
This means $\sigma_{1}(E(WW_{s}^{'}))$ can be increased such that $h_{s}<0$ (without violating condition (3) in Definition~\ref{XYZ}) while Inequality~(\ref{cond2}) still holds. So there also exists $F_{3}$ such that the GCCA projection is optimal when $h_{s} <0$. Thus $\exists F_{3} \in \{F_{3}\in \Omega_{3}^{*}| \max{\{L_{A_{2}}\}} > L_{A_{3}}=L_{A^{*}}\}$. Therefore, when $A^{*}$ is unique and $H_{X},H_{Z_{s}}, \Sigma_{Z_{s}}$ are all identity matrices, we have proved that: for any given $F_{3} \in \Omega_{3}^{*}$, if the CCA projections are optimal, so are the GCCA projections; if the CCA projections are not optimal (Inequality~(\ref{cond1}) is not satisfied for at least one $s$), the GCCA projection may still be optimal (depending on whether the covariance structure satisfies Inequality~(\ref{cond2})). Equivalently, we have demonstrated that the similarity definition is sufficient for GCCA to improve CCA. Note that the step ensuring $h>0$ when $h_{s} \geq 0$ will be used again. Next we show that the result so far is invariant under any $H_{X}, H_{Z_{s}}, \Sigma_{Z_{s}}$ that satisfy Definition~\ref{XYZ}. Take CCA on $(X, Z_{1})$ as an example: by Equation~(\ref{X}) and Equation~(\ref{YZZ}) we have $\Sigma_{\tilde{X}}=H_{X}\Sigma_{X}H_{X}^{'}=I$ and $\Sigma_{\tilde{Z}_{1}}=H_{Z_{1}}\Sigma_{Z_{1}}H_{Z_{1}}^{'}$; also by eigendecomposition there exists an $m_{1} \times m_{1}$ matrix $V$ s.t.\ $\Sigma_{\tilde{Z}_{1}}=V^{'}V$.
Then $\Sigma_{X} = H_{X}^{-1}H_{X}^{-1'}$ and $\Sigma_{Z_{1}}=H_{Z_{1}}^{-1}V^{'}(H_{Z_{1}}^{-1}V^{'})^{'}$, and the CCA formulation~(\ref{ccaCond}) is equivalent to \begin{align*} &\rho_{\{a_{i}^{'}X,b_{i}^{'}Z_{1}\}}=\frac{(H_{X}^{-1'}a_{i})^{'}H_{X}^{'}\Sigma_{XZ_{1}}H_{Z_{1}}^{'}V^{-1}(VH_{Z_{1}}^{-1'})b_{i}}{\sqrt{(H_{X}^{-1'}a_{i})^{'}H_{X}^{-1'}a_{i}}\sqrt{(VH_{Z_{1}}^{-1'}b_{i})^{'}VH_{Z_{1}}^{-1'}b_{i}}},\\ \mbox{ subject to }&\rho_{\{a_{i}^{'}X,a_{j}^{'}X\}} = \frac{(H_{X}^{-1'}a_{i})^{'}H_{X}^{-1'}a_{j}}{\sqrt{(H_{X}^{-1'}a_{i})^{'}H_{X}^{-1'}a_{i}}\sqrt{(H_{X}^{-1'}a_{j})^{'}H_{X}^{-1'}a_{j}}}=0 \\ \mbox{ and } &\rho_{\{b_{i}^{'}Z_{1},b_{j}^{'}Z_{1}\}}=\frac{(VH_{Z_{1}}^{-1'}b_{i})^{'}VH_{Z_{1}}^{-1'}b_{j}}{\sqrt{(VH_{Z_{1}}^{-1'}b_{i})^{'}VH_{Z_{1}}^{-1'}b_{i}}\sqrt{(VH_{Z_{1}}^{-1'}b_{j})^{'}VH_{Z_{1}}^{-1'}b_{j}}}=0, \end{align*} where $V^{-1}$ is defined as the unique Moore-Penrose pseudo inverse if $\Sigma_{\tilde{Z}_{1}}$ is singular. Hence it is equivalent to consider the projections $H_{X}^{-1'}A_{2}$ and $VH_{Z_{1}}^{-1'}B_{2}$ on $(\tilde{X}, V^{-1'}\tilde{Z}_{1})$ (both $\tilde{X}$ and $V^{-1'}\tilde{Z}_{1}$ are of identity variance) with covariance $H_{X}^{'}\Sigma_{XZ_{1}}H_{Z_{1}}^{'}V^{-1}$, instead of the projections $A_{2}$ and $B_{2}$ on $(X, Z_{1})$. The same holds for the GCCA formulation~(\ref{gccaCond}). Furthermore, the classification task remains the same because the projected feature $A^{'}X=(H_{X}^{-1'}A)^{'}H_{X}X$ is invariant under the full-rank transformation $H_{X}$. Therefore the optimal projection $A^{*}$ and the GCCA/CCA projections $A_{s+1}$ are all equivalent to the identity variance case up to $H_{X}$, and the result is clearly invariant. At last we justify the case when $A^{*}$ is not unique, which means there exists $A^{*}$ that is spanned by the first $d$ coordinate axes under different transformation matrices. 
Because the conditions in Definition~\ref{XYZ} are required to be satisfied for all $A^{*}$, in most cases the CCA optimality is still equivalent to Inequality~(\ref{cond1}), i.e., CCA is optimal if and only if Inequality~(\ref{cond1}) is satisfied for at least one $A^{*}$, after proper transformations for each $A^{*}$. The same holds for the GCCA optimality (Inequality~(\ref{cond2})), and we can still conclude that GCCA improves CCA following the same steps. However, a special case should be taken into consideration, and we take the CCA projection $A_{2}(X,Z_{1})$ as an illustration: suppose the singular vector corresponding to $\sigma_{1}(E(WW_{s}^{'}))$ is the $(d+1)$th coordinate axis and $\sigma_{1}(E(WW_{s}^{'})) > \sigma_{2}(E(WW_{s}^{'}))$. Then $A_{2}(X,Z_{1})$ can be chosen to represent any dimension $d$ subspace of the space spanned by the first $(d+1)$ coordinate axes, with $(d+1)d-\frac{d^2+d}{2}$ degrees of freedom (the degrees of freedom may increase if there are repeated singular values). Now, if $A^{*}$ happens to have the same degrees of freedom in the space spanned by the first $(d+1)$ coordinate axes, then $A_{2}(X,Z_{1})$ is optimal if and only if $h_{1} \geq 0$ (rather than $h_{1} >0$), because any arbitrary choice of $A_{2}$ is optimal. A similar phenomenon applies to $A_{s+1}$, in which case Inequality~(\ref{cond1}) and Inequality~(\ref{cond2}) should be adjusted to include equalities. However, in this case we still have $h+h_{1}+h_{2} >0$ when the CCA projections are optimal, which is still sufficient (but may not be necessary) for GCCA to be optimal. Therefore, GCCA still improves CCA in case of a non-unique $A^{*}$, and the justification is done. \qed \end{proof} \subsection{Proof of Theorem~\ref{main} for any $K \geq 2$ and $r \geq 1$} \begin{proof} Now we generalize the result to arbitrary $K \geq 2$ (multi-class) and any $r \geq 1$ (the GCCA criterion).
Without loss of generality, we assume that $A^{*}$ is unique and $H_{X}, H_{Z_{s}}, \Sigma_{Z_{s}}$ are all identity matrices. We treat the case $r=1$ first. Using the setting in Equation~(\ref{YZZ}) and arguing as before, GCCA improves CCA if and only if \begin{equation} \label{multiclass} h=\sum_{k=1}^{K} p_{k}q_{1k}q_{2k} - \sigma_{1}(E(W_{1}W_{2}^{'})) > 0 \end{equation} is true when $h_{s}= \sum_{k=1}^{K} p_{k}q_{sk} - \sigma_{1}(E(WW_{s}^{'})) \geq 0$ for $s=1,2$. This is true because \begin{align} \label{hmul} h &= \sum_{k=1}^{K} p_{k}q_{1k}q_{2k}- \sigma_{1}(E(W_{1}W_{2}^{'})) \nonumber \\ &\geq \sum_{k=1}^{K} p_{k}q_{1k}q_{2k}- \sigma_{1}(E(WW_{1}^{'}))\sigma_{1}(E(WW_{2}^{'})) \nonumber \\ &\geq \sum_{k=1}^{K} p_{k}q_{1k}q_{2k} - (\sum_{k=1}^{K} p_{k}q_{1k}) (\sum_{k=1}^{K} p_{k}q_{2k}) \nonumber \\ &= \sum_{1 \leq k_{1} < k_{2} \leq K} p_{k_{1}}p_{k_{2}}(q_{1k_{1}}-q_{1k_{2}})(q_{2k_{1}}-q_{2k_{2}}) \\ &> 0, \nonumber \end{align} where the first inequality follows from condition (3), the second inequality follows from $h_{s} \geq 0$, the next equality follows from simple algebra together with $\sum_{k=1}^{K}p_{k}=1$, and the last inequality follows from condition (4). As for the GCCA criterion with $r \geq 1$, GCCA improves CCA if and only if \begin{align*} (\sum_{k=1}^{K}p_{k}q_{1k}q_{2k})^{r} - \sigma_{1}^{r}(E(W_{1}W_{2}^{'})) > 0 \end{align*} is true when $h_{s} \geq 0$. Clearly this inequality holds if and only if it holds for $r=1$, which is Inequality~(\ref{multiclass}). Hence it is true, and GCCA improves CCA in the similar family for any $r \geq 1$. Thus Theorem~\ref{main} is proved for any number of classes and any GCCA criterion with $r \geq 1$. \qed \end{proof} \subsection{Proof of Corollary~\ref{main2} and Corollary~\ref{main3}} \begin{proof} Without loss of generality, we carry out the proof assuming $A^{*}$ is unique, $H_{X}, H_{Z_{s}}, \Sigma_{Z_{s}}$ are all identity matrices, and $K=2$ and $r=1$.
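As an aside, the ``simple algebra'' step in Equation~(\ref{hmul}) relies on the priors summing to one; the identity can be verified numerically for random parameters (a minimal sketch):

```python
import random

def identity_gap(p, q1, q2):
    """Difference between the two sides of the identity
    sum_k p_k q1_k q2_k - (sum_k p_k q1_k)(sum_k p_k q2_k)
      = sum_{k1<k2} p_{k1} p_{k2} (q1_{k1}-q1_{k2}) (q2_{k1}-q2_{k2}),
    which holds whenever the priors p sum to one."""
    K = len(p)
    lhs = sum(p[k] * q1[k] * q2[k] for k in range(K))
    lhs -= sum(p[k] * q1[k] for k in range(K)) * sum(p[k] * q2[k] for k in range(K))
    rhs = sum(p[i] * p[j] * (q1[i] - q1[j]) * (q2[i] - q2[j])
              for i in range(K) for j in range(i + 1, K))
    return lhs - rhs

rnd = random.Random(0)
for _ in range(200):
    K = rnd.randint(2, 6)
    raw = [rnd.random() + 0.01 for _ in range(K)]
    p = [r / sum(raw) for r in raw]   # class priors, summing to one
    q1 = [rnd.random() for _ in range(K)]
    q2 = [rnd.random() for _ in range(K)]
    assert abs(identity_gap(p, q1, q2)) < 1e-12
```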
There are $S$ auxiliary features in total, and thus $\dbinom{S}{S^{'}}$ choices of auxiliary features for $A_{S^{'}+1}$. We define $h_{s}=p q_{s1}+(1-p)q_{s2}-\sigma_{1}(E(WW_{s}^{'}))$ and $h_{st}=p q_{s1}q_{t1}+ (1-p) q_{s2}q_{t2} - \sigma_{1}(E(W_{s}W_{t}^{'}))$ for any $s$ and $t$ satisfying $S \geq s,t \geq 1$, where $h_{st}$ is a generalization of $h$ in the proof of Theorem~\ref{main}. Then the GCCA projection $A_{S^{'}+1}$ using the first $S^{'}$ auxiliary features is optimal if and only if \begin{equation} \label{cond3} \sum_{1 \leq s<t \leq S^{'}} h_{st}+ \sum_{s=1}^{S^{'}}h_{s} > 0. \end{equation} This is a generalization of Inequality~(\ref{cond2}), because there are $S^{'}$ possible ``singular value losses'' caused by the $\Sigma_{XZ_{s}}$ and $\frac{S^{'}(S^{'}-1)}{2}$ additional cross-covariance terms $\Sigma_{Z_{s}Z_{t}}$ between the auxiliary features. Note that for any other $A_{S^{'}+1} \in \mathcal{A}_{S^{'}+1}$ with a different choice of auxiliary features, we can still use Inequality~(\ref{cond3}) for the optimality by switching the first $S^{'}$ auxiliary features with the chosen $S^{'}$ auxiliary features. All the CCA projections are optimal if and only if $h_{s} >0$ for all $s=1,\ldots,S$. This implies that $h_{st}>0$ is always true for any $1 \leq s<t \leq S$, and Inequality~(\ref{cond3}) holds for any $A_{S^{'}+1} \in \mathcal{A}_{S^{'}+1}$ with $S \geq S^{'} \geq 2$. Therefore the set of GCCA projections $\mathcal{A}_{S^{'}+1}$ always improves the set of CCA projections $\mathcal{A}_{2}$, and Corollary~\ref{main2} is proved. To prove Corollary~\ref{main3}, we use the simplifying condition (4'). Then Inequality~(\ref{cond3}) simplifies to $\frac{S^{'}-1}{2} h_{12} + h_{1} >0$, because the $h_{st}$ are the same for all $1 \leq s,t \leq S^{'}$ and so are the $h_{s}$. We need to show that if $A_{S^{'}}$ is optimal for a given $F_{S+1}$, then so is $A_{S^{'}+1}$.
(Note that the choice of auxiliary features no longer matters because they all follow the same distribution, which means all the elements in $\mathcal{A}_{S^{'}+1}$ represent the same subspace.) When $S^{'}=2$, it is a special case of Theorem~\ref{main} because any $F_{S+1}$ satisfying condition (4') also satisfies condition (4). Clearly $A_{2}$ is optimal if and only if $h_{1}=h_{2} >0$, which implies $h_{12} >0$. So Inequality~(\ref{cond3}) holds and $A_{3}$ is also optimal. When $S^{'}=3$, $A_{3}$ is optimal if and only if $h_{12}+h_{1}>0$. In this case, if $h_{1} >0$, then we have $h_{12} > 0$; if $h_{1}<0$, then $h_{12} >0$ must be true in order for $A_{3}$ to be optimal. In either case, $\frac{3}{2}h_{12}+h_{1} >0$ is true and $A_{4}$ is optimal. Therefore, the optimality of $A_{3}$ implies the optimality of $A_{4}$. By induction, for any $S \geq S^{'} \geq 2$, the optimality of $\mathcal{A}_{S^{'}}$ implies the optimality of $A_{S^{'}+1}$ under the simplifying condition (4'), and Corollary~\ref{main3} is proved. Note that the corollary is not true under the original condition (4), and one can easily construct a counter-example by checking Inequality~(\ref{cond3}). \qed \end{proof} \subsection{Comments} We conclude the proof section by considering the term $h=\sum_{k=1}^{K} p_{k}q_{1k}q_{2k} - \sigma_{1}(E(W_{1}W_{2}^{'})) $ in Inequality~(\ref{multiclass}) for the case of two auxiliary features, which offers additional insights into Definition~\ref{XYZ} of the similar family and is potentially useful for model selection.
Firstly, the equation offers a relaxation of condition (4) in the similar family: instead of $(q_{sk_{1}}-q_{sk_{2}})(q_{tk_{1}}-q_{tk_{2}}) > 0$ for all $1 \leq s < t \leq S$ and $k_{1},k_{2}=1,\ldots,K$, we can replace it by either $h>0$ or $\sum_{1 \leq k_{1} < k_{2} \leq K} p_{k_{1}}p_{k_{2}}(q_{1k_{1}}-q_{1k_{2}})(q_{2k_{1}}-q_{2k_{2}}) > 0$ (by Equation~(\ref{hmul})), which is more difficult to interpret than the original condition but less restrictive. Secondly, the improvement of GCCA over CCA depends almost solely on the magnitude of $h$: the larger $h$ is, the more likely it is that GCCA is optimal even when CCA is not. In this respect, the magnitude of $q_{sk}$ plays an important role: for fixed $E(W_{1}W_{2}^{'})$, assuming all coefficients are non-negative, $h$ increases with $q_{sk}$ and the GCCA projection is potentially superior. Finally, the above observation may be useful for choosing the auxiliary variables and the projecting dimension without resorting to cross-validation. Other things being equal, an auxiliary variable with larger $h$ or $q_{sk}$ is more favorable, as is a projection dimension with larger $h$ or $q_{sk}$; thus it is reasonable to choose an auxiliary variable and/or a projection dimension with a more significant ``signal'' part (where $U_{k}$ lives) for later inference, which agrees with intuition. Numerically, within the similar family this observation is useful for model selection purposes (choose the auxiliary feature and/or the projection dimension with the largest $h$ using a greedy algorithm, among all available auxiliary features and all possible dimensions); but outside the similar family, whether a modified version of $h$ can serve the model selection purpose requires further investigation.
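As a numerical sanity check, the following Python sketch verifies the ``simple algebra'' step in Equation~(\ref{hmul}) (the equality between the covariance form and the pairwise-difference form) and illustrates the greedy choice of the auxiliary feature with the largest $h$-like score. All values below are made-up placeholders, and $\sigma_{1}(E(W_{1}W_{2}^{'}))$ is replaced by its product bound, so this is an illustration rather than an implementation of any estimation procedure from the paper.

```python
import random

def covariance_form(p, q1, q2):
    # sum_k p_k q1_k q2_k - (sum_k p_k q1_k)(sum_k p_k q2_k)
    return (sum(pk * a * b for pk, a, b in zip(p, q1, q2))
            - sum(pk * a for pk, a in zip(p, q1)) * sum(pk * b for pk, b in zip(p, q2)))

def pairwise_form(p, q1, q2):
    # sum over k1 < k2 of p_{k1} p_{k2} (q1_{k1}-q1_{k2})(q2_{k1}-q2_{k2})
    K = len(p)
    return sum(p[i] * p[j] * (q1[i] - q1[j]) * (q2[i] - q2[j])
               for i in range(K) for j in range(i + 1, K))

random.seed(1)
w = [random.random() for _ in range(4)]
p = [x / sum(w) for x in w]                       # class priors, must sum to 1
q_main = [random.random() for _ in range(4)]      # hypothetical q_{1k} values
aux = {name: [random.random() for _ in range(4)]  # hypothetical q_{2k} per feature
       for name in ("aux_1", "aux_2", "aux_3")}

# the two forms agree: the "simple algebra" equality used in the proof
for q in aux.values():
    assert abs(covariance_form(p, q_main, q) - pairwise_form(p, q_main, q)) < 1e-12

# greedy choice: prefer the auxiliary feature with the largest h-like score
best = max(aux, key=lambda name: covariance_form(p, q_main, aux[name]))
```

The equality relies on the class priors summing to one; for unnormalized weights the two forms differ.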
\section*{Acknowledgments} \addcontentsline{toc}{section}{Acknowledgments} This work was partially supported by National Security Science and Engineering Faculty Fellowship (NSSEFF), Johns Hopkins University Human Language Technology Center of Excellence (JHU HLT COE), and the XDATA program of the Defense Advanced Research Projects Agency (DARPA) administered through Air Force Research Laboratory contract FA8750-12-2-0303. The authors would like to thank the reviewers for their insightful and valuable suggestions in improving the exposition of the paper. \bibliographystyle{ieeetr}
\subsection{Abstractions} We may verify properties of transition systems by considering their abstractions. An abstraction function that abstracts the configurations of a program induces an abstract transition system in which each abstract transition corresponds to a concrete transition followed by applying the abstraction on the resulting configuration. The formal definition follows. \begin{definition} \label{def:abstract-TS-general} Given a transition system $\mathcal{S} = (C, c_0,\mathcal{R})$ and a (possibly partial) abstraction function $\sharp: C \to C$, the corresponding \emph{abstract transition system} is $\genAbs(\TS) = (C, \initconf^{{\genAbs}}, \Tr^{{\genAbs}})$, where \begin{itemize} \item $\initconf^{{\genAbs}} = \sharp(c_0)$ \item $\Tr^{{\genAbs}} = \{(c,c') \mid \exists \tilde{c}.~c \to \tilde{c}~\land~c' = \sharp(\tilde{c}) \}$ \end{itemize} We write $c \to^{\genAbs} c'$ when $(c,c') \in \Tr^{{\genAbs}}$. While $\sharp$ may be a partial function, we require that it is defined for $c_0$. \end{definition} Weak, respectively strong, preservation of properties between the abstract and the concrete transition systems is ensured by the notions of \emph{simulation}, respectively \emph{bisimulation}. \begin{definition}[\cite{DBLP:books/daglib/0067019}] Let $\mathcal{S} = (C, c_0, \mathcal{R})$ and $\genAbs(\TS) = (C, \initconf^{{\genAbs}}, \Tr^{{\genAbs}})$ be transition systems. A relation $B \subseteq C \times C$ is a \emph{simulation} from $\mathcal{S}$ to $\genAbs(\TS)$, if for every $(c,c_\sharp) \in B$: \begin{itemize} \item if $c \to c'$ then there exists $c_\sharp'$ such that $c_\sharp \to^{\genAbs} c_\sharp'$ and $(c',c_\sharp') \in B$.
\end{itemize} $B \subseteq C \times C$ is a \emph{bisimulation} from $\mathcal{S}$ to $\genAbs(\TS)$ if $B$ is a simulation from $\mathcal{S}$ to $\genAbs(\TS)$ and $B^{-1}\triangleq \{(c_\sharp,c) \mid (c,c_\sharp) \in B \}$ is a simulation from $\genAbs(\TS)$ to $\mathcal{S}$. We say that $\genAbs(\TS)$ \emph{simulates}, respectively \emph{is bisimilar to}, $\mathcal{S}$ if there exists a simulation, respectively, a bisimulation, $B$ from $\mathcal{S}$ to $\genAbs(\TS)$ such that $(c_0, \initconf^{{\genAbs}}) \in B$. \end{definition} \section{Abstraction} Given a symbolic state $q$, we define the \emph{store set}, denoted $\mathcal{S}_q$, as the set of constants stored in $q$: \[ \mathcal{S}_q = \{ X \mid \exists v \cdot q[v] = X \} \] Note that the elements of the store set are always constants because we always use a fresh constant in assignment statements. Given a program configuration $\langle\mathunderscore, q, pc \rangle$, we denote the set of equalities that hold between the elements of the store set by $E_{\langle q, pc \rangle}$: \[ E_{\langle q, pc \rangle} = \{ (t_1, t_2) \mid pc \models (t_1 = t_2) \text{ and } t_1, t_2 \in \mathcal{S}_q \} \] Note that $E_{\langle q, pc \rangle}$ is a finite set and can be represented as a conjunction of literals. Similarly, we denote the set of inequalities that hold between the elements of the store set by $D_{\langle q, pc \rangle}$: \[ D_{\langle q, pc \rangle} = \{ (t_1, t_2) \mid pc \models (t_1 \neq t_2) \text{ and } t_1, t_2 \in \mathcal{S}_q \} \] We define the store set abstraction of the path condition as $\alpha(q, pc) = E_{\langle q, pc \rangle} \wedge D_{\langle q, pc \rangle}$. We omit $q$ and write $\alpha(pc)$ when $q$ is obvious from context. \begin{lemma} Let $\langle \mathbf{assume}(x = y), q, pc \rangle$ be a configuration in a coherent program such that $pc$ is satisfiable, $q[x] = X$, and $q[y] = Y$.
Then $pc \wedge X = Y$ is unsatisfiable if and only if $\alpha(pc) \wedge X = Y$ is unsatisfiable. \end{lemma} \begin{proof} \textbf{if direction} Since all the literals in $E_{\langle q, pc \rangle} \wedge D_{\langle q, pc \rangle}$ are implied by $pc$, if the abstraction is unsatisfiable, so is $pc$. \textbf{only if direction} Since $pc \wedge X = Y$ is unsatisfiable, $pc \models X \neq Y$. Therefore $D_{\langle q, pc \rangle} \models X \neq Y$. Hence $D_{\langle q, pc \rangle} \wedge X = Y$ is also unsatisfiable. \end{proof} \begin{lemma} Let $\langle \mathbf{assume}(x \neq y), q, pc \rangle$ be a configuration in a coherent program such that $pc$ is satisfiable, $q[x] = X$, and $q[y] = Y$. Then $pc \wedge X \neq Y$ is unsatisfiable if and only if $\alpha(pc) \wedge X \neq Y$ is unsatisfiable. \end{lemma} \begin{proof} \textbf{if direction} Since all the literals in $E_{\langle q, pc \rangle} \wedge D_{\langle q, pc \rangle}$ are implied by $pc$, if the abstraction is unsatisfiable, so is $pc$. \textbf{only if direction} Since $pc \wedge X \neq Y$ is unsatisfiable, $pc \models X = Y$. Therefore $E_{\langle q, pc \rangle} \models X = Y$. Hence $E_{\langle q, pc \rangle} \wedge X \neq Y$ is also unsatisfiable. \end{proof} \begin{theorem} Given a coherent program, any program configuration $\langle \mathunderscore, q, pc \rangle$ is feasible if and only if $\alpha(pc)$ is satisfiable.
\end{theorem} The abstraction does not make use of any property of coherent programs. The benefit of coherent programs is that we can keep track of this abstraction using simple abstract transformers. \section{Store set abstraction} Given a symbolic state $q$, we define the \emph{store set}, denoted $\mathcal{S}_q$, as the set of constants stored in $q$: \[ \mathcal{S}_q = \{ X \mid \exists v \cdot q[v] = X \} \] Note that the elements of the store set are always constants because we always use a fresh constant in assignment statements.
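Since the abstraction $\alpha(pc) = E_{\langle q, pc \rangle} \wedge D_{\langle q, pc \rangle}$ used in the lemmas above is a conjunction of equalities and disequalities between constants, checking satisfiability of $\alpha(pc) \wedge X = Y$ requires no congruence reasoning; a plain union-find suffices. A minimal sketch, with illustrative constant names:

```python
class UnionFind:
    """Tracks equivalence classes of constants under asserted equalities."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def satisfiable(equalities, disequalities):
    """Decide satisfiability of a conjunction of constant (dis)equalities."""
    uf = UnionFind()
    for a, b in equalities:
        uf.union(a, b)
    # unsatisfiable exactly when some asserted disequality is merged
    return all(uf.find(a) != uf.find(b) for a, b in disequalities)

# E = {X = Z}, D = {Z != Y}: satisfiable on its own,
# but adding the equality X = Y collapses Z and Y into one class.
assert satisfiable([("X", "Z")], [("Z", "Y")]) is True
assert satisfiable([("X", "Z"), ("X", "Y")], [("Z", "Y")]) is False
```

This mirrors the lemma: the abstraction together with the new equality is unsatisfiable exactly when a disequality of $D_{\langle q, pc \rangle}$ is contradicted.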
Given a conjunction of literals $pc$ and a set of constants $V$, we define the \emph{store set} abstraction of $pc$, denoted $\alpha(pc)$, as the strongest subterm-closed subset of $pc^*$ such that $V \cap \mathcal{T}(\alpha(pc)) = V \cap \mathcal{T}(pc)$. \begin{lemma} For a coherent program $P$, let $\langle \mathbf{assume}(x = y), q, pc \rangle$ be a reachable configuration such that $pc$ is satisfiable, $q[x] = X$, and $q[y] = Y$. Then $pc \wedge X = Y$ is unsatisfiable if and only if $\alpha(pc) \wedge X = Y$ is unsatisfiable. \end{lemma} \begin{proof} \textbf{if direction} Since all the literals in $\alpha(pc)$ are implied by $pc$, if the abstraction is unsatisfiable, so is $pc$. \textbf{only if direction} We know that congruence closure is complete for EUF~\cite{DBLP:conf/ijcai/Shostak77}. Let $cc(S)$ denote the congruence closure of $S$ as defined in~\cite{DBLP:conf/ijcai/Shostak77}. If $pc \wedge X = Y$ is unsatisfiable, it is either the case that $X \neq Y \in pc$ or that $t_j = t_k \in cc(pc \wedge X = Y)$ and $t_j \neq t_k \in pc$. Here $t_j$ has to be a superterm of $X$ modulo $pc$ and $t_k$ has to be a superterm of $Y$ modulo $pc$; if this were not the case, $t_j = t_k \in cc(pc)$ and therefore $pc$ would be unsatisfiable. We prove the lemma by induction on the number of times the congruence axiom is applied, showing that all the literals used to derive $t_j = t_k$ are in $\alpha(pc)$. The base case is when $X \neq Y \in pc$. In this scenario, it holds that $X \neq Y \in \alpha(pc)$. To prove the inductive step, assume that all literals required to derive $u_j = v_j$ are in $\alpha(pc)$, for all $1 \leq j \leq n$, and that we need to apply the congruence axiom to prove that $f(u_1, \ldots, u_n) = f(v_1,\ldots, v_n)$. Since $f(u_1, \ldots, u_n)$ and $f(v_1,\ldots,v_n)$ are superterms of $X,Y$ modulo $pc$, by the early assumes property, $pc$ should contain equalities $U = f(u_1, \ldots, u_n)$ and $V = f(v_1,\ldots,v_n)$ where $U,V \in S_q$.
Hence $f(u_1, \ldots, u_n), f(v_1,\ldots,v_n) \in \alpha(pc)$. Therefore, $\alpha(pc) \wedge X = Y \models t_j = t_k$ and is unsatisfiable. \end{proof} \begin{lemma} For a coherent program $P$, let $\langle \mathbf{assume}(x \neq y), q, pc \rangle$ be a reachable configuration such that $pc$ is satisfiable, $q[x] = X$, and $q[y] = Y$. Then $pc \wedge X \neq Y$ is unsatisfiable if and only if $\alpha(pc) \wedge X \neq Y$ is unsatisfiable. \end{lemma} \begin{proof} The only way to derive false is if $pc \models X \approx Y$. Since $X$ and $Y$ are constants, this is possible only if either $X = Y \in pc$, or $X = f(u_1,\ldots, u_n) \in pc$, $Y = f(v_1,\ldots, v_n) \in pc$, and $pc \models u_i = v_i$ for all $i$. In the first case, $\alpha(pc)$ also contains the equality $X = Y$. In the second case, since $\alpha(pc)$ is subterm closed, $u_i = v_i \in \alpha(pc)$. Therefore $\alpha(pc) \models X = Y$ in both cases. \end{proof} \begin{theorem} For a coherent program $P$, let $\langle s, q, pc \rangle$ be a reachable configuration such that $pc$ is satisfiable. Then $pc \wedge c$ is unsatisfiable if and only if $\alpha(pc) \wedge c$ is unsatisfiable. \end{theorem} \section{Abstraction and Bisimulation for UP} \label{sec:abs} In this section, we review abstractions for transition systems. We then define two abstractions for UP: cover and renaming, and show that they induce bisimulations. That is, for UP, these abstractions preserve all properties. Finally, we show a simple logical characterization result for UP to set the stage for our main results in the following sections.
\begin{definition} \label{def:abstract-TS-general} Given a transition system $\mathcal{S} = (C, c_0,\mathcal{R})$ and a (possibly partial) abstraction function $\sharp: C \to \Conf$, the induced \emph{abstract transition system} is $\genAbs(\TS) = (\Conf, \initconf^{{\genAbs}}, \Tr^{{\genAbs}})$, where \begin{align*} \initconf^{{\genAbs}} &\triangleq \sharp(c_0) \\ \Tr^{{\genAbs}} &\triangleq \{(c_\sharp,c_\sharp') \mid \exists c, c'.~c \to c'~\land~c_\sharp = \sharp(c)~\land~c_\sharp' = \sharp(c') \} \end{align*} We write $c \to^{\genAbs} c'$ when $(c,c') \in \Tr^{{\genAbs}}$. Note that $\sharp$ must be defined for $c_0$. \end{definition} Throughout the paper, we construct several abstract transition systems. All transition systems considered are \emph{attentive}. Intuitively, this means that their transitions do not distinguish between configurations that have $q$-equivalent path conditions. We say that two configurations $c_1 = \langle s, q, pc_1\rangle$ and $c_2 = \langle s, q, pc_2\rangle$ are equivalent, denoted $c_1 \equiv c_2$, if $pc_1 \equiv_q pc_2$. \begin{definition}[Attentive TS] \label{def:attentive} A transition system $\mathcal{S} = (C, c_0, \mathcal{R})$ is \emph{attentive} if for any two configurations $c_1, c_2 \in C$ s.t. $c_1 \equiv c_2$, if there exists $c_1'\in C$ s.t. $(c_1, c_1') \in \mathcal{R}$, then there exists $c_2'\in C$ s.t. $(c_2, c_2')\in \mathcal{R}$ and $c_1' \equiv c_2'$, and vice versa. \end{definition} Weak, respectively strong, preservation of properties between the abstract and the concrete transition systems is ensured by the notions of \emph{simulation}, respectively \emph{bisimulation}. \begin{definition}[\cite{DBLP:books/daglib/0067019}] Let $\mathcal{S} = (C, c_0, \mathcal{R})$ and $\genAbs(\TS) = (\Conf, \initconf^{{\genAbs}}, \Tr^{{\genAbs}})$ be transition systems.
A relation $\rho \subseteq C \times \Conf$ is a \emph{simulation} from $\mathcal{S}$ to $\genAbs(\TS)$, if for every $(c,c_\sharp) \in \rho$: \begin{itemize} \item if $c \to c'$ then there exists $c_\sharp'$ such that $c_\sharp \to^{\genAbs} c_\sharp'$ and $(c',c_\sharp') \in \rho$. \end{itemize} $\rho \subseteq C \times \Conf$ is a \emph{bisimulation} from $\mathcal{S}$ to $\genAbs(\TS)$ if $\rho$ is a simulation from $\mathcal{S}$ to $\genAbs(\TS)$ and $\rho^{-1}\triangleq \{(c_\sharp,c) \mid (c,c_\sharp) \in \rho \}$ is a simulation from $\genAbs(\TS)$ to $\mathcal{S}$. We say that $\genAbs(\TS)$ \emph{simulates}, respectively \emph{is bisimilar to}, $\mathcal{S}$ if there exists a simulation, respectively, a bisimulation, $\rho$ from $\mathcal{S}$ to $\genAbs(\TS)$ such that $(c_0, \initconf^{{\genAbs}}) \in \rho$. \end{definition} We say that a bisimulation $\rho \subseteq C \times \Conf$ is \emph{finite} if its range, $\{ c_\sharp \mid (c, c_\sharp) \in \rho \}$, is finite. A finite bisimulation relates a (possibly infinite) transition system to a finite one. Next, we define two abstractions for UP programs and show that they result in bisimilar abstract transition systems. The first abstraction eliminates all constants that are not assigned to program variables from the path condition, using the cover operation. The second abstraction renames the constants assigned to program variables back to the initial constants $\const_0$. Both abstractions together ensure that all reachable configurations in the abstract transition system are defined over $\Sigma_0$ (i.e., the only constants that appear in states, as well as in path conditions, are from $\const_0$). There may still be infinitely many such configurations since the depth of terms may be unbounded. We show that whenever the obtained abstract transition system has finitely many reachable configurations, the concrete one has an inductive assertion map that characterizes the set of reachable configurations.
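As a toy illustration of the induced abstract transition system (each concrete step $c \to c'$ contributes an abstract step $\sharp(c) \to^{\sharp} \sharp(c')$), the following sketch uses a made-up finite counter system with parity as the abstraction function; it is not a UP, just a minimal instance of the definition:

```python
def abstract_ts(transitions, init, abstraction):
    """Induce the abstract transition system of a concrete one:
    apply the abstraction function to both endpoints of every transition."""
    abs_init = abstraction(init)
    abs_transitions = {(abstraction(c), abstraction(c2)) for c, c2 in transitions}
    return abs_init, abs_transitions

# toy concrete system: a counter stepping 0 -> 1 -> 2 -> 3; abstraction = parity
concrete = {(i, i + 1) for i in range(3)}
init, steps = abstract_ts(concrete, 0, lambda n: n % 2)
assert init == 0 and steps == {(0, 1), (1, 0)}
```

The infinite-state analogue works the same way; finiteness of the abstract system is exactly what the finite-bisimulation condition above asks for.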
\begin{definition}[Cover abstraction] \label{def:cover-abs} The cover abstraction function $\alpha_\mathbb{C}: C \to C$ is defined by \[ \alpha_\mathbb{C}(\langle s, q, pc\rangle) \triangleq \langle s,q,\mathbb{C} (\const \setminus \const(q)) \cdot pc\rangle \] \end{definition} Since $pc \equiv_{q} \mathbb{C} (\const \setminus \const(q)) \cdot pc$, the cover abstraction also results in a bisimilar abstract transition system. \begin{theorem} \label{thm:cover-bisimilar} For any attentive transition system $\mathcal{S} = (C, c_0, \mathcal{R})$, the relation $\rho = \{(c, \alpha_\mathbb{C}(c)) \mid c \in \Reach(\mathcal{S})\}$ is a bisimulation from $\mathcal{S}$ to $\alpha_\mathbb{C}(\mathcal{S})$. \end{theorem} To introduce the renaming abstraction, we need some notation. Given a quantifier free formula $\varphi$, constants $a, b \in \mathcal{C}(\varphi)$ such that $a\neq b$, let $\varphi[a \rightarrowtail b]$ denote $\varphi[b\mapsto x][a \mapsto b]$, where $x$ is a constant not in $\mathcal{C}(\varphi)$. For example, if $\varphi = (a \approx c \land b \approx d)$, $\varphi[a\rightarrowtail b] = (b \approx c \wedge x \approx d)$. Given a path condition $pc$ and a state $q$, let $\mathit{r}_{0}(pc, q)$ denote the formula obtained by renaming all constants in $\mathcal{C}(q)$ using their initial values. $\mathit{r}_{0}(pc, q) = pc[q(\pv{v}) \rightarrowtail v_0]$ for all $\pv{v} \in \pv{V}$ such that $q(\pv{v}) \neq v_0$. \begin{definition}[Renaming abstraction] \label{def:rename-abstraction} The renaming abstraction function $\alpha_r: C \to C$ is defined by \[ \alpha_r(\langle s, q, pc\rangle) \triangleq \langle s,q_{0},\mathit{r}_{0}(pc, q)\rangle \] \end{definition} \begin{theorem} \label{thm:renaming-bisimilar} For any attentive transition system $\mathcal{S} = (C, c_0, \mathcal{R})$, the relation $\rho = \{(c, \alpha_r(c)) \mid c \in \Reach(\mathcal{S})\}$ is a bisimulation from $\mathcal{S}$ to $\alpha_{\mathit{r}}(\mathcal{S})$. 
\end{theorem} Finally, we denote by $\alpha_{\mathbb{C},\mathit{r}}$ the composition of the cover and renaming abstractions: $\alpha_{\mathbb{C},\mathit{r}} \triangleq \alpha_\mathit{r} \circ \alpha_\mathbb{C}$ (i.e., $\alpha_{\mathbb{C},\mathit{r}}(c) = \alpha_\mathit{r}(\alpha_\mathbb{C}(c))$). Since the composition of bisimulation relations is also a bisimulation, $\alpha_{\mathbb{C}, \mathit{r}}(\mathcal{S})$ is bisimilar to $\mathcal{S}$. \begin{theorem}[Logical Characterization of UP]\label{th:lcup} If $\alpha_{\mathbb{C},\mathit{r}}$ induces a finite bisimulation on a UP $P$, then there exists an inductive assertion map $\eta$ for $P$ that characterizes the reachable configurations of $P$. \end{theorem} \begin{proof} Define $\eta(s) \triangleq \bigvee \{ pc \mid \langle s, q, pc \rangle \in \Reach(\alpha_{\mathbb{C},\mathit{r}}(P))\}$. Then, $\eta(s)$ is such an inductive assertion map. \end{proof} Intuitively, \cref{th:lcup} says that the inductive invariant of a UP, whenever it exists, can be described using EUF formulas over program variables. That is, any extra variables that are added to the path condition during program execution can be abstracted away (specifically, using the cover abstraction). There are, of course, infinitely many such invariants since the depth of terms is not bounded (only the constants occurring in them are). In the sequel, we systematically construct a similar result for CUP. \section{Abstract transformer} We define the \emph{superterm set}, denoted $\mathcal{S}^{\uparrow}_q$, as the union of $\mathcal{S}_q$ and any superterms that can be constructed from it: \[ \mathcal{S}^{\uparrow}_q = \mathcal{S}_q \cup \{ f(t_1, \ldots, t_n) \mid \exists i \cdot t_i \in \mathcal{S}_q\text{ and } f, t_1, \ldots, t_n \in \mathcal{T}(\Sigma)\} \] Note that the terms $t_i$ can be \emph{any} terms including applications of $f$.
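A literal reading of membership in $\mathcal{S}^{\uparrow}_q$ (a stored constant, or an application with at least one direct argument in $\mathcal{S}_q$, the remaining arguments arbitrary) can be sketched with a toy term representation, chosen here for illustration only; constants are strings and applications are (symbol, arguments) pairs:

```python
def in_superterm_set(term, store):
    """Membership in the superterm set of a state whose store set is `store`."""
    if isinstance(term, str):        # constants are plain strings
        return term in store
    _fn, args = term                 # applications are (symbol, args) pairs
    # an application qualifies if some direct argument is a stored constant;
    # the remaining arguments may be arbitrary terms
    return any(isinstance(a, str) and a in store for a in args)

store = {"X1", "Y0"}
assert in_superterm_set("X1", store)                     # stored constant
assert in_superterm_set(("f", ("P0", "X1")), store)      # X1 is stored
assert not in_superterm_set(("g", ("P0", "Q0")), store)  # no stored argument
```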
Given a program configuration $\langle\mathunderscore, q, pc \rangle$, we denote the set of equalities that hold between the elements of the superterm set of $q$ by $E^\uparrow_{\langle q, pc \rangle}$: \[ E^\uparrow_{\langle q, pc \rangle} = \{ (t_1, t_2) \mid pc \models (t_1 = t_2) \text{ and } t_1, t_2 \in \mathcal{S}^{\uparrow}_q \} \] Note that $E^\uparrow_{\langle q, pc \rangle}$ may be infinite (but see~\cite{DBLP:conf/cade/BachmairT00} for finite representations of such sets). Similarly, we denote the set of inequalities that hold between the elements of the superterm set by $D^\uparrow_{\langle q, pc \rangle}$: \[ D^\uparrow_{\langle q, pc \rangle} = \{ (t_1, t_2) \mid pc \models (t_1 \neq t_2) \text{ and } t_1, t_2 \in \mathcal{S}^{\uparrow}_q \} \] Now we define the \emph{superterm abstract semantics} of a program by changing the semantics of the assume and assignment statements to: \[ \begin{array}{c} \infer{\langle \mathbf{assume}(c), q, pc \rangle \to \langle q, E^\uparrow_{\langle q, pc'\rangle} \wedge D^\uparrow_{\langle q, pc'\rangle}\rangle}{\langle c, q\rangle\Downarrow v\quad pc' = pc \wedge v}\\[1ex] \infer{\langle x := e, q, pc \rangle \to \langle q', E^\uparrow_{\langle q', pc'\rangle } \wedge D^\uparrow_{\langle q', pc'\rangle}\rangle}{\langle e, q\rangle \Downarrow v \quad \text{X is fresh} \quad q' = q[x \mapsto X] \quad pc' = pc \wedge X = v} \end{array} \] \begin{example} Consider the coherent program \[ \begin{array}{ll} 1.&\mathbf{assume}(x = y);\\ 2.&x := f(p, x);\\ 3.&y := f(q, y);\\ 4.&\mathbf{assume}(p = q);\\ 5.&\mathbf{assume}(x \neq y); \end{array} \] The symbolic state after each line of execution is \[ \begin{array}{ll} 1.&\langle [x: X_0, y: Y_0, p: P_0, q: Q_0], X_0 = Y_0\rangle\\ 2.&\langle [x: X_1, y: Y_0, p: P_0, q: Q_0], pc_1 \wedge X_1 = f(P_0, X_0)\rangle\\ 3.&\langle [x: X_1, y: Y_1, p: P_0, q: Q_0], pc_2 \wedge Y_1 = f(Q_0, Y_0)\rangle\\ 4.&\langle q_3, pc_3 \wedge P_0 = Q_0\rangle\\ 5.&\langle q_3, pc_4 \wedge X_1 \neq Y_1 \rangle
\end{array} \] The execution according to the superterm abstract semantics is \[ \begin{array}{ll} 1.&\langle [x: X_0, y: Y_0, p: P_0, q: Q_0], X_0 = Y_0\rangle\\ 2.&\langle [x: X_1, y: Y_0, p: P_0, q: Q_0], X_1 = f(P_0, Y_0)\rangle\\ 3.&\langle [x: X_1, y: Y_1, p: P_0, q: Q_0], X_1 = f(P_0, Y_0) \wedge Y_1 = f(Q_0, Y_0)\rangle\\ 4.&\langle q_3,P_0 = Q_0 \wedge X_1 = Y_1 \wedge X_1 = f(P_0, Y_0) \wedge Y_1 = f(Q_0, Y_0) \rangle\\ 5.&\langle q_3, \bot \rangle \end{array} \] \end{example} \begin{theorem} A coherent program is safe according to the concrete semantics if and only if it is safe according to the superterm abstract semantics. \end{theorem} \begin{proof} The abstraction changes neither the structure of derivation trees nor the symbolic state. The only possibility is that some configurations are feasible in one semantics but not in the other. Since we have restricted the conditionals of assumes to be equalities or inequalities over program variables, we do not require the path conditions in both semantics to be equivalent in order to preserve safety. Instead, all we have to prove is that, given a concrete program configuration $\langle s, q, pc_c\rangle$ and the corresponding abstract program configuration $\langle s, q, pc_a\rangle$, $pc_c \models \ell$ iff $pc_a \models \ell$, where $\ell = q[v_1] \sim q[v_2]$ and $\sim$ is either $=$ or $\neq$. Let $pc_c, pc_a$ be the path conditions before a rule is applied and let $pc_c', pc_a'$ be the path conditions after the rule is applied. For each rule, we assume that $pc_c \models \ell$ iff $pc_a \models \ell$ and show that $pc_c' \models \ell$ iff $pc_a' \models \ell$. Under our assumption, it holds that if $pc_a' \models \ell$ then $pc_c' \models \ell$. This is because, for all literals $c$ in $pc_a'\setminus pc_a$, it holds that $pc_c' \models c$.
To prove the other direction, we examine each changed rule individually: \paragraph{assignment} \[ \begin{array}{c} \infer{\langle x := e, q, pc_c \rangle \to \langle q', pc_c'\rangle}{\langle e, q\rangle \Downarrow v \quad \text{X is fresh}\quad q' = q[x \mapsto X] \quad pc_c' = pc_c \wedge X = v} \\[1ex] \infer{\langle x := e, q, pc_a \rangle \to \langle q', pc_a'\rangle}{\langle e, q\rangle \Downarrow v \quad \text{X is fresh} \quad q' = q[x \mapsto X]\quad pc_a^1 = pc_a \wedge X = v \\[1ex]\hfill pc_a' = E^\uparrow_{\langle q', pc_a^1\rangle } \wedge D^\uparrow_{\langle q', pc_a^1\rangle}} \end{array} \] Since the constant $X$ is fresh, the only new equalities implied by $pc_c'$ are of the form $X = t$ where $t \in \mathcal{T}(pc_c)$ and $pc_c \models t = v$. By the memoizing property, there exists a variable $w$ such that $pc_c \models t = q[w]$. Hence $pc_a' \models X = t$. \paragraph{assume(x = y)} Let $q[x] = X$ and $q[y] = Y$. Let $s_X, s_Y$ be superterms of $X, Y$ in $pc_c$. By the early assumes property, both $s_X$ and $s_Y$ are stored in variables. Let $c$ be a literal implied by $pc_c'$ but not by $pc_c$. Then $c$ has to be of the form $f(\ldots, s_X, \ldots) = f(\ldots, s_Y, \ldots)$. Since both of these terms are in $\mathcal{S}^{\uparrow}_q$, $pc_a' \models c$. \end{proof} \subsubsection*{Acknowledgment} The research leading to these results has received funding from the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement No [759102-SVIS]). This research was partially supported by the United States-Israel Binational Science Foundation (BSF) grant No. 2016260, and the Israeli Science Foundation (ISF) grant No. 1810/18. We also acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC).
\section{Proofs} \label{sec:proofs} Given a set of literals $\Gamma$ and a set of constants $V$, let $base_\beta(\Gamma, V) \triangleq \{\beta \mid \exists W, \delta \cdot \langle W, \beta, \delta\rangle \in \base(\Gamma, V)\}$. \begin{alemma}\label{lm:alpheq} Let $\varphi_1$ and $\varphi_2$ be two sets of literals and $V$ be a set of constants. Then, the following three statements are equivalent: \begin{enumerate}[(1)] \item $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$ \item $base_\beta(\varphi_1, V) \cap base_\beta(\varphi_2, V) \neq \emptyset$ \item $base_\beta(\varphi_1, V) = base_\beta(\varphi_2, V)$ \end{enumerate} \end{alemma} \begin{alemma}\label{lm:trmingam} Let $\Gamma$ be a set of literals, $v \in \mathcal{C}(\Gamma)$. If $\Gamma \vdash v \approx f(t_1,\ldots,t_n)$ for some term $f(t_1,\ldots,t_n) \in \mathcal{T}(\Sigma)$ then there exists a term $f(t'_1,\ldots,t'_n) \in \mathcal{T}(\Gamma)$ s.t. $\Gamma \vdash v \approx f(t'_1,\ldots,t'_n)$ and $\Gamma \vdash t_i \approx t_i'$ for all $1 \leq i \leq n$. \end{alemma} \begin{alemma}\label{lm:trmingamdeq} Let $\Gamma$ be a set of literals, $v \in \mathcal{C}(\Gamma)$. If $\Gamma \vdash v \not\approx f(t_1,\ldots,t_n)$ for some term $f(t_1,\ldots,t_n) \in \mathcal{T}(\Sigma)$ then there exists a term $f(t'_1,\ldots,t'_n) \in \mathcal{T}(\Gamma)$ s.t. $\Gamma \vdash v \not\approx f(t'_1,\ldots,t'_n)$ and $\Gamma \vdash t_i \approx t_i'$ for all $1 \leq i \leq n$. \end{alemma} \setcounter{lemma}{1} \begin{lemma} Let $\Gamma$ be a set of literals, $x$ and $y$ be two constants in $\const(\Gamma)$, $V \subseteq \const(\Gamma)$ be a purifier for $\{x, y\}$, and $\beta = \alpha_V(\Gamma)$. Then, for any $u, v \in V$ \[ (\Gamma \land x \approx y \vdash u \approx v) \iff ( \beta \land x \approx y \vdash u \approx v) \] \end{lemma} \begin{proof} By the definition of $\beta$, $(\Gamma \vdash u \approx v) \iff (\beta \vdash u \approx v)$. Thus, assume that $\Gamma \not \vdash u \approx v$.
The only-if direction is trivial since $\beta$ is weaker than $\Gamma$. For the if direction, by Lemma~\ref{lm:sprtrm}, there are superterms $s_1[x]$ and $s_2[y]$ of $x$ and $y$, respectively, s.t. $\Gamma \vdash \{u \approx s_1[x], v \approx s_2[y]\}$, and $(\Gamma \land x \approx y) \vdash (s_1[x] \approx s_2[y])$. The proof proceeds by induction on the maximum depth of $s_1$ and $s_2$. The base case, $s_1 = x$ and $s_2 = y$, is trivial. For the inductive case, we show one sub-case; the others are similar. Assume $s_1 = f(t_1[x], \vec{r})$ and $s_2 = f(t_2[y], \vec{r})$, for some terms $t_1[x]$, $t_2[y]$, $\vec{r}$, and a function $f$. Furthermore, $(\Gamma \land x \approx y) \vdash t_1[x] \approx t_2[y]$. Since $\Gamma \vdash \{u \approx f(t_1[x], \vec{r}), v \approx f(t_2[y], \vec{r})\}$, by Lemma~\ref{lm:trmingam}, there exist terms $f(t_1', \vec{r}_1), f(t_2', \vec{r}_2) \in \mathcal{T}(\Gamma)$ s.t. $\Gamma \vdash \{u \approx f(t_1', \vec{r}_1), t_1[x] \approx t_1', \vec{r} \approx \vec{r}_1, v \approx f(t_2', \vec{r}_2), t_2[y] \approx t_2', \vec{r} \approx \vec{r}_2 \}$. Since $V$ is a purifier for $\{x, y\}$, there are $x', y' \in V$ s.t. $\Gamma \vdash \{x' \approx t_1', y' \approx t_2'\}$, and $\Gamma \vdash \{u \approx f(x', \vec{r}), v \approx f(y', \vec{r})\}$. By construction, $\beta \vdash \{u \approx f(x', \vec{w}), v \approx f(y', \vec{w})\}$, for some constants $\vec{w} \in \const(\beta)$. By the IH, $(\beta \land x \approx y) \vdash x' \approx y'$. Hence, by congruence, $(\beta \land x \approx y) \vdash v \approx u$. \end{proof} \begin{lemma} Let $\Gamma$ be a set of literals, $x$ and $y$ be two constants in $\const(\Gamma)$, $V \subseteq \const(\Gamma)$ be a purifier for $\{x, y\}$, and $\beta = \alpha_V(\Gamma)$.
Then, for any $u, v \in V$ \[ (\Gamma \land x \approx y \vdash u \not\approx v) \iff ( \beta \land x \approx y \vdash u \not\approx v) \] \end{lemma} \begin{proof} By the definition of $\beta$, $(\Gamma \vdash u \not\approx v) \iff (\beta \vdash u \not\approx v)$. Assume $\Gamma \not \vdash u \not\approx v$. Then, there is a term $t \in \mathcal{T}(\Sigma)$, s.t. $\Gamma \vdash u \not\approx t$ and $(\Gamma \land x \approx y) \vdash v \approx t$. By Lemma~\ref{lm:sprtrm}, $\Gamma \vdash t \approx s[y]$. We case split on whether $s[y]$ is $y$ itself or some superterm of $y$. \begin{itemize} \item case $s[y] = y$. Since $\Gamma \vdash t \approx y$, $(\Gamma \land x \approx y) \vdash v \approx y$ and $\Gamma \vdash u \not\approx y$. By Lemma~\ref{lm:Visenough}, $(\beta \land x \approx y) \vdash v \approx y$. By the definition of $\beta$, $\beta \vdash u \not\approx y$. Therefore, $(\beta \land x \approx y) \vdash u \not\approx v$. \item case $s[y] = f(t_1,\ldots, t_n)$, where at least one $t_i$ is a superterm of $y$. Since $\Gamma \vdash t \approx f(t_1,\ldots, t_n)$, $\Gamma \vdash u \not\approx f(t_1,\ldots, t_n)$. By Lemma~\ref{lm:trmingamdeq}, there exists a term $f(t_1',\ldots, t_n') \in \mathcal{T}(\Gamma)$ s.t. $\Gamma \vdash \{ u \not\approx f(t_1',\ldots, t_n'), t_1' \approx t_1,\ldots, t_n' \approx t_n\}$. Since $f(t_1',\ldots, t_n') \in \mathcal{T}(\Gamma)$, $\Gamma \vdash f(t_1',\ldots, t_n') \approx s[y]$, and $V$ is a purifier for $y$ in $\Gamma$, there exists a constant $y' \in V$ s.t. $\Gamma \vdash y' \approx f(t_1',\ldots, t_n')$. Therefore, $(\Gamma \land x \approx y) \vdash v \approx y'$ and $\Gamma \vdash u \not\approx y'$. By Lemma~\ref{lm:Visenough}, $(\beta \land x \approx y) \vdash v \approx y'$. By the definition of $\beta$, $\beta \vdash u \not\approx y'$. Therefore, $(\beta \land x \approx y) \vdash u \not\approx v$.
\end{itemize} \end{proof} \setcounter{lemma}{5} \begin{lemma} Let $V$ be a set of constants, $\varphi_1$ and $\varphi_2$ be two sets of literals s.t. $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, and $V$ be a purifier for $\{x, y\}$ in both $\varphi_1$ and $\varphi_2$. Then, $\alpha_V(\varphi_1 \land x \approx y) = \alpha_V(\varphi_2 \land x \approx y)$ \end{lemma} \begin{proof} Let $\beta \in base_\beta(\varphi_1, V)$. Let $\beta' \in base_\beta(\varphi_1 \land x \approx y, V)$ s.t. $\beta \subseteq \beta'$. Let $L_{\approx}$ be a set of equalities between constants in $V$, $L_{\not\approx}$ be a set of disequalities between constants in $V$, and $L_{\mathcal{F}}$ be a set of equalities of the form $v \approx f(\Vec{w})$ where $v \in V$, and $\Vec{w}$ is a set of constants, some of which are in $V$, and the rest are not in $\mathcal{C}(\varphi_1)\cup\mathcal{C}(\varphi_2)$, s.t. $\beta' = \beta \cup L_{\approx}\cup L_{\mathcal{F}} \cup L_{\not\approx}$. By Lemma~\ref{lm:Visenough} and Lemma~\ref{lm:Visenoughdeq}, $\beta \land x \approx y \vdash \ell$ for all $\ell \in L_{\approx}\cup L_{\not\approx}$. Next, we prove that for all $\ell \in L_\mathcal{F}$, $\beta \land x \approx y \vdash \ell$. In the following, assume that $u, v \in V$ and $w \not\in \mathcal{C}(\varphi_1)\cup\mathcal{C}(\varphi_2)\cup V$. We assume that $\ell = v \approx f(u, w)$. All other cases are similar. We have $\beta' \vdash v \approx f(u, w)$ iff $(\varphi_1 \land x \approx y) \vdash v \approx f(u, t)$ for some term $t \in \mathcal{T}(\Sigma)$ and there is no $v' \in V$ s.t. $(\varphi_1 \land x \approx y) \vdash v' \approx t$. If $\varphi_1 \vdash v \approx f(u, t)$ then $\beta \vdash v \approx f(u, w)$ by definition. Assume that $\varphi_1 \nvdash v \approx f(u, t)$. By Lemma~\ref{lm:sprtrm}, we have $\varphi_1 \vdash v \approx s_1[x]$ and $\varphi_1 \vdash f(u, t) \approx s_2[y]$ and $\varphi_1 \wedge x \approx y \vdash s_1[x] \approx s_2[y]$. We case split on $s_2[y]$. \begin{enumerate} \item case $s_2[y] = y$.
We have $\varphi_1 \vdash f(u, t) \approx y$. From $(\varphi_1 \land x \approx y) \vdash v \approx f(u, t)$ and $\varphi_1 \vdash f(u, t) \approx y$, we have $(\varphi_1 \land x \approx y) \vdash v \approx y$. From \Cref{lm:Visenough}, we have $(\beta \land x \approx y) \vdash v \approx y$. Since $\varphi_1 \vdash f(u, t) \approx y$, $\beta \vdash f(u, w) \approx y$ by definition. Hence, $(\beta \land x \approx y) \vdash v \approx f(u, w)$. \item case $s_2[y] = g(b_1,\ldots,b_n)$, where $b_i$ is a superterm of $y$ for at least one $i$. We have $\varphi_1 \vdash f(u, t) \approx g(b_1,\ldots,b_n)$. It is either the case that there exists a term $t'$ s.t. $g(b_1,\ldots, b_n) \approx t' \in \varphi_1$ and $\varphi_1 \vdash t' \approx f(u, t)$, or $g = f$ and $\varphi_1 \vdash\{u \approx b_1, t \approx b_2\}$. \begin{enumerate} \item case there exists a term $t'$ s.t. $g(b_1,\ldots, b_n) \approx t' \in \varphi_1$ and $\varphi_1 \vdash t' \approx f(u, t)$. Since $g(b_1,\ldots, b_n) \in \mathcal{T}(\varphi_1)$ and $V$ is a purifier for $y$ in $\varphi_1$, there exists a $y'\in V$ s.t. $\varphi_1 \vdash y' \approx g(b_1,\ldots,b_n)$. Therefore, $\varphi_1\vdash y' \approx f(u, t)$ and $(\varphi_1 \land x \approx y) \vdash v \approx y'$. By definition, $\beta \vdash y' \approx f(u, w)$. By \Cref{lm:Visenough}, we have $(\beta \land x \approx y) \vdash v \approx y'$. Hence, $(\beta \land x \approx y) \vdash v \approx f(u, w)$. \item case $g = f$ and $\varphi_1 \vdash\{u \approx b_1, t \approx b_2\}$. It has to be the case that $s_1[x] = f(a_1, a_2)$. We have $\varphi_1 \vdash v \approx f(a_1, a_2)$ where $a_1$ or $a_2$ is a superterm of $x$. By \Cref{lm:trmingam}, we have a term $f(a_1', a_2') \in \mathcal{T}(\varphi_1)$ s.t. $\varphi_1 \vdash \{v \approx f(a_1', a_2'), a_1' \approx a_1, a_2' \approx a_2\}$. We case split on whether $a_1$ or $a_2$ is a superterm of $x$: \begin{enumerate} \item case $a_1$ is a superterm of $x$.
We have $(\varphi_1 \land x \approx y) \vdash a_1 \approx b_1$. Since $\varphi_1 \vdash a_1' \approx a_1$, $a_1' \in \mathcal{T}(\varphi_1)$, and $V$ is a purifier for $x$ in $\varphi_1$, there must exist a constant $x' \in V$ s.t. $\varphi_1 \vdash x' \approx a_1'$. Since $(\varphi_1 \land x \approx y) \vdash a_1 \approx b_1$, $(\varphi_1 \land x \approx y) \vdash x' \approx b_1$. From $\varphi_1 \vdash u \approx b_1$, we have $\varphi_1 \land x \approx y \vdash x' \approx u$. By \Cref{lm:Visenough}, we have $\beta \land x \approx y \vdash x' \approx u$. Since $\varphi_1 \vdash v \approx f(a_1', a_2')$ and $\varphi_1 \vdash x' \approx a_1'$, we have $\varphi_1 \vdash v \approx f(x', a_2')$ and hence $\beta \vdash v \approx f(x', w)$ by definition. Since $\beta \vdash v \approx f(x', w)$ and $\beta \land x \approx y \vdash x' \approx u$, $\beta \land x \approx y \vdash v \approx f(u, w)$. \item case $a_2$ is a superterm of $x$. We have $(\varphi_1 \land x \approx y) \vdash a_2 \approx b_2$. Since $\varphi_1 \vdash a_2' \approx a_2$, $a_2' \in \mathcal{T}(\varphi_1)$, and $V$ is a purifier for $x$ in $\varphi_1$, there must exist a constant $x' \in V$ s.t. $\varphi_1 \vdash x' \approx a_2'$. Since $(\varphi_1 \land x \approx y) \vdash a_2 \approx b_2$, $(\varphi_1 \land x \approx y) \vdash x' \approx b_2$. However, $\varphi_1 \vdash b_2 \approx t$ and hence $(\varphi_1 \land x \approx y) \vdash t \approx x'$, which contradicts our assumption that there is no $v' \in V$ such that $(\varphi_1 \land x \approx y) \vdash v' \approx t$. \end{enumerate} \end{enumerate} \end{enumerate} Since $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, by \Cref{lm:alpheq} we have $\beta \in base_\beta(\varphi_2, V)$. Therefore, $\beta' \in base_\beta(\varphi_2 \land x \approx y, V)$ as well.
Since $base_\beta(\varphi_1 \land x \approx y, V) \cap base_\beta(\varphi_2 \land x \approx y, V) \neq \emptyset$, by \Cref{lm:alpheq}, $\alpha_V(\varphi_1\land x \approx y) = \alpha_V(\varphi_2 \land x \approx y)$. \end{proof} \setcounter{lemma}{6} \begin{lemma} Let $V$ be a set of constants, $\varphi_1$, $\varphi_2$ be two sets of literals s.t. $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$. Then, for any $U \subseteq V$, $\alpha_U(\varphi_1) = \alpha_U(\varphi_2)$. \end{lemma} \begin{proof} Follows from $\alpha_U(\alpha_V(\varphi_1)) = \alpha_U(\alpha_V(\varphi_2))$, and $\alpha_U(\alpha_V(\varphi_i)) = \alpha_U(\varphi_i)$, for $i \in \{1, 2\}$. \end{proof} \setcounter{lemma}{7} \begin{lemma} Let $V$ be a set of constants s.t. $x, y \in V$, $\varphi_1$ and $\varphi_2$ be two sets of literals s.t. $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$. Then, $\alpha_V(\varphi_1 \land x \not\approx y) = \alpha_V(\varphi_2 \land x \not\approx y)$ \end{lemma} \begin{proof} Let $\beta \in base_\beta(\varphi_1, V)$. Then, $\beta \land L \in base_\beta(\varphi_1\land x \not\approx y, V)$, where $L = \{x \not\approx u \mid y \approx u \in \beta, u \in V\}$. Since $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, by~\Cref{lm:alpheq}, $\beta \in base_\beta(\varphi_2, V)$. Therefore, $\beta \land L \in base_\beta(\varphi_2\land x\not\approx y, V)$. Since $base_\beta(\varphi_1\land x \not\approx y, V)\cap base_\beta(\varphi_2\land x \not\approx y, V)\neq \emptyset$, by \Cref{lm:alpheq}, $\alpha_V(\varphi_1 \land x \not\approx y) = \alpha_V(\varphi_2 \land x \not\approx y)$. \end{proof} \setcounter{lemma}{8} \begin{lemma} Let $V$ be a set of constants, $\varphi_1$, $\varphi_2$ be two sets of literals s.t. $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, $y \in V, \Vec{y}\subseteq V$, $x'$ be a constant s.t.
$x' \not \in \mathcal{C}(\varphi_1) \cup \mathcal{C}(\varphi_2)$, $V' = V \cup\{x'\}$, and $f(\Vec{y})$ be a term s.t. there does not exist a term $t \in \mathcal{T}(\varphi_1) \cup \mathcal{T}(\varphi_2)$ s.t. $\varphi_1 \vdash t \approx f(\Vec{y})$ or $\varphi_2 \vdash t \approx f(\Vec{y})$. Then, \begin{enumerate}[(1)] \item $\alpha_{V'}(\varphi_1 \land x' \approx y) = \alpha_{V'}(\varphi_2 \land x' \approx y)$ \item $\alpha_{V'}(\varphi_1 \land x' \approx f(\Vec{y})) = \alpha_{V'}(\varphi_2 \land x' \approx f(\Vec{y}))$ \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(1)] \item Let $\beta \in base_\beta(\varphi_1, V)$. By the definition of basis, $\beta \land L \in base_\beta(\varphi_1 \land x' \approx y, V')$, where $L = \{\ell \mid \ell[x'\mapsto y] \in \beta\}$. Since $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, by \Cref{lm:alpheq}, $\beta \in base_\beta(\varphi_2, V)$. Therefore, $\beta \land L \in base_\beta(\varphi_2 \land x' \approx y, V')$ as well. Hence $base_\beta(\varphi_1 \land x' \approx y, V')\cap base_\beta(\varphi_2 \land x' \approx y, V') \neq \emptyset$. By \Cref{lm:alpheq}, $\alpha_{V'}(\varphi_1 \land x' \approx y) = \alpha_{V'}(\varphi_2 \land x' \approx y)$. \item Let $\beta \in base_\beta(\varphi_1, V)$. By the definition of basis, $\beta \land X_{def} \in base_\beta(\varphi_1 \land x' \approx f(\Vec{y}), V')$, where $X_{def}$ is \begin{multline*} \{x' \bowtie t \mid \beta \vdash f(\Vec{y}) \bowtie t, \depth(t) = 1, \mathcal{C}(t) \cap \mathcal{C}(\beta) \subseteq V\} \end{multline*} Since $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, by \Cref{lm:alpheq}, $\beta \in base_\beta(\varphi_2, V)$. Therefore, $\beta \land X_{def} \in base_\beta(\varphi_2 \land x' \approx f(\Vec{y}), V')$ as well. Hence $base_\beta(\varphi_1 \land x' \approx f(\Vec{y}), V')\cap base_\beta(\varphi_2 \land x' \approx f(\Vec{y}), V') \neq \emptyset$.
By \Cref{lm:alpheq}, $\alpha_{V'}(\varphi_1 \land x' \approx f(\Vec{y})) = \alpha_{V'}(\varphi_2 \land x' \approx f(\Vec{y}))$. \end{enumerate} \end{proof} \section{Background} \label{sec:background} We assume that the reader is familiar with the basics of First Order Logic (FOL), and the theory of Equality and Uninterpreted Functions (EUF). We use $\Sigma = (\const, \mathcal{F}, \{\approx, \not\approx\})$ to denote a FOL signature with constants $\const$, functions $\mathcal{F}$, and predicates $\{\approx, \not\approx\}$, representing equality and disequality, respectively. A term is a constant or a (well-formed) application of a function to terms. A literal is either $x \approx y$ or $x \not\approx y$, where $x$ and $y$ are terms. A formula is a Boolean combination of literals. We assume that all formulas are quantifier free unless stated otherwise.
We further assume that all formulas are in Negation Normal Form (NNF), so negation is defined as a shorthand: $\neg (x \approx y) \triangleq x \not\approx y$, and $\neg (x \not\approx y) \triangleq x \approx y$. Throughout the paper, we use $\bowtie$ to indicate a predicate in $\{\approx, \not\approx\}$. For example, $\{x \bowtie y\}$ means $\{x \approx y, x \not\approx y\}$. We write $\bot$ for false, and $\top$ for true. We do not differentiate between sets of literals $\Gamma$ and their conjunction $(\bigwedge \Gamma)$. We write $\depth(t)$ for the maximal depth of function applications in a term $t$. We write $\mathcal{T}(\varphi)$, $\const(\varphi)$, and $\mathcal{F}(\varphi)$ for the set of all terms, constants, and functions, in $\varphi$, respectively, where $\varphi$ is either a formula or a collection of formulas. Finally, we write $t[x]$ to mean that the term $t$ contains $x$ as a subterm. For a formula $\varphi$, we write $\Gamma \models \varphi$ if $\Gamma$ \emph{entails} $\varphi$, that is, every model of $\Gamma$ is also a model of $\varphi$. For any literal $\ell$, we write $\Gamma \vdash \ell$, pronounced $\ell$ is \emph{derived} from $\Gamma$, if $\ell$ is derivable from $\Gamma$ by the usual EUF proof system $\mathcal{P}_{EUF\xspace}$.\footnote{Shown in Appendix~\ref{sec:euf_extra}.} By refutational completeness of $\mathcal{P}_{EUF\xspace}$, $\Gamma$ is unsatisfiable iff $\Gamma \vdash \bot$. Given two EUF\xspace formulas $\varphi_1$ and $\varphi_2$ and a set of constants $V \subseteq \const$, we say that the formulas are $V$-equivalent, denoted $\varphi_1 \equiv_V \varphi_2$, if, for all quantifier free EUF\xspace formulas $\psi$ such that $\mathcal{C}(\psi) \subseteq V$, $(\varphi_1 \wedge \psi) \models \bot$ if and only if $(\varphi_2 \wedge \psi) \models \bot$.
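The unsatisfiability checks underlying $V$-equivalence can be made concrete with a small congruence-closure sketch. The following Python toy is illustrative only; it is not the proof system $\mathcal{P}_{EUF}$, and it handles only ground literals, with terms encoded as strings (constants) or nested tuples (function applications):

```python
from itertools import combinations

def subterms(t, acc):
    """Collect t and all of its subterms. Terms are strings (constants)
    or tuples ('f', arg1, ..., argn) for function applications."""
    acc.add(t)
    if isinstance(t, tuple):
        for a in t[1:]:
            subterms(a, acc)

def is_unsat(eqs, deqs):
    """Decide unsatisfiability of the conjunction of the equalities eqs
    and disequalities deqs, by congruence closure restricted to the
    subterms of the input (sufficient for ground EUF)."""
    terms = set()
    for lhs, rhs in eqs + deqs:
        subterms(lhs, terms)
        subterms(rhs, terms)
    parent = {t: t for t in terms}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t

    for lhs, rhs in eqs:
        parent[find(lhs)] = find(rhs)
    apps = [t for t in terms if isinstance(t, tuple)]
    changed = True
    while changed:  # congruence: merge f(s...) and f(t...) once args are equal
        changed = False
        for s, t in combinations(apps, 2):
            if (s[0] == t[0] and len(s) == len(t) and find(s) != find(t)
                    and all(find(a) == find(b) for a, b in zip(s[1:], t[1:]))):
                parent[find(s)] = find(t)
                changed = True
    # unsat iff some asserted disequality collapses into one class
    return any(find(lhs) == find(rhs) for lhs, rhs in deqs)
```

For instance, on the example below, taking $\psi = (a_0 \approx b_0 \wedge x_1 \not\approx y_1)$, this check reports $\varphi_1 \wedge \psi$ unsatisfiable but $\varphi_3 \wedge \psi$ satisfiable, witnessing $\varphi_1 \not\equiv_V \varphi_3$.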
\begin{example} Let $\varphi_1 = \{x_1 \approx f(a_0, x_0), y_1 \approx f(b_0, y_0), x_0 \approx y_0\}$, $\varphi_2 = \{x_1 \approx f(a_0, w), y_1 \approx f(b_0, w)\}$, $\varphi_3 = \{x_1 \approx f(a_0, x_0), y_1 \approx f(b_0, y_0)\}$, and $V = \{x_1, y_1, a_0, b_0\}$. Then, $\varphi_1 \equiv_V \varphi_2$ but $\varphi_1 \not\equiv_V \varphi_3$. \end{example} While EUF does not admit quantifier elimination, it does admit elimination of constants while preserving quantifier free consequences. Formally, a \emph{cover}~\cite{DBLP:conf/esop/GulwaniM08, DBLP:conf/cade/CalvaneseGGMR19,DBLP:conf/cilc/GhilardiGK20} of an EUF formula $\varphi$ w.r.t. a set of constants $V$ is an EUF formula $\psi$ such that $\const(\psi) \subseteq \const(\varphi) \setminus V$ and $\varphi \equiv_{\const(\varphi) \setminus V} \psi$. By~\cite{DBLP:conf/esop/GulwaniM08}, such $\psi$ exists and is unique up to equivalence; we denote it by $\mathbb{C} V \cdot \varphi$. \section{Computing $V$-basis} We describe a procedure that, given a set of EUF literals $\Gamma$ and a set of constants $V$, computes the $V$-basis abstraction of $\Gamma$: $\base(\Gamma, V) = \langle W, \beta, \delta \rangle$. The procedure first computes the congruence closed graph $G$ of $\Gamma$. $G$ is a set of labelled nodes. Each node denotes a term in the deductive closure $\Gamma^* \triangleq \{\ell \mid \Gamma \vdash \ell\}$ of $\Gamma$. There are directed edges from nodes representing function applications to each of the function arguments. If $t_1 \approx t_2 \in \Gamma^*$ then there is an equality edge between the nodes of $t_1$ and $t_2$. If $t_1\not\approx t_2 \in \Gamma$, there is a disequality edge between $t_1$ and $t_2$. A node corresponding to a term $t$ is labelled with the representative, $rep(t)$, of the equivalence class that $t$ belongs to. Representatives are in the set $V \cup W$: if $v \approx t \in \Gamma^*$ for some $v \in V$, then $rep(t) = v$~(we use a fixed ordering on constants in $V$ to break ties), otherwise $rep(t) = w$ for some $w \in W$.
In this graph, \begin{multline*} \beta = \{ v \approx u \in \Gamma^* \mid v,u \in V\} \cup \hphantom{a} \\ \{v \not\approx u \mid v = rep(t_1), u = rep(t_2), t_1 \not\approx t_2 \in \Gamma, v, u \in V\} \cup \hphantom{a}\\ \{ v \approx f(rep(t_1), \ldots, rep(t_n)) \mid v \approx f(t_1, \ldots, t_n) \in \Gamma^*, v \in V \} \end{multline*} \begin{multline*} \delta = \{w \approx u \in \Gamma^* \mid u \text{ is a constant } \not \in V, w \in W \} \cup \hphantom{a}\\ \{w \not\approx u \mid w = rep(t_1), u = rep(t_2), t_1 \not\approx t_2 \in \Gamma,\\ \hfill w \in W, u \in V \cup W\} \cup \hphantom{a}\\ \{ w \approx f(rep(t_1), \ldots, rep(t_n)) \mid w \approx f(t_1, \ldots, t_n) \in \Gamma^*, w \in W \} \end{multline*} \section{Finite Bisimulation of CUP} \label{sec:extabase} In Sec.~\ref{sec:abs}, we provided a logical characterization for arbitrary UP programs, using the cover and renaming abstractions, which induce a bisimilar transition system. The characterization was given provided that the bisimilar transition system has finitely many reachable configurations. In the sequel, we follow a similar recipe to provide a finite bisimulation relation for \emph{any} CUP program. First, in this section, we propose an abstraction function that is used instead of the cover abstraction to eliminate some (but not all) of the constants that are not assigned to program variables. The abstraction does not maintain $\const(q)$-equivalence of the path conditions like the cover abstraction. Nonetheless, it results in a bisimilar transition system for CUP programs (albeit only simulating the original transition system of arbitrary UP programs). Unlike the cover abstraction, which may result in terms of unbounded depth, the proposed abstraction ensures that the depth of terms is bounded.
As such, when followed by the renaming abstraction, the obtained transition system is finite, thus establishing that any CUP is bisimilar to a finite transition system. In Sec.~\ref{sec:char}, we use this result to show decidability of the reachability problem for CUPs, as well as to logically characterize CUPs. Intuitively, the abstraction ``truncates'' the congruence graph induced by a path condition at nodes that have no representative in the set of constants assigned to the program variables ($V$ in the following definition), and assigns to the truncated nodes fresh constants (from $W$ in the following definition). Congruence closure procedures for EUF use a \emph{congruence graph} to concisely represent the deductive closure of a set of EUF literals~\cite{DBLP:journals/jacm/NelsonO80,DBLP:conf/rta/NieuwenhuisO05}. Here, we use a logical characterization of a congruence graph, called a \emph{$V$-basis}. Let $\Gamma$ be a set of EUF literals. A triple $\langle W, \beta, \delta \rangle$ is a $V$-basis of $\Gamma$ relative to a set of constants $V$, written $\langle W, \beta, \delta \rangle = \base(\Gamma, V)$, iff (a) $W$ is a set of fresh constants not in $\const(\Gamma)$, and $\beta$ and $\delta$ are conjunctions of EUF literals; (b) $(\beta \land \delta) \equiv_V \Gamma$; (c) $\beta \triangleq \beta_{\approx} \cup \beta_{\not\approx} \cup \beta_{\mathcal{F}}$ and $\delta \triangleq \delta_{\approx} \cup \delta_{\not\approx} \cup \delta_{\mathcal{F}}$, where \begin{align*} \beta_{\approx} &\subseteq \{u \approx v \mid u, v \in V\} \qquad \beta_{\not\approx}\subseteq \{u \not\approx v \mid u, v \in V\} \\ \beta_{\mathcal{F}} &\subseteq \{v \approx f(\Vec{w}) \mid v \in V, \Vec{w} \subseteq V \cup W, \Vec{w} \cap V \neq \emptyset\}\\ \delta_{\approx} &\subseteq \{w \approx u \mid w \in W, u \in \const(\Gamma) \setminus V\}\\ \delta_{\not\approx} &\subseteq \{u \not\approx w \mid u \in W, w \in W \cup V\}\\ \delta_{\mathcal{F}} &\subseteq \{ u \approx f(\Vec{w}) \mid u \in W, \Vec{w} \subseteq V \cup W\} \end{align*} and (d) $W$, $\beta$, and $\delta$ are minimal, i.e.,
they cannot be represented with fewer literals or fewer fresh constants. Note that we represent both equalities and disequalities in the $V$-basis, as is common in implementations (but not in the theoretical presentations) of the congruence closure algorithm. Intuitively, $V$ are constants in $\const(\Gamma)$ that represent equivalence classes in $\Gamma$, and $W$ are constants added to represent equivalence classes that do not have a representative in $V$. A $V$-basis is unique up to renaming of constants in $W$ and ordering of equalities between constants in $V$. In fact, given any two sets of literals $\Gamma_1$ and $\Gamma_2$ with $\langle W_1, \beta_1, \delta_1\rangle = \base(\Gamma_1, V)$ and $\langle W_2, \beta_2, \delta_2\rangle = \base(\Gamma_2, V)$ s.t. $\beta_1 \equiv_V \beta_2$, there exist a set of fresh constants $W$, sets of fresh constants $W_1'$ and $W_2'$, a set of literals $\beta$ over the constants $V \cup W$, and sets of literals $\delta_1'$ and $\delta_2'$ s.t. $\langle W\cup W_1', \beta, \delta_1'\rangle = \base(\Gamma_1, V)$ and $\langle W \cup W_2', \beta, \delta_2'\rangle = \base(\Gamma_2, V)$. That is, the constants in $W_1$ and $W_2$ can be renamed so that the literals of $\beta_1$ and $\beta_2$ become syntactically identical. For example, if $\Gamma_1 = \{x \approx f(a, u_1)\}$, $\Gamma_2 = \{x \approx f(a, u_2)\}$, and $V = \{x, a\}$, then $\beta_1 = \{x \approx f(a, w_1)\}$ and $\beta_2 = \{x \approx f(a, w_2)\}$; renaming $w_1$ and $w_2$ to a common fresh constant $w$ yields the syntactically identical $\beta = \{x \approx f(a, w)\}$. \begin{definition}[$V$-base abstraction] \label{def:vabst} A $V$-base abstraction $\alpha_V$ for a set of constants $V$, is a function mapping a set of literals to a set of literals s.t.
for any sets of literals $\Gamma$ and $\Gamma'$: \begin{enumerate}[(1)] \item $\alpha_V(\Gamma) \triangleq \beta$, where $\langle W, \beta, \delta\rangle = \base(\Gamma, V)$, \item $\Gamma \equiv_V \Gamma'$ implies $\alpha_V(\Gamma) = \alpha_V(\Gamma')$. \end{enumerate} \end{definition} The second requirement of Def.~\ref{def:vabst} ensures that $V$-abstraction is canonical for $V$-equivalent formulas. We do not require $\alpha_V$ to be computable in general. However, $\alpha_V$ is computable over finitely many inputs by memoization. \begin{example} Let $\Gamma = \{x \approx f(a, v_1), y \approx f(b, v_2), v_1 \approx v_2\}$ and $V = \{a, b, x, y\}$. The $V$-basis of $\Gamma$ is $\langle W, \beta, \delta\rangle$, where $W = \{w\}$, $\beta = \{ x \approx f(a, w), y \approx f(b, w)\}$, $\delta = \{w \approx v_1, w \approx v_2\}$. The $V$-base abstraction of $\Gamma$ is $\alpha_V(\Gamma) = \{x \approx f(a, w), y \approx f(b, w)\}$. \end{example} \begin{definition}[Base abstraction] \label{def:alpha-abstraction} The base abstraction function $\alpha_b: C \to C$ is defined for configurations of the form $\langle s, q, pc \rangle$ where $pc$ is a conjunction of literals. For such a configuration, we define: $\alpha_b(\langle s, q, pc\rangle) \triangleq \langle s,q,\alpha_{\mathcal{C}(q)}(pc)\rangle$. \end{definition} Namely, the $\mathcal{C}(q)$-base abstraction $\alpha_{\mathcal{C}(q)}$ applied to the path condition is determined by the state $q$ in the configuration. With abuse of notation, we sometimes write $\alpha_q(\varphi)$ as a shorthand for $\alpha_{\const(q)}(\varphi)$. Next, we show that, given a CUP $P$, the abstract transition system $\alpha_b(\mathcal{S}_P) = (C, c_0^{\alpha_b}, \mathcal{R}^{\alpha_b})$ is bisimilar to the concrete transition system $\mathcal{S}_P = (C, c_0, \mathcal{R})$.
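As a sanity check on the example above, the $\beta$ component of a $V$-basis can be sketched in a few lines of Python. The helper `v_base_beta` below is hypothetical and illustrative only, under two simplifying assumptions that hold for the example: every literal of $\Gamma$ is flat (a constant equality or a constant-equals-application), and the constant equalities already imply all congruences, so no congruence propagation is needed:

```python
def v_base_beta(const_eqs, fun_eqs, V):
    """Compute the beta component of a V-basis for a flat Gamma.
    const_eqs: pairs of constants; fun_eqs: triples (c, f, args) encoding
    c ~ f(args) with constant args; V: the set of interface constants."""
    parent = {}

    def find(c):
        parent.setdefault(c, c)
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path halving
            c = parent[c]
        return c

    for a, b in const_eqs:
        parent[find(a)] = find(b)
    for c, _, args in fun_eqs:
        find(c)
        for a in args:
            find(a)
    # group constants into equivalence classes and pick representatives:
    # the smallest V-constant if the class contains one, else a fresh w_i
    classes = {}
    for c in list(parent):
        classes.setdefault(find(c), set()).add(c)
    rep, counter = {}, 0
    for root, members in classes.items():
        in_v = sorted(members & set(V))
        if in_v:
            rep[root] = in_v[0]
        else:
            rep[root] = f"w{counter}"
            counter += 1
    # beta: equalities between V-constants, plus function-application
    # literals whose left-hand side is represented by a V-constant
    beta_eq = sorted((u, v) for m in classes.values()
                     for u in m for v in m if u < v and u in V and v in V)
    beta_fun = sorted((rep[find(c)], f, tuple(rep[find(a)] for a in args))
                      for c, f, args in fun_eqs if rep[find(c)] in V)
    return beta_eq, beta_fun
```

On $\Gamma = \{x \approx f(a, v_1),\, y \approx f(b, v_2),\, v_1 \approx v_2\}$ with $V = \{a, b, x, y\}$, the class $\{v_1, v_2\}$ has no representative in $V$ and is renamed to a fresh `w0`, reproducing $\beta = \{x \approx f(a, w), y \approx f(b, w)\}$ from the example.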
\begin{theorem} \label{thm:main} Let $\langle s, q, pc\rangle$ be a reachable configuration of a CUP $P$. Then, \begin{enumerate}[(1)] \item $\langle s, q, pc\rangle \to \langle s', q', pc \land pc'\rangle$ iff\\ $\langle s, q, \alpha_q(pc)\rangle \to \langle s', q', \alpha_{q}(pc) \land pc'\rangle$, and \item $\alpha_{q'}(pc \land pc') = \alpha_{q'}(\alpha_q(pc) \land pc')$. \end{enumerate} \end{theorem} The proof of Thm.~\ref{thm:main} is not complicated, but it is tedious and technical. It depends on many basic properties of EUF. We summarize the key results that we require in the following lemmas. The proofs of the lemmas are provided in App.~\ref{sec:proofs}. We begin by defining a \emph{purifier} -- a set of constants sufficient to represent a set of EUF literals with terms of depth one. \begin{definition}[Purifier] \label{def:purifier} We say that a set of constants $V$ is a \emph{purifier} of a constant $a$ in a set of literals $\Gamma$, if $a \in V$ and for every term $t \in \mathcal{T}(\Gamma)$ s.t. $\Gamma \vdash t \approx s[a]$, $\exists v \in V$ s.t. $\Gamma \vdash v \approx t$. \end{definition} For example, if $\Gamma = \{ c \approx f(a), d \approx f(b), d \not\approx e\}$, then $V = \{a, b, c\}$ is a purifier for $a$, but not a purifier for $b$, even though $b \in V$. In all the following lemmas, $\Gamma$, $\varphi_1$, $\varphi_2$ are sets of literals; $V$ is a set of constants; $a, b \in \mathcal{C}(\Gamma)$; $u, v, x, y \in V$; $V$ is a purifier for $\{x, y\}$ in $\Gamma$, $\varphi_1$, and in $\varphi_2$; $\beta = \alpha_V(\Gamma)$; and $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$. Lemma~\ref{lm:sprtrm} says that anything newly derivable from $\Gamma$ and a new equality $a \approx b$ is derivable using superterms of $a$ and $b$: \begin{slemma}\label{lm:sprtrm} Let $t_1$ and $t_2$ be two terms in $\mathcal{T}(\Sigma)$ s.t. $\Gamma \not\vdash (t_1 \approx t_2)$. Then, $(\Gamma \land a \approx b) \vdash (t_1 \approx t_2)$, for some constants $a$ and $b$ in $\const(\Gamma)$, iff there are two superterms, $s_1[a]$ and $s_2[b]$, of $a$ and $b$, respectively, s.t. (i) $\Gamma \vdash (t_1 \approx s_1[a])$, (ii) $\Gamma \vdash (t_2 \approx s_2[b])$, and (iii) $(\Gamma \land a \approx b ) \vdash (s_1[a] \approx s_2[b])$. \end{slemma} Lemma~\ref{lm:Visenough} and Lemma~\ref{lm:Visenoughdeq} say that all consequences of $\Gamma$ that are relevant to $V$ are present in $\beta = \alpha_V(\Gamma)$ as well. \begin{slemma}\label{lm:Visenough} $(\Gamma \land x \approx y \vdash u \approx v) \iff (\beta \land x \approx y \vdash u \approx v) $. \end{slemma} \begin{slemma}\label{lm:Visenoughdeq} $(\Gamma \land x \approx y \vdash u \not\approx v) \iff ( \beta \land x \approx y \vdash u \not\approx v)$. \end{slemma} Lemma~\ref{lm:pur} says that $\beta = \alpha_V(\Gamma)$ can be described using terms of depth one over constants in $V$. \begin{slemma}\label{lm:pur} $V$ is a purifier for $x \in V$ in $\beta$. \end{slemma} Lemma~\ref{lm:idem} says that $\alpha_V$ is idempotent. \begin{slemma}\label{lm:idem} $\alpha_V(\Gamma) = \alpha_V(\alpha_V(\Gamma))$. \end{slemma} Lemma~\ref{lm:eqpres1v} and Lemma~\ref{lm:subset} say that $\alpha_V$ preserves addition of new literals and dropping of constants. \begin{slemma}\label{lm:eqpres1v} $\alpha_V(\varphi_1 \land x \approx y) = \alpha_V(\varphi_2 \land x \approx y)$.
\end{slemma} \begin{slemma}\label{lm:subset} If $U \subseteq V$, then \[(\alpha_V(\varphi_1)= \alpha_V(\varphi_2)) \Rightarrow (\alpha_U(\varphi_1) = \alpha_U(\varphi_2))\] \end{slemma} Lemma~\ref{lm:deqpres1v} and Lemma~\ref{lm:propequiv} extend the preservation results to disequalities, and for equalities involving a fresh constant. In Lemma~\ref{lm:deqpres1v} and Lemma~\ref{lm:propequiv}, $V$ is a set of constants, $x, y \in V, \Vec{y}\subseteq V$. $V$ is not required to be a purifier (as it was in the previous lemmas). $x'$ is a constant s.t. $x' \not \in \mathcal{C}(\varphi_1) \cup \mathcal{C}(\varphi_2)$, $V' = V \cup\{x'\}$, and $f(\Vec{y})$ is a term not in $\mathcal{T}(\varphi_1) \cup \mathcal{T}(\varphi_2)$.\sharon{same as before: most of the conditions are only for lemma 9, I suggest to move them there} \begin{slemma}\label{lm:deqpres1v} $\alpha_V(\varphi_1 \land x \not\approx y) = \alpha_V(\varphi_2 \land x \not\approx y)$. \end{slemma} \begin{slemma}\label{lm:propequiv} \begin{align*} \tag{1}\alpha_{V'}(\varphi_1 \land x' \approx y) &= \alpha_{V'}(\varphi_2 \land x' \approx y)\\ \tag{2}\alpha_{V'}(\varphi_1 \land x' \approx f(\Vec{y})) &= \alpha_{V'}(\varphi_2 \land x' \approx f(\Vec{y})) \end{align*} \end{slemma} We are now ready to present the proof of Thm.~\ref{thm:main}: \begin{proof}[Theorem~\ref{thm:main}] In the proof, we use $x = q(\pv{x})$, and $y = q(\pv{y})$. For part (1), we only show the proof for $s = \textbf{assume}(\pv{x} \bowtie \pv{y})$ since the other cases are trivial. The only-if direction follows since $\alpha_q(pc)$ is weaker than $pc$. For the if direction, $pc \not\vdash \bot$ since it is part of a reachable configuration. Then, there are two cases: \begin{itemize} \item case $s = \textbf{assume}(\pv{x}=\pv{y})$. Assume $(pc \land x \approx y) \models \bot$. Then, $(pc \land x \approx y) \vdash t_1 \approx t_2$ and $pc \vdash t_1 \not\approx t_2$ for some $t_1, t_2 \in \mathcal{T}(pc)$. 
By Lemma~\ref{lm:sprtrm}, in any new equality $(t_1 \approx t_2)$ that is implied by $pc \land (x \approx y)$ (but not by $pc$), $t_1$ and $t_2$ are equivalent (in $pc$) to superterms of $x$ or $y$. By the early assumes property of CUP, $\const(q)$ purifies $\{x, y\}$ in $pc$. Therefore, every superterm of $x$ or $y$ is equivalent (in $pc$) to some constant in $\const(q)$. Thus, $(pc \land x \approx y) \vdash u \approx v$ and $(pc \land x \approx y) \vdash u \not\approx v$ for some $u, v \in \const(q)$. By Lemma~\ref{lm:Visenough}, $(\alpha_q(pc) \land x \approx y) \vdash u \approx v$. By Lemma~\ref{lm:Visenoughdeq}, $(\alpha_q(pc) \land x \approx y) \vdash u \not\approx v$. Thus, $(\alpha_q(pc)\land x \approx y) \models \bot$. \item case $s = \textbf{assume}(\pv{x}\neq\pv{y})$. $(pc \land x \not\approx y) \models \bot$ if and only if $pc \vdash x \approx y$. Since $x, y \in \const(q)$, $\alpha_q(pc) \vdash x \approx y$. \end{itemize} For part (2), we only show the cases for assume and assignment statements; the other cases are trivial. \begin{itemize} \item case $s = \textbf{assume}(\pv{x} = \pv{y})$. Since $q' = q$, we need to show that $\alpha_{q}(pc \land x \approx y) = \alpha_{q}(\alpha_q(pc) \land x \approx y)$. From the early assumes property, $\mathcal{C}(q)$ purifies $\{x, y\}$ in $pc$. By Lemma~\ref{lm:pur}, $\const(q)$ purifies $\{x, y\}$ in $\alpha_q(pc)$ as well. By Lemma~\ref{lm:idem}, $\alpha_q(pc) = \alpha_q(\alpha_q(pc))$. By Lemma~\ref{lm:eqpres1v}, $\alpha_q(pc \land x \approx y) = \alpha_q(\alpha_q(pc) \land x \approx y)$. \item case $s = \textbf{assume}(\pv{x} \neq \pv{y})$. Since $q' = q$, we need to show that $\alpha_{q}(pc \land x \not\approx y) = \alpha_{q}(\alpha_q(pc) \land x \not\approx y)$. By Lemma~\ref{lm:idem}, $\alpha_q(pc) = \alpha_q(\alpha_q(pc))$. By Lemma~\ref{lm:deqpres1v}, $\alpha_{q}(pc \land x \not\approx y) = \alpha_{q}(\alpha_q(pc) \land x \not\approx y)$. \item case $s = \pv{x} \gets \pv{y}$.
W.l.o.g., assume $q' = q[\pv{x}\mapsto x']$, for some constant $x' \not\in \mathcal{C}(pc)$. By Lemma~\ref{lm:idem}, $\alpha_q(pc) = \alpha_q(\alpha_q(pc))$. By Lemma~\ref{lm:propequiv} (case~1), $\alpha_{\const(q) \cup \{x'\}}(pc \land x' \approx y) = \alpha_{\const(q) \cup \{x'\}}(\alpha_q(pc) \land x' \approx y)$. By Lemma~\ref{lm:subset}, $\alpha_{q'}(pc \land x' \approx y ) = \alpha_{q'}(\alpha_q(pc) \land x' \approx y)$, since $\const(q') \subseteq (\const(q) \cup \{x'\})$. \item case $s = \pv{x} \gets f(\Vec{y})$. W.l.o.g., $q' = q[\pv{x}\mapsto x']$ for some constant $x' \not\in \mathcal{C}(pc)$. There are two cases: (a) there is a term $t \in \mathcal{T}(pc)$ s.t. $pc \vdash t \approx f(\Vec{y})$, (b) there is no such term $t$. \begin{enumerate}[label=(\alph*)] \item By the memoizing property of CUP, there is a program variable $\pv{z}$ s.t. $q(\pv{z}) = z$ and $pc \vdash z \approx f(\Vec{y})$. Therefore, by definition of $\alpha_q$, $\alpha_q(pc) \vdash z \approx f(\Vec{y})$. The rest of the proof is identical to the case of $s = \pv{x} \gets \pv{z}$. \item Since there is no term $t\in\mathcal{T}(pc)$ s.t. $pc \vdash t \approx f(\Vec{y})$, there is also no such term in $\mathcal{T}(\alpha_q(pc))$. By Lemma~\ref{lm:idem}, $\alpha_q(pc) = \alpha_q(\alpha_q(pc))$. By Lemma~\ref{lm:propequiv} (case~2), $\alpha_{\mathcal{C}(q)\cup \{x'\}}(pc \land x' \approx f(\Vec{y})) = \alpha_{\mathcal{C}(q)\cup \{x'\}}(\alpha_q(pc) \land x' \approx f(\Vec{y}))$. By Lemma~\ref{lm:subset}, $\alpha_{q'}(pc \land x' \approx f(\Vec{y})) = \alpha_{q'}(\alpha_q(pc) \land x' \approx f(\Vec{y}))$ since $\const(q') \subseteq (\const(q) \cup \{x'\})$.\hfill\ProofSymbol \end{enumerate} \end{itemize} \end{proof} \begin{corollary} \label{cor:main} For a CUP $P$, the relation $\rho \triangleq \{(c, \alpha_b(c)) \mid c \in \Reach(\mathcal{S}_P)\}$ is a bisimulation from $\mathcal{S}_P$ to $\alpha_b(\mathcal{S}_P)$.
\end{corollary} The path condition in configurations of $\alpha_b(\mathcal{S}_P)$ is restricted to terms of depth one. However, since at every assignment to a variable $\pv{x}$ a fresh constant $x'$ is used to represent the new value of $\pv{x}$, and since the $W$-constants introduced by $\alpha_b$ (\Cref{def:vabst}) may differ between configurations, the number of configurations is not bounded. As before, this is easy to fix by renaming, after the abstraction, all constants in the path condition to be in $\const_0$, and by renaming all the $W$-constants to be taken from one fixed set of constants. Let $\alpha_{b,\mathit{r}}$ be the composition of the base abstraction followed by this renaming, i.e., $\alpha_{b,\mathit{r}} \triangleq \alpha_\mathit{r} \circ \alpha_b$. Recall that $\mathit{r}_0(pc, q)$ renames all constants in $\mathcal{C}(q)$ with their corresponding constants from $\mathcal{C}_0$.
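For concreteness, the renaming $\mathit{r}_0$ can be sketched as a substitution on literals. This is only an illustrative sketch: the nested-tuple encoding of terms and the convention of spelling the canonical constant for a variable \texttt{v} as \texttt{v + "0"} are assumptions of the sketch, not the paper's implementation.

```python
def rename(term, sub):
    """Apply a constant substitution to a term.

    A constant is a string; an application f(t1, ..., tn) is the tuple
    ('f', t1, ..., tn). Constants not in `sub` are left unchanged."""
    if isinstance(term, str):
        return sub.get(term, term)
    return (term[0],) + tuple(rename(t, sub) for t in term[1:])

def r0(literals, q):
    """Rename every constant q(v) currently assigned to a program
    variable v to its canonical counterpart in C_0 (spelled v + '0')."""
    sub = {c: v + "0" for v, c in q.items()}
    return [(op, rename(lhs, sub), rename(rhs, sub))
            for op, lhs, rhs in literals]

# x currently holds constant x1 and y holds y2; w is a W-constant.
q = {"x": "x1", "y": "y2"}
pc = [("=", "x1", ("f", "y2", "w"))]
print(r0(pc, q))  # [('=', 'x0', ('f', 'y0', 'w'))] -- w is untouched
```

Note that, as in the text, only the constants assigned to program variables are renamed; constants outside $\mathcal{C}(q)$ (such as \texttt{w} above) are left as they are.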
However, $\mathit{r}_0$ can introduce new constants not in $\mathcal{C}(pc)$, if (a) $pc$ contains a constant $v_0 \in \mathcal{C}_0$ and (b) $q(\pv{v}) \neq v_0$. Note that $\alpha_q(pc)$ does not contain any constant in $\mathcal{C}(pc)\setminus \mathcal{C}(q)$. In particular, $v_0 \in \mathcal{C}(\alpha_q(pc))$ iff $q(\pv{v}) = v_0$. Therefore, $\mathit{r}_{0}(\alpha_q(pc), q)$ does not introduce any new constants: $\mathcal{C}(\mathit{r}_{0}(\alpha_q(pc), q)) = (\mathcal{C}(\alpha_q(pc)) \setminus \const(q)) \cup \const_0$. Let $V$ be a finite set of constants and let $W$ be a possibly infinite set of constants. Up to $V$-equivalence, there are only finitely many formulas of depth at most one over the constants $V \cup W$. By \Cref{def:vabst}, if two formulas $pc_1$ and $pc_2$ are $q$-equivalent, then $\alpha_q(pc_1) = \alpha_q(pc_2)$ and, hence, $\mathit{r}_{0}(\alpha_q(pc_1), q) = \mathit{r}_{0}(\alpha_q(pc_2), q)$. Therefore, the range of $\alpha_{b,\mathit{r}}$ is finite, and it induces a finite bisimulation: \begin{corollary} \label{cor:finite-bis} For a CUP $P$, $\rho \triangleq \{(c, \alpha_{b, \mathit{r}}(c)) \mid c \in \Reach(\mathcal{S}_P)\}$ is a finite bisimulation from $\mathcal{S}_P$ to $\alpha_{b, \mathit{r}}(\mathcal{S}_P)$. \end{corollary} \Cref{cor:finite-bis} ensures that $\alpha_{b, \mathit{r}}$ is sound and complete for CUP. Furthermore, $\alpha_{b, \mathit{r}}$ is sound (but not complete) for any UP: it induces a finite simulation (since $\alpha_b$ only weakens path conditions). \Cref{cor:finite-bis} suggests that CUPs are essentially finite state systems. Thus, reachability and, more generally, temporal model checking are decidable for them. The only issue is that we have not established the computability of $\alpha_b$. We address this complication in \Cref{sec:char}. \section{Bisimulation of CUP} \label{sec:extabase} The first step in extending~\cref{th:lcup} to CUP is to design an abstraction function that bounds the depth of terms that appear in any reachable (abstract) state. It is easy to design such a function while maintaining soundness -- simply forget literals that have terms that are too deep. However, we want to maintain precision as well. That is, we want the abstract transition system to be bisimilar to the concrete one. Just like the cover abstraction, the base abstraction function eliminates all constants that are not assigned to program variables. Unlike the cover abstraction, the base abstraction does not maintain $\const(q)$-equivalence of the path conditions, but, rather, forgets most literals that cannot be expressed over program variables.
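The constructions in this section build on congruence-closure reasoning for EUF. As a point of reference only, here is a minimal congruence-closure sketch over ground terms: a naive fixpoint construction with a union-find, not the procedures cited below, and the nested-tuple term encoding is an assumption of this sketch.

```python
from itertools import combinations

def congruence_closure(equalities):
    """Congruence closure of ground EUF equalities.

    A constant is a string; an application f(t1, ..., tn) is the tuple
    ('f', t1, ..., tn). Returns a find() function mapping each term that
    occurs (including subterms) to its equivalence-class representative.
    """
    parent = {}

    def add(t):
        if t not in parent:
            parent[t] = t
            if isinstance(t, tuple):
                for s in t[1:]:
                    add(s)

    def find(t):
        add(t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for lhs, rhs in equalities:
        union(lhs, rhs)
    # Naive fixpoint: merge f(s) with f(t) whenever their arguments are
    # already equivalent, until nothing changes.
    changed = True
    while changed:
        changed = False
        apps = [t for t in list(parent) if isinstance(t, tuple)]
        for t1, t2 in combinations(apps, 2):
            if find(t1) != find(t2) and t1[0] == t2[0] \
               and len(t1) == len(t2) \
               and all(find(a) == find(b) for a, b in zip(t1[1:], t2[1:])):
                union(t1, t2)
                changed = True
    return find

# Gamma = {c ~ f(a), d ~ f(b), a ~ b} entails c ~ d by congruence.
find = congruence_closure([("c", ("f", "a")), ("d", ("f", "b")), ("a", "b")])
print(find("c") == find("d"))  # True
```

Efficient implementations, such as those cited below, avoid the quadratic fixpoint by maintaining the congruence graph incrementally; the toy version only illustrates the deductive closure that a $V$-basis summarizes.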
In this section, we focus on the definition of the base abstraction and prove that it induces a bisimulation for CUP. This result is used in \cref{sec:char}, to logically characterize CUPs. Intuitively, the base abstraction ``truncates'' the congruence graph induced by a path condition at nodes that have no representative in the set of constants assigned to the program variables ($V$ in the following definition), and assigns to the truncated nodes fresh constants (from $W$ in the following definition). Congruence closure procedures for EUF use a \emph{congruence graph} to concisely represent the deductive closure of a set of EUF literals~\cite{DBLP:journals/jacm/NelsonO80,DBLP:conf/rta/NieuwenhuisO05}. Here, we use a logical characterization of a congruence graph, called a \emph{$V$-basis}. Let $\Gamma$ be a set of EUF literals. A triple $\langle W, \beta, \delta \rangle$ is a $V$-basis of $\Gamma$ relative to a set of constants $V$, written $\langle W, \beta, \delta \rangle \in \base(\Gamma, V)$, iff (a) $W$ is a set of fresh constants not in $\const(\Gamma)$, and $\beta$ and $\delta$ are conjunctions of EUF literals; (b) $(\exists W\cdot \beta \land \delta) \equiv \Gamma$; (c) $\beta \triangleq \beta_{\approx} \cup \beta_{\not\approx} \cup \beta_{\mathcal{F}}$ and $\delta \triangleq \delta_{\approx} \cup \delta_{\not\approx} \cup \delta_{\mathcal{F}}$, where \begin{align*} \beta_{\approx} &\subseteq \{u \approx v \mid u, v \in V\} \qquad \beta_{\not\approx}\subseteq \{u \not\approx v \mid u, v \in V\} \\ \beta_{\mathcal{F}} &\subseteq \{v \approx f(\Vec{w}) \mid v \in V, \Vec{w} \subseteq V \cup W, \Vec{w} \cap V \neq \emptyset\}\\ \delta_{\approx} &\subseteq \{ w \approx u \mid w \in V \cup W, u \not \in V \cup W\}\\ \delta_{\not\approx} &\subseteq \{u \not\approx w \mid u \in W, w \in W \cup V\}\\ \delta_{\mathcal{F}} &\subseteq \{v \approx f(\Vec{w}) \mid v \in V \cup W, \Vec{w}\subseteq V\cup W, v \in V \Rightarrow \Vec{w} \subseteq W\} \end{align*} (d) $\beta \land \delta \nvdash v \approx w$ for any $v \in V$, $w \in W$; and (e) $\beta \land \delta \nvdash w_1 \approx w_2$ for any $w_1, w_2 \in W$ s.t. $w_1 \neq w_2$. Note that we represent both equalities and disequalities in the $V$-basis, as is common in implementations (but not in theoretical presentations) of the congruence closure algorithm. Intuitively, $V$ are the constants in $\const(\Gamma)$ that represent equivalence classes in $\Gamma$, and $W$ are constants added to represent equivalence classes that do not have a representative in $V$. A $V$-basis of any satisfiable set $\Gamma$ is unique up to renaming of constants in $W$ and ordering of equalities between constants in $V$. \begin{example} \label{ex:basis} Let $\Gamma = \{x \approx f(a, v_1), y \approx f(b, v_2), v_1 \approx v_2\}$ and $V = \{a, b, x, y\}$. A $V$-basis of $\Gamma$ is $\langle W, \beta, \delta\rangle$, where $W = \{w\}$, $\beta = \{ x \approx f(a, w), y \approx f(b, w)\}$, $\delta = \{w \approx v_1, w \approx v_2\}$. Renaming $w$ to $w'$ gives a different $V$-basis: $\langle W', \beta', \delta'\rangle \in \base(\Gamma, V)$ where $W' = \{w'\}$, $\beta' = \beta[w \mapsto w']$ and $\delta' = \delta[w \mapsto w']$. As another example, consider $\Gamma = \{x \approx f(a, p), x \approx f(a, n(p)), y \approx f(b, p), y \approx f(c, n(p))\}$ and $V =\{x, y, a, b, c\}$. A $V$-basis of $\Gamma$ is $\langle W, \beta, \delta\rangle$, where $W = \{w_0, w_1\}$, $\delta = \{w_0 \approx p, w_1 \approx n(w_0)\}$, and \[ \beta = \left\{\begin{aligned} x &\approx f(a, w_0) & x &\approx f(a, w_1) \\ y &\approx f(b, w_0) & y &\approx f(c, w_1) \end{aligned}\right\} \] \end{example} While a basis maintains all consequences of $\Gamma$ (since $(\exists W\cdot\beta \wedge \delta) \equiv \Gamma$), the $V$-base abstraction of $\Gamma$, defined next, is weaker.
It preserves consequences of $\beta$ only: \begin{definition}[$V$-base abstraction] \label{def:vabst} The $V$-base abstraction $\alpha_V$ for a set of constants $V$ is a function between sets of literals s.t. for any sets of literals $\Gamma$ and $\Gamma'$: \begin{enumerate}[(1)] \item $\alpha_V(\Gamma) \triangleq \beta$, where $\langle W, \beta, \delta\rangle \in \base(\Gamma, V)$, \item if there exists a $\beta$ s.t. $\langle W_1, \beta, \delta_1\rangle \in\base(\Gamma, V)$ and $\langle W_2, \beta, \delta_2\rangle \in \base(\Gamma', V)$, then $\alpha_V(\Gamma) = \alpha_V(\Gamma')$. \end{enumerate} \end{definition} The second requirement of Def.~\ref{def:vabst} ensures that two formulas that have the same $V$-consequences have the same $V$-abstraction. For example, for a set of constants $V = \{u, v\}$, the formulas $\varphi_1 = \{v \approx f(u, x)\}$ and $\varphi_2 = \{v \approx f(u, y)\}$ have the same $V$-base abstraction: $v \approx f(u, w)$. Note that at this point, we only require that $\alpha_V$ is well defined (for example, it does not have to be computable). We now extend the $V$-base abstraction to program configurations, calling it simply the \emph{base abstraction}, since the set of preserved constants is determined by the configuration: \begin{definition}[Base abstraction] \label{def:alpha-abstraction} The base abstraction $\alpha_b: C \to C$ is defined for configurations $\langle s, q, pc \rangle \in C$, where $pc$ is a \emph{conjunction} of literals: $\alpha_b(\langle s, q, pc\rangle) \triangleq \langle s,q,\alpha_{\mathcal{C}(q)}(pc)\rangle$. \end{definition} Namely, the base abstraction $\alpha_{\mathcal{C}(q)}$ applied to the path condition is determined by the state $q$ in the configuration. We often write $\alpha_q(\varphi)$ as a shorthand for $\alpha_{\const(q)}(\varphi)$. We are now in a position to state the main result of this section.
Given a CUP $P$, the abstract transition system $\alpha_b(\mathcal{S}_P) = (C, c_0^{\alpha_b}, \mathcal{R}^{\alpha_b})$ is bisimilar to the concrete transition system $\mathcal{S}_P = (C, c_0, \mathcal{R})$. Note that at this point, we do not claim that $\alpha_b(\mathcal{S}_P)$ is finite, or that it is computable. We focus only on the fact that the literals that are forgotten by the base abstraction do not matter for any future transitions. The key technical step is summarized in Thm.~\ref{thm:main} and Cor.~\ref{cor:main} above. Note that for an arbitrary UP, $\alpha_{b}$ induces a simulation (since $\alpha_b$ only weakens path conditions). By construction, for any configuration in an abstract system constructed using $\alpha_{b}$, the path condition is of depth at most one. In \Cref{sec:char}, we use this property to build a logical characterization of CUP and show that reachability of CUP programs is decidable. \subsection{Relationship to \cite{DBLP:journals/pacmpl/MathurMV19}} In~\cite{DBLP:journals/pacmpl/MathurMV19}, \Cref{cor:cupdec} is proven by constructing a deterministic finite automaton that accepts all \emph{feasible} coherent executions.\footnote{ In our setting, feasible coherent executions correspond to paths in the transition system of any CUP.} However, the construction fails for the executions of the CUP in \Cref{fig:cup}: the execution that reaches a terminal configuration is infeasible, but it is (wrongly) accepted by the automaton. Intuitively, the reason is that the automaton is deterministic and its states are not sufficiently expressive. The states of the automaton keep track of equalities between program variables (which correspond to $\beta_{\approx}$ in our abstraction), disequalities between them ($\beta_{\not\approx}$ in our case), and partial function interpretations ($\beta_{\mathcal{F}}$). However, the partial function interpretations are restricted to $\beta_{\mathcal{F}_V}$, i.e., they do not allow auxiliary constants that are not assigned to program variables.
Thus, they are unable to keep track of $x_0 \approx f(a_0, w) \land y_0 \approx f(b_0, w) \land c_0 \approx d_0$ in line 9, which is essential for showing infeasibility of the execution. Eliminating the auxiliary constants, as we do in the cover abstraction, does not remedy the situation since it introduces a disjunction $(a_0 \not\approx b_0 \wedge c_0 \approx d_0) \lor (x_0 \approx y_0 \wedge c_0 \approx d_0)$, which the deterministic automaton does not capture. \section{Conclusion} \label{sec:conclusion} In this paper, we study theoretical properties of Coherent Uninterpreted Programs (CUPs) that have been recently proposed by Mathur et al.~\cite{DBLP:journals/pacmpl/MathurMV19}. We identify a bug in the original paper, and provide an alternative proof of decidability of the reachability problem for CUP. More significantly, we provide a logical characterization of CUP. First, we show that inductive invariants of CUPs are describable by shallow formulas. Hence, the set of all candidate invariants can be effectively enumerated. Second, we show that CUPs are bisimilar to finite transition systems. Thus, while they are formally infinite state, they are not any more expressive than a finite state system. Third, we propose an algorithm to compute a finite transition system of a CUP. This lifts all existing results on finite state model checking to CUPs. In the paper, we have focused on the core result of Mathur et al., and have left out several interesting extensions. In~\cite{DBLP:journals/pacmpl/MathurMV19}, the notion of CUP is extended with $k$-coherence -- a UP $P$ is $k$-coherent if it is possible to transform $P$ into a CUP $\hat{P}$ by adding $k$ \emph{ghost} variables to $P$. This is an interesting extension since it makes potentially many more programs amenable to decidable verification. We observe that the addition of \emph{ghost} variables is a form of abstraction.
Thus, invariants of $\hat{P}$ can be translated to invariants of $P$ using techniques of Namjoshi et al.~\cite{DBLP:conf/sas/NamjoshiZ13,DBLP:conf/vmcai/Namjoshi03}. This essentially amounts to existentially eliminating ghost variables from the invariant of $\hat{P}$. Such elimination increases the depth of terms in the invariant by at most one for each variable eliminated. Thus, we conjecture that $k$-coherent programs are characterized by invariants with terms of depth at most $k$. Mathur et al.~\cite{DBLP:journals/pacmpl/MathurMV19} extend their results to recursive UP programs (i.e., UP programs with recursive procedures). We believe our logical characterization results extend to this setting as well. In this case, both the invariants and procedure summaries (i.e., procedure pre- and post-conditions) are described using terms of depth at most 1. Our results also hold when CUPs are extended with simple axiom schemes, as in~\cite{DBLP:conf/tacas/MathurM020}, while for most non-trivial axiom schemes, verification of CUPs becomes undecidable. Perhaps most interestingly, our results suggest efficient verification algorithms for CUPs and interesting abstractions for UPs. Since the space of invariant candidates is finite, it can be enumerated, for example, using implicit predicate abstraction. For CUPs, this is a complete verification method. For UPs, it is an abstraction. Most importantly, it does not require prior knowledge of whether a UP is a CUP! \section{Verifying CUP} \begin{theorem} Given a coherent program $P$, the problem of verifying $P$ is decidable. \end{theorem} \begin{proof} By Algorithm. \end{proof} \begin{theorem} Given a coherent program $P$ with only unary functions, the problem of verifying $P$ is PSPACE-complete. \end{theorem} \subsection{Programs} We assume a signature $\Sigma = (\mathcal{C}, \mathcal{F})$ with constant symbols $\mathcal{C}$, (uninterpreted) function symbols $\mathcal{F}$, and no relation symbols.
We fix $V = \{v_1, \ldots, v_n\}$ to be the set of variables in a program. We assume that a program uses special constants $\hat{V} = \{\hat{v} \mid v \in V\}$, $\hat{V} \subseteq \mathcal{C}$, to initialize its variables. We also assume that all function symbols in the program are in $\mathcal{F}$. $\mathcal{T}$ is the set of all terms over $\Sigma$. The \emph{syntax} of programs is \begin{align*} \langle stmt \rangle ::=\; &\mathbf{skip} \mid x := y \mid x := f(\textbf{z}) \mid \\ &\mathbf{assume}\;(\langle cond \rangle) \mid \langle stmt \rangle; \langle stmt \rangle \mid \\ &\mathbf{if}\;(\langle cond \rangle ) \;\mathbf{then} \;\langle stmt \rangle \;\mathbf{else}\; \langle stmt \rangle \mid \\ &\mathbf{while}\;(\langle cond \rangle) \;\langle stmt \rangle\\ \langle cond\rangle ::=\; &x = y \mid c = d \mid \langle cond \rangle \vee \langle cond\rangle \mid \neg \langle cond \rangle \end{align*} An \emph{execution} over a finite set of variables $V$ is a word over the alphabet \begin{multline*} \Pi = \{``x:= y", ``x := f(\textbf{z})", ``\mathbf{assume}(x = y)",\\ ``\mathbf{assume}(x \neq y)" \mid x,y \in V, \textbf{z} \in V^* \} \end{multline*} \emph{Complete executions} of programs that manipulate a set of variables $V$ are executions over $V$ defined as: \begin{alignat*}{2} \text{Exec}(\mathbf{skip}) &=\;&& \epsilon \\ \text{Exec} (x := y) &=\;&& ``x := y"\\ \text{Exec} (x := f(\textbf{z})) &=\;&& ``x := f(\textbf{z})" \\ \text{Exec} (\mathbf{assume}(c)) &=\;&& ``\mathbf{assume}(c)" \\ \text{Exec} (\mathbf{if}\;(c)\;\mathbf{then}\; s_1 \;\mathbf{else} \;s_2) &=\;&& ``\mathbf{assume}(c)"\cdot \text{Exec}(s_1) \cup \hphantom{a} \\ &\hphantom{=}&&``\mathbf{assume}(\neg c)" \cdot \text{Exec}(s_2)\\ \text{Exec} (s_1;s_2) &=\;&& \text{Exec}(s_1)\cdot \text{Exec}(s_2)\\ \text{Exec} (\mathbf{while}\;(c)\; s) &=\;&& [``\mathbf{assume}(c)"\cdot \text{Exec}(s)]^*\cdot\\ &\hphantom{=}&&``\mathbf{assume}(\neg c)" \end{alignat*} A \emph{partial execution} is any prefix of a complete execution.
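To make the definition concrete, here is a small Python sketch of $\text{Exec}$ for the loop-free fragment (the tuple encoding of statements and the string rendering of negated conditions are our own, purely illustrative choices; \textbf{while} is omitted since it yields an infinite language of executions):

```python
# Illustrative sketch (not part of the formal development): enumerating
# the complete executions Exec(s) of a loop-free statement. Statements
# are encoded as nested Python tuples; each execution is a tuple of
# alphabet letters from Pi.

def execs(stmt):
    """Return the set of complete executions of a loop-free statement."""
    kind = stmt[0]
    if kind == "skip":
        return {()}
    if kind == "assign":                      # x := y  or  x := f(z)
        _, text = stmt
        return {(text,)}
    if kind == "assume":
        _, cond = stmt
        return {(f"assume({cond})",)}
    if kind == "seq":
        _, s1, s2 = stmt
        return {e1 + e2 for e1 in execs(s1) for e2 in execs(s2)}
    if kind == "ite":                         # if (c) then s1 else s2
        _, cond, s1, s2 = stmt
        then_br = {(f"assume({cond})",) + e for e in execs(s1)}
        else_br = {(f"assume(!{cond})",) + e for e in execs(s2)}
        return then_br | else_br
    raise ValueError(kind)

# if (x = y) then x := f(x) else skip
prog = ("ite", "x = y", ("assign", "x := f(x)"), ("skip",))
print(sorted(execs(prog)))
```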
The term assigned to a variable $x$ after some partial execution is captured using the function $\text{Comp} : \Pi^* \times V \mapsto \mathcal{T}$ defined as: \begin{alignat*}{3} \text{Comp}(\epsilon, x) &=\;&& \mathrlap{\hat{x}} \\ \text{Comp} (\rho \cdot ``x := y", x) &=\;&& \mathrlap{\text{Comp}(\rho, y)}\\ \text{Comp} (\rho \cdot ``x := y", x') &=\;&& \text{Comp}(\rho, x') && x' \neq x\\ \text{Comp} (\rho \cdot ``x := \mathit{f}(\textbf{z})", x) &=\;&& \mathit{f}(\text{Comp}(\rho, z_1),\ldots,\text{Comp}(\rho, z_n)) && \text{where } \textbf{z} = z_1, \ldots, z_n\\ \text{Comp} (\rho \cdot ``x := \mathit{f}(\textbf{z})", x') &=\;&& \text{Comp}(\rho, x') && x' \neq x\\ \text{Comp} (\rho \cdot ``\mathbf{assume}(x = y)", x') &=\;&& \mathrlap{\text{Comp}(\rho, x')}\\ \text{Comp} (\rho \cdot ``\mathbf{assume}(x \neq y)", x') &=\;&& \mathrlap{\text{Comp}(\rho, x')} \end{alignat*} The \emph{term graph} is a 5-tuple $(N,S,E,D,T)$ where $N \subseteq \mathcal{T}$ is a set of nodes, $S \subseteq N \times N$ is a set of directed edges denoting the subterm relation, $E \subseteq N \times N$ is a set of equality edges, $D \subseteq N \times N$ is a set of disequality edges, and $T$ is a \emph{partial} function $N \mapsto V$ that maps some nodes to program variables. Now we define the term graph of an execution $\sigma$. If $\sigma = \epsilon$, we initialize the term graph as: \begin{alignat*}{2} \text{TG}(\epsilon) &=\;&& (\hat{V}, \emptyset, \emptyset, \emptyset,\{\hat{v}\mapsto v\mid v \in V\} ) \\ \end{alignat*} In the following, we assume that $\sigma = \rho \cdot s$ and $\text{TG}(\rho) = (N,S,E,D,T)$. If $s = ``x := y"$, then we update the term graph to $(N,S,E,D,T')$ where $T' = (T \setminus \{(n, x) \mid (n, x) \in T \}) \cup \{(n, x) \mid (n, y) \in T\}$.
If $s = ``x := \mathit{f}(\textbf{z})"$, then we add a new vertex for $t = \text{Comp}(\sigma, x)$, add subterm edges $(t, \text{Comp}(\rho, z))$ for each $z \in \textbf{z}$, remap the variable $x$ to the new vertex, and update the equality edges by computing a \emph{congruence closure} on the previous term graph. That is, we update the term graph to $(N',S',E',D,T')$ where \begin{align*} N' =& N \cup \{t\}\\ S' =& S \cup \{(t, \text{Comp}(\rho, z)) \mid z \in \textbf{z}\} \\ E' =& E \cup \{(\mathit{f}(\textbf{z}), \mathit{f}(\textbf{k})) \mid \textbf{z} = (z_1, \ldots, z_n) \wedge \textbf{k} = (k_1, \ldots, k_n) \wedge \\ &\forall i \cdot (z_i, k_i) \in E \wedge \mathit{f}(\textbf{k}) \in N\} \\ T' =& (T \setminus \{(n, x) \mid (n, x) \in T \}) \cup \{(t, x)\} \end{align*} If $s = ``\mathbf{assume}(x = y)"$ and $n_1 = \text{Comp}(\rho, x)$, $n_2 = \text{Comp}(\rho, y)$, then we update the term graph to $(N,S,E',D,T)$ where $E' = E \cup \{(n_1, n_2)\} \cup \{(n_1, n_3) \mid (n_2, n_3) \in E\} \cup \{(n_2, n_3) \mid (n_1, n_3) \in E\}$. If $s = ``\mathbf{assume}(x \neq y)"$, then we update $D$ to $D \cup \{(n_1, n_2)\}$ where $n_1 = \text{Comp}(\rho, x)$ and $n_2 = \text{Comp}(\rho, y)$.
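A direct Python prototype of $\text{Comp}$ may help; terms are encoded as nested tuples and the initial constant $\hat{x}$ is rendered as the string \verb|"x^"| (both encodings are ours, for illustration only):

```python
# Illustrative sketch of the Comp function. Terms are nested tuples,
# e.g. ('f', 'x^') stands for f(x-hat); execution letters are tuples.

def comp(execution, x):
    """Term held by variable x after a (partial) execution."""
    env = {}                                  # unset variables default to v^
    def val(v):
        return env.get(v, v + "^")
    for step in execution:
        if step[0] == "copy":                 # x := y
            _, lhs, rhs = step
            env[lhs] = val(rhs)
        elif step[0] == "apply":              # x := f(z1, ..., zn)
            _, lhs, f, args = step
            env[lhs] = (f,) + tuple(val(z) for z in args)
        # "assume" steps change no variable's term, so they are skipped
    return val(x)

rho = [("apply", "x", "f", ["x"]),            # x := f(x)
       ("assume", "x = y"),                   # assume(x = y)
       ("copy", "y", "x"),                    # y := x
       ("apply", "x", "f", ["x"])]            # x := f(x)
print(comp(rho, "x"))   # ('f', ('f', 'x^'))
print(comp(rho, "y"))   # ('f', 'x^')
```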
\section{Additional Background on EUF} \label{sec:euf_extra} \begin{figure}[t] \begin{mathpar} \prftree[r]{\textsc{Refl}}{\Gamma \vdash x \approx x}\qquad \prftree[r]{\textsc{Symm}}{\Gamma \vdash x \approx y}{\Gamma \vdash y \approx x}\\ \prftree[r]{\textsc{Trans}}{\Gamma \vdash x \approx y}{\Gamma \vdash y \approx z}{\Gamma \vdash x \approx z}\\ \prftree[r]{\textsc{Cong}}{\Gamma \vdash x_1 \approx y_1 \quad\cdots\quad \Gamma \vdash x_n \approx y_n}{\Gamma \vdash f(x_1, \ldots, x_n) \approx f(y_1, \ldots, y_n)}\\ \prftree[r]{\textsc{EqNeq}}{\Gamma\vdash x \approx y}{x \not\approx y \in \Gamma}{\Gamma\vdash \bot} \quad \prftree[r]{\textsc{PMod}}{\Gamma \vdash \ell}{\Gamma \vdash x \approx y}{\Gamma \vdash \ell[x \mapsto y]} \end{mathpar} \caption{Proof system $\mathcal{P}_{EUF\xspace}$.} \label{fig:peuf} \end{figure} In this section, we formalize some concepts about EUF that are well known and have been excluded from the main content of the paper due to space limitations. The proof rules of the proof system $\mathcal{P}_{EUF\xspace}$ for EUF are shown in \Cref{fig:peuf}. These are the usual rules. The exception is \textsc{PMod}, which is a form of paramodulation. It is used to derive new literals by substituting equals for equals. While not typically included in the proof rules for EUF, \textsc{PMod} is used implicitly in congruence graph algorithms, and in interpolation over EUF. A deductive (EUF\xspace) closure, $\Gamma^*$, of a set of literals $\Gamma$ is defined as: $\Gamma^* \triangleq \{\ell \mid \Gamma \vdash \ell\}$. A set $\Gamma$ is deductively closed if $\Gamma = \Gamma^*$.
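Restricted to the finitely many subterms of a finite $\Gamma$, the equalities in the deductive closure can be computed by the classic congruence closure algorithm. The following deliberately naive Python sketch (not the algorithm used in this paper) realizes \textsc{Refl}/\textsc{Symm}/\textsc{Trans} via union-find and applies \textsc{Cong} to a fixpoint:

```python
# Naive congruence closure over the subterms of a finite set of
# equalities. Terms are nested tuples: a constant is a string,
# f(t1, ..., tn) is ('f', t1, ..., tn).

def subterms(t):
    yield t
    if isinstance(t, tuple):
        for arg in t[1:]:
            yield from subterms(arg)

def congruence_closure(equalities):
    terms = {s for eq in equalities for side in eq for s in subterms(side)}
    parent = {t: t for t in terms}

    def find(t):                              # union-find without compression
        while parent[t] != t:
            t = parent[t]
        return t

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in equalities:                   # the given equalities
        union(a, b)
    changed = True
    while changed:                            # Cong: equal arguments imply
        changed = False                       # equal function applications
        apps = [t for t in terms if isinstance(t, tuple)]
        for s in apps:
            for t in apps:
                if (s[0] == t[0] and len(s) == len(t) and find(s) != find(t)
                        and all(find(a) == find(b) for a, b in zip(s[1:], t[1:]))):
                    union(s, t)
                    changed = True
    return lambda a, b: find(a) == find(b)

# Gamma = {a ~ b, f(a) ~ c, f(b) ~ d} derives c ~ d.
entails = congruence_closure([("a", "b"), (("f", "a"), "c"), (("f", "b"), "d")])
print(entails("c", "d"))   # True
```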
For a satisfiable set $\Gamma$ of EUF literals, and $a, b \in \mathcal{T}(\Gamma)$: \begin{enumerate}[(1)] \item $\Gamma \models a \approx b$ iff $a \approx b \in \Gamma^*$ \item $\Gamma \models a \not\approx b$ iff $\bot \in (\Gamma \cup \{a \approx b\})^*$ \end{enumerate} Note that $\Gamma \models a \not\approx b$ does not imply $\Gamma \vdash a \not\approx b$, since $\mathcal{P}_{EUF\xspace}$ has no \textsc{Hyp} and \textsc{Contra} proof rules. Depth of a term is formally defined as follows: \[\depth(t) = \begin{cases} 0 & \text{if $t \in \const$}\\ 1 + \max_i(\depth(t_i)) & \text{if $t = f(t_0, \ldots, t_k)$} \end{cases} \] \section{Abstraction} In this section we introduce several abstractions of UPL programs. Each abstraction is obtained by applying a different abstraction function to the configurations of the program. \sharon{do we want to add this? where the abstractions leave the program statement $s$ and the state $q$ intact, but abstract the path condition. It is not needed now, but it may let us state more things generally -- need to check} The abstract transitions are obtained by taking a concrete transition and applying the abstraction on the resulting configuration. The formal definition follows. \begin{definition} \label{def:abstract-TS-general} Given a transition system $\mathcal{S} = (C, c_0,\mathcal{R})$ and a (possibly partial) abstraction function $\sharp: C \to C$, we define the abstract transition system $\genAbs(\TS) = (C, \initconf^{{\genAbs}}, \Tr^{{\genAbs}})$ as follows. The abstract initial configuration is $\initconf^{{\genAbs}} = \sharp(c_0)$, and the abstract transition relation is $\Tr^{{\genAbs}} = \{(c,c') \mid \exists \tilde{c}.~c \to \tilde{c}~\land~c' = \sharp(\tilde{c}) \}$. We write $c \to^{\genAbs} c'$ when $(c,c') \in \Tr^{{\genAbs}}$. While $\sharp$ may be a partial function, we require that it is defined for $c_0$. \end{definition} In the sequel we consider several abstraction functions and discuss their properties. 
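Operationally, the abstract transition system of \Cref{def:abstract-TS-general} can be explored by a standard worklist loop that re-abstracts every concrete successor. The sketch below is schematic Python; the configurations, the successor map, and $\sharp$ are placeholders, not the constructions of this paper:

```python
# Schematic worklist exploration of an abstract transition system:
# start from sharp(c0); from every abstract configuration, take a
# concrete step and immediately re-abstract the result.

def abstract_reach(c0, successors, sharp):
    init = sharp(c0)                          # abstract initial configuration
    seen, work = {init}, [init]
    while work:
        c = work.pop()
        for t in successors(c):               # concrete step ...
            c2 = sharp(t)                     # ... then re-abstract
            if c2 not in seen:
                seen.add(c2)
                work.append(c2)
    return seen

# Toy instance: counters with successor n+1, abstracted modulo 3.
reach = abstract_reach(7, lambda n: [n + 1], lambda n: n % 3)
print(sorted(reach))   # [0, 1, 2]
```

If the abstraction has finite range, the loop terminates even when the concrete system is infinite state, which is exactly the situation exploited for CUPs later in the paper.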
\sharon{I want to revise this to mention $\equiv_\sharp$} In all cases, the resulting abstract transition systems simulate the concrete transition system. In the interesting cases, the abstract and the concrete transition systems are in fact bisimilar. Simulation and bisimulation ensure that temporal properties are preserved from the abstract transition system to the concrete transition system, where condition (a) below ensures that properties of individual configurations are preserved. The definition therefore assumes an equivalence relation $\equiv$ (for bisimulation), respectively a preorder $\leq$ (for simulation), between configurations, which we will define later. \sharon{I don't know if talking about equiv and pre-order is really needed, but this is the analogue to the req that the atomic propositions agree} \begin{definition}[\cite{DBLP:books/daglib/0067019}] Let $\mathcal{S} = (C, c_0, \mathcal{R})$ and $\genAbs(\TS) = (C, \initconf^{{\genAbs}}, \Tr^{{\genAbs}})$ be transition systems. A relation $B \subseteq C \times C$ is a \emph{bisimulation} relation from $\mathcal{S}$ to $\genAbs(\TS)$ if for every $(c,c_\sharp) \in B$: \begin{enumerate}[(a)] \item $c \equiv c_\sharp$, and \item if $c \to c'$ then there exists $c_\sharp'$ such that $c_\sharp \to^{\genAbs} c_\sharp'$ and $(c',c_\sharp') \in B$, and \item if $c_\sharp \to^{\genAbs} c_\sharp'$ then there exists $c'$ such that $c \to c'$ and $(c',c_\sharp') \in B$. \end{enumerate} $B \subseteq C \times C$ is a \emph{simulation} relation from $\mathcal{S}$ to $\genAbs(\TS)$ if for every $(c,c_\sharp) \in B$: \begin{enumerate}[(a)] \item $c \leq c_\sharp$, and \item if $c \to c'$ then there exists $c_\sharp'$ such that $c_\sharp \to^{\genAbs} c_\sharp'$ and $(c',c_\sharp') \in B$.
\end{enumerate} $\genAbs(\TS)$ \emph{is bisimilar to}, respectively \emph{simulates}, $\mathcal{S}$ if there exists a bisimulation, respectively, a simulation, relation $B$ from $\mathcal{S}$ to $\genAbs(\TS)$ such that $(c_0, \initconf^{{\genAbs}}) \in B$. \end{definition} \section{Base Abstraction}
\label{sec:extabase} Given two EUF\xspace formulas $\varphi_1$ and $\varphi_2$ and a set of constants $V$, we say that the formulas are $V$-equivalent, denoted $\varphi_1 \equiv_V \varphi_2$, if, for all EUF\xspace formulas $\psi$ such that $\mathcal{C}(\psi) \subseteq V$, $(\varphi_1 \wedge \psi) \models \bot$ if and only if $(\varphi_2 \wedge \psi) \models \bot$. \sharon{I suggest to add a footnote saying that this means that the formulas are equivalent when all constants other than $V$ are treated as existentially quantified variables. }\ag{That would not be true. We want a formula $A$ and its unifrom interpolant $U$ to be equivalent by our definitions. However, their existential closures are not equivalent} \begin{example} Let $\varphi_1 = \{x_1 \approx f(p_0, x_0), y_1 \approx f(q_0, y_0), x_0 \approx y_0\}$, $\varphi_2 = \{x_1 \approx f(p_0, w), y_1 \approx f(q_0, w)\}$, $\varphi_3 = \{x_1 \approx f(p_0, x_0), y_1 \approx f(q_0, y_0)\}$, and $V = \{x_1, y_1, p_0, q_0\}$. $\varphi_1 \equiv_V \varphi_2$ but $\varphi_1 \not\equiv_V \varphi_3$. \end{example} Congruence closure procedures for EUF use a \emph{congruence graph} to concisely represent the deductive closure of a set of EUF literals~\cite{DBLP:journals/jacm/NelsonO80,DBLP:conf/rta/NieuwenhuisO05}. In this paper, we use a logical characterization of a congruence graph, called an \emph{extensional basis}. 
Let $\Gamma$ be a set of EUF literals. A triple $\langle W, \beta, \delta \rangle$ is an extensional basis of $\Gamma$ relative to a set of constants $V$, written $\langle W, \beta, \delta \rangle = \extbase(\Gamma, V)$, iff (a) $W$ is a set of fresh constants not in $\const(\Gamma)$\sharon{can we not require that $W$ are fresh but only that $W \cap V = \emptyset$?} and $\beta$ and $\delta$ are conjunctions of EUF literals; (b) $(\beta \land \delta) \equiv_V \Gamma$; (c) $\beta \triangleq \beta_{\approx} \cup \beta_{\not\approx} \cup \beta_{\mathcal{F}}$ and $\delta \triangleq \delta_{\not\approx} \cup \delta_{\mathcal{F}}$, where \begin{align*} \beta_{\approx} &\subseteq \{u \approx v \mid u, v \in V\} \\ \beta_{\mathcal{F}} &\subseteq \{v \approx f(\Vec{w}) \mid v \in V, \Vec{w} \in V \cup W, \Vec{w} \cap V \neq \emptyset\}\\ \beta_{\not\approx}&\subseteq \{u \not\approx v \mid u, v \in V\}\\ \delta_{\not\approx} &\subseteq \{u \not\approx w \mid u \in W, w \in W \cup V\}\\ \delta_{\mathcal{F}} &\subseteq \{ u \approx f(\Vec{w}) \mid u \in W, \Vec{w} \in V \cup W\} \end{align*} and (d) $W$, $\beta$, and $\delta$ are minimal, i.e., they cannot be represented with fewer literals or fresh constants. Note that we represent both equalities and disequalities in the extensional basis, as is common in implementations (but not in the theoretical presentations) of the congruence closure algorithm. Intuitively, $V$ are constants in $\const(\Gamma)$ that represent equivalence classes in $\Gamma$, and $W$ are constants added to represent equivalence classes that do not have a representative in $V$. An extensional basis is unique up to renaming of constants in $W$ and ordering of equalities between constants in $V$.
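For concreteness, here is a worked instance of the definition, using the formulas from the example earlier in this section (we assume the single fresh constant is named $w$):

```latex
\begin{example}
Let $\Gamma = \{x_1 \approx f(p_0, x_0),\; y_1 \approx f(q_0, y_0),\; x_0 \approx y_0\}$
and $V = \{x_1, y_1, p_0, q_0\}$. The equivalence class $\{x_0, y_0\}$ has no
representative in $V$, so one fresh constant $w$ is introduced for it. Then
\[
  W = \{w\}, \qquad
  \beta = \{x_1 \approx f(p_0, w),\; y_1 \approx f(q_0, w)\}, \qquad
  \delta = \emptyset,
\]
and indeed $\beta \equiv_V \Gamma$: $\beta$ is exactly the formula $\varphi_2$
that is $V$-equivalent to $\varphi_1$ in the example above.
\end{example}
```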
In fact, given any two sets of literals $\Gamma_1$ and $\Gamma_2$ with $\langle W_1, \beta_1, \delta_1\rangle = \extbase(\Gamma_1, V)$ and $\langle W_2, \beta_2, \delta_2\rangle = \extbase(\Gamma_2, V)$ such that $\beta_1 \equiv_V \beta_2$, we can have a set of literals $\beta$ over a set of constants $V \cup W$ such that $\langle W\cup W_1', \beta, \delta_1'\rangle = \extbase(\Gamma_1, V)$ as well as $\langle W \cup W_2', \beta, \delta_2'\rangle = \extbase(\Gamma_2, V)$. That is, we can rename the constants in $\beta_1$ and $\beta_2$ in such a way that all their literals are syntactically identical. \begin{definition}\label{def:vabst} Given a set of constants $V$, we define the $V$-abstraction function, denoted $\alpha_V$, as a function from a set of literals to a set of literals such that \begin{enumerate} \item For any set of literals $\varphi$, $\alpha_V(\varphi) = \beta \text{ where } \langle W, \beta, \delta\rangle = \extbase(\varphi, V)$ \item For any two sets of literals $\varphi_1$ and $\varphi_2$, such that $\varphi_1 \equiv_V \varphi_2$, we have $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$. \end{enumerate} \end{definition} \sharon{I put it here. need to connect the two} \begin{definition}[NAME ME] \label{def:alpha-abstraction} The abstraction function $\alpha: C \to C$ is defined for configurations of the form $\langle s, q, pc \rangle$ where $pc$ is a conjunction of literals. For such a configuration, we define \[ \alpha(\langle s, q, pc\rangle) = \langle s,q,\alpha_{\const(q)}(pc)\rangle \] where $\const(q) = \{c \mid \exists v \cdot q[v] = c\}$ is the set of all constants that represent current variable assignments in $q$. \end{definition} Namely, the abstraction $\alpha_q$ applied to the path condition is determined by the state $q$ in the configuration. With abuse of notation we sometimes write $\alpha_q(\varphi)$ as a shorthand for $\alpha_{\const(q)}(\varphi)$. 
\sharon{note that to apply the abstraction, we rely on $pc$ being a conjunction of literals (at least this is an assumption in the current def of $V$-abstraction)} \sharon{claim that the set of reachable states of the abstract transition system is finite. Actually, it is not finite per-se because we don't restrict the fresh constants that appear in the abstraction (and in the transitions). If we say that they are existentially quantified, then it is clear. Otherwise, should we say something about it? Should we define some kind of equivalence? (basically, the width of the transition relation is infinite because we can choose different constants each time)} \ag{Under current definition of abstraction, it is finite. That was the main point behind the change to the new definition that restricts (declaratively) what constants are chosen.} \hg{rough idea. will refine later.} Given a quantifier-free formula $\varphi$, constants $a, b \in \mathcal{C}(\varphi)$ such that $a\neq b$, let $\varphi[a \rightarrowtail b]$ denote $\varphi[b\mapsto x][a \mapsto b]$, where $x$ is a constant not in $\mathcal{C}(\varphi)$. For example, if $\varphi = (a \approx c \land b \approx d)$, $\varphi[a\rightarrowtail b] = (b \approx c \wedge x \approx d)$. Given a path condition $pc$ and a state $q$, let $r_{init}(pc, q)$ denote the formula obtained by renaming all constants in $\mathcal{C}(q)$ using their initial values. $r_{init}(pc, q) = pc[q[\pv{v}]\rightarrowtail v_0]$ for all $v \in \mathcal{V}$ such that $q[\pv{v}] \neq v_0$. \begin{definition}[renaming] \label{def:rename-abstraction} The abstraction function $\alpha_r: C \to C$ is defined for configurations of the form $\langle s, q, pc \rangle$ where $pc$ is a conjunction of literals. For such a configuration, we define \[ \alpha_r(\langle s, q, pc\rangle) = \langle s,q_{init},r_{init}(pc, q)\rangle \] \end{definition} The relation $R = \{(c, \alpha_r(c)) \mid c \in C\}$ is a bisimulation relation.
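The renaming $\varphi[a \rightarrowtail b]$ is just two successive substitutions; a small Python sketch over tuple-encoded formulas (the encoding and the fixed fresh name \verb|"x"| are illustrative assumptions):

```python
# Sketch of the renaming phi[a >-> b] = phi[b -> x][a -> b]: first the
# old occurrences of b are moved out of the way into a fresh constant x,
# then a is renamed to b. Formulas are nested tuples of constant names.

def subst(t, src, dst):
    """Replace every occurrence of constant src by dst in a tuple term."""
    if t == src:
        return dst
    if isinstance(t, tuple):
        return tuple(subst(s, src, dst) for s in t)
    return t

def rename(phi, a, b, fresh="x"):
    # `fresh` must not occur in phi; here it is fixed for illustration.
    return subst(subst(phi, b, fresh), a, b)

# phi = (a ~ c  and  b ~ d), encoded as a tuple of equalities:
phi = (("a", "c"), ("b", "d"))
print(rename(phi, "a", "b"))   # (('b', 'c'), ('x', 'd')), i.e. b ~ c  and  x ~ d
```

The printed result matches the worked example in the text: $\varphi[a \rightarrowtail b] = (b \approx c \wedge x \approx d)$.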
$\alpha_q(pc)$ does not contain any constant in $\mathcal{C}(pc)\setminus \mathcal{C}(q)$. In particular, $v_0 \in \mathcal{C}(\alpha_q(pc))$ if and only if $q[\pv{v}] = v_0$. Therefore, $r_{init}(\alpha_q(pc), q)$ does not introduce any new constants: $\mathcal{C}(r_{init}(\alpha_q(pc), q)) = \mathcal{C}(\alpha_q(pc))$. The abstraction $\alpha_r(\alpha(c))$ is a finite abstraction modulo $w$. \iffalse \subsection{XXX} \sharon{this is the main abstraction - give it a name} Let $\Gamma$ be a set of literals, and $\Gamma^*$ be the deductive closure of $\Gamma$\sharon{obtained by the rules... (otherwise, a deductive closure is everything that is implied, not just using a certain proof system)}\ag{This is a repeat from background. Will be removed}. A set of literals $\beta$ is a basis for $\Gamma$, if $\beta^* = \Gamma^*$, and for any strict subset $A \subset \beta$, $A^{*} \neq \Gamma^*$. That is, $\beta$ represents the same (possibly infinite) set of literals as $\Gamma$, and $\beta$ is an irreducible such set. A basis can be computed using the congruence closure algorithm~\cite{abstractcongruenceclosure} \ag{I think we do not need definition of $\base$. Current definition is broken since it might force $\beta$ to be infinite.} Let $V$ be a set of constants, we write $\beta = \base(\Gamma, V)$ for a basis of $\Gamma$ with representatives in $V$. That is, (a) $\beta$ is a basis for $\Gamma$, and if $v \approx t \in \Gamma^*$ and $v \in V$, then there exists $u \in V$ such that (a) $u \approx t \in \beta$; (b) $u = v$ or $u \approx v \in \beta$; and (c) $\forall \ell \in \beta$ if $t \in \mathcal{T}(\ell)$ then $\ell$ is $u \approx t$. The last condition ensures that a term $t$ that is equivalent to some constant $v$ is always represented by a constant (possibly different from $v$ in case $t$ is equivalent to multiple constants in $V$). An extensional base is a refactoring of a base to introduce representative constants for every term. 
It is defined as a triple $(W, \delta, \beta) = extbase(\Gamma, V)$, where $W$ is the set of new constants introduced by refactoring, $\delta$ is a set of new definitions\sharon{maybe it would help to define $\delta$ explicitly: for every term $t$ in $base(\Gamma, V)$ except for constants, introduce an equality $w_t \approx t$, right We probably want to rewrite the defs to also use $W$ in function applications, but this can be done by computing the base: Maybe define $extbase$ by base of $base(\Gamma, V) \cup \delta$ w.r.t. $V \cup W$? I think it will immediately imply that $V$ has priority since the first base ensures that no definition will be added to something that is equal to a variable in $V$. This suggestion would address my comment below. Also, I think it is good to explain that this does two things: uses $V$ when possible, and purifies other terms using $W$. Maybe we can combine this with the def of purifier that appears later? I don't have a good concrete suggestion yet.}, and $\beta$ is as in definition of $base(\Gamma, V)$. The requirement is that $\beta \cup \delta = base(\Gamma \cup \delta, V \cup W)$. That is, the union of $\beta$ with the definitions is the base of $\Gamma$ extended with the same definitions.\sharon{need to ensure that no new literals over the terms of $\Gamma^*$ are added. That is, the projection of the closure to the terms of $\Gamma^*$ is the same (I see now that it's written later, but I don't think the current def ensures it). Maybe this can be achieved indirectly by requiring that $W$ or $\delta$ is minimal (which may also address the next sentence)} Also need to ensure that constants in $V$ are used as representatives if at all possible. It holds that if $t_1 \approx t_2 \in (\Gamma \cup \delta)^*$ and $\mathcal{T}(t_1) \cap W = \mathcal{T}(t_2) \cap W = \emptyset$ then $t_1 \approx t_2 \in \Gamma^*$. Also, $\Gamma^* \subseteq (\Gamma \cup \delta)^*$. Our two desired abstractions are then defined as follows. 
A subset $A \subseteq \Gamma$ is \emph{subterm closed} iff \[\forall \ell \in \Gamma \cdot \mathcal{T}(\ell) \subseteq \mathcal{T}(A) \implies \ell \in A\] \sharon{this doesn't capture the case where one of the terms in $\ell$ is in $\mathcal{T}(A)$ but the other isn't. In that case, do we want to require that the literal is in $A$? I don't currently see the context, so keeping this question.} \ag{This was issue you pointed before. Our abstraction goes down from equalities between ``variables'' and other terms, and subterms, but not to the ``side''.} \sharon{I am struggling with this (more precisely, with its usage and why the current def ensures what we needed when used -- see next comment). Can we simplify?} First abstraction is, given a set of constants $V$, an abstraction $\alpha_V(\Gamma)$ is the smallest subterm closed subset $A \subseteq \Gamma^*$ such that $\{ v \approx t \in base(\Gamma, V) \mid v \in V\} \subseteq A $. \sharon{this is where I don't see why subterm closed gives us what we wanted: suppose $v \approx f(t) \in base$ and $x \approx t \in base$ but $x \not \in V$ (meaning $t$ is not equivalent to any constant in $V$). Then $A$ will not include $x \approx t$ or anything equivalent. Is that really the intention?} Second abstraction is, let $V$ be a set of variables, and $(W, \delta, \beta) = extbase(\Gamma, V)$, then, $\alpha_V(\Gamma) = \beta$. \sharon{it is worth mentioning that $\beta$ can contain constants from $W$ (which are not in $\Gamma$).} \hg{This paragraph is just to make sure that we are all on the same page. I am not going to use this in any of the proofs} \textbf{Computing $\beta$} I intend to clarify the process of computing $\langle W, \beta, \delta \rangle = extbase(\Gamma, V)$. We look at the congruence closed graph $G$ representing $\Gamma^*$. $G$ is a set of labelled nodes. Each node denotes a term in $\Gamma^*$ and there are directed edges from nodes representing function applications to each of the function arguments. 
If $\Gamma^* \vdash t_1 \approx t_2$ then there is an equality edge between nodes of $t_1$ and $t_2$. If $t_1\not\approx t_2 \in \Gamma$, there is a disequlity edge between $t_1$ and $t_2$. A node corresponding to term $t$ is labelled using the representative, $rep(t)$, of the equivalence class $t$ belongs to. Representatives are in the set $V \cup W$: If $\Gamma^* \vdash v \approx t$ where $v \in V$, then $rep(t) = v$~(we use a fixed ordering on constants in $V$ to break ties), otherwise $rep(t) = w$ for some $w \in W$. In this graph, \begin{multline*} \beta = \{ v \approx u \in \Gamma^* \mid v,u \in V\} \cup \hphantom{a} \\ \{v \not\approx u \mid v = rep(t_1), u = rep(t_2), t_1 \not\approx t_2 \in \Gamma, v, u \in V\} \cup \hphantom{a}\\ \{ v \approx f(rep(t_1), \ldots, rep(t_n)) \mid v \approx f(t_1, \ldots, t_n) \in \Gamma^*, v \in V \} \end{multline*} \begin{multline*} \delta = \{w \approx u \in \Gamma^* \mid u \text{ is a constant } \not \in V, w \in W \} \cup \hphantom{a}\\ \{w \not\approx u \mid w = rep(t_1), u = rep(t_2), t_1 \not\approx t_2 \in \Gamma,\\ \hfill w \in V, u \in V \cup W\} \cup \hphantom{a}\\ \{ w \approx f(rep(t_1), \ldots, rep(t_n)) \mid w \approx f(t_1, \ldots, t_n) \in \Gamma^* \} \end{multline*} Next we define an abstraction function $\alpha$ for program configurations, such that the abstract transition system induced by $\alpha$ per \Cref{def:abstract-TS-general}, denoted $\alpha(\TS) = (C, \initconf^{\alpha},\Tr^{\alpha})$, is bisimilar to the concrete one whenever the program is coherent. The abstraction function abstracts the path condition in each configuration to only maintain certain correlations between the constants to which the program variables are mapped. 
\begin{definition}\label{def:vabst} Given a conjunction of literals $\Gamma$ and a set of constants $V \subseteq \const$, we define the $V$-abstraction, $\alpha_V(\Gamma)$, to be the extended basis of $\Gamma$ relative to $V$: \[ \alpha_V(\Gamma) = \beta \text{ where } \langle W, \beta, \delta\rangle = extbase(\Gamma, V) \] \end{definition} \fi \section{Introduction} \label{sec:intro} The theory of Equality with Uninterpreted Functions (EUF) is an important fragment of First Order Logic, defined by a set of functions, equality axioms, and congruence axioms. Its satisfiability problem is decidable. It is a core theory of most SMT solvers, used as a glue (or abstraction) for more complex theories. A closely related notion is that of Uninterpreted Programs (UP), where all basic operations are defined by uninterpreted functions. Feasibility of a UP computation is characterized by satisfiability of its path condition in EUF. UPs provide a natural abstraction layer for reasoning about software. They have been used (sometimes without explicitly being named) in equivalence checking of pipelined microprocessors~\cite{DBLP:conf/cav/BurchD94} and equivalence checking of C programs~\cite{DBLP:conf/vstte/StrichmanG05}. They also provide the foundations of the Global Value Numbering (GVN) optimization in many modern compilers~\cite{DBLP:conf/popl/Kildall73,DBLP:conf/sas/GulwaniN04,DBLP:conf/vmcai/Muller-OlmRS05}. Unlike EUF, reachability in UP is undecidable. That is, in the \emph{lingua franca} of SMT, the satisfiability of Constrained Horn Clauses over EUF is undecidable. Recently, Mathur et al.~\cite{DBLP:journals/pacmpl/MathurMV19} have proposed a variant of UPs, called \emph{coherent uninterpreted programs} (CUPs). The precise definition of coherence is rather technical (see Def.~\ref{def:cup}), but intuitively the program is restricted from depending on arbitrarily deep terms.
The key result of~\cite{DBLP:journals/pacmpl/MathurMV19} is to show that both reachability of CUPs and deciding whether a UP is coherent are decidable. This makes CUP an interesting infinite state abstraction with a \emph{decidable} reachability problem. Unfortunately, as shown by our counterexample in \Cref{fig:cup} (and described in Sec.~\ref{sec:char}), the key construction in~\cite{DBLP:journals/pacmpl/MathurMV19} is incorrect. More precisely, the proofs of~\cite{DBLP:journals/pacmpl/MathurMV19} hold only for CUPs restricted to unary functions. In this paper, we address this bug. We provide an alternative (in our view simpler) proof of decidability and extend the results from reachability to arbitrary model checking. The case of non-unary CUPs is much more complex than the unary one. This is not surprising, since similar complications arise in related results on Uniform Interpolation~\cite{DBLP:conf/cilc/GhilardiGK20} and Cover~\cite{DBLP:conf/esop/GulwaniM08} for EUF. Our key result is a logical characterization of CUPs. We show that the set of reachable states (i.e., the strongest inductive invariant) of a CUP is definable by an EUF formula, over program variables, with terms of depth at most 1. That is, the most complex term that can appear in the invariant is of the form $v \approx f(\Vec{w})$, where $v$ and $\Vec{w}$ are program variables, and $f$ is a function. This characterization has several important consequences since the number of such bounded-depth formulas is finite. Decidability of reachability, for example, follows trivially by enumerating all possible candidate inductive invariants. More importantly from a practical perspective, it leads to an efficient analysis of \emph{arbitrary} UPs. Take a UP $P$, and check whether it has a safe inductive invariant over terms of bounded depth. Since the number of such terms is finite, this can be done by implicit predicate abstraction~\cite{DBLP:conf/tacas/CimattiGMT14}.
If no invariant is found, and the counterexample is not feasible, then $P$ is not a CUP. At this point, the process either terminates, or another verification round is done with predicates over deeper terms. Crucially, this does not require knowing whether $P$ is a CUP a priori -- a problem that is itself shown in~\cite{DBLP:journals/pacmpl/MathurMV19} to be PSPACE-hard. We extend the results further and show that CUPs are bisimilar to a finite state system, showing, in particular, that arbitrary model checking for CUPs (not just reachability) is decidable. Our proofs are structured around a series of abstractions, illustrated in a commuting diagram in \cref{fig:cd}. Our key abstraction is the base abstraction $\alpha_b$. It forgets terms deeper than depth 1, while maintaining all their consequences (by using additional fresh variables). We show that $\alpha_b$ is sound and complete (i.e., preserves all properties) for CUPs (while sound, but not complete, for UPs). It is combined with a cover abstraction $\alpha_\mathbb{C}$, which we borrow from~\cite{DBLP:conf/esop/GulwaniM08}. The cover abstraction ensures that reachable states are always expressible over program variables. It serves the purpose of existential quantifier elimination, which is not available in EUF. Finally, a renaming abstraction $\alpha_r$ is a technical tool to bound the occurrences of constants in abstract reachable states. The rest of the paper is structured as follows. We review the necessary background on EUF in \cref{sec:background}. We introduce our formalization of UPs and CUPs in \cref{sec:up}. \Cref{sec:abs} presents bisimulation-inducing abstractions for UP. \Cref{sec:extabase} presents our base abstraction and shows that it induces a bisimulation for CUPs. \Cref{sec:char} develops the logical characterization of CUPs, presents our decidability results, and shows that a finite state abstraction of CUPs is computable.
We conclude the paper in \cref{sec:conclusion} with a summary of results and a discussion of open challenges and future work.

\section{Logical Characterization of CUP}
\label{sec:char}

In this section, we show that for any CUP program $P$, all reachable configurations of $P$ can be characterized using formulas in EUF, whose size is bounded by the number of program variables in $P$.
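Throughout this section, $\vdash$ denotes entailment between ground EUF formulas, which is decidable by congruence closure. As a self-contained illustration (our own sketch, not an algorithm from this paper; the term encoding and helper names are ours), the following Python snippet closes a union-find structure under congruence and checks the two entailments behind the running example of \Cref{fig:cup}: under $pc = x_0 \approx f(a_0, w) \land y_0 \approx f(b_0, w) \land c_0 \approx d_0$, adding $a_0 \approx b_0$ forces $x_0 \approx y_0$, while $pc$ alone does not.

```python
# Ground EUF entailment via congruence closure (illustrative sketch).
# Terms are constants (strings) or tuples ('f', arg1, arg2, ...).

def subterms(t, acc):
    acc.add(t)
    if isinstance(t, tuple):
        for a in t[1:]:
            subterms(a, acc)
    return acc

def entails(equalities, goal):
    """Decide whether the conjunction of ground `equalities`
    entails the ground equality `goal`."""
    terms = set()
    for l, r in equalities + [goal]:
        subterms(l, terms); subterms(r, terms)
    parent = {t: t for t in terms}
    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    for l, r in equalities:
        union(l, r)
    # Propagate congruence: f(a...) ~ f(b...) if arguments are pairwise equal.
    changed = True
    while changed:
        changed = False
        apps = [t for t in terms if isinstance(t, tuple)]
        for i in range(len(apps)):
            for j in range(i + 1, len(apps)):
                s, t = apps[i], apps[j]
                if s[0] == t[0] and len(s) == len(t) and find(s) != find(t) \
                        and all(find(a) == find(b) for a, b in zip(s[1:], t[1:])):
                    union(s, t)
                    changed = True
    return find(goal[0]) == find(goal[1])

# The abstract pc at line 9 of the running example:
pc = [('x0', ('f', 'a0', 'w')), ('y0', ('f', 'b0', 'w')), ('c0', 'd0')]
assert entails(pc + [('a0', 'b0')], ('x0', 'y0'))  # a0 ~ b0 forces x0 ~ y0
assert not entails(pc, ('x0', 'y0'))               # but not without it
assert entails(pc, ('c0', 'd0'))
```

The two assertions on $x_0 \approx y_0$ mirror the cover computation discussed later in this section, which eliminates the fresh constant $w$ from $pc$.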
\begin{theorem}[Logical Characterization of CUP]\label{th:lccup} For any CUP $P$, there exists an inductive assertion map $\eta$, ranging over EUF formulas of depth at most 1, that characterizes the reachable configurations of $P$. \end{theorem} In other words, the strongest inductive invariant of $P$ is definable by EUF formulas of depth at most 1 over the state variables; note that there are only finitely many such formulas. The first step in the proof is to compose the renaming abstraction~(\cref{def:rename-abstraction}) with the base abstraction~(\cref{def:alpha-abstraction}). We denote the composition by $\alpha_{b,\mathit{r}}$, i.e., $\alpha_{b,\mathit{r}} \triangleq \alpha_b \circ \alpha_\mathit{r}$. \Cref{cor:main} and \Cref{thm:renaming-bisimilar} ensure that $\alpha_{b, \mathit{r}}$ is sound and complete for CUPs. We split the rest of the proof into two cases: CUPs restricted to unary functions, called 1-CUPs, followed by arbitrary CUPs. \begin{proof}[\Cref{th:lccup}, 1-CUP] Let $\Sigma^1$ be a signature containing function symbols of arity at most $1$, $\Sigma^1 \triangleq (\const, \mathcal{F}^1, \{\approx, \not\approx\})$. Let $\Gamma$ be a set of literals in $\Sigma^1$ and $V$ be a set of constants. By the definition of $V$-base abstraction~(\Cref{def:vabst}), $\alpha_V(\Gamma) = \beta_\approx \land \beta_{\not\approx} \land \beta_\mathcal{F}$. $\beta_\approx$ and $\beta_{\not\approx}$ are over constants in $V$. $\beta_\mathcal{F}$ contains two types of literals: $\beta_{\mathcal{F}_V}$ and $\beta_{\mathcal{F}_W}$. $\beta_{\mathcal{F}_V}$ are depth-1 literals over constants in $V$. $\beta_{\mathcal{F}_W}$ are literals of the form $v \approx f(\Vec{w})$ where $v \in V$ and $\Vec{w}$ is a list of constants, at least one of which is in $V$, and at least one of which is not: $\Vec{w}\cap V \neq \emptyset$ and $\Vec{w}\not\subseteq V$. Since $\Gamma$ can only have unary functions, every argument list $\Vec{w}$ is a single constant, which cannot both intersect $V$ and not be contained in $V$; hence $\beta_{\mathcal{F}_W} = \emptyset$. Therefore, all literals in $\alpha_V(\Gamma)$ are of depth at most 1 and only contain constants from $V$. Hence, there are only finitely many configurations in $\alpha_{b, \mathit{r}}(\mathcal{S}_P)$.
Therefore, \[ \eta(s) \triangleq \bigvee \{ pc \mid \langle s, q_0, pc \rangle \in \Reach(\alpha_{b,\mathit{r}}(\mathcal{S}_P))\} \] is an inductive assertion map, ranging over formulas of depth at most 1, that characterizes the reachable configurations of $P$. Moreover, the size of each disjunct in $\eta(s)$ is polynomial in the number of program variables and functions in $P$. \end{proof} An interesting consequence of the above proof is that, for 1-CUPs, $\alpha_b$ is efficiently computable (since $\beta_{\mathcal{F}_W} = \emptyset$). Thus, the transition system $\alpha_{b,\mathit{r}}(\mathcal{S}_P)$ is finite, and can be constructed on-the-fly. Hence, reachability of 1-CUPs is in PSPACE. \begin{proof}[\Cref{th:lccup}, general case] In general, CUP programs can contain unary and non-unary functions. Therefore, the $V$-base abstraction~(\Cref{def:vabst}) may introduce fresh constants. We use the cover abstraction~(\Cref{def:cover-abs}) to eliminate these fresh constants. By \Cref{thm:cover-bisimilar}, $\alpha_\mathbb{C}(\alpha_{b, \mathit{r}}(\mathcal{S}_P))$ is bisimilar to $\alpha_{b, \mathit{r}}(\mathcal{S}_P)$. Notice that all the fresh constants introduced by the $V$-base abstraction are arguments to function applications. Therefore, all consequences of eliminating the fresh constants are Horn clauses of the form $\bigwedge_i (x_i \approx y_i)\Rightarrow x \approx y$, where $x_i, y_i, x, y \in \const_0$. Since the $V$-basis is of depth at most 1, its cover is also of depth at most 1. Since there are only finitely many formulas of depth at most~1 over $\const_0$, $\alpha_\mathbb{C}(\alpha_{b, \mathit{r}}(\mathcal{S}_P))$ has only finitely many configurations. Hence, \[ \eta(s) \triangleq \bigvee \{ pc \mid \langle s, q_0, pc \rangle \in \Reach(\alpha_\mathbb{C}(\alpha_{b,\mathit{r}}(\mathcal{S}_P)))\} \] is an inductive assertion map that characterizes the reachable configurations of $P$ and ranges over depth-1 formulas.
\end{proof} Consider the CUP shown in \Cref{fig:cup}. At line~9, the $\alpha_{b,\mathit{r}}$ abstraction produces the following abstract $pc$: $x_0 \approx f(a_0, w) \land y_0 \approx f(b_0, w) \land c_0 \approx d_0$. Using cover to eliminate the constant $w$ gives us $\mathbb{C} w \cdot pc = (a_0 \approx b_0 \Rightarrow x_0 \approx y_0) \land c_0 \approx d_0$, which is exactly the invariant assertion mapping $\eta(9)$ at line~9. We have seen that all CUP programs have an inductive assertion map that characterizes their reachable configurations and ranges over a finite set of formulas. Therefore, \begin{corollary}\label{cor:cupdec} CUP reachability is decidable. \end{corollary} Moreover, since $\alpha_{b,\mathit{r}}$ is computable for any finite collection of configurations, the finite bisimilar abstraction of a CUP is computable. Hence, a stronger result follows immediately: temporal logic model checking over CUPs (not just reachability) is decidable as well.

\input{bugcoherent}
\input{normalization}

\section{Soundness and Completeness} In this section we show that, given a coherent UP $P$, the finite abstract transition system $\alpha(\TS)_P = (C, \initconf^{\alpha}, \Tr^{\alpha})$ is bisimilar to the concrete transition system $\mathcal{S}_P = (C, c_0, \mathcal{R})$. The bisimulation relation is induced by the abstraction function $\alpha$ (\Cref{def:q-abstraction}). However, since the abstraction introduces fresh constants (which can be understood as existentially quantified variables), we relax the relation using a notion of equivalence that we define next. Given two EUF\xspace formulas $\varphi_1$ and $\varphi_2$ and a set of constants $V$, we say that the formulas are $1$-depth $V$-equivalent, denoted $\varphi_1 \equiv^1_V \varphi_2$, if and only if they have the same $V$-abstractions: $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$. \begin{example} Let $\varphi_1 = x \approx f(f(y))$, $\varphi_2 = x \approx f(z)$, and $V = \{x, y\}$. The $V$-abstraction of $\varphi_1$ is $\alpha_V(\varphi_1) = x \approx f(w)$ and that of $\varphi_2$ is $\alpha_V(\varphi_2) = x \approx f(w)$. Therefore, $\varphi_1 \equiv^1_V \varphi_2$, but not $\varphi_1 \equiv_V \varphi_2$. \end{example} \begin{lemma}[Idempotence]\label{lm:idem} For any set of literals $\Gamma$ and any set of constants $V$, $\alpha_V(\Gamma) = \alpha_V(\alpha_V(\Gamma))$. \end{lemma} \begin{lemma}\label{lm:frshtrm} Let $\Gamma$ be a set of literals, $V$ a set of constants, $\Vec{y} \subseteq V$, $x'$ a constant s.t. $x' \not\in \mathcal{C}(\Gamma) \cup V$, $V' = V \cup \{x'\}$, and $f(\Vec{y})$ be a term not in $\mathcal{T}(\Gamma)$. Then, \[ \alpha_V(\Gamma) ~\equiv_V~ \alpha_{V'}(\Gamma \land x' \approx f(\Vec{y})) \] \end{lemma} \begin{proof} Let $\psi$ be a formula s.t. $\mathcal{C}(\psi) \subseteq V$. We show that $\alpha_V(\Gamma) \land \psi \models \bot$ iff $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y})) \land \psi \models \bot$.
By definition of $V$-abstraction, $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y})) = \alpha_V(\Gamma) \cup X_{def}$, where $X_{def}$ is \begin{multline*} \{x' \bowtie t \mid \alpha_V(\Gamma) \vdash f(\Vec{y}) \bowtie t, \depth(t) = 1, \mathcal{C}(t) \cap \mathcal{C}(\Gamma) \subseteq V\} \end{multline*} Since $\alpha_V(\Gamma) \subseteq \alpha_{V'}(\Gamma \land x' \approx f(\Vec{y}))$, if $\alpha_V(\Gamma) \land \psi \models \bot$, then $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y})) \land \psi \models \bot$. For the other direction, assume $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y})) \land \psi \models \bot$. Since $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y})) \vdash x' \approx f(\Vec{y})$, $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y}))[x'\mapsto f(\Vec{y})] \land \psi \models \bot$. Every literal of $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y}))[x'\mapsto f(\Vec{y})]$ is entailed by $\alpha_V(\Gamma)$ (immediate for $X_{def}[x'\mapsto f(\Vec{y})]$, by the definition of $X_{def}$). Therefore, $\alpha_V(\Gamma) \land \psi \models \bot$. Hence, $\alpha_V(\Gamma) ~\equiv_V~ \alpha_{V'}(\Gamma \land x' \approx f(\Vec{y}))$. \end{proof} \begin{lemma}\label{lm:frshvar} Let $\Gamma$ be a set of literals, $V$ be any set of constants, $y \in V$, $x'$ be a constant s.t. $x' \not\in \mathcal{C}(\Gamma) \cup V$, and $V' = V \cup \{x'\}$. Then, \[ \alpha_V(\Gamma) ~\equiv_V~ \alpha_{V'}(\Gamma \land x' \approx y) \] \end{lemma} \begin{proof} Let $\psi$ be a formula s.t. $\mathcal{C}(\psi) \subseteq V$. We show that $\alpha_V(\Gamma) \land \psi \models \bot$ iff $\alpha_{V'}(\Gamma \land x' \approx y) \land \psi \models \bot$. Let $L = \{\ell \mid \ell[x'\mapsto y] \in \alpha_V(\Gamma)\}$. By definition of $V$-abstraction, $\alpha_{V'}(\Gamma \land x' \approx y) = \alpha_V(\Gamma) \cup L$. Since $\alpha_V(\Gamma) \subseteq \alpha_{V'}(\Gamma \land x' \approx y)$, if $\alpha_V(\Gamma) \land \psi \models \bot$, then $\alpha_{V'}(\Gamma \land x' \approx y) \land \psi \models \bot$. For the other direction, assume $\alpha_{V'}(\Gamma \land x' \approx y) \land \psi \models \bot$.
Since $\alpha_{V'}(\Gamma \land x' \approx y) \vdash x' \approx y$, $\alpha_{V'}(\Gamma \land x' \approx y)[x'\mapsto y] \land \psi \models \bot$. By the definition of $L$, $\alpha_{V'}(\Gamma \land x' \approx y)[x'\mapsto y] = \alpha_V(\Gamma)$, so $\alpha_V(\Gamma) \land \psi \models \bot$. Hence, $\alpha_V(\Gamma) ~\equiv_V~ \alpha_{V'}(\Gamma \land x' \approx y)$. \end{proof} \begin{lemma}\label{lm:propequiv} Let $V$ be a set of constants, $\varphi_1$, $\varphi_2$ be two sets of literals s.t. $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, $y \in V$, $\Vec{y}\subseteq V$, $x'$ be a constant s.t. $x' \not\in \mathcal{C}(\varphi_1) \cup \mathcal{C}(\varphi_2) \cup V$, $V' = V \cup\{x'\}$, and $f(\Vec{y})$ be a term not in $\mathcal{T}(\varphi_1) \cup \mathcal{T}(\varphi_2)$. Then, \begin{enumerate}[(1)] \item $\alpha_{V'}(\varphi_1 \land x' \approx y) = \alpha_{V'}(\varphi_2 \land x' \approx y)$ \item $\alpha_{V'}(\varphi_1 \land x' \approx f(\Vec{y})) = \alpha_{V'}(\varphi_2 \land x' \approx f(\Vec{y}))$ \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(1)] \item Let $\beta_1 = \alpha_V(\varphi_1)$, $\beta_1' = \alpha_{V'}(\varphi_1 \land x' \approx y)$, $\beta_2 = \alpha_V(\varphi_2)$, and $\beta_2' = \alpha_{V'}(\varphi_2 \land x' \approx y)$. Let $\psi$ be a formula s.t. $\mathcal{C}(\psi) \subseteq V'$. We show that $\beta_1' \land \psi \models \bot$ iff $\beta_2'\land \psi \models \bot$; we show one direction, the other is symmetric. Assume $\beta_1' \land \psi \models \bot$. By definition of $V$-abstraction, both $\beta_1' \vdash x' \approx y$ and $\beta_2' \vdash x' \approx y$. Since $\beta_1'\land \psi \models \bot$ and $\beta_1' \vdash x' \approx y$, we have $\beta_1'\land \psi[x'\mapsto y] \models \bot$. By Lemma~\ref{lm:frshvar}, $\beta_1' \equiv_V \beta_1$. Since $\beta_1'\land \psi[x'\mapsto y] \models \bot$, $\mathcal{C}(\psi[x' \mapsto y]) \subseteq V$, and $\beta_1' \equiv_V \beta_1$, we get $\beta_1 \land \psi[x' \mapsto y] \models \bot$. Since $\beta_1 = \beta_2$, $\beta_2 \land \psi[x' \mapsto y] \models \bot$.
By Lemma~\ref{lm:frshvar}, $\beta_2' \equiv_V \beta_2$. Therefore, $\beta_2' \land \psi[x' \mapsto y] \models \bot$. Since $\beta_2' \vdash x' \approx y$ and $\beta_2' \land \psi[x' \mapsto y] \models \bot$, $\beta_2' \land \psi \models \bot$. Therefore, $\beta_1' \equiv_{V'} \beta_2'$, and hence $\alpha_{V'}(\beta_1') = \alpha_{V'}(\beta_2')$. By Lemma~\ref{lm:idem}, $\beta_1' = \beta_2'$. \item Let $\beta_1 = \alpha_V(\varphi_1)$, $\beta_1' = \alpha_{V'}(\varphi_1 \land x' \approx f(\Vec{y}))$, $\beta_2 = \alpha_V(\varphi_2)$, and $\beta_2' = \alpha_{V'}(\varphi_2 \land x' \approx f(\Vec{y}))$. Let $\psi$ be a formula s.t. $\mathcal{C}(\psi) \subseteq V'$. We have to show that $\beta_1' \land \psi \models \bot$ iff $\beta_2'\land \psi \models \bot$; we prove one direction, the other is symmetric. Assume $\beta_1' \land \psi \models \bot$. By definition of $V$-abstraction, both $\beta_1' \vdash x' \approx f(\Vec{y})$ and $\beta_2' \vdash x' \approx f(\Vec{y})$. Since $\beta_1'\land \psi \models \bot$ and $\beta_1' \vdash x' \approx f(\Vec{y})$, we get $\beta_1'\land \psi[x'\mapsto f(\Vec{y})] \models \bot$. By Lemma~\ref{lm:frshtrm}, $\beta_1' \equiv_V \beta_1$. Since $\beta_1'\land \psi[x'\mapsto f(\Vec{y})] \models \bot$, $\mathcal{C}(\psi[x' \mapsto f(\Vec{y})]) \subseteq V$, and $\beta_1' \equiv_V \beta_1$, we get $\beta_1 \land \psi[x' \mapsto f(\Vec{y})] \models \bot$. Since we are given that $\beta_1 = \beta_2$, $\beta_2 \land \psi[x' \mapsto f(\Vec{y})] \models \bot$. By Lemma~\ref{lm:frshtrm}, $\beta_2' \equiv_V \beta_2$. Therefore, $\beta_2' \land \psi[x' \mapsto f(\Vec{y})] \models \bot$. Since $\beta_2' \vdash x' \approx f(\Vec{y})$ and $\beta_2' \land \psi[x' \mapsto f(\Vec{y})] \models \bot$, $\beta_2' \land \psi \models \bot$. Therefore, $\beta_1' \equiv_{V'} \beta_2'$, and hence $\alpha_{V'}(\beta_1') = \alpha_{V'}(\beta_2')$. By Lemma~\ref{lm:idem}, $\beta_1' = \beta_2'$.
\end{enumerate} \end{proof} \begin{lemma}\label{lm:subset} Let $V$ be a set of constants, and $\varphi_1$, $\varphi_2$ be two sets of literals s.t. $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$. Then, for any $U \subseteq V$, $\alpha_U(\varphi_1) = \alpha_U(\varphi_2)$. \end{lemma} \begin{proof} Follows from $\alpha_U(\alpha_V(\varphi_1)) = \alpha_U(\alpha_V(\varphi_2))$ and $\alpha_U(\alpha_V(\varphi_i)) = \alpha_U(\varphi_i)$, for $i \in \{1, 2\}$. \end{proof} Lemma~\ref{lm:sprtrm} and Lemma~\ref{lm:trmingam} outline important properties of EUF that are useful in the proof of Thm.~\ref{thm:main}. \begin{lemma}\label{lm:sprtrm} Let $\Gamma$ be a set of literals, $a$ and $b$ be constants in $\const(\Gamma)$, and $t_1$ and $t_2$ be two terms in $\mathcal{T}(\Sigma)$ s.t. $\Gamma \not\vdash (t_1 \approx t_2)$. Then, $(\Gamma \land a \approx b) \vdash (t_1 \approx t_2)$ iff there are two superterms, $s_1[a]$ and $s_2[b]$, of $a$ and $b$, respectively, s.t. (i) $\Gamma \vdash (t_1 \approx s_1[a])$, (ii) $\Gamma \vdash (t_2 \approx s_2[b])$, and (iii) $(\Gamma \land a \approx b ) \vdash (s_1[a] \approx s_2[b])$. \end{lemma} \begin{lemma}\label{lm:trmingam} Let $\Gamma$ be a set of literals and $v \in \mathcal{C}(\Gamma)$. If $\Gamma \vdash v \approx t$ for some term $t \in \mathcal{T}(\Sigma)$, then there exists a term $t' \in \mathcal{T}(\Gamma)$ s.t. $\Gamma \vdash v \approx t'$. \end{lemma} \begin{definition}[Purifier] \label{def:purifier} We say that a set of constants $V$ is a \emph{purifier} of a constant $a$ in $\Gamma$ if $a \in V$ and, for every term $t[a] \in \mathcal{T}(\Gamma)$, there exists $v \in V$ s.t. $\Gamma \vdash v \approx t[a]$. \end{definition} \begin{example}[Purifier] Let $\Gamma = \{ c \approx f(a), d \approx f(b), d \not\approx e\}$. Then, $V = \{a, b, c\}$ is a purifier for $a$, but not a purifier for $b$, even though $b \in V$.
\end{example} \begin{lemma}\label{lm:pur} If $V$ is a purifier for $x$ in $\Gamma$, then $V$ is a purifier for $x$ in $\alpha_V(\Gamma)$. \end{lemma} \begin{lemma}\label{lm:Visenough} Let $\Gamma$ be a set of literals, $x$ and $y$ be two constants in $\const(\Gamma)$, $V \subseteq \const(\Gamma)$ be a purifier for both $x$ and $y$, and $\beta = \alpha_V(\Gamma)$. Then, for any $u, v \in V$ \[ (\Gamma \land x \approx y \vdash u \approx v) \iff ( \beta \land x \approx y \vdash u \approx v) \] \end{lemma} \begin{proof} By the definition of $\beta$, $(\Gamma \vdash u \approx v) \iff (\beta \vdash u \approx v)$. Thus, assume that $\Gamma \not \vdash u \approx v$. The if direction is trivial, since $\beta$ is weaker than $\Gamma$. For the only-if direction, by Lemma~\ref{lm:sprtrm}, there are superterms $s_1[x]$ and $s_2[y]$ of $x$ and $y$, respectively, s.t. $\Gamma \vdash \{u \approx s_1[x], v \approx s_2[y]\}$, and $(\Gamma \land x \approx y) \vdash (s_1[x] \approx s_2[y])$. The proof proceeds by induction on the maximum depth of $s_1$ and $s_2$. The base case, $s_1 = x$ and $s_2 = y$, is trivial. For the inductive case, we show one sub-case; the others are similar. Assume $s_1 = f(t_1[x], \vec{r})$ and $s_2 = f(t_2[y], \vec{r})$, for some terms $t_1[x]$, $t_2[y]$, $\vec{r}$, and a function $f$. Furthermore, $(\Gamma \land x \approx y) \vdash t_1[x] \approx t_2[y]$. Since $V$ is a purifier for $\{x, y\}$, there are $x', y' \in V$ s.t. $\Gamma \vdash \{x' \approx t_1[x], y' \approx t_2[y]\}$, and $\Gamma \vdash \{u \approx f(x', \vec{r}), v \approx f(y', \vec{r})\}$. By construction, $\beta \vdash \{u \approx f(x', \vec{w}), v \approx f(y', \vec{w})\}$, for some constants $\vec{w} \in \const(\beta)$. By the inductive hypothesis, $(\beta \land x \approx y) \vdash x' \approx y'$. Hence, by congruence, $(\beta \land x \approx y) \vdash u \approx v$.
\end{proof} \begin{lemma}\label{lm:Visenoughdeq} Let $\Gamma$ be a set of literals, $x$ and $y$ be two constants in $\const(\Gamma)$, $V \subseteq \const(\Gamma)$ be a purifier for both $x$ and $y$, and $\beta = \alpha_V(\Gamma)$. Then, for any $u, v \in V$ \[ (\Gamma \land x \approx y \vdash u \not\approx v) \iff ( \beta \land x \approx y \vdash u \not\approx v) \] \end{lemma} \begin{proof} By the definition of $\beta$, $(\Gamma \vdash u \not\approx v) \iff (\beta \vdash u \not\approx v)$. Thus, assume $\Gamma \not \vdash u \not\approx v$. Then, there is a term $t \in \mathcal{T}(\Sigma)$ s.t. $\Gamma \vdash u \not\approx t$ and $(\Gamma \land x \approx y) \vdash v \approx t$. W.l.o.g., we can assume that $t \in \mathcal{T}(\Gamma)$~(Lemma~\ref{lm:trmingam}). Since $V$ is a purifier for $y$, there is a $y' \in V$ s.t. $\Gamma \vdash y' \approx t$. Therefore, $(\Gamma \land x \approx y) \vdash v \approx y'$ and $\Gamma \vdash u \not\approx y'$. By Lemma~\ref{lm:Visenough}, $(\beta \land x \approx y) \vdash v \approx y'$. By the definition of $\beta$, $\beta \vdash u \not\approx y'$. Therefore, $(\beta \land x \approx y) \vdash u \not\approx v$. \end{proof} \begin{lemma}\label{lm:eqpres1v} Let $V$ be a set of constants, $\varphi_1$ and $\varphi_2$ be two sets of literals s.t. $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, and $V$ be a purifier for $\{x, y\}$ in both $\varphi_1$ and $\varphi_2$. Then, $\alpha_V(\varphi_1 \land x \approx y) = \alpha_V(\varphi_2 \land x \approx y)$. \end{lemma} \begin{proof} Write $\alpha_V(\varphi_1 \land x \approx y) = \alpha_V(\varphi_1) \cup L_{\approx}\cup L_{\not\approx} \cup L_{\mathcal{F}}$, where $L_{\approx}$ is a set of equalities between constants in $V$, $L_{\not\approx}$ is a set of disequalities between constants in $V$, and $L_{\mathcal{F}}$ is a set of equalities of the form $v \approx f(\Vec{w})$, where $v \in V$ and $\Vec{w}$ is a list of constants, some of which are in $V$, and the rest are not in $\mathcal{C}(\varphi_1)\cup\mathcal{C}(\varphi_2)$. By Lemma~\ref{lm:Visenough} and Lemma~\ref{lm:Visenoughdeq}, $\alpha_V(\varphi_1)\land x \approx y \vdash \ell$ for all $\ell \in L_{\approx}\cup L_{\not\approx}$; an analogous argument shows the same for all $\ell \in L_{\mathcal{F}}$. Thus, for all $\ell \in L_{\approx} \cup L_{\not\approx} \cup L_{\mathcal{F}}$, $\alpha_V(\varphi_1) \land x \approx y \vdash \ell$ and, since $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, also $\alpha_V(\varphi_2) \land x \approx y \vdash \ell$. Therefore, $\alpha_V(\varphi_1 \land x \approx y) = \alpha_V(\varphi_2 \land x \approx y)$. \end{proof} \begin{lemma}\label{lm:deqpres1v} Let $V$ be a set of constants s.t. $x, y \in V$, and $\varphi_1$ and $\varphi_2$ be two sets of literals s.t. $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$.
Then, $\alpha_V(\varphi_1 \land x \not\approx y) = \alpha_V(\varphi_2 \land x \not\approx y)$. \end{lemma} \begin{proof} Let $L = \{x \not\approx u \mid y \approx u \in \alpha_V(\varphi_1), u \in V\}$. By definition, $\alpha_V(\varphi_1 \land x \not\approx y) = \alpha_V(\varphi_1) \cup L$ and, likewise, $\alpha_V(\varphi_2 \land x \not\approx y) = \alpha_V(\varphi_2) \cup L$. Since $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, $\alpha_V(\varphi_1 \land x \not\approx y) = \alpha_V(\varphi_2 \land x \not\approx y)$. \end{proof} For a state $q$, we write $\varphi \equiv_q \psi$ for $\varphi \equiv_{\const(q)} \psi$. \begin{theorem} \label{thm:main} Let $\langle s, q, pc\rangle$ be a reachable configuration of a CUP $P$. Then, \begin{enumerate}[(1)] \item $\langle s, q, pc\rangle \to \langle s', q', pc \land pc'\rangle$ iff\\ $\langle s, q, \alpha_q(pc)\rangle \to \langle s', q', \alpha_{q}(pc) \land pc'\rangle$, and \item $\alpha_{q'}(pc \land pc') = \alpha_{q'}(\alpha_q(pc) \land pc')$. \end{enumerate} \end{theorem} \begin{proof} Throughout the proof, we use $x = q[\pv{x}]$ and $y = q[\pv{y}]$.
We only show the proof of part (1) for $s = \textbf{assume}(\pv{x} \bowtie \pv{y})$ since the other cases are trivial. The only-if direction follows since $\alpha_q(pc)$ is weaker than $pc$. For the if direction, $pc \not\vdash \bot$ since it is part of a reachable configuration. Then, there are two cases: \begin{itemize} \item case $s = \textbf{assume}(\pv{x}=\pv{y})$. Assume $(pc \land x \approx y) \models \bot$. Then, $(pc \land x \approx y) \vdash t_1 \approx t_2$ and $pc \vdash t_1 \not\approx t_2$ for some $t_1, t_2 \in \mathcal{T}(pc)$. By Lemma~\ref{lm:sprtrm}, in any new equality $(t_1 \approx t_2)$ that is implied by $pc \land (x \approx y)$ (but not by $pc$), $t_1$ and $t_2$ are equivalent (in $pc$) to superterms of $x$ or $y$. By the early assume property of CUP, $\const(q)$ purifies $\{x, y\}$ in $pc$. Therefore, every superterm of $x$ or $y$ is equivalent (in $pc$) to some constant in $\const(q)$. Thus, $(pc \land x \approx y) \vdash u \approx v$ and $(pc \land x \approx y) \vdash u \not\approx v$ for some $u, v \in \const(q)$. By Lemma~\ref{lm:Visenough}, $(\alpha_q(pc) \land x \approx y) \vdash u \approx v$. By Lemma~\ref{lm:Visenoughdeq}, $(\alpha_q(pc) \land x \approx y) \vdash u \not\approx v$. Thus, $(\alpha_q(pc)\land x \approx y) \models \bot$. \item case $s = \textbf{assume}(\pv{x}\neq\pv{y})$. $(pc \land x \not\approx y) \models \bot$ if and only if $pc \vdash x \approx y$. Since $x, y \in \const(q)$, $\alpha_q(pc) \vdash x \approx y$. \end{itemize} For part (2), we only show the cases for assume and assignment statements; the other cases are trivial. \begin{itemize} \item case $s = \textbf{assume}(\pv{x} = \pv{y})$. Since $q' = q$, we need to show that $\alpha_{q}(pc \land x \approx y) = \alpha_{q}(\alpha_q(pc) \land x \approx y)$. By the early assume property, $\const(q)$ purifies $\{x, y\}$ in $pc$. By Lemma~\ref{lm:pur}, $\const(q)$ purifies $\{x, y\}$ in $\alpha_q(pc)$ as well.
By Lemma~\ref{lm:idem}, $\alpha_q(pc) = \alpha_q(\alpha_q(pc))$. By Lemma~\ref{lm:eqpres1v}, $\alpha_q(pc \land x \approx y) = \alpha_q(\alpha_q(pc) \land x \approx y)$. \item case $s = \textbf{assume}(\pv{x} \neq \pv{y})$. Since $q' = q$, we need to show that $\alpha_{q}(pc \land x \not\approx y) = \alpha_{q}(\alpha_q(pc) \land x \not\approx y)$. By Lemma~\ref{lm:idem}, $\alpha_q(pc) = \alpha_q(\alpha_q(pc))$. By Lemma~\ref{lm:deqpres1v}, $\alpha_{q}(pc \land x \not\approx y) = \alpha_{q}(\alpha_q(pc) \land x \not\approx y)$. \item case $s = \pv{x} \gets \pv{y}$. W.l.o.g., assume $q' = q[\pv{x}\mapsto x']$, for some constant $x' \not\in \mathcal{C}(pc)$. By Lemma~\ref{lm:idem}, $\alpha_q(pc) = \alpha_q(\alpha_q(pc))$. By Lemma~\ref{lm:propequiv} (case~1), $\alpha_{\const(q) \cup \{x'\}}(pc \land x' \approx y) = \alpha_{\const(q) \cup \{x'\}}(\alpha_q(pc) \land x' \approx y)$. By Lemma~\ref{lm:subset}, $\alpha_{q'}(pc \land x' \approx y ) = \alpha_{q'}(\alpha_q(pc) \land x' \approx y)$, since $\const(q') \subseteq (\const(q) \cup \{x'\})$. \item case $s = \pv{x} \gets f(\Vec{y})$. W.l.o.g., $q' = q[\pv{x}\mapsto x']$ for some constant $x' \not\in \mathcal{C}(pc)$. There are two cases: (a) there is a term $t \in \mathcal{T}(pc)$ s.t. $pc \vdash t \approx f(\Vec{y})$, or (b) there is no such term $t$. \begin{enumerate}[label=(\alph*)] \item By the memoization property of CUP, there is a program variable $\pv{z}$ s.t. $q[\pv{z}] = z$ and $pc \vdash z \approx f(\Vec{y})$. Therefore, by definition of $\alpha_q$, $\alpha_q(pc) \vdash z \approx f(\Vec{y})$. The rest of the proof is identical to the case of $s = \pv{x} \gets \pv{z}$. \item The term $f(\Vec{y})$ is not in $\mathcal{T}(pc)$. (Note that if $pc$ contains a term $f(\Vec{z})$ with $pc \vdash \Vec{z} \approx \Vec{y}$, then $pc \vdash f(\Vec{z}) \approx f(\Vec{y})$, and, by memoization, case~(a) applies.) Therefore, $f(\Vec{y})$ is not in $\mathcal{T}(\alpha_q(pc))$ either. By Lemma~\ref{lm:idem}, $\alpha_q(pc) = \alpha_q(\alpha_q(pc))$. By Lemma~\ref{lm:propequiv} (case~2), $\alpha_{\const(q)\cup \{x'\}}(pc \land x \approx f(\Vec{y})) = \alpha_{\const(q)\cup \{x'\}}(\alpha_q(pc) \land x \approx f(\Vec{y}))$. By Lemma~\ref{lm:subset}, $\alpha_{q'}(pc \land x \approx f(\Vec{y})) = \alpha_{q'}(\alpha_q(pc) \land x \approx f(\Vec{y}))$ since $\const(q') \subseteq (\const(q) \cup \{x'\})$. \end{enumerate} \end{itemize} \end{proof} \begin{corollary} The relation $R = \{(c, \alpha(c))\}$ is a bisimulation relation for CUP. \end{corollary} \begin{corollary} The relation $R = \{(c, \alpha_r(\alpha(c)))\}$ is a finite bisimulation relation for CUP. \end{corollary} \section{Proving bisimulation} In this section we show that, given a coherent UPL program $P$, the abstract transition system $\alpha(\TS)_P = (C, \initconf^{\alpha}, \Tr^{\alpha})$ is bisimilar to the concrete transition system $\mathcal{S}_P = (C, c_0, \mathcal{R})$. The bisimulation relation is induced by the abstraction function $\alpha$ (\Cref{def:q-abstraction}). However, since the abstraction introduces fresh constants (which can be understood as existentially quantified variables), we relax the relation using a notion of equivalence that we define next. Given two EUF\xspace formulas $\varphi_1$ and $\varphi_2$ and a set of constants $V$, we say that the formulas are $1$-depth $V$-equivalent, denoted $\varphi_1 \equiv^1_V \varphi_2$, if and only if they have the same $V$-abstractions: $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$. \begin{example} Let $\varphi_1 = x \approx f(f(y))$, $\varphi_2 = x \approx f(z)$, and $V = \{x, y\}$. The $V$-abstraction of $\varphi_1$ is $\alpha_V(\varphi_1) = x \approx f(w)$ and that of $\varphi_2$ is $\alpha_V(\varphi_2) = x \approx f(w)$. Therefore, $\varphi_1 \equiv^1_V \varphi_2$, but not $\varphi_1 \equiv_V \varphi_2$. \end{example} \begin{lemma}[Idempotence]\label{lm:idem} For any set of literals $\Gamma$ and any set of constants $V$, $\alpha_V(\Gamma) = \alpha_V(\alpha_V(\Gamma))$.
\end{lemma} \begin{lemma}\label{lm:frshtrm} Let $\Gamma$ be a set of literals, $V$ a set of constants, $\Vec{y} \subseteq V$, $x'$ a constant s.t. $x' \not\in \mathcal{C}(\Gamma) \cup V$, $V' = V \cup \{x'\}$, and $f(\Vec{y})$ be a term not in $\mathcal{T}(\Gamma)$. Then, \[ \alpha_V(\Gamma) ~\equiv_V~ \alpha_{V'}(\Gamma \land x' \approx f(\Vec{y})) \] \end{lemma} \begin{proof} Let $\psi$ be a formula s.t. $\mathcal{C}(\psi) \subseteq V$. We show that $\alpha_V(\Gamma) \land \psi \models \bot$ iff $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y})) \land \psi \models \bot$. By definition of $V$-abstraction, $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y})) = \alpha_V(\Gamma) \cup X_{def}$, where $X_{def}$ is \begin{multline*} \{x' \bowtie t \mid \alpha_V(\Gamma) \vdash f(\Vec{y}) \bowtie t, \depth(t) = 1, \mathcal{C}(t) \cap \mathcal{C}(\Gamma) \subseteq V\} \end{multline*} Since $\alpha_V(\Gamma) \subseteq \alpha_{V'}(\Gamma \land x' \approx f(\Vec{y}))$, if $\alpha_V(\Gamma) \land \psi \models \bot$, then $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y})) \land \psi \models \bot$. For the other direction, assume $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y})) \land \psi \models \bot$. Since $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y})) \vdash x' \approx f(\Vec{y})$, $\alpha_{V'}(\Gamma \land x' \approx f(\Vec{y}))[x'\mapsto f(\Vec{y})] \land \psi \models \bot$. Therefore, $\alpha_V(\Gamma) \land \psi \models \bot$. Hence, $\alpha_V(\Gamma) ~\equiv_V~ \alpha_{V'}(\Gamma \land x' \approx f(\Vec{y}))$. \end{proof} \begin{lemma}\label{lm:frshvar} Let $\Gamma$ be a set of literals, $V$ be any set of constants, $y \in V$, $x'$ be a constant s.t. $x' \not \in \mathcal{C}(\Gamma) \cup V$, and $V' = V \cup \{x'\}$. Then, \[ \alpha_V(\Gamma) ~\equiv_V~ \alpha_{V'}(\Gamma \land x' \approx y) \] \end{lemma} \begin{proof} Let $\psi$ be a formula s.t. $\mathcal{C}(\psi) \subseteq V$.
We show that $\alpha_V(\Gamma) \land \psi \models \bot$ iff $\alpha_{V'}(\Gamma \land x' \approx y) \land \psi \models \bot$. Let $L = \{\ell \mid \ell[x'\mapsto y] \in \alpha_V(\Gamma)\}$. By definition of $V$-abstraction, $\alpha_{V'}(\Gamma \land x' \approx y) = \alpha_V(\Gamma) \cup L$. Since $\alpha_V(\Gamma) \subseteq \alpha_{V'}(\Gamma \land x' \approx y)$, if $\alpha_V(\Gamma) \land \psi \models \bot$, then $\alpha_{V'}(\Gamma \land x' \approx y) \land \psi \models \bot$. For the other direction, assume $\alpha_{V'}(\Gamma \land x' \approx y) \land \psi \models \bot$. Since $\alpha_{V'}(\Gamma \land x' \approx y) \vdash x' \approx y$, $\alpha_{V'}(\Gamma \land x' \approx y)[x'\mapsto y] \land \psi \models \bot$. Therefore, we have $\alpha_V(\Gamma) \land \psi \models \bot$. Hence, $\alpha_V(\Gamma) ~\equiv_V~ \alpha_{V'}(\Gamma \land x' \approx y)$. \end{proof} \begin{lemma}\label{lm:propequiv} Let $V$ be a set of constants, $\varphi_1$, $\varphi_2$ be two sets of literals s.t. $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, $y \in V$, $\Vec{y}\subseteq V$, $x'$ be a constant s.t. $x' \not \in \mathcal{C}(\varphi_1) \cup \mathcal{C}(\varphi_2) \cup V$, $V' = V \cup\{x'\}$, and $f(\Vec{y})$ be a term not in $\mathcal{T}(\varphi_1) \cup \mathcal{T}(\varphi_2)$. Then, \begin{enumerate}[(1)] \item $\alpha_{V'}(\varphi_1 \land x' \approx y) = \alpha_{V'}(\varphi_2 \land x' \approx y)$ \item $\alpha_{V'}(\varphi_1 \land x' \approx f(\Vec{y})) = \alpha_{V'}(\varphi_2 \land x' \approx f(\Vec{y}))$ \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(1)] \item Let $\beta_1 = \alpha_V(\varphi_1)$ and $\beta_1' = \alpha_{V'}(\varphi_1 \land x' \approx y)$, and $\beta_2 = \alpha_V(\varphi_2)$ and $\beta_2' = \alpha_{V'}(\varphi_2 \land x' \approx y)$. Let $\psi$ be a formula s.t. $\mathcal{C}(\psi) \subseteq V'$. We show that $\beta_1' \land \psi \models \bot$ iff $\beta_2'\land \psi \models \bot$. We show one direction; the other one is symmetric.
Assume $\beta_1' \land \psi \models \bot$. By definition of $V$-abstraction, both $\beta_1' \vdash x' \approx y$ and $\beta_2' \vdash x' \approx y$. Since $\beta_1'\land \psi \models \bot$ and $\beta_1' \vdash x' \approx y$, we have $\beta_1'\land \psi[x'\mapsto y] \models \bot$. By Lemma~\ref{lm:frshvar}, $\beta_1' \equiv_V \beta_1$. Since $\beta_1'\land \psi[x'\mapsto y] \models \bot$, $\mathcal{C}(\psi[x' \mapsto y]) \subseteq V$, and $\beta_1' \equiv_V \beta_1$, we have $\beta_1 \land \psi[x' \mapsto y] \models \bot$. Since $\beta_1 = \beta_2$, $\beta_2 \land \psi[x' \mapsto y] \models \bot$. By Lemma~\ref{lm:frshvar}, $\beta_2' \equiv_V \beta_2$. Therefore, $\beta_2' \land \psi[x' \mapsto y] \models \bot$. Since $\beta_2' \vdash x' \approx y$ and $\beta_2' \land \psi[x' \mapsto y] \models \bot$, $\beta_2' \land \psi \models \bot$. Therefore, $\beta_1' \equiv_{V'} \beta_2'$. Therefore, $\alpha_{V'}(\beta_1') = \alpha_{V'}(\beta_2')$. By Lemma~\ref{lm:idem}, $\beta_1' = \beta_2'$. \item Let $\beta_1 = \alpha_V(\varphi_1)$ and $\beta_1' = \alpha_{V'}(\varphi_1 \land x' \approx f(\Vec{y}))$, and $\beta_2 = \alpha_V(\varphi_2)$ and $\beta_2' = \alpha_{V'}(\varphi_2 \land x' \approx f(\Vec{y}))$. Let $\psi$ be a formula s.t. $\mathcal{C}(\psi) \subseteq V'$. We have to show that $\beta_1' \land \psi \models \bot$ iff $\beta_2'\land \psi \models \bot$. We prove one direction; the other direction is symmetric. Assume $\beta_1' \land \psi \models \bot$. By definition of $V$-abstraction, both $\beta_1' \vdash x' \approx f(\Vec{y})$ and $\beta_2' \vdash x' \approx f(\Vec{y})$. Since $\beta_1'\land \psi \models \bot$ and $\beta_1' \vdash x' \approx f(\Vec{y})$, we have $\beta_1'\land \psi[x'\mapsto f(\Vec{y})] \models \bot$. By Lemma~\ref{lm:frshtrm}, $\beta_1' \equiv_V \beta_1$.
Since $\beta_1'\land \psi[x'\mapsto f(\Vec{y})] \models \bot$, $\mathcal{C}(\psi[x' \mapsto f(\Vec{y})]) \subseteq V$, and $\beta_1' \equiv_V \beta_1$, we have $\beta_1 \land \psi[x' \mapsto f(\Vec{y})] \models \bot$. Since we are given that $\beta_1 = \beta_2$, $\beta_2 \land \psi[x' \mapsto f(\Vec{y})] \models \bot$. By Lemma~\ref{lm:frshtrm}, $\beta_2' \equiv_V \beta_2$. Therefore, $\beta_2' \land \psi[x' \mapsto f(\Vec{y})] \models \bot$. Since $\beta_2' \vdash x' \approx f(\Vec{y})$ and $\beta_2' \land \psi[x' \mapsto f(\Vec{y})] \models \bot$, $\beta_2' \land \psi \models \bot$. Therefore, $\beta_1' \equiv_{V'} \beta_2'$. Therefore, $\alpha_{V'}(\beta_1') = \alpha_{V'}(\beta_2')$. By Lemma~\ref{lm:idem}, $\beta_1' = \beta_2'$. \end{enumerate} \end{proof} \begin{lemma}\label{lm:subset} Let $V$ be a set of constants, $\varphi_1$, $\varphi_2$ be two sets of literals s.t. $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$. Then, for any $U \subseteq V$, $\alpha_U(\varphi_1) = \alpha_U(\varphi_2)$. \end{lemma} \begin{proof} Follows from $\alpha_U(\alpha_V(\varphi_1)) = \alpha_U(\alpha_V(\varphi_2))$ and $\alpha_U(\alpha_V(\varphi_i)) = \alpha_U(\varphi_i)$, for $i \in \{1, 2\}$. \end{proof} Lemma~\ref{lm:sprtrm} and Lemma~\ref{lm:trmingam} outline important properties of EUF that are useful in the proof of Thm.~\ref{thm:main}. \begin{lemma}\label{lm:sprtrm} Let $\Gamma$ be a set of literals, and $t_1$ and $t_2$ be two terms in $\mathcal{T}(\Sigma)$ s.t. $\Gamma \not\vdash (t_1 \approx t_2)$. Then, $(\Gamma \land a \approx b) \vdash (t_1 \approx t_2)$, for some constants $a$ and $b$ in $\const(\Gamma)$, iff there are two superterms, $s_1[a]$ and $s_2[b]$, of $a$ and $b$, respectively, s.t. (i) $\Gamma \vdash (t_1 \approx s_1[a])$, (ii) $\Gamma \vdash (t_2 \approx s_2[b])$, and (iii) $(\Gamma \land a \approx b ) \vdash (s_1[a] \approx s_2[b])$. \end{lemma} \begin{lemma}\label{lm:trmingam} Let $\Gamma$ be a set of literals, $v \in \mathcal{C}(\Gamma)$.
If $\Gamma \vdash v \approx f(t_1,\ldots,t_n)$ for some term $f(t_1,\ldots,t_n) \in \mathcal{T}(\Sigma)$ then there exists a term $f(t'_1,\ldots,t'_n) \in \mathcal{T}(\Gamma)$ s.t. $\Gamma \vdash v \approx f(t'_1,\ldots,t'_n)$ and $\Gamma \vdash t_i \approx t_i'$ for all $1 \leq i \leq n$. \end{lemma} \begin{lemma}\label{lm:trmingamdeq} Let $\Gamma$ be a set of literals, $v \in \mathcal{C}(\Gamma)$. If $\Gamma \vdash v \not\approx f(t_1,\ldots,t_n)$ for some term $f(t_1,\ldots,t_n) \in \mathcal{T}(\Sigma)$ then there exists a term $f(t'_1,\ldots,t'_n) \in \mathcal{T}(\Gamma)$ s.t. $\Gamma \vdash v \not\approx f(t'_1,\ldots,t'_n)$ and $\Gamma \vdash t_i \approx t_i'$ for all $1 \leq i \leq n$. \end{lemma} \begin{definition}[Purifier] \label{def:purifier} We say that a set of constants $V$ is a \emph{purifier} of a constant $a$ in a set of literals $\Gamma$ if $a \in V$ and, for every term $t \in \mathcal{T}(\Gamma)$ and every superterm $s[a]$ of $a$ s.t. $\Gamma \vdash t \approx s[a]$, there exists $v \in V$ s.t. $\Gamma \vdash v \approx t$. \end{definition} \begin{example}[Purifier] Let $\Gamma = \{ c \approx f(a), d \approx f(b), d \not\approx e\}$. Then, $V = \{a, b, c\}$ is a purifier for $a$, but not a purifier for $b$, even though $b \in V$: $d \approx f(b) \in \Gamma$, but no constant in $V$ is equal to $d$. \end{example} \begin{lemma}\label{lm:pur} If $V$ is a purifier for $x$ in $\Gamma$, then $V$ is a purifier for $x$ in $\alpha_V(\Gamma)$. \end{lemma} \begin{lemma}\label{lm:Visenough} Let $\Gamma$ be a set of literals, $x$ and $y$ be two constants in $\const(\Gamma)$, $V \subseteq \const(\Gamma)$ be a purifier for $\{x, y\} \subseteq V$, and $\beta = \alpha_V(\Gamma)$. Then, for any $u, v \in V$ \[ (\Gamma \land x \approx y \vdash u \approx v) \iff ( \beta \land x \approx y \vdash u \approx v) \] \end{lemma} \begin{proof} By the definition of $\beta$, $(\Gamma \vdash u \approx v) \iff (\beta \vdash u \approx v)$. Thus, assume that $\Gamma \not \vdash u \approx v$. The only-if direction is trivial since $\beta$ is weaker than $\Gamma$.
For the if-direction, by Lemma~\ref{lm:sprtrm}, there are superterms $s_1[x]$ and $s_2[y]$ of $x$ and $y$, respectively, s.t. $\Gamma \vdash \{u \approx s_1[x], v \approx s_2[y]\}$, and $(\Gamma \land x \approx y) \vdash (s_1[x] \approx s_2[y])$. The proof proceeds by induction on the maximum depth of $s_1$ and $s_2$. The base case, $s_1 = x$ and $s_2 = y$, is trivial. For the inductive case, we show one sub-case; the others are similar. Assume $s_1 = f(t_1[x], \vec{r})$ and $s_2 = f(t_2[y], \vec{r})$, for some terms $t_1[x]$, $t_2[y]$, $\vec{r}$, and a function $f$. Furthermore, $(\Gamma \land x \approx y) \vdash t_1[x] \approx t_2[y]$. Since $\Gamma \vdash \{u \approx f(t_1[x], \vec{r}), v \approx f(t_2[y], \vec{r})\}$, by Lemma~\ref{lm:trmingam}, there exist terms $f(t_1', \vec{r}_1), f(t_2', \vec{r}_2) \in \mathcal{T}(\Gamma)$ s.t. $\Gamma \vdash \{u \approx f(t_1', \vec{r}_1), t_1[x] \approx t_1', \vec{r} \approx \vec{r}_1, v \approx f(t_2', \vec{r}_2), t_2[y] \approx t_2', \vec{r} \approx \vec{r}_2 \}$. Since $V$ is a purifier for $\{x, y\}$, there are $x', y' \in V$ s.t. $\Gamma \vdash \{x' \approx t_1', y' \approx t_2'\}$, and $\Gamma \vdash \{u \approx f(x', \vec{r}), v \approx f(y', \vec{r})\}$. By construction, $\beta \vdash \{u \approx f(x', \vec{w}), v \approx f(y', \vec{w})\}$, for some constants $\vec{w} \in \const(\beta)$. By the induction hypothesis, $(\beta \land x \approx y) \vdash x' \approx y'$. Hence, by congruence, $(\beta \land x \approx y) \vdash u \approx v$. \end{proof} \begin{lemma}\label{lm:Visenoughdeq} Let $\Gamma$ be a set of literals, $x$ and $y$ be two constants in $\const(\Gamma)$, $V \subseteq \const(\Gamma)$ be a purifier for $\{x, y\} \subseteq V$, and $\beta = \alpha_V(\Gamma)$.
Then, for any $u, v \in V$ \[ (\Gamma \land x \approx y \vdash u \not\approx v) \iff ( \beta \land x \approx y \vdash u \not\approx v) \] \end{lemma} \begin{proof} By the definition of $\beta$, $(\Gamma \vdash u \not\approx v) \iff (\beta \vdash u \not\approx v)$. Assume $\Gamma \not \vdash u \not\approx v$. Then, there is a term $t \in \mathcal{T}(\Sigma)$ s.t. $\Gamma \vdash u \not\approx t$ and $(\Gamma \land x \approx y) \vdash v \approx t$. By Lemma~\ref{lm:sprtrm}, $\Gamma \vdash t \approx s[y]$. We case split on whether $s[y]$ is $y$ itself or a proper superterm of $y$. \begin{itemize} \item case $s[y] = y$. Since $\Gamma \vdash t \approx y$, $(\Gamma \land x \approx y) \vdash v \approx y$ and $\Gamma \vdash u \not\approx y$. By Lemma~\ref{lm:Visenough}, $(\beta \land x \approx y) \vdash v \approx y$. By the definition of $\beta$, $\beta \vdash u \not\approx y$. Therefore, $(\beta \land x \approx y) \vdash u \not\approx v$. \item case $s[y] = f(t_1,\ldots, t_n)$, where at least one $t_i$ is a superterm of $y$. Since $\Gamma \vdash t \approx f(t_1,\ldots, t_n)$, $\Gamma \vdash u \not\approx f(t_1,\ldots, t_n)$. By Lemma~\ref{lm:trmingamdeq}, there exists a term $f(t_1',\ldots, t_n') \in \mathcal{T}(\Gamma)$ s.t. $\Gamma \vdash \{ u \not\approx f(t_1',\ldots, t_n'), t_1' \approx t_1,\ldots, t_n' \approx t_n\}$. Since $f(t_1',\ldots, t_n') \in \mathcal{T}(\Gamma)$, $\Gamma \vdash f(t_1',\ldots, t_n') \approx s[y]$, and $V$ is a purifier for $y$ in $\Gamma$, there exists a constant $y' \in V$ s.t. $\Gamma \vdash y' \approx f(t_1',\ldots, t_n')$. Therefore, $(\Gamma \land x \approx y) \vdash v \approx y'$ and $\Gamma \vdash u \not\approx y'$. By Lemma~\ref{lm:Visenough}, $(\beta \land x \approx y) \vdash v \approx y'$. By the definition of $\beta$, $\beta \vdash u \not\approx y'$. Therefore, $(\beta \land x \approx y) \vdash u \not\approx v$.
\end{itemize} \end{proof} \begin{lemma}\label{lm:eqpres1v} Let $V$ be a set of constants, $\varphi_1$ and $\varphi_2$ be two sets of literals s.t. $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, and $V$ be a purifier for $\{x, y\}$ in both $\varphi_1$ and $\varphi_2$. Then, $\alpha_V(\varphi_1 \land x \approx y) = \alpha_V(\varphi_2 \land x \approx y)$ \end{lemma} \begin{proof} Let $L_{\approx}$ be a set of equalities between constants in $V$, $L_{\not\approx}$ be a set of disequalities between constants in $V$, and $L_{\mathcal{F}}$ be a set of equalities of the form $v \approx f(\Vec{w})$ where $v \in V$, and $\Vec{w}$ is a set of constants, some of which are in $V$, and the rest are not in $\mathcal{C}(\varphi_1)\cup\mathcal{C}(\varphi_2)$. Let $\alpha_V(\varphi_1 \land x \approx y) = \alpha_V(\varphi_1) \cup L_{\approx}\cup L_{\mathcal{F}} \cup L_{\not\approx}$. By Lemma~\ref{lm:Visenough} and Lemma~\ref{lm:Visenoughdeq}, $\alpha_V(\varphi_1)\land x \approx y \vdash \ell$ for all $\ell \in L_{\approx}\cup L_{\not\approx}$. Next, we prove that for all $\ell \in L_\mathcal{F}$, $\alpha_V(\varphi_1) \land x \approx y \vdash \ell$. In the following, assume that $u, v \in V$ and $w \not\in \mathcal{C}(\varphi_1) \cup \mathcal{C}(\varphi_2) \cup V$. We assume that $\ell = v \approx f(u, w)$; all other cases are similar. We have $\alpha_V(\varphi_1 \land x \approx y) \vdash v \approx f(u, w)$ iff $(\varphi_1 \land x \approx y) \vdash v \approx f(u, t)$ for some term $t \in \mathcal{T}(\Sigma)$ and there is no $v' \in V$ such that $(\varphi_1 \land x \approx y) \vdash v' \approx t$. If $\varphi_1 \vdash v \approx f(u, t)$, then $\alpha_V(\varphi_1) \vdash v \approx f(u, w)$ by definition. Assume that $\varphi_1 \not \vdash v \approx f(u, t)$. By Lemma~\ref{lm:sprtrm}, we have $\varphi_1 \vdash v \approx s_1[x]$, $\varphi_1 \vdash f(u, t) \approx s_2[y]$, and $\varphi_1 \land x \approx y \vdash s_1[x] \approx s_2[y]$. We case split on $s_2[y]$. \begin{enumerate} \item case $s_2[y] = y$.
We have $\varphi_1 \vdash f(u, t) \approx y$. From $(\varphi_1 \land x \approx y) \vdash v \approx f(u, t)$ and $\varphi_1 \vdash f(u, t) \approx y$, we have $(\varphi_1 \land x \approx y) \vdash v \approx y$. From Lemma~\ref{lm:Visenough}, we have $(\alpha_V(\varphi_1) \land x \approx y) \vdash v \approx y$. Since $\varphi_1 \vdash f(u, t) \approx y$, $\alpha_V(\varphi_1) \vdash f(u, w) \approx y$ by definition. Hence, $(\alpha_V(\varphi_1) \land x \approx y) \vdash v \approx f(u, w)$. \item case $s_2[y] = g(b_1,\ldots,b_n)$, where $b_i$ is a superterm of $y$ for at least one $i$. We have $\varphi_1 \vdash f(u, t) \approx g(b_1,\ldots,b_n)$. It is either the case that there exists a term $t'$ such that $g(b_1,\ldots, b_n) \approx t' \in \varphi_1$ and $\varphi_1 \vdash t' \approx f(u, t)$, or $g = f$ and $\varphi_1 \vdash\{u \approx b_1, t \approx b_2\}$. \begin{enumerate} \item case there exists a term $t'$ such that $g(b_1,\ldots, b_n) \approx t' \in \varphi_1$ and $\varphi_1 \vdash t' \approx f(u, t)$. Since $g(b_1,\ldots, b_n) \in \mathcal{T}(\varphi_1)$ and $V$ is a purifier for $y$ in $\varphi_1$, there must be a $y'\in V$ such that $\varphi_1 \vdash y' \approx g(b_1,\ldots,b_n)$. Therefore, $\varphi_1\vdash y' \approx f(u, t)$ and $(\varphi_1 \land x \approx y) \vdash v \approx y'$. By definition, $\alpha_V(\varphi_1) \vdash y' \approx f(u, w)$. By Lemma~\ref{lm:Visenough}, we have $(\alpha_V(\varphi_1) \land x \approx y) \vdash v \approx y'$. Hence, $(\alpha_V(\varphi_1) \land x \approx y) \vdash v \approx f(u, w)$. \item case $g = f$ and $\varphi_1 \vdash\{u \approx b_1, t \approx b_2\}$. In this case, the equality $s_1[x] \approx s_2[y]$ is derived by congruence, so $s_1[x]$ cannot be the constant $x$; it has to be the case that $s_1[x] = f(a_1, a_2)$. We have $\varphi_1 \vdash v \approx f(a_1, a_2)$ where $a_1$ or $a_2$ is a superterm of $x$. By Lemma~\ref{lm:trmingam}, we have a term $f(a_1', a_2') \in \mathcal{T}(\varphi_1)$ s.t. $\varphi_1 \vdash \{v \approx f(a_1', a_2'), a_1' \approx a_1, a_2' \approx a_2\}$.
We case split on whether $a_1$ or $a_2$ is a superterm of $x$: \begin{enumerate} \item case $a_1$ is a superterm of $x$. We have $(\varphi_1 \land x \approx y) \vdash a_1 \approx b_1$. Since $\varphi_1 \vdash a_1' \approx a_1$, $a_1' \in \mathcal{T}(\varphi_1)$, and $V$ is a purifier for $x$ in $\varphi_1$, there must exist a constant $x' \in V$ s.t. $\varphi_1 \vdash x' \approx a_1'$. Since $(\varphi_1 \land x \approx y) \vdash a_1 \approx b_1$, $(\varphi_1 \land x \approx y) \vdash x' \approx b_1$. From $\varphi_1 \vdash u \approx b_1$, we have $\varphi_1 \land x \approx y \vdash x' \approx u$. By Lemma~\ref{lm:Visenough}, we have $\alpha_V(\varphi_1) \land x \approx y \vdash x' \approx u$. Since $\varphi_1 \vdash v \approx f(a_1', a_2')$ and $\varphi_1 \vdash x' \approx a_1'$, we have $\varphi_1 \vdash v \approx f(x', a_2')$ and, hence, $\alpha_V(\varphi_1) \vdash v \approx f(x', w)$ by definition. Since $\alpha_V(\varphi_1) \vdash v \approx f(x', w)$ and $\alpha_V(\varphi_1) \land x \approx y \vdash x' \approx u$, $\alpha_V(\varphi_1) \land x \approx y \vdash v \approx f(u, w)$. \item case $a_2$ is a superterm of $x$. We have $(\varphi_1 \land x \approx y) \vdash a_2 \approx b_2$. Since $\varphi_1 \vdash a_2' \approx a_2$, $a_2' \in \mathcal{T}(\varphi_1)$, and $V$ is a purifier for $x$ in $\varphi_1$, there must exist a constant $x' \in V$ s.t. $\varphi_1 \vdash x' \approx a_2'$. Since $(\varphi_1 \land x \approx y) \vdash a_2 \approx b_2$, $(\varphi_1 \land x \approx y) \vdash x' \approx b_2$. However, $\varphi_1 \vdash b_2 \approx t$, and hence $(\varphi_1 \land x \approx y) \vdash t \approx x'$, which contradicts our assumption that there is no $v' \in V$ such that $(\varphi_1 \land x \approx y) \vdash v' \approx t$. \end{enumerate} \end{enumerate} \end{enumerate} We have, for all $\ell \in L_{\approx} \cup L_{\not\approx} \cup L_{\mathcal{F}}$, $\alpha_V(\varphi_1) \land x \approx y \vdash \ell$.
Hence, $\alpha_V(\varphi_2) \land x \approx y \vdash \ell$ for all $\ell \in L_{\approx} \cup L_{\not\approx} \cup L_{\mathcal{F}}$. Therefore, $\alpha_V(\varphi_1 \land x \approx y) = \alpha_V(\varphi_2 \land x \approx y)$. \end{proof} \begin{lemma}\label{lm:deqpres1v} Let $V$ be a set of constants s.t. $x, y \in V$, $\varphi_1$ and $\varphi_2$ be two sets of literals s.t. $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$. Then, $\alpha_V(\varphi_1 \land x \not\approx y) = \alpha_V(\varphi_2 \land x \not\approx y)$ \end{lemma} \begin{proof} Let $L = \{x \not\approx u \mid y \approx u \in \alpha_V(\varphi_1), u \in V\}$. By definition, $\alpha_V(\varphi_1 \land x \not\approx y) = \alpha_V(\varphi_1) \land L$ as well as $\alpha_V(\varphi_2 \land x \not\approx y) = \alpha_V(\varphi_2) \land L$. Since $\alpha_V(\varphi_1) = \alpha_V(\varphi_2)$, $\alpha_V(\varphi_1 \land x \not\approx y) = \alpha_V(\varphi_2 \land x \not\approx y)$. \end{proof} For a state $q$, we write $\varphi \equiv_q \psi$ for $\varphi \equiv_{\const(q)} \psi$. \begin{theorem} \label{thm:main} Let $\langle s, q, pc\rangle$ be a reachable configuration of a CUP\footnote{coherent uninterpreted program} $P$. Then, \begin{enumerate}[(1)] \item $\langle s, q, pc\rangle \to \langle s', q', pc \land pc'\rangle$ iff\\ $\langle s, q, \alpha_q(pc)\rangle \to \langle s', q', \alpha_{q}(pc) \land pc'\rangle$, and \item $\alpha_{q'}(pc \land pc') = \alpha_{q'}(\alpha_q(pc) \land pc')$. \end{enumerate} \end{theorem} \begin{proof} Throughout the proof, we use $x = q[\pv{x}]$, and $y = q[\pv{y}]$. We only show the proof of part (1) for $s = \textbf{assume}(\pv{x} \bowtie \pv{y})$ since the other cases are trivial. The only-if direction follows since $\alpha_q(pc)$ is weaker than $pc$. For the if direction, $pc \not\vdash \bot$ since it is part of a reachable configuration. Then, there are two cases: \begin{itemize} \item case $s = \textbf{assume}(\pv{x}=\pv{y})$. Assume $(pc \land x \approx y) \models \bot$. 
Then, $(pc \land x \approx y) \vdash t_1 \approx t_2$ and $pc \vdash t_1 \not\approx t_2$ for some $t_1, t_2 \in \mathcal{T}(pc)$. By Lemma~\ref{lm:sprtrm}, in any new equality $(t_1 \approx t_2)$ that is implied by $pc \land (x \approx y)$ (but not by $pc$), $t_1$ and $t_2$ are equivalent (in $pc$) to superterms of $x$ or $y$. By the early assume property of CUP, $\const(q)$ purifies $\{x, y\}$ in $pc$. Therefore, every superterm of $x$ or $y$ is equivalent (in $pc$) to some constant in $\const(q)$. Thus, $(pc \land x \approx y) \vdash u \approx v$ and $(pc \land x \approx y) \vdash u \not\approx v$ for some $u, v \in \const(q)$. By Lemma~\ref{lm:Visenough}, $(\alpha_q(pc) \land x \approx y) \vdash u \approx v$. By Lemma~\ref{lm:Visenoughdeq}, $(\alpha_q(pc) \land x \approx y) \vdash u \not\approx v$. Thus, $(\alpha_q(pc)\land x \approx y) \models \bot$. \item case $s = \textbf{assume}(\pv{x}\neq\pv{y})$. $(pc \land x \not\approx y) \models \bot$ if and only if $pc \vdash x \approx y$. Since $x, y \in \const(q)$, $\alpha_q(pc) \vdash x \approx y$. \end{itemize} For part (2), we only show the cases for assume and assignment statements; the other cases are trivial. \begin{itemize} \item case $s = \textbf{assume}(\pv{x} = \pv{y})$. Since $q' = q$, we need to show that $\alpha_{q}(pc \land x \approx y) = \alpha_{q}(\alpha_q(pc) \land x \approx y)$. From the early assume property, $\const(q)$ purifies $\{x, y\}$ in $pc$. By Lemma~\ref{lm:pur}, $\const(q)$ purifies $\{x, y\}$ in $\alpha_q(pc)$ as well. By Lemma~\ref{lm:idem}, $\alpha_q(pc) = \alpha_q(\alpha_q(pc))$. By Lemma~\ref{lm:eqpres1v}, $\alpha_q(pc \land x \approx y) = \alpha_q(\alpha_q(pc) \land x \approx y)$. \item case $s = \textbf{assume}(\pv{x} \neq \pv{y})$. Since $q' = q$, we need to show that $\alpha_{q}(pc \land x \not\approx y) = \alpha_{q}(\alpha_q(pc) \land x \not\approx y)$. By Lemma~\ref{lm:idem}, $\alpha_q(pc) = \alpha_q(\alpha_q(pc))$.
By Lemma~\ref{lm:deqpres1v}, $\alpha_{q}(pc \land x \not\approx y) = \alpha_{q}(\alpha_q(pc) \land x \not\approx y)$. \item case $s = \pv{x} \gets \pv{y}$. W.l.o.g., assume $q' = q[\pv{x}\mapsto x']$ for some constant $x' \not\in \mathcal{C}(pc)$. By Lemma~\ref{lm:idem}, $\alpha_q(pc) = \alpha_q(\alpha_q(pc))$. By Lemma~\ref{lm:propequiv} (case~1), $\alpha_{\const(q) \cup \{x'\}}(pc \land x' \approx y) = \alpha_{\const(q) \cup \{x'\}}(\alpha_q(pc) \land x' \approx y)$. By Lemma~\ref{lm:subset}, $\alpha_{q'}(pc \land x' \approx y) = \alpha_{q'}(\alpha_q(pc) \land x' \approx y)$, since $\const(q') \subseteq (\const(q) \cup \{x'\})$. \item case $s = \pv{x} \gets f(\Vec{y})$. W.l.o.g., $q' = q[\pv{x}\mapsto x']$ for some constant $x' \not\in \mathcal{C}(pc)$. There are two cases: (a) there is a term $t \in \mathcal{T}(pc)$ s.t. $pc \vdash t \approx f(\Vec{y})$, (b) there is no such term $t$. \begin{enumerate}[label=(\alph*)] \item By the memoization property of CUP, there is a program variable $\pv{z}$ s.t. $q[\pv{z}] = z$ and $pc \vdash z \approx f(\Vec{y})$. Therefore, by definition of $\alpha_q$, $\alpha_q(pc) \vdash z \approx f(\Vec{y})$. The rest of the proof is identical to the case of $s = \pv{x} \gets \pv{z}$. \item Since there is no term $t\in\mathcal{T}(pc)$ s.t. $pc \vdash t \approx f(\Vec{y})$, $f(\Vec{y}) \not \in\mathcal{T}(pc)$ as well as $f(\Vec{y}) \not \in \mathcal{T}(\alpha_q(pc))$. By Lemma~\ref{lm:idem}, $\alpha_q(pc) = \alpha_q(\alpha_q(pc))$. By Lemma~\ref{lm:propequiv} (case~2), $\alpha_{\mathcal{C}(q)\cup \{x'\}}(pc \land x' \approx f(\Vec{y})) = \alpha_{\mathcal{C}(q)\cup \{x'\}}(\alpha_q(pc) \land x' \approx f(\Vec{y}))$. By Lemma~\ref{lm:subset}, $\alpha_{q'}(pc \land x' \approx f(\Vec{y})) = \alpha_{q'}(\alpha_q(pc) \land x' \approx f(\Vec{y}))$ since $\const(q') \subseteq (\const(q) \cup \{x'\})$. \end{enumerate} \end{itemize} \end{proof} \hg{rough statements.
Will refine later.} \begin{corollary} The relation $R = \{(c, \alpha(c))\}$ is a bisimulation relation for CUP. \end{corollary} \begin{corollary} The relation $R = \{(c, \alpha_r(\alpha(c)))\}$ is a finite bisimulation relation for CUP. \end{corollary} \subsection{Computing a Finite Abstraction} \newcommand{\n}{n} \newcommand{\nw}{nw} We have shown that CUP programs are bisimilar to finite state systems. However, all our proofs depend on $\alpha_b$, which was not assumed to be computable. In this section, we show how to implement $\alpha_b$, and, thereby, how to compute a finite state system that is bisimilar to a CUP program. Note that our prior results are independent of this section. The main difficulty is in naming the fresh constants introduced by the base abstraction, which we always refer to as $W$. Since we require that the base abstraction is canonical, the naming has to be unique. Furthermore, we have to show that the number of such $W$ constants is bounded. We solve both of these problems by proposing a deterministic naming scheme. The scheme is determined by a normalization function $n_V$ that replaces all the fresh constants in a $V$-basis with canonical constants. Let $\beta$ be a $V$-basis. We denote the auxiliary constants in $\beta$ ($\const(\beta) \setminus V$) by $W = \{w_0, w_1, \ldots\}$, and by `$?$' some unused constant that we call a \emph{hole}. Recall that constants from $W$ may only appear in literals of the form $v \approx f(\Vec{w})$. We define the set of $W$-templates as the set of all terms $f(\Vec{a})$, where each element in $\Vec{a}$ is either a hole or a constant in $W$. A term $t$ \emph{matches} a template $f(\Vec{a})$ if $t = f(\Vec{b})$, $\Vec{a}$ and $\Vec{b}$ agree at all positions where $\Vec{a}$ has a constant in $W$, and $\Vec{b}$ has no constant from $W$ at the hole positions. For example, let $\xi$ be the template $f(?, w_1, ?, w_2)$. The term $f(a, w_1, b, w_2)$ matches $\xi$, but $f(w_0, w_1, b, w_2)$ does not, because one of the holes is filled with $w_0 \in W$.
We say that a literal $v \approx f(\Vec{b})$ matches a template $\xi$ if $f(\Vec{b})$ matches $\xi$. The $W$-context of a $W$-template $\xi$ in a set of literals $L$, denoted $Z_L(\xi)$, is the set $Z_L(\xi) \triangleq \{\ell[W\mapsto ?] \mid \ell\in L \land \ell \text{ matches } \xi\}$, where $\ell[W\mapsto ?]$ means that all occurrences of constants in $W$ are replaced with a hole. For example, let $\xi = f(?, w_1, w_2, ?)$ and $L = \{v \approx f(a, w_1, w_2, b), u \approx f(c, w_1, w_2, a), w \approx f(x, w_1, w_2, b), x \approx g(x, w_1, w_2, b)\}$, then $Z_L(\xi) = \{v \approx f(a, ?, ?, b), u \approx f(c, ?, ?, a), w \approx f(x, ?, ?, b)\}$. Since $V$ and $\mathcal{F}$ are finite, the number of $W$-contexts is finite, independent of $W$. Let $w_Z$ be a fresh constant for context $Z$. \begin{definition}[Normalization Function] \label{def:normfunc} The normalization function $n_{V}(\beta)$ is defined as follows: \begin{enumerate}[(1)] \item for each $t \in \mathcal{T}(\beta)$ s.t. $\mathcal{C}(t)\cap W \neq \emptyset$, create a template $\xi$ by replacing all constants not in $W$ with holes. Let $\Xi$ denote the set of templates so obtained. \item Let $\mathit{Ctx} \triangleq \{Z_{\beta}(\xi) \mid \xi \in \Xi\}$. \item For each $\ell \in \beta$, if $\ell[W\mapsto ?] \in Z$ for some $Z \in \mathit{Ctx}$, then replace all occurrences of constants from $W$ in $\ell$ with $w_Z$. \end{enumerate} \end{definition} The normalization preserves $V$-equivalence of $\beta$ because it renames local constants, while maintaining all consequences that are derivable through them. That is, $n_V(\beta) \equiv_V \beta$. Furthermore, $n_V(\beta)$ is canonical. Therefore, given a set of literals $\Gamma$, we use $n_V(\beta)$ as a computable implementation of the $V$-base abstraction, $\alpha_V$~(\cref{def:vabst}). That is, $\alpha_V(\Gamma) \triangleq n_V(\beta)$ where $\langle W, \beta, \delta\rangle \in \base(\Gamma, V)$.
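To illustrate the normalization, consider the following hand-worked sketch; the constants and literals are chosen purely for illustration.

\begin{example}
Let $V = \{a, b, u, v\}$ and let $\beta = \{u \approx f(a, w_0), v \approx f(b, w_0)\}$. In step (1), the terms containing $W$-constants are $f(a, w_0)$ and $f(b, w_0)$; both yield the single template $\xi = f(?, w_0)$, so $\Xi = \{\xi\}$. In step (2), $Z_\beta(\xi) = \{u \approx f(a, ?), v \approx f(b, ?)\}$; call this context $Z$. In step (3), replacing $w_0$ with $w_Z$ in both literals gives $n_V(\beta) = \{u \approx f(a, w_Z), v \approx f(b, w_Z)\}$. Note that a $V$-basis that used, say, $w_5$ in place of $w_0$ induces the same context $Z$, and hence the same normalized set; this is what makes the naming canonical.
\end{example}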
Even though $n_V(\beta)$ may not be a part of a $V$-basis for $\Gamma$, it satisfies all the properties used in the proof of \Cref{thm:main}. We define the normalizing abstraction in the usual way: \begin{definition}[Normalizing abstraction] The normalizing abstraction function $\alpha_n: C \to C$ is defined by \[ \alpha_n(\langle s, q_0, pc\rangle) \triangleq \langle s,q_0,n_{q_0}(pc)\rangle \] \end{definition} Let $\alpha_{b,\mathit{r}, n} \triangleq \alpha_{b} \circ \alpha_\mathit{r} \circ \alpha_n$ be the composition of the normalization abstraction with the renaming and base abstractions, where $\alpha_b$ is implemented using normalization. Notice that, for any configuration $c = \langle s, q, pc\rangle$, $\alpha_{b, \mathit{r}, n}(c)$ is computed by first computing \emph{any} $V$-basis of $pc$, applying $n_q$, renaming all $\mathcal{C}(q)$ constants to the corresponding constants of $q_0$, and applying $n_{q_0}$. The second normalization is required to ensure that the fresh constants are canonical with respect to $q_0$. By definition, $\alpha_{b, \mathit{r}, n}$ is computable. Hence, it can be used to compute the finite abstraction of any CUP. \begin{theorem} \label{thm:ttt} For a CUP $P$, the finite abstract transition system $\alpha_{b, \mathit{r}, n}(\mathcal{S}_P)$ is bisimilar to $P$ and is computable. \end{theorem} \Cref{thm:ttt} implies that any property that is decidable over a finite transition system is also decidable over CUPs. In particular, temporal logic model checking is decidable. \subsection{Programs} We consider \emph{uninterpreted programs (UP)}, specified in the \emph{uninterpreted programming language (UPL)}.\sharon{is that the convention we want? it means that we will later write UP program or just UP? a bit strange. I won't change other occurrences until we decide} The \emph{syntax} of programs in UPL is shown in \Cref{fig:syntax}. Program variables are taken from a fixed set, denoted $\pv{V}$. We use lower case letters in a special font: $\pv{x}$, $\pv{y}$, etc.
to denote individual variables in $\pv{V}$. We write $\Vec{\pv{y}}$ for a list of program variables. Function symbols are taken from a fixed set $\mathcal{F}$. Following~\cite{DBLP:journals/pacmpl/MathurMV19}, the language does not allow Boolean combinations of conditionals or relational symbols; however, these can be modelled as described in~\cite{DBLP:journals/pacmpl/MathurMV19}. \begin{figure}[t] \begin{align*} \langle stmt \rangle ::=\; &\mathbf{skip} \mid \pv{x} \gets \pv{x} \mid \pv{x} \gets f(\Vec{\pv{x}}) \mid \\ &\mathbf{assume}\;(\langle cond \rangle) \mid \langle stmt \rangle \mathbin{;} \langle stmt \rangle \mid \\ &\mathbf{if}\;(\langle cond \rangle ) \;\mathbf{then} \;\langle stmt \rangle \;\mathbf{else}\; \langle stmt \rangle \mid \\ &\textbf{while}\;(\langle cond \rangle) \;\langle stmt \rangle\\ \langle cond\rangle ::=\; &\langle var \rangle = \langle var \rangle \mid \langle var \rangle \neq \langle var \rangle\\ \langle var \rangle ::=\; & \pv{x} \mid \pv{y} \mid \cdots \end{align*} \caption{Syntax of the programming language UPL.} \label{fig:syntax} \end{figure} The small step symbolic operational semantics of UPL is defined by the rules shown in Fig.~\ref{fig:ssesem}, with respect to a FOL signature $\Sigma = (\const, \mathcal{F}, \{\approx, \not\approx\})$.\sharon{should we re-arrange the rules to first display assume and assignments? these are the only "real" steps} A program \emph{configuration} is a triple $\langle s, q, pc \rangle$, where $s$ is a program in UPL to be executed, $q : \pv{V} \to \const$ is a \emph{state} mapping program variables to FOL constants, and $pc$, called the \emph{path condition}, is a FOL\sharon{EUF?}\ag{We need to decide on EUF, I find it distracting, but maybe it is needed in some places} formula over $\Sigma$. We assume that every program variable $\pv{x}$ is initialized to a constant $x_0 \in \const$ before the start of a program.
Let $\mathcal{V}_{\mathit{init}} = \{v_0 \mid \pv{v} \in \pv{V}\} \subseteq \const$ be the set of constants to which program variables are initialized. For a state $q$, we write $q[\pv{x} \mapsto x']$ for a state $q'$ that is identical to $q$, except that it maps $\pv{x}$ to $x'$. We write $\langle e, q \rangle \Downarrow v$ to denote that $v$ is the value of the expression $e$ in state $q$, i.e., the result of substituting each program variable $\pv{x}$ in $e$ with $q[\pv{x}]$, and replacing functions and predicates with their FOL counterparts. The value of $e$ is an FOL term or an FOL formula over $\Sigma$. For example, $\langle \pv{x} = \pv{y}, [\pv{x} \mapsto x, \pv{y} \mapsto y] \rangle \Downarrow x \approx y$. Given two configurations $c$ and $c'$, we write $c \to c'$ if $c$ reduces to $c'$ using one of the rules in Fig.~\ref{fig:ssesem}. Note that there is no rule for $\textbf{skip}$ -- the program terminates once it gets into a configuration $\langle \textbf{skip}, q, pc\rangle$. For a UPL program $P$, the operational semantics induces a transition system $\mathcal{S}_P = \tuple{C,c_0,\mathcal{R}}$, where $C$ is the set of configurations of $P$, $c_0 \triangleq \langle P, q_{\mathit{init}}, \top \rangle$ is the initial configuration, $q_{\mathit{init}} \triangleq \{\pv{x} \mapsto x_0 \mid \pv{x} \in \pv{V}\}$ is the initial state, and $\mathcal{R} \triangleq \{(c, c') \mid c \to c'\}$. A configuration $c$ of $P$ is \emph{reachable} if $c$ is reachable from $c_0$ in $\mathcal{S}_P$. \begin{definition}[The verification problem for UPL programs] Given a program $P$ in UPL, determine whether a configuration of the form $\langle\textbf{skip}, q, pc\rangle$, for some state $q$ and path condition $pc$, is reachable in $P$. \end{definition} Our semantics of UPL differs in some respects from the one in~\cite{DBLP:journals/pacmpl/MathurMV19}. First, we follow a more traditional small-step operational semantics presentation, by providing semantics rules and the corresponding transition system.
However, this does not change the semantics conceptually. More importantly, we ensure that the path condition remains satisfiable in all reachable configurations (by only allowing an assume statement to execute when it results in a satisfiable path condition). We believe this is a more natural choice that is also consistent with what is typically used in other symbolic semantics. \begin{figure} \centering \begin{gather*} \prftree{\langle \mathbf{skip}\mathbin{;}s, q, pc \rangle \to \langle s, q, pc \rangle}\\[1ex] \prftree{\langle s_1, q, pc\rangle \to \langle s_1', q', pc'\rangle}% {\langle s_1\mathbin{;}s_2, q, pc \rangle \to \langle s_1'\mathbin{;}s_2, q', pc' \rangle}\\[1ex] \prftree{\langle c, q\rangle\Downarrow v\qquad}{(pc \land v) \not \models \bot}% {\langle \mathbf{assume}(c), q, pc \rangle \to \langle \mathbf{skip}, q, pc\wedge v\rangle}\\[1ex] \prftree{\langle e, q\rangle \Downarrow v\qquad}{\text{$x' \in \const(\Sigma)$ is fresh in $pc$}}{\langle \pv{x} \gets e, q, pc \rangle \to \langle\mathbf{skip}, q[\pv{x} \mapsto x'], pc \land x' \approx v\rangle}\\[1ex] \begin{aligned} \langle \mathbf{if}\;(c)\;\mathbf{then}\;s_1\;\mathbf{else}\;s_2, q, pc \rangle &\to \langle \mathbf{assume}(c)\mathbin{;}s_1, q, pc\rangle\\[1ex] \langle \mathbf{if}\;(c)\;\mathbf{then}\;s_1\;\mathbf{else}\;s_2, q, pc \rangle &\to \langle \mathbf{assume}(\neg c)\mathbin{;}s_2, q, pc\rangle \end{aligned}\\[1ex] \begin{multlined} \langle \mathbf{while}\;(c)\;s, q, pc \rangle \to{} \\ \qquad\qquad \langle \mathbf{if}\;(c)\;\mathbf{then}\;(s\mathbin{;} \mathbf{while}\;(c)\;s)\;\mathbf{else}\;\mathbf{skip}, q, pc \rangle \end{multlined} \end{gather*} \caption{Small step symbolic operational semantics of UPL, where $\neg c$ denotes $\pv{x} \neq \pv{y}$ when $c$ is $\pv{x} = \pv{y}$, and $\pv{x} = \pv{y}$ when $c$ is $\pv{x} \neq \pv{y}$.} \label{fig:ssesem} \end{figure} \iffalse \infer{\langle \mathbf{if}\;(c)\;\mathbf{then}\;s_1\;\mathbf{else}\;s_2, q, pc \rangle \Downarrow
q', pc'}{\langle c, q\rangle \Downarrow v \quad isSAT(pc \wedge v) \quad \langle s_1, q, pc \wedge v\rangle \Downarrow q', pc'}\\[1ex] \infer{\langle \mathbf{if}\;(c)\;\mathbf{then}\;s_1\;\mathbf{else}\;s_2, q, pc \rangle \Downarrow q', pc'}{\langle \neg c, q\rangle \Downarrow v \quad isSAT(pc \wedge v) \quad \langle s_2, q, pc \wedge v\rangle \Downarrow q', pc'}\\[1ex] \infer{\langle \mathbf{while}\;(c)\;\{s\}, q, pc \rangle \Downarrow q', pc'}{\langle c, q\rangle \Downarrow v\quad isSAT(pc \wedge v)\; \langle s;\mathbf{while}(c)\;\{s\}, q, pc \wedge v\rangle \Downarrow q', pc'}\\[1ex] \infer{\langle \mathbf{while}\;(c)\;\{s\}, q, pc \rangle \Downarrow q, pc \wedge v}{\langle \neg c, q\rangle \Downarrow v\quad isSAT(pc \wedge v)}\\[1ex] \fi \begin{definition}[Coherent Uninterpreted Program~\cite{DBLP:journals/pacmpl/MathurMV19}] \label{def:cup}% A UPL program $P$ is \emph{coherent} (CUP for short) if all of the reachable configurations of $P$ satisfy the following two properties: \begin{description}[topsep=0pt,noitemsep] \item[Memoizing] for any configuration $\langle \pv{x} \gets f(\Vec{\pv{y}}), q, pc \rangle$, if there is a term $t \in \mathcal{T}(pc^*)$ s.t. $pc \models t \approx f(q[\Vec{\pv{y}}])$, then there is $\pv{v} \in \pv{V}$ s.t. $pc \models q[\pv{v}] \approx t$. That is, every assigned term is either fresh, or is stored in some variable. \item[Early assume] for any configuration\\\mbox{$\langle \mathbf{assume}(\pv{x} = \pv{y}), q, pc \rangle$}, if a term $t \in \mathcal{T}(pc^*)$ is a superterm of $q[\pv{x}]$ or $q[\pv{y}]$, then, there is $\pv{v} \in \pv{V}$ s.t. $pc \models q[\pv{v}] \approx t$. That is, any superterm of a variable in an assumed equality is stored in some variable. \end{description} \end{definition} \hg{The definition of coherence is not what it used to be. We (and the definition from the original paper) cared only about terms in the $pc$, not terms in $pc^*$.}\ag{This was my attempt to avoid talking about terms modulo $pc$. 
The original definition definitely cares about terms that are equivalent to a superterm, even if they are superterms themselves} \hg{If $pc = x \approx y$, shouldn't $pc^*$ contain $f(x) \approx f(y), f(f(x))\approx f(f(y))$ etc? Seems to me like the early assumes property, as stated now, requires all of them to be stored in variables.} \ag{Hmm, good point. Then our definition of deductive closure is wrong. Same issue as reflexivity. If $pc = x \approx y$, then I want $pc^* = x \approx y$} \sharon{I think we should comment about the diff compared to orig def: since the path condition accumulates the entire history of the execution, we don't need to talk about executions}\ag{I did somewhere already, we can add more}\sharon{I saw you explained it about the semantics, but I thought that we can highlight it again for the def of coherence. Maybe it's not needed, not sure. But I really like that it is now stated as a "non-temporal" safety property (non-temporal in the sense that it is defined by a set of states, no need to talk about executions)} \begin{example} Consider the coherent program \[ \begin{array}{ll} 1.&\mathbf{assume}(\pv{x} = \pv{y});\\ 2.&\pv{x} \gets f(\pvp, \pv{x});\\ 3.&\pv{y} \gets f(\pvq, \pv{y});\\ 4.&\mathbf{assume}(\pvp = \pvq);\\ 5.&\mathbf{assume}(\pv{x} \neq \pv{y}) \end{array} \] The symbolic state after each line of execution is \ag{Update to follow new semantics. Explain what $pc_i$ is. 
Explain why the program is coherent} \[ \begin{array}{ll} 1.&\langle [\pv{x}: x_0, \pv{y}: y_0, \pvp: p_0, \pvq: q_0], x_0 \approx y_0\rangle\\ 2.&\langle [\pv{x}: x_1, \pv{y}: y_0, \pvp: p_0, \pvq: q_0], pc_1 \land x_1 \approx f(p_0, x_0)\rangle\\ 3.&\langle [\pv{x}: x_1, \pv{y}: y_1, \pvp: p_0, \pvq: q_0], pc_2 \land y_1 \approx f(q_0, y_0)\rangle\\ 4.&\langle q_3, pc_3 \land p_0 \approx q_0\rangle\\ 5.&\langle q_3, pc_4 \land x_1 \not\approx y_1 \rangle \end{array} \] \end{example} \section{Invariants of the abstraction} Let $\langle q, pc_1\rangle$ and $\langle q, pc_2\rangle$ be two program configurations. We say that $\langle q, pc_1\rangle \cong \langle q, pc_2\rangle$ if and only if \begin{itemize} \item $pc_1 \models q[v] \approx t \iff pc_2 \models q[v] \approx t$ \item $pc_1 \models q[v] \not\approx t \iff pc_2 \models q[v] \not\approx t$ \end{itemize} where $v$ is any program variable and $t$ is any term over signature $\Sigma$. \begin{lemma}\label{lm:eqprescoh} If $\langle q, pc_1\rangle \cong \langle q, pc_2\rangle$ then, for any statement $s$, $\langle s, q, pc_1\rangle$ satisfies both the Early Assume and Memoizing properties if and only if $\langle s, q, pc_2\rangle$ satisfies them as well. \end{lemma} \begin{proof} Both properties require some term to be stored in a variable. \end{proof} \begin{lemma}\label{lm:sprtrmder} If $\Gamma \wedge X \approx Y \models t_1 \approx t_2$ where $X, Y \in \mathcal{T}(\Gamma)$, then either $\Gamma \models t_1 \approx t_2$ or $t_1, t_2$ are superterms of $X, Y$ modulo $\Gamma$ respectively. \hg{add $t_1$ can be superterm of $Y$ and $t_2$ can be a superterm of $X$.} \end{lemma} \begin{proof} Assume that $t_1$ is not a superterm of $X$ (or, equivalently, $t_2$ is not a superterm of $Y$). We are going to prove that if $\Gamma \wedge X \approx Y \models t_1 \approx t_2$ then $\Gamma \models t_1 \approx t_2$. We do this by induction on the number of times the congruence axioms are used to derive $t_1 \approx t_2$.
Clearly, if the axiom is not used then $\Gamma \models t_1 \approx t_2$. Inductive case: assume that $t_1 = f(u_1, \ldots, u_n)$ and $t_2 = f(v_1, \ldots, v_n)$ such that none of the $u_i$'s are superterms of $X$ modulo $\Gamma$ and $\Gamma \models u_i \approx v_i$. Clearly, $\Gamma \models f(u_1,\ldots,u_n) \approx f(v_1,\ldots, v_n)$. \end{proof} \begin{lemma}\label{lm:frshcnst} If $\Gamma \wedge X \approx t \models \ell$ where $\ell$ is ground and $X \not\in \mathcal{T}(\Gamma)$, then \begin{itemize} \item if $X \not\in\mathcal{T}(\ell)$ then $\Gamma \models \ell$ \item else $\Gamma \models \ell[X \mapsto t]$ where $\mapsto$ is syntactic substitution of all occurrences of $X$ with $t$ \end{itemize} \end{lemma} \begin{lemma}\label{lm:sprtrmdeqder} If $\Gamma \wedge X \not\approx Y \models t_1 \not\approx t_2$ where $X, Y \in \mathcal{T}(\Gamma)$, then either $\Gamma \models t_1 \not\approx t_2$ or both $\Gamma \models X \approx t_1$ and $\Gamma \models Y \approx t_2$. \hg{add $t_1 \approx Y$ and $t_2 \approx X$.} \end{lemma} \begin{lemma}\label{lm:sprtrmdeqeqqder} If $\Gamma \wedge X \approx Y \models t_1 \not\approx t_2$ where $X, Y \in \mathcal{T}(\Gamma)$, then either $\Gamma \models t_1 \not\approx t_2$ or both $\Gamma \models X \not\approx t_1$ and $\Gamma \models Y \approx t_2$. \hg{or $\Gamma \models Y \not\approx t_1$ and $\Gamma \models X \approx t_2$} \end{lemma} \begin{lemma}\label{lm:sprtrmcls} Let $\langle q, pc_1\rangle \cong \langle q, pc_2\rangle$ such that, for all superterms $t_x$ of $x$, there exists variable $v$ such that $pc_1 \models t_x \approx q[v]$ if and only if $pc_2 \models t_x \approx q[v]$. Then $t$ is a superterm of $q[x]$ modulo $pc_1$ if and only if $t$ is a superterm of $q[x]$ modulo $pc_2$. \end{lemma} \begin{proof} Proof relies only on the transitivity property. If direction: It has to be the case that $pc_1 \models t \approx t_x$. Therefore, $pc_1 \models q[v] \approx t$. 
Hence $pc_2 \models q[v] \approx t$ and, by assumption, $pc_2 \models q[v] \approx t_x$. Therefore $pc_2 \models t_x \approx t$. \end{proof} \begin{theorem} Let $P$ be a coherent program and let $\langle q, pc_1 \rangle$ be a reachable configuration in $P$. If $\langle q, pc_1\rangle \cong \langle q, pc_2\rangle$, $\langle s, q, pc_1\rangle \rightarrow \langle s', q', pc_1'\rangle$, and $\langle s, q, pc_2\rangle \rightarrow \langle s', q', pc_2'\rangle$, then $\langle q', pc_1'\rangle \cong \langle q', pc_2'\rangle$ \end{theorem} \begin{proof} By Lemma~\ref{lm:eqprescoh}, both configurations are coherent. We only have to prove the theorem for the rules for assume and assignment statements. \begin{itemize} \item \textbf{assume}$(x = y)$: By the early assume property and Lemmas~\ref{lm:sprtrmcls},~\ref{lm:sprtrmder}, $pc_1' \models q[v] \approx t \iff pc_2' \models q[v] \approx t$. Lemma~\ref{lm:sprtrmdeqeqqder} shows that exactly the same inequalities are produced. \item \textbf{assume}$(x\neq y)$. No new equalities are produced; therefore, $pc_1' \models q[v] \approx t \iff pc_2' \models q[v] \approx t$. By Lemma~\ref{lm:sprtrmdeqder}, $pc_1' \models q[v] \not\approx t \iff pc_2' \models q[v] \not\approx t$. \item $x \gets e$: Let $\langle e, q\rangle \Downarrow t_e$, $q' = q[x \mapsto X]$. We have to show that $pc_1 \wedge X \approx t_e \models q'[v]\approx t$ if and only if $pc_2 \wedge X \approx t_e \models q'[v] \approx t$. We prove the if direction. \begin{itemize} \item if $v$ is not the variable $x$. $q[v] = q'[v]$. Consider the two cases in Lemma~\ref{lm:frshcnst}. If $t$ does not contain $X$, the proof is trivial. Otherwise, $pc_1 \models q'[v] \approx t[X \mapsto t_e]$. By our assumption, $pc_2 \models q'[v] \approx t[X \mapsto t_e]$. Hence $pc_2 \wedge X \approx t_e \models q'[v] \approx t$. \item if $v = x$. If $pc_1 \wedge X \approx t_e \models q[x] \approx t$, it has to be the case that $pc_1 \models t_e \approx t$.
By the memoization property, there has to be a variable $u$ such that $pc_1 \models q[u] \approx t_e$ and therefore $pc_1 \models q[u] \approx t$. Hence $pc_2 \models q[u] \approx t \wedge q[u] \approx t_e$ and, therefore, $pc_2 \models t_e \approx t$. Hence $pc_2 \wedge X \approx t_e \models q[x] \approx t$. The same argument can be used to show that if $pc_1 \wedge X \approx t_e \models q[x] \not\approx t$ then $pc_2 \wedge X \approx t_e \models q[x] \not\approx t$ as well. \end{itemize} \end{itemize} \end{proof} Throughout the rest of this section, we assume that all non-constant function symbols in the signature $\Sigma$ are unary. Given a program configuration $\langle q, pc \rangle$, let $S_q$ denote the set of constants in $q$: \[ S_q = \{ c \mid \exists v \cdot q[v] = c\} \] \begin{theorem} Let $P$ be a coherent program. Let $\langle s, q, pc\rangle$ be a reachable configuration in $P$. Then $\langle q, pc\rangle \cong \langle q, \alpha^1_{S_q}(pc)\rangle$. \end{theorem} \begin{theorem} Let $P$ be a coherent program. Let $\langle s, q, pc\rangle$ be a reachable configuration in $P$. Then $\langle q, pc\rangle \cong \langle q, \alpha^2_{S_q}(pc)\rangle$. \end{theorem} \section{Invariants of the abstraction} Throughout this section, we assume that all non-constant function symbols in the signature $\Sigma$ are unary. Lemmas~\ref{lm:sprtrm} to \ref{lm:frshcnst} outline important properties of EUF that are useful in our proofs. \hg{talk about symmetry} Lemma~\ref{lm:sprtrm} states that introducing an equality between two constants only affects their superterms. \begin{lemma}\label{lm:sprtrm} Let $\Gamma$ be a set of literals, $a, b$ be constants in $\Gamma$. Let $t_1, t_2$ be terms constructed using signature $\Sigma$ such that $\Gamma \not\vdash t_1 \approx t_2$. Then $\Gamma \wedge a \approx b \vdash t_1 \approx t_2$ if and only if $\Gamma \vdash t_1 \approx f^k(a)$ and $\Gamma \vdash t_2 \approx f^k(b)$ for some $k \geq 0$.
\end{lemma} The following lemma specialises Lemma~\ref{lm:sprtrm} to the case of constants and functions applied to constants: \begin{lemma}\label{lm:sprtrm2} Let $\Gamma$ be a set of literals, $a, b$ be constants in $\Gamma$. Let $u, v$ be constants in $\Gamma$ such that $\Gamma \not\vdash v \approx f(u)$. Then $\Gamma \wedge a \approx b \vdash v \approx f(u)$ if and only if $\Gamma \vdash v \approx f^k(a)$ and $\Gamma \vdash u \approx f^{k - 1}(b)$ for some $k \geq 1$. \end{lemma} Let $V$ be a set of constants. We denote by $V^1$ the set of terms obtained by applying function symbols at most once to the constants in $V$: \[ V^1 = \{f(c) \mid c \in V\} \cup V \] We denote by $\Gamma^{V}$ the set \[ \{v \approx t \in \Gamma^* \mid t \in V^1, v \in V\} \cup \{ v\not\approx u \in \Gamma^* \mid u, v \in V\} \] \hg{TODO: come up with a better name than represents. Something that says all superterms have representatives in this set.} We say that a set of constants $V$ \emph{represents} a constant $a$ in a set of literals $\Gamma$ if $a \in V$ and, for every term $f^k(a) \in \mathcal{T}(\Gamma)$~($k \geq 1$), there exists $v \in V$ such that $\Gamma \vdash v \approx f^k(a)$. It follows that there exists a constant $u \in V$ such that $\Gamma \vdash u \approx f^{k - 1}(a)$ and that $\Gamma \vdash v \approx f(u)$. Moreover, if $k \geq 2$, there exists a constant $u' \in V$ such that $\Gamma \vdash u' \approx f^{k - 2}(a)$, and so on. \begin{example} Let $\Gamma = \{ c \approx f(a), d \approx f(b), d \not\approx e\}$. The set of constants $V_1 = \{a, b, c\}$ represents $a$ but not $b$. \end{example} \begin{lemma}\label{lm:Visenough} Let $\Gamma$ be a set of literals and $a, b$ be constants in $\Gamma$. Let $V$ be a set of constants that represents both $a, b$ in $\Gamma$. If \[ \Gamma \wedge a \approx b \vdash v \approx u \] then \[ \Gamma^V \wedge a \approx b \vdash v \approx u \] where $u, v \in V$.
\end{lemma} \begin{proof} By the definition of $\Gamma^V$, if $\Gamma \vdash v \approx u$ then $\Gamma^V \vdash v \approx u$. If $\Gamma \not \vdash v \approx u$, according to Lemma~\ref{lm:sprtrm}, $\Gamma \vdash v \approx f^k(a)$ and $\Gamma \vdash u \approx f^k(b)$. We do induction on $k$. If $k = 0$, the lemma holds by the definition of $\Gamma^V$. For the inductive step, we have $\Gamma \wedge a \approx b \vdash v \approx u$ where $\Gamma \vdash v \approx f^k(a)$ and $\Gamma \vdash u \approx f^k(b)$. Assume that the lemma holds for $k - 1$. That is, for every $u', v' \in V$, if $\Gamma \vdash v' \approx f^{k - 1}(a)$, $\Gamma \vdash u' \approx f^{k - 1}(b)$ then $\Gamma^V \wedge a \approx b \vdash v' \approx u'$. Since $V$ represents both $a, b$ in $\Gamma$, we have $\Gamma \vdash v \approx f(v'')$ and $\Gamma \vdash u \approx f(u'')$ for some $u'', v'' \in V$ with $\Gamma \vdash v'' \approx f^{k - 1}(a)$ and $\Gamma \vdash u'' \approx f^{k - 1}(b)$. Therefore, $\Gamma^V \vdash v \approx f(v'')$ and $\Gamma^V \vdash u \approx f(u'')$. By the inductive assumption, we have $\Gamma^V \wedge a \approx b\vdash u'' \approx v''$. By applying the congruence axiom, we have $\Gamma^V \wedge a \approx b \vdash v \approx u$. \end{proof} \begin{lemma}\label{lm:Visenough2} Let $\Gamma$ be a set of literals and $a, b$ be constants in $\Gamma$. Let $V$ be a set of constants that represents both $a, b$ in $\Gamma$. If \[ \Gamma \wedge a \approx b \vdash v \approx f(u) \] then \[ \Gamma^V \wedge a \approx b \vdash v \approx f(u) \] where $u, v \in V$. \end{lemma} \begin{proof} By the definition of $\Gamma^V$, if $\Gamma \vdash v \approx f(u)$ then $\Gamma^V \vdash v \approx f(u)$. Now, according to Lemma~\ref{lm:sprtrm2}, $\Gamma \vdash v \approx f^k(a)$ and $\Gamma \vdash u \approx f^{k - 1}(b)$. Since $V$ represents $a$ in $\Gamma$, there exists a constant $v' \in V$ such that $\Gamma \vdash v \approx f(v')$ and $\Gamma \vdash v' \approx f^{k -1}(a)$. By the definition of $\Gamma^V$, we have $\Gamma^V \vdash v \approx f(v')$.
By lemma~\ref{lm:Visenough}, we have $\Gamma^V \wedge a \approx b \vdash v' \approx u$. Therefore, $\Gamma^V\wedge a \approx b \vdash v \approx f(u)$. \end{proof} \begin{lemma}\label{lm:Visenoughdeq} Let $\Gamma$ be a set of literals and $a, b$ be constants in $\Gamma$. Let $V$ be a set of constants that represents both $a, b$ in $\Gamma$. If \[ \Gamma \wedge a \approx b \vdash v \not\approx u \] then \[ \Gamma^V \wedge a \approx b \vdash v \not\approx u \] where $u, v \in V$. \end{lemma} \begin{proof} By the definition of $\Gamma^V$, if $\Gamma \vdash v \not\approx u$ then $\Gamma^V \vdash v \not\approx u$. If $\Gamma \not \vdash v \not\approx u$, it has to be the case that there is a term $t$ over the signature $\Sigma$ such that $\Gamma \vdash t \not\approx u$ and $\Gamma \wedge a \approx b \vdash v \approx t$. According to lemma~\ref{lm:sprtrm}, we have $\Gamma \vdash v \approx f^k(a)$ and $\Gamma \vdash t \approx f^k(b)$. Since $V$ represents $b$, there exists $u' \in V$ such that $\Gamma \vdash u' \approx f^k(b)$. Therefore, $\Gamma \wedge a \approx b \vdash v \approx u'$ and $\Gamma \vdash u' \not\approx u$. By the definition of $\Gamma^V$, we have $\Gamma^V \vdash u' \not\approx u$ and, according to lemma~\ref{lm:Visenough}, we have $\Gamma^V \wedge a \approx b \vdash v \approx u'$. Hence $\Gamma^V \wedge a \approx b \vdash v \not\approx u$. \end{proof} \begin{lemma}\label{lm:Visenoughdeq2} Let $\Gamma$ be a set of literals. Let $V$ be a set of constants including $a, b$. If \[ \Gamma \wedge a \not\approx b \vdash v \not\approx u \] then \[ \Gamma^V \wedge a \not\approx b \vdash v \not\approx u \] where $u, v \in V$. \end{lemma} \begin{proof} By the definition of $\Gamma^V$, if $\Gamma \vdash v \not\approx u$ then $\Gamma^V \vdash v \not\approx u$.
If $\Gamma \not\vdash v \not\approx u$ and $\Gamma \wedge a \not\approx b \vdash v \not\approx u$, it has to be the case that $\Gamma \vdash v \approx a$ and $\Gamma \vdash u \approx b$ (up to swapping $a$ and $b$). Therefore, $\Gamma^V \vdash v \approx a$ as well as $\Gamma^V \vdash u \approx b$. Hence $\Gamma^V \wedge a \not\approx b \vdash v \not\approx u$. \end{proof} \begin{lemma}\label{lm:frshcnst} If $\Gamma \wedge v \approx t \vdash \ell$ where $\ell$ is ground and $v$ is a fresh constant, i.e., $v \not\in \mathcal{T}(\Gamma) \cup \mathcal{T}(t)$, then \begin{itemize} \item if $v \not\in\mathcal{T}(\ell)$ then $\Gamma \vdash \ell$ \item if $\ell = v \circ t_2$, then $\Gamma \vdash t \circ t_2$, $\circ \in \{\approx, \not\approx\}$ \item else $\Gamma \vdash \ell[v \mapsto t]$ where $v \mapsto t$ is the syntactic substitution of all occurrences of $v$ with $t$ \end{itemize} \end{lemma} Given a program configuration $\langle q, pc \rangle$, let $Q$ denote the set of constants in $q$: \[ Q = \{ c \mid \exists \pv{v} \cdot q[\pv{v}] = c\} \] Let $Q^1 = \{f(c) \mid c \in Q\} \cup Q$. Let $\langle q, pc_1\rangle$ and $\langle q, pc_2\rangle$ be two program configurations. We say that $\langle q, pc_1\rangle \cong \langle q, pc_2\rangle$ if \begin{itemize} \item $pc_1 \vdash q[\pv{v}] \approx t$ if and only if $pc_2 \vdash q[\pv{v}] \approx t$ for any $t \in Q^1$ \item $pc_1 \vdash q[\pv{v}] \not\approx t$ if and only if $pc_2 \vdash q[\pv{v}] \not\approx t$ for any $t \in Q$ \end{itemize} where $\pv{v}$ is any program variable. \begin{lemma}\label{lm:pceqprop} Let $\langle q, pc_1\rangle$ and $\langle q, pc_2\rangle$ be two program configurations such that $\langle q, pc_1\rangle \cong \langle q, pc_2\rangle$. \begin{itemize} \item $pc_1^Q \subseteq pc_2$ and $pc_2^Q \subseteq pc_1$. \item $Q$ represents $v$ in $pc_1$ if and only if $Q$ represents $v$ in $pc_2$, for any constant $v$. \end{itemize} \end{lemma} \begin{theorem} Let $P$ be a coherent program and let $\langle q, pc_1 \rangle$ be a reachable configuration in $P$.
If $\langle q, pc_1\rangle \cong \langle q, pc_2\rangle$, $\langle s, q, pc_1\rangle \rightarrow \langle s', q', pc_1'\rangle$, and $\langle s, q, pc_2\rangle \rightarrow \langle s', q', pc_2'\rangle$, then $\langle q', pc_1'\rangle \cong \langle q', pc_2'\rangle$. \end{theorem} \begin{proof} We only have to prove the theorem for the assume and assignment statements. \begin{itemize} \item \textbf{assume}$(\pv{x}=\pv{y})$: Let $q[\pv{x}] = x, q[\pv{y}] = y$. By the early-assume property, $Q$ \emph{represents} $x, y$ in $pc_1$. By lemmas~\ref{lm:Visenough}~and~\ref{lm:Visenough2}, we have, if $pc_1 \wedge x \approx y \vdash q[\pv{v}] \approx t$ then $pc_1^Q \wedge x \approx y \vdash q[\pv{v}] \approx t$ for any $t \in Q^1$. By lemma~\ref{lm:pceqprop}, $pc_1^Q \subseteq pc_2$. Therefore, $pc_2 \wedge x \approx y \vdash q[\pv{v}] \approx t$ as well. The other direction can be established using the same lemmas. For the inequalities, we can apply lemma~\ref{lm:Visenoughdeq}. \item \textbf{assume}$(\pv{x}\neq\pv{y})$: Let $q[\pv{x}] = x, q[\pv{y}] = y$. Then $pc_1' = pc_1 \wedge x \not\approx y$ and $pc_2' = pc_2 \wedge x \not\approx y$. The equivalence $pc_1' \vdash q[\pv{v}] \approx t$ if and only if $pc_2' \vdash q[\pv{v}] \approx t$, for any $t \in Q^1$, holds because the new inequality introduces no new equalities. We can apply lemma~\ref{lm:Visenoughdeq2} to show that $pc_1' \vdash q[\pv{v}] \not\approx t$ if and only if $pc_2' \vdash q[\pv{v}] \not\approx t$ for any $t \in Q$. \item $\pv{x} := \pv{y}$: Let $q[\pv{y}] = y$, $q' = q[\pv{x} \mapsto x]$, where $x$ is fresh. We have to show that $pc_1 \wedge x \approx y \vdash q'[\pv{v}]\approx t$ if and only if $pc_2 \wedge x \approx y \vdash q'[\pv{v}] \approx t$, $t \in (Q')^1$. We prove the if direction; the other direction follows by symmetry. \begin{itemize} \item If $\pv{v}$ is not the variable $\pv{x}$, then $q[\pv{v}] = q'[\pv{v}]$. Consider the first and third cases in lemma~\ref{lm:frshcnst}.
If $t$ does not contain $x$, the proof is trivial. Otherwise, $pc_1 \vdash q'[\pv{v}] \approx t[x \mapsto y]$. By our assumption $pc_2 \vdash q'[\pv{v}] \approx t[x \mapsto y]$ as well. Hence $pc_2 \wedge x \approx y\vdash q'[\pv{v}] \approx t$. \item If $\pv{v} = \pv{x}$, then $q'[\pv{x}] = x$. If $pc_1 \wedge x \approx y \vdash x \approx t$, it has to be the case that $pc_1 \vdash y \approx t$. Therefore, $pc_2 \vdash y \approx t$. Hence $pc_2 \wedge x \approx y \vdash x \approx t$. \end{itemize} \item $\pv{x} := f(\pv{y})$: Let $q[\pv{y}] = y$, $q' = q[\pv{x} \mapsto x]$. Assume that $pc_1 \wedge x \approx f(y) \vdash z \approx t$ where $z \in Q'$ and $t \in (Q')^1$. We need to prove that $pc_2 \wedge x \approx f(y) \vdash z \approx t$. We split into cases based on whether $f(y) \in \mathcal{T}(pc_1)$. \begin{itemize} \item $f(y) \not \in \mathcal{T}(pc_1)$. The only possibility for $t$ is $f(u)$ where $u \in Q'$ and $pc_1 \vdash y \approx u$. Since $x$ is fresh in $pc_1$, $u \neq x$. In this case $pc_2 \vdash y \approx u$ as well. If $z \neq x$, we are in case 1 of lemma~\ref{lm:frshcnst} and the claim holds trivially. If $z = x$, then $pc_1 \vdash f(y) \approx f(u)$. Since $pc_2 \vdash y \approx u$, $pc_2 \vdash f(y) \approx f(u)$. Hence, by case 2 of lemma~\ref{lm:frshcnst}, $pc_2 \wedge x \approx f(y) \vdash x \approx f(u)$ as well. \item $f(y) \in \mathcal{T}(pc_1)$. By the memoizing property, $pc_1 \vdash u \approx f(y)$ for some $u \in Q$. Therefore, this case reduces to the previous one, with $x \approx u$ in place of $x \approx f(y)$. \end{itemize} \end{itemize} \end{proof} The theorem only applies when $\langle s, q, pc\rangle$ satisfies both the memoizing and early-assume properties. \begin{theorem} Let $P$ be a coherent program. Let $\langle s, q, pc\rangle$ be a reachable configuration in $P$. Then $\langle q, pc\rangle \cong \langle q, \alpha^1_{S_q}(pc)\rangle$. \end{theorem} \begin{theorem} Let $P$ be a coherent program. Let $\langle s, q, pc\rangle$ be a reachable configuration in $P$.
Then $\langle q, pc\rangle \cong \langle q, \alpha^2_{S_q}(pc)\rangle$. \end{theorem} \section{Uninterpreted Programs} \label{sec:up} An \emph{uninterpreted program (UP)} is a program in the \emph{uninterpreted programming language (UPL)}. The \emph{syntax} of UPL is shown in \Cref{fig:syntax}. Let $\pv{V}$ denote a fixed set of program variables. We use lower case letters in a special font: $\pv{x}$, $\pv{y}$, etc., to denote individual variables in $\pv{V}$. We write $\Vec{\pv{y}}$ for a list of program variables. Function symbols are taken from a fixed set $\mathcal{F}$. As in~\cite{DBLP:journals/pacmpl/MathurMV19}, w.l.o.g., UPL does not allow for Boolean combinations of conditionals and relational symbols. \begin{figure}[t] \begin{align*} \langle stmt \rangle ::=\; &\mathbf{skip} \mid \langle var \rangle \gets \langle var \rangle \mid \langle var \rangle \gets f(\Vec{\langle var \rangle}) \mid \\ &\mathbf{assume}\;(\langle cond \rangle) \mid \langle stmt \rangle \mathbin{;} \langle stmt \rangle \mid \\ &\mathbf{if}\;(\langle cond \rangle ) \;\mathbf{then} \;\langle stmt \rangle \;\mathbf{else}\; \langle stmt \rangle \mid \\ &\textbf{while}\;(\langle cond \rangle) \;\langle stmt \rangle\\ \langle cond\rangle ::=\; &\langle var \rangle = \langle var \rangle \mid \langle var \rangle \neq \langle var \rangle\\ \langle var \rangle ::=\; & \pv{x} \mid \pv{y} \mid \cdots \end{align*} \caption{Syntax of the programming language UPL.} \label{fig:syntax} \end{figure} The small step symbolic operational semantics of UPL is defined with respect to a FOL signature $\Sigma = (\const, \mathcal{F}, \{\approx, \not\approx\})$ by the rules shown in Fig.~\ref{fig:ssesem}. A program \emph{configuration} is a triple $\langle s, q, pc \rangle$, where $s$, called a statement, is a UP being executed, $q : \pv{V} \to \const$ is a \emph{state} mapping program variables to constants in $\mathcal{C}$, and $pc$, called the \emph{path condition}, is an EUF formula over $\Sigma$.
We use $\const(q) \triangleq \{c \mid \exists \pv{v} \cdot q(\pv{v}) = c\}$ to denote the set of all constants that represent current variable assignments in $q$. By abuse of notation, we use $\const(q)$ and $q$ interchangeably. We write $\equiv_q$ to mean $\equiv_{\mathcal{C}(q)}$. \begin{figure} \centering \begin{gather*} \langle \mathbf{skip}\mathbin{;}s, q, pc \rangle \to \langle s, q, pc \rangle\\[1ex] \prftree{\langle s_1, q, pc\rangle \to \langle s_1', q', pc'\rangle} {\langle s_1\mathbin{;}s_2, q, pc \rangle \to \langle s_1'\mathbin{;}s_2, q', pc' \rangle}\\[1ex] \prftree{\langle c, q\rangle\Downarrow v\qquad}{(pc \land v) \not \models \bot} {\langle \mathbf{assume}(c), q, pc \rangle \to \langle \mathbf{skip}, q, pc\wedge v\rangle}\\[1ex] \prftree{\langle e, q\rangle \Downarrow v\qquad}{\text{$x' \in \const(\Sigma)$ is fresh in $pc$}}{\langle \pv{x} \gets e, q, pc \rangle \to \langle\mathbf{skip}, q[\pv{x} \mapsto x'], pc \land x' \approx v\rangle}\\[1ex] \begin{aligned} \langle \mathbf{if}\;(c)\;\mathbf{then}\;s_1\;\mathbf{else}\;s_2, q, pc \rangle &\to \langle \mathbf{assume}(c)\mathbin{;}s_1, q, pc\rangle\\[1ex] \langle \mathbf{if}\;(c)\;\mathbf{then}\;s_1\;\mathbf{else}\;s_2, q, pc \rangle &\to \langle \mathbf{assume}(\neg c)\mathbin{;}s_2, q, pc\rangle \end{aligned}\\[1ex] \begin{multlined} \langle \mathbf{while}\;(c)\;s, q, pc \rangle \to{} \\ \qquad\qquad \langle \mathbf{if}\;(c)\;\mathbf{then}\;(s\mathbin{;} \mathbf{while}\;(c)\;s)\;\mathbf{else}\;\mathbf{skip}, q, pc \rangle \end{multlined} \end{gather*} \caption{Small step symbolic operational semantics of UPL, where $\neg c$ denotes $\pv{x} \neq \pv{y}$ when $c$ is $\pv{x} = \pv{y}$, and $\pv{x} = \pv{y}$ when $c$ is $\pv{x} \neq \pv{y}$.} \label{fig:ssesem} \end{figure} For a state $q$, we write $q[\pv{x} \mapsto x']$ for a state $q'$ that is identical to $q$, except that it maps $\pv{x}$ to $x'$.
We write $\langle e, q \rangle \Downarrow v$ to denote that $v$ is the value of the expression $e$ in state $q$, i.e., the result of substituting each program variable $\pv{x}$ in $e$ with $q(\pv{x})$, and replacing functions and predicates with their FOL counterparts. The value of $e$ is an FOL term or an FOL formula over $\Sigma$. For example, $\langle \pv{x} = \pv{y}, [\pv{x} \mapsto x, \pv{y} \mapsto y] \rangle \Downarrow x \approx y$. Given two configurations $c$ and $c'$, we write $c \to c'$ if $c$ reduces to $c'$ using one of the rules in \Cref{fig:ssesem}. Note that there is no rule for $\textbf{skip}$ -- the program terminates once it gets into a configuration $\langle \textbf{skip}, q, pc\rangle$. Let $\const_0 = \{v_0 \mid \pv{v} \in \pv{V}\} \subseteq \const$ be the set of initial constants. In the initial state $q_0$ of a program, every variable is mapped to the corresponding initial constant, i.e., $q_0(\pv{v}) = v_0$. The operational semantics induces, for a UP $P$, a transition system $\mathcal{S}_P = \tuple{C,c_0,\mathcal{R}}$, where $C$ is the set of configurations, $c_0 \triangleq \langle P, q_0, \top \rangle$ is the initial configuration, and $\mathcal{R} \triangleq \{(c, c') \mid c \to c'\}$. A configuration $c$ of $P$ is \emph{reachable} if $c$ is reachable from $c_0$ in $\mathcal{S}_P$. We denote the set of all reachable configurations in $\mathcal{S}_P$ using $\Reach(\mathcal{S}_P)$. The statements arising in the semantics of $P$, including the intermediate statements, are called the \emph{locations} of $P$, and are denoted by $\mathcal{L}(P)$. We often use $P$ and $\mathcal{S}_P$ interchangeably. Our semantics of UPL differs in some respects from the one in~\cite{DBLP:journals/pacmpl/MathurMV19}. First, we follow a more traditional small-step operational semantics presentation, by providing semantic rules and the corresponding transition system. However, this does not change the semantics conceptually.
More importantly, we ensure that the path condition remains satisfiable in all reachable configurations (by only allowing an assume statement to execute when it results in a satisfiable path condition). We believe this is a more natural choice that is also consistent with what is typically used in other symbolic semantics. UP reachability under our semantics coincides with the definition of~\cite{DBLP:journals/pacmpl/MathurMV19}. \begin{definition}[UP Reachability] \label{def:reach} Given a UP $P$, determine whether there exists a state $q$ and a path condition $pc$ s.t. the configuration $\langle\textbf{skip}, q, pc\rangle$ is reachable in $P$. \end{definition} A certificate for unreachability of location $s$ is an inductive assertion map $\eta$ (or an inductive invariant) s.t. $\eta(s) = \bot$. \begin{definition}[Inductive Assertion Map] \label{def:inductive} Let $\Sigma_0 \triangleq (\const_0, \mathcal{F}, \{\approx, \not\approx\})$ be the restriction of $\Sigma$ to $\const_0$. An \emph{inductive assertion map} of a UP $P$ is a map $\eta : \mathcal{L}(P) \to EUF(\Sigma_0)$ s.t. (a) $\eta(P) = \top$, and (b) if $\langle s, q_0, \eta(s) \rangle \to \langle s', q', pc' \rangle$, then $pc' \models (\eta(s')[v_0 \mapsto q'(\pv{v}) \mid \pv{v} \in \pv{V}])$. \end{definition} In~\cite{DBLP:journals/pacmpl/MathurMV19}, a special sub-class of UPs with a decidable reachability problem was introduced. \begin{definition}[Coherent Uninterpreted Program~\cite{DBLP:journals/pacmpl/MathurMV19}] \label{def:cup} A UP $P$ is \emph{coherent} (a CUP) if all of the reachable configurations of $P$ satisfy the following two properties: \begin{description}[topsep=0pt,noitemsep] \item[Memoizing] for any configuration $\langle \pv{x} \gets f(\Vec{\pv{y}}), q, pc \rangle$, if there is a term $t \in \mathcal{T}(pc)$ s.t. $pc \models t \approx f(q(\Vec{\pv{y}}))$, then there is $\pv{v} \in \pv{V}$ s.t. $pc \models q(\pv{v}) \approx t$.
\item[Early assume] for any configuration\\\mbox{$\langle \mathbf{assume}(\pv{x} = \pv{y}), q, pc \rangle$}, if there is a term $t \in \mathcal{T}(pc)$ s.t. $pc \models t \approx s$ where $s$ is a superterm of either $q(\pv{x})$ or $q(\pv{y})$, then there is $\pv{v} \in \pv{V}$ s.t. $pc \models q(\pv{v}) \approx t$. \end{description} \end{definition} Intuitively, the memoizing property ensures that if a term is recomputed, then it is already stored in a program variable; the early-assume property ensures that whenever an equality between variables is assumed, any of their superterms that was ever computed is still stored in a program variable. Note that unlike the original definition of CUP in~\cite{DBLP:journals/pacmpl/MathurMV19}, we do not require the notion of an \emph{execution}. The path condition accumulates the history of the execution in a configuration, which is sufficient. \begin{figure} \centering \begin{subfigure}[t]{0.2\textwidth} \begin{lstlisting}[numbers=left]
x := t;
y := t;
while (c != d) {
  x := n(x);
  y := n(y);
  c := n(c);
};
x := f(a, x);
y := f(b, y);
assume(a == b);
assume(x != y);
\end{lstlisting} \end{subfigure} \begin{subfigure}[t]{0.2\textwidth} \[\footnotesize \begin{array}{l} x_0 \approx t_0\\ x_0 \approx t_0 \land y_0 \approx t_0\\ x_0 \approx y_0\\ x_0 \approx n(y_0) \land c_0 \not\approx d_0\\ x_0 \approx y_0 \land c_0 \not\approx d_0\\ x_0 \approx y_0\\ \\ x_0 \approx f(a_0, y_0) \land c_0 \approx d_0\\ (a_0 \approx b_0 \Rightarrow x_0 \approx y_0) \land c_0 \approx d_0\\ a_0 \approx b_0 \land x_0 \approx y_0 \land c_0 \approx d_0 \\ \bot \end{array} \] \end{subfigure} \caption{An example CUP program and its inductive assertions.} \label{fig:cup} \end{figure} \begin{example}\NoEndMark An example of a CUP is shown in Fig.~\ref{fig:cup}.
Some reachable configurations along one execution are shown below, where line numbers are used as locations, and $pc_i$ stands for the path condition at line $i$: \[ \begin{lgathered} \langle 2, q_0[\pv{x} \mapsto x_1, \pv{y} \mapsto y_1], x_1 \approx t_0 \land y_1 \approx t_0\rangle\\ \begin{multlined}[t] \langle 6, q_0[\pv{x} \mapsto x_2, \pv{y} \mapsto y_2, \pv{c} \mapsto c_1], pc_2 \land{}\\ \quad c_0 \not\approx d_0 \land x_2 \approx n(x_1) \land y_2 \approx n(y_1) \land c_1 \approx n(c_0)\rangle \end{multlined}\\ \begin{multlined}[t] \langle 9, q_0[\pv{x} \mapsto x_3, \pv{y} \mapsto y_3, \pv{c} \mapsto c_1], pc_6 \land{}\\ \qquad\qquad c_1 \approx d_0 \land x_3 \approx f(a_0, x_2) \land y_3 \approx f(b_0, y_2) \rangle \end{multlined} \end{lgathered} \] The program is coherent because (a)~no term is recomputed; (b)~for the assume at line~10, the only superterms of $a_0$ and $b_0$ are $f(a_0, x_n)$ and $f(b_0, y_n)$, and they are stored in $\pv{x}$ and $\pv{y}$, respectively; and (c)~for the assume $(c_n = d_0)$ introduced by the exit condition of the while loop, no superterms of $c_n$, $d_0$ are ever computed. The program does not reduce to $\mathbf{skip}$ (i.e., it does not reach a final configuration). Its inductive assertion map is shown in Fig.~\ref{fig:cup} (right). \hfill\ExampleSymbol \end{example} Note that UPs are closely related, but not equivalent, to the Herbrand programs of~\cite{dblp:conf/vmcai/muller-olmrs05}. While Herbrand programs use the syntax of UPL, they are interpreted over a fixed universe of Herbrand terms. In particular, in Herbrand programs $f(x) \approx g(x)$ is always false (since $f(x)$ and $g(x)$ have different top-level functions), while in UPs it is satisfiable.
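To make the small-step rules of Fig.~\ref{fig:ssesem} concrete, the following is a minimal executable sketch of the assignment and assume steps. It is an illustration only, not an algorithm from the paper: the names (\texttt{CC}, \texttt{sat}, \texttt{step\_assign}, \texttt{step\_assume}) are ours, path conditions are kept as lists of literals over constants and a single unary function symbol, and the side condition $(pc \land v) \not\models \bot$ of the assume rule is decided by a naive ground congruence closure.

```python
# Minimal sketch (illustration only, not the paper's algorithm) of the
# symbolic assignment and assume steps, with path conditions over constants
# and a single unary function symbol f.
import itertools

def f(t):                                   # build the term f(t)
    return ('f', t)

class CC:
    """Ground congruence closure for constants and one unary symbol f."""
    def __init__(self):
        self.parent, self.uses = {}, {}

    def _add(self, t):
        if t not in self.parent:
            self.parent[t], self.uses[t] = t, set()
            if isinstance(t, tuple):        # t = ('f', s): register the use
                self._add(t[1])
                r = self.find(t[1])
                self.uses[r].add(t)
                for other in list(self.uses[r]):
                    self.merge(t, other)    # congruence with existing f-apps

    def find(self, t):                      # union-find with path halving
        self._add(t)
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]
            t = self.parent[t]
        return t

    def merge(self, s, t):
        rs, rt = self.find(s), self.find(t)
        if rs == rt:
            return
        self.parent[rs] = rt
        self.uses[rt] = apps = self.uses.get(rs, set()) | self.uses.get(rt, set())
        for a in list(apps):                # propagate congruence for f
            for b in list(apps):
                self.merge(a, b)

    def equal(self, s, t):
        return self.find(s) == self.find(t)

def sat(pc):
    """pc not|= bot: no asserted disequality collapses under the equalities."""
    cc = CC()
    for kind, s, t in pc:
        if kind == 'eq':
            cc.merge(s, t)
    return all(not cc.equal(s, t) for kind, s, t in pc if kind == 'neq')

fresh = (f'c{i}' for i in itertools.count(1))   # supply of fresh constants

def step_assign(q, pc, x, rhs):
    """x := e  leads to  q[x -> x'], pc and (x' ~ e), with x' fresh."""
    v = rhs(q)                              # evaluate <e, q>
    xp = next(fresh)
    q2 = dict(q)
    q2[x] = xp
    return q2, pc + [('eq', xp, v)]

def step_assume(q, pc, lit):
    """assume(c): step only if the extended path condition is satisfiable."""
    pc2 = pc + [lit]
    return (q, pc2) if sat(pc2) else None   # None: execution is blocked

# x := f(y); assume(x != y), starting from the initial state
q0, pc0 = {'x': 'x0', 'y': 'y0'}, []
q1, pc1 = step_assign(q0, pc0, 'x', lambda q: f(q['y']))
print(step_assume(q1, pc1, ('neq', q1['x'], q1['y'])) is not None)  # True
print(step_assume(q1, pc1, ('neq', q1['x'], q1['x'])) is None)      # True
```

The two printed checks mirror the semantics: after \texttt{x := f(y)}, assuming \texttt{x != y} keeps the path condition satisfiable and the step fires, while an inconsistent assume yields no successor configuration.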
\section{Introduction} Suppose $M$ is a $C^{*}$-algebra and $E$ is a $C^{*}$-correspondence over $M$ in the sense of \cite{MS98b}. This means, first of all, that $E$ is a (right) Hilbert $C^{*}$-module, and secondly, that if $\mathcal{L}(E)$ denotes the space of all bounded adjointable module maps on $E$, then $E$ becomes a left $M$-module via a $C^{*}$-homomorphism $\varphi_{M}$ from $M$ into $\mathcal{L}(E)$. To emphasize the connection between $E$ and $M$, we will call the pair, $(E,M)$, a \emph{$C^{*}$-correspondence pair}. Form the \emph{Fock space} built from $(E,M)$, $\mathcal{F}(E)$. This is the direct sum $\sum_{n\geq0}E^{\otimes n}$, where $E^{\otimes n}$ is the internal tensor product of $E$ with itself $n$ times. (The tensor products are balanced over $M$.) The Fock space $\mathcal{F}(E)$ is, itself, a $C^{*}$-correspondence over $M$ and we write $\varphi_{M\infty}$ for the left action of $M$. For $\xi\in E$, $T_{\xi}$ denotes the creation operator on $\mathcal{F}(E)$ determined by $\xi$, i.e., for $\eta\in\mathcal{F}(E)$, $T_{\xi}\eta=\xi\otimes\eta$. We let $\mathcal{T}_{+}(E)$ denote the norm closed subalgebra of $\mathcal{L}(\mathcal{F}(E))$ generated by $\varphi_{M\infty}(M)$ and $\{T_{\xi}\mid\xi\in E\}$, and we call $\mathcal{T}_{+}(E)$ \emph{the tensor algebra of} $E$ or of $(E,M)$. In \cite[Definition 2.1]{MS2000}, we introduced the following definition. \begin{defn} \label{def:Morita-equi-Wstar-correspondence}We say that two $C^{*}$-correspondence pairs $(E,M)$ and $(F,N)$ are \emph{Morita equivalent} in case there is a $C^{*}$-equivalence bimodule $\mathcal{X}$ in the sense of \cite[Definition 7.5]{mR74a} such that \[ \mathcal{X}\otimes_{N}F\simeq E\otimes_{M}\mathcal{X}\] as $C^{*}$-correspondences. In this case, we say that $\mathcal{X}$ \emph{implements} a Morita equivalence between $(E,M)$ and $(F,N)$.
\end{defn} Observe that the equation $\mathcal{X}\otimes_{N}F\simeq E\otimes_{M}\mathcal{X}$ is equivalent to the equation $\mathcal{X}\otimes_{N}F\otimes_{N}\widetilde{\mathcal{X}}\simeq E$ and to the equation $F\simeq\widetilde{\mathcal{X}}\otimes_{M}E\otimes_{M}\mathcal{X}$, where $\widetilde{\mathcal{X}}$ is the dual or opposite module of $\mathcal{X}$. We showed there that if $(E,M)$ and $(F,N)$ are Morita equivalent, then the tensor algebras $\mathcal{T}_{+}(E)$ and $\mathcal{T}_{+}(F)$ are Morita equivalent in the sense of \cite{BMP2000}. It follows that $\mathcal{T}_{+}(E)$ and $\mathcal{T}_{+}(F)$ have isometrically isomorphic representation theories. However, when looking at the formulas involved in the isomorphism between the representation theories, certain details become obscure. Our objective in this note is to show that simply tensoring with $\mathcal{X}$ implements an explicit isometric isomorphism between the representation theories of $\mathcal{T}_{+}(E)$ and $\mathcal{T}_{+}(F)$ in a fashion that preserves important properties that we shall introduce shortly. The first step is to have a clear picture of the representation theory of an operator tensor algebra. \section{The Representations of $\mathcal{T}_{+}(E)$} We begin with a restatement of Theorem 3.10 in \cite{MS98b}. \begin{thm} \label{thm:Disintigration}Let $\rho$ be a completely contractive representation of $\mathcal{T}_{+}(E)$ on a Hilbert space $H$. Define $\sigma:M\to B(H)$ by the formula $\sigma(a)=\rho\circ\varphi_{M\infty}(a)$ and define $T:E\to B(H)$ by the formula $T(\xi)=\rho(T_{\xi})$. Then $\sigma$ is a $C^{*}$-representation of $M$ on $H$ and $T$ is a completely contractive bimodule map in the sense that $T(\varphi_{M}(a)\xi b)=\sigma(a)T(\xi)\sigma(b)$ for all $a,b\in M$ and all $\xi\in E$. 
Conversely, given a $C^{*}$-representation $\sigma:M\to B(H)$ and a completely contractive bimodule map $T:E\to B(H)$, there is a unique completely contractive representation $\rho:\mathcal{T}_{+}(E)\to B(H)$ such that $\sigma=\rho\circ\varphi_{M\infty}$ and $T(\xi)=\rho(T_{\xi})$ for all $\xi\in E$. \end{thm} If $T$ is a completely contractive bimodule map with respect to a $C^{*}$-representation $\sigma$ of $M$, then we call $(T,\sigma)$ a \emph{completely contractive covariant pair}. We call the completely contractive representation $\rho$ of $\mathcal{T}_{+}(E)$ that $(T,\sigma)$ determines the \emph{integrated form} of $(T,\sigma)$ and write $\rho=T\times\sigma$. Theorem \ref{thm:Disintigration} raises the question: How does one construct completely contractive covariant pairs? For this purpose, we need to recall the definition of Rieffel's induced representation \cite{mR74b}. If $\sigma:M\to B(H)$ is a Hilbert space representation of $M$, then we may build the Hilbert space $E\otimes_{\sigma}H$, which is the separated completion of the algebraic tensor product $E\otimes H$ in the pre-inner product defined by the formula \[ \langle\xi\otimes h,\eta\otimes k\rangle:=\langle h,\sigma(\langle\xi,\eta\rangle)k\rangle,\qquad\xi\otimes h,\eta\otimes k\in E\otimes H.\] The representation $\sigma^{E}$ of $\mathcal{L}(E)$ on $E\otimes_{\sigma}H$ defined by the formula $\sigma^{E}(T):=T\otimes I$, $T\in\mathcal{L}(E)$, is called the representation of $\mathcal{L}(E)$ \emph{induced} by $\sigma$. The following theorem is essentially Lemma 2.5 of \cite{MSHardy}. \begin{thm} \label{thm:Intertwiner-Covariant}Let $\sigma:M\to B(H)$ be a $C^{*}$-representation.
A completely contractive linear map $T$ from $E$ to $B(H)$ is a bimodule map with respect to $\sigma$ if and only if there is an operator $\widetilde{T}:E\otimes_{\sigma}H\to H$ with $\Vert\widetilde{T}\Vert\leq1$ such that $\widetilde{T}(\sigma^{E}\circ\varphi_{M})(a)=\sigma(a)\widetilde{T}$ for all $a\in M$, and $T(\xi)h=\widetilde{T}(\xi\otimes h)$ for all $\xi\in E$ and $h\in H$. \end{thm} Thus the completely contractive bimodule maps are in bijective correspondence with (contractive) intertwiners. The space of intertwiners of $\sigma$ and $\sigma^{E}\circ\varphi_{M}$ is a key player in our theory and, to keep the notation manageable, when there is no risk of confusion in the context under discussion, we shall not distinguish notationally between bimodule maps $T$ and the corresponding intertwiners $\widetilde{T}$. Further, for reasons that will be explained in a minute, we frequently also denote bimodule maps by lower case fraktur letters from the end of the alphabet, as we do now. \begin{defn} \label{def:sigma-dual}Let $\sigma:M\to B(H)$ be a $C^{*}$-representation. \emph{The $\sigma$-dual} of $E$, denoted $E^{\sigma}$, is defined to be $\{\mathfrak{z}\in B(H,E\otimes_{\sigma}H)\mid\mathfrak{z}\sigma(a)=(\sigma^{E}\circ\varphi_{M})(a)\mathfrak{z},\ a\in M\}$. We write $E^{\sigma*}$ for the space $\{\mathfrak{z}^{*}\mid\mathfrak{z}\in E^{\sigma}\}$ and we write $\mathbb{D}(E^{\sigma*})$ for $\{\mathfrak{z}^{*}\in E^{\sigma*}\mid\Vert\mathfrak{z}^{*}\Vert<1\}$, i.e., $\mathbb{D}(E^{\sigma*})$ is the open unit ball in $E^{\sigma*}$. \end{defn} Thanks to Theorem \ref{thm:Intertwiner-Covariant}, $\overline{\mathbb{D}(E^{\sigma*})}$ labels \emph{all} the completely contractive representations $\rho$ of $\mathcal{T}_{+}(E)$ with the property that $\rho\circ\varphi_{M\infty}=\sigma$. The reason we introduced $E^{\sigma}$, instead of focusing exclusively on $E^{\sigma*}$, is that $E^{\sigma}$ is a $W^{*}$-correspondence over $\sigma(M)'$.
(A $W^{*}$-correspondence is a $C^{*}$-correspondence with some additional structure that we discuss below.) For $\mathfrak{z}_{1},\mathfrak{z}_{2}\in E^{\sigma}$, $\langle\mathfrak{z}_{1},\mathfrak{z}_{2}\rangle:=\mathfrak{z}_{1}^{*}\mathfrak{z}_{2}$, and the $\sigma(M)'$-bimodule actions are given by the formula\[ a\cdot\mathfrak{z}\cdot b:=(I_{E}\otimes a)\mathfrak{z}b,\qquad a,b\in\sigma(M)',\mathfrak{z}\in E^{\sigma},\] where the products on the right are just composition of the maps involved. The reason for introducing the notation $\overline{\mathbb{D}(E^{\sigma*})}$ and writing elements in this ball as lower case $\mathfrak{z}$'s, $\mathfrak{w}$'s, etc., is that we may view an element $F\in\mathcal{T}_{+}(E)$ as a function $\widehat{F}$ on $\overline{\mathbb{D}(E^{\sigma*})}$ via the formula \[ \widehat{F}(\mathfrak{z}):=(\mathfrak{z}\times\sigma)(F),\qquad\mathfrak{z}\in\overline{\mathbb{D}(E^{\sigma*})}.\] Functions of the form $\widehat{F}$ are bona fide $B(H_{\sigma})$-valued analytic functions on $\mathbb{D}(E^{\sigma*})$ with additional very interesting properties, and they can be studied with function-theoretic techniques. (See \cite{MSHardy,MS08,MS09}.) For the purpose of emphasizing the function-theoretic properties of the $\widehat{F}$'s, it seems preferable to write their arguments as $\mathfrak{z}$'s instead of $T$'s. But when representation-theoretic features need emphasis, the use of $T$ and $T\times\sigma$ is sometimes preferable. \section{The Functor} Our objective in this section is to show that Morita equivalence of $C^{*}$-correspondence pairs $(E,M)$ and $(F,N)$ gives rise to a natural isometric isomorphism between the representation theories of $\mathcal{T}_{+}(E)$ and $\mathcal{T}_{+}(F)$. \begin{thm} \label{thm:functor}Suppose $(E,M)$ and $(F,N)$ are Morita equivalent $C^{*}$-correspondence pairs via an $M,N$-equivalence bimodule $\mathcal{X}$ and correspondence isomorphism $W:E\otimes_{M}\mathcal{X}\to\mathcal{X}\otimes_{N}F$.
Suppose further that $\sigma:N\to B(H)$ is a $C^{*}$-representation and let $\sigma^{\mathcal{X}}:M\to B(\mathcal{X}\otimes_{\sigma}H)$ be the representation of $M$ induced by $\mathcal{X}$. Then for each $\mathfrak{z}^{*}\in\overline{\mathbb{D}(F^{\sigma*})}$, $\mathfrak{z}^{*\mathcal{X}}:=(I_{\mathcal{X}}\otimes\mathfrak{z}^{*})(W\otimes I_{H})$ lies in $\overline{\mathbb{D}(E^{\sigma^{\mathcal{X}}*})}$ and the map $\mathfrak{z}^{*}\to\mathfrak{z}^{*\mathcal{X}}$ is an isometric surjection onto $\overline{\mathbb{D}(E^{\sigma^{\mathcal{X}}*})}$.\end{thm} \begin{proof} For $\mathfrak{z}^{*}\in\overline{\mathbb{D}(F^{\sigma*})}$ set $\mathfrak{z}_{1}^{*}:=\left[\begin{array}{cc} 0 & \mathfrak{z}^{*}\\ 0 & 0\end{array}\right]$ acting on $H\oplus(F\otimes_{\sigma}H)$. Then $\mathfrak{z}_{1}^{*}$ commutes with $\left[\begin{array}{cc} \sigma & 0\\ 0 & \sigma^{F}\circ\varphi_{N}\end{array}\right]$. Consequently, $I_{\mathcal{X}}\otimes\mathfrak{z}_{1}^{*}=\left[\begin{array}{cc} 0 & I_{\mathcal{X}}\otimes\mathfrak{z}^{*}\\ 0 & 0\end{array}\right]$ acting on $\mathcal{X}\otimes_{\sigma}H\oplus\mathcal{X}\otimes_{\sigma^{F}\circ\varphi_{N}}(F\otimes_{\sigma}H)$ commutes with $\left[\begin{array}{cc} \sigma^{\mathcal{X}} & 0\\ 0 & (\sigma^{F}\circ\varphi_{N})^{\mathcal{X}}\end{array}\right]$. Since $W\otimes I_{H}:E\otimes\mathcal{X}\otimes_{\sigma}H\to\mathcal{X}\otimes F\otimes_{\sigma}H$ intertwines $(\sigma^{\mathcal{X}})^{E}\circ\varphi_{M}$ and $(\sigma^{F}\circ\varphi_{N})^{\mathcal{X}}$ by hypothesis, we see that $\left[\begin{array}{cc} 0 & (I_{\mathcal{X}}\otimes\mathfrak{z}^{*})(W\otimes I_{H})\\ 0 & 0\end{array}\right]$ commutes with $\left[\begin{array}{cc} \sigma^{\mathcal{X}} & 0\\ 0 & (\sigma^{\mathcal{X}})^{E}\circ\varphi_{M}\end{array}\right]$. 
Since $\Vert(I_{\mathcal{X}}\otimes\mathfrak{z}^{*})(W\otimes I_{H})\Vert=\Vert\mathfrak{z}^{*}\Vert$, it follows that $\mathfrak{z}^{*\mathcal{X}}:=(I_{\mathcal{X}}\otimes\mathfrak{z}^{*})(W\otimes I_{H})$ lies in $\overline{\mathbb{D}(E^{\sigma^{\mathcal{X}}*})}$ and that the map $\mathfrak{z}^{*}\to\mathfrak{z}^{*\mathcal{X}}$ is isometric. Finally, to see that the map is surjective, we appeal to \cite[Theorem 6.23]{mR74b}: Let $\mathfrak{w}^{*}\in\overline{\mathbb{D}(E^{\sigma^{\mathcal{X}}*})}$. Then $\mathfrak{w}^{*}$ intertwines $\sigma^{\mathcal{X}}$ and $(\sigma^{\mathcal{X}})^{E}\circ\varphi_{M}$ by hypothesis. Consequently, $\mathfrak{w}^{*}(W\otimes I_{H})^{-1}$ intertwines $\sigma^{\mathcal{X}}$ and $(\sigma^{F}\circ\varphi_{N})^{\mathcal{X}}$. That is $\left[\begin{array}{cc} 0 & \mathfrak{w}^{*}(W\otimes I_{H})^{-1}\\ 0 & 0\end{array}\right]$ lies in the commutant of $\left[\begin{array}{cc} \sigma^{\mathcal{X}}(\mathcal{L}(\mathcal{X})) & 0\\ 0 & (\sigma^{F}\circ\varphi_{N})^{\mathcal{X}}(\mathcal{L}(\mathcal{X}))\end{array}\right]=\left[\begin{array}{cc} \sigma & 0\\ 0 & (\sigma^{F}\circ\varphi_{N})\end{array}\right]^{\mathcal{X}}(\mathcal{L}(\mathcal{X}))$ and so, by \cite[Theorem 6.23]{mR74b}, $\left[\begin{array}{cc} 0 & \mathfrak{w}^{*}(W\otimes I_{H})^{-1}\\ 0 & 0\end{array}\right]$ must have the form $I_{\mathcal{X}}\otimes\left[\begin{array}{cc} \mathfrak{z}_{11} & \mathfrak{z}_{12}\\ \mathfrak{z}_{21} & \mathfrak{z}_{22}\end{array}\right]$, where $\left[\begin{array}{cc} \mathfrak{z}_{11} & \mathfrak{z}_{12}\\ \mathfrak{z}_{21} & \mathfrak{z}_{22}\end{array}\right]$ lies in the commutant of $\left[\begin{array}{cc} \sigma & 0\\ 0 & (\sigma^{F}\circ\varphi_{N})\end{array}\right]$. 
Since $I_{\mathcal{X}}\otimes\left[\begin{array}{cc} \mathfrak{z}_{11} & \mathfrak{z}_{12}\\ \mathfrak{z}_{21} & \mathfrak{z}_{22}\end{array}\right]$ maps $\mathcal{X}\otimes_{N}(F\otimes_{\sigma}H)$ to $\mathcal{X}\otimes_{\sigma}H$ and is zero on $\mathcal{X}\otimes_{\sigma}H$, it follows that $\left[\begin{array}{cc} \mathfrak{z}_{11} & \mathfrak{z}_{12}\\ \mathfrak{z}_{21} & \mathfrak{z}_{22}\end{array}\right]=\left[\begin{array}{cc} 0 & \mathfrak{z}_{12}\\ 0 & 0\end{array}\right]$ for $\mathfrak{z}_{12}\in\overline{\mathbb{D}(F^{\sigma*})}$, proving that the map $\mathfrak{z}^{*}\to\mathfrak{z}^{*\mathcal{X}}$ is surjective. \end{proof} \begin{defn} If $(E,M)$ and $(F,N)$ are Morita equivalent $C^{*}$-correspondence pairs via an equivalence $M,N$-bimodule $\mathcal{X}$, then the map $(T,\sigma)\to(T^{\mathcal{X}},\sigma^{\mathcal{X}})$ from the representation theory of $\mathcal{T}_{+}(F)$ to the representation theory of $\mathcal{T}_{+}(E)$ defined by $\mathcal{X}$ will be called the \emph{Morita transform} determined by $\mathcal{X}$. \end{defn} We like to think of the Morita transform as a generalized conformal map. \section{Morita Equivalence and Absolute Continuity} Our focus in this section will be on Morita equivalence in the context of $W^{*}$-algebras and $W^{*}$-correspondences. As we noted above, a $W^{*}$-correspondence is a $C^{*}$-correspondence with additional structure. We begin by highlighting what the additional structure is and how to deal with it. So, throughout this section $M$ and $N$ will be $W^{*}$-algebras and $E$ (resp. $F$) will be a $W^{*}$-correspondence over $M$ (resp. $N$). This means, in particular, that $E$ and $F$ are \emph{self-dual} Hilbert $C^{*}$-modules over $M$ and $N$, respectively, in the sense of Paschke \cite[Section 3, p.
449]{wP73}, and that the left actions of $M$ and $N$ are given by \emph{normal} representations, $\varphi_{M}$ and $\varphi_{N}$ of $M$ and $N$ into $\mathcal{L}(E)$ and $\mathcal{L}(F)$, respectively. (Recall that Paschke showed that in the setting of self-dual Hilbert modules over $W^{*}$-algebras, every continuous module map is adjointable and $\mathcal{L}(E)$ is a $W^{*}$-algebra by \cite[Corollary 3.5 and Proposition 3.10]{wP73}.) To avoid technical distractions, we assume that $\varphi_{M}$ and $\varphi_{N}$ are faithful and unital. A key role in this theory is played by Paschke's Theorem 3.2 in \cite{wP73}, which says among other things that any Hilbert $C^{*}$-module $E$ over a $W^{*}$-algebra has a canonical embedding into a self-dual Hilbert module over the algebra, which should be viewed as a canonical completion of $E$. This allows us to perform $C^{*}$-algebraic constructions and pass immediately to the completions to obtain $W^{*}$-algebraic objects. For instance, if $E$ is a Hilbert $W^{*}$-module over $M$, then we may form the $C^{*}$-tensor square, $E^{\otimes2}=E\otimes_{M}E$, which is not, in general, a $W^{*}$-correspondence over $M$. However, its self-dual completion is. More generally, we can form the \emph{$C^{*}$-Fock space} built from $(E,M$), $\mathcal{F}_{c}(E)$, as we did at the outset of this note. Then we let $\mathcal{F}(E)$ be the self-dual completion of $\mathcal{F}_{c}(E)$ in the sense of \cite[Theorem 3.2]{wP73}, and call $\mathcal{F}(E)$ \emph{the Fock space of the $W^{*}$-correspondence $E$.} Similarly, we form $\mathcal{F}_{c}(F)$ and $\mathcal{F}(F)$. We write $\varphi_{M\infty}$ for the left action of $M$ on both $\mathcal{F}_{c}(E)$ and on $\mathcal{F}(E)$. 
This should cause no confusion, since every element of $\mathcal{L}(\mathcal{F}_{c}(E))$ has a unique extension to an element of $\mathcal{L}(\mathcal{F}(E))$, by \cite[Corollary 3.7]{wP73}, and the process of mapping each element in $\mathcal{L}(\mathcal{F}_{c}(E))$ to its extension in $\mathcal{L}(\mathcal{F}(E))$ gives an isometric embedding of $\mathcal{L}(\mathcal{F}_{c}(E))$ in $\mathcal{L}(\mathcal{F}(E))$. Likewise, $\varphi_{N\infty}$ denotes the left action of $N$ on both $\mathcal{F}_{c}(F)$ and $\mathcal{F}(F)$. The creation operator $T_{\xi}$ on $\mathcal{F}_{c}(E)$ determined by $\xi\in E$, therefore has a unique extension to $\mathcal{F}(E)$ and we do not distinguish notationally between the original and the extension. But in the $W^{*}$-setting we let $\mathcal{T}_{+}(E)$ denote the norm closed subalgebra of $\mathcal{L}(\mathcal{F}(E))$ generated by $\varphi_{M\infty}(M)$ and $\{T_{\xi}\mid\xi\in E\}$, and we call $\mathcal{T}_{+}(E)$ \emph{the tensor algebra of} $E$ or of $(E,M)$. That is, we focus on the tensor algebra as living on the $W^{*}$-Fock space $\mathcal{F}(E)$. We view $\mathcal{T}_{+}(F)$ similarly. Finally, we let $H^{\infty}(E)$ denote the ultra-weak closure of $\mathcal{T}_{+}(E)$ in $\mathcal{L}(\mathcal{F}(E))$, and we let $H^{\infty}(F)$ denote the ultra-weak closure of $\mathcal{T}_{+}(F)$ in $\mathcal{L}(\mathcal{F}(F))$. The algebras $H^{\infty}(E)$ and $H^{\infty}(F)$ are called the \emph{Hardy algebras} of $E$ and $F$, respectively. In the special case when $M=\mathbb{C}=E$, we see that $\mathcal{F}_{c}(E)=\mathcal{F}(E)=\ell^{2}(\mathbb{N})$, $\mathcal{T}_{+}(E)$ is the disc algebra $A(\mathbb{D})$ and $H^{\infty}(E)=H^{\infty}(\mathbb{T})$. More generally, when $M=\mathbb{C}$ and $E=\mathbb{C}^{d}$, $\mathcal{T}_{+}(E)$ is Popescu's noncommutative disc algebra and $H^{\infty}(E)$ is his noncommutative Hardy algebra \cite{gP91}. 
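In the scalar special cases just described, the Fock space machinery can be modeled numerically by truncating at words of bounded length. The following sketch is our own illustration (not from the text): it realizes each creation operator $T_{e_{i}}$ on the truncated Fock space of $\mathbb{C}^{d}$ as a rectangular 0-1 matrix between truncation levels, chosen so that the row-isometry relations $T_{e_{i}}^{*}T_{e_{j}}=\delta_{ij}I$ hold exactly; the function names and the truncation level `L` are assumptions of the illustration.

```python
import itertools
import numpy as np

def fock_basis(d, L):
    # All words in the alphabet {0, ..., d-1} of length <= L; these index
    # the standard basis of the truncated Fock space of C^d.
    return [w for k in range(L + 1) for w in itertools.product(range(d), repeat=k)]

def creation_operator(i, d, L):
    # Left creation T_{e_i}: send the word w to (i,) + w.  Written as a
    # rectangular matrix from level <= L-1 into level <= L, so that the
    # relations T_i^* T_j = delta_{ij} I hold exactly despite the truncation.
    dom, cod = fock_basis(d, L - 1), fock_basis(d, L)
    idx = {w: n for n, w in enumerate(cod)}
    T = np.zeros((len(cod), len(dom)))
    for n, w in enumerate(dom):
        T[idx[(i,) + w], n] = 1.0
    return T

d, L = 2, 3
Ts = [creation_operator(i, d, L) for i in range(d)]
# The creation operators form a row isometry: isometries with orthogonal ranges.
for i in range(d):
    for j in range(d):
        G = Ts[i].T @ Ts[j]
        assert np.allclose(G, np.eye(G.shape[0]) if i == j else 0.0)
```

The rectangular bookkeeping stands in for the suppressed isomorphisms $F^{\otimes n}\otimes F\simeq F^{\otimes(n+1)}$; on the full Fock space the same formulas define genuine isometries.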
Somewhat later, Davidson and Pitts studied $H^{\infty}(\mathbb{C}^d)$ under the name \emph{noncommutative analytic Toeplitz algebra} \cite{DP98}. \begin{defn} \label{def:Morita-equi-Wstar-correspondence-1}If $M$ and $N$ are $W^{*}$-algebras and if $E$ and $F$ are $W^{*}$-correspondences over $M$ and $N$, respectively, we say that $(E,M)$ and $(F,N)$ are \emph{Morita equivalent} in case there is a self-dual $M-N$ equivalence bimodule $\mathcal{X}$ in the sense of \cite[Definition 7.5]{mR74a} such that \[ \mathcal{X}\otimes_{N}F\simeq E\otimes_{M}\mathcal{X}\] as $W^{*}$-correspondences. In this case, we say that $\mathcal{X}$ \emph{implements} a Morita equivalence between $(E,M)$ and $(F,N)$. \end{defn} We emphasize that the modules $\mathcal{X}\otimes_{N}F$ and $E\otimes_{M}\mathcal{X}$ are self-dual completions of the balanced $C^{*}$-tensor products. A completely contractive representation of a $W^{*}$-correspondence pair $(E,M)$ on a Hilbert space $H$ is a pair $(T,\sigma)$ where $\sigma$ is a normal representation of $M$ on $H$ and where $T$ is an ultra-weakly continuous, completely contractive bimodule map from $E$ to $B(H)$. However, as we noted in \cite[Remark 2.6]{MSHardy}, the ultra-weak continuity of $T$ follows automatically from the bimodule property of $T$ and the normality of $\sigma$. Our goal is to show that Morita equivalence in the sense of Definition \ref{def:Morita-equi-Wstar-correspondence-1} preserves absolute continuity in the sense of the following definition, which was inspired by the important paper of Davidson, Li and Pitts \cite{DLP2005}. \begin{defn} \label{def:absolute continuity}Let $(T,\sigma)$ be a completely contractive covariant representation of $(E,M)$ on $H$ and assume that $\sigma$ is a normal representation of $M$. 
Then a vector $x\in H$ is called \emph{absolutely continuous} if and only if the functional $a\to\langle(T\times\sigma)(a)x,x\rangle,\,\, a\in\mathcal{T}_{+}(E)$, extends to an ultra-weakly continuous linear functional on $H^{\infty}(E)$. The collection of all absolutely continuous vectors in $H$ is denoted $\mathcal{V}_{ac}(T,\sigma)$, and we say $(T,\sigma)$ and $T\times\sigma$ are absolutely continuous in case $\mathcal{V}_{ac}(T,\sigma)=H$.\end{defn} \begin{rem} The definition of an absolutely continuous vector just given is not quite the one given in \cite[Definition 3.1]{MS2010}. However, by \cite[Remark 3.2]{MS2010}, it is equivalent to the one given there. Also, by virtue of \cite[Theorem 4.11]{MS2010}, $T\times\sigma$ extends to an ultra-weakly continuous completely contractive representation of $H^{\infty}(E)$ if and only if $T\times\sigma$ is absolutely continuous. \end{rem} \begin{thm} \label{thm:Absolute_continuity_preservation}Suppose that $(E,M)$ and $(F,N)$ are $W^{*}$-correspondence pairs that are Morita equivalent via an equivalence bimodule $\mathcal{X}$. If $(\mathfrak{z}^{*},\sigma)$ is a completely contractive covariant representation of $(F,N)$, where $\sigma$ is normal, then \begin{equation} \mathcal{X}\otimes_{\sigma}\mathcal{V}_{ac}(\mathfrak{z}^{*},\sigma)=\mathcal{V}_{ac}(\mathfrak{z}^{\mathcal{X}^{*}},\sigma^{\mathcal{X}}).\label{eq:abs_cont_subspace}\end{equation} In particular, $(\mathfrak{z}^{*},\sigma)$ is absolutely continuous if and only if $(\mathfrak{z}^{\mathcal{X}^{*}},\sigma^{\mathcal{X}})$ is absolutely continuous. \end{thm} The proof of this theorem rests on a calculation of independent interest.
Recall that each $\mathfrak{z}\in\overline{\mathbb{D}(F^{\sigma})}$ determines a completely positive map $\Phi_{\mathfrak{z}}$ on $\sigma(N)'$ via the formula\[ \Phi_{\mathfrak{z}}(a):=\mathfrak{z}^{*}(I_{F}\otimes a)\mathfrak{z},\qquad a\in\sigma(N)'.\] Recall, also, that the commutant of $\sigma^{\mathcal{X}}(M)$ is $I_{\mathcal{X}}\otimes\sigma(N)'$, by \cite[Theorem 6.23]{mR74b}. \begin{lem} \label{lem:CP-induced}With the notation as in Theorem \ref{thm:Absolute_continuity_preservation},\[ \Phi_{\mathfrak{z}{}^{\mathcal{X}}}=I_{\mathcal{X}}\otimes\Phi_{\mathfrak{z}},\] i.e., for all $a\in\sigma(N)'$, $\Phi_{\mathfrak{z}{}^{\mathcal{X}}}(I_{\mathcal{X}}\otimes a)=I_{\mathcal{X}}\otimes\Phi_{\mathfrak{z}}(a)$.\end{lem} \begin{proof} By Theorem \ref{thm:functor}, $\mathfrak{z}^{\mathcal{X}}=(W\otimes I_{H})^{*}(I_{\mathcal{X}}\otimes\mathfrak{z})$. Consequently, for all $a\in\sigma(N)'$, \begin{align*} \Phi_{\mathfrak{z}{}^{\mathcal{X}}}(I_{\mathcal{X}}\otimes a) & =\mathfrak{z}^{\mathcal{X}^{*}}(I_{E}\otimes(I_{\mathcal{X}}\otimes a))\mathfrak{z}^{\mathcal{X}}\\ = & (I_{\mathcal{X}}\otimes\mathfrak{z}^{*})(W\otimes I_{H})(I_{E\otimes\mathcal{X}}\otimes a)(W\otimes I_{H})^{*}(I_{\mathcal{X}}\otimes\mathfrak{z})\\ = & (I_{\mathcal{X}}\otimes\mathfrak{z}^{*})(I_{\mathcal{X}\otimes F}\otimes a)(I_{\mathcal{X}}\otimes\mathfrak{z})\\ = & (I_{\mathcal{X}}\otimes\mathfrak{z}^{*})(I_{\mathcal{X}}\otimes(I_{F}\otimes a))(I_{\mathcal{X}}\otimes\mathfrak{z})\\ = & I_{\mathcal{X}}\otimes\Phi_{\mathfrak{z}}(a).\end{align*} \end{proof} For the proof of Theorem \ref{thm:Absolute_continuity_preservation}, we need one more ingredient: \begin{defn} \label{def:superharmonic_operator}Let $\Phi$ be a completely positive map on a $W^{*}$-algebra $M$. A positive element $a\in M$ is called \emph{superharmonic} with respect to $\Phi$ in case $\Phi(a)\leq a$.
A superharmonic element $a\in M$ is called a \emph{pure} superharmonic element in case $\Phi^{n}(a)\to0$ ultra-strongly as $n\to\infty$. \end{defn} \begin{proof} (of Theorem \ref{thm:Absolute_continuity_preservation}) In \cite[Theorem 4.7]{MS2010}, we proved that the absolutely continuous subspace for $(\mathfrak{z}^{*},\sigma)$ is the closed linear span of the ranges of all the pure superharmonic operators for $\Phi_{\mathfrak{z}}$, i.e., the projection onto $\mathcal{V}_{ac}(\mathfrak{z}^{*},\sigma)$ is the supremum taken over all the projections $P$, where $P$ is the projection onto the range of a pure superharmonic operator for $\Phi_{\mathfrak{z}}$. From Lemma \ref{lem:CP-induced} we see that $a\in\sigma(N)'$ is a pure superharmonic operator for $\Phi_{\mathfrak{z}}$ if and only if $I_{\mathcal{X}}\otimes a$ is a pure superharmonic operator for $\Phi_{\mathfrak{z}{}^{\mathcal{X}}}$. Since the range projection of $I_{\mathcal{X}}\otimes a$ is $I_{\mathcal{X}}\otimes P$, if $P$ is the range projection of $a$, the equation \eqref{eq:abs_cont_subspace} is immediate. \end{proof} \section{Stabilization and Reconstruction} We return to the $C^{*}$-setting, although everything we will say has an analogue in the $W^{*}$-setting. So let $N$ be a $C^{*}$-algebra and let $F$ be a $C^{*}$-correspondence over $N$. We are out to identify a special pair $(E,M)$ that is Morita equivalent to $(F,N)$ and is a kind of stabilization of $(F,N)$. As we will see, $(E,M)$ will have a representation theory that is closely connected to Popescu's reconstruction operator. Form the Fock space over $F$, $\mathcal{F}(F)$, and let $M=\mathcal{K}(\mathcal{F}(F))$. Also, let $P_{0}$ be the projection onto the sum $F\oplus F^{\otimes2}\oplus F^{\otimes3}\oplus\cdots$ in $\mathcal{F}(F)$. Then $P_{0}$ lies in $\mathcal{L}(\mathcal{F}(F))$, which is the multiplier algebra of $M=\mathcal{K}(\mathcal{F}(F))$.
We set $E:=P_{0}\mathcal{K}(\mathcal{F}(F))$ and endow $E$ with its obvious structure as a right Hilbert $C^{*}$-module over $\mathcal{K}(\mathcal{F}(F))$. Note that $\mathcal{L}(E)=P_{0}\mathcal{L}(\mathcal{F}(F))P_{0}$. Define $R:\mathcal{F}(F)\otimes F\to\mathcal{F}(F)$ by the formula $R(\xi\otimes f)=\xi\otimes f$, where the first $\xi\otimes f$, the argument of $R$, is viewed as an element in $\mathcal{F}(F)\otimes_{N}F$, while the second $\xi\otimes f$, the image of $R(\xi\otimes f)$, is viewed as an element of $\mathcal{F}(F)$. It appears that $R$ is the identity map. However, this is only because we have suppressed the isomorphisms between $F^{\otimes n}\otimes F$ and $F^{\otimes(n+1)}$. The map $R$ is adjointable, and its adjoint is given by the formulae $R^{*}(a)=0$, if $a\in N$, viewed as the zero$^{\underline{th}}$ component of $\mathcal{F}(F)$, while $R^{*}(\xi_{1}\otimes\xi_{2}\otimes\xi_{3}\otimes\cdots\otimes\xi_{n})=(\xi_{1}\otimes\xi_{2}\otimes\cdots\otimes\xi_{n-1})\otimes\xi_{n}$, if $n\geq1$ and $\xi_{1}\otimes\xi_{2}\otimes\xi_{3}\otimes\cdots\otimes\xi_{n}$ is a decomposable element of $F^{\otimes n}\subseteq\mathcal{F}(F)$. In particular, $RR^{*}=P_{0}$. We define $\varphi_{M}:M\to\mathcal{L}(E)$ by the formula \[ \varphi_{M}(a):=R(a\otimes I_{F})R^{*},\qquad a\in M.\] Observe that $\varphi_{M}$ extends naturally to the multiplier algebra of $M$, which is $\mathcal{L}(\mathcal{F}(F))$, and that $\varphi_{M}(I)=P_{0}$. Consequently, $E$ is an essential left module over $M$. \begin{prop} \label{pro:Stable_Morita_equivalence}If $\mathcal{X}=\mathcal{F}(F)$, then $\mathcal{X}$ is an equivalence bimodule between $M=\mathcal{K}(\mathcal{F}(F))$ and $N$ and the map $W$ from $E\otimes_{M}\mathcal{X}$ to $\mathcal{X}\otimes_{N}F$ defined by the formula\[ W(P_{0}a\otimes\xi)=R^{*}P_{0}a\xi,\qquad P_{0}a\otimes\xi\in E\otimes_{M}\mathcal{X},\] is an isomorphism of $M,N$-correspondences. Consequently, $(E,M)$ and $(F,N)$ are Morita equivalent.
\end{prop} \begin{proof} By definition, $\mathcal{X}$ is an equivalence bimodule implementing a Morita equivalence between $M$ and $N$. Also, it is clear that $W$ is a right $N$-module map. To see that $W$ is a left $M$-module map, it may be helpful to emphasize that the tensor product $E\otimes_{M}\mathcal{X}$ is balanced over $M$. So, if $P_{0}$ and $I$ were in $\mathcal{K}(\mathcal{F}(F))$ (which they aren't; they're only multipliers of $\mathcal{K}(\mathcal{F}(F))$), then $P_{0}a\otimes\xi$ could be replaced by $P_{0}\otimes\xi$, which in turn could be replaced by $I\otimes P_{0}\xi$. Further, sending $I\otimes P_{0}\xi$ to $P_{0}\xi$ effects an isomorphism between $E\otimes_{M}\mathcal{X}$ and $P_{0}\mathcal{F}(F)$. It follows that $W$ is effectively $R^{*}$. The following equation then gives the desired result.\begin{align*} W\varphi_{M}(b)(P_{0}a\otimes\xi) & =W(R(b\otimes I_{F})R^{*})P_{0}a\otimes\xi\\ = & (b\otimes I_{F})R^{*}a\xi\\ = & (b\otimes I_{F})W(P_{0}a\otimes\xi).\end{align*} The fact that $W$ is isometric is another easy computation: For all $a,b\in M$, and $\xi,\eta\in\mathcal{X}$, \begin{align*} \langle P_{0}a\otimes\xi,P_{0}b\otimes\eta\rangle & =\langle\xi,a^{*}P_{0}b\eta\rangle\\ = & \langle P_{0}a\xi,P_{0}b\eta\rangle\\ = & \langle R^{*}a\xi,R^{*}b\eta\rangle\\ = & \langle W(P_{0}a\otimes\xi),W(P_{0}b\otimes\eta)\rangle.\end{align*} (Note that we have used the fact that $P_{0}=RR^{*}$ when passing from the second line to the third.) Since $\mathcal{K}(\mathcal{F}(F))\mathcal{F}(F)=\mathcal{F}(F)$, $P_{0}\mathcal{K}(\mathcal{F}(F))\mathcal{F}(F)=P_{0}\mathcal{F}(F)$, and so $R^{*}P_{0}\mathcal{K}(\mathcal{F}(F))\mathcal{F}(F)=R^{*}P_{0}\mathcal{F}(F)=\mathcal{F}(F)\otimes F$.
This shows that $W$ is surjective.\end{proof} \begin{defn} Given a $C^{*}$-correspondence pair $(F,N)$, we call the $C^{*}$-correspondence pair $(E,M)=(P_{0}\mathcal{K}(\mathcal{F}(F)),\mathcal{K}(\mathcal{F}(F)))$ constructed in Proposition \ref{pro:Stable_Morita_equivalence} the \emph{canonical stabilization} of $(F,N)$, and we call $(\mathcal{F}(F),W)$ the \emph{canonical $(E,M),(F,N)$-equivalence}. \end{defn} We want to illustrate the calculations of Proposition \ref{pro:Stable_Morita_equivalence} in a concrete setting first considered by Popescu. For this purpose, we require two observations. First, recall that $E$ has the form $PM$. In general, if $M$ is a $C^{*}$-algebra and if $E$ has the form $PM$, where $P$ is a projection in the multiplier algebra of $M$, then we called $(E,M)$ \emph{strictly cyclic} in \cite[Page 419]{MS98b}. In this case, if $(T,\sigma)$ is a completely contractive covariant representation of $(E,M)$ on a Hilbert space $H$, then $E\otimes_{\sigma}H$ is really $\sigma(P)H$, where we have extended $\sigma$ to the multiplier algebra of $M$, if $M$ is not unital. Consequently, the intertwiner $\widetilde{T}$ really maps the \emph{subspace} $\sigma(P)H$ into $H$ but the adjoint of $\widetilde{T}$ may be viewed as an operator \emph{on} $H$, i.e., from $H$ to $H$, with range contained in $\sigma(P)H$, of course. Second, observe that in general, if $(T,\sigma)$ is a covariant representation of $(F,N)$ on a Hilbert space $H$, then the representation induced from the canonical equivalence is $(T^{\mathcal{F}(F)},\sigma^{\mathcal{F}(F)})$. We know $\sigma^{\mathcal{F}(F)}$ represents $\mathcal{K}(\mathcal{F}(F))$ on $\mathcal{F}(F)\otimes_{\sigma}H$ via the ordinary action of $\mathcal{K}(\mathcal{F}(F))$ on $\mathcal{F}(F)$, tensored with the identity operator on $H$, i.e., $\sigma^{\mathcal{F}(F)}(a)=a\otimes I_{H}$.
On the other hand, from Theorem \ref{thm:functor}, $\widetilde{T^{\mathcal{F}(F)}}=(I_{\mathcal{F}(F)}\otimes\widetilde{T})(W\otimes I_{H})$. But as we noted in the proof of Proposition \ref{pro:Stable_Morita_equivalence}, $W$ is effectively $R^{*}$, and taking into account all the balancing that is taking place, we may write $\widetilde{T^{\mathcal{F}(F)}}=(I_{\mathcal{F}(F)}\otimes\widetilde{T})(R^{*}\otimes I_{H})$. Since, as we just remarked, $\widetilde{T^{\mathcal{F}(F)}}$ maps from $E\otimes_{M}\mathcal{F}(F)\otimes_{\sigma}H=P_{0}\mathcal{K}(\mathcal{F}(F))\otimes_{\mathcal{K}(\mathcal{F}(F))}\mathcal{F}(F)\otimes_{\sigma}H$, which can be identified with the subspace $P_{0}\mathcal{F}(F)\otimes_{\sigma}H$ of $\mathcal{F}(F)\otimes_{\sigma}H$, into $\mathcal{F}(F)\otimes_{\sigma}H$, it will be more convenient in the example below to work with the adjoint of $\widetilde{T^{\mathcal{F}(F)}}$,\begin{equation} \left(\widetilde{T^{\mathcal{F}(F)}}\right)^{*}=(R\otimes I_{H})(I_{\mathcal{F}(F)}\otimes\widetilde{T}^{*}),\label{eq:T-star}\end{equation} and view $\left(\widetilde{T^{\mathcal{F}(F)}}\right)^{*}$ as an operator in $B(\mathcal{F}(F)\otimes_{\sigma}H)$. \begin{example} \label{exa:Popescu's_reconstruction_operator}In this example, we let $N=\mathbb{C}$ and we let $F=\mathbb{C}^{d}$. We interpret $\mathbb{C}^{d}$ as $\ell^{2}(\mathbb{N})$, if $d=\infty$. If $(T,\sigma)$ is a completely contractive covariant representation of $(\mathbb{C}^{d},\mathbb{C})$ on a Hilbert space $H$, then $\sigma$ is just the $n$-fold multiple of the identity representation of $\mathbb{C}$, where $n$ is the dimension of $H$. Also, $\widetilde{T}$ may be viewed in terms of a $1\times d$ matrix of operators on $H$, $[T_{1},T_{2},\cdots,T_{d}]$, such that $\sum_{i=1}^{d}T_{i}T_{i}^{*}\leq I_{H}$, i.e., $[T_{1},T_{2},\cdots,T_{d}]$ is a row contraction.
When $\mathbb{C}^{d}\otimes H$ is identified with the column direct sum of $d$ copies of $H$, the formula for $\widetilde{T}:\mathbb{C}^{d}\otimes H\to H$ is $\widetilde{T}(\left(\begin{array}{c} h_{1}\\ h_{2}\\ \vdots\\ h_{d}\end{array}\right))=\sum_{i=1}^{d}T_{i}h_{i}$. Consequently, $\widetilde{T}^{*}:H\to\mathbb{C}^{d}\otimes H$ is given by the formula \[ \widetilde{T}^{*}h=\left(\begin{array}{c} T_{1}^{*}h\\ T_{2}^{*}h\\ \vdots\\ T_{d}^{*}h\end{array}\right).\] On the other hand, $\mathcal{F}(\mathbb{C}^{d})\otimes\mathbb{C}^{d}$ may be viewed as the column direct sum of $d$ copies of $\mathcal{F}(\mathbb{C}^{d})$ and when this is done, $R$ has a matricial representation as $[R_{1},R_{2},\cdots,R_{d}]$, where $R_{i}$ is the right creation operator on $\mathcal{F}(\mathbb{C}^{d})$ determined by the $i^{th}$ canonical basis vector $e_{i}=(0,0,\cdots,0,1,0,\cdots,0)^{\intercal}$ for $\mathbb{C}^{d}$, i.e., $R_{i}\xi=\xi\otimes e_{i}$. Notice that $[R_{1},R_{2},\cdots,R_{d}]$ is a \emph{row isometry,} meaning that the $R_{i}$'s are all isometries and that their range projections $R_{i}R_{i}^{*}$ are mutually orthogonal. Thus, the formula for $R:\mathcal{F}(\mathbb{C}^{d})\otimes\mathbb{C}^{d}\to\mathcal{F}(\mathbb{C}^{d})$ is $R\left(\begin{array}{c} \xi_{1}\\ \xi_{2}\\ \vdots\\ \xi_{d}\end{array}\right)=\sum_{i=1}^{d}R_{i}\xi_{i}$.
Consequently, in the context of this example, equation \eqref{eq:T-star} becomes \begin{align*} \left(\widetilde{T^{\mathcal{F}(\mathbb{C}^{d})}}\right)^{*}(\xi\otimes h) & =(R\otimes I_{H})(I_{\mathcal{F}(\mathbb{C}^{d})}\otimes\widetilde{T}^{*})(\xi\otimes h)\\ = & (R\otimes I_{H})\left(\begin{array}{c} \xi\otimes T_{1}^{*}h\\ \xi\otimes T_{2}^{*}h\\ \vdots\\ \xi\otimes T_{d}^{*}h\end{array}\right)\\ = & \sum_{i=1}^{d}R_{i}\xi\otimes T_{i}^{*}h\\ = & (\sum_{i=1}^{d}R_{i}\otimes T_{i}^{*})(\xi\otimes h),\end{align*} i.e., $\left(\widetilde{T^{\mathcal{F}(\mathbb{C}^{d})}}\right)^{*}$ is Popescu's \emph{reconstruction operator} $\sum_{i=1}^{d}R_{i}\otimes T_{i}^{*}$. \end{example} The reconstruction operator first appeared implicitly in \cite{gP89}, where Popescu developed a characteristic operator function for noncommuting $d$-tuples of contractions. (In this connection it was used explicitly in \cite{gP2006}.) The first place the term ``reconstruction operator'' appeared in the literature is \cite[Page 50]{gP2009}, which began circulating as a preprint in 2004. Since that time, the reconstruction operator has played an increasingly prominent role in Popescu's work. In addition, the reconstruction operator has popped up elsewhere in the literature, but without the name attached to it. One notable example is Orr Shalit's paper \cite[Page 69]{oS2008}. There he attached a whole semigroup of them to representations of certain product systems of correspondences. Because of Example \ref{exa:Popescu's_reconstruction_operator} we feel justified in introducing the following terminology. 
\begin{defn} \label{def:Reconstruction_operator}If $(T,\sigma)$ is a completely contractive covariant representation of a $C^{*}$-correspondence pair $(F,N)$ on a Hilbert space $H$, then the adjoint of the intertwiner of the Morita transform of the canonical stabilization of $(F,N)$ is called the \emph{reconstruction operator} of $(T,\sigma)$; i.e., the reconstruction operator of $(T,\sigma)$ is defined to be $(\widetilde{T^{\mathcal{F}(F)}})^{*}$ viewed as an operator in $B(\mathcal{F}(F)\otimes_{\sigma}H)$. \end{defn} Our analysis raises the questions: How unique is the canonical stabilization of a $C^{*}$-correspondence pair? Are there non-canonical stabilizations? In general there are many stabilizations that ``compete'' with the canonical stabilization. Organizing them seems to be a complicated matter. To see a little of what is possible, we will briefly outline what happens in the setting of Example \ref{exa:Popescu's_reconstruction_operator}. So fix $(\mathbb{C}^{d},\mathbb{C})$. We shall assume $d$ is finite to keep matters simple. We can stabilize $\mathbb{C}$ as a $C^{*}$-algebra, getting the compact operators on $\ell^{2}(\mathbb{N})$. It is important to do this explicitly, however. So let $\mathcal{X}$ be the column Hilbert space $\mathbf{C}_{\infty}$. This is $\ell^{2}(\mathbb{N})$ with the operator space structure it inherits as the set of all operators from $\mathbb{C}$ to $\ell^{2}(\mathbb{N})$. Equivalently, it is the set of all infinite matrices $T=(t_{ij})$ that represent a compact operator on $\ell^{2}(\mathbb{N})$ and have the property that $t_{ij}=0$, when $j>1$. (See \cite{BMP2000}.) We then have $\widetilde{\mathbf{C}_{\infty}}=\mathbf{R}_{\infty}$, the row Hilbert space. Also, if $\mathcal{K}=\mathcal{K}(\ell^{2}(\mathbb{N}))$, then $\mathbf{C}_{\infty}$ is a $\mathcal{K},\mathbb{C}$-equivalence bimodule.
So, if $E$ is any correspondence over $\mathcal{K}$ that is equivalent to $\mathbb{C}^{d}$, then $E$ must be isomorphic to \[ \mathbf{C}_{\infty}\otimes_{\mathbb{C}}\mathbb{C}^{d}\otimes_{\mathbb{C}}\mathbf{R}_{\infty}\simeq\mathbf{C}_{d}(\mathcal{K})\] with its usual left and right actions of $\mathcal{K}$. Because $\mathcal{K}$ is stable, there is an endomorphism $\alpha$ of $\mathcal{K}$ such that $\mathbf{C}_{d}(\mathcal{K})$ is isomorphic to $_{\alpha}\mathcal{K}$. That is, $_{\alpha}\mathcal{K}$ is $\mathcal{K}$ as a right $\mathcal{K}$-module (the module product is just the product in $\mathcal{K}$ and the $\mathcal{K}$-valued inner product is $\langle\xi,\eta\rangle:=\xi^{*}\eta$.) The left action of $\mathcal{K}$ is that which is implemented by $\alpha$, i.e., $a\cdot\xi:=\alpha(a)\xi$. General theory tells us this is the case, but we can see it explicitly as follows. Choose a Cuntz family of $d$ isometries on $\ell^{2}(\mathbb{N})$, $\{S_{i}\}_{i=1}^{d}$. (This means that $S_{i}^{*}S_{j}=\delta_{ij}I$ and $\sum_{i=1}^{d}S_{i}S_{i}^{*}=I$.) Then, as is well known, $\{S_{i}\}_{i=1}^{d}$ defines an endomorphism of $\mathcal{K}$ via the formula $\alpha(a)=\sum_{i=1}^{d}S_{i}aS_{i}^{*}$. Note, too, that $\alpha$ extends to be a \emph{unital} endomorphism of $B(\ell^{2}(\mathbb{N}))$ since $\sum_{i=1}^{d}S_{i}S_{i}^{*}=I$. On the other hand, define $V:\mathbf{C}_{d}(\mathcal{K})\to\mathcal{K}$ via the formula \[ V(\left(\begin{array}{c} a_{1}\\ a_{2}\\ \vdots\\ a_{d}\end{array}\right))=\sum_{i=1}^{d}S_{i}a_{i},\qquad\left(\begin{array}{c} a_{1}\\ a_{2}\\ \vdots\\ a_{d}\end{array}\right)\in\mathbf{C}_{d}(\mathcal{K}).\] Then it is a straightforward calculation to see that $V$ is a correspondence isomorphism from $\mathbf{C}_{d}(\mathcal{K})$ onto $_{\alpha}\mathcal{K}$. 
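The Cuntz relations above, and the resulting endomorphism $\alpha$, can be checked exactly in a finite model. The sketch below is our own illustration (not from the text): it uses the interleaving shifts $S_{i}e_{k}=e_{dk+i}$, written as rectangular matrices so that $S_{i}^{*}S_{j}=\delta_{ij}I$ and $\sum_{i}S_{i}S_{i}^{*}=I$ hold on the nose, and it also verifies numerically that an operator of the form $\sum_{i}S_{i}\otimes T_{i}^{*}$ is a contraction whenever $[T_{1},\dots,T_{d}]$ is a row contraction; all function names and sizes are assumptions of the illustration.

```python
import numpy as np

def cuntz_isometry(i, d, n):
    # S_i e_k = e_{d*k+i}, as a (d*n) x n matrix.  With this rectangular
    # bookkeeping the Cuntz relations hold exactly on the truncation.
    S = np.zeros((d * n, n))
    for k in range(n):
        S[d * k + i, k] = 1.0
    return S

d, n, h = 2, 5, 3
Ss = [cuntz_isometry(i, d, n) for i in range(d)]
for i in range(d):
    for j in range(d):
        assert np.allclose(Ss[i].T @ Ss[j], np.eye(n) if i == j else 0.0)
assert np.allclose(sum(S @ S.T for S in Ss), np.eye(d * n))

def alpha(a):
    # The endomorphism a -> sum_i S_i a S_i^* determined by the Cuntz family.
    return sum(S @ a @ S.T for S in Ss)

# alpha is unital and multiplicative because the S_i have orthogonal ranges.
rng = np.random.default_rng(0)
a, b = rng.standard_normal((n, n)), rng.standard_normal((n, n))
assert np.allclose(alpha(a @ b), alpha(a) @ alpha(b))
assert np.allclose(alpha(np.eye(n)), np.eye(d * n))

# An operator sum_i S_i (x) T_i^* built from the Cuntz family is a
# contraction for any row contraction [T_1, ..., T_d] on C^h.
row = rng.standard_normal((h, d * h))
row /= np.linalg.norm(row, 2)
Tops = [row[:, i * h:(i + 1) * h] for i in range(d)]
recon = sum(np.kron(S, T.conj().T) for S, T in zip(Ss, Tops))
assert np.linalg.norm(recon, 2) <= 1 + 1e-9
```

The rectangular shapes stand in for the fact that genuine Cuntz isometries require an infinite-dimensional space; on $\ell^{2}(\mathbb{N})$ the same interleaving formulas define true isometries.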
Thus $\mathcal{X}=\mathbf{C}_{\infty}$ is an equivalence bimodule between $(_{\alpha}\mathcal{K},\mathcal{K})$ and $(\mathbb{C}^{d},\mathbb{C})$ and $(_{\alpha}\mathcal{K},\mathcal{K})$ is a bona fide contender for a stabilization of $(\mathbb{C}^{d},\mathbb{C})$. Note that this time $_{\alpha}\mathcal{K}$ is strictly cyclic, but the projection $P$ is the identity. Suppose, now, that $(T,\sigma)$ is a completely contractive covariant representation of $(\mathbb{C}^{d},\mathbb{C})$ on a Hilbert space $H$. Then as before $\sigma$ is an $n$-fold multiple of the identity representation of $\mathbb{C}$ on $\mathbb{C}$, where $n$ is the dimension of $H$ and $\widetilde{T}:\mathbb{C}^{d}\otimes H\to H$ may be viewed as a row contraction $[T_{1},T_{2},\cdots,T_{d}]$ of operators on $H$. The induced representation of $\mathcal{K}$, $\sigma^{\mathbf{C}_{\infty}}$, is the $n$-fold multiple of the identity representation of $\mathcal{K}$ (same $n$) and a calculation along the lines of the one carried out in Example \ref{exa:Popescu's_reconstruction_operator} shows that $\left(\widetilde{T^{\mathbf{C}_{\infty}}}\right)^{*}=S_{1}\otimes T_{1}^{*}+S_{2}\otimes T_{2}^{*}+\cdots+S_{d}\otimes T_{d}^{*}$ acting on $\ell^{2}(\mathbb{N})\otimes H$. Thus, $\left(\widetilde{T^{\mathbf{C}_{\infty}}}\right)^{*}$ is an alternative for Popescu's reconstruction operator. How different $\left(\widetilde{T^{\mathbf{C}_{\infty}}}\right)^{*}$ is from his reconstruction operator remains to be seen. We believe the difference could be very interesting. We believe that the dependence of $\left(\widetilde{T^{\mathbf{C}_{\infty}}}\right)^{*}$ on the Cuntz family $\{S_{i}\}_{i=1}^{d}$ could be very interesting, also.\bigskip \noindent\emph{Acknowledgment:} We are very grateful to Gelu Popescu for giving us some background and references on his reconstruction operator.
\section{Introduction} Owing to their high mobility and agility, unmanned aerial vehicle (UAV)-assisted communications are widely acknowledged as a promising means of enhancing future wireless networks \cite{Valavanis2015Handbook,Zeng2016Wireless,Chen2018Liquid,Zhao2018Integrating,Mozaffari2019Beyond}. Numerous UAV-assisted applications have emerged during the past decade, such as cargo delivery, surveillance, and monitoring, with UAVs acting as different types of communication platforms, including aerial base stations (BSs), aerial relays, and aerial terminals \cite{Li2019UAV,Hourani2014Optimal,Lyu2017Placement,Wang2018Power}. As aerial BSs, UAVs can provide reliable communication links for ground devices. As aerial terminals, UAVs have the freedom to complete special tasks \cite{Duan2019Resource}. In particular, a UAV acting as an aerial relay can enlarge communication coverage and improve communication quality. Relaying technology has been widely investigated in terrestrial communications, but most terrestrial relays are fixed, with limited mobility. Unlike static relays, UAV-enabled mobile relays offer new opportunities for performance improvement by dynamically tuning the location of the UAV-relay to best suit various specific environments, especially for latency-tolerant applications \cite{Zhan2011Wireless} and scenarios with harsh conditions \cite{Zeng2016Throughput}. Because receivers are generally dispersed and mobile, the best relaying position can vary from one receiver to another and also from time to time. A UAV-relay with the ability to move around can dynamically fly close to the best position for the communication node pair \cite{Zeng2016Wireless}. Moreover, UAVs hovering at high altitude have a high probability of establishing line-of-sight (LoS) links with ground devices \cite{Zhang2016Probabilistic}, which further leads to improved data rates and reduced latency.
Furthermore, the distinctive characteristics of UAVs make them an important technology in the Internet of Things (IoT) \cite{Motlagh2016Low}. Therefore, UAV-assisted communications for 5G IoT have recently been of wide research interest. In some applications, such as agricultural surveillance, IoT devices may be deployed remotely in rural areas far from base stations. It is expensive and inconvenient to build terrestrial communication facilities to support information exchange and collection for these IoT devices \cite{Motlagh2016Low,Feng2019UAV}. UAV-enabled relays help IoT devices communicate with the base station whenever necessary, which in effect expands the coverage of base stations. Besides, IoT devices are usually energy-limited and thus lack the ability to communicate over a wide range. By leveraging the mobility of UAVs, it is possible to fly close to the IoT devices to communicate with them, including collecting data from the devices and transmitting signals to them. In this way, the IoT devices can communicate with access points using less energy \cite{Mozaffari2016Unmanned,Feng2019UAV}. At the same time, the UAV can also transmit energy to energy-constrained IoT devices through radio-frequency signals, which can further extend their working life \cite{Su2020UAV}. In addition, another typical application is post-disaster rescue. When cellular infrastructure is destroyed and communication is disrupted by a sudden disaster, UAVs can be dispatched to establish temporary communication links and deliver rescue information to IoT devices \cite{Liu2019Resource}. The IoT devices can be all kinds of human-portable machine-type devices, guiding people to evacuate, avoid danger, and get rescued as soon as possible based on the rescue information.
These potential benefits of UAV relays, however, come with the new challenge of three-dimensional (3D) deployment and trajectory design of the UAV specifically for the communication pairs to be served \cite{He2018Joint,Pan2019Joint}. This location-based optimization is of current interest for UAV-relays \cite{Zhang2018Trajectory,Zhang2018Joint} and is most closely related to our current work. In particular, in order to achieve efficient and high-capacity communication, the optimal relay trajectory design of the UAV requires a balance between the source-relay and relay-destination throughput. Besides, the trajectory design can greatly affect the energy efficiency of UAVs, which is a key metric especially for battery-limited UAVs \cite{Mozaffari2016Efficient,Yang2018Joint}. The authors of \cite{Yang2018Joint} proposed an iterative algorithm to minimize the sum uplink power by jointly optimizing the UAV's flight altitude, antenna beamwidth, location, transmission bandwidth, and power. In \cite{Mozaffari2017Mobile}, the deployment of multiple UAVs was optimized for collecting data from geographically distributed IoT devices. Further considering a propulsion power consumption model, the authors of \cite{Zeng2017Energy} optimized the trajectory of UAVs to maximize energy efficiency, or equivalently to lengthen the working lifetime of UAVs. Besides the above new challenges relative to traditional relays in terms of coverage, the deployment of UAVs also brings other new opportunities and technical challenges, including channel modeling \cite{Cai2017Low}, energy efficiency \cite{LinStriking}, and interference management \cite{Fouda2019Interference}. In UAV-aided communication networks, there exist both UAV-to-ground and UAV-to-UAV channels, which are quite different from the well-studied traditional ground communication channels.
Though the UAV-to-ground channels are usually assumed to be LoS links, they may also be blocked by obstacles, making reliable communication challenging. As for UAV-to-UAV channels, they are dominated by LoS links but may suffer from high Doppler frequency shifts. Therefore, it is necessary to measure and model these two kinds of channels more systematically \cite{Zeng2016Wireless}. Besides, UAVs suffer from limitations of size, weight, and power (SWaP), which makes the deployment and operation of energy-efficient UAVs essential for smart energy use. In UAV-aided communication networks, there is a lack of fixed backhaul links and centralized control due to the UAV's high mobility, which makes interference management more challenging than in terrestrial communication networks. Therefore, interference management technologies specially designed for UAV communications are necessary \cite{Mei2018Uplink}. Recently, studies have shown the potential of UAVs in expanding communication coverage and improving the quality of service (QoS) of cell-edge users (CEUs). The authors of \cite{Cheng2018UAV} maximized the sum rate of a multiuser network by jointly optimizing the UAV trajectory and the offloading schedule among multiple cells, with the UAV acting as a mobile BS. In that hybrid cellular network, the UAV-enabled BSs and ground base stations (GBSs) jointly served the ground users. In \cite{Zeng2016Throughput}, the authors investigated a point-to-point communication system where a UAV relayed information from source to destination. Also for a point-to-point system, the authors of \cite{Pan2019Joint} proposed an algorithm to minimize the decoding error probability by jointly optimizing the time length allocation and UAV locations. Considering a layered network where a swarm of UAVs was deployed to provide high QoS for IoT devices and enlarge the coverage area, the authors of \cite{Zhang2019IoT} optimized the number of UAVs and proposed a low-latency routing algorithm. 
In \cite{Yuan2019Joint}, a location-based beamforming scheme was proposed to enhance the security of a UAV-enabled relaying system. However, the practical causal cache constraint of the UAV was not considered. In fact, the relayed information has to be buffered at the UAV before being forwarded to destinations \cite{Zhao2019Caching}, which results in the causal constraint. In this paper, we consider cell-edge performance enhancement in a multi-cell IoT network by using a UAV relay, where the UAV, equipped with a cache, acts as a decode-and-forward mobile relay to forward information from adjacent GBSs to CEUs. The main contributions of this paper are summarized as follows: \begin{itemize} \item We consider the scenario where a UAV-enabled mobile relay helps forward data to CEUs distributed in the joint edge coverage of multiple cells. To maximize the sum rate of all CEUs, we formulate an optimization problem that jointly optimizes the UAV mobility management, including trajectory, velocity, and acceleration, and the UAV-CEU association strategy, subject to minimum rate requirements of CEUs, mobility constraints of the UAV, and practical causal buffer constraints. \item The cache induces an information causality constraint that has rarely been considered in existing works: the UAV can only forward data that has been successfully received during the previous time slots. This causal constraint has a very complicated form, and we transform it into a convex constraint by resorting to tight bounds and relaxations. \item The original problem is a mixed-integer nonconvex problem, whose optimal solution is generally hard to obtain. We devise an iterative algorithm that solves the original problem in an alternating manner. For the UAV mobility management subproblem, we transform it into a convex problem and solve it by the well-established interior-point method. 
Then, we use the dual decomposition method to solve the UAV-CEU association optimization subproblem. \end{itemize} The rest of this paper is organized as follows. The system model and problem formulation are presented in Section \ref{sec:system_model}. In Section \ref{sec:algorithm}, we decompose the original problem into two subproblems, solve them separately, and propose an iterative mobility management and user association (IMMUA) algorithm; its convergence and complexity are also discussed. In Section \ref{sec:results}, numerical results are presented to show the effectiveness of the proposed algorithm. Finally, concluding remarks are drawn in Section \ref{sec:conclusion}. \vspace{0.1cm} \section{System Model And Problem Formulation}\label{sec:system_model} \subsection{System Model} \begin{figure}[t!] \setlength{\abovecaptionskip}{-1pt} \centering \includegraphics[width = 3.2in, height = 2.5in]{system_model.eps} \caption{System model of a UAV-enabled mobile relaying network.} \label{Fig:system model} \end{figure} We consider a mobile relay, i.e., a UAV, that helps serve ground CEUs distributed within the overlapped edge coverage of $ N_B $ adjacent GBSs, as depicted in Fig. \ref{Fig:system model}. In practice, it is necessary to select an appropriate type of UAV according to the application scenario, including QoS requirements, operating conditions, and regulations \cite{Mozaffari2019A}. In general, UAVs can be roughly classified into fixed-wing and rotary-wing UAVs. The advantage of rotary-wing UAVs is that they can hover over fixed positions and fly in arbitrary directions, while their disadvantages are lower flight speed and limited payload. In contrast, fixed-wing UAVs support higher flight speeds and heavier payloads but must keep moving in the air \cite{Zeng2016Wireless}. In this paper, we consider a rotary-wing UAV as the mobile relay, allowing the UAV to hover above the optimal positions and rotate by an arbitrary angle \cite{Filippone2006Flight}. 
Under this consideration, the constraint on the rotation of the UAV can be removed. The UAV-assisted communication system is considered in IoT applications. The CEUs can be all kinds of IoT devices, such as sensors or actuators in smart agriculture and robots in machine-to-machine (M2M) scenarios. The system model can also be applied to other scenarios with slight modifications, such as a post-disaster communication scenario. In this paper, we focus on cell-edge users rather than cell-center users to emphasize the great potential of UAVs in improving the QoS of edge users. We assume that each GBS is equipped with $ L $ antennas, while the UAV and each user have a single antenna. Denote $ \mathcal{K}=\{1,\cdots,K\} $ as the set of CEUs. Each CEU is served via the UAV relay in order to enhance the cell-edge performance under a minimum rate constraint. During each transmission period of $ T $ seconds, the adjacent GBSs first send data to the UAV, and then the UAV forwards the data to CEUs. The period length can be selected according to the application scenario and mission type of the UAV. The UAV receives and forwards data in frequency division duplexing (FDD) mode. A 3D Cartesian coordinate system is considered, in which all GBSs as well as CEUs have zero altitude and the UAV flies at a fixed altitude $ H $ in meters. The GBSs and CEUs have fixed horizontal locations denoted respectively by $ \bm{b}_m=(x_m,y_m) $ for the $ m $th GBS and $ \bm{e}_k=(x_k,y_k) $ for CEU $ k\in\mathcal{K} $. The locations of GBSs and CEUs are assumed known to the UAV. Without loss of generality, each transmission period $ T $ is split into $ N $ equal-length time slots, with $ \delta_t=\frac{T}{N} $ denoting the elementary slot length. $ N $ can be chosen sufficiently large in order to guarantee an approximately constant UAV location within each time slot, resulting in a sufficiently small value of $ \delta_t $. 
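To make the discretization concrete, the following sketch (plain Python; the period length, maximum UAV speed, and per-slot displacement target are illustrative values of our own, not the paper's simulation parameters) picks the smallest $ N $ such that the per-slot displacement bound $ V_{\rm{max}}\delta_t $ stays below a prescribed value.

```python
import math

def min_slots(T, v_max, max_disp):
    """Smallest N such that the per-slot displacement bound
    v_max * delta_t = v_max * T / N does not exceed max_disp."""
    return math.ceil(T * v_max / max_disp)

# Illustrative values: T = 100 s, maximum speed 50 m/s, and we want the
# UAV to move at most 10 m within any single slot.
T, v_max, max_disp = 100.0, 50.0, 10.0
N = min_slots(T, v_max, max_disp)
delta_t = T / N                       # elementary slot length
print(N, delta_t, v_max * delta_t)    # 500 slots, 0.2 s, 10 m per slot
```

Larger $ N $ tightens the constant-location approximation within each slot at the cost of more optimization variables.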
At each time slot $ n\in \{1,\cdots,N\} \triangleq \mathcal N $, the horizontal coordinate of the UAV is expressed as $ \bm{u}[n]=(x[n],y[n]) $. Therefore, the UAV's trajectory can be approximated by the $ N $-length sequence $ \{(x[n],y[n])\}_{n=1}^{N} $. Similarly, the UAV's velocity and acceleration can be denoted as $ \{\bm{v}[n]\}_{n=1}^{N} $ and $ \{\bm{a}[n]\}_{n=1}^{N} $, respectively. We assume that the UAV must return to the pre-specified starting point $ \bm{u}_0=(x_0,y_0) $ after each transmission period. For sufficiently small $ \delta_t $, the mobility constraints of the UAV, including the starting point, terminal point, speed constraint, and acceleration constraint, can be expressed as \begin{gather} \bm{u}[1]=\bm{u}[N]=\bm{u}_0,\\ \|\bm{u}[n+1]-\bm{u}[n]\|\leq V_{\rm{max}}\delta_t,\ n=1,\cdots,N-1,\\ \bm{v}[n+1]=\bm{v}[n]+\bm{a}[n]\delta_t, \ n=1,\cdots,N-1, \label{eq:velocity} \\ \bm{u}[n+1]\!=\!\bm{u}[n]\!+\!\bm{v}[n]\delta_t\!+\!\frac{1}{2}\bm{a}[n]\delta_t^2, \ n=1,\cdots,N-1, \label{eq:acceleration} \\ \|\bm{v}[n]\|\leq V_{\rm{max}}, \ n=1,\cdots,N,\\ \|\bm{a}[n]\|\leq a_{\rm{max}}, \ n=1,\cdots,N, \end{gather} where $ \|\cdot\| $ represents the Euclidean norm of a vector, $ V_{\rm{max}} $ and $ a_{\rm{max}} $ denote the maximum speed and maximum acceleration, respectively, and $ V_{\rm{max}}\delta_t $ is the maximum displacement in each time slot. \eqref{eq:velocity} and \eqref{eq:acceleration} follow from the kinematics formulas. According to the above coordinate representation, the link distance between GBS $ m $ and the UAV at the $ n $th time slot can be expressed as \begin{equation} d_{mu}[n]=\sqrt{H^2+\|\bm{u}[n]-\bm{b}_m\|^2}. \end{equation} Similarly, the link distance between CEU $ k $ and the UAV at the $ n $th time slot can be expressed as \begin{equation} d_{uk}[n]=\sqrt{H^2+\|\bm{u}[n]-\bm{e}_k\|^2}. 
\end{equation} Considering the high altitude of the UAV, air-to-ground channels between the UAV and GBSs are dominated by line-of-sight (LoS) links. By applying the free-space path-loss model \cite{Goldsmith2005Wireless}, the channel power gain from GBS $ m $ to the UAV during time slot $ n $ is \begin{equation} h_{mu}[n]=\alpha_0d_{mu}^{-2}[n]=\frac{\alpha_0}{H^2+\|\bm{u}[n]-\bm{b}_m\|^2}, \end{equation} where $ \alpha_0 $ denotes the reference channel power gain at 1~m. Similarly, the channel power gain from the UAV to CEU $ k $ during time slot $ n $ is \begin{equation} h_{uk}[n]=\alpha_0d_{uk}^{-2}[n]=\frac{\alpha_0}{H^2+\|\bm{u}[n]-\bm{e}_k\|^2}. \end{equation} The $ N_B $ adjacent GBSs use the maximum ratio transmission (MRT) strategy to transmit data to the UAV. MRT precodes the transmitted signal using weights proportional to the corresponding channel coefficients, which maximizes the signal-to-noise ratio (SNR) of the received signal. In the $ n $th time slot, the signal received by the UAV from the $ N_B $ GBSs is \begin{equation} \label{eq:y_m} y[n] = \sum_{m=1}^{N_B}\sqrt{h_{mu}[n]P_B}\bm{g}_m^H \bm{w}_m s + z, \end{equation} where $ P_B $ represents the transmit power of each GBS, $ \bm{g}_m $ accounts for the small-scale channel fading from GBS $ m $ to the UAV, $ \bm{w}_m $ is the beamforming vector, $ s $ is the transmit signal with unit power, and $ z $ is the additive white Gaussian noise (AWGN) with variance $ \sigma^2 $. We assume that full channel state information is known to the GBSs. Considering the $ N_B $ GBSs as one large GBS with $ N_BL $ antennas in total, the overall channel can be modeled as \begin{equation} \bm{g}=\left(\sqrt{h_{1u}}\bm{g}_1,\cdots,\sqrt{h_{mu}}\bm{g}_m,\cdots,\sqrt{h_{N_Bu}}\bm{g}_{N_B}\right)^T. \end{equation} By applying the MRT beamforming $ \bm{w}=\frac{\bm{g}}{\|\bm{g}\|} $, the signal received by the UAV in \eqref{eq:y_m} can be rewritten as \begin{equation} y[n]=\sqrt{P_B} \|\bm{g}\| s+z. 
\end{equation} In this way, the received SNR can be maximized as \begin{equation} \text{SNR}=\frac{P_B \|\bm{g}\|^2}{\sigma^2}=\sum_{m =1}^{N_B}\frac{P_B h_{mu}\|\bm{g}_m\|^2}{\sigma^2}. \end{equation} By using the Shannon formula, the data rate of the UAV in the $ n $th time slot can be evaluated as \begin{equation}\label{eq:rate_r} R_{U_r}[n]=\log_2 \left(1+\sum_{m=1}^{N_B} \frac{P_B h_{mu}[n] \|\bm{g}_m\|^2}{\sigma^2} \right). \end{equation} Then, the UAV forwards the received data to its associated CEU in each time slot. Assume that the UAV serves at most one CEU during each time slot. Let $ \rho_{k,n}=1 $ indicate that the $ k $th CEU associates with the UAV for reception in the $ n $th time slot and otherwise $ \rho_{k,n}=0 $. As a result, the average rate of CEU $ k $ within $ T $ equals \begin{equation}\label{eq:rate_k} R_E[k]=\frac{1}{N}\sum_{n=1}^{N}\rho_{k,n}\log_2\left(1+\frac{P_U h_{uk}[n]}{\sigma^2}\right), \end{equation} where $ P_U $ is the transmitting power of the UAV. From the perspective of UAV, the transmission rate from the UAV to its associated CEUs in the $ n $th time slot can be obtained as \begin{equation}\label{eq:rate_t} R_{U_t}[n]=\sum_{k=1}^{K}\rho_{k,n}\log_2\left(1+\frac{P_U h_{uk}[n]}{\sigma^2}\right). \end{equation} For the UAV with a sufficiently large buffer, without loss of generality, the processing time at the UAV is set as one time slot. The data received in the $ n $th time slot can be forwarded in the next time slot. So the UAV has no data to forward in the first time slot and the GBSs should not transmit any data to the UAV in the last time slot. Therefore, for $ n=1 $ and $ n=N $, we have $ R_{U_t}[1]=R_{U_r}[N]=0 $ and $ \rho_{k,1}=0 $. Considering causality in practice and from \eqref{eq:rate_r} and \eqref{eq:rate_t}, we can express the causal buffer constraint as \begin{equation}\label{eq:causality} \sum_{i=2}^{n}R_{U_t}[i]\leq \sum_{i=1}^{n-1}R_{U_r}[i], \ n=2,\cdots,N. 
\end{equation} It guarantees that the UAV in each time slot $ n $ can only forward the data that has been successfully received during the previous time slots. \subsection{Problem Formulation} We define the UAV's trajectory $ \bm{U}\triangleq \{\bm{u}[n],n=2,\cdots,N-1\} $, the velocity $ \bm{V}\triangleq \{\bm{v}[n],n=1,\cdots,N\} $, the acceleration $ \bm{A}\triangleq \{\bm{a}[n],n=1,\cdots,N\} $, and the UAV-CEU association strategy $ \bm{P}\triangleq \{\rho_{k,n},\forall k\in \mathcal{K},n=2,\cdots,N\} $. Our objective is to maximize the sum rate of all the CEUs by jointly optimizing $ \bm{U} $, $ \bm{V} $, $ \bm{A} $, and $ \bm{P} $ in the transmission period $ T $, subject to the minimum rate requirements of CEUs, mobility constraints of the UAV, and practical causal buffer constraints. Then, we can formulate the joint optimization problem as \begin{subequations}\label{eq:pro_U_P} \begin{align} \mathop{\max}_{\bm{U}, \bm{V},\bm{A},\bm{P}}\quad\!\! &\sum_{k\in \mathcal{K}}R_E[k]\\ \textrm{s.t.}\quad\!\! &R_E[k]\geq R_0, \ \forall k\in \mathcal{K}, \\ &\sum_{k\in \mathcal{K}}\rho_{k,n}\leq 1, \ n=2,\cdots,N,\label{eq:association_2}\\ &\rho_{k,n}\in\{0,1\}, \ \forall k\in \mathcal{K}, \ n=2,\cdots,N,\\ &\sum_{i=2}^{n}R_{U_t}[i]\leq \sum_{i=1}^{n-1}R_{U_r}[i], \ n=2,\cdots,N, \\ &\bm{u}[1]=\bm{u}[N]=\bm{u}_0,\\ &\|\bm{u}[n\!+\!1]\!-\!\bm{u}[n]\|\!\leq\!V_{\rm{max}} \delta_t,\ n\!=\!1,\!\cdots\!,N\!\!-\!\!1,\!\!\!\!\!\!\!\!\!\\ &\bm{v}[n\!+\!1]\!=\!\bm{v}[n]\!+\!\bm{a}[n]\delta_t, \ n=1,\cdots,N\!\!-\!\!1, \\ &\bm{u}[n\!\!+\!\!1]\!=\!\bm{u}[n]\!\!+\!\!\bm{v}[n]\delta_t\!\!+\!\!\frac{1}{2}\bm{a}[n]\delta_t^2, n\!=\!1,\!\cdots\!,N\!\!-\!\!1, \\ &\|\bm{v}[n]\|\leq V_{\rm{max}}, \ n=1,\cdots,N, \\ &\|\bm{a}[n]\|\leq a_{\rm{max}}, \ n=1,\cdots,N, \end{align} \end{subequations} where $ R_0 $ is the minimum rate requirement of each CEU, which guarantees the QoS of CEUs. Constraints (19c) and (19d) ensure that the UAV serves at most one CEU in each time slot. 
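A candidate solution of the problem above can be checked for feasibility numerically. The following minimal sketch (plain Python with 0-based indices, so slot $ n=1 $ of the paper is index 0; the tolerance and example sequences are our own) verifies the mobility constraints (19f)-(19k) and the causal buffer constraint (19e) for given trajectory, velocity, acceleration, and per-slot rate sequences.

```python
import math

def feasible(u, v, a, R_ut, R_ur, u0, v_max, a_max, dt, eps=1e-9):
    """Check constraints (19e)-(19k) for candidate sequences.
    u, v, a are N-length lists of (x, y) tuples; R_ut / R_ur are the
    per-slot transmit / receive rates of the UAV (0-based slots)."""
    N = len(u)
    norm = lambda p: math.hypot(p[0], p[1])
    sub = lambda p, q: (p[0] - q[0], p[1] - q[1])
    if norm(sub(u[0], u0)) > eps or norm(sub(u[-1], u0)) > eps:
        return False                              # (19f): start/end at u0
    for n in range(N - 1):
        if norm(sub(u[n + 1], u[n])) > v_max * dt + eps:
            return False                          # (19g): displacement bound
        vel = (v[n][0] + a[n][0] * dt, v[n][1] + a[n][1] * dt)
        if norm(sub(v[n + 1], vel)) > eps:
            return False                          # (19h): velocity kinematics
        pred = (u[n][0] + v[n][0] * dt + 0.5 * a[n][0] * dt ** 2,
                u[n][1] + v[n][1] * dt + 0.5 * a[n][1] * dt ** 2)
        if norm(sub(u[n + 1], pred)) > eps:
            return False                          # (19i): position kinematics
    if any(norm(x) > v_max + eps for x in v):
        return False                              # (19j): speed limit
    if any(norm(x) > a_max + eps for x in a):
        return False                              # (19k): acceleration limit
    tx = rx = 0.0
    for n in range(1, N):                         # (19e): causal buffer
        tx += R_ut[n]
        rx += R_ur[n - 1]
        if tx > rx + eps:
            return False
    return True
```

For example, a UAV hovering at $ \bm{u}_0 $ passes the check whenever the cumulative forwarded rate never exceeds the cumulative received rate.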
Constraint (19e) is the information causality constraint in practice. Constraints (19f)-(19k) are mobility constraints of the UAV in terms of initial location, terminal location, speed, and acceleration. \section{Joint Mobility Management and Association Optimization}\label{sec:algorithm} We observe that the optimization problem in \eqref{eq:pro_U_P} is a mixed-integer nonconvex problem, which is generally NP-hard, so obtaining its optimal solution efficiently is extremely challenging. As can be seen from \eqref{eq:pro_U_P}, the UAV's trajectory $ \bm{U} $, velocity $ \bm{V} $, and acceleration $ \bm{A} $ are coupled with each other and can be collectively referred to as mobility management. We thus decompose the original problem into two subproblems, i.e., UAV mobility management and UAV-CEU association optimization. In order to make each subproblem tractable, we resort to bounding and relaxation to tackle the nonconvex objective function and constraints. In this way, the original nonconvex problem is transformed into two convex subproblems, and a procedure of alternating optimization is then applied. It is worth mentioning that the most difficult constraint of the original problem is (19e), i.e., the information causality constraint, because the optimization variables appear in the denominators on both sides of the inequality. To the best of our knowledge, no method in the existing literature has dealt with a constraint of this kind. \subsection{UAV Mobility Management}\label{sec:trajectory optimization} In order to solve the optimization problem \eqref{eq:pro_U_P}, we first optimize the UAV mobility strategy, including the UAV trajectory $ \bm{U} $, velocity $ \bm{V} $, and acceleration $ \bm{A} $, with a temporarily fixed UAV-CEU association strategy $ \bm{P} $. This subproblem can be expressed as \vspace{0.1cm} \begin{subequations}\label{eq:pro_U} \begin{align} \mathop{\max}_{\bm{U}, \bm{V}, \bm{A}}\quad\!\! 
&\sum_{k \in \mathcal{K}}R_E[k]\\ \textrm{s.t.}\quad\!\! &R_E[k]\geq R_0, \ \forall k\in \mathcal{K}, \\ &\sum_{i=2}^{n}R_{U_t}[i]\leq \sum_{i=1}^{n-1}R_{U_r}[i], \ n=2,\cdots,N,\\ &\bm{u}[1]=\bm{u}[N]=\bm{u}_0,\\ &\|\bm{u}[n+1]\!-\!\bm{u}[n]\|\!\leq\!V_{\rm{max}} \delta_t,\ n\!=\!1,\cdots\!,N\!-\!1,\\ &\bm{v}[n+1]=\bm{v}[n]+\bm{a}[n]\delta_t, \ n=1,\cdots,N-1,\\ &\bm{u}[n\!\!+\!\!1]\!=\!\bm{u}[n]\!\!+\!\!\bm{v}[n]\delta_t\!\!+\!\!\frac{1}{2}\bm{a}[n]\delta_t^2, n\!=\!1,\!\cdots\!,N\!\!-\!\!1, \\ &\|\bm{v}[n]\|\leq V_{\rm{max}}, \ n=1,\cdots,N, \\ &\|\bm{a}[n]\|\leq a_{\rm{max}}, \ n=1,\cdots,N. \end{align} \end{subequations} Problem \eqref{eq:pro_U} is, however, still nonconvex due to its nonconvex objective function and constraints (20b) and (20c). The remaining constraints are relatively easy to solve. In particular, (20d), (20f), and (20g) are linear functions of the variables $ \bm{u}[n] $, $ \bm{v}[n] $, and $ \bm{a}[n] $. (20e), (20h), and (20i) are convex with respect to $ \bm{u}[n] $, $ \bm{v}[n] $, and $ \bm{a}[n] $, respectively. We try to transform the nonconvex objective function and constraints into convex objective function and constraints via relaxation. As mentioned above, the major challenge is to deal with the causality constraints (20c). To obtain a tractable form of (20c), we need to determine a convex upper bound of the left hand side (LHS) of (20c) and a concave lower bound of the right hand side (RHS) of (20c). Without loss of generality, we consider the $ (l+1) $th iteration, given the UAV-CEU association strategy obtained at the $ l $th iteration as $ \bm{P}^{(l)} $. 
Firstly, $ R_{U_r}[n] $ can be bounded by \begin{eqnarray}\label{eq:Rrlb} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!R_{U_r}[n]=\log_2 \left(1+\sum_{m=1}^{N_B} \frac{P_B h_{mu}[n] \|\bm{g}_m\|^2}{\sigma^2} \right) \nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\overset{(a)}\geq \frac{1}{N_B}\sum_{m=1}^{N_B}\log_2\left(1+\frac{N_B P_B\alpha_0 \|\bm{g}_m\|^2}{\sigma^2(H^2+\|\bm{u}[n]-\bm{b}_m\|^2)}\right)\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\overset{(b)}\geq \sum_{m=1}^{N_B}\!\left(-a_r ^{(l)}[n](\|\bm{u}[n]\!\!-\!\!\bm{b}_m\|^2\!\!-\!\!\|\bm{u}^{(l)}[n]\!-\!\bm{b}_m\|^2)\!+\!b_r ^{(l)}[n]\right) \nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\triangleq \underline{R}_{U_r}[n], \end{eqnarray} where inequality (a) follows from the Jensen's Inequality, inequality (b) is due to the facts that $ f(x)=\log \left(1+\frac{1}{x}\right) $ is convex with respect to $ x $ and the first-order Taylor approximation is a global under-estimator of convex functions \cite{Boyd2004Convex}, and \begin{equation}\label{eq:a} a_r ^{(l)}[n]=\frac{\frac{P_B \alpha_0 \sigma^2 \|\bm{g}_m\|^2}{[\sigma^2(H^2+\|\bm{u}^{(l)}[n]-\bm{b}_m\|^2)]^2}\log_2 e}{1+\frac{N_B P_B \alpha_0 \|\bm{g}_m\|^2}{\sigma^2(H^2+\|\bm{u}^{(l)}[n]-\bm{b}_m\|^2)}}\geq 0, \end{equation} \vspace{0.2cm} \begin{equation}\label{eq:b} b_r ^{(l)}[n]=\!\frac{1}{N_B}\log_2\left(1\!+\!\frac{N_B P_{B}\alpha_0\|\bm{g}_m\|^2}{\sigma^2(H^2+\|\bm{u}^{(l)}[n]-\bm{b}_m\|^2)}\right). \end{equation} \vspace{0.1cm} Since the coefficient $ a_r ^{(l)}[n] $ is a nonnegative value, the lower bound of $ R_{U_r}[n] $, i.e., $ \underline{R}_{U_r}[n] $, in \eqref{eq:Rrlb} is concave with respect to $ \bm{u}[n] $. Thus far, we obtain a concave lower bound of the RHS of (20c). \begin{figure}[t!] 
\setlength{\abovecaptionskip}{-5pt} \centering \includegraphics[width = 3.7in]{BoundPbs.eps} \caption{Exact receiving rate $ R_{U_r} $ and its lower bound $ \underline{R}_{U_r} $ with $ N=60 $ and $ \sigma^2=-114 $~dBm.} \label{Fig:RrBound} \end{figure} To exemplify the tightness of this bound, we plot in Fig. \ref{Fig:RrBound} the exact receiving rate $ R_{U_r} $ and its lower bound $ \underline{R}_{U_r} $ for comparison. For better visibility, the Y-axis represents the cumulative sum rate over time slots. From this figure, we can see that the adopted lower bound is rather tight, which implies that a near-optimal solution can be obtained by exploiting this bound. We next deal with $ R_{U_t}[n] $ in (20c) and derive an upper bound. To simplify notation, we define \begin{align}\label{eq:r[n]} r[n]&\triangleq \log_2\left(1+\frac{P_U h_{uk}[n]}{\sigma^2}\right) \nonumber\\ &=\log_2\left(1+\frac{P_U \alpha_0}{\sigma^2(H^2+\|\bm{u}[n]-\bm{e}_k\|^2)}\right). \end{align} \vspace{0.15cm} By replacing $ \bm{u}[n] $ and $ \bm{e}_k $ with their horizontal coordinates with respect to $ x $ and $ y $, $ r[n] $ can be rewritten and bounded as \begin{eqnarray}\label{eq:upper_r} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!r[n]=\log_2\left(1+\frac{P_U \alpha_0}{\sigma^2(H^2+\|\bm{u}[n]-\bm{e}_k\|^2)}\right) \nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!=\log_2 \!\left(\!1\!+\!\frac{P_U \alpha_0}{\sigma^2\left[H^2 \!+\!(x[n] \!-\!x_k)^2 \!+\!(y[n] \!-\!y_k)^2\right]}\!\right)\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\leq \frac{\log_2\left(1+\frac{P_U \alpha_0}{3\sigma^2 H^2}\right)}{3} +\frac{\log_2\left(1+\frac{P_U \alpha_0}{3\sigma^2(x[n]-x_k)^2}\right)}{3}\nonumber\\ &&\!\!\!\!\!\!\!\!+\frac{\log_2\left(1+\frac{P_U \alpha_0}{3\sigma^2(y[n]-y_k)^2}\right)}{3} \nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\triangleq \overline{r}[n], \end{eqnarray} \vspace{0.1cm} where we use Jensen's inequality for the convex function $ \log\left(1+\frac{A}{x}\right) $ for any $ A>0 $. 
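The three-term split in \eqref{eq:upper_r} can be verified numerically. The sketch below (with $ A=P_U\alpha_0/\sigma^2 $ and $ H $ set to arbitrary illustrative values, not the paper's simulation parameters) samples random horizontal offsets and confirms that the exact rate never exceeds the Jensen-type upper bound.

```python
import math, random

def r_exact(A, H, dx, dy):
    """Exact r[n] as a function of the horizontal offsets (dx, dy)."""
    return math.log2(1.0 + A / (H * H + dx * dx + dy * dy))

def r_upper(A, H, dx, dy):
    """Three-term Jensen split of (26); each term is convex in one coordinate."""
    term = lambda s: math.log2(1.0 + A / (3.0 * s)) / 3.0
    return term(H * H) + term(dx * dx) + term(dy * dy)

random.seed(0)
A, H = 1.0e4, 100.0   # assumed values for illustration only
for _ in range(1000):
    dx, dy = random.uniform(1.0, 500.0), random.uniform(1.0, 500.0)
    assert r_exact(A, H, dx, dy) <= r_upper(A, H, dx, dy) + 1e-12
print("upper bound holds on all samples")
```

The inequality is exactly Jensen applied to the convex map $ t\mapsto\log_2(1+A/(3t)) $ evaluated at the mean of $ H^2 $, $ (x[n]-x_k)^2 $, and $ (y[n]-y_k)^2 $.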
We can verify that the second term of $ \overline{r}[n] $ in \eqref{eq:upper_r} is convex with respect to $ x[n] $ and the third term is convex with respect to $ y[n] $ by checking their second-order derivatives. Then from the definition of $ r[n] $ in \eqref{eq:r[n]}, we have \begin{eqnarray}\label{eq:Rtub} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!R_{U_t}[n]=\sum_{k=1}^{K} \rho_{k,n} r[n] \nonumber\\ &&\leq \sum_{k=1}^{K} \rho_{k,n} \overline{r}[n] \nonumber\\ &&\triangleq \overline{R}_{U_t}[n]. \end{eqnarray} \vspace{0.1cm} Thus, we obtain an upper bound of $ R_{U_t}[n] $, i.e., $ \overline{R}_{U_t}[n] $, and it is convex with respect to $ \bm{u}[n] $. Namely, $ \overline{R}_{U_t}[n] $ is a convex upper bound of the LHS of (20c). \begin{figure}[t!] \setlength{\abovecaptionskip}{-5pt} \centering \includegraphics[width = 3.7in]{Rt_upper_40.eps} \caption{Comparison of $ R_{U_t} $ and its upper bound $ \overline{R}_{U_t} $.} \label{Fig:Rt_upper} \end{figure} Similarly, we compare the exact transmitting rate $ R_{U_t} $ and its upper bound $ \overline{R}_{U_t} $ in Fig. \ref{Fig:Rt_upper} to show the tightness of $ \overline{R}_{U_t} $. It is observed that the derived bound is fairly tight. Then, as for the nonconvex objective function (20a) and constraint (20b), we need to determine a concave lower bound of $ R_E[k] $. $ r[n] $ defined in \eqref{eq:r[n]} is convex with respect to $ \|\bm{u}[n]-\bm{e}_k\|^2 $. 
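This convexity underlies the same first-order Taylor argument already used in inequality (b) of \eqref{eq:Rrlb}: the tangent of a convex function at any expansion point is a global under-estimator. A quick numerical check (with $ B=P_U\alpha_0/\sigma^2 $, $ H^2 $, and the expansion point set to assumed illustrative values):

```python
import math

B, H2 = 5.0e3, 1.0e4   # assumed B = P_U * alpha_0 / sigma^2 and H^2

# r[n] as a function of the squared horizontal distance s = ||u[n]-e_k||^2
f  = lambda s: math.log2(1.0 + B / (H2 + s))
# its derivative, matching the (negative of the) coefficient form in (29)
fp = lambda s: -B * math.log2(math.e) / ((H2 + s) * (H2 + s + B))

s0 = 2.0e4                                     # expansion point ||u^(l)-e_k||^2
tangent = lambda s: f(s0) + fp(s0) * (s - s0)  # first-order Taylor expansion
# convexity in s implies the tangent line never exceeds f
assert all(f(s) >= tangent(s) - 1e-12 for s in [0.0, 1.0e3, s0, 5.0e4, 1.0e6])
print("first-order Taylor expansion under-estimates r[n]")
```

Writing $ f(s)=\log_2(H^2+s+B)-\log_2(H^2+s) $ makes the convexity in $ s $ immediate, since the second derivative is positive.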
Considering that the first-order Taylor approximation is a global under-estimator of convex functions, we have \vspace{0.1cm} \begin{eqnarray}\label{eq:lower_r} &&\!\!\!\!\!\!\!\!\!\!\!\!r[n]\geq -c_k ^{(l)}[n](\|\bm{u}[n]\!-\!\bm{e}_k\|^2\!-\!\|\bm{u}^{(l)}[n]\!-\!\bm{e}_k\|^2)+d_k^{(l)}[n],\nonumber\\ &&\!\triangleq \underline{r}[n], \end{eqnarray} \vspace{0.1cm} where \begin{equation}\label{eq:c} c_k ^{(l)}[n]=\frac{\frac{P_U\alpha_0\sigma^2}{[\sigma^2(H^2+\|\bm{u}^{(l)}[n]-\bm{e}_k\|^2)]^2}\log_2 e}{1+\frac{P_U\alpha_0}{\sigma^2(H^2+\|\bm{u}^{(l)}[n]-\bm{e}_k\|^2)}}\geq 0, \end{equation} \vspace{0.2cm} \begin{equation}\label{eq:d} d_k ^{(l)}[n]=\log_2\left(1+\frac{P_U\alpha_0}{\sigma^2(H^2+\|\bm{u}^{(l)}[n]-\bm{e}_k\|^2)}\right). \end{equation} It is easy to check that $ \underline{r}[n] $ is concave with respect to $ \bm{u}[n] $ because the coefficient $ c_k ^{(l)}[n] $ is nonnegative. Further, from \eqref{eq:rate_k} and \eqref{eq:lower_r}, the lower bound $ \underline{R}_E[k] $ is directly obtained as \begin{equation} \underline{R}_E[k]=\frac{1}{N} \sum_{n=1}^{N}\rho_{k,n} \underline{r}[n], \end{equation} \vspace{0.1cm} which is concave with respect to $ \bm{u}[n] $. The comparison of $ R_E $ and its lower bound $ \underline{R}_E $ is plotted in Fig. \ref{Fig:Rk_lower}. \begin{figure}[t!] \setlength{\abovecaptionskip}{-5pt} \centering \includegraphics[width = 3.6in]{Rk_lower_dots.eps} \caption{Comparison of $ R_E $ and its lower bound $ \underline{R}_E $.} \label{Fig:Rk_lower} \end{figure} By introducing the lower bounds $ \underline{R}_E[k] $ and $ \underline{R}_{U_r}[n] $ of $ R_E[k] $ and $ R_{U_r}[n] $, respectively, and the upper bound $ \overline{R}_{U_t}[n] $ of $ R_{U_t}[n] $, we transform problem \eqref{eq:pro_U} into the following problem: \vspace{0.1cm} \begin{subequations}\label{eq:pro_U_new} \begin{align} \mathop{\max}_{\bm{U}, \bm{V}, \bm{A}}\quad\!\! &\sum_{k=1}^{K} \underline{R}_E[k]\\ \textrm{s.t.}\quad\!\!\!\! 
&\underline{R}_E[k] \geq R_0, \ \forall k \in \mathcal{K},\\ &\sum_{i=2}^{n}\overline{R}_{U_t}[i] \leq \sum_{i=1}^{n-1}\underline{R}_{U_r}[i], \ n=2,\cdots,N,\\ &\bm{u}[1]=\bm{u}[N]=\bm{u}_0,\\ &\|\bm{u}[n+1]\!-\!\bm{u}[n]\|\!\leq\!V_{\rm{max}} \delta_t, \ n\!=\!1,\cdots,N\!-\!1,\!\!\!\!\!\!\!\!\!\\ &\bm{v}[n+1]=\bm{v}[n]+\bm{a}[n]\delta_t, \ n=1,\cdots,N-1,\\ &\bm{u}[n\!\!+\!\!1]\!=\!\bm{u}[n]\!\!+\!\!\bm{v}[n]\delta_t\!\!+\!\!\frac{1}{2}\bm{a}[n]\delta_t^2, n\!=\!1,\!\cdots\!,N\!\!-\!\!1, \\ &\|\bm{v}[n]\|\leq V_{\rm{max}}, \ n=1,\cdots,N, \\ &\|\bm{a}[n]\|\leq a_{\rm{max}}, \ n=1,\cdots,N. \end{align} \end{subequations} \begin{mytheorem} Problem \eqref{eq:pro_U_new} is a convex problem. \end{mytheorem} \begin{proof} According to the above analysis, since $ \underline{R}_E[k] $ is concave with respect to $ \bm{u}[n] $, (31a) is a convex objective function and (31b) is a convex constraint. Similarly, since $ \overline{R}_{U_t}[n] $ is convex with respect to $ \bm{u}[n] $ and $ \underline{R}_{U_r}[n] $ is concave with respect to $ \bm{u}[n] $, thus (31c) is a convex constraint. Furthermore, (31d), (31f), and (31g) are linear constraints. (31e), (31h), and (31i) are convex constraints. Therefore, problem \eqref{eq:pro_U_new} is a convex problem. \end{proof} This convex problem can then be efficiently solved by using the well-established standard convex optimization method such as the interior-point method \cite{Boyd2004Convex}. \subsection{UAV-CEU Association Optimization}\label{sec:association optimization} Given a mobility management strategy of the UAV, the subproblem of optimizing the UAV-CEU association is rewritten from problem \eqref{eq:pro_U_P} as follows: \begin{subequations}\label{eq:pro_P} \begin{align} \mathop{\max}_{\bm{P}}\quad\!\! &\sum_{k\in \mathcal{K}}R_E[k]\\ \textrm{s.t.}\quad\!\! 
&R_E[k]\geq R_0, \ \forall k\in \mathcal{K}, \\ &\sum_{k\in \mathcal{K}}\rho_{k,n}\leq 1, \ n=2,\cdots,N,\\ &\rho_{k,n}\in\{0,1\}, \ \forall k\in \mathcal{K}, \ n=2,\cdots,N,\\ &\sum_{i=2}^{n}R_{U_t}[i]\leq \sum_{i=1}^{n-1}R_{U_r}[i], \ n=2,\cdots,N. \end{align} \end{subequations} It is difficult to solve problem \eqref{eq:pro_P} because of the integer variables $ \rho_{k,n} $. By relaxing (32d) to the continuous constraint $ \rho_{k,n}\in\left[0,1\right] $, problem \eqref{eq:pro_P} reduces to a standard linear program because the objective function and the constraints are linear combinations of $ \bm{P} $: \begin{subequations}\label{eq:pro_P_linear} \begin{align} \mathop{\max}_{\bm{P}}\quad\!\! &\sum_{k\in \mathcal{K}}R_E[k]\\ \textrm{s.t.}\quad\!\! &0\leq \rho_{k,n}\leq 1,\ \forall k\in \mathcal{K},\ n=2,\cdots,N, \\ &(\text{32b}),(\text{32c}),(\text{32e}). \end{align} \end{subequations} Naturally, this linear program is a convex optimization problem. Typically, as in \cite{Cheng2018UAV}, the relaxed problem was solved by classical optimization methods, and the solution of the relaxed problem was then rounded to obtain the desired integer results. However, in this way, the optimality of the solution cannot be guaranteed in theory, and the feasibility of the solution may not hold due to the rounding operation. This motivates us to adopt the Lagrangian dual decomposition method to obtain a low-complexity solution. In the following, we show that, although the binary variables are also temporarily relaxed, the dual decomposition approach fortunately returns integer solutions, which preserves both the optimality and the feasibility of the original problem. 
By introducing dual variables $ \bm{\lambda}=\{\lambda_n\}_{n=2}^{N} $ and $ \bm{\eta}=\{\eta_k\}_{k=1}^{K} $, we can write the Lagrangian function of problem \eqref{eq:pro_P_linear} as \begin{align}\label{eq:LagFun} L(\bm{P},\bm{\lambda},\bm{\eta})= &\sum_{k=1}^{K} R_E[k]-\sum_{n=2}^{N} \lambda_n \left(\sum_{i=2}^{n} R_{U_t}[i]-\sum_{i=1}^{n-1} R_{U_r}[i]\right) \nonumber \\ &-\sum_{k=1}^{K}\eta_k(R_0-R_E[k]), \end{align} where the dual variables $ \bm{\lambda}=\{\lambda_n\}_{n=2}^{N} $ and $ \bm{\eta}=\{\eta_k\}_{k=1}^{K} $ are all nonnegative. Equivalently, we solve its dual problem \begin{subequations}\label{eq:Dual} \begin{align} \mathop{\min}_{\bm{\lambda}\geq \mathbf{0},\bm{\eta}\geq \mathbf{0}} &\mathop{\max_{\bm{P}}} L(\bm{P},\bm{\lambda},\bm{\eta}) \\ \textrm{s.t.}\quad\!\! &0\leq \rho_{k,n}\leq 1,\ \forall k\in \mathcal{K},\ n=2,\cdots,N, \\ &\sum_{k\in \mathcal{K}}\rho_{k,n}\leq 1,\ n=2,\cdots,N. \end{align} \end{subequations} By defining \begin{equation} m_{k,n}\triangleq \log_2\left(1+\frac{P_U h_{uk}[n]}{\sigma^2}\right), \end{equation} the inner maximization in \eqref{eq:Dual} is rewritten as \begin{subequations}\label{eq:opt_P} \begin{align} \mathop{\max_{\bm{P}}} \frac{1}{N} &\sum_{k=1}^{K} \!\sum_{n=2}^{N} \!\rho_{k,n} m_{k,n}\!(1+\eta_k)\! -\sum_{k=1}^{K}\sum_{n=2}^{N}\left(\lambda_n\!\sum_{i=2}^{n}\!\rho_{k,i}m_{k,i}\right) \label{eq:inner max}\\ \textrm{s.t.}\quad\!\! &0\leq \rho_{k,n}\leq 1,\ \forall k\in \mathcal{K},\ n=2,\cdots,N, \label{eq:rho1}\\ &\sum_{k\in \mathcal{K}}\rho_{k,n}\leq 1,\ n=2,\cdots,N.\label{eq:rho2} \end{align} \end{subequations} \begin{algorithm}[t!] 
Initialize $ \bm{P} $ and let $ l=0 $\; \Repeat{convergence} {Given $ \bm{P}^{(l)} $, find the optimal $ \bm{U}^{(l+1)} $, $ \bm{V}^{(l+1)} $, and $ \bm{A}^{(l+1)} $ by solving problem \eqref{eq:pro_U_new}\; Given $ \bm{U}^{(l+1)} $, $ \bm{V}^{(l+1)} $, and $ \bm{A}^{(l+1)} $, find the optimal $ \bm{P}^{(l+1)} $ by solving problem \eqref{eq:pro_P}\; Update $ l=l+1 $\; } Return the UAV mobility management $ \bm{U}^*=U^{(l)} $, $ \bm{V}^*=V^{(l)} $, $ \bm{A}^*=A^{(l)} $, and the corresponding UAV-CEU association strategy $ \bm{P}^*=P^{(l)} $. \caption{IMMUA Algorithm for Problem \eqref{eq:pro_U_P}} \end{algorithm} We simplify the objective function in \eqref{eq:inner max} further and rewrite it as \begin{equation}\label{eq:simple inner max} \mathop{\max_{\bm{P}}} \sum_{k=1}^{K} \sum_{n=2}^{N}A_{k,n}\rho_{k,n}, \end{equation} which is a linear combination of $ \rho_{k,n} $ and the coefficient $ A_{k,n}=\left(\frac{m_{k,n}(1+\eta_k)}{N}-\sum_{i=n}^{N}\lambda_i m_{k,n}\right) $. To obtain the maximum value of \eqref{eq:simple inner max}, we should let $ \rho_{k,n} $ with the largest coefficient be 1 and the others be 0 for any $ n $ due to the constraints \eqref{eq:rho1} and \eqref{eq:rho2}. Thus, it implies the optimal solution as \begin{equation}\label{eq:optimal association} \rho_{k,n}^*= \begin{cases} 1,&\text{if}\; k \!=\! k^{(n)}\\ 0,&\text{if}\; k \!\neq\! k^{(n)}, \end{cases} \end{equation} where $ k^{(n)}=\arg\;\max\limits_{q\in\mathcal{K}}\left(\frac{m_{q,n}(1+\eta_q)}{N}-\sum_{i=n}^{N}\lambda_i m_{q,n}\right) $. Notice that the optimal $ \rho_{k,n}^* $ is proven in \eqref{eq:optimal association} to be either 0 or 1 which satisfies the integer constraint (32d) in problem \eqref{eq:pro_P}, even though we temporarily relaxed $ \rho_{k,n} $ as a continuous variable. Therefore, the optimal solution to problem \eqref{eq:opt_P} is exactly given by \eqref{eq:optimal association}. 
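The per-slot selection rule \eqref{eq:optimal association} is straightforward to implement. The sketch below uses 0-based indices (index 0 corresponds to slot 1, which carries no data) and, as a simplification, glosses over ties and the corner case where all coefficients are negative; the toy instance is our own.

```python
def associate(m, lam, eta):
    """Per-slot association following (39): for every slot n >= 1 pick the
    CEU k maximizing A_{k,n} = m_{k,n} * ((1 + eta_k)/N - sum_{i >= n} lam_i).
    m[k][n] is the achievable rate of CEU k in slot n (0-based slots)."""
    K, N = len(m), len(m[0])
    rho = [[0] * N for _ in range(K)]
    for n in range(1, N):
        best = max(range(K),
                   key=lambda k: m[k][n] * ((1 + eta[k]) / N - sum(lam[n:])))
        rho[best][n] = 1
    return rho

# Toy instance: 2 CEUs, 3 slots, inactive dual variables.
rho = associate([[0, 2, 1], [0, 1, 3]], [0.0, 0.0, 0.0], [0.0, 0.0])
print(rho)  # [[0, 1, 0], [0, 0, 1]]
```

With all dual variables at zero, the rule degenerates to serving the CEU with the largest instantaneous rate in each slot, as expected.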
Then, we solve the outer minimization in \eqref{eq:Dual} using the integer solution $ \rho_{k,n}^* $ in \eqref{eq:optimal association}. In each iteration, we use the subgradient method \cite{Boyd2004Convex} to update the dual variables as \begin{equation} \lambda_n^{(t+\!1\!)}\!=\!\left[\lambda_n^{(t)}\!-\!\delta^{(t)}\! \!\left(\!\!-\!\!\sum_{i=2}^{n}\sum_{k=1}^{K}\rho_{k,i}^{(t)}m_{k,i}^{(t)}\!+\!\sum_{i=1}^{n-1}R_{U_r}^{(t)}[i] \!\right)\!\right]^+, \end{equation} \begin{equation} \eta_k^{(t+1)}=\left[\eta_k^{(t)}-\delta^{(t)}\left(-R_0+\frac{1}{N}\sum_{n=2}^{N}\rho_{k,n}^{(t)} m_{k,n}^{(t)}\right)\right]^+, \end{equation} where $ \delta^{(t)} $ is the step size and \begin{equation} [x]^+= \begin{cases} x,&\text{if}\;x\geq0\\ 0,&\text{if}\;x<0. \end{cases} \end{equation} We update the dual variables $ \bm{\lambda}=\{\lambda_n\}_{n=2}^{N} $ and $ \bm{\eta}=\{\eta_k\}_{k=1}^{K} $ and the association indicators $ \bm{P}=\{\rho_{k,n},\forall k\in \mathcal{K},n=2,\cdots,N\} $ iteratively until the objective function in \eqref{eq:Dual} converges. In this way, the UAV-CEU association optimization problem in \eqref{eq:pro_P_linear} is solved. At this point, we can solve the original problem by tackling the two subproblems, i.e., UAV mobility management and UAV-CEU association optimization, in an alternating manner. We summarize the iterative mobility management and user association (IMMUA) algorithm in \textbf{Algorithm 1}, which obtains a suboptimal solution with low complexity. The convergence and complexity of the IMMUA algorithm are analyzed in the following subsection. Even though we first decompose the original problem into two subproblems and then solve them separately, we obtain optimal or near-optimal solutions in both steps. This guarantees the good performance of our proposed IMMUA algorithm, which will be demonstrated by the numerical results in Section \ref{sec:results}.
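The projection $[\cdot]^+$ and one subgradient step can be written compactly. In this sketch, \texttt{g\_lam} and \texttt{g\_eta} are placeholders for the bracketed constraint residuals in the two update equations above, and the concrete numbers are arbitrary:

```python
import numpy as np

def proj(x):
    # [x]^+ : elementwise projection onto the nonnegative orthant
    return np.maximum(x, 0.0)

def dual_step(lam, eta, g_lam, g_eta, step):
    """One subgradient step: lam^{(t+1)} = [lam^{(t)} - step * g_lam]^+, same for eta."""
    return proj(lam - step * g_lam), proj(eta - step * g_eta)

lam, eta = np.array([0.1, 0.2]), np.array([0.05])
lam2, eta2 = dual_step(lam, eta, np.array([0.5, -0.5]), np.array([-1.0]), step=0.1)
# lam2 = [0.05, 0.25], eta2 = [0.15]
```

A diminishing step size $\delta^{(t)}$ (e.g. proportional to $1/\sqrt{t}$) is a common choice for guaranteeing convergence of such updates.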
\begin{mydis} \textup{In addition to the objective function formulated in problem \eqref{eq:pro_U_P}, our proposed IMMUA algorithm can also handle other forms of objective functions when traffic patterns are considered. For instance, we can maximize the weighted sum rate of CEUs, which is expressed as} \begin{equation} \label{eq:function_new} \mathop{\max}_{\bm{U},\bm{V},\bm{A},\bm{P}} \ \sum_{k\in \mathcal{K}}w_k R_E[k], \end{equation} \textup{where $ w_k $ denotes the constant weight of CEU $ k $. Note that the values of $ w_k $ can be determined by the traffic patterns, e.g., a Poisson distribution, for different users \cite{Kobayashi2006An}. Since the weights are constant, they do not affect the application of our proposed algorithm.} \textup{To account more directly for the traffic arrival patterns of different users, we can impose a minimum rate requirement on each user individually, i.e., replace (19b) in problem \eqref{eq:pro_U_P} with the following constraint:} \begin{equation} R_E[k]\geq R_k, \ \forall k \in \mathcal{K}, \end{equation} \textup{where $ R_k $ is the minimum rate requirement of user $ k $. In particular, $ R_k $ can be determined according to user types, mission types, and traffic arrival patterns. In this way, the new problem imposes different QoS requirements on different users. This new constraint can be handled by bounding techniques similar to those used for (19b).} \end{mydis} \subsection{Convergence and Complexity Analysis}\label{sec:complexity} With our proposed IMMUA algorithm, the resulting objective function value of problem \eqref{eq:pro_U_P} is non-decreasing after each iteration. Furthermore, it has a finite upper bound. Therefore, the overall IMMUA algorithm is guaranteed to converge. The complexity of the IMMUA algorithm lies in solving the UAV mobility management problem and the UAV-CEU association optimization problem.
Considering that we solve the UAV mobility management problem via the standard interior-point method and that the number of optimization variables is $ 6N $, the complexity of solving this problem is $ \mathcal{O}(L_i N^3) $ [10, Pages 487, 569], where $ L_i $ denotes the number of iterations required by the interior-point method. As for the UAV-CEU association optimization problem, we solve it via the dual decomposition method, whose complexity is $ \mathcal{O}(L_d (N+K)) $, where $ L_d $ denotes the number of iterations needed by the dual method. Therefore, the total complexity of the IMMUA algorithm is $ \mathcal{O}(L_o(L_i N^3+L_d (N+K))) $, where $ L_o $ represents the number of outer iterations. \section{Numerical Results}\label{sec:results} In this section, numerical results are presented to validate the effectiveness of our proposed algorithm. We consider a UAV-assisted communication network in IoT applications. As depicted in Fig. \ref{Fig:trajectory}, the network consists of one UAV, three adjacent GBSs, and four CEUs. Specifically, these CEUs can be any type of IoT devices, which are randomly deployed in the overlapped coverage of the GBSs to perform specific tasks. Due to the long distance from the GBSs, the channel quality between CEUs and GBSs is poor and the QoS of CEUs cannot be guaranteed. Under this circumstance, a UAV acts as a mobile relay to forward data from GBSs to CEUs when data transmission is needed, which is a convenient and cost-efficient way to improve the QoS of CEUs without additional infrastructure construction. Note that the above simulation setup also applies to a post-disaster communication scenario where a sudden disaster has damaged the cellular infrastructure in the affected area \cite{Zhao2019UAV}. By leveraging a UAV, rescue information can be sent from the GBSs in the unaffected area to the IoT devices in the disaster area. Each GBS is equipped with $ L=8 $ antennas.
The radius of each cell is set to $ r=1000 $~m and the horizontal locations of the three GBSs are $ (0,r) $, $ (\sqrt{3}r,r) $, and $ (\frac{\sqrt{3}}{2}r,-\frac{1}{2}r) $, respectively. Other important simulation parameters are listed in Table \ref{TABLE:parameter} unless otherwise specified. \begin{table}[t!] \small \setlength{\belowcaptionskip}{-12pt} \centering \caption{Simulation Parameters}\label{TABLE:parameter} \begin{tabular}{|c|c|c|} \hline \textbf{Parameter} & \textbf{Description} & \textbf{Value} \\ \hline $ a_{\rm{max}} $ & Maximum UAV acceleration & $ 5~\rm{m/s^2} $ \\ \hline $ \alpha_0 $ & Reference channel power at $ d_0=1 $~m & -60 dB \\ \hline $ H $ & Flight altitude of UAV & 100 m \\ \hline $ L $ & Number of antennas at GBSs & 8 \\ \hline $ P_B $ & Transmit power of GBSs & 10 W \\ \hline $ P_U $ & Transmit power of UAV & 1 W \\ \hline $ r $ & Radius of the cell & 1000 m\\ \hline $ r_c $ & Radius of the circular trajectory & 500 m \\ \hline $ R_0 $ & Minimum rate requirement of CEU & 0.5 bps/Hz \\ \hline $ V_{\rm{max}} $ & Maximum UAV speed & 50 m/s \\ \hline $ \sigma^2 $ & Noise power & -114 dBm \\ \hline \end{tabular} \end{table} \begin{table}[t!]
\scriptsize \setlength{\belowcaptionskip}{-10pt} \centering \caption{UAV-CEU association strategy obtained by IMMUA algorithm with the maximum speed of $ V_{\rm{max}} = 40 $~m/s}\label{TABLE:associationV40} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{Time slot} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} & \textbf{10} \\ \hline \textbf{Associated CEU} & Null & 2 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 \\ \hline \textbf{Time slot} & \textbf{11} & \textbf{12} & \textbf{13} & \textbf{14} & \textbf{15} & \textbf{16} & \textbf{17} & \textbf{18} & \textbf{19} & \textbf{20} \\ \hline \textbf{Associated CEU} & 4 & 4 & 4 & 4 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline \textbf{Time slot} & \textbf{21} & \textbf{22} & \textbf{23} & \textbf{24} & \textbf{25} & \textbf{26} & \textbf{27} & \textbf{28} & \textbf{29} & \textbf{30} \\ \hline \textbf{Associated CEU} & 1 & 1 & 4 & 4 & 2 & 2 & 2 & 2 & 2 & 2 \\ \hline \textbf{Time slot} & \textbf{31} & \textbf{32} & \textbf{33} & \textbf{34} & \textbf{35} & \textbf{36} & \textbf{37} & \textbf{38} & \textbf{39} & \textbf{40} \\ \hline \textbf{Associated CEU} & 2 & 2 & 4 & 4 & 4 & 4 & 3 & 3 & 3 & 3 \\ \hline \textbf{Time slot} & \textbf{41} & \textbf{42} & \textbf{43} & \textbf{44} & \textbf{45} & \textbf{46} & \textbf{47} & \textbf{48} & \textbf{49} & \textbf{50} \\ \hline \textbf{Associated CEU} & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 3 \\ \hline \textbf{Time slot} & \textbf{51} & \textbf{52} & \textbf{53} & \textbf{54} & \textbf{55} & \textbf{56} & \textbf{57} & \textbf{58} & \textbf{59} & \textbf{60} \\ \hline \textbf{Associated CEU} & 4 & 3 & 3 & 3 & 4 & 4 & 4 & 4 & 4 & 2\\ \hline \end{tabular} \end{table} \begin{table}[t!] 
\scriptsize \setlength{\belowcaptionskip}{-10pt} \centering \caption{UAV-CEU association strategy obtained by IMMUA algorithm with the maximum speed of $ V_{\rm{max}} = 50 $~m/s}\label{TABLE:associationV50} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{Time slot} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} & \textbf{10} \\ \hline \textbf{Associated CEU} & Null & 2 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 \\ \hline \textbf{Time slot} & \textbf{11} & \textbf{12} & \textbf{13} & \textbf{14} & \textbf{15} & \textbf{16} & \textbf{17} & \textbf{18} & \textbf{19} & \textbf{20} \\ \hline \textbf{Associated CEU} & 4 & 4 & 4 & 4 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline \textbf{Time slot} & \textbf{21} & \textbf{22} & \textbf{23} & \textbf{24} & \textbf{25} & \textbf{26} & \textbf{27} & \textbf{28} & \textbf{29} & \textbf{30} \\ \hline \textbf{Associated CEU} & 1 & 1 & 4 & 4 & 4 & 4 & 2 & 2 & 2 & 2 \\ \hline \textbf{Time slot} & \textbf{31} & \textbf{32} & \textbf{33} & \textbf{34} & \textbf{35} & \textbf{36} & \textbf{37} & \textbf{38} & \textbf{39} & \textbf{40} \\ \hline \textbf{Associated CEU} & 2 & 2 & 2 & 4 & 4 & 4 & 3 & 3 & 3 & 3 \\ \hline \textbf{Time slot} & \textbf{41} & \textbf{42} & \textbf{43} & \textbf{44} & \textbf{45} & \textbf{46} & \textbf{47} & \textbf{48} & \textbf{49} & \textbf{50} \\ \hline \textbf{Associated CEU} & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 \\ \hline \textbf{Time slot} & \textbf{51} & \textbf{52} & \textbf{53} & \textbf{54} & \textbf{55} & \textbf{56} & \textbf{57} & \textbf{58} & \textbf{59} & \textbf{60} \\ \hline \textbf{Associated CEU} & 4 & 4 & 4 & 4 & 3 & 4 & 4 & 4 & 4 & 2\\ \hline \end{tabular} \end{table} \begin{figure}[htbp] \centering \subfigure[the optimized trajectory with $ V_{\rm{max}}=40 $~m/s]{ \includegraphics[width = 3.6in]{TrajectoryV40Line.eps} \label{Fig:V40} } \subfigure[the optimized trajectory with $ V_{\rm{max}}=50 $~m/s]{ 
\includegraphics[width = 3.6in]{TrajectoryV50Line.eps} \label{Fig:V50} } \caption{Optimized UAV trajectories obtained by IMMUA algorithm with the maximum speed of $ V_{\rm{max}}=40 $~m/s and $ V_{\rm{max}}=50 $~m/s, respectively.} \label{Fig:trajectory} \end{figure} \begin{figure}[t!] \setlength{\abovecaptionskip}{-5pt} \centering \includegraphics[width = 3.6in]{bar2.eps} \caption{The rate of each CEU with the optimized mobility management and association strategy.} \label{Fig:bar} \end{figure} First, we illustrate the UAV trajectory and the UAV-CEU association strategy obtained by the IMMUA algorithm. Fig. \ref{Fig:V40} and Fig. \ref{Fig:V50} illustrate the optimized UAV trajectories with the maximum UAV speeds of 40 m/s and 50 m/s, respectively. The pre-specified initial location of the UAV is set to the intersection point of the three cells, i.e., $ (\frac{\sqrt{3}}{2}r,\frac{1}{2}r) $. The UAV has to return to the initial location after each transmission period. As shown in Fig. \ref{Fig:trajectory}, the UAV flies as follows: it first flies from the initial location to CEU4, then to CEU1, and then to CEU2. After that, it flies through CEU4 to CEU3 and stays there for a while. Finally, it returns to the initial location. For the cases of $ V_{\rm{max}}=40 $~m/s and $ V_{\rm{max}}=50 $~m/s, the corresponding UAV-CEU association strategies are presented in Table \ref{TABLE:associationV40} and Table \ref{TABLE:associationV50}, respectively. In the first time slot, no CEU is associated with the UAV due to the information causality constraint. According to the association strategy, we find that the UAV in fact associates with the nearest CEU in each time slot during its flight to maximize the sum rate of all CEUs. Based on the UAV mobility management strategy and UAV-CEU association strategy obtained by the IMMUA algorithm, the data rates of the four CEUs are calculated and presented in Fig. \ref{Fig:bar}.
CEU4 has the highest data rate since it has the most time slots associated with the UAV. Fig. \ref{Fig:speed} illustrates the effect of the acceleration constraint on the UAV's speed versus the flying time slot. We can see that when the acceleration constraint is not taken into account, the speed changes abruptly, with nearly unbounded acceleration, which is not practical. On the contrary, when the acceleration constraint is considered and the maximum acceleration is set to $ a_{\rm{max}}=5~\rm{m/s^2} $, the change of speed is much gentler. The acceleration of the UAV over time slots is plotted in Fig. \ref{Fig:acceleration}, which shows that the acceleration never exceeds $ a_{\rm{max}}=5~\rm{m/s^2} $. \begin{figure}[t!] \setlength{\abovecaptionskip}{-5pt} \centering \includegraphics[width = 3.6in]{Speed_Vmax50.eps} \caption{The speed of the UAV over time with $ V_{\rm{max}}=50 $~m/s.} \label{Fig:speed} \end{figure} \begin{figure}[t!] \setlength{\abovecaptionskip}{-5pt} \centering \includegraphics[width = 3.6in]{acceleration_amax5.eps} \caption{The acceleration of the UAV over time with $ a_{\rm{max}}=5~\rm{m/s^2} $.} \label{Fig:acceleration} \end{figure} \begin{figure}[t!] \setlength{\abovecaptionskip}{-5pt} \centering \includegraphics[width = 3.6in]{Throughput_Pu_H_a.eps} \caption{Sum rate of all CEUs versus the number of iterations with different $ P_U $, $ H $, and $ a_{\rm{max}} $.} \label{Fig:Throughput} \end{figure} Then, we verify the convergence behaviour of our proposed IMMUA algorithm. In Fig. \ref{Fig:Throughput}, we show the sum rate of all the CEUs versus the number of iterations with different values of the UAV's transmit power $ P_U $, flight altitude $ H $, and maximum acceleration $ a_{\rm{max}} $. It indicates that our proposed algorithm converges in a few iterations as expected. From the figure, we conclude that in most cases 7 to 8 iterations are sufficient for the algorithm to converge.
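The direction of the $P_U$ and $H$ trends in Fig.~\ref{Fig:Throughput} can be previewed with a toy rate model. This sketch assumes a simple free-space gain $h=\alpha_0/d^2$ built from the parameters in Table~\ref{TABLE:parameter}; the exact channel model of the paper may differ, so only the monotonic trends are meaningful:

```python
import math

alpha0 = 10 ** (-60 / 10)          # reference channel power at d0 = 1 m (-60 dB)
sigma2 = 10 ** ((-114 - 30) / 10)  # noise power: -114 dBm converted to watts

def rate(P_U, H, d_horiz=0.0):
    """Rate log2(1 + P_U * h / sigma^2) with an assumed free-space gain h = alpha0 / d^2."""
    d_sq = H ** 2 + d_horiz ** 2   # squared UAV-CEU distance
    return math.log2(1 + P_U * alpha0 / d_sq / sigma2)

# higher transmit power -> higher rate; higher altitude -> lower rate
assert rate(2.0, 100.0) > rate(1.0, 100.0)
assert rate(1.0, 100.0) > rate(1.0, 200.0)
```
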
In practice, 5 iterations typically achieve 99\% of the rate at convergence. From the figure, the sum rate improves as the UAV's transmit power increases. This is intuitive since an increase of transmit power leads to a higher SINR, which implies a higher data rate. On the other hand, the sum rate decreases as the UAV flies higher. This is because a higher altitude weakens the channel power gain, resulting in lower rates. In theory, the lower the UAV flies, the higher the rates of the CEUs. However, this is based on the assumption that the flight altitude of the UAV is at least 100 m. When the flight altitude of the UAV is much lower, the air-to-ground communication links will be blocked and scattered by buildings and other obstacles. The channel model used in this paper would no longer be applicable and the data rate would decrease. Therefore, the flight altitude of the UAV cannot be too high or too low. Moreover, as can be seen from Fig. \ref{Fig:Throughput}, a smaller maximum acceleration leads to a lower sum rate, which is intuitive since the feasible set is reduced. \begin{figure}[t!] \setlength{\abovecaptionskip}{-5pt} \centering \includegraphics[width = 3.7in]{CircularTrajectory.eps} \caption{Benchmark trajectories: (a) static UAV, (b) circular trajectories with different radii.} \label{Fig:Circular} \end{figure} \begin{figure}[t!] \setlength{\abovecaptionskip}{-5pt} \centering \includegraphics[width = 3.5in]{rateVSspeed.eps} \caption{Sum rate of CEUs with different trajectories: static UAV, circle trajectories and optimized trajectory.} \label{Fig:rateVSspeed} \end{figure} \begin{figure}[t!] \setlength{\abovecaptionskip}{-5pt} \centering \includegraphics[width = 3.7in]{rateVSassociation.eps} \caption{The data rate of each CEU with different UAV-CEU association strategies.} \label{Fig:rateVSassociation} \end{figure} \begin{figure}[t!]
\setlength{\abovecaptionskip}{-5pt} \centering \includegraphics[width = 3.6in]{Close-to-optimal.eps} \caption{Performance comparison of IMMUA algorithm and close-to-optimal algorithm.} \label{Fig:optimal} \end{figure} For the purpose of comparison, we consider two types of benchmark UAV trajectories as shown in Fig. \ref{Fig:Circular}: (a) a static UAV, where the UAV stays at the intersection point during the whole transmission period; (b) circle trajectories, where the UAV flies at the maximum speed with circle radii of 200 m, 500 m, and 800 m, respectively. All the circle trajectories are centered at the intersection point. For each UAV trajectory, the UAV-CEU association strategy is optimized using our proposed algorithm as stated in Section \ref{sec:association optimization}. Fig. \ref{Fig:rateVSspeed} compares the sum rate of our proposed algorithm with those of the benchmark trajectories. It is observed that our proposed algorithm outperforms the benchmark schemes at different values of $ V_{\rm{max}} $, which demonstrates its efficiency. The static UAV corresponds to a traditional relay, which has no degree of freedom; therefore, the sum rate of CEUs does not vary with $ V_{\rm{max}} $. As for the circle trajectories with different radii, the sum-rate performance differs considerably. Specifically, the circle trajectory with $ r_c=500 $~m achieves the highest sum rate among the three circle trajectories because it passes close to the locations of the CEUs. On the contrary, the circle trajectory with $ r_c=800 $~m performs worst since it is too far from the CEUs. The performance of the circle trajectory with $ r_c=200 $~m lies between the two. Besides, we observe that our proposed algorithm achieves 18.6\%, 13.4\%, and 35.8\% sum rate gains over the circle trajectories with $ r_c=200 $~m, 500 m, and 800 m, respectively. Then we consider the scenario where the UAV-CEU association strategy is fixed.
The UAV trajectory is optimized using our proposed algorithm as stated in Section \ref{sec:trajectory optimization}. We consider a random association strategy and a clockwise association strategy as benchmarks. In the random association strategy, the UAV associates with one CEU randomly in each time slot. According to the locations of the CEUs, the clockwise association strategy means the UAV associates with CEU1, CEU2, CEU4, and CEU3 in a clockwise manner. Fig. \ref{Fig:rateVSassociation} shows the data rate of each CEU with different association strategies. It shows that the optimized association strategy improves the sum rate over the benchmark association strategies at the cost of fairness. As stated in Section \ref{sec:algorithm}, the original problem is a mixed integer program and the causality constraint is particularly hard to handle. It is challenging to obtain an optimal solution of such a problem due to its extremely high complexity. Even the subproblem of mobility management is nonconvex and tough to solve. Therefore, we compare the performance of our proposed algorithm with a close-to-optimal method that randomly selects 100 initial points for the proposed algorithm and returns the solution corresponding to the maximum objective value. Note that this method has been widely used to approach the optimal solution in the literature \cite{Yang2018Joint,Chen2017Caching}. From Fig. \ref{Fig:optimal}, it can be seen that the performance of our proposed IMMUA algorithm closely approaches the close-to-optimal performance, which indicates the effectiveness of our proposed algorithm. Under different parameter settings, the proposed algorithm incurs less than 3\% performance loss. \section{Conclusion}\label{sec:conclusion} In this paper, we have studied a new mobile relaying technique with a cache-enabled UAV in a multi-cell network.
We jointly optimized the UAV-CEU association strategy and the UAV mobility management strategy to maximize the sum rate of all CEUs, subject to the minimum rate requirements of CEUs, mobility constraints, and causal buffer constraints. We formulated an optimization problem and successfully transformed the original mixed-integer nonconvex problem into two convex subproblems. Accordingly, an efficient iterative algorithm was developed to solve the two subproblems in an alternating manner, which is guaranteed to converge with low complexity. According to the simulation results, the mobility of the UAV yields a rate improvement compared with a static relay. Furthermore, our proposed algorithm performs well and outperforms the traditional trajectories and association strategies significantly. In our future work, we will extend the results obtained in this paper by taking into account the optimization of the UAV's altitude, mobile CEUs, multiple UAVs, as well as the energy efficiency of the UAV.
\section{Introduction} Quantisation of the Hall effect was first recognized in 1980 \cite{kvk} and it is understood that plateaux appear in the off-diagonal component of the magnetoresistance tensor $\rho_{xy}$ whenever the Fermi energy lies in a mobility gap of the electronic density of states. In an ideal sample this would be the cyclotron gap $\hbar\omega_c$. Associated with the plateaux are minima in the diagonal component which lead to the appearance of Shubnikov-de~Haas oscillations (SdHO) in $\rho_{xx}$. SdHOs were observed in semiconductors as long ago as 1966 \cite{stiles} and even in these earliest observations spin and valley splittings were seen to modify the underlying $1/B$ periodicity of the oscillations. In particular the spin splitting appeared to be much stronger than expected from just a bare Zeeman gap of $g_0\mu_BB$, leading to the idea of an enhanced $g$-factor $g^*$ \cite{nich}. This enhancement is due to many-body electron interactions that are introduced by forming a widely separated electron-hole pair with reversed spin. It is often referred to as {\em exchange enhancement} (after the exchange-correlation terms encountered in Hartree Fock calculations) and leads to the spin gap being written as: \begin{equation} \Delta_{spin}=g^*\mu_BB=g_0\mu_BB+E_{ex}. \end{equation} Notice particularly that this is the {\em sum} of the bare Zeeman energy and an exchange energy $E_{ex}$ and that, unless $E_{ex}\propto B$, $g^*$ will itself be a function of field as opposed to a simple multiple of $g_0$. Several semi-empirical versions of this equation have been adopted to model experimental results but it is the calculation of $E_{ex}$ that has particularly exercised theoreticians for the past thirty years \cite{au,kh,fs}. It is now timely to revisit the exchange enhanced spin splitting for two reasons. First, there is currently great interest in complex spin textures, or skyrmions, at low odd integer filling factors $\nu$ \cite{skyrmions}.
This work has shown a rich spectrum of spin excitations and serves to remind us how poorly even the basic spin splitting is understood. The calculations indicate that at $\nu=1$ skyrmion-antiskyrmion pairs will always have a lower excitation energy than electron-hole pairs, but for higher filling factors the preferred excitation depends crucially on how the force laws are treated for real systems both for single spin excitations and the larger textures \cite{cooper}. Secondly, there is a prediction by Fogler and Shklovskii (FS) \cite{fs} that the exchange enhancement may be destroyed by disorder and lead to the collapse of the spin splitting at a critical filling factor $\nu_c$. FS gives expressions for $\nu_c$ in terms of sample parameters relevant to GaAs/GaAlAs heterojunctions which can be compared with experiments \cite{wong,shikler}. These issues will be discussed further in the remainder of the paper, but first the phenomenon of spin splitting will be introduced with reference to some experimental data. The behavior as a function of temperature and sample parameters will then be investigated, both through the filling factor where the spin split SdH maxima appear and by considering the size of the energy gap at odd integer $\nu$. The temperature dependence of the separation between the maxima is found to scale onto a universal curve for all filling factors and all samples in a way that suggests a phase transition. By understanding how $\nu_c$ varies with temperature we are able to extract the value at $T=0$ from the finite temperature data, allowing meaningful comparison with theory. An empirical relationship between the critical filling factor and the measured sample parameters is established which justifies the idea of disorder driven collapse of the exchange enhancement. By measuring the energy gap at different large odd filling factors within each sample we find a {\em linear} increase in $\Delta_{spin}$ with magnetic field. 
Although a $\sqrt{B}$ increase would be expected for an exchange energy driven by the Coulomb energy, where correlation takes place over the magnetic length, this is not the case with multiple Landau levels (LL) occupied, when the correlation length is instead set by the Fermi wavevector $k_F$ and a linear behavior results. Finally, experiments in tilted field will be reported which show that the exchange part of $\Delta_{spin}$ depends only on the component of magnetic field perpendicular to the two-dimensional electron gas and that the increase in spin gap is due to the bare Zeeman term only. This explains why $g$-factors obtained from the coincidence method in tilted fields do not agree with those found from activation studies in perpendicular field \cite{nich,usher}. \section{The Spin Splitting Phenomenon} The samples studied here are the well-known GaAs/GaAlAs single heterojunctions grown at Philips Research Laboratories, Redhill, with undoped spacer layer thicknesses in the range 100 \AA$<L_z<$3200 \AA. The samples cover a wide range of density and mobility and include examples with low disorder, which exhibit the fractional quantum Hall effect, as well as highly disordered samples, which have very wide integer quantum Hall plateaux. Table~\ref{samples} lists some relevant sample parameters, measured at 1~K. The experiments consist of magnetoresistance measurements performed over a range of temperatures from 50~mK to 4.2~K with the magnetic field normal to the sample, except in the final section where we consider tilted field measurements. The temperature $T$ is measured and stabilized using a ruthenium oxide resistor, mounted to be in thermal equilibrium with the sample and to have negligible magnetoresistance. A typical low temperature recording of the diagonal resistivity $\rho_{xx}$ for a high mobility sample G641 is shown in Fig.~\ref{fig:sdh641}.
At high magnetic fields, in this case $B>0.5$~T, the SdHOs are spin split and the maxima are evenly spaced, appearing at all half-integer filling factors. At low field, $B<0.15$~T in Fig.~\ref{fig:sdh641}, the maxima are again evenly spaced but here there is no spin splitting so they appear at odd integer filling factors. By examining either of these regions in isolation it would be impossible to tell whether or not the SdHOs were spin split without additional information. In the intermediate region, whose extent depends on the temperature of the measurement, the spin splitting is partially resolved. In this region, the spin split minima are less deep than those due to cyclotron splitting since their energy gaps are smaller. Also towards lower magnetic fields the spin split maxima converge with a rapid change from a spacing $\delta\nu=|\nu_{\uparrow}-\nu_{\downarrow}|=1$ at high fields to $\delta\nu=0$ where the spin splitting disappears. In Fig.~\ref{fig:sdh641}, spin splitting can clearly be seen at $\nu=23$ and at higher magnification it can just be made out for the next two peaks but no more. Clearly this is only a qualitative judgement of $\nu_{c}$ whereas a quantitative definition is required. In FS this was taken to be the point where $\delta\nu=0.5$ at $T=0$. Although the zero temperature requirement simplifies the theory, it adds complications when one only has experimental results at finite temperature, as can be seen in Fig.~\ref{fig:rhonu641}. Data from several temperatures are re-plotted as a function of filling factor, allowing the convergence of the maxima to be seen more clearly. While the last spin split peak is at $\nu=25$ in the 90~mK data, by 600~mK it appears at more than twice the magnetic field at $\nu=9$. One aspect of this paper will be to establish a reliable way of extrapolating the real experimental data to the ideal $T=0$ situation.
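To appreciate how strong the enhancement must be, it helps to compare the bare Zeeman gap with the cyclotron gap in GaAs. The values $|g_0|\approx0.44$ and $m^*=0.067\,m_e$ used below are standard GaAs numbers assumed for this estimate:

```python
mu_B = 9.2740100783e-24           # Bohr magneton, J/T
hbar = 1.054571817e-34            # reduced Planck constant, J s
e = 1.602176634e-19               # elementary charge, C
g0 = 0.44                         # magnitude of the bare GaAs g-factor (assumed)
m_eff = 0.067 * 9.1093837015e-31  # GaAs effective mass (assumed)

B = 10.0                          # magnetic field, tesla
zeeman = g0 * mu_B * B            # bare Zeeman gap g0 mu_B B
cyclotron = hbar * e * B / m_eff  # cyclotron gap hbar omega_c

# both gaps scale linearly with B, so their ratio is field independent
assert 0.01 < zeeman / cyclotron < 0.02
```

The bare Zeeman gap is thus only about 1.5\% of the cyclotron gap at any field, so spin splitting as prominent as that in Fig.~\ref{fig:sdh641} already implies a large exchange contribution $E_{ex}$.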
Let us now examine some differences between the regular SdHOs and the spin splitting by again referring to the magnetic field dependence shown in Fig.~\ref{fig:sdh641}. The SdHOs are damped exponentially towards low field according to the well-known Lifshitz-Kosevich formula \cite{lk}: \begin{equation} \Delta\rho_{xx} \propto \frac{X}{\sinh{X}} \exp\left(-\frac{\pi}{\omega_c\tau_s}\right) \cos\left(\pi(\nu+1/2)\right), \label{eqn:LK} \end{equation} where the factor containing $X=2\pi^2kT/E_g$ arises from the width of the Fermi function at finite temperature. When there is no spin splitting the gap $E_g$ is the cyclotron energy $\hbar\omega_c$. Eq.~(\ref{eqn:LK}) shows that the damping at $T=0$ is just determined by the single particle scattering time $\tau_s$, but is larger at higher temperatures when the Fermi function becomes smeared. In practice this means that more SdHOs will be seen if the experiments are performed at lower temperatures and with higher resolution. Thus there is not a maximum filling factor $\nu_c^{SdH}$ at zero temperature and even when the LL broadening is significantly larger than $\hbar\omega_c$ small oscillations will still occur in the conductivity. However, the exponential damping will set a limit in a real experiment where the last oscillation observed is determined by experimental noise. Practically, great care also has to be taken to sweep the magnetic field sufficiently slowly and to obtain enough data points per oscillation. In our imperfect, noisy experiments, SdHOs have been regularly observed at filling factors in excess of 100 and in the best cases with $\nu>150$. Values of $\tau_s$ deduced from Eq.~(\ref{eqn:LK}) are included in Table~\ref{samples}. The spin splitting disappears in quite a different way. If the minima at odd $\nu$ just became unresolved at a certain field, it could be argued that the signal was becoming lost in the noise due to an exponential term, with a damping factor different from that of the regular SdHOs.
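For reference, the two damping factors in Eq.~(\ref{eqn:LK}) are easy to evaluate numerically. The GaAs effective mass $m^*=0.067\,m_e$ and the illustrative value of $\tau_s$ below are assumptions of this sketch, which omits the oscillatory cosine term:

```python
import math

k_B = 1.380649e-23                # Boltzmann constant, J/K
hbar = 1.054571817e-34            # reduced Planck constant, J s
e = 1.602176634e-19               # elementary charge, C
m_eff = 0.067 * 9.1093837015e-31  # GaAs effective mass (assumed)

def lk_damping(T, B, tau_s):
    """Thermal (X/sinh X) and Dingle damping factors of the SdH amplitude."""
    omega_c = e * B / m_eff       # cyclotron frequency
    E_g = hbar * omega_c          # cyclotron gap, the relevant E_g when spin is unresolved
    X = 2 * math.pi ** 2 * k_B * T / E_g
    thermal = X / math.sinh(X)
    dingle = math.exp(-math.pi / (omega_c * tau_s))
    return thermal * dingle

# lower temperature and higher field both make the oscillations larger
assert lk_damping(1.0, 0.5, 5e-12) < lk_damping(0.1, 0.5, 5e-12)
assert lk_damping(0.1, 0.2, 5e-12) < lk_damping(0.1, 0.5, 5e-12)
```
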
In this case spin splitting would be seen at lower fields (higher $\nu$) if the experiment were performed more carefully. However, this does not appear to be the case and, in addition, there is a finite field where the maxima from either side of a spin split minimum converge, irrespective of the amount of experimental noise or resolution. This is a sign that the collapse is critical, indicative of a second order phase change when the exchange interaction is turned off. The position of the phase change is however dependent both on the sample and on the temperature. A dramatic example of the critical collapse may be seen for the higher density and more disordered sample G590 in Fig.~\ref{fig:sdh590}. Here $\nu_c=11$, which is at a much higher magnetic field than for G641 (14 times greater), and shows only a very weak temperature dependence, changing from $\nu_c=11$ at 40~mK to $\nu_c=9$ at 900~mK. This is however still a high-quality sample, as can be seen in the inset where the SdHOs are observed down to very low fields and only disappear into the noise below 0.2~T at around $\nu=70$. The collapse of the spin splitting can be studied in at least three different ways: as a function of magnetic field at fixed temperature (preferably 0~K), as discussed above; for a given SdH peak (of Landau level $N$ \cite{N}) as a function of temperature at fixed field, as will be discussed below; or as a function of density for a given SdH peak at fixed temperature. We will only consider the first two cases. Wong {\em et al.}\ \cite{wong} used gated samples to study the density dependence and found that for each $N$ there was a critical density below which the splitting disappeared, consistent with a phase change. In their data, the density where $\delta\nu_N=0.5$ increased with the temperature of the measurement.
By fitting their data in the range $0.5<\delta\nu_N<0.9$ and extrapolating to $\delta\nu_N=0$ a critical density $n_c$ was found for each $N$ which allowed data from all peaks to be scaled onto a single curve for each sample. Their results show that to first order $n_c\propto N$ which means that there was a critical magnetic field at which the spin splitting collapsed and the effect of varying the density was to align different filling factors with this critical field. This is exactly what would be expected if disorder destroys the exchange enhancement whose energy scale is set by the magnetic field. The critical field also varied between the samples, presumably in line with the disorder. However, it would appear that changing density had little effect on the disorder, as otherwise the critical field would change with density and there would not be a linear relationship between $N$ and $n_c$. Hence, the vertical axis of the phase diagram depicted in Fig.~4 of Ref.~\cite{wong} is actually a measure of the spin gap size and not the disorder. \section{Temperature dependence of spin splitting} We will now consider how the spin splitting at a particular filling factor can be observed to collapse as a function of temperature. Again it appears to be a critical phenomenon. The temperature evolution of the resistivity around $\nu=15$ for sample G902 is shown in Fig.~\ref{fig:dnu902}. (This example is chosen as it shows the fully resolved spin split minimum at low temperature changing to a maximum by 1~K.) At the lowest temperatures the two maxima can be seen at $\nu=15\pm0.4$ and as the temperature is increased these remain at the same filling factor while the minimum becomes shallower. Only when the depth of the minimum is $<10$\% of the peak height do the maxima start to converge and then they do so rapidly as $\delta\nu$ collapses. It will also be noticed that once $\delta\nu$ is less than 0.5 it becomes difficult to measure reliably, as the peak has a rather flat top.
Figure~\ref{fig:maxima640} illustrates the actual filling factors at which maxima occur for each temperature (this time for sample G640 but all the samples behave similarly). The dotted lines on this figure show the positions expected when the spin splitting is either completely resolved or completely absent. By their convergence, the points clearly show transitions between these two limiting cases at different temperatures for each filling factor. Figure~\ref{fig:maxima640} thus represents the temperature driven phase diagram for sample G640 with the spin resolved phase to the lower left side of the figure. The dashed line on the figure, drawn through the positions where $\delta\nu=0.5$, defines the phase boundary. The separation of the individual spin split maxima is displayed as a function of temperature in Fig.~\ref{fig:dnuT902} for odd filling factors in the range $19>\nu>9$ from sample G902. For each filling factor the collapse looks to be quite similar but occurs at a different temperature. At lower filling factors there is no noticeable change in $\delta\nu$ in the temperature range of the experiment, although there would again be a collapse at higher temperature. If the spin gap had just one component that increased with magnetic field then it should be possible to collapse the data of Fig.~\ref{fig:dnuT902} onto a single curve as a function of a scaled temperature i.e.\ $\delta\nu(T/T_{\nu})$ with a $T_{\nu}$ determined for each filling factor. However, the data does not scale in this way. This can be seen by looking at the maximum value of $\delta\nu$, which would be 1.0 at $T=0$ for all $\nu$ if $\delta\nu$ were a function of $T/T_{\nu}$, but in fact decreases at higher $\nu$. Instead, the data can be collapsed by simply shifting the temperature axes, i.e.\ plotting it as $\delta\nu(T')=\delta\nu(T-T_{0.5})$ in Fig.~\ref{fig:scaled902}, where $T_{0.5}$ is the temperature where $\delta\nu=0.5$ for each filling factor.
$T_{0.5}$ is used in preference to the temperature where $\delta\nu=0$ as it is both experimentally accessible and corresponds to the mid-point of the gap's collapse. No fitting of the data is required so we do not have to assume any functional form for the collapse of the gap; we just read off $T_{0.5}$ at each filling factor from Fig.~\ref{fig:dnuT902}. The remarkable behavior shown in Fig.~\ref{fig:scaled902} demonstrates that the gap collapses over the same range of temperature for all filling factors. In other words there is a single finite width to the phase boundary. This suggests that once there is sufficient thermal energy to initiate a collapse (which increases with magnetic field) the same amount of additional thermal energy will complete the phase transition at all magnetic fields. Other samples show a similar behavior but with the transition occurring over a smaller temperature range in higher mobility samples, i.e.\ the width of the phase boundary decreases as the mobility increases. Indeed it is possible to collapse the data for all the samples (and all filling factors) onto a single curve by normalizing $T-T_{0.5}$ by a sample dependent temperature $T_0$ as shown in Fig.~\ref{fig:dnuscaled}. The value used for $T_0$ is found by fitting the data for each sample to $\delta\nu=0.5+a_1\bigl(1-\exp\left((T-T_{0.5})/T_0\right)\bigr)$, which is the simplest function that provides a reasonable fit to the data. Similar values of $T_0$, but slightly lower quality fits, can be obtained from a modified Brillouin function, as used in Ref.~\cite{wong}, $\delta\nu=a\coth\left(2a(T-T_{0.5}-T_0)/T_0\right)-b\coth\left(2b(T-T_{0.5}-T_0)/T_0\right)$. At present we do not understand the physical significance of either of these functional forms, merely using them to extract $T_0$, but we do note that $T_0$ is a good measure of the temperature range over which the transition proceeds.
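The shift-and-collapse procedure can be illustrated with synthetic curves generated from the simple exponential form quoted above. The parameter values below ($T_0$, $a_1$, and the $T_{0.5}$ for each filling factor) are illustrative, not fits to our data:

```python
import math

def delta_nu(T, T05, T0=0.15, a1=0.4):
    """Empirical form used to parameterize the collapse:
    delta_nu = 0.5 + a1*(1 - exp((T - T05)/T0)), clipped to [0, 0.9]."""
    val = 0.5 + a1 * (1.0 - math.exp((T - T05) / T0))
    return max(0.0, min(0.9, val))

def find_T05(Ts, dnus):
    """Read off T_{0.5}: linear interpolation where delta_nu crosses 0.5."""
    for t1, t2, d1, d2 in zip(Ts, Ts[1:], dnus, dnus[1:]):
        if d1 >= 0.5 >= d2 and d1 != d2:
            return t1 + (d1 - 0.5) * (t2 - t1) / (d1 - d2)
    return None

# Synthetic curves for three filling factors sharing one width T0 but
# collapsing at different temperatures T05 (illustrative values).
Ts = [0.05 * i for i in range(41)]          # 0 .. 2 K
T05_true = {19: 0.4, 15: 0.8, 11: 1.4}
curves = {nu: [delta_nu(T, t05) for T in Ts] for nu, t05 in T05_true.items()}
T05_est = {nu: find_T05(Ts, d) for nu, d in curves.items()}
# Plotting delta_nu against T' = T - T_{0.5} now collapses all the
# curves onto a single function, as in the scaled figure.
```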
Figure~\ref{fig:T0} shows how $T_0$ varies with disorder in the samples, where the degree of disorder is represented by the inverse quantum lifetime $1/\tau_s$, measured from the low field SdHOs in each sample. This assumes that the SdHO broadening is due to an impurity potential $\sim\hbar/\tau_s$. Although there is a lot of scatter on the graph (due to problems in measuring $\tau_s$ and the long route to finding $T_0$) it shows a clear correlation between $T_0$ and disorder. The linear fit shown has a gradient of 0.95~K~ps, which means $\hbar/\tau_s=8.0 k_BT_0$. We note the similarity between this factor and the fact that Eq.~(\ref{eqn:LK}) predicts SdHO minima will be $\sim50\%$ developed when $k_BT\sim E_g/6$. This appears to confirm a connection between the collapse of the spin splitting and the disorder potential. \section{Energy gaps at odd integer filling factors} We now turn our attention from the spin-split maxima to the minima at odd integral $\nu$ and use the temperature dependence of the resistivity there to evaluate the energy gaps at each odd filling factor. As we have previously discussed, the data can be analyzed in two ways. Either the actual value of $\rho_{xx}$ at the minimum can be used and an activation energy $\Delta$ extracted from an Arrhenius plot, or the depth of the minimum can be used to obtain an energy gap $E_g$ by fitting to the Lifshitz-Kosevich formula Eq.~(\ref{eqn:LK}). $\Delta$ and $E_g$ represent the energy gaps between mobility edges and between Landau level centers respectively and we expect that $E_g=\Delta +\Gamma$, where $\Gamma$ is the width of the region of extended states. Both $\Delta$ and $E_g$ are shown on Fig.~\ref{fig:Enu} for samples G641 and G650. The dashed line on this figure is the single particle Zeeman energy $g_0\mu_BB$. That this is much smaller than the measured energy gaps shows the gaps are dominated by exchange energy.
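The first method, extraction of $\Delta$ from an Arrhenius plot, amounts to a least-squares fit of $\ln\rho_{xx}$ against $1/T$, assuming the standard activated form $\rho_{xx}\propto\exp(-\Delta/2k_BT)$. A minimal sketch on synthetic (not measured) data:

```python
import math

def activation_gap(Ts, rho_min, kB=8.617e-5):
    """Activation energy Delta (eV) from an Arrhenius plot:
    rho_xx(min) ~ exp(-Delta/2 kB T), so ln(rho) is linear in 1/T
    with slope -Delta/2kB, found here by least squares."""
    xs = [1.0 / T for T in Ts]
    ys = [math.log(r) for r in rho_min]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -2.0 * kB * slope

# Synthetic check: generate data with Delta = 1 meV and recover it.
Delta = 1.0e-3  # eV
Ts = [0.3, 0.5, 0.8, 1.2]
rho = [math.exp(-Delta / (2 * 8.617e-5 * T)) for T in Ts]
print(activation_gap(Ts, rho))
```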
Although there are quite large experimental uncertainties in the measured gaps, especially at large $\nu$, it can be seen that $\Delta$ is approximately linear in $1/\nu$ (i.e. magnetic field) and becomes zero at a finite filling factor, giving us a measure of $\nu_c$ where the spin gap closes. (These values of $\nu_c$ will be discussed in the next section.) The values of $E_g$ measured at magnetic fields above this critical region also appear to be linear in $B$, with the same slope as $\Delta$ but this time extrapolating to the origin. This is consistent with a constant value of $\Gamma$ that can be obtained from the negative intercept of the fit to $\Delta$. Near the critical region there is some indication that $E_g$ starts to decrease below the straight line, but unfortunately it becomes unmeasurable once $\delta\nu<0.5$ where there is no longer a clear minimum observable over a range of temperatures. Furthermore, when the gap is collapsing as a function of temperature (as discussed above) the temperature dependence of resistivity cannot be used to obtain a value for the gap. We are thus limited to data at temperatures below the point where the maxima start to converge. For $\nu\ll\nu_c$ this is not a problem, but it increases the uncertainty in $E_g$ when $\nu\sim\nu_c$ as there are then fewer suitable points for fitting. It is important to note that these measurements suggest an energy gap due to exchange that increases {\em linearly} with magnetic field and is {\em independent} of temperature once the enhancement is turned on. This allows us to use an effective $g$-factor that is the same at all fields, and to describe the spin gap as $\Delta_{spin}=g^*\mu_BB$. Furthermore $g^*$ can be obtained from the gradients of the fits shown in Fig.~\ref{fig:Enu} as 6.2 for G650 and 7.6 for G641, values that are generally in agreement with earlier work \cite{usher}.
Many workers have previously reported such a linear increase \cite{usher,dolgo} and found their results puzzling, since simple theories of exchange suggest the gap should increase like $\sqrt{B}$. This square root dependence arises when the magnetic length $l_B=\sqrt{\hbar/eB}$ represents the average separation of electrons which determines the Coulomb energy $E_c=e^2/4\pi\epsilon_0\epsilon_r l_B$. However, this is only strictly true when there is just one Landau level occupied and the Coulomb energy is small compared with the cyclotron energy. When there are many LLs occupied neither of these conditions is fulfilled: $E_c>\hbar\omega_c$ and only $1/\nu$ of the electrons are able to contribute to the exchange energy as the others are in full LLs. At zero field, and also in the limit of large $\nu$, the correlation length will be set by the Fermi wavevector $k_F$ rather than $l_B$. These two factors lead to an exchange energy $E_{ex}\sim \left(e^2/4\pi\epsilon_0\epsilon_r\right) \left(k_F/\nu\right)$ which depends on density and increases linearly with the magnetic field at which each filling factor falls. Thus we can write \begin{equation} E_{ex}=\alpha\hbar\omega_c \end{equation} and since $\mu_B=e\hbar/2m_0$, with $m_0$ the free electron mass and $m^*$ the effective mass, $\alpha$ is related to the $g$-factor by: \begin{equation} \alpha=(g^*-g_0)\frac{m^*}{2m_0} \end{equation} Aleiner and Glazman \cite{aleiner} have used a more sophisticated Hartree-Fock calculation with Thomas-Fermi screening to show that the exchange energy should increase linearly with magnetic field in the large filling factor limit, but that there is an additional logarithmic factor such that $\alpha=\left(1/\pi k_F a_B\right) \ln(2k_F a_B)$, where $a_B$ is the effective Bohr radius. Thus $\alpha$ is predicted to vary with carrier density since, with $n_e$ in units of $10^{15}m^{-2}$, $k_F a_B=1.13\sqrt{n_e}$ giving a value of $\alpha=0.23$ at $n_e=10^{15}m^{-2}$.
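These expressions are easily checked numerically. The sketch below reproduces the quoted $\alpha=0.23$ and, combined with the GaAs mass ratio $m^*/m_0=0.067$ and bulk $g_0=0.44$ (assumed values, not taken from this section), gives a $g^*$ of the same order as the measured 6.2--7.6:

```python
import math

def alpha_AG(ne):
    """Aleiner-Glazman exchange coefficient,
    alpha = ln(2 kF aB) / (pi kF aB), with kF aB = 1.13 sqrt(ne)
    and ne in units of 1e15 m^-2."""
    kFaB = 1.13 * math.sqrt(ne)
    return math.log(2 * kFaB) / (math.pi * kFaB)

def g_star(alpha, g0=0.44, m_ratio=0.067):
    """Enhanced g-factor from alpha = (g* - g0) m*/(2 m0),
    assuming the GaAs mass ratio m*/m0 = 0.067."""
    return g0 + 2 * alpha / m_ratio

print(alpha_AG(1.0))   # ~0.23, as quoted for ne = 1e15 m^-2
print(g_star(alpha_AG(1.0)))
```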
While FS use this value of $\alpha$ in their calculation, they do not seem to account for its density dependence. The values of $g^*$ obtained for each sample from the gradients of plots such as Fig.~\ref{fig:Enu} are shown as a function of $n_e^{-1/2}$ in Fig.~\ref{fig:gvsn}. This figure clearly shows that there is a density dependence of $g^*$ and that it is somewhat weaker than $n_e^{-1/2}$. The dotted and the dashed lines given by the equations $g^*=g_0+6.27/\sqrt{n_e}$ and $g^*=g_0+6.22/\sqrt{n_e}+1.88 \ln(\sqrt{n_e})/\sqrt{n_e}$ represent the simplest and the best fits to the data respectively, which have been constrained to pass through $g^*=g_0$ at large density. By contrast the dash-dotted line is the prediction of Ref.~\cite{aleiner} ($g^*=g_0+6.75/\sqrt{n_e}+8.29\ln(\sqrt{n_e})/\sqrt{n_e}$), which does not fit the data particularly well and produces an unexpected maximum. The discrepancy is mostly due to an overestimate of the logarithmic correction, and apart from this numerical factor the experiment and theory are in reasonable agreement. \section{The critical filling factor --- $\nu_c$} Having discussed something of the temperature dependence of the spin splitting we can now establish a reliable way of obtaining the critical filling factor $\nu_c$ at $T=0$, which is the quantity required for comparison with the theory of FS. The first, and most direct, approach is to follow the temperature dependence of the separation $\delta\nu$ between the spin split maxima. The crudest method of finding $\nu_c$ is by extrapolation of the temperatures where $\delta\nu=0.5$ on Fig.~\ref{fig:maxima640} to $T=0$. The problem with this approach is that the value obtained is sensitive to the functional form chosen for the curve, particularly if the experimental data does not extend to sufficiently low temperature. A more accurate method is to plot the values of $T_{0.5}$ as a function of $1/\nu$ and extrapolate to $T=0$ as shown in Fig.~\ref{fig:nuc640}.
This graph shows a linear dependence, with the intercept giving a critical filling factor at $T=0$ of 23 for this sample. From these data it is quite clear that at $T=0$ the spin splitting will collapse at a certain magnetic field as opposed to tailing off exponentially. A similar approximately linear relationship is observed for all the samples and so we propose the empirical relationship: \begin{equation} \frac{1}{\nu_c(T)}=\frac{1}{\nu_c(0)}+cT \label{eqn:nucT} \end{equation} with $c$ and $\nu_c(0)$ as sample dependent parameters. These parameters will be discussed in more detail later in the paper, but at this point it is important to note that there is no direct correlation between them. This means that $\nu_c(0)$ cannot be obtained just by finding the last spin split maximum in a single finite temperature resistivity trace. There is sometimes a deviation from linearity at the lowest filling factors $\nu=3$ or 5, with values of $T_{0.5}$ up to 20\% below the line. This may be an artifact of the experiment since the transition region occurs above 1~K where the temperature in the fridge was not always stable. Alternatively it may provide evidence for skyrmionic excitations at these filling factors having a correspondingly lower energy than the single spin flips. In any case we have not used these points in our analysis of $\nu_c$. The critical filling factor can also be obtained from the point where the mobility gap $\Delta$ collapses, via the temperature dependent resistivity data discussed in the previous section. The values of $\nu_c$ obtained from the two methods are very similar, although for the cleanest samples the collapse of the gap gives slightly smaller values. This may be due to the difficulty in measuring small energy gaps and for the remainder of the paper we will use the former method.
\subsection{Sample dependence of $\nu_c$} Having established a method of finding the $T=0$ value of $\nu_c$ we can now investigate how it varies between samples. In their calculations Fogler and Shklovskii \cite{fs} found different expressions for the dependence of $\nu_c$ on sample parameters according to the range of the dominant scattering mechanism in each case. For low mobility samples they found $N_c\propto n_e\mu$ and for high mobility samples $N_c\propto L_z n_e^{5/6}/n_i^{1/3}$, for electron densities in our range of interest, with similar expressions for other densities. These predictions are tested by plotting our measured $\nu_c$ against $n_e\mu$ and $L_z n_e^{5/6}$ in Fig.~\ref{fig:nuln}. Clearly there are some samples that agree with the predictions in each case but neither description applies to all the samples and there is no clear distinction between the low and high mobility samples. This lack of agreement is not unexpected since our samples cover a wide range of densities and mobilities and different scattering mechanisms will dominate. In drawing these figures we have also to assume that the impurity density $n_i$ is constant, as there is no way of measuring this directly, but in practice it may differ widely between the samples. However, in FS all the sample parameters enter through the calculation of the scattering rate $\tau_s$. Rather than try to calculate this quantity we will show that the theory is essentially correct by using the measured value of $\tau_s$, obtained from the SdHOs at low magnetic field. Simply speaking the spin splitting will collapse when the energy separation of spin up and down levels is less than their disorder broadening {\em i.e.}\ when \begin{equation} g^*\mu_BB=\hbar/\tau_s . \end{equation} At this point the exchange contribution will disappear and only the small bare Zeeman splitting will remain.
Rearranging this equation in terms of the critical filling factor and electron density we obtain \begin{equation} \nu_c=g^*n_e\tau_s\frac{h}{2m_0} \label{eqn:nuc} \end{equation} The $T=0$ values of $\nu_c$ are plotted as a function of $g^*n_e\tau_s$ in Fig.~\ref{fig:nugntau}, together with the prediction of Eq.~(\ref{eqn:nuc}). Clearly this provides a good description for all the samples without using any adjustable parameters. There are quite large experimental uncertainties in the measurement of both $g^*$ and $\tau_s$ indicated by the error bars. (A reliable value of $\tau_s$ could not be obtained for sample G627 because the magnetic field was swept too fast to resolve the SdHOs at low field. However, $\nu_c$ for this sample is quite in line with the value expected.) It is quite remarkable how well this very simple approach matches the experimental data, particularly in that the disorder broadening seems to be adequately described by $\hbar/\tau_s$ without any numerical factors. Returning to Eq.~(\ref{eqn:nucT}), it can be seen that not only did the critical filling factor at $T=0$ vary between the samples, but so did the rate of change of critical filling factor with temperature. We will now try to understand this temperature dependence in another simple model. As the temperature is increased, reversed spins will be thermally excited, which will reduce the exchange correlation. Thus the exchange enhancement can be destroyed even when $g^*\mu_BB>\hbar/\tau_s$. We propose the critical condition for the spin splitting to collapse at finite temperature to be \begin{equation} g^*\mu_BB=\hbar/\tau_s + M k_BT \end{equation} where $M$ is a numerical constant that determines the effect of the thermal fluctuations on the exchange energy.
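Eq.~(\ref{eqn:nuc}) and the finite-temperature condition above can be evaluated directly. The sketch below uses illustrative, not sample-specific, parameter values:

```python
def nu_c(g_star, ne, tau_s):
    """Critical filling factor at T=0 from g* mu_B B = hbar/tau_s,
    rewritten with nu = ne h/(e B):  nu_c = g* ne tau_s h / (2 m0)."""
    h, m0 = 6.626e-34, 9.109e-31
    return g_star * ne * tau_s * h / (2 * m0)

def nu_c_T(g_star, ne, tau_s, T, M=2.1):
    """Finite-temperature condition g* mu_B B = hbar/tau_s + M kB T,
    solved for B and converted back to a filling factor."""
    hbar, kB, h, muB, e = 1.055e-34, 1.381e-23, 6.626e-34, 9.274e-24, 1.602e-19
    B = (hbar / tau_s + M * kB * T) / (g_star * muB)
    return ne * h / (e * B)

# Illustrative numbers: g* = 7, ne = 1e15 m^-2, tau_s = 5 ps give
# nu_c ~ 13 at T = 0; raising T lowers nu_c, as in Eq. (nucT).
print(nu_c(7.0, 1e15, 5e-12), nu_c_T(7.0, 1e15, 5e-12, 1.0))
```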
Comparing this with Eq.~(\ref{eqn:nucT}), and using Eq.~(\ref{eqn:nuc}) to replace $\tau_s$ with $\nu_c(0)$, shows that the parameters $c$ and $M$ that we have introduced are related by $c=M k_B m_0/\pi\hbar^2 n_e g^*$, {\em i.e.}\ just fundamental constants and the sample dependent product $n_eg^*$. In Fig.~\ref{fig:cgn} the linear dependence of $1/c$ on this product is very clear and from the slope of this graph we obtain a value of $M=2.1$ which appears quite reasonable. Again it is remarkable how well this simple model, of thermally excited spins aiding the disorder potential in destroying the exchange correlation, accounts for the data. A full theory is required to account for the actual value of $M$. \section{Tilted magnetic field and the enhanced \lowercase {g}-factor} It is well known that for a 2DEG the cyclotron motion, and hence the energy $\hbar\omega_c$ and the magnetic length, depends only on the perpendicular component of $B$, whereas the Zeeman energy depends on the total magnetic field. This means that on tilting the sample (such that the angle between the magnetic field and a direction normal to the 2DEG is $\theta$) the ratio of the Zeeman to cyclotron energy will increase. When the Zeeman energy is exactly half of the cyclotron energy, at some angle $\theta_c$, a ladder of equally spaced levels is generated and we would expect that the odd and even integer minima in the SdHO have the same depth. At this point we have the condition that $\hbar eB\cos \theta_c/m^*=2\Delta_{spin}$ which forms the basis for the coincidence method of determining the $g$-factor \cite{nich}. On further tilting other coincidence conditions occur as the levels cross each other. An example of the first condition can be seen in Fig.~\ref{fig:coincidence} for sample G650 at very high tilt angles.
In the two lower traces the even integer minima are deeper and on further rotation the upper two traces show that the odd integer (spin split) minima dominate, allowing us to deduce that the coincidence angle is close to $\theta_c=87.3^o$. Notice that this is very close to $90^o$ where the magnetic field is parallel to the 2DEG and $\cos\theta$ is changing very rapidly. In the past the enhanced $g$-factor has been extracted by writing $\Delta_{spin}=g^*\mu_BB$ which gives the coincidence condition as \begin{equation} \cos\theta_c=g^*\frac{m^*}{m_0} \label{eqn:coin} \end{equation} and in this case a value of $g^*=0.71$. Clearly this is at variance with the value of $g^*=6.8$ found in Section~IV. The same results have also been found in all previous coincidence measurements and various explanations have been given, such as there being less exchange enhancement at high tilt angle or variable degrees of enhancement depending on the occupancies used at the coincidence. However, Eq.~(\ref{eqn:coin}) is not correct, because $\Delta_{spin}$ does not increase linearly with the total field. Although the bare Zeeman part of $\Delta_{spin}$ will increase with total field, for an ideal 2DEG the exchange contribution depends only on the perpendicular component so we should write \begin{equation} \Delta_{spin}=g_0\mu_BB_{\scriptscriptstyle TOTAL}+\alpha\hbar e B_{\perp}/m^* \label{eqn:tilt} \end{equation} which leads to a new coincidence condition \begin{equation} \cos\theta_c=\frac{g_0}{1-2\alpha}\frac{m^*}{m_0}. \label{eqn:coin2} \end{equation} Using a value of $\alpha=0.20$ appropriate for G650 with the coincidence at $87.3^o$ gives a value of $g_0=0.4$, in good agreement with the bulk band edge value of 0.44. In practice we would expect a small reduction in the $g$-factor in the 2-D layer due to the effects of non-parabolicity, thus giving even better agreement.
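Both the naive and the corrected coincidence analyses are straightforward to evaluate. The sketch below reproduces the numbers quoted for G650, with the GaAs mass ratio $m^*/m_0=0.067$ taken as an assumed value:

```python
import math

def g_naive(theta_c_deg, m_ratio=0.067):
    """Naive analysis, cos(theta_c) = g* m*/m0, which wrongly
    attributes the whole splitting to the total field (Eq. coin)."""
    return math.cos(math.radians(theta_c_deg)) / m_ratio

def g0_from_coincidence(theta_c_deg, alpha, m_ratio=0.067):
    """Bare g-factor from the corrected coincidence condition
    cos(theta_c) = (g0/(1 - 2 alpha)) * (m*/m0) (Eq. coin2)."""
    return math.cos(math.radians(theta_c_deg)) * (1 - 2 * alpha) / m_ratio

print(g_naive(87.3))                     # ~0.7, the apparent g-factor
print(g0_from_coincidence(87.3, 0.20))   # ~0.4, close to the bulk 0.44
```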
Our conclusion therefore is that when analyzed in a way which includes the two dimensional nature of the exchange enhancement, the coincidence method gives a correct picture of the spin splitting which agrees with other measurements. To further substantiate the claim that only the bare Zeeman term increases on tilting, we have measured $\nu_c$ as a function of tilt angle. The resistivity of sample G650 is shown in Fig.~\ref{fig:tilt} for angles between 0 and $85^o$ as a function of the normal component of magnetic field. This demonstrates that more spin split peaks are observed at higher tilt angles, as expected for any mechanism that increases the spin splitting. By following the resistivity at fixed filling factor, it can also be seen that $\delta\nu$ for the spin split maxima displays a very similar behavior as a function of tilt to that found earlier as a function of temperature. In the tilting case, the additional parallel field should be considered to open up the spin gap by increasing the bare Zeeman contribution, thus delaying the disorder driven collapse just as increasing temperature hastened this collapse by adding to the spin disorder. In Ref.\ \cite{wong} the critical density was found to decrease at higher tilt angles, which is equivalent to saying that the point of collapse moves to lower magnetic fields and is entirely consistent with our results. When only the bare Zeeman energy increases on tilting, the critical filling factor is given by \begin{equation} \nu_c(\theta)=n_e\tau_s\ h\left(\frac{g_0}{2m_0\cos\theta }+\frac{\alpha}{m^*} \right) \label{eqn:nuctheta} \end{equation} where it can be seen that the increase in $\nu_c$ with angle is only due to the first term -- the contribution from the bare Zeeman energy. The measured critical filling factor is shown as a function of tilt in Fig.~\ref{fig:tiltnu} for sample G650.
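Eq.~(\ref{eqn:nuctheta}) can likewise be evaluated directly; the parameters below are illustrative rather than fitted to G650:

```python
import math

def nu_c_tilt(theta_deg, ne, tau_s, g0=0.40, alpha=0.20,
              m_eff=0.067 * 9.109e-31):
    """Critical filling factor vs tilt angle (Eq. nuctheta): only the
    bare Zeeman term grows with total field (the 1/cos(theta) factor);
    the exchange term depends on the perpendicular field alone."""
    h, m0 = 6.626e-34, 9.109e-31
    c = math.cos(math.radians(theta_deg))
    return ne * tau_s * h * (g0 / (2 * m0 * c) + alpha / m_eff)

# Illustrative parameters: the increase from 0 to 85 degrees is modest
# because the bare Zeeman term is much smaller than the exchange term
# at theta = 0, so the gradient reflects g0, not g*.
ne, tau_s = 1.1e15, 5e-12
print(nu_c_tilt(0, ne, tau_s), nu_c_tilt(85, ne, tau_s))
```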
The line drawn on this figure has the gradient predicted by Eq.~(\ref{eqn:nuctheta}), using $g_0=0.40$, with an intercept for $\nu_c$ slightly lower than deduced in section V due to the finite temperature of the measurement. The agreement between the theory and experiment clearly demonstrates that only one part of the spin splitting is increasing as the field is tilted, as otherwise the gradient would depend on $g^*$ and be some 15 times larger. At the highest angles the data begins to fall below the line. The most likely cause of this is a reduction in $\tau_s$ caused by the parallel field, which will push the wavefunction closer to the interface, increasing the scattering. For angles above $85^{o}$ there is also a large positive magnetoresistance, suggesting a change in scattering. However, we were not able to verify this directly by measuring $\tau_s$ at high tilt angles, precisely because the SdHOs are spin split to much higher filling factors. Several papers have previously measured activation energies in tilted fields in an attempt to extract the enhanced $g$-factor from the rate of change of energy gap with tilt angle. The new analysis presented above suggests however that accurate and consistent results can only be achieved by including the two dimensional nature of the exchange interactions. \section{Conclusion} In this paper we have considered how the spin splitting is increased by the addition of an exchange term to the bare Zeeman energy. This enhancement is only present when the spin gap is larger than the disorder potential in the sample, which can be parameterized by $\tau_s$. Once the exchange enhancement can no longer be sustained the spin splitting collapses critically, which is seen both in the separation in filling factor $\delta\nu$ of the spin split maxima and in the energy gap deduced from the depth of the minima. We have examined the filling factor $\nu_c$ at which this critical collapse occurs as a function of temperature.
Increasing the temperature leads to a lower value of $\nu_c$ because thermally excited spins essentially add to the disorder potential that the exchange energy must exceed. A scaling behavior is found which maps the temperature dependence of $\delta\nu$ for all filling factors in all the samples onto a single curve. A reliable method of extracting the critical filling factor at $T=0$ from finite temperature data is established. By investigating the variation of $\nu_c$ at zero temperature between samples we find a universal empirical relationship $\nu_c = g^*n_e\tau_s h/2m_0$ which is completely consistent with the picture of disorder driven collapse. In tilted magnetic field experiments, $\nu_c$ increases as the component parallel to the 2DEG is increased. This is found to be due to only the bare Zeeman component of the spin splitting increasing, not the exchange term. This finding allows us to understand past discrepancies between different methods of measuring the enhanced $g$-factor. \section*{Acknowledgements} We would like to thank H. Aoki, G. Kido, D.M. Symons and S. Uji for assistance at NRIM, Tsukuba, in making the tilted field measurements.
\section{INTRODUCTION} The popular unification scheme for active galactic nuclei (AGNs) requires that the observed differences between type 1 and type 2 AGNs arise from the orientation \cite[e.g.,][]{Antonucci, Urry}, and the basic premise is that all AGNs are fundamentally the same. This scheme proposes a geometrically thick dusty torus surrounding the AGN central engine (accretion disk and supermassive black hole), with the torus providing anisotropic obscuration of the central region so that sources viewed face-on are recognized as type 1 AGNs, while those observed edge-on are type 2 AGNs. However, even if this torus exists in all AGNs, its key parameters such as its geometry and physical properties are still unclear. We focus here on the geometrical covering fraction of the dust torus, a fundamental parameter in the unification scheme. The covering factor (CF) is defined as the fraction of the sky, as seen from the AGN center, that is blocked by heavily obscuring material. This corresponds to the fraction of type 2 AGNs in the entire AGN population. Recently, some authors have claimed that the CF depends on the luminosity and redshift. For example, \cite{Simpson+05} examined data for 4,304 galaxies (including AGNs) from the Sloan Digital Sky Survey (SDSS) Data Release 2 (DR2) and found that the CF decreases with increasing [OIII] emission line luminosity, which is believed to be isotropic. \cite{Hasinger} also reported a negative correlation between the fraction of absorbed ($\sim$ type 2) AGNs and the X-ray (2--10 keV) luminosity based on 1,290 AGNs selected in the 2--10 keV band from different flux-limited surveys with very high optical identification completeness. Furthermore, \cite{Hasinger} found that the absorbed fraction increases significantly with increasing redshift, saturating at a redshift of $z$ $\sim$ 2. 
Recently, \cite{Toba} also confirmed the luminosity dependence of the CF by using the {\it AKARI} mid-infrared all-sky survey catalog \citep{Ishihara}. Some authors, however, have questioned these dependencies by claiming that the data are affected by various uncertainties. In particular, the observed correlations can be explained as a selection effect, in which case they may not necessarily have any astrophysical significance. For instance, \cite{Dwelly} found from {\it XMM-Newton} observations of the Chandra Deep Field-South that there is no evidence that the absorption distribution is dependent on either the intrinsic X-ray luminosity or the redshift. \cite{Akylas} suggested that the apparent increase in the absorbed AGN fraction with increasing redshift is due to a systematic overestimation of the column densities measured in high redshift sources where the absorption cut-off is shifted towards low energies. \cite{Lawrence+10} attempted to carefully distinguish strict type 2 AGNs from more lightly reddened type 1 AGNs, as well as from low-excitation narrow-line AGNs, which were assembled from the literature. They also showed that radio, infrared (IR), and volume-limited samples all agree in showing that the type 2 fraction does not change with luminosity. Therefore, it is still unclear whether the CF intrinsically depends on the luminosity and particularly on redshift. To resolve this problem, it is important to conduct a statistical analysis based on IR observations; reprocessed radiation from the dust in the torus is re-emitted in the IR wavelength range. Mid-IR (MIR) emission, in particular, is expected to be direct radiation from the dust torus and uninfluenced by dust extinction. In this paper, we estimate the CF of the dust torus using the MIR luminosity functions (LFs) and examine the luminosity and redshift dependence based on a statistically complete AGN sample. 
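Under the unification scheme the CF in each luminosity bin follows directly from the type-1 and type-2 luminosity functions. A minimal sketch (a hypothetical helper, not code used in this work):

```python
def covering_factor(phi_type2, phi_type1):
    """Type-2 fraction per luminosity bin, CF = phi_2/(phi_1 + phi_2),
    computed from the type-1 and type-2 luminosity functions."""
    return [p2 / (p1 + p2) for p1, p2 in zip(phi_type1, phi_type2)]

# Toy example: if type-2 AGNs become relatively rarer at high
# luminosity, the CF decreases with luminosity (the trend under debate).
print(covering_factor([3.0, 1.0], [3.0, 9.0]))
```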
The LF of galaxies is a fundamental statistical tool for describing galaxy properties, since it should be almost entirely independent of the viewing angle. We construct the MIR LFs using the data from the {\it Wide-field Infrared Survey Explorer} \citep[{\it WISE}:][]{Wright}, which was launched in 2009. {\it WISE} performed an all-sky survey with a high sensitivity in four bands (particularly relevant to the study here are the 12- and 22-$\mu$m bands). While the spatial resolution of {\it WISE} is relatively poor owing to the 40-cm diameter of the telescope, it is several orders of magnitude better than those of the {\it Infrared Astronomical Satellite} \citep[{\it IRAS}:][]{Neugebauer,Beichman} and {\it AKARI} \citep{Murakami}, both of which performed previous all-sky IR surveys. This paper is organized as follows. Section 2 describes the sample selection and derivation of the LFs, and the 12- and 22-$\mu$m LFs computed using the 1/$V_{\mathrm{max}}$ technique are presented in Section 3. Our results are then compared with previous studies. In Section 4, we consider the origin of the MIR emission. According to an empirical method based on a {\it WISE} color--color diagram, we extract sources that are dominated in the MIR by the active nucleus. We then estimate the CF for those AGN-dominated MIR objects and discuss the luminosity and redshift dependence of the CF by analyzing the relationship between the CF and luminosity in separate redshift bins. This paper provides us with statistically robust results about the luminosity and redshift dependence of the CF and yields a reliable dust torus model that explains the results. Throughout this paper, we assume a flat universe with $\Omega_k =0$, and we adopt ($\Omega_M$, $\Omega_{\Lambda}$) = (0.3, 0.7) and $H_0$ = 75 km s$^{-1}$ Mpc$^{-1}$. 
\label{Section_Data_and_Analysis_WISE} \section{DATA AND ANALYSIS} We selected 12- and 22-$\mu$m flux-limited galaxies based on the {\it WISE} and SDSS catalogs, and these galaxies were then classified into five types according to their optical spectroscopic information in the SDSS catalog. For spectroscopically classified galaxies, we constructed the LFs using the 1/$V_{\mathrm{max}}$ method, considering both the detection limit of the {\it WISE} and SDSS catalogs. \subsection{Sample Selection} \label{sample_selection} \begin{figure} \epsscale{1} \plotone{Figure_1.eps} \caption{Flow chart of the sample selection process.} \label{flow_chart_sample_selection} \end{figure} The {\it WISE} All-Sky Release Source Catalog provides positions and four-band (3.4-, 4.6-, 12-, and 22-$\mu$m) photometry for 563,921,584 objects. In particular, there are 26,673,624 and 3,846,254 sources in the all-sky catalog with $\geq$10$\sigma$ detections in the 12- and 22-$\mu$m bands, respectively. The sample used for this study was selected from {\it WISE} MIR sources with spectroscopy from the SDSS DR8 \citep{Aihara}. In the end, we selected a 12-$\mu$m flux-limited sample of 223,982 galaxies and a 22-$\mu$m flux-limited sample of 25,721 galaxies. \subsubsection{WISE sample} {\it WISE} performed an all-sky survey at 3.4, 4.6, 12, and 22 $\mu$m with angular resolutions of 6.1, 6.4, 6.5, and 12.0 arcsec and a 5$\sigma$ photometric sensitivity better than 0.08, 0.11, 1, and 6 mJy (corresponding to 16.5, 15.5, 11.2, and 7.9 Vega magnitudes), respectively, in these four bands \citep{Wright}. A flow chart of our sample selection process is shown in Figure \ref{flow_chart_sample_selection}. We first narrowed our sample to {\it WISE} sources within the SDSS DR8 Legacy region (7966 deg$^2$). The SDSS spectroscopic survey is performed using two multi-object fiber spectrographs on the same telescope. 
Each spectroscopic fiber plug plate, referred to as a ``tile'', has a circular field-of-view with a radius of 1.49 degrees \citep{Blanton_03}, and 1794 tiles are employed in the Legacy survey. Because the tiles are circular, there is a fraction of the sky that is covered by the overlap of tiles. The equatorial coordinates of the tile centers are contained in the {\tt sdssTileAll} table. For the coordinates of each tile center, we searched for nearby {\it WISE} sources within a search radius of 1.49 degrees, which yielded a total of 82,107,768 {\it WISE} sources. We then extracted 10$\sigma$-detected objects at 12 $\mu$m above 0.9 mJy ($\sim$ 11.4 mag) or at 22 $\mu$m above 9.0 mJy ($\sim$ 7.4 mag). These fluxes correspond to almost 100\% completeness flux limits, according to the Explanatory Supplement to the {\it WISE} All-Sky Data Release Products\footnote{\url{http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/}}. When extracting the sources, we also checked whether they were saturated. The observed saturation levels of {\it WISE} are 1.0 Jy ($\sim$3.8 mag) and 12.0 Jy ($\sim -0.4$ mag) for 12 and 22 $\mu$m, respectively. The saturated pixel fractions listed in the catalog are flagged as {\it w1-4sat} in each band. We eliminated sources that had fluxes exceeding the saturation level and a high fraction of saturated pixels (i.e., {\it WISE} sources with $w3sat \neq$ 0 for 12 $\mu$m or $w4sat \neq$ 0 for 22 $\mu$m were eliminated). In addition, we checked sources that were contaminated or biased due to proximity to an image artifact (e.g., diffraction spikes, scattered-light halos, or optical ghosts) according to {\it w1-4cc\_map}. A source that was unaffected by known artifacts was flagged as {\it w1-4cc\_map} = 0. We thus eliminated sources with $w3cc\_map \neq$ 0 for 12 $\mu$m or $w4cc\_map \neq$ 0 for 22 $\mu$m. This reduced the {\it WISE} sample to 1,350,393 sources.
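These photometric cuts can be summarized as a simple selection function. The following is a minimal sketch (not the actual selection pipeline); the dictionary keys are hypothetical labels for the corresponding catalog columns:

```python
# Minimal sketch of the flux, S/N, saturation, and artifact cuts.
# Field names (f12_mjy, snr12, ...) are hypothetical column labels.

def keep_12um(src):
    """12-um cuts: flux above the ~100% completeness limit (0.9 mJy),
    a >= 10 sigma detection, no saturated pixels, no image artifacts."""
    return (src["f12_mjy"] >= 0.9 and src["snr12"] >= 10.0
            and src["w3sat"] == 0 and src["w3cc_map"] == 0)

def keep_22um(src):
    """22-um cuts: completeness limit 9.0 mJy with the w4 flags."""
    return (src["f22_mjy"] >= 9.0 and src["snr22"] >= 10.0
            and src["w4sat"] == 0 and src["w4cc_map"] == 0)

# Values from the first entry of Table 1 (SDSS J000000.74-091320.2):
# it passes the 12-um cuts but is too faint (and low S/N) at 22 um.
src = {"f12_mjy": 2.19, "snr12": 14.9, "w3sat": 0, "w3cc_map": 0,
       "f22_mjy": 2.84, "snr22": 1.1, "w4sat": 0, "w4cc_map": 0}
```

Because a source passes the 12- and 22-$\mu$m criteria independently, an object can enter one flux-limited sample but not the other.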
Note that the {\it WISE} catalog contains the Vega magnitude of each source, and we converted these to fluxes. The zero magnitude flux densities for 12 and 22 $\mu$m are 31.674 and 8.363 Jy, respectively. Here, the profile-fitting magnitude ({\it w1-4mpro}) was used as the magnitude for the majority of the {\it WISE} sources. However, because the {\it w1-4mpro} photometry is optimized for point sources and may underestimate the true brightness of extended sources, we used the elliptical aperture magnitude ({\it w1-4gmag}) for the extended sources. The aperture is based on the elliptical shape reported in the Two Micron All Sky Survey (2MASS) Extended Source Catalog (XSC). We defined extended sources using {\it ext\_flg} in the {\it WISE} catalog. These 1,350,393 sources were then cross-identified with the {\it Tycho-2} Catalog \citep{Hog} to remove galactic bright stars. The {\it Tycho-2} catalog contains the positions, proper motions, and two-color photometry of the 2.5 million bright stars in the sky down to the magnitude limit of the plates ($V_T \sim11.5$). To avoid omitting high proper-motion stars, we referred to the \textit{mean position} rigorously propagated to the epoch J2000.0 by the proper motions in this catalog. As a result, a total of 225,547 stars (hereinafter {\it WISE-Tycho 2} stars) were identified. As shown in Figure \ref{cross-match_Tycho2_SDSS}, we adopted a 3-arcsec search radius because the star density in the SDSS spectroscopic region is at most $\sim$50 deg$^{-2}$ \citep{Hog}. Thus, the probability of chance coincidence is less than 0.01\% (i.e., 1,350,393 $\times$ 0.0001 $\sim$ 135 sources may be misidentified), which is acceptable. \begin{figure} \epsscale{1} \plottwo{Figure_2A.eps}{Figure_2B.eps} \caption{Histogram of the angular separation of {\it WISE} sources from the {\it Tycho-2} (left) and SDSS (right) coordinates. A search radius of 3 arcsec, as shown in red, was adopted for both sets. 
Cross-matching with the {\it Tycho-2} coordinates selected 225,547 objects within the search radius, while that with the SDSS coordinates selected 259,969 objects.} \label{cross-match_Tycho2_SDSS} \end{figure} \begin{figure} \epsscale{1} \plotone{Figure_3.eps} \caption{Color ([4.6] $-$ [12]) distribution of the {\it WISE} sources. The blue region represents {\it WISE-non Tycho 2} objects, and the red region represents {\it WISE-Tycho 2} stars. The dotted line indicates the threshold of star--galaxy separation. Objects with [4.6] $-$ [12] $\leq$ 0.5 are removed as stars.} \label{W2_3_distribution} \end{figure} Of the 1,124,846 remaining sources (hereinafter {\it WISE-non Tycho 2} objects), we removed certain stars based on their colors. Figure \ref{W2_3_distribution} shows a histogram of the [4.6] $-$ [12] color of the {\it WISE-non Tycho 2} objects. Here, [4.6] and [12] represent the Vega magnitudes in the {\it WISE} 4.6- and 12-$\mu$m bands, respectively. The zero magnitude flux density for 4.6 $\mu$m is 171.787 Jy. To examine the color distribution of the stars, the {\it WISE-Tycho 2} stars are also plotted for comparison. As shown in Figure \ref{W2_3_distribution}, the {\it WISE-Tycho 2} stars are located at [4.6] $-$ [12] $\sim$ 0 because the radiation from galactic stars is dominated by the Rayleigh--Jeans tail of the blackbody spectrum, thus yielding a Vega-system color near zero. To ensure the reliability of the color value, we examined the color of objects with a signal-to-noise ratio (S/N) greater than 10, $w2/3sat = 0$, and $w2/3cc\_map = 0$. We identified 263,417 objects as stars according to the following criterion: \begin{equation} [4.6] - [12] \leq 0.5\;. \end{equation} We note that some stars remain in the 0.5 $<$ [4.6] $-$ [12] $<$ 1 region, but this region may also be populated with nearby elliptical galaxies with no dust. Therefore, we adopted a conservative criterion to avoid the omission of galaxies.
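For reference, the magnitude-to-flux conversion and the star--galaxy color cut above can be sketched as follows (an illustrative snippet using the zero-magnitude flux densities quoted in the text; not the catalog pipeline):

```python
# Illustrative sketch: WISE Vega magnitudes to flux densities, plus the
# [4.6] - [12] <= 0.5 star cut. Zero-magnitude flux densities (Jy) are
# the values quoted in the text for the 4.6-, 12-, and 22-um bands.

F0_JY = {"w2": 171.787, "w3": 31.674, "w4": 8.363}

def vega_mag_to_flux_mjy(mag, band):
    """Flux density in mJy corresponding to a WISE Vega magnitude."""
    return F0_JY[band] * 10.0 ** (-0.4 * mag) * 1.0e3

def is_star(w2_mag, w3_mag):
    """Stellar photospheres (Rayleigh-Jeans tail) sit near [4.6] - [12] ~ 0."""
    return (w2_mag - w3_mag) <= 0.5

# The 12-um completeness limit: ~11.4 mag corresponds to ~0.87 mJy
f_lim = vega_mag_to_flux_mjy(11.4, "w3")
```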
This left 861,429 objects in our {\it WISE} sample. \subsubsection{SDSS sample} \label{SDSS_sample} The SDSS DR8 spectroscopic catalog includes the main galaxy \citep{Strauss}, luminous red galaxy (LRG) \citep{Eisenstein}, and QSO \citep{Richards} samples. The DR8 legacy spectroscopic survey catalog contains about 1.5 million sources and covers 7,966 deg$^2$. SDSS sources with legacyPrimary = 1 were selected from the {\tt SpecPhoto} table, created by joining spectroscopic and photometric information. The ``legacyPrimary'' parameter is designed to choose the best available unique set of spectra of the legacy sources, and so the above criterion ensures clean spectroscopic data. We focused on the main galaxy and QSO samples in this study; these are magnitude-limited samples with Petrosian magnitudes \citep{Petrosian} brighter than $r$ = 17.77 for the main galaxy sample and with point-spread function (PSF) magnitudes brighter than $i$ = 19.1 for the QSO sample at $z <$ 3. The principal spectroscopic type (galaxy or QSO) is listed under the column headed ``CLASS'' in the {\tt SpecPhoto} table. We thus extracted objects with (i) petroMag\_r below 17.77 mag and CLASS = ``GALAXY'' and (ii) psfMag\_i below 19.1 mag and CLASS = ``QSO''. This process selected 683,071 SDSS objects. \subsubsection{Cross-identification of WISE and SDSS} We then cross-matched the {\it WISE} samples with the SDSS samples. Using a matching radius of 3 arcsec, 259,969 {\it WISE}--SDSS objects were selected as shown in Figure \ref{cross-match_Tycho2_SDSS}. Adopting this search radius gives a probability of chance coincidence of 0.05\%; \cite{Donoso} cross-matched data from the {\it WISE} and SDSS DR7 spectroscopic catalogs and estimated the expected false detection fraction at 3 arcsec by using random catalogs generated over the effective area. This means that 861,429 $\times$ 0.0005 $\sim$ 431 sources may be misidentified, which we regard as acceptable.
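The chance-coincidence probabilities quoted above follow from multiplying the surface density of potential false matches by the area of the search circle; a minimal sketch:

```python
import math

def chance_coincidence(density_per_deg2, radius_arcsec):
    """Probability that an unrelated source falls inside the search circle,
    approximated as (surface density) x (search area) for low densities."""
    r_deg = radius_arcsec / 3600.0
    return density_per_deg2 * math.pi * r_deg ** 2

# Tycho-2 stars: at most ~50 deg^-2 in the SDSS spectroscopic region,
# giving ~1e-4 (i.e. ~0.01%) for a 3-arcsec search radius
p_star = chance_coincidence(50.0, 3.0)
```

Multiplying this probability by the number of sources searched gives the expected number of misidentifications, as in the estimates quoted in the text.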
Note that 111 {\it WISE} sources have two SDSS counterparts because the spatial resolution of the SDSS is better than that of {\it WISE}. These sources are mostly close interacting systems of two members, and of these, we chose the closest object as the optical counterpart. From these {\it WISE}-SDSS objects, we selected objects with {\it zWarning} = 0 and a redshift S/N greater than 10. The {\it zWarning} information is contained in the {\tt SpecPhoto} table and has flags set in suspicious cases; {\it zWarning} = 0 indicates that no problems were identified. Among the objects with a precise estimation of the redshift, we extracted objects with 0.006 $\leq z \leq$ 0.3. The lower redshift limit is applied because the errors in the distance measurement are dominated by the peculiar motions of galaxies at $z \leq$ 0.006, and thus the luminosity also has a large error. At the high-redshift end, for sources at $z > 0.3$, the [NII] $\lambda \,$6583 and H$\alpha$ lines that were used for classifying the sample into several galaxy types (see Section \ref{Classification}) are shifted to around 9,000 \AA. This wavelength corresponds almost exactly to the upper limit of the spectroscopic coverage, where the sensitivity is relatively poor. Therefore, we set the upper limit of the redshift to 0.3 to ensure a high S/N for these optical lines. The final sample consists of 224,168 galaxies, whose details are given in Table \ref{sample_list}. The mean value of their redshifts is $\sim$0.1, and the redshift distribution is shown in Figure \ref{z_dist}. Ultimately, 223,982 galaxies at 12 $\mu$m and 25,721 galaxies at 22 $\mu$m were selected through these steps. For the selection process, we employed the ``2MASS Catalog Server Kit'' to easily construct a high-performance database server for the 2MASS Point Source Catalog (which includes 470,992,970 objects) and several all-sky catalogs \citep{Yamauchi}.
We also used the STIL Tool Set (STILTS\footnote{\url{http://www.star.bristol.ac.uk/~mbt/stilts/}}), which is a set of command-line tools based on the Starlink Tables Infrastructure Library \citep{Taylor}. The SDSS data were obtained from the Catalog Archive Server (CAS\footnote{\url{http://skyserver.sdss3.org/dr8/}}), a database containing catalogs of SDSS objects (both photometric and astrometric) that allows queries of their measured attributes. \begin{deluxetable}{llrrrrcrrrrrr} \rotate \tabletypesize{\scriptsize} \tablecaption{List of the 224,168 selected galaxies.\label{sample_list}} \tablewidth{0pt} \tablehead{ \colhead{objname} & \colhead{RA} & \colhead{DEC} & \colhead{f$_{12}$} & \colhead{f$_{22}$} & \colhead{redshift} & \colhead{type} & \colhead{w3sat} & \colhead{w4sat} & \colhead{S/N} & \colhead{S/N} & \colhead{w3cc\_map} & \colhead{w4cc\_map} \\ & \colhead{(J2000.0)} & \colhead{(J2000.0)} & \colhead{(mJy)} & \colhead{(mJy)} & & & & & (12 $\mu$m) & (22 $\mu$m) & & } \startdata SDSS J000000.74-091320.2 & 00:00:00.73 & -09:13:20.0 & 2.19 & 2.84 & 0.134 & SF &0.0 & 0.0 & 14.90 & 1.10 &0.0 & 0.0 \\ KUG 2357+156 & 00:00:01.98 & +15:52:53.9 & 8.67 & 11.44 & 0.020 & SF &0.0 & 0.0 & 27.00 & 6.40 &0.0 & 0.0 \\ LCSB S0001P & 00:00:03.30 & -10:43:15.8 & 3.37 & 5.61 & 0.083 & Composite &0.0 & 0.0 & 33.93 & 5.68 &0.0 & 0.0 \\ 2MASX J00000347+1411539 & 00:00:03.46 & +14:11:53.5 & 2.09 & 3.92 & 0.115 & Composite &0.0 & 0.0 & 13.30 & 1.90 &0.0 & 0.0 \\ SDSS J000004.59-105834.7 & 00:00:04.60 & -10:58:35.0 & 1.74 & 2.79 & 0.150 & SF &0.0 & 0.0 & 12.30 & 2.60 &0.0 & 0.0 \\ 2MASX J00000472+0046546 & 00:00:04.74 & +00:46:54.2 & 1.75 & 1.73 & 0.080 & type 2 AGN &0.0 & 0.0 & 15.29 & 1.99 &0.0 & 0.0 \\ ARK 591 & 00:00:07.83 & -00:02:25.8 & 5.27 & 5.92 & 0.024 & SF &0.0 & 0.0 & 41.76 & 6.39 &0.0 & 0.0 \\ 2MASX J00000811+1432450 & 00:00:08.08 & +14:32:45.9 & 4.45 & 9.06 & 0.105 & SF &0.0 & 0.0 & 27.30 & 9.10 &0.0 & 0.0 \\ 2MASX J00001235-1032114 & 00:00:12.34 & -10:32:10.7 & 3.99
& 6.38 & 0.077 & SF &0.0 & 0.0 & 24.60 & 6.80 &0.0 & 0.0 \\ CGCG 382-016 & 00:00:12.78 & +01:07:13.1 & 15.31 & 19.62 & 0.025 & SF &0.0 & 0.0 & 67.86 & 13.92 &0.0 & 0.0 \\ SDSS J000013.84+003912.1 & 00:00:13.86 & +00:39:12.0 & 1.95 & 2.31 & 0.103 & SF &0.0 & 0.0 & 12.30 & 2.20 &0.0 & 0.0 \\ 2MASX J00001447+1412420 & 00:00:14.44 & +14:12:42.0 & 1.79 & 4.37 & 0.091 & LINER &0.0 & 0.0 & 11.40 & 2.00 &0.0 & 0.0 \\ 2MASX J00001575-0853283 & 00:00:15.78 & -08:53:27.3 & 4.81 & 7.13 & 0.056 & Composite &0.0 & 0.0 & 18.20 & 3.70 &0.0 & 0.0 \\ KUG 2357+144 & 00:00:16.31 & +14:43:59.8 & 4.50 & 5.73 & 0.091 & SF &0.0 & 0.0 & 27.00 & 6.40 &0.0 & 0.0 \\ 2MASX J00001671+1541400 & 00:00:16.75 & +15:41:40.4 & 4.42 & 5.34 & 0.112 & SF &0.0 & 0.0 & 27.60 & 5.50 &0.0 & 0.0 \\ SDSS J000018.63+154327.7 & 00:00:18.62 & +15:43:27.9 & 1.44 & 3.92 & 0.176 & SF &0.0 & 0.0 & 10.50 & 1.90 &0.0 & 0.0 \\ SDSS J000019.03-105258.9 & 00:00:19.03 & -10:52:58.7 & 2.71 & 4.92 & 0.083 & SF &0.0 & 0.0 & 16.90 & 4.30 &0.0 & 0.0 \\ SDSS J000019.89+142219.5 & 00:00:19.87 & +14:22:19.8 & 1.81 & 2.91 & 0.094 & SF &0.0 & 0.0 & 12.10 & 1.00 &0.0 & 0.0 \\ SDSS J000020.06+135001.6 & 00:00:20.04 & +13:50:01.9 & 1.90 & 3.09 & 0.079 & SF &0.0 & 0.0 & 13.00 & 1.10 &0.0 & 0.0 \\ SDSS J000020.93+001254.2 & 00:00:20.92 & +00:12:54.4 & 2.13 & 3.98 & 0.085 & SF &0.0 & 0.0 & 14.40 & 4.20 &0.0 & 0.0 \\ 2MASX J00002629+0035503 & 00:00:26.31 & +00:35:51.1 & 1.62 & 4.90 & 0.104 & type 2 AGN &0.0 & 0.0 & 10.50 & 4.30 &0.0 & 0.0 \\ SDSS J000027.65+145624.6 & 00:00:27.65 & +14:56:25.0 & 4.12 & 11.94 & 0.159 & LINER &0.0 & 0.0 & 25.00 & 11.10 &0.0 & 0.0 \\ 2MASX J00002809+1422530 & 00:00:28.07 & +14:22:52.6 & 1.76 & 3.37 & 0.093 & Composite &0.0 & 0.0 & 11.70 & 1.00 &0.0 & 0.0 \\ SDSS J000028.19+142509.8 & 00:00:28.19 & +14:25:10.0 & 1.67 & 2.62 & 0.143 & SF &0.0 & 0.0 & 11.30 & 0.70 &0.0 & 0.0 \\ SDSS J000030.00-103825.0 & 00:00:29.94 & -10:38:24.7 & 2.50 & 2.89 & 0.151 & SF &0.0 & 0.0 & 16.50 & 2.70 &0.0 & 0.0 \\ 2MASX 
J00003086-0112473 & 00:00:30.89 & -01:12:46.8 & 1.02 & 2.30 & 0.075 & SF &0.0 & 0.0 & 10.54 & 2.74 &0.0 & 0.0 \\ 2MASX J00003718-1102077 & 00:00:37.18 & -11:02:07.8 & 3.94 & 9.04 & 0.151 & type 2 AGN &0.0 & 0.0 & 23.40 & 8.70 &0.0 & 0.0 \\ SDSS J000038.68+143548.1 & 00:00:38.68 & +14:35:48.0 & 1.62 & 2.28 & 0.146 & SF &0.0 & 0.0 & 12.30 & 2.60 &0.0 & 0.0 \\ 2MASX J00003878+1524270 & 00:00:38.72 & +15:24:27.3 & 1.63 & 1.88 & 0.152 & Composite &0.0 & 0.0 & 11.80 & 0.00 &0.0 & 0.0 \\ \enddata \tablecomments{Table \ref{sample_list} is published in its entirety in the electronic edition of the {\it Astrophysical Journal}. A portion of the table is shown here for guidance regarding its form and content. There are 223,982 galaxies at 12 $\mu$m defined by a flux of (12 $\mu$m) $\geq$ 0.9 mJy, {\it w3sat} = 0.0, {\it w3cc\_map} = 0.0, and an S/N of (12 $\mu$m) $\geq$ 10. There are 25,721 galaxies at 22 $\mu$m with a flux of (22 $\mu$m) $\geq$ 9.0 mJy, {\it w4sat} = 0.0, {\it w4cc\_map} = 0.0, and an S/N of (22 $\mu$m) $\geq$ 10.} \end{deluxetable} \begin{figure} \epsscale{1} \plotone{Figure_4.eps} \caption{Redshift distribution of the 224,168 selected galaxies.} \label{z_dist} \end{figure} \subsection{Classification of Spectroscopic Galaxy Type} \label{Classification} \begin{figure} \epsscale{0.8} \plotone{Figure_5.eps} \caption{Outline of the type classification process.} \label{flow_chart_type_classification} \end{figure} We spectroscopically classified the 224,168 galaxies (hereinafter {\it WISE}-SDSS sample) into five types: type 1 AGNs including quasars and Seyfert 1 galaxies, type 2 AGNs, low-ionization nuclear emission-line region galaxies (LINERs), galaxies that are likely to contain both star formation and AGN activity (composite types of galaxies, hereinafter ``Composite''), and star-forming galaxies (SF). The classification was based on the spectroscopic information in the {\tt SpecPhoto} table, as shown in Figure \ref{flow_chart_type_classification}.
Note that we consider Seyfert 2 galaxies (Sy2) as type 2 AGNs unless otherwise noted. The type 1 AGNs were identified according to the CLASS entry (CLASS = QSO) in the {\tt SpecPhoto} table. Objects for which CLASS = GALAXY were classified as type 2 AGNs, LINER, Composite, or SF by using the optical emission-line flux ratios of [NII] $\lambda \,$6583/H$\alpha$ versus [OIII] $\lambda \,$5007/H$\beta$ \citep[BPT diagram suggested by][]{Baldwin}, as shown in Figure \ref{BPT}. However, the BPT diagram cannot classify a galaxy if the H$\alpha$, H$\beta$, [OIII], or [NII] emission lines are not detected. Such galaxies are classified as weak-emission-line galaxies (hereinafter, ``Unknown''). \begin{figure} \epsscale{0.8} \plotone{Figure_6.eps} \caption{BPT diagram of the emission-line flux ratio [NII]/H$\alpha$ versus [OIII]/H$\beta$ for all the narrow-line galaxies for which line flux information is available. The dashed-dotted line is the criterion given by \cite{Kauffmann}, the dashed line is the criterion given by \cite{Kewley}, and the dotted line is the traditional scheme \citep[see for example,][]{Veilleux}.} \label{BPT} \end{figure} \begin{deluxetable}{lrr} \tablecolumns{3} \tablewidth{0pc} \tabletypesize{\scriptsize} \tablecaption{Classifications of the 223,982 galaxies for the 12-$\mu$m LF and 25,721 galaxies for the 22-$\mu$m LF.\label{type_classification}} \tablehead{ \colhead{type} & \colhead{12 $\mu$m (percentage)} & \colhead{22 $\mu$m (percentage)} } \startdata type 1 AGNs & 8,151 ( 3.6 \%) & 2,846 (11.0 \%)\\ type 2 AGNs & 8,204 ( 3.7 \%) & 1,837 ( 7.1 \%)\\ LINER & 14,491 ( 6.5 \%) & 1,477 ( 5.8 \%)\\ Composite & 48,834 (21.8 \%) & 6,583 (25.6 \%)\\ SF & 141,242 (63.0 \%) & 12,799 (49.8 \%)\\ Unknown & 3,060 ( 1.4 \%) & 179 ( 0.7 \%)\\ \cline{1-3} All & 223,982 (100 \%) & 25,721 (100 \%)\\ \enddata \end{deluxetable} The galaxy classifications, summarized in Table \ref{type_classification}, indicate that the 22-$\mu$m band is especially powerful for
finding AGNs. The detection rate of AGNs (type 1 + type 2) in the 22-$\mu$m band ($\sim$18\%) is higher than that in the 12-$\mu$m band ($\sim$7\%), since the 12-$\mu$m bandpass includes a strong contribution from polycyclic aromatic hydrocarbon (PAH) emission, which is unrelated to the presence of an active nucleus. Some authors have reported that LINERs typically have weak MIR emission \citep[e.g.,][]{Ogle}. Thus, it might be surprising that we find such a high fraction of LINERs ($\sim$6\%). However, our LINER classification is based on the [NII] lines rather than the [OI] $\lambda \,$6300 or [SII] $\lambda \,$6716 and $\lambda \,$6731 lines that others have found useful for discriminating pure LINERs \citep[e.g.,][]{Kewley_06}. If we were to adopt the [OI]/H$\alpha$ versus [OIII]/H$\beta$ diagram, it would in fact reduce our fraction of LINERs (to $\sim$3.5\% at 22 $\mu$m). However, as our focus in this study is on type 1 and 2 AGNs and one motivation is to check whether our findings in \cite{Toba} based on {\it AKARI} are confirmed by {\it WISE}, we employed [NII]/H$\alpha$ versus [OIII]/H$\beta$ in the same manner as \cite{Toba}. Figure \ref{flux} presents the flux distributions at 12 and 22 $\mu$m, and the distributions of the 12- and 22-$\mu$m luminosities as a function of redshift are illustrated in Figures \ref{z_L_12} and \ref{z_L_22}, respectively. The flux distribution does not reveal any clear differences between the galaxy types, and the redshift distributions show that type 1 and type 2 AGNs have relatively higher redshifts.
\begin{figure} \epsscale{1} \plottwo{Figure_7A.eps}{Figure_7B.eps} \caption{Flux distribution for each galaxy type at 12 $\mu$m (left) and 22 $\mu$m (right).} \label{flux} \end{figure} \begin{figure} \epsscale{1} \plotone{Figure_8.eps} \caption{The 12-$\mu$m luminosities as a function of redshift for each galaxy type.} \label{z_L_12} \end{figure} \begin{figure} \epsscale{1} \plotone{Figure_9.eps} \caption{The 22-$\mu$m luminosities as a function of redshift for each galaxy type.} \label{z_L_22} \end{figure} \subsection{Derivation of Luminosity Function with 1/V$_{\mathrm{max}}$ method} \label{Vmax_method} Here, we derive the LFs following the 1/$V_{\mathrm{max}}$ method described by \cite{Schmidt}. The advantage of the 1/$V_{\mathrm{max}}$ method is that it allows us to compute the LF directly from the data; no parameter dependence or model assumptions are needed. The volume density $\phi (L)$ and its uncertainty $\sigma_{\phi (L)}$ are derived using the expressions: \begin{equation} \phi(L) = \sum_i^N \frac{1}{V_{\mathrm{max},i}}\;, \end{equation} \begin{equation} \sigma_{\phi(L)} =\sqrt{\sum_i^N \frac{1}{V_{\mathrm{max},i}^2}}\;, \end{equation} where $V_{\mathrm{max},i}$ is the maximum co-moving volume enclosed at the maximum redshift at which the $i$th object could be detected. In the context of the cosmology we adopt, $V_{\mathrm{max}}$ is \begin{equation} V_{\mathrm{max}}(z) = \frac{c}{H_0} \int_{\Omega} \int_{z_{\mathrm{min}}}^{z_{\mathrm{max}}} \frac{(1+z^{\prime})^2 D_A^2}{\sqrt{\Omega_M (1+z^{\prime})^3 + \Omega_{\Lambda}}} \mathrm{d}z^{\prime} \mathrm{d}\Omega, \end{equation} where $D_A$ is the angular diameter distance for a given redshift in our adopted cosmology, $\Omega$ is the solid angle of the SDSS DR8 spectroscopic region (7966 deg$^2$ $\sim$ 2.43 sr), $z_{\mathrm{min}}$ is the lower limit of the redshift bin considered, and $z_{\mathrm{max}}$ is the maximum redshift at which the object could be seen given the flux limit of the sample.
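In practice, the estimator defined by these expressions reduces to summing $1/V_{\mathrm{max},i}$ (and, for the uncertainty, $1/V_{\mathrm{max},i}^2$) in luminosity bins. The following is a minimal sketch (not the code used here), assuming the luminosities and $V_{\mathrm{max}}$ values have already been computed:

```python
import math
from collections import defaultdict

def vmax_lf(luminosities, vmax, bin_width=0.25):
    """1/Vmax estimator: per log-luminosity bin, the volume density is
    sum(1/Vmax_i) and its Poisson error is sqrt(sum(1/Vmax_i**2))."""
    phi, var = defaultdict(float), defaultdict(float)
    for lum, vol in zip(luminosities, vmax):
        b = math.floor(math.log10(lum) / bin_width)  # bin index
        phi[b] += 1.0 / vol
        var[b] += 1.0 / vol ** 2
    return {b: (phi[b], math.sqrt(var[b])) for b in phi}

# Toy example: three objects in one 0.25-dex bin, Vmax = 10, 20, 40 Mpc^3
lf = vmax_lf([1.1e10, 1.2e10, 1.5e10], [10.0, 20.0, 40.0])
```

Objects with small $V_{\mathrm{max}}$ (i.e., those detectable only over a small volume) contribute the most to the density, which is what corrects the flux-limited sample for the Malmquist bias.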
Note that when $z_{\mathrm{max}}$ is smaller than the maximum of the redshift bin considered, $V_{\mathrm{max}} = V(z_{\mathrm{max}}) - V(z_{\mathrm{min}})$. Otherwise, $V_{\mathrm{max}}$ is equal to the volume corresponding to the bin considered, $V_{\mathrm{max}} = V_{\mathrm{bin}}$. Because $z_{\mathrm{max}}$ cannot be determined analytically, we calculated it numerically using the following procedure. The absolute magnitude $M$ (hereafter, we use magnitudes for descriptive purposes) of an object observed to have an apparent magnitude $m$ at a redshift $z$ is \begin{equation} \label{M_m} M = m - K(z) -5\log d_L(z) -25\;, \end{equation} where $d_L$ is the luminosity distance (measured in Mpc) for a given redshift in our adopted cosmology, and $K(z)$ is the K-correction term, which accounts for the redshift dependence of the magnitude of any object in a given wavelength band. When a source is artificially moved to the detection limit $m(z_{\mathrm{max}}) = m_{\mathrm{min}}$, Equation (\ref{M_m}) becomes \begin{equation} \label{M_m_max} M = m_{\mathrm{min}} - K(z_{\mathrm{max}}) -5 \log d_{L_\mathrm{max}} -25\;, \end{equation} where $d_{L_{\mathrm{max}}}$ is the maximum luminosity distance $d_L (z_{\mathrm{max}})$. Therefore, we numerically estimate $z_{\mathrm{max}}$ by stepping $z$ through Equations (\ref{M_m}) and (\ref{M_m_max}) in increments of $\Delta z$ ($= 10^{-5}$ here) until the difference between the $M$ values obtained from the two equations is minimized. Note that when we calculate $z_{\mathrm{max}}$, the difference between the detection limits of the {\it WISE} and SDSS surveys should be considered because our {\it WISE}--SDSS sample is flux- (or magnitude-) limited. For the {\it WISE} samples, the detection limit is 0.9 mJy at 12 $\mu$m and 9.0 mJy at 22 $\mu$m.
For the SDSS samples, the detection limit is a Petrosian $r$-band magnitude of 17.77 for galaxies and a PSF $i$-band magnitude of 19.10 for type 1 AGNs. Therefore, we calculated two values of $z_{\mathrm{max}}$, one for each survey, considering these detection limits, and we adopted the smaller of the two values in each case. To compute the maximum redshift for the {\it WISE} objects, $K(z)$ in Equation (\ref{M_m}) was derived from the assumption that the spectral energy distribution (SED) of the objects in the IR region obeys a simple power law of $f(\nu) \propto \nu^{-\alpha}$, i.e., \begin{equation} K_{\mathrm{WISE}} (z) = 2.5 (\alpha -1 ) \log (1 + z)\;, \end{equation} where the spectral index $\alpha$ is calculated using the 12- and 22-$\micron$ fluxes ($f_{12}$ and $f_{22}$, respectively) as \begin{equation} \label{alpha} \alpha = -\frac{\log \left(\frac{f_{22}}{f_{12}} \right)}{\log \left( \frac{\nu_{22}}{\nu_{12}} \right)}\;. \end{equation} Here, $\nu_{12}$ and $\nu_{22}$ are the frequencies at 12 and 22 $\mu$m, respectively. In the case of SDSS galaxies, $K(z)$ in Equation (\ref{M_m}) was computed using the K-correct (ver. 4.2) software of \cite{Blanton}. We also assumed a power law for SDSS type 1 AGNs, with $\alpha$ calculated as \begin{equation} \alpha = -\frac{\log \left(\frac{f_r}{f_i} \right)}{\log \left( \frac{\nu_r}{\nu_i} \right)}\;, \end{equation} where $f_r$ and $f_i$ are the PSF fluxes in the $r$ and $i$ bands, respectively, as cataloged in the {\tt SpecPhoto} table. The corresponding frequencies are $\nu_r$ and $\nu_i$. Finally, we adopted for each object $i$ the maximum co-moving volume corresponding to the smaller of the two maximum redshifts, \begin{equation} V_{\mathrm{max},i} = \mathrm{min} [V_{\mathrm{max},i}\,\mathrm{(WISE)}, V_{\mathrm{max},i}\,\mathrm{(SDSS)}]\;, \end{equation} where $V_{\mathrm{max}}$(WISE) and $V_{\mathrm{max}}$(SDSS) are obtained using $z_{\mathrm{max}}$(WISE) and $z_{\mathrm{max}}$(SDSS), respectively.
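The $z_{\mathrm{max}}$ calculation can be illustrated end to end: the spectral index of Equation (\ref{alpha}), the corresponding power-law K-correction, and a numerical scan comparing Equations (\ref{M_m}) and (\ref{M_m_max}). The sketch below is not the code used in this work; for brevity it uses a coarser step than our $\Delta z = 10^{-5}$, and the luminosity distance is integrated with a simple trapezoidal rule in the adopted cosmology:

```python
import math

C_KMS, H0, OMEGA_M, OMEGA_L = 2.998e5, 75.0, 0.3, 0.7  # adopted cosmology

def d_lum(z, n=200):
    """Luminosity distance in Mpc for a flat universe (trapezoidal rule)."""
    dz = z / n
    inv_e = [1.0 / math.sqrt(OMEGA_M * (1.0 + i * dz) ** 3 + OMEGA_L)
             for i in range(n + 1)]
    d_c = (C_KMS / H0) * dz * (sum(inv_e) - 0.5 * (inv_e[0] + inv_e[-1]))
    return (1.0 + z) * d_c

def spectral_index(f12, f22):
    """Spectral index for f(nu) ~ nu**(-alpha); note nu22/nu12 = 12/22."""
    return -math.log10(f22 / f12) / math.log10(12.0 / 22.0)

def k_corr(z, alpha):
    """Power-law K-correction, 2.5 (alpha - 1) log10(1 + z)."""
    return 2.5 * (alpha - 1.0) * math.log10(1.0 + z)

def z_max(m, z, m_min, alpha, dz=1e-3, z_hi=0.3):
    """Scan for the redshift at which the source, artificially moved to the
    flux limit m_min, keeps the absolute magnitude M it has at redshift z."""
    abs_mag = m - k_corr(z, alpha) - 5.0 * math.log10(d_lum(z)) - 25.0
    best_z, best_diff = z, float("inf")
    for i in range(int((z_hi - z) / dz) + 1):
        zz = z + i * dz
        trial = m_min - k_corr(zz, alpha) - 5.0 * math.log10(d_lum(zz)) - 25.0
        if abs(abs_mag - trial) < best_diff:
            best_z, best_diff = zz, abs(abs_mag - trial)
    return best_z

# Hypothetical source: 1.4 mag brighter than the 12-um limit at z = 0.05;
# alpha = 1 makes the K-correction vanish
zm = z_max(m=10.0, z=0.05, m_min=11.4, alpha=1.0)
```

The same scan is simply repeated with the SDSS magnitude limit and K-correction, after which the smaller of the two maximum redshifts sets $V_{\mathrm{max},i}$.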
\section{RESULTS} We present the MIR LFs from the {\it WISE}--SDSS sample, classified as discussed in Section \ref{Classification}, and show that the shapes of each LF are in agreement with previous studies. We then show each LF in different redshift bins, which indicate that AGNs (particularly type 1 AGNs) show a certain evolution compared to normal galaxies. \subsection{The 12- and 22-$\micron$ Luminosity Functions}\label{LF} The rest-frame 12- and 22-$\mu$m LFs (i.e., the volume density of the galaxies per unit absolute magnitude range) of our {\it WISE}--SDSS galaxies at 0.006 $\leq z \leq$ 0.07, computed with the 1/$V_{\mathrm{max}}$ method, are shown in Figure \ref{LF_ALL}. We confirm here the consistency between the LFs of {\it WISE} and {\it AKARI} \citep[e.g.,][]{Toba}. \cite{Toba} selected 243 galaxies at 9 $\mu$m and 255 galaxies at 18 $\mu$m from the {\it AKARI} MIR all-sky survey catalog, and by combining the {\it AKARI} data with the SDSS DR7 spectroscopic data, they constructed 9- and 18-$\mu$m LFs for the first time. To compare those LFs with ours in a similar redshift range, Figure \ref{LF_ALL} plots only local objects (0.006 $\leq z \leq$ 0.07). Within this redshift range, the average value of the redshift ($\sim$0.04) is equal to that of \cite{Toba}. \begin{figure} \epsscale{1} \plottwo{Figure_10A.eps}{Figure_10B.eps} \caption{The 12- (left) and 22- (right) $\mu$m LFs for all galaxies for 0.006 $\leq z \leq$ 0.07. The 12-$\mu$m LFs from \cite{Rush} and \cite{Fang} and the 9- and 18-$\mu$m LFs from \cite{Toba} are also plotted for comparison. For the 9- and 18-$\mu$m LFs, we converted $\nu L_\nu (9, 18\, \micron)$ to $\nu L_\nu (12, 22\, \micron)$. 
The vertical error bars are calculated from the Poisson statistical uncertainty, and the horizontal error bars represent the uncertainty of the conversion to $\nu L_\nu (12, 22\, \micron)$.} \label{LF_ALL} \end{figure} Figure \ref{LF_ALL} also shows the {\it IRAS} 12-$\mu$m LFs \citep{Rush,Fang} for comparison. These were derived from samples of 893 \citep{Rush} and 668 \citep{Fang} galaxies selected from the {\it IRAS} Faint Source Survey. \cite{Fang}, in particular, corrected for the peculiar motion of the local supercluster. Note that for the 9- and 18-$\mu$m LFs we first converted the data by cross-identifying our {\it WISE}--SDSS sample with the {\it AKARI} MIR all-sky survey catalog, selecting the 200 {\it WISE}--SDSS-{\it AKARI} sources within the 3-arcsec search radius, and calculating conversion factors by plotting $\nu L_{\nu}$(9 $\micron$) versus $\nu L_{\nu}$(12 $\micron$) and $\nu L_{\nu}$(18 $\micron$) versus $\nu L_{\nu}$(22 $\micron$), as shown in Figure \ref{Convert_nuLnu}. We obtained the following conversion formulae: \begin{eqnarray} \log[\nu L_{\nu} (12 \, \micron)] & = & (0.93 \pm 0.03) \times \log[\nu L_{\nu}(9 \,\micron)] + (0.43 \pm 0.26)\;, \\ \log[\nu L_{\nu} (22 \, \micron)] & = & (0.96 \pm 0.02) \times \log[\nu L_{\nu} (18 \,\micron)] + (0.37 \pm 0.18)\;. \end{eqnarray} \begin{figure} \epsscale{1} \plottwo{Figure_11A.eps}{Figure_11B.eps} \caption{{\it WISE} 12-$\mu$m versus {\it AKARI} 9-$\mu$m luminosities (left) and {\it WISE} 22-$\mu$m versus {\it AKARI} 18-$\mu$m luminosities (right). The red dotted line shows the best-fit linear function.} \label{Convert_nuLnu} \end{figure} The conversion uncertainty is represented in Figure \ref{LF_ALL} as the horizontal error bars. The shapes of the LFs obtained from previous studies \citep{Rush,Fang,Toba} are in good agreement with our derived LFs. Figure \ref{LF_type} presents the resultant LFs at 12 and 22 $\mu$m for each galaxy type for 0.006 $\leq z \leq$ 0.3. 
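Applying these best-fit relations then reduces to a linear transformation in log space (central values only; the quoted uncertainties are ignored in this sketch):

```python
def log_nuLnu_12_from_9(log_l9):
    """log nuLnu(12 um) from log nuLnu(9 um), best-fit central values."""
    return 0.93 * log_l9 + 0.43

def log_nuLnu_22_from_18(log_l18):
    """log nuLnu(22 um) from log nuLnu(18 um), best-fit central values."""
    return 0.96 * log_l18 + 0.37

# A source with log nuLnu(9 um) = 10.0 maps to log nuLnu(12 um) = 9.73
l12 = log_nuLnu_12_from_9(10.0)
```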
\begin{figure} \epsscale{1} \plottwo{Figure_12A.eps}{Figure_12B.eps} \caption{The 12- (left) and 22- (right) $\mu$m LFs for each galaxy type for 0.006 $\leq z \leq$ 0.3 plotted in terms of the space density as a function of luminosity. The error bars are calculated from the Poisson statistical uncertainty. The data used in these figures can be found in Tables \ref{LF_table_ALL_z}, \ref{LF_table_12_z}, and \ref{LF_table_22_z} (see Appendix \ref{LF_data}).} \label{LF_type} \end{figure} SFs make up the majority of the objects at low luminosities, while AGNs dominate the volume density at luminosities above $\sim$$10^{11} L_{\odot}$. This tendency was also reported by \cite{Rush}, who used 12-$\mu$m flux-limited samples from the {\it IRAS} Faint Source Catalogue (FSC). Figure \ref{LF_type} also shows that the relative number of AGNs changes with increasing MIR luminosity; at low luminosities, type 2 AGNs dominate the AGN population, whereas type 1 AGNs dominate at high luminosities. \cite{Toba} also recently reported a similar trend using {\it AKARI}. The fraction of type 2 AGNs thus changes with MIR luminosity, which can be interpreted as a luminosity dependence of the CF. \begin{figure} \epsscale{1} \plotone{Figure_13.eps} \caption{The 12-$\mu$m LFs for each galaxy type in each redshift bin (0.006 $\leq z <$ 0.05, 0.05 $\leq z <$ 0.1, 0.1 $\leq z <$ 0.15, 0.15 $\leq z <$ 0.2, 0.2 $\leq z <$ 0.25, and 0.25 $\leq z \leq$ 0.3). The data used in this figure can be found in Tables \ref{LF_table_ALL_z} and \ref{LF_table_12_z} (see Appendix \ref{LF_data}).} \label{LF_type_12_z} \end{figure} \begin{figure} \epsscale{1} \plotone{Figure_14.eps} \caption{The 22-$\mu$m LFs for each galaxy type in each redshift bin (0.006 $\leq z <$ 0.05, 0.05 $\leq z <$ 0.1, 0.1 $\leq z <$ 0.15, 0.15 $\leq z <$ 0.2, 0.2 $\leq z <$ 0.25, and 0.25 $\leq z \leq$ 0.3).
The data used in this figure can be found in Tables \ref{LF_table_ALL_z} and \ref{LF_table_22_z} (see Appendix \ref{LF_data}).} \label{LF_type_22_z} \end{figure} Figures \ref{LF_type_12_z} and \ref{LF_type_22_z} show the resultant LFs at 12 and 22 $\mu$m for each galaxy type in each of the six redshift bins (0.006 $\leq z <$ 0.05, 0.05 $\leq z <$ 0.1, 0.1 $\leq z <$ 0.15, 0.15 $\leq z <$ 0.2, 0.2 $\leq z <$ 0.25, and 0.25 $\leq z \leq$ 0.3). The overall trends seen in Figure \ref{LF_type} are reproduced in these figures except for $z > 0.2$. For $z > 0.2$, type 1 and type 2 AGNs dominate the volume density over a wide range of luminosities, while for $z \leq 0.2$, the relative ordering of their space densities changes markedly with increasing MIR luminosity, as seen in Figure \ref{LF_type}. At the same time, the relative contribution of the two AGN types also changes with increasing redshift; type 2 AGNs make up the majority of the AGNs at low redshift, while type 1 AGNs are the majority at high redshift. This change in the fraction of type 2 AGNs with redshift can be interpreted as a redshift dependence of the CF. \subsection{Evolution of Luminosity Functions}\label{LF_evo} We examined the luminosity (density) evolution of the AGN population based on the 22-$\mu$m sample. Here, we fit the LFs for all galaxies and AGNs using the double-power law \citep{Marshall}: \begin{equation} \phi(L)\mathrm{d}L = \phi^* \left\{ \left( \frac{L}{L^*} \right)^{-\alpha} + \left( \frac{L}{L^*} \right)^{-\beta} \right\}^{-1} \frac{\mathrm{d}L}{L^*}, \end{equation} where the free parameters are the characteristic luminosity $L^*$, the normalization factor $\phi^*$, the faint-end slope $\alpha$, and the bright-end slope $\beta$. \begin{figure} \epsscale{1} \plotone{Figure_15.eps} \caption{The 22-$\mu$m LFs for all galaxies (left) and AGNs (right) as a function of redshift. The dashed line represents the best-fit function for 0.006 $\leq z <$ 0.05.
The solid line represents the best-fit function for a fixed bright-end slope $\beta$.} \label{LF_evo_1} \end{figure} Figure \ref{LF_evo_1} shows the best fit for each redshift bin. Four redshift bins (0.006 $\leq z <$ 0.05, 0.05 $\leq z <$ 0.1, 0.1 $\leq z <$ 0.2, and 0.2 $\leq z \leq$ 0.3) are used here to retain a sufficient number of data points in each bin (with fewer than four data points, the four-parameter double-power law cannot be fitted). The fit of the data in the nearest redshift bin ($0.006 \leq z < 0.05$) is shown in all panels for comparison, and to examine the evolution, fits with $\beta$ fixed to its value in the nearest redshift bin (0.006 $\leq z <$ 0.05) are also shown. Comparing the LF of all galaxies with that of AGNs as a function of redshift, we see that the LF of all galaxies does not evolve considerably with redshift, whereas the LF of AGNs shows significant evolution. A comparison of the evolution of LFs for different AGN types (Figure \ref{LF_evo_2}) reveals that type 1 AGNs seem to exhibit more significant evolution than type 2 AGNs. However, this difference can arise from an incompleteness of type 2 AGNs, particularly at high redshifts ($z > 0.2$), due to the SDSS selection criterion (see also Section \ref{rejected}). \begin{figure} \epsscale{1} \plotone{Figure_16.eps} \caption{The 22-$\mu$m LFs for type 1 AGNs (left) and type 2 AGNs (right) as a function of redshift. The dashed line represents the best-fit function for 0.006 $\leq z <$ 0.05. The solid line represents the best-fit function for a fixed $\beta$.} \label{LF_evo_2} \end{figure} \clearpage \section{DISCUSSION} In this section, we first consider the origin of the MIR emission by using an empirical method based on a {\it WISE} color--color diagram to extract sources that are dominated in the MIR by the active nucleus rather than by their host galaxy.
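For reference, the double-power-law LF fitted in Section \ref{LF_evo} reduces to a few lines of code. The following is a sketch under our own naming conventions (in practice the fit itself would be done with a standard least-squares routine):

```python
# Hedged sketch of the double-power-law LF (Marshall et al. form):
# phi(L) = (phi*/L*) / [ (L/L*)^(-alpha) + (L/L*)^(-beta) ].

def double_power_law(L, phi_star, L_star, alpha, beta):
    x = L / L_star
    return (phi_star / L_star) / (x ** (-alpha) + x ** (-beta))

def double_power_law_fixed_beta(beta_fixed):
    """Return a 3-parameter version with the bright-end slope frozen,
    as used when examining the evolution of the fits."""
    def f(L, phi_star, L_star, alpha):
        return double_power_law(L, phi_star, L_star, alpha, beta_fixed)
    return f

# At L = L*, both denominator terms equal 1, so
# phi(L*) = phi_star / (2 * L_star) regardless of alpha and beta.
```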
The luminosity and redshift dependences of the CF are then discussed in separate redshift bins to disentangle the luminosity and redshift correlations. We then consider the uncertainties in the luminosity dependence of the CF, such as the effects of (i) the Unknown galaxies, (ii) rejected objects in the sample selection, and (iii) optically elusive buried AGNs. Following that, we interpret the luminosity dependence in terms of two dust torus models (the receding torus model and the modified receding torus model). Finally, we compare our measurements of the CF with optical and hard X-ray results. \subsection{Origin of Mid-Infrared Emission in the WISE--SDSS Sample} \label{Xray} Before estimating the CF, we consider the origin of the MIR emission in our AGN sample. In this study, we have assumed that the MIR luminosity of the AGNs is dominated by emission from the active nucleus. However, the origin of the MIR emission may not always be an active nucleus: the emission is sometimes likely to have a contribution from the underlying host galaxy, especially in the low-luminosity regime. We thus attempted to select the AGN-dominated MIR sources from the {\it WISE}--SDSS sample by examining their MIR colors. Recently, \cite{Mateos} suggested a highly complete and reliable MIR color selection method for AGN candidates using the 3.4-, 4.6-, and 12-$\mu$m bands of {\it WISE}. They defined an ``AGN wedge'' based on the {\it WISE} and wide-angle Bright Ultrahard {\it XMM-Newton} survey (BUXS): \begin{equation} [3.4] - [4.6] = 0.315 \times ( [4.6] - [12] ), \end{equation} and \begin{equation} [3.4] - [4.6] = -3.172 \times ( [4.6] - [12] ) + 7.624, \end{equation} where the top and bottom boundaries of the wedge are obtained by adding y-axis ($[3.4]-[4.6]$) intercepts of +0.796 and $-$0.222, respectively.
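These boundaries translate into a simple membership test. The sketch below encodes one plausible reading of the geometry (a source lies between the two parallel boundaries and redward of the steep closing line); the inequality directions and the function name are our assumptions:

```python
def in_agn_wedge(w1, w2, w3):
    """Rough membership test for the Mateos et al. AGN wedge.

    w1, w2, w3: WISE [3.4], [4.6], [12] magnitudes.
    Assumed geometry: sources lie between y = 0.315*x + 0.796 (top)
    and y = 0.315*x - 0.222 (bottom), and redward of
    y = -3.172*x + 7.624, with x = [4.6]-[12] and y = [3.4]-[4.6].
    """
    x = w2 - w3
    y = w1 - w2
    return (y <= 0.315 * x + 0.796
            and y >= 0.315 * x - 0.222
            and y >= -3.172 * x + 7.624)
```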
They reported that for $L_{\mathrm{2-10 keV}} > 10^{44}$ erg s$^{-1} (\sim 10^{11} L_{\odot})$, where the AGN is expected to dominate the MIR emission, $97.1^{+2.2}_{-4.8}$ and $76.5^{+13.3}_{-18.4}$ percent of the BUXS type 1 and type 2 AGNs, respectively, meet the selection criteria, i.e., a large fraction of the BUXS AGNs lie in the wedge area. They also showed that compared to other methods in the literature \citep{Jarrett,Stern}, this technique offers the highest reliability and efficiency for detecting X-ray selected luminous AGN populations with {\it WISE}. Therefore, we used the AGN wedge to extract AGN-dominated {\it WISE} sources. Hereinafter, we use the 22-$\mu$m luminosity as the MIR luminosity because the 12-$\mu$m flux is affected by PAH emission, which is unrelated to the presence of an active nucleus, as mentioned in Section \ref{Classification}, and may introduce large uncertainties. \begin{figure} \epsscale{1} \plotone{Figure_17.eps} \caption{MIR color--color diagram for the {\it WISE}--SDSS sample at 22 $\mu$m: type 1 AGNs (blue), type 2 AGNs (red), and SFs (green) as a function of the 22-$\mu$m luminosity. The solid lines illustrate the AGN selection wedge as defined by \cite{Mateos}.} \label{CF_L_multi-z_AGN_22} \end{figure} Figure \ref{CF_L_multi-z_AGN_22} shows the MIR colors of the {\it WISE}--SDSS AGNs as a function of the MIR luminosity. SFs are also plotted in the same figure for comparison. The fraction of sources (AGNs) that are expected to be dominated by an active nucleus (i.e., those sources in the AGN wedge) increases with increasing MIR luminosity. At low luminosities, the MIR emission from an AGN is often dominated by the contribution from the underlying host galaxy. At high luminosities, however, the MIR emission originates mainly from the active nucleus. This is consistent with the result of \cite{Mateos}.
Of the AGNs, type 2 AGNs are more affected by the star-forming activity in their host galaxies, especially in the low-luminosity regime, which was also reported by \cite{Mateos}. This result indicates that there is a difference in the origin of the MIR emission of type 1 and 2 AGNs, particularly for low-luminosity AGNs, which could be interpreted as resulting from the fact that some low-luminosity (type 2) AGNs are obscured not only by a dust torus but also by star formation in their host galaxies. Figure \ref{OIII_Hb} shows the distribution of the ratio of the [OIII] (5007 \AA) to H$\beta$ luminosity (believed to be a good tracer of the strength of AGN activity) for the {\it WISE}--SDSS type 2 AGNs in and outside the AGN wedge. As shown in Figure \ref{OIII_Hb}, the peak of the distribution for type 2 AGNs in the AGN wedge lies at a larger ratio than that for type 2 AGNs outside the AGN wedge. To ensure the reliability of the ratio value, we examined the ratio only for objects with a S/N greater than 10 in the [OIII] and H$\beta$ luminosities. In addition, \cite{Mateos+13}, who adopted this technique for [OIII]-selected type 2 quasars (QSO2s) from the SDSS, reported that the fraction of QSO2s in the AGN wedge increases with increasing [OIII] luminosity. Therefore, we conclude that for less powerful AGNs, the host galaxy can contribute substantially to the MIR emission. Throughout the following discussion, we consider the AGN-dominated objects (i.e., those in the AGN wedge) and estimate the CF based on these objects. We note that the [OIII] luminosity includes a contribution from HII regions, particularly in metal-poor galaxies, which means that $L_{\mathrm{[OIII]}}/L_{\mathrm{H}\beta}$ may not always be a good tracer of AGN luminosity. \cite{Juneau} proposed the Mass-Excitation (MEx) diagnostic to identify AGNs on the basis of their [OIII]/H$\beta$ ratios and stellar masses. The MEx diagnostic is a possible way to investigate the differences in the properties of samples in and outside the AGN wedge.
However, such calculations and an extended discussion of the properties based on the MEx diagram are beyond the scope of this paper. Ideally, the luminosity and redshift dependences of the CF should be discussed using a sample that is complete for both type 1 and type 2 AGNs under the same criteria. Choosing objects in the {\it WISE}--SDSS sample (i.e., extracting the objects in the AGN wedge) is equivalent to excluding AGNs that are affected by their host galaxies. Implementing the AGN wedge selection technique assumes that the intrinsic ratio of type 1 and type 2 AGNs is the same in and outside the AGN wedge. This assumption is reliable because (i) the influence of the host galaxies for each type of AGN is comparable in the unified model, and (ii) the MIR luminosity of the nucleus for each AGN type is also comparable, as reported by \cite{Gandhi}, who used high-resolution ($\sim$0.3--0.4 arcsec) N-band filters for 12-$\mu$m imaging data obtained with the VISIR instrument on the 8-m Very Large Telescope. \begin{figure} \epsscale{1} \plotone{Figure_18.eps} \caption{Distribution of the ratio of the [OIII] to H$\beta$ luminosity for type 2 AGNs in (red) and outside (blue) the AGN wedge as defined by \cite{Mateos}. The dashed lines represent the mean value of $\log [L_{\mathrm{[OIII]}}\,(5007\; {\rm\AA}) / L_{H\beta}]$ for each subsample.} \label{OIII_Hb} \end{figure} \subsection{Luminosity and Redshift Dependence of the Covering Factor} \label{CF_vs_L} In an effort to constrain the structure of the hypothesized dust torus invoked by unification, we examine here the MIR luminosity dependence of the CF. We assume that the MIR luminosity of the AGNs is dominated by emission from the active nucleus (see Section \ref{Xray}), and we also assume that the MIR emission is independent of the optical classification, i.e., type 1 and type 2 AGNs should have similar continuum MIR fluxes at any given intrinsic AGN luminosity \citep[see e.g.,][]{Horst,Gandhi}.
By integrating the LFs of the type 1 and type 2 AGNs separately, we obtain the number density $\Phi$ for each AGN: \begin{equation} \Phi = \int_L \phi(L) \mathrm{d}L \sim \sum_i \phi_i(L) \Delta L. \end{equation} Using these number densities, the CF and its uncertainty $\sigma_{CF}$ can be estimated as \begin{equation} CF = \frac{\Phi_2}{\Phi_1 + \Phi_2}, \end{equation} and \begin{equation} \sigma_{CF} = CF \times \sqrt{\left( \frac{\sigma_{\Phi_{1+2}}}{\Phi_{1+2}} \right)^2 + \left( \frac{\sigma_{\Phi_2}}{\Phi_2} \right)^2}, \end{equation} where $\Phi_1$ and $\Phi_2$ are the type 1 and type 2 AGN number densities, respectively, and $\sigma_{\Phi_1}$ and $\sigma_{\Phi_2}$ are the associated errors. Note that $\Phi_{1+2} \equiv \Phi_1 + \Phi_2$ and $\sigma_{\Phi_{1+2}} \equiv \sqrt{\sigma_{\Phi_1}^2 + \sigma_{\Phi_2}^2}$. A point of caution here is that our flux-limited sample produces a strong artificial correlation between the redshift and luminosity (see Figures \ref{z_L_12} and \ref{z_L_22}). Thus, it is difficult to decide whether it is the redshift or the luminosity that is the more fundamental physical variable that correlates with the CF if a sample for all luminosity and all redshift ranges is used for the estimation. To test the intrinsic dependence of the CF on the luminosity and redshift, we have to remove the influence of the $z$--$L$ correlation. A simple way to do this is to analyze the CF in separate redshift bins. \cite{Hasinger}, who also used this diagnostic method, reported that the same trend toward a decreasing absorption fraction (which corresponds to the CF) as a function of X-ray luminosity, as observed in the total sample, was seen in each of the redshift bins. We note here that the redshift interval is optimized to keep the data points more or less equal for each redshift bin but only for $z \leq 0.2$ because the number of objects is limited and the SDSS survey may be incomplete, especially for type 2 AGNs for $z > 0.2$. 
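The equations above reduce to a few lines of arithmetic. This is a minimal sketch (the rectangular sum over LF bins mirrors the approximation in the text; function names are ours):

```python
import math

def number_density(phi_values, delta_L):
    """Phi ~ sum_i phi_i * Delta L (rectangular sum over the LF bins)."""
    return sum(phi_values) * delta_L

def covering_factor(phi1, phi2, sigma1, sigma2):
    """CF = Phi_2 / (Phi_1 + Phi_2), with the error propagation of the text:
    sigma_CF = CF * sqrt((sigma_{1+2}/Phi_{1+2})^2 + (sigma_2/Phi_2)^2),
    where sigma_{1+2} = sqrt(sigma_1^2 + sigma_2^2)."""
    phi12 = phi1 + phi2
    sigma12 = math.sqrt(sigma1 ** 2 + sigma2 ** 2)
    cf = phi2 / phi12
    sigma_cf = cf * math.sqrt((sigma12 / phi12) ** 2 + (sigma2 / phi2) ** 2)
    return cf, sigma_cf

# Equal type 1 and type 2 number densities give CF = 0.5.
```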
It should also be noted that we assume the CF does not change within each interval, and it is under this assumption that we test the luminosity dependence of the CF. In addition to considering Sy2 galaxies as type 2 (obscured) AGNs, we note that some LINERs and Composites can also show type 2 AGN-like properties. To take this into account, we estimated the CF for four alternative definitions of the type 2 AGN population: \begin{enumerate} \item Sy2s \item Sy2s + LINERs \item Sy2s + Composites \item Sy2s + LINERs + Composites \end{enumerate} We reiterate here that all the sources used in calculating the CF are expected to be AGN dominated (i.e., in the AGN wedge). Figure \ref{CF_multi_z_22_lin_AGN} shows these CFs as a function of the MIR luminosity in different redshift bins; the CF values are also listed in Table \ref{CF_table_22_revise}. In the $0.006 \leq z < 0.1$ redshift range (see the left and middle panels in Figure \ref{CF_multi_z_22_lin_AGN}), the CF does not change significantly with MIR luminosity within the errors, although the slope of the fitted linear function is negative. In the $0.1 \leq z \leq 0.2$ redshift bin, however, we can see that the CF does decrease with increasing MIR luminosity, regardless of the choice of type 2 AGN classification criteria. \begin{figure} \epsscale{1} \plotone{Figure_19.eps} \caption{Variation in the CF with the 22-$\mu$m luminosity in different redshift bins for AGN-dominated MIR sources.
The solid line shows the best-fit linear function determined in each redshift bin.} \label{CF_multi_z_22_lin_AGN} \end{figure} \begin{deluxetable}{cccccccccccc} \tablecolumns{12} \tablewidth{0pc} \tabletypesize{\scriptsize} \tablecaption{CFs as a function of the 22-$\mu$m luminosity for each type 2 AGN definition.\label{CF_table_22_revise}} \tablehead{ \colhead{} & \multicolumn{2}{c}{Sy2s} & \colhead{} & \multicolumn{2}{c}{Sy2s + LINERs} & \colhead{} & \multicolumn{2}{c}{Sy2s + Composites} & \colhead{} & \multicolumn{2}{c}{Sy2s + LINERs + Composites} \\ \cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12} \\ log L & \multicolumn{1}{c}{CF} & \multicolumn{1}{c}{$\sigma_{CF}$} & \colhead{} & \multicolumn{1}{c}{CF} & \multicolumn{1}{c}{$\sigma_{CF}$} & \colhead{} & \multicolumn{1}{c}{CF} & \multicolumn{1}{c}{$\sigma_{CF}$} & \colhead{} & \multicolumn{1}{c}{CF} & \multicolumn{1}{c}{$\sigma_{CF}$} } \startdata \cutinhead{$0.006 \leq z < 0.075$} 9.00 & 0.37 & 0.25 & & 0.55 & 0.32 & & 0.48 & 0.30 & & 0.61 & 0.33 \\ 9.40 & 0.45 & 0.14 & & 0.52 & 0.16 & & 0.54 & 0.15 & & 0.60 & 0.15 \\ 9.80 & 0.53 & 0.09 & & 0.57 & 0.09 & & 0.59 & 0.10 & & 0.62 & 0.10 \\ 10.2 & 0.45 & 0.09 & & 0.50 & 0.10 & & 0.56 & 0.12 & & 0.59 & 0.11 \\ 10.6 & 0.40 & 0.14 & & 0.46 & 0.14 & & 0.46 & 0.14 & & 0.51 & 0.15 \\ 11.0 & 0.44 & 0.29 & & 0.51 & 0.30 & & 0.56 & 0.31 & & 0.60 & 0.31 \\ \cutinhead{$0.075 \leq z < 0.1$} 9.80 & 0.50 & 0.36 & & 0.53 & 0.34 & & 0.57 & 0.32 & & 0.60 & 0.31 \\ 10.2 & 0.43 & 0.07 & & 0.45 & 0.07 & & 0.45 & 0.07 & & 0.48 & 0.07 \\ 10.6 & 0.42 & 0.09 & & 0.47 & 0.10 & & 0.46 & 0.09 & & 0.51 & 0.10 \\ 11.0 & 0.37 & 0.29 & & 0.59 & 0.32 & & 0.37 & 0.29 & & 0.59 & 0.32 \\ 11.4 & 0.24 & 0.27 & & 0.24 & 0.27 & & 0.24 & 0.27 & & 0.24 & 0.27 \\ \cutinhead{$0.1 \leq z \leq 0.2$} 9.80 & 0.09 & 0.10 & & 0.09 & 0.10 & & 0.16 & 0.12 & & 0.16 & 0.12 \\ 10.2 & 0.39 & 0.06 & & 0.42 & 0.06 & & 0.47 & 0.09 & & 0.50 & 0.09 \\ 10.6 & 0.36 & 0.05 & & 0.43 & 0.06 & & 0.43 & 0.05 & & 0.48 & 0.06 \\ 
11.0 & 0.30 & 0.05 & & 0.36 & 0.05 & & 0.34 & 0.05 & & 0.39 & 0.05 \\ 11.4 & 0.11 & 0.04 & & 0.17 & 0.05 & & 0.33 & 0.15 & & 0.36 & 0.15 \\ 11.8 & 0.06 & 0.06 & & 0.20 & 0.13 & & 0.15 & 0.09 & & 0.24 & 0.14 \\ 12.2 & \multicolumn{1}{c}{\nodata} & \multicolumn{1}{c}{\nodata} & & \multicolumn{1}{c}{\nodata} & \multicolumn{1}{c}{\nodata} & & 0.57 & 0.70 & & 0.57 & 0.70 \\ \cutinhead{$0.006 \leq z \leq 0.15$} 9.00 & 0.36 & 0.25 & & 0.55 & 0.33 & & 0.48 & 0.31 & & 0.62 & 0.34 \\ 9.40 & 0.44 & 0.14 & & 0.52 & 0.16 & & 0.55 & 0.15 & & 0.60 & 0.15 \\ 9.80 & 0.52 & 0.08 & & 0.56 & 0.08 & & 0.59 & 0.09 & & 0.62 & 0.09 \\ 10.2 & 0.47 & 0.05 & & 0.51 & 0.05 & & 0.55 & 0.07 & & 0.58 & 0.07 \\ 10.6 & 0.37 & 0.04 & & 0.42 & 0.04 & & 0.43 & 0.04 & & 0.48 & 0.05 \\ 11.0 & 0.28 & 0.05 & & 0.37 & 0.07 & & 0.34 & 0.06 & & 0.42 & 0.07 \\ 11.4 & 0.14 & 0.07 & & 0.16 & 0.07 & & 0.29 & 0.11 & & 0.31 & 0.11 \\ 11.8 & \multicolumn{1}{c}{\nodata} & \multicolumn{1}{c}{\nodata} & & 0.54 & 0.40 & & \multicolumn{1}{c}{\nodata} & \multicolumn{1}{c}{\nodata} & & 0.54 & 0.40 \\ \cutinhead{$0.006 \leq z \leq 0.2$} 9.00 & 0.36 & 0.26 & & 0.55 & 0.33 & & 0.48 & 0.31 & & 0.62 & 0.34 \\ 9.40 & 0.44 & 0.14 & & 0.52 & 0.16 & & 0.55 & 0.15 & & 0.60 & 0.15 \\ 9.80 & 0.52 & 0.08 & & 0.56 & 0.08 & & 0.59 & 0.09 & & 0.62 & 0.09 \\ 10.2 & 0.47 & 0.05 & & 0.51 & 0.05 & & 0.55 & 0.07 & & 0.58 & 0.07 \\ 10.6 & 0.38 & 0.04 & & 0.44 & 0.04 & & 0.45 & 0.04 & & 0.49 & 0.04 \\ 11.0 & 0.29 & 0.04 & & 0.36 & 0.05 & & 0.33 & 0.04 & & 0.40 & 0.05 \\ 11.4 & 0.11 & 0.04 & & 0.17 & 0.05 & & 0.26 & 0.09 & & 0.30 & 0.09 \\ 11.8 & 0.06 & 0.06 & & 0.19 & 0.12 & & 0.15 & 0.09 & & 0.23 & 0.13 \\ 12.2 & \multicolumn{1}{c}{\nodata} & \multicolumn{1}{c}{\nodata} & & \multicolumn{1}{c}{\nodata} & \multicolumn{1}{c}{\nodata} & & 0.56 & 0.69 & & 0.56 & 0.69 \\ \enddata \end{deluxetable} This result has been reported several times, for example, by \cite{Maiolino}, who found that the MIR spectra of 25 AGNs taken by the infrared 
spectrograph (IRS) on board the {\it Spitzer Space Telescope} showed a negative correlation between $\nu L_{6.7}/\nu L_{5100}$ (which corresponds to the CF) and the [OIII]$\lambda$5007 line luminosity ($L_{6.7}$ and $L_{5100}$ are the continuum luminosities at rest-frame wavelengths of 6.7 $\mu$m and 5100 \AA, respectively). \cite{Burlon}, who constructed AGN samples in the local universe ($z <$ 0.1) using data from the {\it Swift}-BAT telescope and calculated the X-ray (15--55 keV) LFs of absorbed and unabsorbed AGNs that were classified according to their absorbing column density ($N_H$), also found a negative correlation between the fraction of absorbed AGNs and the hard X-ray luminosity. Recently, some studies based on {\it WISE} data have supported the luminosity dependence of the CF. \cite{Assef} presented the distribution of reddening in their AGN sample selected using the {\it WISE} color in a 9-deg$^2$ NOAO Deep Wide-Field Survey Bo\"{o}tes field. They found that the type 1 AGN ($E(B-V) < 0.15$) fraction is a strong function of the AGN bolometric luminosity (in that case, the fraction of type 1 AGNs increases with bolometric luminosity). On the basis of the {\it WISE} and SDSS data, the CF of quasars, measured by the ratio of the torus IR luminosity to the bolometric luminosity, was also found to decrease with increasing bolometric luminosity \citep{Mor,Calderone,Ma,Roseboom,Gu}. More recently, \cite{Toba} also reported a similar trend based on the {\it AKARI} MIR data. However, these findings were from samples containing several hundred objects. In contrast, Figure \ref{CF_multi_z_22_lin_AGN} includes 3,000 AGNs in total. Therefore, compared to these previous studies, our results are statistically robust.
Furthermore, the large number of AGNs allows not only for different definitions of type 2 AGNs but also for removing the influence of the contribution from their host galaxies, enabling us to estimate the luminosity dependence of the CF using only the AGN-dominated MIR objects. The {\it WISE} results also strongly support our previous {\it AKARI} results \citep{Toba}. We note that the luminosity dependence of the CF confirmed here is slightly weaker than that found in previous studies. This difference may be caused by the removal of a large number of objects that are affected by the contribution of their host galaxies, particularly in the low-luminosity regime. Recently, \cite{Lusso} estimated the CF by computing the ratio of re-processed MIR emission to intrinsic nuclear bolometric luminosity. By subtracting the contribution from the host galaxies and correcting for the reddening effect, they showed that the obtained CF is smaller than that without any correction, especially at low luminosity, which yields a relatively weak luminosity dependence of the CF. Our result shows a similar tendency. Ultimately, we conclude that the CF depends on the MIR luminosity.\\ We next derive the redshift dependence of the CF, taking its luminosity dependence into account, following the diagnostic method presented in \cite{Hasinger}. In that study, the data in each redshift bin were also fitted with a linear function. \cite{Hasinger} first estimated the average value of the slope of the relation between the CF and luminosity in the redshift range 0.2--3.2 and then estimated the normalization value at a luminosity of $\log (L_X / \mathrm{erg\,s^{-1}}) = 43.75$, in the middle of the observed range, as a function of the redshift by keeping the slope fixed to the average value.
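The fixed-slope step of this diagnostic is straightforward to mimic. The sketch below uses our own simplifying choice of inverse-variance weighting (function names and the weighting scheme are assumptions, not the published procedure):

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean."""
    weights = [1.0 / s ** 2 for s in sigmas]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

def normalization_at(logL0, logL, cf, cf_err, slope):
    """Intercept of CF = slope*(logL - logL0) + b with the slope held fixed,
    i.e., the CF evaluated at the pivot luminosity logL0."""
    residuals = [y - slope * (x - logL0) for x, y in zip(logL, cf)]
    return weighted_mean(residuals, cf_err)

# Data lying exactly on CF = -0.2*(logL - 10.6) + 0.4 recover b = 0.4.
```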
To quantitatively examine the dependence of the CF on redshift, an attempt was made to correct for the systematic selection effects, and it was found that this corrected normalization value (i.e., the CF at $\log (L_X / \mathrm{erg\,s^{-1}}) = 43.75$) increased with redshift up to $z \sim 2$. This method thus offers a simple way to examine the redshift dependence of the CF. In the case of our flux-limited sample, however, the above analysis would still be affected by the luminosity dependence of the CF because, unlike Hasinger's sample, our data do not cover the same luminosity range at every redshift. This yields an ``unfair'' normalization value as a result of fitting in each separate redshift bin, and so to evaluate the data correctly using this method, we need to collect a complete sample that fills the luminosity--redshift space. We selected a complete sample in luminosity--redshift space that enabled us to investigate the redshift dependence of the CF from unbiased data, but at the cost of reducing the number of objects in the sample. The sample was divided into three redshift bins and four luminosity bins. The redshift bins were $0.05 \leq z < 0.1$, $0.1 \leq z < 0.15$, and $0.15 \leq z \leq 0.2$ in a luminosity range of $10^{10-11.6} L_{\odot}$. We note that these redshift bins are different from those presented in Figure \ref{CF_multi_z_22_lin_AGN} to ensure that the same luminosity range is covered in each redshift bin. In the same manner as \cite{Hasinger}, we investigated the dependence of the CF on redshift as follows: We first estimated the average value of the slope in the redshift range 0.05--0.2; the estimated slope values for each type 2 definition, listed in Table \ref{average_slope}, are in good agreement with those of Hasinger (0.25 $\pm$ 0.06) at $0.015 < z < 0.2$. \begin{deluxetable}{lc} \tablecolumns{2} \tablewidth{0pc} \tabletypesize{\scriptsize} \tablecaption{Average value of the slope of the fitted linear function in each redshift bin.
\label{average_slope}} \tablehead{ \colhead{} & average value of slope } \startdata Sy2 & $-$0.19 $\pm$ 0.06 \\ Sy2 + LINER & $-$0.16 $\pm$ 0.06 \\ Sy2 + Composite & $-$0.18 $\pm$ 0.07\\ Sy2 + LINER + Composite & $-$0.12 $\pm$ 0.08\\ \enddata \end{deluxetable} We then estimated the normalization value at a luminosity of $\log[\nu L_{\nu}(22\,\micron)/L_{\odot}] = 10.6$, in the middle of the observed range, as a function of redshift by keeping the slope fixed to the average value (Figure \ref{CF_multi_z_22_lin}). \begin{figure} \epsscale{1} \plotone{Figure_20.eps} \caption{Variation in the CF with the 22-$\mu$m luminosity in different redshift bins. The dashed line shows the best-fit linear function with the slope fixed to the average value.} \label{CF_multi_z_22_lin} \end{figure} We found that the data are well fitted by the linear function in each redshift bin. \begin{figure} \epsscale{1} \plotone{Figure_21.eps} \caption{Dependence of the normalization of the CF at $\log[\nu L_{\nu}(22\,\micron)/L_{\odot}] = 10.6$ on redshift.} \label{CF_multi_z_22_slope} \end{figure} Figure \ref{CF_multi_z_22_slope} shows the normalization value of the CF at $\log[\nu L_{\nu}(22\,\micron)/L_{\odot}] = 10.6$ as a function of redshift. We found that the CF did not change significantly with redshift for any of the type 2 AGN classification criteria. Therefore, we conclude that the CF does not have a redshift dependence for $z \leq 0.2$. \subsection{Uncertainties in the Luminosity Dependence} We consider in this section the uncertainties in the luminosity dependence of the CF. As described in Section \ref{CF_vs_L}, we employed the AGN wedge selection technique and investigated the luminosity dependence for the selected AGN-dominated MIR objects. However, we also consider the uncertainties without using this technique for the benefit of users who refer to the data in Appendix \ref{CF_without_wedge}.
\label{CF_err} \subsubsection{Influence of the Unknown galaxies} \label{Unknown} A total of 3,060 and 179 galaxies were classified as Unknown galaxies in the 12- and 22-$\mu$m samples, respectively. Even though these galaxies constitute only a small portion of the samples (1.4\% and 0.7\%, respectively), it is not clear that their influence on the CF can be ignored. Therefore, we estimated the effect of these objects by considering the most extreme possibility, i.e., that all Unknown galaxies are type 2 AGNs. \begin{figure} \epsscale{1} \plotone{Figure_22.eps} \caption{The CF as a function of the 22-$\micron$ luminosity, including the case in which all Unknown galaxies are type 2 AGNs (purple circles) at $z \leq$ 0.2.} \label{CF_L_Unknown} \end{figure} We should note that we investigated this influence using a sample covering the full redshift range because the number of Unknown galaxies is too small to estimate the CF with high accuracy if they are divided into separate redshift bins as in Figure \ref{CF_multi_z_22_lin_AGN}. Here we show the result obtained when the AGN wedge is considered. As shown in Figure \ref{CF_L_Unknown}, the CF for this case also decreases with increasing luminosity. For the full sample, including objects outside the AGN wedge, we see a similar trend (see Figure \ref{CF_L_ALL-z_12_22} in Appendix \ref{CF_without_wedge}), and so we conclude that the influence of the Unknown galaxies can be neglected. \subsubsection{Influence of rejected objects} \label{rejected} When we cross-matched the {\it WISE} sample with the SDSS sample, we rejected 601,460 {\it WISE} sources that did not lie within the 3-arcsec search radius. If these rejected objects were galaxies, then this could have an effect on our results. We thus attempted to extract possible galaxy candidates according to the SDSS photometric information and data in the literature. Figure \ref{reject_objcts} shows a flow chart of the process used for the extraction.
The 601,460 {\it WISE}-rejected sources were first cross-matched with the SDSS DR8 photometric catalog, which contains 469,053,874 unique primary sources with all data cataloged in the {\tt PhotoPrimary} table on the CAS. \begin{figure} \epsscale{0.8} \plotone{Figure_23.eps} \caption{Flow chart for extracting the galaxy candidates from the {\it WISE}-rejected sample.} \label{reject_objcts} \end{figure} We again adopted a 3-arcsec search radius and selected 545,711 sources. Of these, we extracted sources that met the SDSS spectroscopic sample selection criteria, i.e., a petroMag\_r value of less than 17.77 mag or a psfMag\_i value of less than 19.1 mag, and considered their morphologies (see Section \ref{SDSS_sample}). The morphological information is listed under the column headed ``type'' in the {\tt PhotoPrimary} table; point-like objects are labeled as ``STAR'' (possibly including quasars) and diffuse objects are labeled as ``GALAXY'', based on the difference between the PSF and model magnitudes. It should be noted that there is no ``QSO'' category because it is difficult to distinguish between stars and quasars based on the photometry only, unlike the case for the spectroscopy-based {\tt SpecPhoto} table. Thus, an extraction of objects with (i) petroMag\_r below 17.77 mag and type = ``GALAXY'' or (ii) psfMag\_i below 19.1 mag and type = ``STAR'' yielded 46,497 sources (17,906 point-like sources and 28,591 diffuse sources). There were 55,749 sources that did not have any SDSS photometric information, and these were rejected. While some of these objects may be galaxies, such sources are optically too faint to be detected by SDSS imaging (the exposure time per band is $\sim$60 s, and the detection limit (95\% completeness) of the r-band for point sources is 22.2 mag). We also note that 499,214 faint sources were excluded both by this method and in the spectroscopic target selection \citep{Eisenstein, Strauss}.
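The magnitude-and-morphology cut just described can be expressed as a small predicate. This is a sketch (column names follow the SDSS {\tt PhotoPrimary} table; the function name and the handling of missing values are our assumptions):

```python
def passes_preselection(petroMag_r, psfMag_i, sdss_type):
    """Sketch of the extraction step: keep diffuse objects brighter than the
    main-galaxy limit, or point-like objects brighter than the QSO target
    limit. `None` stands for a missing magnitude."""
    if sdss_type == "GALAXY" and petroMag_r is not None and petroMag_r < 17.77:
        return True
    if sdss_type == "STAR" and psfMag_i is not None and psfMag_i < 19.1:
        return True
    return False
```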
We discuss the influence of these optically faint {\it WISE} sources on our results at the end of this subsection. For the 17,906 point-like sources (hereinafter, the STAR/QSO sample), we examined their color properties to remove the stars based on the $g - z$ versus $z - [3.4]$ color--color diagram. \cite{Wu} plotted this diagram for spectroscopically confirmed stars and quasars obtained from the SDSS DR7 and {\it WISE} catalogs and reported that most stars can be distinguished from quasars using the following criterion: \begin{equation} z - [3.4] \leq 0.66 \times (g - z) + 2.01. \end{equation} Some high-$z$ ($z > 4$) quasars are actually lost in a field of stars when this criterion is adopted, as \cite{Wu} have mentioned. However, as we are focusing only on low-$z$ quasars ($z \leq 0.3$), this criterion is useful. \begin{figure} \epsscale{1} \plotone{Figure_24.eps} \caption{Distribution of the STAR/QSO sample in the $g -z$ versus $z - [3.4]$ color--color diagram. The red contours represent spectroscopically confirmed {\it WISE}--SDSS stars. The dashed line indicates the star--quasar separation criterion, $z - [3.4] = 0.66 (g - z) + 2.01$, proposed by \cite{Wu}.} \label{Wu_color} \end{figure} The distribution of the STAR/QSO sample in the color--color diagram is shown in Figure \ref{Wu_color}. To ensure the reliability of the color values, we examined the colors only for objects with a S/N of greater than 10 in the {\it g}, {\it z}, and 3.4-$\mu$m band photometry. For comparison, spectroscopically confirmed {\it WISE}--SDSS stars are also plotted in Figure \ref{Wu_color}. The criterion proposed by \cite{Wu} works well, and by adopting it, 8,794 objects were selected as galaxy candidates.
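The star--quasar cut can likewise be written as a one-line test. In this sketch we treat sources above the line (i.e., violating the stellar criterion) as quasar candidates, which is our reading of the selection:

```python
def is_quasar_candidate(g, z, w1):
    """Wu et al. color cut: stars satisfy z - [3.4] <= 0.66*(g - z) + 2.01,
    so sources above that line are kept as quasar (galaxy) candidates.
    g, z: SDSS magnitudes; w1: WISE [3.4] magnitude."""
    return (z - w1) > 0.66 * (g - z) + 2.01
```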
Finally, we carefully removed stars from the 28,591 + 8,794 = 37,385 galaxy candidates by utilizing the NASA/IPAC Extragalactic Database (NED\footnote{\url{http://ned.ipac.caltech.edu/}}) and the Set of Identifications, Measurements, and Bibliography for Astronomical Data (SIMBAD\footnote{\url{http://simbad.u-strasbg.fr/simbad/}}) database. The final galaxy-candidate sample consisted of 29,198 objects: 29,111 in the 12-$\mu$m sample and 4,390 in the 22-$\mu$m sample. As the {\it WISE}--SDSS sample contained 223,982 objects in the 12-$\mu$m sample and 25,721 objects in the 22-$\mu$m sample, the maximum uncertainty caused by including these new galaxies would therefore be 29,111/223,982 $\sim$ 0.130 (13.0\%) for the 12-$\mu$m sample and 4,390/25,721 $\sim$ 0.170 (17.0\%) for the 22-$\mu$m sample. Half of the galaxies in the {\it WISE}--SDSS sample were classified as SF in Section \ref{Classification} (see Table \ref{type_classification}), and we would expect half of the galaxy candidates to be SF, even if they are all galaxies. Hence, the influence on the estimated CF is expected to be small. When the AGN wedge is considered, there are 2,922 objects in the wedge, as shown in Table \ref{type_classification_AGN_wedge}, which summarizes the type classification for the AGN-dominated 22-$\mu$m sources. Among them, up to 439 objects could be galaxies when the AGN wedge selection is applied to the galaxy candidates. Thus the maximum uncertainty caused by including these galaxies would be 439/2,922 $\sim$ 0.150 (15.0\%). However, it must be noted that the above estimation is for optically bright (petroMag\_r $<$ 17.77 for galaxies and psfMag\_i $<$ 19.1 for type 1 AGNs) MIR sources. Thus, some optically faint sources could be overlooked by our selection procedure. We therefore attempted to estimate the influence of the optically faint type 2 AGNs on our results by using deeper spectroscopic data.
The Galaxy And Mass Assembly \citep[GAMA;][]{Driver_09,Driver_11} program is a spectroscopic survey of $\sim$300,000 galaxies down to $r < 19.8$ mag over $\sim$290 deg$^2$ using the AAOmega multi-object spectrograph on the Anglo-Australian Telescope (AAT). Partial data obtained in the first phase of the GAMA survey have been released as Data Release 2 (DR2; Liske et al. in preparation), and this catalog provides AAT/AAOmega spectra, redshifts, and a wealth of ancillary information for 72,225 objects located in three equatorial fields (referred to as G09, G12, and G15) covering 144 deg$^2$. The limiting Petrosian $r$ magnitudes are 19.0 (G09 and G12) and 19.4 (G15), two magnitudes deeper than that of the SDSS spectroscopic catalog, but the survey area is smaller than that of the SDSS. Therefore, GAMA could be the best dataset for extracting optically faint {\it WISE} sources that were not detected by the SDSS spectroscopy. In what follows, we extract the optically faint sources and estimate the fraction of type 2 AGNs among the {\it WISE}-rejected sample by assuming that the spatial distributions of the optically faint sources in the GAMA fields are the same as those in the SDSS spectroscopic field. We first narrowed the {\it WISE}-rejected sample to sources within the G09, G12, and G15 regions, which yielded 8,023 sources (8,013 in the 12-$\mu$m sample and 388 in the 22-$\mu$m sample). These sources were then cross-identified with the GAMA DR2 by using a matching radius of 3 arcsec. In this study, we used the {\tt EmLinesPhys} table, which includes the coordinates of each GAMA source and its redshift. As a result, 4,733 sources (hereinafter WISE-nonSDSS-GAMA objects) were selected (4,732 sources in the 12-$\mu$m sample and 217 sources in the 22-$\mu$m sample). We then extracted type 2 AGNs, based on the BPT diagram employed in Section \ref{Classification}, with the line information obtained from the {\tt SpecLines} table \citep{Hopkins}.
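A minimal sketch (not the authors' code) of the 3-arcsec positional cross-match used above, with a flat-sky small-angle approximation and a $\cos(\delta)$ correction; the function name and interface are our own:

```python
import numpy as np

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=3.0):
    """Return (i, j) index pairs where source i of catalog 1 has its
    nearest neighbour j in catalog 2 within radius_arcsec.
    ra/dec are in degrees; valid for small separations."""
    r_deg = radius_arcsec / 3600.0
    pairs = []
    for i in range(len(ra1)):
        # flat-sky separation with cos(dec) correction on the RA axis
        dra = (np.asarray(ra2) - ra1[i]) * np.cos(np.radians(dec1[i]))
        ddec = np.asarray(dec2) - dec1[i]
        sep = np.hypot(dra, ddec)
        j = int(np.argmin(sep))
        if sep[j] < r_deg:
            pairs.append((i, j))
    return pairs
```

A production match would use proper spherical separations (e.g., an astropy-style catalog match), but this captures the selection logic.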
In addition, we narrowed the sample down to sources with a redshift smaller than 0.2, which were adopted to evaluate the luminosity and redshift dependence of the CF. This resulted in 163 objects being classified as type 2 AGNs at $z \leq$ 0.2 (163 sources in the 12-$\mu$m sample and 15 sources in the 22-$\mu$m sample). In the case of the 22-$\mu$m sample, 15/217 $\sim$ 7\% of the objects were type 2 AGNs. We note that this estimation is a lower limit because GAMA could not detect almost 40\% of the {\it WISE}-rejected sample (hereinafter WISE-nonSDSS-nonGAMA objects), and thus some optically faint sources with a PetroMag\_r greater than 19.0 could be type 2 AGNs. We therefore investigated the possibility that such objects exist by using NED and SIMBAD. Among the 3,290 WISE-nonSDSS-nonGAMA objects, 2,492 and 424 objects were cross-identified with NED and SIMBAD, respectively, by using a matching radius of 3 arcsec, and we checked for the existence of type 2 AGNs at 0.006 $< z <$ 0.2. We found no objects that satisfied the above criteria, although NED and SIMBAD do not have complete spectroscopic classifications for all galaxies. Therefore, the majority of the WISE-nonSDSS-nonGAMA objects are expected to be high-$z$ ($z > 0.2$) sources, and the maximum contribution ($\sim$7\%) of type 2 AGNs to the 22-$\mu$m sample mentioned above should be a reasonable estimate. In terms of the CF (type 2 AGN fraction), this result indicates that the CF we derived in Section \ref{CF_vs_L} is an underestimate (see also Sections \ref{interpretation_CF_L} and \ref{Comparison}).
\subsubsection{Influence of optically elusive buried AGNs} \label{Obscured} The type classification was based on optical spectroscopic information (see Section \ref{Classification}), but if the central engine of an AGN is enshrouded by dust covering the entire solid angle, then the bulk of the optical emission will be absorbed by the dust, and it is thus difficult to classify such objects using the BPT diagram. The presence of these ``buried'' AGNs has been reported by many authors, including \cite{Oyabu}, who identified two buried AGNs based on {\it AKARI} near-IR (NIR) spectroscopic observations. These objects do not show any AGN features in their optical spectra but do have a steep red continuum from hot dust in their NIR spectra. We examined the presence of buried AGNs based on their expected {\it WISE} colors, which should be very red. Figure \ref{WISE_color_color} shows the color--color diagram ($[3.4] - [4.6]$ versus $[4.6] - [12]$) for the {\it WISE}--SDSS sample. The shaded regions representing different galaxy types indicate areas where the photometry of redshifted sample galaxies was synthesized using simulated SEDs \citep{Wright}. \begin{figure} \epsscale{1} \plotone{Figure_25.eps} \caption{{\it WISE} color--color diagram of the {\it WISE}--SDSS galaxies. The shaded regions representing different galaxy types indicate areas where the photometry of redshifted sample galaxies was synthesized using simulated SEDs \citep{Wright}. The solid lines illustrate the AGN selection wedge defined by \cite{Mateos}.} \label{WISE_color_color} \end{figure} A few LINERs, SFs, and Composites are located in the obscured AGN region defined by \cite{Wright}, but the number of these obscured AGNs is very small. When the AGN wedge is considered, the SFs (1.2\% of the wedge objects) and Unknown galaxies (0.4\%) that lie in the AGN wedge may be candidates for buried AGNs (see Table \ref{type_classification_AGN_wedge}).
Their percentage of all AGN-dominated objects is very small (1.6\% at most), and thus we conclude that their influence on the CF will also be small. \begin{deluxetable}{lr} \tablecolumns{2} \tablewidth{0pc} \tabletypesize{\scriptsize} \tablecaption{Classifications of the objects in the AGN wedge for the 22-$\mu$m sample.\label{type_classification_AGN_wedge}} \tablehead{ \colhead{type} & \colhead{number (percentage)} } \startdata type 1 AGNs & 2,077 (71.1\%)\\ type 2 AGNs & 520 (17.8\%)\\ LINER & 130 (4.4\%)\\ Composite & 150 (5.1\%)\\ SF & 34 (1.2\%)\\ Unknown & 11 (0.4\%)\\ \cline{1-2} All & 2,922 (100\%)\\ \enddata \end{deluxetable} \subsection{Interpretation of Luminosity Dependence of the Covering Factor} \label{interpretation_CF_L} We consider here two dust torus models that may explain the luminosity dependence of the CF. We first fit our results to the receding torus model \citep{Lawrence+91}, which argues that the expansion of the dust sublimation radius with luminosity pushes the torus to larger radii and therefore decreases its CF. In this model, the CF is described as a function of luminosity by \citep[e.g.,][]{Simpson+98,Simpson+05} \begin{equation} \label{Eq_RTM} CF = \left( 1 + \frac{3L}{L_0} \right) ^{-0.5}\;, \end{equation} where $L_0$ is the luminosity at which the CF is equal to 0.5. Here, the luminosity refers not to the radiation from the dust torus but to the total output of the central engine of the AGN. Thus, the luminosity of concern here is not equivalent to the MIR luminosity. However, the radiation from the central engine is thought to be strongly correlated with that from the dust torus. For instance, \cite{Spinoglio} found that the bolometric luminosity of AGNs is proportional to the MIR luminosity, based on an examination of {\it IRAS} data.
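The receding torus relation above is simple to evaluate numerically; a minimal sketch (the function name and example luminosities are illustrative, not fitted values from this paper):

```python
def covering_factor(L, L0):
    """Receding torus model: CF = (1 + 3 L/L0)^(-1/2),
    where L0 is the luminosity at which CF = 0.5."""
    return (1.0 + 3.0 * L / L0) ** -0.5
```

By construction CF $=$ 0.5 at $L = L_0$, and CF decreases monotonically with increasing luminosity, which is the qualitative trend seen in our data.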
\cite{Ichikawa} obtained MIR photometric data for a total of 128 sources in the 9-, 12-, 18-, 22-, and 25-$\mu$m bands from {\it AKARI} and {\it WISE} as well as hard X-ray (14--195 keV) data from the {\it Swift} BAT. They found a good correlation between the hard X-ray and MIR luminosities over three orders of magnitude ($9 < \log \nu L_{\nu} (9,18\, \mu$m) $< 12$), which is tighter than that between the hard X-ray luminosity and the far-IR (FIR) luminosity at 90 $\mu$m. This could indicate that the radiation from the central engine is directly connected to that from the dust torus. Therefore, the MIR luminosity should be a good tracer of the bolometric luminosity of the central engine. It should be noted that we restrict the sample here to objects at $z \leq 0.15$ to omit as much as possible the effects of optically faint {\it WISE} sources (see Section \ref{rejected}). The relationship between the CF and MIR luminosity, derived in Section \ref{CF_vs_L}, is compared in Figure \ref{CF_L_RTM_AGN} with that expected from the receding torus model. Here, $L_0$ is a free parameter; its best-fit value and reduced chi-square value ($\chi^2 /\nu$) are listed in Table \ref{CF_RTM_AGN_para_22}. Figure \ref{CF_L_RTM_AGN} demonstrates that the receding torus model describes our data reasonably well. However, the receding torus model does not provide a unique explanation of the luminosity dependence; the assumption that the height of the torus is constant regardless of the luminosity is rather strict, and the value of the reduced chi-square for the CF fit is relatively large. We therefore also considered the modified receding torus model, which was proposed by \cite{Simpson+05} and supported by \cite{Ricci} and \cite{Lusso}.
In this model, the height of the torus ($h$) also depends on the luminosity of the AGN, \begin{equation} \label{Eq_RTM_H} CF = \left[ 1 + 3\left(\frac{L}{L_0} \right)^{1-2\xi} \right] ^{-0.5}, \end{equation} where there are now two free parameters: $L_0$ and $\xi$. The best-fit values and $\chi^2 /\nu$ for this model are also listed in Table \ref{CF_RTM_AGN_para_22}, and the results are also shown in Figure \ref{CF_L_RTM_AGN}. We found that the modified receding torus model provides a better fit to the data. For this model, $\xi$ takes positive values ($\sim$0.1--0.3; cf. Table \ref{CF_RTM_AGN_para_22}), which are consistent with those reported by \cite{Simpson+05}. The luminosity dependence of the height of the torus ($h\propto L^{0.2-0.3}$) can be interpreted in the framework of the radiation-limited clumpy torus model. This model was originally suggested by \cite{Honig}, who investigated the influence of the dust distribution on the Eddington limit of the torus and concluded that the torus is a clumpy structure composed of self-gravitating, optically thick dust clouds. Clouds at small radii from the central black hole are directly exposed to the AGN radiation pressure and forced out to larger distances, while distant clouds are shielded from the AGN radiation by the clouds at small radii. Both effects determine the size of the torus. This model gives the luminosity dependence of the height as $h\propto L^{0.25}$, which is in good agreement with our measurements. \begin{figure} \epsscale{1} \plotone{Figure_26.eps} \caption{The CF as a function of the 22-$\mu$m luminosity for AGN-dominated MIR sources at $z \leq 0.15$. The dashed line shows the best-fit curve determined with the receding torus model.
The solid line shows the best-fit curve determined with the modified receding torus model.} \label{CF_L_RTM_AGN} \end{figure} \begin{deluxetable}{lcccccccc} \tablecolumns{9} \tablewidth{0pc} \tabletypesize{\scriptsize} \tablecaption{Fitting parameters of the receding torus model and modified receding torus model for the 22-$\mu$m sample in the AGN wedge.\label{CF_RTM_AGN_para_22}} \tablehead{ \colhead{} & \multicolumn{3}{c}{Receding Torus Model} & \colhead{} & \multicolumn{4}{c}{Modified Receding Torus Model} \\ \cline{2-4} \cline{6-9} \\ \colhead{} & \multicolumn{1}{c}{$L_0$} & \multicolumn{1}{c}{$\chi^2/\nu$} & \multicolumn{1}{c}{$\chi_{\nu}^2$} & \colhead{} & \multicolumn{1}{c}{$L_0$} & \multicolumn{1}{c}{$\xi$} & \multicolumn{1}{c}{$\chi^2/\nu$} & \multicolumn{1}{c}{$\chi_{\nu}^2$} } \startdata Sy2s & (8.33 $\pm$ 1.41)$\times 10^{9}$ & 16.34/6 & 2.72 & & (4.20 $\pm$ 2.08)$\times 10^{9}$ & (0.25 $\pm$ 0.07) & 5.26/5 & 1.05 \\ Sy2s + LINERs & (1.19 $\pm$ 0.20)$\times 10^{10}$ & 15.40/7 & 2.20 & & (8.44 $\pm$ 3.17)$\times 10^{9}$ & (0.26 $\pm$ 0.07) & 4.55/6 & 0.76 \\ Sy2s + Composites & (1.47 $\pm$ 0.27)$\times 10^{10}$ & 12.74/6 & 2.12 & & (9.51 $\pm$ 0.45)$\times 10^{9}$ & (0.29 $\pm$ 0.07) & 1.72/5 & 0.34 \\ Sy2s + LINERs + Composites & (1.96 $\pm$ 0.36)$\times 10^{10}$ & 13.52/7 & 1.93 & & (1.87 $\pm$ 0.83)$\times 10^{10}$ & (0.32 $\pm$ 0.07) & 1.09/6 & 0.18 \\ \enddata \end{deluxetable} \subsection{Comparison to Optical and Hard X-ray Results} \label{Comparison} Finally, we compare our measurement of the CF based on the MIR data with that based on optical and hard X-ray data. Any comparison to results obtained from a different data set should be undertaken carefully because the luminosity and redshift ranges may be different. In particular, as there is no consensus on the redshift dependence of the CF at $z > 0.2$, if the CF depends strongly on redshift as reported by previous studies \citep[e.g.,][]{La Franca,Hasinger}, then this would affect any comparisons.
Hence, we compared our results with those obtained by \cite{Simpson+05} and \cite{Hasinger}, who presented the relationship between the CF and the [OIII] luminosity at $z < 0.3$ \citep{Simpson+05} and the hard X-ray (2--10 keV) luminosity \citep{Hasinger} at $z < 0.2$\footnote{\cite{Hasinger} also treats high-redshift data ($z < 5.2$), but we only use the data in the $0.015 < z < 0.2$ redshift bin for the comparison. See Figure 8 in \cite{Hasinger}.}. It should be noted that here we also restrict the sample to objects at $z \leq 0.15$ to omit as much as possible the effects of optically faint {\it WISE} sources (see Section \ref{rejected}). We also compared the results with those obtained from higher-energy X-ray band data, because 2--10-keV-band-based surveys could fail to detect heavily obscured luminous AGNs, as has been reported by, e.g., \cite{Mateos}. We referred to two papers: \cite{Beckmann_09}, who analyzed data for 199 AGNs detected by {\it INTEGRAL} above 20 keV and reported a negative correlation between the fraction of absorbed/type 2 AGNs and the hard X-ray (20--100 keV) luminosity; and \cite{Burlon}, who reported the same correlation on the basis of 15--55-keV data. We examined the sample data in both papers and estimated the CF using the data at $z \leq$ 0.15. We assume here that the dependence of the CF on redshift is very weak or almost constant even at $z \leq 0.3$, which enables us to compare the luminosity dependence directly without considering the effect of the redshift dependence.
For the comparison, we converted the [OIII] luminosity ($L_{\mathrm{[OIII]}}$) and hard X-ray luminosity ($L_X$) to the 22-$\mu$m luminosity ($L_{\mathrm{MIR}}$) using the following conversion formulae: \begin{eqnarray} \log \left(\frac{L_{\mathrm{MIR}}}{10^{43}} \right) & = & (2.36 \pm 0.01) + (0.76 \pm 0.01) \log \left(\frac{L_{\mathrm{[OIII]}}}{10^{43}}\right), \\ \label{L_MIR_HX} \log \left(\frac{L_{\mathrm{MIR}}}{10^{43}} \right) & = & (0.19 \pm 0.05) + (1.11 \pm 0.07) \log \left(\frac{L_\mathrm{X}}{10^{43}}\right), \\ \log \left(\frac{L_{\mathrm{MIR}}}{10^{43}} \right) & = & (0.27 \pm 0.05) + (0.89 \pm 0.04) \log \left(\frac{L_\mathrm{X\,(14-195\,keV)}}{10^{43}}\right), \end{eqnarray} where the luminosities are normalized to $10^{43}$ erg s$^{-1}$. For the [OIII] luminosity, we calculated the conversion factors by plotting $\log L_{\mathrm{[OIII]}}$ versus $\log [\nu L_{\nu}$(22 $\micron$)], as shown in Figure \ref{L_22_OIII}. To ensure the accuracy of the conversion, only high-S/N ($>$10) objects (type 1 AGNs, type 2 AGNs, LINERs, and Composites) are plotted. For the hard X-ray luminosity, we used Equation (5) of \cite{Gandhi} and Equation (2) of \cite{Matsuta}, the latter derived for their entire sample at 18 $\mu$m. \begin{figure} \epsscale{1} \plotone{Figure_27.eps} \caption{Plot of $\log L_{\mathrm{[OIII]}}$ versus $\nu L_{\nu}$(22 $\micron$). The solid red line shows the best-fit linear function.} \label{L_22_OIII} \end{figure} As \cite{Gandhi} derived the relationship between the 12.3-$\mu$m and hard X-ray luminosities, the intrinsic error associated with the conversion in this case will be somewhat different from that of Equation (\ref{L_MIR_HX}). Similarly, \cite{Matsuta} derived the relationship between the 18-$\mu$m and hard X-ray (14--195 keV) luminosities from {\it AKARI} and {\it Swift}-BAT data, respectively, and thus some uncertainty may arise from the conversion.
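The three conversion formulae above can be sketched as simple helper functions (coefficients taken from the text; the function names are our own, and the quoted coefficient uncertainties are ignored here):

```python
import math

# Luminosities in erg/s, normalized to 1e43 erg/s as in the text.
def L_mir_from_oiii(L_oiii):
    """22-micron luminosity from the [OIII] luminosity."""
    return 1e43 * 10 ** (2.36 + 0.76 * math.log10(L_oiii / 1e43))

def L_mir_from_2_10keV(L_x):
    """22-micron luminosity from the 2-10 keV luminosity."""
    return 1e43 * 10 ** (0.19 + 1.11 * math.log10(L_x / 1e43))

def L_mir_from_14_195keV(L_x):
    """22-micron luminosity from the 14-195 keV luminosity."""
    return 1e43 * 10 ** (0.27 + 0.89 * math.log10(L_x / 1e43))
```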
\begin{figure} \epsscale{1} \plotone{Figure_28.eps} \caption{Comparison of our measured CF ($z \leq$ 0.15) with those of optical \citep[yellow plus:][]{Simpson+05}, hard X-ray \citep[blue circle:][]{Hasinger}, 15--55-keV \citep[purple asterisk:][]{Burlon}, and 20--100-keV \citep[orange filled square:][]{Beckmann_09} studies. Errors in the CFs estimated from \cite{Beckmann_09} and \cite{Burlon} were determined using binomial statistics \citep[see][]{Gehrels}, drawn at the 1$\sigma$ level. The CFs including Sy2 + LINER + Composite galaxies, i.e., the total sample including objects outside the AGN wedge, are also plotted (red shaded region; see Appendix \ref{CF_without_wedge}). The conversion uncertainties are shown as horizontal error bars for the optical and hard X-ray measurements.} \label{CF_Comparison} \end{figure} Figure \ref{CF_Comparison} compares our measurements with those of the optical \citep{Simpson+05}, hard X-ray \citep{Hasinger}, 15--55-keV \citep{Burlon}, and 20--100-keV \citep{Beckmann_09} studies. Uncertainties (1$\sigma$ level) in the CFs obtained from \cite{Beckmann_09} and \cite{Burlon} were estimated by binomial statistics \citep[see][]{Gehrels} in the same manner as \cite{Burlon}. The optically based CF has a larger value than ours over a wide range of luminosities, but the shape of the decrease is similar. To examine the reasons for the differences, the CFs obtained from objects including Sy2 + LINER + Composite galaxies, i.e., the total sample including the AGNs outside the wedge, are also plotted (data are available in Table \ref{CF_table_22}). In that case, the optical data are in good agreement with ours, which indicates that the optical ([OIII])-based selection could be affected by host galaxy contributions.
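For small samples, the 1$\sigma$ binomial uncertainties above are best taken from the exact confidence limits of \cite{Gehrels}; a rough normal-approximation sketch of the same idea (an assumption of ours, not the exact tables used in the figure) is:

```python
import math

def cf_with_error(n_type2, n_total):
    """Type 2 fraction CF = n_type2 / n_total with a normal-approximation
    binomial 1-sigma error; Gehrels (1986) gives the exact small-number
    confidence limits that should replace this for small counts."""
    p = n_type2 / n_total
    sigma = math.sqrt(p * (1.0 - p) / n_total)
    return p, sigma
```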
Indeed, \cite{Caccianiga}, who investigated the nature of all sources (35 in total) in the {\it XMM-Newton} bright serendipitous survey, showed that some sources have an optical spectrum dominated by the light from the host galaxy with no (or little) evidence for the presence of an AGN. In contrast, the hard X-ray-based CFs are consistent with ours, although our MIR-emission-based data exceed the 2--10 keV data by a substantial amount, which indicates that 2--10-keV-based surveys fail to detect heavily obscured/absorbed luminous AGNs, as expected above. Ultimately, our MIR selection with the AGN wedge may avoid the problems associated with optical selection, as well as with hard X-ray ($>$2 keV) based studies. We note that the CF derived from the 22-$\mu$m sample in the AGN wedge would be an underestimate due to the lack of optically faint (PetroMag\_r $>$ 17.77) MIR sources, as described in Section \ref{rejected}. Thus, there may be a small difference between our MIR results and the optical results, although it may be difficult to fill in the gap using only optically faint type 2 AGNs. \section{SUMMARY} Using the {\it WISE} MIR all-sky survey, we constructed 12- and 22-$\mu$m LFs for all types of local galaxies. Using complete optical spectroscopy of emission lines, we classified the galaxies based on the cataloged classifications in the SDSS and their emission line ratios ([NII]/H$\alpha$ and [OIII]/H$\beta$). We classified the {\it WISE} sources into type 1 AGNs, type 2 AGNs, LINERs, Composites, and SFs. We then calculated the number densities of the type 1 and type 2 AGNs by integrating each LF and estimated the CF of the dust torus (the fraction of type 2 AGNs among all AGNs). In particular, we examined the luminosity and redshift dependence of the CF for $\sim3,000$ AGN-dominated MIR sources, which were extracted by examining their MIR colors.
The main results are as follows: \begin{enumerate} \item Less luminous AGNs in the MIR region are affected by a contribution from their host SF. \item The CF decreases with increasing 22-$\mu$m luminosity regardless of the choice of type 2 AGN classification criteria, although this dependence is relatively weaker than that found in previous studies. \item The CF does not change significantly with redshift ($z < 0.2$). \item The luminosity dependence of the CF can be interpreted using the receding torus model. This luminosity dependence is better described by the modified receding torus model, in which the height of the torus is parameterized. \item Measurements of the CF based on optical survey data exceed our data but are in good agreement with our measurements when the host galaxy contributions are not removed (i.e., without adopting the AGN wedge selection). In contrast, measurements of the CF based on hard X-ray survey data are almost consistent with ours. These trends may indicate that optical survey data are affected by the host galaxy contribution. \end{enumerate} Our study has confirmed and extended previous results obtained with {\it IRAS}, {\it Spitzer}, and {\it AKARI} by constructing a much larger MIR-selected sample with {\it WISE}. The large number of galaxies in the sample we obtained here means that the variation in the CF with luminosity and redshift is described with higher statistical accuracy and lower systematic errors than in previous results. We emphasize that a luminosity-dependent torus geometry destroys the simplicity of the original torus unification scheme and requires that at least one new free function be determined. Our results are inconsistent with the simplest unified scheme, which expects the CF to be independent of the luminosity. A modification of this simple zero-order unification scheme is required. The present results with {\it WISE} provide an important local benchmark for AGN studies at high redshifts.
\acknowledgments The authors gratefully acknowledge the anonymous referee for a careful reading of the manuscript and very helpful comments. We are also deeply thankful to Drs Guenther Hasinger (Institute for Astronomy) and Chris Simpson (Liverpool John Moores University) who kindly provided data for comparison. We also thank Drs Tadayasu Dotani (ISAS/JAXA), Yoshihiro Ueda (Kyoto University), Tohru Nagao (Ehime University), and Akihiro Doi (ISAS/JAXA) for their relevant comments. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is \url{http://www.sdss3.org/}. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. 
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. Also, this research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programs including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is \url{http://www.gama-survey.org/}. P.G. acknowledges support from STFC grant reference ST/J00369711.
\section{Introduction} Any attempt to bridge the gap between quantum and classical physics must be based on the treatment of a large number of degrees of freedom. This requires the use of the formalism of Quantum Field Theory. The systematic development of this domain started with the introduction of the interaction representation \cite{tom,schwe} to obtain the perturbation series for expectation values. Later, since the main interest lay in cross sections, the calculations were simplified by the introduction of the scattering matrix formalism. But this restricted formalism, which received further strong support from the path integral representation of the transition amplitudes, is not well suited for the description of the quantum-classical transition, where reduced density matrices should be studied. Instead, the return to the more involved Heisenberg or interaction representation gives us access to the density matrix of the system and opens the way to developing theoretical methods to approach the quantum-classical transition. The main point of this work is to show, in the framework of the non-relativistic electron gas, that Schwinger's closed time path (CTP) formalism \cite{schw,keldysh} is well suited to these goals. Let us consider an observable $A$ with expectation value \begin{equation}\label{expv} \langle\psi(t)|A|\psi(t)\rangle={\mathrm{Tr}}[A\rho(t)] \end{equation} where the density matrix at time $t$ is given by \begin{equation} \rho(t)=e^{-\frac{i}{\hbar}(t-t_i)H}\rho_ie^{\frac{i}{\hbar}(t-t_i)H} \end{equation} in terms of the initial density matrix $\rho_i$ given at the initial time $t_i$. Such an expectation value in the Heisenberg representation cannot, in general, be reproduced by the usual vacuum-to-vacuum transition amplitudes, such as \begin{equation}\label{trampl} \langle\psi(t_i)|e^{-\frac{i}{\hbar}(t_f-t)H}Ae^{-\frac{i}{\hbar}(t-t_i)H}|\psi(t_i)\rangle, \end{equation} used in constructing the scattering amplitudes.
In fact, the expectation value \eq{expv}, obtained for $\rho_i=|\psi(t_i)\rangle\langle\psi(t_i)|$, is related to \eq{trampl} only when $|\psi_i\rangle$ is an eigenstate of the full Hamiltonian, a rather trivial and unrealistic case. One can bring over the techniques and the experience obtained in the scattering matrix formalism to the CTP formalism, which aims at the expectation values \eq{expv} rather than the transition amplitudes \eq{trampl}. There are two time axes appearing in operators of the Heisenberg representation, which lead to a formal reduplication of the degrees of freedom in calculating the expectation values. This has the following two important features. One is the identification of the combinations of the degrees of freedom which couple to retarded or advanced Green functions \cite{ed}, giving a direct way to access the possible breakdown of time reversal invariance. The other is that this reduplication leads us to the density matrix rather than the transition amplitude. There are two known signatures of the quantum-classical transition. One is the emergence of classical probabilities, called decoherence \cite{zeh,zurek}. Let us use the basis $|S_n\rangle$ for the system plus measuring apparatus, denote the environment state vector by $|E\rangle$ and suppose that the whole system starts with the initial state $\sum_nc_n(t_i)|S_n\rangle\otimes|E_0\rangle$ at $t=t_i$, which evolves into $\sum_nc_n(t)|S_n\rangle\otimes|E_n\rangle$ for $t>t_i$ due to the unavoidable interactions between the system and the environment. Actually, the features 'macroscopic' and 'impossible to separate from the environment' are believed to be equivalent. The interactions with the environment are always strong because the latter is macroscopic and its gap-less, dense energy spectrum makes the jump $|E\rangle\to|E'\rangle$ with $\langle E|E'\rangle=0$ easy, even for a small amount of energy exchange with the system $S$.
As a result, one expects $\langle E_n|E_m\rangle\approx0$ for $n\not=m$ and the reduced density matrix for the system, \begin{eqnarray} \rho_S&=&{\mathrm{Tr}}_E\sum_{n,n'}c^*_nc_{n'}|S_n\rangle\otimes|E_n\rangle\langle E_{n'}|\otimes\langle S_{n'}|\nonumber\\ &\approx&\sum_n|c_n|^2|S_n\rangle\langle S_n| \end{eqnarray} is approximately diagonal, indicating the suppression of interference between system states which correspond to macroscopically different environment states. The reduced density matrix, being hermitian, is always diagonal when expressed in an appropriately chosen basis. The observables which are diagonal in this basis are called pointers \cite{zurek}. Decoherence makes the pointer probabilities $p_n=P(S=S_n)$ additive, $P(S=S_n\cup S=S_{n'})=P(S=S_n)+P(S=S_{n'})$ for $n\not=n'$, as in ordinary probability theory. The other signature we have to establish at the quantum-classical transition is more involved; it is the dynamical breakdown of time reversal invariance, $T\hskip-6pt/$. This is needed to read off the result of the measurement, i.e., to create durable, macroscopic records which continue to exist independently of the system with its microscopic, time reversal invariant dynamics. Both signatures can be realized in the limit of a large number of degrees of freedom only. The suppression of the overlap of two states in decoherence is achieved by the multiplication of a large number of overlap factors, each of them being between zero and one in absolute magnitude, belonging to the microscopic degrees of freedom of the environment. The dynamical symmetry breaking, $T\hskip-6pt/$, requires degeneracy of the non-interacting system-environment states, to 'leak' the former into the latter. Such a high degree of degeneracy can be achieved in the thermodynamical limit only. We shall see that these two mechanisms driving the quantum-classical transition become identical in the CTP formalism.
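The multiplicative suppression of the off-diagonal elements of the reduced density matrix can be illustrated with a toy numerical example (ours, not from the paper): each environment mode contributes one overlap factor of modulus slightly below one, and the product decays exponentially with the number of modes.

```python
import numpy as np

# Toy illustration: an off-diagonal element of the reduced density matrix
# is multiplied by the product of many per-mode environment overlaps
# |<E_n|E_m>|, each slightly below one, and is thus exponentially
# suppressed as the number of environment modes grows.
rng = np.random.default_rng(0)
n_modes = 200
overlaps = 1.0 - 0.02 * rng.random(n_modes)  # per-mode overlap factors in (0.98, 1.0]
suppression = np.prod(overlaps)              # multiplies c_n* c_m for n != m
```

With only 200 modes and per-mode overlaps of 0.98--1.0 the product is already of order $10^{-1}$; for a macroscopic environment the off-diagonal elements become utterly negligible, which is the decoherence mechanism described above.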
This is due to their common dynamical origin, the continuous, gap-less spectrum and the resulting approximate degeneracy of the environment states. The thermodynamical limit implies another, related dynamical process, the screening. It is the result of the presence of soft excitation modes which can efficiently polarize the vacuum. Does screening have a role to play in the quantum-classical transition? The reason for suspecting this is that the dressed quasi-particles obtained by vacuum polarization have an extended structure and may represent the pointer states which are stable and decohere easily \cite{zeh,zurek,pointer}. It will be shown that the CTP formalism, applied to the non-relativistic electron gas, provides an affirmative answer to our question. It is found that the zero-sound and the plasmon collective modes are responsible for decoherence and $T\hskip-6pt/$~in the electron plasma and that these processes appear just at the screening length scale. This is a non-trivial result because the screening is realized in a static system while the other two mechanisms are based on time-dependent modes. The quantum-classical transition is considered in this work from the point of view of the renormalization group, where the modes are eliminated from the dynamics starting at short distances and going towards the infrared limit. This process is treated at a rough, qualitative level and our goal is only to characterize the eliminated modes from the point of view of their contributions to classical behavior. To this end we introduce the classicality of a quasi-particle mode, characterized by a wave vector and frequency, and calculate it for a homogeneous, non-relativistic electron plasma in the one-loop approximation. The concept of classicality has already been introduced in a number of works.
It has been used to characterize states from the point of view of the loss of information \cite{optpoint}, the robustness \cite{wiva}, and the impact on the environment \cite{dzdazu}; these approaches have been compared in \cite{dadzzu}. The loss of information and stability were compared in \cite{elze}. An intuitive proposition, based on the view of measurement as a fluctuation induced macroscopic instability, is outlined in \cite{dreyer}. The classical properties of states in the Hilbert space have been considered in \cite{grkemo}. Other state-dependent definitions, based on phase space distribution functions, were presented in \cite{dore,mapa}. Our definition of classicality, whose inverse can be considered as a measure of the distance from the classical domain, provides a more detailed picture of the establishment of classical physics because it is applicable to quantum modes, characterized by energy-momentum, rather than to states as in the previous examples. The organization of the paper is the following. The effective CTP theory for the Coulomb field is derived in the one-loop approximation in Section \ref{effacf}. Section \ref{classs} contains the identification of the contribution of each mode of the Coulomb field to the reduced density matrix and the introduction of classicality, a measure of the strength of modes to generate decoherence and $T\hskip-6pt/$. This quantity is computed for the electron plasma and followed during the gradual decrease of the infrared cut-off in space or time in Section \ref{points}. It is found that the zero-sound and plasmon modes are those which drive the transition to classical behavior. Finally, Section \ref{concls} is reserved for the conclusions. There are two Appendices summarizing some technical details. Appendix \ref{ctpprop} contains a brief derivation of the CTP propagator for non-relativistic particles at finite density and temperature.
The calculation of the one-loop self energy of the Coulomb field, the non-trivial dynamical input to our effective theory, is sketched in Appendix \ref{tgap}. \section{Effective theory for the Coulomb field}\label{effacf} Our goal is the construction of the reduced density matrix for the Coulomb field $A_0(x)=u_x$, $x=(t,\v{x})$, in the presence of non-relativistic electrons of finite, homogeneous density. We start with the generating functional for the insertion of the Coulomb field into the evolution of the initial density matrix, \begin{equation} e^{\frac{i}{\hbar} W[j^+,j^-]}={\mathrm{Tr}} T[e^{-\frac{i}{\hbar}\int_{t_i}^{t_f}dt'[H(t')-\int d^3xj^+(t',\v{x})u(t',\v{x})]}]\rho_i \bar T[e^{\frac{i}{\hbar}\int_{t_i}^{t_f}dt[H(t)+\int d^3xj^-(t,\v{x})u(t,\v{x})]}] \end{equation} where $H$ denotes the Hamiltonian of the electrons interacting with the Coulomb potential and $\bar T$ stands for anti-time ordering. Each degree of freedom of the system follows unconstrained time evolution and we retain no information about its future behavior in defining the Green functions for the Coulomb field. Thus the trace extends over the whole Fock-space. The path integral representation of the generating functional is \begin{equation}\label{pintgf} e^{\frac{i}{\hbar} W[{\hat j}]}=\int D[{\hat u}]D[\bar\psi^\dagger]D[\hat\psi]e^{\frac{i}{\hbar}\bar\psi^\dagger\cdot[{\hat G}^{-1}-e\hat\sigma{\hat u}]\cdot\hat\psi +\frac{i}{2\hbar}{\hat u}\cdot{\hat D}_0^{-1}\cdot{\hat u}-\frac{ie}{\hbar}{\hat u}\cdot\hat n+\frac{i}{\hbar}{\hat j}\cdot{\hat u}} \end{equation} where the hat denotes a CTP doublet, ${\hat u}=(u^+,u^-)$, etc. and \begin{equation} \hat\sigma=\begin{pmatrix}1&0\cr0&-1\end{pmatrix}. \end{equation} As usual, the initial state can be chosen in an arbitrary manner as long as it has an overlap with the true vacuum, at the expense of carrying out the limit $t_i\to-\infty$. In particular, $|\psi(t_i)\rangle$ is a non-interacting Fermi-sphere.
The initial wave functional for the Coulomb field is chosen to be a constant. The final time boundary conditions are $u^+_{t_f,\v{x}}=u^-_{t_f,\v{x}}$ and $\psi^+_{t_f,\v{x}}=\psi^-_{t_f,\v{x}}$, $\psi^{+\dagger}_{t_f,\v{x}}=\psi^{-\dagger}_{t_f,\v{x}}$. The space and space-time integrations, together with the summation over the spin index if necessary, are indicated by a scalar product, i.e. $f\cdot g$ stands for $\int dxf_xg_x$ or $\int d^3xf_\v{x}g_\v{x}$ and $\psi^\dagger\cdot\psi=\sum_\sigma\int dx\psi^\dagger_{\sigma x}\psi_{\sigma x}$. The inverse propagators ${\hat G}^{-1}={\hat G}^{-1}_0+{\hat G}^{-1}_\mr{BC}$ and ${\hat D}^{-1}_0={\hat D}^{-1}_B+{\hat D}^{-1}_\mr{BC}$ contain the usual terms of the single time axis formalism, \begin{eqnarray} {\hat G}_0^{-1}&=&\begin{pmatrix}G_0^{-1}&0\cr0&-G_0^{-1*}\end{pmatrix},\nonumber\\ {\hat D}^{-1}_B&=&\begin{pmatrix}D_0^{-1}&0\cr0&-D_0^{-1*}\end{pmatrix}, \end{eqnarray} with \begin{equation} G^{-1}_{0~(\sigma x),(\sigma' x')}=\delta_{\sigma\sigma'} \left(i\hbar\partial_t+\frac{\hbar^2}{2m}\Delta_x+\hbar\mu+i\epsilon\right)\delta_{x,x'} \end{equation} and \begin{equation}\label{phtpr} D^{-1}_{0~xx'}=(-\Delta_x+i\epsilon)\delta_{x,x'}. \end{equation} The boundary condition terms, ${\hat G}^{-1}_\mr{BC}$ and ${\hat D}^{-1}_\mr{BC}$, implement the boundary conditions in time. Finally, the homogeneous particle density $\hat n=(n,-n)$ is introduced to neutralize the system and thereby remove the tadpole contributions. The Coulomb field connected Green functions, monitored by their generating functional $W[{\hat j}]$, characterize the subsystem of the Coulomb photons in the environment provided by the electron gas. In particular, the effective theory obtained by eliminating the latter describes the dynamics of the Coulomb photons.
The integration over the fermion field yields the bosonized path integral \begin{equation} e^{\frac{i}{\hbar} W[{\hat j}]}=\int D[{\hat u}]e^{{\mathrm{Tr}}\ln[{\hat G}^{-1}-e\hat\sigma{\hat u}]+\frac{i}{2\hbar}{\hat u}\cdot{\hat D}_0^{-1}\cdot{\hat u} -\frac{ie}{\hbar}{\hat u}\cdot\hat n+\frac{i}{\hbar}{\hat j}\cdot{\hat u}} \end{equation} over the Coulomb field alone. The expansion \begin{equation} \ln[{\hat G}^{-1}-e\hat\sigma{\hat u}]=\ln{\hat G}^{-1}-\sum_{n=1}^\infty\frac{1}{n}{\mathrm{Tr}}(e{\hat G}\cdot\hat\sigma{\hat u})^n \end{equation} leads to the effective theory for the Coulomb field, \begin{equation}\label{efth} e^{\frac{i}{\hbar} W[{\hat j}]}=\int D[{\hat u}]e^{\frac{i}{\hbar} S_\mr{eff}[{\hat u}]+\frac{i}{\hbar}{\hat j}\cdot{\hat u}}, \end{equation} governed by the effective action \begin{equation}\label{ceffa} S_\mr{eff}[{\hat u}]=-i\hbar{\mathrm{Tr}}\ln{\hat G}^{-1}+\frac12{\hat u}\cdot{\hat D}^{-1}\cdot{\hat u}-e\hat k\cdot{\hat u}+\ord{{\hat u}^3}, \end{equation} involving the charge density \begin{equation} \hat k_x=\hat n+i\hbar({\hat G}\hat\sigma)_{x-\eta e^0,x+\eta e^0}, \end{equation} given in terms of the CTP propagator for the electron field, ${\hat G}$, cf. Appendix \ref{ctpprop}. The neutralizing background charge density is chosen in such a manner that $\hat k$ is vanishing. The inverse of the dressed Coulomb propagator, \begin{equation} {\hat D}^{-1}={\hat D}_0^{-1}-\hat\Sigma, \end{equation} is expressed by means of the self energy which is \begin{equation}\label{cself} \hat\Sigma^{\sigma\sigma'}_{xx'}=-ie^2\hbar\sigma\sigma'G^{\sigma\sigma'}_{xx'}G^{\sigma'\sigma}_{x'x} \end{equation} in the one-loop approximation. The Green functions encapsulated in the functional $W[{\hat j}]$ represent the moments of the reduced density matrix for the Coulomb field. The quadratic approximation to the effective action for the Coulomb field shows clearly the modes which build up these moments. 
The one-loop result, summarized briefly in Appendix \ref{tgap}, yields \begin{equation}\label{hdinv} {\hat D}^{-1}_{\omega,\v{q}}=\begin{pmatrix}\v{q}^2-L_{\omega,\v{q}}+i\epsilon&0\cr0&-\v{q}^2+L_{\omega,\v{q}}+i\epsilon\end{pmatrix} +ir_{|\omega|,\v{q}}\begin{pmatrix}1&-2\Theta(-\omega)\cr-2\Theta(\omega)&1\end{pmatrix}+{\hat D}_\mr{BC}^{-1}, \end{equation} in Fourier space where \begin{equation}\label{lindhfn} L_{\omega,\v{q}}=\frac{e^2mk_F}{2\pi^2\hbar^2}\left\{-1+\frac{1}{2q}\left[1-\left(\frac{z}{q}-\frac{q}{2}\right)^2 \right]\ln\left|\frac{1+\frac{z}{q}-\frac{q}{2}}{1-\frac{z}{q}+\frac{q}{2}}\right| -\frac{1}{2q}\left[1-\left(\frac{z}{q}+\frac{q}{2}\right)^2 \right]\ln\left|\frac{1+\frac{z}{q}+\frac{q}{2}}{1-\frac{z}{q}-\frac{q}{2}}\right|\right\} \end{equation} with $z=\omega m/\hbar k_F^2$ and $q=|\v{q}|/k_F$ and \begin{equation}\label{limag} r_{\omega,\v{q}}=\Theta(z)\frac{me^2k_F}{4\pi\hbar^2q}\begin{cases}1-\left(\frac{z}{q}-\frac{q}{2}\right)^2& q>2,~~~\frac{q^2}{2}-q<z<\frac{q^2}{2}+q,\cr 1-\left(\frac{z}{q}-\frac{q}{2}\right)^2&q<2,~~~q-\frac{q^2}{2}<z<\frac{q^2}{2}+q,\cr 2z&q<2,~~~0<z<q-\frac{q^2}{2}.\end{cases} \end{equation} We parametrize the density dependence by means of the usual dimensionless parameter $r_s=r_0/a_0$ denoting the average electron-electron separation, $r_0=(3V/4\pi N)^{1/3}$, in units of the Bohr radius, $a_0$. 
The relation $e^2m/\hbar^2k_F=4\pi\kappa r_s$, $\kappa=(4/9\pi)^{1/3}$, can be used to arrive at the expressions \begin{equation} \v{q}^2-L=k_F^2\left[q^2-\frac{2\kappa r_s}{\pi}\left\{\frac{1}{2q}\left[1-\left(\frac{z}{q}-\frac{q}{2}\right)^2\right] \ln\left|\frac{1+\frac{z}{q}-\frac{q}{2}}{1-\frac{z}{q}+\frac{q}{2}}\right| -\frac{1}{2q}\left[1-\left(\frac{z}{q}+\frac{q}{2}\right)^2\right] \ln\left|\frac{1+\frac{z}{q}+\frac{q}{2}}{1-\frac{z}{q}-\frac{q}{2}}\right|-1\right\}\right] \end{equation} and \begin{equation} r_{\omega,\v{q}}=\Theta(z)k_F^2\frac{\kappa r_s}{q}\begin{cases} 1-\left(\frac{z}{q}-\frac{q}{2}\right)^2&q>2,~~~\frac{q^2}{2}-q<z<\frac{q^2}{2}+q,\cr 1-\left(\frac{z}{q}-\frac{q}{2}\right)^2&q<2,~~~q-\frac{q^2}{2}<z<\frac{q^2}{2}+q,\cr 2z&q<2,~~~0<z<q-\frac{q^2}{2}.\end{cases} \end{equation} The physical interpretation of the effective action \eq{ceffa} is the following. The real part of the diagonal components in \eq{hdinv}, $\Re({\hat D}^{-1})^{++}$, describes screening and determines the space-time structure of the quasi-particles, defined by the dispersion relation $\Re({\hat D}^{-1})^{++}_{\omega,\v{q}}=0$. The imaginary part of the diagonal component, $\Im({\hat D}^{-1})^{++}$, reflects Landau damping, the leakage of the Coulomb photons into electron-hole pairs, and generates a finite life-time for the dressed Coulomb photons. The non-diagonal matrix elements in Eq. \eq{hdinv} display entirely different physical processes. Notice that the variables residing on different time axes in the original, microscopic description \eq{pintgf} are coupled at the final time by the boundary conditions only. The couplings between the time axes at intermediate times give rise to non-factorisable terms in the density matrix and stand for entanglement in the system, that of the Coulomb photons interacting with the electron gas in our case.
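The dimensionless expressions above can be checked numerically. The following sketch, our own cross-check with function names of our choosing, implements $(\v{q}^2-L)/k_F^2$ and $r/k_F^2$ and verifies the static, long-wavelength limit, where $\v{q}^2-L$ approaches the Thomas-Fermi value $4\kappa r_sk_F^2/\pi$.

```python
import math

KAPPA = (4.0 / (9.0 * math.pi)) ** (1.0 / 3.0)   # kappa = (4/9 pi)^(1/3)

def re_dinv(z, q, rs):
    """(q^2 - L_{z,q}) / k_F^2, the real part of the one-loop inverse
    Coulomb propagator in the dimensionless variables z, q."""
    a = z / q - q / 2.0
    b = z / q + q / 2.0
    t1 = (1.0 - a * a) / (2.0 * q) * math.log(abs((1.0 + a) / (1.0 - a)))
    t2 = (1.0 - b * b) / (2.0 * q) * math.log(abs((1.0 + b) / (1.0 - b)))
    return q * q - (2.0 * KAPPA * rs / math.pi) * (t1 - t2 - 1.0)

def landau_damping(z, q, rs):
    """r_{z,q} / k_F^2, the Landau damping part, non-zero only for z > 0
    and inside the particle-hole continuum."""
    if z <= 0.0:
        return 0.0
    a = z / q - q / 2.0
    if q > 2.0:
        inside = q * q / 2.0 - q < z < q * q / 2.0 + q
        return KAPPA * rs / q * (1.0 - a * a) if inside else 0.0
    if q - q * q / 2.0 < z < q * q / 2.0 + q:
        return KAPPA * rs / q * (1.0 - a * a)
    if z < q - q * q / 2.0:
        return KAPPA * rs / q * 2.0 * z
    return 0.0

rs = 0.05
print(re_dinv(0.0, 1e-4, rs), 4.0 * KAPPA * rs / math.pi)  # Thomas-Fermi limit
print(landau_damping(0.1, 0.5, rs))                        # inside the continuum
print(landau_damping(1.0, 0.5, rs))                        # outside: zero
```

The printed values confirm that the screening saturates at the Thomas-Fermi scale for $z=0$, $q\to0$, while the Landau damping is non-zero only inside the particle-hole continuum.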
This contribution is purely imaginary in Fourier space, and the suppression produced by it is just decoherence, cf. \eq{effevfsp} below, which indicates that the one-loop entanglement is decoherence. The structure of the CTP self energy, the common factor appearing in the diagonal and off-diagonal imaginary matrix elements in Eq. \eq{hdinv}, shows the identical dynamical origin of the finite life-time and of decoherence, as far as the one-loop quasi-particle structure is concerned. The effective action \eq{ceffa} is actually the influence functional \cite{feynman} for the Coulomb field and the contributions generated by the elimination of the electrons, the self-energy in our approximation, are independent of the boundary conditions imposed on the photon field. By opening the final-time boundary conditions for the photons, the dependence of the generating functional on the final configurations reproduces the reduced density matrix. \section{Classicality}\label{classs} The transition from the quantum to the classical regime is supposed to be driven by decoherence and by the dynamical breakdown of the time reversal invariance. The interaction Lagrangian density, $-e\psi^\dagger_x\psi_xu_x$, is given in terms of the fields; thus one expects that the suppression of the off-diagonal matrix elements of the reduced density matrix will be achieved in the field diagonal basis. The other ingredient of the classical limit, $T\hskip-6pt/$, is to establish durable records of macroscopic events. One should separate two obvious aspects from the non-trivial, dynamical aspects of $T\hskip-6pt/$. The first trivial feature is the appearance of the retarded Li\'enard-Wiechert potentials in classical electrodynamics even though the Maxwell equations are invariant under time reversal. In fact, it is the Cauchy problem with well defined initial conditions which makes the retarded Green functions appear in the classical equations of motion.
Had the final conditions been given to render the time evolution well defined, the advanced Li\'enard-Wiechert potential would have been used. The nontrivial aspect of $T\hskip-6pt/$~in classical physics is the mixing \cite{arnold}, the gradual and irrecoverable spread of information in non-integrable systems. Such a dynamics generates a well defined time arrow for observers equipped with finite resources or resolution. The information represented by the initial conditions is spread in this manner in chaotic, classical dynamical systems. There is no mixing in quantum dynamics due to the linearity of the Schr\"odinger equation, but the increased sensitivity to the initial conditions in the long time evolution of classically chaotic systems \cite{peres} still shows some similarity with the mixing of classical physics. The second triviality is that a finite life-time is not yet $T\hskip-6pt/$. The induced dynamics of a subsystem is unitary; the scalar product of states of the subsystem is obviously preserved in time. It is easy to see that another necessary condition for $T\hskip-6pt/$~beside the finite life-time is that the environment have no gap in its excitation spectrum. Let us consider the natural line width as an example. It generates $T\hskip-6pt/$~because there are infinitely many soft photons wound up around an asymptotic electron state and they cannot be resolved with finite resources. It is instructive to see briefly what happens in the superconducting vacuum. The latter is a macroscopic quantum effect in the absence of $T\hskip-6pt/$: the life-time of excited atomic states remains finite, their decay can be resolved, and the exclusive cross sections are finite because a finite number of photons participate in the dressing. Let us now turn to the inverse problem, the effective dynamics of photons in the presence of electrons.
The opening of channels above the pair creation threshold, where photons decay into real electron-positron pairs, is indicated by the non-vanishing imaginary part of the photon self energy, the generation of a finite life-time for sufficiently energetic photon states. The environment of the photons, the Dirac sea, has a gap and only a finite number of on-shell electron-positron pairs are created. The on-shell nature of the created particles is important because it assures that the energy released by the pair creation process will be diffused. In order to lower the pair creation threshold we consider QED in the presence of a non-vanishing charge density, realized by the introduction of a chemical potential which places the Fermi level into the positive energy continuum of the one-particle spectrum. An infinitesimal energy is now sufficient for the polarization of the Fermi sphere, the creation of particle-hole pairs. A photon can decay into infinitely many on-shell, propagating particle-hole pairs, $|\gamma\rangle\to|\gamma\rangle+|\gamma,p,h\rangle+|\gamma,2p,2h\rangle+\cdots$ where $p$ and $h$ denote particle and hole excitations. Such a spread of the one-photon state into other propagating states of the Fock-space represents a diffusion of the information and appears as the quantum analogue of mixing. It is the restriction of our description to a subsystem in a gap-less, 'diffusive' environment which generates non-unitary time evolution. In other words, $T\hskip-6pt/$~appears as soon as the finite resolution power of observations leads to loss of information. Observations which can resolve all particle-hole pairs surrounding a photon find no violation of the time reversal invariance and could isolate any component of the dressed photon state. Let us now examine our two signatures of the quantum-classical crossover.
To identify first the decoherence mechanism we take the plane wave \begin{equation} u^\pm_{t,\v{x}}=u^\pm_{\omega,\v{q}}\cos(\omega t-\v{q}\v{x}) \end{equation} representing the one-loop dressed quasi-particles and use the parametrization $u^\pm=u\pm v/2$ which yields the effective action \begin{equation}\label{effevfsp} S_\mr{eff}(u,v)=TV\left[u_{\omega,\v{q}}v_{\omega,\v{q}}(\v{q}^2-L_{\omega,\v{q}}) +\frac{i}{2}r_{|\omega|,\v{q}}v_{\omega,\v{q}}^2\right]+\ord{T^0} \end{equation} where $T=t_f-t_i$ and $V$ is the three dimensional volume. The $\ord{T^0}$ term stands for the contribution of the CTP boundary conditions and will be ignored in the limit $T\to\infty$, needed to use the propagator of Appendix \ref{ctpprop}. It is well known that the dependence of the single time-axis path integral on the final coordinates reproduces the wave function(al) of the system. This makes it natural to consider the integrand of the path integral for a given trajectory as the contribution of the trajectory to the transition amplitude. In a similar manner we can interpret the integrand in Eq. \eq{efth} as the contribution of a pair of trajectories to the reduced density matrix of the photon field. The $\ord{T}$ part of the effective action gives the contribution \begin{equation}\label{demt} \Delta\rho\left(u+\frac{v}{2},u-\frac{v}{2}\right)=e^{-\frac{b}{2}v^2+iauv}, \end{equation} of a given mode to the reduced density matrix, up to a field-independent normalization constant, where $a,b=\ord{V}$. This distribution yields finite moments $\langle u^n\rangle$ for large but finite volume and vanishing moments $\langle v^n\rangle=0$. The mixed moments arising in the higher orders of the perturbation expansion can be non-vanishing due to the strong correlation between $u$ and $v$ realized by the imaginary part of the exponent, $\langle u^{2n}\rangle/\langle u^nv^n\rangle=\ord{(b/a)^n}$.
Thus we take \begin{equation} C=\frac{b}{a}\approx\frac{\bar u}{\overline{v}}, \end{equation} the approximate ratio of the diagonal and the off-diagonal fluctuations, as a measure of the strength of the decoherence. The comparison of the effective action \eq{effevfsp} with the logarithm of the density matrix \eq{demt} shows that this quantity, which we shall call classicality, is \begin{equation}\label{class} C_{\omega,\v{q}}=\frac{r_{|\omega|,\v{q}}}{\v{q}^2-L_{\omega,\v{q}}} \end{equation} for the Coulomb field. The decoherence of the Coulomb field modes, the suppression of the off-diagonal matrix elements of the reduced density matrix, arises from the small overlap of the electron-hole polarization clouds for sufficiently different $u^+_{\omega,\v{q}}$ and $u^-_{\omega,\v{q}}$. The CTP formalism is actually used here to calculate the overlap of dressed photon states in the Fock-space, corresponding to the Coulomb fields $u^+$ and $u^-$, rather than expectation values. We now turn to the other signature of the quantum-classical crossover, the issue of $T\hskip-6pt/$. The 'mass-shell' condition is the vanishing of the denominator of the classicality. Thus modes with finite classicality correspond to virtual quasi-particles and give no contribution to the asymptotic states according to the reduction formulas. What we need for classical behavior is a finite life-time due to the diffusion of the state vector into other sectors of the Fock-space rather than due to the virtuality of the state. Thus classical behavior is expected for $C=\infty$ only and $1/C$ can be considered as a distance of the mode in question from the classical domain. Quasi-particles are identified by the pairs $(\Omega_\v{q},\v{q})$ satisfying the 'mass-shell' condition, $\v{q}^2-L_{\Omega_\v{q},\v{q}}=0$, and one associates a harmonic oscillator to each quasi-particle in the canonical quantization of the photon field.
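A small numerical sketch, our own illustration in units $k_F=1$ with the one-loop input of Section \ref{effacf} for wave vectors $q<2$, makes the divergence of the classicality explicit: along the valley $z=q$ the denominator $\v{q}^2-L$ passes through zero near $q\approx0.19$ for $r_s=0.05$, where $|C|$ grows by orders of magnitude.

```python
import math

KAPPA = (4 / (9 * math.pi)) ** (1 / 3)

def classicality(z, q, rs):
    """C_{z,q} = r / (q^2 - L) in units k_F = 1, one-loop input,
    restricted to wave vectors q < 2 (own numerical sketch)."""
    a, b = z / q - q / 2, z / q + q / 2
    lg = lambda x: math.log(abs((1 + x) / (1 - x)))
    lindhard = ((1 - a * a) * lg(a) - (1 - b * b) * lg(b)) / (2 * q) - 1
    denom = q * q - 2 * KAPPA * rs / math.pi * lindhard
    if q - q * q / 2 < z < q * q / 2 + q:     # single-pair continuum
        r = KAPPA * rs / q * (1 - a * a)
    elif 0 < z < q - q * q / 2:
        r = KAPPA * rs / q * 2 * z
    else:
        r = 0.0                               # no Landau damping
    return r / denom

rs = 0.05
for q in (0.05, 0.10, 0.15, 0.1875):
    print(q, classicality(q, q, rs))   # scan along the z = q valley
```

Away from the collective mode $|C|$ remains of order one; close to $q\approx0.1875$, cf. Fig. \ref{denomcl}, the denominator nearly vanishes and the mode approaches the classical domain $C\to\infty$.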
The complication arising from the negative norm of the temporal, Coulomb photons is not essential from the point of view of our problem. We shall see below, in Section \ref{points}, that there are two normal modes and thus two harmonic oscillators corresponding to each sufficiently small wave vector $\v{q}$. In a qualitative, heuristic treatment the classicality appears as the product of two factors, \begin{equation}\label{cltib} C=\frac{E_\mr{diff}}{E_\mr{en}}\cdot R. \end{equation} The first factor is the ratio of the efficiency of the diffusion of the photon state into the particle-hole sector to the efficiency of the energy exchange within the photon sector, for each mode $(\omega,\v{k})$ which contributes to the quasi-particle $(\Omega_\v{k},\v{k})$, i.e. for $\omega\approx\Omega_\v{k}$. The efficiencies $E_\mr{diff}$ and $E_\mr{en}$ are estimated by the squared rate of decrease of the amplitude due to the diffusion of the photon state into the particle-hole sector and by that of the energy exchange within the photon sector, respectively: $E_\mr{diff}\approx(u_m/\tau)^2$, where $u_m$ is the amplitude of the oscillating mode and $\tau$ denotes the life-time, and $E_\mr{en}\approx(u_m\Omega)^2$. The other factor, $R$, represents the correction taking into account that the finite life-time is more important for the quantum-classical crossover if the mode is closer to being 'on-shell'. A convenient distance of a mode $(\omega,\v{q})$ from the 'mass-shell' condition is $|\omega^2-\Omega^2_\v{q}|$, thus we take $R=\Omega^2_\v{q}/|\omega^2-\Omega^2_\v{q}|$ and \eq{cltib} agrees with \eq{class}. In short, the classicality $\Im(D^{++})^{-1}/\Re(D^{++})^{-1}$ balances the roles the 'mass-shell' condition and the finite life-time play in the perturbation expansion. The first factor represents the strength of $T\hskip-6pt/$~and the second one measures the distance from the mass-shell.
Note that the same measure for the strength of decoherence and of $T\hskip-6pt/$~emerges due to the structure of the CTP self-energy on the right hand side of Eq. \eq{hdinv}, which relates the off-diagonal and diagonal matrix elements of the inverse propagator. \section{Generation of pointer states by blocking}\label{points} We shall approach the quantum-classical crossover by starting with a system containing the short distance modes only and by turning on gradually the longer wavelength modes. In other words, we introduce an infrared cut-off in the effective theory \eq{efth}, $k_\mr{IR}$ or $\omega_\mr{IR}$, on the wave vector or the frequency of the allowed modes, respectively. This is the implementation of the strategy of the renormalization group, applied to the density matrix \cite{ed}. Let us consider a sequence of partitions of the system plus environment linear space, ${\cal H}={\cal H}^{(1)}_\Lambda\otimes{\cal H}^{(2)}_\Lambda$, parametrized by a gauge invariant separation scale $\Lambda=k_\mr{IR}$ or $\omega_\mr{IR}$. This scale can be introduced in such a manner that ${\cal H}^{(1)}_0$ is the linear space of the system $S$ and ${\cal H}^{(1)}_\Lambda\subset{\cal H}^{(1)}_{\Lambda'}$ for $\Lambda<\Lambda'$. The renormalization group consists of the construction of the sequence of effective theories and reduced density matrices $\rho_\Lambda$ for the subspace ${\cal H}^{(1)}_\Lambda$. The transformation $\rho_\Lambda\to\rho_{\Lambda'}$, $\Lambda'<\Lambda$, is called blocking. The question we are interested in is the qualitative way classical physics is approached as $\Lambda$ is decreased. Let us first follow the blocking in three-space, where the spatial resolution length of the effective theory, $1/k_\mr{IR}$, is increased to $1/(k_\mr{IR}-\Delta k_\mr{IR})$ by including the modes $k_\mr{IR}-\Delta k_\mr{IR}<k<k_\mr{IR}$ in the effective theory \eq{ceffa}. We follow this procedure qualitatively on the frequency-wave vector plane of Fig.
\ref{imcar} in the simplest non-trivial approximation where the quadratic part of the effective action is kept fixed. As $k_\mr{IR}$ is lowered, modes with wave numbers around $k_\mr{IR}$, the vertical dotted line, appear in the effective theory. What do we know about the modes of the Coulomb photon? These modes are non-propagating at the tree-level due to the absence of frequency dependence in the free propagator \eq{phtpr}. The dressed inverse propagator ${\hat D}^{-1}$ displays frequency dependence and the collective modes, defined by the 'mass-shell' condition $\Re(D^{-1})^{++}_{\omega,\v{q}}=\v{q}^2-L_{\omega,\v{q}}=0$, are propagating. They are located on the dashed line in Fig. \ref{imcar}. The location of this curve can be seen clearly from the plot of $\v{q}^2-L_{\omega,\v{q}}$ in Fig. \ref{denomcl}. The tilted and the horizontal parts of the dashed line of Fig. \ref{imcar} are naturally identified with the zero-sound and plasmon modes, respectively. The corresponding factor of the classicality, $1/(\v{q}^2-L_{|\omega|,\v{q}})$, is depicted in Fig. \ref{ree}. Finally, the complete ratio \eq{class} for the classicality is shown in Fig. \ref{cle}. Further information conveyed by Fig. \ref{imcar} is that the modes between the solid line parabolas can decay into a real particle-hole pair. These modes acquire a non-vanishing imaginary part in their inverse causal propagator calculated in the one-loop approximation, $\Im(D^{-1})^{++}_{\omega,\v{q}}=r_{|\omega|,\v{q}}>0$, and generate a Gaussian suppression of the reduced density matrix in the off-diagonal direction. \begin{figure} \includegraphics[height=3.cm,width=4.cm]{1.eps} \caption{The frequency-wave vector plane of Coulomb photons.}\label{imcar} \end{figure} \begin{figure} \includegraphics[height=4.cm,width=5.cm]{2.eps} \caption{$\v{q}^2-L_{|\omega|,\v{q}}$ plotted on the segment $0.1<q,z<0.3$ of the frequency-wave vector plane for $r_s=0.05$.
The function has a valley stretching from the origin along the line $z=q$. The function is increasing in the valley as we move away from the origin and the minimum of the valley reaches zero at $z\approx q=0.1875$.}\label{denomcl} \end{figure} \begin{figure} \includegraphics[height=4.cm,width=5.cm]{3.ps} \caption{$1/(\v{q}^2-L_{|\omega|,\v{q}})$ as a function of $z$ and $q$ for $r_s=0.05$. The function diverges along the zero-sound/plasmon line but its value is cut at 100 for easier visualization.}\label{ree} \end{figure} \begin{figure} \includegraphics[height=4.cm,width=5.cm]{4.ps} \caption{The classicality, Eq. \eq{class}, as a function of $z$ and $q$ for $r_s=0.05$. The shape of this function is qualitatively similar to that shown in Fig. \ref{ree} except that the part with vanishing $r_{\omega,\v{q}}$ is replaced by zero. The value of the function is cut at 1.5 for easier visualization.}\label{cle} \end{figure} The theory where we cut out the modes $k<k_\mr{IR}\gg k_F$ has no classical regime, having no propagating modes. The situation changes drastically when the vertical dotted line erected at $q=k_\mr{IR}$ approaches the rightmost position of the collective mode curve, which happens at $k_\mr{IR}=q_\mr{cl}$. From this point on, the further decrease of the infrared cut-off brings in modes with diverging classicality and they render the effective theory classical. When the frequency rather than the wave vector of the modes is bounded from below, by $\omega_\mr{IR}$, we find quantum behavior as long as the horizontal dotted line of Fig. \ref{imcar}, drawn at $\omega=\omega_\mr{IR}$, is above the uppermost point of the collective mode curve at $\omega_\mr{cl}$. Classical physics is reached when $\omega_\mr{IR}=\omega_\mr{cl}$. This scheme is closer to the actual experiments aiming at the dynamics of the collective modes, where some energy is injected and becomes dissipated over different length scales.
What happens when higher loop contributions are taken into account in the effective theory? Both the numerator and the denominator of the classicality change. The change of the denominator may shift the zero-sound/plasmon line considerably. This modification is small at high density where the loop-expansion is reliable. The higher order corrections are supposed to render the numerator non-vanishing over the whole $(\omega,\v{q})$ plane by means of the decay of the Coulomb photon into multiple particle-hole pairs in the absence of a gap in the fermion excitation spectrum. This introduces diverging classicality along the whole zero-sound/plasmon line. It is worthwhile noting that the classicality as given in Eq. \eq{class} displays the competition of two tendencies, the classical and the quantum one, represented by the numerator and the denominator, respectively. The numerator is regular in the vicinity of the collective mode curve of Fig. \ref{imcar}. It is the fast changing quantum features traced by the denominator which drive the crossover to classical behavior. It is interesting to follow the way the scales of the quantum-classical crossover change with the density. There are two characteristic plasma scales. One is the plasmon oscillation frequency at vanishing wave vector, \begin{equation} z_\mr{pl}=\sqrt{\frac{4\kappa r_s}{3\pi}}, \end{equation} and the other is the Thomas-Fermi screening wave vector, defined at vanishing frequency, \begin{equation} q^2_\mr{TF}=\frac{4\kappa r_s}{\pi}k_F^2. \end{equation} Both are given here in the one-loop approximation. It was found that the wave vector $q_\mr{cl}$ and the frequency $\omega_\mr{cl}$ of the mode which triggers the quantum-classical transition during the blocking are close to the Thomas-Fermi screening wave vector and the plasmon frequency, respectively. The plot of the ratios $z_\mr{cl}/z_\mr{pl}$ and $q_\mr{cl}/q_\mr{TF}$ as functions of the density is shown in Fig.
\ref{dens}, together with the fitted forms \begin{equation}\label{fitde} z_\mr{cl}\approx1.745\cdot r_s^{-0.046}z_\mr{pl}=0.822\cdot r_s^{0.495}, \end{equation} and \begin{equation}\label{fitdk} q_\mr{cl}\approx1.015\cdot r_s^{-0.046}q_\mr{TF}=0.822\cdot r_s^{0.495}, \end{equation} which are valid at high density, small $r_s$. The relation $z_\mr{cl}\approx q_\mr{cl}$ is not surprising; it is the obvious result of the position of the zeros of $\Re(D^{-1})^{++}$ in the vicinity of $\omega=\v{k}=0$. The non-trivial result is the approximate agreement between $q_\mr{cl}$ and $q_\mr{TF}$. In fact, the former locates the root of $\Re(D^{-1})^{++}_{\omega,\v{q}}$ at $\omega\approx\omega_\mr{cl}\not=0$ while the latter is related to the value of $\Re(D^{-1})^{++}_{\omega,\v{q}}$ for $\omega=0$ and $\v{q}\to0$. Their agreement, i.e. that classical behavior is reached at the screening length, shows the importance of the screening cloud in the classical crossover from the point of view of the charge dynamics. But this is valid only in the absence of an intrinsic scale of the photon dynamics. In fact, let us introduce a finite mass $M^2$ in the photon propagator \eq{phtpr}, which makes the static potential Yukawa-type. This raises $\v{q}^2-L_{|\omega|,\v{q}}$, shown in Fig. \ref{denomcl}, by $M^2$ and shrinks the collective mode curve of Fig. \ref{imcar} towards the origin. The screening is reduced to a simple renormalization of the mass, $M^2\to M^2+\delta M^2$, and the agreement $q_\mr{cl}\approx q_\mr{TF}$ is lost; the massive photon and the charge dynamics now have two distinct scales. Our approximate relation, $q_\mr{cl}\approx q_\mr{TF}$, observed for massless photons, indicates that the renormalization of the mass square of the photon due to the environment, $\delta M^2=q^2_\mr{TF}$, agrees with the crossover scale, $q^2_\mr{cl}\approx\delta M^2$, generated by the same environment at the one-loop level.
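The comparison of these scales can be carried out directly; the sketch below, our own evaluation of the one-loop formulas and of the high-density fit quoted above, shows that the fitted crossover scale stays within a few percent of the Thomas-Fermi wave vector over the high-density range, while it exceeds the $q\to0$ plasmon frequency by a factor of roughly $1.8$.

```python
import math

KAPPA = (4 / (9 * math.pi)) ** (1 / 3)

def plasmon_z(rs):
    """Dimensionless plasmon frequency z_pl at vanishing wave vector."""
    return math.sqrt(4 * KAPPA * rs / (3 * math.pi))

def thomas_fermi_q(rs):
    """Thomas-Fermi screening wave vector in units of k_F."""
    return math.sqrt(4 * KAPPA * rs / math.pi)

def crossover_scale(rs):
    """High-density fit z_cl ~ q_cl ~ 0.822 rs^0.495 quoted in the text."""
    return 0.822 * rs ** 0.495

for rs in (0.01, 0.05, 0.1):
    print(rs,
          crossover_scale(rs) / thomas_fermi_q(rs),  # close to one
          crossover_scale(rs) / plasmon_z(rs))       # of order 1.8
```

The near-unit ratio in the first column is the numerical content of the relation $q_\mr{cl}\approx q_\mr{TF}$ discussed above.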
Naturally these relations receive corrections from higher orders of the loop-expansion. It is interesting to observe that no collective modes appear and no classical behavior is reached for sufficiently large $M^2$. \begin{figure} \includegraphics[height=6.cm,width=5cm,angle=270]{5.eps} \caption{The ratios $z_\mr{cl}/z_\mr{pl}$ and $q_\mr{cl}/q_\mr{TF}$ as functions of $r_s$. The straight lines correspond to the fits \eq{fitde} and \eq{fitdk}.}\label{dens} \end{figure} \section{Conclusions}\label{concls} The contributions to decoherence and to the dynamical breakdown of the time reversal invariance of the Coulomb field dynamics were calculated in the framework of the CTP formalism in this paper. A quantity measuring the contributions of the plane wave modes of the Coulomb field to decoherence and $T\hskip-6pt/$, the classicality, was proposed. It was found in the one-loop approximation that the collective modes, corresponding to the zero-sound and plasmon excitations, have diverging classicality and render the system classical. This result was to be expected; our calculation shows that the relation between decoherence and $T\hskip-6pt/$~is the result of the structure of the CTP propagators and establishes the classicality of the collective modes in a systematic manner as the result of the gap-less excitation spectrum of the environment. The length scale of the quantum-classical crossover is found to be close to the Thomas-Fermi screening length. The dressed quasi-particles, corresponding to the collective modes of the plasma and controlled by the Coulomb field, are formed by minimizing their residual interactions. This minimization makes them dynamically optimized pointer states in every order of the loop-expansion. Another, indirect lesson of this calculation is the suitability of quantum field theoretical models for the problem of measurement theory.
The advantages are (i) the possibility to consider a large number of degrees of freedom, (ii) the ability to test dynamical symmetry breaking, such as the loss of invariance under time inversion, and (iii) the accessibility of the reduced density matrix in the CTP formalism. Finally, the strategy of the renormalization group seems to be well suited to address the quantum-classical transition. \acknowledgments It is a pleasure to thank Janos Hajdu for stimulating discussions.
\section{Introduction} In multilayered cuprate superconductors, which have three or more $\rm CuO_{2}$ layers in their unit cells, the superconducting transition temperature $T_{\rm c}$ takes its highest value when the number of $\rm CuO_{2}$ layers $n$ is 3 \cite{Iyo2007, Chakravarty2004}. The $T_{\rm c}$ reduction for increasing $n$ can be understood in terms of inequivalent doping among the layers. However, the mechanism which enhances the $T_{\rm c}$ for the first few layers is still unknown and may be related to the mechanism of high-$T_{\rm c}$ superconductivity. Nuclear magnetic resonance (NMR) and angle-resolved photoemission spectroscopy (ARPES) have shown evidence of the inequivalent doping among layers. The inner $\rm CuO_{2}$ planes (IPs) with square oxygen coordination have lower hole doping levels than the outer $\rm CuO_{2}$ planes (OPs) with pyramidal oxygen coordination \cite{Trokiner1991, Statt1993, Kontos1998, Piskunov1998, Tokunaga1999, Kotegawa2001, Ideta2009, Chen2009}. In the NMR experiments, the remarkable coexistence of antiferromagnetic (AFM) IPs with superconducting (SC) OPs has been reported in the underdoped regime \cite{Kotegawa2004, Mukuda2006, Kitaoka2007, Shimizu2009}. It has been shown theoretically that high-$T_{\rm c}$ superconductivity is possible in this AFM and SC coexisting state \cite{Mori2005}. In that case, a finite Josephson coupling between SC OPs through the AFM IP provides bulk superconductivity. Thus, the physical properties of the underdoped region of multilayered superconductors are intriguing. In the overdoped to slightly underdoped region, several anisotropic properties of the trilayered superconductor ${\rm Bi}_{2}{\rm Sr}_{2}{\rm Ca}_{2}{\rm Cu}_{3}{\rm O}_{10+\delta}$ (Bi2223) have been measured and a sizable change in anisotropy was reported \cite{Piriou2008}. 
The anisotropy was lower than that of the double-layered superconductor ${\rm Bi}_{2}{\rm Sr}_{2}{\rm Ca}{\rm Cu}_{2}{\rm O}_{8+\delta}$ (Bi2212), indicating that interlayer coupling of the three $\rm CuO_{2}$ layers exists. Bi2223 ($T_{\rm c,max}$ = 110 K) is one of the few multilayered cuprates of which we can obtain large single crystals, on the order of millimeters \cite{Fujii2001, Liang2002, Giannini2004, Tokiwa1998, Iyo2004}, and is the simplest system among multilayered cuprates. In this paper, we first report on the growth of Bi2223 crystals suitable for underdoping. Then, the doping dependence of fluctuation diamagnetism in Bi2223 is extracted from magnetization measurements. The extracted diamagnetism above $T_{\rm c}$ is expected to be qualitatively the same as the diamagnetism which Li \textit{et al.} observed through Nernst and torque magnetization experiments \cite{Li2010}. \section{Experimental} Two techniques were used to reduce the doping level and drive ${\rm Bi}_{2+x}{\rm Sr}_{2-x}{\rm Ca}_{2}{\rm Cu}_{3}{\rm O}_{10+\delta}$ into the highly underdoped regime. The first is to anneal samples under various oxygen partial pressures. Oxygen annealing controls the excess oxygen content $\delta$, which supplies hole carriers to the $\rm CuO_{2}$ plane. The hole doping level can be decreased by lowering the oxygen partial pressure \cite{Watanabe1997}. The second method is a substitution of the trivalent cation $\rm Bi^{3+}$ for the divalent cation $\rm Sr^{2+}$. The hole density is expected to decrease with Bi substitution if the oxygen content is unchanged. In Bi2212, a hole density reduction due to this substitution was confirmed through a $T_{\rm c}$ reduction \cite{Yamashita2009}. ${\rm Bi}_{2+x}{\rm Sr}_{2-x}{\rm Ca}_{2}{\rm Cu}_{3}{\rm O}_{10+\delta}$ single crystals with $x=0.1$ can be grown by a traveling solvent floating zone (TSFZ) method \cite{Kulakov2006, Fujii2002, Eisaki2004}. 
However, no successful single crystal growth of Bi2223 with $x>0.1$ has been reported, because the growth is difficult even for the intrinsically substituted $x=0.1$ sample. Here, using almost the same technique as for $x=0.1$, we succeeded in growing crystals of Bi2223 with $x=0.2$. Powders of $\rm Bi_{2}O_{3}$, $\rm SrCO_{3}$, $\rm CaCO_{3}$, and CuO (all of 99.9 \% or higher purity) were mixed in the desired cation ratio Bi : Sr : Ca : Cu = 2.2 : 1.8 : 2 : 3, ground in an agate mortar, and then calcined at 760 $^\circ\mathrm{C}$ for 12 hours. The calcined powders were well reground and again calcined at 780 $^\circ\mathrm{C}$ for 12 hours. After that, the powders were hydrostatically pressed into a cylindrical rod under 40 MPa and then sintered at 860 $^\circ\mathrm{C}$ for 50 hours. The sintered rod was cut into a long ($\sim7$ cm) and a short ($\sim2$ cm) part. The long one was hung on the upper shaft of an infrared radiation furnace equipped with two ellipsoidal mirrors (NEC-SCI\hspace{-.1em}I-EDH). The short one was held on the lower shaft. The long one was pre-melted using the short one as a base for the crystal growth at a rate of 25 mm/h in air. The pre-melted rod was cut into the pre-melted part ($\sim 6$ cm) and the base (seed) part ($\sim 2$ cm). The pre-melted and the seed rod were held on the upper and the lower shafts respectively, and both shafts were counter-rotated at a rate of 11 rpm. A crystal rod was grown using a very slow rate of 0.04 mm/h for the pre-melted rod. In order to realize as fast a crystal growth rate as possible, 300 W halogen lamps were used as light sources, which gave a steep temperature gradient near the solid-liquid interface. It also prevents the swelling of the solid-liquid interfaces on both the pre-melted and grown-crystal sides, which would disturb steady growth. Under these conditions, a crystal rod 47 mm in length and 5.5 mm in diameter, as shown in Fig. \ref{fig:crystal}(a), was grown. 
\begin{figure}[t] \begin{center} \begin{tabular}{cc} \begin{minipage}{0.682\hsize} \begin{center} \includegraphics[width=5.797cm,clip]{64203Fig1a.eps} \end{center} \end{minipage} \begin{minipage}{0.318\hsize} \begin{center} \includegraphics[width=2.703cm,clip]{64203Fig1b.eps} \end{center} \end{minipage} \end{tabular} \caption{(Color online) (a) The grown crystal rod, 47 mm in length and with a typical diameter of 5.5 mm. The linear scale is in centimeters. (b) One of the Bi2223 single crystals cleaved from the rod shown in (a).}{} \label{fig:crystal} \end{center} \end{figure} From the grown crystal rod, one crystal with dimensions up to $2\,{\rm mm} \times 3\,{\rm mm} \times 0.1\,{\rm mm}$ was cleaved and is shown in Fig. \ref{fig:crystal}(b). Each part of the as-grown rod was evaluated by X-ray diffraction (XRD), magnetization and resistivity measurements. XRD measurements were done on a diffractometer (Rigaku RINT-TTRI\hspace{-.1em}I\hspace{-.1em}I) using a conventional $\theta-2\theta$ method to check phase purity and determine the $c$-axis length. $T_{\rm c}$ was determined from magnetization measurements under a magnetic field of 4 Oe using a SQUID magnetometer (Quantum Design MPMS-7). Resistivity was measured by a standard four-probe method. After finishing the above sample characterizations, the highest quality crystal among them, whose weight was 8.6 mg, was selected. After each of the annealing conditions (1) $\rm O_{2}$ : 760 torr, 400 $^\circ\mathrm{C}$, 72 hours; (2) $\rm O_{2}$ : 760 torr, 600 $^\circ\mathrm{C}$, 24 hours; (3) $\rm O_{2}$ : 7.6 torr, 600 $^\circ\mathrm{C}$, 24 hours; (4) $\rm O_{2}$ : $7.6\times10^{-2}$ torr, 600 $^\circ\mathrm{C}$, 24 hours; (5) $\rm O_{2}$ : $7.6\times10^{-3}$ torr, 600 $^\circ\mathrm{C}$, 12 hours, $T_{\rm c}$ and the $c$-axis length were determined, and then the anisotropic normal state susceptibilities $\chi_{ab}$ with $H\perp c$ and $\chi_{c}$ with $H\parallel c$ were measured under a magnetic field of 50 kOe using the MPMS. 
For each annealing process, 1 hour was taken to raise the temperature to the target value, while the oxygen pressure was kept at the target value. At the end of each annealing step, the sample was quenched to room temperature in 20 minutes in the same atmosphere in order to prevent further oxygen uptake or loss. Superconducting fluctuation (SCF) diamagnetic components were extracted from the normal state susceptibility data for each annealing condition. \section{Results and Discussion} \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm,clip]{64203Fig2.eps} \caption{(Color online) (a) X-ray diffraction peaks of the as-grown sample with $x=0.2$. (b) Temperature dependence of DC susceptibility for the same sample with a magnetic field of 4 Oe parallel to the $ab$-plane. (c) Temperature dependence of in-plane resistivity for another as-grown sample.}{} \label{fig:XRD-sus-res} \end{center} \end{figure} Figure \ref{fig:XRD-sus-res}(a) shows XRD peaks in the highest quality $c$-axis oriented as-grown crystal. Only reflections from Bi2223 can be seen. Figure \ref{fig:XRD-sus-res}(b) shows the temperature dependence of the magnetic susceptibility in a magnetic field of 4 Oe parallel to the $ab$-plane. Clear Meissner diamagnetism was observed below about 110 K. Throughout this report, $T_{\rm c}$s are defined as the temperatures at which linear extrapolations of the steepest part of the shielding signals cross zero susceptibility. Using this definition, the $T_{\rm c}$ of the main phase was estimated to be 110 K. A step-like feature at around 80 K, which may be attributed to a Bi2212 impurity phase, was also observed in the shielding signal. Since XRD did not show any traces of a bulk Bi2212 phase, it might be due to Bi2212 intergrowth, existing even in the highest quality crystal. By comparing the magnitude of the susceptibility at 80 K with the magnitude at 10 K, the relative volume fraction of the Bi2223 phase to the overall superconducting phase was estimated to be more than 60 \%. 
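The $T_{\rm c}$ criterion above (linear extrapolation of the steepest part of the shielding signal to zero susceptibility) can be sketched numerically. The synthetic shielding curve below is an assumption for illustration only; the routine simply finds the steepest slope of $\chi(T)$ and extrapolates the tangent line to $\chi=0$.

```python
import numpy as np

def estimate_tc(T, chi):
    """Extrapolate the steepest part of the shielding signal chi(T) to zero.

    T is assumed sorted ascending; chi is the (negative) shielding signal.
    """
    dchi = np.gradient(chi, T)
    i = np.argmax(dchi)            # steepest rise of chi(T) toward zero
    # Tangent line through (T[i], chi[i]) with slope dchi[i]; solve chi = 0.
    return T[i] - chi[i] / dchi[i]

# Synthetic shielding curve: chi rises linearly from -1 to 0 between 100 K and 110 K
T = np.linspace(80, 120, 201)
chi = np.clip((T - 110.0) / 10.0, -1.0, 0.0)
Tc = estimate_tc(T, chi)  # ~110 K
```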
The temperature dependence of the in-plane resistivity of a crystal cleaved from the same part of the rod is shown in Fig. \ref{fig:XRD-sus-res}(c). It showed typical $T$-linear behavior above $T_{\rm c}$ and zero resistivity below 108.5 K. \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm,clip]{64203Fig3.eps} \caption{(Color online) Comparison of the relationships between $T_{\rm c}$ and $c$-axis length for $x=0.2$ (filled circles) and $x=0.1$ (open circles). The attached numbers correspond to the annealing conditions (1) to (5) for both the $x=0.1$ and 0.2 data. Data for $x=0.1$ were taken from ref. \cite{Fujii2002}.}{} \label{fig:c-Tc} \end{center} \end{figure} Next, let us discuss the precise determination of the $c$-axis lengths in the Bi2223 crystal for $x=0.2$. The $c$-axis length free from systematic measurement errors is usually estimated by an extrapolation method using the Nelson-Riley (NR) function $(\cos^{2}\theta/\sin\theta+\cos^{2}\theta/\theta)/2$. If the $c$-axis lengths obtained from (00$L$) peaks, \textit{i.e.}, $dL$, where $d = \lambda/(2\sin\theta_{L})$ and $L$ is the Miller index of the (001) direction, are plotted against the NR function and the plots are fitted with a straight line, an accurate $c$-axis length can be obtained by extrapolating the fitting line to $\theta = \pi/2$. However, the plots for our crystal with $x=0.2$ did not obey this function. It is known that if there is intergrowth of a second phase in a layered structure, $dL$ can oscillate as a function of $L$ with a period $c/\Delta c$, where $\Delta c$ is the difference between the $c$-axis lattice length of the main phase and that of the intergrown phase \cite{Kulakov2006}. However, due to the strong and somewhat diverging oscillation in the plots of our sample, it was difficult to execute a reliable extrapolation. 
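The Nelson-Riley extrapolation described above can be sketched in a few lines: compute $dL$ from each (00$L$) peak via Bragg's law, fit $dL$ linearly against the NR function, and read off the intercept, since the NR function vanishes at $\theta=\pi/2$. The Cu K-$\alpha$ wavelength below is an assumption for illustration and is not stated in the text.

```python
import numpy as np

def nelson_riley(theta):
    """Nelson-Riley function; vanishes at theta = pi/2."""
    return 0.5 * (np.cos(theta) ** 2 / np.sin(theta) + np.cos(theta) ** 2 / theta)

def c_axis_nr(theta, L, wavelength=1.5406):
    """Extrapolate dL = L * lambda / (2 sin theta) against the NR function.

    The intercept of the linear fit is the c-axis length with systematic
    errors removed. The wavelength (Cu K-alpha, in angstroms) is assumed.
    """
    dL = L * wavelength / (2.0 * np.sin(theta))
    slope, intercept = np.polyfit(nelson_riley(theta), dL, 1)
    return intercept
```

With exact synthetic peak positions for a known $c$, the extrapolation simply recovers that $c$.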
Here, we computed $c$-axis lengths for $x=0.2$ by averaging $dL$ weighted by $\tan\theta_{L}$, assuming the uncertainty of $dL$ to be proportional to $\cot\theta_{L}$: $c=\sum_{L}dL\tan\theta_{L}/\sum_{L}\tan\theta_{L}$. This corresponds to a $\theta$ average of $dL$ with a weight of $\sin\theta$. We confirmed that this method yields nearly equivalent or slightly overestimated $c$-axis lengths for the $x=0.1$ samples if we use the diffraction data in ref. \cite{Fujii2002}. In Fig. \ref{fig:c-Tc}, the relationship between $T_{\rm c}$ and $c$-axis length for the sample with $x=0.2$ is shown by filled circles. For comparison, $x=0.1$ data from ref. \cite{Fujii2002} are also plotted with open circles. In cuprate superconductors, there is a general tendency that the $c$-axis length decreases with increasing hole doping due to stronger Coulomb attraction between the negative oxygen ions and the in-plane hole carriers. In keeping with this tendency, our $x=0.2$ crystal shows a qualitatively similar $T_{\rm c}$ - $c$-axis length relationship to that of the $x=0.1$ sample. However, several differences are evident. First, our sample has shorter $c$-axis lengths for the same annealing conditions, except for condition (1), the strongest oxidation condition. From this, we can confirm that Bi substitution with $x>0.1$ is achieved. Second, the $T_{\rm c}$ saturation observed on the overdoped side for $x=0.1$ seems not to be realized even for the same oxidation condition. For the $x=0.2$ sample, the $c$-axis contracted little from oxidation condition (2) to (1), while it contracted substantially in the $x=0.1$ sample. This indicates that oxidation becomes more difficult on the overdoped side in the $x=0.2$ sample, possibly due to the Bi substitution. The effect of Bi substitution in Bi2212 was studied in detail in ref. 
\cite{Yamashita2009} and summarized as follows: (1) the reduction of the doping level is proportional to the increase in Bi content, (2) the maximum superconducting transition temperature $T_{\rm c,max}$ is reduced, (3) there is an increase in excess oxygen. \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm,clip]{64203Fig4.eps} \caption{(Color online) Temperature and doping dependences of the anisotropic normal state susceptibilities (a) $\chi_{ab}(T)$ ($H\perp c$) and (b) $\chi_{c}(T)$ ($H\parallel c$) under a magnetic field $H=50$ kOe.}{} \label{fig:susab-susc} \end{center} \end{figure} In Bi2212, the excess negative charges introduced to the plane by substituted Bi ions are partially compensated by the increase in excess oxygen, and only 20 \% of the substituted Bi contributes to the reduction in doping level. Since the $x=0.2$ and 0.1 samples of Bi2223 show comparable $T_{\rm c}$s for the same oxidation conditions, excess Bi is thought to be fully compensated by the additionally absorbed oxygen. This gives a further lattice contraction along the $c$-axis, as can consistently be seen in Bi2212 \cite{Yamashita2009}. Moreover, the increase in the oxygen uptake due to substituted Bi makes further oxidation difficult on the overdoped side. In addition, unexpectedly, $T_{\rm c,max}$ of the $x=0.2$ sample is essentially the same as that of the $x=0.1$ sample, while in Bi2212 about an 8 K reduction in $T_{\rm c,max}$ was reported \cite{Eisaki2004}. This is a novel feature of the multilayered structure, where the bulk $T_{\rm c}$ is believed to be determined by the highest one among the intrinsic $T_{\rm c}$s of the nonuniformly doped individual $\rm CuO_{2}$ layers. On the overdoped side of multilayered superconductors, $T_{\rm c}$ is thought to be determined by the optimally doped IP \cite{Fujii2002,Tokunaga2000}. Disorder introduced by Bi$^{3+}$ ions on the Sr site could only affect the OP, which does not directly govern the $T_{\rm c}$ of the sample. 
Thus, in this $x=0.2$ case, the disorder might not affect the IP and $T_{\rm c,max}$ was kept unchanged. The invariable $T_{\rm c,max}$ indicates that the interaction between the IP and the $\rm BiO-SrO$ layer is very weak in the optimally or slightly overdoped region. \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm,clip]{64203Fig5.eps} \caption{(Color online) $\chi_{ab}-\chi_{c}$ plots with temperature $T$ as an implicit parameter for each doping level.}{} \label{fig:chiab-chic-asgrown-anneal5} \end{center} \end{figure} In Fig. \ref{fig:susab-susc}, the temperature dependences of the anisotropic normal state susceptibilities $\chi_{ab}$ and $\chi_{c}$ at various doping levels are shown. Here, we assume that these anisotropic susceptibilities are composed of five components as follows: \begin{equation}\label{chi_alpha} \begin{split} \chi_{\alpha}(T) = &\chi^{\rm dia} + \chi^{\rm VV}_{\alpha} + 2\chi^{\rm spin}_{{\rm OP},\alpha}(T) \\ &+\chi^{\rm spin}_{{\rm IP},\alpha}(T)+\chi^{\rm FD}_{\alpha}(T) \end{split} \end{equation} where $\chi^{\rm dia}$ is the isotropic Larmor diamagnetic susceptibility, $\chi^{\rm VV}_{\alpha}$ is the anisotropic Van Vleck paramagnetic susceptibility, $\chi^{\rm spin}_{{\rm OP},\alpha}(T)$ and $\chi^{\rm spin}_{{\rm IP},\alpha}(T)$ are the anisotropic spin susceptibilities in the OP and IP, respectively, and $\chi^{\rm FD}_{\alpha}(T)$ is the anisotropic diamagnetic susceptibility due to fluctuation diamagnetism above $T_{\rm c}$. Both $\chi^{\rm dia}$ and $\chi^{\rm VV}_{\alpha}$ are expected to be small and temperature- and doping-independent over the experimental temperature range. A possible contribution to $\chi_{\alpha}$ from Landau diamagnetism is absorbed into the spin susceptibilities because it has the same temperature dependence as $\chi^{\rm spin}(T)$. The overall magnitudes of $\chi_{ab}$ and $\chi_{c}$ monotonically decrease with decreasing doping level. 
This arises from a reduction in the spin susceptibility due to the decrease in the electronic density of states (DOS) near the Fermi level, as well as the opening of a pseudogap in the underdoped regime. The absolute values and the temperature dependences of $\chi_{ab}$ for the as-grown, anneal(1), anneal(3), and anneal(5) samples in this work are comparable to those of $\chi_{ab}$ with $x=0.1$ for the corresponding annealing conditions in ref. \cite{Fujii2002}. Now let us discuss the temperature dependence of $\chi_{ab}$ and $\chi_{c}$ at each doping level. The gradual decrease in the susceptibilities can be seen from well above $T_{\rm c}$. This decrease should be caused by the opening of a pseudogap. However, we did not determine the characteristic temperature $T^{*}$ of the opening of the pseudogap, since $T^{*}$ is expected to become very high in the underdoped regime, as in the case of Bi2212. When the temperature is further lowered to the vicinity of $T_{\rm c}$, $\chi^{\rm FD}_{\alpha}$ becomes significant. \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm,clip]{64203Fig6.eps} \caption{(Color online) $T-T_{\rm c}$ dependence of $\delta\chi^{\rm FD}$ for each doping level, where the $T_{\rm c}$s are determined by magnetization measurements under a magnetic field of 4 Oe. The inset shows temperature and doping dependences of $\log \lvert \delta\chi^{\rm FD}\rvert$. Each solid line is drawn by fitting the data from $T_{\rm FD}$ down to a temperature 15 K below $T_{\rm FD}$. The absolute value of the slope of the fitting line, corresponding to the exponent $\alpha$, increases with increasing doping level.}{} \label{fig:deltachiscf} \end{center} \end{figure} The fluctuation diamagnetic component can be extracted from the original data if the background normal state susceptibility is known. However, it is apparently temperature-dependent in this case and there is no reliable way to estimate it. 
Here, we used the strong anisotropy in the fluctuation diamagnetism to extract it. First, $\chi_{c}(T)$ was plotted as a function of $\chi_{ab}(T)$ with temperature $T$ as an implicit parameter, following the method described in refs. \cite{Watanabe2000, Matsuda2001}. The plots are shown in Fig. \ref{fig:chiab-chic-asgrown-anneal5} for each doping level. In the region far above $T_{\rm c}$, they showed linear relations with almost the same slope. This indicates that the temperature-dependent spin susceptibility has a doping-independent anisotropy ratio. The deviation from the linear background is attributed to $\chi^{\rm FD}_{\alpha}$, which has a different anisotropy ratio from that of the spin susceptibility. Based on simple Gaussian superconducting fluctuation theory \cite{Tinkham}, the anisotropy in $\chi^{\rm FD}_{\alpha}$ comes from the anisotropy in the effective shape of the fluctuating domain, whose dimension is proportional to $\xi_{ab}\times\xi_{ab}\times\xi_{c}$, where $\xi_{ab}$ and $\xi_{c}$ are the in-plane and out-of-plane coherence lengths, respectively. $\chi^{\rm FD}_{\alpha}$ can be estimated as the vertical deviation from the linear background assuming $\chi^{\rm spin}_{{\rm OP},\alpha}(T)=(g^{\rm OP}_{\alpha})^{2}\chi^{\rm spin}_{\rm OP}(T), \chi^{\rm spin}_{{\rm IP},\alpha}(T)=(g^{\rm IP}_{\alpha})^{2}\chi^{\rm spin}_{\rm IP}(T)$, $g^{\rm IP}_{\alpha}=\gamma g^{\rm OP}_{\alpha}$, where $\gamma$ is a constant and $g_{\alpha}$ is the gyromagnetic ratio of the conduction electrons, and that $g^{\rm IP}_{\alpha}$ and $g^{\rm OP}_{\alpha}$ are temperature- and doping-independent. 
Using eq. (\ref{chi_alpha}) and the above assumptions, the fluctuation diamagnetic component $\delta\chi^{\rm FD}$ is calculated to be \begin{equation} \begin{split} \delta\chi^{\rm FD}(T)&=\chi_{c}(T)-\left(\frac{g^{\rm OP}_{c}}{g^{\rm OP}_{ab}}\right)^{2}\chi_{ab}(T)-\chi_{0}\\ &=\chi^{\rm FD}_{c}(T)-\left(\frac{g^{\rm OP}_{c}}{g^{\rm OP}_{ab}}\right)^{2}\chi^{\rm FD}_{ab}(T) \\ &\sim\chi^{\rm FD}_{c}(T) \end{split} \end{equation} where $(g^{\rm OP}_{c}/g^{\rm OP}_{ab})^{2}$ and $\chi_{0}\equiv\{1-(g^{\rm OP}_{c}/g^{\rm OP}_{ab})^{2}\}\chi^{\rm dia}+\chi^{\rm VV}_{c}-(g^{\rm OP}_{c}/g^{\rm OP}_{ab})^{2}\chi^{\rm VV}_{ab}$ correspond to the slope and intercept of the linear part of the $\chi_{ab}-\chi_{c}$ plot, respectively. Since $\chi^{\rm FD}_{ab}$ is thought to be much smaller than $\chi^{\rm FD}_{c}$ and $(g^{\rm OP}_{c}/g^{\rm OP}_{ab})^{2}$ is less than 1 (see Fig. \ref{fig:chiab-chic-asgrown-anneal5}), $\delta\chi^{\rm FD}$ is nearly equal to $\chi^{\rm FD}_{c}$. Thus, $\chi^{\rm FD}_{c}$ can be estimated under the above assumptions. The $T-T_{\rm c}$ dependences of $\delta\chi^{\rm FD}$ for various doping levels are shown in Fig. \ref{fig:deltachiscf}. Here, we used the $T_{\rm c}$s determined by magnetization measurements under a magnetic field of 4 Oe as the superconducting transition temperatures. The $\delta\chi^{\rm FD}$s increase divergently toward $T_{\rm c}$. We estimated the characteristic temperature $T_{\rm FD}$ at which $\delta\chi^{\rm FD}$ becomes 2 \% of the full value at $T_{\rm c}$ for anneal(5). The obtained $T_{\rm FD}$s were 134 K, 132 K, 130 K, and 116 K for the as-grown sample and the samples annealed under conditions (1), (3), and (5), respectively, and they are plotted against $\delta c$ (the variation in the $c$-axis length from that of the as-grown sample), together with the corresponding $T_{\rm c}$s of 110 K, 108 K, 99 K, and 77 K, respectively, in Fig. \ref{fig:Tc-Tscf}. 
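The background-subtraction step in eq. (2) amounts to fitting $\chi_{c}$ linearly against $\chi_{ab}$ far above $T_{\rm c}$ and taking the vertical deviation from that line. The sketch below uses synthetic data (a linear spin background plus an assumed diamagnetic tail) purely to illustrate the procedure; the numbers are not those of the experiment.

```python
import numpy as np

def extract_fd(T, chi_ab, chi_c, T_fit_min):
    """Fit chi_c vs chi_ab linearly for T > T_fit_min, where the fluctuation
    term is negligible, then return the vertical deviation
    delta_chi_FD = chi_c - (slope * chi_ab + intercept), as in eq. (2)."""
    mask = T > T_fit_min
    slope, intercept = np.polyfit(chi_ab[mask], chi_c[mask], 1)
    return chi_c - (slope * chi_ab + intercept)

# Synthetic check: linear spin background plus a diamagnetic tail below 150 K
T = np.linspace(115, 300, 100)
chi_ab = 2e-4 + 1e-7 * T
fd_true = np.where(T < 150, -1e-5 / (T - 110), 0.0)
chi_c = 0.8 * chi_ab + 3e-5 + fd_true
delta = extract_fd(T, chi_ab, chi_c, 200.0)  # recovers fd_true
```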
For each doping level, $T_{\rm FD}$ was about 30 K higher than $T_{\rm c}$, and these results are similar to the experimental results for Bi2212 reported by Wang \textit{et al.} \cite{Wang2005}, demonstrating that $\delta\chi^{\rm FD}$ is a fluctuation diamagnetic component. $\delta\chi^{\rm FD}$ also increases with decreasing doping. This behavior can be interpreted as the difference between $T_{\rm c}$ and $T_{\rm FD}$ increasing with decreasing doping. $\chi^{\rm FD}_{c}$ is expressed as a function of the doping level $p$ and temperature $T$ as follows: \begin{equation} \begin{split} \chi^{\rm FD}_{c}(p,T)\propto \frac{k_{\rm B}T}{V(p,T)}\xi^{2}_{ab}(p,T)\langle r^{2}\rangle _{\rm eff}(p,T) \end{split} \end{equation} where $V\sim\pi\xi^{2}_{ab}\cdot{\rm max}(\xi_{c}(p,T),d)$ is the coherence volume, $d$ is the distance between Cu-O blocks, and $\langle r^{2}\rangle _{\rm eff}$ is the mean-square radius of the fluctuating domain, which is roughly represented as $\langle r^{2}\rangle _{\rm eff}\sim\xi^{2}_{ab}$ for $H\parallel c$ in this case. Consequently, $\chi^{\rm FD}_{c}$ is calculated to be \begin{equation} \begin{split} \chi^{\rm FD}_{c}(p,T)\propto k_{\rm B}T\frac{\xi^{2}_{ab}(p,T)}{{\rm max}(\xi_{c}(p,T),d)}. \end{split} \end{equation} $\delta\chi^{\rm FD}$ is plotted as a function of $\ln(T-T_{\rm c})$ in the inset of Fig. \ref{fig:deltachiscf}. $\delta\chi^{\rm FD}$ exhibits a divergent temperature dependence $(T-T_{\rm c})^{\alpha}$ with a doping-dependent exponent $\alpha$ varying from $-1.4$ to $-2.0$. In addition, the exponents $\alpha$ determined by our experiments are rather lower than that expected for a classical 2D superconductor ($\alpha=-1$) \cite{Tinkham}. The same analysis on Bi2212 gave an essentially doping-independent exponent $\alpha\sim-2.3$ \cite{Matsuda2001}, lower than that for any doping of Bi2223. 
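The exponent $\alpha$ in $\lvert\delta\chi^{\rm FD}\rvert\sim(T-T_{\rm c})^{\alpha}$ is obtained from the slope of a log-log fit, as in the inset of Fig. 6. The sketch below demonstrates the fit on synthetic data with an assumed exponent of $-1.7$, chosen only because it lies in the reported range.

```python
import numpy as np

def fd_exponent(T, delta_chi_fd, T_c):
    """Estimate alpha in |delta_chi_FD| ~ (T - T_c)**alpha by a linear fit of
    log|delta_chi_FD| versus log(T - T_c)."""
    x = np.log(T - T_c)
    y = np.log(np.abs(delta_chi_fd))
    return np.polyfit(x, y, 1)[0]

# Synthetic data with a known exponent alpha = -1.7 (an assumed value)
T_c = 110.0
T = T_c + np.linspace(1.0, 15.0, 30)
delta = -2e-6 * (T - T_c) ** -1.7
alpha = fd_exponent(T, delta, T_c)  # recovers -1.7
```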
Recent Nernst experiments for cuprates indicate that unbound vortex-antivortex pairs exist well above $T_{\rm c}$ and disorder the long-range phase coherence \cite{Li2010}. In such a vortex-liquid state, the temperature dependence of the correlation length $\xi'$ is expressed as \begin{equation} \begin{split} \xi '_{\alpha}&\propto\exp\left[\left(\frac{B}{T-T_{\rm c}}\right)^{1/2}\right] \end{split} \end{equation} where $B$ is a constant \cite{Halperin1979}. Since the exponential temperature dependence of $\xi '_{\alpha}$ is stronger than any power-law temperature dependence of the form $(T-T_{\rm c})^{\alpha}$, the magnitude of the nominal exponent will be enhanced if $\xi_{\alpha}$ includes some component of $\xi '_{\alpha}$. The disagreement between the observed exponents of Bi2223 or Bi2212 and the one expected from simple Gaussian fluctuation theory implies that both Bi2223 and Bi2212 include a vortex-liquid region above $T_{\rm c}$. The decrease in $\left|\alpha\right|$ of Bi2223 on doping suggests a decrease of the vortex-liquid domain in the sample and/or that the system becomes even less two-dimensional. This behavior is associated with a more rapid weakening of the interlayer coupling of Bi2223 on underdoping than that of Bi2212. The deviation of the plots from the fitting line near $T_{\rm c}$ may result from the temperature dependence of $\chi^{\rm FD}_{c}$ becoming weaker as ${\rm max}(\xi_{c},d)$ changes from $d$ to $\xi_{c}$ owing to the divergence of $\xi_{c}$ on cooling toward $T_{\rm c}$. \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm,clip]{64203Fig7.eps} \caption{(Color online) The superconducting transition temperatures $T_{\rm c}$ and the fluctuation-diamagnetism onset temperatures $T_{\rm FD}$ are shown with filled triangles and open inverted triangles, respectively. The $T_{\rm FD}$s show a dome-shaped doping dependence, as do the $T_{\rm c}$s. $\delta c$ is the variation in the $c$-axis length from that of the as-grown sample. 
Since the $T_{\rm FD}$ curve closely resembles the $T_{\rm onset}$ curve in the phase diagram of Bi2212 reported by Wang \textit{et al.} \cite{Wang2005}, the same physical property might have been detected in our DC susceptibility measurements and their Nernst experiments. }{} \label{fig:Tc-Tscf} \end{center} \end{figure} One possible origin of such modification is the existence of the IP in Bi2223. In this system, nonuniformly doped $\rm CuO_{2}$ layers were confirmed by ARPES measurements, which estimated the doping level of the IP to be 7 \%, while that of the OP was 23 \% in optimally doped samples \cite{Ideta2009}. According to the ARPES result, slight underdoping can easily drive the IP insulating, or possibly into an AFM-ordered state. The insulating IP disrupts superconducting coherence among the three layers (OP-IP-OP) and decouples the system into electromagnetically coupled OPs in the underdoped regime. \section{Conclusions} High quality single crystals of ${\rm Bi}_{2+x}{\rm Sr}_{2-x}{\rm Ca}_{2}{\rm Cu}_{3}{\rm O}_{10+\delta}$ (Bi2223) with $x=0.2$ were grown using a traveling solvent floating zone method in order to investigate the dimensionality of superconductivity in highly underdoped Bi2223 crystals. The obtained crystals were characterized by X-ray diffraction, DC susceptibility and resistivity measurements, confirming Bi2223 to be the main phase. The highest quality crystal was annealed under various oxygen partial pressures, tuning its carrier density from slightly underdoped to highly underdoped. From the relationship between $T_{\rm c}$ and the $c$-axis length, we could conclude that higher Bi substitution for Sr ($x>0.1$) was successfully achieved. However, Bi2223 with $x=0.2$ did not show a decrease in $T_{\rm c,max}$, compared with $x=0.1$ Bi2223. This behavior confirms that the $T_{\rm c}$ of the inner layer limits the bulk $T_{\rm c}$ in the optimally-doped region. 
The fluctuation diamagnetic component was extracted from the anisotropic normal state susceptibilities $\chi_{ab}(T)$ ($H\perp c$) and $\chi_{c}(T)$ ($H\parallel c$). The temperature dependence of this component became stronger with underdoping, suggesting a reduction of the superconducting dimensionality and/or an increase of the vortex-liquid domain. This behavior supports the view that the inner $\rm CuO_{2}$ layer, which is underdoped relative to the outer layers, disrupts superconducting coherence among the three layers and changes the interlayer coupling more strongly than in Bi2212. \section*{Acknowledgment} We thank T. Fujii for providing us with raw XRD data and D. C. Peets for valuable discussions. This work was supported in part by the "Academic Frontier" project from MEXT and grants from the Ministry of Education, Culture and Science of Japan.
\section{Introduction} \label{intro1} {\bf Background:} The perfect matching problem in a graph $G = (V,E)$ aims to find a subset of edges $E' \subseteq E$, such that the edges of $E'$ are pairwise non-adjacent, and each vertex of $V$ belongs to exactly one edge of $E'$. Unfortunately, not every graph admits a perfect matching. In this paper we consider a natural relaxation of perfect matchings. Specifically, each vertex must select a neighbor, such that the maximum number of vertices in the graph that select the same vertex is minimized. In other words, the goal is to find a subset of edges $E' \subseteq E$, such that each vertex of $V$ belongs to at least one edge of $E'$, and the maximum degree in $G' = (V,E')$ is minimized. In addition, the edges of $E'$ are oriented, and each vertex in $G'$ has out-degree one. This corresponds to the requirement that each vertex selects just one neighbor, but can be selected by several neighbors. Note that if $E'$ is a perfect matching, the maximum degree in $G'$ is 1. When a perfect matching does not exist, the goal is minimizing the maximum degree of $G'$, while making sure that all vertices belong to edges of $E'$. In the distributed setting, this problem is called {\em Backup Placement}. It was introduced by Halldorsson, Kohler, Patt-Shamir, and Rawitz \cite{halldorsson2015bp}. It is very well motivated by computer networks whose nodes may have memory faults and wish to store backup copies of their data at neighboring nodes \cite{oren2018distributed}. But neighboring nodes may incur faults as well, and so the number of nodes that select the same backup node should be minimized. This way, if a backup node incurs faults, the number of nodes in the network that lose data is minimized. The precise definition of the distributed variant of the problem is as follows. The computer network is represented by an unweighted unoriented graph $G = (V,E)$, where $V$ is the set of nodes, and $E$ is the set of communication links. 
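The objective above can be stated compactly in code: a solution assigns each vertex one neighbor, and its cost is the maximum number of vertices that chose the same backup node. The small 4-cycle example is an illustration chosen here, not taken from the paper.

```python
from collections import Counter

def backup_load(selection):
    """The quantity to minimize: the maximum number of vertices that chose
    the same backup vertex, i.e. the maximum in-degree of G'."""
    return max(Counter(selection.values()).values())

# A 4-cycle admits a perfect matching, so a load of 1 is achievable
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
selection = {0: 1, 1: 0, 2: 3, 3: 2}  # the matched pairs (0,1) and (2,3)
```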
Communication proceeds in synchronous discrete rounds. In each round vertices send and receive messages, and perform local computations. A message sent over an edge in a certain round arrives at the endpoint of that edge by the end of the round. The algorithm terminates once every vertex outputs its neighbor selection for the backup placement. The running time is the number of rounds from the beginning until all vertices make their decisions. We consider two variants of networks: faultless networks and faulty networks. For the latter, the goal is obtaining a self-stabilizing algorithm. In Sections \ref{intro2} - \ref{intro3} we consider faultless networks. In Section \ref{selfstab} we consider faulty networks, and elaborate there on the additional properties of the problem and the required solution in this case. The backup placement problem turned out to be quite challenging in general graphs. The best currently-known solution is a randomized algorithm with running time $O(\frac{\log^6 n}{\log^4 \log n})$ that obtains an approximation factor of $O(\frac{\log n}{\log \log n})$. This solution, due to Halldorsson et al. \cite{halldorsson2015bp}, is far from trivial, and involves distributed computations of a certain variant of matching, called an $f$-matching, in bipartite graphs. On the other hand, in certain network topologies, simpler and much more efficient solutions are known. In particular, this is the case in wireless networks, certain social networks, claw-free graphs, line graphs, and more generally, any graph with {\em neighborhood-independence bounded by a constant $c$}. The neighborhood independence $I(G)$ of a graph $G=(V,E)$ is the maximum size of an independent set contained in a neighborhood $\Gamma(v)$ of some vertex $v \in V$. For graphs with $I(G) \leq c = O(1)$, a constant-time deterministic distributed algorithm with approximation ratio $2c + 1 = O(1)$ was devised by Barenboim and Oren \cite{barenboim2019fast}. 
Although not overly complicated, this algorithm still consists of several stages, including a computation of a tree cover, and then handling the different parts of the trees differently (such as leaves and non-leaves). {\bf Our Results:} In the current paper we significantly simplify the backup placement algorithm for graphs with neighborhood independence bounded by a constant $c$. Specifically, the algorithm becomes uniform, and consists of just a single instruction that is executed by all nodes in parallel within a single round. Consequently, the running time becomes just one round, which improves the number of rounds required by the previous solution for such graphs by a constant factor. More importantly, this improves the approximation ratio as well, which becomes $c$, rather than the $2c + 1$ of \cite{barenboim2019fast}. Furthermore, this instruction is solely a function of the IDs of a vertex and its neighbors. As IDs are stored in areas that are considered to be failure-free (in contrast to variables that are stored in Random Access Memory, which is failure-prone), algorithms that perform computations only as a function of IDs within a single round can be translated into self-stabilizing ones in a straightforward way. The structure of our algorithm makes it especially suitable for implementation in real-life networks with limited resources, such as sensor networks, heterogeneous networks, and the Internet of Things. We employ our backup-placement algorithm in order to compute a {\em maximum matching approximation} of an input graph $G$ with neighborhood independence bounded by $c$. For $c = O(1)$, we obtain a $(2 + \epsilon)$-approximation to maximum matching within $O(\log^* n)$ rounds. The best previously-known $O(1)$-approximation for such graphs has running time $O(\log{\Delta} + \log^*n)$ \cite{barenboim2018distributed}, where $\Delta$ is the maximum degree of the graph.
Another $O(1)$-approximate matching result, for a narrower family of graphs with bounded growth, has running time $O(\log^* n)$ \cite{schneider2008log}. However, this result of Schneider and Wattenhofer \cite{schneider2008log} is based on network decompositions, whose computation in such graphs involves very sophisticated arguments. Our algorithm, on the other hand, applies to a wider family of graphs, and is very simple. Specifically, it performs a constant number of iterations, each of which consists of computing a backup placement $G' = (V,E')$, computing a maximal matching of it, and removing the matched edges and the edges adjacent on them from $G$. Since the maximum degree of $G' = (V,E')$ is $c + 1 = O(1)$, computing a maximal matching in it requires just $O(\log^* n)$ rounds, using \cite{panconesi2001some}. {\bf Graphs with bounded independence:} The family of graphs with neighborhood independence bounded by a constant is a very wide family that captures various network types. It includes unit disk graphs, unit ball graphs, line graphs, line graphs of hypergraphs, claw-free graphs, graphs of bounded diversity, and many more. Consequently, this graph family and its subtypes have been very widely studied, especially in the distributed computing setting \cite{gfeller2007randomized, schneider2009coloring, barenboim2011deterministic, barenboim2017deterministic, barenboim2018distributed, assadi2019algorithms, kuhn2019faster}. For example, unit disk graphs can model certain types of wireless networks. In such networks all nodes have the same transmission range, which is the radius of a disk. If nodes are positioned in the plane, the neighbors of any node can be covered by at most 6 disks of radius $1/2$. Each such disk forms a clique, since all nodes inside it can transmit to one another. Thus a disk of radius $1/2$ cannot contain two or more independent nodes. Hence, the neighborhood independence of unit disk graphs is at most $6$.
Another notable example is the family of line graphs. In these graphs each vertex belongs to at most $2$ cliques, and thus the neighborhood independence is bounded by $2$. A notable example of the benefit of analyzing such graphs is the very recent breakthrough of Kuhn \cite{kuhn2019faster}. Kuhn obtained a $(2\Delta-1)$-edge-coloring of {\em general graphs} by analyzing graphs with constant neighborhood independence. The resulting algorithm provides a vertex coloring of such graphs with time below the square-root-in-$\Delta$ barrier. Since this provides, in particular, a vertex coloring of line graphs, it results in an edge coloring of general graphs. This result, as well as other results for this topology, illustrates how beneficial the analysis of graphs with bounded neighborhood independence can be. \section{Distributed Backup Placement Algorithm}\label{intro2} We begin with devising a procedure for computing an $O(1)$-backup placement in graphs with bounded \textit{neighborhood independence} \textit{c}. We assume that each vertex knows only about its neighbors, and each vertex has a unique ID. The procedure receives a graph $G = (V,E)$ as input, and proceeds as follows. We define an operation {\em next-modulo} that receives a vertex $v$ and the set of its neighbors $\Gamma(v)$ in the graph $G$. The operation \textit{next-modulo}($v$, $\Gamma(v)$) selects a vertex in $\Gamma(v)$ with a higher ID than the ID of $v$, and whose ID is the closest to that of $v$. If no such neighbor is found, then the operation returns the neighbor with the smallest ID. Formally, each vertex $v \in V$ selects a neighbor $w$ of $v$ in $G$ with the property that $ID(w) > ID(v)$, and there is no other neighbor $z$ of $v$ such that $ID(w) > ID(z) > ID(v)$. If there is no such neighbor, then $v$ selects the minimum-ID vertex in $\Gamma(v)$. All these selections are performed in parallel within a single round. This completes the description of the algorithm.
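For concreteness, the selection rule can be exercised in a few lines of Python (our illustration; the function and variable names are ours, and vertex IDs double as vertices). The example reproduces the selections of the graph in Figure \ref{fig1}:

```python
# Illustrative sketch of the next-modulo rule (names are ours, not from the
# paper). Each vertex picks the neighbor whose ID is the closest one above
# its own; if no neighbor has a higher ID, it picks the minimum-ID neighbor.

def next_modulo(v, neighbors):
    """Return the vertex following v in the circular sorted order
    of the IDs of v and its neighbors."""
    above = [w for w in neighbors if w > v]
    return min(above) if above else min(neighbors)

# The graph of Figure 1: four triangles sharing the center vertex 25.
adj = {
    25: {9, 20, 30, 4, 40, 7, 50, 6},
    9: {20, 25}, 20: {9, 25}, 30: {4, 25}, 4: {30, 25},
    40: {7, 25}, 7: {40, 25}, 50: {6, 25}, 6: {50, 25},
}
selection = {v: next_modulo(v, adj[v]) for v in adj}

# The in-degree of a vertex is the number of vertices that selected it.
in_degree = {v: 0 for v in adj}
for v, w in selection.items():
    in_degree[w] += 1
```

Here the center vertex has neighborhood independence $4$ and is indeed selected by exactly $4$ neighbors, matching the bound of the theorem below.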
Its pseudocode is provided in Algorithm \ref{algo1}. Its action is illustrated in Figure \ref{fig1}. The next theorem summarizes its correctness. \begin{algorithm}[H] \caption{Backup Placement in Graphs} \label{algo1} \begin{algorithmic}[1] \Procedure{Graph-BP(Graph $G = (V,E)$)}{} \State {\bf foreach} node $v \in G$ in parallel do: \State \hspace{0.5cm} v.BP = \textit{next-modulo}($v$, $\Gamma(v)$) \State /* find the vertex next to $v$ in $\Gamma(v)$, according to a circular list of sorted IDs of $\Gamma(v) \cup v$ */ \EndProcedure \end{algorithmic} \end{algorithm} \begin{figure}[H] \centering \begin{tikzpicture} \tikzstyle{vertex}=[circle,fill=black!25,minimum size=12pt,inner sep=2pt] \node[vertex] (G_0) at (0,0) {25}; \node[vertex] (G_1) at (0.5,2) {9}; \node[vertex] (G_2) at (2,0.5) {30}; \node[vertex] (G_3) at (2,-0.5) {4}; \node[vertex] (G_4) at (0.5,-2) {40}; \node[vertex] (G_5) at (-0.5,-2) {7}; \node[vertex] (G_6) at (-2,-0.5) {50}; \node[vertex] (G_7) at (-2,0.5) {6}; \node[vertex] (G_8) at (-0.5,2) {20}; \draw [line width=0.2mm, black] (G_0) -- (G_1); \draw [line width=0.2mm, black] (G_1) -- (G_8); \draw [line width=0.2mm, black] (G_8) -- (G_0); \draw [line width=0.2mm, black] (G_0) -- (G_3); \draw [line width=0.2mm, black] (G_3) -- (G_2); \draw [line width=0.2mm, black] (G_2) -- (G_0); \draw [line width=0.2mm, black] (G_0) -- (G_5); \draw [line width=0.2mm, black] (G_5) -- (G_4); \draw [line width=0.2mm, black] (G_4) -- (G_0); \draw [line width=0.2mm, black] (G_0) -- (G_6); \draw [line width=0.2mm, black] (G_6) -- (G_7); \draw [line width=0.2mm, black] (G_7) -- (G_0); \draw [->, line width=0.2mm, blue] [dashed] (G_1) to[out=135,in=45] (G_8); \draw [->, line width=0.2mm, blue] [dashed] (G_8) to[out=225,in=135] (G_0); \draw [->, line width=0.2mm, blue] [dashed] (G_3) to[out=225,in=315] (G_0); \draw [->, line width=0.2mm, blue] [dashed] (G_2) to[out=315,in=45] (G_3); \draw [->, line width=0.2mm, blue] [dashed] (G_0) to[out=45,in=135] (G_2); \draw [->, line 
width=0.2mm, blue] [dashed] (G_5) to[out=135,in=225] (G_0); \draw [->, line width=0.2mm, blue] [dashed] (G_4) to[out=225,in=315] (G_5); \draw [->, line width=0.2mm, blue] [dashed] (G_7) to[out=45,in=135] (G_0); \draw [->, line width=0.2mm, blue] [dashed] (G_6) to[out=135,in=225] (G_7); \end{tikzpicture} \caption{A backup placement (\textit{blue}) in a graph with bounded neighborhood independence \textit{c = 4}.} \label{fig1} \end{figure} \begin{theorem} Let $G$ be a graph with neighborhood independence at most $c$, i.e., among any \textit{c+1} neighbors of a vertex in the graph, at least two neighbors are connected by an edge. Then each vertex is selected by at most $c$ neighbors. \end{theorem} \begin{proof} Assume for contradiction that $c+1$ vertices chose the same vertex $u$ as a backup placement. Since the neighborhood independence is $c$, at least two vertices, $v_1, v_2$, of the $c+1$ vertices, are connected by an edge. Therefore, a triangle $\{u, v_1, v_2\}$ is formed in the subgraph induced by the edges selected by the algorithm. However, we next show that $u$ could not have been selected by both its neighbors $v_1,v_2$ in the triangle. According to Algorithm \ref{algo1}, there are only three possibilities regarding the ID order: \\ (1) $ID(u) < ID(v_1)$ and $ID(u) < ID(v_2)$, \\ (2) $ID(v_1) < ID(u)$ and $ID(v_2) < ID(u)$, \\ (3) $ID(v_1) < ID(u) < ID(v_2)$ or $ID(v_2) < ID(u) < ID(v_1)$. However, in each of these possibilities, as presented in the three figures below, it is impossible for both vertices to choose the same vertex for backup placement. This is because there is always a vertex between $v_1$ and $u$ or between $v_2$ and $u$ that one of $v_1,v_2$ must select. Indeed, either $v_1$ is closer to $v_2$ than to $u$, with respect to the \textit{next-modulo} operation, or $v_2$ is closer to $v_1$. More precisely: \begin{itemize} \item Cases (1),(2): if $ID(v_1) < ID(v_2)$ then $v_1$ selects $v_2$ or another neighbor with ID smaller than that of $v_2$, but not $u$.
Otherwise $v_2$ selects $v_1$, or a neighbor with a smaller ID than that of $v_1$, but not $u$. In any case, both cannot select $u$ simultaneously. \item Case (3): The vertex with the highest ID among the three is either $v_1$ or $v_2$. It must select a neighbor with an even greater ID or the minimum-ID neighbor. But $u$ has an ID smaller than that of $v_1$ or $v_2$, and is not the minimal among $\{u,v_1,v_2\}$. Thus, either $v_1$ or $v_2$ does not select $u$. \end{itemize} In any case, we have a contradiction to the assumption that both $v_1$,$v_2$ select $u$. \begin{figure}[H] \centering \begin{minipage}[b]{0.3\textwidth} \centering \begin{tikzpicture} \tikzstyle{vertex}=[circle,fill=black!25,minimum size=12pt,inner sep=2pt] \node[vertex] (G_0) at (0,0) {1}; \node[vertex] (G_1) at (-1.5,-2) {2}; \node[vertex] (G_2) at (1.5,-2) {3}; \draw [-, line width=0.3mm, black] (G_0) -- (G_1); \draw [-, line width=0.3mm, black] (G_0) -- (G_2); \draw [-, line width=0.3mm, black] (G_1) -- (G_2); \def\myshift#1{\raisebox{-2.5ex}} \draw [->,line width=0.3mm, blue, dashed] (G_0) to [bend right=45] (G_1); \def\myshift#1{\raisebox{-2.5ex}} \draw [->,line width=0.3mm, blue, dashed] (G_1) to [bend right=45] (G_2); \def\myshift#1{\raisebox{-2.5ex}} \draw [->,line width=0.3mm, blue, dashed,postaction={decorate,decoration={text along path,text align=center,text={|\myshift|next-modulo}}}] (G_2) to [bend right=45] (G_0); \end{tikzpicture} \caption*{$2=ID(v_1) > ID(u)=1$\\$3=ID(v_2) > ID(u)=1$} \label{fig_case1} \end{minipage} \hspace{-5mm} \begin{minipage}[b]{0.3\textwidth} \centering \begin{tikzpicture} \tikzstyle{vertex}=[circle,fill=black!25,minimum size=12pt,inner sep=2pt] \node[vertex] (G_0) at (0,0) {3}; \node[vertex] (G_1) at (-1.5,-2) {1}; \node[vertex] (G_2) at (1.5,-2) {2}; \draw [-, line width=0.3mm, black] (G_0) -- (G_1); \draw [-, line width=0.3mm, black] (G_0) -- (G_2); \draw [-, line width=0.3mm, black] (G_1) -- (G_2); \def\myshift#1{\raisebox{-2.5ex}} \draw [->,line width=0.3mm, blue,
dashed,postaction={decorate,decoration={text along path,text align=center,text={|\myshift| next-modulo}}}] (G_0) to [bend right=45] (G_1); \def\myshift#1{\raisebox{-2.5ex}} \draw [->,line width=0.3mm, blue, dashed] (G_1) to [bend right=45] (G_2); \def\myshift#1{\raisebox{-2.5ex}} \draw [->,line width=0.3mm, blue, dashed] (G_2) to [bend right=45] (G_0); \end{tikzpicture} \caption*{$1=ID(v_1) < ID(u)=3$\\$2=ID(v_2) < ID(u)=3$} \label{fig_case2} \end{minipage} \hspace{-5mm} \begin{minipage}[b]{0.3\textwidth} \centering \begin{tikzpicture} \tikzstyle{vertex}=[circle,fill=black!25,minimum size=12pt,inner sep=2pt] \node[vertex] (G_0) at (0,0) {2}; \node[vertex] (G_1) at (-1.5,-2) {1}; \node[vertex] (G_2) at (1.5,-2) {3}; \draw [-, line width=0.3mm, black] (G_0) -- (G_1); \draw [-, line width=0.3mm, black] (G_0) -- (G_2); \draw [-, line width=0.3mm, black] (G_1) -- (G_2); \def\myshift#1{\raisebox{-2.5ex}} \draw [->,line width=0.3mm, blue, dashed,postaction={decorate,decoration={text along path,text align=center,text={|\myshift| }}}] (G_1) to [bend left=45] (G_0); \def\myshift#1{\raisebox{-2.5ex}} \draw [->,line width=0.3mm, blue, dashed] (G_2) to [bend left=45] (G_1); \def\myshift#1{\raisebox{-2.5ex}} \draw [->,line width=0.3mm, blue, dashed] (G_0) to [bend left=45] (G_2); \def\myshift#1{\raisebox{-2.5ex}} \draw [-,line width=0, transparent!0, dashed,postaction={decorate,decoration={text along path,text align=center,text={|\myshift|next-modulo}}}] (G_1) to [bend right=45] (G_2); \end{tikzpicture} \caption*{$ID(v_1) < ID(u) < ID(v_2)$\\ $1$ \ \ $ < $ \ \ $ 2 $ \ \ $ < $ \ \ $3$} \label{fig_case3} \end{minipage} \label{fig_proof1} \end{figure} \end{proof} \section{Maximum Matching Approximation based on Backup Placement}\label{intro3} A set of edges $M \subseteq E$ is called a \textit{Matching} if and only if every vertex $v \in V$ belongs to at most one edge in $M$.
A \textit{Maximal Matching} (shortly, \textit{MM}) is a matching that is maximal with respect to addition of edges, i.e., there is no edge $e \in E$ such that $M \cup \{e\}$ is a valid matching. A \textit{Maximum Matching} (shortly, \textit{MCM}) is a matching of maximum size among all matchings of $E$. As shown in the previous section, given a graph $G=(V,E)$ with bounded neighborhood independence $c$, we can compute a $O(1)$-backup placement in a single round. This results in a subgraph, $G'=(V,E')$, where $E'$ is the set of all the selected edges of the backup placement algorithm. Each vertex in the subgraph $G'$ has selected one neighbor, and is selected by at most $c$ neighbors. All edges adjacent on a vertex in $G'$ are either selecting edges or selected edges. Thus, the number of such edges is at most $c + 1$. Consequently, the maximum degree $\Delta(G')$ is at most $c + 1$. \begin{figure}[H] \centering \begin{tikzpicture} \tikzstyle{vertex}=[circle,fill=black!25,minimum size=12pt,inner sep=2pt] \tikzstyle{vertex_red}=[circle,fill=red,minimum size=12pt,inner sep=2pt] \node[vertex] (G_0_1) at (0,1) {}; \node[vertex] (G_2_1) at (-4,0) {1}; \node[vertex] (G_6_1) at (-2,0) {2}; \node (G_4_1) at (2,0) {\ldots}; \node[vertex] (G_5_1) at (4,0) {c}; \node[vertex_red] (G_1) at (0,0) {}; \node[vertex] (G_2) at (-4,-1) {1}; \node[vertex] (G_6) at (-2,-1) {2}; \node[vertex] (G_3) at (0,-1) {3}; \node (G_4) at (2,-1) {\ldots}; \node[vertex] (G_5) at (4,-1) {c}; \draw [->, line width=0.2mm, red] (G_2) -- (G_1); \draw [->, line width=0.2mm, red] (G_6) -- (G_1); \draw [->, line width=0.2mm, red] (G_3) -- (G_1); \draw [->, line width=0.2mm, red] (G_5) -- (G_1); \draw [->, line width=0.2mm, black] (G_2_1) -- (G_0_1); \draw [->, line width=0.2mm, black] (G_6_1) -- (G_0_1); \draw [->, line width=0.2mm, red] (G_1) -- (G_0_1); \draw [->, line width=0.2mm, black] (G_5_1) -- (G_0_1); \draw [-, line width=0.2mm, black] (G_6_1) -- (G_2); \draw [-, line width=0.2mm, black] (G_6_1) -- 
(G_1); \end{tikzpicture} \caption{The maximum degree $\Delta(G')$ of the subgraph $G'=(V,E')$ is at most $c + 1$. In red: a vertex with degree $c + 1$, as it was selected by its neighbors with neighborhood independence of $c$, and selected one additional neighbor.} \label{fig_degree} \end{figure} We devise a Maximum Matching approximation based on this backup placement subgraph in $O(\log^* n)$ rounds. To this end, we compute a backup placement of an input graph $G$, obtain the graph $G' = (V,E')$, where $E'$ is the set of selected edges, and execute the maximal matching algorithm of Panconesi and Rizzi \cite{panconesi2001some} on $G'$. The latter algorithm computes a maximal matching of an input graph with maximum degree $\Delta$ within $O(\Delta + \log^*n)$ rounds. \begin{lemma} \label{lemmaa} Given a graph $G=(V,E)$ with bounded neighborhood independence $c$, we achieve a $(c + 1)$-approximation of the Maximum Matching problem. \end{lemma} \begin{proof} We begin with executing the backup placement algorithm, which computes the subgraph $G'=(V,E')$. Next, we compute an MM of $G'$. Since $G'$ has bounded neighborhood independence $c$ and $\Delta(G') = c + 1$, we can show that a maximal matching of $G'$ has size at least $1/(c + 1)$ of $MCM(G)$. This is because every vertex in $V$ is either in $MM(G')$ or adjacent (in $G'$) to a vertex in $MM(G')$. (Otherwise, it is adjacent to a free vertex, and an edge can be added to the maximal matching of $G'$. Contradiction.) Thus, the set of vertices that belong to edges of $MM(G')$, together with the vertices adjacent on them in $G'$, is exactly the set $V$. On the other hand, since each vertex of $MM(G')$ is adjacent in $G'$ to at most $c + 1$ vertices, the size of $V$ is at most $c + 1$ times the number of vertices of $MM(G')$.
Since the size of the maximum matching is at most $|V|/2$ and the size of $MM(G')$ is at least $|V|/(2(c + 1))$, we obtain a $(c + 1)$-approximation to maximum matching. \end{proof} \begin{lemma} Given a graph $G=(V,E)$ with bounded neighborhood independence $c$, the running time of the Maximum Matching approximation is $O(\log^* n)$. \end{lemma} \begin{proof} Using the $O(\Delta + \log^* n)$-time Maximal Matching algorithm by Panconesi and Rizzi \cite{panconesi2001some}, and due to the fact that $\Delta(G') = c + 1 = O(1)$, the achieved time complexity is $O(\log^* n)$. \end{proof} In order to reach an even better Maximum Matching approximation than the $(c + 1)$-approximation, we apply the maximal matching algorithm by Panconesi and Rizzi \cite{panconesi2001some} several times on $G'$, which was obtained by the $O(1)$-backup placement in graphs with bounded \textit{neighborhood independence} \textit{c}. In order to preserve a proper matching of $G$ in each step, after each computation of a maximal matching, we remove the endpoints of the resulting matched edges from $G$, as well as all edges adjacent on these endpoints. Then we invoke again a maximal matching computation on the residual graph. We repeat this for a constant number of iterations. This completes the description of the algorithm. Its pseudocode is provided in Algorithm \ref{algo2} below. Next, we prove its correctness and analyze the running time. \begin{algorithm}[H] \caption{Maximum Matching Approximation Algorithm} \label{algo2} \begin{algorithmic}[1] \Procedure{Maximum-Matching-Approximation(Graph $G = (V,E)$)}{} \State Let $k$ be a positive constant \State $G' = GeneralBP(G)$ \State $MCMA = \emptyset$ \For {$i = 1,2, ..., k$} \State $MCMA = MCMA \cup MM(G')$ \State $G = G \setminus MM(G')$ \ \ \ /* remove from $G$ the edges of $MM(G')$, the adjacent edges in $G$ of $MM(G')$, and all isolated vertices.
*/ \State $G' = GeneralBP(G)$ \EndFor \State return MCMA \EndProcedure \end{algorithmic} \end{algorithm} \begin{theorem} \label{lemmab} Given a graph $G=(V,E)$ with bounded neighborhood independence $c$, we achieve a $(2 + \epsilon)$-approximation of the Maximum Matching problem. \end{theorem} \begin{proof} By Lemma \ref{lemmaa}, after the first iteration of the loop of line 5 of Algorithm \ref{algo2}, we obtain a $(c + 1)$-approximation to maximum matching. Moreover, since each edge in the matching is adjacent in $G'$ to at most $2c$ vertices, and the set of vertices of the matching with their neighbors is $V$, the size of the matching is at least $|V|/(2(c + 1)) = n/(2(c + 1))$. In each iteration of the loop, at least a $1/(c+1)$-fraction of the vertices that are still in $G$ are matched and removed from $G$. Hence, for $i = 1,2,...,$ the number of vertices remaining in $G$ is at most $(c/(c + 1))^i \cdot n$. All the other vertices are either matched or have all their neighbors matched. For an arbitrarily small fixed constant $\epsilon$ and a sufficiently large constant $i$, it holds that $(c/(c + 1))^i \cdot n \leq \epsilon \cdot n/(2(c + 1))$. In other words, the residual set of vertices after $i$ iterations is of size at most an $\epsilon$-fraction of the matching already computed in iteration 1. Thus, after $i$ iterations, any subset of remaining edges of $G$ whose addition makes the result a maximal matching increases its size by at most an $\epsilon$-fraction. Therefore, the matching after $i$ iterations is a $(1 + \epsilon)$-approximation to a maximal matching. Since a maximal matching is a 2-approximation to MCM, our algorithm computes a $(2+\epsilon)$-approximate MCM within a constant number of iterations. \end{proof} \begin{theorem} Given a graph $G=(V,E)$ with bounded neighborhood independence $c$, the running time of Algorithm \ref{algo2} is $O(\log^* n)$.
\end{theorem} \begin{proof} Using the $O(\Delta + \log^* n)$-time Maximal Matching algorithm by Panconesi and Rizzi \cite{panconesi2001some}, and due to the fact that $\Delta(G') = c + 1 = O(1)$, each iteration requires $O(\log^*n)$ rounds. Since the overall number of iterations is constant, the entire running time is $O(\log^* n)$ as well. \end{proof} \section{Self-stabilizing Backup Placement in Graphs of Bounded Neighborhood Independence} \label{selfstab} In this section we devise a self-stabilizing backup placement algorithm in Dijkstra's model of self-stabilization \cite{edsger1974dijkstra}. In this model each vertex has a ROM (Read-Only Memory) that is failure-free, and a RAM (Random Access Memory) that is prone to failures. An adversary can corrupt the RAM of all processors in any way. However, in certain periods of time, faults do not occur. These periods of time are not known to the processors. The goal of a distributed self-stabilizing algorithm is reaching a proper state in all processors, once faults stop occurring. Since these time points are not known in advance, the algorithm is constantly executed by all processors. The stabilization time is the number of rounds from the beginning of a time period in which faults do not occur, until all processors reach a proper state, given that no additional faults occur during this time period. Our algorithm stores only the $ID$ of a processor in its ROM. The backup placement selection is stored in the RAM of a processor. The self-stabilizing algorithm is extremely simple. Specifically, in each round each vertex executes the \textit{next-modulo} operation. In other words, each vertex repeats Algorithm \ref{algo1} in each round. This completes the description of the algorithm. Since this operation within a single (faultless) round results in a proper $O(1)$-backup-placement in a graph with constant neighborhood independence, such an algorithm stabilizes within one round after faults stop occurring.
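The stabilization argument can be illustrated by a small Python sketch (our illustration, with hypothetical names): the RAM-held selections may be arbitrarily corrupted, but since \textit{next-modulo} depends only on the ROM-held IDs, a single fault-free round overwrites the corruption with a proper backup placement.

```python
# Sketch of the self-stabilizing round (ours, not from the paper). IDs live
# in ROM and are trusted; the backup choice bp[v] lives in RAM and may be
# corrupted arbitrarily by the adversary.

def next_modulo(v, neighbors):
    # Select the neighbor whose ID is the closest one above ID(v);
    # if none exists, select the minimum-ID neighbor.
    above = [w for w in neighbors if w > v]
    return min(above) if above else min(neighbors)

def one_round(adj, bp):
    # Executed unconditionally by all vertices in every round.
    for v in adj:
        bp[v] = next_modulo(v, adj[v])
    return bp

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}   # a triangle network
bp = {1: 99, 2: None, 3: 3}               # adversarially corrupted RAM
bp = one_round(adj, bp)                    # one fault-free round restores bp
```

Because the recomputed state is a function of IDs alone, repeating the round in further fault-free rounds leaves the placement unchanged.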
Moreover, the solution remains proper as long as there are no faults. We summarize this in the next theorem. \begin{theorem} In graphs with neighborhood independence bounded by a constant, our algorithm stabilizes within $1$ round and produces an $O(1)$-backup-placement. \end{theorem} Thanks to the simplicity of this backup-placement algorithm, it can be used as a building block for other self-stabilizing algorithms that employ backup placements. Specifically, in each round an algorithm can execute the \textit{next-modulo} operation before its own code. This way, starting from the round after the round in which faults stop occurring, a subgraph $G'$ of maximum degree $c + 1$ is obtained. This subgraph does not change as long as there are no additional faults. This is because the subgraph is deduced, once faults stop occurring, based only on values in the ROM, and this subgraph does not change in faultless rounds. Thus, a self-stabilizing algorithm with running time of the form $f(\Delta,n) = f_1(\Delta) \cdot f_2(n)$ invoked on $G'$ will stabilize within $f(\Delta',n) + 1 = O(f_2(n))$ rounds. This is because $\Delta' = \Delta(G') = c + 1 = O(1)$ in graphs with bounded neighborhood independence $c$. Hence, we obtain the following theorem. \begin{theorem} In graphs with neighborhood independence at most $c = O(1)$, any self-stabilizing maximal matching algorithm with running time $f(\Delta,n) = f_1(\Delta) \cdot f_2(n)$ can be converted into a self-stabilizing $(c + 1)$-approximation of maximum matching with running time $O(f_1(c) \cdot f_2(n)) = O(f_2(n))$. \end{theorem} For example, a maximal matching algorithm with running time $O(\Delta n + \Delta^2 \log n)$, such as the self-stabilizing algorithm of \cite{kunne2018self} adapted to a network with IDs, can be converted into a self-stabilizing $(c + 1)$-approximate MCM algorithm with $O(c \cdot n + c^2 \log n)$ time. This is $O(n)$, for $c = O(1)$. \bibliographystyle{plain}
\section{Introduction} Recent experimental demonstrations of small quantum simulations~\cite{OMalley16,Barends15,Langford16} and quantum error correction (QEC)~\cite{Kelly15,Riste15,Corcoles15,Ofek16} position superconducting circuits for targeting quantum supremacy~\cite{Boixo16} and quantum fault tolerance~\cite{Martinis15}, two outstanding challenges for all quantum information processing platforms. On the theoretical side, much modeling of QEC codes has been done to determine fault-tolerance threshold rates in various models~\cite{Fowler12,Landahl11,Yoder16} with different error decoders~\cite{Tomita14,Fowler09,Heim16}. However, the need for computational efficiency has constrained many previous studies to oversimplified noise models, such as depolarizing and bit-flip noise channels. This discrepancy between theoretical descriptions and experimental reality compromises the ability to predict the performance of near-term QEC implementations, and offers limited guidance to the experimentalist through the maze of parameter choices and trade-offs. In the planar circuit quantum electrodynamics (cQED)~\cite{Blais04} architecture, the major contributions to error are transmon qubit relaxation, dephasing from flux noise and resonator photons left over from measurement, and leakage from the computational space, none of which are well-approximated by depolarizing or bit-flip channels. Simulations with more complex error models are now essential to accurately pinpoint the leading contributions to the logical error rate in the small-distance surface codes~\cite{Fowler12,Tomita14,Horsman12} currently pursued by several groups worldwide. In this paper, we perform a density-matrix simulation of the distance-3 surface code named Surface-$17$, using the concrete quantum circuit recently proposed in~\cite{Versluis16} and the measured performance of current experimental multi-transmon cQED platforms~\cite{Bultink16,Rol16,Asaad16,Walter17}.
For this purpose, we have developed an open-source density-matrix simulation package named quantumsim~\footnote{Please visit https://github.com/brianzi/quantumsim}. We use quantumsim to extract the logical error rate per QEC cycle, $\eL$. This metric allows us to optimize and trade off between QEC cycle parameters, assess the merits of feedback control, predict gains from future improvements in physical qubit performance, and quantify decoder performance. We compare an algorithmic decoder using minimum-weight perfect matching (MWPM) with homemade weight calculation to a simple look-up table (LT) decoder, and weigh both against an upper bound (UB) for decoder performance obtainable from the density-matrix simulation. Finally, we make a low-order approximation to extend our predictions to the distance-$5$ Surface-$49$. The combination of results for Surface-17 and -49 allows us to make statements about code scaling and to predict the code size and physical qubit performance required to achieve break-even points for memory and computational performance. \section{Results} \subsection{Error rates for Surface-17 under current experimental conditions} To quantify the performance of the logical qubit, we first define a test experiment to simulate. Inspired by the recent experimental demonstration of distance-3 and -5 repetition codes~\cite{Kelly15}, we first focus on the performance of the logical qubit as a quantum memory. Specifically, we quantify the ability to hold a logical $\ket{0}$ state, by initializing this state, holding it for $k\in\left\{1,\ldots,20\right\}$ cycles, performing error correction, and determining a final logical state (see Fig.~\ref{Fig_schematic} for details). The logical fidelity $\fid[k]$ is then given by the probability to match the initial state. We observe identical results when using $\ket{1}$ or $\ket{\pm}=\frac{1}{\sqrt{2}}(\ket{0}\pm\ket{1})$ in place of $\ket{0}$. 
\begin{figure} \includegraphics[width=\columnwidth]{LogicalFidelity_new.pdf} \caption{\label{fig:logical_fidelity}Logical fidelity $\fid[k]$ of Surface-$17$ with current experimental parameters (Table~\ref{table:parameters} and~\cite{Suppmaterial}), simulated with quantumsim as described in Fig.~\ref{Fig_schematic}. The results from a MWPM decoder (green) and an implementation of the LT decoder of~\cite{Tomita14} (blue) are compared to the decoder upper bound (red). The labeled error rate is obtained from the best fit to Eq.~\eqref{eq:good_decay} (also plotted). A further comparison is given to majority voting (purple, dashed), which ignores the outcome of individual stabilizer measurements, and to the fidelity $\fidphys$ of a single transmon (black) [Eq.~\eqref{eq:physical_fidelity_decay}]. Error bars ($2$ s.d.) are obtained by bootstrapping.} \end{figure} We base our error model for the physical qubits on current typical experimental performance for transmons in planar cQED, using parameters from the literature and in-house results (e.g., gate-set tomography measurements). These are summarized in Table~\ref{table:parameters}, and further detailed in~\cite{Suppmaterial}. We focus on the QEC cycle proposed in~\cite{Versluis16}, which pipelines the execution of $X$- and $Z$-type stabilizer measurements. Each stabilizer measurement consists of three parts: a coherent step (duration $\Tcorr=2\Tgone +4\Tgtwo$), measurement ($\Tmeas$), and photon depletion from readout resonators ($\Tdep$), making the QEC cycle time $\tcycle = \Tcorr + \Tmeas + \Tdep$. Simulating this concrete quantum circuit with the listed parameters using quantumsim, we predict $\fid[k]$ of Surface-$17$ (Fig.~\ref{fig:logical_fidelity}). We show $\fid[k]$ for both a homemade MWPM decoder (green, described in~\cite{Suppmaterial}), and an implementation of the LT decoder of~\cite{Tomita14} (blue, described in~\cite{Suppmaterial}). 
To isolate decoder performance, we can compare the achieved fidelity to an upper bound extractable from the density-matrix simulation (red, described in Sec.~\ref{sec:quantumsim}). To assess the benefit of QEC, we also compare to a single decohering transmon, whose fidelity is calculated by averaging over the six cardinal points of the Bloch sphere: \begin{equation} \label{eq:physical_fidelity_decay} \fidphys(t) = \tfrac{1}{6} \left( 1 + e^{-t/\Tone} \right) + \tfrac{1}{3} \left(1 + e^{-t(1/2\Tone + 1/\Tphi)} \right). \end{equation} The observation of $\fid[k] > \fidphys(k\tcycle)$ for large $k$ would constitute a demonstration of QEC beyond the quantum memory break-even point~\cite{Ofek16}. Equivalently, one can extract a logical error rate $\eL$ from a best fit to $\fid[k]$ (as derived in Sec.~\ref{sec:protocol} as the probability of an odd number of errors occurring), \begin{equation} \fid[k]=\frac{1}{2}[1+(1-2\eL)^{k-k_0}].\label{eq:good_decay} \end{equation} Here, $k_0$ and $\eL$ are the parameters to be fit. We compare $\eL$ to the physical error rate \begin{equation} \ephys = -\tcycle \left. \frac{d \fidphys(t)}{dt}\right|_{t=0}=\frac{\tcycle}{3\Tone}+\frac{\tcycle}{3\Tphi}. \end{equation} We observe $\eL=1.44\,\ppercycle$ for the LT decoder, $\eL=1.07\,\ppercycle$ for the MWPM decoder, and $\eL=0.68\,\ppercycle$ at the decoder upper bound ($\ppercycle$ = $\%$ per cycle). The latter two fall below $\ephys=1.33\,\ppercycle$. Defining the decoder efficiency $\etad = \eL^{\mathrm{(UB)}}/\eL$, we find $\etad^{\mathrm{(LT)}} = 0.47$ and $\etad^{\mathrm{(MWPM)}} = 0.64$. We can also compare the multi-cycle error correction to majority voting, in which the state declaration is based solely on the output of the final data qubit measurements (ancilla measurements are ignored). 
Majority voting corrects any single data qubit error (over the entire experiment), and thus exhibits a quadratic decay for small $k$~\footnote{A distance-$d$ code with majority voting alone should exhibit a $(d+1)/2$-order decay}. A decoder should also be able to correct (at least) a single error, and thus should produce the same behavior at low $k$, delaying the onset of exponential decay in $\fid[k]$. In fact, a good test for the performance of a MWPM decoder is to ensure it can outperform the majority vote at short timescales, as a suboptimal configuration will prevent this (as seen for the look-up table decoder). With the baseline for current performance established, we next investigate $\eL$ improvements that may be achieved by two means. First, we consider modifications to the QEC cycle at fixed physical performance. Afterwards, we consider the effect of improving physical qubit $\Tone$ and $\Tphi$. \subsection{Optimization of logical error rates with current experimental conditions} Error sources in current cQED setups derive primarily from transmon decoherence, as opposed to gate and measurement errors produced by control electronics. Thus, a path to reducing $\eL$ may be to decrease $\tcycle$. Currently, the cycle is dominated by $\Tmeas + \Tdep$. At fixed readout power, reducing $\Tmeas$ and $\Tdep$ will reduce $\tcycle$ at the cost of increased readout infidelity $\ero$ (described in Sec.~\ref{sec:measurement}). We explore this trade-off in Fig.~\ref{Fig_2_Measurement_time}, using a linear-dispersive readout model \cite{FriskKockum12}, keeping $\Tmeas = \Tdep$ and assuming no leftover photons. Because of the latter, $\eL^{\mathrm{(MWPM)}}$ decreases from $1.07\,\ppercycle$~(Fig.~\ref{fig:logical_fidelity}) to $0.62\,\ppercycle$ at $\Tmeas = 300\,\text{ns}$. The minimum $\emwpm = 0.55\,\ppercycle$ is achieved at around $\Tmeas=260~\ns$. This is perhaps counterintuitive, as $\ephys$ is reduced by only $0.13\,\ppercycle$ while $\ero$ increases by $0.5\,\%$.
However, it reflects the different sensitivity of the code to different types of errors. Indeed, $\emwpm$ is smaller for $\Tmeas=200~\ns$ than for $\Tmeas=300~\ns$, even though $\ero$ increases to $5\,\%$. It is interesting to note that the optimal $\Tmeas$ for quantum memory, which minimizes logical error per unit time, rather than per cycle, is $\Tmeas = 280\,\text{ns}$ (Fig.~\ref{Fig_2_Measurement_time} inset). This shows that different cycle parameters might be optimal for computation and memory applications. \begin{figure} \includegraphics[width=\columnwidth]{Fig2_error_msmt.pdf} \caption{\label{Fig_2_Measurement_time}Optimization of the logical error rate (per cycle) of Surface-$17$ as a function of measurement-and-depletion time~\cite{Bultink16}. Changes in the underlying physical error rates are shown as well. Decreasing the measurement time causes an increase in the readout infidelity (solid black curve with dots), whilst decreasing the single-qubit decay from $\Tone$ and $\Ttwo$ (black dashed curve) for all qubits. The logical error rate with an MWPM decoder (green curve) is minimized when these error rates are appropriately balanced. The logical error rate is calculated from the best fit of Eq.~\eqref{eq:good_decay}. Error bars ($2$ s.d.) are obtained by bootstrapping ($N=10,000$ runs). Inset: Logical error rate per unit time, instead of per cycle. } \end{figure} Next, we consider the possibility of reducing $\eL$ using feedback control. Since $\Tone$ only affects qubits in the excited state, the error rate of ancillas in Surface-17 is roughly two times higher when in the excited state. The unmodified syndrome extraction circuit flips the ancilla if the corresponding stabilizer value is $-1$, and since ancillas are not reset between cycles, they will spend significant amounts of time in the excited state. Thus, we consider using feedback to hold each ancilla in the ground state as much as possible.
We do not consider feedback on data qubits, as the highly entangled logical states are equally susceptible to $\Tone$. The feedback scheme (inset of Fig.~\ref{Fig_4_FeedbackFidelity}) consists of replacing the $R_y(\pi/2)$ gate at the end of the coherent step with a $R_y(-\pi/2)$ gate for some of the ancillas, depending on a classical control bit $p$ for each ancilla. This bit $p$ represents an estimate of the stabilizer value, and the ancilla is held in the ground state whenever this estimate is correct (i.e.~in the absence of errors). Figure~\ref{Fig_4_FeedbackFidelity} shows the effect of this feedback on the logical fidelity, both for the MWPM decoder and the decoder upper bound. We observe that $\eL$ improves by only $0.05\,\ppercycle$ in both cases. Future experiments might opt not to pursue these small gains in view of the technical challenges added by feedback control. \begin{figure} \includegraphics[width=\columnwidth]{FeedbackFidelity.pdf} \caption{\label{Fig_4_FeedbackFidelity}Logical fidelity of Surface-$17$ with (solid) and without (dashed) an additional feedback scheme. The performance of a MWPM decoder (green) is compared to the decoder upper bound (red). Curves are fits of Eq.~\eqref{eq:good_decay} to the data, and error bars ($2$ s.d.) are given by bootstrapping, with each point averaged over $10,000$ runs. Inset: Method for implementing the feedback scheme. For each ancilla qubit $A_j$, we store a parity bit $p_j$, which decides the sign of the $R_y(\pi/2)$ rotation at the end of each coherent step. The time $A_j$ spends in the ground state is maximized when $p_j$ is updated each cycle $t$ by XORing with the measurement result from cycle $t-1$, after the rotation of cycle $t$ has been performed. } \end{figure} \subsection{Projected improvement with advances in quantum hardware}\label{sec:analytics} We now estimate the performance increase that may result from improving the transmon relaxation and dephasing times via materials and filtering improvements.
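The classical bookkeeping of this scheme can be sketched with a toy model (our own simplification, with measurement outcomes encoded as bits, $1$ meaning the ancilla was found in $\ket{1}$):

```python
def excited_cycles(stabilizer_bits, feedback=False):
    """Count the cycles an ancilla waits in |1> between measurements.

    stabilizer_bits[t] = 1 when the stabilizer value is -1 in cycle t.
    Without feedback, the ancilla toggles on every -1 outcome and is
    never reset; with feedback, the sign of the final R_y(pi/2) rotation
    is chosen from the parity estimate p, so the ancilla is left excited
    only when the estimate is wrong.
    """
    state, p, excited = 0, 0, 0
    for s in stabilizer_bits:
        if feedback:
            state = s ^ p   # wrong estimate -> ancilla left in |1>
            p ^= state      # fold in this cycle's measurement result
        else:
            state ^= s      # toggle on every -1 stabilizer outcome
        excited += state
    return excited
```

For a stabilizer that is constantly $-1$, the unmodified circuit leaves the ancilla excited every other cycle, whereas with feedback it is excited only in the first cycle.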
To model this, we return to $\tcycle=800~\ns$, and adjust $\Tone$ values with both $\Tphi=2\Tone$ (common in experiment) and $\Tphi=\infty$ (all white-noise dephasing eliminated). We retain the same rates for coherent errors, readout infidelity, and photon-induced dephasing as in Fig.~\ref{fig:logical_fidelity}. Figure~\ref{Fig_3_T1} shows the extracted $\eL$ and $\ephys$ over the $\Tone$ range covered. For the MWPM decoder (upper bound) and $\Tphi=2\Tone$, the memory figure of merit $\powm=\ephys/\eL$ increases from $1.3$ $ (2)$ at $\Tone = 30~\us$ to $2$ $(5)$ at $100~\us$. Completely eliminating white-noise dephasing will increase $\powm$ by $10\%$ with MWPM and $30\%$ at the upper bound. \begin{figure} \includegraphics[width=\columnwidth]{LogicalError_vs_T1Tphi_log.pdf} \caption{\label{Fig_3_T1}$\Tone$ dependence of the Surface-$17$ logical error rate (MWPM and UB) and the physical error rate. We either fix $\Tphi=2\Tone$ (solid) or $\Tphi=\infty$ (dashed). Logical error rates are extracted from a best fit of Eq.~\eqref{eq:good_decay} to $\fid[k]$ over $k=1,\ldots,20$ QEC cycles, averaged over $N=50,000$ runs. Error bars ($2$ s.d.) are calculated by bootstrapping. } \end{figure} A key question for any QEC code is how $\eL$ scales with code distance $d$. Computing power limitations preclude similar density-matrix simulations of the $d=5$ surface code Surface-$49$. However, we can approximate the error rate by summing up all lowest-order error chains (as calculated for the MWPM decoder), and deciding individually whether or not these would be corrected by a MWPM decoder (see~\cite{Suppmaterial} for details). Figure~\ref{Fig_6_Analytic_extension_49} shows the lowest-order approximation to the logical error rates of Surface-$17$ and -$49$ over a range of $\Tone=\Tphi/2$. Comparing the Surface-$17$ lowest-order approximation to the quantumsim result shows good agreement and validates the approximation. 
We observe a lower $\eL$ for Surface-49 than for -17, indicating quantum fault tolerance over the $\Tone$ range covered. The fault-tolerance figure of merit defined in~\cite{Martinis15}, $\ecp=\eseventeen/\efortynine$, increases from $2$ to $4$ as $\Tone$ grows from $30$ to $100~\us$. \begin{figure} \includegraphics[width=\columnwidth]{analytics_post_ref.pdf} \caption{\label{Fig_6_Analytic_extension_49}Analytic approximation of $\eL$ for Surface-$17$ (green) and Surface-$49$ (orange) using a MWPM decoder. Details of the calculation of points and error bars are given in~\cite{Suppmaterial}. All plots assume $\Tphi=2\Tone$, and $\tcycle=800~\ns$ (crosses) or $400~\ns$ (dots). Numerical results for Surface-$17$ with $\tcycle=800~\ns$ are also plotted for comparison (green, dashed). The physical-qubit computation metric is given as the error incurred by a single qubit over the resting time of a single-qubit gate (black, dashed). } \end{figure} As a rough metric of computational performance, we compare $\eL$ (per cycle) to the error accrued by a physical qubit idling over $\Tgone$. We define a metric for computational performance, $\powc=(\ephys\Tgone)/(\eL\tcycle)$, with $\powc=1$ marking the computational break-even point. Clearly, using the QEC cycle parameters of Table~\ref{table:parameters} and even with $\Tone$ improvements, neither Surface-17 nor -49 can break even computationally. However, including the readout acceleration recently demonstrated in~\cite{Walter17}, which allows $\Tmeas=\Tdep=100~\ns$ and $\tcycle=400~\ns$, Surface-49 can cross $\powc=1$ by $\Tone=40~\us$. In view of first reports of $\Tone$ up to $80~\us$ emerging for planar transmons~\cite{Paik16,Gustavsson16}, this important milestone may be within grasp. \section{Discussion} \subsection{Computational figure of merit} We note that our metric of computational power is not rigorous, due to the different gate sets available to physical and logical qubits.
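The arithmetic behind this metric is simple; a minimal sketch (function names are ours) using the cycle parameters of Table~\ref{table:parameters}:

```python
def pow_c(eps_phys, eps_l, t_gate=20e-9, t_cycle=800e-9):
    """Computational figure of merit: error accrued by a physical qubit
    idling over one single-qubit gate time, relative to the logical
    error per cycle.  pow_c >= 1 marks computational break-even."""
    return (eps_phys * t_gate) / (eps_l * t_cycle)

def break_even_eps_l(eps_phys, t_gate=20e-9, t_cycle=800e-9):
    """Largest logical error rate per cycle that still gives pow_c >= 1."""
    return eps_phys * t_gate / t_cycle
```

With $\ephys=1.33\,\ppercycle$ and $\tcycle=800~\ns$, break-even requires $\eL\lesssim 0.03\,\ppercycle$, well below the per-cycle logical error rates of Fig.~\ref{fig:logical_fidelity}.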
Logical qubits can execute multiple logical $X$ and $Z$ gates within one QEC cycle, but require a few cycles for two-qubit and Hadamard gates (using the proposals of~\cite{Horsman12,Yoder16}), and state distillation over many cycles to perform non-Clifford gates. As such, this metric is merely a rough benchmark for computational competitiveness of the QEC code. However, given the amount by which all distance-$3$ logical fidelities fall above this metric, we find it unlikely that these codes will outperform a physical qubit by any fair comparison in the near future. \subsection{Decoder performance}\label{sec:decoder} A practical question facing quantum error correction is how best to balance the trade-off between decoder complexity and performance. Past proposals for surface-code computation via lattice surgery~\cite{Horsman12} require the decoder to provide an up-to-date estimate of the Pauli error on physical qubits during each logical $T$ gate. Because tracking Pauli errors through a non-Clifford gate is inefficient, however implemented, equivalent requirements will hold for any QEC code~\cite{Terhal15}. A decoder is thus required to process ancilla measurements from one cycle within the next (on average). This presents a considerable challenge for transmon-cQED implementations, as $\tcycle < 1\,\us$. This short time makes the use of computationally intensive decoding schemes difficult, even if they provide lower $\eL$. The leading strategy for decoding the surface code is MWPM using the blossom algorithm of Edmonds~\cite{Fowler12,Fowler09,Fowler14}. Although this algorithm is challenging to implement, it scales linearly in code distance~\cite{Fowler14}. The algorithm requires a set of weights (representing the probability that two given error signals are connected by a chain of errors) as input. An important practical question (see~\cite{Suppmaterial}) is whether these weights can be calculated on the fly, or must be precalculated and stored. 
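What the blossom algorithm computes efficiently can be illustrated by brute force on a toy defect set (hypothetical weights, roughly $-\log$ of the connection probabilities; exponential-time, for illustration only):

```python
from itertools import permutations

def min_weight_matching(weights):
    """Brute-force minimum-weight perfect matching.

    weights[(i, j)] (with i < j) is the matching weight between syndrome
    defects i and j.  This tries every pairing, so it is only usable for
    a handful of defects -- the blossom algorithm finds the same
    matching in polynomial time.
    """
    nodes = sorted({n for pair in weights for n in pair})
    best, best_pairs = float("inf"), None
    for perm in permutations(nodes):
        pairs = [tuple(sorted(perm[i:i + 2])) for i in range(0, len(perm), 2)]
        total = sum(weights[p] for p in pairs)
        if total < best:
            best, best_pairs = total, sorted(pairs)
    return best, best_pairs
```

For four defects where the pairs $(0,1)$ and $(2,3)$ carry the lowest weights, the matching $\{(0,1),(2,3)\}$ is returned, i.e.\ the two short error chains are deemed most likely.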
On-the-fly weight calculation is more flexible. For example, it can take into account the difference in error rates between an ancilla measured in the ground and in the excited state. The main weakness of MWPM is the inability to explicitly detect $Y$ errors. In fact,~\cite{Suppmaterial} shows that MWPM is nearly perfect in the absence of $Y$ errors. The decoder efficiency $\etad$ may significantly increase by extending MWPM to account for correlations between detected $X$ and $Z$ errors originating from $Y$ errors~\cite{Delfosse14,Fowler13b}. If computational limitations preclude a MWPM decoder from keeping up with $\tcycle$, the look-up table decoder may provide a straightforward solution for Surface-17. However, at current physical performance, the $\etad$ reduction will make Surface-17 barely miss memory break-even (Fig.~\ref{fig:logical_fidelity}). Furthermore, memory requirements make look-up table decoding already impractical for Surface-49. Evidently, real-time algorithmic decoding by MWPM or improved variants is an important research direction already at low code distance. \subsection{Other observations}\label{sec:gen_obs} The simulation results allow some further observations. Although we have focused on superconducting qubits, we surmise that the following statements are fairly general. We observe that small quasi-static qubit errors are suppressed by the repeated measurement. In our simulations, the $1/f$ flux noise producing $0.01$ radians of phase error per flux pulse on a qubit has a diamond norm approximately equal to the $\Tone$ noise, but a trace distance $100$ times smaller. As the flux noise increases $\eL$ by only $0.01\,\ppercycle$, it appears $\eL$ is dependent on the trace distance rather than the diamond norm of the underlying noise components. Quasi-static qubit errors can then be easily suppressed, but will also easily poison an experiment if unchecked. 
We further observe that above a certain value, ancilla and measurement errors have a diminished effect on $\eL$. In our error model, the leading sources of error for a distance $d$ code are chains of $(d-1)/2$ data qubit errors plus either a single ancilla qubit error or readout error, which together present the same syndrome as a chain of $(d+1)/2$ data qubit errors. An optimal decoder decides which of these chains is more likely, at which point the less-likely chain will be wrongly corrected, completing a logical error. This implies that if readout infidelity ($\ero$) or the ancilla error rate ($\ea$) is below the data qubit ($\ephys$) error rate, $\eL\propto(\ea+\ero)\ephys^{(d-1)/2}$. However, if $\ero$ ($\ea$) $>\ephys$, $\eL$ becomes independent of $\ero$ ($\ea$), to lowest order. This can be seen in Fig.~\ref{Fig_2_Measurement_time}, where the error rate is almost constant as $\ero$ exponentially increases. This approximation breaks down with large enough $\ea$ and $\ero$, but presents a counterintuitive point for experimental design: $\eL$ becomes less sensitive to measurement and ancilla errors as these errors get worse. A final, interesting point for future surface-code computation is shown in Fig.~\ref{Fig_2_Measurement_time}: the optimal cycle parameters for logical error rates per cycle and per unit time are not the same. This implies that logical qubits functioning as a quantum memory should be treated differently to those being used for computation. This idea can be extended further: at any point in time, a large quantum computer performing a computation will have a set $S_m$ of memory qubits which are storing part of a large entangled state, whilst a set $S_c$ of computation qubits containing the rest of the state undergoes operations. To minimize the probability of a logical error occurring on qubits within both $S_c$ and $S_m$, the cycle time of the qubits in $S_c$ can be reduced to minimize the rest time of qubits in $S_m$.
As a simple example, consider a single computational qubit $q_c$ and a single memory qubit $q_m$ sharing entanglement. Operating all qubits at $\tcycle=720~\ns$ to minimize $\eL$ would lead to a $1.09\%$ error rate for the two qubits combined. However, shortening the $\tcycle$ of $q_c$ reduces the time over which $q_m$ decays. If $q_c$ operates at $\tcycle=600~\ns$, the average error per computational cycle drops to $1.06\%$, as $q_m$ completes only $5$ cycles for every $6$ on $q_c$. Although this is only a meager improvement, one can imagine that when many more qubits are resting than performing computation, the relative gain will be quite significant. \subsection{Effects not taken into account} Although we have attempted to be thorough in the detailing of the circuit, we have neglected certain effects. We have used a simple model for C-Z gate errors as we lack data from experimental tomography (e.g.~one obtained from two-qubit gate-set tomography~\cite{Blume13}). Most importantly, we have neglected leakage, where a transmon is excited out of the two lowest energy states, i.e., out of the computational subspace. Previous experiments have reduced the leakage probability per C-Z gate to $\sim0.3\%$~\cite{Barends14}, and per single-qubit gate to $\sim0.001\%$~\cite{Chen16}. Schemes have also been developed to reduce the accumulation of leakage~\cite{Fowler13}. Extending quantumsim to include and investigate leakage is a next target. However, the representation of the additional quantum state can increase the simulation effort significantly [by a factor of $(9/4)^{10} \approx 3000$]. To still achieve this goal, some further approximations or modifications to the simulation will be necessary. 
Future simulations will also investigate the effect of spread in qubit parameters, both in space (i.e., variation of physical error rates between qubits) and time (e.g., $\Tone$ fluctuations), and cross-talk effects such as residual couplings between nearest and next-nearest neighbor transmons, qubit cross-driving, and qubit dephasing by measurement pulses targeting other qubits. \section{Methods} \subsection{Simulated experimental procedure} \subsubsection{Surface-$17$ basics} A QEC code can be defined by listing the data qubits and the stabilizer measurements that are repeatedly performed upon them~\cite{GottesmanPhD}. In this way, Surface-$17$ is defined by a $3\times 3$ grid of data qubits $\{D_0,\ldots,D_8\}$. In order to stabilize a single logical qubit, $9-1=8$ commuting measurements are performed. The stabilizers are the weight-two and weight-four $X$- and $Z$-type parity operators $X_2 X_1$, $Z_3 Z_0$, $X_4 X_3 X_1 X_0$, $Z_5 Z_4 Z_2 Z_1$, $Z_7 Z_6 Z_4 Z_3$, $X_8 X_7 X_5 X_4$, $Z_8 Z_5$, and $X_7 X_6$, where $X_j$ ($Z_j$) denotes the $X$ ($Z$) Pauli operator acting on data qubit $D_j$. Their measurement is realized indirectly using nearest-neighbor interactions between data and ancilla qubits arranged in a square lattice, followed by ancilla measurements [Fig.~\ref{Fig_schematic}(a)]. This leads to a total of $17$ physical qubits when a separate ancilla is used for each individual measurement. We follow the circuit realization of this code described in~\cite{Versluis16}, for which we give a schematic description in Fig.~\ref{Fig_schematic}(b) (see~\cite{Suppmaterial} for a full circuit diagram). In an experimental realization of this circuit, qubits will regularly accumulate errors. Multiple errors that occur within a short period of time (e.g., one cycle) form error `chains' that spread across the surface.
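As a consistency check, the eight stabilizers listed above can be verified to commute pairwise. In the sketch below (our own encoding), each stabilizer is a Pauli type plus its support; same-type products always commute, while an $X$-type and a $Z$-type product commute iff their supports share an even number of qubits:

```python
# Surface-17 stabilizers as (pauli_type, support) over data qubits D0..D8.
STABILIZERS = [
    ("X", {1, 2}), ("Z", {0, 3}),
    ("X", {0, 1, 3, 4}), ("Z", {1, 2, 4, 5}),
    ("Z", {3, 4, 6, 7}), ("X", {4, 5, 7, 8}),
    ("Z", {5, 8}), ("X", {6, 7}),
]

def commute(s1, s2):
    """X- and Z-type Pauli products anticommute on each shared qubit,
    so they commute iff the overlap of their supports has even size."""
    (t1, q1), (t2, q2) = s1, s2
    return t1 == t2 or len(q1 & q2) % 2 == 0

# All eight operators commute pairwise, as required for simultaneous
# stabilizer measurement.
assert all(commute(a, b) for a in STABILIZERS for b in STABILIZERS)
```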
Errors on single qubits, or correlated errors within a small subregion of Surface-$17$, fail to commute with the stabilizer measurements, creating error signals that allow diagnosis and correction of the error via a decoder. However, errors that spread across more than half the surface in a short enough period of time are misdiagnosed, causing an error on the logical qubit when wrongly corrected~\cite{Fowler12}. The rate at which these logical errors arise is the main focus of this paper. \subsubsection{Protocol for measurement of logical error rates}\label{sec:protocol} As the performance measure of Surface-$17$, we study the fidelity of the logical qubit as a quantum memory. We describe our protocol with an example `run' in Fig.~\ref{Fig_schematic}. We initialize all qubits in $\ket{0}$ and perform $k = 1,2,\ldots,20$ QEC cycles [Fig.~\ref{Fig_schematic}(b)]. Although this initial state is not a stabilizer eigenstate, the first QEC cycle projects the system into one of the $16$ overlapping eigenstates within the $+1$ eigenspace for $Z$ stabilizers, which form the logical $\ket{0}$ state~\cite{Fowler12}. This implies that, in the absence of errors, the first measurement of the $Z$ stabilizers will be $+1$, whilst that of the $X$ stabilizers will be random. In the following cycles, ancilla measurements of each run [Fig.~\ref{Fig_schematic}(c)] are processed using a classical decoding algorithm. The decoder computes a Pauli update after each QEC cycle [Fig.~\ref{Fig_schematic}(d)]. This is a best estimate of the Pauli operators that must be applied to the data qubits to transform the logical qubit back to the logical $\ket{0}$ state. The run ends with a final measurement of all data qubits in the computational basis. From this 9-bit outcome, a logical measurement result is declared [Fig.~\ref{Fig_schematic}(e)]. 
First, the four $Z$-type parities are calculated from the 9 data-qubit measurement outcomes and presented to the decoder as a final set of parity measurements. This ensures that the final computed Pauli update will transform the measurement results into a set that measures $+1$ for all $Z$ stabilizers. This results in one of $32$ final measurements, from which the value of a logical $Z$ operator can be calculated to give the measurement result (any choice of logical operator gives the same result). The logical fidelity $\fid[k]$ after $k$ QEC cycles is defined as the probability of this declared result matching the initial $+1$ state. \begin{figure*} \centering{ \includegraphics[width=\textwidth]{DecoderExperiment_LDC.pdf}} \caption{\label{Fig_schematic}Schematic overview of the simulated experiment. (a) 17 qubits are arranged in a surface code layout (legend top-right). The red data qubits are initialized in the ground state $\ket{0}$, and projected into an eigenstate of the measured $X$- (blue) and $Z$- (green) type stabilizer operators. (b) A section of the quantum circuit depicting the four-bit parity measurement implemented by the $A_3$ ancilla qubit ($+$/$-$ refer to $R_y(\pm\pi/2)$ single-qubit rotations). The ancilla qubit (green line, middle) is entangled with the four data qubits (red lines) to measure $Z_1Z_2Z_4Z_5$. Ancillas are not reset between cycles. Instead, the implementation relies on the quantum non-demolition nature of measurements. The stabilizer is then the product of the ancilla measurement results of successive cycles. This circuit is performed for all ancillas and repeated $k$ times before a final measurement of all (data and ancilla) qubits. (c) All syndrome measurements of the $k$ cycles are processed by the decoder. (d) After each cycle, the decoder updates its internal state to represent the most likely set of errors that occurred.
(e) After the final measurement, the decoder uses the readout from the data qubits, along with previous syndrome measurements, to declare a final logical state. To this end, the decoder processes the $Z$-stabilizers obtained directly from the data qubits, finalizing its prediction of most likely errors. The logical parity is then determined as the product of all data qubit parities ($\prod_{j=0}^8D_j$) once the declared errors are corrected. The logical fidelity $\fid$ is the probability that this declaration is the same as the initial state ($\ket{0}$).} \end{figure*} At long times and with low error rates, Surface codes have a constant logical error rate $\eL$. The fidelity $\fid[k]$ is obtained by counting the probability of an odd number of errors having occurred in total (as two $\sigma_x$ errors cancel)~\cite{Rol16,Terhalcomment}: \begin{align} \fid[k]=1-\sum_{l\;\mathrm{odd}}{k \choose l}\eL^l(1-\eL)^{k-l}. \end{align} Here, the combinatorial factor counts the number of combinations of $l$ errors in $k$ rounds, given an $\eL$ chance of error per round. This can be simplified to \begin{align} \fid[k]&=1-\frac{1}{2}\sum_l{k \choose l}\eL^l(1-\eL)^{k-l}(1-(-1)^l)\nonumber\\ &=1-\frac{1}{2}\left[(1-\epsilon_L+\epsilon_L)^k-(1-\epsilon_L-\epsilon_L)^k\right] \nonumber \\ &=\frac{1}{2}[1+(1-2\eL)^{k}].\label{eq:bad_decay} \end{align} However, at small $k$, the decay is dominated by the majority vote, for which $\eL\propto (k \ephys)^{(d+1)/2}$. For example, for all the Surface-$17$ decay curves, we observe a quadratic error rate at small $k$, as opposed to the linear slope predicted by Eq.~\eqref{eq:bad_decay}. In order to correct for this, we shift the above equation in $k$ by a free parameter $k_0$, resulting in Eq.~\eqref{eq:good_decay}. This function fits well to data with $k \ge 3$ in all plots, and thus allows accurate determination of $\eL$. 
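The simplification above is easily checked numerically; the following sketch (our function names) compares the odd-error sum with the closed form:

```python
from math import comb

def f_odd_sum(k, eps_l):
    """F(k) = 1 - P(odd number of logical flips in k cycles)."""
    return 1.0 - sum(comb(k, l) * eps_l**l * (1.0 - eps_l) ** (k - l)
                     for l in range(1, k + 1, 2))

def f_closed_form(k, eps_l):
    """Closed form of Eq. (bad_decay)."""
    return 0.5 * (1.0 + (1.0 - 2.0 * eps_l) ** k)
```

The two expressions agree for all $k$, and for $\eL=1/2$ both give $\fid=1/2$, i.e.\ a fully randomized logical qubit.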
\subsubsection{The quantumsim simulation package}\label{sec:quantumsim} Quantumsim performs calculations on density matrices utilizing a graphics processing unit in a standard desktop computer. Ancillas are measured at the end of each cycle, and thus not entangled with the rest of the system. As such, it is possible to obtain the effect of the QEC cycle on the system without explicitly representing the density matrix of all 17 qubits simultaneously. The simulation is set up as follows: the density matrix of the nine data qubits is allocated in memory with all qubits initialized to $\ket{0}$. One- and two-qubit gates are applied to the density matrix as completely positive, trace preserving maps represented by Pauli transfer matrices. When a gate involving an ancilla qubit must be performed, the density matrix of the system is dynamically enlarged to include that one ancilla. Qubit measurements are simulated as projective and following the Born rule, with projection probabilities given by the squared overlap of the input state with the measurement basis states. In order to capture empirical measurement errors, we implement a black-box measurement model (Sec.~\ref{sec:measurement}) by sandwiching the measurement between idling processes. The measurement projects the system to a product state of the ancilla and the projected sub-block of the density matrix. We can therefore remove the ancilla from the density matrix and only store its state right after projection, and continue the calculation with the partial density matrix of the other qubits. Making use of the specific arrangement of the interactions between ancillas and data qubits in Surface-17, it is possible to apply all operations to the density matrix in such an order (shown in~\cite{Suppmaterial}) that the total size of the density matrix never exceeds $2^{10}\times2^{10}$ (nine data qubits plus one ancilla), which allows relatively fast simulation. 
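As a minimal single-qubit illustration of this representation (a sketch in the Pauli basis $\{I,X,Y,Z\}$, not quantumsim's actual API), the code below applies the Pauli transfer matrix of amplitude damping, with $\gamma=1-e^{-\tau/\Tone}$, to the Pauli vector $(\langle I\rangle,\langle X\rangle,\langle Y\rangle,\langle Z\rangle)$ of the state $\ket{1}$:

```python
import math

def amplitude_damping_ptm(gamma):
    """Pauli transfer matrix of amplitude damping in the basis (I, X, Y, Z)."""
    s = math.sqrt(1.0 - gamma)
    return [[1.0,   0.0, 0.0, 0.0],
            [0.0,   s,   0.0, 0.0],
            [0.0,   0.0, s,   0.0],
            [gamma, 0.0, 0.0, 1.0 - gamma]]

def apply_ptm(ptm, pauli_vec):
    """Act with a channel on a state in the Pauli-vector representation."""
    return [sum(row[j] * pauli_vec[j] for j in range(4)) for row in ptm]

T1, tau = 30e-6, 800e-9                     # T1 and one cycle time
gamma = 1.0 - math.exp(-tau / T1)
vec = apply_ptm(amplitude_damping_ptm(gamma), [1.0, 0.0, 0.0, -1.0])  # |1>
p_excited = (1.0 - vec[3]) / 2.0            # decays as exp(-tau/T1)
```

The identity component stays $1$ (trace preservation), and the excited-state population after one cycle equals $e^{-\tcycle/\Tone}$, as expected.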
We emphasize that with the choice of error model in this work, this approach gives the same result as a full simulation on a 17-qubit density matrix. Only the introduction of residual entangling interactions between data and ancilla qubits (which we do not consider in this work) would make the latter necessary. On our hardware (see~\cite{Suppmaterial}), simulating one QEC cycle of Surface-17 with quantumsim takes $25~\ms$. We highlight an important advantage of doing density-matrix calculations with quantumsim. We do not perform projective measurements of the data qubits. Instead, after each cycle, we extract the diagonal of the data-qubit density matrix, which represents the probability distribution if a final measurement were performed. We leave the density matrix undisturbed and continue simulation up to $k=20$. This is a very useful property of the density-matrix approach, because having a probability distribution of all final readout events greatly reduces sampling noise. Our measurement model includes a declaration error probability (see Sec.~\ref{sec:measurement}), where the projected state of the ancilla after measurement is not the state reported to the decoder. Before decoding, we thus apply errors to the outcomes of the ancilla projections, and smear the probability distribution of the data qubit measurement. To then determine the fidelity averaged over this probability distribution, we present all 16 possible final $Z$-type parities to the decoder. This results in 16 different final Pauli updates, allowing us to determine correctness of the decoder for all 512 possible measurement outcomes. These are then averaged over the simulated probability distribution. This produces good results after $\sim 10^4$ simulated runs. A second highlight of quantumsim is the possibility of quantifying the sub-optimality of the decoder.
The fidelity of the logical qubit obtained in these numerical simulations is a combination of the error rates of the physical qubits and the approximations made by the decoder. Full density-matrix simulations make it possible to disentangle these two contributions. Namely, the fidelity is obtained by assigning correctness to each of the 512 possible readouts according to 16 outputs of the decoder, and summing the corresponding probabilities accordingly. If the probabilities are known, it is easy to determine the 16 results that a decoder should output in order to maximize fidelity (i.e., the output of the best-possible decoder). This allows placing a decoder upper bound $\fid^{\max}$ on logical fidelity as limited by the physical qubits independent of the decoder. Conversely, it also allows quantifying sub-optimality in the decoder used. In fact, we can make the following reverse statement: if our measurement model did not include a declaration error, then we could use the simulation to find the final density matrix of the system conditioned on a syndrome measurement. From this, the simulation could output exactly the 16 results that give $\fid^{\max}$, so that quantumsim could thus be used as a maximum-likelihood decoder. In this situation, $\fid^{\max}$ would not only be an upper bound, but indeed the performance of the best-possible decoder. However, as we add the declaration errors after simulation, we can only refer to $\fid^{\max}$ as the decoder upper bound. \subsection{Error models} We now describe the error model used in the simulations. Our motivation for the development of this error model is to provide a limited number of free parameters to study, whilst remaining as close to known experimental data as possible. As such, we have taken well-established theoretical models as a base, and used experimental tomography to provide fixed parameters for observed noise beyond these models. The parameters of the error model are provided in~\cite{Suppmaterial}. 
\begin{table}[h] \begin{tabular}{| l | l | l|l| } \hline Parameter & Symbol & Value & Reference \\ \hline Qubit relaxation time & $\Tone$ & $30~\us$ &~\cite{Bultink16} \\ Qubit dephasing time (white noise) & $\Tphi$ & $60~\us$ &~\cite{Asaad16, Bultink16} \\ Single-qubit gate time & $\Tgone$ & $20~\ns$ &~\cite{Asaad16, Bultink16} \\ Two-qubit gate time & $\Tgtwo$ & $40~\ns$ &~\cite{Riste15} \\ Coherent step time & $\Tcorr$ & $200~\ns$ &~\cite{Versluis16} \\ Measurement time & $\Tmeas$ & $300~\ns$ &~\cite{Bultink16} \\ Depletion time & $\Tdep$ & $300~\ns$ &~\cite{Bultink16} \\ Fast measurement time & $\Tmeas^{\mathrm{(fast)}}$& $100~\ns$ &~\cite{Walter17} \\ Fast depletion time & $\Tdep^{\mathrm{(fast)}}$ & $100~\ns$ &~\cite{Walter17} \\ \hline \end{tabular} \caption{\label{table:parameters} Standard simulation parameters: Summary of standard times used in all density-matrix simulations, unless otherwise indicated. The two-qubit gate is a conditional phase gate (C-Z). Other error rates and parameters are given in~\cite{Suppmaterial}.} \end{table} \subsubsection{Idling qubits} \label{sec:rest} While idling for a time $\tau$, a transmon in $\ket{1}$ can relax to $\ket{0}$. Furthermore, a transmon in superposition can acquire random quantum phase shifts between $\ket{0}$ and $\ket{1}$ due to $1/f$ noise sources (e.g., flux noise) and broadband ones (e.g., photon shot noise~\cite{Sears12} and quasiparticle tunneling~\cite{Riste13}). These combined effects can be parametrized by probabilities $p_{1}=1-\exp(-\tau/\Tone)$ for relaxation, and $p_{\phi}=1-\exp(-\tau/\Tphi)$ for pure dephasing. The combined effects of relaxation and pure dephasing lead to decay of the off-diagonal elements of the qubit density matrix. We model dephasing from broadband sources in this way, taking for $\Tphi$ the value extracted from the decay time $\Ttwo$ of standard echo experiments: \begin{equation} \frac{1}{\Ttwo}=\frac{1}{\Tphi}+\frac{1}{2\Tone}.
\end{equation} We model $1/f$ sources differently, as discussed below. \subsubsection{Dephasing from photon noise} \label{sec:photons} The dominant broadband dephasing source is the shot noise due to photons in the readout resonator. This dephasing is present whenever the coupled qubit is brought into superposition before the readout resonator has returned to the vacuum state following the last measurement. This leads to an additional, time-dependent pure dephasing (rates given in~\cite{Suppmaterial}). \subsubsection{One-qubit Y rotations} \label{sec:sqgates} We model $y$-axis rotations as instantaneous rotations sandwiched by idling periods of duration $\Tgone/2$. The errors in the instantaneous gates are modeled from process matrices measured by gate-set tomography~\cite{Blume13,Blume16} in a recent experiment~\cite{Rol16}. In this experiment, the GST analysis of single-qubit gates also showed that the errors can mostly be attributed to Markovian noise. For simplicity, we thus model these errors as Markovian. \subsubsection{Dephasing of flux-pulsed qubits} \label{sec:dephasing} During the coherent step, transmons are repeatedly moved in frequency away from their sweetspot using flux pulses, either to implement a C-Z gate or to avoid one. Away from the sweetspot, transmons become first-order sensitive to flux noise, which causes an additional random phase shift. As this noise typically has a $1/f$ power spectrum, the largest contribution comes from low-frequency components that are essentially static for a single run, but fluctuating between different runs. In our simulation, we approximate the effect of this noise through ensemble averaging, with a quasi-static phase error added to a transmon whenever it is flux pulsed. Gaussian phase errors, with variance calculated in~\cite{Suppmaterial}, are drawn independently for each qubit and for each run.
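A minimal sketch of the idling channel described above, applying amplitude damping and pure dephasing to a single-qubit density matrix (illustrative only; parameter values taken from Table~\ref{table:parameters}):

```python
import math

T1, TPHI = 30e-6, 60e-6          # relaxation / pure-dephasing times (s)

def idle(rho, tau):
    """Amplitude damping + pure dephasing on a 2x2 density matrix (nested lists)."""
    p_relax = 1.0 - math.exp(-tau / T1)           # P(|1> decays to |0>) in time tau
    T2 = 1.0 / (1.0 / TPHI + 1.0 / (2.0 * T1))    # echo decay time from the text
    coh = math.exp(-tau / T2)                     # off-diagonal decay factor
    return [
        [rho[0][0] + p_relax * rho[1][1], coh * rho[0][1]],
        [coh * rho[1][0], (1.0 - p_relax) * rho[1][1]],
    ]

# |+> state idling for one coherent-step time (200 ns)
rho_plus = [[0.5, 0.5], [0.5, 0.5]]
rho_out = idle(rho_plus, 200e-9)
```

The trace is preserved while the coherence decays by $\exp(-\tau/\Ttwo)$, consistent with the echo relation above.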
\subsubsection{C-Z gate error} \label{sec:two_qubit_gates} The C-Z gate is achieved by flux pulsing a transmon into the $\ket{11}\leftrightarrow \ket{02}$ avoided crossing with another transmon, where the $2$ denotes the second-excited state of the fluxed transmon. Holding the transmons here for $\Tgtwo$ causes the probability amplitudes of $\ket{01}$ and $\ket{11}$ to acquire phases~\cite{DiCarlo09}. Careful tuning allows the phase $\phi_{01}$ acquired by $\ket{01}$ (the single-qubit phase $\phi_{\oneq}$) to be an integer multiple of $2\pi$, and the phase $\phi_{11}$ acquired by $\ket{11}$ to be $\pi$ extra. This extra phase acquired by $\ket{11}$ is the two-qubit phase $\phi_{\twoq}$. Single- and two-qubit phases are affected by flux noise because the qubit is first-order sensitive during the gate. Previously, we discussed the single-qubit phase error. In~\cite{Suppmaterial}, we calculate the corresponding two-qubit phase error $\delta \phi_{\twoq}$. Our full (but simplistic) model of the C-Z gate consists of an instantaneous C-Z gate with single-qubit phase error $\delta \phi_{\oneq}$ and two-qubit phase error $\delta \phi_{\twoq}=\delta \phi_{\oneq}/2$, sandwiched by idling intervals of duration $\Tgtwo/2$. \subsubsection{Measurement} \label{sec:measurement} We model qubit measurement with a black-box description using parameters obtained from experiment. This description consists of the eight probabilities for transitions from an input state $\ket{i}\in\{\ket{0},\ket{1}\}$ into pairs ($m$,$\ket{o}$) of measurement outcome $m\in\{+1,-1\}$ and final state $\ket{o}\in\{\ket{0},\ket{1}\}$. By final state we mean the qubit state following the photon-depletion period. Input superposition states in the computational basis are first projected to $\ket{0}$ and $\ket{1}$ following the Born rule. The probability tree (the butterfly) is then used to obtain an output pair $\left(m,\ket{o}\right)$.
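The black-box measurement model can be sketched as follows. The butterfly probabilities below are hypothetical placeholders, not measured values; each input state carries four branches $(m,\ket{o})$ summing to one:

```python
import random

# Hypothetical butterfly: for each input state i, probabilities of the four
# (declared outcome m, post-depletion state o) branches; each row sums to 1.
BUTTERFLY = {
    0: {(+1, 0): 0.985, (+1, 1): 0.005, (-1, 0): 0.005, (-1, 1): 0.005},
    1: {(-1, 1): 0.970, (-1, 0): 0.010, (+1, 1): 0.005, (+1, 0): 0.015},
}

def measure(amp0, amp1, rng=random):
    """Born-rule projection of amp0|0> + amp1|1>, then one butterfly branch."""
    i = 0 if rng.random() < abs(amp0) ** 2 else 1   # project the superposition
    r, acc = rng.random(), 0.0
    for branch, prob in BUTTERFLY[i].items():        # sample (m, |o>)
        acc += prob
        if r < acc:
            return branch
    return branch

random.seed(0)
m, o = measure(1.0, 0.0)   # measuring a qubit prepared in |0>
```

In this toy parametrization, measuring $\ket{0}$ declares $m=+1$ and leaves the qubit in $\ket{0}$ with high probability, mimicking a mostly quantum-non-demolition readout.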
These experimental parameters can be described by a six-parameter model (described in detail in~\cite{Suppmaterial}), consisting of periods of enhanced noise before and after a point at which the qubit is perfectly projected, and two probabilities $\ero^{\ket{i}}$ for wrongly declaring the result of this projective measurement. In~\cite{Suppmaterial}, a scheme for measuring these butterfly parameters and mapping them to the six-parameter model is described. In experiment, we find that the readout errors $\ero^{\ket{i}}$ are almost independent of the qubit state $\ket{i}$, and so we describe them with a single readout error parameter $\ero$ in this work. \begin{acknowledgments} We thank C.~C.~Bultink, M.~A.~Rol, B.~Criger, X.~Fu, S.~Poletto, R.~Versluis, P.~Baireuther, D.~DiVincenzo, B.~Terhal, and C.W.J. Beenakker for useful discussions. This research is supported by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), an ERC Synergy Grant, and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the U.S. Army Research Office grant W911NF-16-1-0071. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. \end{acknowledgments} \bibliographystyle{naturemag}
\section{Brief Overview of Blockchain} Blockchain technology is the underlying mechanism for cryptocurrencies such as Bitcoin \cite{b2}. Bitcoin, the cryptocurrency introduced in 2009, reached a record-high valuation in December 2017 \cite{b1} and created a hype around digital currency. Since the debut of Bitcoin, several cryptocurrencies have entered the market, together holding a market cap in the billions of dollars \cite{b8}. Blockchain was first introduced in 2008 and implemented as the infrastructure of Bitcoin in 2009 by Satoshi Nakamoto, an unknown person or group \cite{b2}. Blockchain is essentially a ``distributed ledger or database'' in which all transactions involving the participating parties are documented. Blockchain is a chronological chain of blocks, where each block can be considered a page in a ledger. The chain grows continuously as miners discover new blocks and append them to the existing Blockchain. Each transaction is broadcast in the network via cryptographic communication, while miners collect as many transactions as they can, verify them using ``proof-of-work'' and create a new block. Miners compete with each other to create such blocks. Once a winning block is appended to the Blockchain, a copy of the new block is broadcast to the entire network, thus creating a decentralized public ledger. While miners are responsible for verifying transactions and updating the Blockchain, they are incentivized with rewards. Note that traditional ledger technologies need a trusted third party such as a bank, as shown in Fig. \ref{fig.t1}. The Blockchain-based technology, however, runs on a peer-to-peer network, as shown in Fig. \ref{fig.bc}, where a centralized trusted third party is not needed for managing the transactions. Since issues such as double-spending are mitigated through the consensus of miners, this system does not require an intermediary, that is, a centralized trusted third party, as shown in Fig. \ref{fig.bc}.
\begin{figure}[!b] \centering \includegraphics[width=3in]{Figs/traditional.png} \caption{Traditional centralized ledger technology with a trusted third-party.} \label{fig.t1} \end{figure} \begin{figure}[!b] \centering \includegraphics[width=3.5in]{Figs/blockchain.png} \caption{A Typical Example of Blockchain Technology -- distributed ledger technology -- without a trusted third-party.} \label{fig.bc} \end{figure} \begin{figure*}[!h] \centering \includegraphics[width=7in]{Figs/history1.png} \caption{The History and Milestones of Blockchain Technology.} \label{fig.his} \end{figure*} Although Blockchain is widely used for cryptocurrencies such as Bitcoin, this technology can also be applied to other applications. The Blockchain technology enables financial services without involving financial institutions such as banks or other intermediaries, as shown in Fig. \ref{fig.bc}. It can be implemented to conduct services such as online payment, digital assets and remittance \cite{rawat2018smart}. The key features of Blockchain technology -- decentralization, immutability, integrity and anonymity -- make it applicable to non-financial domains such as smart contracts \cite{b3}, the Internet-of-Things \cite{b4}, reputation systems \cite{b5}, security services \cite{b6, rawat2018ishare, malomo2018next, adebayo2019blockchain}, wireless network virtualization \cite{rawat2018leveraging} and so on. In this paper, we briefly discuss the history and architecture of Blockchain followed by its applications and use cases in different domains. Although the technology has been widely praised and discussed in academia and industry for different applications, a comprehensive documentation of its emerging applications and use cases is rarely found in the literature.
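To make the chain-of-blocks and proof-of-work mechanics described above concrete, the following is a minimal toy sketch of hash-linked blocks (illustrative only; real Bitcoin block headers, Merkle trees and difficulty rules are considerably more involved):

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over the serialized block contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(parent_hash, transactions, difficulty=3):
    """Toy proof-of-work: find a nonce whose block hash starts with
    `difficulty` hex zeros."""
    block = {
        "parent": parent_hash,   # link to the previous (parent) block
        "tx_root": hashlib.sha256(json.dumps(transactions).encode()).hexdigest(),
        "nonce": 0,
    }
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

# The genesis block has no parent to point to.
genesis = {"parent": "0" * 64, "tx_root": "", "nonce": 0}
chain = [genesis]
new_block = mine(block_hash(genesis), ["alice -> bob: 1 coin"])
chain.append(new_block)   # each block references its parent's hash
```

Because each block embeds the hash of its parent, altering any past transaction would change every subsequent hash, which is what makes the ledger tamper-evident.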
\section{Brief History of Blockchain Technology} Although the technologies involved in Blockchain, such as cryptographically secured chains of blocks \cite{b9} and Merkle trees \cite{b10}, were developed in the early 1990s, the first Blockchain was conceptualized and implemented by Satoshi Nakamoto in 2008 \cite{b2}. The work was published in a paper entitled ``Bitcoin: A Peer-to-Peer Electronic Cash System.'' The paper introduced a peer-to-peer version of digital cash that can function without any central authority, such as a bank, to verify transactions. Bitcoin was the first implementation of this technology. After the publication of \cite{b2}, an open source program was published by the same author that began with the Genesis block of 50 coins. \section{Block Architecture} \begin{figure}[!t] \centering \includegraphics[width=3.3in]{Figs/blocks.png} \caption{Typical Blocks with Header and Transactions in Blockchain Technology.} \label{fig.block} \end{figure} \begin{figure*}[!ht] \centering \includegraphics[width=7in]{Figs/blockchainApp.png} \caption{Different Blockchain Applications and Use Cases.} \label{fig.app} \end{figure*} Blockchain is a chronological sequence of blocks, where each block holds a complete list of transactions, as shown in Fig. \ref{fig.block}. It follows a linked-list data structure, where each block points to the previous block through the hash value of that previous block, also called the parent block. The first block of a Blockchain, the genesis block, has no block to point to. A block is composed of metadata (the block header) and a list of transactions (the block body). The metadata includes the block version, parent block hash, Merkle tree root hash, timestamp and nonce. The nonce is an arbitrary number that miners vary when performing proof-of-work. A digital signature is required to establish secure communication between the users in the Blockchain network. Each user is assigned a public and a private key.
When a user broadcasts a transaction to the network, the user signs the transaction using the private key. Later, the recipients use the user's public key to verify the transaction. Such cryptographic communication preserves the integrity of the transaction within the network. \section{Applications and Use Cases of Blockchain} After the successful implementation of Blockchain in Bitcoin \cite{b2}, and because of its salient features, Blockchain has been proposed for use in different applications and use cases, as shown in Fig. \ref{fig.app}. We present a brief overview of each domain in the following sections. \subsection{Finance} Conventionally, an intermediary such as a bank verifies and processes financial transactions. Having such a centralized system puts immense work in the hands of intermediaries, while the transactions remain prone to errors, as multiple uncoordinated parties are required to keep the records and adjust them. Thus, the entire process is time-consuming and costly. The Blockchain reduces such complications associated with financial services by introducing a distributed public ledger, where the transactions are verified by the miners using ``proof-of-work'' \cite{b1}. Since each node in the Blockchain network has a copy of the updated Blockchain, there is transparency regarding the transactions, as shown in Fig. \ref{fig.bc}. Since the blocks are chronologically arranged, once a block is added to the Blockchain with a verified transaction, the entire Blockchain is immutable. Thus, attackers cannot manipulate transactions once they are registered in the system. In case of a conflicting Blockchain, where branching might occur, miners always go for the longest chain, as the longest chain is the more reliable one. With such a secure communication protocol and robust verification method, Blockchain creates an effective system to improve our existing financial services.
\subsubsection{Cryptocurrency} Cryptocurrency -- which holds a market cap in the billions of dollars \cite{b8} -- has been made possible by Blockchain \cite{b2}, which does not need a trusted third party like a bank in traditional systems. All transactions are verifiable and immutable. \subsubsection{Global Payments (Global Currency)} Global payments are complicated and time-consuming because many intermediaries are involved in verifying the transactions. The entire process can be error-prone and costly. These issues arise essentially from the centralization of monetary transactions, where institutions such as banks and other financial firms dictate the processes and are responsible for verifying the transactions. The Blockchain technology reduces such complexities by introducing the decentralized public ledger and a robust method for verifying the transactions. Within this peer-to-peer network, global payments are quicker, verifiable, immutable and safer. Several remittance companies \cite{bcC}, such as Abra and Bitspark, are already using Blockchain technology for remittance services. \subsubsection{Insurance Claims and Processing} Insurance claims processing has to deal with many fraudulent claims. Moreover, properly processing an insurance claim requires up-to-date policies and data associated with each claim, which is difficult to handle with traditional approaches. With Blockchain technology (distributed ledger technology), the process can be handled efficiently and in a secure manner. Similarly, fraudulent claims/transactions can be detected and dropped with good confidence, as multiple participants/miners need to agree on the validity of each transaction. This ensures that policyholders receive the settlements they deserve quickly and effectively.
\subsection{Blockchain Government} In order to build trustworthy and effective government operations through collaborative and transparent networks, different government organizations and units can use Blockchain technology. Blockchain technology, with its salient features, will help provide accountability, transparency and trust among stakeholders such as citizens, leaders and government officials across their different operations \cite{rawat2018smart,rawat2019cybersecurity}. Government is required to make its affairs transparent in order to ensure the accountability of its bodies. To do so, a government might have to make a great amount of data open to the public \cite{rawat2019cybersecurity}. As per a report from McKinsey \cite{b11}, open data made available to the public on the Internet can benefit people on the order of trillions of dollars. Several entities can use open data to expose illegitimate doings. The public can question the quality of health care and food supplies with such open data, which eventually makes the system fairer and more trustworthy \cite{b11}. Releasing the data to the public is therefore helpful for the economy, but making the data public also has its own challenges. When the data is released only once a year, it is largely left unnoticed by the public. An alternative can thus be a Blockchain government, where the data is distributed in the public ledger and is open to the public all the time. Moreover, smart contracts can be used to ensure that elected officials work in the favor of the electorate. The contracts can be based on the manifestos of the elected officials, who only get paid once they meet the demands of the electorate via the smart contracts. This kind of technology can keep elected officials in check and possibly enforce the fulfillment of their promises. \subsection{Internet of Things (IoT)} The number of electronic devices connected to the Internet is rapidly increasing every year \cite{b7}.
The massive number of devices interlinked with each other creates the Internet-of-Things (IoT). The IoT is expected to transform the way we live, making ideas like smart homes feasible. While this new phenomenon is likely to make lives easier, having a massive number of heterogeneous devices connected to the Internet creates grave issues regarding cyber security and privacy. Blockchain can be an important technology to secure the IoT. With millions of devices connected to each other and communicating, it is important to ensure that the information flowing through the IoT remains secure and that the participants stay accountable. \subsubsection{Energy Cyber Physical System} Smart energy grid systems are becoming complex cyber-physical systems (CPS) where complex interactions among power generation, distribution, utility offices and users happen in a bidirectional manner \cite{rawat2015cyber}. Salient features of Blockchain technology provide a secure and verifiable environment to support interactions in energy CPS \cite{zhaoyang2018blockchain}. \subsubsection{Vehicular Cyber Physical System} The vehicular cyber physical system is regarded as the backbone technology for intelligent transportation systems and autonomous driving \cite{rawat2016vehicular,rawat2015cyber}, improving road safety and traffic efficiency. Security and privacy in vehicular cyber physical systems are always central issues, since vehicles are tied to the private information of their owner, driver or renter. Blockchain, with its features such as decentralization, immutability, integrity and anonymity through a pair of public and private keys, can be leveraged to build a smart and secure autonomous intelligent transportation system \cite{sharma2017block}. \subsubsection{Blockchain in Aviation Systems} Blockchain in the aviation industry can offer robust collaborative partnerships among service and product providers to offer travel services as well as products in a distributed, secure way.
Smart contracts could streamline the interactions among businesses and different units within a business \cite{akmeemana2017blockchain}. \subsubsection{Supply Chain Systems/Sensors} Smart sensors can help companies gather information about the supply chain as goods are transported around the globe. Several leading supply chain companies are reported to use smart sensors to track supplies. Therefore, the number of such sensors is expected to grow rapidly in the near future. With such a massive distribution of sensors, there will be an enormous amount of data to be collected and analyzed. Blockchain technology can be used for a disruptive transformation toward efficient and secure supply chains and networks \cite{korpela2017digital}. \subsubsection{Smart Homes} Blockchain in the context of smart homes with IoT devices can help provide secure and reliable smart home operations \cite{dorri2017blockchain}. However, implementation of Blockchain in such resource-constrained IoT systems is not straightforward because of the high resource demand of proof-of-work, limited storage capacity, low latency requirements and low scalability \cite{dorri2017blockchain}. \subsubsection{Internet of Battle-field Things (IoBT)} The Internet-of-Battle-field Things (IoBT) is regarded as the backbone for smart defense and warfare applications, where battlefield things such as combat equipment, unmanned aerial vehicles, ground vehicles and fighters with sensors can collect intelligence to enable informed decisions in real time in a secure and immutable manner. Note that the IoBT is diverse in that it consists of different devices (combat equipment, unmanned aerial vehicles, ground vehicles and fighters), platforms, networks and connectivity. This diversity creates several challenges for secure, privacy-aware and trustworthy battlefield operations such as communication and computing.
Blockchain technology can help provide secure and reliable operations for the IoBT \cite{tosh2018blockchain}. \subsection{Cybersecurity} Another application of Blockchain is cybersecurity, where threat information can be shared using Blockchain among participants/organizations to combat future cyber attacks \cite{rawat2018ishare, adebayo2019blockchain}. Blockchain cannot fix everything, but its features can be leveraged to harden systems against a multitude of cyber threats. \subsection{Smart Property and Public Value} All entities/property, such as houses, land, automobiles, stocks, etc., can be represented in the ledger technology, and Blockchain can be used to keep track of all operations and property records. Once the records are kept in the Blockchain, they are shared with all the concerned or participating parties, and can easily be used to establish contracts and verify them. Thus, with a decentralized ledger, any lost record can be duplicated from the network and immediately used to recover the loss \cite{crosby2016blockchain}. \subsubsection{Hard Money Lending} Hard money lending helps people mitigate financial burdens in the short term. It requires the borrower to put up property such as real estate as collateral. Thus, it is important that the collateral is legitimate and trustworthy. Lenders can lose money if the collateral is not redeemable. Similarly, the borrower might also lose the property if the lender uses fraudulent policies as part of the agreement. With Blockchain, both the property and the policies can be encoded in the ledger and distributed among the users. This creates a healthy setting where people can trade with complete strangers due to the transparency and security of the Blockchain. Smart contracts can be deployed using Blockchain for these kinds of scenarios. \subsubsection{Cars and Phones} Personal devices such as phones are protected using authentication keys.
Similarly, cars are only accessible to their owners using smart keys. This kind of technology is made possible by cryptography, and yet such methods can fail if the authentication key is stolen, copied or transferred. Such issues can be fixed in the Blockchain ledger, where users/miners can replace and replicate lost credentials. \subsubsection{Smart Appliances} Smart appliances are essentially electronic devices aided with a cyber system, such that the cyber portion can communicate information about the environment around the device and about the device itself. It is essentially the idea of a ``talking toaster,'' where a toaster can give its user information relevant to its usage. A home connected with smart appliances can be considered a smart home, where the cyber-physical system tries to optimize the functionalities of the smart devices, providing maximum utility to its users. With so many devices involved as smart appliances, we can encode them in the Blockchain as smart property. Such practice can easily ensure the ownership of a user over these devices. \subsubsection{Asset Management} Asset management involves multiple parties, each of which is required to keep a record of the transactions. Keeping the same transactions in different places makes the entire process inefficient and prone to errors. To make matters worse, asset management might also involve cross-border transactions, adding more complexity to verifying the transactions. Such issues can be dealt with through a distributed ledger, where each party can have a copy of the entire transaction history and get updated about each transaction via cryptographic communication \cite{notheisen2017trading}. This improves efficiency and reduces cost, as there is no intermediary needed to verify the transactions.
\subsection{Cloud Storage and Provenance} Metadata that records the history of creation and all operations, including file/data access activities, can be kept in the Blockchain and then shared with all stakeholders. Data provenance through Blockchain is important for applications like accountability and forensics \cite{liang2017provchain}. \subsection{Intellectual Property} Intellectual property management systems could leverage the Blockchain technology to enforce provable intellectual property rights \cite{zeilinger2018digital}, where verifiable, immutable and secure operations in Blockchain could help resolve any disputes. \subsection{Food Safety} Food safety is one of the most critical issues to be addressed, since over 0.6 billion people (equivalently, 1 in 10 people) in the world become ill after consuming bad food every year \cite{galvin2017ibm}. About 1,167 people die every day \cite{galvin2017ibm}. To prevent these issues, Blockchain technology can help prevent food counterfeiting by providing visibility across the food supply chain and access to information such as food content, origin and expiration date in seconds. Food consumers will have better control over food and information, with high accuracy and transparency, for food safety \cite{galvin2017ibm}. \subsection{Blockchain Notary} Blockchain, using distributed ledger technology with cryptography, replaces trusted third parties such as a notary (a trusted third party in traditional systems). Blockchain helps the entire notary process by automatically executing processes in a cost-effective, transparent and secure manner \cite{nofer2017blockchain}. \subsection{Blockchain Health-care} Personal health records are sensitive information and need to be handled with high security. Such personal records can be encoded and stored using Blockchain, with a private key that allows only specific individuals to access the records.
Similarly, the same protocol can be applied to conduct research where personal records are used, following HIPAA laws to ensure confidentiality of the data. Patient records in the Blockchain can be automatically sent to the insurance providers, or doctors can send the health record to concerned parties securely \cite{mettler2016blockchain}. \subsection{Fundraising and Transparency} Transparency is one of the issues to be addressed in fund-raising activities to make the process trustworthy. Blockchain as a distributed ledger technology can ensure the transparency, security and integrity of fund-raising activities by leveraging Blockchain features such as immutability, verifiability and security \cite{zhu2016analysis}. \subsection{Wireless Networks and Virtualization} Wireless networks are suffering from the explosive growth of IoT and CPS applications, and different approaches have been studied to enhance network capacity and coverage \cite{rawat2015dynamic,rawat2018payoff}. Blockchain can be used to sublease wireless resources such as RF slices to network service providers or third parties like mobile virtual network operators (MVNOs) in a verifiable way, so that the quality of service of the users is met by preventing double spending/subleasing of the same wireless resource to multiple parties in a distributed manner \cite{rawat2018leveraging, rawat2017edge}. \subsection{Real Estate} Blockchain technology as a distributed ledger database system can offer benefits for the real estate industry. Property title recording can be done using blocks with transactions in Blockchain rather than using the traditional/current record keeping system \cite{spielman2016blockchain}. \subsection{Smart Contracts} Smart contracts are digital entities written in a Turing-complete byte language, called EVM bytecode \cite{b47}. They are essentially a set of functions, where each function is a sequence of instructions. Such contracts are embedded with conditional statements which enable them to self-execute.
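As a toy illustration of such a self-executing conditional contract (plain Python for illustration, not EVM bytecode; the escrow scenario and all names are hypothetical):

```python
class EscrowContract:
    """Toy self-executing contract: pays the seller once delivery is confirmed."""

    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.settled = False

    def confirm_delivery(self):
        self.delivered = True
        self._maybe_execute()

    def _maybe_execute(self):
        # The embedded conditional clause: executes automatically once satisfied.
        if self.delivered and not self.settled:
            self.buyer["balance"] -= self.amount
            self.seller["balance"] += self.amount
            self.settled = True

buyer = {"balance": 100}
seller = {"balance": 0}
contract = EscrowContract(buyer, seller, 40)
contract.confirm_delivery()   # condition met -> funds move without an intermediary
```

The point of the sketch is that no intermediary triggers the transfer: the conditional clause embedded in the contract executes itself as soon as its condition is met.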
Smart contracts can be a replacement for intermediaries that make sure all parties abide by the agreed terms. Thus, with Blockchain, such regulatory bodies become redundant. Smart contracts based on Blockchain ensure that the participants know the contract details and that the agreements are automatically executed once the conditions are fulfilled. In order to make smart contracts work, there is a group of mutually ``untrusted'' peers, called miners, who verify the transactions related to the contract. Each transaction broadcast to the Blockchain network is collected by the miners and verified before being encoded into a new block and appended to the Blockchain. Any potential conflict is resolved through the consensus protocol, which is based on ``proof-of-work''. Thus, a smart contract only works if there is no bias or majority in the computational power of the network, thus ensuring decentralization in the network. The miners are rewarded for creating new blocks under a protocol that all miners are required to follow. A miner's work is discredited by other miners if he/she does not follow the protocol; thus, there is an incentive for each miner to follow the rules. \subsection{Identity Management} In this section, we present a brief overview of different identity-management-based applications and how they could benefit from Blockchain technology. \subsubsection{Academic Records} Blockchain can be used to store academic records for students and universities in a decentralized ledger \cite{sharples2016blockchain}. This academic record keeping system will be tamper-proof, verifiable, immutable and secure \cite{sharples2016blockchain}. \subsubsection{Blockchain Music} In the music industry, it is a huge challenge to establish ownership rights over products and to benefit from royalty distribution. In order to monetize digital music products, ownership rights are required.
The Blockchain and smart contracts technology can be used to create a comprehensive and accurate decentralized database of music rights. Meanwhile, the ledger can be used to provide transparent information regarding artist royalties and real-time distributions to all the labels involved. Digital currency can be deployed to make the payments as per the terms of the contracts. \subsubsection{Birth, Marriage and Death Certificates} Records of birth, marriage and death are important records for a citizen, as they are used to confirm citizenship and to grant rights according to status, such as voting rights and work permits. While keeping such records with conventional methods can be slow and prone to error, these issues can be fixed with a public ledger such as Blockchain. The Blockchain can make such records more reliable by encrypting them \cite{sullivan2017residency,doveylove}. \subsubsection{Passports} The first digital passport was launched in 2014 \cite{b12}, which could help owners identify themselves online and offline. With this Blockchain technology, a user can take a picture and share it via cryptographic communication, and it can then be verified among the users via digital signatures. In the Blockchain-based passport system, passports are stored in the distributed ledger, which is confirmed/verified by the users as well as the government. \subsubsection{Personal Identity and Privacy} We perform several transactions that are based on our personal information. For instance, we can only buy alcoholic beverages, or get into bars and several other public places, depending on our age. Similarly, there is some level of personal profiling done by the companies we interact with through online shopping or personalized web surfing. Thus, there is a notion of personal identity that is traded in the market to advertisers so that they can target essential products to users as per their needs.
While such personal identity is fairly being traded in the market, it is essential to protect the privacy of the users. Therefore, the Blockchain can be used to protect the identity of the users by encrypting the data and securing it from attackers \cite{jacobovitz2016blockchain,rawat2018ishare,rawat2018smart}. \textit{ Personal ID}: There are several personal identifications we carry around such as our driver's license, student identity cards, keys, social security number card, state identification card, etc. Blockchain can be used to store these identifications as digital form of IDs that will replace all forms of traditional physical identifications. Essentially, one Blockchain ID could be used for all kind of identifications used identify the same subject or object\cite{andrade2016systems}. \subsubsection{Voting} Blockchain could offer many tangible benefits for verifiable secure voting system in coming years. Current voting system has flaws and hard to verify votes and votes. Thus, Blockchain with its features could provide immutable, verifiable and secure voting system where voter can cast their votes with highest confidence from anywhere in the world \cite{ernest2017blockchain,osgood2016future}. \subsection{Reputation System} Reputation system is an important measure on how much a community trusts a person. Such a system plays an important role to assess a person through their reputation, which is evaluated on his/her past transactions and interactions with the community. Credit system can be thought of as a reputation system, where users are given credit scores based on their financial activities and later they are used to make decision regarding other financial transactions. There can be falsification of such system if the integrity of the data is compromised. Thus, it is important to securely keep the record of past transaction and fairly evaluate the reputation of a users. 
Here, Blockchain can be a really important technology as it keeps a distributed public ledger which is scrutinized by the consensus of users in the network. \subsection{Other Applications and Use Cases} Blockchain technology can be used in any scenarios when a trusted third-party is not needed or peer-to-peer system is needed for managing the transactions, as shown in Fig. \ref{fig.bc}, with features like transparency, decentralization, integrity, immutability, security and privacy. However, Blockchain has some limitations such as high delay introduced by consensus process, large size of the blocks in Blockchain, etc. \section{Summary} This paper has briefly summarized not only how Blockchain works but also its different emerging applications and use cases. By reading this paper, readers can have better understanding of what is a Blockchain and what are its different applications and user cases. \section*{Acknowledgment} {This work is partly supported by the U.S. Air Force Research Lab (AFRL), U.S. National Science Foundation (NSF) under grants CNS 1650831 and HRD 1828811, and by the U.S. Department of Homeland Security (DHS) under grant award number, 2017‐ST‐062‐000003. However, any opinion, finding, and conclusions or recommendations expressed in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the funding agencies. } \section{Brief Overview of Blockchain} Blockchain technology is the underlying mechanism for cryptocurrencies such as Bitcoin \cite{b2}. Bitcoin, the cryptocurrency introduced in 2009, peaked a record high valuation in the December of 2017 \cite{b1} and created a hype around digital currency. Since the debut of Bitcoin, there has been several cryptocurrencies in the market holding a market cap in billions of dollars \cite{b8}. 
Blockchain was first introduced in 2008 and implemented as the infrastructure of Bitcoin in 2009 by Satoshi Nakamoto, a pseudonymous person or group \cite{b2}. Blockchain is essentially a ``distributed ledger or database'' in which all transactions among the participating parties are recorded. Blockchain is a chronological chain of blocks, where each block can be considered a page in a ledger. The chain grows continuously as miners discover new blocks and append them to the existing Blockchain. Each transaction is broadcasted in the network via cryptographic communication, while miners collect as many transactions as they can, verify them using ``proof-of-work'', and create a new block. Miners compete with each other to create such blocks. Once a winning block is appended to the Blockchain, a copy of the new block is broadcasted to the entire network, thus creating a decentralized public ledger. Since miners are responsible for verifying transactions and updating the Blockchain, they are incentivized with rewards. Note that traditional ledger technologies need a trusted third party such as a bank, as shown in Fig. \ref{fig.t1}. The Blockchain-based technology, in contrast, runs on a peer-to-peer network, as shown in Fig. \ref{fig.bc}, where a centralized trusted third party is not needed for managing the transactions. Since issues such as double-spending are mitigated through the consensus of miners, this system does not require an intermediary, that is, a centralized trusted third party.
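The mining loop described above (collect transactions, vary a value until the block hash meets a difficulty target, append to the chain) can be sketched in a few lines. This is a minimal illustration, not Bitcoin's actual block format; the field names and the four-leading-zeros difficulty are assumptions chosen for readability:

```python
import hashlib
import json

DIFFICULTY = 4  # assumption for illustration: require 4 leading hex zeros

def block_hash(block: dict) -> str:
    """Hash the block's contents (header fields and transactions)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine_block(parent_hash: str, transactions: list) -> dict:
    """Proof-of-work: vary the nonce until the hash meets the difficulty target."""
    block = {
        "parent_hash": parent_hash,
        "timestamp": 0,  # fixed here so the example is reproducible
        "transactions": transactions,
        "nonce": 0,
    }
    while not block_hash(block).startswith("0" * DIFFICULTY):
        block["nonce"] += 1
    return block

# The genesis block has no parent to point to.
genesis = mine_block("0" * 64, ["coinbase: 50 coins"])
block1 = mine_block(block_hash(genesis), ["alice -> bob: 5"])
```

Because each block embeds its parent's hash, appending `block1` commits it to the exact contents of `genesis`; this is the linkage that makes the ledger a chain.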
\begin{figure}[!b] \centering \includegraphics[width=3in]{Figs/traditional.png} \caption{Traditional centralized ledger technology with a trusted third-party.} \label{fig.t1} \end{figure} \begin{figure}[!b] \centering \includegraphics[width=3.5in]{Figs/blockchain.png} \caption{A Typical Example of Blockchain Technology -- distributed ledger technology -- without a trusted third-party.} \label{fig.bc} \end{figure} \begin{figure*}[!h] \centering \includegraphics[width=7in]{Figs/history1.png} \caption{The History and Milestones of Blockchain Technology.} \label{fig.his} \end{figure*} Although Blockchain is widely used for cryptocurrencies such as Bitcoin, this technology can also be applied to other applications. Blockchain technology enables financial services without having financial institutions such as a bank or other intermediary involved, as shown in Fig. \ref{fig.bc}. It can be used to conduct services such as online payment, digital assets and remittance \cite{rawat2018smart}. The key features of Blockchain technology -- decentralization, immutability, integrity and anonymity -- make it applicable to non-financial domains such as smart contracts \cite{b3}, the Internet-of-Things \cite{b4}, reputation systems \cite{b5}, security services \cite{b6, rawat2018ishare, malomo2018next, adebayo2019blockchain}, wireless network virtualization \cite{rawat2018leveraging} and so on. In this paper, we briefly discuss the history and architecture of Blockchain, followed by its applications and use cases in different domains. Although the technology has been widely praised and discussed in academia and industry for different applications, comprehensive documentation of its emerging applications and use cases is rarely found in the literature.
\section{Brief History of Blockchain Technology} Although the technologies involved in Blockchain, such as cryptographically secured chains of blocks \cite{b9} and Merkle trees \cite{b10}, were developed in the early 1990s, the first Blockchain was conceptualized and implemented by Satoshi Nakamoto in 2008 \cite{b2}. The work was published in a paper entitled ``Bitcoin: A Peer-to-Peer Electronic Cash System.'' The paper introduced a peer-to-peer version of digital cash that can function without any central authority, such as a bank, to verify transactions. Bitcoin was the first implementation of this technology. After the publication of \cite{b2}, an open source program was published by the same author that began with the Genesis block of 50 coins. \section{Block Architecture} \begin{figure}[!t] \centering \includegraphics[width=3.3in]{Figs/blocks.png} \caption{Typical Blocks with Header and Transactions in Blockchain Technology.} \label{fig.block} \end{figure} \begin{figure*}[!ht] \centering \includegraphics[width=7in]{Figs/blockchainApp.png} \caption{Different Blockchain Applications and Use Cases.} \label{fig.app} \end{figure*} Blockchain is a chronological sequence of blocks where each block holds a complete list of transactions, as shown in Fig. \ref{fig.block}. It follows the data structure of a linked list, where each block points to the previous block through a reference to that block's hash value; the previous block is also called the parent block. The first block of a Blockchain, the genesis block, has no block to point to. A block is composed of metadata (the block header) and a list of transactions (the block body). The metadata includes the block version, parent block hash, Merkle tree root hash, timestamp and nonce. The nonce is an arbitrary number that miners vary so that the block hash satisfies the proof-of-work requirement. Digital signatures are required to establish secure communication between the users in a Blockchain network. Each user is assigned a public key and a private key.
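Among the header fields just listed, the Merkle tree root hash is what commits the header to the block's full transaction list. A minimal sketch of its computation, assuming pairwise SHA-256 hashing with the last node duplicated at odd levels (the convention Bitcoin uses); the sample transactions are made up:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    """Compute the Merkle tree root hash stored in a block header.

    Leaves are transaction hashes; each level pairs adjacent nodes and
    hashes their concatenation until a single root remains.
    """
    level = [sha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [b"alice -> bob: 5", b"bob -> carol: 2", b"carol -> dave: 1"]
root = merkle_root(txs)
```

Changing any single transaction changes the root, so verifying the one root hash in the header suffices to detect tampering anywhere in the block body.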
When a user broadcasts a transaction to the network, the user signs the transaction using the private key. The recipients then use the sender's public key to verify the transaction. Such cryptographic communication preserves the integrity of transactions within the network. \section{Applications and Use Cases of Blockchain} After the successful implementation of Blockchain in Bitcoin \cite{b2}, Blockchain has, because of its salient features, been proposed for use in different applications and use cases, as shown in Fig. \ref{fig.app}. We present a brief overview of each domain in the following sections. \subsection{Finance} Conventionally, an intermediary such as a bank verifies and processes financial transactions. Such a centralized system puts immense work in the hands of intermediaries, while the transactions are prone to errors, as multiple uncoordinated parties are required to keep and reconcile the records. Thus, the entire process is time-consuming and costly. Blockchain simplifies such complications associated with financial services by introducing a distributed public ledger, where the transactions are verified by the miners using ``proof-of-work'' \cite{b1}. Since each node in the Blockchain network has a copy of the updated Blockchain, there is transparency regarding the transactions, as shown in Fig. \ref{fig.bc}. Because the blocks are chronologically arranged and each block references the hash of its parent, once a block with a verified transaction is added, the Blockchain is effectively immutable: altering a past transaction would invalidate all subsequent blocks. Thus, attackers cannot manipulate transactions once they are registered into the system. In case of a conflicting Blockchain where branching might occur, miners always adopt the longest chain, as the longest chain is the more reliable one. With such a secured communication protocol and robust verification method, Blockchain creates an effective system to improve our existing financial services.
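The immutability claim above follows from hash chaining: editing any past block invalidates the stored parent hashes of every later block. A small validation routine makes this concrete (the block layout is illustrative, not a real wire format):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(parent_hash: str, transactions: list) -> dict:
    return {"parent_hash": parent_hash, "transactions": transactions}

def validate_chain(chain: list) -> bool:
    """Recompute each parent hash; an edited block invalidates its successors."""
    return all(
        chain[i]["parent_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

genesis = make_block("0" * 64, ["coinbase: 50"])
b1 = make_block(block_hash(genesis), ["alice -> bob: 5"])
b2 = make_block(block_hash(b1), ["bob -> carol: 2"])
chain = [genesis, b1, b2]
assert validate_chain(chain)

# An attacker rewrites history in b1; the stored hashes no longer match.
b1["transactions"] = ["alice -> bob: 500"]
assert not validate_chain(chain)
```

Since every honest node can run this check against its own copy of the ledger, tampering with a registered transaction is detectable by the whole network, not just by one auditor.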
\subsubsection{Cryptocurrency} Cryptocurrency -- which holds a market cap in billions of dollars \cite{b8} -- has been made possible by Blockchain \cite{b2}, which does not need a trusted third party such as a bank in traditional systems. All transactions are verifiable and immutable. \subsubsection{Global Payments (Global Currency)} Global payments become complicated and time-consuming because many intermediaries are involved in verifying the transactions. The entire process can be error-prone and costly. These issues arise essentially from the centralization of monetary transactions, where institutions such as banks and other financial firms dictate processes and are responsible for verifying the transactions. Blockchain technology reduces such complexities by introducing a decentralized public ledger and a robust method for verifying transactions. Within this peer-to-peer network, global payments are quicker, verifiable, immutable and safer. Several remittance companies \cite{bcC}, such as Abra and Bitspark, already use Blockchain technology for remittance services. \subsubsection{Insurance Claims and Processing} Insurance claims processing has long had to deal with fraudulent claims. Moreover, properly processing a claim requires up-to-date policies and data associated with it, which is difficult to handle in traditional approaches. With Blockchain (distributed ledger technology), the process can be handled efficiently and in a secure manner. Similarly, fraudulent claims/transactions can be detected and dropped with good confidence, as multiple participants/miners need to agree on the validity of each transaction. This ensures that insurers settle the claims that claimants deserve quickly and effectively.
\subsection{Blockchain Government} In order to build trustworthy and effective government operations through collaborative and transparent networks, different government organizations and units can use Blockchain technology. Blockchain technology, with its salient features, will help provide accountability, transparency and trust among stakeholders such as citizens, leaders and government officials, and across their different operations \cite{rawat2018smart,rawat2019cybersecurity}. Government is required to make its affairs transparent in order to ensure the accountability of its bodies. In order to do so, government may have to make a great amount of data open to the public \cite{rawat2019cybersecurity}. According to a report from McKinsey \cite{b11}, open data made available to the public on the Internet can benefit people on the order of trillions of dollars. Several entities can use open data to expose illegitimate doings. The public can question the quality of health care and food supplies with the given open data, which eventually makes the system more fair and trustworthy \cite{b11}. Releasing data to the public is therefore helpful for the economy, but making the data public also has its own challenges. When the data is released only once a year, it is largely left unnoticed by the public. An alternative is a Blockchain government, where the data is distributed in the public ledger and open to the public at all times. Moreover, smart contracts can be used to ensure that elected officials work in favor of their electors. The contracts can be based on the officials' manifestos, and the officials only get paid once they meet the demands of the electors via the smart contracts. This kind of technology can keep elected officials in check and possibly compel them to fulfill their promises. \subsection{Internet of Things (IoT)} The number of electronic devices connected to the Internet is rapidly increasing every year \cite{b7}.
The massive number of devices interlinked with each other creates the Internet-of-Things (IoT). The IoT is expected to transform the way we live, making ideas like smart homes feasible. While this new phenomenon is likely to make lives easier, having a massive number of heterogeneous devices connected to the Internet creates grave issues regarding cyber security and privacy. Blockchain can be an important technology for securing the IoT. With millions of devices connected to each other and communicating, it is important to ensure that the information flowing through the IoT remains secure and that the participants are accountable. \subsubsection{Energy Cyber Physical System} Smart energy grid systems are becoming complex cyber-physical systems (CPS) where complex interactions among power generation, distribution, utility offices and users happen in a bidirectional manner \cite{rawat2015cyber}. The salient features of Blockchain technology provide a secure and verifiable environment to support interactions in energy CPS \cite{zhaoyang2018blockchain}. \subsubsection{Vehicular Cyber Physical System} The vehicular cyber-physical system is regarded as the backbone technology for intelligent transportation systems and autonomous driving \cite{rawat2016vehicular,rawat2015cyber}, improving road safety and traffic efficiency. Security and privacy in vehicular cyber-physical systems are always central issues, since vehicles are tied to the private information of their owner, driver or renter. Blockchain, with features such as decentralization, immutability, integrity and anonymity through a pair of public and private keys, can be leveraged to build a smart and secure autonomous intelligent transportation system \cite{sharma2017block}. \subsubsection{Blockchain in Aviation Systems} Blockchain in the aviation industry can offer robust collaborative partnerships among service and product providers to offer travel services as well as products in a distributed, secure way.
Smart contracts could streamline the interactions among businesses and among different units within a business \cite{akmeemana2017blockchain}. \subsubsection{Supply Chain Systems/Sensors} Smart sensors can help companies gather information about the supply chain as goods are transported around the globe. Several leading supply chain companies are reported to use smart sensors to track supplies, and the number of such sensors is expected to grow rapidly in the near future. With such a massive distribution of sensors, there will be an enormous amount of data to be collected and analyzed. Blockchain technology can be used as a disruptive transformation for efficient and secure supply chains and networks \cite{korpela2017digital}. \subsubsection{Smart Homes} Blockchain in the context of smart homes with IoT devices can help achieve secure and reliable smart home operations \cite{dorri2017blockchain}. However, implementing Blockchain in such resource-constrained IoT systems is not straightforward because of the high resource demand of proof-of-work, limited storage capacity, high latency and low scalability \cite{dorri2017blockchain}. \subsubsection{Internet of Battle-field Things (IoBT)} The Internet-of-Battle-field Things (IoBT) is regarded as the backbone for smart defense and warfare applications, where battle-field things such as combat equipment, unmanned aerial vehicles, ground vehicles and fighters equipped with sensors can collect intelligence to enable informed decisions in real time in a secure and immutable manner. Note that the IoBT is diverse in that it consists of different devices (combat equipment, unmanned aerial vehicles, ground vehicles and fighters), platforms, networks and connectivity. This diversity poses several challenges for secure, privacy-aware and trustworthy battlefield operations such as communication and computing.
Blockchain technology can help achieve secure and reliable operations for the IoBT \cite{tosh2018blockchain}. \subsection{Cybersecurity} Another application of Blockchain is cybersecurity, where threat information can be shared using Blockchain among participants/organizations to combat future cyber attacks \cite{rawat2018ishare, adebayo2019blockchain}. Blockchain will not be able to fix everything, but its features can be leveraged to harden systems against a multitude of cyber threats. \subsection{Smart Property and Public Value} Entities/property such as houses, land, automobiles, stocks, etc., can be represented in the ledger, and Blockchain can be used to keep track of all operations and property records. Once the records are kept in the Blockchain, they are shared with all concerned or participating parties, which makes it easy to establish contracts and verify them. Thus, with a decentralized ledger, any lost record can be duplicated from the network and immediately used to recover the loss \cite{crosby2016blockchain}. \subsubsection{Hard Money Lending} Hard money lending helps people mitigate financial burdens in the short term. It requires the borrower to have property, such as real estate, as collateral. Thus, it is important that the collateral is legitimate and trustworthy. Lenders can lose money if the collateral is not redeemable. Similarly, the borrower might lose the property if the lender uses fraudulent policies as part of the agreement. With Blockchain, both the property and the policies can be encoded in the ledger and distributed among the users. This creates a healthy setting where people can trade with complete strangers thanks to the transparency and security of the Blockchain. Smart contracts can be deployed using Blockchain for these kinds of scenarios. \subsubsection{Cars and Phones} Personal devices such as phones are protected using authentication keys.
Similarly, cars are only accessible to their owners using smart keys. This kind of technology is made possible by cryptography, and yet such methods can fail if the authentication key is stolen, copied or transferred. Such issues can be addressed in the Blockchain ledger, where users/miners can replace and replicate lost credentials. \subsubsection{Smart Appliances} Smart appliances are essentially electronic devices aided with a cyber system such that the cyber portion can communicate information about the environment around the device and about the device itself. It is essentially the idea of a ``talking toaster'', where a toaster can give its user information relevant to its usage. A home connected with smart appliances can be considered a smart home, where the cyber-physical system tries to optimize the functionalities of the smart devices, providing maximum utility to its users. With so many devices involved as smart appliances, we can encode them in the Blockchain as smart property. Such a practice can easily establish the ownership of a user over these devices. \subsubsection{Asset Management} Asset management involves multiple parties, each of which is required to keep records of the transactions. Keeping the same transaction records in different places makes the entire process inefficient and prone to errors. To make matters worse, asset management might also involve cross-border transactions, adding more complexity to verifying the transactions. Such issues can be dealt with using a distributed ledger, where each party can have a copy of the entire transaction history and be updated about each transaction using cryptographic communication \cite{notheisen2017trading}. This improves efficiency and reduces cost, as no intermediary is needed to verify the transactions.
\subsection{Cloud Storage and Provenance} Metadata that records the history of the creation of data and all operations on it, including file/data access activities, can be kept in the Blockchain, which can then be shared with all stakeholders. Data provenance through Blockchain is important for applications like accountability and forensics \cite{liang2017provchain}. \subsection{Intellectual Property} Intellectual property management systems could leverage Blockchain technology to enforce provable intellectual property rights \cite{zeilinger2018digital}, where verifiable, immutable and secure operations in the Blockchain could help resolve any disputes. \subsection{Food Safety} Food safety is one of the most critical issues to be addressed, since over 0.6 billion people in the world (equivalently, 1 in 10) become ill after consuming bad food every year \cite{galvin2017ibm}. About 1,167 people die every day \cite{galvin2017ibm}. To prevent these issues, Blockchain technology can help prevent counterfeiting of food by providing visibility across the food supply chain and access to information such as food content, origin and expiration in seconds. Food consumers will have better control over food and its information, with high accuracy and transparency, for food safety \cite{galvin2017ibm}. \subsection{Blockchain Notary} Blockchain, using distributed ledger technology with cryptography, replaces trusted third parties such as a notary (a trusted third party in traditional systems). Blockchain helps the entire notary process by automatically executing it in a cost-effective, transparent and secure manner \cite{nofer2017blockchain}. \subsection{Blockchain Health-care} Personal health records are sensitive information and need to be handled with high security. Such personal records can be encoded and stored using Blockchain, with a private key that allows only specific individuals to access the records.
Similarly, the same protocol can be applied to conduct research in which personal records are used under HIPAA laws to ensure confidentiality of the data. Patient records in the Blockchain can be automatically sent to the insurance providers, or a doctor can send the health record to concerned parties securely \cite{mettler2016blockchain}. \subsection{Fundraising and Transparency} Transparency is one of the issues to be addressed in fund-raising activities to make the process trustworthy. Blockchain as a distributed ledger technology can ensure transparency, security and integrity in fund-raising activities by leveraging Blockchain features such as immutability, verifiability and security \cite{zhu2016analysis}. \subsection{Wireless Networks and Virtualization} Wireless networks are suffering from the explosive growth of IoT and CPS applications, and different approaches have been studied to enhance network capacity and coverage \cite{rawat2015dynamic,rawat2018payoff}. Blockchain can be used to sublease wireless resources such as RF slices to network service providers or third parties like mobile virtual network operators (MVNOs) in a verifiable way, so that the quality of service of the users is met by preventing double spending/subleasing of the same wireless resource to multiple parties in a distributed manner \cite{rawat2018leveraging, rawat2017edge}. \subsection{Real Estate} Blockchain technology, as a distributed ledger database system, can offer benefits for the real estate industry. Property title recording can be done using blocks with transactions in the Blockchain rather than using the traditional/current record keeping system \cite{spielman2016blockchain}. \subsection{Smart Contracts} Smart contracts are digital entities written in a Turing-complete bytecode language, called EVM bytecode \cite{b47}. They are essentially a set of functions, where each function is a sequence of instructions. Such contracts are embedded with conditional statements which enable them to self-execute.
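The self-executing behavior described here can be illustrated with a toy escrow contract, written in plain Python as a conceptual stand-in for EVM bytecode (the class and its rules are invented for illustration, not a real contract standard):

```python
# A toy escrow-style "smart contract": funds locked by the buyer are
# released to the seller automatically once a condition holds.

class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.delivered = False  # condition encoded in the contract
        self.balances = {buyer: 0, seller: 0}

    def deposit(self) -> None:
        """Buyer locks funds in the contract."""
        self.balances[self.buyer] = self.amount

    def confirm_delivery(self) -> None:
        """A participant attests the condition; the contract self-executes."""
        self.delivered = True
        self._settle()

    def _settle(self) -> None:
        # Conditional statement: pay the seller only if delivery is confirmed
        # and the funds were actually deposited.
        if self.delivered and self.balances[self.buyer] == self.amount:
            self.balances[self.seller] = self.amount
            self.balances[self.buyer] = 0

contract = EscrowContract("alice", "bob", 10)
contract.deposit()
contract.confirm_delivery()
assert contract.balances == {"alice": 0, "bob": 10}
```

On a real Blockchain, the settlement step would not be trusted to one party: the miners described below verify every state transition of the contract before it is committed to a block.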
Smart contracts can replace intermediaries that make sure all parties abide by the agreed terms; with Blockchain, such regulatory bodies become redundant. Smart contracts based on Blockchain ensure that the participants know the contract details and that the agreement is automatically implemented once the conditions are fulfilled. In order to make smart contracts work, there is a group of mutually ``untrusted'' peers, called miners, who verify the transactions related to the contract. Each transaction broadcasted to the Blockchain network is collected by the miners and verified before being encoded into a new block and appended to the Blockchain. Any potential conflict is resolved through the consensus protocol, which is based on ``proof-of-work''. Thus, a smart contract only works if no party holds a majority of the computational power of the network, which ensures decentralization in the network. The miners are rewarded for creating new blocks under the protocol that all miners are required to follow. A miner's work is discredited by the other miners if he/she does not follow the protocol, so each miner has an incentive to follow the rules. \subsection{Identity Management} In this section, we present a brief overview of different identity-management applications and how they could benefit from Blockchain technology. \subsubsection{Academic Records} Blockchain can be used to store academic records for students and universities in a decentralized ledger \cite{sharples2016blockchain}. This academic record keeping system will be tamper-proof, verifiable, immutable and secure \cite{sharples2016blockchain}. \subsubsection{Blockchain Music} In the music industry, it is a huge challenge to own products via ownership rights and to benefit from royalty distribution. In order to monetize digital music products, ownership rights are required.
Blockchain and smart contract technology can be used to create a comprehensive and accurate decentralized database of music rights. Meanwhile, the ledger can be used to provide transparent information regarding artist royalties and real-time distributions to all the labels involved. Digital currency can be deployed to make the payments as per the terms of the contracts. \subsubsection{Birth, Marriage and Death Certificates} Records of birth, marriage and death are important records for a citizen, as they are used to confirm citizenship and to grant rights according to status, such as voting rights and work permits. While keeping such records by conventional methods can be slow and prone to error, such issues can be fixed with a public ledger such as Blockchain. Blockchain can make such records more reliable by encrypting them \cite{sullivan2017residency,doveylove}. \subsubsection{Passports} The first digital passport was launched in 2014 \cite{b12}; it could help owners identify themselves online and offline. With Blockchain technology, a user can take a picture and share it via cryptographic communication, and other users can verify it via digital signatures. In a Blockchain-based passport system, passports are stored in the distributed ledger and confirmed/verified by the users as well as the government. \subsubsection{Personal Identity and Privacy} We perform several transactions that are based on our personal information. For instance, we can only buy alcoholic beverages, get into bars and enter several other public places depending on our age. Similarly, some level of personal profiling is done by the companies we interact with through online shopping or personalized web surfing. Thus, there is a notion of personal identity that is being traded in the market to advertisers so that they can target essential products to users as per their needs.
While such personal identity is openly traded in the market, it is essential to protect the privacy of the users. Blockchain can be used to protect the identity of the users by encrypting the data and securing it from attackers \cite{jacobovitz2016blockchain,rawat2018ishare,rawat2018smart}. \textit{Personal ID}: There are several personal identifications we carry around, such as a driver's license, student identity card, keys, social security card and state identification card. Blockchain can be used to store these identifications as digital IDs that replace all forms of traditional physical identification. Essentially, one Blockchain ID could be used for all kinds of identifications that identify the same subject or object \cite{andrade2016systems}. \subsubsection{Voting} Blockchain could offer many tangible benefits for verifiable, secure voting systems in the coming years. Current voting systems have flaws, and votes are hard to verify. Blockchain, with its features, could provide an immutable, verifiable and secure voting system where voters can cast their votes with the highest confidence from anywhere in the world \cite{ernest2017blockchain,osgood2016future}. \subsection{Reputation System} A reputation system is an important measure of how much a community trusts a person. Such a system plays an important role in assessing a person through their reputation, which is evaluated based on his/her past transactions and interactions with the community. A credit system can be thought of as a reputation system, where users are given credit scores based on their financial activities, which are later used to make decisions regarding other financial transactions. Such a system can be falsified if the integrity of the data is compromised. Thus, it is important to securely keep the records of past transactions and fairly evaluate the reputation of users.
Here, Blockchain can be an important technology, as it keeps a distributed public ledger which is scrutinized by the consensus of users in the network. \subsection{Other Applications and Use Cases} Blockchain technology can be used in any scenario where a trusted third party is not needed or a peer-to-peer system is needed for managing the transactions, as shown in Fig. \ref{fig.bc}, with features like transparency, decentralization, integrity, immutability, security and privacy. However, Blockchain has some limitations, such as the high delay introduced by the consensus process, the large size of the blocks in the Blockchain, etc. \section{Summary} This paper has briefly summarized not only how Blockchain works but also its different emerging applications and use cases. By reading this paper, readers can gain a better understanding of what a Blockchain is and what its different applications and use cases are. \section*{Acknowledgment} {This work is partly supported by the U.S. Air Force Research Lab (AFRL), U.S. National Science Foundation (NSF) under grants CNS 1650831 and HRD 1828811, and by the U.S. Department of Homeland Security (DHS) under grant award number 2017-ST-062-000003. However, any opinion, finding, and conclusions or recommendations expressed in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the funding agencies.}
\section{introduction} Throughout this paper $\C$ is the field of complex numbers, $K\subseteq \C$ is a subfield of $\C$, and $f(x)\in K[x]$ is a polynomial without multiple roots and of degree $n\geq 4$. Let $p\in \mathbf{N}$ be a prime that does not divide $n$ and $q=p^r\in \mathbf{N}$ an integral power of $p$. We write $C_{f,q}$ for the superelliptic $K$-curve $y^q=f(x)$, and $J(C_{f,q})$ for the Jacobian of $C_{f,q}$. By definition, $C_{f,q}$ is the smooth projective model of the affine curve $y^q=f(x)$. The Jacobian $J(C_{f,q})$ is an abelian variety over $K$ of dimension \[ \dim J(C_{f,q})= g(C_{f,q})=\frac{(n-1)(q-1)}{2}. \] If $q>p$, the map \[ C_{f,q}\to C_{f,q/p}, \quad (x,y)\mapsto (x,y^p)\] induces by Albanese functoriality a surjective $K$-map between the Jacobians $J(C_{f,q})\to J(C_{f,q/p})$. We write $J^{(f,q)}$ for the identity component of the kernel. If $q=p$, we set $J^{(f,p)}=J(C_{f,p})$. It follows easily that $J^{(f,q)}$ is an abelian variety over $K$ of dimension $(n-1)\varphi(q)/2$, where $\varphi$ denotes the Euler $\varphi$-function. Moreover, $J(C_{f,q})$ is $K$-isogenous to the product $\prod_{i=1}^r J^{(f,p^i)}$ (see \cite{ZarhinM}). Since $K\subseteq \C$, we may view $J^{(f,q)}$ as a complex abelian variety. We refer to \cite{Ribet3}, \cite[Sect. 6.6.1 and 6.6.2]{ZarhinIzv} for the definition and basic properties of the Hodge group (aka special Mumford--Tate group). In \cite{xuezarhin2}, assuming that $n>q$ and some other conditions on $n, q$ and $f(x)$, the authors showed that the (reductive $\Q$-algebraic connected) Hodge group of $J^{(f,q)}$ coincides with the largest $\Q$-algebraic subgroup of $\GL(\H^1(J^{(f,q)},\Q))$ that is ``cut out'' by the induced polarization from the canonical principal polarization of $J(C_{f,q})$ and the endomorphism ring of $J^{(f,q)}$. Notice that when $q=2$ (i.e., in the hyperelliptic case) this group was completely determined in \cite{ZarhinMMJ} (when $f(x)$ has ``large'' Galois group).
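As a quick sanity check on these dimension formulas, note that $\sum_{i=1}^{r}\varphi(p^i)=p^r-1=q-1$, so the dimensions of the isogeny factors $J^{(f,p^i)}$ add up to $g(C_{f,q})$. The following Python snippet (purely illustrative, not part of the paper; the helper names are ours) verifies this numerically for a few small parameters:

```python
from math import gcd

def phi(m):
    """Euler's totient function, by direct count (fine for small m)."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def genus(n, q):
    """g(C_{f,q}) = (n-1)(q-1)/2 for y^q = f(x), deg f = n, p not dividing n."""
    return (n - 1) * (q - 1) // 2

def dim_new_part(n, p, i):
    """dim J^{(f,p^i)} = (n-1) * phi(p^i) / 2."""
    return (n - 1) * phi(p**i) // 2

# J(C_{f,q}) is isogenous to the product of the J^{(f,p^i)}, i = 1..r,
# so the dimensions must add up; equivalently sum_i phi(p^i) = q - 1.
for n, p, r in [(4, 3, 2), (5, 2, 3), (7, 5, 2)]:
    q = p**r
    assert sum(phi(p**i) for i in range(1, r + 1)) == q - 1
    assert genus(n, q) == sum(dim_new_part(n, p, i) for i in range(1, r + 1))
```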
In this paper, we study some additional properties of $J^{(f,q)}$ which will allow us to extend the result to the case $n<q$ as well. This case is necessary in order to treat the infinite towers of superelliptic Jacobians, which, in turn, are useful for the study of the ranks of Mordell--Weil groups in infinite towers of function fields (see \cite{UlmZarhin}). To state our main result, we make explicit the endomorphism ring and the polarization mentioned above. Let $X$ be an abelian variety over $\bar{K}$. We write $\End(X)$ for the ring of all its $\bar{K}$-endomorphisms and $\End^0(X)$ for the endomorphism algebra $\End(X)\otimes_\Z \Q$. In a series of papers \cite{ZarhinMRL,ZarhinCrelle,ZarhinCamb,ZarhinM}, Yuri Zarhin discussed the structure of $\End^0(J(C_{f,q}))$, assuming that $n\ge 5$ and the Galois group $\Gal(f)$ of $f(x)$ over $K$ is, at least, doubly transitive. Here $\Gal(f)\subseteq \ST_n$ is viewed as a permutation group on the roots of $f(x)$. It is well known that $f(x)$ is irreducible over $K$ if and only if $\Gal(f)$ acts transitively on the roots. For the sake of simplicity, let us assume that $K$ contains a primitive $q$-th root of unity $\zeta_q$. The curve $C_{f,q}: y^q=f(x)$ admits the obvious periodic automorphism \[ \delta_q: C_{f,q}\to C_{f,q}, \quad (x,y)\mapsto (x,\zeta_q y).\] By an abuse of notation, we also write $\delta_q$ for the induced automorphism of $J(C_{f,q})$. The subvariety $J^{(f,q)}$ is $\delta_q$-invariant and we have an embedding \[\Z[\zeta_q]\hookrightarrow \End(J^{(f,q)}), \quad \zeta_q\mapsto \delta_q.\] In particular, the $q$-th cyclotomic field $E:=\Q(\zeta_q)$ is contained in $\End^0(J^{(f,q)})$. Zarhin showed (\cite{ZarhinMRL,ZarhinM,ZarhinMZ2}) that $\End(J^{(f,q)})$ is isomorphic to $\Z[\zeta_q]$ if either $\Gal(f)$ coincides with the full symmetric group $\ST_n$, $n\geq 4$ and $p\geq 3$, or $\Gal(f)$ coincides with the alternating group $\An$ (or $\ST_n$), and $n\geq 5$.
This result has also been extended to the case $\Gal(f)=\ST_n$ or $\An$, $n\geq 5$ and $p\mid n$ in \cite{xuejnt}. The first rational homology group $\H_1(J^{(f,q)}, \Q)$ carries a natural structure of $E$-vector space of dimension \[\dim_E\H_1(J^{(f,q)},\Q)=\frac{\dim_\Q \H_1(J^{(f,q)},\Q)}{[E:\Q]}=\frac{2\dim J^{(f,q)}}{[E:\Q]}=\frac{(n-1)\varphi(q)}{\varphi(q)}=n-1.\] Notice that if $q>2$, then $E$ is a CM field with complex conjugation $e\mapsto \bar{e}$. Let \[E^{+}=\{e\in \Q(\zeta_q) \mid \bar{e}=e\}\] be the maximal totally real subfield of $E$ and let \[E_{-}=\{e\in \Q(\zeta_q) \mid \bar{e}=-e\}.\] The canonical principal polarization on $J(C_{f,q})$ induces a polarization on $J^{(f,q)}$, which gives rise to a nondegenerate $E$-sesquilinear Hermitian form (\cite{xuezarhin2}) \[\phi_q:\H_1(J^{(f,q)},\Q) \times \H_1(J^{(f,q)},\Q) \to E. \] We write $\U(\H_1(J^{(f,q)},\Q),\phi_q)$ for the unitary group of $\phi_q$ of the $\Q(\zeta_q)$-vector space $\H_1(J^{(f,q)},\Q)$, viewed as a $\Q$-algebraic subgroup of $\GL(\H_1(J^{(f,q)},\Q))$ (via Weil's restriction of scalars from $E^{+}$ to $\Q$ (\cite{Ribet3})). Since the Hodge group respects the polarization and commutes with endomorphisms of $J^{(f,q)}$, \[\Hdg(J^{(f,q)})\subset \U(\H_1(J^{(f,q)},\Q),\phi_q).\] If $\End^0(J^{(f,q)})=E$, then $\U(\H_1(J^{(f,q)},\Q),\phi_q)$ is the largest connected reductive $\Q$-algebraic subgroup of $\GL(\H_1(J^{(f,q)},\Q))$ that both respects the polarization and commutes with endomorphisms of $J^{(f,q)}$. The following theorem is a natural extension of \cite[Theorem 0.1]{xuezarhin2}. \begin{thm} \label{main} Suppose that $n\ge 4$ and $p$ is a prime that does not divide $n$. Let $f(x) \in \C[x]$ be a degree $n$ polynomial without multiple roots. Let $r$ be a positive integer and $q=p^r$. Suppose that there exists a subfield $K$ of $\C$ that contains all the coefficients of $f(x)$.
Let us assume that $f(x)$ is irreducible over $K$ and the Galois group $\Gal(f)$ of $f(x)$ over $K$ is either $\Sn$ or $\An$. Assume additionally that either $n \ge 5$ or $n=4$ and $\Gal(f)=\ST_4$. Suppose that one of the following three conditions holds: \begin{itemize} \item[(A)] $n=q+1$; \item[(B)] $p$ is odd and $n\not\equiv 1 \mod q$; \item [(C)] $p=2$, $n\not\equiv 1 \mod q$ and $n\not \equiv q-1 \mod 2q$. \end{itemize} Then $\Hdg(J^{(f,q)})= \U(\H_1(J^{(f,q)},\Q),\phi_q)$. \end{thm} \begin{cor} Corollary~0.3, Theorem 4.2 and Theorem 4.3 of \cite{xuezarhin2} all hold without the assumption that $n>q$. \end{cor} \begin{rem}\label{sec:n_less_than_q} We assume that $n<q$ throughout the rest of the paper since the case $n>q$ has already been treated in \cite{xuezarhin2}. \end{rem} \begin{rem} Since both $\Hdg(J^{(f,q)})$ and $\U(\H_1(J^{(f,q)},\Q),\phi_q)$ are connected $\Q$-algebraic groups, to prove Theorem~\ref{main}, it suffices to show that \[ \dim \Hdg(J^{(f,q)})\geq \dim \U(\H_1(J^{(f,q)},\Q),\phi_q).\] It is known that \[ \dim\U(\H_1(J^{(f,q)},\Q),\phi_q) =\dim_\Q E^{+}\cdot \big(\dim_E\H_1(J^{(f,q)},\Q)\big)^2.\] Let $\hdg$ be the $\Q$-Lie algebra of $\Hdg(J^{(f,q)})$. It is a reductive $\Q$-Lie subalgebra of $\End_\Q\big(\H_1(J^{(f,q)},\Q)\big)$, and thus splits into a direct sum \[ \hdg= \c\oplus \hdg^{ss}, \] of its center $\c$ and the semisimple part $\hdg^{ss}=[\hdg, \hdg]$. By \cite[Theorem 1.3]{xuezarhin1}, if $\Gal(f)=\ST_n$ and $n\geq 4$, or $\Gal(f)=\An$ and $n\geq 5$, the center $\c$ coincides with $E_{-}$. Notice that \[\dim_\Q E_{-}=\dim_\Q E^{+}=[E:\Q]/2.\] Theorem~\ref{main} follows if we show that \begin{equation} \label{eq:1} \dim_\Q \hdg^{ss}\geq \frac{1}{2}[E:\Q] \big((\dim_E\H_1(J^{(f,q)},\Q))^2-1\big). \end{equation} \end{rem} The paper is organized as follows. In section 2 we study the Galois actions on certain vector spaces. In section 3 we recall some facts about the Hodge Lie algebra $\hdg$. 
The proof of Theorem~\ref{main} is given at the end of section 3, except for a key arithmetic lemma, which is proven in Section 4. \section{Galois Actions} Throughout this section, let $E$ be a field that is a finite Galois extension of $\Q$ with Galois group $G$. Let $V$ be an $E$-vector space of finite dimension. We write $V_\Q$ for the underlying $\Q$-vector space of $V$, and $V_{\C}$ for the $\C$-vector space $V\otimes_\Q \C=V_\Q\otimes_\Q \C$. Let $\Aut(\C)$ be the group of all automorphisms of $\C$. It acts semilinearly on $V_{\C}=V\otimes_\Q \C$ through the second factor. More explicitly, $\forall \kappa \in \Aut(\C), v\otimes z\in V\otimes_\Q \C$, we define $\kappa (v\otimes z):=v\otimes \kappa(z)$. It follows that $\forall x\in V\otimes_\Q \C$ and $c\in \C$, $\kappa(cx)=\kappa(c)\kappa(x)$. On the other hand, $E$ acts on $V_\C=V\otimes_\Q \C$ through the first factor. It follows that $V_\C$ is a free $E\otimes_\Q\C$-module of rank $\dim_E V$, and the action of $E=E\otimes 1\subseteq E\otimes_\Q\C$ commutes with that of $\Aut(\C)$. In other words, \[\kappa((e\otimes 1)x)=(e\otimes 1)\kappa(x), \quad \forall \kappa \in \Aut(\C), e\in E, \text{ and } x\in V_\C.\] Let us fix an embedding $E\hookrightarrow \C$. This allows us to identify each Galois automorphism $\sigma: E\to E$ with the embedding $\sigma: E\to E \subset \C$ of $E$ into $\C$. It is well known that \[ E_\C:=E\otimes_\Q \C =\bigoplus_{\sigma\in G} E\otimes_{E,\sigma}\C=\bigoplus_{\sigma\in G} \C_{\sigma}, \text{ where } \C_\sigma:=E\otimes_{E,\sigma}\C. \] So every $E_\C$-module $W$ splits as a direct sum $ W=\oplus_{\sigma\in G} W_\sigma$, where \[ W_\sigma:=\C_\sigma W=\{w\in W\mid (e\otimes 1)w= \sigma(e)w, \forall e\in E\}.\] In particular, $V_\C=\oplus_{\sigma\in G} V_\sigma$, and each $V_\sigma$ is a $\C$-vector space of dimension $\dim_E V$. For each $\sigma \in G$, let $P_\sigma: V_\C\to V_\sigma$ be the $\C$-linear projection map from $V_\C$ to the summand $V_\sigma$.
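The splitting $E_\C=\bigoplus_{\sigma\in G}\C_\sigma$ can be made concrete in the smallest interesting case $E=\Q(i)$, where $G$ consists of the identity and complex conjugation: multiplication by $i$ on the $\Q$-basis $\{1,i\}$ of $E$ becomes a $2\times 2$ matrix which, over $\C$, acts on each summand $\C_\sigma$ as the scalar $\sigma(i)=\pm i$. A small illustrative Python sketch (ours, not from the paper):

```python
# Illustration of E (x) C = (+)_sigma C_sigma in the case E = Q(i),
# G = {identity, complex conjugation}.  Multiplication by i on the
# Q-basis {1, i} of E is the matrix M; over C, the eigenvector v
# spans the summand on which e (x) 1 acts through the embedding sigma.

def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

M = [[0, -1],   # matrix of multiplication by i in the basis {1, i}
     [1,  0]]

# For each embedding sigma (determined by sigma(i) = +-i), the vector
# v below is an eigenvector of M with eigenvalue sigma(i).
for sigma_of_i, v in [(1j, [1, -1j]), (-1j, [1, 1j])]:
    Mv = mat_vec(M, v)
    assert all(abs(Mv[k] - sigma_of_i * v[k]) < 1e-12 for k in range(2))
```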
Similarly, for each pair $\sigma\neq \tau$, we write $P_{\sigma, \tau}=P_\sigma\oplus P_\tau: V_\C\to V_\sigma\oplus V_\tau$ for the projection map onto this pair of summands. We claim that $\Aut(\C)$ permutes the set $\{V_\sigma \mid \sigma\in G\}$, and the action factors through the canonical restriction \[\Aut(\C)\twoheadrightarrow G,\quad \kappa \mapsto \kappa\mid_E.\] Indeed, for all $\kappa \in \Aut(\C), e\in E$ and $x_\sigma\in V_\sigma$, \[ (e\otimes 1)\kappa (x_\sigma)= \kappa ((e\otimes 1)x_\sigma)=\kappa( \sigma(e)x_\sigma)= \kappa(\sigma(e))\kappa(x_\sigma)=\kappa\sigma(e)\kappa(x_\sigma).\] Clearly $\kappa\sigma(e)=((\kappa\mid_E) \sigma)(e)$. By an abuse of notation, we write $\kappa$ for the restriction $\kappa\mid_E$. So it follows that $\kappa(x_\sigma)\in V_{\kappa\sigma}$, and thus $\kappa(V_\sigma)=V_{\kappa\sigma}$ for all $\kappa \in \Aut(\C)$ and $\sigma\in G$. Let us define an action of $\Aut(\C)$ on the set of projections $\PP=\{ P_\sigma \mid \sigma\in G\}$ by \[ \kappa_* P_\sigma:= \kappa \circ P_\sigma \circ \kappa^{-1}.\] Then for any element $\sum x_\sigma \in \oplus_{\sigma\in G} V_\sigma=V_\C$ and $P_\tau\in \PP$, \[ (\kappa_* P_\tau)(\sum x_\sigma)= \kappa\circ P_\tau\left(\sum \kappa^{-1}(x_\sigma)\right)=\kappa (\kappa^{-1}(x_{\kappa \tau}))=x_{\kappa \tau},\] where all summations run through $\sigma\in G$, and we used the fact that $\kappa^{-1}(x_\sigma)$ belongs to $V_{\tau}$ if and only if $\sigma= \kappa\tau$. Therefore, \[\kappa_*P_\sigma=P_{\kappa\sigma}.\] Clearly $\Aut(\C)$ acts transitively on $\PP$.
Since $P_{\sigma,\tau}=P_\sigma\oplus P_\tau$, we have similarly an action of $\Aut(\C)$ on the set $\PP\PP:=\{P_{\sigma,\tau}\mid (\sigma, \tau)\in G^2, \sigma\neq \tau\}$ by \[ \kappa_* P_{\sigma, \tau}=\kappa\circ P_{\sigma, \tau}\circ \kappa^{-1}=P_{\kappa\sigma, \kappa \tau}.\] The $\Aut(\C)$-orbit $O_{\sigma,\tau}$ of each $P_{\sigma, \tau} \in \PP\PP$ consists of all elements of the form $P_{\kappa\sigma, \kappa\tau}$ with $\kappa\in G$. \begin{lem}\label{lem:galois-actions} Let $W_\Q\subseteq V_\Q$ be any $\Q$-subspace of $V_\Q$, and $W_\C:=W_\Q\otimes_\Q\C\subseteq V_\C$ be its complexification. \begin{itemize} \item[(i)] If there exists $\sigma_0\in G$ such that $P_{\sigma_0}(W_\C)=V_{\sigma_0}$, then $P_{\sigma}(W_\C)=V_{\sigma}$ for all $\sigma\in G$. \item[(ii)] If there exists a pair $(\sigma_0,\tau_0)\in G^2$ with $\sigma_0\neq \tau_0$ such that $P_{\sigma_0,\tau_0}(W_\C)=V_{\sigma_0}\oplus V_{\tau_0}$, then $P_{\sigma,\tau}(W_\C)=V_{\sigma}\oplus V_\tau$ for all $P_{\sigma,\tau}\in O_{\sigma_0,\tau_0}$. \end{itemize} \end{lem} \begin{proof} Clearly, $W_\C$ is $\Aut(\C)$-invariant. For each $\sigma\in G$, let us choose $\kappa \in \Aut(\C)$ such that $\sigma=\kappa \sigma_0$. Then \[ P_\sigma(W_\C)= (\kappa_* P_{\sigma_0})(W_\C)= \kappa \circ P_{\sigma_0}\circ \kappa^{-1}(W_\C)=\kappa\circ P_{\sigma_0}(W_{\C})=\kappa(V_{\sigma_0})=V_\sigma.\] This proves part (i). Similarly, suppose that $P_{\sigma_0,\tau_0}(W_\C)=V_{\sigma_0}\oplus V_{\tau_0}$. For all $P_{\sigma,\tau}\in O_{\sigma_0,\tau_0}$, there exists $\kappa \in \Aut(\C)$ such that $\sigma=\kappa \sigma_0$ and $\tau=\kappa \tau_0$. So we have \[ \begin{split} P_{\sigma,\tau}(W_\C)&=(\kappa_* P_{\sigma_0,\tau_0})(W_\C)=\kappa\circ P_{\sigma_0,\tau_0}\circ \kappa^{-1}(W_\C)=\kappa \circ P_{\sigma_0,\tau_0} (W_\C)\\&=\kappa(V_{\sigma_0}\oplus V_{\tau_0})=\kappa(V_{\sigma_0})\oplus \kappa (V_{\tau_0})=V_\sigma\oplus V_\tau, \end{split} \] and part (ii) follows. 
\end{proof} Let $R$ be a commutative ring with unity, and $N$ be a free $R$-module of finite rank. We write $\Tr_R:\End_R(N)\to R$ for the trace map, and \[\sL_R(N):=\{ g\in \End_R(N)\mid \Tr_R(g)=0\}\] for the $R$-Lie algebra of traceless endomorphisms of $N$. It is well-known that \[ \sL_E(V)\otimes_\Q\C = \sL_{E_\C}(V_\C)=\sL_{E_\C}(\oplus_{\sigma\in G}V_\sigma)=\bigoplus_{\sigma\in G} \sL_{\C}(V_\sigma).\] We will denote the projection map $\sL_E(V)\otimes_\Q\C \to \sL_\C(V_\sigma)$ again by $P_\sigma$, and similarly for $P_{\sigma,\tau}$. Clearly, each $\sL_{\C}(V_\sigma)$ has $\C$-dimension $(\dim_EV)^2-1$. For the rest of the section, we assume additionally that $E$ is a CM-field. For any $\sigma\in G$, let $\bar\sigma:E\to E$ be the complex conjugate of $\sigma$. In other words, $\bar{\sigma}$ is the composition $E\xrightarrow{\sigma}E\to E$, where the second arrow stands for the complex conjugation map $e\mapsto \bar{e}$. \begin{lem}\label{lem:dimension-comparison} Let $\k$ be a semisimple $\Q$-Lie subalgebra of $\sL_E(V)$, and $\k_\C:=\k\otimes_\Q \C$ be its complexification. Suppose that the following two conditions hold: \begin{itemize} \item[(I)] there exists $\sigma_0\in G$ such that $P_{\sigma_0}(\k_\C)=\sL_\C(V_{\sigma_0})$; \item[(II)] for each pair $(\sigma, \tau)\in G^2$ with $\sigma\neq \tau$ and $\sigma\neq \bar\tau$, there exists $P_{\sigma_0, \tau_0}\in O_{\sigma,\tau}$ such that $P_{\sigma_0, \tau_0}(\k_\C)=\sL_\C(V_{\sigma_0})\oplus \sL_\C(V_{\tau_0})$. \end{itemize} Then \[\dim_\Q \k\geq \frac{1}{2}[E:\Q]\left((\dim_E V)^2-1\right).\] \end{lem} \begin{proof} Applying Lemma~\ref{lem:galois-actions} with $\k$ in place of $W$ and $\sL_E(V)$ in place of $V$, we see that \begin{gather*} P_\sigma(\k_\C)=\sL_\C(V_\sigma), \quad \forall \sigma\in G;\\ P_{\sigma,\tau}(\k_\C)= \sL_\C(V_\sigma)\oplus\sL_\C(V_\tau), \quad \forall (\sigma,\tau)\in G^2 \text{ with }\sigma\neq \tau \text{ and }\sigma \neq \bar\tau.
\end{gather*} Let us fix a CM-type $\Phi$ of $E$. By definition, $\Phi$ is a maximal subset of $G=\Hom(E,\C)$ such that no two elements of $\Phi$ are complex conjugate to each other. Clearly, $\abs{\Phi}=[E:\Q]/2$, and \[ \dim_\C \Big(\bigoplus_{\sigma\in \Phi}\sL_\C(V_\sigma)\Big)=\frac{1}{2}[E:\Q]\big((\dim_E V)^2-1\big).\] Let $\k_\C'$ be the projection of $\k_\C$ on $\oplus_{\sigma\in \Phi}\sL_\C(V_\sigma)$. It follows that the projection $\k'_\C\to \sL_\C(V_\sigma)$ is surjective for all $\sigma\in \Phi$, and $\k'_\C$ also projects surjectively onto $\sL_\C(V_\sigma)\oplus \sL_\C(V_\tau)$ for all distinct pairs $\sigma, \tau\in \Phi$. Therefore, $\k_\C'=\oplus_{\sigma\in \Phi}\sL_\C(V_\sigma)$ by the Lemma on pp.~790--791 of \cite{Ribet}. In particular, we get \[ \dim_\Q \k =\dim_\C \k_\C\geq \dim_\C \k_\C'=\frac{1}{2}[E:\Q]\left((\dim_E V)^2-1\right).\] \end{proof} In the next section, we will show that the semisimple part $\hdg^{ss}=[\hdg, \hdg]$ of the Hodge Lie algebra satisfies conditions (I) and (II) of Lemma~\ref{lem:dimension-comparison}, and thus prove our main theorem. \section{the hodge lie algebra} We keep all notation and assumptions of the previous sections. More specifically, $\zeta_q$ is a primitive $q$-th root of unity, $E=\Q(\zeta_q)$ and $G=\Gal(E/\Q)=(\Z/q\Z)^*$, where each $a\in (\Z/q\Z)^*$ maps $\zeta_q$ to $\zeta_q^a$. In order to simplify the notation, we write $X$ for the abelian variety $J^{(f,q)}$, and $V$ for its first rational homology group $\H_1(X, \Q)$. In addition, we assume that $\End^0(X)=E$. Recall that $E_\C=E\otimes_\Q\C$. Let $\Lie(X)$ be the complex tangent space to the origin of $X$. By functoriality, $E$ acts on $\Lie(X)$ and provides $\Lie(X)$ with a natural structure of $E_\C$-module. Therefore, $\Lie(X)$ splits into a direct sum \[\Lie(X)=\oplus_{a\in G} \Lie(X)_a,\] where $\Lie(X)_a:=\{x\in \Lie(X)\mid (\zeta_q\otimes 1) x= \zeta_q^a x\}$. Let us put $n_a=\dim_\C\Lie(X)_a$.
It is known that $n_a=[na/q]$ (see \cite{ZarhinM,ZarhinPisa}), where $[x]$ is the maximal integer that is less than or equal to $x$, and we take the representative $1\leq a \leq q-1$. \begin{rem}\label{rem:relative-prime} By \cite[Propositions 2.1 and 2.2]{xuezarhin2}, the assumptions (A), (B) and (C) of Theorem~\ref{main} guarantee that there exists an integer $a$ such that \[ 1\leq a \leq q-1, \quad \gcd(a, p)=1\] and the integers $[na/q]$ and $\dim_EV=n-1$ are relatively prime. We note that the conditions (A), (B) and (C) of Theorem~\ref{main} are equivalent to the conditions (A), (B) and (C) of \cite[Theorem 0.1]{xuezarhin2}. \end{rem} Since $V=\H_1(X,\Q)$ carries a natural structure of $E$-vector space, the first complex homology group $V_\C=\H_1(X,\C)=\H_1(X,\Q)\otimes_\Q\C$ carries a structure of $E_\C$-module, and therefore splits into a direct sum \[ V_\C= \oplus_{a\in G} V_a. \] Each $V_a$ is a $\C$-vector space of dimension $\dim_EV= n-1$. There is a canonical Hodge decomposition (\cite[chapter 1]{Mumford}, \cite[pp.~52--53]{Deligne}) \[V_\C=\H_1(X,\C)=\H^{-1,0}(X) \oplus \H^{0,-1}(X)\] where $\H^{-1,0}(X)$ and $\H^{0,-1}(X)$ are mutually ``complex conjugate'' $\dim(X)$-dimensional complex vector spaces. This splitting is $E$-invariant, and $\H^{-1,0}(X)$ and $\Lie(X)$ are canonically isomorphic as $E_\C$-modules. In particular, \[ \dim_\C \H^{-1,0}(X)_a= \dim_\C \Lie(X)_a= n_a. \] Let $\f_H^{0}=\f_{H,Z}^{0}:V_\C \to V_\C$ be the $\C$-linear operator such that \[\f_H^{0}(x) =-x/2 \quad \forall \ x \in \H^{-1,0}(X); \quad \f_H^{0}(x)=x/2 \quad \forall \ x \in \H^{0,-1}(X).\] Since the Hodge decomposition is $E$-invariant, $\f_H^{0}$ commutes with $E$. Therefore, each $V_a$ is $\f_H^{0}$-invariant. It follows that the linear operator $\f_H^{0}:V_a\to V_a$ is semisimple and its spectrum lies in the two-element set $\{-1/2, 1/2\}$. The multiplicity of eigenvalue $-1/2$ is $n_a=\dim_\C \H^{-1,0}(X)_a$, while the multiplicity of eigenvalue $1/2$ is $\dim_EV-n_a$.
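Remark~\ref{rem:relative-prime} is easy to test numerically: for small parameters satisfying (A), (B) or (C), one can search by brute force for an $a$ with $\gcd(a,p)=1$ such that $[na/q]$ and $n-1$ are relatively prime. An illustrative Python check (ours, not part of the paper):

```python
from math import gcd

def condition_holds(n, p, q):
    """Conditions (A)/(B)/(C) of the Theorem, for q = p^r and p not dividing n."""
    if n == q + 1:                                   # (A)
        return True
    if p % 2 == 1:                                   # (B): p odd
        return n % q != 1
    return n % q != 1 and n % (2 * q) != q - 1       # (C): p = 2

def good_a_exists(n, q, p):
    """Is there a with gcd(a, p) = 1 and gcd(floor(n*a/q), n - 1) = 1?"""
    return any(gcd(a, p) == 1 and gcd((n * a) // q, n - 1) == 1
               for a in range(1, q))

# Brute-force confirmation of the remark for small parameters.
for p in (2, 3, 5):
    for r in (1, 2, 3):
        q = p**r
        for n in range(4, 40):
            if n % p == 0 or not condition_holds(n, p, q):
                continue
            assert good_a_exists(n, q, p), (n, p, q)
```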
Clearly, the complex conjugate of $a\in \Gal(E/\Q)=(\Z/q\Z)^*$ is $\bar a=q-a$. It is known (\cite{Deligne}, \cite{MZ}) that \begin{equation} \label{eq:2} n_a+n_{\bar a}=\dim_E V. \end{equation} This implies that the multiplicity of the eigenvalue $1/2$ is $n_{\bar{a}}$. The Hodge Lie algebra $\hdg$ of $X$ is a reductive $\Q$-Lie subalgebra of $\End_\Q(V)$. Its natural representation in $V$ is completely reducible and its centralizer in $\End_\Q(V)$ coincides with $\End^0(X)=E$. Moreover, its complexification \[ \hdg_\C=\hdg\otimes_\Q \C\subset \End_\Q(V)\otimes_\Q\C =\End_\C(V_\C)\] contains $\f_H^0$ \cite[Sect. 3.4]{xuezarhin1}. Recall that $\hdg=\c\oplus \hdg^{ss}$, with $\c$ being the center of $\hdg$ and $\hdg^{ss}=[\hdg,\hdg]$ the semisimple part. Let $\c_\C:=\c\otimes_\Q \C$ be the complexification of $\c$ and $\hdg^{ss}_\C:=\hdg^{ss}\otimes_\Q\C$ the complexification of $\hdg^{ss}$. Clearly, $\hdg^{ss}\subset \sL_{E}(V)$, and thus \[ \hdg^{ss}_\C\subset \sL_{E_\C}(V_\C)=\oplus_{a\in G} \sL_\C(V_a). \] We write $\hdg^{ss}_a$ for the image of the projection $P_a: \hdg^{ss}_\C\to \sL_\C(V_a)$. Clearly, each $\hdg^{ss}_a$ is a semisimple complex Lie subalgebra of $\sL_\C(V_a)$. \begin{rem}\label{rem:operator-with-two-eigenvalue} Let us decompose $\f_H^0$ as $f+f'$ with $f'\in \c_\C$ and $f\in \hdg^{ss}_\C$. By \cite[Remark 3.2]{xuezarhin2}, the natural representation $V_a$ of $\hdg^{ss}_a$ is simple for all $a\in G$. It follows from Schur's Lemma that when restricted to each $V_a$, $f'$ coincides with multiplication by a scalar $c_a\in \C$. Therefore, $\hdg^{ss}_\C$ contains an operator (namely, $f$) whose restriction to each $V_a$ is diagonalizable with at most two eigenvalues: $-1/2-c_a$ of multiplicity $n_a$ and $1/2-c_a$ of multiplicity $n_{\bar a}=\dim_EV-n_a$. \end{rem} \begin{lem}\label{lem:one-factor}Let the assumptions be the same as in Theorem~\ref{main}.
There exists an $a\in G=(\Z/q\Z)^*$ such that $\hdg^{ss}_a=P_a(\hdg^{ss}_\C)$ coincides with $\sL_\C(V_a)$. \end{lem} \begin{proof} The idea is to combine Remarks~\ref{rem:relative-prime} and~\ref{rem:operator-with-two-eigenvalue} with Lemma~3.3 of \cite{xuezarhin2}. This result is already contained in the proof of \cite[Theorem 3.4]{xuezarhin2}, where we note that the assumption $n>q$ in \cite[Theorem 3.4]{xuezarhin2} is not used for this particular step of the proof. \end{proof} Notice that this is the place where assumptions (A), (B) and (C) in Theorem~\ref{main} are used, since we need to make sure that there exists $a\in G$ such that $n_a$ and $\dim_EV$ are relatively prime in order to apply Lemma~3.3 of \cite{xuezarhin2}. Let $h:(\Z/q\Z)^*\to \R$ be the function such that for all $1\leq a \leq q-1$ with $\gcd(a, q)=1$, \begin{equation} \label{eq:3} h(a)=\left(\frac{\dim_EV}{2}-n_a\right)^2=\left(\frac{n-1}{2}-\left[\frac{na}{q}\right]\right)^2. \end{equation} By~(\ref{eq:2}), $n_a+n_{\bar a}=\dim_EV$, so $h(a)=h(\bar a)=h(q-a)$, which is also easy to check directly from~(\ref{eq:3}). The function $h$ is non-increasing on the set of integers \[ [1, q/2]_\Z:=\{a \mid 1\leq a \leq q/2, \gcd(a,p)=1\}.\] By Remark~\ref{sec:n_less_than_q}, we have $4\leq n <q$. In particular, $[n/q]=0$. On the other hand, let $t$ be the maximal element of $[1,q/2]_\Z$. Then $t\neq 1$ and $[nt/q]\neq 0$. It follows that $h$ is not a constant function. \begin{lem}\label{lem:two-factors} Let the assumptions be the same as in Theorem~\ref{main}. Let $(a,b)\in G^2$ be a pair such that $h(a)\neq h(b)$. Then $P_{a,b}(\hdg^{ss}_\C)=\sL_\C(V_a)\oplus \sL_\C(V_b)$. \end{lem} \begin{proof}By (\ref{eq:3}), \[ h(a)-h(b)=(n_b-n_a)(\dim_E V-n_a-n_b). \] So $h(a)\neq h(b)$ if and only if $n_a\neq n_b$ and $n_a\neq \dim_EV-n_b$. Let $\k^{ss}=P_{a,b}(\hdg^{ss}_\C)$.
By Lemma~\ref{lem:one-factor} and part (i) of Lemma~\ref{lem:galois-actions}, both projections $\k^{ss}\to \sL_\C(V_a)$ and $\k^{ss}\to\sL_\C(V_b)$ are surjective. By Remark~\ref{rem:operator-with-two-eigenvalue}, $P_{a,b}(f)$ is a semisimple element of $\k^{ss}\subseteq \End_\C(V_a)\oplus \End_\C(V_b)$ such that $P_{a,b}(f)$ acts on $V_a$ with (at most) 2 eigenvalues of multiplicities $n_a$ and $\dim_EV-n_a$ respectively, and similarly for $b$. Lemma~\ref{lem:two-factors} follows by setting $d=2$ in \cite[Lemma 3.6]{xuezarhin2}. Last, we point out that the assumption that the multiplicities $a_i$ are positive in \cite[Lemma 3.6]{xuezarhin2} is not used in its proof, so the lemma applies to the case that $n_a$ or $n_b$ is zero, which may happen if $n<q$. \end{proof} \begin{proof}[Proof of Theorem~\ref{main}] As remarked at the end of Section 2, Theorem~\ref{main} follows if we show that the conditions (I) and (II) of Lemma~\ref{lem:dimension-comparison} hold for $\k=\hdg^{ss}$. Condition (I) holds by Lemma~\ref{lem:one-factor}. To show that Condition (II) holds, by Lemma~\ref{lem:two-factors} it is enough to prove that for each $(a,b)\in G^2$ with $a\neq b$ and $a\neq \bar{b}$, there exists $x\in G$ such that $h(xa)\neq h(xb)$. Suppose that this is not the case; then there exists a pair $(a,b)$ such that $h(xa)=h(xb)$ for all $x\in G$. Without loss of generality, we may and will assume that $b=1\in (\Z/q\Z)^*$, thus $a\neq \pm 1$. It follows that $h(xa)=h(x)$ for all $x\in (\Z/q\Z)^*$. Since $h$ is not a constant function, such an $a$ does not exist by Lemma~\ref{lem:even} of the next section. Contradiction. \end{proof} \section{Arithmetic Results} Throughout this section, $G=(\Z/q\Z)^*$. For each $a\in G$, let $\theta_a:G\to G$ be the translation map $b\mapsto ab$. A function $h:G\to \R$ is said to be \textit{even} if $h\circ \theta_{-1}= h$. For any $x\leq y\in \R$, we write $[x,y]_\Z$ for the set of integers $\{i \mid x\leq i \leq y, \gcd(i, p)=1\}$.
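The claimed properties of the function $h$ of (\ref{eq:3}), namely evenness, monotonicity on $[1,q/2]_\Z$, and nonconstancy for $4\leq n<q$, can be checked by brute force for small parameters. An illustrative Python sketch (ours, not part of the paper):

```python
def h(a, n, q):
    """h(a) = ((n-1)/2 - floor(n*a/q))^2 for a in (Z/qZ)^*, 1 <= a <= q-1."""
    return ((n - 1) / 2 - (n * a) // q) ** 2

def units(q, p):
    """Representatives 1 <= a <= q-1 of (Z/qZ)^*, q = p^r."""
    return [a for a in range(1, q) if a % p != 0]

# For some 4 <= n < q, check that h is even (h(a) = h(q-a)),
# non-increasing on [1, q/2]_Z, and non-constant, as claimed.
for n, p, q in [(4, 3, 9), (5, 2, 8), (7, 3, 27), (11, 5, 25)]:
    G = units(q, p)
    assert all(h(a, n, q) == h(q - a, n, q) for a in G)        # even
    half = [h(a, n, q) for a in G if 2 * a <= q]
    assert all(x >= y for x, y in zip(half, half[1:]))         # non-increasing
    assert len(set(h(a, n, q) for a in G)) > 1                 # non-constant
```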
\begin{lem}\label{lem:even} Let $h:(\Z/q\Z)^*\to \R$ be an even function that is monotonic on $[1,q/2]_\Z$. If $h\circ \theta_a=h$ for some $a\in (\Z/q\Z)^{*}$ with $a \neq \pm 1$, then $h$ is a constant function. \end{lem} \begin{proof} We prove the Lemma in seven steps. \begin{step}\label{step:2} Let $\langle \pm a \rangle$ be the subgroup of $(\Z/q\Z)^{*}$ generated by $a$ and $-1$. Clearly $h\circ \theta_b=h$ for any $b\in \langle \pm a \rangle$, since $h\circ \theta_a=h$ and $h$ is even. In particular, this holds true for the maximal element $b_{\max}$ in the set $\langle \pm a \rangle \cap [1,q/2]_\Z$. If $b_{\max}=1$, the group $\langle \pm a \rangle$ is necessarily $\{ \pm 1 \}$. Therefore, it is enough to prove that $h$ being nonconstant implies that $b_{\max}=1$. So without loss of generality, we assume that $a=b_{\max}$ throughout the rest of the proof. Notice that if $a\neq 1$, then $2a^2>q$, for otherwise it contradicts the maximality of $a$. \end{step} \begin{step}\label{lem:3} Lemma~\ref{lem:even} holds if $p=2$. \\ Every even function on $(\Z/q\Z)^*$ is constant if $q$ is $2$ or $4$, so we assume that $q=2^r\geq 8$. The group $(\Z/2^r\Z)^* $ is isomorphic to $ \Z/2\Z\times \Z/2^{r-2}\Z$, where the factor $\Z/2\Z$ is generated by $-1$. Let us assume that $\langle \pm a \rangle$ has order $2^s$. Since $\langle \pm a \rangle\supseteq \langle \pm 1 \rangle$, it follows that $\langle \pm a \rangle \cong \Z/2\Z\times \Z/2^{s-1}\Z$. In particular, if $\langle \pm a \rangle \neq \langle \pm 1 \rangle$, then $\Z/2^{s-1}\Z$ is nontrivial, and therefore $\langle \pm a \rangle$ contains 3 elements of order two. But there are exactly 3 elements of order two in $(\Z/q\Z)^*$, namely $-1, 2^{r-1}-1, 2^{r-1}+1$. Hence $\langle\pm a\rangle$ contains all the above elements of order $2$. So $a=2^{r-1}-1$, since it is the largest element in $[1,q/2]_\Z$.
Therefore, \[h(q/2-1)=h(2^{r-1}-1)=h(a)=(h\circ \theta_a)(1)=h(1).\] Since $h$ is monotonic on $[1, q/2]_\Z$, the above equality implies that $h$ is constant on $[1, q/2]_\Z$ and therefore a constant function. \end{step} \begin{step}\label{lem:4} Let $p$ be an odd prime. Lemma~\ref{lem:even} holds if either $a$ is even, or $a$ is odd and $3a\geq q$. \\ It is enough to prove that if $a\neq 1$, then $h(1)=h((q-1)/2)$. Since $h(1)=(h\circ \theta_a)(1)=h(a)$, by monotonicity $h$ is constant on $[1,a]_\Z$. Therefore it is enough to find $b$ such that $h((q-1)/2)=h(b)$ and $b\in [1,a]_\Z$. First, let us assume that $a=2b$ is even. Then \[ a\cdot\frac{q-1}{2}= (q-1)b \equiv -b \mod q. \] So $h((q-1)/2)=h(a(q-1)/2)=h(-b)=h(b)$. Clearly $b=a/2$ lies in $[1,a]_\Z$. Next, assume that $a$ is odd. Then \[ a\cdot\frac{q-1}{2}=\frac{qa-a}{2} \equiv \frac{q-a}{2} \pmod q.\] So $h((q-1)/2)=h((q-a)/2)$. Let $b=(q-a)/2$. When $3a\geq q$, we have $ b=(q-a)/2 \leq a$, hence $b$ lies in $[1,a]_\Z$ as desired. \end{step} \begin{step}\label{lem:5} Lemma~\ref{lem:even} holds if $p=3$. \\ When $p$ is odd, $(\Z/p^r\Z)^*$ is cyclic of order $\varphi(p^r)=(p-1)p^{r-1}$. For $p=3$, \[(\Z/3^r\Z)^* \cong \Z/(2\cdot 3^{r-1})\Z\cong \Z/2\Z\times \Z/3^{r-1}\Z.\] In particular, if $q\geq 9$, $(\Z/q\Z)^*$ contains a unique subgroup of order 3, which is generated by $3^{r-1}+1$. If the order of $\langle \pm a \rangle$ is coprime to $3$, then $\langle \pm a\rangle$ is necessarily $\{\pm 1\}$, which leads to a contradiction. If the order of $\langle \pm a \rangle$ is divisible by $3$, then $q\geq 9$ and $\langle \pm a \rangle$ contains $3^{r-1}+1$. By the assumption on the maximality of $a$ we must have $a \geq 3^{r-1}+1$ and hence $3a>q$. \end{step} \begin{step}\label{lem:6} Assume that both $p$ and $a$ are odd, $p \neq 3$ and $3a<q$. Lemma~\ref{lem:even} holds if $7a\geq q$. \\ Since $p\neq 3$, $(q-3)/2$ lies in $[1, q/2]_\Z$. It is enough to prove that $a\neq 1$ implies that $h(1)=h((q-3)/2)$.
Indeed, it follows from the proof of Step~\ref{lem:4} that $h((q-1)/2)=h((q-a)/2)$. But if $a\neq 1$ then $a \geq 3$, so $(q-a)/2 \leq (q-3)/2$. If we prove that $h$ is constant on $[1, (q-3)/2]_\Z$, then $h((q-1)/2)=h((q-a)/2)=h(1)$ and it follows that $h$ is a constant function. By our assumption $3a<q$, so $(q-3a)/2$ lies in $[1, q/2]_\Z$. Notice that \[ a\cdot\frac{q-3}{2} \equiv \frac{q-3a}{2} \mod q. \] We see that $h((q-3)/2)=h((q-3a)/2)$. Since $h$ is constant on $[1, a]_\Z$, the inequality $h(1) \neq h((q-3)/2)$ would imply that $ a <(q-3a)/2$, or equivalently $5a<q$. In particular, $2a<q/2$. But $2\in [1, a]_\Z$, since $p$ is odd and $a\geq 3$. So $h(2)=h(1)$, therefore $h(2a)=h(1)$ and $h$ is constant on $[1, 2a]_\Z$. But now by our assumption $7a\geq q$, or equivalently $ 2a \geq (q-3a)/2$, it follows that \[ h\left(\frac{q-3}{2}\right)=h\left(\frac{q-3a}{2}\right)=h(1).\] \end{step} \begin{step}\label{lem:7} Assume that both $p$ and $a$ are odd, $p \neq 3,5$ and $7a<q$. Then Lemma~\ref{lem:even} holds. \\ Since $7a<q$ and $p\neq 5$, $(q-5a)/2$ lies in $[1, q/2]_\Z$. By a similar argument as in Step~\ref{lem:6}, $h((q-5)/2)=h((q-5a)/2)$. We claim that it is now enough to show that $h(1)=h((q-5)/2)$. Indeed, by the proof of Step~\ref{lem:6}, all we need to show is that $h(1)=h((q-3)/2)$, but since $a\geq 3$, we have $(q-3a)/2< (q-5)/2$. So $h$ being constant on $[1, (q-5)/2]_\Z$ implies that $h(1)=h((q-3a)/2)=h((q-3)/2)$. Let $S$ be the set of all integers \[ S=\{b \mid b \geq 1, p\nmid b, (2b+1)a<q\}. \] Clearly $1\in S$, so $S$ is not empty. Let $x$ be the maximal element of $S$. By Step~\ref{step:2}, $2a^2>q$, so necessarily $x<a$. Since $h$ is constant on $[1, a]_\Z$, we must have $h(1)=h(x)$. Notice that $xa< q/2$ by assumption. So $h(ax)=h(x)=h(1)$ and it follows that $h$ is constant on $[1, ax]_\Z$. Assume that $h(1)\neq h((q-5)/2)$. Then necessarily $ax< (q-5a)/2$, or equivalently, $(2x+5)a<q$.
But we can choose $x'$ from the two-element set $\{x+1, x+2\}$ such that $x'$ is coprime to $p$. It follows that $x'\in S$. This contradicts the maximality of $x$. \end{step} \begin{step}\label{lem:8} Lemma~\ref{lem:even} holds if $p=5$. \\ If the order of $\langle \pm a \rangle$ is divisible by $5$, then $\langle \pm a \rangle$ contains the unique subgroup of order 5 in $(\Z/5^r\Z)^*$. In particular, $2\cdot 5^{r-1}+1\in \langle \pm a \rangle$. It follows that $a \geq 2\cdot 5^{r-1}+1$ and therefore $3a>5^r$. The Lemma holds by Step~\ref{lem:4}. If the order of $\langle \pm a \rangle$ is coprime to $5$, then from the isomorphism \[ (\Z/5^r\Z)^*\cong \Z/4\Z\times \Z/5^{r-1}\Z,\] we see that $\langle \pm a \rangle$ has order either $2$ or $4$. If $\langle \pm a \rangle$ has order 2, then $\langle \pm a \rangle$ is necessarily $\langle\pm 1\rangle$ and this leads to a contradiction. So we assume that $\langle \pm a \rangle$ has order $4$ and $a$ is the unique element such that $1<a<5^r/2$ and $a^2\equiv -1 \pmod{5^r}$. In particular, $a^2+1\geq 5^r$. If $a$ is even then the Lemma holds by Step~\ref{lem:4}. In particular, this works for $q=p=5$ since $a=2$ in this case. We assume that $q\geq 25$ and $a$ is odd throughout the rest of the proof. First we claim that $a \geq 7$. Indeed, if $q=25$, then $a=7$ by direct calculation; if $q>25$, then $a>7$ since $a^2+1\geq q$. This implies that $(q-a)/2\leq (q-7)/2$. Therefore, it is enough to prove that $h((q-7)/2)=h(1)$, since it then follows that $h((q-1)/2)=h((q-a)/2)=h(1)$. By Step~\ref{lem:6} we may also assume that $7a<q$. It follows that $(q-7a)/2 \in [1, q/2]_\Z$ and $h((q-7)/2)=h((q-7a)/2)$. Let $c=[q/a]$. Since $a^2+1\geq q$ and $a<q/2$, we see that $2\leq c\leq a$. Let $x=[c/2]$ if $[c/2]$ is not divisible by 5, and $x=[c/2]-1$ otherwise. Notice that $a>x\geq \max\{1, (c-3)/2\}$ and $xa \leq q/2$ by our choice of $x$. It follows that $x\in [1, a]_\Z$, therefore $h(x)=h(1)$, and therefore $h(ax)=h(x)=h(1)$.
So $h$ is constant on $[1, ax]_\Z$. If $h(1)\neq h((q-7)/2)$, we must have $ xa < (q-7a)/2$, or equivalently, $(2x+7)a <q$. Then it follows that \[ \frac{q}{a} > 2x+7\geq 2\left(\frac{c-3}{2}\right)+7 = c+4=\left[\frac{q}{a}\right]+4, \] which is absurd. \end{step} Lemma~\ref{lem:even} is proved by combining all the above steps. \end{proof}
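The congruence manipulations used in the steps above are elementary and can be spot-checked numerically. The following brute-force sketch (an illustration, not part of the proof) verifies the two identities from Step~\ref{lem:4} for small odd prime powers $q$:

```python
# Check the two congruences from Step 4 for small odd prime powers q = p^r:
#   a even, a = 2b:  a*(q-1)/2 == -b      (mod q)
#   a odd:           a*(q-1)/2 == (q-a)/2 (mod q)
for q in [3, 9, 27, 5, 25, 7, 49, 11, 13]:
    for a in range(2, q // 2 + 1):
        lhs = (a * ((q - 1) // 2)) % q
        if a % 2 == 0:
            assert lhs == (-(a // 2)) % q, (q, a)
        else:
            assert lhs == (q - a) // 2, (q, a)
print("Step 4 congruences verified")
```

Note that the identities themselves hold for all $a$ in the range, whether or not $a$ is coprime to $p$; the coprimality conditions enter only through the interval $[1,q/2]_\Z$.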
\section{Introduction} \label{sec_intro} The investigation of the in-medium color force between partons is pivotal for understanding the microscopic mechanisms that lead to the remarkable features of the quark-gluon plasma (QGP) as observed in ultra-relativistic heavy-ion collisions (URHICs). Lattice-QCD (lQCD) computations of the free energy of a heavy quark-antiquark ($Q\bar Q$) pair immersed into the QGP~\cite{Petreczky:2004pz,kaczmarek2005static} indicate that nonperturbative effects, specifically remnants of the linear part of the potential, survive up to temperatures of at least twice the pseudo-critical one, $T_{\rm pc}\simeq 160$\,MeV~\cite{Borsanyi:2010bp,Bazavov:2011nk}. Potential models~\cite{Wong:2004zr,Cabrera:2006wh,Alberico:2006vw,Mocsy:2007yj,Riek:2010py} have been employed to implement these effects and test them against lQCD data for euclidean quarkonium correlators~\cite{Jakovac:2006sf,Aarts:2007pk,Ding:2012sp,Aarts:2014cda}, but no definite answer on the modifications of the QCD force in medium could be achieved. To broaden these investigations we have been developing a thermodynamic $T$-matrix approach~\cite{Mannarelli:2005pz,Riek:2010py,Liu:2016ysz,Liu:2017qah} where consequences of the in-medium potential are assessed not only for quarkonia, but also for individual heavy quarks (such as their transport properties) and the surrounding medium that they interact with. The $T$-matrix framework has been solved selfconsistently for one- and two-parton correlations in a full off-shell scheme beyond the quasiparticle approximation~\cite{Liu:2016ysz,Liu:2017qah}, allowing for the dynamical formation of (broad) bound states, and connecting bulk and microscopic properties of the QGP and its excitations (spectral functions). Despite constraints from three sets of lQCD data (equation of state (EoS), heavy-quark (HQ) free energy, and quarkonium correlators), the underlying in-medium potential could still not be determined unambiguously~\cite{Liu:2017qah}. 
However, different potentials predict markedly different spectral and transport properties of the QGP. The objective of the present paper is to further explore this sensitivity by computing the thermal relaxation rates for charm quarks for different potential solutions (including previously used internal- and free-energy proxies) and quantifying their effect on the charm-quark spectra in URHICs using relativistic Langevin simulations. We specifically scrutinize off-shell effects in the calculation of the transport coefficients, which can play a significant role given the large spectral widths of partons found in the ``strongly-coupled solution" of the $T$-matrix approach, together with broad $D$-meson resonance states in the charm-light-quark scattering amplitude near or even below the nominal two-parton threshold. This paper is organized as follows. In Sec.~\ref{sec_pot} we recollect the main features and differences of weakly- and strongly-coupled solutions that we previously found within the $T$-matrix approach. In Sec.~\ref{sec_offtrans} we introduce the off-shell formalism to calculate HQ transport coefficients, and discuss an improved partial-wave expansion in the $T$-matrix over previous calculations of the HQ relaxation rate. In Sec.~\ref{sec_trans} we analyze the results of the HQ transport coefficients from the different types of potentials. In Sec.~\ref{sec_urhic}, we briefly recall the transport implementation into URHICs using relativistic Langevin simulations, calculate the charm-quark and $D$-meson nuclear modification factors ($R_{AA}$) and elliptic flow ($v_2$), and discuss the results in light of discriminating different potential strengths via experimental observables. In Sec.~\ref{sec_concl} we summarize and conclude. In appendix~\ref{app_cm}, we collect the expressions used for the transformation of the off-shell $T$-matrix into the center-of-mass (CM) frame as used in this work. 
\section{In-Medium Potentials Based on Lattice QCD} \label{sec_pot} In Ref.~\cite{Liu:2017qah} we deployed the $T$-matrix approach, together with the Luttinger-Ward-Baym formalism, in a comprehensive fit to lQCD data for the HQ free energy, quarkonium correlators and the QGP EoS. It turned out that the input potential required to simultaneously describe the lattice data is not unique. We bracketed the range of viable potentials by two approximately limiting scenarios, referred to as the strong-coupling scenario (SCS) and the weak-coupling scenario (WCS). The main features of the SCS are large thermal parton widths leading to a dissolution of their quasiparticle peaks at low momentum near $T_{\rm pc}$, while broad mesonic and diquark bound states emerge whose contribution dominates the pressure when approaching $T_{\rm pc}$ from above. On the other hand, in the WCS thermal partons remain well-defined quasiparticles (with masses similar to the SCS), while rather narrow, loosely bound two-body states form near $T_{\rm pc}$ whose contribution to the EoS remains, however, subleading. The underlying static potentials, $V_s$ for the SCS and $V_w$ for the WCS, are displayed in Fig.~\ref{fig_pot}, along with the lQCD results for the free ($F$) and internal energy ($U$) that they reproduce through the $T$-matrix formalism. Both ${V}_s$ and ${V}_w$ lie in between $U$ and $F$, and they tend to be closer to $U$ as temperature increases while their gap diminishes. However, at low temperatures $V_w$ essentially coincides with $F$, while $V_s$ reaches much above it at intermediate and especially large distances. This difference is the key factor in the resulting QGP spectral properties near $T_{\rm pc}$ as discussed above; the large-distance strength of $V_s$ implies that the QGP is strongly coupled only at large distances, \ie, for soft momenta. \begin{figure} [thb!]
\centering \includegraphics[width=0.99\columnwidth]{vuf.eps} \caption{The potentials of the SCS (solid lines) and WCS (dashed lines) are compared to the internal energy $U$ (crosses) and free energy $F$ (dots) as a function of distance between a $Q$ and $\bar Q$, for four temperatures. } \label{fig_pot} \end{figure} \begin{figure} [!htb] \centering \includegraphics[width=0.99\columnwidth]{vufforceplot.eps} \caption{Force for $V_s$ (solid line), $V_w$ (dashed line), $U$ (crosses) and $F$ (dots) at different temperatures.} \label{fig_vufforce} \end{figure} \begin{figure} [!htb] \centering \includegraphics[width=0.99\columnwidth]{vufasplot.eps} \caption{The dimensionless quantity $\frac{3}{4}r^2 dV/dr$ (scaled to recover the strong coupling constant, $\alpha_s$, at short distance) is plotted for $V_s$ (solid line), $V_w$ (dashed line), $U$ (crosses) and $F$ (dots).} \label{fig_vasr} \end{figure} \begin{figure} [!htb] \centering \includegraphics[width=0.99\columnwidth]{vufasplotp.eps} \caption{The dimensionless quantity $\alpha_{\rm eff}(k)\equiv\frac{3}{16\pi}k^2 V(k)$ (scaled to recover the strong coupling constant, $\alpha_s$, at large momentum) is plotted for $V_s$ (solid line), $V_w$ (dashed line), $U$ (crosses) and $F$ (dots).} \label{fig_vasp} \end{figure} Taking the derivative of the potentials, $-dV(r)/dr$, yields the pertinent forces, cf.~Fig.~\ref{fig_vufforce}. The forces for $V_s$ and $U$ at large distances are much higher than those for $ V_w $ and $F$. Around $r\simeq0.5$\,fm and $T$=0.194~GeV, the force from $U$ amounts to ca.~2.5\,GeV/fm which even exceeds the vacuum force by about a factor of $\sim$2. This enhancement originates from the ``entropy term", $-TdF/dT$, as a fast change in degrees of freedom near $T_{\rm pc}$ leads to a large temperature derivative. It has been suggested that this is caused by releasing thermal magnetic monopoles~\cite{Liao:2008vj}. 
The force from $V_s$ at this distance (at $T$=0.194~GeV) is also larger than in vacuum, by about 20\%; \ie, the major contribution to this force is still considered to be the remnant of the confining vacuum configuration rather than thermal monopoles. The long-range force is closely related to low-momentum transport properties of the medium; in particular, a long-range force allows a parton to interact with an increased number of thermal partons in the heat bath, proportional to the volume of the spherical shell, which grows as $r^2$. Therefore, by multiplying the force with $\frac{3}{4}r^2$, one forms a dimensionless quantity, $\frac{3}{4} r^2 dV/dr$, that can be regarded as an ``effective interaction strength'' in the medium and is plotted for the 4 ``potentials'' in Fig.~\ref{fig_vasr}. The factor of 3/4 renders the $r\to0$ limit equal to the strong coupling constant, which is $\alpha_s$=0.27 for all of our 4 ``potentials''. Starting from short range, $U$ has the largest interaction strength, up to $r\simeq 1(0.4)$\,fm at the smallest (largest) temperature, due to the ``entropy-related'' potential, $-TdF/dT$; as we will see below, this can affect transport properties even at rather high momentum. Coming from the large-distance side, $V_s$ gives the strongest ``effective'' coupling, and its maximum coupling peak at each temperature is located at the largest distance among all potentials, ranging from $r_{\rm max}$=0.85\,fm at $T$=0.194\,GeV down to $r_{\rm max}$=0.5\,fm at $T$=0.400\,GeV. The large-distance enhancement of the coupling can be related to an infrared enhancement in momentum space, as illustrated by the dimensionless-scaled momentum-space potentials displayed in Fig.~\ref{fig_vasp}: here, the maximum interaction strength for $V_s$ occurs at the lowest momentum (relative momentum exchange between $Q$ and $\bar Q$) among the 4 potentials, approximately given by $p_{\rm max}=2/r_{\rm max}$.
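These features can be illustrated with a short numerical sketch. The parametrization below is a generic Debye-screened Cornell-type potential with placeholder screening masses; it is an assumption for illustration only, not one of the fitted in-medium potentials, but it shows how the dimensionless quantity $\frac{3}{4}r^2\,dV/dr$ recovers $\alpha_s$ at short distance and develops an interior peak once the string term is screened:

```python
import numpy as np

# Illustration only: a generic Debye-screened Cornell-type potential with
# placeholder parameters -- NOT the fitted T-matrix potentials V_s, V_w.
hbarc   = 0.1973        # GeV fm
alpha_s = 0.27          # short-distance coupling, as quoted in the text
sigma   = 0.9           # GeV/fm, vacuum string tension (typical value)
m_d     = 0.6 / hbarc   # Coulomb (Debye) screening mass, in 1/fm (0.6 GeV)
m_s     = 0.5 / hbarc   # string screening mass, in 1/fm (0.5 GeV, assumed)

def dV_dr(r):
    """Force magnitude dV/dr in GeV/fm for the screened Cornell form."""
    coul = (4.0 / 3.0) * alpha_s * hbarc * np.exp(-m_d * r) * (1.0 / r**2 + m_d / r)
    string = sigma * np.exp(-m_s * r)
    return coul + string

r = np.linspace(0.02, 1.5, 500)              # distance in fm
alpha_eff = 0.75 * r**2 * dV_dr(r) / hbarc   # dimensionless coupling strength

print(alpha_eff[0])              # -> alpha_s in the r -> 0 limit
print(r[np.argmax(alpha_eff)])   # distance where the effective coupling peaks
```

With these placeholder values the coupling peaks at an intermediate distance, qualitatively like the behavior shown in Fig.~\ref{fig_vasr}; the actual peak positions depend on the fitted screening masses.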
\section{Off-Shell Transport Coefficients} \label{sec_offtrans} As mentioned above, the strong color force, in particular in the SCS, leads to large widths in the spectral functions of thermal partons, dissolving their quasiparticle peaks at low momenta and temperatures~\cite{Liu:2017qah}. It is therefore in order to incorporate off-shell effects into the Boltzmann/Langevin description of HQ transport. Toward this goal, we start from the Kadanoff-Baym equations and use a minimal set of approximations to reduce them to a Boltzmann equation, where quantum effects are encoded in the transition rates. Subsequently, this Boltzmann equation is expanded into a Fokker-Planck equation, which can be implemented via a Langevin process where quantum effects are encoded in the transport coefficients. We closely follow the formalism for non-equilibrium quantum field theory described in Ref.~\cite{Danielewicz:1982kk}. We first illustrate a formal derivation of the relations for the non-relativistic case, but our final expressions for the transport coefficients account for relativistic effects as discussed in Ref.~\cite{Liu:2017qah}. In relative energy-momentum space, with a macroscopic time denoted as \(t\),\footnote{We use the same approximation, \(T\pm t/2\approx T\), as in Ref.~\cite{Danielewicz:1982kk}, but use \(t\) to denote \(T=(t_1+t_2)/2\).} the equation for the non-equilibrium HQ Green function can be expressed as\footnote{We enforce translational invariance so that all terms with a coordinate gradient vanish, and the Boltzmann equation used to evaluate the transport coefficients can be obtained as in Ref.~\cite{Svetitsky:1987gq}.} \begin{align} &\frac{\partial}{\partial t}[\int d\omega G_Q^<(\omega,\textbf{p},t)]= \int d\omega\{i\Sigma_Q^{<}(\omega,\textbf{p},t)G_Q^>(\omega,\textbf{p},t) \nonumber\\ & \quad \qquad \qquad \qquad \qquad - i\Sigma_Q^{>}(\omega,\textbf{p},t) G_Q^<(\omega,\textbf{p},t)\} \ .
\label{eq_baymeq} \end{align} The $G_Q^{<,>}(\omega,\textbf{p},t)$ are the Fourier transforms of the Green functions, \begin{align} G_Q^{<}(t_1,x_1,t_2,x_2) &=i\langle\psi_Q^\dagger(t_2,x_2)\psi_Q(t_1,x_1)\rangle \\ G_Q^{>}(t_1,x_1,t_2,x_2) &=-i\langle\psi_Q(t_1,x_1)\psi^\dagger_Q(t_2,x_2)\rangle \ , \end{align} with respect to $\delta t$ and $ \delta x $ for fixed $ t $ and $ x $ where $\delta t=t_1-t_2$, $\delta x=x_1-x_2$, $t=(t_1+t_2)/2$, $x=(x_1+x_2)/2$~\cite{Danielewicz:1982kk}. In a uniform medium the $G_Q^{<,>}$ do not depend on $x$. $\Sigma_Q^{<,>}$ is the selfenergy in the real-time formalism, in which it can be calculated diagrammatically from the underlying scattering processes between the heavy quark and the partons of the medium. The Fourier transform of $\Sigma_Q^{<,>}$ uses the same convention as that for $G_Q^{<,>}$. The $T$-matrix approach has been used to derive the expressions for these selfenergies in Appendix F of Ref.~\cite{Danielewicz:1982kk}. One has \begin{align} &\Sigma_Q^{>}(\omega,\textbf{p},t)=\mp \sum\int \frac{d\omega'd^3\textbf{p}'}{(2\pi)^4}\frac{d\nu d^3\textbf{q}}{(2\pi)^4} \frac{d\nu'd^3\textbf{q}'}{(2\pi)^4}(2\pi)^4\delta^{(4)} \nonumber\\ & \quad \times |T(E,\textbf{P},\textbf{p},\textbf{p}')|^2 G^{>}_Q(\omega',p')G^{<}_i(\nu,q)G^{>}_i(\nu',q') \ , \label{eq_selfT1} \end{align} and \begin{align} &\Sigma_Q^{<}(\omega,\textbf{p},t)=\mp \sum\int \frac{d\omega'd^3\textbf{p}'}{(2\pi)^4}\frac{d\nu d^3\textbf{q}}{(2\pi)^4} \frac{d\nu'd^3\textbf{q}'}{(2\pi)^4}(2\pi)^4\delta^{(4)} \nonumber\\ &\quad \times |T(E,\textbf{P},\textbf{p},\textbf{p}')|^2 G^{<}_Q(\omega',p')G^{>}_i(\nu,q)G^{<}_i(\nu',q') \ .
\label{eq_selfT2} \end{align} Here, $\delta^{(4)}$ is a short-hand notation for energy-momentum conservation, and $\sum$ represents the summation over the internal degrees of freedom $i=q, \bar{q}, g$ and their color, spin and flavor, divided by one HQ degeneracy, $d_Q$=6; $\textbf{P}$ and $E$ are the total momentum and energy, and $T(E,\textbf{P},\textbf{p},\textbf{p}')$ is the retarded \(T\)-matrix. The $ G_{i}^{<,>} $ are the Green functions for the light partons in medium. The classical Boltzmann equation is recovered from Eq.~(\ref{eq_baymeq}) using the on-shell approximations: $G^<=\mp i(2\pi)\delta(\omega-\varepsilon(\textbf{p}))f(\textbf{p},t)$ and $G^>=-i (2\pi)\delta(\omega-\varepsilon(\textbf{p}))(1\pm f(\textbf{p},t))$. These approximations are derived in Ref.~\cite{Danielewicz:1982kk};\footnote{Our convention for ``$ \mp $'' (upper/lower sign denotes boson/fermion) is opposite of that in Ref.~\cite{Danielewicz:1982kk}.} they neglect off-shell quantum effects, but not all are necessary to describe HQ diffusion in a local-equilibrium QGP. We have found that the ``minimal'' approximations required for obtaining a HQ Boltzmann equation amount to \begin{align} &G_Q^<(\omega,\textbf{p},t)=i(2\pi)\delta(\omega-\varepsilon_Q(\textbf{p}))f_Q(\textbf{p},t) \ , \nonumber\\ &G_Q^{>}(\omega,p)=- i(2\pi)\rho_Q(\omega,p)(1- n_Q(\omega)) \ , \nonumber\\ &G_i^{<}(\omega,p)=\mp i(2\pi)\rho_i(\omega,p)n_i(\omega) \ , \nonumber\\ & G_i^{>}(\omega,p)=- i(2\pi)\rho_i(\omega,p)(1\pm n_i(\omega)) \ , \label{eq_approx} \end{align} \normalsize where the quasiparticle approximation is only applied to $ G_Q^<(\omega,\textbf{p},t) $, \ie, the incoming heavy quark, while all other $ G^{<,>} $ are taken to be off-shell equilibrium Green functions, with $\rho_{i,Q}$ and $ n_{i,Q} $ denoting the corresponding spectral and distribution functions, respectively, for light ($i$) and heavy ($Q$) partons in equilibrium.
Substituting these expressions into Eqs.~(\ref{eq_baymeq}), (\ref{eq_selfT1}), and (\ref{eq_selfT2}) yields the Boltzmann equation \begin{align} &\frac{\partial}{\partial t}f(\textbf{p},t)= \nonumber\\ &\int \frac{d^3\textbf{k}}{(2\pi)^3} [w(\textbf{p+k},\textbf{k})f(\textbf{p+k},t)-w(\textbf{p},\textbf{k})f(\textbf{p},t)] \ , \end{align} \normalsize where the transition rate is\footnote{Note that $i\Sigma^{>}(p,\varepsilon(p),t)f(\textbf{p},t)=\int \frac{d^3\textbf{k}}{(2\pi)^3} [w(\textbf{p},\textbf{k})f(\textbf{p},t)]$. Also, when converting the gain term, $ \Sigma_Q^< G_Q^>$, to Boltzmann form, it is necessary to use $T(E,\textbf{P},\textbf{p},\textbf{p}')=T(E,\textbf{P},\textbf{p}',\textbf{p})$.} \begin{align} &w(\textbf{p},\textbf{k})=\int \frac{d\nu d^3\textbf{q}}{(2\pi)^3}\frac{d\nu'd^3\textbf{q}'}{(2\pi)^3} d\omega'(2\pi)^4\delta^{(4)} \rho_i(\nu,q) \nonumber\\ & \quad \qquad \times \rho_i(\nu',q') \rho_Q(\omega',|\textbf{k}+\textbf{p}|) |T(E,\textbf{P},\textbf{p},\textbf{k}+\textbf{p})|^2 \nonumber\\ & \qquad \quad \qquad \qquad \times n_i(\nu) \left[1\pm n_i(\nu')\right] \left[1- n_Q(\omega')\right] \ , \label{eq_wrate} \end{align} \normalsize and $\textbf{k}= \textbf{p}'-\textbf{p}$ is the 3-momentum exchange. Note that we have approximated the distribution function of the outgoing heavy quark in the blocking factor $(1-n_Q)$ to be a thermal one (the blocking factor is close to one in any case), and therefore the rate \(w(\textbf{p},\textbf{k})\) does not depend on the dynamical non-equilibrium HQ distribution function, \(f(\textbf{p},t)\). So far, our discussion does not include relativistic effects; several modifications are necessary to do that, as detailed in the following for the calculation of the HQ transport coefficients. Expanding the full Boltzmann equation in the momentum transfer, \(\textbf{k}\), results in a Fokker-Planck equation, which can be converted to a Langevin approach for heavy quarks.
The Fokker-Planck equation is given by \begin{align} &\frac{\partial}{\partial t}f(p,t)=\frac{\partial}{\partial p_i}\{A_i(p)f(p,t)+ \frac{\partial}{\partial p_j}[B_{ij}(p)f(p,t)]\} \end{align} where the HQ transport coefficients are defined as weighted averages over the transition rate, \begin{align} &A_i(p)=\int \frac{d^3\textbf{k}}{(2\pi)^3} w(\textbf{p},\textbf{k}) k_i\nonumber\\ &B_{ij}(p)=\int \frac{1}{2}\frac{d^3\textbf{k}}{(2\pi)^3}w(\textbf{p},\textbf{k}) k_i k_j \ . \end{align} In local equilibrium, the drag ($A$) and transverse/longitudinal diffusion coefficients ($B_0/B_1 $) are defined through \begin{align} &A_i(p)=A(p)p_i \nonumber\\ &B_{ij}(p)=B_0(p)P^{\perp}_{ij}+B_1(p)P^{\parallel}_{ij} \ , \end{align} with the projectors $P^{\perp}_{ij}=\delta_{ij}-p_i p_j/\textbf{p}^2$ and $P^{\parallel}_{ij}=p_i p_j/\textbf{p}^2$. The scalar transport coefficients, $A(p)$, $B_0(p)$ and $B_1(p)$, can thus be expressed via averages \begin{equation} \langle X(\mathbf{p}')\rangle\equiv\int \frac{d^3\mathbf{k}}{(2\pi)^3} w(\mathbf{p},\mathbf{k}) X(\mathbf{p}') \end{equation} as \begin{eqnarray} A(p)&=&\langle 1-\frac{\textbf{p}\cdot\textbf{p}'}{\textbf{p}^2}\rangle \nonumber\\ B_0(p)&=&\frac{1}{2}\langle p'^2-\frac{(\textbf{p}\cdot\textbf{p}')^2}{\textbf{p}^2}\rangle \nonumber\\ B_1(p)&=&\frac{1}{2}\langle \frac{(\textbf{p}\cdot\textbf{p}')^2}{\textbf{p}^2}-2\textbf{p}\cdot\textbf{p}'+\textbf{p}^2 \rangle \ . 
\end{eqnarray} Using the expression for $w(\textbf{p},\textbf{k}) $ in Eq.~(\ref{eq_wrate}) with the replacement $\textbf{k}+\textbf{p} \rightarrow \textbf{p}'$, and switching the integration variable to $ \textbf{p}' $, we express $ \langle X(\textbf{p}')\rangle$ in $T$-matrix form as \small \begin{align} \langle X(\textbf{p}')\rangle=&\sum_i\frac{1}{2 \varepsilon_Q(p)}\int \frac{d\omega'd^3\textbf{p}' }{(2\pi)^3 2\varepsilon_Q(p')} \frac{d\nu d^3\textbf{q}}{(2\pi)^3 2\varepsilon_i(q)} \frac{d\nu'd^3\textbf{q}'}{(2\pi)^3 2\varepsilon_i(q')} \nonumber\\ &\times\delta^{(4)}\frac{(2\pi)^4}{d_Q}\sum_{a,l,s}|M|^2\rho_Q(\omega',p')\rho_i(\nu,q) \rho_i(\nu',q') \nonumber\\ &\times[1-n_Q(\omega')] n_i(\nu) [1\pm n_i(\nu')] X(\textbf{p}') \ . \label{eq_offtrans} \end{align} \normalsize The summation \(\sum_i\) is over all light flavors, $i=u,\bar{u}, d, \bar{d}, s, \bar{s}, g$, where the light and strange quarks are assumed to have the same mass (which is a good approximation in our context~\cite{Riek:2010py}). We include the relativistic phase space factor with the single-particle on-shell energy, denoted by $\varepsilon_{Q,i}(p)$. The heavy-light scattering matrix elements, $|M_{Qi}|^2$, in Eq.~(\ref{eq_offtrans}) are related to the $T$-matrix in the CM frame as \begin{align} & \sum_{a,l,s}\left|M\right|^2 =16\varepsilon_Q(p_\text{cm})\varepsilon_i(p_\text{cm}) \varepsilon_Q(p_\text{cm}')\varepsilon_i(p_\text{cm}') d^{Qi}_s \nonumber\\ &\times\underset{a}{\sum }d^{Qi}_a\left| 4\pi\underset{l}{\sum}(2l+1) T^{a,l}_{Q,i}(E_\text{cm},p_\text{cm},p_\text{cm}')P_l\left(\cos \theta _\text{cm}\right)\right|^2, \label{eq_ampsq} \end{align} where $T^{a,l}_{Qi}(E_\text{cm},p_\text{cm},p_\text{cm}')$ is calculated in the CM frame in all possible two-body color channels, $a$, and partial-wave channels, $l$.
The CM energy, $E_\text{cm}$, incoming CM momentum $p_\text{cm}$, outgoing CM momentum $p_\text{cm}'$, and scattering angle $\cos \theta _\text{cm} $ are expressed as functions of $E$, $\textbf{p}$, $\textbf{q}$, $\textbf{p}'$, $\textbf{q}'$, as discussed in App.~\ref{app_cm}. The two-body color/spin degeneracy factor is denoted by $d^{Qi}_{a,s}$, and the $P_l\left(\cos \theta_\text{cm}\right)$ are Legendre polynomials. The partial-wave summation is different from that employed in Eq.~(8) of Ref.~\cite{vanHees:2007me} (and in Ref.~\cite{Riek:2010py}), in that our expression (\ref{eq_ampsq}) includes the interference effects between different partial waves and an additional factor of $\pi$. We also carry the partial-wave expansion to higher angular momenta, up to $l$=8 (compared to $l$=1 in Refs.~\cite{vanHees:2007me,Riek:2010py}), which turns out to be essential for the convergence of the high-momentum region of the transport coefficients. More explicitly, one can show that $\left|\sum_l (2l+1)c_l P_l(x)\right|^2=\sum_l(2l+1)b_lP_l(x)$, where each $b_l$ is a function of the $\{c_l\}$. The final results for the friction coefficient using, \eg, the $U$ potential turn out to be within $\sim$20\% of the results of Ref.~\cite{Riek:2010py} based on the same lQCD free-energy data. This is a consequence of benchmarking the partial-wave expansion in both versions against the full perturbative-QCD (pQCD) results. \section{Charm Quark Transport Coefficients} \label{sec_trans} In this section, we discuss the resulting charm-quark transport coefficients, focusing on the drag coefficient $A(p)$, which characterizes the thermal relaxation rate for the different input potentials. We emphasize that the ``true'' potentials $V_s$ and $V_w$ are part of a comprehensive many-body set-up which encompasses the lQCD EoS and thus fully specifies the properties of the thermal medium, \ie, the spectral functions (masses and widths) of the thermal partons that the heavy quark scatters off.
This is not the case for the previously used potential ``proxies'' $F$ and $U$, which have been applied within quasiparticle approximations for the QGP medium. Therefore, in Sec.~\ref{ssec_apquasi}, we first conduct baseline calculations for all four potentials, \{$ U $, $ F $, $ V_s $, $ V_w $\}, with thermal quasiparticle partons. In Sec.~\ref{ssec_apfull}, we employ the off-shell formalism outlined above to compute the transport coefficients for the potentials \{$ V_s $, $ V_w $\} in their accompanying bulk medium. In Sec.~\ref{ssec_pvsnp} we scrutinize various nonperturbative effects (resummed vs. Born amplitudes, Coulomb vs. full calculations with string term, and on- vs. off-shell) to exhibit their quantitative role in the HQ transport. \subsection{Drag coefficients for different color forces in quasiparticle medium} \label{ssec_apquasi} We first restrict ourselves to the quasiparticle approximation for the QGP medium, \ie, the thermal-parton spectral functions in the expressions given in Sec.~\ref{sec_offtrans} are taken to be $\delta$-functions at their quasiparticle masses. The latter are chosen to be the same for all four potentials, as shown in the left panel of Fig.~\ref{fig_vufmass}, obtained from a quasiparticle fit to the lQCD EoS using the Fock mass ansatz~\cite{Liu:2017qah} with $V_s$. The charm-quark masses, plotted in the right panel of Fig.~\ref{fig_vufmass}, are taken to be $1.264\,{\rm GeV}+\Sigma(\infty;T)/2$, where $\Sigma(\infty;T)$ denotes the infinite-distance limit of \{$U$, $F$, $V_s$, $V_w$\} as shown in Fig.~\ref{fig_pot}. Note that the light parton masses from the quasiparticle fit are different from the results extracted using the off-shell many-body calculations~\cite{Liu:2017qah}, while the charm-quark masses of \{$ V_s $, $ V_w $\} are taken from the corresponding potential.
This setup allows for an approximate ``apples-to-apples'' comparison of how the different forces (or ``effective couplings'') shown in Figs.~\ref{fig_vufforce}, \ref{fig_vasr} and \ref{fig_vasp} manifest themselves in the charm-quark transport coefficients. \begin{figure} [!thb] \centering \includegraphics[width=0.99\columnwidth]{hlmassplot.eps} \caption{Light-parton (left) and charm-quark (right) masses for $V_s$ (solid lines), $V_w$ (dashed lines), $U$ (crosses) and $F$ (dots) as used in the quasiparticle calculations leading to the results displayed in Figs.~\ref{fig_vufapborn} and \ref{fig_vufap}.} \label{fig_vufmass} \end{figure} \begin{figure} [!thb] \centering \includegraphics[width=0.99\columnwidth]{ApvufplotBorn.eps} \caption{Friction coefficients for $V_s$ (solid line), $V_w$ (dashed line), $U$ (crosses) and $F$ (dots) when using the on-shell Born diagrams in quasiparticle approximation.} \label{fig_vufapborn} \end{figure} We start with the case of using the Born approximation to calculate the friction coefficient, displayed in Fig.~\ref{fig_vufapborn} for the four potentials. The results for the WCS potential and the free energy closely agree across all temperatures and charm-quark momenta considered here. The friction coefficient is much larger for the SCS potential and the internal energy, which are also rather close to each other, except that the $U$-potential result is larger by about a factor of 2 at the lowest temperature and at high momenta at the highest temperature.
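As a cross-check of the partial-wave machinery underlying these friction coefficients, the identity $\left|\sum_l (2l+1)c_l P_l(x)\right|^2=\sum_l(2l+1)b_lP_l(x)$ quoted in Sec.~\ref{sec_offtrans} can be verified numerically. The sketch below uses toy real coefficients $c_l$ (the physical $T$-matrix amplitudes are complex, for which the same expansion applies to $|f|^2$):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Verify |sum_l (2l+1) c_l P_l(x)|^2 = sum_l (2l+1) b_l P_l(x)
rng = np.random.default_rng(0)
lmax = 8
c = rng.normal(size=lmax + 1)                # toy partial-wave amplitudes c_l

# Legendre coefficients of f(x) = sum_l (2l+1) c_l P_l(x), and of f(x)^2
f_coef = (2 * np.arange(lmax + 1) + 1) * c
sq_coef = L.legmul(f_coef, f_coef)           # f^2 in the Legendre basis

# Project f^2 back: b_l = (1/2) * int_{-1}^{1} f(x)^2 P_l(x) dx
x, w = L.leggauss(3 * lmax + 2)              # exact for this polynomial degree
f2 = L.legval(x, f_coef) ** 2
b = np.array([0.5 * np.sum(w * f2 * L.legval(x, np.eye(2 * lmax + 1)[l]))
              for l in range(2 * lmax + 1)])
recon = (2 * np.arange(2 * lmax + 1) + 1) * b

print(np.max(np.abs(recon - sq_coef)))       # close to machine precision
```

The check also illustrates that squaring a partial-wave sum truncated at $l_{\rm max}$ populates Legendre components up to $2l_{\rm max}$, which is why the expansion must be carried sufficiently high for convergence.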
To better understand what the relevant momentum exchanges for the transport coefficients are, we divide up the phase space into shells of momentum transfer, $[k,k+dk]$, where $k=|\vec p_\text{cm}- \vec p_\text{cm}'|$, and define a ``normalized'' momentum-exchange density \begin{equation} \bar{K}(k;p) dk \equiv A(p)^{-1} dA(k) \end{equation} and a corresponding cumulative density \begin{equation} \bar{A}(k;p) \equiv\int_{0}^{k}dk'\bar{K}(k';p) \end{equation} of the friction coefficient, $A(p)$, defined such that $\bar{A}(k\to\infty;p)$=1. These two quantities are plotted in Fig.~\ref{fig_ftoap} using the SCS potential, $V_s$ (still in quasiparticle and Born approximation). For low-momentum charm quarks, most of the momentum transfers at low temperatures occur in a 0.5\,GeV window around $k$=0.4\,GeV, corresponding to a relatively large force range of $\sim$1\,fm (recall the remark at the end of Sec.~\ref{sec_pot}). The peak position shifts to higher momentum transfer as temperature or HQ momentum increase, implying a transition from the long-range string force to a shorter-range Coulomb force. This is due to a harder thermal phase space and the enhanced screening of the potential as temperature increases. For the $U$ potential, the effective coupling at a momentum exchange of $\sim$0.5 GeV is about 50\% larger at the lowest temperature (recall the upper left panel in Fig.~\ref{fig_vasp}), leading to a low-momentum friction coefficient that is about twice as large in Fig.~\ref{fig_vufapborn}. A similar analysis applies to the other potentials.
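The construction of $\bar K(k;p)$ and $\bar A(k;p)$ is straightforward to sketch. The differential weight below is a placeholder shape only, chosen to mimic a low-$T$, low-$p$ distribution peaked at intermediate momentum transfer; it is not a computed $dA/dk$:

```python
import numpy as np

# Placeholder differential weight dA/dk (assumed toy shape, not a computed one)
k = np.linspace(0.0, 3.0, 3001)                    # momentum transfer in GeV
dA_dk = k**2 * np.exp(-((k - 0.2) / 0.35) ** 2)    # illustrative shape

dk = k[1] - k[0]
Kbar = dA_dk / (dA_dk.sum() * dk)    # normalized density, int Kbar dk = 1
Abar = np.cumsum(Kbar) * dk          # cumulative density, -> 1 at large k

print(k[np.argmax(Kbar)])            # peak of the momentum-transfer density
print(Abar[-1])                      # -> 1 by construction
```

Reading off the peak position and the cumulative curve in this way is exactly how the transition from string- to Coulomb-dominated momentum transfers is diagnosed in Fig.~\ref{fig_ftoap}.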
\begin{figure}[!t] \includegraphics[width=0.99\columnwidth]{ftoap.eps} \caption{Differential CM momentum-transfer ``probability'' distribution, $\bar{K}(k;p)$ (upper panel), for the friction coefficient from the SCS potential in Born approximation, and its cumulative (lower panel) for charm quarks at zero momentum ($p$=0, solid lines) and $p$=10\,GeV (dashed lines) for different temperatures (identified by the color code).} \label{fig_ftoap} \end{figure} In the next step, we compare the friction coefficients from the resummed $T$-matrix interactions in Fig.~\ref{fig_vufap}, still using a quasiparticle QGP medium. At low temperature and low momentum, the drag coefficients for $U$ and $V_s$ are {\em reduced} by factors of 2 and 1.5, respectively, compared to the Born calculation. This is mainly because the resummation converts the strongly attractive Born term into subthreshold resonance states whose interaction strength is not accessible in 2$\to$2 on-shell scattering, while only a repulsive tail of the $T$-matrix remains in the on-shell phase space. However, for a less attractive potential that does not generate a strong bound state, as is the case for $F$ and $V_w$, the resummation generally enhances the Born result. On the other hand, at high momentum and high temperature, a closer agreement between the Born approximation and the $T$-matrix results is found. \begin{figure} [!t] \centering \includegraphics[width=0.99\columnwidth]{Apvufplot.eps} \caption{Quasiparticle friction coefficients for $V_s$ (solid line), $V_w$ (dashed line), $U$ (crosses) and $F$ (dots).} \label{fig_vufap} \end{figure} \subsection{Transport coefficients with off-shell effects} \label{ssec_apfull} In the previous section we saw how, in a strongly coupled medium, the formation of bound states can lead to a marked {\em decrease} in the interaction strength when employing the quasiparticle approximation in two-body scattering.
This should be considered an artifact of an incompatible approximation. In the presence of a large interaction strength, the single-particle spectral functions are expected to become broad and/or develop collective modes below their nominal ``quasiparticle'' masses. In either case, phase space opens up below the quasiparticle two-body threshold and allows for subthreshold resonance scattering. We now compute the charm-quark transport coefficients deploying the off-shell formalism described in Sec.~\ref{sec_offtrans} to incorporate the quantum effects associated with subthreshold many-body interactions. We focus on the results for the SCS and WCS, as their selfconsistent solutions constructed in Ref.~\cite{Liu:2017qah} specify the spectral functions of the thermal partons, while this information is not available for $U$ or $F$. \begin{figure*} [!htb] \centering \includegraphics[width=2\columnwidth]{Applot.eps} \caption{Charm-quark friction coefficients, $A(p)$, for the full off-shell calculations (left) in the SCS (solid lines) and WCS (dashed lines), and comparing the full off-shell case for the SCS (solid lines) with one using the on-shell approximation for both thermal partons and the outgoing charm quark (dashed lines; middle panel) or for the outgoing charm quark only (dashed lines in right panel).} \label{fig_Ap} \end{figure*} \begin{figure} [htb] \centering \includegraphics[width=0.8\columnwidth]{gammaplot.eps} \includegraphics[width=0.8\columnwidth]{dsplot.eps} \vspace{-0.15cm} \vspace{-0.15cm} \caption{Temperature dependence of the zero-momentum relaxation rate, $\gamma$ (upper), and the spatial diffusion coefficient, $D_s=T/(\gamma M_c)$ (lower, in units of the thermal wavelength, \ie, as $D_s(2\pi T)$). The pQCD result uses $\alpha_s$=0.4 and is upscaled by a factor of 5.} \label{fig_gammads} \end{figure} The pertinent charm-quark friction coefficients are compiled in Fig.~\ref{fig_Ap}.
The full results displayed in the left panel show that for small momenta and small temperatures the relaxation rate is about four times larger for the SCS than for the WCS, while with increasing momentum and temperature they approach each other. The key reason for the large enhancement at low momentum and temperature is the remnant of the long-range confining force, as discussed in the context of Figs.~\ref{fig_pot}, \ref{fig_vufforce}, \ref{fig_vasr} and \ref{fig_vasp}. At higher temperatures, the confining potential is largely screened, and the larger thermal parton momenta probe the force at shorter distances. Since the short-range Coulomb force is quite similar for the WCS and SCS, the difference between $A_s(p)$ and $A_w(p)$ is reduced (in the fits of Ref.~\cite{Liu:2017qah} the screening of the Coulomb interaction is slightly weaker in the WCS than in the SCS, causing $A_w(p)$ to exceed $A_s(p)$ at high momenta and at the highest temperature, where the confining interaction has nearly vanished). The off-shell effects in the SCS scenario are illustrated in the middle and right panels of Fig.~\ref{fig_Ap}, where we have switched them off for either both thermal partons and the outgoing charm quark (middle panel) or only for the outgoing charm quark (right panel). In the former case, we have re-adjusted (\ie, decreased) the thermal parton masses to ensure compatibility with the lQCD EoS. We find that the quantum effects almost double the transport coefficients in the small-momentum and low-temperature region: the broadening of the thermal spectral functions allows one to probe off-shell energies in the $T$-matrix where scattering through a (broad) bound state becomes possible. This confirms, in a more rigorous treatment, the original findings of Refs.~\cite{vanHees:2004gq,vanHees:2007me}, where near-threshold resonances were put forward to solve the heavy-flavor puzzle in Au-Au collisions at RHIC~\cite{Adare:2006nq}.
A more moderate but still significant effect arises from the non-trivial spectral function of the outgoing charm quark. Switching back to a $\delta$-function reduces the low-momentum low-temperature relaxation rate by almost 20\%, cf.~the right panel of Fig.~\ref{fig_Ap}. Once the resonance states are close to threshold (or have melted), so that the on-shell treatment can already access the main scattering strength, the off-shell treatment does not provide a significant enhancement. For the WCS, the results from the full off-shell case generally agree well with the results from the quasiparticle case (not shown), since the widths of the spectral functions are small. At high momentum, the HQ drag coefficients are dominated by the Coulomb term, augmented by relativistic (magnetic) corrections (Breit enhancement), while the scalar vertex assumed for the string interaction suppresses its high-momentum contribution. Therefore, the off-shell case approaches the quasiparticle case: the spectral functions become more quasiparticle-like, and the typical CM energy in the $T$-matrix becomes larger so that even off-shell effects do not significantly probe the subthreshold resonances anymore. In Fig.~\ref{fig_gammads} we summarize the temperature dependence of the zero-momentum relaxation rate, $\gamma=A(p\rightarrow 0)$, and the dimensionless spatial diffusion coefficient, $D_s(2\pi T)=2\pi T^2/(M_c \gamma)$, for the WCS and SCS. As a reference, we also show a perturbative QCD (pQCD) Born result (using $\alpha_s$=0.4 in a quasiparticle QGP with Debye and thermal parton masses of $gT$, and a constant charm-quark mass of 1.5\,GeV), upscaled by a factor of 5 (as recently used as a benchmark scenario in Ref.~\cite{Rapp:2018qla}).
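For orientation, the conversion between the zero-momentum relaxation rate and the dimensionless diffusion coefficient, $D_s(2\pi T)=2\pi T^2/(M_c\gamma)$, is straightforward to evaluate. The sketch below uses purely illustrative input numbers (not the computed values of this work), with $\hbar c$ supplied to convert a rate in 1/fm to GeV:

```python
import math

HBARC = 0.1973  # hbar*c [GeV fm], converts a rate in 1/fm to GeV


def Ds_2piT(gamma_per_fm, M_c, T):
    """Dimensionless spatial diffusion coefficient D_s(2*pi*T) = 2*pi*T^2/(M_c*gamma).

    gamma_per_fm: zero-momentum relaxation rate A(p->0) in 1/fm (illustrative input)
    M_c, T:       charm-quark mass and temperature in GeV
    """
    gamma_GeV = gamma_per_fm * HBARC
    return 2.0 * math.pi * T**2 / (M_c * gamma_GeV)


# illustrative numbers: gamma = 0.1/fm, M_c = 1.8 GeV, T = 0.2 GeV
print(Ds_2piT(0.1, 1.8, 0.2))  # ~ 7, i.e., a few times the thermal wavelength
```

A stronger coupling (larger $\gamma$) directly translates into a smaller $D_s(2\pi T)$, which is the sense in which this quantity gauges the coupling strength of the medium.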
The temperature behavior of the relaxation rates and spatial diffusion coefficients for the WCS is similar to the pQCD*5 scenario, wherein $\gamma$ increases monotonically with temperature and $D_s(2\pi T)$ is essentially constant, as one would expect for a theory without additional dimensionful scales. For the SCS, on the other hand, $\gamma$ exhibits a rather flat behavior with temperature, where the increasing density of the thermal scatterers is essentially compensated by the decreasing interaction strength. Consequently, $D_s(2\pi T)$ increases with temperature by about a factor of 5 over the considered temperature range of $T$=0.2-0.4\,GeV; the extra dimensionful quantity is brought in by the nonperturbative string tension. Also note that the SCS diffusion coefficient differs from the ``bare'' pQCD interaction by a factor of almost 15 at low temperature. \subsection{Scrutinizing Nonperturbative Effects} \label{ssec_pvsnp} In the calculation of the transport coefficients, there are at least three nonperturbative components: (1)~the string interaction in the potential; (2)~the resummation of the $T$-matrix, possibly leading to resonance formation; (3)~off-shell effects from the large widths of the partons. Here, we reassess these effects relative to the full calculation of the friction coefficient within the SCS in Fig.~\ref{fig_Appert}, using the thermal parton and charm-quark masses shown in Fig.~\ref{fig_vufmass}. \begin{figure} [!hbt] \centering \includegraphics[width=0.99\columnwidth]{Apperplot.eps} \caption{Comparison of the effects of different ingredients on the HQ transport coefficients in the SCS as a function of charm-quark momentum, at 4 different temperatures.
Solid lines: full results; dashed lines: using the on-shell approximation for the thermal partons and outgoing charm quark; dash-dotted lines: on-shell results using only the Coulomb term in the potential; dotted lines: using the Born and quasiparticle approximation (including the confining potential).} \label{fig_Appert} \end{figure} When switching off the string interaction in the potential (and neglecting off-shell effects, which play a negligible role in this scenario), the pertinent $T$-matrix results for the friction coefficient (labeled ``Coulomb-only'' in Fig.~\ref{fig_Appert}) are much reduced compared to the full results at low momentum, close to a factor of 15 at low temperature, and still by a factor of $\sim$3 at $T$=0.4\,GeV. At charm-quark momenta of $p$=10\,GeV, the reduction is still significant at low $T$ (indicating a non-negligible portion of soft interactions driven by the string term), but has essentially ceased at $T$=0.4\,GeV. Therefore, perturbative (elastic) calculations of $A(p)$ that do not account for remnants of the confining term are not reliable at low temperatures, even at momenta of $p$=10\,GeV. The ``on-shell'' results with the full interaction, already shown in the previous section, fall below the full results by almost 50\% at low $T$, nearly uniformly in 3-momentum from 0 to 10\,GeV. This implies that even at $p$=10\,GeV the soft off-shell effects (making accessible the subthreshold resonances) are significant, although in practice one expects radiative processes to become dominant at these momenta. The difference between full and on-shell calculations is essentially gone at $T$=0.4\,GeV (resonance structures have ceased), again basically across the entire momentum range. Finally, the ``on-shell Born'' results are surprisingly close to the full results, within a few tens of percent.
This is, however, a highly deceptive result: if we include the second Born term in the $T$-matrix, the friction coefficient is up to 5 times larger at low momentum and low temperature, signaling an uncontrolled convergence of the perturbative series at low momentum, very similar to the findings in Ref.~\cite{CaronHuot:2007gq}. This is another reminder that a proper resummation in the nonperturbative region is mandatory. Figure~\ref{fig_Appert} furthermore shows that the ``on-shell Born'' and ``on-shell'' curves approach each other at high momentum. Still, the results at second Born order at high momentum and low temperature double the first-order result, \ie, the convergence of the perturbative series is still not good (due to the presence of the string term). This situation improves at higher temperature: at $T$=0.4\,GeV, the second Born contribution is larger than the first Born contribution by only a factor of 1.8 (1.6) at low (high) momentum. \section{Charm-Quark Langevin Simulations in Heavy-Ion Collisions} \label{sec_urhic} In this section we implement the transport coefficients following from the selfconsistent WCS and SCS, as well as from the $U$-potential proxy with a quasiparticle QGP medium, into Langevin simulations of charm quarks in URHICs as described in Ref.~\cite{He:2011qa}. As our current calculations are limited to temperatures $T$=0.194-0.4\,GeV and momenta $p$=0-10\,GeV, an extrapolation is required to cover the ranges needed in the Langevin approach to heavy-ion collisions at the LHC. Since the $p$-dependence of the quasiparticle results is similar to the full results at high momentum (as discussed in the previous section), we extrapolate $A(p)$ to higher momenta using the quasiparticle results augmented by a $p$-independent $K$ factor to smoothly connect them at $p$=10\,GeV. For the extrapolation to lower and higher temperatures, we first extrapolate $D_s(2 \pi T)$ and $m_c$ as shown in the lower two panels of Fig.~\ref{fig_extra}.
Then, we use $A(p=0;T)=T/(D_s M_c)$ and take the momentum dependence of $A(p;T)$ to be the same as that at $T$=0.194 (0.4)\,GeV for low (high) temperatures, as shown in the upper two panels of Fig.~\ref{fig_extra}. \begin{figure} [!t] \centering \includegraphics[width=1\columnwidth]{extrafigure.eps} \caption{Extrapolation results for $D_s(2\pi T)$, $M_c$, $A_s(p)$ and $A_w(p)$.} \label{fig_extra} \end{figure} The transport coefficients are utilized within the Langevin equations \begin{align} &d\textbf{x}=\frac{\textbf{p}}{\varepsilon_c(p)} dt\\ &d\textbf{p}=-\Gamma(p)\,\textbf{p}\,dt+\sqrt{2 dt D(p)}\bm{\rho} \ , \end{align} where the relaxation rate, $\Gamma(p)$, and the momentum diffusion coefficient, $D(p)$, are taken to be $\Gamma(p)=A(p)$ and $D(p)=B_0(p)=B_1(p)=T\varepsilon_c(p)\Gamma(p)$, and $\bm{\rho}$ is a random vector drawn from the Gaussian distribution $P(\bm{\rho})=(2\pi)^{-3/2}e^{-\bm{\rho}^2/2}$. Using the Langevin equations, we simulate the Brownian motion of charm quarks in a background medium provided by an ideal hydrodynamic evolution of the QGP fireball in URHICs at RHIC and the LHC. For definiteness, we choose semicentral Pb-Pb collisions at CM energy $\sqrt{s_{\rm NN}}$=5.02\,TeV, at a fixed impact parameter representing the 20-40\% centrality class. Figure~\ref{fig_c-RAA-v2} summarizes the nuclear modification factor, $R_{\rm AA}$, and elliptic flow, $v_2$, of charm quarks at the end of the QGP evolution, taken at $T_{\rm pc}$=170\,MeV, for the three potentials under investigation. The $R_{\rm AA}$ shows the standard feature of softening the initial charm-quark spectra, but only exhibits a modest sensitivity to the underlying potential. This reiterates the finding~\cite{Rapp:2008qc} that the main effects determining the charm-quark $R_{\rm AA}$ occur early in the evolution, where the difference between the potentials is small.
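Returning to the Langevin update rules quoted above, a single discretized step can be sketched as follows. This is a minimal Euler-type illustration, not the production simulation: the relaxation rate below is a hypothetical placeholder for the tabulated $A(p)$, units are schematic, and the drag term carries an explicit minus sign so that it damps the momentum:

```python
import numpy as np

rng = np.random.default_rng(0)

M_C = 1.5   # charm-quark mass (schematic, GeV)
T   = 0.25  # local medium temperature (GeV)


def Gamma(p):
    """Hypothetical stand-in for the tabulated relaxation rate A(p)."""
    return 0.2 / (1.0 + 0.05 * np.linalg.norm(p))


def langevin_step(x, p, dt):
    """One Euler step: dx = p/eps dt,  dp = -Gamma p dt + sqrt(2 D dt) rho."""
    eps = np.sqrt(M_C**2 + p @ p)      # relativistic on-shell energy
    x_new = x + p / eps * dt
    D = T * eps * Gamma(p)             # Einstein-type relation D = T*eps*Gamma
    rho = rng.standard_normal(3)       # Gaussian random vector, unit variance
    p_new = p - Gamma(p) * p * dt + np.sqrt(2.0 * D * dt) * rho
    return x_new, p_new
```

Iterating such steps through a hydrodynamic background (with $T$ and the flow field read off the local fluid cell) yields the charm-quark phase-space distribution at the end of the QGP evolution.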
This is quite different for the elliptic flow~\cite{Rapp:2008qc}, which requires several fm/$c$ to build up in the expanding fireball. At that point the difference between the underlying potential scenarios becomes maximal, and, consequently, the low-$p_t$ elliptic flow of charm quarks provides a direct gauge of the coupling strength in the later stages of the QGP evolution. More quantitatively, the largest $v_2$ is generated within the SCS, reaching nearly 10\%, more than a factor of 3 larger than in the WCS. It also exceeds the maximum value attained with the $U$-potential proxy by about 20\%, indicating that the low-$p_t$ elliptic flow of charm quarks is rather sensitive to the long-distance behavior of the in-medium potential, and thus an excellent measure of the spatial diffusion coefficient. Note that a charm-quark momentum of $p_t$=2\,GeV corresponds to a velocity of about 0.74$c$, not much larger than the (surface) flow velocities reached in the fireball expansion at the end of the QGP phase. At higher $p_t$, above $\sim$4\,GeV, the intermediate-distance strength is largest in the $U$-potential and leads to significantly larger $v_2$ values than obtained for $V_s$ and $V_w$. \begin{figure} [!tb] \includegraphics[width=0.99\columnwidth]{cRA.eps} \includegraphics[width=0.99\columnwidth]{cv2.eps} \caption{The $R_{\rm AA}$ (upper panel) and $v_2$ (lower panel) of charm quarks calculated from the $T$-matrix interactions with 3 different potentials using relativistic Langevin simulations in a hydrodynamic fireball evolution for semicentral Pb-Pb collisions at the LHC.} \label{fig_c-RAA-v2} \end{figure} To make contact with experiment, we proceed to calculate $D$-meson observables.
As the fireball medium approaches the pseudo-critical temperature, charm quarks are hadronized into $D$ mesons through either recombination with surrounding light quarks from the hydrodynamic medium (predominantly at low $p_t$)~\cite{Ravagli:2007xx} or independent fragmentation (we also account for a $\sim$20\% ($p_t$-dependent) reduction in the $D$-meson yields due to shadowing and ``chemistry effects'', where charm quarks hadronize into other hadrons like $D_s$ and $\Lambda_c$ at a higher fraction than in proton-proton collisions). We finally carry out the $D$-meson diffusion in the hadronic phase. The resulting $D$-meson $R_{\rm AA}$ and $v_2$ are shown in Fig.~\ref{fig_D-RAA-v2}. Recombination effectively acts as another interaction between charm quarks and the medium, driving the $D$-meson spectra closer to equilibrium~\cite{He:2011qa}. This produces a characteristic flow ``bump'' in the $R_{\rm AA}$ at a $p_T$ reflecting the velocity of low-momentum $D$-mesons embedded in the flowing hydrodynamic medium. At high $p_T$, fragmentation takes over, and the $D$-meson $R_{\rm AA}$ tends toward that of the charm quark (modulo further suppression due to $D$-meson interactions in the hadronic phase). Other than the flow bump, the qualitative features of the charm-quark spectra relating to the different potentials are preserved at the $D$-meson level. However, the discrimination power is reduced, especially for the maximum value of the $v_2$, which is now quite similar for the SCS and the $U$-potential, while the $v_2$ of the WCS is only a factor of 2 below the former two. This is because recombination plus hadronic diffusion together add a roughly equal amount of $v_2$ in the 3 potential scenarios when going from charm-quark to $D$-meson spectra. To some extent this is an artifact of applying the same coalescence model to all three scenarios.
In reality, the coalescence probability should be smaller in the WCS than in the SCS, since the $D$-meson resonance strength, which is the microscopic mediator of the recombination process, is weaker in the WCS than in the SCS and should thus lead to a smaller increment in $v_2$ in the former compared to the latter. While the resonance recombination model~\cite{Ravagli:2007xx} (as employed here) in principle encodes this mechanism, its implementation in the current calculation does not account for this difference. These considerations reiterate the importance of a recombination model that is consistent with the microscopic interactions driving the diffusion process in the vicinity of $T_{\rm pc}$. \begin{figure} [!tb] \centering \includegraphics[width=0.99\columnwidth]{dRA.eps} \includegraphics[width=0.99\columnwidth]{dv2.eps} \caption{Comparison of the calculated $D$-meson $R_{\rm AA}$ (upper panel) and $v_2$ (lower panel) obtained from applying a recombination-fragmentation model to hadronize the charm-quark spectra plotted in Fig.~\ref{fig_c-RAA-v2}.} \label{fig_D-RAA-v2} \end{figure} Recalling the experimental results~\cite{ALICE:2012ab,Abelev:2013lca,Sirunyan:2017xss}, which report maximal $v_2$ values of $D$-mesons in 30-50\% central collisions of ca.~17$\pm$2\%, the SCS scenario is not far below, but the WCS and also the free-energy potential (not shown here) are strongly disfavored, as their interaction strength is too small. At higher momenta, $p_T\gtsim5$\,GeV, both WCS and SCS produce too little suppression and too little $v_2$ (even the $U$ potential did not supply enough suppression in comparison to central Pb-Pb data). This is, however, expected, since radiative processes have not been systematically included yet (some are encoded through the in-medium selfenergies of the heavy and light quarks in the $T$-matrices), see, \eg, Ref.~\cite{Rapp:2018qla} for a recent discussion and references.
Such processes may also help to reduce the milder discrepancies at lower $p_T$. It remains to be seen whether contributions beyond the potential approximation might help to generate additional interaction strength. From a more practical perspective, fluctuating initial conditions in the hydrodynamic evolution are a conceivable source of an enhancement in the $v_2$ over the results from smooth initial conditions~\cite{Nahrgang:2014vza,Noronha-Hostler:2016eow,Prado:2016szr,Cao:2017umt} as employed in the ideal hydrodynamic evolution used here. \section{Conclusions and Perspectives} \label{sec_concl} In an attempt to establish connections between heavy-flavor phenomenology in heavy-ion collisions and the microscopic interactions driving the diffusion of heavy quarks through the QGP formed in these reactions, we have employed a range of underlying two-body interaction potentials to compute the heavy-light $T$-matrices and pertinent HQ transport coefficients. Specifically, we have investigated two potentials recently constructed to satisfy constraints from lQCD on HQ free and internal energies, quarkonium correlators and the QGP EoS, as well as the free and internal energies themselves, which have been used previously as potential proxies. We have first analyzed the corresponding forces, in particular their typical ranges in both coordinate and momentum space. As expected, the $U$-potential yields the largest force strength, realized at intermediate distances, while the strongly coupled $T$-matrix solution develops a smaller force but of longer range; in both cases the remnants of the confining force in the QGP play a key role in generating nonperturbative interaction strength, operative for temperatures of up to about 2.5\,$T_{\rm pc}$. The weakly coupled solution and the free energy have very similar forces, which are further reduced in strength and of much shorter range than those of the internal energy and the strongly coupled solution.
We then derived a transport equation including quantum many-body (off-shell) effects, to account for the broad spectral functions of the thermal medium partons characterizing, in particular, the SCS of the $T$-matrix solution. These off-shell effects are instrumental in enabling the diffusing heavy quarks to probe the interaction strength of the broad subthreshold two-body resonances in the heavy-light scattering amplitudes. As a somewhat surprising result, the SCS potential develops the largest thermal relaxation rate for low-momentum charm quarks among all four potentials, while the $U$-potential's rate is strongest at intermediate and large momenta. Implementing these potentials into relativistic Langevin simulations revealed the SCS potential to develop the largest peak value of the charm-quark $v_2$, about 20\% above the $U$-potential and a factor of 3 larger than the WCS potential (or free energy). Computing pertinent $D$-meson observables and benchmarking them against experimental data at the LHC rules out the WCS and free energy as viable potentials for HQ interactions in the QGP. Even the SCS potential falls slightly short of accounting for the existing low-momentum $v_2$ data at the LHC. These findings imply that charm quarks acquire collisional widths of 0.5-1\,GeV in the QGP near $T_{\rm pc}$, and consequently low-momentum light partons are likely dissolved in this regime, \ie, soft excitations in the QGP near $T_{\rm pc}$ do not support parton quasiparticles; at the same time, broad hadronic resonances emerge and act as mediators of the nonperturbative interaction strength. Among the challenges that remain in the HQ sector, from a microscopic point of view, are to account for the missing 20\% in the elliptic flow of $D$-mesons as observed at the LHC, and to incorporate gluon radiation in a strongly-coupled framework. 
The latter will be essential to increase the high-$p_T$ suppression and $v_2$, whereas genuine 3-body scattering, retardation effects, improvements in the coalescence mechanism and/or the hadronic diffusion, as well as features of the bulk evolution not captured by the ideal-hydro model employed here, could augment the $v_2$ at low $p_T$. Work on several aspects of the above has already been done by various groups and/or is in progress, and efforts to combine them are ongoing~\cite{Rapp:2018qla} and expected to reveal further insights in due course. \acknowledgments This work was supported by the U.S.~National Science Foundation (NSF) through grant PHY-1614484, by the A.-v.-Humboldt Foundation, and by the NSFC grant 11675079.
\section{Introduction} Processes in low-energy QCD that involve an odd number of (pseudo-)Goldstone bosons (and possibly photons), which are, therefore, of odd intrinsic parity, are thought to be governed by the Wess--Zumino--Witten (WZW) term~\cite{WZW} via chiral anomalies. While the so-called triangle anomaly is well tested in processes such as $\pi^0,\,\eta\to\gamma\gamma$, and the box anomaly contributes e.g.\ to $\gamma\pi\to\pi\pi$ and $\eta\to\pi\pi\gamma$, the pentagon anomaly remains more elusive; the simplest possible process that is usually cited is $K^+K^-\to\pi^+\pi^-\pi^0$, which however has not been experimentally tested yet, and is likely to be subject to large corrections to the chiral-limit amplitude that is dictated by the WZW term. A different set of processes involving five light pseudoscalars is the four-pion decays of $\eta$ and $\eta'$. Experimental information about these is scarce: only upper limits on branching ratios exist~\cite{PDG2010}; however, this may change in the near future for at least some of the possible final states with the advent of high-statistics $\eta'$ experiments such as BES-III, WASA-at-COSY, ELSA, CB-at-MAMI-C, CLAS at Jefferson Lab, etc. We are only aware of one previous theoretical calculation of these decays, performed in the framework of a quark model~\cite{Parashar}, whose partial width predictions, however, have in the meantime been ruled out by the experimental upper limits, at least for the channel $\eta'\to 2(\pi^+\pi^-)$. In principle, the decays $\eta' \to 4\pi$, in contradistinction to many other $\eta'$ decay channels, seem not terribly forbidden by approximate symmetries: they are neither isospin forbidden, nor required to proceed via electromagnetic interactions. The reaction $\eta\to 4\pi$, in contrast, is essentially suppressed by tiny phase space: only the decay into $4\pi^0$ is kinematically allowed ($M_\eta-4M_{\pi^0} = 7.9$\,MeV, $M_\eta-2(M_{\pi^\pm}+M_{\pi^0}) = -1.2$\,MeV). 
Furthermore, the fact that anomalous amplitudes always involve the totally antisymmetric tensor $\epsilon_{\mu\nu\alpha\beta}$ can be used to show that no two pseudoscalars are allowed to be in a relative S~wave: assuming they were, this would reduce the five-point function $PPPPP$ effectively to a four-point function $SPPP$ (where $S$ stands for a scalar and $P$ for a pseudoscalar), in which there are no four independent vectors left to contract the $\epsilon$ tensor with. The decays $\eta'\to 2(\pi^+\pi^-)$ and $\eta'\to \pi^+\pi^-2\pi^0$ can therefore be expected to be P-wave dominated. As furthermore Bose symmetry forbids two neutral pions to be in an odd partial wave, $\eta'\to 4\pi^0$ and $\eta\to 4\pi^0$ even require all $\pi^0$ to be at least in relative D~waves~\cite{KupscWirzba}. This, combined with the tiny phase space available, leads to the notion of $\eta\to 4\pi^0$ being $CP$ forbidden~\cite{PDG2010,Prakhov,Nefkens}, although strictly speaking it is only S-wave $CP$ forbidden. The outline of the article is as follows. We begin by discussing the two decay channels with charged pions in the final state, $\eta'\to 2(\pi^+\pi^-)$ and $\eta'\to \pi^+\pi^-2\pi^0$, in Sec.~\ref{sec:Charged}. There, we calculate the corresponding decay amplitudes at leading nonvanishing order in the chiral expansion, saturate the appearing low-energy constants by vector-meson contributions, and calculate the corresponding branching ratios. In Sec.~\ref{sec:Neutral}, we then construct a $CP$-conserving (D-wave) decay mechanism for $\eta,\,\eta'\to 4\pi^0$ and determine the resulting branching fractions, before discussing the $CP$-violating (S-wave) $\eta,\,\eta' \to 4\pi^0$ decay as induced by the QCD $\theta$-term in Sec.~\ref{sec:CP}. Finally, we summarize and conclude. The Appendices contain technical details on four-particle phase space integration as well as on a (suppressed) tensor-meson mechanism for $\eta,\,\eta' \to 4\pi^0$. 
\section{\boldmath{$\eta'\to 2(\pi^+\pi^-)$ and $\eta'\to \pi^+\pi^-2\pi^0$}} \label{sec:Charged} \subsection{Chiral perturbation theory} We wish to calculate the leading (nontrivial) chiral contribution to the anomalous decays \begin{align} \eta' &\to \pi^+(p_1)\pi^-(p_2)\pi^+(p_3)\pi^-(p_4) ~, \nonumber\\ \eta' &\to \pi^+(p_1)\pi^0(p_2)\,\pi^-(p_3)\pi^0(p_4) ~. \label{eq:charged} \end{align} The amplitudes can be written in terms of the invariant variables $s_{ij} = (p_i+p_j)^2$, $i,\,j = 1,\ldots,4$, which are subject to the constraint \begin{equation} s_{12}+s_{13} +s_{14}+s_{23}+s_{24}+s_{34} = M_{\eta'}^2 + 8M_\pi^2 \end{equation} (in the isospin limit of equal pion masses). The five-meson vertices of the WZW term can be deduced from the Lagrangian \begin{equation} \mathcal{L}_{P^5}^{\rm WZW} = \frac{N_c\epsilon_{\mu\nu\alpha\beta}}{240\pi^2F_\pi^5} \left\langle \varphi \partial^\mu\varphi \partial^\nu\varphi \partial^\alpha\varphi \partial^\beta\varphi \right\rangle + \ldots, \label{eq:WZWpentagon} \end{equation} where $N_c$ is the number of colors and will be taken to be 3 in this paper, $F_\pi=92.2\,{\rm MeV}$ is the pion decay constant, and $\langle \ldots \rangle$ denotes the trace in flavor space. For simplicity, we refrain from spelling out the WZW term in its full, chirally invariant form. Furthermore, \begin{equation} \frac{\varphi}{\sqrt{2}} = \left( \begin{array}{ccc} \frac{\eta_0}{\sqrt{3}} + \frac{\eta_8}{\sqrt{6}} + \frac{\pi^0}{\sqrt{2}} & \pi^+ & K^+\\[2mm] \pi^- & \frac{\eta_0}{\sqrt{3}} + \frac{\eta_8}{\sqrt{6}} -\frac{\pi^0}{\sqrt{2}} & K^0\\[2mm] K^- & \bar{K}^0 & \frac{\eta_0}{\sqrt{3}} -\frac{2\eta_8}{\sqrt{6}} \end{array} \right). 
\end{equation} We assume a simple, one-angle $\eta\eta'$ mixing scheme, \begin{align} |\eta\rangle &= \cos\theta_P|\eta_8\rangle -\sin\theta_P |\eta_0\rangle ~, \nonumber\\ |\eta'\rangle &= \sin\theta_P|\eta_8\rangle +\cos\theta_P |\eta_0\rangle ~, \label{eq:mixing} \end{align} and use the standard mixing angle $\theta_P = \arcsin({-1}/{3})\approx -19.5^\circ$. As we are going to present what in some sense corresponds to a leading-order calculation of the decay amplitudes, we regard the more elaborate two-angle mixing schemes~\cite{LeutwylerKaiser} as beyond the scope of this study; we expect the error made thereby to be covered by our generous final uncertainty estimate. The flavor structure of Eq.~\eqref{eq:WZWpentagon} is such that there are no direct contributions to $\eta,\,\eta'\to 4\pi$, and the decay amplitudes vanish at leading order (in the anomalous sector) $\mathcal{O}(p^4)$. Nonvanishing contributions only occur at $\mathcal{O}(p^6)$, where the amplitudes are given by sums of (kaon) loops and counterterm contributions from the $\mathcal{O}(p^6)$ Lagrangian of odd intrinsic parity~\cite{BGT_Op6}, see Fig.~\ref{fig:KKloop}. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{KKloop_bw.eps} \caption{Feynman diagrams contributing to $\eta'\to 2(\pi^+\pi^-)$ (and similarly to $\eta'\to \pi^+\pi^-2\pi^0$) at $\mathcal{O}(p^6)$. The thick dot in the right diagram denotes a vertex from $\mathcal{L}_{\rm odd}^{(6)}$.} \label{fig:KKloop} \end{figure} Only two different structures ($\propto C_1^W, \, C_{12}^W$) remain when external currents are switched off. 
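The kinematic constraint among the invariants quoted above, $\sum_{i<j}s_{ij}=M_{\eta'}^2+8M_\pi^2$, is a pure consequence of four-momentum conservation ($P^2=M_{\eta'}^2$ in the decay) and the pions being on shell; a quick numerical sketch with arbitrary on-shell pion momenta verifies the identity $\sum_{i<j}s_{ij}=P^2+8M_\pi^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
M_PI = 0.1396  # charged-pion mass in GeV (isospin limit)

g = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric


def mdot(a, b):
    """Minkowski scalar product a.b."""
    return a @ g @ b


# four arbitrary on-shell pion four-momenta p_i = (E_i, vec p_i)
vecs = rng.uniform(-0.3, 0.3, size=(4, 3))
p = np.array([[np.sqrt(M_PI**2 + v @ v), *v] for v in vecs])

P = p.sum(axis=0)  # total four-momentum (the eta' momentum in the decay)
s_sum = sum(mdot(p[i] + p[j], p[i] + p[j])
            for i in range(4) for j in range(i + 1, 4))

# sum_{i<j} s_ij = P^2 + 8 M_pi^2, i.e., M_eta'^2 + 8 M_pi^2 for the physical decay
assert abs(s_sum - (mdot(P, P) + 8 * M_PI**2)) < 1e-10
```

The identity follows from expanding each $s_{ij}=2M_\pi^2+2\,p_i\!\cdot\!p_j$ and comparing with $P^2=4M_\pi^2+2\sum_{i<j}p_i\!\cdot\!p_j$, so only five of the six $s_{ij}$ are independent.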
Ref.~\cite{BGT_Op6} only considers the Goldstone boson octet; we add terms $\propto \tilde C_1^W, \, \tilde C_{12}^W$ that only contribute for the singlet field $\eta_0$: \begin{align} \mathcal{L}^{(6)}_{\rm odd} &= i C_1^W \epsilon_{\mu\nu\alpha\beta} \langle \chi_- u^\mu u^\nu u^\alpha u^\beta \rangle \nonumber\\ & - \frac{i \tilde C_1^W}{3} \epsilon_{\mu\nu\alpha\beta} \langle \chi_-\rangle\langle u^\mu u^\nu u^\alpha u^\beta \rangle \nonumber\\ &+ C_{12}^W \epsilon_{\mu\nu\alpha\beta} \langle h^{\gamma\mu} [u_\gamma, u^\nu u^\alpha u^\beta]\rangle \nonumber\\ & - \frac{\tilde C_{12}^W}{3} \epsilon_{\mu\nu\alpha\beta} \langle h^{\gamma\mu} [u_\gamma, u^\nu u^\alpha] \rangle \langle u^\beta \rangle + \ldots ~, \label{eq:LagrC12} \end{align} with the usual chiral vielbein $u_\mu = i(u^\dagger \partial_\mu u - u \partial_\mu u^\dagger)$ (neglecting external currents), $u = \exp (i \varphi/2F_\pi)$, $h_{\mu\nu} = \nabla_\mu u_\nu + \nabla_\nu u_\mu$ with $\nabla_\mu X = \partial_\mu X + [\Gamma_\mu,X]$ and $\Gamma_\mu= \textstyle\frac{1}{2} ( u^\dagger \partial_\mu u+ u \partial_\mu u^\dagger)$ (neglecting again external currents). Furthermore, we use $\chi_- = u^\dagger\chi u^\dagger - u \chi^\dagger u$, where $\chi = 2 B \,{\rm diag}(m_u,m_d,m_s) + \ldots$ contains the quark mass matrix and $B$ is related to the quark condensate according to $B=-\langle \bar q q\rangle/F_\pi^2$. 
The decay amplitudes at $\mathcal{O}(p^6)$ take the compact forms \begin{align} \mathcal{A}(\eta_{0/8}&\to\pi^+\pi^-\pi^+\pi^-) = - \mathcal{A}(\eta_{0/8}\to\pi^+\pi^0\pi^-\pi^0) \nonumber\\ &= \frac{N_c\epsilon_{\mu\nu\alpha\beta}}{3\sqrt{3}F_\pi^5} p_1^\mu p_2^\nu p_3^\alpha p_4^\beta \big[ \mathcal{F}_{0/8}(s_{12})+\mathcal{F}_{0/8}(s_{34}) \nonumber\\ &\quad -\mathcal{F}_{0/8}(s_{14})-\mathcal{F}_{0/8}(s_{23}) \big] ~, \nonumber\\ \mathcal{F}_0(s) &= - 16 \sqrt{2}\left(C_{12}^{Wr}(\mu)-\tilde C_{12}^{Wr}(\mu)\right)s ~, \nonumber\\ \mathcal{F}_8(s) &= \frac{1}{8\pi^2F_\pi^2}\bigg\{ \big(s-4M_K^2\big) \bar J_{KK}(s) \nonumber\\ & \quad - \frac{s}{16\pi^2} \Big( 2\log\frac{M_K}{\mu} + \frac{1}{3} \Big) \bigg\} - 16 C_{12}^{Wr}(\mu)s ~, \nonumber\\ \bar{J}_{KK}(s) &=\frac{1}{8\pi^2}(1-\sigma_K\operatorname{arccot}\sigma_K)\,, ~ \sigma_K=\sqrt{\frac{4M_K^2}{s}-1} \,, \label{eq:AmpOp6} \end{align} with the scale-dependent renormalized low-energy constants $C_{12}^{Wr}(\mu)$ and $\tilde C_{12}^{Wr}(\mu)$. There are no loop contributions to the $\eta_0$ amplitudes at this order, since at $\mathcal{O}(p^4)$ the anomalous five-pseudoscalar term (\ref{eq:WZWpentagon}) (the left vertex of the loop diagram in Fig.~\ref{fig:KKloop}) contributes only to the octet case. Equation~\eqref{eq:AmpOp6} is scale-independent with the $\beta$ function for $C_{12}^{Wr}(\mu)$ obtained in Ref.~\cite{BGT_Op6}, if we demand $\tilde C_{12}^{W}$ to have the same infinite part and scale dependence as $C_{12}^{W}$. A numerical estimate for the finite part $C_{12}^{Wr}(M_\rho)$ will be obtained by resonance saturation through vector-meson contributions. \subsection{Resonance saturation from hidden local symmetry} Resonance saturation for the $\mathcal{O}(p^6)$ chiral Lagrangian of odd intrinsic parity has been studied in great generality recently in Ref.~\cite{Kampf}. 
Here, however, we opt for the simpler but more predictive hidden-local-symmetry scheme~\cite{FKTUY,Ulf,Bando,Harada}, which has the additional advantage of having been tested phenomenologically in great detail~\cite{Benayoun}. In the framework of hidden local symmetry (HLS), there are four additional terms involving vector-meson fields, with coefficients $c_i$ $(i=1,\ldots,4)$, in addition to the WZW action for anomalous processes~\cite{FKTUY,Bando}; as already noted in Ref.~\cite{BijnensBramonCornet}, only three independent combinations of these contribute to low-energy amplitudes at $\mathcal{O}(p^6)$. HLS amplitudes for any given anomalous process contain two kinds of contributions: contact terms and resonance-exchange terms. The contact terms have the same form as those derived from the WZW action, but with a modified coefficient (see below); the gauge-invariant construction of the HLS Lagrangian density guarantees that the additional, $c_i$-dependent contributions are canceled by vector-meson exchange in the low-energy limit. In the following, we again refrain, for simplicity, from properly defining all the HLS Lagrangian terms in their chirally invariant forms, and only quote the terms relevant for vertices of five pseudoscalars; the full Lagrangians can be retrieved e.g.\ from Refs.~\cite{Bando,Harada}. The contact terms for five-pseudoscalar vertices can be read off from the Lagrangian \begin{equation} \mathcal{L}_{P^5}^{\rm HLS} = \frac{N_c\epsilon_{\mu\nu\alpha\beta}}{240\pi^2F_\pi^5}\left[1-\frac{15}{8}(c_1\!-\!c_2)\right] \left\langle \varphi \partial^\mu\varphi \partial^\nu\varphi \partial^\alpha\varphi \partial^\beta\varphi \right\rangle .
\label{eq:HLScontact} \end{equation} The low-energy limit of the vector-meson-exchange contribution can be obtained by integrating out the heavy fields: substituting the leading-order equation of motion of the vector-meson fields \begin{equation} V_\mu = \frac1{8igF_\pi^2}\left[\partial_\mu\varphi,\varphi\right] , \label{eq:EOMvector} \end{equation} where $g$ is the universal vector-meson coupling constant, into the HLS Lagrangians~\cite{Harada,Benayoun} \begin{align} \mathcal{L}_{VVP} &= - \frac{N_c c_3 g^2}{8\pi^2F_\pi}\epsilon_{\mu\nu\alpha\beta} \left\langle \partial^\mu V^\nu \partial^\alpha V^\beta \varphi \right\rangle ,\nonumber\\ \mathcal{L}_{VPPP} &= - \frac{iN_c (c_1-c_2-c_3) g}{32\pi^2F_\pi^3}\epsilon_{\mu\nu\alpha\beta} \left\langle V^\mu \partial^\nu\varphi\partial^\alpha\varphi\partial^\beta\varphi \right\rangle , \label{eq:HLSLagr} \end{align} where the vector-meson nonet (with ideal mixing) is defined as \begin{align} V_\mu = \frac{1}{\sqrt{2}}\left( \begin{array}{ccc} \frac{\rho^0_\mu}{\sqrt{2}} + \frac{\omega_\mu}{\sqrt{2}} & \rho^+_\mu & K^{*+}_\mu\\[2mm] \rho^-_\mu & -\frac{\rho^0_\mu}{\sqrt{2}} + \frac{\omega_\mu}{\sqrt{2}} & K^{*0}_\mu\\[2mm] K^{*-}_\mu & \bar{K}^{*0}_\mu & \phi_\mu \end{array} \right) , \end{align} we find \begin{equation} \mathcal{L}_{P^5,V}^{(4)} = \frac{N_c(c_1-c_2)}{128\pi^2F_\pi^5} \epsilon_{\mu\nu\alpha\beta} \left\langle \varphi \partial^\mu\varphi \partial^\nu\varphi \partial^\alpha\varphi \partial^\beta\varphi \right\rangle, \label{eq:HLSvec} \end{equation} which exactly cancels the second term inside the square brackets in Eq.~\eqref{eq:HLScontact}. 
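The cancellation of the prefactors can be verified by elementary exact arithmetic, since $\frac{15}{8}\cdot\frac{1}{240} = \frac{1}{128}$; a trivial sketch:

```python
from fractions import Fraction

# c_i-dependent piece of the contact term, Eq. (HLScontact):
#   N_c/(240 pi^2 F_pi^5) * (15/8)(c1 - c2)
contact_ci_part = Fraction(1, 240) * Fraction(15, 8)

# coefficient of the vector-exchange low-energy limit, Eq. (HLSvec):
#   N_c (c1 - c2)/(128 pi^2 F_pi^5)
vector_exchange = Fraction(1, 128)

# exact cancellation of the c_i-dependent pieces in the low-energy limit
assert contact_ci_part == vector_exchange
```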
If we extend the equation of motion Eq.~\eqref{eq:EOMvector} to next-to-leading order in the derivative expansion, \begin{equation} V_\mu = \frac1{8igF_\pi^2}\left(1-\frac{\partial^2}{M_V^2}\right)\left[\partial_\mu\varphi,\varphi\right] ~, \label{eq:EOMvectorNLO} \end{equation} where $M_V$ is the vector-meson mass, we can derive the vector-meson contribution to the five-meson vertices at $\mathcal{O}(p^6)$. Inserting Eq.~\eqref{eq:EOMvectorNLO} into Eq.~\eqref{eq:HLSLagr}, we find \begin{align} \mathcal{L}_{P^5,V}^{(6)} &= \frac{N_c(c_1-c_2+c_3)}{128\pi^2F_\pi^5M_V^2} \epsilon_{\mu\nu\alpha\beta} \nonumber\\ & \times \big\langle \partial^\lambda\partial^\mu\varphi \left[\partial_\lambda\varphi, \partial^\nu\varphi \partial^\alpha\varphi \partial^\beta\varphi\right] \nonumber\\ & \quad -2 \partial^2\varphi \partial^\mu\varphi \partial^\nu\varphi \partial^\alpha\varphi \partial^\beta\varphi \big\rangle ~. \end{align} The first term is exactly of the form of the Lagrangian term $\propto C_{12}^W$ in Eq.~\eqref{eq:LagrC12}. For the second term, we use the equation of motion for the Goldstone bosons, which, neglecting higher orders in the fields, reads (compare e.g.\ Ref.~\cite{BGT_Op6}) \begin{equation} \partial^2 \varphi = - \tfrac{1}{2}\{\varphi,\chi\} + \tfrac{1}{3}\langle\varphi\chi\rangle ~, \end{equation} so we also identify a vector-meson contribution to $C_1^{Wr}$ and $\tilde C_1^{Wr}$. Our results read altogether \begin{align} C_1^{Wr}(M_V) &= \tilde C_1^{Wr}(M_V) = -2 C_{12}^{Wr}(M_V) \nonumber\\ & = \frac{N_c(c_1-c_2+c_3)}{128\pi^2M_V^2} ~, \label{eq:C1+12ressat} \end{align} where we have indicated the conventional assumption of the resonance-saturation hypothesis to be valid roughly at the resonance scale, $\mu=M_V$ (which in the following we will identify with the mass of the $\rho$, $M_\rho = 775.5$\,MeV). 
The numerical values of the HLS coupling constants are often taken to be given by $c_1-c_2 \approx c_3 \approx1$~\cite{FKTUY}, fairly consistent with more elaborate phenomenological fits that yield $c_1-c_2=1.21$, $c_3=0.93$~\cite{Benayoun}. In principle, this completes the task to provide the necessary input for an evaluation of the chiral representation of the decay amplitude, Eq.~\eqref{eq:AmpOp6}. We observe, however, the following. First, evaluating the slope of (the largely linear function) $\mathcal{F}_8(s)$ in Eq.~\eqref{eq:AmpOp6} with this input (using $\bar J'_{KK}(0) = 1/(96\pi^2M_K^2)$), we find \begin{align} & 8\pi^2 (4\pi F_\pi)^2 \times \mathcal{F}'_8(0) \nonumber\\ &= 3 (c_1-c_2+c_3) \frac{(4\pi F_\pi)^2}{2M_\rho^2} - \Big(1+2\log\frac{M_K}{M_\rho}\Big) ~. \label{eq:VMvsKloop} \end{align} Numerically, the first term is about $6.7\times (c_1-c_2+c_3)/2$, and the second is 0.1. Hence, at the scale $\mu=M_\rho$, the slope is entirely dominated by the vector-meson contribution, and the kaon loops are negligible. Second, the maximal value for the kinematical invariants in $\eta' \to 4\pi$ allowed by phase space is $\sqrt{s_{ij}} \leq M_{\eta'}-2M_\pi \approx 680$\,MeV, therefore replacing the $\rho$ propagator by its leading linear approximation is not phenomenologically reliable. Even deviations induced by the finite \emph{width} of the $\rho$, $\Gamma_\rho = 149.1$\,MeV, will be clearly visible. In the following, we will therefore use the full vector-meson-exchange amplitudes as derived from the HLS formalism, with the $\rho$-meson propagators including the width, which in addition is expected to be a very good estimate of the higher-order pairwise P-wave interaction of the pions in the final state (of course neglecting any crossed-channel effects). 
They are given by \begin{align} &\mathcal{A}_{V}(\eta_8\to\pi^+\pi^-\pi^+\pi^-) = \frac{1}{\sqrt{2}} \mathcal{A}_{V}(\eta_0\to\pi^+\pi^-\pi^+\pi^-) \nonumber\\ &= - \mathcal{A}_{V}(\eta_8\!\to\pi^+\pi^0\pi^-\pi^0) = - \frac{1}{\sqrt{2}} \mathcal{A}_{V}(\eta_0\to\pi^+\pi^0\pi^-\pi^0) \nonumber\\ &= \frac{N_c\epsilon_{\mu\nu\alpha\beta}}{16\sqrt{3}\pi^2F_\pi^5}p_1^\mu p_2^\nu p_3^\alpha p_4^\beta \bigg\{ (c_1-c_2-c_3)\bigg[\frac{M_\rho^2}{D_\rho(s_{12})} \nonumber\\ & \qquad + \frac{M_\rho^2}{D_\rho(s_{34})} - \frac{M_\rho^2}{D_\rho(s_{14})} - \frac{M_\rho^2}{D_\rho(s_{23})} \bigg] \nonumber\\ & + 2c_3 \bigg[ \frac{M_\rho^4}{D_\rho(s_{12})D_\rho(s_{34})} - \frac{M_\rho^4}{D_\rho(s_{14})D_\rho(s_{23})} \bigg] \bigg\} \label{eq:Arho1}\\ &\simeq \frac{N_c\epsilon_{\mu\nu\alpha\beta}}{16\sqrt{3}\pi^2F_\pi^5}p_1^\mu p_2^\nu p_3^\alpha p_4^\beta \bigg\{ (c_1-c_2)\bigg[\frac{s_{12}}{D_\rho(s_{12})} \nonumber\\ & \qquad + \frac{s_{34}}{D_\rho(s_{34})} - \frac{s_{14}}{D_\rho(s_{14})} - \frac{s_{23}}{D_\rho(s_{23})} \bigg] \nonumber\\ & + c_3 \bigg[ \frac{M_\rho^2(s_{12}+s_{34})}{D_\rho(s_{12})D_\rho(s_{34})} - \frac{M_\rho^2(s_{14}+s_{23})}{D_\rho(s_{14})D_\rho(s_{23})} \bigg] \bigg\} ~,\label{eq:Arho2} \end{align} where \begin{align} D_\rho(s) &= M_\rho^2 -s - i \,M_\rho\Gamma_\rho(s) ~, \nonumber\\ \Gamma_\rho(s) &= \frac{M_\rho}{\sqrt{s}}\bigg(\!\frac{s-4M_\pi^2}{M_\rho^2-4M_\pi^2}\!\bigg)^{3/2}\Gamma_\rho \end{align} is the inverse $\rho$ propagator, and we have neglected the width term in the transformation from Eq.~\eqref{eq:Arho1} to Eq.~\eqref{eq:Arho2} in order to demonstrate the correct chiral dimension $\mathcal{O}(p^6)$ of the vector-meson contribution explicitly. Expanding the resonance propagators in Eq.~\eqref{eq:Arho2} and comparing to Eq.~\eqref{eq:AmpOp6} easily leads back to the coupling constant estimate for $C_{12}^{Wr}$ found on the Lagrangian level in Eq.~\eqref{eq:C1+12ressat}. 
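The numerical statements above are easily reproduced; the following sketch evaluates the two terms on the right-hand side of Eq.~\eqref{eq:VMvsKloop} (for $c_1-c_2+c_3=1$) and checks the on-resonance behavior of the inverse $\rho$ propagator, with $F_\pi = 92.2$\,MeV and $M_K = 495.6$\,MeV as assumed input values:

```python
import math

# assumed numerical input (GeV)
F_PI, M_K = 0.0922, 0.4957
M_RHO, GAMMA_RHO, M_PI = 0.7755, 0.1491, 0.13957

# the two terms of the slope estimate, Eq. (VMvsKloop), for c1-c2+c3 = 1
vector_term = 3 * (4 * math.pi * F_PI) ** 2 / (2 * M_RHO ** 2)
kaon_loop_term = 1 + 2 * math.log(M_K / M_RHO)
# vector-meson exchange dominates the slope; the kaon-loop term is tiny
assert abs(vector_term - 6.7 / 2) < 0.05   # "about 6.7 x (c1-c2+c3)/2"
assert abs(kaon_loop_term - 0.1) < 0.01    # "the second is 0.1"

def gamma_rho(s):
    """Energy-dependent rho width with P-wave phase-space scaling."""
    return (M_RHO / math.sqrt(s)
            * ((s - 4 * M_PI ** 2) / (M_RHO ** 2 - 4 * M_PI ** 2)) ** 1.5
            * GAMMA_RHO)

def d_rho(s):
    """Inverse rho propagator D_rho(s) = M^2 - s - i M Gamma(s)."""
    return M_RHO ** 2 - s - 1j * M_RHO * gamma_rho(s)

# on resonance, Gamma_rho(M_rho^2) = Gamma_rho and D_rho = -i M_rho Gamma_rho
assert abs(d_rho(M_RHO ** 2) - (-1j * M_RHO * GAMMA_RHO)) < 1e-12
```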
At this point, we can try to answer the introductory question on which parts of the WZW anomaly action---triangle, box, or pentagon---the decays $\eta'\to 2(\pi^+\pi^-)$ and $\eta'\to \pi^+\pi^-2\pi^0$ yield information. As the pentagon anomaly only enters via the kaon-loop contributions, we have found above that its significance for the decays under investigation here is negligible; the vector-meson contributions are derived from the triangle and box-anomaly terms, see Eq.~\eqref{eq:HLSLagr}. As the phenomenological values of the HLS coupling constants suggest $c_1-c_2-c_3 \ll 2c_3$, the box anomaly yields the lesser part of the two, and the decays are dominated by the triangle-anomaly term. \subsection{Branching ratios} \label{sec:numCharged} We calculate the partial widths of the decays $\eta'\to \pi^+\pi^-\pi^+\pi^-$ and $\eta'\to \pi^+\pi^0\pi^-\pi^0$ using \begin{equation} \label{eq:width} \Gamma(\eta' \to 4\pi) = \frac{1}{2SM_{\eta'}} \int |\mathcal{A}(\eta'\to 4\pi)|^2 d\Phi_4 ~, \end{equation} where the evaluation of the four-particle phase space $\Phi_4$ is discussed in detail in Appendix~\ref{app:ps}. $S$ is a symmetry factor---$S=4$ for the $2(\pi^+\pi^-)$ final state, and $S=2$ for the $\pi^+\pi^-2\pi^0$ one. Note that with the relation $\mathcal{A}(\eta_0\to 4\pi) = \sqrt{2}\mathcal{A}(\eta_8\to 4\pi)$ and the standard mixing according to Eq.~\eqref{eq:mixing}, we have $\mathcal{A}(\eta'\to 4\pi) = \mathcal{A}(\eta_8\to 4\pi)$. To obtain branching ratios, we normalize the partial widths by the total width of the $\eta'$ as quoted by the particle data group, $\Gamma_{\eta'} = (0.199\pm 0.009)\,{\rm MeV}$~\cite{PDG2010}. Note that by using the most precise single measurement of this width alone, $\Gamma_{\eta'} = (0.226\pm 0.017 \pm 0.014)\,{\rm MeV}$~\cite{Czerwinski}, our predictions for the branching fractions would be reduced by more than 10\%. 
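The Bose symmetry factors $S$ entering Eq.~\eqref{eq:width} are simply products of factorials of the identical-particle multiplicities in the final state; a minimal sketch (the particle labels are illustrative only):

```python
from collections import Counter
from math import factorial, prod

def symmetry_factor(final_state):
    """S = product of n! over each species of n identical particles."""
    return prod(factorial(n) for n in Counter(final_state).values())

assert symmetry_factor(["pi+", "pi-", "pi+", "pi-"]) == 4   # 2! * 2!
assert symmetry_factor(["pi+", "pi0", "pi-", "pi0"]) == 2   # 2! for the pi0 pair
assert symmetry_factor(["pi0"] * 4) == 24                   # 4! for 4 pi0
```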
Given the observation of Eq.~\eqref{eq:VMvsKloop}, we neglect the kaon-loop contributions altogether and evaluate the matrix elements using Eq.~\eqref{eq:Arho1}. In order to account for trivial isospin-breaking effects due to phase space corrections, we calculate the branching ratio for $\eta'\to \pi^+\pi^0\pi^-\pi^0$ using an average pion mass $M_\pi= (M_{\pi^+}+M_{\pi^0})/2$, while we employ the charged pion mass for the decay into four charged pions. All results are first quoted as a function of the coupling constants $c_1-c_2$ and $c_3$, before inserting two sets of values: (i) $c_1-c_2 = c_3 =1$, and (ii) $c_1-c_2=1.21$, $c_3=0.93$~\cite{Benayoun}. We refrain from employing the errors given in the fits in Ref.~\cite{Benayoun}: the uncertainties in the HLS coupling constants are well below what we estimate to be the overall uncertainty of our prediction. The results are \begin{align} &\mathcal{B}\big(\eta' \to 2(\pi^+\pi^-)\big) \nonumber\\ &= \Big[ 0.15 \,(c_1-c_2)^2 + 0.47 \,(c_1-c_2)c_3 + 0.37 \,c_3^2 \Big]\times 10^{-4} \nonumber\\ &= \big\{ 1.0 ,\, 1.1 \big\}\times 10^{-4} ~, \\ &\mathcal{B}\big(\eta' \to \pi^+\pi^-2\pi^0\big) \nonumber\\ &= \Big[ 0.35 \,(c_1-c_2)^2 + 1.09 \,(c_1-c_2)c_3 + 0.87 \,c_3^2 \Big]\times 10^{-4} \nonumber\\ &= \big\{ 2.3 ,\, 2.5 \big\}\times 10^{-4} ~. \end{align} We therefore find that the uncertainties due to the HLS coupling constants are small. We wish to point out that although $\pi\pi$ P-wave dynamics are usually well approximated by the $\rho$ resonance, and crossed-channel effects are expected to occur rather at the 10\% level (as inferred from studies of decays such as $\omega \to 3\pi$, $\phi\to 3\pi$~\cite{Niecknig}), the present study in some sense still amounts to a leading-order calculation: SU(3)-breaking effects of the order of $F_\eta / F_\pi \approx 1.3$~\cite{GasserLeutwyler} may occur, and in the treatment of the $\eta'$ ($\eta_0$), we have implicitly evoked the $1/N_c$ expansion. 
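The quoted numbers for the two coupling-constant sets follow from evaluating the quadratic forms above; a trivial arithmetic cross-check:

```python
def bracket(coeffs, c12, c3):
    """Evaluate [A (c1-c2)^2 + B (c1-c2) c3 + C c3^2] for given couplings."""
    a, b, c = coeffs
    return a * c12 ** 2 + b * c12 * c3 + c * c3 ** 2

# coefficients (in units of 1e-4) and quoted results for the two sets
# (i) c1-c2 = c3 = 1 and (ii) c1-c2 = 1.21, c3 = 0.93
for coeffs, quoted in [((0.15, 0.47, 0.37), (1.0, 1.1)),   # 2(pi+ pi-)
                       ((0.35, 1.09, 0.87), (2.3, 2.5))]:  # pi+ pi- 2pi0
    for (c12, c3), q in zip([(1.0, 1.0), (1.21, 0.93)], quoted):
        assert round(bracket(coeffs, c12, c3), 1) == q
```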
We therefore deem a generic uncertainty of 30\% realistic, and quote our predictions accordingly as \begin{align} \mathcal{B}\big(\eta'\to 2(\pi^+\pi^-) \big) &= (1.0 \pm 0.3)\times 10^{-4} ~, \nonumber\\ \mathcal{B}\big(\eta'\to \pi^+\pi^-2\pi^0\big) &= (2.4 \pm 0.7) \times 10^{-4} ~. \end{align} These are to be compared to the current experimental upper limits~\cite{PDG2010,CLEO} \begin{align} \mathcal{B}_{\rm exp}\big(\eta'\to 2(\pi^+\pi^-) \big) &< 2.4\times 10^{-4} ~, \nonumber\\ \mathcal{B}_{\rm exp}\big(\eta'\to \pi^+\pi^-2\pi^0\big) &< 2.6\times 10^{-3} ~, \end{align} hence signals of these decays ought to be within reach of modern high-statistics experiments soon. \section{\boldmath{$\eta,\,\eta'\to 4\pi^0$}}\label{sec:Neutral} As we have mentioned in the Introduction, the P-wave mechanism described in the previous section, proceeding essentially via two $\rho$ intermediate resonances, cannot contribute to the $4\pi^0$ final states. In fact, we can show that the D-wave characteristic of $\eta,\,\eta' \to 4\pi^0$ suppresses these decays to $\mathcal{O}(p^{10})$ in chiral power counting, that is to the level of three loops in the anomalous sector. This is, in particular, due to the flavor and isospin structure of the anomaly, which does not contain five-meson vertices including $2\pi^0$ at leading order ($\mathcal{O}(p^4)$), and to the chiral structure of meson--meson scattering amplitudes, which only allows for S and P~waves at tree level ($\mathcal{O}(p^2)$). As a complete three-loop calculation would be a formidable task and is certainly beyond the scope of our exploratory study, we instead consider the decay mechanisms shown in Fig.~\ref{fig:4pi0loop}. As shown in Appendix~\ref{app:tensor}, the contribution from two $f_2$ mesons is negligible in comparison to the pion loop. We therefore focus on the pion loop as shown in the left panel of Fig.~\ref{fig:4pi0loop}. 
It represents a decay mechanism that, we believe, ought to capture at least the correct order of magnitude of the corresponding partial width. \subsection{Pion-loop contribution} \begin{figure} \centering \vskip 3mm \includegraphics[width=0.5\linewidth]{4pi0loop.eps}\hfill \includegraphics[width=0.4\linewidth]{f2.eps} \caption{Left: Pion-loop contribution to $\eta,\,\eta'\to 4\pi^0$. The black circle denotes an effective local $\eta,\,\eta'\to \pi^+\pi^-2\pi^0$ coupling at $\mathcal{O}(p^6)$, the black square an effective local D-wave $\pi\pi$ scattering vertex at $\mathcal{O}(p^4)$. Right: $\eta,\,\eta'\to 4\pi^0$ through two intermediate $f_2$ mesons.\label{fig:4pi0loop}} \end{figure} Our decay mechanism for $\eta,\,\eta' \to 4\pi^0$ is built on the observation that there is a specific diagrammatic contribution that we can easily calculate, and that, in particular, comprises the complete leading contribution to the \emph{imaginary part} of the decay amplitude. This is given by $\pi^+\pi^-$ intermediate states, and hence harks back to the results of the previous section. As argued above, it appears at chiral $\mathcal{O}(p^{10})$: $\eta_{0/8} \to \pi^+\pi^- 2\pi^0$ as calculated in Eq.~\eqref{eq:AmpOp6} to $\mathcal{O}(p^6)$, followed by rescattering $\pi^+\pi^- \to \pi^0\pi^0$, where the S~wave does not contribute, and D and higher partial waves start to appear at $\mathcal{O}(p^4)$~\cite{GL-AnnPhys}; see Fig.~\ref{fig:4pi0loop} for illustration. We calculate this first in the following approximation: given the numerical dominance of the counterterm contribution in Eq.~\eqref{eq:AmpOp6}, the amplitudes $\mathcal{F}_{0/8}(s)$ are taken to be linear, $\mathcal{F}_{0/8}(s) \approx \mathcal{F}'_{0/8}(s)s$, neglecting tiny curvature effects from the kaon loops; and we approximate $\pi\pi$ rescattering by a phenomenological D~wave, thus improving on the leading chiral representation, but neglecting G and higher partial waves. 
We find \begin{widetext} \begin{align} \mathcal{A}(\eta_8\to 4\pi^0) &= \frac{1}{\sqrt{2}} \mathcal{A}(\eta_0\to 4\pi^0) = - \frac{N_c(c_1-c_2+c_3)}{8\pi} \frac{\epsilon_{\mu\nu\alpha\beta}}{\sqrt{3}F_\pi^5}p_1^\mu p_2^\nu p_3^\alpha p_4^\beta \Big\{ \mathcal{G}(s_{12},s_{23},s_{14},s_{34};s_{13}) \nonumber\\ & + \mathcal{G}(s_{12},s_{14},s_{23},s_{34};s_{24}) - \mathcal{G}(s_{13},s_{23},s_{14},s_{24};s_{12}) - \mathcal{G}(s_{13},s_{14},s_{23},s_{24};s_{34}) \nonumber\\ & - \mathcal{G}(s_{12},s_{24},s_{13},s_{34};s_{14}) - \mathcal{G}(s_{12},s_{13},s_{24},s_{34};s_{23}) \Big\} ~, \nonumber\\ \mathcal{G}(v,w,x,y;s) &= \frac{v-w-x+y}{M_\rho^2}\,\frac{16 (t_2^0(s)-t_2^2(s))}{3(s-4M_\pi^2)^2} \bigg\{ (s-4M_\pi^2)^2 \bar J_{\pi\pi}(s) \nonumber\\ & - 2\big(s^2-10s M_\pi^2+30M_\pi^4\big) \Big(L + \frac{1}{16\pi^2}\log\frac{M_\pi}{\mu}\Big) + \frac{1}{16\pi^2} \bigg( \frac{s^2}{15}-\frac{8}{3}s M_\pi^2 + 15M_\pi^4 \bigg) \bigg\} ~, \nonumber\\ \bar J_{\pi\pi}(s) &= \frac{1}{8\pi^2}\bigg\{1 - \frac{\sigma}{2}\Big(\log\frac{1+\sigma}{1-\sigma}-i\,\pi\Big)\bigg\} ~, \quad \sigma = \sqrt{1-\frac{4M_\pi^2}{s}} ~, \quad L = \frac{ \mu^{d-4}}{16\pi^2} \bigg\{ \frac{1}{d-4}+\frac{1}{2}(\gamma_E-1-\log 4\pi)\bigg\} ~. \label{eq:A4pi0} \end{align} \end{widetext} $t_2^I(s)$ is the partial wave of angular momentum $\ell=2$ for the appropriate isospin quantum number $I$; the expression $16t_2^I(s)(s-4M_\pi^2)^{-2} = a_2^I + \mathcal{O}(s-4M_\pi^2)$, with the D-wave scattering length $a_2^I$, is therefore finite at threshold. Note furthermore that $t_2^I = \mathcal{O}(p^4)$ in chiral counting, such that the chiral order of Eq.~\eqref{eq:A4pi0} is indeed $\mathcal{O}(p^{10})$. $L$ contains the infinite part of the divergent loop diagram in the usual way, using dimensional regularization. Of course, this individual loop contribution is both divergent and scale-dependent: only the imaginary part is complete (to this order) and in that sense well-defined and finite. 
We display the full expression here as we will use the scale dependence as a rough independent consistency check below. Without the knowledge of counterterms of an order as high as $\mathcal{O}(p^{10})$, one cannot make a quantitative prediction using the loop amplitude derived above. Hence, we have to resort to a phenomenological representation. The imaginary part of Eq.~\eqref{eq:A4pi0}, which is complete at $\mathcal{O}(p^{10})$ as mentioned, is used to establish a connection to a one-$f_2$ exchange in the $s$ channel. Note that the $f_2(1270)$ exchange dominates the available $\pi\pi$ scattering phase shifts in the $I=0$, $\ell=2$ channel, see e.g.\ Ref.~\cite{Dobado:2001rv}. We will proceed to estimate the full D-wave $\pi\pi$ rescattering contribution as follows. Neglecting again any crossed-channel effects, rescattering of two pions can be summed by the Omn\`es factor, \begin{equation} \Omega_\ell^I(s) = \exp \bigg\{ \frac{s}{\pi} \int_{4M_\pi^2}^\infty \frac{\delta_\ell^I(z)dz}{z(z-s-i\epsilon)} \bigg\}~, \label{eq:defOmnes} \end{equation} where $\delta_\ell^I$ is the $\pi\pi$ scattering phase shift in the channel with isospin $I$ and angular momentum $\ell$. Near threshold, its imaginary part can be approximated as \begin{equation} \Im\Omega_\ell^I(s) \approx \delta_\ell^I(s) \big\{ 1+\mathcal{O}(\sigma^2)\big\} \approx \sigma \,t_\ell^I(s) \big\{ 1+\mathcal{O}(\sigma^2)\big\} ~ \label{eq:ImOmnes_thr} \end{equation} (neglecting the shift from unity in $\Omega(4M_\pi^2)$, which is justified in the D~wave for our intended accuracy), while in the approximation of a phase dominated by a narrow resonance of mass $M$ and width $\Gamma$, the Omn\`es factor is given by \begin{align} \Omega_\ell^I(s) &\approx \frac{M^2 \exp\left(i\delta_\ell^I(s)\right)}{\sqrt{(M^2-s)^2+M^2\Gamma^2(s)}} ~, \nonumber\\ \Gamma(s) &= \frac{M}{\sqrt{s}}\Big(\frac{s-4M_\pi^2}{M^2-4M_\pi^2}\Big)^{\ell+1/2} \Gamma ~. 
\label{eq:Omnes_Res} \end{align} Although D~wave scattering near threshold is not dominated by the $f_2(1270)$,\footnote{It is dominated by the low-energy constant $\bar\ell_2$ from the $\mathcal{O}(p^4)$ Lagrangian~\cite{GL-AnnPhys}, or by $t$-channel vector-meson exchange in the spirit of resonance saturation~\cite{Ecker:1988te}.} we still use Eq.~\eqref{eq:ImOmnes_thr} to invoke the $f_2$: at somewhat higher energies, the $I=0$ $\pi\pi$ D~wave dominates over the $I=2$ component and is well approximated by the $f_2(1270)$ resonance. One may wonder whether this approximation leads to sizeable errors, in particular for $\eta\to 4\pi^0$, which stays close to $\pi\pi$ threshold throughout the allowed phase space. We have checked that, employing the full Omn\`es function according to Eq.~\eqref{eq:defOmnes} with the phase parameterization provided in Ref.~\cite{Madrid}, the branching fraction discussed below changes by only about 10\%, well below the accuracy we can aim for here. Within the $\eta'\to 4\pi^0$ decay, on the other hand, we stay sufficiently far below the resonance energy that the phase of the D~wave can still be neglected. 
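Below the resonance, the narrow-resonance Omn\`es factor of Eq.~\eqref{eq:Omnes_Res} is essentially real and reduces to a simple pole enhancement, since the D-wave width term is strongly phase-space suppressed; an illustrative sketch (with $M_{f_2}=1275.1$\,MeV and $\Gamma_{f_2}=186.7$\,MeV as assumed input):

```python
import math

M_PI, M_F2, GAMMA_F2 = 0.13957, 1.2751, 0.1867  # GeV, assumed input

def omnes_mag(s, M=M_F2, gamma=GAMMA_F2, ell=2):
    """|Omega| in the narrow-resonance approximation, Eq. (Omnes_Res)."""
    g_s = (M / math.sqrt(s)
           * ((s - 4 * M_PI ** 2) / (M ** 2 - 4 * M_PI ** 2)) ** (ell + 0.5)
           * gamma)
    return M ** 2 / math.sqrt((M ** 2 - s) ** 2 + M ** 2 * g_s ** 2)

# well below the f2, the D-wave width term is negligible, and |Omega(s)|
# is approximated by the pole factor M^2/(M^2 - s) to better than 0.1%
s = 0.5 ** 2  # sqrt(s) = 500 MeV
pole = M_F2 ** 2 / (M_F2 ** 2 - s)
assert abs(omnes_mag(s) / pole - 1) < 1e-3
assert omnes_mag(s) > 1.0  # mild enhancement above unity
```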
With the correspondence between Eqs.~\eqref{eq:ImOmnes_thr} and~\eqref{eq:Omnes_Res}, we conclude that the $f_2(1270)$ contribution to the amplitude can be estimated as \begin{align} \mathcal{A}_{f_2}(\eta_8\to 4\pi^0) &= \frac{1}{\sqrt{2}} \mathcal{A}_{f_2}(\eta_0\to 4\pi^0) \nonumber\\ &= - \frac{N_c(c_1-c_2+c_3)}{24\pi^2} \frac{\epsilon_{\mu\nu\alpha\beta}}{\sqrt{3}F_\pi^5}p_1^\mu p_2^\nu p_3^\alpha p_4^\beta \nonumber\\ &\times \Big\{ \mathcal{G}_{f_2}(s_{12},s_{23},s_{14},s_{34};s_{13},s_{24}) \nonumber\\ &\quad - \mathcal{G}_{f_2}(s_{13},s_{23},s_{14},s_{24};s_{12},s_{34}) \nonumber\\ &\quad - \mathcal{G}_{f_2}(s_{12},s_{24},s_{13},s_{34};s_{14},s_{23}) \Big\} ~, \nonumber\\ \mathcal{G}_{f_2}(v,w,x,y;s,t) &= \frac{v-w-x+y}{M_\rho^2} \nonumber\\ &\times \bigg[ \frac{M_{f_2}^2}{M_{f_2}^2-s}+\frac{M_{f_2}^2}{M_{f_2}^2-t}\bigg] , \label{eq:Af2} \end{align} neglecting for simplicity the width of the $f_2$, which is justified in the kinematic regime accessible in $\eta' \to 4\pi^0$. Note that, due to the special symmetry of the amplitude, Eq.~\eqref{eq:Af2} can be rewritten identically by employing a ``twice-subtracted'' version of the resonance term, i.e.\ replacing $\mathcal{G}_{f_2} \to \mathcal{G}''_{f_2}$, \begin{align} \mathcal{G}''_{f_2}(v,w,x,y;s,t) &= \frac{v-w-x+y}{M_\rho^2 M_{f_2}^2} \nonumber\\ &\times \bigg[ \frac{s^2}{M_{f_2}^2-s}+\frac{t^2}{M_{f_2}^2-t}\bigg] , \label{eq:Gf2''} \end{align} which makes the correct chiral dimension of the resonance contribution manifest. As a rough final consistency check, we compare the order of magnitude of a chiral counterterm induced by the $f_2$ exchange, see Eq.~\eqref{eq:Gf2''} in the low-energy limit $s,\,t \ll M_{f_2}^2$ , with the scale running of such a counterterm as necessitated by the $\log\mu$ dependence in Eq.~\eqref{eq:A4pi0}. 
If we only retain the scattering lengths in the D-wave partial waves, the relevant part to be compared to Eq.~\eqref{eq:Gf2''} (that does not cancel in the full amplitude) is \begin{align} \mu \frac{d}{d\mu}&\big[\mathcal{G}(v,w,x,y;s)+\mathcal{G}(v,w,x,y;t)\big] \nonumber\\ &= \frac{v-w-x+y}{3M_\rho^2}\big(a_2^0-a_2^2\big) \frac{s^2 + t^2}{8\pi^2} ~. \end{align} Comparing the numerical prefactors, we find that the scale dependence is suppressed versus the estimate for the finite counterterm by \begin{equation} \frac{a_2^0-a_2^2}{16\pi} \times M_{f_2}^4 \approx 0.22 ~, \end{equation} where we have used $a_2^0 = 1.75 \times 10^{-3}M_\pi^{-4}$, $a_2^2 = 0.17 \times 10^{-3}M_\pi^{-4}$~\cite{CGL}. In other words, the scale dependence suggests the order of magnitude of our counterterm estimate using $f_2$ saturation to be reasonable. \subsection{Pion-loop contribution improved: including vector propagators} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{4pi0viarho.eps} \caption{Pion-loop contribution to $\eta,\,\eta'\to 4\pi^0$ via $\rho^\pm$ intermediate states; see the vector-meson dominated amplitude discussed in Sec.~\ref{sec:Charged}. The black square denotes an effective local D-wave $\pi\pi$ scattering vertex at $\mathcal{O}(p^4)$.\label{fig:4pi0viarho}} \end{figure} We have seen in Sec.~\ref{sec:Charged} on the P-wave dominated, (partially) charged four-pion final states that the leading approximation in an expansion of the $\rho$ meson propagators is not a sufficient description of these decays, given the available phase space in $\eta'$ decays. With the $\eta_{0/8} \to \pi^+\pi^-2\pi^0$ transitions entering the decay mechanism for $\eta_{0/8} \to 4\pi^0$ as described in the previous section, this deficit would be fully inherited in our estimate of the all-neutral final states. 
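The size of these propagator effects is easily illustrated numerically: near the upper end of the $\eta'$ phase space, $|M_\rho^2/D_\rho(s)|$ deviates from its static limit of 1 by more than a factor of 2 (a sketch using the input values quoted earlier):

```python
import math

M_RHO, GAMMA_RHO, M_PI = 0.7755, 0.1491, 0.13957  # GeV, assumed input

def d_rho(s):
    """Inverse rho propagator with energy-dependent P-wave width."""
    g = (M_RHO / math.sqrt(s)
         * ((s - 4 * M_PI ** 2) / (M_RHO ** 2 - 4 * M_PI ** 2)) ** 1.5
         * GAMMA_RHO)
    return M_RHO ** 2 - s - 1j * M_RHO * g

# the static limit M_rho -> infinity corresponds to M_rho^2/D_rho -> 1;
# near the kinematic limit sqrt(s) ~ 680 MeV the enhancement is large
s = 0.6 ** 2  # sqrt(s) = 600 MeV
enhancement = abs(M_RHO ** 2 / d_rho(s))
assert enhancement > 2.0
```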
In fact, the imaginary part of the corresponding diagram including the full $\rho$ propagators, see Fig.~\ref{fig:4pi0viarho}, can even be calculated exactly, using Cutkosky rules; however, the resulting expressions are extremely involved and not very illuminating. It turns out, though, that the main effects of the not-so-large vector-meson mass can be approximated by the following expression for the imaginary part: \begin{widetext} \begin{align} \Im \mathcal{A}(\eta_8\to 4\pi^0) &= \frac{1}{\sqrt{2}} \Im \mathcal{A}(\eta_0\to 4\pi^0) = - \frac{N_c}{8\pi} \frac{\epsilon_{\mu\nu\alpha\beta}}{\sqrt{3}F_\pi^5}p_1^\mu p_2^\nu p_3^\alpha p_4^\beta \Big\{ (c_1-c_2-c_3) \Big[ \Im \mathcal{G}^\rho_1 (s_{12},s_{23},s_{14},s_{34};s_{13}) \nonumber\\ & \quad + \Im \mathcal{G}^\rho_1(s_{12},s_{14},s_{23},s_{34};s_{24}) - \Im \mathcal{G}^\rho_1(s_{13},s_{23},s_{14},s_{24};s_{12}) - \Im \mathcal{G}^\rho_1(s_{13},s_{14},s_{23},s_{24};s_{34}) \nonumber\\ & \quad - \Im \mathcal{G}^\rho_1(s_{12},s_{24},s_{13},s_{34};s_{14}) - \Im \mathcal{G}^\rho_1(s_{12},s_{13},s_{24},s_{34};s_{23})\Big] \nonumber\\ & + 2 c_3 \Big[ \Im \mathcal{G}^\rho_2(s_{12},s_{23},s_{14},s_{34};s_{13}) + \Im \mathcal{G}^\rho_2(s_{12},s_{14},s_{23},s_{34};s_{24}) - \Im \mathcal{G}^\rho_2(s_{13},s_{23},s_{14},s_{24};s_{12}) \nonumber\\ & \quad - \Im \mathcal{G}^\rho_2(s_{13},s_{14},s_{23},s_{24};s_{34}) - \Im \mathcal{G}^\rho_2(s_{12},s_{24},s_{13},s_{34};s_{14}) - \Im \mathcal{G}^\rho_2(s_{12},s_{13},s_{24},s_{34};s_{23}) \Big] \Big\} ~, \nonumber\\ \Im \mathcal{G}^\rho_1(v,w,x,y;s) &= \bigg[ \frac{M_\rho^2(v-w)}{\left(M_\rho^2-\frac{1}{2}(v+w)\right)^2} -\frac{M_\rho^2(x-y)}{\left(M_\rho^2-\frac{1}{2}(x+y)\right)^2}\bigg] \big(t_2^0(s)-t_2^2(s)\big) \frac{\sigma}{3\pi} + \mathcal{O}\left(\sigma^7\right)~, \nonumber\\ \Im \mathcal{G}^\rho_2(v,w,x,y;s) &= \frac{M_\rho^4\big(M_\rho^2(v-w-x+y) - vy + wx\big)} {\left(M_\rho^2-\frac{1}{2}(v+w)\right)^2\left(M_\rho^2-\frac{1}{2}(x+y)\right)^2} \, 
\big(t_2^0(s)-t_2^2(s)\big) \frac{\sigma}{3\pi} + \mathcal{O}\left(\sigma^7\right)~. \end{align} We find, furthermore, that the neglected terms indicated as $\mathcal{O}(\sigma^7)$ are also suppressed in inverse powers of $M_\rho$, starting at $\mathcal{O}(M_\rho^{-6})$ compared to the leading terms of $\mathcal{O}(M_\rho^{-2})$ in the above. Numerically, the indicated higher-order corrections in $\sigma^2$ are found to be small, less than about 10\% all over phase space. However, the corrections by the remnants of the $\rho$ propagators are large compared to the limit $M_\rho \to \infty$, given the available phase space and the high power of these propagators in the denominator. Using the same trick as in the previous section to transform the imaginary part into an estimate for the whole (resonance-dominated) partial wave via the Omn\`es function, we arrive at \begin{align} \mathcal{A}(\eta_8\to 4\pi^0) &= \frac{1}{\sqrt{2}} \mathcal{A}(\eta_0\to 4\pi^0) = - \frac{N_c}{24\pi^2} \frac{\epsilon_{\mu\nu\alpha\beta}}{\sqrt{3}F_\pi^5}p_1^\mu p_2^\nu p_3^\alpha p_4^\beta \Big\{ (c_1-c_2-c_3) \Big[ \mathcal{G}^\rho_{f_2,1} (s_{12},s_{23},s_{14},s_{34};s_{13}) \nonumber\\ & \quad + \mathcal{G}^\rho_{f_2,1}(s_{12},s_{14},s_{23},s_{34};s_{24}) - \mathcal{G}^\rho_{f_2,1}(s_{13},s_{23},s_{14},s_{24};s_{12}) - \mathcal{G}^\rho_{f_2,1}(s_{13},s_{14},s_{23},s_{24};s_{34}) \nonumber\\ & \quad - \mathcal{G}^\rho_{f_2,1}(s_{12},s_{24},s_{13},s_{34};s_{14}) - \mathcal{G}^\rho_{f_2,1}(s_{12},s_{13},s_{24},s_{34};s_{23}) \Big] \nonumber\\ &+ 2 c_3 \Big[ \mathcal{G}^\rho_{f_2,2}(s_{12},s_{23},s_{14},s_{34};s_{13}) + \mathcal{G}^\rho_{f_2,2}(s_{12},s_{14},s_{23},s_{34};s_{24}) - \mathcal{G}^\rho_{f_2,2}(s_{13},s_{23},s_{14},s_{24};s_{12}) \nonumber\\ & \quad - \mathcal{G}^\rho_{f_2,2}(s_{13},s_{14},s_{23},s_{24};s_{34}) - \mathcal{G}^\rho_{f_2,2}(s_{12},s_{24},s_{13},s_{34};s_{14}) - \mathcal{G}^\rho_{f_2,2}(s_{12},s_{13},s_{24},s_{34};s_{23}) \Big] \Big\} ~, \nonumber\\ 
\mathcal{G}^\rho_{f_2,1}(v,w,x,y;s) &= \bigg[ \frac{M_\rho^2(v-w)}{\left(M_\rho^2-\frac{1}{2}(v+w)\right)^2} -\frac{M_\rho^2(x-y)}{\left(M_\rho^2-\frac{1}{2}(x+y)\right)^2}\bigg] \frac{M_{f_2}^2}{M_{f_2}^2-s} ~, \nonumber\\ \mathcal{G}^\rho_{f_2,2}(v,w,x,y;s) &= \frac{M_\rho^4\big(M_\rho^2(v-w-x+y) - vy + wx\big)} {\left(M_\rho^2-\frac{1}{2}(v+w)\right)^2\left(M_\rho^2-\frac{1}{2}(x+y)\right)^2} \, \frac{M_{f_2}^2}{M_{f_2}^2-s} ~. \label{eq:4pi0rhof2} \end{align} \end{widetext} Note that this result is far from the one-$f_2$ dominance estimate with an $f_2$ coupling constant $\propto M_\rho^{-2}$ that the previous section suggested. Expanding Eq.~\eqref{eq:4pi0rhof2} simultaneously around the limits $M_\rho \to \infty$, $M_{f_2} \to \infty$, the leading term (corresponding to chiral dimension $\mathcal{O}(p^{10})$) does not consist of terms of $\mathcal{O}(M_\rho^{-2}M_{f_2}^{-4})$ alone, but also contains terms of $\mathcal{O}(M_\rho^{-4}M_{f_2}^{-2})$ and $\mathcal{O}(M_\rho^{-6})$. In other words, Eq.~\eqref{eq:Af2} is numerically not a reasonable approximation to Eq.~\eqref{eq:4pi0rhof2}, even for the decay $\eta \to 4\pi^0$ with its tiny available phase space. \subsection{Branching ratios} \label{sec:numNeutral} We calculate the partial width using Eq.~\eqref{eq:width} with the symmetry factor $S=4!$. Note again that $\mathcal{A}(\eta'\to 4\pi^0) = \mathcal{A}(\eta_8\to 4\pi^0)$, assuming standard mixing. We employ the amplitude as given in Eq.~\eqref{eq:4pi0rhof2} as our ``best guess'' for an estimate of the branching fraction. With the same numerical input as in Sec.~\ref{sec:numCharged} (except using the \emph{neutral} pion mass everywhere), we find \begin{align} &\mathcal{B}\big(\eta'\to 4\pi^0 \big) \nonumber\\ &= \big[ 0.4 \,(c_1-c_2)^2 + 1.6 \,(c_1-c_2)c_3 + 1.7 \, c_3^2 \big]\times 10^{-8} \nonumber\\ & = \big\{ 3.7 ,\, 3.9 \big\}\times 10^{-8} ~,\label{eq:etaprime4pi0width} \end{align} for the two sets of coupling constants $c_i$. 
Note that the use of the amplitude~\eqref{eq:Af2} leads to a branching fraction of the order of $4\times 10^{-11}$, i.e.\ almost 3 orders of magnitude smaller. We can trivially also calculate the branching fraction for $\eta \to 4\pi^0$, the only $\eta \to 4\pi$ decay that is kinematically allowed. We again employ the amplitude~\eqref{eq:4pi0rhof2}, and note that mixing according to Eq.~\eqref{eq:mixing} suggests $\mathcal{A}(\eta\to 4\pi^0) = \sqrt{2}\mathcal{A}(\eta_8\to 4\pi^0)$. Normalized to the total width of the $\eta$, $\Gamma_\eta = (1.30 \pm 0.07)\,{\rm keV}$~\cite{PDG2010}, we find \begin{align} &\mathcal{B}\big(\eta\to 4\pi^0 \big) \nonumber\\ &= \big[ 0.4 \,(c_1-c_2)^2 + 1.1 \,(c_1-c_2)c_3 + 1.0 \, c_3^2 \big]\times 10^{-30} \nonumber\\ &= \big\{ 2.4 ,\, 2.6 \big\}\times 10^{-30} ~, \label{eq:eta4pi0width} \end{align} in other words, the D-wave characteristic of the decay combined with the tiny phase space leads to an enormous suppression of the $CP$-allowed $\eta\to 4\pi^0$ decay. We again compare these estimates to the available experimental upper limits~\cite{Alde,Prakhov}, \begin{align} \mathcal{B}_{\rm exp}\big(\eta'\to 4\pi^0\big) &< 5\times 10^{-4} ~, \nonumber\\ \mathcal{B}_{\rm exp}\big(\eta\to 4\pi^0 \big) &< 6.9\times 10^{-7} ~; \end{align} further improvements of these experimental upper limits are planned (see e.g.\ Ref.~\cite{Bednarski} for $\eta\to 4\pi^0$). In this case, our predictions are several orders of magnitude below these limits. The uncertainties of Eqs.~\eqref{eq:etaprime4pi0width} and \eqref{eq:eta4pi0width} are hard to assess. The generic SU(3) and $1/N_c$ error of about 30\% assumed in Sec.~\ref{sec:numCharged} is probably too small, as here we do not even have a complete leading-order calculation at our disposal. We therefore rather assume these numbers to be the correct orders of magnitude, without quantifying the uncertainty of the prediction any further. 
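As for the charged channels, the numbers for the two coupling sets follow from evaluating the quadratic forms above; since the quoted coefficients are rounded, the check below only demands agreement at the few-percent level:

```python
def bracket(coeffs, c12, c3):
    """Evaluate [A (c1-c2)^2 + B (c1-c2) c3 + C c3^2] for given couplings."""
    a, b, c = coeffs
    return a * c12 ** 2 + b * c12 * c3 + c * c3 ** 2

# eta' -> 4pi0 (units of 1e-8) and eta -> 4pi0 (units of 1e-30),
# for the sets (i) c1-c2 = c3 = 1 and (ii) c1-c2 = 1.21, c3 = 0.93
for coeffs, quoted in [((0.4, 1.6, 1.7), (3.7, 3.9)),
                       ((0.4, 1.1, 1.0), (2.4, 2.6))]:
    for (c12, c3), q in zip([(1.0, 1.0), (1.21, 0.93)], quoted):
        # the rounded coefficients reproduce the quoted values to ~5%
        assert abs(bracket(coeffs, c12, c3) / q - 1) < 0.06
```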
\section{\boldmath{$CP$}-violating \boldmath{$\eta,\,\eta'\to 4\pi^0$} decays}\label{sec:CP} Given the smallness of the branching fractions predicted for $\eta'\to 4\pi^0$, $\eta\to 4\pi^0$ via a D-wave dominated, $CP$-conserving decay mechanism in the previous section, it is desirable to compare these numbers with possible $CP$-violating contributions that may, on the other hand, avoid the huge angular-momentum suppression. One such $CP$-violating mechanism that is expected to affect strong-interaction processes is induced by the so-called $\theta$-term, an additional term in the QCD Lagrangian necessitated for the solution of the U(1)$_A$ problem. The $\theta$-term violates $P$ and $CP$ symmetry and may induce observable symmetry-violating effects, in particular, in flavor-conserving processes. Its effective-Lagrangian treatment includes a term that can be rewritten as (see Ref.~\cite{PichRafael} and references therein) \begin{align} \mathcal{L}_{\theta} &= i\,\bar\theta_0\,\frac{F_\pi^2 M_{\eta_0}^2 }{12} \bigg\{ \langle U-U^\dagger\rangle - \log\Big(\frac{\det U}{\det U^\dagger}\Big) \bigg\} ~, \nonumber\\ U &= u^2 = \exp\Big( \frac{i \varphi}{F_\pi} \Big) ~, \end{align} which, in addition to the well-known $\eta \to 2\pi$ amplitude~\cite{CVVW,PichRafael}, also induces a $CP$-violating $\eta \to 4\pi$ amplitude, \begin{align} & \mathcal{A}_{CP}(\eta_8\to 4\pi^0) = \frac{1}{\sqrt{2}}\mathcal{A}_{CP}(\eta_0\to 4\pi^0) \nonumber\\ &= \mathcal{A}_{CP}(\eta'\to 4\pi^0) = \frac{1}{\sqrt{2}}\mathcal{A}_{CP}(\eta\to 4\pi^0) = -\frac{M_{\eta_0}^2 \bar\theta_0}{3\sqrt{3}F_\pi^3} ~. \end{align} We will use $M_{\eta_0} \approx M_{\eta'}$ for numerical evaluation. 
The fact that this amplitude is a constant makes the phase space integration almost trivial, with the results for the branching fractions \begin{align} \mathcal{B}(\eta \stackrel{CPV}{\longrightarrow} 4\pi^0) &= 5 \times 10^{-5} \times \bar\theta_0^2 ~, \nonumber\\ \mathcal{B}(\eta' \stackrel{CPV}{\longrightarrow} 4\pi^0) &= 9 \times 10^{-2} \times \bar\theta_0^2 ~. \label{eq:CPBR} \end{align} We remark that we do not consider the branching ratio estimate for $\eta'\to 4\pi^0$ in Eq.~\eqref{eq:CPBR} reliable in any sense: given the available phase space and the possibility of strong S-wave $\pi\pi$ final-state interactions, it could easily be enhanced by an order of magnitude. Were $\bar\theta_0$ a quantity of natural size, Eq.~\eqref{eq:CPBR} would demonstrate the enhancement of the $CP$-violating S-wave mechanism compared to the $CP$-conserving D-wave one, see Eqs.~\eqref{eq:etaprime4pi0width} and \eqref{eq:eta4pi0width}. With current limits on the QCD vacuum angle derived from neutron electric dipole moment measurements, $\bar\theta_0 \lesssim 10^{-11}$~\cite{Ottnad}, these branching fractions are already constrained far below anything measurable; we note, however, that for $\eta \to 4\pi^0$ the suppression of the $CP$-conserving D-wave mechanism, see Eq.~\eqref{eq:eta4pi0width}, is so strong that the corresponding branching fraction is even smaller than the $CP$-violating (S-wave) one in Eq.~\eqref{eq:CPBR} if the current bounds are inserted for $\bar\theta_0$. \section{Summary and conclusions} In this article, we have calculated the branching fractions of the $\eta$ and $\eta'$ decays into four pions. These processes of odd intrinsic parity are anomalous and---as long as $CP$ symmetry is assumed to be conserved---forbid the pions to be in relative S-waves. We organize the amplitudes according to chiral power-counting rules, and find the leading contributions to the $\eta'$ decay amplitudes with charged pions in the final state at $\mathcal{O}(p^6)$. 
Utilizing the framework of hidden local symmetry for vector mesons, we assume that vector-meson exchange saturates the $\mathcal{O}(p^6)$ low-energy constants, and find that the (P-wave) decay amplitude is entirely governed by $\rho$ intermediate states. The dominant contribution is hence given by the triangle anomaly via $\eta' \to \rho\rho$ (with numerically subleading box terms), not by the pentagon anomaly. In this way, the branching fractions for $\eta'\to 2(\pi^+\pi^-)$ and $\eta'\to \pi^+\pi^-2\pi^0$ are predicted to be \begin{align} \mathcal{B}\big(\eta'\to 2(\pi^+\pi^-) \big) &= (1.0\pm0.3)\times10^{-4} ~, \nonumber\\ \mathcal{B}\big(\eta'\to \pi^+\pi^-2\pi^0 \big) &= (2.4\pm0.7)\times10^{-4} ~, \end{align} respectively. The former is only a factor of 2 smaller than the current experimental upper limit, and should thus be testable in the near future at modern high-statistics facilities. Predictions for the decays into four neutral pions are much more difficult, as Bose symmetry requires the pions to emerge in relative D-waves (assuming $CP$ conservation), suppressing the amplitudes to $\mathcal{O}(p^{10})$ in chiral power counting. Here we do not even obtain the full leading-order amplitudes, as these would require a three-loop calculation. We estimate the decay via a charged-pion-loop contribution with D-wave pion--pion charge-exchange rescattering; an alternative mechanism through two $f_2$ mesons is found to be completely negligible in comparison, based on an estimate of the tensor--tensor--pseudoscalar coupling constant in the framework of QCD sum rules. Because of these phenomenological approximations, the $CP$-conserving branching ratios thus obtained, \begin{align} \mathcal{B}\big(\eta'\to 4\pi^0 \big) &\sim 4 \times 10^{-8} ~, \nonumber\\ \mathcal{B}\big(\eta\to 4\pi^0 \big) &\sim 3 \times 10^{-30} ~, \end{align} should only be taken as order-of-magnitude estimates. 
It thus turns out that the $CP$-conserving decay width of $\eta\to 4\pi^0$ is so small that any signal to be observed would indicate $CP$-violating physics. For the latter, we calculate one specific example using the QCD $\theta$-term. \begin{acknowledgments} We would like to thank Andrzej Kup\'s\'c for initiating this project and for discussions, and Maurice Benayoun for useful communications concerning Ref.~\cite{Benayoun}. Partial financial support by the Helmholtz Association through funds provided to the Virtual Institute ``Spin and strong QCD'' (VH-VI-231), by the DFG (SFB/TR 16, ``Subnuclear Structure of Matter''), and by the project ``Study of Strongly Interacting Matter'' (HadronPhysics2, Grant No.\ 227431) under the Seventh Framework Program of the EU is gratefully acknowledged. \end{acknowledgments}
\section{Introduction} Nowadays the verification of radiotherapy treatments in most hospitals is performed with air or solid state ionization chambers. These chambers are mechanically displaced to obtain beam profiles. IMRT techniques require detectors able to verify and to monitor the clinical beams with high spatial resolution and fast response. Furthermore, the dose rate at any point must be integrated over the entire exposure, limiting the use of typical ionization chambers. IMRT verification with radiographic films (RGFs), radio-chromic films (RCFs) or electronic portal imaging devices (EPIDs) provides a high spatial resolution. However, RGFs need chemical processing and over-respond to low energy scattered photons (Sykes \etal 1999, Martens \etal 2002), RCFs present response non-uniformity (Niroomand-Rad \etal 1998) and calibration of EPIDs is a difficult task, which complicates high precision dosimetry with all of these devices. Segmented anode ionization chambers, like those presented in Martens \etal (2001), Belletti \etal (2001) and Eberle \etal (2003), and diode arrays (Jursinic and Nelms 2003) are an alternative. Although faster verification procedures are possible with these devices, none of them achieves a millimeter-range spatial resolution. In this paper we present the design, the operation principles and the first tests of a 128 pixel linear array whose aim is to obtain a profile in a single beam shot with enough resolution to make mechanical displacement unnecessary. Each pixel has an area of 1.7 mm $\times$ 1.7 mm. The active medium is a 0.5 mm thick isooctane layer, which is encapsulated between two printed circuit boards. We used a standard liquid isooctane from Merck\footnote{Merck Uvasol quality grade isooctane}, with a purity $\geq$ 99.8\%. No further purification, in order to obtain an ultra-pure liquid, has been made. 
Non-polar liquids are becoming an alternative to air and solid state semiconductors in radiotherapy detectors due to their tissue-equivalent behavior, their sensitivity and their small directional dependence. Liquid filled ionization chambers are currently used in radiotherapy both for dosimetry, as shown by Wickman and Nystr\"{o}m (1992) and Wickman \etal (1998), and for portal imaging, as in the device of van Herk and Meertens (1988). One of the most commonly used liquids is isooctane (2,2,4-trimethylpentane). This non-polar liquid has a nearly constant stopping power ratio to water over a very wide energy range (less than 3\% variation from 0.1 MeV to 20 MeV), and its mass density makes it possible to achieve a spatial resolution in the millimeter range for therapy beams. \section{Detector description} \subsection{Detector design} The linear array has been constructed using two printed circuit boards (PCB) that surround a 0.5 mm thick isooctane layer. The isooctane gap is provided by a PEEK\footnote{Poly Ether Ether Ketone} spacer. The chamber walls were fabricated using FR4 fiber glass epoxy. The upstream wall has a 0.8 mm thickness and contains the high voltage plane. The downstream one is a four-layer PCB with a 3 mm thickness. The top layer contains the Cu+Ni+Au anode segmented in 128 pixels. Each electrode has an area of 1.5 mm $\times$ 1.5 mm and is surrounded by a guard electrode biased to +2 V. The pitch is 1.7 mm, so the linear array consists of 128 cells of 1.7 mm $\times$ 1.7 mm $\times$ 0.5 mm, giving a total active length of 21.6 cm. The internal layers contain metallic strips that carry the ionization charge produced in the liquid to one side of the device, where the detector is connected to the read-out electronics. A 35 $\mu$m thick Cu cladding was deposited on the bottom layer to shield the strips from external noise. 
The high voltage electrode dimensions (250 mm $\times$ 15 mm) are larger than the sensitive area in order to guarantee a high electric field uniformity in the active volume. Figure \ref{fig1} shows a scheme of the detector layout, and figure \ref{fig2} shows the detector cross section. The total dimensions of the assembled device are 350 mm $\times$ 70 mm $\times$ 4.5 mm. \begin{figure} \begin{center} \includegraphics*[width=7.5cm]{figure1.eps} \end{center} \caption{Detector scheme. It shows the top PCB, the PEEK spacer and the four layer bottom PCB. } \label{fig1} \end{figure} \begin{figure} \begin{center} \includegraphics*[width=8.5cm]{figure2.eps} \end{center} \caption{Scheme of the detector cross section.} \label{fig2} \end{figure} \subsection{Read-out electronic system} The X-ray Data Acquisition System (XDAS) has been used as the read-out electronics. This system is provided by the company Electron Tubes Ltd., and it is based on the Xchip developed by the CCLRC. It consists of a modular system in which each board has 128 read-out channels, and up to 63 boards can be serially connected, giving a maximum of 8064 read-out channels. The main characteristics of the XDAS system are shown in table \ref{table1}. For this application we use only one board (128 channels). The response of each read-out channel has been studied using a Thevenin current source. The mean sensitivity of the channels is 4272$\pm$6 ADC counts per pC. The relative non-uniformity in the response of the channels (figure \ref{fig3}) is lower than 0.6 \%. The XDAS system, together with the DC power supplies and a high voltage generator, was mounted in a metallic box (the electrometer station) to protect it from external noise and to make the device manageable. This portable unit is placed close to the detector and outside of the direct beam. It is connected to the detector through a 3 meter double shielded cable, and to the standard serial and parallel ports of a PC for digital control and read-out. 
Figure \ref{fig4} shows a photo of the assembled device. \begin{table} \caption{\label{table1}XDAS main characteristics.} \begin{indented} \item[]\begin{tabular}{@{}llll} \br integration period& 0.01 ms to 0.5 s\\ sub-samples& 256 max.\\ signal to noise ratio& 30000:1\\ readout rate & 5 MB/s max.\\ non-linearity & $<$ 0.1 \% \\ A/D conversion & 14 bit\\ data output & 16 bit\\ dimensions & 101 mm $\times$ 164 mm\\ \br \end{tabular} \end{indented} \end{table} \begin{figure} \begin{center} \includegraphics*[width=8cm]{figure3.eps} \end{center} \caption{Calibration of the XDAS board. The relative non-uniformity is lower than 0.6 \%.} \label{fig3} \end{figure} \begin{figure} \begin{center} \includegraphics*[width=7cm]{figure4.eps} \end{center} \caption{Photo of the assembled device. It shows the detector, the cable and the electrometer station.} \label{fig4} \end{figure} \section{Principles of operation} \subsection{Initial recombination} When ionizing radiation interacts with a medium, it creates electron-ion pairs along its track. Electrons released from molecules thermalize at a distance $r$, where the electron and the positive ion are still bound by the Coulomb interaction. This causes the recombination of a fraction of the primary ionization pairs produced, which is called initial recombination. These effects are much more relevant in liquids than in gases due to the fact that the mass density of liquid hydrocarbons is almost three orders of magnitude higher than the density of gases at normal conditions, so $r$ is much smaller. The number of electron-ion pairs escaping initial recombination per 100 eV of absorbed energy is called the free ion yield and is denoted as $G_{\rm{fi}}$. The initial recombination, and thus the $G_{\rm{fi}}$, depends on the liquid properties, on its temperature, $T$, and on the external electric field, $E$ (Onsager 1938), but not on the dose rate. 
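As an illustrative order-of-magnitude cross-check (our own back-of-envelope sketch, not part of the original work), the free ion yield can be combined with the pixel geometry and the XDAS sensitivity quoted above to estimate the signal expected from a single cell. The isooctane density, the dose rate at the pixel and the field-enhanced yield $G_{\rm{fi}}\approx 0.7$ pairs per 100 eV are assumed values:

```python
# Back-of-envelope signal estimate for one 1.7 x 1.7 x 0.5 mm^3 pixel.
# Assumed (not from the paper): isooctane density 0.69 g/cm^3, a dose
# rate of 1 Gy/min at the pixel, and a field-enhanced free ion yield
# G_fi ~ 0.7 pairs per 100 eV at the operating field.
e = 1.602e-19                       # elementary charge, C
joule_per_100eV = 100 * 1.602e-19   # J per 100 eV

volume_cm3 = 0.17 * 0.17 * 0.05     # pixel volume, cm^3
mass_kg = volume_cm3 * 0.69e-3      # liquid mass in the cell, kg

dose_rate = 1.0 / 60.0              # Gy/s (assumed 1 Gy/min)
t_int = 10e-3                       # XDAS integration time, s

energy = dose_rate * t_int * mass_kg        # energy absorbed per sample, J
pairs = energy / joule_per_100eV * 0.7      # ion pairs escaping recombination
charge_pC = pairs * e * 1e12                # collected charge, pC
counts = charge_pC * 4272                   # ADC counts (4272 counts/pC)

print(f"~{charge_pC:.1f} pC -> ~{counts:.0f} counts per 10 ms sample")
```

With these assumptions one obtains roughly 1 pC, i.e.\ a few thousand ADC counts per 10 ms integration period, a comfortably measurable signal.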
For low electric field values ($E\sim$ $10^{3}$ V mm$^{-1}$) the $G_{\rm{fi}}$ rises approximately linearly with the electric field: \begin{equation} G_{\rm{fi}}\simeq G_{\rm{fi}}^{\rm{0}} \; \lbrack1+aE\rbrack \label{Gfi} \end{equation} The constant $a$ must be measured for correct absolute dosimetry, but is well approximated by $a\simeq 1/E_{\rm{0}}$ (Mozumder 1974, Pardo \etal 2004), where $E_{\rm{0}}=8\pi\epsilon_{\rm{r}}\epsilon_{\rm{0}}(\kappa T)^{2}/e^{3}$ is the so-called Onsager field. Here $\epsilon_{\rm{r}}$ and $\epsilon_{\rm{0}}$ are the relative dielectric constant of the liquid and the dielectric constant of vacuum respectively, $\kappa$ is the Boltzmann constant and $e$ is the electron charge. \subsection{Volume recombination and ion collection efficiency} \begin{table} \caption{\label{table2}Measured charge carrier mobilities ($k_{\rm{-}}$, $k_{\rm{+}}$), volume recombination constant ($\alpha$), free ion yield at zero electric field ($G_{\rm{fi}}^{\rm{0}}$) for non ultra-pure isooctane, and its relative dielectric constant ($\epsilon_{\rm{r}}$) and Onsager field ($E_{{\rm{0}}}$).} \begin{indented} \item[]\begin{tabular}{@{}llll} \br $k_{\rm{-}}$ (m$^{2}$ V$^{-1}$ s$^{-1}$)$^{(a)}$ & 3.5$\times$10$^{-8}$ \\ $k_{\rm{+}}$ (m$^{2}$ V$^{-1}$ s$^{-1}$)$^{(a)}$ & 2.3$\times$10$^{-8}$ \\ $\alpha$ (m$^{3}$ s$^{-1}$)$^{(a)}$ & 5.4$\times$10$^{-16}$ \\ $G_{\rm{fi}}^{\rm{0}}$ $^{(b)}$ & 0.32 (20 $^{\circ}$C)\\ $\epsilon_{\rm{r}}$ & 1.94 (20 $^{\circ}$C)\\ $E_{{\rm{0}}}$ (V mm$^{-1}$) & 1.74$\times$10$^{3}$ (20 $^{\circ}$C)\\ \br $^{(a)}$Determined by measuring the temporal development of the read-out\\ signal in a pulsed beam. 
The mobilities reported by several authors\\ for non ultra-pure isooctane are in the range 1-4$\times$10$^{-8}$ m$^{2}$ V$^{-1}$ s$^{-1}$,\\ probably due to different contamination in the liquids.\\ $^{(b)}$From Pardo \etal (2004).\\ \end{tabular} \end{indented} \label{table} \end{table} The electrons that escape initial recombination flow due to drift and diffusion, making interactions between ions from different tracks possible, which causes volume recombination. Volume recombination depends on the liquid properties, on the electric field, on the dose rate and also on the form in which the dose is delivered (i.e. pulsed or continuous radiation). Modern clinical linear accelerators deliver the dose in highly ionizing pulses of a few $\mu$s duration and several ms period. The beam dose rate is modulated by varying the pulse period. If the pulse period, $p$, is longer than the charge carrier drift time in the liquid, i.e. when \begin{equation} p\geq \frac{h^{2}}{V k_{\rm{min}}} \label{p_t} \end{equation} the collection efficiency will not depend on the period (i.e. on the dose rate). In equation (\ref{p_t}) $k_{\rm{min}}$ denotes the lower of the two mobilities. In this case we can apply the theory of Boag (1950). This theory assumes that the negative charge is carried by ions, and neglects space charge effects and recombination during the pulse. 
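The Onsager field and the pulse-period condition above are straightforward to evaluate numerically. The following sketch (our own check, using CODATA constants and the isooctane parameters quoted in the text) reproduces the order of magnitude of $E_{\rm{0}}$ and the minimum period for which the collection efficiency is dose rate independent:

```python
import math

# Constants (SI) and liquid/chamber parameters quoted in the text
kB, e, eps0 = 1.380649e-23, 1.602176634e-19, 8.8541878128e-12
eps_r, T = 1.94, 293.15                # isooctane at 20 C
h, V, k_min = 0.5e-3, 1000.0, 2.3e-8   # gap (m), bias (V), slower mobility (m^2/Vs)

# Onsager field E0 = 8*pi*eps_r*eps0*(kappa*T)^2/e^3, converted to V/mm
E0_Vmm = 8 * math.pi * eps_r * eps0 * (kB * T) ** 2 / e ** 3 / 1e3

# Pulse-period condition p >= h^2/(V*k_min): minimum period for a
# dose rate independent collection efficiency
t_drift = h ** 2 / (V * k_min)         # seconds

print(f"E0 ~ {E0_Vmm:.0f} V/mm, carrier drift time ~ {t_drift * 1e3:.1f} ms")
```

This gives $E_{\rm{0}}\approx 1.7\times 10^{3}$ V mm$^{-1}$, within about 1\% of the tabulated value, and a drift time of about 11 ms for the slower carriers at 1000 V, so pulse periods of a few tens of ms satisfy the condition.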
The theory has been experimentally tested by several authors (see for example Johansson \etal 1997), and within it the collection efficiency is given by \begin{equation} f=\frac{1}{u}\ln(1+u) \label{eq_Boag} \end{equation} with \[u=\mu\frac{r}{V}h^{2}\] \[\mu=\frac{\alpha}{e(k_{+}+k_{-})}\] where $r$ is the amount of charge released by the radiation in the liquid and escaping initial recombination per unit volume and pulse, $h$ is the liquid layer thickness, $V$ is the polarization voltage, $k_{+}$ and $k_{-}$ are the mobilities of positive and negative charge carriers and $\alpha$ is the volume recombination constant, which for a low permittivity non-polar liquid can be expressed as (Debye 1942) \begin{equation} \alpha=\frac{e(k_{+}+k_{-})}{\epsilon_{\rm{r}}\epsilon_{\rm{0}}} \label{alpha} \end{equation} We used a numerical simulation to calculate the collection efficiency of the detector irradiated by a pulsed beam using the parameters of table \ref{table}. We considered the pulse period of a Siemens Primus accelerator placed in the Hospital Cl\'{\i}nico Universitario de Santiago, which is related to the monitor unit rate as \begin{equation*} \label{eq_period} p=\cases {(1.93\pm 0.02) \cdot \dot{M}^{-1} & for the 15 MV photon beam\\ (1.08\pm 0.02) \cdot \dot{M}^{-1} & for the 6 MV photon beam\\} \end{equation*} where $p$ is expressed in seconds and $\dot{M}$ is the monitor unit rate expressed in MU min$^{-1}$. Figure \ref{fig5} shows the computed detector collection efficiencies. In figure \ref{fig5}(a) the dose rate is modulated varying the source-detector distance (SDD). The pulse period verifies equation (\ref{p_t}), so the Boag theory can be applied. There is a good correspondence between the simulation, the Boag theory and the experimental points (for a 1000 V operation voltage). In figure \ref{fig5}(b) the distance is constant and the dose rate is modulated changing the accelerator monitor unit rate (up to 300 (500) MU min$^{-1}$ for the 6 (15) MV beam). 
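The Debye relation and the Boag formula are simple enough to check numerically. In the sketch below (our own illustration), the recombination constant is recomputed from the mobilities of table 2, and the collection efficiency is evaluated for an assumed, purely illustrative charge density per pulse:

```python
import math

e, eps0 = 1.602176634e-19, 8.8541878128e-12
k_plus, k_minus, eps_r = 2.3e-8, 3.5e-8, 1.94   # table 2 values
h, V = 0.5e-3, 1000.0                           # gap (m), bias (V)

# Debye relation: alpha = e*(k+ + k-)/(eps_r*eps0); should reproduce
# the tabulated 5.4e-16 m^3/s
alpha = e * (k_plus + k_minus) / (eps_r * eps0)

def boag_efficiency(r):
    """Boag collection efficiency f = ln(1+u)/u for a charge density r
    (C/m^3) released per pulse after initial recombination."""
    mu = alpha / (e * (k_plus + k_minus))
    u = mu * r / V * h ** 2
    return math.log1p(u) / u

# r below is an assumed, illustrative value, not a measured one
f = boag_efficiency(2.0e-3)
```

The recomputed $\alpha$ agrees with the tabulated $5.4\times10^{-16}$ m$^{3}$ s$^{-1}$, and with this assumed $r$ the efficiency comes out near 98.6\%, of the same size as the values quoted below for the 15 MV beam at 1000 V.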
We can see that in the upper part of the curves there is no dependence on the dose rate, because in this region the pulse period is longer than the charge carrier drift time. For the same reason, the collection efficiency does not tend to 1 when the dose rate tends to 0, because the zero dose rate limit is reached by taking $p\rightarrow\infty$ while the charge released per pulse remains constant. In the constant part of the curves the results obtained with the simulation are very close to those computed from the Boag theory. For example, for the 15 MV beam, with a 1000 V polarization voltage, the simulated collection efficiency at low dose rates is $\simeq98.6$ \%, very close to the experimentally measured value ($\simeq98.7$ \%, although due to accelerator dose rate oscillations the experimental results have important uncertainties, as we can see in figure \ref{fig5}(b)) and to the value computed from the Boag theory ($\simeq$ 98.7\%). For the 6 MV beam with the same voltage the simulated value is $\simeq99.2$ \% and the value computed from the Boag theory is $\simeq99.3$ \%. Only in the case of overlapping of charges ionized by different pulses will the collection efficiency depend on the dose rate. In this case the Boag theory cannot be applied. In figure \ref{fig5}(b) we can see quite good agreement between the experimental points, for the 15 MV beam with a 1000 V detector polarization voltage, and the simulation. From figure \ref{fig5} we can conclude that the recombination is higher when there is no pulse overlapping and the dose rate is modulated by changing the SSD. The maximum detector response non-linearity depends on the charge collection efficiency variation between low and high dose rates. Thus, a higher non-linearity is expected when the dose rate is modulated in this way. \begin{figure} \begin{center} \includegraphics*[height=8cm]{figure5.eps} \end{center} \caption{(A) Simulated collection efficiency (solid line) plotted against dose rate for several detector operation voltages. 
The dose rate is modulated varying the SDD, and the pulse period is long enough to verify equation (\ref{p_t}), so the Boag theory (dotted line) can be applied. Experimental points for a 1000 V operation voltage are plotted. (B) Simulated collection efficiency plotted against dose rate for several detector operation voltages with the detector irradiated by 15 MV (solid line) and 6 MV (dashed line) beams. The SSD is constant and the dose rate is modulated varying the accelerator monitor unit rate. Experimental points for the 15 MV, 1000 V operation voltage are plotted.} \label{fig5} \end{figure} \section{First tests of the device} \subsection{Experimental set-up} The first tests of the device were performed using a Siemens Primus accelerator placed in the Hospital Cl\'{\i}nico Universitario de Santiago. For this accelerator a MU is defined as 1 cGy at the maximum depth (1.6 cm for 6 MV and 3 cm for 15 MV) in a water phantom for a 10 cm $\times$ 10 cm field and SSD=100 cm. Measurements were performed in a home-made solid water phantom. The detector operation voltage was 1000 V and the XDAS integration time was 10 ms. Unless mentioned otherwise, the SSD was 100 cm. Comparative measurements of OFs and energy dependence have been made with a 0.125 cm$^{3}$ air ionization chamber (PTW, Freiburg, Germany, type 31010). Penumbra and profile measurements were compared with RGF measurements. In some cases we used a 0.015 cm$^{3}$ PinPoint chamber (PTW, Freiburg, Germany, type 31006), which has a 2 mm diameter. \subsection{Pixel response calibration} To study the pixel response homogeneity the detector was inserted in the phantom at a 2 cm depth and irradiated with 10 cm $\times$ 10 cm 6 MV photon shots, each one delivering 50 MU at a 100 MU min$^{-1}$ rate. Between each shot the detector was displaced 1.7 mm with a micro-metric linear stage in order to compare the read-out signal of each pixel in the center of the field. 
The maximum relative deviation in the response was $\sim$ 6 \%. The non-homogeneity is due to the different responses of the XDAS read-out channels (studied in subsection 2.2) and to small inhomogeneities in the gap and the pixel area. These effects have been corrected in all the following measurements. \subsection{Read-out signal linearity with the dose rate} Figure \ref{fig6} shows the read-out signal in the central pixel of the device plotted against the dose rate, when the detector was irradiated by a 10 cm $\times$ 10 cm 15 MV photon beam. The detector was placed in the phantom at a 4 cm depth. In the first case, figure \ref{fig6}(a), the monitor unit rate was 100 MU min$^{-1}$ to avoid the superposition of charge carriers ionized by different pulses, and the dose rate was modified changing the SSD from 130 cm to 60 cm. In the second case, figure \ref{fig6}(b), the SSD was 100 cm and the dose rate was modified varying the accelerator MU rate from 50 MU min$^{-1}$ to 500 MU min$^{-1}$ in 50 MU min$^{-1}$ steps. It is common to fit this relationship to the empirical expression of Fowler and Attix (1966) \begin{equation} S=k\dot{D}^{\Delta} \end{equation} where $S$ is the read-out signal, $k$ is a parameter for the detector sensitivity and $\Delta$ is a parameter related to the non-linearity of the detector response. We obtain $\Delta=0.993\pm0.007$ in the first case, and $\Delta=0.984\pm0.007$ in the second case, which implies a small non-linearity in both cases. The linear fit of the lower dose rate points shows a 1.5 \% deviation from linearity in the first case (at 2.9 Gy min$^{-1}$) and 2.1 \% in the second (at 5 Gy min$^{-1}$). \begin{figure} \begin{center} \includegraphics*[width=12cm]{figure6.eps} \end{center} \caption{Read-out signal plotted versus the dose rate. The dose rate is modulated: (A) varying the SSD for a constant MU rate (100 MU min$^{-1}$); (B) varying the MU rate for a constant SSD (100 cm). 
In both cases the solid line corresponds to the Fowler-Attix fit, and the dotted line to the linear fit of the lower dose rate points.} \label{fig6} \end{figure} \subsection{Photon beam profiles} Figure \ref{fig7} shows a profile of a 15 MV 5 cm $\times$ 5 cm field at a 5 cm depth in solid water, measured with our linear array, with the PinPoint chamber (displaced with the linear stage in 2 mm steps) and with RGF. All the profiles show a good correspondence. To study possible systematic deviations in the penumbras measured with the linear array, several 90 \%-10 \% and 80 \%-20 \% penumbras of photon beams were measured. The studied fields were 5 cm $\times$ 5 cm at depths of 3 cm, 5 cm, 10 cm and 20 cm for 15 MV, and of 1.5 cm, 3 cm, 5 cm, 10 cm and 20 cm for 6 MV in the solid water phantom. The MU rate was 100 MU min$^{-1}$ in all cases. For each configuration the penumbras from linear array profiles were determined through quadratic interpolation, and the average of the left and the right penumbras was considered. The results were compared with RGF measurements. We used RGF despite its energy dependence because this effect does not significantly affect penumbra measurements, at least at moderate depths and field sizes (Martens \etal 2002), and because it provides a high spatial resolution. Differences between the measurements of both detectors are plotted in figure \ref{fig8}. We can see that penumbras measured with the linear array are, in general, broader than those measured with RGF. Typical uncertainties of the points plotted in figure \ref{fig8} are around $\pm$0.2 mm and $\pm$0.4 mm for 80 \%-20 \% and 90 \%-10 \% respectively, so most of these differences are compatible with zero. 
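The penumbra extraction described above can be sketched in a few lines. The code below (a minimal illustration, not the authors' implementation) finds the positions where a sampled profile crosses two dose levels using a local quadratic fit, as done for the 1.7 mm pitch array data; the logistic test edge is a synthetic example:

```python
import numpy as np

def crossing(x, y, level):
    """Position where the sampled profile crosses `level`, from a
    quadratic fit through the three samples nearest the crossing."""
    j = int(np.clip(np.argmin(np.abs(y - level)), 1, len(x) - 2))
    a, b, c = np.polyfit(x[j-1:j+2], y[j-1:j+2], 2)
    roots = np.roots([a, b, c - level])
    roots = roots[np.isreal(roots)].real
    # keep the root that lies inside the three-point stencil
    return roots[(roots >= x[j-1]) & (roots <= x[j+1])][0]

def penumbra(x, y, hi=0.8, lo=0.2):
    """80%-20% (or 90%-10%) penumbra width of a normalized edge profile."""
    return abs(crossing(x, y, hi) - crossing(x, y, lo))

# Synthetic edge sampled at the 1.7 mm pixel pitch; for this logistic
# shape the true 80%-20% penumbra is 4*ln(4) mm ~ 5.55 mm.
x = np.arange(-10.0, 10.0, 1.7)
y = 1.0 / (1.0 + np.exp(-x / 2.0))
```

Applied to measured profiles, averaging the left and right edges reproduces the procedure used for figure \ref{fig8}.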
\begin{figure} \begin{center} \includegraphics*[width=10cm]{figure7.eps} \end{center} \caption{Relative dose profile of a 5 cm $\times$ 5 cm 15 MV photon beam at a 5 cm depth in a solid water phantom, measured with the linear array ($\times$), with the PinPoint chamber ($\circ$) and with RGF (solid line).} \label{fig7} \end{figure} \begin{figure} \begin{center} \includegraphics*[width=12cm]{figure8.eps} \end{center} \caption{Difference between 90\%-10\% ($\circ$) and 80\%-20\% ($\times$) penumbras measured with RGF and with the linear array for 6 MV (a) and 15 MV (b).} \label{fig8} \end{figure} \subsection{Output factors} Output factors (OFs) are defined as the ratio of the dose at a given depth for a given field size to the dose at the same depth for the reference field size. OFs for several rectangular fields were measured with the central pixel of the linear array and compared with the OFs measured with the reference detector. The length of the field was 10 cm and the width was varied between 10 cm and 1 cm in 1 cm steps. A 5 cm $\times$ 5 cm field was taken as the reference field. The accelerator MU rate was 100 MU min$^{-1}$. The depth was 5 cm for 6 MV and 10 cm for 15 MV in the solid water phantom. As reference detector we used the 0.125 cm$^{3}$ chamber, except for the narrowest field, where we used the 0.015 cm$^{3}$ chamber due to its smaller sensitive volume. Figure \ref{fig9} shows the OFs measured with the linear array, the reference detector, and their relative deviation. The linear array seems to over-respond to narrow fields both for 6 MV and 15 MV (up to 2.9 \% for the 1 cm width 15 MV field). However, for the smaller fields the OF uncertainty is large due to the positioning uncertainty and the difference between the sensitive volumes of the two detectors. For wider fields the linear array presents an under-response (around 0.5 \% and 0.1 \% for 6 MV and 15 MV respectively). 
This behavior has been observed and studied in a similar detector by Martens \etal (2001), who found that it is related to the effect of the electrode metallization. \begin{figure} \begin{center} \includegraphics*[width=12cm,height=6cm]{figure9.eps} \includegraphics*[width=12cm,height=6cm]{figure9_b.eps} \end{center} \caption{OFs measured with the linear array ($\times$) and with the reference detector ($\circ$) for 6 MV (a) and 15 MV (c). OF relative deviations for 6 MV (b) and 15 MV (d).} \label{fig9} \end{figure} \subsection{Energy dependence and effect of the field size} To study the dependence on the energy spectrum of the incident radiation and the influence of the irradiated area on the read-out signal, measurements were performed for 6 MV and 15 MV at several depths and for several field dimensions. The results were compared with the data obtained with the 0.125 cm$^{3}$ chamber. Figure \ref{fig10} shows the ratio of the linear array measurements to those of the reference detector. The ratio was normalized to unity for a 5 cm $\times$ 5 cm field at a 3 cm depth both for 6 MV and 15 MV. The relative uncertainties of the normalized data plotted in the figure are around 0.5 \%. From this figure it is clear that the relative signal decreases when the field size increases, as was expected from the OF measurements. In addition, the linear array under-responds when the depth is increased. This under-response is up to 2.7 \% for 6 MV and up to 2.5 \% for 15 MV, and again can be related to the metallization of the electrodes (Martens \etal 2001). \begin{figure} \begin{center} \includegraphics*[width=8cm]{figure10.eps} \includegraphics*[width=8cm]{figure10_b.eps} \end{center} \caption{Field size and depth dependence of the linear array response for 6 MV (a) and 15 MV (b).} \label{fig10} \end{figure} \subsection{Measurement of IMRT and virtual wedge profiles} The detector can be used for the verification of virtual wedges. 
A profile of a 45$^{\circ}$ virtual wedge for a 15 MV 10 cm $\times$ 10 cm field, delivering a total of 200 MU, has been acquired with the linear array and RGF. The results are compared in figure \ref{fig11}, with a maximum difference of 3\% and an average difference of less than 1 \%. A profile of an IMRT field consisting of four segments, each one delivering 50 MU, was also acquired. The linear array and RGF results are compared in figure \ref{fig12}. The maximum difference is 3 \% while the average difference is again less than 1 \%. \begin{figure} \begin{center} \includegraphics*[width=10cm]{figure11.eps} \end{center} \caption{45$^{\circ}$ virtual wedge relative dose profile of a 5 cm $\times$ 5 cm 6 MV photon beam delivering a total of 200 MU measured with the linear array ($\times$) and with RGF (solid line).} \label{fig11} \end{figure} \begin{figure} \begin{center} \includegraphics*[width=10cm]{figure12.eps} \end{center} \caption{Profiles for an IMRT field delivering a total of 200 MU measured with the linear array ($\times$) and with RGF (solid line).} \label{fig12} \end{figure} \subsection{Signal reproducibility} The signal reproducibility was studied along the test period (around three months). All equivalent measurements were within 2 \%. An important fraction of this deviation is due to temperature dependence, which has not yet been studied. The relative temperature dependence of the read-out signal of liquid-filled devices is around 10$^{-3}$ per degree, due to the influence of temperature on initial recombination (Mozumder 1974, Wickman \etal 1998). \section{Conclusions} The response of each linear array pixel is very linear with the dose rate (2.1 \% deviation at 5 Gy min$^{-1}$). A correction factor has to be applied to each pixel due to the small inhomogeneity in the XDAS response, and to small inhomogeneities in the gap and the pixel area. 
OF measurements deviate by less than 1 \% from those measured with the reference detector for field widths from 2 cm to 10 cm, both for 6 MV and 15 MV. For the narrower fields the deviation is less than 3 \%, but for these narrow fields the positioning uncertainty is high, and the difference between the active volume of a linear array pixel and that of the reference detector can affect the OF measurements. The energy dependence is lower than 2 \% (for depths up to 20 cm and field sizes from 5 cm $\times$ 5 cm to 20 cm $\times$ 20 cm). Although this dependence is not very large, it has to be taken into account when using the detector at large depths. The detector has measured with accuracy several beam profiles and penumbras, and also IMRT and virtual wedge treatments. The small pixel size of the device combined with the fast and sensitive XDAS read-out system allows a faster verification of these fields with a very good spatial resolution (even in regions of high dose gradient) and signal to noise ratio, making mechanical displacement unnecessary and showing its utility for high-precision relative dose measurements. In addition, the detector can be used for absolute dose measurements. The $G_{\rm{fi}}$ and its dependence on the electric field have been studied, together with the charge losses due to volume recombination. Considering these effects, the absolute dose can be obtained from the read-out signal. Studies concerning the temperature dependence, the influence of the detector walls on the absolute dose deposited in the medium, the dose calibration and also the long term stability of the device will be the subject of further work. \ack This work has been supported by the research projects Xunta de Galicia PGIDT01INN20601PR and MCYT DPI2002-0185, and by a CIXTEC (Xunta de Galicia) grant. 
\References \item[] Belletti S, Cirio R, Cocuzza L, Degiorgis P G, Donetti M, Madon E, Marchetto F, Marletti M, Marzoli L, Peroni C, Trevisiol E and Urgesi A 2001 Pixel segmented ionization chamber for therapeutical beams of photons and hadrons {\it Nucl. Instrum. Methods A} {\bf 461} 420-1 \item[] Boag J W 1950 Ionization measurements at very high intensities: 1. Pulsed radiation beams {\it Br. J. Radiol.} {\bf 23} 601-11 \item[] Debye P 1942 Reaction rates in ionic solutions {\it Trans. Electrochem. Soc.} {\bf 82} 265-72 \item[] Eberle K, Engler J, Hartmann G, Hofmann R and H\"{o}randel J R 2003 First tests of a liquid ionization chamber to monitor intensity modulated radiation beams {\it Phys. Med. Biol.} {\bf 48} 3555-64 \item[] Fowler J F and Attix F H 1966 Solid state integrating dosimeters {\it Radiation Dosimetry} vol 2 (New York: Academic) 241-90 \item[] Johansson B, Wickman G and Bahar-Gogani J 1997 General collection efficiency for liquid isooctane and tetramethylsilane in pulsed radiation {\it Phys. Med. Biol.} {\bf 42} 1929-38 \item[] Jursinic P A and Nelms B E 2003 A 2-D diode array and analysis software for verification of intensity modulated radiation therapy delivery {\it Med. Phys.} {\bf 30} 870-9 \item[] Martens C, De Wagner C and De Neve W 2001 The value of the LA48 linear ion chamber array for characterization of intensity-modulated beams {\it Phys. Med. Biol.} {\bf 46} 1131-48 \item[] Martens C, Claeys I, De Wagner C and De Neve W 2002 The value of radiographic film for the characterization of intensity-modulated beams {\it Phys. Med. Biol.} {\bf 47} 2221-34 \item[] Mozumder A 1974 Effect of an external electric field on the yield of free ions. I General Results from the Onsager theory {\it J. Chem. 
Phys.} {\bf 60} 4300-4 \item[] Niroomand-Rad A, Blackwell C R, Coursey B M, Gall K P, Galvin J M, McLaughlin W L, Meigooni A S, Nath R, Rodgers J E and Soares C G 1998 Radiochromic film dosimetry: Recommendations of AAPM Radiation Therapy Committee Task Group 55 {\it Med. Phys.} {\bf 25} 2093-2115 \item[] Onsager L 1938 Initial recombination of ions {\it Phys. Rev.} {\bf 54} 554-7 \item[] Pardo J, Franco L, G\'{o}mez F, Iglesias A, Lobato R, Mosquera J, Pazos A, Pena J, Pombar M, Rodr\'{\i}guez A and Send\'{o}n J 2004 Free ion yield observed in liquid isooctane irradiated by $\gamma$ rays. Comparison with the Onsager theory {\it Phys. Med. Biol.} {\bf 49} 1905-14 \item[] Sykes J R, James H V and Williams P C 1999 How much does film sensitivity increase at depth for larger field sizes? {\it Med. Phys.} {\bf 26} 329-30 \item[] van Herk M and Meertens H 1988 A matrix ionization chamber imaging device for on-line patient setup verification during radiotherapy {\it Radiother. Oncol.} {\bf 11} 369-78 \item[] Wickman G and Nystr\"{o}m H 1992 The use of liquids in ionization chambers for high precision radiotherapy dosimetry {\it Phys. Med. Biol.} {\bf 37} 1789-812 \item[] Wickman G, Johansson B, Bahar-Gogani J, Holmstr\"{o}m T and Grindborg J E 1998 Liquid ionization chambers for absorbed dose measurements in water at low dose rates and intermediate photon energies {\it Med. Phys.} {\bf 25} 900-7 \endrefs \end{document}
\section{Introduction}\label{sec:intro} Sterile neutrinos, namely weak interaction singlets, are compelling candidates to explain a host of cosmological and astrophysical phenomena. They could be a suitable warm dark matter component\cite{dodelson,asaka,shi,kev1,hansen,kev2,kev3,kusenko,kou,dolgovrev,pastor,hannestad,biermann,michaDM}, may also be relevant in the late stages of stellar collapse\cite{raffeltSN,fuller} and in primordial nucleosynthesis\cite{fuller2,fuller3}, and provide a potential explanation for the anomalous velocity distributions of pulsars\cite{segre,fullkus,kuse2}. Although sterile neutrinos are ubiquitous in extensions of the standard model\cite{book1,book2,book3,raffelt}, the MiniBooNE collaboration\cite{miniboone} has recently reported results in contradiction with those from LSND\cite{lsnd1,lsnd2} that suggested a sterile neutrino with a $\Delta m^2 \sim 1~\textrm{eV}^2$ mass scale. Although the MiniBooNE results hint at an excess of events below $475~\mathrm{MeV}$, the analysis distinctly excludes two-neutrino appearance-only $\nu_\mu \rightarrow \nu_e$ oscillations with a mass scale $\Delta m^2\sim 1~\textrm{eV}^2$, perhaps ruling out a \emph{light} sterile neutrino. However, a recent analysis\cite{malto} suggests that while $(3+1)$ schemes are strongly disfavoured, $(3+2)$ neutrino schemes provide a good fit to both the LSND and MiniBooNE data, including the low energy events, because of the possibility of CP violation in these schemes, although significant tension remains. These issues notwithstanding, the MiniBooNE result does not constrain a heavier variety of sterile neutrinos such as those that could be suitable warm dark matter candidates with masses in the $\mathrm{keV}$ range\cite{dodelson,asaka,shi,kev1,hansen,kev2,kev3,kou,pastor,hannestad}. 
Their radiative decay would contribute to the X-ray background\cite{hansen,Xray,kou,boyarsky,hansen2} from which constraints on their masses and mixing angles may be extracted\cite{kou,boyarsky,hansen2,kou2}. It has also been suggested that precision laboratory experiments may be sensitive to $\sim \textrm{keV}$ neutrinos\cite{shapolast}. Being weak interaction singlets, sterile neutrinos can only be produced via their mixing with an active species; hence any assessment of the possibility of sterile neutrinos as dark matter candidates or their role in supernovae must begin with understanding their production mechanism. To be a suitable dark matter candidate, two important constraints must be satisfied: the correct abundance and a velocity dispersion that restricts the free streaming length to be consistent with the constraints from structure formation. Both ingredients depend directly on the distribution function of the sterile neutrinos, which in turn depends on the dynamics of production and evolution until freeze-out. Pioneering work on the non-equilibrium dynamics of neutrinos in a medium was cast in terms of kinetic equations for a flavor ``matrix of densities''\cite{dolgov} or in terms of $2\times 2$ Bloch-type equations for flavor quantum mechanical states\cite{stodolsky,enquist}. A general field theoretical approach to neutrino mixing and kinetics was presented in \cite{sigl,raffkin} (see also \cite{raffelt}); however, sterile neutrino production in the early Universe is mostly studied in terms of simple phenomenological rate equations\cite{dodelson,kev1,cline,kainu,foot,dibari}, and numerical studies \cite{kev1,dibari} rely on an approximate semi-phenomenological approach\cite{kainu,foot}. A field theoretical study of the hadronic contribution to the sterile production rate \emph{near an MSW resonance} has been reported in ref.\cite{shapo}. 
Understanding the dynamics of oscillations, decoherence and damping is of \emph{fundamental} and phenomenological importance not only in neutrino cosmology but also in the dynamics of neutral meson mixing and CP violation\cite{cp,cp2,beuthe} and axion-photon mixing in the presence of a magnetic field\cite{raffelt}, a phenomenon whose interest has been rekindled by the recent results from the PVLAS collaboration\cite{pvlas} (see the discussion in ref.\cite{pvlas2}). As argued in\cite{dolokun} the spinorial nature of neutrinos is inessential to describe the dynamics of mixing and decoherence in a medium. Recently we reported on a study\cite{hobos} of mixing and decoherence in a theory of mesons that provides an accurate description of similar phenomena for mixed neutrinos. This effective theory incorporates interactions that model the medium effects associated with charge and neutral currents for neutrinos and yields a picture of the dynamics which is remarkably general. The fermion nature of the distributions and Pauli blocking effects can be simply accounted for in the final result\cite{hobos}. This study implemented quantum field theory methods to obtain the non-equilibrium effective action for the ``neutrino'' degrees of freedom. More recently this approach was extended to study the production of sterile neutrinos both from the effective action as well as from the correct quantum kinetic equations obtained directly from the quantum master equation\cite{hoboyste}. The results obtained in ref.\cite{hoboyste} clarify a host of important aspects, such as the approach to equilibrium and a detailed analysis of quantum Zeno suppression when the decoherence time scale is shorter than the oscillation time scale, thereby confirming previous results obtained for neutrinos with standard model interactions in refs.\cite{hozeno,hochar}. 
The study in refs.\cite{hobos,hoboyste} relied on integrating out the bath degrees of freedom, assumed to remain in equilibrium, up to second order in a perturbative expansion akin to an expansion in $G_F$ in the standard model. This perturbative treatment restricted the analysis to the weak damping regime in which the decoherence time scale is larger than the oscillation time scale. In refs.\cite{hoboyste,hozeno} it was pointed out that a strong damping regime featuring the opposite relation between these time scales could emerge near an MSW resonance for small vacuum mixing angle consistent with constraints from the X-ray background\cite{kou,boyarsky,hansen2,kou2}. {\bf Motivation and goals:} A sound assessment of sterile neutrinos as warm dark matter candidates requires a reliable description of the kinetics of production and evolution towards freeze-out. Strong departure from equilibrium in the distribution function at freeze-out could lead to significant changes in the abundance or to skewed velocity distributions that could affect the free streaming lengths and structure formation\cite{fullshi}. In this article we complement and extend a previous study\cite{hoboyste} on the non-equilibrium production of a sterile species via active-sterile mixing. While the previous studies\cite{hozeno,hobos,hoboyste} focused on the weak damping limit, consistently with a perturbative expansion in standard model interactions, this article studies an \emph{exactly solvable} model that allows a systematic exploration of the \emph{strong damping case} and general conclusions to be drawn on the production dynamics of a sterile species. 
The model incorporates all the relevant ingredients: active-sterile mixing via a mass matrix which is off-diagonal in the flavor basis, and the coupling of the active species to a continuum of degrees of freedom which are taken as a thermal bath in equilibrium. The model also includes an index of refraction contribution which modifies the mixing angles and dispersion relations in the same manner as for neutrinos propagating in a medium. {\bf Summary of results:} The exact solution of the Heisenberg equations of motion allows a complete investigation of the non-equilibrium dynamics of production of the sterile species in the weak and strong damping regimes and a detailed analysis of quantum Zeno suppression. We obtain the quantum master equation and from it the complete set of kinetic equations that describe the production and evolution of the active and sterile distribution functions and coherences and reproduce the exact results. Our main results are: \begin{itemize} \item{The exact solution of the Heisenberg (-Langevin) equations of motion for one active and one sterile species yields two different modes of propagation in the medium corresponding to quasiparticles whose dispersion relations and damping rates (widths) depend on the dimensionless ratio $\widetilde{\gamma} = \Gamma_{aa}/2\Delta E$ with $\Gamma_{aa}$ the active species interaction rate in absence of mixing, and $\Delta E$ the oscillation frequency in absence of damping but including the index of refraction in the medium. The weak and strong damping cases correspond to $\widetilde{\gamma} \ll 1$ and $\widetilde{\gamma} \gg 1$ respectively. The exact distribution functions for the active and sterile species are obtained; their time evolution is completely determined by the widths of these quasiparticles and the oscillation frequency including corrections from the index of refraction \emph{and damping}. 
} \item{ The results in the weak damping regime $\widetilde{\gamma} \ll 1$ coincide with those obtained previously in refs.\cite{hobos,hoboyste,hozeno}: the dispersion relations are akin to those of neutrinos in a medium with an index of refraction and the damping rates are $\Gamma_1=\Gamma_{aa} \cos^2\theta_m~;~\Gamma_2=\Gamma_{aa} \sin^2\theta_m$ where $\theta_m$ is the mixing angle in the medium. The generalized active-sterile transition probability obtained from expectation values of Heisenberg operators in the full quantum density matrix is $\frac{\sin^22\theta_m}{4 }\left[e^{-\Gamma_1t}+e^{-\Gamma_2t}-2e^{-\frac{1}{2}(\Gamma_1+\Gamma_2)t} \cos\left[\Delta E t\right]\right]$. The production of the sterile species \emph{cannot} be described by a simple rate equation, since the distribution function depends on the time scales $1/\Gamma_1,1/\Gamma_2,1/\Delta E$.} \item{In the strong damping regime $\widetilde{\gamma} \gg 1$ the oscillation frequency \emph{vanishes at an MSW resonance} signaling a breakdown of adiabaticity, and the widths of the quasiparticles become $\Gamma_1 \sim \Gamma_{aa}$, $\Gamma_2 \sim \Gamma_{aa} \sin^2 2\theta_m/4\widetilde{\gamma}^2$. To leading order in $1/\widetilde{\gamma}$, the time evolution of the sterile distribution function simplifies into a rate equation, with the production rate given by $\Gamma_2 \sim \sin^22\theta_m (\Delta E)^2/\Gamma_{aa}$ (see eqn. (\ref{npg2})). The active-sterile transition probability is strongly suppressed $\sim 1/\widetilde{\gamma}^2$. The vanishing of the oscillation frequency, the suppression of the transition probability and the production of the sterile species are all manifestations of the \emph{quantum Zeno effect} emerging in the strong damping limit. 
} \item{For active neutrinos with standard model interactions it is shown that the strong damping limit is only available near an MSW resonance for small vacuum mixing angle $\theta$ satisfying the condition $\sin2\theta \lesssim \alpha_w$, where $\alpha_w$ is the weak interaction fine structure constant. This condition is likely satisfied by the constraints on the vacuum mixing angle from the X-ray background\cite{hansen,Xray} and entails that sterile neutrino production is \emph{strongly suppressed by the quantum Zeno effect near an MSW resonance.} This suppression may relieve uncertainties from the QCD phase transition for keV sterile neutrinos. } \item{The quantum master equation for the reduced density matrix is obtained under standard approximations. From it the generalized transition probability and the complete set of kinetic equations are obtained, valid in all regimes of damping. These reproduce the results obtained from the exact treatment. Under simple approximations the full set of kinetic equations is presented in the form of quantum kinetic equations for a ``polarization vector''. The complete set of kinetic equations (\ref{dotn11}-\ref{dotn12}) along with the relations (\ref{Naoftfin},\ref{Nsoftfin}) provide a complete description of the non-equilibrium evolution of the active and sterile distributions and coherences.} \end{itemize} \section{The Model} \label{sec:model} The main ingredients in the dynamics of the production of a sterile species via active-sterile mixing are: i) a mass matrix off diagonal in the flavor basis which mixes the sterile and active species, ii) the coupling of the active species to a bath in equilibrium. In the standard model the bath degrees of freedom are quarks, leptons or hadrons; these equilibrate via strong or electromagnetic interactions and hence can be taken to be in thermal equilibrium. 
We propose a simple exactly solvable model that includes all these ingredients; it is a generalization of a model for quantum Brownian motion\cite{feyver,leggett} which has long served as a paradigm for the study of quantum dissipative systems in condensed matter\cite{qds} and quantum optics\cite{qobooks}. It consists of a set of coordinates $\vec{q}$ that describe the ``system'' coupled to a continuum of harmonic oscillators $Q_p$ that describe a thermal bath in equilibrium. This simple model is generalized so that the coordinates $ {q}_{a,s}$ stand for the active and sterile ``neutrinos''; these are mixed by off diagonal elements in a frequency matrix but only the ``active'' coordinate couples to the bath degrees of freedom. The motivation for studying this model stems from the realization that the spinorial degrees of freedom are not relevant to describe the non-equilibrium dynamics\cite{dolokun}, a statement confirmed by previous studies of mixing, oscillations and decoherence in a theory of mesons\cite{hobos,hoboyste} which yield a remarkably robust picture of the dynamics of neutrinos. The Lagrangian for this model is \begin{eqnarray} L = \frac{1}{2}\Bigg[ {\vec{\dot{q}}}^{~T} \cdot \vec{\dot{q}} - \vec{ q}^{~T}~ \Big( k^2 \mathbb{I}+\mathbb{M}^2 +\mathbb{V}\Big)~\vec{q} \Bigg]+ \frac{1}{2}\sum_p \Bigg[ \dot{Q}^2_p-W^2_p Q^2_p\Bigg]+ q_a \sum_p C_p Q_p \label{model}\end{eqnarray} where the flavor vector is given by \begin{equation} \vec{q} = \Bigg( \begin{array}{c} q_a \\ q_s \end{array}\Bigg) \end{equation} and $k$ is a momentum label, which is assumed but not included as an argument of $q_{a,s}$ for compact notation, $\mathbb{I}$ is the $2\times 2$ identity matrix and \begin{equation} \mathbb{M}^2 = \left( \begin{array}{cc} M^2_{aa} & M^2_{as} \\ M^2_{as} & M^2_{ss} \\ \end{array} \right)~~;~~\mathbb{V} = \left( \begin{array}{cc} V_{aa}(k) & 0 \\ 0 & 0 \\ \end{array} \right)\,. 
\label{matrices} \end{equation} The off diagonal elements of the mass matrix $\mathbb{M}^2$ lead to active-sterile mixing and the matrix $\mathbb{V}$ models a ``momentum dependent matter potential'' for the active species. A sum over $k$ makes explicit the field theoretical nature of the model; however, just as in the case of neutrinos, we are interested in the dynamics of a given $k$ mode in interaction with the ``bath'' degrees of freedom. The correspondence with neutrinos is made manifest by assuming that the matter potential is obtained from one-loop charged and neutral current contributions of $\mathcal{O}(G_F)$ from a background of leptons, quarks or hadrons (or neutrinos in equilibrium) and features a CP-odd term proportional to the lepton and baryon asymmetries and a CP-even term that only depends on energy and temperature\cite{notzold,bell}. The linear coupling of the active species to the bath degrees of freedom with $C_p \propto G_F$ models the charged current interaction, for example the coupling between the electron neutrino and protons, neutrons and electrons in a medium, $G_F \overline{\psi}_P(C_V-C_A\gamma^5)\gamma^\mu\psi_N\overline{\psi}_e\gamma_\mu(1-\gamma^5)\nu_e$ (see a similar description in \cite{raffelt,raffkin}). The label $p$ will be taken to describe a continuum when the density of states is introduced below. Obviously the model (\ref{model}) affords an \emph{exact} solution and yields a remarkably general description of the dynamics. The main ingredient is the coupling of a degree of freedom to a \emph{continuum} of bath or environmental degrees of freedom. Such coupling to a continuum is also at the heart of particle-antiparticle oscillations in neutral meson systems ($K^0-\overline{K}^0; B^0-\overline{B}^0$) as described in refs.\cite{beuthe,cp2}. Other versions of this model, without mixing, have been studied with a focus on the dynamics of equilibration\cite{boyalamo,hoboydavey}. 
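As a concrete illustration, the model Lagrangian can be integrated numerically with a discretized bath; all parameter values, the coupling profile $C_p$ and the integration settings below are illustrative assumptions, not inputs of the analysis that follows:

```python
import numpy as np

# Sketch of the model: flavor doublet q = (q_a, q_s) mixed by an
# off-diagonal mass matrix, with only the active coordinate coupled
# to a bath of N oscillators.  All numbers here are invented.
theta = 0.2                          # vacuum mixing angle
M1sq, M2sq = 1.0, 1.5                # mass eigenvalues M_1^2, M_2^2
k = 5.0                              # momentum label
Mbarsq, dMsq = 0.5 * (M1sq + M2sq), M2sq - M1sq
Msq = Mbarsq * np.eye(2) + 0.5 * dMsq * np.array(
    [[-np.cos(2 * theta), np.sin(2 * theta)],
     [np.sin(2 * theta), np.cos(2 * theta)]])
V = np.diag([0.05, 0.0])             # matter potential, active entry only
K = k ** 2 * np.eye(2) + Msq + V

N = 400                              # discretized bath oscillators
Wp = np.linspace(0.1, 10.0, N)
Cp = 0.05 * np.sqrt(Wp)              # assumed coupling profile C_p

def rhs(y):
    """Equations of motion of the model for (q, qdot, Q, Qdot)."""
    q, qdot = y[:2], y[2:4]
    Q, Qdot = y[4:4 + N], y[4 + N:]
    qdd = -K @ q
    qdd[0] += Cp @ Q                 # bath force acts on q_a only
    Qdd = -Wp ** 2 * Q + Cp * q[0]
    return np.concatenate([qdot, qdd, Qdot, Qdd])

# purely active initial excitation, integrated with RK4
y = np.zeros(4 + 2 * N)
y[0] = 1.0
E0 = 0.5 * (y[2:4] @ y[2:4]) + 0.5 * (y[:2] @ K @ y[:2])
dt, steps = 0.01, 6000
max_qs = 0.0
for _ in range(steps):
    k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
    y += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    max_qs = max(max_qs, abs(y[1]))

# flavor-sector energy (coupling energy to the bath is not included)
E = 0.5 * (y[2:4] @ y[2:4]) + 0.5 * (y[:2] @ K @ y[:2])
print(f"flavor energy: {E / E0:.3f} of initial, max |q_s| = {max_qs:.3f}")
```

Starting from a purely active excitation, the flavor-sector energy is damped as the bath absorbs it, while the sterile coordinate acquires a nonzero amplitude through the off-diagonal mass matrix, in line with the qualitative picture described above.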
For vanishing matter potential $\mathbb{V}$ the flavor $q_{a,s}$ and the mass coordinates $q_{1,2}$ are related by an orthogonal transformation \begin{equation} \left(\begin{array}{c} q_a \\ q_s\\ \end{array}\right) = U(\theta) ~\Bigg(\begin{array}{c} q_1 \\ q_2\\ \end{array}\Bigg)~~;~~U(\theta) = \Bigg( \begin{array}{cc} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \\ \end{array} \Bigg) \label{trafo} \end{equation} where the orthogonal matrix $U(\theta)$ diagonalizes the mass matrix $\mathbb{M}^2$, namely \begin{equation} U^{-1}(\theta)\,\mathbb{M}^2 \, U(\theta) = \Bigg( \begin{array}{cc} M^2_1 &0 \\ 0 & M^2_2 \\ \end{array} \Bigg) \label{diagM} \end{equation} and $\theta$ is the ``vacuum'' mixing angle in absence of the ``matter potential'' $\mathbb{V}$. In the flavor basis the mass matrix $\mathbb{M}^2$ can be written in terms of the vacuum mixing angle $\theta$ and the eigenvalues of the mass matrix as \begin{equation} \mathbb{M}^2 = \overline{M}^{\,2}\,\mathbb{I}+\frac{\delta M^2}{2} \left(\begin{array}{cc} -\cos 2\theta & \sin2\theta \\ \sin 2\theta & \cos 2\theta \\ \end{array} \right) \label{massmatx2}\end{equation} where we introduced \begin{equation} \overline{M}^{\,2} =\frac{1}{2}(M^2_1+M^2_2)~~;~~ \delta M^2 = M^2_2-M^2_1 \label{MbarDelM}\,. 
\end{equation} The frequencies of the flavor modes are determined by the diagonal entries of the matrix $\mathbb{M}^2$ in the flavor basis, introducing \begin{equation} \overline{\omega}(k) = \sqrt{k^2+\overline{M}^{\,2}}\,, \label{wk}\end{equation} these are given by \begin{equation} \omega_a(k) = \overline{\omega}(k)\Bigg[1-\frac{\delta M^2}{2\,\overline{\omega}(k)^2}\cos2\theta\Bigg]^{\frac{1}{2}}~~;~~\omega_s(k) = \overline{\omega}(k)\Bigg[1+\frac{\delta M^2}{2\,\overline{\omega}(k)^2}\cos2\theta\Bigg]^{\frac{1}{2}} \label{flavfreqs}\end{equation} Focusing on the relevant case of ultrarelativistic neutrinos, we anticipate that the \emph{only} approximation to be invoked is the one in which $\overline{\omega}(k)$ is larger than any other energy scale. It is convenient to introduce \begin{equation} \mathbb{K} \equiv k^2\mathbb{I}+\mathbb{M}^2+\mathbb{V}= \Bigg(\overline{\omega}(k)^2+\frac{V_{aa}}{2}\Bigg)\,\mathbb{I}+ \frac{\delta M^2}{2}\, \Bigg[ \begin{array}{cc} -\Big(\cos2\theta - \frac{V_{aa}}{\delta M^2}\Big) & \sin2\theta \\ \sin 2\theta & \Big(\cos2\theta - \frac{V_{aa}}{\delta M^2}\Big) \\ \end{array} \Bigg]\,. \label{matx2}\end{equation} The exact solution will be presented in the Heisenberg picture, in which the density matrix is time independent and determined by its initial value, which is assumed to be uncorrelated and of the form \begin{equation}\hat{\rho}(0) = \hat{\rho}_q \otimes \hat{\rho}_Q \,.\label{DMat}\end{equation} The bath is taken to be in thermal equilibrium with density matrix $\hat{\rho}_Q = e^{-H_Q/T}/\mathrm{Tr}\, e^{-H_Q/T}$, where $H_Q$ is the Hamiltonian for the sum of \emph{free} harmonic oscillators of frequencies $W_p$. 
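The parametrization (\ref{massmatx2}) and the diagonalization (\ref{diagM}) can be checked numerically; the values of $\theta$ and the mass eigenvalues below are arbitrary:

```python
import numpy as np

# Check that U(theta) diagonalizes the flavor-basis mass matrix
# built from Mbar^2, delta M^2 and the vacuum mixing angle theta.
theta = 0.3
M1sq, M2sq = 1.0, 2.0
Mbarsq, dMsq = 0.5 * (M1sq + M2sq), M2sq - M1sq

# flavor-basis mass matrix, eq. (massmatx2)
Msq = Mbarsq * np.eye(2) + 0.5 * dMsq * np.array(
    [[-np.cos(2 * theta), np.sin(2 * theta)],
     [np.sin(2 * theta), np.cos(2 * theta)]])
# orthogonal rotation, eq. (trafo)
U = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# U^{-1} M^2 U should reproduce diag(M_1^2, M_2^2), eq. (diagM)
D = U.T @ Msq @ U
print(np.round(D, 12))
```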
The Heisenberg equations of motion for the coordinates $q_{a,s},Q_p$ are the following \begin{eqnarray} && \ddot{q}_{\alpha}+\mathbb{K}_{\alpha \beta}\, q_\beta = \eta_\alpha~~;~~\alpha,\beta = a,s \label{qeqn} \\&& \ddot{Q}_p+W^2_pQ_p = q_a C_p\,, \label{Qeqn}\end{eqnarray} where we have introduced the flavor vector \begin{equation} \vec{\eta} = \sum_p C_p Q_p ~\left( \begin{array}{c} 1 \\ 0 \\ \end{array} \right) \label{veceta}\,.\end{equation} The solution of eqn (\ref{Qeqn}) is \begin{equation} Q_p(t) = Q^{(0)}_p(t)+ \frac{C_p}{W_p}\int^t_0 \sin\big[W_p(t-t')\big]q_a(t')dt' \label{solQ}\,,\end{equation} where \begin{equation} Q^{(0)}_p(t) = \frac{1}{\sqrt{2W_p}}\Big[A_p\,e^{-iW_p t}+A^{\dagger}_p\,e^{iW_p t}\Big] \label{solQ0}\,,\end{equation} is a solution of the homogeneous equation and $A_p,A^{\dagger}_p$ are free field annihilation and creation operators with the usual canonical commutation relations. The distribution function for the bath degrees of freedom is \begin{equation} \mathrm{Tr}\,\hat{\rho}_Q\, A^\dagger_p A_p = \frac{1}{e^{\frac{W_p}{T}}-1}=n(W_p)\label{nofp}\end{equation} Introducing the solution (\ref{solQ}) into (\ref{qeqn}) we find the Heisenberg-Langevin equations\cite{qobooks} \begin{equation} \label{HL} \ddot{q}_{\alpha}(t)+\mathbb{K}_{\alpha \beta}\, q_\beta(t) + \int^t_0 \Sigma_{\alpha\beta}(t-t')\, q_\beta(t')\, dt' = \xi_\alpha (t) \end{equation} where the self energy is diagonal in the flavor basis and given by \begin{equation} \label{SE} \Sigma_{\alpha\beta}(t-t') = - \sum_p \frac{C^2_p}{W_p} \sin[W_p(t-t')] \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \\ \end{array} \right) \,.\end{equation} The \emph{stochastic} quantum noise is \begin{equation} \vec{\xi}(t) = \sum_p C_p Q^{(0)}_p(t)~ \left( \begin{array}{c} 1 \\ 0 \\ \end{array} \right) \label{noise}\,,\end{equation} and we note that \begin{equation} \mathrm{Tr}\hat{\rho}\, \vec{\xi}(t) =0\,. 
\label{xiav}\end{equation} The self energy $\Sigma$ is written in dispersive form by passing to a continuum description of the bath degrees of freedom, writing \begin{equation} - \sum_p \frac{C^2_p}{W_p} \sin[W_p(t-t')] = {i} \int^\infty_{-\infty}\frac{d\omega}{\pi} \textrm{Im}\Sigma_{aa}(\omega) e^{i\omega(t-t')} \label{SEdis}\end{equation} where the density of states \begin{equation} \textrm{Im}\Sigma_{aa}(\omega) = \sum_p \frac{\pi C^2_p}{2W_p} \left[\delta(\omega-W_p)-\delta(\omega+W_p)\right]\label{disper}\end{equation} has the properties \begin{equation} \textrm{Im}\Sigma_{aa}(-\omega)= - \textrm{Im}\Sigma_{aa}(\omega)~~;~~ \textrm{Im}\Sigma_{aa}(\omega)>0 ~~\textrm{for}~~ \omega >0\,. \label{SEprop}\end{equation} The density of states $\mathrm{Im}\Sigma_{aa}$ contains all of the relevant information of the bath. The Heisenberg-Langevin equation (\ref{HL}) is solved by Laplace transform; introducing \begin{equation} \widetilde{q}_\alpha (s) = \int^\infty_0 e^{-st}q_\alpha(t)dt~~; ~~\textrm{etc}\,, \label{LT}\end{equation} the equation of motion (\ref{HL}) becomes an algebraic equation \begin{equation} \Bigg[s^2 \delta_{\alpha\beta}+\mathbb{K}_{\alpha\beta}+\widetilde{\Sigma}_{\alpha\beta}(s)\Bigg]\widetilde{q}_\beta(s) = \dot{q}_\alpha(0)+s q_{\alpha}(0)+\widetilde{\xi}_\alpha(s)\,, \label{LTE}\end{equation} where in the flavor basis \begin{equation}\widetilde{\Sigma}(s) = -\frac{1}{\pi} \int^\infty_{-\infty} \frac{\textrm{Im}\Sigma_{aa}(\omega')}{\omega'+is}d\omega' ~\left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \\ \end{array} \right)\,.\label{tilS}\end{equation} In what follows we need the analytic continuation of the self-energy to real frequencies $s \rightarrow i\omega + 0^+$ \begin{equation} \widetilde{\Sigma}_{aa}(s=i\omega+0^+)= \textrm{Re}\Sigma_{aa}(\omega) + i \, \textrm{Im}\Sigma_{aa}(\omega) \label{conti}\end{equation} with the dispersive relation \begin{equation} \textrm{Re}\Sigma_{aa}(\omega) = -\frac{1}{\pi}\mathcal{P} 
\int^\infty_{-\infty} \frac{\textrm{Im}\Sigma_{aa}(\omega')}{\omega'-\omega}d\omega' \,,\label{sigRE}\end{equation} and $\mathcal{P}$ stands for the principal part. The solution of eqn. (\ref{HL}) in real time is given by \begin{equation} q_\alpha(t) = \dot{G}_{\alpha\beta}(t)q_\beta(0)+ {G}_{\alpha\beta}(t)\dot{q}_\beta(0)+ \int^t_0 G_{\alpha\beta}(t')\xi_\beta(t-t')dt' \label{solt}\end{equation} with \begin{equation} {G}_{\alpha\beta}(t) = \int_C \frac{ds}{2\pi i}~ \widetilde{G}_{\alpha\beta}(s)~ e^{st} \,. \end{equation} The Laplace transform of the propagator is given by \begin{equation} \widetilde{G}(s) = \Bigg[s^2 \mathbb{I}+\mathbb{K} +\widetilde{\Sigma}(s)\Bigg]^{-1}\label{tildeG}\end{equation} and $C$ is the Bromwich contour that runs parallel to the imaginary axis and to the right of all the singularities of $\widetilde{G}$ in the complex s-plane. It follows from eqns. (\ref{tildeG}) and (\ref{LTE}) that the propagator matrix $G_{\alpha \beta}(t)$ is a homogeneous solution of the equation of motion (\ref{HL}) with initial conditions \begin{equation} G_{\alpha \beta}(0)=0~;~\dot{G}_{\alpha \beta}(0)=\delta_{\alpha \beta} \,. 
\label{Gini}\end{equation} It is convenient to introduce the following combinations \begin{eqnarray} \widetilde{\Delta}(s) & = & \frac{1}{\delta M^2} \Big[\widetilde{\Sigma}_{aa}(s)+V_{aa}\Big] \label{Delta} \\ \widetilde{\rho}(s) & = & \Bigg[\Big(\cos2\theta-\widetilde{\Delta}(s)\Big)^2+\sin^2 2\theta \Bigg]^{\frac{1}{2}}\label{widerho}\end{eqnarray} and the matrix \begin{equation} \mathbb{A}(s) = \frac{1}{\widetilde{\rho}(s)}~\Bigg[ \begin{array}{cc} \cos2\theta-\widetilde{\Delta}(s) & -\sin 2\theta \\ -\sin 2\theta & -\cos2\theta+\widetilde{\Delta}(s) \\ \end{array} \Bigg]\,,\label{mtxA}\end{equation} in terms of which we find \begin{equation} \widetilde{G}(s) = \frac{1}{2}\, \frac{\mathbb{I}+\mathbb{A}(s)}{s^2+\overline{\omega}^2(k)+\frac{\delta M^2}{2}\big(\widetilde{\Delta}(s)-\widetilde{\rho}(s)\big)} ~+~\frac{1}{2}\, \frac{\mathbb{I}-\mathbb{A}(s)}{s^2+\overline{\omega}^2(k)+ \frac{\delta M^2}{2}\big(\widetilde{\Delta}(s)+\widetilde{\rho}(s)\big)}\,.\label{Gofs}\end{equation} Each term in this expression features poles in the complex s-plane near $s \approx \pm \, i \, \overline{\omega}(k)$ which are found by first performing the analytic continuation $s\rightarrow i\omega + 0^+$ upon which the denominators in $\widetilde{G}(s)$ become \begin{equation} s^2+\overline{\omega}^2(k)+ \frac{\delta M^2}{2}\big(\widetilde{\Delta}(s)\mp\widetilde{\rho}(s)\big) \rightarrow -\omega^2+\overline{\omega}^2(k)+\frac{1}{2} \Bigg[\textrm{Re}\Sigma_{aa}(\omega) + i \, \textrm{Im}\Sigma_{aa}(\omega)+V_{aa} \Bigg]\mp \frac{\delta M^2}{2}\rho(\omega) \end{equation} where the analytic continuations are given by \begin{eqnarray} \rho(\omega) & = & \Bigg[\Big(\cos 2\theta - \Delta_R(\omega)-i\Delta_I(\omega)\Big)^2+(\sin2\theta)^2 \Bigg]^{\frac{1}{2}}\label{rhomega}\\ \Delta_R(\omega) & = & \frac{\Big[\textrm{Re}\Sigma_{aa}(\omega)+V_{aa} \Big]}{{\delta M^2}}~~;~~\Delta_I(\omega) = \frac{\textrm{Im}\Sigma_{aa}(\omega)}{\delta M^2}\,. 
\label{DRI} \end{eqnarray} The complex poles describe \emph{quasiparticles}, the real part determines their dispersion relation and the imaginary part their damping rate in the medium. At this stage it is convenient to introduce the following variables \begin{equation} \overline{\Delta}_R \equiv \Delta_R(\overline{\omega}(k)) = \frac{\Big[ V_{aa}+ \textrm{Re}\Sigma_{aa}(\overline{\omega}(k)) \Big]}{\delta M^2}\end{equation} \begin{equation} \widetilde{\gamma} \equiv \frac{\Delta_I(\overline{\omega}(k))}{\rho_0} = \frac{\textrm{Im}\Sigma_{aa}(\overline{\omega}(k))}{\delta M^2\,\rho_0}\label{gamatil}\end{equation} and write \begin{equation} \rho(\overline{\omega}(k)) = \rho_0\,r \,e^{-i\alpha} \end{equation} where \begin{eqnarray} \rho_0 & = & \Bigg[\Big(\cos 2\theta - \overline{\Delta}_R \Big)^2+\Big(\sin2\theta\Big)^2\Bigg]^{\frac{1}{2}}\label{rho0} \\ r & = & \Bigg[\Big(1 - \widetilde{\gamma}^2 \Big)^2+\Big(2 \widetilde{\gamma}\cos2\theta_m\Big)^2\Bigg]^{\frac{1}{4}}\,, \label{r} \\ \alpha & = & \frac{1}{2}~ \textrm{arctg} \Bigg[ \frac{2 \widetilde{\gamma}\cos2\theta_m}{ 1 - \widetilde{\gamma}^2 }\Bigg] \label{alpha}\end{eqnarray} and the branch is chosen such that $0\leq \textrm{arctg}\big[\cdots\big]\leq \pi$. The mixing angle in the medium, $\theta_m$, is defined by the relations \begin{equation} \cos 2\theta_m = \frac{\cos 2\theta - \overline{\Delta}_R }{\rho_0} ~~;~~ \sin2\theta_m = \frac{\sin 2\theta}{\rho_0} \,, \label{thetam} \end{equation} an MSW resonance in the medium occurs whenever\cite{book1,book2,book3} \begin{equation} \cos 2\theta = \overline{\Delta}_R \,. 
\label{MSW}\end{equation} The \emph{only} approximations to be used are the following \begin{equation} \frac{\delta M^2}{\overline{\omega}^{\,2}(k)}\ll 1~;~\frac{\textrm{Re}\Sigma_{aa}(\omega)}{\overline{\omega}^{\,2}(k)}\ll1~;~ \frac{\textrm{Im}\Sigma_{aa}(\omega)}{\overline{\omega}^{\,2}(k)}\ll1 \label{onlyappx}\end{equation} These are all consistent with the ultrarelativistic limit, small radiative corrections and the narrow width limit, all approximations used in the case of neutrinos. Using these approximations we find the following complex poles: \begin{itemize} \item{ The first term in (\ref{Gofs}) features complex poles at \begin{equation} \omega = \pm \, \Omega_1 + i\frac{\Gamma_1}{2} \label{pole1} \end{equation} with \begin{eqnarray} && \Omega_1 = \overline{\omega}(k)+ \frac{1}{4\overline{\omega}(k)}\Bigg[ \textrm{Re}\Sigma_{aa}(\overline{\omega}(k))+V_{aa}-\delta M^2~\rho_0 \,r \cos\alpha \Bigg] \label{ome1}\\&& \Gamma_1 = ~\frac{\Gamma_{aa}}{2}\Big[1+ \frac{ r \sin\alpha}{\widetilde{\gamma}}\Big] \label{gama1c} \end{eqnarray} } \item{ The second term in (\ref{Gofs}) features complex poles at \begin{equation} \omega = \pm \, \Omega_2 + i\frac{\Gamma_2}{2} \label{pole2} \end{equation} with \begin{eqnarray} && \Omega_2 = \overline{\omega}(k)+ \frac{1}{4\overline{\omega}(k)} \Bigg[ \textrm{Re}\Sigma_{aa}(\overline{\omega}(k))+V_{aa}+\delta M^2~\rho_0 \,r \cos\alpha\Bigg] \label{ome2}\\&& \Gamma_2 = ~\frac{\Gamma_{aa}}{2}\Big[1- \frac{ r \sin\alpha}{\widetilde{\gamma}}\Big] \label{gama2c} \end{eqnarray} } \end{itemize} where \begin{equation} \Gamma_{aa} = \frac{ \textrm{Im}\Sigma_{aa}(\overline{\omega}(k))}{ \overline{\omega}(k)} \label{Gamaaa}\end{equation} is the interaction rate for the active species \emph{in absence of mixing} in the limit $\overline{\omega}^{\,2}(k) \gg \delta M^2$, which is of relevance for ultrarelativistic or nearly degenerate neutrinos. 
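The widths (\ref{gama1c}) and (\ref{gama2c}) can be evaluated numerically from $r$, $\alpha$ and $\widetilde{\gamma}$, and checked against the limiting forms quoted in the summary, $\Gamma_1 \to \Gamma_{aa}\cos^2\theta_m$, $\Gamma_2 \to \Gamma_{aa}\sin^2\theta_m$ for $\widetilde{\gamma}\ll 1$ and $\Gamma_2 \to \Gamma_{aa}\sin^2 2\theta_m/4\widetilde{\gamma}^2$ for $\widetilde{\gamma}\gg 1$; here $\theta_m$ and $\Gamma_{aa}$ are treated as free inputs with arbitrary values:

```python
import numpy as np

def widths(theta_m, gamma_tilde, Gamma_aa):
    """Quasiparticle widths Gamma_1, Gamma_2 from r, alpha, gamma_tilde."""
    c = np.cos(2 * theta_m)
    r = ((1 - gamma_tilde ** 2) ** 2 + (2 * gamma_tilde * c) ** 2) ** 0.25
    # branch 0 <= 2*alpha <= pi (valid here since cos(2 theta_m) > 0)
    alpha = 0.5 * np.arctan2(2 * gamma_tilde * c, 1 - gamma_tilde ** 2)
    G1 = 0.5 * Gamma_aa * (1 + r * np.sin(alpha) / gamma_tilde)
    G2 = 0.5 * Gamma_aa * (1 - r * np.sin(alpha) / gamma_tilde)
    return G1, G2

theta_m, Gamma_aa = 0.4, 1.0
# weak damping: Gamma_1 -> cos^2, Gamma_2 -> sin^2 of theta_m
G1, G2 = widths(theta_m, 1e-3, Gamma_aa)
print(G1, Gamma_aa * np.cos(theta_m) ** 2)
print(G2, Gamma_aa * np.sin(theta_m) ** 2)
# strong damping: Gamma_2 -> Gamma_aa sin^2(2 theta_m)/(4 gamma_tilde^2)
G1s, G2s = widths(theta_m, 1e3, Gamma_aa)
print(G2s, Gamma_aa * np.sin(2 * theta_m) ** 2 / (4 * 1e6))
```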
In what follows we suppress the argument $\overline{\omega}(k)$ in the quantities $\Delta_{R,I}$, etc., to simplify notation. Near the complex poles the analytic continuation $\widetilde{G}(s=i\omega+0^+)$ features a Breit-Wigner form, and the inverse Laplace transform can be performed by approximating the analytic continuation by the corresponding Breit-Wigner Lorentzians. We find \begin{eqnarray} G(t) = && \frac{e^{i\Omega_1t} ~ e^{-\frac{\Gamma_1}{2} t}}{2i\Omega_1}\frac{1}{2}\Big[\mathbb{I}+\mathbb{T}\Big] -\frac{e^{-i\Omega_1t} ~e^{-\frac{\Gamma_1}{2} t}}{2i\Omega_1}\frac{1}{2}\Big[\mathbb{I}+\mathbb{T}^*\Big]+ \nonumber \\ && \frac{e^{i\Omega_2t} ~e^{-\frac{\Gamma_2}{2} t}}{2i\Omega_2}\frac{1}{2}\Big[\mathbb{I}-\mathbb{T}\Big] -\frac{e^{-i\Omega_2t} ~e^{-\frac{\Gamma_2}{2} t}}{2i\Omega_2}\frac{1}{2}\Big[\mathbb{I}-\mathbb{T}^*\Big]\label{Goft}\end{eqnarray} where we have neglected wave function renormalization (residues at the poles) and introduced the complex matrix \begin{equation} \mathbb{T} = \frac{e^{i\alpha}}{{r}} \left[ \begin{array}{cc} \cos2\theta_m -i\widetilde{\gamma} & -\sin 2\theta_m \\ -\sin 2\theta_m & -\cos2\theta_m +i\widetilde{\gamma} \\ \end{array} \right]\label{Tmtx} \end{equation} where all quantities are evaluated at $\omega =\overline{\omega}(k)$ and we have used the approximations (\ref{onlyappx}). Inserting the result (\ref{Goft}) into the solution (\ref{solt}) we obtain the complete solution for the time evolution of the Heisenberg operators. The Breit-Wigner approximation leading to exponential damping in (\ref{Goft}) is a Markovian approximation\cite{qobooks}.
The full solution requires the initial conditions on the Heisenberg operators $q(0),\dot{q}(0)$; it is convenient to expand these in a basis of creation and annihilation operators of flavor states, \begin{equation} q_\beta(0) = \frac{1}{\sqrt{2\omega_\beta}}\Big[a_\beta (0)+ a^\dagger_\beta(0)\Big]~~;~~\dot{q}_\beta(0) = -i \frac{\omega_\beta}{ \sqrt{2\omega_\beta}}\Big[a_\beta(0) - a^\dagger_\beta(0)\Big]~~;~~\beta= a,s \label{q0}\end{equation} where $\omega_{a,s}$ are the frequencies associated with the flavor eigenstates, given by eqn. (\ref{flavfreqs}). Under the validity of the approximations (\ref{onlyappx}), we can approximate \begin{equation} \omega_a\sim \omega_s \sim \Omega_1\sim \Omega_2 \sim \overline{\omega}(k) \label{freqappx}\end{equation} leading to the simplified form \begin{eqnarray} q_\alpha(t) \approx && \frac{1}{\sqrt{2\overline{\omega}(k)}} \Bigg\{e^{-i\Omega_1t}~e^{-\frac{\Gamma_1}{2}t}\,\frac{1}{2}\Big[\mathbb{I}+\mathbb{T}^*\Big]+ e^{-i\Omega_2t}~e^{-\frac{\Gamma_2}{2}t}\,\frac{1}{2}\Big[\mathbb{I}-\mathbb{T}^*\Big]\Bigg\}_{\alpha\beta}a_\beta(0) + h.c. + \nonumber \\ && \int^t_0 G_{\alpha\beta}(t')\xi_\beta(t-t')dt'\,.
\label{qfinoft}\end{eqnarray} Under the same approximations, we find the Heisenberg annihilation operators at an arbitrary time from \begin{equation} a_\alpha(t) = \sqrt{\frac{\omega_\alpha}{2}} \left[q_\alpha(t)-\frac{p_\alpha(t)}{i\omega_\alpha}\right]~~;~~p_\alpha(t)=\dot{q}_\alpha(t)\,; \label{aoft}\end{equation} these are given by \begin{eqnarray} a_\alpha(t) \approx && \Bigg\{e^{-i\Omega_1t}~e^{-\frac{\Gamma_1}{2}t}~\frac{1}{2}\Big[\mathbb{I}+\mathbb{T}^*\Big]+ e^{-i\Omega_2t}~e^{-\frac{\Gamma_2}{2}t}~\frac{1}{2}\Big[\mathbb{I}-\mathbb{T}^*\Big]\Bigg\}_{\alpha\beta}a_\beta(0)+ \nonumber \\&& \sqrt{\frac{\overline{\omega}(k)}{2}} \int^t_0 \left[G(t')+ \frac{i \dot{G}(t')}{\overline{\omega}(k)}\right]_{\alpha\beta}\xi_\beta(t-t')\,dt' \label{aHoft}\end{eqnarray} where we have used the initial condition $G_{\alpha\beta}(0)=0$ (see eqn. (\ref{Gini})) and (\ref{onlyappx}). Under these same approximations we find \begin{equation} G(t')+ \frac{i \dot{G}(t')}{\overline{\omega}(k)} \simeq \frac{i}{\overline{\omega}(k)}\Bigg\{ e^{-i\Omega_1 t'}~e^{-\frac{\Gamma_1}{2}t'} \frac{1}{2} \Big[\mathbb{I}+\mathbb{T}^*\Big]+ e^{-i\Omega_2 t'}~e^{-\frac{\Gamma_2}{2}t'} \frac{1}{2} \Big[\mathbb{I}-\mathbb{T}^*\Big]\Bigg\} \label{GdotG} \end{equation} \subsection{Transition probability} The result (\ref{aHoft}) allows us to obtain the \emph{generalized} transition probability from expectation values of these operators in the initial density matrix.
Denoting $\langle a (t) \rangle = \mathrm{Tr}\hat{\rho}\,a(t) $ and using the result (\ref{xiav}) we find \begin{equation} \langle a_\alpha(t) \rangle = \Bigg\{e^{-i\Omega_1t}~e^{-\frac{\Gamma_1}{2}t}\,\frac{1}{2}\Big[\mathbb{I}+\mathbb{T}^*\Big]+ e^{-i\Omega_2t}~e^{-\frac{\Gamma_2}{2}t}\,\frac{1}{2}\Big[\mathbb{I}-\mathbb{T}^*\Big]\Bigg\}_{\alpha\beta}\langle a_\beta(0) \rangle \,.\label{aav}\end{equation} Consider an initial density matrix that yields a non-vanishing initial expectation value for the annihilation operator of the \emph{active} component, but a vanishing one for the sterile component, namely \begin{equation} \langle a_a(0) \rangle \neq 0~;~\langle a_s(0) \rangle = 0 \,. \label{ainitp}\end{equation} From the form of the matrix $\mathbb{T}$ given by eqn. (\ref{Tmtx}) we find the \emph{generalized} active-sterile transition probability \begin{equation} \mathcal{P}_{a\rightarrow s}(t) = \Bigg|\frac{\langle a_s(t) \rangle}{\langle a_a(0) \rangle}\Bigg|^2 = \frac{\sin^22\theta_m}{4\,r^2}\left[e^{-\Gamma_1t}+e^{-\Gamma_2t}-2e^{-\frac{1}{2}(\Gamma_1+\Gamma_2)t} \cos\left[(\Omega_2-\Omega_1)t\right]\right]\label{Pas} \end{equation} where \begin{equation} \Omega_2-\Omega_1 = \frac{\delta M^2\,\rho_0\,r }{2\overline{\omega}(k)}\, \cos\alpha ~~;~~\Gamma_1+\Gamma_2 = \frac{\textrm{Im}\Sigma_{aa}(\overline{\omega}(k))}{\overline{\omega}(k)}= \Gamma_{aa}\,,\label{freqdif}\end{equation} and $\Gamma_{1,2}$ are given by eqns. (\ref{gama1c},\ref{gama2c}). The expression (\ref{Pas}) is similar to the transition probability for particle-antiparticle mixing of neutral mesons\cite{cp,cp2}. The oscillatory term is a result of the coherent interference between the quasiparticle states in the medium, and its exponential suppression in (\ref{Pas}) identifies the \emph{decoherence} time scale $\tau_{dec} = 2/(\Gamma_1+\Gamma_2) = 2/\Gamma_{aa}$.
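A minimal numerical sketch of eqn. (\ref{Pas}), with assumed illustrative values for $\sin 2\theta_m$, $r$, $\Gamma_{1,2}$ and $\Omega_2-\Omega_1$ (arbitrary units), displays the damped oscillatory structure:

```python
import numpy as np

# Evaluate the generalized transition probability of eqn. (Pas) for
# hypothetical in-medium parameters (assumed values, arbitrary units).
s2m, r = 0.5, 1.0          # sin(2 theta_m), r
G1, G2 = 1.0e-2, 1.0e-4    # Gamma_1, Gamma_2
dOm    = 0.1               # Omega_2 - Omega_1

def P_as(t):
    return (s2m**2 / (4*r**2)) * (np.exp(-G1*t) + np.exp(-G2*t)
            - 2*np.exp(-0.5*(G1 + G2)*t) * np.cos(dOm*t))

t = np.linspace(0.0, 2000.0, 20001)
P = P_as(t)   # vanishes at t = 0, oscillates with period 2*pi/dOm under
              # an envelope that decays on the decoherence scale 2/(G1+G2)
```

The probability vanishes at $t=0$, stays non-negative, and is bounded by $\sin^22\theta_m/r^2$, consistent with the structure of (\ref{Pas}).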
\subsection{Weak and strong damping: quantum Zeno suppression} The above expressions for the propagation frequencies and damping rates of the quasiparticle excitations in the medium lead to two different cases: \begin{eqnarray} \big| \widetilde{\gamma}\big| & \ll & 1 \Rightarrow \textbf{weak~damping} \label{weakdamp}\\ \big| \widetilde{\gamma}\big| & \gtrsim & 1 \Rightarrow \textbf{strong~damping} \label{strongdamp} \end{eqnarray} These conditions can be written in a more illuminating manner: from the definitions (\ref{gamatil}) and (\ref{Gamaaa}) it follows that \begin{equation} \widetilde{\gamma} = \frac{\Gamma_{aa}}{2\Delta E}\label{ratio} \end{equation} where \begin{equation} \Delta E = \frac{\delta M^2 \,\rho_0}{2\overline{\omega}(k)} \label{delE}\end{equation} is the \emph{oscillation frequency in the medium in absence of damping}, namely $\Delta E$ is given by $|\Omega_2-\Omega_1|$ setting $\Delta_I=0$, i.e., the difference in the propagation frequencies arising solely from the index of refraction in the medium. The dimensionless quantity $\widetilde{\gamma}$ is the ratio between the \emph{oscillation time scale} $1/\Delta E$ and the \emph{decoherence time scale} $2/\Gamma_{aa}$. When $\widetilde{\gamma}\gg 1$ the environment induced decoherence occurs on time scales \emph{much shorter} than the oscillation scale and active-sterile oscillations are strongly suppressed. In the opposite limit $\widetilde{\gamma}\ll 1$ there are many oscillations before the environment induces decoherence. The strong damping condition (\ref{strongdamp}) is then recognized as the condition for quantum Zeno suppression by scattering in a medium\cite{stodolsky,kev1}. It corresponds to the limit in which the active mean free path is shorter than the oscillation length and decoherence by the medium suppresses active-sterile oscillations.
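The identity $\widetilde{\gamma} = \Gamma_{aa}/2\Delta E$ follows directly from the definitions (\ref{gamatil}), (\ref{Gamaaa}) and (\ref{delE}); a short sketch (assumed illustrative inputs, arbitrary units) confirms it and classifies the regime:

```python
import numpy as np

# Check gamma_tilde = Gamma_aa / (2 * Delta_E) from the definitions
# (gamatil), (Gamaaa), (delE); all parameter values are assumptions.
wbar, dM2, rho0, ImS = 50.0, 2.0, 0.7, 0.4

gtil   = ImS / (dM2 * rho0)          # eqn. (gamatil)
Gaa    = ImS / wbar                  # eqn. (Gamaaa)
DeltaE = dM2 * rho0 / (2 * wbar)     # eqn. (delE)
regime = "strong" if abs(gtil) >= 1 else "weak"
```

For these assumed inputs $\widetilde{\gamma}\approx 0.29$, i.e. the weakly damped regime.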
\subsubsection{Weak damping case: $\Big|\widetilde{\gamma}\Big|\ll 1$} For weak damping it follows that \begin{equation} r \approx 1 ~~;~~ \sin\alpha \approx \widetilde{\gamma}\cos 2\theta_m \label{alphawd} \end{equation} and the widths $\Gamma_{1,2}$ given by (\ref{gama1c},\ref{gama2c}) become \begin{equation} \Gamma_1 = \Gamma_{aa}\cos^2\theta_m ~~;~~ \Gamma_2 = \Gamma_{aa}\sin^2\theta_m \,. \label{gamawd}\end{equation} For the oscillation frequency we obtain \begin{equation} \Omega_2-\Omega_1 = \Delta E = \frac{\delta M^2\,\rho_0}{2\overline{\omega}(k)} \label{wdosc}\end{equation} and \begin{equation} \mathbb{T} \simeq \left( \begin{array}{cc} \cos2\theta_m & -\sin 2\theta_m \\ -\sin 2\theta_m & -\cos2\theta_m \\ \end{array} \right) = U^{-1}(\theta_m)\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right) U(\theta_m)\label{Twd}\end{equation} where $U(\theta)$ is the unitary matrix given by eqn. (\ref{trafo}). Introducing the Heisenberg annihilation and creation operators \emph{in the medium} as \begin{equation} \left( \begin{array}{c} a_1(t) \\ a_2(t) \\ \end{array} \right) = U^{-1}(\theta_m) \left( \begin{array}{c} a_a(t) \\ a_s(t) \\ \end{array} \right) \label{Hopmed} \end{equation} and similarly with the creation operators, the time evolution (\ref{aav}) in the weakly damped case yields \begin{equation} \left( \begin{array}{c} \langle a_1(t)\rangle \\ \langle a_2(t) \rangle \\ \end{array} \right) = \left( \begin{array}{cc} e^{-i\Omega_1t}~e^{-\frac{\Gamma_1}{2}t} & 0 \\ 0 & e^{-i\Omega_2t}~e^{-\frac{\Gamma_2}{2}t} \\ \end{array} \right) \left( \begin{array}{c} \langle a_a(0) \rangle \\ \langle a_s(0) \rangle \\ \end{array} \right) \,.\label{Hopmedoft} \end{equation} Therefore, in the weak damping regime, the Heisenberg operators $a^\dagger_{1,2}\,,\,a_{1,2}$ create and annihilate the \emph{in-medium states} that propagate with frequencies $\Omega_{1,2}$ and their ensemble averages damp out with the widths $\Gamma_{1,2}$. 
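The weak damping limits (\ref{gamawd}) can be checked against the full widths (\ref{gama1c},\ref{gama2c}); a sketch with an assumed small $\widetilde{\gamma}$ and illustrative $\theta_m$, in units of $\Gamma_{aa}$:

```python
import numpy as np

# Compare the exact widths Gamma_{1,2} of eqns. (gama1c, gama2c) with their
# weak damping limits (gamawd) for a small, assumed gamma_tilde.
gtil, thm = 1.0e-2, 0.4                # |gamma_tilde| << 1, theta_m assumed
c2m = np.cos(2*thm)
Gaa = 1.0                              # work in units of Gamma_aa

r     = ((1 - gtil**2)**2 + (2*gtil*c2m)**2) ** 0.25
alpha = 0.5 * np.arctan2(2*gtil*c2m, 1 - gtil**2)   # branch: 2*alpha in [0, pi]
G1    = 0.5 * Gaa * (1 + r*np.sin(alpha)/gtil)
G2    = 0.5 * Gaa * (1 - r*np.sin(alpha)/gtil)
```

The exact widths agree with $\Gamma_{aa}\cos^2\theta_m$ and $\Gamma_{aa}\sin^2\theta_m$ up to corrections of order $\widetilde{\gamma}^2$.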
The active-sterile transition probability in this limit is obtained from eqn. (\ref{Pas}), and is given by \begin{equation} \mathcal{P}_{a\rightarrow s}(t) = \Bigg|\frac{\langle a_s(t) \rangle}{\langle a_a(0) \rangle}\Bigg|^2 = \frac{\sin^22\theta_m}{4 }\left[e^{-\Gamma_1t}+e^{-\Gamma_2t}-2e^{-\frac{1}{2}(\Gamma_1+\Gamma_2)t} \cos\left[\Delta E t\right]\right]\,.\label{Paswdl}\end{equation} In the weakly damped case the decoherence time scale $\tau_{dec}=2/\Gamma_{aa}$ is much larger than the oscillation time scale $1/\Delta E$, hence many oscillations take place before the interaction with the environment leads to decoherence. These results reproduce those of references\cite{hobos,hozeno,hoboyste} and confirm their generality and applicability to the case of neutrinos with standard model interactions studied in ref.\cite{hozeno}. \subsubsection{Strong damping case: $\Big|\widetilde{\gamma} \Big| \gg 1 $} The case of (very) strong damping yields the following simplifications: \begin{eqnarray} r^2 & \sim & \widetilde{\gamma}^2 -1 + 2 \cos^2 2\theta_m \label{rsd}\\ {r}\,\sin\alpha & \sim & \widetilde{\gamma}~ \Big[1-\frac{\sin^22\theta_m}{2\widetilde{\gamma}^2} \Big] \,, \label{alfasd} \end{eqnarray} leading to the damping rates \begin{eqnarray} && \Gamma_1 \simeq \Gamma_{aa} \Bigg[1-\frac{\sin^22\theta_m}{4\widetilde{\gamma}^2} \Bigg]\approx \Gamma_{aa} \label{Gamma1sd}\\&& \Gamma_2 \simeq \Gamma_{aa}\,\frac{\sin^22\theta_m}{4\widetilde{\gamma}^2}\,.\label{Gamma2sd}\end{eqnarray} This is a remarkable result: the quasiparticle width $\Gamma_2$ becomes \emph{vanishingly small} in the strong damping regime, with important consequences for the production of the sterile species, as seen below.
Furthermore, the oscillation frequency is found to be \begin{equation} \Omega_2-\Omega_1 = \frac{\delta M^2 \rho_0}{2\,\overline{\omega}(k)} \cos2\theta_m = \Delta E \cos 2\theta_m \,; \label{oscfreqsd}\end{equation} this is another remarkable result in the strong damping regime: the oscillation frequency \emph{vanishes} at the MSW resonance. It follows from eqns. (\ref{alpha}) and (\ref{ome1},\ref{ome2}) that the vanishing of the oscillation frequency at an MSW resonance is an \emph{exact result} for any $\widetilde{\gamma}^2 >1$. This result implies that there is a degeneracy right at the resonance: unlike the quantum mechanical case, in which there is no level crossing, in the presence of strong environmental damping the two propagating states in the medium become \emph{degenerate} at the resonance, leading to a breakdown of adiabaticity. Furthermore, in this regime the transition probability (\ref{Pas}) is strongly suppressed by the factor $1/ \widetilde{\gamma}^2\ll1$; it is given by \begin{equation} \mathcal{P}_{a\rightarrow s}(t) = \Bigg|\frac{\langle a_s(t) \rangle}{\langle a_a(0) \rangle}\Bigg|^2 = \frac{\sin^22\theta_m}{4\, \widetilde{\gamma}^2} \,\left[e^{-\Gamma_1t}+e^{-\Gamma_2t}-2e^{-\frac{1}{2}(\Gamma_1+\Gamma_2)t} \cos\left[(\Omega_2-\Omega_1)t\right]\right]\,. \label{Passd} \end{equation} Since in the strong damping limit $\Delta \Omega =|\Omega_2-\Omega_1| \leq \Delta E $, it follows that $\tau_{dec} \ll 1/\Delta\Omega$ and the interference term is damped out before a single oscillation takes place. This is the quantum Zeno effect, in which the rapid scattering in the medium prevents the build up of coherence\cite{stodolsky}. The vanishing of the oscillation frequency, the suppression of the transition probability and of $\Gamma_2$ in the strong damping case are all manifestations of the quantum Zeno effect. Of particular importance is the vanishing of the oscillation frequency at the MSW resonance because this entails a breakdown of adiabaticity.
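Both strong damping statements can be checked against the exact expressions for $r$ and $\alpha$; in the sketch below (assumed $\widetilde{\gamma}\gg 1$ and illustrative $\theta_m$, in units where $\Delta E = 1$) the splitting approaches $\Delta E\cos2\theta_m$, vanishes at $\cos2\theta_m=0$, and $\Gamma_2$ approaches $\Gamma_{aa}\sin^22\theta_m/4\widetilde{\gamma}^2$:

```python
import numpy as np

# Numerical check of the strong damping results, using the exact expressions
# for r, alpha and the widths; gamma_tilde, theta_m and DeltaE are assumed
# illustrative values (arbitrary units).
gtil, thm, DeltaE = 10.0, 0.4636, 1.0          # |gamma_tilde| >> 1
Gaa = 2.0 * gtil * DeltaE                      # gamma_tilde = Gamma_aa/(2 DeltaE)

def splitting_and_width(c2):
    """Omega_2 - Omega_1 and Gamma_2 for a given cos(2 theta_m) = c2."""
    r     = ((1 - gtil**2)**2 + (2*gtil*c2)**2) ** 0.25
    alpha = 0.5 * np.arctan2(2*gtil*c2, 1 - gtil**2)   # branch: 2*alpha in [0, pi]
    dOm   = DeltaE * r * np.cos(alpha)
    G2    = 0.5 * Gaa * (1 - r*np.sin(alpha)/gtil)
    return dOm, G2

c2m, s2m   = np.cos(2*thm), np.sin(2*thm)
dOm, G2    = splitting_and_width(c2m)
dOm_res, _ = splitting_and_width(0.0)          # exactly at the MSW resonance
```

At the resonance the branch choice gives $\alpha=\pi/2$, so the splitting vanishes identically, in agreement with the exact statement above.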
\section{Production of the sterile species} The number of sterile particles is given by \begin{equation} N_s(t) = \langle a^\dagger_s(t) a_s(t) \rangle \label{Nofs}\end{equation} where the Heisenberg operators are given by eqns. (\ref{aHoft},\ref{GdotG}) and the expectation value is in the density matrix (\ref{DMat}). Let us consider the case in which the initial density matrix $\widehat{\rho}_q$ is diagonal \emph{in the flavor basis} with initial populations \begin{equation} N_a(0)= \langle a^\dagger_a(0)a_a(0)\rangle ~~;~~ N_s(0)=\langle a^\dagger_s(0)a_s(0)\rangle \,.\label{ininums}\end{equation} Using the results (\ref{aHoft},\ref{GdotG}) and the stochastic noise given by eqns. (\ref{noise},\ref{solQ0}) with the averages (\ref{nofp},\ref{xiav}) we find \begin{equation} N_s(t) = \mathcal{P}_{a\rightarrow s}(t) N_a(0) + \mathcal{P}_{s\rightarrow s}(t) N_s(0)+ N^\xi_s(t) \label{Nst}\end{equation} where $\mathcal{P}_{a\rightarrow s}(t)$ is the active-sterile transition probability given by eqn. (\ref{Pas}), and \begin{eqnarray} \mathcal{P}_{s\rightarrow s}(t) & = & \Big|e^{-i\Omega_1t}\,e^{-\frac{\Gamma_1}{2}t} f_- +e^{-i\Omega_2t}\,e^{-\frac{\Gamma_2}{2}t} f_+ \Big|^2 \label{Pss}\\ f_{\pm} & = & \frac{1}{2}\Big(1\pm \frac{e^{-i\alpha}}{ {r}}\big(\cos2\theta_m + i\widetilde{\gamma}\big)\Big)\,.
\label{fpm}\end{eqnarray} The contribution $N^\xi_s(t)$ is completely determined by the correlation function of the noise in the initial density matrix; it is given by \begin{equation} N^\xi_s(t)= \frac{\sin^22\theta_m}{4\,r^2} \int \frac{d\omega}{\pi} \frac{\mathrm{Im}\Sigma_{aa}(\omega)}{2\,\overline{\omega}(k)}\,n(\omega) \Big|F_1(\omega;t)-F_2(\omega;t)\Big|^2 \label{Nxi}\end{equation} where $n(\omega)=[e^{\omega/T}-1]^{-1}$ and \begin{equation} F_i (\omega;t) = \frac{e^{-i(\Omega_i-\omega)t}\,e^{-\frac{\Gamma_i}{2}t}-1}{\Omega_i-\omega - i\,\frac{\Gamma_i}{2}} ~~;~~ i=1,2 \,.\label{Fi}\end{equation} The frequency integral is carried out by approximating the functions $F_i(\omega;t)$ as Breit-Wigner Lorentzians near their complex poles; the result is found to be \begin{eqnarray} N^\xi_s(t) = && \frac{\sin^2\theta_m \cos^2\theta_m}{r^2}\Bigg\{ \frac{\Gamma_{aa}}{\Gamma_1}\,n(\Omega_1) \big(1-e^{-\Gamma_1t}\big) + \frac{\Gamma_{aa}}{\Gamma_2}\,n(\Omega_2) \big(1-e^{-\Gamma_2t}\big) \nonumber \\ && - e^{-\frac{1}{2}(\Gamma_1+\Gamma_2)t}\, \frac{\Gamma_{aa}\big[n(\Omega_1) +n(\Omega_2)\big]}{ \Big(\frac{\Gamma_{aa}}{2}\Big)^2+\Big(\Omega_2-\Omega_1\Big)^2 } \Bigg[\frac{\Gamma_{aa}}{2}\Big(1-\cos\big[(\Omega_2-\Omega_1)t\big] \Big)+ (\Omega_2-\Omega_1)\sin\big[(\Omega_2-\Omega_1)t\big] \Bigg] \Bigg\} \,. \label{Nxifin} \end{eqnarray} The set of equations (\ref{Nst},\ref{Pas}, \ref{Pss}) and (\ref{Nxifin}) completely determines the time evolution of the sterile distribution function $N_s(t)$.
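As a consistency sketch (assumed parameter values), the $t\rightarrow\infty$ limit of (\ref{Nxifin}) can be evaluated with the exact widths: for small $\widetilde{\gamma}$ the prefactors reduce so that $N^\xi_s(\infty)\rightarrow \sin^2\theta_m\, n(\Omega_1)+\cos^2\theta_m\, n(\Omega_2)$:

```python
import numpy as np

# Weak damping check of the t -> infinity limit of the noise contribution
# (Nxifin): the exponentials die out and only the first line survives.
# gamma_tilde, theta_m and the occupations n(Omega_i) are assumed values.
gtil, thm = 1.0e-3, 0.5
n1, n2 = 0.6, 0.2                           # n(Omega_1), n(Omega_2), assumed
c2m = np.cos(2*thm)

r     = ((1 - gtil**2)**2 + (2*gtil*c2m)**2) ** 0.25
alpha = 0.5 * np.arctan2(2*gtil*c2m, 1 - gtil**2)
g1    = 0.5 * (1 + r*np.sin(alpha)/gtil)    # Gamma_1 / Gamma_aa
g2    = 0.5 * (1 - r*np.sin(alpha)/gtil)    # Gamma_2 / Gamma_aa
pref  = np.sin(thm)**2 * np.cos(thm)**2 / r**2
N_inf = pref * (n1/g1 + n2/g2)              # t -> infinity limit of (Nxifin)
```

This reproduces the asymptotic occupation appearing in the weak damping expression derived below, up to corrections of order $\widetilde{\gamma}^2$.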
\subsection{Weak and strong damping limits} \subsubsection{Weak damping: $\big|\widetilde{\gamma}\big| \ll 1$} In the weak damping limit the results above yield \begin{eqnarray} r & \sim & 1 ~;~ \sin\alpha \sim \mathcal{O}(\widetilde{\gamma})~;~ \cos\alpha \sim 1 \nonumber \\ \Gamma_1 & \sim & \Gamma_{aa} \cos^2\theta_m ~;~ \Gamma_2 \sim \Gamma_{aa} \sin^2\theta_m \nonumber\\ \Omega_2-\Omega_1 & \sim & \frac{\delta M^2 \,\rho_0}{2\overline{\omega}(k)} =\Delta E \label{wdquans}\end{eqnarray} which lead to the following expression for the number density of the sterile species, valid for an initial density matrix diagonal in the flavor basis and with $N_s(0)=0\,,N_a(0)\neq 0 $: \begin{eqnarray} N_s(t) = && N_a(0)\,\frac{ \sin^22\theta_m}{4 }\left[e^{-\Gamma_1t}+e^{-\Gamma_2t}-2e^{-\frac{\Gamma_{aa}}{2} t} \cos\left[\Delta E t\right]\right] \nonumber \\ && + \sin^2\theta_m \,n(\Omega_1) \big(1-e^{-\Gamma_1t}\big) + \cos^2\theta_m\,n(\Omega_2) \big(1-e^{-\Gamma_2t}\big) +\mathcal{O}(\widetilde{\gamma})\,. \label{wdNs}\end{eqnarray} This result reproduces those in ref. \cite{hoboyste} for $N_s(0)=0$. We note that the production of the sterile species \emph{cannot} be described in terms of a simple rate equation in the weak damping case because it depends on several different time scales. \subsubsection{Strong damping limit: $\big|\widetilde{\gamma}\big| \gg 1$} While the strong damping limit $\big|\widetilde{\gamma}\big| \gtrsim 1$ must be studied numerically, progress can be made in the \emph{very} strong damping regime $\big|\widetilde{\gamma}\big| \gg 1$. It will be seen below that this regime is relevant for sterile neutrinos near an MSW resonance.
In this regime the above results yield \begin{eqnarray} r^2 & \sim & \widetilde{\gamma}^2 \nonumber \\ \Gamma_1 & \sim & \Gamma_{aa}\nonumber \\ \Gamma_2 & \sim & \Gamma_{aa} ~\frac{\sin^22\theta_m}{4\widetilde{\gamma}^2}\nonumber \\ \Omega_2-\Omega_1 & \sim & \frac{\delta M^2 \rho_0}{2\,\overline{\omega}(k)} \cos2\theta_m = \Delta E \cos2\theta_m \,. \label{sdquans}\end{eqnarray} The coefficients \begin{eqnarray} \frac{1}{2\,r^2} \, \frac{\Gamma^2_{aa}}{ \Big(\frac{\Gamma_{aa}}{2}\Big)^2+\Big(\Omega_2-\Omega_1\Big)^2 } & = & \frac{2 }{\widetilde{\gamma}^2+\cos^22\theta_m} \sim \frac{2}{\widetilde{\gamma}^2}\ll 1 \nonumber \\ \frac{1}{r^2}\, \frac{\Gamma_{aa}(\Omega_2-\Omega_1)}{\Big(\frac{\Gamma_{aa}}{2}\Big)^2+\Big(\Omega_2-\Omega_1\Big)^2} & = & \frac{2 \cos2\theta_m }{\widetilde{\gamma}(\widetilde{\gamma}^2+\cos^22\theta_m)} \sim \frac{1}{ \widetilde{\gamma} ^3} \ll 1\,; \label{sdcoeffs} \end{eqnarray} therefore the second line in the noise contribution (\ref{Nxifin}) becomes subleading. Furthermore, the ratios \begin{eqnarray} \frac{\Gamma_{aa}}{r^2 \,\Gamma_1} & \sim & \frac{1}{\widetilde{\gamma}^2} \ll 1\nonumber \\\frac{\Gamma_{aa}}{r^2 \,\Gamma_2} & \sim & \frac{4}{\sin^22\theta_m}\,; \end{eqnarray} therefore only the term with $\Gamma_{aa}/\Gamma_2$ survives in the first line in (\ref{Nxifin}). Since the transition probability in the first term in eqn. (\ref{Nst}) satisfies $\mathcal{P}_{a\rightarrow s} \propto 1/\widetilde{\gamma}^2$ (see eqn. (\ref{Pas})), this term is also strongly suppressed; therefore in the strong damping limit \begin{equation} N_s(t)\sim N^\xi_s(t) \sim n(\Omega_2) \big(1-e^{-\Gamma_2t}\big) \,.
\label{Nxisdlim}\end{equation} Hence in this limit the sterile population obeys a simple \emph{rate} equation \begin{equation} \frac{dN_s(t)}{dt} = \Gamma_2 \big[n(\Omega_2)-N_s(t)\big]\,; \label{rateNs}\end{equation} however the \emph{sterile production rate} \begin{equation} \Gamma_2= \Gamma_{aa} ~\frac{\sin^22\theta_m}{4\widetilde{\gamma}^2} \ll \Gamma_{aa} \label{Nsrate}\end{equation} becomes vanishingly small in the strong damping case. We conclude that \emph{sterile species production is strongly suppressed in the strong damping case as a consequence of the quantum Zeno effect}. The non-perturbative nature of this result is manifest by writing \begin{equation} \Gamma_2 = {\sin^22\theta_m} \frac{(\Delta E)^2}{\Gamma_{aa}}\,. \label{npg2}\end{equation} We note that with $\widetilde{\gamma} = \Gamma_{aa}/2\Delta E$ (see eqn. (\ref{ratio})) this result coincides with the effective rate in the quantum Zeno limit $2\Delta E/\Gamma_{aa} \ll 1$ obtained in reference\cite{foot} and implemented in the numerical study in refs.\cite{kev1,dibari}. However, we argue below that in the case of sterile neutrinos the strong damping limit is \emph{only available} near an MSW resonance; far away from this resonance the non-equilibrium dynamics corresponds to weak damping and the time evolution of $N_s(t)$ \emph{cannot} be described by a simple rate equation. \section{Quantum master and kinetic equations} Although we have obtained the time evolution of the distribution function from the exact solution of the Heisenberg-Langevin equations (under the approximation (\ref{freqappx})), within the cosmological setting it is more convenient to obtain a set of quantum kinetic equations for the distribution functions. This is achieved by first obtaining the quantum master equation for the time evolution of the reduced density matrix.
In the case of neutrinos, the index of refraction term $V_{aa}$ is of first order in $G_F$ (Fermi's effective weak coupling) while the self-energy $\Sigma=\Sigma_R+i\Sigma_I$ is of second order. Furthermore, the study in the previous sections clearly shows that the contribution of the real part of the self-energy yields a second order renormalization of the index of refraction which can simply be absorbed into a redefinition of $V_{aa}$. The most important aspect of the second order self-energy correction arises from its imaginary part, which yields the damping rates of the collective quasiparticle excitations. The production of the sterile species is associated with this imaginary part, and not with the real part of the self-energy, which only renormalizes the index of refraction in the medium. Therefore it is convenient to include the index of refraction in the ``non-interacting'' part of the Hamiltonian by first diagonalizing the Hamiltonian for the system's degrees of freedom $\vec{q}$ corresponding to the first term in the Lagrangian (\ref{model}). This is achieved by introducing the \emph{mass eigenstates} in the medium with the index of refraction as follows. The matrix $\mathbb{K}$ in eqn. (\ref{matx2}) can be written as \begin{equation} \mathbb{K} = \Bigg(k^2+\overline{M}^{\,2}+\frac{V_{aa}}{2}\Bigg)\,\mathbb{I}+ \frac{\delta M^2\,\rho_0}{2}\, \Bigg[ \begin{array}{cc} - \cos2\theta_m & \sin2\theta_m \\ \sin 2\theta_m & \cos2\theta_m \\ \end{array} \Bigg]\,, \label{nuK}\end{equation} where the expressions for $\rho_0$ and the mixing angle in the medium are the same as in (\ref{rho0},\ref{thetam}) but neglecting the \emph{second} order correction $\mathrm{Re}\Sigma_{aa}$ to the index of refraction.
The diagonalization of the Hamiltonian is achieved via the unitary transformation (\ref{trafo}), but in terms of the mixing angle in the medium $\theta_m$ that includes the correction from the index of refraction, namely \begin{equation} \left(\begin{array}{c} q_a \\ q_s\\ \end{array}\right) = U(\theta_m) ~\Bigg(\begin{array}{c} q_1 \\ q_2\\ \end{array}\Bigg)~~;~~U(\theta_m) = \Bigg( \begin{array}{cc} \cos\theta_m & \sin\theta_m\\ -\sin\theta_m & \cos\theta_m \\ \end{array} \Bigg)\,. \label{nutrafo} \end{equation} Again, to avoid a proliferation of indices we refer to the coordinates that diagonalize the Hamiltonian with the index of refraction with the labels $1,2$, which now \emph{should not} be identified with those labeling the complex poles in section (\ref{sec:model}). We expand $q_{1,2}$ and their canonical momenta $p_{1,2}$ in terms of Heisenberg annihilation and creation operators \begin{equation} q_i = \frac{1}{\sqrt{2\omega_i}}\Big[a_i + a^\dagger_i\Big]~~;~~p_i = -i\frac{\omega_i}{\sqrt{2\omega_i}}\Big[a_i - a^\dagger_i\Big] \label{qps}\end{equation} where the frequencies in the medium are \begin{eqnarray} \omega_1 & \sim & \overline{\omega}(k) +\frac{V_{aa}}{4\,\overline{\omega}(k)} -\frac{\delta M^2\,\rho_0}{4\,\overline{\omega}(k)} \nonumber \\ \omega_2 & \sim & \overline{\omega}(k)+\frac{V_{aa}}{4\,\overline{\omega}(k)} +\frac{\delta M^2\,\rho_0}{4\,\overline{\omega}(k)} \,.
\label{nufreqs}\end{eqnarray} Under the approximation (\ref{onlyappx}) the active and sterile annihilation (and creation) operators $a_{a,s}$ are related to $a_{1,2}$ as \begin{equation} a_a = \cos\theta_m a_1 + \sin\theta_m a_2 ~~;~~ a_s = \cos\theta_m a_2 - \sin\theta_m a_1 \,.\label{as}\end{equation} The total system-bath Hamiltonian becomes $H= H_0 + H_I $ where \begin{eqnarray} H_0 & = & \sum_{i=1,2} a^\dagger_i a_i \, \omega_i+\sum_p \frac{1}{2}\Big[P^2_p+W^2_p Q^2_p \Big] \label{H0}\\ H_I & = & (q_1\cos\theta_m+q_2\sin\theta_m)\sum_p C_pQ_p \,.\label{Hint} \end{eqnarray} The density matrix in the interaction picture of $H_0$ is \begin{equation} \widehat{\rho}_{i}(t) = e^{i{H}_0 t}e^{-i{H} t}\, \widehat{\rho}(0)\,e^{ i{H} t}e^{-i{H}_0 t}\label{rhoip}\end{equation} where $\widehat{\rho}(0)$ is given by eqn. (\ref{DMat}). The equation of motion of the density matrix in the interaction picture is \begin{equation} \frac{d\widehat{\rho}_{i}(t)}{dt} = -i\left[H_{I}(t),\widehat{\rho}_{i}(t)\right] \label{eqrhoip}\end{equation} where $H_I(t) =e^{i {H}_0 t} H_I e^{-i {H}_0 t}$ is the interaction Hamiltonian in the interaction picture of $ {H}_0$. Iteration of this equation up to second order in the interaction yields\cite{qobooks} \begin{equation} \frac{d\widehat{\rho}_{i}(t)}{dt} = -i\left[H_{I}(t),\widehat{\rho}_{i}(0)\right]- \int^t_0 dt'\left[ H_I(t),\left[H_I(t'),\widehat{\rho}_{i}(t')\right]\right]+\cdots \label{2ndord}\end{equation} The \emph{reduced} density matrix for the system's variables $q$ is obtained from the total density matrix by tracing over the bath degrees of freedom $Q_p$, which are assumed to remain in equilibrium\cite{qobooks}. The following standard approximations are invoked\cite{qobooks}: \textbf{a): factorization:} the total density matrix is assumed to factorize \begin{equation}\widehat{\rho}_i(t)= \rho_{q,i}(t)\otimes\rho_Q(0)\label{fact}\end{equation} where it is assumed that the bath remains in equilibrium.
\textbf{b): Markovian approximation:} the memory of the evolution is neglected and in the double commutator in (\ref{2ndord}) $\widehat{\rho}_i(t')$ is replaced by $\widehat{\rho}_i(t)$ and taken out of the integral\cite{qobooks}. Taking the trace over the bath degrees of freedom yields the quantum master equation for the reduced density matrix, \begin{equation} \frac{d {\rho}_{R}(t)}{dt} = - \int^t_0 dt'{\mathrm{Tr}}\rho_Q\big\{\left[ H_I(t),\left[H_I(t'),\widehat{\rho}_{i}(t)\right]\right]\big\}+\cdots \label{2ndordred}\end{equation} where the first term has vanished because $Tr_Q \rho_Q(0) Q^{(0)}_p(t) =0$ since $Q^{(0)}_p(t)$ is a free harmonic oscillator in the interaction picture of $H_0$ (see eqn. (\ref{solQ0})). The trace over $Q$ in the double commutator requires the following ingredients \begin{eqnarray} \sum_{p,p'}\frac{ C_p C_{p'}}{\sqrt{4W_p W_{p'}}}\, \mathrm{Tr}\rho_Q(0) Q^{(0)}_p(t)Q^{(0)}_{p'}(t') & = & \sum_p \frac{C^2_p}{2W_p}\Big[(1+n(W_p))\,e^{-iW_p(t-t')}+n(W_p)\,e^{iW_p(t-t')}\Big] \nonumber \\ & = & \int \frac{d\omega}{\pi} \mathrm{Im}\Sigma_{aa}(\omega)(1+n(\omega))\,e^{-i\omega(t-t')} \label{gplus}\end{eqnarray} \begin{eqnarray} \sum_{p,p'}\frac{ C_p C_{p'}}{\sqrt{4W_p W_{p'}}}\, \mathrm{Tr}\rho_Q(0) Q^{(0)}_{p'}(t')Q^{(0)}_p(t) & = & \sum_p \frac{C^2_p}{2W_p}\Big[(1+n(W_p))\,e^{-iW_p(t'-t)}+n(W_p)\,e^{iW_p(t'-t)}\Big]\nonumber\\ & = & \int \frac{d\omega}{\pi} \mathrm{Im}\Sigma_{aa}(\omega)\, n(\omega) \,e^{-i\omega(t-t')} \label{gmin} \end{eqnarray} where the interaction picture operators $Q^{(0)}(t)$ are given by eqn. (\ref{solQ0}) and we have used eqns. (\ref{disper},\ref{SEprop}). 
Several standard approximations are invoked: terms that feature rapidly varying phases of the form $a^\dagger_i a^\dagger_j \,e^{i (\omega_i+\omega_j)t}$ and $a_i a_j e^{ -i(\omega_i+\omega_j)t}$ are averaged out in time, leading to their cancellation; in the quantum optics literature this is known as the ``rotating wave approximation''\cite{qobooks}, and similar terms are discarded in the kinetic approach in ref.\cite{raffkin,raffelt}. The time integrals are evaluated in the Weisskopf-Wigner approximation\cite{qobooks,hoboyste}. Finally, we also invoke the ultrarelativistic approximation $\omega_1 \sim \omega_2 \sim \overline{\omega}(k)$. Neglecting the second order energy shift (see eqn. (\ref{sigRE})), the final result for the quantum master equation is given by \begin{equation} \frac{d\rho_{R}(t)}{dt} = -\frac{\Gamma_{aa}}{2} \Bigg\{\cos^2\theta_m \mathcal{L}_{11}[\rho_R]+ \sin^2\theta_m \mathcal{L}_{22}[\rho_R]+ \frac{1}{2}\sin2\theta_m \Big(\mathcal{L}_{12}[\rho_R]+\mathcal{L}_{21}[\rho_R] \Big) \Bigg\}\label{qme}\end{equation} where $\mathcal{L}_{ij}[\rho_R]$ are the Lindblad operators\cite{qobooks} \begin{eqnarray} \mathcal{L}_{ij}[\rho_R] = && \big(1+n(\omega_i)\big)\Big[\rho_R a^\dagger_i a_j+ a^\dagger_j a_i \rho_R - a_i \rho_R a^\dagger_j- a_j \rho_R a^\dagger_i \Big] \nonumber \\ && + n(\omega_i) \Big[\rho_R a_i a^\dagger_j+ a_j a^\dagger_i \rho_R - a^\dagger_i \rho_R a_j - a^\dagger_j \rho_R a_i \Big] \,. \label{Lij}\end{eqnarray} In these expressions, the annihilation and creation operators carry the time dependence in the interaction picture, namely \begin{equation} a^\dagger_i(t)= a^\dagger_i(0)\,e^{i\omega_i t}~~;~~a_i(t)= a_i(0)\,e^{-i\omega_i t}\,. \label{ipas}\end{equation} The trace of the reduced density matrix is automatically conserved in time as a consequence of the unitary time evolution of the full density matrix.
Denoting the expectation value of any interaction picture operator $A(t)$ in the reduced density matrix by \begin{equation} \langle A \rangle (t) = \mathrm{Tr}\rho_R(t) A(t) \,,\label{aveR}\end{equation} we obtain the following equations for the expectation values of the annihilation operators \begin{equation} \frac{d}{dt} \left( \begin{array}{c} \langle a_1 \rangle(t) \\ \langle a_2 \rangle(t) \\ \end{array} \right)= \left( \begin{array}{cc} -i\omega_1 - \frac{\Gamma_{aa}}{2}\cos^2\theta_m & -\frac{\Gamma_{aa}}{4}\sin2\theta_m \\ -\frac{\Gamma_{aa}}{4}\sin2\theta_m & -i\omega_2 - \frac{\Gamma_{aa}}{2}\sin^2\theta_m \\ \end{array} \right) \left( \begin{array}{c} \langle a_1 \rangle(t) \\ \langle a_2 \rangle(t) \\ \end{array} \right) \label{amtx}\end{equation} The eigenvalues of the matrix in eqn. (\ref{amtx}) are found to be $ -i\widetilde{\Omega}_{1,2}-\Gamma_{1,2}/2$ where $\widetilde{\Omega}_{1,2}$ are obtained from eqns. (\ref{ome1},\ref{ome2}) by setting the second order contribution to the energy shift $\mathrm{Re}\Sigma_{aa}=0$, and $\Gamma_{1,2}$ are \emph{precisely} given by eqns.(\ref{gama1c},\ref{gama2c}) but again setting $\mathrm{Re}\Sigma_{aa}=0$ in $\rho_0$, which of course is a consequence of having neglected the second order energy shifts (real part of the self energy) in the quantum master equation. It is a straightforward exercise to obtain the (complex) eigenvectors of the matrix (\ref{amtx}) and to write $\langle a_{a,s} \rangle$ in terms of these through the relation (\ref{as}). 
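This correspondence can be made explicit numerically: for assumed parameter values (and with $\mathrm{Re}\Sigma_{aa}=0$), the eigenvalues of the matrix in eqn. (\ref{amtx}) coincide with $-i\widetilde{\Omega}_{1,2}-\Gamma_{1,2}/2$ to machine precision. A minimal sketch:

```python
import numpy as np

# Numerical check (assumed parameters, Re Sigma_aa set to zero) that the
# eigenvalues of the matrix in eqn. (amtx) reproduce the complex poles
# -i*Omega_{1,2} - Gamma_{1,2}/2 of eqns. (ome1, ome2, gama1c, gama2c).
wbar, dM2, theta, Vaa, ImS = 100.0, 1.0, 0.1, 0.3, 0.2

DR   = Vaa / dM2
rho0 = np.hypot(np.cos(2*theta) - DR, np.sin(2*theta))
thm  = 0.5 * np.arctan2(np.sin(2*theta), np.cos(2*theta) - DR)   # theta_m
gtil = ImS / (dM2 * rho0)
Gaa  = ImS / wbar
w1   = wbar + Vaa/(4*wbar) - dM2*rho0/(4*wbar)                   # eqn. (nufreqs)
w2   = wbar + Vaa/(4*wbar) + dM2*rho0/(4*wbar)

M = np.array([[-1j*w1 - 0.5*Gaa*np.cos(thm)**2, -0.25*Gaa*np.sin(2*thm)],
              [-0.25*Gaa*np.sin(2*thm),         -1j*w2 - 0.5*Gaa*np.sin(thm)**2]])

r     = ((1 - gtil**2)**2 + (2*gtil*np.cos(2*thm))**2) ** 0.25
alpha = 0.5 * np.arctan2(2*gtil*np.cos(2*thm), 1 - gtil**2)
Om1   = wbar + (Vaa - dM2*rho0*r*np.cos(alpha)) / (4*wbar)
Om2   = wbar + (Vaa + dM2*rho0*r*np.cos(alpha)) / (4*wbar)
G1    = 0.5 * Gaa * (1 + r*np.sin(alpha)/gtil)
G2    = 0.5 * Gaa * (1 - r*np.sin(alpha)/gtil)

poles = np.sort_complex(np.array([-1j*Om1 - G1/2, -1j*Om2 - G2/2]))
eigs  = np.sort_complex(np.linalg.eigvals(M))
```

The agreement is exact (up to rounding), confirming that the master equation captures both the in-medium frequencies and the widths.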
Fixing the initial values of the corresponding eigenvectors to yield the initial conditions $\langle a_a \rangle (0) \neq 0; \langle a_s \rangle (0) =0$ we find \begin{equation} \mathcal{P}_{a\rightarrow s}(t) = \Bigg|\frac{\langle a_s \rangle (t)}{\langle a_a \rangle (0)}\Bigg|^2 = \frac{\sin^22\theta_m}{4\,r^2}\left[e^{-\Gamma_1t}+e^{-\Gamma_2t}-2e^{-\frac{1}{2}(\Gamma_1+\Gamma_2)t} \cos\left[(\widetilde{\Omega}_2-\widetilde{\Omega}_1)t\right]\right]\label{Pasqme}\end{equation} which is the same as the transition probability (\ref{Pas}) but neglecting the second order correction from $\mathrm{Re}\Sigma_{aa}$. These results clearly show that the quantum master equation (\ref{qme}) correctly describes the non-equilibrium dynamics \emph{including the strong damping regime}, the only difference from the exact result being that the second order energy shift $\mathrm{Re}\Sigma_{aa}$ is neglected. The quantum master equation (\ref{qme}) is exactly the same as the one obtained in ref.\cite{hoboyste}. We now introduce the distribution functions \begin{equation} n_{ij} = \mathrm{Tr}\rho_R(t) a^\dagger_i(t) a_j(t)\,; \label{nij}\end{equation} the diagonal components describe the population of the \emph{in medium} states, and the off-diagonal components the coherences\cite{qobooks}.
Accounting for the free field time dependence of the operators $a^\dagger,a$ in the interaction picture, we find the following kinetic equations for the distribution functions \begin{eqnarray} \dot{n}_{11} & = & -\Gamma_{aa} \Big\{ \cos^2\theta_m \big(n_{11}-n(\omega_1)\big) + \frac{\sin2\theta_m}{4} \big(n_{12}+n^*_{12}\big) \Big\} \label{dotn11} \\\dot{n}_{22} & = & -\Gamma_{aa} \Big\{ \sin^2\theta_m \big(n_{22}-n(\omega_2)\big) + \frac{\sin2\theta_m}{4} \big(n_{12}+n^*_{12}\big) \Big\} \label{dotn22}\\\dot{n}_{12} & = & -i\Big(\omega_2-\omega_1\Big)n_{12}-\frac{\Gamma_{aa}}{2}\Bigg[n_{12}+\frac{\sin2\theta_m}{2}\big(n_{11}+n_{22} -n(\omega_1)-n(\omega_2)\big)\Bigg]\label{dotn12}\end{eqnarray} where $n(\omega_i)$ are the equilibrium distribution functions. In terms of the $n_{ij}(t)$ we obtain the time evolution of the active and sterile distribution functions via the relation (\ref{as}), namely \begin{eqnarray} N_a(t) & = & \cos^2\theta_m n_{11}(t) + \sin^2\theta_m n_{22}(t) + \frac{1}{2} \sin2\theta_m \big(n_{12}(t)+n^*_{12}(t)\big) \label{Naoftfin}\\N_s(t) & = & \sin^2\theta_m n_{11}(t) + \cos^2\theta_m n_{22}(t) - \frac{1}{2} \sin2\theta_m \big(n_{12}(t)+n^*_{12}(t)\big)\,. \label{Nsoftfin} \end{eqnarray} The weak damping limit can be studied in a perturbative expansion in $\widetilde{\gamma} \ll 1$ by considering the terms $n_{12},n^*_{12}$ in equations (\ref{dotn11},\ref{dotn22}) and the terms $n_{ii}-n(\omega_i);i=1,2$ in equation (\ref{dotn12}) as perturbations. This study was carried out in ref.\cite{hoboyste} and reproduces the result eqn. (\ref{wdNs}) for the sterile population. Therefore the set of quantum kinetic equations (\ref{dotn11}-\ref{dotn12}) reproduces the exact results in both the weak and strong damping cases.
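The kinetic equations (\ref{dotn11}-\ref{dotn12}) lend themselves to direct numerical integration. The sketch below uses assumed illustrative parameters and an illustrative initial state (only mode $1$ populated, no coherence); it checks that the populations relax to their equilibrium values, that the coherence decays, and that $N_a+N_s=n_{11}+n_{22}$ as implied by eqns. (\ref{Naoftfin},\ref{Nsoftfin}):

```python
import numpy as np

# Assumed illustrative parameters: frequencies, interaction rate, mixing angle,
# and equilibrium occupations n(omega_1), n(omega_2); not taken from the text.
w1, w2, Gamma, theta = 1.0, 2.0, 0.5, 0.6
n1eq, n2eq = 1.0, 0.8
s2 = np.sin(2.0*theta)
c2th, s2th = np.cos(theta)**2, np.sin(theta)**2

def rhs(n11, n22, n12):
    # right-hand sides of eqns. (dotn11)-(dotn12)
    re12 = n12 + np.conj(n12)
    d11 = -Gamma*(c2th*(n11 - n1eq) + 0.25*s2*re12)
    d22 = -Gamma*(s2th*(n22 - n2eq) + 0.25*s2*re12)
    d12 = -1j*(w2 - w1)*n12 - 0.5*Gamma*(n12 + 0.5*s2*(n11 + n22 - n1eq - n2eq))
    return d11, d22, d12

# illustrative initial state: only mode 1 populated, no coherence
n11, n22, n12 = 2.0 + 0j, 0.0 + 0j, 0.0 + 0j
dt = 0.005
for _ in range(int(80.0/dt)):        # simple Euler stepping; the fixed point is exact
    d11, d22, d12 = rhs(n11, n22, n12)
    n11, n22, n12 = n11 + dt*d11, n22 + dt*d22, n12 + dt*d12

# relaxation to equilibrium: n_ii -> n(omega_i), coherence n_12 -> 0
assert abs(n11.real - n1eq) < 1e-3 and abs(n22.real - n2eq) < 1e-3
assert abs(n12) < 1e-3

# active/sterile populations from eqns. (Naoftfin, Nsoftfin)
Na = (c2th*n11 + s2th*n22 + 0.5*s2*(n12 + np.conj(n12))).real
Ns = (s2th*n11 + c2th*n22 - 0.5*s2*(n12 + np.conj(n12))).real
```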
We can now establish a correspondence with the quantum kinetic equation often quoted in the literature\cite{mckellar,raffelt,foot,dibari,wong} by introducing the following ``polarization vector''\cite{bohot} \begin{eqnarray} P_0(t) & = & \langle a^\dagger_a a_a + a^\dagger_s a_s \rangle(t) = N_a( t)+N_s( t)\label{P0}\\P_x( t) & = & \langle a^\dagger_a a_s + a^\dagger_s a_a \rangle (t) \label{Px}\\P_y( t) & = & -i\langle a^\dagger_a a_s -a^\dagger_s a_a \rangle (t) \label{Py}\\P_z( t) & = & \langle a^\dagger_a a_a - a^\dagger_s a_s \rangle (t) = N_a( t)-N_s( t) \label{Pz}\end{eqnarray} where the creation and annihilation operators for the active and sterile fields are related to those that create and annihilate the propagating modes in the medium $1,2$ by eqn. (\ref{as}), and the angular brackets denote expectation values in the reduced density matrix $\rho_{R}$ which obeys the quantum master equation (\ref{qme}). In terms of the population and coherences $n_{ij}$ the elements of the polarization vector are given by \begin{eqnarray} P_0 & = & n_{11}+n_{22} \label{P01} \\ P_x & = & -\sin2\theta_m \Big(n_{11}-n_{22}\Big)+\cos2\theta_m \big(n_{12}+n^*_{12} \big)\label{Px1}\\ P_y & = & - i\big(n_{12}- n^*_{12} \big) \label{Py1}\\P_z & = & \cos2\theta_m \Big(n_{11}-n_{22}\Big)+ \sin2\theta_m \big(n_{12}+n^*_{12} \big)\,. 
\label{Pz1}\end{eqnarray} Using the quantum kinetic equations (\ref{dotn11}-\ref{dotn12}) we find \begin{equation} \frac{dP_0}{dt} = -\frac{\Gamma_{aa}}{2}P_z - \frac{\Gamma_{aa}}{2}\Bigg[\Big(n_{11}-n(\omega_1)\Big)+ \Big(n_{22}-n(\omega_2)\Big) \Bigg]+\frac{\Gamma_{aa}}{2}\cos2\theta_m \Big(n(\omega_1)-n(\omega_2)\Big)\label{dotP0}\end{equation} \begin{equation} \frac{dP_x}{dt} = -i(\omega_2-\omega_1) \cos2\theta_m \big(n_{12}-n^*_{12}\big) - \frac{\Gamma_{aa}}{2}P_x - \frac{\Gamma_{aa}}{2}\sin2\theta_m \Big(n(\omega_1)-n(\omega_2)\Big) \label{dotPx}\end{equation} \begin{equation} \frac{dP_y}{dt} = -(\omega_2-\omega_1)\big(n_{12}+n^*_{12}\big) -\frac{\Gamma_{aa}}{2}P_y \label{dotPy}\end{equation} \begin{equation} \frac{dP_z}{dt} = -i(\omega_2-\omega_1) \sin2\theta_m \big(n_{12}-n^*_{12}\big) -\frac{\Gamma_{aa}}{2}P_z - \frac{\Gamma_{aa}}{2}\Bigg[\Big(n_{11}-n(\omega_1)\Big)+ \Big(n_{22}-n(\omega_2)\Big) \Bigg] \label{dotPz}\end{equation} Under the approximation $\omega_1\sim \omega_2 \sim \overline{\omega}(k)$ we can take \begin{equation} \Big(n(\omega_1)-n(\omega_2)\Big) \sim 0 \,, \label{eqmass}\end{equation} and neglect the last terms in eqns. (\ref{dotP0},\ref{dotPx}). Introducing the vector $\vec{V}$ with components \begin{equation} \vec{V} = (\omega_2-\omega_1)~\Big(\sin 2\theta_m, 0 , -\cos2\theta_m\Big) \label{VecV}\end{equation} we find the following equations of motion for the polarization vector \begin{equation} \frac{d\vec{P}}{dt} = \vec{V}\times\vec{P} - \frac{\Gamma_{aa}}{2}\Big(P_x \hat{x}+P_y \hat{y}\Big)+ \frac{dP_0}{dt}\hat{z}\,.
\label{QKEpol}\end{equation} This equation is exactly of the form \begin{equation} \frac{d\vec{P}}{dt} = \vec{V}\times\vec{P} - D \vec{P}_T + \frac{dP_0}{dt}\hat{z} \label{QKEpol2}\end{equation} often used in the literature\cite{stodolsky,mckellar,wong,foot,dibari}, where \begin{equation} D = \frac{\Gamma_{aa}}{2} ~~;~~ \vec{P}_T = \Big(P_x \hat{x}+P_y \hat{y}\Big) \,.\label{DandPT}\end{equation} Therefore the quantum kinetic equation for the polarization vector (\ref{QKEpol}) is \emph{equivalent} to the full set of quantum kinetic equations (\ref{dotn11}-\ref{dotn12}). However, it must be highlighted that the set of equations (\ref{QKEpol},\ref{QKEpol2}) is \emph{not closed}, because it requires as input the time evolution of $P_0$, which is obtained from the full set of kinetic equations (\ref{dotn11}-\ref{dotn12}). Often the last term in (\ref{QKEpol2}) ($\dot{P}_0$) is omitted; however, such an omission is not warranted, since it follows from the definition of $P_0$, eqn. (\ref{P01}), and eqns. (\ref{Naoftfin},\ref{Nsoftfin}) that \begin{equation} P_0 = N_a(t)+N_s(t)\,, \end{equation} and therefore $\dot{P}_0$ vanishes \emph{only} when both the active and the sterile species have reached equilibrium. Thus we advocate that the set of kinetic equations (\ref{dotn11}-\ref{dotn12}), combined with the relations (\ref{Naoftfin},\ref{Nsoftfin}), provides a complete description of active and sterile production. \section{Consequences for cosmological production of sterile neutrinos.} The results obtained above can be straightforwardly adapted to the case of neutrinos by replacing the equilibrium distributions $n(\Omega_{1,2})$ with the Fermi-Dirac distributions in the ultrarelativistic limit and including the matter potential from forward scattering in the medium.
While in general $\widetilde{\gamma}$, $\Gamma_{1,2}$ and $\Omega_{1,2}$ depend on the details of the interactions, masses and vacuum mixing angles, an assessment of the consequences of the results obtained above for cosmological sterile neutrino production can be made for an active neutrino with standard model interactions. In this case the matter potential features a CP-odd contribution proportional to the lepton and baryon asymmetries, and a CP-even contribution that depends solely on momentum and temperature. In the ultrarelativistic limit with $\overline{\omega}(k)\sim k$ the matter potential for neutrinos is given by\cite{notzold,bell,boyhohec}, \begin{equation} V_{aa} = \frac{4\sqrt{2}\zeta(3)}{\pi^2} G_F k T^3 \left[L-A \frac{Tk}{M^2_W} \right] \label{matpotsm} \end{equation} where $L$ is proportional to the lepton and baryon asymmetries and $A \sim 10$\cite{notzold,bell}; for antineutrinos $L\rightarrow -L$. The active neutrino interaction rate (neglecting contributions from the lepton and baryon asymmetries) is given by \cite{notzold,bell,raffelt,cline,kainu} \begin{equation} \Gamma_{aa} \sim G^2_F T^4 k \,.\label{smgama} \end{equation} For $keV$ sterile neutrinos an MSW resonance is available only for $L \gg Tk/M^2_W$, when the first term in the bracket in (\ref{matpotsm}) dominates\cite{bell,dibari,kev1,kev2}, while no resonance is available when the second term dominates. We will analyze separately the two different cases \begin{eqnarray} L & \ll & \frac{T^2}{M^2_W} \label{smallL}\\ L & \gg & \frac{T^2}{M^2_W} \label{largeL}\end{eqnarray} where we have taken $k \sim T$. In the first case no MSW resonance is possible for $keV$ sterile neutrinos, whereas such a resonance is possible in the second case\cite{bell,dibari,kev1,kev2}.
\begin{itemize} \item{{\bf High temperature limit:}} At high temperatures above the MSW resonance, where $V_{aa}\gg \delta M^2$, and neglecting the second order correction to the matter potential ($\textrm{Re}\Sigma$), \begin{equation} \rho_0 \sim \frac{V_{aa}}{\delta M^2}\,. \end{equation} For $L \ll T^2/M^2_W$ \begin{equation} \frac{\delta M^2 \rho_0}{\overline{\omega}(k)} \sim \frac{G_F \,T^5}{M^2_W} \end{equation} and the ratio \begin{equation} \widetilde{\gamma}= \Bigg|\frac{\Gamma_{aa}}{\frac{\delta M^2}{\overline{\omega}(k)} \rho_0}\Bigg| \sim G_F M^2_W \sim \alpha_w \ll 1 \end{equation} where $\alpha_w$ is the standard model ``fine structure constant''. For $L\gg T^2/M^2_W$ a similar analysis yields \begin{equation} \widetilde{\gamma} \sim G_F M^2_W \left(\frac{T^2}{LM^2_W}\right) \sim \alpha_w \left(\frac{T^2}{LM^2_W}\right) \ll1 \,. \end{equation} \item{\bf Low temperature limit:} In the low temperature regime, for $V_{aa}\ll \delta M^2$, $\rho_0 \sim 1$ and $\widetilde{\gamma}$ becomes \begin{equation} \Big|\frac{\textrm{Im}\Sigma_{aa}}{\delta M^2 \rho_0} \Big| \sim \Big|\frac{\textrm{Im}\Sigma_{aa}}{\delta M^2 } \Big| \end{equation} however in perturbation theory $V_{aa}\gg \textrm{Im}\Sigma_{aa}$, since $V_{aa}$ is of $\mathcal{O}(G_F)$ and $\textrm{Im}\Sigma_{aa} \sim \mathcal{O}(G^2_F)$. Therefore, since in this regime \begin{equation} \delta M^2 \gg V_{aa}\gg \textrm{Im}\Sigma_{aa} \Rightarrow \Big|\frac{\textrm{Im}\Sigma_{aa}}{\delta M^2 } \Big| \ll 1 \,, \end{equation} the conclusion of this analysis is that \emph{far away from an MSW resonance, either in the high or low temperature limit, damping is weak}, namely \begin{equation} \widetilde{\gamma}=\frac{\Gamma_{aa}}{2\Delta E}\ll 1\,. \label{weak}\end{equation} Therefore the strong damping condition may \emph{only} be fulfilled near an MSW resonance, where $\theta_m \sim \pi/4$, in which case $\rho_0 \approx |\sin 2\theta|$.
\item{\bf Near an MSW resonance:} As mentioned above, a resonance is only possible for $keV$ sterile neutrinos for $L\gg T^2/M^2_W$\cite{bell,dibari,kev1,kev2}. For a very small vacuum mixing angle $\sin 2\theta \ll 1$ it proves illuminating to write the resonance condition $\cos 2\theta = V_{aa}/\delta M^2$ as $V_{aa} \sim \delta M^2$ and $\rho_0 \sim |\sin 2\theta|$, with $V_{aa}$ given by eqn. (\ref{matpotsm}) for $L\gg T^2/M^2_W$. Therefore $\delta M^2 / k \sim G_F T^3 L$; hence, using eqn. (\ref{smgama}) near the MSW resonance, the ratio becomes \begin{equation} \Bigg|\frac{\Gamma_{aa}}{\frac{\delta M^2}{\overline{\omega}(k)} \rho_0}\Bigg| \sim \frac{G_F M^2_W}{|\sin 2\theta|} \left( \frac{T^2}{LM^2_W}\right) \sim \frac{\alpha_w}{|\sin 2\theta|}\left( \frac{T^2}{LM^2_W}\right)\,. \end{equation} Therefore, the strong damping condition near the resonance is fulfilled provided that $|\sin2\theta|\ll \alpha_w$. With $\alpha_w \sim 10^{-2}$ the region near an MSW resonance is generally described by the strong damping regime for $|\sin 2\theta| \lesssim 10^{-3}$, which is likely to be the case for sterile neutrinos\cite{kev1,kuse2} and is consistent with constraints from the X-ray background \cite{hansen,Xray,kou,boyarsky,hansen2}. In the resonance region the sterile production rate is described by the simple rate equation (see eqn. (\ref{rateNs}) ) \begin{equation} \dot{N}_s(t) = -\Gamma_2[ N_s(t)-n_{eq}] \label{resrateqn} \end{equation} where the sterile production rate $\Gamma_2$ is given by eqn. (\ref{Nsrate}), which can be written as \begin{equation} \Gamma_2 \sim \sin^2 2\theta \frac{(\delta M^2)^2}{\overline{\omega}(k)^2 \Gamma_{aa}}\label{resNsrate}\end{equation} and clearly exhibits the suppression for small vacuum mixing angle and the non-perturbative nature as a function of $\Gamma_{aa}$.
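The rate equation (\ref{resrateqn}) integrates in closed form: for $N_s(0)=0$ one finds $N_s(t)=n_{eq}\big(1-e^{-\Gamma_2 t}\big)$. A small numerical sketch, with assumed illustrative values for $\Gamma_2$ and $n_{eq}$, compares this analytic solution against a direct Euler integration of the same equation:

```python
import math

# Assumed illustrative values (not taken from the text)
Gamma2, n_eq = 0.2, 0.5

def Ns_exact(t):
    # closed-form solution of dNs/dt = -Gamma2*(Ns - n_eq) with Ns(0) = 0
    return n_eq*(1.0 - math.exp(-Gamma2*t))

# direct Euler integration of the same rate equation
Ns, dt = 0.0, 1e-4
for _ in range(int(5.0/dt)):
    Ns += -Gamma2*(Ns - n_eq)*dt

assert abs(Ns - Ns_exact(5.0)) < 1e-4
```

The approach to equilibrium is governed entirely by $\Gamma_2$, which makes the quantum Zeno suppression of eqn. (\ref{resNsrate}) directly visible: increasing $\Gamma_{aa}$ lowers $\Gamma_2$ and slows the production.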
\end{itemize} This analysis leads to the conclusion that \emph{away from an MSW resonance} the weak damping condition holds, and sterile neutrino production \emph{cannot} be described by a simple rate equation but involves $\Gamma_{1,2}$ and $\Delta E$. In this regime the quantum kinetic equations (\ref{dotn11}-\ref{dotn12}) may be simplified\cite{hoboyste} by neglecting the terms with $n_{12},n^*_{12}$ in eqns. (\ref{dotn11},\ref{dotn22}) and the terms with $n_{11}-n(\omega_1);n_{22}-n(\omega_2)$ in eqn. (\ref{dotn12}). The resulting equations are very simple and their solutions feature the two damping rates $\Gamma_1 = \Gamma_{aa}\cos^2\theta_m;\Gamma_2 = \Gamma_{aa}\sin^2\theta_m$. This simplification also holds if the lepton asymmetry is of the same order as the baryon asymmetry, $L \sim 10^{-9}$, in which case $L\ll T^2/M^2_W$ for $T \gtrsim 3~\textrm{MeV}$\cite{boyhohec,dolgovrev} and no MSW resonance is available\cite{bell,notzold,dibari}. Near an MSW resonance for sterile neutrinos with $\sim \textrm{keV}$ mass and $\sin2\theta \lesssim 10^{-3}$ the strong damping condition holds and $N_s(t)$ obeys a simple rate equation, but the sterile production rate is \emph{suppressed} by the quantum Zeno effect. For $\textrm{keV}$ sterile neutrinos with small mixing angle $\sin 2\theta \lesssim 10^{-3}$, the MSW resonance occurs near the scale of the QCD phase transition $T \sim 180 ~\textrm{MeV}$\cite{kev1,shapo}, with the inherent uncertainties arising from strong interactions and the rapid change in the effective number of relativistic degrees of freedom in a regime in which hadronization becomes important. However, as argued above, near the MSW resonance the strong damping condition is fulfilled and quantum Zeno suppression \emph{hinders} the production of sterile neutrinos. As discussed above, the sterile distribution function obeys a simple rate equation with a production rate given by eqn.
(\ref{Nsrate}) or alternatively (\ref{npg2}), which is strongly suppressed by the factor $1/\widetilde{\gamma}^2 \sim \sin^2 2\theta /\alpha^2_w \ll 1$. This suppression of the sterile production rate makes the production mechanism less efficient near the resonance, thus relieving the uncertainties associated with the strong interactions, although these remain in the non-resonant scenario\cite{dodelson}. \section{Conclusions} The production of a sterile species via active-sterile mixing has been studied in a simple, exactly solvable model that includes all the relevant ingredients: active-sterile mixing via an off-diagonal mass matrix and the coupling of the active species to a bath in thermal equilibrium. The exact solution of the Heisenberg-Langevin equations allows us to obtain the exact time evolution of the distribution function for the sterile species and the active-sterile transition probability. Both are determined by the dispersion relations and damping rates (widths) of the \emph{two quasiparticle modes in the medium}. These depend on \begin{equation} \widetilde{\gamma} = \frac{\Gamma_{aa}}{2\Delta E} \end{equation} where $\Gamma_{aa}$ is the interaction rate of the active species in the absence of mixing and $\Delta E$ is the oscillation frequency with corrections from forward scattering (the index of refraction) but no damping. $\widetilde{\gamma} \ll 1$ and $\widetilde{\gamma} \gg 1$ correspond to the weak and strong damping regimes, respectively. In the weak damping case the damping rates are $\Gamma_1=\Gamma_{aa}\cos^2\theta_m;\Gamma_2=\Gamma_{aa}\sin^2\theta_m$, the active-sterile transition probability is given by eqn. (\ref{Paswdl}), and the time evolution of the sterile distribution function is given by eqn. (\ref{wdNs}) for vanishing initial sterile population; both feature these two scales along with the oscillation time scale. As a result, the time evolution of the sterile distribution function does not obey a simple rate equation.
These results confirm those of refs.\cite{hobos,hozeno,hoboyste}. The exact solution allows the systematic exploration of the strong damping case, for which $\widetilde{\gamma} \gg 1$, corresponding to the situation in which the interaction rate in the medium is larger than the oscillation frequency and the quantum Zeno effect is present\cite{stodolsky}. In this regime we find that the damping rates of the quasiparticles are $\Gamma_1=\Gamma_{aa};\Gamma_2 = \Gamma_{aa}\sin^2 2\theta_m/4\widetilde{\gamma}^2$ where $\theta_m$ is the mixing angle in the medium. The active-sterile (generalized) transition probability is $$\mathcal{P}_{a\rightarrow s} = \frac{\sin^22\theta_m}{4\widetilde{\gamma}^2}\left[e^{-\Gamma_1t}+ e^{-\Gamma_2t}-2e^{-\frac{1}{2}(\Gamma_1+\Gamma_2)t}\cos\left[(\Omega_1-\Omega_2)t\right]\right]\,. $$ In the strong damping regime the oscillation frequency $\Omega_1-\Omega_2 \propto \cos2\theta_m$ \emph{vanishes} at an MSW resonance and the two quasiparticle states \emph{become degenerate}, leading to a breakdown of adiabaticity. The sterile distribution function obeys a simple rate equation with a sterile production rate $\Gamma_2$ strongly suppressed for $\widetilde{\gamma}^2 \gg 1$. The suppression of the active-sterile transition probability and the sterile production rate, and the vanishing of the oscillation frequency in the strong damping limit, are all consequences of quantum Zeno suppression. The quantum master equation for the reduced density matrix is derived and shown to be valid in both limits. From it we obtain the complete set of quantum kinetic equations that yield the non-equilibrium evolution of the active and sterile distribution functions. The complete non-equilibrium time evolution of the active and sterile distribution functions and the coherences is given by the set of equations (\ref{dotn11}-\ref{dotn12}) along with the identifications (\ref{Naoftfin},\ref{Nsoftfin}).
The set of kinetic equations (\ref{dotn11}-\ref{dotn12}) is shown to be equivalent to the kinetic equations for the ``polarization vector'' often quoted in the literature. However, unlike these, the set (\ref{dotn11}-\ref{dotn12}) along with (\ref{Naoftfin},\ref{Nsoftfin}) yields a complete description of the non-equilibrium dynamics amenable to a straightforward numerical analysis; the extrapolation to fermionic degrees of freedom is a straightforward replacement of the equilibrium distribution functions by the Fermi-Dirac distributions. Furthermore, the analysis based on the exact solution and the quantum master equation yields a wealth of information that cannot be easily gleaned from the set of kinetic equations, for example the active-sterile transition probability. For active neutrinos with standard model interactions it is shown that the weak damping limit describes the parameter range \emph{away} from an MSW resonance and that the strong damping limit \emph{only} emerges near the resonance for very small vacuum mixing angle, such that $\sin2\theta \lesssim \alpha_w\sim 10^{-2}$. Such a small value is consistent with constraints from the X-ray background. This result bears important consequences for cosmological sterile neutrino production. In the resonant production mechanism of ref.\cite{dodelson} the production rate peaks at the MSW resonance; however, our analysis, which consistently includes the damping corrections, shows that quantum Zeno suppression \emph{hinders} the sterile production rate near the resonance. For $keV$ sterile neutrinos the MSW resonance occurs in a temperature range close to the QCD phase transition. Hadronization and strong interactions lead to substantial uncertainties in this temperature regime, which translate into uncertainties in the production rate. Quantum Zeno suppression of the production rate in this regime relieves these uncertainties.
\textbf{In summary:} The set of kinetic equations (\ref{dotn11}-\ref{dotn12}) (with Fermi-Dirac equilibrium distributions) along with the relations (\ref{Naoftfin},\ref{Nsoftfin}) yields a complete description of the non-equilibrium dynamics of active and sterile neutrino production, valid in the weak and strong damping limits. Quantum Zeno suppression is operative near an MSW resonance and suppresses the sterile production rate, thus relieving potential uncertainties associated with the QCD phase transition for $keV$ neutrinos. \acknowledgements The author thanks C.-M. Ho for fruitful discussions and acknowledges support from the U.S. National Science Foundation through grant award PHY-0553418.
\section{System-status chain transition probabilities}\label{Appendix A} Here we show the calculation of the transition probabilities of the system chain. We follow the guidelines in \cite{ephremides1987delay} to simplify matters and adopt their notation, adapted to our case, in which $r_i=s_i=q_i=p_i$. Due to this fact there are many more possible transitions in the state space, and the following calculations are therefore more involved. In the same manner, the calculations are based on the change in the number of active users; thus the transitions can be classified into four types as follows: \begin{enumerate} \item \underline{The number of active users transits from 0 to 1:}\\ Only a blocked user may become active, and only one, say $j$. All other blocked users must not exceed the threshold. In addition, every idle user that receives a packet must not exceed the threshold and therefore becomes blocked; say $w$ such users do. The latter will repeat itself throughout this calculation. The probability of such a transition is expressed by: \begin{equation*} P(\Delta A=1, \Delta B=-1+w, \Delta I=-w)=\prod^{K-n-w-1}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\overline{p}_l \prod^{n+1}_{i}\overline{p}_i\frac{p_j}{\overline{p}_j}\left[(1-P_j(0|2))+\lambda_jP_j(0|2)\right] \end{equation*} where the products run over the sets of users according to their transition and $n$ is the number of blocked users after the transition. Note that user $j$ will be active if its queue is not empty after the successful transmission or if a packet arrived at the beginning of the slot. Also note that $\overline{p}$ and $\overline{\lambda}$ stand for $1-p$ and $1-\lambda$, respectively. \item \underline{The number of active users remains 1:}\\ This case divides into two subcases: the same active user $j$ remains active, or another blocked user $s$ becomes active. In either subcase, $w$ idle users may still become blocked.
\begin{enumerate} \item The probability for the first case is: \begin{equation*} P(\Delta A=0, \Delta B=+w, \Delta I=-w,j\rightarrow j)= \prod^{K-n-w-1}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\overline{p}_l \prod^{n+1}_{i}\overline{p}_i\frac{p_j}{\overline{p}_j}\left[P_j(1|1)+\lambda_j(1-P_j(1|1))\right] \end{equation*} \item The probability for the second case is: \begin{equation*} P(\Delta A=0, \Delta B=+w, \Delta I=-w,j\rightarrow s)= \\ \prod^{K-n-w-1}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\overline{p}_l \prod^{n+1}_{i}\overline{p}_i\frac{p_s}{\overline{p}_s}\left[P_s(1|1)+\lambda_s(1-P_s(1|1))\right] \end{equation*} \end{enumerate} \item \underline{The number of active users transits from 1 to 0:}\\ This case is subdivided into three subcases. \begin{enumerate} \item The first subcase is that the active user $j$ becomes idle: \begin{equation*} P(\Delta A=-1, \Delta B=+w, \Delta I=-w)= \prod^{K-n-w-1}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\overline{p}_l \prod^{n+1}_{i}\overline{p}_i\frac{p_j}{\overline{p}_j}\left[\overline{\lambda}_j(1-P_j(1|1))\right] \end{equation*} \item The second subcase is that the active user $j$ becomes blocked and blocked user $s$ becomes idle: \begin{equation*} P(\Delta A=-1, \Delta B=+1-1+w, \Delta I=+1-w)= \prod^{K-n-w-1}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\overline{p}_l \prod^{n+1}_{i}\overline{p}_i\frac{p_s}{\overline{p}_s}\left[\overline{\lambda}_s(1-P_s(1|1))\right] \end{equation*} \item The third subcase is that the active user $j$ becomes blocked: \begin{equation*} \begin{aligned} &P(\Delta A=-1, \Delta B=+1+w, \Delta I=-w)= \\ &\prod^{K-n-w-1}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\overline{p}_l\prod^{n+1}_{i}\overline{p}_i\\ +&\prod^{K-n-w-1}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\overline{p}_l \sum^{K-n-w-1}_{s}\frac{\lambda_sp_s}{\overline{\lambda}_s} \prod^{n+1}_{i}\overline{p}_i\\ +&\prod^{K-n-w-1}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\left( 1-\prod^{w}_{s}\overline{p}_s
\right)\left(1-\prod^{n}_{i}\overline{p}_i\right)\\ +&\prod^{K-n-w-1}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\overline{p}_l\left[ 1-\prod^{n}_{s}\overline{p}_s - \overline{p}_j\prod^{n}_{i}\overline{p}_i\sum^{n}_{q}\frac{p_q}{\overline{p}_q} \right]\\ +&\prod^{K-n-w-1}_{k}\overline{\lambda}_k\prod^{n}_{i}\overline{p}_i\prod^{w}_{l}\lambda_l\left[ 1-\prod^{w}_{s}\overline{p}_s - \overline{p}_j\prod^{w}_{b}\overline{p}_b\sum^{w}_{q}\frac{p_q}{\overline{p}_q} \right] \end{aligned} \end{equation*} The first expression is for the case in which none of the users exceeds the threshold. The second is for the case in which one idle user manages to transmit successfully. The third describes the case in which one or more users from each of the blocked and the idle groups try to transmit, so what happens with user $j$ is immaterial. The fourth expression describes the situation in which user $j$ collides with one or more users from the blocked group, or a collision occurs among the blocked users while $j$ does not exceed the threshold. The fifth expression is the analogue of the fourth for the group of idle users that receive a packet. \end{enumerate} \item \underline{The number of active users remains 0:}\\ This case is subdivided into three subcases. \begin{enumerate} \item The first subcase is that all users maintain their status without change: \begin{equation*} P(\Delta B=0, \Delta I=0)= \left[ 1-\prod^{n}_{i}\overline{p}_i\sum^{n}_{q}\frac{p_q}{\overline{p}_q} \right]\prod^{K-n}_{k}\overline{\lambda}_k + \prod^{n}_{i}\overline{p}_i\prod^{K-n}_{k}\overline{\lambda}_k \sum^{K-n}_{q}\frac{\lambda_qp_q}{\overline{\lambda}_q} \end{equation*} where the first term is the probability that no packet arrives at the idle users while no blocked user, or at least two blocked users, transmit, and the second term is the probability that no blocked user exceeds the threshold while only one of the idle users successfully transmits.
\item The second subcase is that one blocked user $j$ becomes idle: \begin{equation*} P(\Delta B=-1+w, \Delta I=+1-w)= \prod^{K-n-w-1}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\overline{p}_l \prod^{n+1}_{i}\overline{p}_i\left[\overline{\lambda}_j\frac{p_j}{\overline{p}_j}P_j(0|2)\right] \end{equation*} \item The third subcase covers the situations in which $w$ idle users become blocked: \begin{equation*} \begin{aligned} &P(\Delta B=+w, \Delta I=-w)= \\ &\prod^{K-n-w}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\overline{p}_l\left[ 1-\prod^{n}_{i}\overline{p}_i\sum^{n}_{q}\frac{p_q}{\overline{p}_q} \right]\\ +&\prod^{K-n-w}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\left[ 1-\prod^{w}_{s}\overline{p}_s -\prod^{w}_{b}\overline{p}_b\sum^{w}_{q}\frac{p_q}{\overline{p}_q} \right]\\ +&\prod^{K-n-w}_{k}\overline{\lambda}_k\prod^{w}_{l}\lambda_l\overline{p}_l\sum^{w}_{q}\frac{p_q}{\overline{p}_q} \left[ 1-\prod^{n}_{i}\overline{p}_i\right]\\ +&\prod^{K-n-w}_{k}\overline{\lambda}_k\sum^{K-n-w}_{s}\frac{\lambda_sp_s}{\overline{\lambda}_s} \prod^{w}_{l}\lambda_l\overline{p}_l\prod^{n}_{i}\overline{p}_i \end{aligned} \end{equation*} The first expression is for the case in which none of the $w$ users exceeds the threshold and no blocked user, or at least two blocked users, transmit; the second is for the case in which at least two of the $w$ users exceed the threshold; the third describes a collision between one of the $w$ users and at least one of the blocked users; and the fourth expression describes the situation in which one idle user succeeds in transmitting while all the others do not exceed the threshold. \end{enumerate} \end{enumerate} According to these transition probabilities we can calculate the steady state of the chain given the auxiliary quantities $P_i(1|1)$ and $P_i(0|2)$, the probability of exceedance, and the users' arrival rates.
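Once all the transition probabilities above are assembled into a stochastic matrix, the steady state can be obtained numerically. A generic sketch using power iteration on a small hypothetical 3-state chain (the matrix entries below are placeholders for illustration, not the probabilities derived above):

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1), standing in for the
# system-status chain; the entries are illustrative placeholders only.
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
])

pi = np.full(3, 1.0/3.0)     # uniform initial guess over the states
for _ in range(1000):        # power iteration: pi <- pi P
    pi = pi @ P

# stationarity: pi = pi P, and pi is a probability vector
assert np.allclose(pi, pi @ P, atol=1e-12)
assert abs(pi.sum() - 1.0) < 1e-9 and (pi >= 0).all()
```

For the actual system chain, the state space and matrix dimension grow with the number of users $K$, but the same fixed-point computation applies.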
\section{Average success probabilities}\label{Appendix B} Once the steady state of the system chain is known, the average success probabilities can be calculated. We do so by calculating the values of $P_B(i),P_A(i)$ and $P_I(i)$ in the same manner as in \cite{ephremides1987delay}, while taking into consideration that a user must exceed the threshold in order to transmit. \begin{enumerate} \item For a blocked user $i$: \begin{equation*} \begin{aligned} P_B(i)&=Pr(\text{user $i$ success $\mid$ user $i$ is blocked}) &=\frac{Pr(\text{user $i$ success and user $i$ is blocked})}{Pr(\text{user $i$ is blocked})} \end{aligned} \end{equation*} where \begin{equation*} Pr(\text{user $i$ success and user $i$ is blocked})=p_i\sum_{\substack{S_i=2 \\ j\neq i \\ S_j=0,1,2}}P(S_1,...,S_K)\prod_{j\neq i} (\lambda_j\overline{p}_j+\overline{\lambda}_j)^{\delta_{S_j=0}}(\overline{p}_j)^{\delta_{S_j=1}} (\overline{p}_j)^{\delta_{S_j=2}} \end{equation*} The exponent $\delta$ equals $1$ when its condition holds. \begin{equation*} Pr(\text{user $i$ is blocked})=\sum_{S_i=2}P(S_1,..,S_i,..,S_K) \end{equation*} \item For an active user $i$: \begin{equation*} \begin{aligned} P_A(i)&=Pr(\text{user $i$ success $\mid$ user $i$ is active}) &=\frac{Pr(\text{user $i$ success and user $i$ is active})}{Pr(\text{user $i$ is active})} \end{aligned} \end{equation*} where \begin{equation*} Pr(\text{user $i$ success and user $i$ is active})=p_i\sum_{\substack{S_i=1 \\ j\neq i \\ S_j=0,2}}P(S_1,...,S_K)\prod_{j\neq i} (\lambda_j\overline{p}_j+\overline{\lambda}_j)^{\delta_{S_j=0}}(\overline{p}_j)^{\delta_{S_j=2}} \end{equation*} The difference here is due to the fact that no more than one user may be active.
\begin{equation*} Pr(\text{user $i$ is active})=\sum_{S_i=1}P(S_1,..,S_i,..,S_K) \end{equation*} \item For an idle user $i$: \begin{equation*} \begin{aligned} P_I(i)&=Pr(\text{user $i$ success $\mid$ user $i$ is idle}) &=\frac{Pr(\text{user $i$ success and user $i$ is idle})}{Pr(\text{user $i$ is idle})} \end{aligned} \end{equation*} where \begin{equation*} Pr(\text{user $i$ success and user $i$ is idle})=\lambda_ip_i\sum_{\substack{S_i=0 \\ j\neq i \\ S_j=0,1,2}}P(S_1,...,S_K)\prod_{j\neq i} (\lambda_j\overline{p}_j+\overline{\lambda}_j)^{\delta_{S_j=0}}(\overline{p}_j)^{\delta_{S_j=1}} (\overline{p}_j)^{\delta_{S_j=2}} \end{equation*} and \begin{equation*} Pr(\text{user $i$ is idle})=\sum_{S_i=0}P(S_1,..,S_i,..,S_K) \end{equation*} \end{enumerate} Given these values, the boundary conditions can be calculated: \begin{equation*} \begin{aligned} P_i(1\mid 1)=&1-\frac{\pi(1,1)}{G^i_1(1)-\pi(1,0)} \\ P_i(0\mid 2)=&\frac{\pi(0,0)}{G^i_0(1)}, \end{aligned} \end{equation*} where, \begin{equation}\label{equ-probability for blocked and empty} \pi(0,0)=\frac{\lambda_i\overline{P}_I(i)}{\lambda_iP_A(i)+\overline{\lambda}_iP_B(i)}\pi(1,0), \end{equation} \begin{equation}\label{equ-probability for idle and empty} \pi(1,0)=\frac{\overline{\lambda}_iP_B(i)-\lambda_i\overline{P}_A(i)}{\overline{\lambda}_iP_B(i)-\lambda_i(P_I(i)-P_A(i))}, \end{equation} \begin{equation}\label{equ-probability for active and not empty} \pi(1,1)=\frac{\lambda_i}{\overline{\lambda}_i}\pi(0,0), \end{equation} \begin{equation}\label{equ-probability to be blocked} G_{0}^i(1)=\frac{\lambda_i\overline{\lambda}_i\overline{P}_I(i)}{\overline{\lambda}_iP_B(i)-\lambda_i(P_I(i)-P_A(i))}, \end{equation} \begin{equation}\label{equ-probability to be unblocked} G_{1}^i(1)=\lambda_i+\overline{\lambda}_i\frac{\overline{\lambda}_iP_B(i)-\lambda_i\overline{P}_A(i)}{\overline{\lambda}_iP_B(i)-\lambda_i(P_I(i)-P_A(i))}.
\end{equation} \section{Convergence to Extreme Value Distribution}\label{Appendix C} \begin{proof} Using the stationary distribution as the marginal distribution does not impair the convergence when $K \rightarrow \infty$, as shown in \cite{denzel1975limit} for such type of dependent sequences. Thus, we can analyze the expected channel capacity using EVT with the distribution above as the marginal distribution function for each user. In order to do so, one first needs to prove that convergence to one of the extreme value distribution types exists, and to derive the normalizing constants $a_K$ and $b_K$. The result in \cite[1.6.2]{EVT:Springer1983} gives a necessary and sufficient condition on the marginal distribution $F$ to belong to each of the three possible domains of attraction of the extreme value distributions. The first sufficient condition states that if $f$ has a negative derivative $f'$ for all $x$ in some interval $(x_0,x_F),\ (x_F\leq \infty),\ f(x)=0$ for $x\geq x_F$, and \begin{equation}\label{equ-Sufficient type 1 condition} \lim_{t \uparrow x_F} \frac{f'(t)(1-F(t))}{f^2(t)}=-1 \end{equation} then $F$ is in the domain of attraction of the Type \Rmnum{1} extreme value distribution (Gumbel distribution). The second condition, which is necessary and sufficient for $F$ to be in the domain of attraction of the Type \Rmnum{1} extreme value distribution, states that there exists some strictly positive function $g(t)$ such that \begin{equation}\label{equ-Necessary and sufficient type 1 condition} \lim_{t \uparrow x_F} \frac{1-F(t+xg(t))}{1-F(t)}=e^{-x}\\ \end{equation} for all real $x$.\\ In the following we verify the convergence conditions and derive $a_K$ and $b_K$. The stationary distribution $F(t)=pF_g(t)+qF_b(t)$, as shown earlier, has a negative derivative $f'$ on $(x_0,\infty)$ with $x_0=\max\{\mu_g,\mu_b\}$.
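As a sanity check, the sufficient condition \eqref{equ-Sufficient type 1 condition} is easy to verify numerically. The sketch below does so for a single standard Gaussian (a simpler stand-in for the two-component mixture considered here), for which $f'(t)=-t\,f(t)$ and $1-F(t)=\frac{1}{2}\mathrm{Erfc}(t/\sqrt{2})$, so the ratio reduces to $-t(1-F(t))/f(t)$:

```python
import math

def ratio(t):
    # f'(t)(1-F(t))/f(t)^2 = -t*(1-F(t))/f(t) for the standard normal
    f = math.exp(-0.5*t*t)/math.sqrt(2.0*math.pi)
    tail = 0.5*math.erfc(t/math.sqrt(2.0))
    return -t*tail/f

# Mills-ratio asymptotics give ratio(t) = -1 + O(1/t^2), so the
# deviation from -1 shrinks as t grows
assert abs(ratio(10.0) + 1.0) < 0.02
assert abs(ratio(30.0) + 1.0) < 0.002
assert abs(ratio(30.0) + 1.0) < abs(ratio(10.0) + 1.0)
```

The same limiting behavior carries over to the Gaussian mixture, which is what the derivation below establishes.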
So we only need to show that \eqref{equ-Sufficient type 1 condition} holds, \begin{equation*} \begin{aligned} &\lim_{t \rightarrow \infty} \frac{f'(t)\left(1-F(t)\right)}{f^2(t)} = \lim_{t \rightarrow \infty} \frac{\left(pf_g'(t)+qf_b'(t)\right)\left(1-\left(pF_g(t)+qF_b(t)\right)\right)}{\left(pf_g(t)+qf_b(t)\right)^2}\\ &=\lim_{t \rightarrow \infty} \frac{\frac{1}{2}\left(pf_g'(t)+qf_b'(t)\right)\left(pErfc\left(\frac{t-\mu_g}{\sqrt{2}\sigma_g}\right) +qErfc\left(\frac{t-\mu_b}{\sqrt{2}\sigma_b}\right)\right)}{\left(pf_g(t)+qf_b(t)\right)^2}\\ \end{aligned} \end{equation*} In \cite[7.1.13]{abramowitz2012handbook}, we can find upper and lower bounds for the complementary error function, \begin{equation*} \frac{2}{\sqrt{\pi}}\frac{e^{-t^2}}{t+\sqrt{t^2+2}}< Erfc(t)\leq \frac{2}{\sqrt{\pi}}\frac{e^{-t^2}}{t+\sqrt{t^2+\frac{4}{\pi}}}, \end{equation*} where these inequalities hold for $t>0$, which is sufficient for our purposes since $t \rightarrow \infty$. Using these bounds, we show via the sandwich rule that the limit above converges to $-1$.
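Before carrying out the sandwich argument, the limit in \eqref{equ-Sufficient type 1 condition} can be sanity-checked numerically. The following Python sketch evaluates $f'(t)\left(1-F(t)\right)/f^2(t)$ directly for the Gaussian mixture; the mixture parameters below are illustrative assumptions, not values from the paper.

```python
import math

# Illustrative (assumed) mixture parameters: sigma_g > sigma_b, mu_g > mu_b.
p, q = 0.3, 0.7
mu_g, sig_g = 2.0, 1.0
mu_b, sig_b = 0.0, 0.5

def phi(t, mu, sig):
    # Gaussian density with mean mu and std sig.
    return math.exp(-(t - mu) ** 2 / (2 * sig ** 2)) / (sig * math.sqrt(2 * math.pi))

def f(t):
    # Mixture density p*f_g + q*f_b.
    return p * phi(t, mu_g, sig_g) + q * phi(t, mu_b, sig_b)

def fprime(t):
    # Derivative of the mixture density: f'(t) = -f_g*(t-mu)/sig^2 termwise.
    return (-p * phi(t, mu_g, sig_g) * (t - mu_g) / sig_g ** 2
            - q * phi(t, mu_b, sig_b) * (t - mu_b) / sig_b ** 2)

def tail(t):
    # 1 - F(t) via the complementary error function: 1-F_g(t) = erfc(z/sqrt(2))/2.
    return (p * 0.5 * math.erfc((t - mu_g) / (math.sqrt(2) * sig_g))
            + q * 0.5 * math.erfc((t - mu_b) / (math.sqrt(2) * sig_b)))

for t in (5.0, 10.0, 20.0):
    ratio = fprime(t) * tail(t) / f(t) ** 2
    print(t, ratio)   # the ratio approaches -1 as t grows
```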
Let us consider first the lower bound of the complementary error function, \begin{equation*} \begin{aligned} &\lim_{t \rightarrow \infty} \frac{\frac{1}{2}\left(pf_g'(t)+qf_b'(t)\right) \left( p\frac{2}{\sqrt{\pi}}\frac{e^{-\frac{(t-\mu_g)^2}{2\sigma_g^2}}}{\frac{t-\mu_g}{\sqrt{2}\sigma_g}+\sqrt{\frac{(t-\mu_g)^2}{2\sigma_g^2}+2}} + q\frac{2}{\sqrt{\pi}}\frac{e^{-\frac{(t-\mu_b)^2}{2\sigma_b^2}}}{\frac{t-\mu_b}{\sqrt{2}\sigma_b}+\sqrt{\frac{(t-\mu_b)^2}{2\sigma_b^2}+2}}\right)} {\left(pf_g(t)+qf_b(t)\right)^2}\\ &= \lim_{t \rightarrow \infty}\frac{1}{\sqrt{\pi}} \frac{\left(pf_g'(t)+qf_b'(t)\right) \left( p\frac{\sqrt{2\pi}\sigma_g f_g(t)}{\frac{t-\mu_g}{\sqrt{2}\sigma_g}+\sqrt{\frac{(t-\mu_g)^2}{2\sigma_g^2}+2}} + q\frac{\sqrt{2\pi}\sigma_b f_b(t)}{\frac{t-\mu_b}{\sqrt{2}\sigma_b}+\sqrt{\frac{(t-\mu_b)^2}{2\sigma_b^2}+2}} \right)} {\left(pf_g(t)+qf_b(t)\right)^2}\\ &= \lim_{t \rightarrow \infty}-\sqrt{2} \frac{\left(pf_g(t)\frac{t-\mu_g}{\sigma_g^2}+qf_b(t)\frac{t-\mu_b}{\sigma_b^2}\right) \left(p \frac{\sigma_g f_g(t)}{\frac{t-\mu_g}{\sqrt{2}\sigma_g}+\sqrt{\frac{(t-\mu_g)^2}{2\sigma_g^2}+2}} + q \frac{\sigma_b f_b(t)}{\frac{t-\mu_b}{\sqrt{2}\sigma_b}+\sqrt{\frac{(t-\mu_b)^2}{2\sigma_b^2}+2}} \right)} {\left(pf_g(t)+qf_b(t)\right)^2}\\ \end{aligned} \end{equation*} The limit above can be split into four separate limits, \begin{equation*} \begin{aligned} &\lim_{t \rightarrow \infty}-\sqrt{2}p^2 \frac{f_g(t)\frac{t-\mu_g}{\sigma_g^2} \frac{\sigma_g f_g(t)}{\frac{t-\mu_g}{\sqrt{2}\sigma_g}+\sqrt{\frac{(t-\mu_g)^2}{2\sigma_g^2}+2}}} {\left(pf_g(t)+qf_b(t)\right)^2}+ \lim_{t \rightarrow \infty}-\sqrt{2}q^2 \frac{f_b(t)\frac{t-\mu_b}{\sigma_b^2} \frac{\sigma_b f_b(t)}{\frac{t-\mu_b}{\sqrt{2}\sigma_b}+\sqrt{\frac{(t-\mu_b)^2}{2\sigma_b^2}+2}}} {\left(pf_g(t)+qf_b(t)\right)^2}+\\ &\lim_{t \rightarrow \infty}-\sqrt{2}pq \frac{f_g(t)\frac{t-\mu_g}{\sigma_g^2} \frac{\sigma_b f_b(t)}{\frac{t-\mu_b}{\sqrt{2}\sigma_b}+\sqrt{\frac{(t-\mu_b)^2}{2\sigma_b^2}+2}}}
{\left(pf_g(t)+qf_b(t)\right)^2}+ \lim_{t \rightarrow \infty}-\sqrt{2}pq \frac{f_b(t)\frac{t-\mu_b}{\sigma_b^2} \frac{\sigma_g f_g(t)}{\frac{t-\mu_g}{\sqrt{2}\sigma_g}+\sqrt{\frac{(t-\mu_g)^2}{2\sigma_g^2}+2}}} {\left(pf_g(t)+qf_b(t)\right)^2}\\ \end{aligned} \end{equation*} Note that the first and second limits are identical up to an exchange of indices, as are the third and fourth. We start with the first limit, \begin{equation*} \begin{aligned} &\lim_{t \rightarrow \infty}-\sqrt{2}p^2 \ \frac{f_g(t) \ \frac{t-\mu_g}{\sigma_g^2} \ \frac{\sigma_g f_g(t)}{\frac{t-\mu_g}{\sqrt{2}\sigma_g}+\sqrt{\frac{(t-\mu_g)^2}{2\sigma_g^2}+2}}} {\left(pf_g(t)+qf_b(t)\right)^2} =\lim_{t \rightarrow \infty}-\sqrt{2} \ \frac{p^2f_g^2(t) \ (t-\mu_g)\sigma_g} {\left(pf_g(t)+qf_b(t)\right)^2 \ \sigma_g^2 \ \frac{t-\mu_g}{\sqrt{2}\sigma_g}\left(1+\sqrt{1+\frac{4\sigma_g^2}{(t-\mu_g)^2}} \right) }\\ &=\lim_{t \rightarrow \infty} -2 \ \frac{p^2f_g^2(t)} {\left(pf_g(t)+qf_b(t)\right)^2 \ \left(1+\sqrt{1+\frac{4\sigma_g^2}{(t-\mu_g)^2}} \right)} =\lim_{t \rightarrow \infty} -2 \ \frac{p^2f_g^2(t)} {\left(pf_g(t)+qf_b(t)\right)^2} \cdot \lim_{t \rightarrow \infty} \frac{1} {\left(1+\sqrt{1+\frac{4\sigma_g^2}{(t-\mu_g)^2}} \right)}\\ &=\lim_{t \rightarrow \infty} -2 \ \frac{p^2f_g^2(t)} {\left(pf_g(t)+qf_b(t)\right)^2 } \cdot \frac{1}{2} =\lim_{t \rightarrow \infty} - \ \frac{p^2f_g^2(t)} {\left(pf_g(t)+qf_b(t)\right)^2} =-\left(\lim_{t \rightarrow \infty} \frac{pf_g(t)} {\left(pf_g(t)+qf_b(t)\right)}\right)^2\\ &\overset{(a)}{=}-\left(\lim_{t \rightarrow \infty} \frac{1} {1+\frac{qf_b(t)}{pf_g(t)}}\right)^2 =-\left( \frac{1} {1+\lim_{t \rightarrow \infty}\frac{qf_b(t)}{pf_g(t)}}\right)^2 \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} &\Rightarrow \lim_{t \rightarrow \infty}\frac{qf_b(t)}{pf_g(t)} =\frac{\sigma_g}{\sigma_b} \frac{q}{p} \lim_{t \rightarrow \infty}
e^{-\frac{(t-\mu_b)^2}{2\sigma_b^2}+\frac{(t-\mu_g)^2}{2\sigma_g^2}} = \frac{\sigma_g}{\sigma_b}\frac{q}{p} e^{\lim_{t \rightarrow \infty} -\frac{(t-\mu_b)^2}{2\sigma_b^2}+\frac{(t-\mu_g)^2}{2\sigma_g^2}}\\ &\Rightarrow \lim_{t \rightarrow \infty}\frac{t^2(\sigma_b^2-\sigma_g^2)+t(2\mu_b\sigma_g^2-2\mu_g\sigma_b^2)+C}{2\sigma_b^2\sigma_g^2} = \left\{ \begin{array}{l l} \infty & \quad \sigma_g^2 < \sigma_b^2\\ -\infty & \quad \sigma_g^2 \geq \sigma_b^2, \ \text{assuming} \ \mu_g>\mu_b \end{array} \right. \end{aligned} \end{equation*} So, \begin{equation*} \lim_{t \rightarrow \infty}\frac{qf_b(t)}{pf_g(t)} = \left\{ \begin{array}{l l} \infty & \quad \sigma_g^2 < \sigma_b^2\\ 0 & \quad \sigma_g^2 \geq \sigma_b^2, \ \text{assuming} \ \mu_g>\mu_b \end{array} \right. \end{equation*} Note that in (a) we assume that $p \neq 0$. This excludes the degenerate situation in which all the users are in the bad group; in that case all the users have the same channel, and the analysis is known and not of interest here. In the same manner, we also assume that $q \neq 0$ to exclude the opposite situation. Hence, the first limit is \begin{equation*} \lim_{t \rightarrow \infty}-\sqrt{2}p^2 \ \frac{f_g(t) \ \frac{t-\mu_g}{\sigma_g^2} \ \frac{\sigma_g f_g(t)}{\frac{t-\mu_g}{\sqrt{2}\sigma_g}+\sqrt{\frac{(t-\mu_g)^2}{2\sigma_g^2}+2}}} {\left(pf_g(t)+qf_b(t)\right)^2}= \left\{ \begin{array}{l l} 0 & \quad \sigma_g^2 < \sigma_b^2\\ -1 & \quad \sigma_g^2 \geq \sigma_b^2, \ \ \text{assuming} \ \mu_g>\mu_b \end{array} \right.
\end{equation*} As mentioned earlier, the first and second limits differ only in their indices; therefore, the result for the second limit is \begin{equation*} \lim_{t \rightarrow \infty}-\sqrt{2}q^2 \frac{f_b(t)\frac{t-\mu_b}{\sigma_b^2} \frac{\sigma_b f_b(t)}{\frac{t-\mu_b}{\sqrt{2}\sigma_b}+\sqrt{\frac{(t-\mu_b)^2}{2\sigma_b^2}+2}}} {\left(pf_g(t)+qf_b(t)\right)^2}= \left\{ \begin{array}{l l} -1 & \quad \sigma_g^2 < \sigma_b^2\\ 0 & \quad \sigma_g^2 \geq \sigma_b^2, \ \ \text{assuming} \ \mu_g>\mu_b \end{array} \right. \end{equation*} We turn now to the third limit, \begin{equation*} \begin{aligned} &\lim_{t \rightarrow \infty}-\sqrt{2}pq \frac{f_g(t)\frac{t-\mu_g}{\sigma_g^2} \frac{\sigma_b f_b(t)}{\frac{t-\mu_b}{\sqrt{2}\sigma_b}+\sqrt{\frac{(t-\mu_b)^2}{2\sigma_b^2}+2}}} {\left(pf_g(t)+qf_b(t)\right)^2} =\lim_{t \rightarrow \infty}-\sqrt{2} \ \frac{pqf_g(t) \ (t-\mu_g)\sigma_b \ f_b(t) \sqrt{2} \sigma_b} {\left(pf_g(t)+qf_b(t)\right)^2 \ \sigma_g^2 \ (t-\mu_b) \ \left(1+\sqrt{1+\frac{4\sigma_b^2}{(t-\mu_b)^2}} \right) }\\ &=-2\frac{\sigma_b^2}{\sigma_g^2}\lim_{t \rightarrow \infty} \ \frac{pf_g(t)}{\left(pf_g(t)+qf_b(t)\right)} \cdot \lim_{t \rightarrow \infty}\ \frac{qf_b(t)}{\left(pf_g(t)+qf_b(t)\right)} \cdot \lim_{t \rightarrow \infty}\ \frac{(t-\mu_g)}{(t-\mu_b)}\cdot \lim_{t \rightarrow \infty}\ \frac{1} {\left(1+\sqrt{1+\frac{4\sigma_b^2}{(t-\mu_b)^2}} \right) }\\ &=-\frac{\sigma_b^2}{\sigma_g^2}\lim_{t \rightarrow \infty}\ \frac{pf_g(t)}{\left(pf_g(t)+qf_b(t)\right)} \cdot \lim_{t \rightarrow \infty}\ \frac{qf_b(t)}{\left(pf_g(t)+qf_b(t)\right)}\\ &=-\frac{\sigma_b^2}{\sigma_g^2} \left\{ \begin{array}{l l} 0 & \quad \sigma_g^2 < \sigma_b^2\\ 1 & \quad \sigma_g^2 \geq \sigma_b^2, \ \ \text{assuming} \ \mu_g>\mu_b \end{array} \right. \cdot\left\{ \begin{array}{l l} 1 & \quad \sigma_g^2 < \sigma_b^2\\ 0 & \quad \sigma_g^2 \geq \sigma_b^2, \ \ \text{assuming} \ \mu_g>\mu_b\ \end{array} \right.
=0 \end{aligned} \end{equation*} The fourth limit shares the same result, and summing all four parts shows that the overall limit converges to $-1$. Considering the upper bound of the complementary error function yields the same value, since the analytical development is identical and \begin{equation*} \lim_{t \rightarrow \infty} \frac{1} {\left(1+\sqrt{1+\frac{4\sigma_g^2}{(t-\mu_g)^2}} \right)} =\lim_{t \rightarrow \infty} \frac{1} {\left(1+\sqrt{1+\frac{8\sigma_g^2}{\pi(t-\mu_g)^2}} \right)} =\frac{1}{2}. \end{equation*} Therefore, by the sandwich rule, condition \eqref{equ-Sufficient type 1 condition} holds for our stationary distribution. We now show that the second condition \eqref{equ-Necessary and sufficient type 1 condition} also holds for the stationary distribution. Let us examine the expression: \begin{equation*} 1-F(t)=p(1-F_g(t))+q(1-F_b(t)). \end{equation*} Since $F_g(t)$ and $F_b(t)$ are Gaussian distributions, we shall use the asymptotic relation \begin{equation}\label{equ-Asymptotic relation of Gaussian distribution} 1-\Phi(t) \sim \frac{\phi(t)}{t} \quad \text{as } t \rightarrow \infty \end{equation} \begin{equation*} \begin{aligned} 1-F(t)&\sim p\left(\frac{\sigma_g}{t-\mu_g}\phi(\frac{t-\mu_g}{\sigma_g}) \right)+q\left(\frac{\sigma_b}{t-\mu_b}\phi(\frac{t-\mu_b}{\sigma_b}) \right) \\ &=\frac{1}{\sqrt{2\pi}}\left(\frac{p\sigma_g}{t-\mu_g}e^{-\frac{(t-\mu_g)^2}{2\sigma_g^2}} +\frac{q\sigma_b}{t-\mu_b}e^{-\frac{(t-\mu_b)^2}{2\sigma_b^2}} \right) \\ &\overset{(a)}{=}\frac{1}{\sqrt{2\pi}}\frac{p\sigma_g}{t-\mu_g}e^{-\frac{(t-\mu_g)^2}{2\sigma_g^2}}(1+o(1)) \quad \text{as } t \rightarrow \infty \end{aligned} \end{equation*} where (a) is true since \begin{equation*} \lim_{t \rightarrow \infty} \frac{\frac{q\sigma_b}{t-\mu_b}e^{-\frac{(t-\mu_b)^2}{2\sigma_b^2}}}{\frac{p\sigma_g}{t-\mu_g}e^{-\frac{(t-\mu_g)^2}{2\sigma_g^2}}} = 0 \end{equation*}
assuming $\sigma_g>\sigma_b$ (the case $\sigma_g=\sigma_b$ with $\mu_g>\mu_b$ is analogous). Taking condition \eqref{equ-Necessary and sufficient type 1 condition} into consideration: \begin{equation*} \begin{aligned} &\frac{1-F(t+xg(t))}{1-F(t)}=\frac{\frac{1}{\sqrt{2\pi}}\frac{p\sigma_g}{t+xg(t)-\mu_g}e^{-\frac{(t+xg(t)-\mu_g)^2}{2\sigma_g^2}}(1+o(1))} {\frac{1}{\sqrt{2\pi}}\frac{p\sigma_g}{t-\mu_g}e^{-\frac{(t-\mu_g)^2}{2\sigma_g^2}}(1+o(1))} \\ &=\frac{t-\mu_g}{t+xg(t)-\mu_g} e^{\frac{-(t+xg(t)-\mu_g)^2+(t-\mu_g)^2}{2\sigma_g^2}} (1+o(1)) \\ &=\frac{1}{1+\frac{xg(t)}{t-\mu_g}}e^{-\frac{g(t)x(t-\mu_g)}{\sigma_g^2}}e^{-\frac{g^2(t)x^2}{2\sigma_g^2}}(1+o(1)) \end{aligned} \end{equation*} By choosing $g(t)=\frac{\sigma_g^2}{t-\mu_g}$, which is strictly positive as $t \rightarrow \infty$, we get \begin{equation*} \frac{1-F(t+xg(t))}{1-F(t)}=\frac{1}{1+\frac{x\sigma_g^2}{(t-\mu_g)^2}}e^{-x}e^{-\frac{\sigma_g^2 \ x^2}{2(t-\mu_g)^2}}(1+o(1))\rightarrow e^{-x} \quad \quad \text{as } t \rightarrow \infty \end{equation*} We conclude that the distribution function $F(x)$ belongs to the domain of attraction of Type \Rmnum{1}. A similar analysis can be found in \cite{mladenovic1999extreme}, where examples of convergence for sequences of independent random variables with the same mixed distribution are investigated.\\ We now derive the normalizing constants $a_K$ and $b_K$.\\ According to EVT results for \emph{i.i.d.} sequences \cite[Theorem 1.5.1]{EVT:Springer1983}, $u_K=u_K(x)=x/a_K+b_K$ is a sequence of real numbers such that $K(1-F(u_K))\rightarrow \uptau$ as $K\rightarrow \infty$; therefore, in our case: \begin{equation*} 1-pF_g(u_K)-qF_b(u_K)\rightarrow \frac{1}{K}e^{-x}, \quad K\rightarrow \infty \end{equation*} where $\uptau=e^{-x}$.
In the same way as in the previous part, using \eqref{equ-Asymptotic relation of Gaussian distribution}, we obtain \begin{equation*} \left(\frac{p\sigma_g}{u_K-\mu_g}\phi(\frac{u_K-\mu_g}{\sigma_g}) \right)+\left(\frac{q\sigma_b}{u_K-\mu_b}\phi(\frac{u_K-\mu_b}{\sigma_b}) \right)\rightarrow \frac{1}{K}e^{-x} \end{equation*} \begin{equation*} \frac{1}{\sqrt{2\pi}}\frac{p\sigma_g}{u_K-\mu_g}e^{-\frac{(u_K-\mu_g)^2}{2\sigma_g^2}}(1+o(1)) \rightarrow \frac{1}{K}e^{-x} \end{equation*} where the last step is true since $u_K \rightarrow \infty$ as $K \rightarrow \infty$, similarly to the previous part. Taking logarithms, \begin{equation}\label{equ-Proof for a_n b_n (1)} -\frac{1}{2}\log2\pi+\log p+\log\sigma_g-\log{(u_K-\mu_g)}-\frac{(u_K-\mu_g)^2}{2\sigma_g^2}+\log K+x+o(1) \rightarrow 0 \end{equation} It follows at once that $(u_K-\mu_g)^2 / (2\sigma_g^2\log K) \rightarrow 1$, and hence \begin{equation*} \log{(u_K-\mu_g)}=\log\sigma_g+\frac{1}{2}(\log 2 +\log{\log{K}})+o(1) \end{equation*} Putting this in \eqref{equ-Proof for a_n b_n (1)}, we obtain \begin{equation*} \frac{(u_K-\mu_g)^2}{2\sigma_g^2}=-\frac{1}{2}\log2\pi+\log p-\frac{1}{2}(\log 2 +\log{\log{K}})+\log K +x+o(1) \end{equation*} or \begin{equation*} \frac{(u_K-\mu_g)^2}{\sigma_g^2}= 2\log K\left(1+\frac{x-\frac{1}{2}\log{\frac{4\pi}{p^2}}-\frac{1}{2}\log{\log{K}}}{\log K}+o\left(\frac{1}{\log K}\right)\right) \end{equation*} and hence \begin{equation*} \frac{(u_K-\mu_g)}{\sigma_g}= \sqrt{2\log K}\left(1+\frac{x-\frac{1}{2}\log{\frac{4\pi}{p^2}}-\frac{1}{2}\log{\log{K}}}{2\log K}+o\left(\frac{1}{\log K}\right)\right) \end{equation*} where we used the expansion $\sqrt{1+\epsilon}=1+\frac{\epsilon}{2}+o(\epsilon)$. Therefore, \begin{equation*} u_K=\sigma_g\sqrt{2\log K} \left(1+\frac{x-\frac{1}{2}\log{\frac{4\pi}{p^2}}-\frac{1}{2}\log{\log{K}}}{2\log K}+o\left(\frac{1}{\log K}\right)\right)+\mu_g \end{equation*} Since we know that $u_K=x/a_K+b_K$, we conclude that \begin{equation*} \begin{aligned} &a_K=\frac{\sqrt{2\log{K}}}{\sigma_g}\\ &b_K=\sigma_g\left((2\log K)^{1/2}-\frac{\log{\log K}+\log{\frac{4\pi}{p^2}}}{2(2\log K)^{1/2}}\right)+\mu_g
\end{aligned} \end{equation*} \end{proof} \section{Scaling Law Under Time Dependent Channel}\label{sec-Scaling Law Under Time Dependent Channel} Up until now, we explored the performance of the MAC system for time independent and time dependent channel scenarios. We considered a distributed threshold-based scheduling algorithm, which ensures that when an exceedance occurs, the transmitting user transmits with high channel capacity. This is due to the high threshold value, which is set such that only one user exceeds it on average. As mentioned, this scheme exploits multi-user diversity, i.e., it lets the best user utilize the channel. The scaling laws for independent channels were studied in \cite{qin2003exploiting,qin2006distributed,kampeas2014capacity}. However, to the best of our knowledge, the scaling law of the channel capacity for time dependent channels has not been considered yet. Hence, in this section, we derive the scaling laws of the channel capacity for our MAC system under the Good-Bad channel model described earlier. We note that, for this analysis, we consider the users to be backlogged at all times. We start by formulating the problem and analysing the scaling laws under a centralized scheduling algorithm, where the base station chooses the strongest user for transmission using the channel state information (CSI) sent to it from the users. We then turn to the distributed algorithm and analyse it as well. \\ In order to exploit user diversity, the user with the best channel capacity must utilize the channel.
Therefore, the problem reduces to finding the distribution of the random variable $\widetilde{M_K}$, the maximum capacity in a time dependent channel, defined as: \begin{equation}\label{equ-CapacityDefinition} \widetilde{M_K}=\max\{C_1(n),C_2(n),...,C_K(n)\}, \end{equation} where the capacity $C_i(n)$ of the $i$-th user in each slot is determined by the Good-Bad Markov process $\{J(n)\}$: \begin{equation}\label{equ-UserCapacityProcess} C_i(n)= \begin{cases} N_g & \text{when } J_i(n)=Good \\ N_b & \text{when } J_i(n)=Bad. \end{cases} \end{equation} $N_g$ and $N_b$ are random variables distributed normally with parameters $(\mu_g,\sigma_g)$ and $(\mu_b,\sigma_b)$, respectively. That is, we assume that in each slot, each user is in either a Good or a Bad state, distinguished by different parameters of the Gaussian distribution which models the capacity. This is due to the Gaussian approximation for the MIMO\footnote{The assumption of a MIMO channel is used only to have a concrete expression for the capacity, with a reasonable approximation (in this case, as a Gaussian random variable; see e.g., \cite{shmuel2014capacity}).} channel capacity \cite{smith2002gaussian,chiani2003capacity}. The parameters reflect the differences in the channel qualities, in the sense that the Good-channel parameters satisfy either $(1) \ \sigma_g>\sigma_b, \ \mu_g,\mu_b\in\mathds{R}$, or $(2) \ \sigma_g=\sigma_b$ and $\mu_g>\mu_b$.\\ We are interested in the expected channel capacity $E[\widetilde{M_K}]$ in the limit of large $K$. Here, we first use results from EVT, as well as results concerning time dependent processes, in order to evaluate the limit distribution of the maximal value when we consider centralized scheduling. Then, when considering distributed scheduling, we use results from PPA in order to analyse threshold arrival rates and tail distributions.
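For concreteness, the capacity process in \eqref{equ-UserCapacityProcess} can be simulated directly. The sketch below (with assumed transition probabilities $\alpha,\beta$ and assumed Gaussian parameters, chosen only for illustration) generates one user's capacity trace and checks its empirical mean against the stationary mixture mean $p\mu_g+q\mu_b$:

```python
import random

# Illustrative sketch of the Good-Bad capacity process C_i(n); alpha, beta
# and the Gaussian parameters below are assumed values, not from the paper.
random.seed(1)
alpha, beta = 0.1, 0.2              # P(Good -> Bad), P(Bad -> Good)
mu_g, sig_g = 2.0, 0.5
mu_b, sig_b = 0.5, 0.5

def capacity_trace(n_slots):
    """One user's capacity sequence driven by the 2-state chain J(n)."""
    # Start from the stationary distribution (p, q) = (beta, alpha)/(alpha+beta).
    state = 'G' if random.random() < beta / (alpha + beta) else 'B'
    trace = []
    for _ in range(n_slots):
        if state == 'G':
            trace.append(random.gauss(mu_g, sig_g))
            state = 'B' if random.random() < alpha else 'G'
        else:
            trace.append(random.gauss(mu_b, sig_b))
            state = 'G' if random.random() < beta else 'B'
    return trace

trace = capacity_trace(200_000)
p = beta / (alpha + beta)           # stationary probability of the Good state
print(sum(trace) / len(trace), p * mu_g + (1 - p) * mu_b)
```

The two printed values should agree closely, reflecting that the chain-dependent draws average out to the stationary mixture.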
\subsection{Centralized Scheduling} Let us consider the distribution of each $C_i(n)$, as defined in \eqref{equ-UserCapacityProcess}, which is determined by the Good-Bad Markov process $\{J(n)\}$. The stationary distribution of the chain is $(\frac{\beta}{\alpha+\beta},\frac{\alpha}{\alpha+\beta})$, which we denote in short as $(p,q)$. We have \ifdouble \small \begin{equation}\label{equ-stationary distribition of C_i(n)} \begin{aligned} F(x):= &P(C_i(n)\leq x)= P(N_g(n)\leq x \mid J(n)=G )\\ & \qquad \cdot P(J(n)=G)+P(N_b(n)\leq x \mid J(n)=B )P(J(n)=B)\\ = &pF_g(x)+qF_b(x), \end{aligned} \end{equation} \normalsize \else \begin{equation}\label{equ-stationary distribition of C_i(n)} \begin{aligned} F(x):= &P(C_i(n)\leq x)= P(N_g(n)\leq x \mid J(n)=G )\\ & \qquad \cdot P(J(n)=G)+P(N_b(n)\leq x \mid J(n)=B )P(J(n)=B)\\ = &pF_g(x)+qF_b(x), \end{aligned} \end{equation} \fi where $F_g(x)$ and $F_b(x)$ are Gaussian distributions with parameters $(\mu_g,\sigma_g)$ and $(\mu_b,\sigma_b)$, respectively. The distribution of the maximal value is of the form \begin{equation*} P(\widetilde{M_K}\leq x)= P(C_1(n)\leq x,...,C_K(n)\leq x)=F^K(x), \end{equation*} due to the independence \emph{between the users}. We wish to examine the behaviour of $F^K(x)$ as $K \rightarrow \infty$. Note that this means we actually use the stationary distribution as the marginal distribution. Our main result in this context is the following. \begin{theorem}\label{thm-Capacity distribution in a time dependent channel convergence to a Gumbel} Let $(C_1,...,C_K)$ be the sequence of the users' capacities in a certain time slot, where each capacity has the distribution $F(x)$ given in \eqref{equ-stationary distribition of C_i(n)}. Then, the asymptotic distribution of the scheduled user's capacity, i.e., of $\widetilde{M_K}=\max\{C_1,...,C_K\}$, is a Gumbel distribution.
Specifically, \begin{equation}\label{equ-Capacity distribution of theorem} P\{a_K(\widetilde{M_K}-b_K)\leq x\}\rightarrow e^{-e^{-x}} \end{equation} where \ifdouble \small \begin{equation} \begin{aligned} a_K&= \frac{\sqrt{2\log{K}}}{\sigma_g}, \\ b_K&= \sigma_g\left(\sqrt{2\log K}-\frac{\log{\log K}+\log{\frac{4\pi}{p^2}}}{2\sqrt{2\log K}}\right)+\mu_g. \end{aligned} \end{equation} \normalsize \else \begin{equation} a_K= \frac{\sqrt{2\log{K}}}{\sigma_g}, \ \ b_K= \sigma_g\left(\sqrt{2\log K}-\frac{\log{\log K}+\log{\frac{4\pi}{p^2}}}{2\sqrt{2\log K}}\right)+\mu_g. \end{equation} \fi Therefore, the expected throughput of the transmitting user is \ifdouble \small \begin{multline}\label{equ-Expected channel capacity (stationary distribution)} E[\widetilde{M_K}]= b_K+\frac{\gamma}{a_K}=\\ \sigma_g\left(\left(\sqrt{2\log K}-\frac{\log{\log K}+\log{\frac{4\pi}{p^2}}}{2\sqrt{2\log K}}\right)+\frac{\gamma}{\sqrt{2\log{K}}}\right) +\mu_g, \end{multline} \normalsize \else \begin{equation}\label{equ-Expected channel capacity (stationary distribution)} E[\widetilde{M_K}]= b_K+\frac{\gamma}{a_K}=\sigma_g\left(\left(\sqrt{2\log K}-\frac{\log{\log K}+\log{\frac{4\pi}{p^2}}}{2\sqrt{2\log K}}\right)+\frac{\gamma}{\sqrt{2\log{K}}}\right) +\mu_g, \end{equation} \fi where $\gamma \approx 0.57721$ is the Euler-Mascheroni constant. \end{theorem} The proof of Theorem \ref{thm-Capacity distribution in a time dependent channel convergence to a Gumbel} builds on EVT results for stationary processes and is given in full in Appendix \ref{Appendix C}.
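As a numerical sanity check of Theorem \ref{thm-Capacity distribution in a time dependent channel convergence to a Gumbel}, the sketch below compares $b_K+\gamma/a_K$ with a Monte Carlo estimate of $E[\widetilde{M_K}]$ obtained by drawing $K$ capacities from the stationary mixture. All parameter values are illustrative assumptions, not those used in the paper's figures.

```python
import math
import random

# Assumed, illustrative parameters for the stationary mixture.
random.seed(7)
K = 5000
p = 0.5
mu_g, sig_g = math.sqrt(2), 0.5
mu_b, sig_b = 0.0, 0.3
gamma = 0.5772156649                     # Euler-Mascheroni constant

# Normalizing constants of the theorem.
a_K = math.sqrt(2 * math.log(K)) / sig_g
b_K = sig_g * (math.sqrt(2 * math.log(K))
               - (math.log(math.log(K)) + math.log(4 * math.pi / p ** 2))
               / (2 * math.sqrt(2 * math.log(K)))) + mu_g
predicted = b_K + gamma / a_K            # predicted E[M_K]

def draw_capacity():
    # One draw from the stationary mixture p*N(mu_g,sig_g) + q*N(mu_b,sig_b).
    if random.random() < p:
        return random.gauss(mu_g, sig_g)
    return random.gauss(mu_b, sig_b)

trials = 300
mc = sum(max(draw_capacity() for _ in range(K)) for _ in range(trials)) / trials
print(predicted, mc)   # the two values should be close
```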
\begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{MaxialCapacityDistributionForTimeDependentChannel_5000} \caption[Simulation for capacity distribution]{The maximal capacity distribution, where the capacities were drawn according to the stationary distribution (dashed line) and according to a varying Good-Bad process for each user (solid line), compared to the corresponding Gumbel density with the constants $a_K$ and $b_K$ (red line), for $K=5000$ users.} \label{fig-MaxialCapacityDistributionForTimeDependentChannel_5000} \end{figure} Note that the proof of Theorem \ref{thm-Capacity distribution in a time dependent channel convergence to a Gumbel} is based on the assumption that $p,q \neq 0$, which implicitly implies that $p$ and $q$ are bounded away from zero. Nevertheless, note that the expected throughput of the transmitting user when setting $p=1$, which implies that all users have a Good channel state, agrees with the one presented in \cite{kampeas2014capacity}. Simulation results for the capacity distribution are given in Figure \ref{fig-MaxialCapacityDistributionForTimeDependentChannel_5000}. The figure clearly depicts a match between the analysis and the simulation results.
The probability $p$, which is the stationary probability of being in a Good state, governs the average number of users which are in a Good state. That is, in each time slot, one can distinguish between two groups of users: the users that are in a Good state and the users that are in a Bad state. It is easy to show that the number of good users is, on average, $pK$. Hence, as $p$ grows, the expected capacity grows, since there are more users in a Good state. This is clear from the analytical result in \eqref{equ-Expected channel capacity (stationary distribution)} for the channel capacity, as well as from Figure \ref{fig-Capacity comparison as Function of p}. One may wonder why the bad group should be considered at all, that is, why a user in a Bad channel state should be taken into account in the scheduling decision process. Leaving only the users with a good channel to compete for the channel, the capacity for $K$ sufficiently large is \begin{equation}\label{equ-CapacityExpressionOnlyGood} E[\widetilde{M_{pK}}] = \sigma_g\sqrt{2\log pK}+\mu_g+\text{o}\left(\frac{1}{\sqrt{\log{pK}}}\right).
\end{equation} Figure \ref{fig-Capacity comparison as Function of p} depicts the influence of the bad group on the capacity. For rather small values of $p$, it is beneficial to schedule the strongest user \emph{from both groups}. As $p$ grows, the size of the good group grows as well; hence, the two curves converge to the case where all the users are in a Good state. Thus, for small values of $p$ the users in the \emph{Bad} state still have a significant impact, in the sense that they have a high enough probability to be the strongest and gain channel access. Figures \ref{fig-capacityGainFforTimeDependentChannel_p=0.5} and \ref{fig-capacityGainFforTimeDependentChannel_p=0.2} give a different perspective: the capacity as a function of the number of users. In Figure \ref{fig-capacityGainFforTimeDependentChannel_p=0.2}, the difference between the capacities is noticeable.
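The comparison underlying the figures can be reproduced numerically; the sketch below evaluates \eqref{equ-Expected channel capacity (stationary distribution)} against the good-group-only capacity, where for a fair finite-$K$ comparison the good-only curve keeps the same second-order Gumbel correction (the displayed \eqref{equ-CapacityExpressionOnlyGood} keeps only the leading term). $K$ is an assumed value; $\mu_g,\sigma_g$ follow the figure.

```python
import math

# Expected capacity comparison: whole population vs. good group only.
# K is assumed; mu_g = sqrt(2) and sig_g = 0.5 follow the figure caption.
K = 5000
mu_g, sig_g = math.sqrt(2), 0.5
gamma = 0.5772156649

def cap_all(p):
    # Expected capacity when scheduling over the whole population (weight p).
    s = math.sqrt(2 * math.log(K))
    return sig_g * (s - (math.log(math.log(K)) + math.log(4 * math.pi / p ** 2))
                    / (2 * s) + gamma / s) + mu_g

def cap_good_only(p):
    # Same Gumbel expression applied to the good group alone (size pK, weight 1).
    s = math.sqrt(2 * math.log(p * K))
    return sig_g * (s - (math.log(math.log(p * K)) + math.log(4 * math.pi))
                    / (2 * s) + gamma / s) + mu_g

for p in (0.2, 0.5, 0.9):
    print(p, round(cap_all(p), 3), round(cap_good_only(p), 3))
```

For small $p$ the whole-population value is slightly larger, and the two curves coincide as $p$ approaches one, in line with the discussion above.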
\begin{figure}[t] \centering \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth,height=4cm,keepaspectratio]{capacityGainFforTimeDependentChannel_p=05} \caption{} \label{fig-capacityGainFforTimeDependentChannel_p=0.5} \end{subfigure}% \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\textwidth,height=4cm,keepaspectratio]{capacityGainFforTimeDependentChannel_p=02} \caption{}
\label{fig-capacityGainFforTimeDependentChannel_p=0.2} \end{subfigure}% \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\textwidth,height=4cm,keepaspectratio]{capacityComparisonAsFuncOfP} \caption{} \label{fig-Capacity comparison as Function of p} \end{subfigure} \caption[Capacity gain comparison]{Capacity comparison between choosing only from the ``good'' group, of size $Kp$, and choosing from the whole population (as given in \eqref{equ-Expected channel capacity (stationary distribution)}), as a function of the number of users for fixed $p$, with (a) $p=0.5$ and (b) $p=0.2$, and as a function of $p$ for $K=5000$ users in (c). Here $\mu_g = \sqrt{2}$ and $\sigma_g = 0.5$.} \label{fig-gain for time dependent channel} \end{figure} \subsection{Distributed Scheduling} One of the main disadvantages of centralized scheduling is the overhead (e.g., CSI) required for proper communication; for multi-user systems this drawback is all the more acute. Hence, distributed scheduling schemes become more attractive. One distributed approach, which has shown excellent exploitation of multi-user diversity, lets the users transmit based on their instantaneous channel condition \cite{qin2006distributed}. Specifically, given a predefined threshold, users may attempt transmission only if their channel gain exceeds it. When considering this distributed scheduling algorithm, we derive the expected channel capacity using PPA. This method can also be found in \cite{kampeas2014capacity} for the case of \textit{i.i.d.} and heterogeneous users. We first give a brief review of the construction of the point process and the relevant results which will be used throughout this subsection. Let $\{X_n\}$ be a sequence of standard normal \textit{i.i.d.} variables, with the Gumbel distribution as the extreme value distribution and normalizing constants $a_n$ and $b_n$.
We then define a sequence of points on $[0,1]\times \mathds{R}^2$ by: \begin{equation}\label{equ-sequence of point processes N_n} N_n=\Big\{\frac{i}{n},X_i:i=1,...,n \Big \}. \end{equation} Under the above, we have the following Theorem. \begin{theorem}\label{thm-Point process convergance of iid}(\cite{EVT:Springer1983}) The sequence $N_n$ on $[0,1]\times (u,\infty)$, for some large value of $u$, converges to a non-homogeneous Poisson process $N$ with parameter $\uptau$, that is, $N_n\rightarrow N \ \ \text{as } \ n\rightarrow \infty$. \end{theorem} However, since we are interested in a time dependent environment, we would like to explore the point process analysis for more general stationary sequences. In \cite{leadbetter1976weak}, exceedances of a high level $u_n$ by a stationary sequence $\{X_i\}$ (i.e., points where $X_i>u_n$) were analyzed, obtaining Poisson limits under weak dependence restrictions. In particular, the following conditions make precise the notion of extreme events being near-independent if they are sufficiently distant in time. \begin{Definition}\label{def-StrongMixing} We say $\{X_n\}$ is strongly mixing if there is a function $g$ on the positive integers with $g(k) \rightarrow 0$ as $k \rightarrow \infty$ such that, if $A \in \mathfrak{F}(X_1,...,X_m)$ and $B \in \mathfrak{F}(X_{m+k},X_{m+k+1},...)$ for some $k, m\geq 1$, then $$\mid P(A\cap B)-P(A)P(B) \mid \leq g(k),$$ where $\mathfrak{F}(\cdot)$ denotes the $\sigma$-field generated by the indicated random variables. \end{Definition} When trying to weaken the strong mixing condition, one notes that the events of interest in extreme value theory are typically those of the form $\{X_n\leq u\}$.
Hence we have, \begin{Definition}\label{def-D condition}(\cite{EVT:Springer1983}) A stationary series $\{X_n\}$ is said to satisfy the $D(u_n)$ condition if \\ for all $i_1<...<i_p<j_1<...<j_q$ with $j_1-i_p>l$, \ifdouble \footnotesize \begin{multline*} |P_r\{X_{i_1}\leq u_n,...,X_{i_p}\leq u_n, X_{j_1}\leq u_n,...,X_{j_q}\leq u_n\} - \\ P_r\{X_{i_1}\leq u_n,...X_{i_p}\leq u_n\}P_r\{ X_{j_1}\leq u_n,...,X_{j_q}\leq u_n\}|\leq\alpha(n,l), \end{multline*} \normalsize \else \begin{multline*} |P_r\{X_{i_1}\leq u_n,...,X_{i_p}\leq u_n, X_{j_1}\leq u_n,...,X_{j_q}\leq u_n\} - \\ P_r\{X_{i_1}\leq u_n,...X_{i_p}\leq u_n\}P_r\{ X_{j_1}\leq u_n,...,X_{j_q}\leq u_n\}|\leq\alpha(n,l), \end{multline*} \fi where $\alpha(n,l)\rightarrow 0$ for some sequence $l_n$ such that $l_n/n \rightarrow 0$ as $n\rightarrow \infty$. \end{Definition} Another condition which is highly relevant is the local dependence condition $D'(u_n)$: \begin{Definition}\label{def-D' condition}(\cite{EVT:Springer1983}) We say that $D'(u_n)$ is satisfied if, as $k\rightarrow \infty$, \begin{equation*} \limsup_{n \rightarrow \infty} n\sum_{j=2}^{ \lfloor n/k\rfloor } P\{X_1>u_n,X_j>u_n\}\rightarrow 0. \end{equation*} \end{Definition} Considering the above, we have, \begin{theorem}\label{thm-Point process convergance of stationary process}(\cite{leadbetter1976weak}) Let $D(u_n),D'(u_n)$ hold for the stationary sequence $\{X_i\}$ with $u_n=u_n(\uptau)$, such that $n(1-F(u_n))=nP\{X_1>u_n\}\rightarrow \uptau$ as $n\rightarrow\infty$ for all $\uptau > 0$ . Let $N_n$ be the point process, consisting of the exceedances of $u_n(\uptau)$. Then $N_n \overset{d}{\rightarrow} N$ as $n\rightarrow\infty$, where $N$ is a Poisson process with parameter $\uptau$. \end{theorem} Using Theorem \ref{thm-Point process convergance of stationary process}, we now analyze each user separately and examine its sequence of channel capacities over time. 
Specifically, we will show that the exceeding points in this \emph{dependent sequence} converge to a Poisson process, similar to the one obtained had the sequence been \textit{i.i.d.}. The sequence $\{C_i(n)\}$ depends on the state user $i$ is in, which is, in turn, governed by the underlying Markov chain process $\{J_i(n)\}$, as defined in \eqref{equ-UserCapacityProcess}. Such processes have been studied before \cite{janssen1969processus,denzel1975limit}, and are known as $\tilde{J}-X$ processes or ``chain-dependent'' processes. In our context, we shall consider the sequence $\{X_n\}$ as the sequence of capacities $\{C_i(n)\}$ of the $i$-th user over time, and the chain process $\{\tilde{J}_n\}$ as the sequence of the irreducible, aperiodic, 2-state Good-Bad Markov chain $\{J_i(n)\}$. Since we analyze the capacity process of a single user, and all users have the same distribution, we omit the user index. We have \ifdouble \small \fi \begin{equation}\label{equ-J-C process defenition our model} \begin{aligned} P(J_n &= j , C_n \leq \alpha \mid J_0,C_1,J_1,...,C_{n-1},J_{n-1}=i) \\ &= P(J_n=j,C_n\leq \alpha \mid J_{n-1}=i)=P_{ij}H_j(\alpha), \end{aligned} \end{equation} \ifdouble \normalsize \fi where $P$ is the transition matrix of the chain and $H_j(\alpha)$, for $j$ in the state space, are the distribution functions associated with the respective chain states. Note that each state determines the distribution of $X$ for the current time transition. This means that given the chain process $\{J_n\}$, the random variables of the $\{C_n\}$ process are conditionally independent. If the initial distribution of the chain is the stationary distribution, i.e., $P(J_0=i)=\pi_i$ for all $i$ in the finite state space, where $\pi$ is the stationary distribution, then the distribution of $C_n$ is $H(x)=\sum_i \pi_i H_i(x)$. In \cite{denzel1975limit}, it was stated that every stationary chain-dependent process is strongly mixing.
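To make the chain-dependent construction concrete, the following minimal numerical sketch (with a hypothetical transition matrix and Gaussian state distributions chosen only for illustration, not the parameters used elsewhere in this paper) first draws the chain $\{J_n\}$ in stationarity and then draws each $C_n$ from the distribution of the current state, as in \eqref{equ-J-C process defenition our model}. It checks that the empirical marginal of $C_n$ matches the mixture $H(x)=\sum_i \pi_i H_i(x)$, and that the mixing coefficient $\sum_{i,j}\pi_i\mid (P^k)_{ij}-\pi_j\mid$ decays to zero.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Hypothetical 2-state Good-Bad chain: transition matrix P and
# Gaussian state distributions H_0 (good) and H_1 (bad).
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
mu = np.array([2.0, 0.5])
sigma = np.array([0.4, 0.4])

# Stationary distribution: pi P = pi (left eigenvector for eigenvalue 1).
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# Draw the chain in stationarity, then C_n | J_n = j ~ H_j
# (conditional independence given the chain).
n = 100_000
J = np.empty(n, dtype=int)
J[0] = rng.choice(2, p=pi)
for t in range(1, n):
    J[t] = rng.choice(2, p=P[J[t - 1]])
C = rng.normal(mu[J], sigma[J])

# Empirical marginal of C_n matches H(x) = sum_i pi_i H_i(x).
H = lambda x: sum(pi[i] * 0.5 * (1 + erf((x - mu[i]) / (sigma[i] * sqrt(2))))
                  for i in range(2))
print(abs(np.mean(C <= 1.0) - H(1.0)))       # small

# Mixing coefficient sum_{i,j} pi_i |(P^k)_{ij} - pi_j| decays to 0.
for k in (1, 5, 20):
    Pk = np.linalg.matrix_power(P, k)
    print(k, sum(pi[i] * abs(Pk[i, j] - pi[j])
                 for i in range(2) for j in range(2)))
```

The decay of the mixing coefficient is geometric in $k$ (driven by the second eigenvalue of $P$), which is what the strong mixing argument below exploits.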
We will show that the $J-C$ process, as defined in this paper, is indeed strongly mixing by Definition \ref{def-StrongMixing}. \begin{lemma}\label{lem-C_n is strongly mixing} $\{C_n\}$ is strongly mixing with $g(k)=\sum_{i,j} \pi_i \mid P_{ij}^k -\pi_j \mid$, where $\pi$ is the stationary distribution of the chain and $P_{ij}^k=(P^k)_{ij}$. \end{lemma} \begin{proof} Let $A$ and $B$ be as in Definition \ref{def-StrongMixing}. Then \ifdouble \small \begin{equation*} \begin{aligned} & \mid P(A \cap B)-P(A)P(B) \mid \\ & \leq \sum_{i,j\in\{0,1\}} \mid P(A \cap B,J_m=i,J_{m+k}=j) -P(A,J_m=i)P(B,J_{m+k}=j) \mid \\ &=\sum_{i,j\in\{0,1\}} P(A|J_m=i)P(B|J_{m+k}=j)P(J_m=i)\mid P(J_{m+k}=j|J_m=i)-P(J_{m+k}=j) \mid \\ &\leq \sum_{i,j\in\{0,1\}} \pi_i \mid P_{ij}^k -\pi_j \mid = g(k).\\ \end{aligned} \end{equation*} \normalsize \else \begin{equation*} \begin{aligned} & \mid P(A \cap B)-P(A)P(B) \mid \\ & \leq \sum_{i,j\in\{0,1\}} \mid P(A \cap B,J_m=i,J_{m+k}=j)-P(A,J_m=i)P(B,J_{m+k}=j) \mid \\ &=\sum_{i,j\in\{0,1\}} P(A|J_m=i)P(B|J_{m+k}=j)P(J_m=i)\mid P(J_{m+k}=j|J_m=i)-P(J_{m+k}=j) \mid \\ &\leq \sum_{i,j\in\{0,1\}} \pi_i \mid P_{ij}^k -\pi_j \mid = g(k).\\ \end{aligned} \end{equation*} \fi For each $i \in \{0,1\}$, $\sum_{j} \pi_i \mid P_{ij}^k -\pi_j \mid \rightarrow 0$ as $k \rightarrow \infty$; this argument was also used in \cite{o1974limit} to show the strong mixing property. Indeed, as $k \rightarrow \infty$, $(P^k)_{ij} \rightarrow \pi_j$ regardless of $i$. \end{proof} Since strong mixing holds for our sequence of channel capacities, the weaker condition $D(u_n)$ holds as well. We would like to show that condition $D'(u_n)$ also holds, so that we are able to characterize the rate of exceedance over the threshold $u_n$. Note that we are only interested in a sequence of reals $\{u_n\}$ which satisfies $1-H(u_n)=\uptau/n+o(1/n)$ when considering the $D'(u_n)$ condition. We thus have the following lemma.
\begin{lemma}\label{lem-condition D'(u_n) holds on C_n} The local dependence condition $D'(u_n)$ holds for the sequence $\{C_n\}$ as defined in \eqref{equ-UserCapacityProcess}. \end{lemma} \begin{proof} \ifdouble \small \fi \begin{equation*} \begin{aligned} &\limsup_{n \rightarrow \infty} n\sum_{r=2}^{ \lfloor n/k\rfloor } P(C_1>u_n,C_r>u_n) \\ &\overset{(a)}{\leq} \limsup_{n \rightarrow \infty} n\sum_{r=2}^{ \lfloor n/k\rfloor } \sum_{i,j\in\{0,1\}} P(C_1>u_n | J_1=i)\\ &\quad \quad \quad \cdot P(C_r>u_n | J_r=j) P(J_1=i)P(J_r=j | J_1=i) \\ &= \limsup_{n \rightarrow \infty} n\sum_{r=2}^{ \lfloor n/k\rfloor } \sum_{i,j\in\{0,1\}} (1-H_i(u_n))(1-H_j(u_n))\pi_i P_{ij}^r\\ &\overset{(b)}{\leq} \limsup_{n \rightarrow \infty} n\sum_{r=2}^{ \lfloor n/k\rfloor } \sum_{i,j\in\{0,1\}} \left(\frac{\uptau}{n\pi_i}+o\left(\frac{1}{n}\right) \right) \left(\frac{\uptau}{n\pi_j}+o\left(\frac{1}{n}\right) \right) \pi_i P_{ij}^r\\ &\leq \limsup_{n \rightarrow \infty} n\sum_{r=2}^{ \lfloor n/k\rfloor } \sum_{i,j\in\{0,1\}}\frac{1}{\pi_j\pi_i} \left(\frac{\uptau}{n}+o\left(\frac{1}{n}\right) \right)^2 \pi_i P_{ij}^r\\ &\leq \limsup_{n \rightarrow \infty} n \left\lfloor \frac{n}{k}\right\rfloor \left(\frac{\uptau}{n}+o\left(\frac{1}{n}\right) \right)^2 \sum_{i,j\in\{0,1\}}\frac{1}{\pi_j\pi_i} \pi_i \max_r\{P_{ij}^r\}\\ &=(\uptau^2+o(1)) \frac{1}{k} \sum_{i,j\in\{0,1\}}\frac{1}{\pi_j\pi_i} \pi_i \max_r\{P_{ij}^r\} \rightarrow 0 \text{ as } k \rightarrow \infty. \end{aligned} \end{equation*} \ifdouble \normalsize \fi In the above chain, (a) holds since, given $J_1$, $C_1$ is independent of $(C_r,J_r)$; we then condition on $J_r$.
(b) is true since $u_n$ satisfies $n(1-F_1(u_n))=\uptau +o(1)$, so we have \ifdouble \small \fi \begin{equation*} \begin{aligned} &1-F_1(u_n)=1-H(u_n)=1-\sum_{i} \pi_i H_i(u_n)\\ &=\sum_{i} \pi_i(1- H_i(u_n))= \frac{\uptau}{n}+o\left(\frac{1}{n}\right) \end{aligned} \end{equation*} \ifdouble \normalsize \fi and therefore \ifdouble \small \fi \begin{equation*} \pi_l(1-H_l(u_n))=\frac{\uptau}{n}-\sum_{i,i\neq l} \pi_i(1- H_i(u_n))\leq \frac{\uptau}{n} +o\left(\frac{1}{n}\right). \end{equation*} \ifdouble \normalsize \fi Note that $\pi_l$ and $\uptau$ are constants. \end{proof} Thus, in our paradigm, a single user's channel capacity process obeys the same convergence laws as if the sequence $\{C_n\}$ were \textit{i.i.d.} with marginal distribution $H(x)$. Therefore, since the users are independent and each user sees the same marginal distribution $H(x)$, we can analyze the point process of the sequence of all users' capacities at a specific time (e.g., a time slot), resulting in the basic case of \textit{i.i.d.} random variables, as given in Theorem \ref{thm-Point process convergance of iid}. Considering the above, we can now turn to evaluate the expected channel capacity. \subsubsection{Distributed Algorithm} Given the number of users, we set a capacity threshold $u$ such that only a small fraction of the users will exceed it. At the beginning of each slot, each user estimates its capacity for that slot. If the capacity anticipated by the user is greater than the capacity threshold, it transmits in that slot. Otherwise, it keeps silent. The threshold value is set such that one user exceeds the threshold on average, and as a result, its transmission is successful. We treat this slot as a utilized slot. Hence the expected channel capacity has the form: \begin{equation*} C_{av}(u)=P_r(\text{utilized slot})E[C|C>u].
\end{equation*} For the calculation of $E[C|C>u]$, the expected capacity experienced by a user who exceeded $u$, one needs to evaluate the distance of the exceeding points from the threshold. \cite{kampeas2014capacity} gives the analytical tools to compute the tail distribution of the exceeding points. These points follow the generalized Pareto distribution; hence, using the PPA exceedance rate results, and since we already showed that the exceedances behave as in an \textit{i.i.d.} case with $H(x)$ as the marginal distribution, the result is the same and we have \begin{equation*} E[C|C>u]=u+\frac{1}{a_K}+o\left(\frac{1}{a_K}\right), \end{equation*} where $a_K$ is the normalizing constant as in Theorem \ref{thm-Capacity distribution in a time dependent channel convergence to a Gumbel}. We say that a slot is utilized if exactly one point out of all $K$ points exceeds the threshold. Hence, as $K \to \infty$, \begin{equation*} K\left(\frac{1}{K}\right)\left(1-\frac{1}{K}\right)^{K-1}\to e^{-1}, \end{equation*} where the threshold $u$ was chosen such that $1-H(u)=1/K$. We will elaborate on this value in the next subsection. The expected channel capacity is thus \begin{equation}\label{equ-Expected channel capacity (PPA)} C_{av}(u)=e^{-1}\left(u+\frac{1}{a_K}+o\left(\frac{1}{a_K}\right)\right). \end{equation} Clearly, in order to assess the expression above and understand how the capacity scales in a distributed algorithm, one has to compute the value of the threshold. \subsubsection{Threshold Estimation} The threshold is set such that only one user on average exceeds it in each time slot. This selection maximizes the probability of a successful transmission in a slot, and is asymptotically optimal as $n \rightarrow \infty$ \cite{qin2003exploiting}.
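The $e^{-1}$ limit of the utilized-slot probability, i.e., the probability that exactly one of $K$ independent users exceeds a threshold with per-user exceedance probability $1/K$, can be checked directly (an illustrative numerical sketch, independent of the rest of the analysis):

```python
import math

def utilized_slot_prob(K: int) -> float:
    # P(exactly one of K users exceeds), per-user exceedance prob 1/K:
    # K * (1/K) * (1 - 1/K)^(K-1), which tends to e^{-1} as K grows.
    return K * (1.0 / K) * (1.0 - 1.0 / K) ** (K - 1)

for K in (10, 100, 10_000):
    print(K, utilized_slot_prob(K))
print(math.exp(-1))  # the limit e^{-1}, approximately 0.3679
```

The probability decreases monotonically toward $e^{-1}$, so for moderate $K$ the utilized-slot probability is in fact slightly better than the asymptotic value.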
By using this rule we can estimate the optimal threshold value for the time dependent channel as well: \begin{equation*} 1-H(u)=\frac{1}{K}, \ \ \text{hence} \ \ \ 1-pF_g(u)-qF_b(u)=\frac{1}{K},\\ \end{equation*} where the above represents the probability that a user's capacity will exceed the threshold $u$. By a derivation similar to that of Theorem \ref{thm-Capacity distribution in a time dependent channel convergence to a Gumbel} we get that \ifdouble \small \begin{multline}\label{equ-Estimated threshold} u=\sigma_g\sqrt{2\log{K}}\left( 1-\frac{\frac{1}{2}\log{\frac{4\pi}{p^2}}+\frac{1}{2}\log{\log{K}}}{2\log{K}}+o\left( \frac{1}{\log{K}} \right) \right) +\mu_g=b_K. \end{multline} \normalsize \else \begin{equation}\label{equ-Estimated threshold} u=\sigma_g\sqrt{2\log{K}}\left( 1-\frac{\frac{1}{2}\log{\frac{4\pi}{p^2}}+\frac{1}{2}\log{\log{K}}}{2\log{K}}+o\left( \frac{1}{\log{K}} \right) \right)+\mu_g=b_K. \end{equation} \fi Substituting \eqref{equ-Estimated threshold} in \eqref{equ-Expected channel capacity (PPA)}, we get the same expression for the expected channel capacity under distributed scheduling, which shows that both approaches obey the same scaling laws, the distributed approach being smaller only by a factor of $e^{-1}$; hence, there is no loss of optimality in the scaling laws due to the distributed algorithm. \subsubsection{Threshold exceedance process} \label{subsec-Threshold_exceedance_process} We saw that the point process described earlier results in convergence of the points exceeding $u_n$ to a non-homogeneous Poisson process $N$. This justifies our initial assumption, in the performance part of this work, that the time between threshold exceedances is exponentially distributed with parameter $\uptau$.
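This assumption can be probed with a small simulation (an illustrative sketch with arbitrary parameters): draw blocks of i.i.d. standard Gaussian capacities, set the threshold so that $n(1-F(u_n))=\uptau$, and check that the number of exceedances per block is approximately Poisson with parameter $\uptau$, which is equivalent to exponential inter-exceedance times.

```python
import math
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
n, tau, runs = 50_000, 3.0, 400

# Threshold chosen so that n * (1 - F(u_n)) = tau, F standard Gaussian.
u_n = NormalDist().inv_cdf(1.0 - tau / n)

# Count exceedances of u_n in each of `runs` independent blocks.
counts = np.array([int(np.sum(rng.standard_normal(n) > u_n))
                   for _ in range(runs)])

# Counts per block are approximately Poisson(tau): the mean is near tau
# and the fraction of blocks with no exceedance is near e^{-tau}.
print(counts.mean())                          # close to tau = 3
print((counts == 0).mean(), math.exp(-tau))   # both near 0.0498
```
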
It is important to note that convergence in distribution happens when we change the ``time scale'' by a factor of $n$, as in the definition of $N_n$ in \eqref{equ-sequence of point processes N_n}. That is, we have a point process on the interval $(0,1]$, and the exceedances have a limiting Poisson distribution. Of course, when considering random arrivals, we are interested in the convergence of the exceeding points on the entire positive line and not just the unit interval. Here, we shall use the notation given in \cite{EVT:Springer1983} for the Poisson properties of exceedances.\\ Let $C_1,C_2,...,C_n$, a sequence of \textit{i.i.d.} r.v.s with distribution $F$, be the channel capacity draws of a specific user. From \cite[Theorem 2.1.1]{EVT:Springer1983}, if $u_n$ satisfies $n(1-F(u_n))\rightarrow \uptau$, then for $k=0,1,2,..., \ \ P(N_n\leq k)\rightarrow e^{-\uptau}\sum_{s=0}^{k}\frac{\uptau^{s}}{s!},$ where $N_n$ is the number of exceedances of a level $u_n$ by $\{C_n\}$. By \cite[Theorem $5.2.1 (\rmnum{2})$]{EVT:Springer1983}, if for each $\uptau >0$ there exists a sequence $\{u_n(\uptau)\}$ satisfying $n(1-F(u_n(\uptau)))=nP(C_1>u_n(\uptau))\rightarrow \uptau$ as $n \rightarrow \infty$, and $D(u_n(\uptau)), D'(u_n(\uptau))$ hold for all $\uptau > 0$, then for any fixed $\uptau$, $N_n$ converges in distribution to a Poisson process $N$ on $(0,\infty)$ with parameter $\uptau$. Clearly, for the approximation model given in Section \ref{Approximate model 2}, the dependence conditions hold, since the sequence $\{C_n\}$ is \textit{i.i.d.}, so one only needs to show that the first condition holds for each $\uptau$. As for the approximation model given in Section \ref{Approximate model 3}, we already showed that the dependence conditions hold, and a similar derivation shows that the first condition holds for each $\uptau$ as well.
\begin{lemma}\label{lem-condition for poisson convergence on all positive line} Assume $F$ is the Gaussian distribution and let $a_n$ and $b_n$ be given according to \cite[Theorem 1.5.3]{EVT:Springer1983}. Fix any $\uptau>0$ and set $u_n(\uptau)= \frac{\log{1/\uptau}}{a_n}+b_n$. Then, \begin{equation*} \lim_{n \to \infty} n(1-F(u_n(\uptau))) = \uptau. \end{equation*} \end{lemma} \begin{proof} We proceed similarly to the derivation of the normalizing constants in \cite[Theorem 1.5.3]{EVT:Springer1983}. Let us find $u_n(\uptau)$ which satisfies the equivalent condition for the convergence of the expression $n(1-F(u_n(\uptau)))$: \ifdouble \small \fi \begin{equation*} \begin{aligned} n(1-F(u_n(\uptau))) &\rightarrow \uptau \qquad \text{as } n\rightarrow \infty \\ \frac{nf(u_n(\uptau))}{u_n(\uptau)}&\rightarrow \uptau \qquad \text{as } n\rightarrow \infty \end{aligned} \end{equation*} \ifdouble \normalsize \fi where the second line is true due to the Gaussian relation $1-\Phi(u)\sim \frac{\phi(u)}{u}$ as $u\rightarrow \infty$, noting that in our case $u_n(\uptau)$ grows with $n$.
So \ifdouble \small \fi \begin{equation*} \begin{aligned} \frac{1}{\sqrt{2\pi}}e^{-\frac{u_n^2(\uptau)}{2}} &\underset{n\rightarrow \infty}{\rightarrow} \frac{\uptau \ u_n(\uptau)}{n} \\ -\log\sqrt{2\pi}-\frac{u_n^2(\uptau)}{2}&\underset{n\rightarrow \infty}{\rightarrow} \log \uptau + \log(u_n(\uptau)) - \log n \qquad \end{aligned} \end{equation*} \ifdouble \normalsize \fi We know that $\log(u_n(\uptau))=\frac{1}{2}(\log 2 +\log{\log{n}})+o(1)$, hence \ifdouble \small \begin{equation*} \begin{aligned} &\frac{u_n^2(\uptau)}{2}= \log{ \frac{1}{\uptau}}-\frac{1}{2}\log{4\pi}-\frac{1}{2}\log{\log{n}} + \log n +o(1)\\ &u_n^2(\uptau)=2\log n\left( 1+ \frac{\log{ \frac{1}{\uptau}}-\frac{1}{2}\log{4\pi}-\frac{1}{2}\log{\log{n}}}{\log n} +o\left(\frac{1}{\log n} \right) \right)\\ &u_n(\uptau)=\sqrt{2\log n}\left( 1+ \frac{\log{ \frac{1}{\uptau}}-\frac{1}{2}\log{4\pi}-\frac{1}{2}\log{\log{n}}}{2\log n} +o\left(\frac{1}{\log n} \right) \right)\\ &u_n(\uptau)=\frac{\log{1/\uptau}}{a_n}+b_n \end{aligned} \end{equation*} \normalsize \else \begin{equation*} \begin{aligned} &\frac{u_n^2(\uptau)}{2}= \log{ \frac{1}{\uptau}}-\frac{1}{2}\log{4\pi}-\frac{1}{2}\log{\log{n}} + \log n +o(1)\\ &u_n^2(\uptau)=2\log n\left( 1+ \frac{\log{ \frac{1}{\uptau}}-\frac{1}{2}\log{4\pi}-\frac{1}{2}\log{\log{n}}}{\log n}+o\left(\frac{1}{\log n} \right) \right)\\ &u_n(\uptau)=\sqrt{2\log n}\left( 1+ \frac{\log{ \frac{1}{\uptau}}-\frac{1}{2}\log{4\pi}-\frac{1}{2}\log{\log{n}}}{2\log n}+o\left(\frac{1}{\log n} \right) \right)\\ &u_n(\uptau)=\frac{\log{1/\uptau}}{a_n}+b_n \end{aligned} \end{equation*} \fi where the penultimate line is due to a Taylor expansion. \end{proof} Now, since all conditions for convergence hold, and the exceeding points indeed converge to a Poisson process on the positive real line, we can conclude that a user attempts transmission at a rate of $\uptau$, assuming it has packets to send.
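Lemma \ref{lem-condition for poisson convergence on all positive line} can also be verified numerically (an illustrative sketch for the standard Gaussian, using the norming constants $a_n$ and $b_n$ of \cite[Theorem 1.5.3]{EVT:Springer1983}); the convergence of $n(1-F(u_n(\uptau)))$ to $\uptau$ is slow, as the error terms above suggest:

```python
import math

def gaussian_tail(u: float) -> float:
    # 1 - Phi(u) for the standard Gaussian, via erfc for accuracy.
    return 0.5 * math.erfc(u / math.sqrt(2.0))

def norming_constants(n: int):
    # Gaussian norming constants a_n, b_n (Leadbetter et al., Thm 1.5.3):
    # a_n = (2 log n)^{1/2},  b_n = a_n - (log log n + log 4*pi) / (2 a_n).
    a = math.sqrt(2.0 * math.log(n))
    b = a - (math.log(math.log(n)) + math.log(4.0 * math.pi)) / (2.0 * a)
    return a, b

tau = 2.0
for n in (10**4, 10**6, 10**8):
    a, b = norming_constants(n)
    u = math.log(1.0 / tau) / a + b      # u_n(tau) from the lemma
    print(n, n * gaussian_tail(u))       # approaches tau = 2 as n grows
```
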
\ifdouble \else Figure \ref{fig-PoissonConvergence} shows a simulation of the convergence on the real line. \fi \ifdouble \else \begin{figure} \centering \begin{tikzpicture}[ every node/.style={anchor=south west,inner sep=0pt}, x=1mm, y=1mm, ] \node (fig1) at (0,0) {\includegraphics[width=0.45\textwidth]{PoissonConverganceSimulationIID}}; \node (fig2) at (45,20) {\tiny \begin{tabular}{l|c c} \hline & Simulation & Poisson \\ & & distribution \\ \hline Pr(N=0) & 0.3959 & 0.3961 \\ Pr(N=1) & 0.3682 & 0.3668 \\ Pr(N=2) & 0.1696 & 0.1698 \\ Pr(N=3) & 0.0527 & 0.0524 \\ Pr(N=4) & 0.0112 & 0.0121 \\ Pr(N=5) & 0.002 & 0.0022 \\ Pr(N=6) & 0.0004 & 0.0003 \\ Pr(N=7) & 0.0 & 0.00004 \\ \hline \end{tabular}}; \end{tikzpicture} \caption{Simulation of 10000 users' capacities following an \textit{i.i.d.} Gaussian distribution, showing the exceedance counts, which converge to a Poisson distribution (the dots) with parameter $\uptau$. The table of values is given at the top right.} \label{fig-PoissonConvergence} \end{figure} \fi \begin{comment} At the limit of large number of users, the capacity seen by a user who exceeds the threshold is high (in fact, it scales like $O(\sigma\sqrt{2\log K}+\mu)$, see \eqref{equ-CapacityExpression}). Hence, for any finite size of a data packet, transmission time tends to zero as the number of users increases. Furthermore, in this case, as convergence to Poisson process $N_{exc}$ exists, events duration also goes to zero. This motivates us to suggest the asymptotic model, where transmission time is negligible but still exists, hence under this assumptions we will refer the slots as mini slots to emphasize this fact. \begin{remark}[Zero collisions] \label{rem-zero collisions} Since the users are independent, and, at the limit of large $K$ and small slots, are all characterized by the same $N_{exc}$ Poisson process, it is clear that no two events (transmission) can occur in the same time.
Therefore, once a user exceeds $u_n$, assuming it's transmission time goes to zero, the probability that other users will exceed $u_n$ at the exact same time goes to zero. Thus, in such a scenario, the resulting approximation for the behavior of the queues is that of independent queues, each with a Poisson arrival process with rate $\lambda$ and exponential service process with parameter $\uptau$, that is, independent $M/M/1$ queues. Still, we use this scenario only as a guide line for the analysis, since we wish to keep the assumption that the services are synchronized, and hence collisions \textbf{may occur}, and result with decline in the rate of service. \end{remark} \end{comment} \section{Conclusion}\label{sec-conclusion} In this work, we investigated the performance and the channel capacity of a multi-user MAC system, in both time dependent and time independent environments, under distributed scheduling. Specifically, the performance of the system was derived while considering queueing theory aspects. In fact, a precise characterization is a very difficult task, which remains unsolved to date. Therefore, we presented approximation models to describe the system's behaviour. First, we addressed the \textit{i.i.d.}\ case, where the users do not experience a time varying channel. For that case, we extended an existing model and showed results for our paradigm. In addition, we gave another approach, which assumed the queues are independent and derived the probability of collision in the random access mechanism, enabling us to treat each queue as a much simpler one. Then, we suggested a queue model which is time dependent and driven by our Good-Bad channel model. We showed good agreement between the analytic models and the simulation results. Lastly, the expected channel capacity gain was derived for the case where the dependent capacity sequence is modelled as a stationary process, characterized by the Good-Bad channel Markov process.
\section{Introduction} In recent years, connectivity of everyday objects, which are often equipped with computing and networking capabilities, is becoming attractive and in some cases even necessary. These next generation information technologies form the new field of the Internet of Things (IoT). A key aspect of such technologies is to support data transfer from a \emph{huge number} of nodes in the network (e.g., sensors), in order to provide novel applications. For example, many cities today provide smart city technologies such as smart metering, surveillance and security, infrastructure management, city automation, and eHealth. All these applications require data transfer from a huge number of sensors/devices, while sharing a common channel or infrastructure. Due to this rapid growth in the number of users/devices, which rely on the wireless connectivity to a single gateway, we expect an extremely large amount of traffic, either data or control, and since not all devices can transmit simultaneously, some channel access mechanism is required to coordinate between the transmitters efficiently. A common channel access scheme for IoT and wireless sensor networks (WSN) is the random access mechanism \cite{pratas2012code,zhou2012contention,huang2013evolution}. Yet, as widely explored over the past few decades, such contention based access paradigms can result in low channel utilization due to collisions and mutual interference. Furthermore, since users are accessing the channel arbitrarily, regardless of their channel quality, a user with bad channel quality can capture the channel and transmit for a long duration (i.e., at a low rate), degrading the overall network performance \cite{heusse2003performance}.
On the other hand, schedule-access-based protocols, which allow a scheduler to schedule users according to their current channel state and hence exploit the multi-user diversity inherent to the wireless medium \cite{viswanath2002opportunistic,liu2001opportunistic}, may suffer from a large overhead and a complexity burden if the number of users is large. Multi-user diversity gains are even more acute when utilizing multiple antennas both at the transmitter and the receiver. Many scheduling studies can be found for such Multiple-Input Multiple-Output (MIMO) technologies. For example, in \cite{kim2005scheduling} and \cite{yoo2006optimality}, Zero-Forcing Beamforming (ZFBF) was investigated and user selection was performed in order to avoid interference among users' streams. Note, however, that such schedule-based schemes involve a large channel state information exchange overhead, which hinders the large throughput gain \cite{bejarano2014mute}. Considering the above, a suitable channel access scheme can be a distributed opportunistic threshold based algorithm, in which users can attempt transmission only if their channel state is above a threshold. On the one hand, such a scheme is opportunistic as it allows channel access only to users with advantageous channel conditions. On the other hand, it does not require extensive channel state information exchange, hence entails only a relatively low overhead. Several studies in the literature considered similar threshold-based opportunistic systems, e.g., \cite{qin2003exploiting,qin2006distributed} and \cite{kampeas2014capacity} for the homogeneous and non-homogeneous user cases, respectively. However, these studies investigated only the potential capacity or throughput gains of the mechanism, and have not analyzed other system metrics such as \emph{delay or buffer occupancy}. Moreover, they did not fully consider the very practical model of a time-dependent channel.
We note that \cite{qin2003exploiting} does consider a model with states, yet it differs from ours, as one of its states allows no transmission at all while the other allows transmissions. It is important to note that metrics such as delay or buffer occupancy are especially interesting in threshold-based algorithms, as one might suspect that a threshold-based algorithm will result in a long delay (until the threshold is exceeded) or in some kind of unfair scheduling (if a user exceeds the threshold more often than others). \subsection{Main contributions} In this work we address these concerns. Specifically, we consider a multi-user system comprising $K$ users (devices), each with a single antenna, wishing to communicate with a single gateway with multiple antennas. The channel access is governed by a threshold based random-access mechanism where each user transmits packets in a first-in-first-out (FIFO) manner, such that packets arriving at a user which is already busy with a pending packet transmission wait for their turn to be transmitted. Accordingly, each user maintains a queue in which it stores packets waiting for transmission. Note that due to the shared medium the \emph{queues at different users are tightly correlated}. Furthermore, we assume \emph{time dependent channels}, that is, the channel state users experience is not \emph{i.i.d.}, and depends on the previous channel conditions. We provide analytical models and closed formulas to determine important properties of such systems. Specifically, the contributions of this work are divided into two main threads which, together, give a complete understanding of the system. We start with exploring the \emph{performance} of the system by presenting approximate models for the system's queue behavior. We first modify the model presented in \cite{ephremides1987delay}, adapting it to our setting. Numerical results and interesting observations are presented for this model.
Then, we present a simpler model which, in contrast to the former, is able to describe the system's behavior when the number of users (hence, queues) is large. In this work flow, the first model essentially emphasizes the difficulty in tracking such a complex and \emph{dependent} system, even for time independent channels, while the second model, which is simpler, provides a good answer to this problem by decoupling the dependency between the queues. We then present a third approximation model, in which we tie the second approximation model with the assumption of time dependent channels. Note that this combination is not possible using the techniques in \cite{ephremides1987delay}, and is enabled by our simpler decoupling technique. Specifically, we assume that each user experiences \emph{a time varying channel} modeled as a Good-Bad channel (the Gilbert-Elliot model \cite{gilbert1960}), which reflects the time varying channel distribution. We thus suggest a time dependent queue model for our multi-user Multiple Access Channel (MAC) system, which shows very good agreement with simulation results. Finally, after assuming time dependent channels, we study the \emph{capacity} while considering both the centralized and the threshold-based distributed scheduling algorithms. We derive closed-form analytic expressions, in two different ways, for the channel capacity scaling laws. Specifically, we use two statistical tools, Extreme Value Theory (EVT) and Point Process Approximation (PPA). These enable us to examine the limit distribution of the system's throughput. The rest of the paper is organized as follows. Section \ref{sec-model description} describes the model and basic assumptions of this work. In Section \ref{sec-Performance analysis}, we examine the performance metrics of the queues under this algorithm, e.g., delay and buffer occupancy. Models for the time-independent and time-dependent scenarios are presented along with numerical results.
In Section \ref{sec-Scaling Law Under Time Dependent Channel}, we focus on time-dependent channels and their \emph{asymptotic capacity} under centralized and distributed algorithms. Section \ref{sec-conclusion} concludes this work. \begin{comment} Numerous innovations, such as sophisticated multi-user coding techniques and advances in the endpoint equipment have been suggested over recent years in order to cope with the high traffic demands over the last-hop wireless medium. Yet, as the technology becomes smaller and sophisticated the wireless networks are becoming denser due to the rapid growth in the number of user devices and applications relying on the last-hop wireless connectivity. Since in such dense networks the amount of traffic, data or control, is expected to be extremely large (e.g., many devices, each running multiple applications and continuously exchanging keep-alive messages), and since not all devices can transmit simultaneously, some channel access mechanism is required to coordinate between the transmitters. One of the most prevalent channel access mechanisms is the random access mechanism (e.g., 802.11). Nonetheless, as widely explored over the past few decades, such contention based access paradigms can result in low channel utilization due to collisions and mutual interference. Furthermore, since users are accessing the channel arbitrarily, regardless of their channel quality, a user with bad channel quality can capture the channel and transmit for a long duration (i.e., low rate), degrading the overall network performance (e.g., \cite{heusse2003performance}). On the other hand, schedule-access-based protocols allowing a scheduler to schedule users according to their current channel state (e.g., LTE), exploiting the multi-user diversity which is inherent to the wireless medium, have also been widely studied in such multi-user environments (e.g., \cite{viswanath2002opportunistic,liu2001opportunistic}). 
Since multi-user diversity gains are even more acute when utilizing multiple antennas both at the transmitter and the receiver, many scheduling studies can be found under such Multiple-Input Multiple-Output (MIMO) technologies. For examples, in \cite{kim2005scheduling}, \cite{yoo2006optimality}, Zero-Forcing Beamforming (ZFBF) was investigated and user selection was performed in order to avoid interference among users' streams. Note, however, that such scheduled base schemes involve large channel state information exchange overhead, which hinders the large throughput gain (e.g., \cite{bejarano2014mute}). In this paper, we study a threshold based opportunistic distributed channel access scheme, in which users can attempt transmission only if their channel state is above a threshold. Note that on the one hand such a scheme is opportunistic as it allows channel access only to users with temporal advantageous channel conditions, yet on the other hand does not require extensive channel state information exchange, hence entails only a relatively low overhead. Several studies in the literature considered similar threshold-based opportunistic systems, e.g., \cite{qin2003exploiting,qin2006distributed} and \cite{kampeas2014accepted} for the homogeneous and non-homogeneous user cases, respectively. However, these studies investigated the potential capacity or throughput gains of the mechanism, but have not analyzed other systems' metrics such as \emph{delay or buffer occupancy}. Specifically, we consider a threshold based random-access system comprising of $K$ users. Each user transmits packets in a first-in-first-out (FIFO) manner, such that packets arriving to a user which is already busy with a pending packet transmission wait for their turn to be transmitted. Each packet is continuously transmitted until successfully received by the receiver. Accordingly, each user maintains a queue in which it stores packets waiting for transmission. 
Note that due to the shared medium the \emph{queues at different users are tightly correlated}. It has been shown that such interaction between the queues makes the analysis inherently difficult \cite{fayolle1979two}. Despite the complexity, and due to its realistic setup, similar systems were considered over the years. For example, the stability region, which identifies the arrival rate values for which the system is stable, was characterized for the slotted Aloha system for a \emph{small number of users} ($K=2,3$) in \cite{tsybakov1979ergodicity,rao1988stability,szpankowski1994stability}; bounds on the arrival rates can be found in \cite{rao1988stability,fayolle1977stability,luo1999stability}. Performance analysis of such systems in terms of delay, mean queue size and more were also widely studied. For example, an iterative approximation model was suggested in \cite{saadawi1981analysis} utilizing decoupled Markov chains. A refined model was presented in \cite{ephremides1987delay}. In \cite{sidi1983two}, the mean delay was given for the case of two identical users, as well as an approximation for a larger population. Extension for slotted CSMA/CD model can be found in \cite{takagi1985mean}. Nevertheless, due to the aforementioned complexity involved in the strong interdependence between the users' queues, even toady there is no clear understanding of the performance of such systems with large number of users \subsection{Main contributions} The contributions of this work can be described as two main threads which are tightly correlated. We first investigate the \emph{performance} of a multi-user MAC system under the threshold-based, distributed algorithm. Specifically, we present approximate models for the systems' queues behavior. We first concentrate and review the model presented in \cite{ephremides1987delay}, while suiting it to our setting. Numerical results and interesting observations are presented for this model. 
Then, we present a simpler approximate model which, in contrast to the former, is able to describe the system's behavior when the number of users (hence, queues) is large. While the problem in its many forms, as described above, has been studied in various time-independent scenarios, capacity, performance and algorithms for time-dependent channels (e.g., Markov channels) remained relatively unexplored. In the second part of this work, in addition to the non-trivial traffic, we assume that each user experiences \emph{a time varying channel}. Therefore, a time dependent model (the Gilbert-Elliott model \cite{gilbert1960}) is suggested, which reflects the time varying channel distribution. We then suggest a time dependent queue model for our multi-user MAC system, which shows very good agreement with simulation results. Finally, under the time dependent channel assumption, we study the \emph{capacity} while considering both the centralized and the threshold-based distributed scheduling algorithms. We derive closed-form analytic expressions for the channel capacity scaling laws in two different ways. Specifically, we use two statistical tools, Extreme Value Theory (EVT) and Point Process Approximation (PPA). These enable us to examine the limit distribution of the maximal capacity value for the transmitting user. The rest of the paper is organized as follows. Section \ref{sec-model description} describes the model and basic assumptions for this work. In Section \ref{sec-Performance analysis}, we examine the performance metrics of the queues under this algorithm, e.g., delay and buffer occupancy. Models for the time-independent and time-dependent scenarios are presented along with numerical results. In Section \ref{sec-Scaling Law Under Time Dependent Channel}, we present time dependent channels and their asymptotic capacity under centralized and distributed algorithms. Section \ref{sec-conclusion} concludes this work.
\end{comment} \section{System Model and Assumptions}\label{sec-model description} We consider an uplink system with $K$ independent users and one base station. We assume a slotted system in which the time axis is divided into fixed-length intervals, referred to as time slots or simply slots.
Following a typical slotted system model, we assume that all nodes are synchronized and that transmissions can only start at slot boundaries. Obviously, simultaneous transmissions may result in collisions. We assume that each user maintains a queue in which the user stores the packets waiting for transmission. The packets are transmitted in a FIFO manner, in which each packet is repeatedly transmitted until received successfully by the base station. We focus our attention on the users' queues, as illustrated in Figure~\ref{fig-QueuingSystem}. We assume that the arrival process of new packets to each user's queue is characterized by a Poisson process. Accordingly, the users are not always backlogged. We further assume that all users are homogeneous, thus all users have the same arrival rate $\lambda$. At the beginning of each slot, each user estimates its own channel conditions, i.e., the expected achievable rate, and tests whether it exceeds a predefined threshold. Upon exceeding it and having a packet to send, i.e., if the queue is not empty, the user attempts transmission. Throughout most of this paper the threshold value is set such that on average the probability of exceedance is $1/K$. Note that it was shown in \cite{qin2003exploiting} that under a \emph{fully backlogged} system this value is optimal. However, considering that users may not have packets to send at all times, the probability of $1/K$ is conservative. That is, it is possible that some slots will not be utilized due to an over-restrained transmission probability, or alternatively, that the arrival rate which keeps the system stable can be slightly higher if we allow users to be slightly more aggressive in their transmission attempts. We emphasize that the analysis presented in this work is correct for any threshold value and arrival rate, as long as these maintain stability.
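As an illustration of how such a threshold can be set, suppose the per-slot achievable rate is approximated as Gaussian (an assumption used here purely for illustration, with hypothetical parameters; any rate distribution with a known quantile function works the same way). The threshold is then simply the $(1-1/K)$-quantile of the rate distribution:

```python
from statistics import NormalDist

def access_threshold(K, mu=5.0, sigma=1.0):
    """Threshold u with P(rate > u) = 1/K for a Gaussian-approximated
    per-slot rate with mean `mu` and standard deviation `sigma`
    (illustrative parameters, not taken from the paper)."""
    return NormalDist(mu, sigma).inv_cdf(1 - 1 / K)
```

With more users the threshold rises, so each user transmits less often, but at a higher rate when it does.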
We thus evaluate different exceedance probabilities via simulations and provide further discussion regarding the threshold in the sequel. \begin{figure}[!t] \centering \includegraphics[width=2in]{QueueingSystemModel} \caption[Model of queueing system]{System model. $K$ users access a common channel. Each user has a packet arrival process with rate $\lambda$.} \label{fig-QueuingSystem} \end{figure} We present several approximate models. Thus, new packets may enter the system at any given (continuous) point on the time axis or at the beginning of a slot, depending on the approximation model used. The slot size is set to encompass a single packet transmission at a rate which corresponds to the threshold. Note that since users are transmitting only while having an above-threshold capacity, and this capacity \emph{grows with the number of users} \cite{kampeas2014capacity}, the slots are expected to be small; specifically, we may neglect the transmission time in the analysis. In the first part of the paper, we assume that the achievable rates seen by each user at each time slot are \emph{i.i.d.} We extend these results to a setup in which these achievable rates are still identically distributed yet are not independent. In particular, we assume the channel of each user can alternate between different channel distributions (e.g., Good-Bad channel) according to the Gilbert-Elliott model. A full description will be provided in the sequel. We define the service time as the time from the moment a packet becomes first in queue, until it is successfully transmitted. Hence, the service time of the packets is composed of the waiting time \emph{for transmission} and the transmission time. \begin{comment}
\end{comment} \begin{comment} We consider a slotted MIMO\footnote{The assumption of a MIMO channel is used only to have a concrete expression for the capacity, with a reasonable approximation (in this case, as Gaussian random variable). The key methods we use may be applicable to other channel models as well.} uplink system with $K$ independent users and one base station. The time axis is divided into fixed-length intervals referred to as time slots or, simply, slots. The distribution of the channel capacity a single user experience, is time varying, according to a Gilbert Elliott model \cite{gilbert1960}. That is, the channel may be at one of two states, Good or Bad, with two different capacity distributions. The states evolve according to a 2-state, first order Markov chain.
We first assume that a single user, with the strongest received signal at the base station, is selected by a centralized scheduler to utilized the channel and transmit in each slot. We then consider the distributed setting as well. Hence, we first wish to analyze the expected channel capacity where the base station chooses the strongest user for transmission using the channel state information (CSI) sent to it from the users. Of course, this centralized scheme results in an overhead due to the frequent transmissions of the channel state of the users. We then turn to analyzing the capacity under a distributed opportunistic user selection algorithm, which aims at selecting the strongest user to transmit, without coordinating between the users and without all users sending CSI to the base station. In the distributed scheme we assume that the exact values of the channel matrix $H$ are not available to the transmitter since the users don't send their CSI. Yet, the transmitter can approximate it's channel capacity by estimating the SNR of a pilot signal sent from the receiver and using reciprocity. The linear model of the system is thus: \begin{equation}\label{equ-SystemLinearModel} \textit{\textbf{y}(n)=H(n)\textbf{x}(n)+\textbf{N}(n)}, \end{equation} where $ \textit{\textbf{y(n)}} \in \mathbb{C}^{r} $ is the received vector from the input vector $\textit{\textbf{x(n)}}\in\mathbb{C}^r$, \textit{H} is an $r\times t$ complex matrix (r - number of receiving antennas, t - number of transmitting antennas) and \textit{\textbf{N}} is a zero mean \textit{i.i.d.} complex Gaussian noise with independent, equal variance real and imaginary parts. 
The transmitter is constrained in its total power to $P$, that is, $ E[\textit{\textbf{x(n)}}^\dagger\textit{\textbf{x(n)}}]\leq P.$ The matrix $\textit{H}(n)$ is a random matrix which models the Rayleigh fading environment of the time dependent channel.\\ \subsection{Time dependent channel}\label{subsec-model:Time dependent channel} The channel distribution each user sees is governed by a Markov chain with two states, G (for Good) and B (for Bad),where each state determines the channel distribution, with a transition probability $\alpha$ for G to B and $\beta$ for B to G (Figure \ref{fig-GoodBadchannel}). \begin{figure}[!t] \centering \includegraphics[width=5cm]{BadGoodChannel} \caption[Markovian channel model]{A Good-Bad channel model according to \cite{gilbert1960}. The capacity distribution is either good or bad Gaussian r.v. according to the state of the system.} \label{fig-GoodBadchannel} \end{figure} That is, we assume that in each slot, each user exists in either a Good or a Bad state which is expressed in different channel fluctuations, and, more formally, by different parameters of the Gaussian distribution which models the capacity. Thus, given the users' state in each slot, the channel is memoryless conditioned on the state process, and the channel capacity is modelled by a Gaussian random variable. The capacity of the $i$-th user in each slot is determined by the Good-Bad Markov process $\{J(n)\}$: \begin{equation}\label{equ-UserCapacityProcess} C_i(n)= \begin{cases} N_g & \text{when } J_i(n)=Good \\ N_b & \text{when } J_i(n)=Bad \end{cases} \end{equation} where $N_g$ and $N_b$ are random variables distributed normally with parameters $(\mu_g,\sigma_g)$ and $(\mu_b,\sigma_b)$, respectively. This is due to the Gaussian approximation for the MIMO channel capacity \cite{smith2002gaussian,chiani2003capacity}. 
The parameters reflect the differences in the channel qualities, in a way that a good channel parameters maintain $(1) \ \sigma_g>\sigma_b, \ \mu_g,\mu_b\in\mathds{R}$ or $(2) \ \sigma_g=\sigma_b$ and $\mu_g>\mu_b$.\\ \end{comment} \section{Performance analysis}\label{sec-Performance analysis} Our system consists of $K$ queues, each with an independent Poisson arrival process, and a common server (i.e., the communication channel, Figure~\ref{fig-QueuingSystem}). A user will attempt transmission only when it is backlogged and the expected transmission rate for the next slot is above a threshold. We assume that any simultaneous transmissions will fail, i.e., no capture. The challenge in analyzing such a queueing system lies in the strong interdependence between the queues. Specifically, a user's collision or successful transmission probabilities depend on which of the other users' queues are backlogged (if the threshold exceedance probability is the same for all users, they depend only on the number of backlogged queues). Several works in the literature explored the behavior of various systems with interdependent queues. Due to the interdependence between the queues, each of these works considered a different simplified mathematical model, which resulted in approximations for the required metrics. For example, an iterative approximation model was suggested in \cite{saadawi1981analysis}, utilizing decoupled Markov chains. A refined model was presented in \cite{ephremides1987delay}. In \cite{sidi1983two}, the mean delay was given for the case of two identical users, as well as an approximation for a larger population. An extension to the slotted CSMA/CD model can be found in \cite{takagi1985mean}, and bounds on the stability region for such systems in \cite{luo1999stability}. In the sequel, we investigate the behavior of the system. We start by presenting two different approximations which capture the system performance.
The first is a modification of the model presented in \cite{ephremides1987delay} to fit a threshold-based system. Different analytical results for the system performance metrics are obtained. The second model uses a different technique from the works mentioned above, which greatly simplifies the analysis. We then extend the analysis to time dependent channels for each user. \subsection[Queueing Approximate model \Rmnum{1} ]{Approximation using System and Users' State}\label{Approximate model 1} Modeling the system state as the number of pending packets in each one of the queues, even though conceptually straightforward, is intractable and hence impractical. Accordingly, as our first approximation, we suggest an analytically tractable simplification which relies on decoupling the Markov chain into two separate chains. This approximation is an adaptation of Ephremides and Zhu's model presented in~\cite{ephremides1987delay}, which analyzed the slotted ALOHA system with a finite number of buffered nodes. The main difference between the two models is that in the slotted ALOHA system described in~\cite{ephremides1987delay}, the transmission scheme is ``immediate first transmission'', i.e., if user $i$ has an empty queue when a packet arrives, it transmits the packet instantaneously, and in the case of collision, it may transmit again with a retransmission probability $p_i$, while our model follows a ``delayed first transmission'' scheme, where a transmission (first time or a later one) is delayed until the user's channel state is favorable, i.e., happens only when the threshold has been exceeded. As in~\cite{ephremides1987delay}, we assume that the arrival rate at user $i$ is $\lambda_i$, and the arrival processes are statistically independent between the users\footnote{This model is able to capture heterogeneous arrival rates. Hence, we keep the subscript $i$ in its description.
When we turn to the performance analysis we assume that $\lambda_i=\lambda$, as described in the system model.}. Time is slotted and it takes exactly one slot to transmit one packet. We assume that since the transmission rates are high (users are transmitting only when their expected transmission rate is above a threshold), the slot duration is quite small. Furthermore, since we are mainly interested in stable systems, i.e., the inter-arrival time is much larger than the slot duration, we will assume that the probability that user $i$ receives a new packet for transmission during any given slot is $\Delta\lambda_i$, where $\Delta$ is the slot duration. We consider this duration as a unit size. The probability that a user receives more than one packet per slot is, similar to other Poisson models, negligible (i.e., $o(\Delta\lambda_i)$). Let us denote by $p_i(n)$ the probability that user $i$'s achievable rate in slot $n$ is above the predefined threshold. Since our first model assumes identically distributed channel conditions per slot, we will omit the slot index $n$, i.e., the threshold exceedance probability will be denoted by $p_i$ for all $n$. Accordingly, user $i$ attempts to transmit the head-of-the-line packet in its queue (given that its queue is nonempty) with probability $p_i$. Adopting the model in~\cite{ephremides1987delay}, we define three states in which each user can be at the beginning of a given slot, namely \emph{Idle}, \emph{Active} or \emph{Blocked}. The states are determined at the beginning of each slot but depend on the previous slot as well. Specifically, a user is in \emph{Idle} state in two situations: having an empty queue at the beginning of the current slot, i.e., no packet arrival in the preceding slot, or having one packet at the beginning of the current slot which arrived after the beginning of the preceding slot.
A user is in \emph{Blocked} state if it was backlogged (i.e., its queue was not empty) at the beginning of the previous slot yet it has not transmitted successfully during the previous slot. Note that not transmitting successfully means that either there was an unsuccessful transmission attempt, or there was no transmission attempt at all, i.e., the user's channel state was below the threshold. A user is in \emph{Active} state if it is backlogged at the beginning of the current slot and has successfully transmitted a packet during the last slot. We emphasize that the \emph{Active} state is an auxiliary state which is utilized in the performance analysis by distinguishing successful transmissions of backlogged users. Due to the above, user $i$'s transmission probability is: \begin{equation}\label{equ-probability for transmission queueing } p_i= \left\{ \begin{array}{l l} \frac{1}{K}\lambda_i & \ \text{if $i$ is idle},\\ \frac{1}{K} & \ \text{if $i$ is active or blocked}.\\ \end{array} \right. \end{equation} As previously mentioned, the state space of such a system, which incorporates both the status of each user and the number of pending packets in its queue, is intractable. Accordingly, our approximation relies on the decoupling of the Markov chain into two interdependent yet separate chains, the system-status chain and the queue-length chain. The transition probabilities of each chain are tightly dependent on the steady-state probabilities of the other chain and thus, all state equations for both chains must be solved simultaneously. The system-status chain captures the state of each user at any given time. Hence, the status variable $\overline{S}$ consists of $K$ ternary variables, $S_1,S_2,\ldots,S_K$, each of which indicates the status of the corresponding terminal. Namely, $S_i\in\{0,1,2\}$ for Idle, Active and Blocked, respectively.
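The three definitions can be restated as a small status-update rule; the sketch below is an informal paraphrase (the 0/1/2 encoding follows the status variable just defined, and the three boolean arguments are hypothetical names for the quantities appearing in the definitions above):

```python
def next_status(backlogged_prev, success_prev, backlogged_now):
    """Status of a user at the start of a slot (0 = Idle, 1 = Active,
    2 = Blocked), paraphrasing the three-state classification in the
    text: Blocked if backlogged at the start of the previous slot with
    no success during it; Active if backlogged now after a success in
    the previous slot; Idle otherwise (empty queue, or one freshly
    arrived packet)."""
    if backlogged_prev and not success_prev:
        return 2    # Blocked
    if backlogged_now and success_prev:
        return 1    # Active
    return 0        # Idle
```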
Since the system can incorporate at most one active user at any given time (i.e., at most one user can have transmitted successfully in the previous slot), it can be shown, by summing over all the states in which no user is active plus all the possible states in which a single user is active, that the total number of achievable states is $2^{K-1}(K+2)$. The transition probabilities of the system-status chain, which differ from those presented in \cite{ephremides1987delay}, are given in Appendix \ref{Appendix A}. However, we emphasize two quantities which are required for the calculation of the system-status chain transition probabilities: \begin{equation}\label{equ-p(1|1) and p(0|2) definition} \begin{array}{l} P_i(1\mid 1) \triangleq P_r(\text{queue size $>1\mid$ user $i$ is active})\\ P_i(0\mid 2) \triangleq P_r(\text{queue size $=1\mid$ user $i$ is blocked}). \end{array} \end{equation} Note that these probabilities, which are utilized in the system-status chain solution, reflect the coupling between the two chains. That is, the steady-state probability distribution $P(\overline{S})$ relies on the queue-length chain steady state. Their calculation can be found in Appendix \ref{Appendix B}. The queue-length Markov chain tracks both the status and queue length of each user, independently of the status and queue length of the other users. Specifically, the pair $(T_i,N_i)$ represents the state of user $i$, where $N_i$ denotes the total number of packets at queue $i$ and $T_i$ is an indicator variable, indicating whether the user is blocked or unblocked (active or idle status), denoted by $0$ and $1$, respectively. We further denote by $\pi(T_i,N_i)$ user $i$'s steady-state probabilities.
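The $2^{K-1}(K+2)$ count can be verified by brute-force enumeration, since a status vector is achievable exactly when it contains at most one Active user:

```python
from itertools import product

def num_achievable_states(K):
    """Count status vectors in {0,1,2}^K (Idle/Active/Blocked) with at
    most one Active user; the text gives the closed form 2^(K-1)*(K+2)."""
    return sum(1 for s in product((0, 1, 2), repeat=K) if s.count(1) <= 1)
```

Indeed, $2^K$ vectors contain no Active user and $K\,2^{K-1}$ contain exactly one, and $2^K + K 2^{K-1} = 2^{K-1}(K+2)$.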
The transition probabilities of the chain depend on the following average transmission success probabilities (assuming a user has a packet to send): \begin{equation}\label{equ-Average success probabilities model 1} \begin{array}{l} P_B(i) = P_r(\text{success $\mid$ user $i$ is blocked})\\ P_A(i) = P_r(\text{success $\mid$ user $i$ is active})\\ P_I(i) = P_r(\text{success $\mid$ user $i$ is idle}), \end{array} \end{equation} where the averaging is performed over the status of the \emph{other users} (the status of the system). Namely, in order to calculate the probabilities \eqref{equ-Average success probabilities model 1}, one needs the stationary distribution of the system-status chain $P(\overline{S})$. This leads again to the coupling of the two sets of equations. The calculations of these average success probabilities are presented in Appendix \ref{Appendix B}. The conditional moment generating function of the chain for user $i$ is defined as \begin{equation}\label{equ-conditional moment generating function} G_{T_i}^i(z)\triangleq \sum_{N_i=0}^{\infty} \pi(T_i,N_i)z^{N_i}, \ \ \ \ T_i=0,1. \end{equation} In order to find the steady-state probabilities of the decoupled chains, which, as previously explained, must be solved simultaneously, we utilize an iterative process based on the Wegstein iteration method. Specifically, at each iteration we exploit the auxiliary quantities $P_i(1\mid 1),P_i(0\mid 2)$ computed in the previous iteration to compute $P_I(i),P_A(i),P_B(i)$, and vice versa. Once satisfactory convergence is achieved, the performance metrics can be calculated using the steady state of the chains. We emphasize here that due to the ``delayed first transmission'' assumption (which is the result of the threshold exceedance requirement), the transition probabilities of the system chain become quite complicated compared to the analytical derivation in \cite{ephremides1987delay}.
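For completeness, a scalar sketch of Wegstein's accelerated fixed-point iteration for $x=g(x)$ is given below (the actual solver iterates between the two vector-valued sets of steady-state equations; this one-dimensional version only illustrates the acceleration step):

```python
def wegstein(g, x0, tol=1e-10, max_iter=100):
    """Wegstein's method for a scalar fixed point x = g(x): a damped
    iteration x_{k+1} = q*x_k + (1-q)*g(x_k), with q derived from the
    secant slope of g between the last two iterates."""
    x_prev, gx_prev = x0, g(x0)
    x = gx_prev                              # first step: plain iteration
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx - x) < tol:
            return x
        s = (gx - gx_prev) / (x - x_prev)    # secant slope of g
        q = s / (s - 1.0) if s != 1.0 else 0.0
        x_prev, gx_prev = x, gx
        x = q * x + (1.0 - q) * gx
    return x
```

When $g$ is a contraction the choice $q=0$ recovers the plain iteration; the secant-based $q$ typically accelerates convergence and can stabilize otherwise diverging iterations.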
\subsubsection{Queueing performance analysis}\label{Approximate model 1 - Queueing performance analysis} In this subsection, we analyze the delay a packet endures from the moment it is generated and arrives at a user's queue until it is successfully transmitted. This delay can be divided into three components: the queueing time that allows the packets already queued to be transmitted, denoted by $W_q(i)$; the head-of-the-line waiting time, which is the time elapsed from the moment the packet becomes first in the queue until its successful transmission, denoted by $W_s(i)$; and the transmission duration, which is exactly one slot. Each of them is approximated separately, and the sum of the three gives the total average delay (measured in slot durations): \begin{equation}\label{equ-Delay of user i model 1} D_i=W_q(i)+W_s(i)+1. \end{equation} Our main contributions to the performance analysis are given in the following theorem, which provides the metrics directly affected by the threshold-based scheduling scheme. That is, the probability for a packet to be blocked is now also affected by the requirement to exceed the threshold, and thus the service time is affected as well. In addition, the probability of success given a transmission attempt, which was not calculated in \cite{ephremides1987delay}, is given and will be compared with the approximation model in the next section.
\begin{theorem}\label{Approximate model 1 - performance metrics theorem} Under the queueing model which follows the ``delayed first transmission'' scheme, the performance metrics, i.e., the head-of-the-line waiting time, the probability for a packet to be blocked and the probability of success given a transmission attempt, are \begin{equation}\label{equ-service time approximation model 1} W_s(i)=Pr(\text{blocked packet}) \cdot \frac{1}{P_B(i)}, \end{equation} \begin{equation} \label{equ-probability to be blocked approximation model 1} \begin{aligned} Pr(\text{blocked packet})&=\left(1-Pr(\text{unblocked packet})\right)\\ &=1-\left( \frac{\pi(1,0)}{\lambda \pi(1,0) +(G^i_1(1)-\pi(1,0))}P_I(i)+ \right.\\ &\quad \quad \quad \quad \quad \quad \quad \left. \frac{G^i_1(1)-\pi(1,0)}{\lambda \pi(1,0) +(G^i_1(1)-\pi(1,0))}P_A(i)\right), \end{aligned} \end{equation} and \begin{equation}\label{equ-success probability model 1} p_{succ}(i)= \frac{P_A(i)(G_1^i(1)-\pi(1,0))+P_I(i)\pi(1,0)+P_B(i)G_0^i(1)}{\frac{1}{K}(1-(1-\lambda)\pi(1,0))}. \end{equation} \end{theorem} \begin{proof} The head-of-the-line waiting time accounts for the time the packet spends at the head of the queue, excluding the successful transmission slot. Accordingly, the head-of-the-line waiting time is zero both in the case of a successful transmission upon arrival at an empty queue (i.e., a successful transmission from an \emph{Idle} state) and in the case of a successful transmission of a backlogged packet in the slot consecutive to its becoming the head of the queue (i.e., a successful transmission from an \emph{Active} state). Consequently, the head-of-the-line waiting time is the time the packet spends in the \emph{Blocked} state (it is zero for packets which did not pass through the Blocked state). Since the number of slots until successful transmission while Blocked is a geometric random variable with mean $1/P_B(i)$, we have Equation \eqref{equ-service time approximation model 1}.
The probability for a packet to be blocked (Equation \eqref{equ-probability to be blocked approximation model 1}) can be computed as the complement of the probability of being unblocked (successful transmission without passing through the Blocked state), which is essentially the probability of an immediate successful transmission upon the arrival of a packet to the head of the line, which may happen only in the Idle or Active states. The terms before the success probabilities are the proportions of successful transmissions while in the Idle and Active states, respectively. The probability of success given a transmission attempt (Equation \eqref{equ-success probability model 1}), i.e., the probability that the user has a packet to send, exceeds the threshold and successfully transmits, can be computed as the general success probability regardless of the user's state (Idle, Active, Blocked), divided by the probability of a transmission attempt. Note that this event is the result of two independent events: the user channel norm exceeding the threshold and the transmission being successful. Note also that the user can be in the Idle state and yet manage to transmit successfully, which can occur upon the successful transmission of a packet immediately upon its arrival to an empty queue. \end{proof} The remaining performance metric, i.e., the queueing time, is calculated, as usual, using Little's result, hence \begin{equation}\label{equ-time in line approximation model 1} W_q(i)=\frac{L_i}{\lambda_i}, \end{equation} where $L_i$ is the average queue length of a user (without considering the blocked head-of-line packet as part of the queue), which is given by \cite{ephremides1987delay}: \begin{equation}\label{equ-mean queue size model 1} L_i=\frac{\lambda_i^2\overline{\lambda}_i\overline{P}_I(i)}{(\overline{\lambda}_iP_B(i)-\lambda_i\overline{P}_A(i))(\overline{\lambda}_iP_B(i)-\lambda_i(P_I(i)-P_A(i)))}, \end{equation} where $\overline{P}=1-P$.
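To make the delay decomposition concrete, the sketch below assembles the total delay \eqref{equ-Delay of user i model 1} from the head-of-the-line waiting time, the mean queue length $L_i$ and Little's law. All numerical inputs ($\lambda_i$, the success probabilities and the blocking probability) are hypothetical placeholders, not outputs of the chain solution:

```python
# Hypothetical inputs -- in a real run these come from the steady-state
# solution of the coupled Markov chains, not from here.
lam = 0.05                          # per-user arrival rate lambda_i
P_I, P_A, P_B = 0.30, 0.25, 0.20    # success prob. in Idle/Active/Blocked
p_blocked = 0.35                    # hypothetical Pr(blocked packet)

# Head-of-line waiting time: geometric retransmission while Blocked.
W_s = p_blocked / P_B

# Mean queue length L_i and Little's law for the queueing time.
lam_bar, PI_bar, PA_bar = 1 - lam, 1 - P_I, 1 - P_A
L = (lam**2 * lam_bar * PI_bar) / (
    (lam_bar * P_B - lam * PA_bar) *
    (lam_bar * P_B - lam * (P_I - P_A)))
W_q = L / lam

# Total average delay in slots: queueing + head-of-line + transmission.
D = W_q + W_s + 1
```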
\subsubsection{Performance results}\label{Approximate model 1 - results} In order to gain some insight into the analytical results obtained in the previous subsection, we present in this subsection some numerical results. In particular, we present the performance attained by the threshold-based channel access mechanism under various metrics such as delay, average queue size and success probability. The simulation and analytic calculations were performed under homogeneous settings in which all users experience the same arrival process and channel distribution.
Specifically, the arrival process was approximated by a Bernoulli process in which, at each slot, each user receives a new packet for transmission with probability $\lambda_i=\frac{\lambda}{K}$. The threshold was set such that the exceedance probability is $\frac{1}{K}$. Accordingly, each backlogged user examined its channel state at the beginning of each slot, and if its channel state was above the threshold it started transmission. Note that since the channel distribution was homogeneous in time, the threshold exceedance process can also be viewed as a Bernoulli process in which the transmission probability is $\frac{1}{K}$. Since, as explained earlier, the number of states in the system-status Markov chain grows exponentially with the number of users, for this model the simulation and the numerical results were compared only for a modest number of users, specifically for $2$ to $10$ users. In Section \ref{Approximate model 2} we present a model which is able to capture a much larger number of users. We start by evaluating the effect of the number of users on the average queue length and on the delay, where we divide the delay into its different components, namely, the time in queue and the service time. Figures \ref{fig-MeanQueueSize_2-10_0366}, \ref{fig-TimeInLine_2-10_0366} and \ref{fig-ServiceTime_2-10_0366} depict the performance metrics as a function of the number of users, where the total arrival rate is set close to the maximum value which still allows the system to be stable. The results show a very good match between the simulations and the analytic results given in \eqref{equ-mean queue size model 1}, \eqref{equ-time in line approximation model 1} and \eqref{equ-service time approximation model 1}. However, as mentioned earlier, this approximation can only be used for a small number of users.
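The simulation setup described above can be sketched as a simplified slot-level simulator under the stated assumptions: Bernoulli arrivals with rate $\lambda_T/K$ per user, independent threshold exceedance with probability $1/K$, a collision whenever two or more backlogged users exceed the threshold in the same slot, and a newly arrived packet becoming eligible for transmission only from the next slot (the "delayed first transmission" scheme). The parameter values below are illustrative:

```python
import random

def simulate(K, lam_total, n_slots, seed=0):
    """Slot-level simulation of the threshold-based access scheme."""
    rng = random.Random(seed)
    lam, p_exc = lam_total / K, 1.0 / K
    queues = [0] * K
    successes = attempts = q_area = 0
    for _ in range(n_slots):
        # Backlogged users transmit iff their channel exceeds the threshold.
        tx = [i for i in range(K) if queues[i] > 0 and rng.random() < p_exc]
        attempts += len(tx)
        if len(tx) == 1:            # success iff exactly one transmission
            queues[tx[0]] -= 1
            successes += 1
        # New arrivals (Bernoulli per user per slot), eligible from next slot.
        for i in range(K):
            if rng.random() < lam:
                queues[i] += 1
        q_area += sum(queues)
    p_succ = successes / attempts if attempts else 0.0
    mean_q = q_area / (n_slots * K)  # time-averaged queue length per user
    return p_succ, mean_q

p_succ, mean_q = simulate(K=7, lam_total=0.3, n_slots=200_000)
```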
Figures \ref{fig-MeanQueueSize_K=7}, \ref{fig-TimeInLine_K=7} and \ref{fig-ServiceTime_K=7} depict the performance as a function of the total arrival rate, and support the following observations: First, for small and moderate values of the arrival rate one can see very good agreement between the analytic results and the simulation. Near the maximum value, however, around the instability region, the values start to diverge, especially in Figures \ref{fig-MeanQueueSize_K=7} and \ref{fig-TimeInLine_K=7}. These results are consistent with the results of \cite{ephremides1987delay}, which used small values of the $\lambda_i$'s as well. The second observation is that as the number of users grows, one can see even stronger agreement with the analytical results. This suggests that this approximate model may be suitable for a large population. Unfortunately, as mentioned before, the calculation of the system steady state is intractable due to the exponential growth of the number of states with the number of users.
\ifdouble \begin{figure}[!t] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{MeanQueueSize_2-10_0366} \caption{} \label{fig-MeanQueueSize_2-10_0366} \end{subfigure}% \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{MeanQueueSize_K=7} \caption{} \label{fig-MeanQueueSize_K=7} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{TimeInLine_2-10_0366} \caption{} \label{fig-TimeInLine_2-10_0366} \end{subfigure}% \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{TimeInLine_K=7} \caption{} \label{fig-TimeInLine_K=7} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{ServiceTime_2-10_0366} \caption{} \label{fig-ServiceTime_2-10_0366} \end{subfigure}% \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{ServiceTime_K=7} \caption{} \label{fig-ServiceTime_K=7} \end{subfigure} \caption[System performance time independent model]{Analytic results of the approximation using system and users' state compared with simulation, for the system performance. The first row is as a function of the number of users, where the total arrival rate is $\lambda_T=\frac{1}{e}(1-0.001)$ and the second row is as a function of the total arrival rate where the number of users is $K=7$. Figures (a,b) depict the mean queue size, (c,d) depict the time in line and (e,f) depict the service time.
The red lines describe the analytic expressions, \eqref{equ-mean queue size model 1}, \eqref{equ-time in line approximation model 1} and \eqref{equ-service time approximation model 1}, while the black lines describe the simulation results.} \label{fig-QueuingPerformance} \end{figure} \else \begin{figure*}[!t] \centering \begin{subfigure}[b]{0.31\textwidth} \centering \includegraphics[width=\textwidth]{MeanQueueSize_2-10_0366} \caption{Mean queue size.} \label{fig-MeanQueueSize_2-10_0366} \end{subfigure}% \begin{subfigure}[b]{0.31\textwidth} \includegraphics[width=\textwidth]{TimeInLine_2-10_0366} \caption{Time in line.} \label{fig-TimeInLine_2-10_0366} \end{subfigure} \begin{subfigure}[b]{0.31\textwidth} \includegraphics[width=\textwidth]{ServiceTime_2-10_0366} \caption{Service time.} \label{fig-ServiceTime_2-10_0366} \end{subfigure} \begin{subfigure}[b]{0.31\textwidth} \centering \includegraphics[width=\textwidth]{MeanQueueSize_K=7} \caption{Mean queue size.} \label{fig-MeanQueueSize_K=7} \end{subfigure}% \begin{subfigure}[b]{0.31\textwidth} \includegraphics[width=\textwidth]{TimeInLine_K=7} \caption{Time in line.} \label{fig-TimeInLine_K=7} \end{subfigure} \begin{subfigure}[b]{0.31\textwidth} \includegraphics[width=\textwidth]{ServiceTime_K=7} \caption{Service time.} \label{fig-ServiceTime_K=7} \end{subfigure} \caption[System performance time independent model]{Analytic results of the approximation using system and users' state compared with simulation, for the system performance. The first row is as a function of the number of users where the total arrival rate is $\lambda_T=\frac{1}{e}(1-0.001)$ and the second row is as a function of the total arrival rate where the number of users is $K=7$. Figures (a,d) depict the mean queue size, (b,e) depict the time in line and (c,f) depict the service time.
The red lines describe the analytic expressions \eqref{equ-mean queue size model 1}, \eqref{equ-time in line approximation model 1} and \eqref{equ-service time approximation model 1}, while the black lines describe the simulation results.} \label{fig-QueuingPerformance} \end{figure*} \fi Next we compare the estimated success probability given a transmission attempt, $p_{succ}$, with simulation results. Figure~\ref{fig-SuccessProbability_2-10_0366_independent} clearly depicts that the analytical approximation described in \eqref{equ-success probability model 1} matches the simulation results with high accuracy. Even for as few as seven users, the difference is only 1.19\%. This highly accurate approximation is the foundation for a simplified model, described in the following section, which, in contrast to the model considered thus far, is able to capture the system's behavior for a large number of users with high accuracy. \subsection[Queueing Approximate model \Rmnum{2} ]{Approximation using Constant Collision Probability}\label{Approximate model 2} As described in the previous section, a user's success (or conversely, collision) probability depends on the other users' queues, hence on the system state. That is, different system states will result in different success (collision) probabilities. Trying to solve this set of equations is complicated even for a moderate number of users, all the more so for a large user population. In this section, we present a simpler approach, one which assumes that \emph{a user's collision probability is constant}. This will allow us to give closed-form results which are easy to calculate and, as simulations show, are very accurate for large and even moderate population sizes. The key approximation method we adopt in this section is inspired by mean field theory. Mean field theory studies the behavior of a large number of interacting particles.
Specifically, when the number of particles is large, the mean field approximation suggests an \emph{independent evolution of a certain particle relative to the others}, approximating the effect of all other particles on that particle by their averaged effect (essentially, this is a concentration result). Ever since the mean field approximation was introduced in physics, it has been adopted by various fields, including in the context of Markov process models for various dynamic systems. For example, in \cite{bordenave2012asymptotic}, the mean field approximation was used to describe the stability region of the slotted Aloha paradigm. Therein, the authors proved that the distribution of a user's queue state is not affected by the other users when $K$ goes to infinity. Given that, the collision probability can be approximated by a fixed point equation, assuming the probability of an empty queue is constant and independent of the other users. Another seminal work which utilized a constant collision probability as the key approximation, referred to as the \emph{decoupling approximation}, is \cite{bianchi2000performance}. In this work, a Markov model for the 802.11 back-off process was considered, with $n$ users competing over a shared medium. This decoupling assumption can be formally justified as a consequence of convergence to the mean field, as the number of users goes to infinity. Several elaborations were made, along with verifications of the validity of the decoupling approximation \cite{malone2007modeling}, \cite{cho2012asymptotic}. We also note that in \cite{sidi1983two}, independent M/M/1 queues were assumed, with a given distribution on the number of backlogged users. This resulted in an approximation with a \emph{varying} collision probability.
Herein, and similarly to the works mentioned above, we assume a constant collision probability, which we denote by $p_{coll}$; i.e., we assume that given a transmission attempt, each user experiences a fixed collision probability regardless of the state or queues of the other users, and regardless of its own state. Even though mean field theory mostly applies to large and complex stochastic models involving a large number of particles, we will show via simulation that our approach achieves high accuracy even for a moderate number of users. We start by investigating the service time. \subsubsection{Service time analysis} The service time, which is the time from the moment a packet becomes first in the queue until it is successfully transmitted, depends on the rate at which a user exceeds the threshold and on the probability of success (the probability that a collision did not occur). We note that the threshold exceedance process of each user can be modeled as a series of Bernoulli trials. We further note that this process converges to a Poisson process \cite{leadbetter1976weak}, especially if the slot duration is small compared to the time between threshold exceedances. Accordingly, we approximate the time between threshold exceedances for each user as exponentially distributed with parameter $\uptau$. In the second part of this work, we prove that indeed, as the number of users grows and the threshold exceedance probability decreases correspondingly, the distribution of the time between consecutive threshold exceedances converges to an exponential distribution (see Section~\ref{subsec-Threshold_exceedance_process}). Since we decoupled the queues, we can now assume that each user's queue behaves like an $M/M/1$ queue with a feedback loop. Specifically, packets enter each user's queue according to a Poisson process with rate $\lambda_i$. The inter-exceedance interval is exponentially distributed with mean $1/\uptau$.
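The geometric-to-exponential convergence claim can be checked directly: if $T$ is geometric with parameter $p$, the scaled tail satisfies $P(pT > t) = (1-p)^{\lfloor t/p \rfloor} \to e^{-t}$ as $p \to 0$. A quick numerical check (illustrative only):

```python
import math

def geom_tail_scaled(p, t):
    """P(p*T > t) for T ~ Geometric(p): tail of the scaled inter-exceedance time."""
    return (1.0 - p) ** math.floor(t / p)

# For a small per-slot exceedance probability (e.g. p = 1/K with K = 1000),
# the scaled geometric tail is close to the exponential tail exp(-t).
p = 1.0 / 1000
for t in (0.5, 1.0, 2.0):
    assert abs(geom_tail_scaled(p, t) - math.exp(-t)) < 1e-3
```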
Upon exceeding the threshold, a packet will be successfully transmitted (hence depart the queue) with probability $p_{succ}=1-p_{coll}$, and will need to be retransmitted with probability $p_{coll}$. Accordingly, the queue can be modeled as an $M/M/1$ queue with exponentially distributed service time with parameter $(1-p_{coll}) \uptau$. The probability of such an $M/M/1$ queue being empty is: \begin{equation}\label{equ-the probability for empty queue M/M/1} P(Q=0)=1-\rho=1-\frac{\lambda}{(1-p_{coll}) \uptau}, \end{equation} where $Q$ is the number of packets in the queue. We thus have the following lemma: \begin{lemma}\label{lem-Transmission success probability satisfies the equation - Memoryless arrival} Assume each user's queue is modeled as an $M/M/1$ queue with a feedback loop, with an arrival rate $\lambda$, exceedance rate $\uptau$ and constant collision probability $p_{coll}$.
Then the collision probability $p_{coll}$ satisfies the equation \begin{equation}\label{equ-Transmission success probability satisfies the equation - Memoryless arrival before 1} p_{coll}=1-e^{-\frac{\lambda}{(1-p_{coll}) \uptau}} \cdot \left(1+o(1)\right). \end{equation} Hence, for large enough $K$ we have the following closed form for $p_{coll}$: \begin{equation}\label{equ-Transmission success probability satisfies the equation - Memoryless arrival} p_{coll}=1-e^{-\frac{\lambda}{(1-p_{coll}) \uptau}}. \end{equation} \end{lemma} \begin{proof} Since a random selection (with probability $(1-p_{coll})$) is performed on the threshold exceedance Poisson process, the resulting successful-transmission process is also a Poisson process, with rate $(1-p_{coll}) \uptau$, which leads to an exponential service time with parameter $(1-p_{coll}) \uptau$. Let us examine the probability $p_{coll}$. Since the channel quality and the queue length are independent, the probability that a user attempts transmission equals the probability that the user is backlogged (i.e., its buffer is not empty) times the probability that its expected rate is above the threshold. Specifically, \ifdouble \small \fi \begin{equation} P(C_i>u,Q_i>0)=P(C_i>u)P(Q_i>0)=\frac{1}{K}\cdot\frac{\lambda}{ (1-p_{coll})\uptau}. \end{equation} \ifdouble \normalsize \fi On the other hand, given that a user has transmitted, i.e., its expected rate exceeds $u$ and its queue is not empty, its collision probability, $p_{coll}$, is the probability that among all the other users at least one other user attempts transmission, i.e., at least one other user is backlogged and its expected rate exceeds $u$. This is one minus the probability that no other user has attempted transmission.
Thus \ifdouble \footnotesize \begin{equation*} \begin{aligned} p_{coll}&=\sum_{i=1}^{K-1} \binom {K-1} {i} \left(\frac{1}{K} \frac{\lambda}{\uptau (1-p_{coll})} \right)^{i} \cdot\left(1-\frac{1}{K} \frac{\lambda}{\uptau (1-p_{coll})} \right)^{K-1-i} \\ &= 1-\left( 1-\frac{1}{K} \frac{\lambda}{\uptau (1-p_{coll})} \right)^{K-1} \\ &= 1- e^{(K-1)\ln{\left(1-\frac{\lambda}{(1-p_{coll}) \uptau K}\right)}}\\ &= 1- e^{(K-1)\left(-\frac{\lambda}{(1-p_{coll}) \uptau K}-O\left(\frac{1}{K^2}\right)\right)}\\ &= 1- e^{-\frac{\lambda}{(1-p_{coll}) \uptau}-O\left(\frac{1}{K}\right)} \cdot e^{\frac{\lambda}{(1-p_{coll}) \uptau K}+O\left(\frac{1}{K^2}\right)},\\ &= 1- e^{-\frac{\lambda}{(1-p_{coll}) \uptau}} \cdot e^{\frac{\lambda}{(1-p_{coll}) \uptau K}+O\left(\frac{1}{K}\right)},\\ &= 1- e^{-\frac{\lambda}{(1-p_{coll}) \uptau}} \cdot \left(1+O\left(\frac{\lambda}{(1-p_{coll}) \uptau K}\right)\right)e^{O\left(\frac{1}{K}\right)},\\ &= 1- e^{-\frac{\lambda}{(1-p_{coll}) \uptau}} \cdot \left(1+o(1)\right)e^{O\left(\frac{1}{K}\right)},\\ &= 1- e^{-\frac{\lambda}{(1-p_{coll}) \uptau}} \cdot \left(1+o(1)\right),\\ \end{aligned} \end{equation*} \normalsize \else \begin{equation*} \begin{aligned} p_{coll}=&\sum_{i=1}^{K-1} \binom {K-1} {i} \left(\frac{1}{K} \frac{\lambda}{\uptau (1-p_{coll})} \right)^{i}\left(1-\frac{1}{K} \frac{\lambda}{\uptau (1-p_{coll})} \right)^{K-1-i} \\ &= 1-\left( 1-\frac{1}{K} \frac{\lambda}{\uptau (1-p_{coll})} \right)^{K-1}\\ &= 1- e^{(K-1)\ln{\left(1-\frac{\lambda}{(1-p_{coll}) \uptau K}\right)}}\\ &= 1- e^{(K-1)\left(-\frac{\lambda}{(1-p_{coll}) \uptau K}-O\left(\frac{1}{K^2}\right)\right)}\\ &= 1- e^{-\frac{\lambda}{(1-p_{coll}) \uptau}-O\left(\frac{1}{K}\right)} \cdot e^{\frac{\lambda}{(1-p_{coll}) \uptau K}+O\left(\frac{1}{K^2}\right)},\\ &= 1- e^{-\frac{\lambda}{(1-p_{coll}) \uptau}} \cdot e^{\frac{\lambda}{(1-p_{coll}) \uptau K}+O\left(\frac{1}{K}\right)},\\ &= 1- e^{-\frac{\lambda}{(1-p_{coll}) \uptau}} \cdot 
\left(1+O\left(\frac{\lambda}{(1-p_{coll}) \uptau K}\right)\right)e^{O\left(\frac{1}{K}\right)},\\ &= 1- e^{-\frac{\lambda}{(1-p_{coll}) \uptau}} \cdot \left(1+o(1)\right)e^{O\left(\frac{1}{K}\right)},\\ &= 1- e^{-\frac{\lambda}{(1-p_{coll}) \uptau}} \cdot \left(1+o(1)\right),\\ \end{aligned} \end{equation*} \fi which completes the proof. \end{proof} Equation~\eqref{equ-Transmission success probability satisfies the equation - Memoryless arrival} is an implicit equation, and a numerical method is needed in order to find the value of $p_{coll}$. In Figure \ref{fig-SuccessProbability_2-10_0366_independent}, we depict the numerical calculation of the success probability (the blue line), as given in \eqref{equ-Transmission success probability satisfies the equation - Memoryless arrival}, compared with the approximation derived in the previous section (Eq. \eqref{equ-success probability model 1}) and with simulation results, for different user populations. Although the simple approximation is slightly less accurate for a very small number of users (e.g., around 5\% for 4 users), it coincides with the results of the simulation \emph{and the approximate model from the previous section} (with dependent queues, which cannot be calculated for large $K$) even for a moderate number of users (e.g., around 2\% for 10 users). It is important to emphasize that since the number of users is relatively small, we used the equation in its exact finite-$K$ form, i.e., without taking $K$ to infinity. In Figure \ref{fig-Service_time_model2}, we compare the average service time computed according to the $M/M/1$ queue approximation, with service rate $p_{succ}\uptau$, which was calculated according to Equation \eqref{equ-Transmission success probability satisfies the equation - Memoryless arrival}, with a simulation of the system, which, of course, included dependent queues and a variable $p_{succ}$. Clearly, the approximation shows excellent agreement with the simulation results.
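Solving the implicit equation \eqref{equ-Transmission success probability satisfies the equation - Memoryless arrival} numerically is straightforward; a minimal sketch using plain fixed-point iteration is given below. The parameter values are placeholders, and the iteration converges only in the stable regime $\lambda/((1-p_{coll})\uptau) < 1$:

```python
import math

def solve_p_coll(lam, tau, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration for p_coll = 1 - exp(-lam / ((1 - p_coll) * tau))."""
    p = 0.0
    for _ in range(max_iter):
        p_next = 1.0 - math.exp(-lam / ((1.0 - p) * tau))
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("no convergence: the queue is likely unstable")

# Example with placeholder rates: per-user arrival rate 0.02 and
# exceedance rate 0.1; the resulting load rho must stay below 1.
p = solve_p_coll(lam=0.02, tau=0.1)
rho = 0.02 / ((1.0 - p) * 0.1)
```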
\begin{figure}[!t] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{SuccessProbability_2-10_0366_independent} \caption[]{} \label{fig-SuccessProbability_2-10_0366_independent} \end{subfigure}% \quad \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ServiceTimeModel2} \caption[]{} \label{fig-Service_time_model2} \end{subfigure} \caption{Simulation results for the approximate model using constant collision probability. (a) Comparison of $p_{succ}$ between the analytic derivation of the approximation done in Section \ref{Approximate model 1} (Equation \eqref{equ-success probability model 1}), simulation, and the approximation of $K$ \emph{independent} M/M/1 queues given in Lemma \ref{lem-Transmission success probability satisfies the equation - Memoryless arrival}, where the total arrival rate is $\lambda_T=\frac{1}{e}(1-0.001)$. (b) The service time of an M/M/1 queue with service rate $p_{succ}\uptau$, calculated using \eqref{equ-Transmission success probability satisfies the equation - Memoryless arrival}, compared to simulation results of a system with $K$ \emph{interdependent} queues, where the total arrival rate is $\lambda_T=0.3$.} \end{figure} \subsubsection[Threshold value]{Threshold value}\label{Threshold value discussion} The threshold value has a profound effect on the performance. Different threshold values will support different arrival rates, i.e., a different stability region. In addition, the threshold value affects the users' delay. On the one hand, a high threshold results in a low exceedance probability, hence long intervals between transmission attempts. On the other hand, a low threshold results in a high exceedance probability, hence a high collision probability.
The optimal threshold value depends on the number of backlogged users in the system at any given time, e.g., when there are only a few backlogged users a low threshold should be chosen, whereas when many users are backlogged a high threshold should be chosen. Note, however, that the process of monitoring the users at all times, and notifying them regarding the current threshold before each transmission, is not only complex analytically, but mainly impractical in real systems. Therefore, in the analytical part of this work, we focused on a fixed threshold, independent of the users' status. Specifically, we chose a threshold value such that the probability of exceedance is $1/K$. This threshold is conservative, as it is designed for the case that all users are backlogged at all times. However, it is interesting to see how a different fixed value affects the results. Hence, we examine the effect of the threshold on the performance based on simulation results, and specifically we show that a better (less conservative) threshold value can be chosen. Figures \ref{fig-System_throughput_as_function_of_threshold} and \ref{fig-System_delay_as_function_of_threshold} depict the throughput and system delay as a function of the exceedance probability, respectively, for 50 users with a total arrival rate of $\lambda_T=0.35$. The blue dotted line in Figure \ref{fig-System_throughput_as_function_of_threshold} is the average number of backlogged users as a function of the exceedance probability. As expected, when the exceedance probability is high, the throughput decreases and the delay grows rapidly due to the high collision probability. On the other hand, when it is low, the throughput decreases as well, due to the long intervals in which no user attempts transmission, as the threshold is high. This, of course, indicates instability of the system.
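As a concrete illustration of this fixed-threshold choice, if the per-slot capacity samples were Gaussian (an assumption made here only for the example), the threshold with exceedance probability $1/K$ is simply the corresponding upper quantile:

```python
from statistics import NormalDist

def threshold_for_exceedance(K, mu=0.0, sigma=1.0):
    """Threshold u with P(C > u) = 1/K for C ~ N(mu, sigma^2);
    the Gaussian capacity model is assumed only for illustration."""
    return NormalDist(mu, sigma).inv_cdf(1.0 - 1.0 / K)

u = threshold_for_exceedance(K=50)
# Sanity check: the exceedance probability of u recovers 1/K.
p_exc = 1.0 - NormalDist().cdf(u)
```

As expected, the threshold grows (slowly) with $K$, reflecting the conservativeness of designing for all $K$ users being backlogged.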
However, as can be seen from Figures \ref{fig-System_throughput_as_function_of_threshold} and \ref{fig-System_delay_as_function_of_threshold}, there is a domain of probability values for which the system achieves its maximum throughput and has low average delay values. One can see that the value of $1/K$ (black dashed line) and the value for which we have a minimum number of backlogged users are both found in this domain. It is clear that although the throughput of the system is at its maximum throughout this domain of probability values, the delay and the average number of backlogged users can be reduced by increasing the exceedance probability. A possible criterion for a better exceedance probability value may be a threshold which minimizes the average number of backlogged users in the system. In addition, we wish to point out that Figure \ref{fig-System_delay_as_function_of_threshold} may be misleading for values for which the system is not stable (approximately $p<0.012$ and $p>0.03$), where one would expect to see an infinite delay; this is due to the finite run time of the simulation. Nevertheless, we can still see the sharp jump in the delay, which indicates its general behaviour. \ifdouble \begin{figure}[tp!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ThresholdSim_50u} \caption{} \label{fig-System_throughput_as_function_of_threshold} \end{subfigure}% \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ThresholdSimDelay_50u} \caption{} \label{fig-System_delay_as_function_of_threshold} \end{subfigure} \caption[Throughput and delay as function of threshold]{The system performance metrics as a function of the threshold value. (a) System throughput and average number of backlogged users. (b) System delay. One can compare the performance under the exceedance probability of $1/K$ with that under different probability values (e.g., the value minimizing the average number of backlogged users).
The simulation was performed for 50 users with a total arrival rate of $\lambda_T=0.35$.} \label{fig-System_throughput_and_delay_as_function_of_threshold} \end{figure} \else \begin{figure*}[tp!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ThresholdSim_50u} \caption{} \label{fig-System_throughput_as_function_of_threshold} \end{subfigure}% \quad \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{ThresholdSimDelay_50u} \caption{} \label{fig-System_delay_as_function_of_threshold} \end{subfigure} \caption[Throughput and delay as function of threshold]{The system performance metrics as a function of the threshold value. (a) System throughput and average number of backlogged users. (b) System delay. One can compare the performance under the exceedance probability of $1/K$ with that under different probability values (e.g., the value minimizing the average number of backlogged users). The simulation was performed for 50 users with a total arrival rate of $\lambda_T=0.35$.} \label{fig-System_throughput_and_delay_as_function_of_threshold} \end{figure*} \fi \subsection[Queueing Approximate model \Rmnum{3}]{Approximation using Constant Collision Probability - Time Dependent Channel}\label{Approximate model 3} In many practical scenarios, the channel seen by a user at a given time slot is correlated with the one seen by the user in previous time slots. Thus, in the sequel, we analyze the performance of the threshold-based algorithm when users experience a time dependent channel distribution. In particular, we assume that the channel capacity distribution experienced by each user is time varying, according to a Gilbert-Elliott model \cite{gilbert1960}. Specifically, the channel distribution may be in one of two states, denoted as G (for Good) and B (for Bad), where each state determines a different channel distribution.
The transitions from Good to Bad and from Bad to Good follow Bernoulli distributions with probabilities $\alpha$ and $\beta$, respectively. Thus, the states evolve according to a 2-state Markov chain, as described in Figure \ref{fig-GoodBadchannel}. Note that while the threshold remains fixed, the threshold exceedance probability clearly depends on the user's channel state. Namely, if a user is in the Good state, the user is expected to exceed the threshold more often than when in the Bad state. Furthermore, the user's collision probability depends not only on the number of other backlogged users but also on the channel state of each such backlogged user. Accordingly, trying to solve for the system's stationary distribution, which has one additional dimension compared with the previously analyzed system, is much more involved than before. We thus adopt the same simplified approach we took in Section~\ref{Approximate model 2}. Specifically, we assume that the collision probability that a user experiences is constant, regardless of the number of other backlogged users or their channel states. As before, we assume that the slot duration is small compared to the time interval between two consecutive threshold exceedances, even when the user's channel is in the Good state, and approximate the time between threshold exceedances for each user as exponentially distributed with parameter $\mu_g$ or $\mu_b$, depending on whether the user's channel is in the Good or Bad state, respectively. Accordingly, the user's service time is exponentially distributed with rate $\mu_g\cdot p_{succ}$ or $\mu_b\cdot p_{succ}$, depending on the user being in the Good or Bad state, respectively. Given the Poisson arrival process with rate $\lambda$, the decoupled user's queue model is presented in Figure \ref{fig-Time Dependent Queue}.
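The stationary split between the two states, $\pi_g=\beta/(\alpha+\beta)$ and $\pi_b=\alpha/(\alpha+\beta)$, used below, can be verified with a short simulation of the state process; the values $\alpha=\beta=0.1$ match the ones used in the simulations later in this subsection:

```python
import random

def fraction_good(alpha, beta, n_slots, seed=0):
    """Simulate the two-state channel chain (Good -> Bad w.p. alpha,
    Bad -> Good w.p. beta); return the empirical fraction of Good slots."""
    rng = random.Random(seed)
    state_good, good_slots = True, 0
    for _ in range(n_slots):
        good_slots += state_good
        if state_good:
            state_good = rng.random() >= alpha   # stay Good w.p. 1 - alpha
        else:
            state_good = rng.random() < beta     # recover w.p. beta
    return good_slots / n_slots

# With alpha = beta = 0.1 the stationary split is pi_g = pi_b = 0.5.
frac = fraction_good(alpha=0.1, beta=0.1, n_slots=200_000)
```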
\begin{figure}[!t] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=0.6\textwidth]{BadGoodChannel} \caption{} \label{fig-GoodBadchannel} \end{subfigure}% \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=0.8\textwidth]{TimeDependentQueue} \caption{} \label{fig-Time Dependent Queue} \end{subfigure}% \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=0.9\textwidth]{TimeDependentQueueMarkovChain} \caption{} \label{fig-Time Dependent Queue Markov chain} \end{subfigure} \caption{The models for the approximation using a constant collision probability for a time dependent channel. (a) A Good-Bad channel model according to \cite{gilbert1960}. (b) The queue diagram for a user with a time dependent channel and a constant collision probability. (c) The Markov chain model of the queue for a time dependent user.} \end{figure} This time-dependent queue model can be represented as a continuous-time Markov process on the set of states $\{\pi^i_m\}$ for $i\in\{b,g\}$, which indicates the Good or Bad state, and $m=0,1,2,\dots$ the number of packets in the queue. This two dimensional Markov chain is presented in Figure \ref{fig-Time Dependent Queue Markov chain}. To ease notation, we denote $\mu_g\cdot p_{succ}=\mu'_g$ and $\mu_b\cdot p_{succ}=\mu'_b$. In \cite{yechiali1971queuing}, the authors studied a modification of the $M/M/1$ queuing model in which the rate of arrival and the service capacity are subject to Poisson alternations. While the model therein is different from the one here, the suggested analysis relied on a Markov chain similar to the one presented in Figure \ref{fig-Time Dependent Queue Markov chain}. The system is solved using generating-function techniques, resulting in a solution for the steady state probabilities, $\{\pi^i_m\}$, as a function of the transition rate parameters and the root of a third-degree polynomial $g(z)$ (only one solution exists under the assumptions).
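Numerically, evaluating this solution amounts to finding the root $z_0\in(0,1)$ of the cubic $g(z)$ and substituting it into the closed-form expressions for $\pi^g_0$, $\pi^b_0$ and $\overline{Q}$ given below. A sketch, assuming the effective rates $\mu'_g,\mu'_b$ (i.e., a fixed, known $p_{succ}$) are already available:

```python
def solve_time_dependent_queue(lam, alpha, beta, mu_g, mu_b):
    """Evaluate z0 (the root of the cubic g in (0,1)), the empty-queue
    probabilities and the mean queue size for the two-state service model.
    mu_g and mu_b stand for the effective rates mu'_g and mu'_b, i.e.,
    the exceedance rates already multiplied by a fixed, known p_succ."""
    mu_hat = (beta * mu_g + alpha * mu_b) / (alpha + beta)
    assert mu_hat > lam, "stability requires mu_hat > lam"

    def g(z):
        return (lam ** 2 * z ** 3
                - (alpha * lam + beta * lam + lam ** 2
                   + lam * mu_b + lam * mu_g) * z ** 2
                + (alpha * mu_b + beta * mu_g + mu_g * mu_b
                   + lam * mu_b + lam * mu_g) * z
                - mu_g * mu_b)

    # g(0) = -mu'_g mu'_b < 0 and g(1) = (alpha + beta)(mu_hat - lam) > 0
    # under stability, so bisection on (0, 1) locates the root z0.
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    z0 = 0.5 * (lo + hi)

    pi_g0 = beta * (mu_hat - lam) * z0 / (mu_g * (1 - z0) * (mu_b - lam * z0))
    pi_b0 = alpha * (mu_hat - lam) * z0 / (mu_b * (1 - z0) * (mu_g - lam * z0))
    q_bar = (lam / (mu_hat - lam)
             + (mu_g * (mu_b - lam) * pi_g0 + mu_b * (mu_g - lam) * pi_b0
                - (mu_g - lam) * (mu_b - lam)) / ((alpha + beta) * (mu_hat - lam)))
    return z0, pi_g0, pi_b0, q_bar

# Parameter values taken from the simulations below (with p_succ = 1 here).
z0, pi_g0, pi_b0, q_bar = solve_time_dependent_queue(0.3, 0.1, 0.1, 0.7, 0.5)
```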
We rely on the solution presented in \cite{yechiali1971queuing} to resolve the steady state distribution of the chain. Let $\hat{\mu}$ denote the average service rate, \begin{equation*} \hat{\mu}=\pi_g \mu'_g+\pi_b\mu'_b, \quad \text{where} \quad \pi_g=\frac{\beta}{\alpha+\beta},\quad \pi_b=\frac{\alpha}{\alpha+\beta}. \end{equation*} Note that in order to maintain stability, it is required that $\hat{\mu}>\lambda$. We define the partial generating functions of the system as: \begin{equation*} G_i(z)=\sum^{\infty}_{m=0}\pi^i_m z^m \quad \quad \mid z \mid \leq 1, \ i=g,b. \end{equation*} From~\cite{yechiali1971queuing}, we have \begin{equation*} \begin{aligned} G_g(z)&=\left( \beta(\hat{\mu}-\lambda)z+\pi^g_0\mu'_g(1-z)(\lambda z-\mu'_b) \right)/g(z) \\ G_b(z)&=\left( \alpha(\hat{\mu}-\lambda)z+\pi^b_0\mu'_b(1-z)(\lambda z-\mu'_g) \right)/g(z), \end{aligned} \end{equation*} where $g(z)$ is \begin{equation}\label{equ-third degree polynomial} \begin{aligned} g(z)=&\lambda^2z^3-(\alpha\lambda+\beta\lambda+\lambda^2+\lambda\mu'_b+\lambda\mu'_g)z^2 \\ &+(\alpha\mu'_b+\beta\mu'_g+\mu'_g\mu'_b+\lambda\mu'_b+\lambda\mu'_g)z-\mu'_g\mu'_b. \end{aligned} \end{equation} The steady state probabilities of an empty queue, depending on the channel state, are \begin{equation} \pi^g_0=\frac{\beta(\hat{\mu}-\lambda)z_0}{\mu'_g(1-z_0)(\mu'_b-\lambda z_0)}, \ \ \pi^b_0=\frac{\alpha(\hat{\mu}-\lambda)z_0}{\mu'_b(1-z_0)(\mu'_g-\lambda z_0)} \end{equation} where $z_0$ is the root of the polynomial $g(z)$. The remaining steady state probabilities are as follows \begin{equation} \begin{aligned} \pi^g_m&= \pi^g_{m-1}\frac{\lambda}{\mu'_g}+\sum^{m-1}_{j=0}\pi^g_j\frac{\alpha}{\mu'_g}- \sum^{m-1}_{j=0}\pi^b_j\frac{\beta}{\mu'_g} \quad \quad m>0 \\ \pi^b_m&= \pi^b_{m-1}\frac{\lambda}{\mu'_b}+\sum^{m-1}_{j=0}\pi^b_j\frac{\beta}{\mu'_b} - \sum^{m-1}_{j=0}\pi^g_j\frac{\alpha}{\mu'_b} \quad \quad m>0.
\end{aligned} \end{equation} From the first derivative of the partial generating functions, we can obtain the expected queue size, which includes the head-of-line packet. Unlike the analysis in Subsection \ref{Approximate model 1}, we now have \ifdouble \small \fi \begin{equation*} \overline{Q}=\frac{\lambda}{\hat{\mu}-\lambda}+\frac{\mu'_g(\mu'_b-\lambda)\pi^g_0+ \mu'_b(\mu'_g-\lambda)\pi^b_0-(\mu'_g-\lambda)(\mu'_b-\lambda)}{(\alpha+\beta)(\hat{\mu}-\lambda)}. \end{equation*} \ifdouble \normalsize \fi Using Little's theorem, one can obtain the average waiting time in the queue, $ W=\overline{Q}/\lambda.$ Note that the aforementioned results rely on $\mu'_b$ and $\mu'_g$, which in turn rely on $p_{succ}$, which is assumed to be fixed, yet is unknown and needs to be computed. Hence, in the same manner as in the previous subsection, we define the probability that a specific user attempts transmission, i.e., exceeds the threshold and its queue is not empty, to be \begin{equation}\label{equ-probability for attempt transmission} \begin{aligned} P_t &\triangleq Pr(\text{transmission attempt})\\ &=Pr(\text{an exceedance occurs \& the queue is not empty})\\ &=(G_g(1)-\pi^g_0)(1-e^{-\mu_g})+(G_b(1)-\pi^b_0)(1-e^{-\mu_b}), \end{aligned} \end{equation} where the first term consists of the probability of being in the Good state with a packet to transmit, times the probability that an exceedance occurs while in the Good state; the second term is the analogue for the Bad state. The probability of success, given that one user is about to transmit, can be obtained by a calculation similar to that in Lemma \ref{lem-Transmission success probability satisfies the equation - Memoryless arrival}, and we have \begin{equation}\label{equ-probability for sucss-third model} p_{succ}=(1-P_t)^{K-1}.
\end{equation} Since the root $z_0$ and $p_{succ}$ are coupled, \eqref{equ-probability for sucss-third model} and the root of the third-degree polynomial \eqref{equ-third degree polynomial} must be solved simultaneously in order to compute them both. In order to evaluate the approximation, we ran a set of simulations. In the results presented here, the transition rates between the Good and Bad states are set equal; specifically, $\alpha=\beta=0.1$. The users were considered homogeneous. The arrival and service rate parameters presented in the figures depict the total rates of the system, and were divided equally among all the users. The service rates were set to $\mu_g=0.7$ and $\mu_b=0.5$ for Good and Bad channel quality, respectively. Figure \ref{fig-QueuingPerformance_time dependent_P_succ} depicts the probability of success, $p_{succ}$, vs.\ the number of users for two different arrival rates, $\lambda_T=0.1$ and $\lambda_T=0.3$. The figure clearly shows that the estimated probability of success $p_{succ}$ coincides with the simulation results for both arrival rates. Figure~\ref{fig-QueuingPerformance_time dependent} depicts the mean queue size and the mean sojourn time vs.\ the number of users for $\lambda_T=0.3$, which also shows good agreement between the approximate analytical results and the simulation results. \ifdouble \begin{figure}[t!]
\centering \begin{subfigure}[b]{0.45\textwidth} \center{ \includegraphics[width=\textwidth,height=4cm,keepaspectratio]{p_succ_L_01_mug_07_mub_05_K_150} \caption{}} \label{fig-P_succ_L=0.1} \end{subfigure}% \begin{subfigure}[b]{0.45\textwidth} \center{ \includegraphics[width=\textwidth,height=4cm,keepaspectratio]{p_succ_L_03_mug_07_mub_05_K_150} \caption{}} \label{fig-P_succ_L=0.3} \end{subfigure} \caption[Success probability time dependent model]{Success probability for the approximation using a constant collision probability for a time dependent channel, as given in \eqref{equ-probability for sucss-third model}, compared to simulation results of a time dependent queueing system, according to the Good-Bad channel model, as a function of the number of users. Here $\mu_g=0.7$ and $\mu_b=0.5$, for (a) $\lambda_T=0.1$ and (b) $\lambda_T=0.3$.} \label{fig-QueuingPerformance_time dependent_P_succ} \end{figure} \begin{figure}[!h] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{gfx/pdf/QueueingSimRes/timeDepenendetModel/MeanQueueSize_L_03_mug_07_mub_05_K_150} \caption{} \label{fig-MeanQueueSize_L=0.3} \end{subfigure}% \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{gfx/pdf/QueueingSimRes/timeDepenendetModel/TimeLine_L_03_mug_07_mub_05_K_150} \caption{} \label{fig-TimeInLine_L=0.3} \end{subfigure} \caption[System performance time dependent model]{Performance metrics according to the analytic derivation of the approximation using a constant collision probability for a time dependent channel, compared to simulation results of a time dependent queueing system, according to the Good-Bad channel model, as a function of the number of users. Here $\mu_g=0.7$, $\mu_b=0.5$ and $\lambda_T=0.3$: (a) mean queue size and (b) time in queue.} \label{fig-QueuingPerformance_time dependent} \end{figure} \else \begin{figure*}[t!]
\centering \begin{subfigure}[b]{0.45\textwidth} \center{ \includegraphics[width=\textwidth,height=4cm,keepaspectratio]{p_succ_L_01_mug_07_mub_05_K_150} \caption{}} \label{fig-P_succ_L=0.1} \end{subfigure}% \begin{subfigure}[b]{0.45\textwidth} \center{ \includegraphics[width=\textwidth,height=4cm,keepaspectratio]{p_succ_L_03_mug_07_mub_05_K_150} \caption{}} \label{fig-P_succ_L=0.3} \end{subfigure} \caption[Success probability time dependent model]{Success probability for the approximation using a constant collision probability for a time dependent channel, as given in \eqref{equ-probability for sucss-third model}, compared to simulation results of a time dependent queueing system, according to the Good-Bad channel model, as a function of the number of users. Here $\mu_g=0.7$ and $\mu_b=0.5$, for (a) $\lambda_T=0.1$ and (b) $\lambda_T=0.3$.} \label{fig-QueuingPerformance_time dependent_P_succ} \end{figure*} \begin{figure*}[!t] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{MeanQueueSize_L_03_mug_07_mub_05_K_150} \caption{} \label{fig-MeanQueueSize_L=0.3} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{TimeLine_L_03_mug_07_mub_05_K_150} \caption{} \label{fig-TimeInLine_L=0.3} \end{subfigure} \caption[System performance time dependent model]{Performance metrics according to the analytic derivation of the approximation using a constant collision probability for a time dependent channel, compared to simulation results of a time dependent queueing system, according to the Good-Bad channel model, as a function of the number of users. Here $\mu_g=0.7$, $\mu_b=0.5$ and $\lambda_T=0.3$: (a) mean queue size and (b) time in queue.} \label{fig-QueuingPerformance_time dependent} \end{figure*} \fi \begin{comment} \subsubsection{Threshold exceedance process} The point process approximation described earlier results in convergence of the points exceeding $u_n$ to a non-homogeneous Poisson process $N$.
Here, we shall use the notation given in \cite{EVT:Springer1983} for the Poisson properties of exceedance.\\ Let $C_1,C_2,...,C_n$ be a sequence of \textit{i.i.d.} r.v.s with distribution $F$, representing the channel capacity draws of a specific user. From \cite[Theorem 2.1.1]{EVT:Springer1983}, if $u_n$ satisfies $n(1-F(u_n))\rightarrow \uptau$, then for $k=0,1,2,...,$ \begin{equation*} P(N_n\leq k)\rightarrow e^{-\uptau}\sum_{s=0}^{k}\frac{\uptau^{s}}{s!}, \end{equation*} where $N_n$ is the number of exceedances of a level $u_n$ by $\{C_n\}$. It is important to note that convergence in distribution happens if we change the ``time scale'' by a factor of $n$, as in the definition of $N_n$ in \eqref{equ-sequence of point processes N_n}. That is, we have a point process in the interval $(0,1]$, and the exceedances of such points have a limiting Poisson distribution. This restriction may be problematic in terms of the data arrival rate, which will also need to be scaled by a factor of $n$. However, under certain conditions, there may be convergence of the exceeding points on the entire positive line and not just the unit interval. By \cite[Theorem $5.2.1 (\rmnum{2})$]{EVT:Springer1983}, if for each $\uptau >0$ there exists a sequence $\{u_n(\uptau)\}$ satisfying $n(1-F(u_n(\uptau)))=nP(C_1>u_n(\uptau))\rightarrow \uptau$ as $n \rightarrow \infty$, and $D(u_n(\uptau)), D'(u_n(\uptau))$ hold for all $\uptau > 0$, then for any fixed $\uptau$, $N_n$ converges in distribution to a Poisson process $N$ on $(0,\infty)$ with parameter $\uptau$. Clearly, the dependence conditions hold in this case, since the sequence $\{C_n\}$ is \textit{i.i.d.}, so one only needs to show that the first condition holds for each $\uptau$. \begin{lemma}\label{lem-condition for poisson convergence on all positive line} Assume $F$ is the Gaussian distribution and let $a_n$ and $b_n$ be given according to \cite[Theorem 1.5.3]{EVT:Springer1983}.
Fix any $\uptau>0$ and set $u_n(\uptau)= \frac{\log{1/\uptau}}{a_n}+b_n$. Then, \begin{equation*} \lim_{n \to \infty} n(1-F(u_n(\uptau))) = \uptau. \end{equation*} \end{lemma} \begin{IEEEproof} In a similar way to the derivation of the normalizing constants in \cite[Theorem 1.5.3]{EVT:Springer1983}, let us find $u_n(\uptau)$ which satisfies the equivalent condition for the convergence of the expression $n(1-F(u_n(\uptau)))$: \begin{equation*} \begin{aligned} n(1-F(u_n(\uptau))) &\rightarrow \uptau \qquad \text{as } n\rightarrow \infty \\ \frac{nf(u_n(\uptau))}{u_n(\uptau)}&\rightarrow \uptau \qquad \text{as } n\rightarrow \infty \end{aligned} \end{equation*} where the second line is true due to the Gaussian relation $1-\Phi(u)\sim \frac{\phi(u)}{u}$ as $u\rightarrow \infty$, since in our case $u_n(\uptau)$ grows with $n$. Thus, \begin{equation*} \begin{aligned} \frac{1}{\sqrt{2\pi}}e^{-\frac{u_n^2(\uptau)}{2}} &\underset{n\rightarrow \infty}{\rightarrow} \frac{\uptau \ u_n(\uptau)}{n} \\ -\log\sqrt{2\pi}-\frac{u_n^2(\uptau)}{2}&\underset{n\rightarrow \infty}{\rightarrow} \log \uptau + \log(u_n(\uptau)) - \log n \qquad \end{aligned} \end{equation*} We know that $\log(u_n(\uptau))=\frac{1}{2}(\log 2 +\log{\log{n}})+o(1) $, hence \begin{equation*} \begin{aligned} &\frac{u_n^2(\uptau)}{2}= \log{ \frac{1}{\uptau}}-\frac{1}{2}\log{4\pi}-\frac{1}{2}\log{\log{n}} + \log n +o(1)\\ &u_n^2(\uptau)=2\log n\left( 1+ \frac{\log{ \frac{1}{\uptau}}-\frac{1}{2}\log{4\pi}-\frac{1}{2}\log{\log{n}}}{\log n}+o\left(\frac{1}{\log n} \right) \right)\\ &u_n(\uptau)=\sqrt{2\log n}\left( 1+ \frac{\log{ \frac{1}{\uptau}}-\frac{1}{2}\log{4\pi}-\frac{1}{2}\log{\log{n}}}{2\log n}+o\left(\frac{1}{\log n} \right) \right)\\ &u_n(\uptau)=\frac{\log{1/\uptau}}{a_n}+b_n \end{aligned} \end{equation*} where the penultimate line is due to a Taylor expansion.
\end{IEEEproof} Now, since all conditions for convergence hold, and the exceeding points indeed converge to a Poisson process on the real line, we can conclude that a user attempts transmission at a rate of $\uptau$, assuming it has packets to send. We denote this Poisson process of threshold exceedance by $N_{exc}$. Note that the time between each user's exceedances is distributed exponentially with parameter $\uptau$. The above leads us to a discussion on the duration of a slot. \begin{figure*} \centering \begin{subfigure}[b]{0.6\textwidth} \centering \includegraphics[width=\textwidth,height=5cm,keepaspectratio]{PoissonConverganceSimulationIID} \caption[Poisson convergence of exceeding points]{Simulation of 10000 users' capacities following an \textit{i.i.d.} Gaussian distribution, showing that the exceedance counts converge to a Poisson distribution (the dots) with parameter $\uptau$.} \label{fig-PoissonConvergence} \end{subfigure}% ~ \begin{subfigure}[b]{0.35\textwidth} \center{ \begin{tabular}{l|c c} \hline & Simulation & Poisson \\ & & distribution \\ \hline Pr(N=0) & 0.3959 & 0.3961 \\ Pr(N=1) & 0.3682 & 0.3668 \\ Pr(N=2) & 0.1696 & 0.1698 \\ Pr(N=3) & 0.0527 & 0.0524 \\ Pr(N=4) & 0.0112 & 0.0121 \\ Pr(N=5) & 0.002 & 0.0022 \\ Pr(N=6) & 0.0004 & 0.0003 \\ Pr(N=7) & 0.0 & 0.00004 \\ \hline \end{tabular}} \caption[Poisson convergence of exceeding points values table]{Values table for Figure \ref{fig-PoissonConvergence}.} \end{subfigure} \caption{Poisson convergence of exceeding points} \end{figure*} \begin{figure} \centering \begin{tikzpicture}[ every node/.style={anchor=south west,inner sep=0pt}, x=1mm, y=1mm, ] \node (fig1) at (0,0) {\includegraphics[width=0.5\textwidth]{./gfx/pdf/PoissonConverganceSimulationIID}}; \node (fig2) at (45,10) {\tiny \begin{tabular}{l|c c} \hline & Simulation & Poisson \\ & & distribution \\ \hline Pr(N=0) & 0.3959 & 0.3961 \\ Pr(N=1) & 0.3682 & 0.3668 \\ Pr(N=2) & 0.1696 & 0.1698 \\ Pr(N=3) &
0.0527 & 0.0524 \\ Pr(N=4) & 0.0112 & 0.0121 \\ Pr(N=5) & 0.002 & 0.0022 \\ Pr(N=6) & 0.0004 & 0.0003 \\ Pr(N=7) & 0.0 & 0.00004 \\ \hline \end{tabular}}; \end{tikzpicture} \caption{Simulation of 10000 users' capacities following an \textit{i.i.d.} Gaussian distribution, showing that the exceedance counts converge to a Poisson distribution (the dots) with parameter $\uptau$. The values table is given at the top right.} \label{fig-PoissonConvergence} \end{figure} In the limit of a large number of users, the capacity seen by a user who exceeds the threshold is high (in fact, it scales like $O(\sigma\sqrt{2\log K}+\mu)$, see \eqref{equ-CapacityExpression}). Hence, for any finite size of a data packet, the transmission time tends to zero as the number of users increases. Furthermore, in this case, as convergence to the Poisson process $N_{exc}$ exists, event durations also go to zero. This motivates us to suggest the asymptotic model, where transmission time is negligible but still exists; under these assumptions we will refer to the slots as mini-slots to emphasize this fact. \begin{remark}[Zero collisions] \label{rem-zero collisions} Since the users are independent, and, in the limit of large $K$ and small slots, are all characterized by the same $N_{exc}$ Poisson process, it is clear that no two events (transmissions) can occur at the same time. Therefore, once a user exceeds $u_n$, assuming its transmission time goes to zero, the probability that other users will exceed $u_n$ at the exact same time goes to zero. Thus, in such a scenario, the resulting approximation for the behavior of the queues is that of independent queues, each with a Poisson arrival process with rate $\lambda$ and an exponential service process with parameter $\uptau$, that is, independent $M/M/1$ queues.
Still, we use this scenario only as a guideline for the analysis, since we wish to keep the assumption that the services are synchronized, and hence collisions \textbf{may occur}, resulting in a decline in the service rate. \end{remark} \end{comment} \begin{comment} \begin{figure*}[!t] \centering \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{MeanQueueSize_2-10_0366} \caption{} \label{fig-MeanQueueSize_2-10_0366} \end{subfigure}% \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\textwidth]{TimeInLine_2-10_0366} \caption{} \label{fig-TimeInLine_2-10_0366} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\textwidth]{ServiceTime_2-10_0366} \caption{} \label{fig-ServiceTime_2-10_0366} \end{subfigure} \caption[System performance time independent model]{Analytic results of approximation model A compared with simulation for the system performance as a function of the number of users. Total arrival rate is $\lambda_T=\frac{1}{e}(1-0.001)$. (a) The mean queue size. (b) The time in line. (c) The service time.
The red line describes the analytic expressions \eqref{equ-mean queue size model 1}, \eqref{equ-time in line approximation model 1} and \eqref{equ-service time approximation model 1}, respectively.} \label{fig-QueuingPerformance_2-10_0366} \end{figure*} \end{comment} \begin{comment} \begin{figure*}[!t] \centering \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{MeanQueueSize_K=7} \caption{} \label{fig-MeanQueueSize_K=7} \end{subfigure}% \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{TimeInLine_K=7} \caption{} \label{fig-TimeInLine_K=7} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=\textwidth]{ServiceTime_K=7} \caption{} \label{fig-ServiceTime_K=7} \end{subfigure} \caption[System performance time independent model]{Analytic results of approximation model A compared with simulation for the system performance as a function of the total arrival rate. The number of users is $K=7$. (a) The mean queue size. (b) The time in line. (c) The service time.
The red line describes the analytic expressions \eqref{equ-mean queue size model 1}, \eqref{equ-time in line approximation model 1} and \eqref{equ-service time approximation model 1}, respectively.} \label{fig-QueuingPerformance_K=7} \end{figure*} \end{comment} \begin{comment} \begin{figure*}[!t] \centering \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth,height=3cm,keepaspectratio]{gfx/pdf/QueueingSimRes/timeDepenendetModel/MeanQueueSize_L_01_mug_07_mub_05_K_150} \caption{} \label{fig-MeanQueueSize_L=0.1} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth,height=3cm,keepaspectratio]{gfx/pdf/QueueingSimRes/timeDepenendetModel/MeanQueueSize_L_03_mug_07_mub_05_K_150} \caption{} \label{fig-MeanQueueSize_L=0.3} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=\textwidth,height=3cm,keepaspectratio]{gfx/pdf/QueueingSimRes/timeDepenendetModel/TimeLine_L_01_mug_07_mub_05_K_150} \caption{} \label{fig-TimeInLine_L=0.1} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=\textwidth,height=3cm,keepaspectratio]{gfx/pdf/QueueingSimRes/timeDepenendetModel/TimeLine_L_03_mug_07_mub_05_K_150} \caption{} \label{fig-TimeInLine_L=0.3} \end{subfigure} \caption[System performance time dependent model]{Queuing system performance results as a function of the number of users in a time dependent queue model. $\lambda_T=\{0.1,0.3\},\mu_g=0.7,\mu_b=0.5$} \label{fig-QueuingPerformance_time dependent} \end{figure*} \end{comment} \begin{comment} As in \cite{ephremides1987delay}, we assume that the arrival rate at user $i$ is $\lambda_i$, and the arrival processes are statistically independent between the users. Time is slotted and it takes exactly one slot to transmit one packet. Thus, $\lambda_i$ is the probability of arrival for user $i$ in any given slot.
User $i$ attempts, with probability $p_i$, to transmit the head-of-the-line packet in the queue (if the latter is nonempty) through a common channel to the receiver. At the beginning of a given slot, the users' status may be one of three categories: \emph{idle}, \emph{active} or \emph{blocked}. A user is idle if there are no packets in its queue at the end of the preceding slot. It is blocked if its queue is not empty and the latest attempted transmission was unsuccessful (e.g., due to collision). It is active if its queue is not empty but its most recent attempted transmission was successful. We then let the transmission probability be \begin{equation}\label{equ-probability for transmission queueing } p_i= \left\{ \begin{array}{l l} \frac{1}{K}\lambda_i & \ \text{if $i$ is idle},\\ \frac{1}{K} & \ \text{if $i$ is active or blocked},\\ \end{array} \right. \end{equation} where, in the idle state, a user may attempt transmission only if a packet arrived at the beginning of the current slot. The model consists of coupled Markov chains, the system-status chain and the queue-length chain. The transition probabilities of each chain, as will be presented later, depend on the steady-state probabilities of the other chain and thus, all state equations for both chains must be solved simultaneously. The system-status chain tries to capture the state of each user at any given time. Hence, the status variable $\overline{S}$ consists of $K$ ternary variables, $S_1,S_2,...,S_K$, each of which indicates the status of the corresponding terminal. Namely, $S_i=\{0,1,2\}$ for idle, active and blocked, respectively. The total number of states achievable by this vector is $2^{K-1}(K+2)$, since no more than one active user may be present in the system at any given time. The calculations of the transition probabilities are presented in Appendix \ref{Appendix A}.
Two quantities which are needed for the calculation of the transition probabilities are: \begin{equation}\label{equ-p(1|1) and p(0|2) definition} \begin{array}{l} P_i(1\mid 1) \triangleq P_r(\text{queue size $>1\mid$ user $i$ is active})\\ P_i(0\mid 2) \triangleq P_r(\text{queue size $=1\mid$ user $i$ is blocked}). \end{array} \end{equation} These probabilities reflect the coupling between the two chains. That is, in order to calculate $P(\overline{S})$, the joint steady-state probability distribution of the random vector $\overline{S}$, we first need to solve for the queue-length chain's steady state. The queue-length Markov chain tracks both the status and queue length of each user, independently of the status and queue length of the other users. Specifically, the pair $(T_i,N_i)$ represents the state of the user: $N_i$ is the total number of packets at queue $i$ and $T_i$ indicates blocked or unblocked (active or idle) status. $\pi(T_i,N_i)$ denotes its steady-state probability. The transition probabilities of the chain depend on the following average transmission success probabilities (assuming a user has a packet to send): \begin{equation}\label{equ-Avevrage success probabilities model 1} \begin{array}{l} P_B(i) = P_r(\text{success $\mid$ user $i$ is blocked})\\ P_A(i) = P_r(\text{success $\mid$ user $i$ is active})\\ P_I(i) = P_r(\text{success $\mid$ user $i$ is idle}), \end{array} \end{equation} where the averaging is performed over the status of the other users (the status of the system). Namely, in order to calculate the probabilities \eqref{equ-Avevrage success probabilities model 1}, one needs the stationary distribution of the system-status chain $P(\overline{S})$. This leads again to the coupling of the two sets of equations. The calculations of these average success probabilities are presented in Appendix \ref{Appendix B}.
The conditional moment generating function of the chain is defined as \begin{equation}\label{equ-conditional moment generating function} G_{T_i}^i(z)\triangleq \sum_{N_i=0}^{\infty} \pi(T_i,N_i)z^{N_i}, \ \ \ \ T_i=0,1. \end{equation} In order to find the steady state of the chains, as explained, the state equations must be solved simultaneously. This is done by an iterative process, at each step using the current values of the auxiliary quantities $P_i(1\mid 1),P_i(0\mid 2)$ or $P_I(i),P_A(i),P_B(i)$. The iterative method used is Wegstein's iteration scheme and, once satisfactory convergence is achieved, one may calculate the performance metrics using the steady state of the chains. \end{comment}
\section*{Introduction} The category \textbf{DeV} of de Vries' compingent algebras \cite{deVries} can be considered among neighbouring categories in a network of Stone-like and Gelfand-like dualities involving the categories \textbf{KHaus} of compact Hausdorff spaces, \textbf{GlSp} of Gleason spaces, \textbf{KrFrm} of compact regular frames and \textbf{\emph{C}$^\star$-alg} of $C^\star$-algebras. \begin{center} \begin{tikzpicture} \node(KHaus) at (-1,0) {\textbf{KHaus}}; \node(DeV) at (1,0) {\textbf{DeV}}; \node(C) at (0,3.1) { \textbf{\emph{C}$^\star$-alg}}; \node(GlSp) at (1.6,1.9) { \textbf{GlSp}}; \node(KrFrm) at (-1.6,1.9) {\textbf{KrFrm}}; \draw[-, >=latex] (KHaus) to (DeV) ; \draw[-, >=latex] (DeV) to(GlSp); \draw[-, >=latex] (KrFrm) to(KHaus); \draw[-, >=latex] (KrFrm) to (C); \draw[-, >=latex] (GlSp) to (C); \end{tikzpicture} \end{center} In particular, the categories \textbf{KHaus} and \textbf{GlSp} are equivalent, as established in \cite{Sourabh}. This was first observed via the composition of the dualities between \textbf{KHaus} and \textbf{DeV} and between \textbf{DeV} and \textbf{GlSp}, and then with a direct description. Categories of this base network were later generalized in different papers. Indeed, Bezhanishvili and Harding extended in \cite{Guramtriang} the dualities and equivalences between \textbf{KHaus}, \textbf{KrFrm} and \textbf{DeV} to dualities and equivalences between the categories \textbf{StKSp} of stably compact spaces, \textbf{StKFrm} of stably compact frames and \textbf{PrFrm} of proximity frames. As for the duality between \textbf{KHaus} and \textbf{\emph{C}$^\star$-alg}, a real version of the duality, given in \cite{Gurambal}, was extended in \cite{DeHaGelfand} to a duality between \textbf{KPSp} of compact pospaces and the category \textbf{usbal} of Stone semirings. We refer to \cite{Guramtriang} and \cite{DeHaGelfand} for the relevant definitions.
\begin{center} \begin{tikzpicture} \node(KHaus) at (-1,0) {\textbf{KHaus}}; \node(KPSp) at (-2,-1.4) {\textbf{KPSp}}; \node(DeV) at (1,0) {\textbf{DeV}}; \node(PrFrm) at (2,-1.4) {\textbf{PrFrm}}; \node(C) at (0,3.1) { \textbf{\emph{C}$^\star$-alg}}; \node(GlSp) at (1.6,1.9) { \textbf{GlSp}}; \node(KrFrm) at (-1.6,1.9) {\textbf{KrFrm}}; \node(StKFrm) at (-3.22,2.44) {\textbf{StKFrm}}; \node(OGlSp) at (3.22,2.44) {\textbf{?}}; \node(usbal) at (0,4.8) {\textbf{usbal}}; \draw[-, >=latex] (KHaus) to (DeV) ; \draw[-, >=latex] (DeV) to(GlSp); \draw[-, >=latex] (KrFrm) to(KHaus); \draw[-, >=latex] (KrFrm) to (C); \draw[-, >=latex] (GlSp) to (C); \draw[-, >=latex] (KPSp) to (PrFrm) ; \draw[-, >=latex] (PrFrm) to(OGlSp); \draw[-, >=latex] (StKFrm) to(KPSp); \draw[-, >=latex] (StKFrm) to (usbal); \draw[-, >=latex] (OGlSp) to (usbal); \draw[->, >=latex] (C) to (usbal); \draw[->, >=latex] (GlSp) to (OGlSp); \draw[->, >=latex] (KrFrm) to (StKFrm); \draw[->, >=latex] (KHaus) to (KPSp); \draw[->, >=latex] (DeV) to (PrFrm); \end{tikzpicture} \end{center} The aim of this paper is to complete the extensions initiated in \cite{Guramtriang} and \cite{DeHaGelfand} to the category \textbf{GlSp}. We point out that this extension process follows the same spirit as passing from Boolean algebras to distributive lattices, and from Stone spaces to Priestley spaces in the zero-dimensional setting (from the Boolean to the distributive setting as we shall often say in this paper). The methodology goes as follows. First, we will establish on Priestley spaces the counterpart of proximity relations on lattices. The road was well paved by Castro and Celani in \cite{Castro}, where the dual of a quasi-modal lattice (a generalized proximity frame, but with a different class of morphisms) was already established as Priestley spaces endowed with an increasing closed binary relation. 
The obtained topological structures will be named ordered Gleason spaces and will be the objects of a category whose morphisms are specific binary relations and not usual maps (as is already the case in the Boolean setting \cite{Sourabh}). Then, since the duals of proximity frames in \cite{Guramtriang} were stably compact spaces, we will spend a few words on how to describe them as compact pospaces. Finally, following Bezhanishvili's steps in \cite{StonebyDV}, we will show how to obtain directly the compact pospace dual to a proximity frame via the latter's Priestley dual. \section{Preliminaries} \label{Section_Prelim} In this section, we recall previous dualities which are essential for this paper, mainly for the sake of establishing notations that will be used throughout the rest of the paper. \subsubsection*{Priestley duality} We begin with the celebrated Priestley duality \cite{Priestley1} and its restriction to frames, obtained in \cite{Pultr} through a suitable separation property. First of all, if $(X,\leq,\tau)$ is an ordered topological space, we denote by $\tau^\uparrow$ (resp. $\tau^\downarrow$) the topology of open upsets (resp. open downsets) of $\tau$. In particular, if $(X,\leq,\tau)$ is a Priestley space, it is well known that $\tau^\uparrow$ (resp. $\tau^\downarrow$) is generated by the clopen upsets (resp. clopen downsets) of $X$, which we denote by $\uOf(X)$ (resp. $\dOf(X)$). Moreover, $\uOf(X)$ (or simply $L$, should the context cause no confusion) is a distributive lattice when ordered by inclusion. Finally, if $f : X \longrightarrow Y$ is an increasing continuous function between Priestley spaces, then \[ \uOf(f) : \uOf(Y) \longrightarrow \uOf(X) : O \longmapsto f^{-1}(O) \] is a lattice morphism.
On the other hand, if $L$ is a bounded distributive lattice, we denote by $\Prim(L)$ (or more simply $X$) its set of prime filters, ordered by inclusion and endowed with the topology generated by \[ \lbrace \eta(a) \mid a \in L \rbrace \cup \lbrace \eta(a)^c \mid a \in L \rbrace, \] where \[ \eta(a) := \lbrace x \in \Prim(L) \mid x \ni a \rbrace. \] Then $\Prim(L)$ is a Priestley space and $\eta$ is a lattice isomorphism between $L$ and $\uOf(\Prim(L))$. Moreover, if $h : L \longrightarrow M$ is a lattice morphism then \[ \Prim(h) : \Prim(M) \longrightarrow \Prim(L) : x \longmapsto h^{-1}(x) \] is an increasing continuous function. The functors $\Prim$ and $\uOf$ establish a duality between the categories \textbf{DLat}, of bounded distributive lattices, and \textbf{Priest}, of Priestley spaces. To continue, let us recall that a \emph{frame} is a complete lattice $L$ which satisfies the \emph{join infinite distributive law}: for every subset $S \subseteq L$ and every $a\in L$, we have \[ a \wedge \bigvee S = \bigvee \lbrace a \wedge s \mid s \in S \rbrace. \] Furthermore, a lattice morphism $h : L \longrightarrow M$ between two frames is a \emph{frame morphism} if it preserves arbitrary joins. \begin{lemma}[\cite{Pultr}]\label{lem_to_f_space} Let $L$ be a frame and $(X,\leq,\tau)$ be its Priestley dual. \begin{enumerate} \item If $O \in \tau^\uparrow$, then its closure in $\tau$, denoted by $\cl(O)$, is an open upset. \item If $S$ is a subset of $\uOf(X)$, then $ \bigvee S = \cl\left( \bigcup \lbrace O \mid O \in S \rbrace \right)$. \item The map $\eta : a \longmapsto \eta(a)$ is a frame morphism. \end{enumerate} \end{lemma} Referring to this result, an \emph{f-space} is a Priestley space $(X,\leq,\tau)$ which satisfies the first item of Lemma \ref{lem_to_f_space} and an increasing continuous function $f : X \longrightarrow Y$ is an \emph{$f$-function} if $f^{-1}\left(\cl(O)\right) = \cl\left(f^{-1}(O)\right)$ for all $O \in \tau^\uparrow$.
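On a finite bounded distributive lattice, the prime filters and the map $\eta$ can be computed by brute force, which makes the duality easy to experiment with. The following Python sketch is a hypothetical toy example (not taken from the paper): it takes the four-element Boolean lattice of subsets of $\lbrace 1,2\rbrace$, enumerates its prime filters, and checks that $\eta$ sends meets and joins to intersections and unions.

```python
from itertools import combinations

# Toy example: the four-element Boolean lattice of subsets of {1, 2},
# ordered by inclusion, with meet = intersection and join = union.
elements = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
meet = lambda x, y: x & y
join = lambda x, y: x | y

def is_filter(F):
    """A proper, nonempty lattice filter: an up-set closed under meets."""
    if not F or set(F) == set(elements):
        return False
    up_closed = all(y in F for x in F for y in elements if x <= y)
    meet_closed = all(meet(x, y) in F for x in F for y in F)
    return up_closed and meet_closed

def is_prime(F):
    """Prime: x v y in F implies x in F or y in F."""
    return all(join(x, y) not in F or x in F or y in F
               for x in elements for y in elements)

candidates = (frozenset(c) for r in range(1, len(elements) + 1)
              for c in combinations(elements, r))
primes = [F for F in candidates if is_filter(F) and is_prime(F)]

# eta(a) = the set of prime filters containing a
eta = {a: frozenset(F for F in primes if a in F) for a in elements}
```

As expected for the $2^2$ Boolean algebra, exactly two prime filters (the two ultrafilters) appear, and $\eta$ is a lattice embedding on this finite example.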
Pultr and Sichler proved in \cite{Pultr} that Priestley duality restricts to a duality between the categories \textbf{Frm} of frames and \textbf{FSp} of $f$-spaces. \subsubsection*{Proximity frames} The second duality we recall was established by Bezhanishvili and Harding in \cite{Guramtriang}; it can be seen as a generalization to frames and stably compact spaces (see \cite[Definition VI-6.7.]{Compendium}) of de Vries duality. \begin{definition}\label{def_proximity_frame_and_morphism} A \emph{proximity frame} is a pair $(L,\prec)$ where $L$ is a frame and $\prec$ is a \emph{proximity relation}, i.e. a binary relation on $L$ such that\footnote{The seemingly peculiar way used to denote the properties of $\prec$ (and the absence of S7) stems from the works on subordination and (pre-)contact algebras, see for instance \cite{Sourabh}, \cite{DeHa} or \cite{Koppelberg}.} \begin{itemize} \item $\prec$ is a \emph{subordination relation} \begin{enumerate} \item[S1.] $0 \prec 0$ and $1 \prec 1$, \item[S2.] $a \prec b,c$ implies $a \prec b \wedge c$, \item[S3.] $a,b \prec c$ implies $a \vee b \prec c$, \item[S4.] $a \leq b \prec c \leq d$ implies $a \prec d$, \end{enumerate} \item which has the following additional properties \begin{enumerate} \item [S5.] $a = \bigvee \lbrace b \in L \mid b \prec a \rbrace$, \item[S6.] $ a \prec b$ implies $a \leq b$, \item[S8.] $a \prec b$ implies that $a \prec c \prec b$ for some $c$. \end{enumerate} \end{itemize} For the sake of convenience, we often identify the pair $(L,\prec)$ with its underlying frame $L$. If $S$ is a subset of $L$, we define $\Uprec S := \lbrace b \in L\mid \exists s \in S : s \prec b \rbrace $ ($\Dprec S$ is defined dually). As usual, for an element $a \in L$, we write $\Uprec a$ instead of $\Uprec \lbrace a \rbrace$. \end{definition} \begin{definition} A \emph{proximity morphism} is a map $h:L \longrightarrow M$ between two proximity frames such that: \begin{enumerate} \item[H0.]
$h$ is a \emph{strong meet-hemimorphism}: \begin{enumerate} \item $h(1) = 1$ and $h(0) = 0$, \item $h(a \wedge b) = h(a) \wedge h(b)$; \end{enumerate} \item[H1.] $a_1 \prec b_1$ and $a_2 \prec b_2$ implies $h(a_1 \vee a_2) \prec h(b_1) \vee h(b_2)$; \item[H2.] $h(a) = \bigvee \lbrace h(b) \mid b \prec a \rbrace$. \end{enumerate} If $h : L \longrightarrow M$ and $g: M \longrightarrow N$ are proximity morphisms, their \emph{composition} is defined by \[ g \star h : L \longrightarrow N : a \longmapsto \bigvee \lbrace g(h(b)) \mid b \prec a \rbrace. \] We denote by \textbf{PrFrm} the category of proximity frames endowed with proximity morphisms. \end{definition} \begin{definition}\label{def_ends} If $L$ is a proximity frame, a \emph{round filter} of $L$ is a lattice filter $\F$ such that $\F = \Uprec \F$. We denote by $\mathcal{RF}(L)$ the set of all round filters of $L$. An \emph{end} is a round filter $p$ such that for all round filters $\F_1,\F_2$, we have $\F_1 \cap \F_2 \subseteq p$ if and only if $\F_1 \subseteq p$ or $\F_2 \subseteq p$. We denote by $\End(L)$ (or only by $P$) the set of all ends of $L$. \end{definition} The ends of Definition \ref{def_ends} will now play a role similar to the one of prime filters in Priestley duality. Indeed, endowed with the topology generated by the sets of the form \begin{equation}\label{eq_def_of_mu} \mu(a) := \lbrace p \in \End(L) \mid p \ni a \rbrace, \end{equation} $\End(L)$ is a stably compact space. Moreover, if $h: L \longrightarrow M$ is a proximity morphism, then \[ \End(h) : \End(M) \longrightarrow \End(L) : p \longmapsto \Uprec h^{-1}(p) \] is a proper continuous function. On the topological side, if $(P,\tau)$ is a stably compact space, then $\tau:= \Omega(P)$ ordered by inclusion is a proximity frame when endowed with the relation $\prec$ defined by $O \prec V$ if and only if $O \subseteq K \subseteq V$ for some compact subset $K$.
Furthermore, if $f : P \longrightarrow Q$ is a proper continuous function between two stably compact spaces, then \[ \Omega(f) : \Omega(Q) \longrightarrow \Omega(P) : O \longmapsto f^{-1}(O) \] is a proximity morphism. Now, the functors $\End$ and $\Omega$ establish a duality between \textbf{PrFrm} and the category \textbf{StKSp} of stably compact spaces (see \cite[Theorem 4.18]{Guramtriang}). \section{Priestley duality for proximity frames} In addition to the duality between \textbf{PrFrm} and \textbf{StKSp}, we can provide a modal-like duality between \textbf{PrFrm} and a category of f-spaces endowed with a particular binary relation $R$. Following the taxonomy of \cite{StonebyDV}, we name the resulting $f$-space/relation pairs \emph{ordered Gleason spaces}. At the object level, we can rely on the works previously done in \cite{Castro} for quasi-modal lattices\footnote{The relationship between proximity/subordination relations and quasi-modal operators is discussed for instance in \cite{CelaniResume}.} and in \cite{Sourabh} for the Boolean setting. Hence, most of the proofs are left to the reader. \begin{definition}\label{def_ordered_Gleason_space} An \emph{ordered Gleason space} is a triple $(X,\leq,R)$ where $(X,\leq)$ is an $f$-space and $R$ is a binary relation on $X$ satisfying the following properties: \begin{enumerate} \item $R$ is closed in $X^2$; \item $x \leq y \mathrel{R} z \leq t$ implies $x \mathrel{R} t$; \item $R$ is a pre-order; \item For every $O \in \uOf(X)$, we have $O = \cl\left( R[-,O^c]^c \right)$. \end{enumerate} An equivalent definition is given by substituting Item 2 with \begin{enumerate} \item[2'.] $ x \leq y$ implies $x \mathrel{R} y$. \end{enumerate} \end{definition} \begin{remark}\label{rem_useful_starting_rem} Let us highlight some observations and introduce notations that we freely use in the rest of the paper.
\begin{itemize} \item Let $R$ be a binary relation on an arbitrary set $X$: \begin{enumerate} \item If $E$ is a subset of $X$, we note \[ R[-,E] := \lbrace x \mid \exists y \in E : x \mathrel{R} y \rbrace \text{ and } R[E,-] := \lbrace x \mid \exists y \in E : y \mathrel{R} x \rbrace. \] For an element $x \in X$, we note $R[-,x]$ instead of $R[-,\lbrace x \rbrace]$. Note that, if $(L,\prec)$ is a proximity frame, then we have $ \Uprec x = {\prec}[x,-]$. \item If $E$ and $F$ are subsets of $X$, then \[R[-,E] \subseteq F \text{ if and only if } R[F^c,-] \subseteq E^c. \] \item If $X$ is a topological space, $R$ is closed in $X^2$ and $F$ is a closed subset of $X$, then $R[-,F]$ and $R[F,-]$ are closed. \end{enumerate} \item Let $L$ be a distributive lattice and $S$ an arbitrary subset of $L$. We define \[ F_S := \lbrace x \in \Prim(L) \mid S \subseteq x \rbrace. \] Remark that $ F_S = \bigcap \lbrace \eta(a) \mid a \in S \rbrace$ so that $F_S$ is a closed (and hence a compact) subset of $\Prim(L)$. \end{itemize} \end{remark} The future duality between proximity frames and ordered Gleason spaces is now obtained as follows. Let $(L,\prec)$ be a proximity frame, its dual is given by $(X,R)$ where $X=\Prim(L)$ is the Priestley dual of $L$ and $R$ is the binary relation on $X$ defined by \begin{equation}\label{eq_def_of_R} x \mathrel{R} y \text{ if and only if } \Uprec x \subseteq y. \end{equation} Let us highlight the fact that equivalent definitions of the relation $R$ are given by \begin{equation}\label{rem_alternative_def_of_R} \Uprec x \subseteq \Uprec y \text{ or } \Dprec y^c \subseteq \Dprec x^c \text{ or } \Dprec y^c \subseteq x^c. \end{equation} \begin{lemma} Endowed with the relation $R$ defined in \eqref{eq_def_of_R}, $\Prim(L)$ is an ordered Gleason space. Furthermore, for every $a,b \in L$, we have \[ a \prec b \text{ if and only if } R[\eta(a),-] \subseteq \eta(b). 
\] \end{lemma} \begin{proof}[Sketch of the proof] To prove Items 1 and 2 of Definition \ref{def_ordered_Gleason_space}, one just has to use the subordination part of a proximity relation (see Definition \ref{def_proximity_frame_and_morphism}). Also, one can show that $R$ is reflexive if and only if $\prec$ satisfies S6 and transitive if and only if $\prec$ satisfies S8. Let us prove Item 4 (which is equivalent to S5). We have $a = \bigvee \lbrace b \mid b \prec a \rbrace$ if and only if $\eta(a) = \eta\left( \bigvee \lbrace b \mid b \prec a \rbrace \right)$. Then, by \cite[Theorem 1.5]{Pultr}, it follows that \begin{align*} \eta\left( \bigvee \lbrace b \mid b \prec a \rbrace \right) &= \cl \left( \bigcup \lbrace \eta(b) \mid b \prec a \rbrace \right ) \\ & = \cl \left( \bigcup \lbrace \eta(b) \mid R[\eta(b),-] \subseteq \eta(a) \rbrace \right) \\ &= \cl \left( \bigcup \lbrace \eta(b) \mid \eta(b) \subseteq R[-,\eta(a)^c]^c \rbrace \right). \end{align*} Finally, since $R[-,\eta(a)^c]^c$ is an open upset, it follows that \[ \eta\left( \bigvee \lbrace b \mid b \prec a \rbrace \right) = \cl \left( R[-,\eta(a)^c]^c \right), \] and the conclusion is clear. \end{proof} On the other hand, let $(X,\leq,R)$ be an ordered Gleason space; its dual is given by $(L,\prec)$ where $L:=\uOf(X)$ is the Priestley dual of $X$ and $\prec$ is the binary relation on $L$ defined by \begin{equation}\label{eq_def_of_prec} O \prec U \text{ if and only if } R[O,-] \subseteq U. \end{equation} \begin{lemma} Endowed with the relation $\prec$ defined in \eqref{eq_def_of_prec}, $\uOf(X)$ is a proximity frame. \end{lemma} To conclude the section, it remains to determine the counterpart of the proximity morphisms on Gleason spaces. Let $L$ and $M$ be proximity frames and $X$ and $Y$ their respective Priestley duals.
If $h : L \longrightarrow M$ is a meet-hemimorphism, then the relation $\rho \subseteq Y \times X$ defined by \begin{equation}\label{eq_def_of_rho} y \mathrel{\rho} x \text{ if and only if } h^{-1}(y) \subseteq x \end{equation} satisfies the following conditions: \begin{enumerate} \item $y_1 \leq y_2 \mathrel{\rho} x_1 \leq x_2$ implies $y_1 \mathrel{\rho} x_2$, \item $\rho$ is closed in $Y \times X$, \item $O \in \uOf(X)$ implies $\rho[-,O^c]^c \in \uOf(Y)$. \end{enumerate} Since, in our case, we deal with \emph{strong} meet-hemimorphisms, the relation $\rho$ also satisfies \begin{enumerate} \item[4.] for every $y \in Y$, there exists $x \in \rho[y,-]$. \end{enumerate} We call such a relation $\rho$ a \emph{strong meet-hemirelation}. By \cite[Lemma 2]{Sofronie}, we know that strong meet-hemimorphisms are in correspondence with strong meet-hemirelations. Hence, it remains to characterize the properties H1 and H2 of proximity morphisms. A key concept towards this characterization is defined below. \begin{definition} Let $(X,\leq,R)$ be an ordered Gleason space and $S$ a subset of $X$. An element $x \in S$ is said to be \emph{$R$-minimal in $S$} if for every $y \in S$, $y \mathrel{R} x$ implies $x \mathrel{R} y$. \end{definition} \begin{proposition}\label{prop_exist_R_min} Let $(X,\leq,R)$ be an ordered Gleason space and $F$ a closed subset of $X$. Then, for every element $x \in F$, there exists an element $y$ $R$-minimal in $F$ such that $y \mathrel{R} x$. \end{proposition} \begin{proof} We follow the lines of the proof for po-sets (see for instance \cite[Proposition VI.5-3.]{Compendium}). Let us define a chain of $(X,R)$ to be a subset $C$ of $X$ such that for every $x,y\in C$, we have $x \mathrel{R} y$ or $y \mathrel{R} x$. We denote by $\mathfrak{C}$ the set of chains $C$ satisfying $x \in C \subseteq F$, ordered by inclusion.
We have that $\mathfrak{C}$ is non-empty (by reflexivity of $R$, it contains the chain $\lbrace x \rbrace$) and a classical argument suffices to prove it is also inductive. Hence, $\mathfrak{C}$ admits a maximal element $M$. Since $\lbrace R[-,z] \cap F \mid z \in M \rbrace$ is a family of closed sets which satisfies the finite intersection property (because $M$ is a chain contained in $F$ and $R$ is a pre-order), we know by compactness that there exists an element $y \in F$ such that $y \mathrel{R} z$ for all $z \in M$. Now, suppose that $t$ is an element of $F$ such that $t \mathrel{R} y$. By transitivity, we have that $\lbrace t \rbrace \cup M$ is a chain of $\mathfrak{C}$. By maximality of $M$, we have $t \in M$ and, therefore, we have $y \mathrel{R} t$, so that $y$ is indeed $R$-minimal in $F$, as required. \end{proof} Let us highlight that the notion of $R$-minimal element is also present in the Boolean setting, while hidden. Indeed, in the Boolean case, the relation $R$ turns out to be an equivalence relation, so that every element is actually $R$-minimal. \begin{proposition}\label{Prop_towards_ofc} Let $h: L \longrightarrow M$ be a strong meet-hemimorphism between two proximity frames and $\rho \subseteq Y \times X$ its associated strong meet-hemirelation: \begin{enumerate} \item $h$ satisfies H1 if and only if for every $y_1, y_2 \in Y$, every $x_1$ R-minimal in $\rho[y_1,-]$ and every $x_2 \in X$, we have \[ x_1 \mathrel{\rho^{-1}} y_1 \mathrel{R} y_2 \mathrel{\rho} x_2 \text{ implies } x_1 \mathrel{R} x_2. \] \item $h$ satisfies H2 if and only if $ \rho[-,O^c] = \mathrm{int}\left(\rho[-,R[-,O^c]]\right)$ for every $O \in \uOf(X)$. \end{enumerate} \end{proposition} The proof of Item 2 is almost identical to the one in the Boolean case. Therefore, we redirect the reader to \cite[Lemma 6.11]{Sourabh} for more details. The proof of Item 1 requires additional results.
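For finite pre-orders, the existence statement of Proposition \ref{prop_exist_R_min} can be checked exhaustively, with no need for compactness or maximal chains. The following Python sketch uses a hypothetical toy pre-order, not taken from the paper:

```python
# Toy pre-order on {0, 1, 2, 3}: reflexive, with 0 R 1, 0 R 2, 1 R 3, 2 R 3
# and the two-element cycle 1 R 2, 2 R 1 (so 1 and 2 are R-equivalent).
points = [0, 1, 2, 3]
R = {(p, p) for p in points} | {(0, 1), (0, 2), (0, 3),
                                (1, 3), (2, 3), (1, 2), (2, 1)}

# close R under transitivity (Floyd-Warshall style) so it is a pre-order
for k in points:
    for i in points:
        for j in points:
            if (i, k) in R and (k, j) in R:
                R.add((i, j))

def r_minimal(F):
    """x is R-minimal in F if every y in F with y R x also satisfies x R y."""
    return [x for x in F if all((y, x) not in R or (x, y) in R for y in F)]

F = {1, 2, 3}
mins = r_minimal(F)
# every x in F lies R-above some R-minimal element, as in the proposition
```

Here 1 and 2 are both $R$-minimal in $F$ (each lies $R$-below the other), while 3 is not, and every element of $F$ sits $R$-above one of the minimal ones.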
In the meantime, we name \emph{ordered forth condition} (shortened as \emph{ofc}) and \emph{de Vries condition} (shortened as \emph{dvc}) the conditions of the first and the second item of Proposition \ref{Prop_towards_ofc}. Before we start, let us note that \begin{equation}\label{eq_rem} \rho[-,\eta(a)^c]^c = \eta(h(a)). \end{equation} Indeed, it is clear that $\eta(h(a)) \subseteq \rho[-,\eta(a)^c]^c.$ Now, suppose that $y \in \rho[-,\eta(a)^c]^c$. Then, for every $x \in \eta(a)^c$, we have that $h^{-1}(y) \not\subseteq x$. Hence, for all $x \in \eta(a)^c$, we have that $h(a_x) \in y$ and $a_x \nin x$ for some $a_x \in L$. In particular, $\lbrace \eta(a_x)^c \mid x \in \eta(a)^c\rbrace$ is an open cover of $\eta(a)^c$, which is compact. Then, we know that there exist $x_1,\ldots,x_n \in \eta(a)^c$ such that \[ \eta(a_{x_1} \wedge \cdots \wedge a_{x_n})^c \supseteq \eta(a)^c. \] Moreover, we have ($y$ is a filter) \[ y \ni h(a_{x_1}) \wedge \cdots \wedge h(a_{x_n}) = h(a_{x_1} \wedge \cdots \wedge a_{x_n}) \geq h(a) \] and the conclusion is clear. \begin{proposition}\label{prop_ok_funct} Let $L,M$ be two proximity frames, $h: L \longrightarrow M$ be a proximity morphism, $y$ a prime filter of $M$ and $x$ a prime filter which is $R$-minimal in $\rho[y,-]$. Then, we have \[ \Uprec\left( h^{-1}\left( \Uprec y \right) \right) = \Uprec x. \] \end{proposition} \begin{proof} On the one hand, $ h^{-1}\left( \Uprec y \right) \subseteq x$ follows from $x \in \rho[y,-]$. Consequently, we have $\Uprec\left( h^{-1}\left( \Uprec y \right) \right) \subseteq \Uprec x$. On the other hand, suppose that $\Uprec x \not\subseteq \Uprec\left( h^{-1}\left( \Uprec y \right) \right)$. Then, there exist $a_1 \in x$ and $b \in L$ such that $a_1 \prec b$ and $b \nin \Uprec\left( h^{-1}\left( \Uprec y \right) \right)$. By the properties of proximity relations, we know that $a_1 \prec c_1 \prec d_1 \prec e_1 \prec b$ for some $c_1,d_1,e_1 \in L$.
In particular, we have $h(d_1) \prec h(e_1)$ and $e_1 \nin h^{-1}\left( \Uprec y \right)$. Therefore, we also have that $h(d_1) \nin y$. In order to obtain an absurdity and conclude the proof, we are going to invalidate the $R$-minimality of $x$ in $\rho[y,-]$. We first prove that \begin{equation}\label{eq_mid_proof} h^{-1}(y) \cap \langle c_1 \cup \Dprec x^c \rangle_{id} = \emptyset, \end{equation} where $\langle c_1 \cup \Dprec x^c \rangle_{id}$ is the lattice ideal generated by $c_1 \cup \Dprec x^c$. Suppose this is not the case. Then, there exist $a_2,c_2 \in L$ and $d_2 \in x^c$ such that $h(a_2) \in y$, $c_2 \prec d_2$ and $a_2 \leq c_1 \vee c_2$. It follows from the properties of $h$ that \[y \ni h(a_2) \leq h(c_1 \vee c_2) \prec h(d_1) \vee h(d_2). \] Now, since $y$ is a prime filter and $h(d_1) \nin y$, we have that $h(d_2) \in y$. Hence, we have $d_2 \in h^{-1}(y) \subseteq x$, which is absurd. Consequently, \eqref{eq_mid_proof} is satisfied and we have $h^{-1}(y) \subseteq z$, $c_1 \nin z$ and $\Uprec z \subseteq x$ for some prime filter $z$. In other words, we have $z \mathrel{R} x$ and $y \mathrel{\rho} z$. Now, by $R$-minimality of $x$ in $\rho[y,-]$, it follows that $x \mathrel{R} z$. Hence, in particular, we should have \[ c_1 \in \Uprec a_1 \subseteq \Uprec x \subseteq z, \] which is absurd. \end{proof} We now have the required result to finish the proof of Proposition \ref{Prop_towards_ofc}. \begin{proof}[Proof of Proposition \ref{Prop_towards_ofc}] For the only if part, suppose that $h^{-1}(y_1) \subseteq x_1$, $h^{-1}(y_2) \subseteq x_2$ and $\Uprec y_1 \subseteq y_2$. In particular, by Proposition \ref{prop_ok_funct}, we have $\Uprec \left( h^{-1}\left( \Uprec y_1 \right) \right) = \Uprec x_1$. It comes that \[ \Uprec x_1 = \Uprec \left( h^{-1}\left( \Uprec y_1 \right) \right) \subseteq h^{-1}\left( \Uprec y_1 \right) \subseteq h^{-1}(y_2) \subseteq x_2,\] or, in other words, that $x_1 \mathrel{R} x_2$, as required. 
For the if part, let $a_1,a_2,b_1$ and $b_2$ be elements of $L$ such that $a_1 \prec b_1$ and $a_2 \prec b_2$. To prove that $h$ satisfies H1 is to prove that \[ R[\eta(h(a_1 \vee a_2)),-] \subseteq \eta(h(b_1)) \cup \eta(h(b_2)). \] We can use \eqref{eq_rem} to rewrite this inclusion as \[ \underbrace{R[\rho[-,\eta(a_1 \vee a_2)^c]^c,-]}_{:=A} \subseteq \underbrace{\rho[-,\eta(b_1)^c]^c \cup \rho[-,\eta(b_2)^c]^c}_{:=B}. \] Let $y_2 \in A$. Then, there exists $y_1$ such that $y_1 \mathrel{R} y_2$ and such that $ \rho[y_1,-] \subseteq \eta(a_1 \vee a_2)$. Moreover, by Proposition \ref{prop_exist_R_min}, we know that there exists a filter $x_1$ $R$-minimal in $\rho[y_1,-]$. Hence, we may suppose, without loss of generality, that $a_1 \in x_1$. Let $x_2$ be a prime filter such that $y_2 \mathrel{\rho} x_2$. By the ofc, we know that $x_1 \mathrel{R} x_2$ and it follows that \[ b_1 \in \Uprec a_1 \subseteq \Uprec x_1 \subseteq x_2 .\] Hence, we proved that for every $x_2$ such that $y_2 \mathrel{\rho} x_2$, we have $x_2 \in \eta(b_1)$, that is $y_2 \in \rho[-,\eta(b_1)^c]^c \subseteq B$, as required. \end{proof} Now that we have characterized the strong meet-hemirelations that stem from proximity morphisms, we have to determine how to compose them to actually obtain a category dual to \textbf{PrFrm}. As already noted in \cite{Sourabh}, the rule of composition of meet-hemirelations is not easily described, even in the Boolean setting, and we must rely on their associated meet-hemimorphisms. \begin{definition}\label{def_comp_of_relations} Let $\rho_1$ and $\rho_2$ be meet-hemirelations and $h_1$, $h_2$ their associated meet-hemimorphisms. We define the \emph{composition} $\rho_1 \star \rho_2$ as the meet-hemirelation associated to $h_2 \star h_1$. \end{definition} With all of the above observations, the next definition and theorem come as no surprise.
\begin{definition} We denote by \textbf{OGlSp} the category whose objects are ordered Gleason spaces and whose morphisms are strong meet-hemirelations which satisfy the ofc and the dvc, with the composition of Definition \ref{def_comp_of_relations}. For the record, let us note that the identity morphisms in \textbf{OGlSp} are given by the order relations of the ordered Gleason spaces. \end{definition} \begin{theorem}\label{thm_dual_Gle_Pr} The categories \textbf{OGlSp} and \textbf{PrFrm} are dual to each other. \end{theorem} Of course, as a direct corollary of Theorem \ref{thm_dual_Gle_Pr} and \cite{Guramtriang}, the categories \textbf{OGlSp} and \textbf{StKSp} are equivalent. The aim of the next section is to describe this equivalence directly. However, since this paper is "ordered-minded", we swap the category \textbf{StKSp} for its equivalent category \textbf{KPSp} of compact pospaces, also sometimes called Nachbin spaces. \section{Compact pospaces}\label{Section3} \begin{definition} A \emph{compact pospace} is a triple $(P,\pi,\leq)$ where $(P,\pi)$ is a compact space and $\leq$ is an order relation on $P$ which is closed in $P^2$. We denote by \textbf{KPSp} the category of compact pospaces and continuous monotone maps. \end{definition} The equivalence between \textbf{KPSp} and \textbf{StKSp} is almost folklore (see for instance \cite[Section VI-6]{Compendium}). We recall here the basic facts. If $(P,\tau)$ is a stably compact space, then $(P,\pi,\leq_\tau)$ is a compact pospace where $\pi$ is the patch topology associated to $\tau$ and $\leq_\tau$ is the canonical order on $(P,\tau)$, that is $p \leq_\tau q$ if and only if $p \in \cl_\tau(\lbrace q \rbrace)$. In addition, we have $\pi^\uparrow = \tau$ and $\pi^\downarrow$ is the co-compact topology associated to $\tau$, that is the topology whose closed sets are the compact saturated sets of $\tau$. On the other hand, if $(P,\pi,\leq)$ is a compact pospace, then $(P,\pi^\uparrow)$ is a stably compact space.
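In the finite case this passage can be tested concretely: a finite poset with its up-set (Alexandrov) topology is a degenerate instance of a stably compact space, and the canonical order $p \leq_\tau q \Leftrightarrow p \in \cl_\tau(\lbrace q \rbrace)$ should recover the original order. A minimal Python sketch on a hypothetical four-element poset:

```python
from itertools import combinations

# Toy poset on {0, 1, 2, 3}: 0 <= 1 <= 3 and 0 <= 2 <= 3 (plus reflexivity).
points = [0, 1, 2, 3]
leq = {(0, 0), (1, 1), (2, 2), (3, 3),
       (0, 1), (0, 2), (0, 3), (1, 3), (2, 3)}

# tau = the up-set (Alexandrov) topology: the opens are exactly the up-sets
opens = [frozenset(S) for r in range(len(points) + 1)
         for S in combinations(points, r)
         if all(y in S for x in S for y in points if (x, y) in leq)]

def closure(q):
    """cl({q}): the points p such that every open containing p contains q."""
    return {p for p in points if all(q in O for O in opens if p in O)}

# canonical order: p <= q iff p lies in the closure of {q}
canonical = {(p, q) for q in points for p in closure(q)}
```

Running this, `canonical` coincides with `leq`, in line with the description of $\leq_\tau$ above.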
With these considerations in mind, we can describe the ends space $\End(L)$ of a proximity frame $L$ as a compact pospace. \begin{proposition}\label{prop_carac_of_P_as_KPSP} Let $L$ be a proximity frame and $P:=\End(L)$ its ends space. \begin{enumerate} \item For $p,q \in P$, $p \leq q$ if and only if $p \subseteq q$, \item The topology $\pi^\uparrow$ is generated by the sets $\mu(a)$ for $a \in L$ (see \eqref{eq_def_of_mu}), \item The closed elements of $\pi^\downarrow$ are given by the sets of the form \[ K_{\F}:= \lbrace p \in P \mid p \subseteq \F \rbrace; \] for some round filter $\F$. \end{enumerate} \end{proposition} \begin{proof} Item 2 is immediate. For Item 1, we have \begin{align*} & p \leq q \\ \iff & p \in \cl_\tau(\lbrace q \rbrace) = \bigcap \lbrace \mu(a)^c \mid q \in \mu(a)^c \rbrace \\ \iff & \forall a \in L : a \nin q \Rightarrow a \nin p \\ \iff &q^c \subseteq p^c \iff p \subseteq q. \end{align*} To prove Item 3, we use several results established in \cite{Guramtriang}. \begin{enumerate} \item From \cite[Lemma 4.14]{Guramtriang}, there is a homeomorphism $\alpha$ from $\pt(\RI(L))$ to $\End(L)$\footnote{The points of a frame $M$ are frame morphisms $g$ from $M$ into \textbf{2}, the two elements frame. We denote by $\mbox{pt}(M)$ the set of all points of $M$. A \emph{round ideal} of a proximity frame is a lattice ideal $\mathfrak{I}$ such that $\Dprec \mathfrak{I} = \mathfrak{I}$. The set $\mathcal{RI}(L)$ of all round ideals of $L$ is a frame when ordered by inclusion. } given by \[ \alpha : \pt(\RI(L)) \longrightarrow \End(L) : g \longmapsto p_g:= \lbrace a \in L \mid g(\Dprec a) = 1 \rbrace. \] \item From \cite[Remark 4.21]{Guramtriang}, there is a bijection between $\RF(L)$ and the Scott-open filters of $\RI(L)$ given by \[ \R \longmapsto \lbrace \mathfrak{I} \in \RI(L) \mid \R \cap \mathfrak{I} \neq \emptyset \rbrace.
\] \item From \cite[Theorem II.1.20]{Compendium}, there is an order-reversing bijection between the Scott-open filters of $\RI(L)$ and the compact saturated sets of $\pt(\RI(L))$, which is given by \[ \mathbb{F} \longmapsto \lbrace g \in \pt(\RI(L)) \mid \forall \mathfrak{I} \in \mathbb{F} : g(\mathfrak{I}) = 1 \rbrace. \] \end{enumerate} Consequently, there is a bijection between the compact saturated sets of $\pt(\RI(L))$ and $\RF(L)$ which is given by \[ \R \longmapsto \lbrace g \in \pt(\RI(L)) \mid \forall \mathfrak{I} \in \RI(L) : \mathfrak{I}\cap\R \neq \emptyset \Rightarrow g(\mathfrak{I}) = 1 \rbrace. \] Then, since $\alpha$ is a homeomorphism, we know that there is a bijection between the compact saturated sets of $\End(L)$ and $\RF(L)$ given by \[ \R \longmapsto \lbrace p \in \End(L) \mid \underbrace{\forall \mathfrak{I} \in \RI(L):\mathfrak{I} \cap \R \neq \emptyset \Rightarrow \mathfrak{I} \cap p \neq \emptyset}_{=\star} \rbrace . \] Finally, to conclude the proof, let us show that the condition $\star$ is equivalent to $\R \subseteq p$. Clearly, if $\R \subseteq p$, then $\star$ is satisfied. Now, suppose that $\R \not\subseteq p$. Then, there exists an element $a \in \R \setminus p$. In particular, $\Dprec a$ is a round ideal such that $\Dprec a \cap p = \emptyset$ and, since $\R$ is round, such that $\Dprec a \cap \R \neq \emptyset$, so that the condition $\star$ is not satisfied. \end{proof} Now, we focus on how $\End(L)$ relates to $\Prim(L)$. A first immediate remark is that for every round filter $\F$ and every prime filter $x$, we have \begin{equation}\label{eq_prime_and_round_filters} \F \subseteq x \Leftrightarrow \F \subseteq \Uprec x. \end{equation} A second step is undertaken in the following lemma. \begin{lemma}\label{lem_precx=round} Let $L$ be a proximity frame. For every prime filter $x \in \Prim(L)$, $\Uprec x$ is an end of $L$.
\end{lemma} \begin{proof} It is clear that $\Uprec x$ is a filter but, by S8 of Definition \ref{def_proximity_frame_and_morphism}, it is also a round filter. Moreover, let $\F_1$ and $\F_2$ be round filters such that $\F_1 \cap \F_2 \subseteq \Uprec x$. In particular, this implies that $\F_1 \cap \F_2 \subseteq x$. Now, $x$ being a prime filter, we know that $\F_1 \subseteq x$ or $\F_2 \subseteq x$. It then follows from \eqref{eq_prime_and_round_filters} that $\F_1 \subseteq \Uprec x$ or $\F_2 \subseteq \Uprec x$, so that $\Uprec x$ is an end. \end{proof} Our goal now is to prove that every end is of the form $\Uprec x$ for some prime filter $x$. We start with the next proposition. \begin{proposition}\label{prop_round_filter_and_r_increasing_set} Let $L$ be a proximity frame. There is a bijection between the round filters of $L$ and the $R$-increasing closed subsets of $X=\Prim(L)$, given by \begin{equation}\label{def_of_Phi}\Phi: \F \longmapsto F_{\F} := \lbrace x \in X \mid x \supseteq \F \rbrace. \end{equation} \end{proposition} \begin{proof} First, it is clear that $F_{\F}$ is a closed set and that it is $R$-increasing, so that $\Phi$ is well defined. Moreover, $\Phi$ is one-to-one since every filter is the intersection of the prime filters containing it. Finally, we show that $\Phi$ is onto. Let $F$ be an $R$-increasing closed set. In particular, $F$ is an increasing\footnote{For the order of $X$.} closed subset (recall Definition \ref{def_ordered_Gleason_space}), and, therefore, we know that \[ F = \bigcap \lbrace \eta(a) \mid \eta(a) \supseteq F \rbrace. \] If we set $\F= \lbrace a \mid \eta(a) \supseteq F \rbrace$, then \begin{align*} x \in F_{\F} \Leftrightarrow \F \subseteq x \Leftrightarrow (\forall a \in L : \eta(a) \supseteq F \Rightarrow x \in \eta(a)) \Leftrightarrow x \in F. \end{align*} One can show by routine calculations that $\F$ is a filter, so it remains to prove that it is round. Let $a$ be an element of $\F$. As $F$ is an $R$-increasing set, it follows that $R[F,-] \subseteq \eta(a)$.
Recall that $R$ is a closed relation and that $\lbrace \eta(b) \mid \eta(b) \supseteq F \rbrace$ is a filtered family of closed sets such that $F = \bigcap \lbrace \eta(b) \mid \eta(b) \supseteq F \rbrace$. Hence, by the Esakia Lemma (see for instance \cite[p. 995]{Sambin1}), it follows that \[ \eta(a) \supseteq R[F,-] = R[\bigcap \eta(b),-] = \bigcap R[\eta(b),-]. \] It is now sufficient to use compactness to obtain \[ R[ \eta(b_1) \cap \cdots \cap \eta(b_n),-] \subseteq R[\eta(b_1),-] \cap \cdots \cap R[\eta(b_n),-] \subseteq \eta(a) \] for some $b_1,\ldots, b_n$. If we set $b:= b_1 \wedge \cdots \wedge b_n$, we have \[ F \subseteq \eta(b) \text{ and } R[\eta(b),-] \subseteq \eta(a), \] that is $b \in \F$ and $b \prec a$ as required. \end{proof} Let us note that the map $\Phi$ defined in \eqref{def_of_Phi} is an order-reversing isomorphism, in the sense that for two round filters $\F$ and $\F'$, we have $\F \subseteq \F'$ if and only if $\Phi({\F}) \supseteq \Phi({\F'})$. Therefore, the $R$-increasing closed sets which are associated to ends are exactly the join-prime $R$-increasing closed sets. We will use this observation and the next definition to prove the converse of Lemma \ref{lem_precx=round}. \begin{definition}\label{def_of_equiv} Let $(X,\leq,R)$ be an ordered Gleason space. We denote by $\equiv$ the equivalence relation associated to the pre-order $R$, i.e. \[ x \equiv y \text{ if and only if } x \mathrel{R} y \text{ and } y \mathrel{R} x. \] Since $R$ is closed, $\equiv$ is also closed. Moreover, $X/\equiv$ ordered by \[ x^\equiv \leq_R y^\equiv \text{ if and only if } x \mathrel{R} y \] is a compact pospace. We highlight the fact that, if $(X,\leq,R)$ is the dual of a proximity frame $(L,\prec)$, then the equivalence relation $\equiv$ can be expressed as follows: \[ x \equiv y \text{ if and only if } \Uprec x = \Uprec y, \] or, equivalently, \[ x \equiv y \text{ if and only if } \Dprec x^c = \Dprec y^c.
\] \end{definition} \begin{lemma}\label{lem_exist_x_mini_in_f_u} Let $(L,\prec)$ be a proximity frame. If $p \in \End(L)$, then there exists a unique $\equiv$-class $x^\equiv$ such that $x$ is $R$-minimal in $F_p$ and $F_p = R[x,-]$. \end{lemma} \begin{proof} We know that for every element $z \in F_p$, there exists an $R$-minimal element $x \in F_p$ such that $x \mathrel{R} z$. Hence, it remains to prove the uniqueness. Suppose that there exist two $R$-minimal elements $x$ and $y$ in $F_p$ such that $x \not\equiv y$. In other words, we have $x \not\mathrel{R} y$ and $y \not\mathrel{R} x$. Using a classical argument, one can show that there exist two $R$-decreasing open sets $\omega_1$ and $\omega_2$ such that $x \in \omega_1$, $y \in \omega_2$ and $\omega_1 \cap \omega_2 = \emptyset$. In other words, such that $F_p = (F_p \setminus \omega_1) \cup (F_p \setminus \omega_2)$. Since the sets $F_p \setminus \omega_i$ are $R$-increasing closed sets and $F_p$ is join-prime (recall the discussion after Proposition \ref{prop_round_filter_and_r_increasing_set}), it follows that $F_p \subseteq F_p \setminus \omega_1$ or $F_p \subseteq F_p \setminus \omega_2$, which is of course impossible since, for instance, $x \in F_p $ and $x \in \omega_1 $. \end{proof} \begin{theorem}\label{thm_sigma_onto} Let $(L,\prec)$ be a proximity frame. A subset $p \subseteq L$ is an end if and only if $ p = \Uprec x$ for some $x \in \Prim(L)$. \end{theorem} \begin{proof} The if part is Lemma \ref{lem_precx=round}. For the only if part, let $p$ be an end. By Lemma \ref{lem_exist_x_mini_in_f_u}, we have $F_p = R[x,-]$ for some prime filter $x$. In particular, it follows that $\Phi(p) = \Phi(\Uprec x)$ and therefore that $p = \Uprec x$, as required. \end{proof} It follows from Theorem \ref{thm_sigma_onto} that, at least at the level of underlying sets, $\End(L)$ is the quotient of $\Prim(L)$ by the relation $\equiv$.
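On finite toy data one can watch Definition \ref{def_of_equiv} at work: quotienting a preorder $R$ by the equivalence $x \equiv y$ iff $x \mathrel{R} y$ and $y \mathrel{R} x$ produces a genuine partial order on the classes. The four-point preorder in the sketch below is invented purely for illustration.

```python
# Toy preorder on four points: 0 and 1 are mutually R-related, and both
# lie R-below 2 and 3; the reflexive pairs make R a genuine preorder.
X = {0, 1, 2, 3}
R = {(x, x) for x in X} | {(0, 1), (1, 0), (0, 2), (1, 2), (0, 3), (1, 3)}

def cls(x):
    # the equivalence class x^= of Definition def_of_equiv
    return frozenset(y for y in X if (x, y) in R and (y, x) in R)

classes = {cls(x) for x in X}
leq = {(cls(x), cls(y)) for x in X for y in X if (x, y) in R}

# On the quotient, <=_R is antisymmetric, i.e. a partial order
assert all(a == b for (a, b) in leq if (b, a) in leq)
print(len(classes))  # 3 classes: {0, 1}, {2}, {3}
```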
We denote by $\sigma$ the map \[ \sigma : \Prim(L)/{\equiv} \longrightarrow \End(L) : x^\equiv \longmapsto \Uprec x. \] We now want to prove that $\End(L)$ is the quotient of $\Prim(L)$ as ordered topological spaces, that is, to prove that $\sigma$ is an order homeomorphism. \begin{theorem}\label{thm_plop} Let $(L,\prec)$ be a proximity frame. Then, in the category \textbf{KPSp}, we have \[ \End(L) \cong \Prim(L)/{\equiv}, \] via the map $\sigma$. \end{theorem} \begin{proof} First, we have that $\sigma$ is onto by Theorem \ref{thm_sigma_onto} and that it is one-to-one by the definition of $\equiv$. We also have that $\sigma$ is an order isomorphism, since we have \[ x^\equiv \leq_R y^\equiv \Leftrightarrow x \mathrel{R} y \Leftrightarrow \Uprec x \subseteq \Uprec y \Leftrightarrow \sigma(x^\equiv) \leq \sigma(y^\equiv). \] Therefore, since $\End(L)$ and $\Prim(L)/{\equiv}$ are compact Hausdorff spaces and a continuous bijection between compact Hausdorff spaces is a homeomorphism, it remains to prove that $\sigma$ is continuous. By Proposition \ref{prop_carac_of_P_as_KPSP}, we have to prove that $\sigma^{-1}(\mu(a))$ and $\sigma^{-1}(K_{\F})$ are respectively open and closed for every $a \in L$ and every round filter $\F \in \RF(L)$. Let $\Pi: x \longmapsto x^\equiv$ be the canonical quotient map. We have: \[ \Pi^{-1}(\sigma^{-1}(\mu(a))) = \lbrace x \in \Prim(L) \mid a \in \Uprec x \rbrace = \bigcup \lbrace \eta(b) \mid b \prec a \rbrace \] which is open, and \[ \Pi^{-1}(\sigma^{-1}(K_{\F})) = \lbrace x \in \Prim(L) \mid \F \subseteq x \rbrace = F_{\F}, \] which is closed. \end{proof} With Theorem \ref{thm_plop}, we can describe a functor from the category \textbf{OGlSp} to the category \textbf{KPSp} which sends an ordered Gleason space $(X,\leq,R)$ to the compact pospace $(X/{\equiv},\leq_R)$. Proposition \ref{prop_ok_funct} gives a hint on how to deal with the morphisms.
Indeed, if $\rho \subseteq X \times Y$ is a strong meet-hemirelation which satisfies the ofc and the dvc between ordered Gleason spaces, then we know that it can be associated with a proximity morphism $h : \uOf(Y) \longrightarrow \uOf(X): O \longmapsto \rho[-,O^c]^c$. This morphism is in turn associated with the continuous function \[ f : \End(\uOf(X)) \longrightarrow \End(\uOf(Y)) : p \longmapsto \Uprec h^{-1}(p). \] Now, $p$ is equal to $\Uprec x$ for some $x \in X$ and if $y$ is an $R$-minimal element in $\rho[x,-]$, that is such that $h^{-1}(x) \subseteq y$, we have \[ f(p) = f(\Uprec x) = \Uprec h^{-1}(\Uprec x) = \Uprec y. \] Hence, we have to send a meet-hemirelation $\rho$ to the function \[f_\rho :X/{\equiv} \longrightarrow Y/{\equiv}: x^\equiv \longmapsto y^\equiv \text{ for $y$ $R$-minimal in $\rho[x,-]$}.\] By the dualities between \textbf{KPSp} and \textbf{PrFrm} and between \textbf{PrFrm} and \textbf{OGlSp}, $f$ is an increasing continuous function, but we also have a direct proof. \begin{proposition}\label{prop_continuous_function} Let $\rho \subseteq X \times Y$ be a strong meet-hemirelation that satisfies the ofc and the dvc between two ordered Gleason spaces. The map defined by \[ f : X/{\equiv} \longrightarrow Y/{\equiv} : x^\equiv \longmapsto y^\equiv \] for $y$ $R$-minimal in $\rho[x,-]$ is an increasing continuous function. \end{proposition} \begin{proof} First, the ofc implies that $f$ is well defined and increasing. Now, since $Y/{\equiv}$ is a compact pospace, to prove that $f$ is continuous, it is enough to prove that $f^{-1}(\omega)$ and $f^{-1}(F)$ are respectively open and closed subsets of $X/{\equiv}$ for $\omega$ an open downset and $F$ a closed downset of $Y/{\equiv}$.
For $\omega$, we have \begin{align} &x^\equiv \in f^{-1}(\omega) \nonumber \\ \iff & \exists y \in \Pi^{-1}(\omega) : y \text{ is $R$-minimal in } \rho[x,-] \label{plop} \\ \iff & x \in \rho[-,\Pi^{-1}(\omega)] \label{plopdeux} \end{align} While the implication $\eqref{plop} \Rightarrow \eqref{plopdeux}$ does not need to be proved, a word must be spent on the converse. Suppose that $x \in \rho[-,\Pi^{-1}(\omega)]$. Then we have $x \mathrel{\rho} z$ for some $z \in \Pi^{-1}(\omega)$. By Proposition \ref{prop_exist_R_min}, there exists $y$ $R$-minimal in $\rho[x,-]$ such that $y \mathrel{R} z$. Now, $\omega$ is a downset of $Y/{\equiv}$, so that $\Pi^{-1}(\omega)$ is an $R$-decreasing subset of $Y$. It follows that $y \in \Pi^{-1}(\omega)$, as required. Restarting from \eqref{plopdeux}, since $\Pi^{-1}(\omega)$ is $R$-decreasing and open, it is in particular an open downset of $Y$. Therefore, \[ \Pi^{-1}(\omega) = \bigcup \lbrace O \in \dOf(Y) \mid O \subseteq \Pi^{-1}(\omega) \rbrace\] and, consequently, \[ \rho[-,\Pi^{-1}(\omega)] = \bigcup \lbrace \rho[-,O] \mid O \subseteq \Pi^{-1}(\omega) \rbrace. \] By the dvc (see Proposition \ref{Prop_towards_ofc}), $\rho[-,O]$ is an open subset of $X$ for every $O \in \dOf(Y)$ and, hence, so is $\rho[-,\Pi^{-1}(\omega)]$. We have thus proved that $\Pi^{-1}(f^{-1}(\omega)) = \rho[-,\Pi^{-1}(\omega)]$ is open in $X$, as required. Finally, as for $\omega$, we have that \[ \Pi^{-1}(f^{-1}(F)) = \rho[-,\Pi^{-1}(F)]. \] Now, since $\rho$ is closed in $X \times Y$, $\rho[-,\Pi^{-1}(F)]$ is a closed subset of $X$ and the proof is concluded. \end{proof} Hence, we have a functor $\xi$ between the categories \textbf{OGlSp} and \textbf{KPSp} which maps an ordered Gleason space $(X,\leq,R)$ to the compact pospace $(X/{\equiv}, \leq_R)$ and an ordered Gleason relation $\rho \subseteq X \times Y $ to the increasing continuous function $f : X/{\equiv} \longrightarrow Y/{\equiv}$ defined in Proposition \ref{prop_continuous_function}.
This functor yields an equivalence between \textbf{OGlSp} and \textbf{KPSp} which coincides, up to natural isomorphism, with the composition of the duality between \textbf{OGlSp} and \textbf{PrFrm} and the duality between \textbf{PrFrm} and \textbf{KPSp}. \begin{remark} In the Boolean setting, an important feature of Gleason spaces is that their underlying Stone spaces are the projective objects in the category \textbf{KHaus}. This is not the case anymore in our distributive setting. Indeed, the $f$-spaces are not the projective objects of the category \textbf{KPSp}, since this would imply that they are projective in the category \textbf{Priest}. However, the injective objects of \textbf{DLat} have been shown in \cite{Balbes} to be exactly the complete Boolean algebras, and not the frames. In fact, the projective objects of \textbf{KPSp} are exactly the projective objects of \textbf{KHaus}, that is, the extremally disconnected compact spaces, as we show in the next proposition. \end{remark} \begin{proposition} The projective objects in the category \textbf{KPSp} are exactly the extremally disconnected spaces (ordered by equality). \end{proposition} \begin{proof} First, let us consider an extremally disconnected space $(X,=)$, compact pospaces $(P,\leq)$ and $(Q,\leq)$, a monotone continuous function $f: X \longrightarrow P$ and a surjective monotone continuous function $g: Q \longrightarrow P$. Since every compact pospace is in particular compact Hausdorff, and since the extremally disconnected spaces are projective in \textbf{KHaus}, there exists a continuous function $h : X \longrightarrow Q$ such that $gh = f$ and, since $X$ is ordered by equality, $h$ is clearly monotone. Hence, $(X,=)$ is indeed projective in \textbf{KPSp}. On the other hand, suppose that $(X,\tau,\leq)$ is projective in \textbf{KPSp}. Then, following the proof of Gleason in \cite{Gleason}, one can prove that $(X,\tau)$ is extremally disconnected.
\end{proof} The main ``ethical'' reason behind the failure of ordered Gleason spaces as projective objects in \textbf{KPSp} is that the relation $R$ is submerged by its associated equivalence relation $\equiv$. A solution could be to change the properties of the morphisms in the projective problem so that they directly take into account $R$ instead of $\equiv$. \section*{Conclusion} We have completed the external network of equivalences and dualities started in \cite{Guramtriang} and \cite{DeHaGelfand}, generalising to the ``distributive setting'' the duality between Gleason spaces and compact Hausdorff spaces of \cite{Sourabh}. Hence, we obtain the following commutative diagram, where the arrowed lines represent adjunctions and the non-arrowed ones equivalences or dualities. \begin{center} \begin{tikzpicture} \node(KHaus) at (-1,0) {\textbf{DeV}}; \node(KPSp) at (-2,-1.4) {\textbf{PrFrm}}; \node(DeV) at (1,0) {\textbf{KHaus}}; \node(PrFrm) at (2,-1.4) {\textbf{KPSp}}; \node(C) at (0,3.1) { \textbf{\emph{C}$^\star$-alg}}; \node(GlSp) at (1.6,1.9) { \textbf{GlSp}}; \node(KrFrm) at (-1.6,1.9) {\textbf{KrFrm}}; \node(StKFrm) at (-3.22,2.44) {\textbf{StKFrm}}; \node(OGlSp) at (3.22,2.44) {\textbf{OGlSp}}; \node(usbal) at (0,4.8) {\textbf{usbal}}; \draw[-, >=latex] (KHaus) to (DeV) ; \draw[-, >=latex] (DeV) to(GlSp); \draw[-, >=latex] (KrFrm) to(KHaus); \draw[-, >=latex] (KrFrm) to (C); \draw[-, >=latex] (GlSp) to (C); \draw[-, >=latex] (KPSp) to (PrFrm) ; \draw[-, >=latex] (PrFrm) to(OGlSp); \draw[-, >=latex] (StKFrm) to(KPSp); \draw[-, >=latex] (StKFrm) to (usbal); \draw[-, >=latex] (OGlSp) to (usbal); \draw[->, >=latex] (C) to (usbal); \draw[->, >=latex] (GlSp) to (OGlSp); \draw[->, >=latex] (KrFrm) to (StKFrm); \draw[->, >=latex] (KHaus) to (KPSp); \draw[->, >=latex] (DeV) to (PrFrm); \end{tikzpicture} \end{center} However, a proper way to describe the functor between \textbf{KPSp} and \textbf{OGlSp} is still missing.
This situation could be resolved by working out the universal problem answered by ordered Gleason spaces. This problem cannot be the usual projective one, as we saw at the end of Section \ref{Section3}. We will address it in a forthcoming article.
\section{Introduction} A time scale (or a measure chain) is an arbitrary nonempty closed subset of the real numbers \cite{Hi}. Typical examples are ${\mathbb R}$, ${\mathbb Z}$, any unions of isolated points and closed intervals, and, finally, discrete sets containing all their accumulation points (like the Cantor set). The time scales were introduced in order to unify differential and difference calculus \cite{Hi,Hi2}. Partial differentiation, tangent lines and tangent planes on time scales have been introduced recently \cite{BG}. In this paper we suggest how to extend the differentiation also to Lie groups. The case of the $SU(2)$ group is discussed in detail. Difference geometry \cite{Sau} is a discrete analogue of differential geometry. In recent years one can observe a rapid development of integrable difference geometry (see, for instance, \cite{BP-rev,BS,Ci-fam,CDS,Do,DS-nets,WKS}), closely related to the classical differential geometry \cite{DSM,Eis1}. It is interesting that in the discrete case one recovers explicit constructions and transformations known in the continuous case (e.g., Darboux, B\"acklund, Ribaucour, Laplace and Jonas transformations, soliton and finite-gap solutions etc.). A natural idea is to unify the difference and differential geometries and to formulate the integrable geometry on time scales. In this paper we propose such a formulation for pseudospherical immersions (surfaces of constant negative Gaussian curvature). Discrete pseudospherical surfaces were introduced a long time ago \cite{Sau2,Wun}, and have been studied intensively in recent years \cite{BP-pseudo}. The idea to extend the notion of pseudospherical surfaces to arbitrary time scales first appeared in \cite{Sw-mgr}. However, throughout that work it was assumed that all points are isolated (the discrete case). The discrete Gaussian curvature and the B\"acklund transformation were not considered at all.
In the present paper we formulate a natural geometric definition of pseudospherical surfaces (more precisely: asymptotic Chebyshev nets) on time scales and present the associated spectral problem (the Lax pair) and the Darboux-B\"acklund transformation. Thus the discrete, continuous and other cases are first described in a unified framework. \section{Differentiation on time scales} This section collects basic notions and results concerning the differential calculus on time scales, compare \cite{BG}. To avoid some unimportant complications we confine ourselves to time scales which are bounded neither from above nor from below. \begin{Def}[\cite{Hi}] Let a time scale ${\mathbb T}$ be given. The maps $\sigma : {\mathbb T} \rightarrow {\mathbb T}$ and $\rho : {\mathbb T} \rightarrow {\mathbb T}$, defined by \be \sigma (u) := \inf \{ v \in {\mathbb T} : v > u \} \ , \qquad \rho (u) := \sup \{ v \in {\mathbb T} : v < u \} \ , \end{equation} \par \noindent are called the jump operator and the backward jump operator, respectively. \end{Def} \begin{Def}[\cite{Hi}] \label{class} A point $u \in {\mathbb T}$ is said to be right-scattered (if $\sigma (u) > u$) or right-dense (if $\sigma (u) = u$), left-scattered (if $\rho (u) < u$) or left-dense (if $\rho (u) = u$), and isolated if \ $\rho (u) < u < \sigma (u)$. \end{Def} \begin{Def}[\cite{BG}] The delta derivative of a continuous function $f$ is defined as \be \frac{\partial f (t) }{\Delta t} = \lim_{\stackrel{s \rightarrow t}{s \neq \sigma (t)}} \frac{ f ( \sigma (t) ) - f ( s )}{\sigma (t) - s} \ , \end{equation} \par \noindent and the nabla derivative is defined by \be \frac{\partial f (t) }{\nabla t} = \lim_{\stackrel{s \rightarrow t}{s \neq \rho (t)}} \frac{ f ( \rho (t) ) - f ( s )}{\rho (t) - s} \ . \end{equation} \par \noindent \end{Def} In this paper we focus on functions defined on two-dimensional time scales, i.e., on ${\mathbb T}_1 \times {\mathbb T}_2 $, where ${\mathbb T}_1, {\mathbb T}_2$ are given time scales.
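On a purely discrete time scale the delta derivative reduces to a forward difference quotient, which the following sketch makes concrete (the sample scale, a finite window of $0.5\,{\mathbb Z}$, and the function $f(t)=t^2$ are our own choices for illustration):

```python
# Sketch of the delta derivative on a discrete time scale: every point is
# right-scattered, so Delta f(t) = (f(sigma(t)) - f(t)) / (sigma(t) - t).

def sigma(t, scale):
    # forward jump operator: least point of the scale strictly above t
    return min(s for s in scale if s > t)

def delta(f, t, scale):
    s = sigma(t, scale)
    return (f(s) - f(t)) / (s - t)

scale = [0.5 * k for k in range(-4, 5)]   # a finite window of 0.5 * Z
df = delta(lambda t: t * t, 1.0, scale)
print(df)  # for f(t) = t^2 on h*Z: Delta f(t) = 2t + h, here 2.5
```

At a right-dense point the same formula degenerates and one must take the limit of the definition instead; the sketch only covers the right-scattered case.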
The extension to $n$-dimensional time scales is usually straightforward. We denote: \be \begin{array}{l} t \equiv (t_1, t_2) \in {\mathbb T}_1 \times {\mathbb T}_2 \ , \\[2ex] \sigma_1 (t) = ( \sigma (t_1) , t_2) \ , \quad \sigma_2 (t) = (t_1, \sigma (t_2)) \ , \\[2ex] \rho_1 (t) = ( \rho (t_1) , t_2) \ , \quad \rho_2 (t) = (t_1, \rho (t_2)) \ . \end{array} \end{equation} \par \noindent Obviously, the notions from Definition~\ref{class} can be defined independently for each variable. For example, a point can be right-scattered in the first variable, and right-dense in the second variable, shortly: 1-right-scattered and 2-right-dense. We stress that throughout this paper $\sigma_1$ and $\sigma_2$ usually denote jump operators, unless stated otherwise (only in a few places in the text we mention Pauli sigma matrices denoted by ${\pmb \sigma}_1$, ${\pmb \sigma}_2$, ${\pmb \sigma}_3$). In the discrete case (e.g., ${\mathbb T}_1 = {\mathbb T}_2 = {\mathbb Z}$) we have $\sigma_j (u) = T_j u$ and $\rho_j (u) = T_j^{-1} u$, where $T_1$, $T_2$ are usual shift operators. Therefore delta and nabla differentiation can be associated with forward and backward data, respectively \cite{Do}. \begin{Def}[\cite{BG}] The partial delta derivative of a continuous function $f$ is defined as \be \frac{\partial f (t) }{\Delta t_j} = \lim_{\stackrel{s_j \rightarrow t_j}{s_j \neq \sigma (t_j)}} \frac{ f ( \sigma_j (t) ) - f ( s )}{\sigma (t_j) - s_j} \ . \end{equation} \par \noindent The definition of the partial nabla derivative is analogous. \end{Def} \begin{prop}[\cite{BG}] If the mixed partial delta derivatives exist in a neighbourhood of $t_0 \in {\mathbb T}_1 \times {\mathbb T}_2$ and are continuous at $t = t_0$, then \[ \frac{\partial^2 f (t_0) }{\Delta t_1 \Delta t_2 } = \frac{\partial^2 f (t_0) }{\Delta t_2 \Delta t_1 } \ .
\] \end{prop} In the continuous case (e.g., ${\mathbb T}_1 = {\mathbb T}_2 = {\mathbb R}$) the delta derivative coincides with the right-hand derivative, while the nabla derivative coincides with the left-hand derivative. Note that all results and definitions in terms of delta derivatives have their nabla-derivative analogues. In the continuous case differentiability implies the existence of the tangent plane. The delta differentiability does not have this important property. We need a stronger notion: the complete delta differentiability. \begin{Def}[\cite{BG}] \label{complete} We say that a function $f : {\mathbb T} \rightarrow {\mathbb R}$ is completely delta differentiable at a point $t_0 \in {\mathbb T}$, if there exists a number $A$ such that \[ \begin{array}{l} f (t) - f(t_0) = A (t - t_0) + (t - t_0) \ \alpha (t_0, t) \ , \\[2ex] f(t) - f(\sigma (t_0)) = A ( t - \sigma (t_0) ) + (t - \sigma (t_0)) \ \beta (t_0, t) \ , \end{array} \] where \ $\alpha (t_0, t_0) = 0$, $\beta (t_0, t_0) = 0$, $\displaystyle \lim_{t\rightarrow t_0} \alpha (t_0, t) = 0$, and \ $\displaystyle \lim_{t\rightarrow t_0} \beta (t_0, t) = 0$. \end{Def} \begin{prop}[\cite{BG}] \label{tanline} If the function $f$ is completely delta differentiable at $t_0$, then the graph of this function has the uniquely determined delta tangent line at the point $P_0 = (t_0, f(t_0))$ specified by the equation \[ y - f(t_0) = \frac{\partial f (t_0) }{\Delta t} (x - t_0) \] \end{prop} If $P_0$ is an isolated point of the curve $\Gamma$ (hence $P_0 \neq P_0^\sigma$), then the delta tangent line to $\Gamma$ at $P_0$ coincides with the unique line through the points $P_0$ and $P_0^\sigma$. The definition of the complete delta differentiability in the two-dimensional case is similar to Definition~\ref{complete} (for details, see \cite{BG}, Definition 2.1). Instead of this definition we present here an important sufficient condition for complete delta differentiability.
\begin{prop}[\cite{BG}] Let $f : {\mathbb T}_1 \times {\mathbb T}_2 \rightarrow {\mathbb R}$ be continuous and have first-order partial delta derivatives in a neighbourhood of $t_0$. If these derivatives are continuous at $t_0$, then $f$ is completely delta differentiable at $t_0$. \end{prop} If $P_0 \neq P_0^{\sigma_1}$ and $P_0^{\sigma_2} \neq P_0$ (hence also $P_0^{\sigma_1} \neq P_0^{\sigma_2}$), then the delta tangent plane to the surface $S$ at $P_0$ (if it exists) coincides with the unique plane through $P_0, P_0^{\sigma_1}$ and $P_0^{\sigma_2}$. \begin{prop}[\cite{BG}] \label{tanplane} If the function $f : {\mathbb T}_1 \times {\mathbb T}_2 \rightarrow {\mathbb R}$ is completely delta differentiable at \ $t_0 = (t_{01}, t_{02})$, then the surface represented by this function has the uniquely determined delta tangent plane at the point $P_0 = (t_{01}, t_{02}, f(t_0))$ specified by the equation \be z = f(t_0) + \frac{\partial f (t_0)}{\Delta t_1} (x - t_{01}) + \frac{\partial f (t_0)}{\Delta t_2} (y - t_{02}) \end{equation} \par \noindent where $(x,y,z)$ is the current point of the plane. \end{prop} In the following sections of this paper we define pseudospherical surfaces on time scales in terms of delta derivatives. In order to simplify the notation the delta derivatives will be denoted by \be \label{D} D_j f \equiv \frac{\partial f (t) }{\Delta t_j} \ . \end{equation} \par \noindent Propositions~\ref{tanline} and \ref{tanplane} show that in geometrical contexts the complete delta differentiability, which guarantees the existence of tangent lines and tangent planes, is more useful than the delta differentiability. \section{Differentiation of $SU(2)$-valued functions on time scales} Analytic approaches to pseudospherical surfaces usually involve the Lie group $SU(2)$, the Lie algebra $su(2)$ and quaternions \cite{BP-rev,MeS,PM,Sym}. Therefore it is important to extend the notion of the delta derivative to Lie groups.
Given a function $f : {\mathbb T} \rightarrow M$, where $M$ is a submanifold, we can define the delta derivative of $f$ in a quite natural way. If $t$ is right-dense, then we compute the tangent vector at the point $t$ just repeating the standard procedure, well known in the case ${\mathbb T} = {\mathbb R}$. If $t$ is right-scattered, then we join $f(t)$ and $f(\sigma(t))$ by the shortest geodesic. The delta derivative is defined as the vector tangent to this geodesic. If $M = G$ is a Lie group, then we may map the tangent vector into the corresponding Lie algebra $g$. The length of this vector is $\delta/\varepsilon$, where $\varepsilon = \sigma(t) - t$ and $\delta$ is the length of the geodesic between $f(t)$ and $f(\sigma(t))$. If $M$ is immersed in an ambient Euclidean space, then one can define the delta derivative in another way, considering geodesics (straight lines) in the ambient space instead of geodesics on $M$. Both definitions yield the same results for right-dense points, but for right-scattered points we get two different definitions of the delta derivative (even after projection onto the corresponding tangent space). In the general case these ideas will be developed elsewhere. Here we confine ourselves to the Lie group $SU(2)$. The Lie group $SU(2)$ is defined as $\{ \Phi : \Phi^{-1} = \Phi^\dagger, \ \det \Phi = 1 \}$. Any element $\Phi \in SU(2)$ can be parameterized as \be \label{Su2} \Phi = \left( \ba{rr} a & b \\ - \bar b & \bar a \ea \right) \ , \quad |a|^2 + |b|^2 = 1 \ . \end{equation} \par \noindent Therefore \be \Phi = {\rm Re} a - {\mathbf e}_1 {\rm Re} b - {\mathbf e}_2 {\rm Im\, } b - {\mathbf e}_3 {\rm Im\, } a \ , \end{equation} \par \noindent where ${\mathbf e}_j = - i {\pmb \sigma}_j$ ($j=1,2,3$) and ${\pmb \sigma}_j$ are standard Pauli matrices.
The following properties are satisfied: \be \label{ijk} {\mathbf e}_1^2 = {\mathbf e}_2^2 = {\mathbf e}_3^2 = - 1 \ , \qquad {\mathbf e}_j {\mathbf e}_k = - {\mathbf e}_k {\mathbf e}_j \quad (j\neq k) \ , \end{equation} \par \noindent \be \label{123} {\mathbf e}_1 {\mathbf e}_2 = {\mathbf e}_3 \ , \quad {\mathbf e}_2 {\mathbf e}_3 = {\mathbf e}_1 \ , \quad {\mathbf e}_3 {\mathbf e}_1 = {\mathbf e}_2 \ , \end{equation} \par \noindent \be \label{edag} {\mathbf e}_j^\dagger = - {\mathbf e}_j \ \ (j = 1, 2, 3) \ . \end{equation} \par \noindent Therefore the space spanned by $1, {\mathbf e}_1 , {\mathbf e}_2 , {\mathbf e}_3$ can be identified with the quaternions ${\mathbb H}$. The standard Euclidean structure is defined by the following scalar product \be \scal{A}{B} = \frac{1}{2} {\rm Tr} (A B^\dagger) \ , \qquad A, B \in {\mathbb H} \ . \end{equation} \par \noindent Then the basis $1, {\mathbf e}_1 , {\mathbf e}_2 , {\mathbf e}_3$ is orthonormal. The space of imaginary (or pure) quaternions, ${\rm Im\, } {\mathbb H}$, is spanned by ${\mathbf e}_1, {\mathbf e}_2, {\mathbf e}_3$. The condition $|a|^2 + |b|^2 = 1$ means exactly that $\Phi$ given by \rf{Su2} is a unit vector. Hence we have the well-known conclusion that the Lie group $SU(2)$ can be identified with the sphere $S^3 \subset \mathbb H$. The Lie algebra $su(2)$ coincides with pure quaternions ${\rm Im\, } {\mathbb H}$. Following the general outline given above we are going to define two delta derivatives, denoted by ${\cal D}_j$ and $D_j$, respectively. In the continuous case ($j$-right-dense points) ${\cal D}_j = D_j$ and \be U_j := (D_j \Phi ) \Phi^{-1} \end{equation} \par \noindent takes values in the Lie algebra $su(2)$. In the discrete case ($j$-right-scattered points) the situation is more complicated. Geometrically, the derivative ${\cal D}_j \Phi$ in the discrete case is tangent to the sphere $S^3$ at $\Phi$ and $|{\cal D}_j \Phi|$ is the length of the corresponding arc.
Therefore, after elementary geometric considerations, \be {\cal D}_j \Phi = \frac{ (T_j (\Phi) - \Phi \cos\delta ) \delta }{\varepsilon \sin\delta} \ , \qquad \cos\delta := \scal{T_j \Phi}{\Phi} \ . \end{equation} \par \noindent Note that \be T_j \Phi = \exp ( {\mathbf u}_j \delta ) \Phi \ , \qquad {\mathbf u}_j := \frac{\varepsilon}{\delta} ( {\cal D}_j \Phi ) \Phi^{-1} \ , \end{equation} \par \noindent and ${\mathbf u}_j$ is a unit vector from $su(2)$. The derivative $D_j \Phi$ can be identified with the secant joining $\Phi$ and $T_j \Phi$ (in the space ${\mathbb H}$): \be \label{DFi} D_j \Phi = \frac{T_j \Phi - \Phi}{\varepsilon} \ . \end{equation} \par \noindent Now $(D_j \Phi) \Phi^{-1}$ is, in general, outside ${\rm Im\, } {\mathbb H}$. Therefore, it is convenient to define a projection $\Pi : {\mathbb H} \rightarrow { \rm Im\, } {\mathbb H}$ \be \label{Pi} \Pi (A_0 + A_1 {\mathbf e}_1 + A_2 {\mathbf e}_2 + A_3 {\mathbf e}_3) := A_1 {\mathbf e}_1 + A_2 {\mathbf e}_2 + A_3 {\mathbf e}_3 \ , \end{equation} \par \noindent projecting a quaternion $A$ onto its imaginary (or traceless) part. One can check that \be \Pi ( ( D_j \Phi) \Phi^{-1} ) = \frac{\sin\delta}{\delta} ( {\cal D}_j \Phi ) \Phi^{-1} \ . \end{equation} \par \noindent Throughout this paper we will use only the derivative $D_j$, defined by \rf{DFi}, but applied not only to elements of $SU(2)$ but to any $\Psi \in {\mathbb H}$. Note that \be \Psi = \left( \ba{rr} a & b \\ - \bar b & \bar a \ea \right) \quad \Longrightarrow \quad D_j \Psi = \left( \ba{rr} D_j a & D_j b \\ - D_j \bar b & D_j \bar a \ea \right) \end{equation} \par \noindent and the following rules of differentiation hold \be \begin{array}{l} \label{Leib} D_j ( A B ) = ( D_j A ) B + \sigma_j (A) D_j B \ , \\[2ex] D_j (\Psi^{-1}) = - \left( \sigma_j (\Psi) \right)^{-1} (D_j \Psi) \Psi^{-1} \ , \end{array} \end{equation} \par \noindent where $A, B, \Psi \in {\mathbb H}$.
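The product rule in \rf{Leib} is an exact algebraic identity at a right-scattered point, and it can be checked numerically; in the sketch below the random $2\times 2$ complex matrices and the step $\varepsilon$ are our own test data, not quantities taken from the text.

```python
# Numeric check of the modified Leibniz rule
#   D_j(AB) = (D_j A) B + sigma_j(A) D_j B
# at a right-scattered point, where D is the difference quotient (T - id)/eps.
import numpy as np

rng = np.random.default_rng(0)

def rand_m():
    # a random 2x2 complex matrix standing in for a quaternion value
    return rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

eps = 0.25                        # the step sigma(t) - t
A, TA = rand_m(), rand_m()        # values of A at t and at sigma(t)
B, TB = rand_m(), rand_m()

DA = (TA - A) / eps
DB = (TB - B) / eps
DAB = (TA @ TB - A @ B) / eps     # D applied to the product AB

assert np.allclose(DAB, DA @ B + TA @ DB)
print("modified Leibniz rule holds")
```

The check works because $(T_jA\,T_jB - AB) = (T_jA - A)B + T_jA\,(T_jB - B)$ holds for any matrices, so floating-point agreement is limited only by round-off.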
Therefore $D_j$ is convenient in calculations and turned out to be sufficient for our purposes (see Section~\ref{Lax}, where the spectral approach for pseudospherical immersions is presented). A formulation of another spectral approach based on the more geometric derivative ${\cal D}_j$ is an open problem. It would be interesting to check the equivalence of both approaches. \section{Smooth and discrete pseudospherical surfaces} \label{pseudo-review} Pseudospherical surfaces, i.e., surfaces (immersions) of constant negative Gaussian curvature, have been studied intensively since the middle of the XIX century, starting from 1839 \cite{Min}. The famous transformations found by Bianchi, Lie and B\"acklund turned out to be milestones both in differential geometry and in the soliton theory. Old and recent results concerning pseudospherical surfaces, including a lot of original references, are collected and reported for instance in \cite{Eis1,Hey,Ov,PM}, see also \cite{Ampol}. Let us consider a surface immersed in ${\mathbb R}^3$ explicitly described by a position vector ${\vec r} = {\vec r} (s, t)$ (we assume that this function is sufficiently smooth). We denote the normal vector by ${\vec n}$ and define the so-called fundamental forms: \be \begin{array}{l} I := d {\vec r} \cdot d {\vec r} = E ds^2 + 2 F ds\, dt + G dt^2 \ , \\[2ex] II := - d {\vec r} \cdot d {\vec n} = L ds^2 + 2 M ds\, dt + N dt^2 \ , \end{array} \end{equation} \par \noindent where the center dot denotes the standard scalar product in ${\mathbb R}^3$ and $E, F, G$, $L, M, N$ are real functions of $s, t$. These functions have to satisfy nonlinear equations known as the Gauss-Peterson-Codazzi equations.
The Gaussian curvature can be conveniently expressed as follows \be \label{Kcont} K = \frac{\det (II)}{\det (I)} = \frac{({\vec r},_1 \cdot {\vec n},_1) ({\vec r},_2 \cdot {\vec n},_2) - ({\vec r},_1 \cdot {\vec n},_2 )({\vec r},_2 \cdot {\vec n},_1)}{({\vec r},_1 \cdot {\vec r},_1) ({\vec r},_2 \cdot {\vec r},_2) - ({\vec r},_1 \cdot {\vec r},_2)^2} \ , \end{equation} \par \noindent where ${\vec r},_1 := \partial {\vec r}/\partial t$, ${\vec r},_2 := \partial {\vec r}/\partial s$, etc. \begin{Def} Coordinates $s,t$ are called Chebyshev coordinates if the first fundamental form is given by $I = ds^2 + 2 \cos\phi ds\,dt + dt^2$, i.e., \be \label{Cheb} E \equiv {\vec r},_1 \cdot {\vec r},_1 = 1 \ , \quad G \equiv {\vec r},_2 \cdot {\vec r},_2 = 1 \ , \quad F \equiv {\vec r},_1 \cdot {\vec r},_2 = \cos\phi \ . \end{equation} \par \noindent If the following less restrictive conditions hold: \be E,_2 = 0 \ , \quad G,_1 = 0 \ , \end{equation} \par \noindent then $s, t$ are called weak Chebyshev coordinates. \end{Def} Any weak Chebyshev coordinates $s,t$ can be transformed (at least locally) into Chebyshev coordinates $\tilde s, \tilde t$ by an appropriate change of variables $\tilde s = g (s)$, $\tilde t = f (t)$. \begin{Def} Coordinates $s,t$ are called asymptotic if the second fundamental form is given by $II = 2 M dt\,ds$, i.e., \be \label{asym} {\vec r},_1 \cdot {\vec n},_1 = {\vec r},_2 \cdot {\vec n},_2 = 0 \ , \quad {\vec r},_1 \cdot {\vec n},_2 = {\vec r},_2 \cdot {\vec n},_1 = - M \ . \end{equation} \par \noindent \end{Def} \begin{prop} \label{czeb} Asymptotic lines on a surface admit parameterization by Chebyshev coordinates if and only if the surface has a constant negative Gaussian curvature.
In this case the Gaussian curvature is given by \be \label{Kcontas} K = \frac{ - ({\vec r},_1 \cdot {\vec n},_2 )({\vec r},_2 \cdot {\vec n},_1)}{({\vec r},_1 \cdot {\vec r},_1) ({\vec r},_2 \cdot {\vec r},_2) - ({\vec r},_1 \cdot {\vec r},_2)^2} = - \left( \frac{ M}{\sqrt{E} \sqrt{G} \sin\phi} \right)^2 \ , \end{equation} \par \noindent where $\phi$ is the angle between ${\vec r},_1$ and ${\vec r},_2$. \end{prop} \vspace{2ex} \noindent {\bf Discrete surfaces} (discrete immersions) are defined as maps $$ {\vec r}: \varepsilon_1 {\mathbb Z} \times \varepsilon_2 {\mathbb Z} \ni (\varepsilon_1 m, \varepsilon_2 n) \rightarrow {\vec r} (\varepsilon_1 m, \varepsilon_2 n) \in {\mathbb R}^3$$ such that $\Delta_1 {\vec r}$ and $\Delta_2 {\vec r}$ are linearly independent for any $m,n$, where $\Delta_j$ is defined by \be \label{Delta} \Delta_j f = \frac{T_j f - f}{\varepsilon_j} \ , \end{equation} \par \noindent and $f : \varepsilon_1 {\mathbb Z} \times \varepsilon_2 {\mathbb Z} \rightarrow {\mathbb R}^3$. In other words, we consider the case ${\mathbb T}_1 = \varepsilon_1 {\mathbb Z}$, ${\mathbb T}_2 = \varepsilon_2 {\mathbb Z}$, where $\varepsilon_1$, $\varepsilon_2$ are fixed constants (the mesh size). Therefore, in the discrete case $D_j = \Delta_j$. In particular, for $\varepsilon_1 = \varepsilon_2 =1$ we have $\Delta_j = T_j - 1$. The discrete analogue of pseudospherical surfaces endowed with asymptotic Chebyshev coordinates is defined as follows (compare \cite{Sau2,Wun}). Weak Chebyshev coordinates are discretized in a similar way.
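Before passing to the discrete setting, formula \rf{Kcont} can be illustrated numerically: for an explicitly given smooth surface all partial derivatives are approximated by central differences. The sketch below (Python with NumPy, an illustration only) does this for the paraboloid $z = x^2 + y^2$, whose Gaussian curvature is classically $K = 4/(1 + 4x^2 + 4y^2)^2$.

```python
import numpy as np

def r(u, v):
    """Explicit immersion: the paraboloid z = u^2 + v^2."""
    return np.array([u, v, u*u + v*v])

def partials(f, u, v, h=1e-5):
    # central differences for the two partial derivatives
    f1 = (f(u + h, v) - f(u - h, v)) / (2*h)
    f2 = (f(u, v + h) - f(u, v - h)) / (2*h)
    return f1, f2

def normal(u, v):
    r1, r2 = partials(r, u, v)
    n = np.cross(r1, r2)
    return n / np.linalg.norm(n)

def gauss_curvature(u, v):
    """K = det(II)/det(I), evaluated via the vector formula (Kcont)."""
    r1, r2 = partials(r, u, v)
    n1, n2 = partials(normal, u, v)
    num = r1.dot(n1) * r2.dot(n2) - r1.dot(n2) * r2.dot(n1)
    den = r1.dot(r1) * r2.dot(r2) - r1.dot(r2)**2
    return num / den

u0, v0 = 0.3, -0.2
K = gauss_curvature(u0, v0)
K_exact = 4.0 / (1.0 + 4.0*(u0**2 + v0**2))**2   # classical value for z = x^2 + y^2
assert abs(K - K_exact) < 1e-4
```

The orientation of the normal does not matter here, since ${\vec n}$ enters the numerator of \rf{Kcont} twice; for an asymptotic Chebyshev parameterization of a pseudospherical surface the same function would return a constant negative value, in agreement with Proposition~\ref{czeb}.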
\begin{Def}[\cite{BP-pseudo}] \label{pseudodis} A discrete asymptotic weak Chebyshev net (discrete $K$-surface) is an immersion ${\vec r}: \varepsilon_1 {\mathbb Z} \times \varepsilon_2 {\mathbb Z} \rightarrow {\mathbb R}^3$ such that for any $m,n$ \begin{itemize} \item $\Delta_1 {\vec r} \cdot \Delta_1 {\vec r} = E (m)$ , \ $\Delta_2 {\vec r} \cdot \Delta_2 {\vec r} = G (n)$ , \ (weak Chebyshev net); discrete Chebyshev nets correspond to $E = G = 1$, \item the points ${\vec r}$, $T_1 {\vec r}$, $T_2 {\vec r}$, $T_1^{-1} {\vec r}$, $T_2^{-1} {\vec r}$ are coplanar \ (asymptotic net). \end{itemize} \end{Def} The plane containing ${\vec r}$, $T_1 {\vec r}$, $T_2 {\vec r}$, $T_1^{-1} {\vec r}$, $T_2^{-1} {\vec r}$ can be interpreted as the discrete analogue of the tangent plane and \be {\vec n} := \frac{ \Delta_1 {\vec r} \times \Delta_2 {\vec r} }{ | \Delta_1 {\vec r} \times \Delta_2 {\vec r} | } = \frac{ \Delta_1 {\vec r} \times \Delta_2 {\vec r} }{\sqrt{ (\Delta_1 {\vec r})^2 (\Delta_2 {\vec r})^2 - (\Delta_1 {\vec r} \cdot \Delta_2 {\vec r})^2 } } \ , \end{equation} \par \noindent is the discrete analogue of the normal vector (here the cross denotes the vector product). \section{Some old results in a new form} In order to obtain the explicit similarity between smooth and discrete cases we will reformulate the definition of discrete asymptotic nets and derive another formula for the discrete Gaussian curvature of asymptotic Chebyshev nets. \begin{prop} \label{old1} For any discrete immersion ${\vec r}$ \[ \Delta_1 {\vec n} \cdot \Delta_1 {\vec r} = 0 \quad \Longleftrightarrow \quad \Delta_1 {\vec r} \ , \ T_1 (\Delta_1 {\vec r})\ , \ T_1 (\Delta_2 {\vec r}) \ {\rm are \ coplanar} . \] \[ \Delta_2 {\vec n} \cdot \Delta_2 {\vec r} = 0 \quad \Longleftrightarrow \quad \Delta_2 {\vec r} \ , \ T_2 (\Delta_1 {\vec r})\ , \ T_2 (\Delta_2 {\vec r}) \ {\rm are \ coplanar} .
\] \end{prop} \begin{Proof} From the definition of ${\vec n}$ it follows: ${\vec n} \cdot \Delta_1 {\vec r} = 0$, $T_1 {\vec n} \cdot T_1 \Delta_1 {\vec r} = 0$ and $T_1 {\vec n} \cdot T_1 \Delta_2 {\vec r} = 0$. Then $\Delta_1 {\vec n} \cdot \Delta_1 {\vec r} = 0 \ \Longleftrightarrow \ T_1 {\vec n} \cdot \Delta_1 {\vec r} = {\vec n} \cdot \Delta_1 {\vec r} $. Hence, taking into account ${\vec n} \cdot \Delta_1 {\vec r} = 0$, we get $T_1 {\vec n} \cdot \Delta_1 {\vec r} = 0$. Therefore, $\Delta_1 {\vec r}$, $T_1 \Delta_1 {\vec r}$ and $T_1 \Delta_2 {\vec r}$ are coplanar. The proof of the second statement is similar. \end{Proof} \begin{cor} \label{cordis} For any discrete immersion the points ${\vec r}$, $T_1 {\vec r}$, $T_2 {\vec r}$, $T_1^{-1} {\vec r}$, $T_2^{-1} {\vec r}$ \ are coplanar if and only if \ $\Delta_1 {\vec n} \cdot \Delta_1 {\vec r} = 0$ and $\Delta_2 {\vec n} \cdot \Delta_2 {\vec r} = 0$. In other words, a discrete immersion ${\vec r} : \varepsilon_1 {\mathbb Z} \times \varepsilon_2 {\mathbb Z} \rightarrow {\mathbb R}^3$ is asymptotic iff \be \Delta_1 {\vec r} \cdot \Delta_1 {\vec n} = \Delta_2 {\vec r} \cdot \Delta_2 {\vec n} = 0 \ , \end{equation} \par \noindent which is a discrete analogue of \rf{asym}. \end{cor} \begin{prop} \label{Kprop} For any discrete asymptotic weak Chebyshev net, $K$ defined by \be \label{Kdisc} K : = - \frac{ (\Delta_1 {\vec n}\cdot \Delta_2 {\vec r} ) (\Delta_2 {\vec n} \cdot \Delta_1 {\vec r} ) }{ (\Delta_1 {\vec r})^2 (\Delta_2 {\vec r})^2 - (\Delta_1 {\vec r} \cdot \Delta_2 {\vec r} )^2 } \end{equation} \par \noindent is constant (i.e., does not depend on $m,n$). \end{prop} \begin{Proof} We consider the tetrahedron $ABCD$: ${\vec r} \equiv A$, $T_1 {\vec r} \equiv B$, $T_2 {\vec r} \equiv D$, $T_1 T_2 {\vec r} \equiv C$. Taking into account Definition~\ref{pseudodis}, we have \be \label{leng} |\vec{AB}| = |\vec{DC}| = \varepsilon_1 | \Delta_1 {\vec r} | \ , \quad |\vec{AD}| = |\vec{BC}| = \varepsilon_2 | \Delta_2 {\vec r} | .
\end{equation} \par \noindent We denote by $h_{AB}^D$ the height of the triangle $ABD$ perpendicular to $AB$, and by $H^D$ the height of the tetrahedron $ABCD$ perpendicular to the base $ABC$, etc. Then $\theta_1$ denotes the angle between ${\vec n}$ and $T_1 {\vec n}$ (i.e., between the planes $ABC$ and $ABD$) and $\theta_2$ denotes the angle between ${\vec n}$ and $T_2 {\vec n}$ (i.e., between $ABD$ and $ACD$). Note that the angle between $ABC$ and $BCD$ is $T_1 \theta_2$, and the angle between $ACD$ and $BCD$ is $T_2 \theta_1$. Finally, $\phi$ is the angle between $\Delta_1 {\vec r}$ and $\Delta_2 {\vec r}$, i.e., \be \Delta_1 {\vec r} \cdot \Delta_2 {\vec r} = |\Delta_1 {\vec r}| |\Delta_2 {\vec r}| \cos\phi \ . \end{equation} \par \noindent From elementary geometric considerations we have: \be \begin{array}{l} \label{element} H^B = h_{AD}^B \sin\theta_2 \ , \quad h_{AD}^B = |\vec{AB}| \sin\phi \ , \quad h_{CD}^B = |\vec{BC}| \sin\phi \ , \\[2ex] H^D = h_{AB}^D \sin\theta_1 \ , \quad h_{AB}^D = |\vec{AD}| \sin\phi \ , \quad h_{BC}^D = |\vec{DC}| \sin\phi \ , \\[2ex] H^B = h_{CD}^B \sin T_2\theta_1 \ , \quad H^D = h_{BC}^D \sin T_1\theta_2 \ , \quad H^D = H^B \ . \end{array} \end{equation} \par \noindent The last equation results from the comparison of two formulae for the volume of the tetrahedron: $H^D P_{ABC} = H^B P_{ACD}$, where $P_{ABC} = P_{ACD}$ because the triangles $ABC$ and $ACD$ are congruent. From \rf{element} we obtain: \[ \frac{\sin\theta_1}{| \vec{AB} |} = \frac{\sin\theta_2}{ | \vec{AD} |} = \frac{\sin T_2\theta_1}{ | \vec{DC} | } = \frac{\sin T_1 \theta_2}{ | \vec{BC} | } , \] which implies \be \label{tors} \frac{\sin\theta_1}{\varepsilon_1 |\Delta_1 {\vec r} | } = \frac{\sin\theta_2}{\varepsilon_2 | \Delta_2 {\vec r} | } = {\rm const} \ .
\end{equation} \par \noindent Then, \be \begin{array}{l} \displaystyle \label{main} \Delta_1 {\vec n} \cdot \Delta_2 {\vec r} = \frac{T_1 {\vec n} \cdot \Delta_2 {\vec r}}{\varepsilon_1} = \frac{| \Delta_2 {\vec r} | }{\varepsilon_1} \frac{H^D}{|\vec{AD}|} = \frac{| \Delta_2 {\vec r} | \sin\theta_1 \, \sin\phi}{\varepsilon_1} \ , \\[3ex] \displaystyle \Delta_2 {\vec n} \cdot \Delta_1 {\vec r} = \frac{T_2 {\vec n} \cdot \Delta_1 {\vec r}}{\varepsilon_2} = \frac{| \Delta_1 {\vec r} | }{\varepsilon_2} \frac{H^B}{|\vec{AB}|} = \frac{| \Delta_1 {\vec r} | \sin\theta_2 \, \sin\phi }{\varepsilon_2} \ , \\[4ex] (\Delta_1 {\vec r})^2 (\Delta_2 {\vec r})^2 - (\Delta_1 {\vec r} \cdot \Delta_2 {\vec r} )^2 = (\Delta_1 {\vec r})^2 (\Delta_2 {\vec r})^2 \sin^2 \phi \ . \end{array}\end{equation} \par \noindent Therefore, computing \rf{Kdisc}, we obtain \be K = - \frac{\sin\theta_1 \sin\theta_2}{\varepsilon_1 \varepsilon_2 |\Delta_1 {\vec r} | |\Delta_2 {\vec r} | } \ , \end{equation} \par \noindent and, taking into account \rf{tors}, we complete the proof. \end{Proof} $K$ given by the formula \rf{Kdisc} can be considered as a natural discrete analogue of the Gaussian curvature \rf{Kcontas}. Wunderlich \cite{Wun}, in the case of discrete Chebyshev nets ($\theta_1 = \theta_2 = \theta$, $|\Delta_1 {\vec r}| = |\Delta_2 {\vec r}| = 1$ and $\varepsilon_1 = \varepsilon_2 = \varepsilon$), proposed a similar definition: \be K' = - \frac{\sin^2\theta}{\varepsilon^2 \cos\theta} \ . \end{equation} \par \noindent Because in this case $\theta ={\rm const}$ (compare \rf{tors}), both $K$ and $K'$ are obviously constant. In the continuous limit $\theta \rightarrow 0$ and, since $K'/K = 1/\cos\theta \rightarrow 1$, both definitions agree in this limit. \section{Pseudospherical surfaces on time scales} Corollary~\ref{cordis} shows that the assumptions of Definition~\ref{pseudodis} can be expressed completely in terms of delta derivatives. Therefore, the extension of this definition to arbitrary time scales is straightforward.
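Incidentally, the relation \rf{tors} underlying Proposition~\ref{Kprop} can be confirmed numerically on a single tetrahedron with opposite edges of equal length, as forced by \rf{leng}; such a tetrahedron is formed by four alternating vertices of a rectangular box. The sketch below (Python with NumPy, an illustration only; the box dimensions are arbitrary) computes the sines of the dihedral angles from unit normals of the faces.

```python
import numpy as np

# A "Chebyshev" tetrahedron: opposite edges equal (alternating box vertices)
a, b, c = 1.0, 0.7, 0.4           # box dimensions (arbitrary)
A = np.array([0.0, 0.0, 0.0])     # r
B = np.array([a, b, 0.0])         # T1 r
D = np.array([0.0, b, c])         # T2 r
C = np.array([a, 0.0, c])         # T1 T2 r

def unit_normal(P, Q, R):
    n = np.cross(Q - P, R - P)
    return n / np.linalg.norm(n)

def sin_dihedral(face1, face2):
    # sine of the angle between two planes, from their unit normals
    return np.linalg.norm(np.cross(unit_normal(*face1), unit_normal(*face2)))

s_th1   = sin_dihedral((A, B, C), (A, B, D))   # theta_1    (edge AB)
s_th2   = sin_dihedral((A, B, D), (A, C, D))   # theta_2    (edge AD)
s_T2th1 = sin_dihedral((A, C, D), (B, C, D))   # T2 theta_1 (edge DC)
s_T1th2 = sin_dihedral((A, B, C), (B, C, D))   # T1 theta_2 (edge BC)

lAB = np.linalg.norm(B - A)       # = |DC|
lAD = np.linalg.norm(D - A)       # = |BC|

# All four ratios of (element) coincide
ratios = [s_th1/lAB, s_th2/lAD, s_T2th1/lAB, s_T1th2/lAD]
assert np.allclose(ratios, ratios[0])
```

All four ratios $\sin\theta_1/|\vec{AB}|$, $\sin\theta_2/|\vec{AD}|$, $\sin T_2\theta_1/|\vec{DC}|$, $\sin T_1\theta_2/|\vec{BC}|$ coincide, as required by \rf{tors}.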
First, given an immersion ${\vec r}$ on a time scale, we define the normal vector \be \label{timenor} {\vec n} := \frac{ D_1 {\vec r} \times D_2 {\vec r} }{| D_1 {\vec r} \times D_2 {\vec r} | } \ . \end{equation} \par \noindent \begin{Def} \label{pseudotime} An immersion ${\vec r}: {\mathbb T}_1 \times {\mathbb T}_2 \ni (t_1, t_2) \rightarrow {\vec r} (t_1, t_2) \in {\mathbb R}^3$ such that for any $t \equiv (t_1, t_2) \in {\mathbb T}_1 \times {\mathbb T}_2$ \begin{itemize} \item ${\vec r}$ is completely delta differentiable , \item ${\vec n}$ is completely delta differentiable , \item $(D_1 {\vec r})^2 = E (t_1)$, \ $(D_2 {\vec r})^2 = G (t_2)$ \ , \item $D_1 {\vec n} \cdot D_1 {\vec r} = D_2 {\vec n} \cdot D_2 {\vec r} = 0$ \ , \end{itemize} is called an asymptotic weak Chebyshev net on the time scale ${\mathbb T} \equiv {\mathbb T}_1 \times {\mathbb T}_2$ (or, in the particular case $E = G = 1$, an asymptotic Chebyshev net). \end{Def} In the continuous and discrete cases, asymptotic weak Chebyshev nets have constant negative Gaussian curvature (see Propositions~\ref{czeb} and \ref{Kprop}) and, as a consequence, they can be identified with pseudospherical surfaces. This is true also in the general case. \begin{Th} \label{KTh} For any asymptotic Chebyshev net on a time scale ${\mathbb T} = {\mathbb T}_1 \times {\mathbb T}_2$, $K$ defined by \be \label{Ktime} K = - \frac{ (D_1 {\vec n}\cdot D_2 {\vec r} ) (D_2 {\vec n} \cdot D_1 {\vec r} ) }{ (D_1 {\vec r})^2 (D_2 {\vec r})^2 - (D_1 {\vec r} \cdot D_2 {\vec r} )^2 } \end{equation} \par \noindent is constant. \end{Th} \begin{Proof} It is sufficient to show that $D_1 K = D_2 K = 0$ at any $t \in {\mathbb T}$. If $t$ is both 1-right-dense and 2-right-dense, we repeat the standard proof of Proposition~\ref{czeb}.
Namely, using Codazzi equations (resulting from compatibility conditions, i.e., ${\vec n} \cdot {\vec r},_{jjk} = {\vec n} \cdot {\vec r},_{jkj}$) we show that \[ k,_1 = k,_2 = 0 \ , \quad k = \frac{M}{\sqrt{E} \sqrt{G} \sin\phi} \ . \] The formula \rf{Ktime} yields $K = - k^2$, compare \rf{Kcontas}. Hence $K,_1 = K,_2 = 0$. If $t$ is both 1-right-scattered and 2-right-scattered, we use the proof of Proposition~\ref{Kprop}. We point out, however, that the proof of Proposition~\ref{old1} (which is crucial in order to identify sides of the tetrahedron with appropriate tangent planes) needs a modification. We have to use the assumption about complete delta differentiability of ${\vec r}$. Indeed, if (for instance) $T_1 t$ is 1-right-dense, then without this assumption $T_1 (\Delta_2 {\vec r})$ does not have to be perpendicular to $T_1 {\vec n}$. In the ``mixed'' case the proof is also straightforward (although it seems to be the most cumbersome). Let, for instance, $t$ be 1-right-dense and 2-right-scattered. We use the Frenet basis $\vec\tau$, $\vec\nu$, $\vec\beta$: \be \vec\tau = \frac{{\vec r},_1}{\sqrt{E}} \ , \quad \vec\nu = \frac{\vec\tau,_1}{\kappa \sqrt{E}} \ , \quad \vec\beta = \vec\tau \times \vec\nu \ , \end{equation} \par \noindent where $\kappa$ is the curvature of the line $t_2 = {\rm const}$ at $t$. The Serret-Frenet equations read: \be \label{Freneteqs} \vec\tau,_1 = \sqrt{E} \kappa \vec\nu \ , \quad \vec\nu,_1 = \sqrt{E} ( {\tilde \kappa} \vec\beta - \kappa \vec\tau ) \ , \quad \vec\beta,_1 = - \sqrt{E} {\tilde \kappa} \vec\nu \ , \end{equation} \par \noindent where $\tilde \kappa$ is the second curvature (or the torsion). We define a unit vector $\vec d$ \be \vec d := \frac{D_2 {\vec r}}{\sqrt{G}} \ , \quad {\vec n} = \frac{\vec\tau \times \vec d}{\sin\phi} \ . \end{equation} \par \noindent From ${\vec n},_1 \cdot {\vec r},_1 = 0$ we derive $({\vec r},_1 \times {\vec r},_{11}) \cdot {\vec d} = 0$. Hence $\vec d \perp \vec\beta$.
Then \be \label{dvec} \vec d = \vec\tau \cos\phi + \vec\nu \sin\phi \ , \quad {\vec n} = \vec\beta \ . \end{equation} \par \noindent $D_2 {\vec r} \cdot D_2 {\vec r} = G (t_2)$ implies $\vec d \cdot T_2 {\vec r},_1 = {\vec r},_1 \cdot \vec d$, and $D_2 {\vec n} \cdot D_2 {\vec r} = 0$ implies $T_2 {\vec n} \cdot \vec d = 0$. Then, from $T_2 {\vec n},_1 \cdot T_2 {\vec r},_1 = 0$ we get $T_2 {\vec n} \cdot T_2 {\vec r},_{11} = 0$, i.e., $T_2 {\vec n} = T_2 \vec\beta$. Hence \be \vec d = T_2 \vec\tau \cos\phi + T_2 \vec\nu \sin\phi \ . \end{equation} \par \noindent Therefore, introducing an additional angle $\vartheta$ and performing two rotations, we can express the basis $T_2 \vec\tau$, $T_2 \vec\nu$, $T_2 \vec\beta$ as follows: \be \label{T2} \begin{array}{l} T_2 \vec\tau = \vec\tau (\cos^2\phi + \cos\vartheta \sin^2\phi) + \vec\nu \sin\phi \cos\phi (1 - \cos\vartheta) + \vec\beta \sin\phi \sin\vartheta \ , \\[2ex] T_2 \vec\nu = \vec\tau \sin\phi \cos\phi (1 - \cos\vartheta) + \vec\nu (\sin^2\phi + \cos\vartheta \cos^2\phi) - \vec\beta \cos\phi \sin\vartheta \ , \\[2ex] T_2 \vec\beta = - \vec\tau \sin\phi \sin\vartheta + \vec\nu \cos\phi\sin\vartheta + \vec\beta \cos\vartheta \ . \end{array} \end{equation} \par \noindent On the other hand, we have $T_2 {\vec r} = {\vec r} + \varepsilon {\vec d} \sqrt{G}$, where $\varepsilon = \sigma (t_2) - t_2$. Differentiating it (remember that $G,_1 = 0$) and using \rf{dvec}, \rf{Freneteqs} we get \be \label{T2r1} \frac{T_2 \vec\tau - \vec\tau}{ \varepsilon \sqrt{G} } = \left( \kappa + \frac{\phi,_1}{\sqrt{E}} \right) \left( - \vec\tau \sin\phi + \vec\nu \cos\phi \right) + {\tilde \kappa} \vec\beta \sin\phi \ . \end{equation} \par \noindent Comparing \rf{T2} with \rf{T2r1} we explicitly express $\kappa$ and $\tilde\kappa$ by $\phi$ and $\vartheta$: \be \label{kappy} \varepsilon \tilde\kappa \sqrt{G} = \sin\vartheta \ , \quad \varepsilon \sqrt{G} \left( \kappa + \frac{\phi,_1 }{\sqrt{E} } \right) = (1 - \cos\vartheta) \sin\phi \ .
\end{equation} \par \noindent Substituting \rf{kappy} into the compatibility conditions of equations \rf{Freneteqs} and \rf{T2} we get \be \label{stale} \vartheta,_1 = 0 \ , \quad T_2 \left( \frac{\sin\vartheta}{\varepsilon \sqrt{G}} \right) = \frac{\sin\vartheta}{\varepsilon \sqrt{G}} \ . \end{equation} \par \noindent Taking into account ${\vec n} = \vec\beta$ and equations \rf{Freneteqs}, \rf{T2}, \rf{kappy} we compute $K$ using the formula \rf{Ktime}: \be K = - \frac{ (T_2 \vec\beta \cdot {\vec r},_1)(D_2 {\vec r} \cdot \vec\beta,_1)}{\varepsilon E G \sin^2 \phi} = - \frac{\tilde\kappa \sin\vartheta}{\varepsilon \sqrt{G}} = - \frac{\sin^2\vartheta}{\varepsilon^2 G} \ . \end{equation} \par \noindent Hence, by virtue of \rf{stale} and because $\varepsilon, G$ by assumption do not depend on $t_1$, we have $K,_1 = 0$ and $D_2 K = 0$. \end{Proof} \section{The Lax pair and the Sym formula} \label{Lax} Since the pioneering work of Sym (\cite{S-2}, see also \cite{Sym}), smooth pseudospherical surfaces can be constructed from solutions of the corresponding spectral problem (Lax pair) using the so-called Sym formula $\Psi^{-1} \Psi,_\lambda$. This approach was extended to discrete surfaces by Bobenko and Pinkall \cite{BP-pseudo,BP-rev}. The results of \cite{Ci-hyper} show that relatively weak assumptions on the spectral problem yield smooth pseudospherical surfaces in asymptotic coordinates (asymptotic weak Chebyshev nets).
Motivated by these results, we consider the following system of quaternion-valued linear partial differential equations (the Lax pair) on a time scale ${\mathbb T}_1 \times {\mathbb T}_2$ \be \begin{array}{l} \label{problin} D_1 \Psi = U \Psi \ , \quad U = \lambda (a {\mathbf e}_1 + b {\mathbf e}_2) + c {\mathbf e}_3 + h \ , \\[2ex] D_2 \Psi = V \Psi \ , \quad V = \lambda^{-1} (p {\mathbf e}_1 + q {\mathbf e}_2) + r {\mathbf e}_3 + s \end{array} \end{equation} \par \noindent where $a, b, c, h, p, q, r, s$ are real functions on ${\mathbb T}_1 \times {\mathbb T}_2$. Thus (for real $\lambda$) $U, V$ take values in $\mathbb H$, and, as a consequence, $\Psi$ is also ${\mathbb H}$-valued. The compatibility conditions yield the following system of nonlinear equations: \be \label{cc} D_2 U - D_1 V + \sigma_2 (U) V - \sigma_1 (V) U = 0 \end{equation} \par \noindent (we recall that here $\sigma_1$, $\sigma_2$ denote jump operators, not to be confused with Pauli matrices). Given $\Psi$ satisfying a Lax pair of the form $D_1 \Psi = U \Psi$, $D_2 \Psi = V \Psi$, we define an immersion ${\mathbf r}: {\mathbb T}_1 \times {\mathbb T}_2 \rightarrow {\rm Im\, } {\mathbb H} \simeq {\mathbb E}^3$ by the (modified) Sym formula \be \label{Sym} {\mathbf r} = \Pi ( \Psi^{-1} \Psi,_\lambda ) \ , \end{equation} \par \noindent where $\Pi$ is the projection \rf{Pi}. Using \rf{Leib} we compute \[ D_j {\mathbf r} = \Pi ( - \sigma_j (\Psi^{-1}) (D_j \Psi) \Psi^{-1} \Psi,_\lambda + \sigma_j (\Psi^{-1}) ( U_j,_\lambda \Psi + U_j \Psi,_\lambda ) ) \ , \] where $U_1 := U$, $U_2 := V$. Hence \be \begin{array}{l} \label{Dr} D_1 {\mathbf r} = \Pi ( (\sigma_1 (\Psi))^{-1} U,_\lambda \Psi) \ , \\[2ex] D_2 {\mathbf r} = \Pi ( (\sigma_2 (\Psi))^{-1} V,_\lambda \Psi) \ . \end{array} \end{equation} \par \noindent \begin{Th} \label{Ksym} Let ${\mathbf r} : {\mathbb T}_1 \times {\mathbb T}_2 \rightarrow {\mathbb R}^3$ be the surface defined by \rf{Sym}, where $\Psi$ satisfies the Lax pair \rf{problin}.
Then the coordinates $t_1, t_2$ are asymptotic, and the formula \rf{Ktime} yields a constant value $K = - 4 \lambda^2$. \end{Th} \begin{Proof} We will check separately right-dense points and right-scattered points. At $j$-right-dense points $\sigma_j (\Psi) = \Psi$ and \be D_1 {\mathbf r} = \Psi^{-1} (a {\mathbf e}_1 + b {\mathbf e}_2) \Psi \ , \qquad D_2 {\mathbf r} = - \lambda^{-2} \Psi^{-1} (p {\mathbf e}_1 + q {\mathbf e}_2) \Psi \ , \end{equation} \par \noindent while at $j$-right-scattered points $\sigma_j (\Psi) = (1 + \varepsilon_j U_j ) \Psi$ and \be \begin{array}{l} \displaystyle D_1 {\mathbf r} = \Psi^{-1} \left( \frac{(a + \varepsilon_1 a h + \varepsilon_1 bc) {\mathbf e}_1 + (b + \varepsilon_1 b h - \varepsilon_1 a c ) {\mathbf e}_2 }{(1 + \varepsilon_1 h )^2 + \varepsilon_1^2 c^2 + \varepsilon_1^2 \lambda^2 (a^2 + b^2) } \right) \Psi \ , \\[4ex] \displaystyle D_2 {\mathbf r} = - \Psi^{-1} \left( \frac{(p + \varepsilon_2 p s + \varepsilon_2 q r) {\mathbf e}_1 + (q + \varepsilon_2 q s - \varepsilon_2 p r ) {\mathbf e}_2 }{ \lambda^2 (1 + \varepsilon_2 s )^2 + \lambda^2 \varepsilon_2^2 r^2 + \varepsilon_2^2 (p^2 + q^2) } \right) \Psi \ , \end{array} \end{equation} \par \noindent In any case the normal vector (compare \rf{timenor}) can be chosen as \be {\mathbf n} = \Psi^{-1} {\mathbf e}_3 \Psi \ . \end{equation} \par \noindent At $j$-right-dense points $D_j {\mathbf n} = \Psi^{-1} [ {\mathbf e}_3, U_j ] \Psi$. Therefore \be D_1 {\mathbf n} = 2 \lambda \Psi^{-1} (a {\mathbf e}_2 - b {\mathbf e}_1 ) \Psi \ , \qquad D_2 {\mathbf n} = 2 \lambda^{-1} \Psi^{-1} (p {\mathbf e}_2 - q {\mathbf e}_1 ) \Psi \ . 
\end{equation} \par \noindent At $j$-right-scattered points $\varepsilon_j D_j {\mathbf n} = (\sigma_j (\Psi))^{-1} {\mathbf e}_3 \sigma_j (\Psi) - \Psi^{-1} {\mathbf e}_3 \Psi$, hence, after straightforward computations \be \begin{array}{l} \displaystyle D_1 {\mathbf n} = 2 \lambda \Psi^{-1} \left( \frac{ \varepsilon_1 c (a {\mathbf e}_1 + b {\mathbf e}_2) - (1 + \varepsilon_1 h) ( b {\mathbf e}_1 - a {\mathbf e}_2) + C_1 {\mathbf e}_3 }{ (1 + \varepsilon_1 h )^2 + \varepsilon_1^2 c^2 + \lambda^2 \varepsilon_1^2 (a^2 + b^2) } \right) \Psi \ , \\[4ex] \displaystyle D_2 {\mathbf n} = 2 \lambda \Psi^{-1} \left( \frac{ \varepsilon_2 r (p {\mathbf e}_1 + q {\mathbf e}_2) - (1 + \varepsilon_2 s) ( q {\mathbf e}_1 - p {\mathbf e}_2) + C_2 {\mathbf e}_3 }{\lambda^2 (1 + \varepsilon_2 s )^2 + \lambda^2 \varepsilon_2^2 r^2 + \varepsilon_2^2 (p^2 + q^2) } \right) \Psi \ , \end{array} \end{equation} \par \noindent where \[ \begin{array}{l} \displaystyle C_1 = - \varepsilon_1 \lambda (a^2 + b^2) \ , \\[2ex] \displaystyle C_2 = - \varepsilon_2 \lambda^{-1} (p^2 + q^2) \ , \end{array} \] which is consistent with the $j$-right-dense formulas in the limits $\varepsilon_j \rightarrow 0$ (note also that the ${\mathbf e}_3$-components of $D_j {\mathbf n}$ do not enter the scalar products below). We check that $D_1 {\mathbf n} \cdot D_1 {\mathbf r} = D_2 {\mathbf n} \cdot D_2 {\mathbf r} = 0$ and (after cumbersome computations) \be \frac{ (D_1 {\mathbf n} \cdot D_2 {\mathbf r} ) (D_2 {\mathbf n} \cdot D_1 {\mathbf r} ) }{ (D_1 {\mathbf r} )^2 (D_2 {\mathbf r} )^2 - (D_1 {\mathbf r} \cdot D_2 {\mathbf r} )^2 } = 4 \lambda^2 \ , \end{equation} \par \noindent which ends the proof. The result is the same for points of any kind (right-dense or right-scattered in one or both directions)! \end{Proof} \section{The Darboux-B\"acklund transformation} The standard Zakharov-Shabat construction of the Darboux matrix (see, for instance, \cite{Ci-dbt}) can be extended to arbitrary time scales. We consider the transformation $\tilde \Psi = B \Psi$ (where $B$ is the Darboux matrix).
Then \be \begin{array}{l} \tilde U = D_1 (B) B^{-1} + \sigma_1 (B) U B^{-1} \ , \\[2ex] \tilde V = D_2 (B) B^{-1} + \sigma_2 (B) V B^{-1} \ . \end{array} \end{equation} \par \noindent We confine ourselves to the simplest Darboux matrix $B$ such that \be B = N \left( 1 + \frac{\lambda_1 - \mu_1}{\lambda - \lambda_1} P \right) \ , \quad B^{-1} = \left( 1 + \frac{\mu_1 - \lambda_1}{\lambda - \mu_1} P \right) N^{-1} \ , \end{equation} \par \noindent where $P^2 = P$. The projector $P$ has to satisfy the system \be \begin{array}{l} \label{system} D_1 (P) (1 - P) + \sigma_1 (P) U (\lambda_1) (1 - P) = 0 \ , \\[2ex] D_2 (P) (1 - P) + \sigma_2 (P) V (\lambda_1) (1 - P) = 0 \ , \\[2ex] (I - \sigma_1 (P)) ( - D_1 P + U (\mu_1) P ) = 0 \ , \\[2ex] (I - \sigma_2 (P)) ( - D_2 P + V (\mu_1) P ) = 0 \ . \end{array} \end{equation} \par \noindent One can show that $P$ given by \be \ker P = \Psi (\lambda_1) {\vec c}_1 \ , \quad {\rm Im}\,P = \Psi (\mu_1) {\vec c}_2 \ , \end{equation} \par \noindent where ${\vec c}_j$ are constant vectors, satisfies \rf{system}. Assuming that \be U = u_0 + \lambda u_1 \ , \quad V = v_0 + \frac{1}{\lambda} v_1 \ , \end{equation} \par \noindent we compute the transformation rules for $u_0, u_1, v_0, v_1$: \be \begin{array}{l} {\tilde u}_1 = \sigma_1 (N) u_1 N^{-1} \ , \\[2ex] {\tilde u}_0 = (D_1 N) N^{-1} + \sigma_1 (N) \bigg( u_0 + (\lambda_1 - \mu_1) \big( \sigma_1 (P) u_1 - u_1 P \big) \bigg) N^{-1} \ , \\[3ex] {\tilde v}_0 = (D_2 N) N^{-1} + \sigma_2 (N) v_0 N^{-1} \ , \\[3ex] \displaystyle {\tilde v}_1 = \sigma_2 (N) \left( 1 - \frac{\lambda_1 - \mu_1}{\lambda_1} \sigma_2 (P) \right) v_1 \left( 1 - \frac{\mu_1 - \lambda_1}{\mu_1} P \right) N^{-1} \ . 
\end{array} \end{equation} \par \noindent The Lax pair enjoys the following properties (the reduction group): \be \begin{array}{l} \label{red1} U (-\lambda) = {\mathbf e}_3 U (\lambda) {\mathbf e}_3^{-1} \ , \\[2ex] V (-\lambda) = {\mathbf e}_3 V (\lambda) {\mathbf e}_3^{-1} \ , \end{array} \end{equation} \par \noindent \be \begin{array}{l} \label{red2} U^\dagger (\bar \lambda) U (\lambda) = \lambda^2 (a^2 + b^2) + c^2 + h^2 \ , \\[2ex] V^\dagger (\bar \lambda) V (\lambda) = \lambda^{-2} (p^2 + q^2) + r^2 + s^2 \ , \end{array} \end{equation} \par \noindent which impose constraints on the Darboux matrix $B$ (compare \cite{Ci-dbt}): \be P^\dagger = P \ , \quad P = {\mathbf e}_3 (1 - P) {\mathbf e}_3^{-1} \ , \quad \lambda_1 = - \mu_1 = i \kappa_1 \quad (\kappa_1 \in {\mathbb R}) \ . \end{equation} \par \noindent In particular, ${\vec c}_2$ and ${\vec c}_1$ are orthogonal, and ${\vec c}_2 = {\mathbf e}_3 {\vec c}_1$. Therefore \be P = \frac{1}{2} \left( 1 + i {\mathbf p} \right) \ , \quad {\mathbf p} := p_1 {\mathbf e}_1 + p_2 {\mathbf e}_2 \ , \end{equation} \par \noindent where ${\mathbf p}^2 = -1$, i.e., $ p_1^2 + p_2^2 = 1$. The longest of the transformation formulas simplify: \be \begin{array}{l} {\tilde u}_0 = (D_1 N) N^{-1} + \sigma_1 (N) \bigg( u_0 + \kappa_1 \big( u_1 {\mathbf p} - \sigma_1 ({\mathbf p}) u_1 \big) \bigg) N^{-1} \ , \\[3ex] \displaystyle {\tilde v}_1 = \sigma_2 (N) \sigma_2 ({\mathbf p}) v_1 {\mathbf p}^{-1} N^{-1} \ , \end{array} \end{equation} \par \noindent and the Darboux matrix and its inverse become \be B = \frac{ N ( \lambda - \kappa_1 {\mathbf p} )}{\lambda - i \kappa_1} \ , \qquad B^{-1} = \frac{ (\lambda + \kappa_1 {\mathbf p}) N^{-1} }{\lambda + i \kappa_1 } \ . \end{equation} \par \noindent Finally, the transformation on the level of surfaces reads \be \label{rDBT} \tilde {\mathbf r} = {\mathbf r} + \frac{\kappa_1}{\lambda^2 + \kappa_1^2} \Psi^{-1} {\mathbf p} \Psi \ .
\end{equation} \par \noindent Therefore, the B\"acklund transformation has exactly the same form as in the continuous and in the discrete case: the segment joining $\tilde {\mathbf r}$ and $\mathbf r$ is tangent to ${\mathbf r}$ and has a constant length. The main difficulty (in the case of time scales different from ${\mathbb R}$ or $\varepsilon {\mathbb Z}$) is to find explicit seed solutions. \section{Conclusions} In this paper the notion of pseudospherical immersions is extended on the so called time scales, unifying the continuous and discrete cases in a single framework. It can be especially important in the context of the numerical approximation of continuous integrable models. Another important problem raised in this paper is a search of possible sets the integrable systems can be considered on. The Gaussian curvature of discrete pseudospherical surfaces is defined in a way admitting a straightforward extension on time scales (Proposition~\ref{Kprop}). Surprisingly, the simple formula \rf{Ktime} turns out to be valid for pseudospherical surfaces in asymptotic coordinates on any time scales (Theorem~\ref{KTh}). The range of its applicability will be further investigated. The quaternion-valued spectral problem \rf{problin} for pseudospherical surfaces in asymptotic coordinates has very general form. Actually, Theorem~\ref{Ksym} generalizes some results (isospectral case) of my earlier paper \cite{Ci-hyper} not only on the discrete case, but on arbitrary time scales. The Darboux-B\"acklund transformation \rf{rDBT} can be used to generate explicit pseudospherical surfaces (soliton solutions) on some interesting, non-standard, time scales. The work in this direction is in progress. It would be interesting to extend any other results of the integrable discrete geometry on arbitrary time scales. 
{\it Acknowledgements.} I am grateful to Iwona \'Swis\l ocka for cooperation \cite{Sw-mgr}, to Zbigniew Hasiewicz for helpful discussions, and to Klara Janglajew for turning my attention to references \cite{BG,Hi,Hi2}. My work was partially supported by the Polish Ministry of Science and Higher Education (grant No.\ 1 P03B 017 28).
\section{Introduction} Deep Convolutional Neural Network (DCNN) has achieved a huge empirical success in multiple disciplines (e.g., computer vision~\citep{alexnet,vgg,resnet}, Computer Go~\citep{alphago,alphagozero,darkforest}, and so on). On the other hand, its theoretical properties remain an open problem and an active research topic. Learning deep models are often treated as non-convex optimization in a high-dimensional space. From this perspective, many properties in deep models have been analyzed: landscapes of loss functions~\citep{landscape-anna, skip-connection-landscape-better, mei2016landscape}, saddle points~\citep{exponential-time-saddle-point, yannd-saddle-point}, relationships between local minima and global minimum~\citep{kenji-local-min-global-min, hardt2016identity, DBLP:journals/corr/abs-1712-08968}, trajectories of gradient descent~\citep{goodfellow2014qualitatively}, path between local minima~\citep{venturi2018neural}, etc. \iffalse including , empirical regularization techniques, and so on. First, generalization performance is not modeled explicitly in the optimization framework (except for assuming iid sample and applying Chernoff bound). \fi However, two components are missing: such a modeling does not consider specific network structure and input data distribution, both of which are critical factors in practice. Empirically, deep models work particular well for certain forms of data (e.g., images); theoretically, for certain data distribution, popular methods like gradient descent is shown to be unable to recover the network parameters~\citep{brutzkus2017globally}. Along this direction, previous theoretical works assume specific data distributions like spherical Gaussian and focus on shallow nonlinear networks~\citep{tian2017analytical,brutzkus2017globally,du-spurious-local-min-icml18}. 
These assumptions yield nice forms of gradient that enable analysis of many properties such as global convergence, which makes them nontrivial to extend to the deep nonlinear neural networks that yield strong empirical performance. In this paper, we propose a novel theoretical framework for deep locally connected ReLU networks that is applicable to general data distributions. Specifically, we embrace a teacher-student setting: the \emph{teacher} generates classification labels via a hidden computational graph, and the \emph{student} updates its weights to fit the labels with gradient descent. Starting from the gradient descent rule, we marginalize out the input data conditioned on the graph variables of the teacher at each layer, and arrive at a \emph{reformulation} that \textbf{(1)} captures the data distribution as explicit terms and leads to a more interpretable model, \textbf{(2)} is compatible with existing state-of-the-art regularization techniques such as Batch Normalization~\citep{batchnorm}, and \textbf{(3)} favors disentangled representations when data distributions have factorizable structures. To the best of our knowledge, our work is the first theoretical framework to achieve these properties for deep and locally connected nonlinear networks. \iffalse
\fi \iffalse
\fi Previous works have also proposed frameworks to explain deep networks, e.g., renormalization group for restricted Boltzmann machines~\citep{mehta2014exact}, spin-glass models~\citep{amit1985spin,choromanska2015loss}, transient chaos models~\citep{poole2016exponential}, and differential equations~\citep{su2014differential, saxe2013exact}. In comparison, our framework \textbf{(1)} imposes mild assumptions rather than unrealistic ones (e.g., independence of activations), \textbf{(2)} explicitly deals with back-propagation, the dominant training approach used in practice, \textbf{(3)} considers spatial locality of neurons, an important component in practical deep models, and \textbf{(4)} models the data distribution explicitly. The paper is organized as follows: Sec.~\ref{sec:basic-formulation} introduces a novel approach to model locally connected networks. Sec.~\ref{sec:teacher-student-setting} introduces the teacher-student setting and label generation, followed by the proposed reformulation. Sec.~\ref{sec:batch-norm-under-coarse-model} gives one novel finding, that Batch Norm is a projection onto the orthogonal complementary space of neuron activations, and shows that the reformulation is compatible with it. Sec.~\ref{sec:property-of-coarse-model} shows a few applications of the framework, e.g., why nonlinearity is helpful, how factorization of the data distribution leads to disentangled representations, and other issues. \iffalse
\fi \def\mathrm{ch}{\mathrm{ch}} \def\mathrm{pa}{\mathrm{pa}} \def\mathcal{Z}{\mathcal{Z}} \def\rf#1{{\mathrm{rf}(#1)}} \def\mathrm{raw}{\mathrm{raw}} \def\mathrm{sign}{\mathrm{sign}} \section{Basic formulation} \label{sec:basic-formulation} \subsection{General setting} In this paper, we consider a multi-layer (deep) network with ReLU nonlinearity, in a supervised setting with a dataset $\{(x, y)\}$, where $x$ is the input image and $y$ is its label, computed from $x$ in a deterministic manner. Sec.~\ref{sec:teacher-student-setting} describes how $x$ maps to $y$ in detail. Consider a neuron (or node) $j$. Denote by $f_j$ its activation after the nonlinearity and by $g_j$ the (input) gradient it receives after being filtered by ReLU's gating. Note that both $f_j$ and $g_j$ are deterministic functions of the input $x$ and label $y$. Since $y$ is a deterministic function of $x$, we can write $f_j = f_j(x)$ and $g_j=g_j(x)$. All analysis still holds with bias terms; we omit them for brevity. The activation $f_j$ and gradient $g_k$ can be written as (note that $f'_j$ is the binary gating function): \begin{equation} f_j(x) = f'_j(x) \sum_{k\in \mathrm{ch}(j)} w_{jk} f_k(x), \quad g_k(x) = f'_k(x) \sum_{j\in \mathrm{pa}(k)} w_{jk} g_j(x) \label{eq:x-update} \end{equation} And the weight update for gradient descent is: \begin{equation} \Delta w_{jk} = \ee2{x}{f_k(x)g_j(x)}\label{eq:x-weight-update} \end{equation} Here the expectation is taken with respect to a training dataset (or a batch), depending on whether GD or SGD is used. We also use $f_j^\mathrm{raw}$ and $g_j^\mathrm{raw}$ as the counterparts of $f_j$ and $g_j$ before the nonlinearity. \begin{figure} \centering \includegraphics[width=\textwidth]{theory_first_figure-crop.pdf} \caption{Problem Setting. (a) Locally connected network, (b) the receptive fields of each node.
(c) notations used in backpropagation. (d) nodes with the same receptive fields are grouped (Eqn.~\ref{eqn:matrix-form-simplified}).} \label{fig:lcn} \end{figure} \subsection{Locally Connected Network} Locally connected networks have extra structure, which leads to our reformulation. As shown in Fig.~\ref{fig:lcn}, each node $j$ only covers one part of the input image (i.e., its \emph{receptive field}). We use Greek letters $\{\alpha, \beta, \ldots, \omega\}$ to represent receptive fields. For a region $\alpha$, $x_\alpha$ is the content in that region, $j\in\alpha$ means node $j$ covers the region $\alpha$, and $n_\alpha$ is the number of nodes that cover the same region (e.g., the multi-channel case). The image content seen by node $j$ is $\reg{\alpha(j)}$, abbreviated as $\reg{j}$ if there is no ambiguity. A parent $j$'s receptive field covers its children's. Finally, $\omega$ represents the entire image. By definition, the activation $f_j$ of node $j$ depends only on the region $\reg{j}$, rather than the entire image $x$. This means that $f_j(x) = f_j(\reg{j})$ and $f_j(\reg{j}) = f'_j(x_j)\sum_k w_{jk} f_k(\reg{k})$. However, the gradient $g_j$ is determined by the entire image $x$ and its label $y$, i.e., $g_j = g_j(x, y)$. Since we assume that the label $y$ is a deterministic (but unknown) function of $x$, for the gradient we simply write $g_j = g_j(x)$. \subsection{Marginalized Gradient} Given the structure of a locally connected network, the gradient $g_j$ has some nice structure. From Eqn.~\ref{eq:x-weight-update} we know that $\Delta w_{jk} = \ee2{x}{f_k(x)g_j(x)} = \ee2{\reg{k}}{f_k(\reg{k}) \ee2{x_{-k} | x_k}{g_j(x)}}$. Define $\reg{-k} = x \backslash \reg{k}$ as the input image $x$ except for $\reg{k}$. Then we can define the \emph{marginalized gradient}: \begin{equation} g_j(x_k) = \ee2{\regcond{k}}{g_j(x)} \end{equation} as the marginalization (average) over $\reg{-k}$, while keeping $\reg{k}$ fixed.
With this notation, we can write $\Delta w_{jk} = \ee2{\reg{k}}{f_k(\reg{k})g_j(\reg{k})}$. On the other hand, the gradient that back-propagates to a node $k$ can be written as \begin{equation} g_k(x) = f'_k(x) \sum_{j \in \mathrm{pa}(k)} w_{jk} g_j(x) = f'_k(\reg{k}) \sum_j w_{jk} g_j(x) \end{equation} where $f'_k$ is the derivative of the activation function of node $k$ (for ReLU it is just a gating function). If we take the expectation with respect to $\regcond{k}$ on both sides, we get \begin{equation} g_k(\reg{k}) = f'_k(\reg{k})g^{\mathrm{raw}}_k(\reg{k}) = f'_k(\reg{k}) \sum_{j\in \mathrm{pa}(k)} w_{jk} g_j(\reg{k}) \label{eq:grad-collect} \end{equation} Note that all marginalized gradients $g_j(\reg{k})$ are computed independently by marginalizing over all regions outside the receptive field $\reg{k}$. Interestingly, there is a relationship between these gradients that respects the locality structure: \begin{theorem}[Recursive Property of marginalized gradient] $g_j(\reg{k}) = \ee2{\reg{j, -k} | \reg{k}}{g_j(\reg{j})}$ \end{theorem} This shows that the marginalized gradient has a recursive structure: we first compute $g_j(\reg{j})$ for a top node $j$; then, by marginalizing over the region within $\reg{j}$ but outside $\reg{k}$, we get its projection $g_j(\reg{k})$ onto child $k$; then, by Eqn.~\ref{eq:grad-collect}, we collect the projections from all parents of node $k$ to get $g_k(\reg{k})$. This procedure is repeated until we arrive at the leaf nodes. \section{Teacher-Student Setting} \label{sec:teacher-student-setting} In order to analyze the behavior of a neural network under backpropagation (BP), one needs to make assumptions about how the input $x$ is generated and how the label $y$ is related to the input $x$. Previous works assume Gaussian inputs and shallow networks, which yields an analytic solution to the gradient~\citep{tian2017analytical,du-spurious-local-min-icml18} but might not align well with practical data distributions.
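Before formalizing the teacher, a minimal NumPy sketch may help fix ideas. The two-level hierarchy, the 2-pixel patches, and the XOR combination below are our illustrative choices, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def z_beta(patch):
    # bottom-level summarization: is the 2-pixel patch bright on average?
    return int(patch.mean() > 0.5)

def z_alpha(zb1, zb2):
    # mid-level summarization: combine the two child events (here, XOR)
    return zb1 ^ zb2

def teacher_label(x):
    # x is a 4-pixel "image": two patches of 2 pixels each
    zb1, zb2 = z_beta(x[:2]), z_beta(x[2:])
    return z_alpha(zb1, zb2)   # y = z_omega

X = rng.random((1000, 4))                     # input images
y = np.array([teacher_label(x) for x in X])   # deterministic labels
```

The student sees only $(x, y)$ pairs; the intermediate events `z_beta`, `z_alpha` remain hidden.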
\subsection{The Summarization Ladder} We consider a multi-layered deterministic function as the teacher (not necessarily a neural network). For each region $x_\alpha$, there is a latent discrete \emph{summarization} variable $z_\alpha$ that captures the information of the input $x_\alpha$. Furthermore, we assume $z_\alpha = z_\alpha(x_\alpha) = z_\alpha(\{z_\beta\}_{\beta\in\mathrm{ch}(\alpha)})$, i.e., the summarization relies only on the summarizations in the immediate lower layer. In particular, the top-level summarization is the \emph{label} of the image, $y = z_\omega$, where $\omega$ represents the region of the entire image. We call a particular assignment of $z_\alpha$, $z_\alpha = a$, an \emph{event}. Finally, $m_\alpha$ is the number of values $z_\alpha$ can take. During training, all summarization functions $\mathcal{Z} = \{z_\alpha\}$ are unknown except for the label $y$. \subsection{Function Expansion on Summarization} Let us consider the following quantity. For each neural node $j$, we want to compute the expected gradient given a particular \emph{factor} $z_\alpha$, where $\alpha=\rf{j}$ (the receptive field of node $j$): \begin{equation} g_j(z_\alpha) \equiv \ee2{X_j|z_\alpha}{g_j(X_j)} = \int g_j(x_j) \mathbb{P}(x_j|z_\alpha) \mathrm{d} x_j \end{equation} and $\tilde g_j(z_\alpha) = g_j(z_\alpha)\mathbb{P}(z_\alpha)$. Similarly, $f_j(z_\alpha) = \ee2{X_j|z_\alpha}{f_j(X_j)}$ and $f'_j(z_\alpha) = \ee2{X_j|z_\alpha}{f'_j(X_j)}$. Note that $\mathbb{P}(x_j | z_\alpha)$ is the \emph{frequency count} of $x_j$ for $z_\alpha$. If $z_\alpha$ captures all information of $x_j$, then $\mathbb{P}(x_j | z_\alpha)$ is a delta function.
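Concretely, $g_j(z_\alpha)$ and $\tilde g_j(z_\alpha)$ are conditional averages estimated by frequency counts. In the sketch below, the choice $g_j = \tanh$ and the sign-based summarization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy region content x_alpha (a scalar) and its summarization z_alpha
x = rng.normal(size=5000)
z = (x > 0).astype(int)              # m_alpha = 2 events

def g_j(x):                          # hypothetical marginalized gradient g_j(x_alpha)
    return np.tanh(x)

# g_j(z_alpha): conditional expectation, estimated by frequency counts
g_given_z = np.array([g_j(x[z == a]).mean() for a in (0, 1)])
p_z = np.array([(z == a).mean() for a in (0, 1)])
g_tilde = g_given_z * p_z            # tilde g_j(z_alpha) = g_j(z_alpha) P(z_alpha)
```

By the law of total expectation, `g_tilde.sum()` recovers the unconditional average gradient, which is what drives the weight update.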
Throughout the paper, we use a frequentist interpretation of probabilities. Intuitively, if we have $g_j(z_\alpha = a) > 0$ and $g_j(z_\alpha \neq a) < 0$, then the node $j$ learns about the \emph{hidden} event $z_\alpha = a$. For multi-class classification, the top-level nodes (just below the softmax layer) already embrace such correlations (here $j$ is the class label): \begin{equation} g_j(y = j) > 0,\quad g_j(y \neq j) < 0, \label{eqn:grad-top-level} \end{equation} where we know $z_\omega = y$ is the top-level factor. A natural question now arises: \begin{center} \emph{Does gradient descent automatically push $g_j(z_\alpha)$ to be correlated with the factor $z_\alpha$}? \end{center} If this is true, then gradient descent on deep models is essentially a \emph{weakly-supervised} approach that automatically learns the intermediate events at different levels. Giving a complete answer to this question is very difficult and beyond the scope of this paper. Here, we aim to build a theoretical framework that enables such analysis. We start with the relationship between neighboring layers: \begin{theorem}[Reformulation] \label{thm:coarse-model} Denote $\alpha = \rf{j}$ and $\beta=\rf{k}$, where $k$ is a child of $j$. If the following conditions hold: \begin{itemize} \item \textbf{Focus of knowledge}. $\mathbb{P}(x_k|z_\alpha, z_\beta) = \mathbb{P}(x_k|z_\beta)$. \item \textbf{Broadness of knowledge}. $\mathbb{P}(x_j|z_\alpha, z_\beta) = \mathbb{P}(x_j|z_\alpha)$. \item \textbf{Decorrelation}. Given $z_\beta$, ($g_k^{\mathrm{raw}}(\cdot)$ and $f'_k(\cdot)$) and ($f_k^{\mathrm{raw}}(\cdot)$ and $f'_k(\cdot)$) are \emph{uncorrelated}. \end{itemize} Then the following iterative equations hold: \begin{eqnarray} f_j(z_\alpha)\!= \! f_j'(z_\alpha)\!\sum_{k\in\mathrm{ch}(j)}w_{jk}\ee2{z_\beta|z_\alpha}{f_k(z_\beta)},\quad g_k(z_\beta)\!=\!
f_k'(z_\beta)\!\sum_{j\in\mathrm{pa}(k)}w_{jk}\ee2{z_\alpha|z_\beta}{g_j(z_\alpha)} \label{eq:induction} \end{eqnarray} \end{theorem} One key property of this formulation is that it incorporates the data distribution $\mathbb{P}(z_\alpha, z_\beta)$ into the gradient descent rules. This is important since running BP on different datasets is now formulated within the same framework, with different probabilities, i.e., frequency counts of events. By studying which families of distributions lead to the desired properties, we can understand BP better. For completeness, we also need to define boundary conditions. At the lowest level $L$, we can treat each input pixel (or a group of pixels) as a single event. Therefore, $f_k(z_\beta) = \ii{k = z_\beta}$. At the top level, as we have discussed, Eqn.~\ref{eqn:grad-top-level} applies and $g_j(z_\beta) = a_1\ii{j = z_\beta} - a_2\ii{j \neq z_\beta}$. The following theorem shows that the reformulation is exact if $z_\alpha$ has all information of the region. \begin{theorem} If $\mathbb{P}(x_j|z_\alpha)$ is a delta function for all $\alpha$, then all conditions in Thm.~\ref{thm:coarse-model} hold. \end{theorem} In general, $\mathbb{P}(x_j|z_\alpha)$ is a distribution encoding how much information is lost if we only know the factor $z_\alpha$. As we climb up the ladder, we lose more and more information while keeping the parts critical for classification. This is consistent with empirical observations~\citep{bau2017network} in which the low-level features in a DCNN are generic while the high-level features are more class-specific. \subsection{Matrix Formulation} Eqn.~\ref{eq:induction} can be hard to deal with.
If we group the nodes with the same receptive field at the same level together (Fig.~\ref{fig:lcn}(d)), we have the matrix form ($\circ$ is element-wise multiplication): \begin{table} \centering \begin{tabular}{|l||l|l|} \hline & Dimension & Description \\ \hline\hline $F_\alpha$, $\tilde G_\alpha$, $D_\alpha$ & $m_\alpha$-by-$n_\alpha$ & Activation $f_j(z_\alpha)$, gradient $\tilde g_j(z_\alpha)$ and gating prob $f'_j(z_\alpha)$ at group $\alpha$. \\ \hline $W_{\beta\alpha}$ & $n_\beta$-by-$n_\alpha$ & Weight matrix that links groups $\alpha$ and $\beta$ \\ \hline $P_{\alpha\beta}$ & $m_\alpha$-by-$m_\beta$ & Prob $\mathbb{P}(z_\beta|z_\alpha)$ of events at groups $\alpha$ and $\beta$ \\ \hline \end{tabular} \caption{Matrix Notation. See Eqn.~\ref{eqn:matrix-form-simplified}.} \label{tbl:matrix-notation} \end{table} \begin{theorem}[Matrix Representation of Reformulation] \begin{equation} F_\alpha\! =\!D_{\alpha} \!\circ\! \sum_{\beta\in \mathrm{ch}(\alpha)} P_{\alpha\beta} F_\beta W_{\beta\alpha} ,\quad \tilde G_\beta\! =\! D_\beta \!\circ\! \sum_{\alpha\in\mathrm{pa}(\beta)} P_{\alpha\beta}^T \tilde G_\alpha W_{\beta\alpha}^T ,\quad \Delta W_{\beta\alpha}\! =\! (P_{\alpha\beta} F_\beta)^T \tilde G_\alpha \label{eqn:matrix-form-simplified} \end{equation} \end{theorem} See Tbl.~\ref{tbl:matrix-notation} for the notation. For this dynamics, we want $F^*_\omega = I_{n_\omega}$, i.e., the top $n_\omega$ neurons faithfully represent the classification labels. Therefore, the top-level gradient is $G_\omega = I_{n_\omega} - F_\omega$. On the other hand, for each region $\beta$ at the bottom layer, we have $F_\beta = I_{n_\beta}$, i.e., the input contains all the preliminary factors. For all regions $\alpha$ in the top-most and bottom-most layers, we have $n_\alpha=m_\alpha$.
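To make the dimensions in Eqn.~\ref{eqn:matrix-form-simplified} concrete, here is a minimal one-step sketch with one parent region and two child regions. The sizes, the deterministic $P_{\alpha\beta}$, and the use of the top-level gradient $G_\omega = I - F_\omega$ directly as $\tilde G_\omega$ are our simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
m_b = n_b = 3                          # events = nodes per child region (bottom layer)
m_w = n_w = m_b * m_b                  # parent events: pairs (z_beta1, z_beta2)

# P_{omega beta} = P(z_beta | z_omega); here z_omega determines each child event
P1 = np.zeros((m_w, m_b))
P2 = np.zeros((m_w, m_b))
for a in range(m_w):
    P1[a, a // m_b] = 1.0              # first child's event
    P2[a, a % m_b] = 1.0               # second child's event

F_b = np.eye(m_b)                      # bottom activations: one node per event
W1 = rng.normal(size=(n_b, n_w))
W2 = rng.normal(size=(n_b, n_w))

pre = P1 @ F_b @ W1 + P2 @ F_b @ W2    # sum over children of P_{alpha beta} F_beta W_{beta alpha}
D = (pre > 0).astype(float)            # ReLU gating probabilities (0/1 in this toy case)
F_w = D * pre                          # F_omega = D_omega elementwise-times the sum

G_tilde = np.eye(n_w) - F_w            # top-level gradient G_omega = I - F_omega
dW1 = (P1 @ F_b).T @ G_tilde           # Delta W_{beta omega} = (P_{omega beta} F_beta)^T tilde G_omega
```

All shapes line up with Tbl.~\ref{tbl:matrix-notation}: $F_\omega$ is $m_\omega$-by-$n_\omega$ and $\Delta W_{\beta\omega}$ matches $W_{\beta\omega}$.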
\def\iter#1{{(#1)}} \def\mathrm{vert}{\mathrm{vert}} \def\mathrm{Conv}{\mathrm{Conv}} \def\mathrm{rank}{\mathrm{rank}} \def\bninput#1{f^{(#1)}} \def\bnzeromean#1{\hat f^{(#1)}} \def\bnstandard#1{\tilde f^{(#1)}} \def\bnoutput#1{\bar f^{(#1)}} \def\vf{\mathbf{f}} \def\hat \vf{\hat \mathbf{f}} \def\tilde \vf{\tilde \mathbf{f}} \def\bar \vf{\bar \mathbf{f}} \section{Batch Normalization under Reformulation} \label{sec:batch-norm-under-coarse-model} Our reformulation naturally incorporates empirical regularization techniques like Batch Normalization (BN)~\citep{batchnorm}. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{bn_figure-crop.pdf} \caption{Batch Normalization (BN) as a projection. \textbf{(a)} Three sublayers in BN (zero-mean, unit-variance, affine). \textbf{(b)} The gradient $\mathbf{g}_\mathbf{f}$ that is propagated down is a projection of the input gradient $\mathbf{g}$ onto the orthogonal complementary space spanned by $\{\mathbf{f}, \mathbf{1}\}$.}\label{fig:bn} \end{figure} \subsection{Batch Normalization as a Projection} We start with a novel finding about Batch Norm: the back-propagated gradient through a Batch Norm layer at a node $j$ is a projection onto the orthogonal complement of the subspace spanned by the all-ones vector and the current activations of node $j$. Denote the pre-batchnorm activations as $\bninput{i} = f_j(x_i)$ ($i = 1 \ldots N$). In Batch Norm, $\bninput{i}$ is whitened to $\bnstandard{i}$, then linearly transformed to yield the output $\bnoutput{i}$: \begin{equation} \bnzeromean{i} = \bninput{i} - \mu, \quad\bnstandard{i} = \bnzeromean{i} /\sigma, \quad\bnoutput{i} = c_1 \bnstandard{i} + c_0 \end{equation} where $\mu = \frac{1}{N}\sum_i \bninput{i}$, $\sigma^2 = \frac{1}{N}\sum_i (\bninput{i} - \mu)^2$, and $c_1$, $c_0$ are learnable parameters. The original Batch Norm paper derives complicated and unintuitive weight update rules.
With vector notation, the update has a compact form with a clear geometric meaning. \begin{theorem}[Backpropagation of Batch Norm] \label{thm:bn} For a top-down gradient $\mathbf{g}$, the BN layer gives the following gradient update ($P^\perp_{\vf, \mathbf{1}}$ is the orthogonal complementary projection of the subspace $\{\vf, \mathbf{1}\}$): \begin{equation} \mathbf{g}_\mathbf{f} = J^{BN}(\vf)\mathbf{g} = \frac{c_1}{\sigma}P^\perp_{\vf, \mathbf{1}}\mathbf{g}, \quad \mathbf{g}_\mathbf{c} = S(\vf{})^T \mathbf{g} \label{eq:batch-norm-projection} \end{equation} \end{theorem} Intuitively, the back-propagated gradient $J^{BN}(\vf)\mathbf{g}$ is zero-mean and perpendicular to the input activation $\vf$ of the BN layer, as illustrated in Fig.~\ref{fig:bn}. Unlike~\citep{kohler2018towards}, which analyzes BN in an approximate manner, in Thm.~\ref{thm:bn} we do not impose any assumptions. \subsection{Batch Norm under the reformulation} The analysis of Batch Norm is compatible with the reformulation, and we arrive at a similar backpropagation rule by noticing that $\ee2{x}{f_j(x)} = \ee2{z_\alpha}{f_j(z_\alpha)}$: \begin{equation} \mu = \ee2{z_\alpha}{f_j},\quad \sigma^2 = \ee2{z_\alpha}{(f_j(z_\alpha) - \mu)^2}, \quad J^{BN}(\vf{}) = \frac{c_1}{\sigma}P^\perp_{\vf, \mathbf{1}} \label{eq:batch-norm-projection-z-alpha} \end{equation} Note that we still have the projection property, but under the new inner product $\langle f_j, g_j\rangle_{z_\alpha} = \ee2{z_\alpha}{f_j(z_\alpha)g_j(z_\alpha)}$ and norm $\|f\|_{z_\alpha} = \langle f, f\rangle_{z_\alpha}^{1/2}$. \def\mathrm{sz}{\mathrm{sz}} \section{Example applications of the proposed theoretical framework} \label{sec:property-of-coarse-model} With the help of this theoretical framework, we can now analyze interesting structures of gradient descent in deep models when the data distribution $\mathbb{P}(z_\alpha, z_\beta)$ satisfies specific conditions.
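As a numerical sanity check of the Batch Norm projection property (Thm.~\ref{thm:bn}), the BN backward pass for a single unit over a batch can be written out explicitly and compared against an independently computed projection; the batch size and constants below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 7
f = rng.normal(size=N)               # pre-BN activations over a batch
g = rng.normal(size=N)               # top-down gradient w.r.t. the BN output
c1 = 1.7                             # learnable scale (c0 does not affect g_f)

mu, sigma = f.mean(), f.std()        # batch statistics (biased variance, as in BN)
t = (f - mu) / sigma                 # whitened activations

# standard BN backward pass, written in vector form
g_f = (c1 / sigma) * (g - g.mean() - t * (g * t).mean())

# projection of g off span{f, 1}, computed independently via least squares
A = np.stack([f, np.ones(N)], axis=1)
proj = A @ np.linalg.lstsq(A, g, rcond=None)[0]
g_proj = (c1 / sigma) * (g - proj)
```

The least-squares residual is exactly the orthogonal projection of $\mathbf{g}$ off $\mathrm{span}\{\vf, \mathbf{1}\}$, so the two computations agree up to floating point, and $\mathbf{g}_\mathbf{f}$ is zero-mean and perpendicular to $\vf$.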
Here we give two concrete examples: the role played by nonlinearity, and the conditions under which a disentangled representation can be achieved. In addition, from the theoretical framework we also give general comments on multiple issues in deep learning (e.g., overfitting, GD versus SGD). \subsection{Nonlinear versus linear} \label{sec:nonlinear-vs-linear} In the formulation, $m_\alpha$ is the number of possible events within a region $\alpha$, which is often exponential with respect to the size $\mathrm{sz}(\alpha)$ of the region. The following analysis shows that a linear model cannot handle it, even with an exponential number of nodes $n_\alpha$, while a nonlinear one with ReLU can. \begin{definition}[Convex Hull of a Set] We define the convex hull $\mathrm{Conv}(P)$ of $m$ points $P \subset \mathbb{R}^n$ to be $\mathrm{Conv}(P) = \left\{P\mathbf{a}, \mathbf{a}\in\spx{n-1}\right\}$, where $\spx{n-1} = \left\{\mathbf{a}\in \mathbb{R}^n, a_i \ge 0, \sum_i a_i = 1\right\}$. A row $p_j$ is called a \emph{vertex} if $p_j \notin \mathrm{Conv}(P \backslash p_j)$. \end{definition} \begin{definition} A matrix $P$ of size $m$-by-$n$ is called \emph{$k$-vert}, or $\mathrm{vert}(P) = k \le m$, if $k$ of its rows are vertices of the convex hull generated by its rows. $P$ is called \emph{all-vert} if $k = m$. \end{definition} \begin{theorem}[Expressibility of ReLU Nonlinearity] \label{thm:sufficient-node} Assume $m_\alpha = n_\alpha = \mathcal{O}(\exp(\mathrm{sz}(\alpha)))$, where $\mathrm{sz}(\alpha)$ is the size of the receptive field of $\alpha$. If each $P_{\alpha\beta}$ is all-vert, then: ($\omega$ is the top-level receptive field) \begin{equation} \min_W Loss_\mathrm{ReLU}(W) = 0, \quad \min_W Loss_\mathrm{Linear}(W) = \mathcal{O}(\exp(\mathrm{sz}(\omega))) \end{equation} \end{theorem} Here $Loss(W) \equiv \|F_\omega - I\|^2_F$. This shows the power of nonlinearity, which guarantees a full-rank output even if the matrices involved in the multiplication are low-rank.
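The theorem's construction is combinatorial, but the rank-raising effect of ReLU can be seen in a two-line example (this toy matrix is our illustration, not the one used in the proof):

```python
import numpy as np

n = 6
i = np.arange(n).reshape(-1, 1)
j = np.arange(n).reshape(1, -1)

# B = i*1^T - 1*j^T + 0.5*1*1^T is a sum of rank-1 terms: rank 2
B = i - j + 0.5
# ReLU keeps only entries with i >= j: lower-triangular with 0.5 on the diagonal
R = np.maximum(B, 0.0)

rank_lin = np.linalg.matrix_rank(B)    # stays 2 no matter how large n is
rank_relu = np.linalg.matrix_rank(R)   # full rank n
```

A linear map of $B$ can never exceed rank 2, while applying the elementwise nonlinearity makes the result triangular with a nonzero diagonal, hence full rank.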
The following theorem shows that for intermediate layers whose input is not the identity, the all-vert property remains. \begin{theorem} \textbf{\emph{(1)}} If $F$ is full row rank, then $\mathrm{vert}(PF) = \mathrm{vert}(P)$. \textbf{\emph{(2)}} $PF$ is all-vert iff $P$ is all-vert. \end{theorem} This means that if all $P_{\alpha\beta}$ are all-vert and the input $F_\beta$ is full-rank, then with the same construction as Thm.~\ref{thm:sufficient-node}, $F_\alpha$ can be made the identity. In particular, if we sample $W$ randomly, then with probability $1$ all $F_\beta$ are full-rank, in particular the top-level input $F_1$. Therefore, using the top-level $W_1$ alone would be sufficient to yield zero generalization error, consistent with previous works showing that random projections can work well. \def\bfactor#1#2{#1[#2]} \begin{figure} \centering \includegraphics[width=\textwidth]{distributed_representation-crop.pdf} \caption{Disentangled representation. \textbf{(a)} Nodes are grouped according to regions. \textbf{(b)} An example of one parent region $\alpha$ ($2$ nodes) and two child regions $\beta_1$ and $\beta_2$ ($5$ nodes each). We assume a factorization property of the data distribution $P$. \textbf{(c)} Disentangled activations. \textbf{(d)} Separable weights.} \label{fig:disentangled-reprsentation} \end{figure} \subsection{Disentangled Representation} The analysis in Sec.~\ref{sec:nonlinear-vs-linear} assumes that $n_\alpha = m_\alpha$, i.e., that we have sufficient nodes, \emph{one neuron for one event}, to convey the information forward to the classification level. In practice, this is never the case. When $n_\alpha \ll m_\alpha = \mathcal{O}(\exp(\mathrm{sz}(\alpha)))$, the network needs to represent the information in a compact way so that it can still be sent to the top level.
Ideally, if the factor $z_\alpha$ can be written as a list of binary factors, $z_\alpha = \left[z_{\bfactor\alpha1}, z_{\bfactor\alpha2}, \ldots, z_{\bfactor\alpha{j}}\right]$, the output of a node $j$ could represent $z_{\bfactor{\alpha}{j}}$, so that all $m_\alpha$ events can be represented concisely with $n_\alpha$ nodes. Coming up with a complete theory of disentangled representation in deep nonlinear networks is far from trivial and beyond the scope of this paper. In the following, we make an initial attempt by constructing a factorizable $P_{\alpha\beta}$ so that a disentangled representation is possible in the forward pass. First we need to formally define what a disentangled representation is: \begin{definition} The activation $F_\alpha$ is \emph{disentangled} if its $j$-th column $F_{\alpha,:j} = \mathbf{1} \otimes \ldots \!\otimes\! \mathbf{f}_{\bfactor{\alpha}{j}} \!\otimes\! \ldots\! \otimes\! \mathbf{1} $, where each $\mathbf{f}_{\bfactor{\alpha}{j}}$ and $\mathbf{1}$ is a $2$-by-$1$ vector. \end{definition} \begin{definition} The gradient $\tilde G_\alpha$ is \emph{disentangled} if its $j$-th column $\tilde G_{\alpha,:j} = \mathbf{p}_{\bfactor{\alpha}{1}} \otimes \ldots \!\otimes\!\tilde\mathbf{g}_{\bfactor{\alpha}{j}} \!\otimes\! \ldots\! \otimes\! \mathbf{p}_{\bfactor{\alpha}{n_\alpha}} $, where $\mathbf{p}_{\bfactor{\alpha}{j}} = \left[\mathbb{P}(\bfactor{\alpha}{j} = 0), \mathbb{P}(\bfactor{\alpha}{j} = 1)\right]^T$ and $\tilde \mathbf{g}_{\bfactor{\alpha}{j}}$ is a $2$-by-$1$ vector. \end{definition} Intuitively, this means that each node $j$ represents the binary factor $z_\bfactor{\alpha}{j}$. A follow-up question is whether such disentangled properties carry over layers in the forward pass.
It turns out that the disentangled structure carries over if the data distribution and the weights have compatible structures: \begin{definition} The weight matrix $W_{\beta\alpha}$ is \emph{separable} with respect to disjoint sets $\{S^{\alpha\beta}_i\}$ if $W_{\beta\alpha} = \mathrm{diag}\left(W_{\beta\alpha}[S^{\alpha\beta}_1, 1], W_{\beta\alpha}[S^{\alpha\beta}_2, 2], \ldots, W_{\beta\alpha}[S^{\alpha\beta}_{n_\alpha}, n_\alpha]\right)$. \end{definition} \begin{theorem}[Disentangled Forward] If for each $\beta\in\mathrm{ch}(\alpha)$, $P_{\alpha\beta}$ can be written as a tensor product $P_{\alpha\beta} = \bigotimes_i P_{\bfactor{\alpha}{i}\bfactor{\beta}{S^{\alpha\beta}_i}}$ where $\{S^{\alpha\beta}_i\}$ are $\alpha\beta$-dependent disjoint sets, $W_{\beta\alpha}$ is separable with respect to $\{S^{\alpha\beta}_i\}$, and $F_\beta$ is disentangled, then $F_\alpha$ is also disentangled (with or without ReLU/Batch Norm). \end{theorem} If the bottom activations are disentangled then, by induction, all activations are disentangled. The next question is whether gradient descent preserves such a structure. The answer is also conditionally yes: \begin{theorem}[Separable Weight Update] If $P_{\alpha\beta} = \bigotimes_i P_{\bfactor{\alpha}{i}\bfactor{\beta}{S_i}}$, $F_\beta$ and $\tilde G_\alpha$ are both disentangled, and $\mathbf{1}^T \tilde G_\alpha = \mathbf{0}$, then the gradient update $\Delta W_{\beta\alpha}$ is separable with respect to $\{S_i\}$. \end{theorem} Therefore, with disentangled $F_\beta$ and $\tilde G_\alpha$ and a centered gradient $\mathbf{1}^T \tilde G_\alpha = \mathbf{0}$, the separable structure is preserved under gradient descent, provided the initial $W^{(0)}_{\beta\alpha}$ is separable. Note that a centered gradient is guaranteed if we insert Batch Norm (Eqn.~\ref{eq:batch-norm-projection-z-alpha}) after linear layers, and the activation $F$ remains disentangled if the weights are separable.
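The mechanics behind the Disentangled Forward theorem rest on the tensor-product identity $(A \otimes B)(\mathbf{u} \otimes \mathbf{v}) = (A\mathbf{u}) \otimes (B\mathbf{v})$: a column of the form $\mathbf{f} \otimes \mathbf{1}$ stays of that form after multiplication by $P_{\alpha\beta} = P_1 \otimes P_2$, because the row-stochastic $P_2$ maps $\mathbf{1}$ to $\mathbf{1}$. A numerical sketch with random tables and toy sizes of our choosing:

```python
import numpy as np

rng = np.random.default_rng(4)

def stochastic(m, n):
    # random conditional probability table: each row sums to 1
    P = rng.random((m, n))
    return P / P.sum(axis=1, keepdims=True)

P1, P2 = stochastic(2, 2), stochastic(2, 2)   # per-factor transition tables
P = np.kron(P1, P2)                            # factorized P_{alpha beta} = P1 kron P2

f1 = rng.normal(size=2)
ones = np.ones(2)
col = np.kron(f1, ones)          # disentangled column: f kron 1 (depends on factor 1 only)

out = P @ col                    # = (P1 f1) kron (P2 1) = (P1 f1) kron 1
```

The output column still depends only on the first factor, i.e., it remains disentangled.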
The hard part is whether $\tilde G_\beta$ remains disentangled during backpropagation if $\{\tilde G_\alpha\}_{\alpha\in\mathrm{pa}(\beta)}$ are all disentangled. If so, then the disentangled representation is self-sustaining under gradient descent. This is a non-trivial problem and generally requires structure in the data distribution. We put some discussion in the Appendix and leave this topic for future work. \subsection{Explanation of common behaviors in Deep Learning} In the proposed formulation, the input $x$ in Eqn.~\ref{eq:induction} is integrated out, and the data distribution is encoded into the probability distribution $\mathbb{P}(z_\alpha, z_\beta)$ and its marginals. A change of this distribution means the input distribution has changed. For the first time, we can now analyze many practical factors and behaviors in deep learning training that are traditionally not included in the formulation. \textbf{Over-fitting.} Given a finite number of training samples, there is always error in the estimated factor-factor distribution $\tilde{\mathbb{P}}(z_\alpha, z_\beta)$ and factor-observation distribution $\tilde{\mathbb{P}}(x_\alpha|z_\alpha)$. In some cases, a slight change of distribution drastically changes the optimal weights for prediction; this is overfitting. Here is one example. Suppose there are two different kinds of events at two disjoint receptive fields, $z_\alpha$ and $z_\gamma$. The class label is $z_\omega$, which equals $z_\alpha$ but is not related to $z_\gamma$.
Therefore, we have: \begin{equation} \tilde{\mathbb{P}}(z_\omega = 1|z_\alpha=1) = 1,\quad \tilde{\mathbb{P}}(z_\omega = 1|z_\alpha=0) = 0 \end{equation} Although $z_\gamma$ is unrelated to the class label $z_\omega$, with finite samples $z_\gamma$ can show a spurious correlation: \begin{equation} \tilde{\mathbb{P}}(z_\omega = 1|z_\gamma=1) = 0.5 + \epsilon,\quad \tilde{\mathbb{P}}(z_\omega = 1|z_\gamma=0) = 0.5-\epsilon \end{equation} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{example_overfit-crop.pdf} \caption{Overfitting Example} \label{fig:overfitting} \end{figure} On the other hand, as shown in Fig.~\ref{fig:overfitting}, $\mathbb{P}(x_\alpha|z_\alpha)$ contains a lot of detailed structure and is almost impossible to separate in the finite-sample case, while $\mathbb{P}(x_\gamma|z_\gamma)$ can be well separated for $z_\gamma = 0/1$. Therefore, for a node $j$ with $\rf{j} = \alpha$, $f_j(z_\alpha) \approx \mathrm{constant}$ (the inputs are almost indistinguishable), and: \begin{equation} \Delta w_j = \ee2{z_\alpha}{f_j(z_\alpha)g_0(z_\alpha)} \approx 0 \end{equation} where $g_0(z_\alpha) = \ee2{z_\omega|z_\alpha}{g_0(z_\omega)} = \left\{ \begin{array}{cc} 1 & z_\alpha = 1 \\ -1 & z_\alpha = 0 \end{array}\right.$ is a strong gradient signal backpropagated from the top softmax level, since $z_\alpha$ is strongly correlated with $z_\omega$. For a node $k$ with $\rf{k} = \gamma$, an easy separation of the input (e.g., by random initialization) yields a distinctive $f_k(z_\gamma)$. Therefore, \begin{equation} \Delta w_k = \ee2{z_\gamma}{f_k(z_\gamma)g_0(z_\gamma)} > 0 \end{equation} where $g_0(z_\gamma) = \ee2{z_\omega|z_\gamma}{g_0(z_\omega)} = \left\{ \begin{array}{cc} 2\epsilon & z_\gamma = 1 \\ -2\epsilon & z_\gamma = 0 \end{array}\right.$ is a weak signal, because $z_\gamma$ is (almost) unrelated to the label.
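The spurious $\epsilon$ is purely a finite-sample effect; a quick simulation (our toy setup, with $z_\gamma$ and $z_\omega$ drawn independently) shows it shrinking roughly as $1/\sqrt{N}$:

```python
import numpy as np

rng = np.random.default_rng(5)

def eps_hat(N, trials=200):
    # average |empirical P(z_omega=1 | z_gamma=1) - 0.5| over independent draws
    devs = []
    for _ in range(trials):
        zg = rng.integers(0, 2, N)   # irrelevant factor z_gamma
        zw = rng.integers(0, 2, N)   # label z_omega, independent of z_gamma
        p = zw[zg == 1].mean()       # empirical conditional frequency
        devs.append(abs(p - 0.5))
    return float(np.mean(devs))

eps_small_N, eps_large_N = eps_hat(100), eps_hat(10000)
```

With 100 samples the spurious correlation is sizable; with 10{,}000 it nearly vanishes, matching the claim below that more data alleviates this kind of overfitting.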
Therefore, we see that the weight $w_j$ that links to the meaningful receptive field $z_\alpha$ does not receive a strong gradient, while the weight $w_k$ that links to the irrelevant (but spuriously correlated) receptive field $z_\gamma$ receives a strong gradient. This will lead to overfitting. With more data, overfitting is alleviated since (1) $\tilde{\mathbb{P}}(z_\omega|z_\gamma)$ becomes more accurate and $\epsilon \rightarrow 0$; (2) $\tilde{\mathbb{P}}(x_\alpha|z_\alpha)$ starts to show statistical differences for $z_\alpha=0/1$ and thus $f_j(z_\alpha)$ becomes distinctive. Note that there exists a \textbf{second} explanation: we could argue that $z_\gamma$ is a \emph{true} but \emph{weak} factor that contributes to the label, while $z_\alpha$ is a \emph{fictitious} discriminative factor, since the appearance difference between $z_\alpha=0$ and $z_\alpha=1$ (i.e., $\tilde{\mathbb{P}}(x_\alpha|z_\alpha)$ for $z_\alpha=0/1$) could be purely due to noise and thus should be neglected. With a finite number of samples, these two cases are essentially indistinguishable. Models with different inductive biases might prefer one over the other, yielding drastically different generalization errors. For neural networks, SGD prefers the second explanation, but under the pressure of training it may also explore the first one by pushing gradients down to distinguish subtle differences in the input. This may explain why the same neural networks can fit randomly labeled data yet generalize well on real data~\citep{zhang2016understanding}. \textbf{Gradient Descent: Stochastic or not?} Previous work~\citep{keskar2016large} shows empirically that stochastic gradient descent (SGD) with a small batch size tends to converge to ``flat'' minima and offers more generalizable solutions than runs that use larger batches to compute the gradient. In our framework, an SGD update with a small batch size is equivalent to using a perturbed/noisy version of $\mathbb{P}(z_\alpha, z_\beta)$ at each iteration.
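As a toy illustration of this reading of SGD (the sample size, batch size, and number of steps are arbitrary choices for the sketch), one can check that small-batch estimates of a binary factor's distribution fluctuate around the full-batch estimate with standard deviation $\approx\sqrt{p(1-p)/\mathrm{batch}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, batch = 10_000, 64
z = rng.integers(0, 2, n)            # samples of a binary factor z_alpha
p_full = z.mean()                    # full-batch estimate of P(z_alpha = 1)

# Each small-batch SGD step effectively sees a perturbed/noisy version of
# the factor distribution, resampled at every iteration.
p_batch = np.array([rng.choice(z, batch).mean() for _ in range(1000)])
print(p_full, p_batch.mean(), p_batch.std())   # std ~ sqrt(0.25/64) ~ 0.06
```

A solution that survives these per-step perturbations of $\mathbb{P}(z_\alpha, z_\beta)$ cannot be hypersensitive to the empirical distribution.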
Such an approach naturally reduces the aforementioned overfitting issue, which is due to hypersensitivity to the data distribution, and makes the final weight solution invariant to changes in $\mathbb{P}(z_\alpha, z_\beta)$, yielding a ``flat'' solution. \section{Conclusion and future work} In this paper, we propose a novel theoretical framework for deep (multi-layered) nonlinear networks with ReLU activation and local receptive fields. The framework utilizes the specific structure of neural networks and formulates the input data distribution explicitly. Compared to modeling deep models as non-convex problems, our framework reveals more structure of the network; compared to recent works that also take the data distribution into consideration, our theoretical framework can model deep networks without imposing idealized analytic data distributions such as Gaussian inputs or independent activations. Besides, we also analyze regularization techniques like Batch Normalization (BN), depict the underlying geometric intuition, and show that BN is compatible with our framework. Using this novel framework, we have made an initial attempt to analyze many important and practical issues in deep models, and provide a novel perspective on overfitting, generalization, disentangled representation, etc. We emphasize that in this work, we barely scratch the surface of these core issues in deep learning. As future work, we aim to explore them in a deeper and more thorough manner, using the powerful theoretical framework proposed in this paper.
\section{Introduction} The Lieb-Thirring inequality \cite{LT} and its extension by Araki \cite{Ar} are regarded as a strengthening of the celebrated Golden-Thompson trace inequality, which can be written, as explicitly stated in \cite{AH}, in terms of log-majorization \begin{equation}\label{F-1.1} (A^{1/2}BA^{1/2})^r\prec_{(\log)}A^{r/2}B^rA^{r/2},\qquad r\ge1, \end{equation} for matrices $A,B\ge0$. Here, for $n\times n$ matrices $X,Y\ge0$, the log-majorization $X\prec_{(\log)}Y$ means that $$ \prod_{i=1}^k\lambda_i(X)\le\prod_{i=1}^k\lambda_i(Y),\qquad k=1,\dots,n $$ with equality for $k=n$, where $\lambda_1(X)\ge\dots\ge\lambda_n(X)$ are the eigenvalues of $X$ arranged in decreasing order and counting multiplicities. The weak log-majorization $X\prec_{w(\log)}Y$ is referred to when the last equality is not imposed. A concise survey of majorization for matrices is found in, e.g., \cite{An2} (also \cite{Hi1,Hi2}). In the present paper we generalize the log-majorization in \eqref{F-1.1} to the log-convexity of the function $$ p\in[0,\infty)\longmapsto\lambda\bigl(\Phi(A^p)^{1/2}\Psi(B^p)\Phi(A^p)^{1/2}\bigr) $$ in the sense of the weak log-majorization order, involving positive linear maps $\Phi,\Psi$ between matrix algebras. More precisely, in Theorem \ref{T-3.1} of Section 3, we prove the weak log-majorization \begin{align} &\lambda\bigl(\Phi(A^{p_\alpha})^{1/2}\Psi(B^{p_\alpha})\Phi(A^{p_\alpha})^{1/2}\bigr) \nonumber\\ &\quad\prec_{w(\log)} \lambda^{1-\alpha}\bigl(\Phi(A^{p_0})^{1/2}\Psi(B^{p_0})\Phi(A^{p_0})^{1/2}\bigr) \lambda^\alpha\bigl(\Phi(A^{p_1})^{1/2}\Psi(B^{p_1})\Phi(A^{p_1})^{1/2}\bigr), \label{F-1.2} \end{align} where $p_\alpha:=(1-\alpha)p_0+\alpha p_1$ for $0\le\alpha\le1$.
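The weak log-majorization \eqref{F-1.2} is easy to test numerically. The following sketch checks the partial products of eigenvalues on a hypothetical random instance, taking the completely positive maps $\Phi=V^*\cdot V$ and $\Psi=W^*\cdot W$ (the dimensions and exponents are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def psd(n):                              # random positive semidefinite matrix
    X = rng.standard_normal((n, n))
    return X @ X.T

def powm(A, p):                          # symmetric matrix power A^p
    w, U = np.linalg.eigh(A)
    return (U * np.clip(w, 0.0, None) ** p) @ U.T

def lam(A, B, p, V, W):
    """Decreasing eigenvalues of Phi(A^p)^{1/2} Psi(B^p) Phi(A^p)^{1/2}."""
    PhiA = V.T @ powm(A, p) @ V          # Phi(X) = V^* X V
    PsiB = W.T @ powm(B, p) @ W          # Psi(X) = W^* X W
    S = powm(PhiA, 0.5)
    return np.sort(np.linalg.eigvalsh(S @ PsiB @ S))[::-1]

n, m, l = 4, 5, 3
A, B = psd(n), psd(m)
V, W = rng.standard_normal((n, l)), rng.standard_normal((m, l))
p0, p1, alpha = 0.5, 2.0, 0.3
pa = (1 - alpha) * p0 + alpha * p1

lhs = lam(A, B, pa, V, W)
rhs = lam(A, B, p0, V, W) ** (1 - alpha) * lam(A, B, p1, V, W) ** alpha
ok = all(np.prod(lhs[:k]) <= np.prod(rhs[:k]) * (1 + 1e-9)
         for k in range(1, l + 1))
print(ok)   # the weak log-majorization holds on this instance
```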
In particular, when $\Phi=\Psi=\mathrm{id}$ and $(p_0,p_1)=(0,1)$, \eqref{F-1.2} reduces to \begin{equation}\label{F-1.3} \lambda(A^{\alpha/2}B^\alpha A^{\alpha/2})\prec_{w(\log)} \lambda^\alpha(A^{1/2}BA^{1/2}),\qquad0\le\alpha\le1, \end{equation} which is equivalent to \eqref{F-1.1} by letting $\alpha=1/r$ and replacing $A,B$ with $A^r,B^r$. In Section 2 we show an operator norm inequality in a more general setting by a method using operator means. In Section 3 we extend this inequality to the weak log-majorization \eqref{F-1.2} by applying the well-known antisymmetric tensor power technique. The recent paper of Bourin and Lee \cite{BL} contains, as a consequence of their joint log-convexity theorem for a two-variable norm function, the weak log-majorization $$ (A^{1/2}Z^*BZA^{1/2})^r\prec_{w(\log)}A^{r/2}Z^*B^rZA^{r/2},\qquad r\ge1, $$ which is closely related to ours, as explicitly mentioned in Remark \ref{R-3.6} of Section 3. The complementary Golden-Thompson inequality was first shown in \cite{HP} and then it was extended in \cite{AH} to the log-majorization $$ A^r\,\#_\alpha\,B^r\prec_{(\log)}(A\,\#_\alpha\,B)^r,\qquad r\ge1, $$ where $\#_\alpha$ is the weighted geometric mean for $0\le\alpha\le1$. In a more recent paper \cite{Wa} the class of operator means $\sigma$ for which $\lambda_1(A^r\,\sigma\,B^r)\le\lambda_1^r(A\,\sigma\,B)$ holds for all $r\ge1$ was characterized in terms of operator monotone functions representing $\sigma$. In Section 4 of the paper, we show some generalizations of these results in \cite{AH,Wa} in a somewhat similar way to that of Araki's log-majorization in Sections 2 and 3. \section{Operator norm inequalities} For $n\in\mathbb{N}$ we write $\mathbb{M}_n$ for the $n\times n$ complex matrix algebra and $\mathbb{M}_n^+$ for the $n\times n$ positive semidefinite matrices. For $A\in\mathbb{M}_n$ we write $A\ge0$ if $A\in\mathbb{M}_n^+$, and $A>0$ if $A$ is positive definite, i.e., $A\ge0$ and $A$ is invertible. 
The operator norm and the usual trace of $A\in\mathbb{M}_n$ are denoted by $\|A\|_\infty$ and $\mathrm{Tr}\, A$, respectively. We denote by $\mathrm{OM}_{+,1}$ the set of non-negative operator monotone functions $f$ on $[0,\infty)$ such that $f(1)=1$. In the theory of operator means due to Kubo and Ando \cite{KA}, a main result says that each operator mean $\sigma$ is associated with an $f\in\mathrm{OM}_{+,1}$ in such a way that $$ A\,\sigma\,B:=A^{1/2}f(A^{-1/2}BA^{-1/2})A^{1/2} $$ for $A,B\in\mathbb{M}_n^+$ with $A>0$, which is further extended to general $A,B\in\mathbb{M}_n^+$ as $$ A\,\sigma\,B:=\lim_{\varepsilon\searrow0}(A+\varepsilon I_n)\,\sigma\,(B+\varepsilon I_n). $$ We write $\sigma_f$ for the operator mean associated with $f\in\mathrm{OM}_{+,1}$. For $0\le\alpha\le1$, the operator mean corresponding to the function $x^\alpha$ in $\mathrm{OM}_{+,1}$ is called the {\it weighted geometric mean} denoted by $\#_\alpha$; more explicitly, $$ A\,\#_\alpha\,B=A^{1/2}(A^{-1/2}BA^{-1/2})^\alpha A^{1/2} $$ for $A,B\in\mathbb{M}_n^+$ with $A>0$. The case $\alpha=1/2$ is the {\it geometric mean} $\#$, first introduced by Pusz and Woronowicz \cite{PW}. Let $\sigma_f^*$ be the adjoint of $\sigma_f$, i.e., the operator mean corresponding to $f^*\in\mathrm{OM}_{+,1}$ defined as $f^*(x):=f(x^{-1})^{-1}$, $x>0$. A linear map $\Phi:\mathbb{M}_n\to\mathbb{M}_l$ is said to be positive if $\Phi(A)\in\mathbb{M}_l^+$ for all $A\in\mathbb{M}_n^+$, which is furthermore said to be strictly positive if $\Phi(I_n)>0$, that is, $\Phi(A)>0$ for all $A\in\mathbb{M}_n$ with $A>0$. Throughout the rest of the paper, we assume that $\Phi:\mathbb{M}_n\to\mathbb{M}_l$ and $\Psi:\mathbb{M}_m\to\mathbb{M}_l$ are positive linear maps. Recall the well-known fact, essentially due to Ando \cite{An1}, that $$ \Phi(A\,\sigma\,B)\le\Phi(A)\,\sigma\,\Phi(B) $$ for all $A,B\in\mathbb{M}_n^+$ and for any operator mean $\sigma$. This will be repeatedly used without reference in the sequel.
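Ando's inequality above can be spot-checked numerically; here is a minimal sketch for the geometric mean $\sigma=\#$, with random strictly positive matrices and a hypothetical positive linear map $\Phi=V^*\cdot V$ (all sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def powm(A, p):                        # symmetric matrix power A^p (A > 0)
    w, U = np.linalg.eigh(A)
    return (U * w ** p) @ U.T

def gmean(A, B, a=0.5):                # weighted geometric mean A #_a B
    Ah, Aih = powm(A, 0.5), powm(A, -0.5)
    return Ah @ powm(Aih @ B @ Aih, a) @ Ah

def pd(n):                             # random positive definite matrix
    X = rng.standard_normal((n, n))
    return X @ X.T + 0.1 * np.eye(n)

n, l = 4, 3
A, B = pd(n), pd(n)
V = rng.standard_normal((n, l))
Phi = lambda X: V.T @ X @ V            # a positive linear map

# Ando: Phi(A # B) <= Phi(A) # Phi(B), i.e. the gap below is PSD.
gap = gmean(Phi(A), Phi(B)) - Phi(gmean(A, B))
print(np.linalg.eigvalsh(gap).min())   # >= 0 up to rounding
```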
For non-negative functions $\varphi_0$ and $\varphi_1$ on $[0,\infty)$ a new non-negative function $\varphi:=\varphi_0\,\sigma_f\,\varphi_1$ on $[0,\infty)$ is defined as $$ \varphi(x)=\varphi_0(x)\,\sigma_f\,\varphi_1(x) =\lim_{\varepsilon\searrow0}(\varphi_0(x)+\varepsilon)f\biggl({\varphi_1(x)+\varepsilon\over\varphi_0(x)+\varepsilon}\biggr), \qquad x\in[0,\infty). $$ \begin{prop}\label{P-2.1} Let $f\in\mathrm{OM}_{+,1}$. Let $\varphi_0$ and $\varphi_1$ be arbitrary non-negative functions on $[0,\infty)$ and define the functions $\varphi:=\varphi_0\,\sigma_f\,\varphi_1$ and $\widetilde\varphi:=\varphi_0\,\sigma_f^*\,\varphi_1$ on $[0,\infty)$ as above. Then for every $A\in\mathbb{M}_n^+$ and $B\in\mathbb{M}_m^+$, \begin{align*} &\big\|\Phi(\widetilde\varphi(A))^{1/2}\Psi(\varphi(B)) \Phi(\widetilde\varphi(A))^{1/2}\big\|_\infty \\ &\quad\le\max\bigl\{\big\|\Phi(\varphi_0(A))^{1/2}\Psi(\varphi_0(B)) \Phi(\varphi_0(A))^{1/2}\big\|_\infty, \big\|\Phi(\varphi_1(A))^{1/2}\Psi(\varphi_1(B)) \Phi(\varphi_1(A))^{1/2}\big\|_\infty\bigr\}. \end{align*} \end{prop} \begin{proof} Letting $$ \gamma_k:=\big\|\Phi(\varphi_k(A))^{1/2}\Psi(\varphi_k(B)) \Phi(\varphi_k(A))^{1/2}\big\|_\infty,\qquad k=0,1, $$ we may prove that \begin{equation}\label{F-2.1} \Phi(\widetilde\varphi(A))^{1/2}\Psi(\varphi(B))\Phi(\widetilde\varphi(A))^{1/2}\le \max\{\gamma_0,\gamma_1\}I_l. \end{equation} First, assume that $\Phi$ and $\Psi$ are strictly positive and $\varphi_0(x),\varphi_1(x)>0$ for any $x\ge0$. Then $\gamma_0,\gamma_1>0$, and we have $$ \Psi(\varphi_k(B))\le\gamma_k\Phi(\varphi_k(A))^{-1},\qquad k=0,1. 
$$ Since $\varphi(B)=\varphi_0(B)\,\sigma_f\,\varphi_1(B)$ and $\widetilde\varphi(A)=\varphi_0(A)\,\sigma_f^*\,\varphi_1(A)$, by the joint monotonicity of $\sigma_f$ we have \begin{align} \Psi(\varphi(B))&\le\Psi(\varphi_0(B))\,\sigma_f\,\Psi(\varphi_1(B)) \nonumber\\ &\le\bigl(\gamma_0\Phi(\varphi_0(A))^{-1}\bigr)\,\sigma_f\, \bigl(\gamma_1\Phi(\varphi_1(A))^{-1}\bigr) \nonumber\\ &\le\max\{\gamma_0,\gamma_1\}\bigl\{\Phi(\varphi_0(A))\,\sigma_f^*\,\Phi(\varphi_1(A))\bigr\}^{-1} \nonumber\\ &\le\max\{\gamma_0,\gamma_1\}\Phi\bigl(\varphi_0(A)\,\sigma_f^*\,\varphi_1(A)\bigr)^{-1} \nonumber\\ &=\max\{\gamma_0,\gamma_1\}\Phi(\widetilde\varphi(A))^{-1}, \label{F-2.2} \end{align} which implies \eqref{F-2.1} under the assumptions given above. For the general case, for every $\varepsilon>0$ we define a strictly positive $\Phi_\varepsilon:\mathbb{M}_n\to\mathbb{M}_l$ by $$ \Phi_\varepsilon(X):=\Phi(X)+\varepsilon\mathrm{Tr}\,(X)I_l $$ and similarly $\Psi_\varepsilon:\mathbb{M}_m\to\mathbb{M}_l$. Moreover let $\varphi_{k,\varepsilon}(x):=\varphi_k(x)+\varepsilon$, $k=0,1$, for $x\ge0$, and $\varphi_\varepsilon:=\varphi_{0,\varepsilon}\,\sigma_f\,\varphi_{1,\varepsilon}$, $\widetilde\varphi_\varepsilon:=\varphi_{0,\varepsilon}\,\sigma_f^*\,\varphi_{1,\varepsilon}$. By the above case we then have \begin{equation}\label{F-2.3} \Phi_\varepsilon(\widetilde\varphi_\varepsilon(A))^{1/2}\Psi_\varepsilon(\varphi_\varepsilon(B)) \Phi_\varepsilon(\widetilde\varphi_\varepsilon(A))^{1/2} \le\max\{\gamma_{0,\varepsilon},\gamma_{1,\varepsilon}\}I_l, \end{equation} where $$ \gamma_{k,\varepsilon}:=\big\|\Phi_\varepsilon(\varphi_{k,\varepsilon}(A))^{1/2}\Psi_\varepsilon(\varphi_{k,\varepsilon}(B)) \Phi_\varepsilon(\varphi_{k,\varepsilon}(A))^{1/2}\big\|_\infty,\qquad k=0,1.
$$ Since $\widetilde\varphi_\varepsilon(A)\to\widetilde\varphi(A)$, $\varphi_\varepsilon(B)\to\varphi(B)$ and $\gamma_{k,\varepsilon}\to\gamma_k$, $k=0,1$, as $\varepsilon\searrow0$, we have \eqref{F-2.1} in the general case by taking the limit of \eqref{F-2.3}. \end{proof} For non-negative functions $\varphi_0,\varphi_1$ the function $\varphi_0^{1-\alpha}\varphi_1^\alpha$ with $0\le\alpha\le1$ is often called the {\it geometric bridge} of $\varphi_0,\varphi_1$, for which we have \begin{prop}\label{P-2.2} Let $\varphi_0,\varphi_1$ be arbitrary non-negative functions on $[0,\infty)$ and $0\le\alpha\le1$. Define $\varphi_\alpha(x):=\varphi_0(x)^{1-\alpha}\varphi_1(x)^\alpha$ on $[0,\infty)$ (with convention $0^0:=1$). Then for every $A\in\mathbb{M}_n^+$ and $B\in\mathbb{M}_m^+$, \begin{align*} &\big\|\Phi(\varphi_\alpha(A))^{1/2}\Psi(\varphi_\alpha(B)) \Phi(\varphi_\alpha(A))^{1/2}\big\|_\infty \\ &\quad\le\big\|\Phi(\varphi_0(A))^{1/2}\Psi(\varphi_0(B)) \Phi(\varphi_0(A))^{1/2}\big\|_\infty^{1-\alpha} \big\|\Phi(\varphi_1(A))^{1/2}\Psi(\varphi_1(B)) \Phi(\varphi_1(A))^{1/2}\big\|_\infty^\alpha. \end{align*} \end{prop} \begin{proof} When $f(x):=x^\alpha=f^*(x)$ where $0\le\alpha\le1$, note that $\varphi_\alpha=\varphi_0\,\sigma_f\,\varphi_1=\varphi_0\,\sigma_f^*\,\varphi_1$. With the same notation as in the proof of Proposition \ref{P-2.1}, inequality \eqref{F-2.2} is improved in the present case as $$ \Psi(\varphi_\alpha(B))\le\gamma_0^{1-\alpha}\gamma_1^\alpha\Phi(\varphi_\alpha(A))^{-1} $$ for every $\alpha\in[0,1]$. Hence the asserted inequality follows as in the above proof. \end{proof} In particular, when $\varphi_0(x)=1$ and $\varphi_1(x)=x$, since $\varphi_0\,\sigma_f\,\varphi_1=f$ and $\varphi_0\,\sigma_f^*\,\varphi_1=f^*$ in Proposition \ref{P-2.1}, we have \begin{cor}\label{C-2.3} Assume that $\Phi(I_n)^{1/2}\Psi(I_m)\Phi(I_n)^{1/2}\le I_l$. 
If $A\in\mathbb{M}_n^+$ and $B\in\mathbb{M}_m^+$ satisfy $\Phi(A)^{1/2}\Psi(B)\Phi(A)^{1/2}\le I_l$, then \begin{equation}\label{F-2.4} \Phi(f^*(A))^{1/2}\Psi(f(B))\Phi(f^*(A))^{1/2}\le I_l \end{equation} for every $f\in\mathrm{OM}_{+,1}$, and in particular, $$ \Phi(A^\alpha)^{1/2}\Psi(B^\alpha)\Phi(A^\alpha)^{1/2}\le I_l,\qquad0\le\alpha\le1. $$ \end{cor} \begin{remark}\label{R-2.4}\rm Assume that both $\Phi$ and $\Psi$ are sub-unital, i.e., $\Phi(I_n)\le I_l$ and $\Psi(I_m)\le I_l$. If $A\in\mathbb{M}_n^+$ and $B\in\mathbb{M}_m^+$ satisfy $\Phi(A)^{1/2}\Psi(B)\Phi(A)^{1/2}\le I_l$, then one can see \eqref{F-2.4} in a simpler way as follows: By continuity, one can assume that $\Phi$ is strictly positive and $A>0$; then $$ \Psi(f(B))\le f(\Psi(B))\le f(\Phi(A)^{-1})=f^*(\Phi(A))^{-1}\le\Phi(f^*(A))^{-1}. $$ The first and the last inequalities above hold by the Jensen-type inequalities due to \cite[Theorem 2.1]{Ch} and \cite[Theorem 2.1]{HaPe}. The merit of our method using the operator mean $\sigma_f$ is that it enables us to relax the sub-unitality assumption into $\Phi(I_n)^{1/2}\Psi(I_m)\Phi(I_n)^{1/2}\le I_l$. \end{remark} \section{Log-majorization} When $\varphi_0$ and $\varphi_1$ are power functions, we can extend Proposition \ref{P-2.2} to the log-majorization result in the next theorem. For $A\in\mathbb{M}_n^+$ we write $\lambda(A)=(\lambda_1(A),\dots,\lambda_n(A))$ for the eigenvalues of $A$ arranged in decreasing order with multiplicities. Also, for $X\in\mathbb{M}_n$ let $s(X)=(s_1(X),\dots,s_n(X))$ be the singular values of $X$ in decreasing order with multiplicities.
For two non-negative vectors $a=(a_1,\dots,a_n)$ and $b=(b_1,\dots,b_n)$ where $a_1\ge\dots\ge a_n\ge0$ and $b_1\ge\dots\ge b_n\ge0$, the {\it weak log-majorization} (or the {\it log-submajorization}) $a\prec_{w(\log)}b$ means that \begin{equation}\label{F-3.1} \prod_{i=1}^ka_i\le\prod_{i=1}^kb_i,\qquad1\le k\le n, \end{equation} and the {\it log-majorization} $a\prec_{(\log)}b$ means that $a\prec_{w(\log)}b$ and equality holds for $k=n$ in \eqref{F-3.1}. On the other hand, the {\it log-supermajorization} $a\prec^{w(\log)}b$ is defined as $$ \prod_{i=n-k+1}^na_i\ge\prod_{i=n-k+1}^nb_i,\qquad1\le k\le n. $$ \begin{thm}\label{T-3.1} Let $p_0,p_1\in[0,\infty)$ and $0\le\alpha\le1$, and let $p_\alpha:=(1-\alpha)p_0+\alpha p_1$. Then for every $A\in\mathbb{M}_n^+$ and $B\in\mathbb{M}_m^+$, \begin{align} &\lambda\bigl(\Phi(A^{p_\alpha})^{1/2}\Psi(B^{p_\alpha})\Phi(A^{p_\alpha})^{1/2}\bigr) \nonumber\\ &\quad\prec_{w(\log)} \lambda^{1-\alpha}\bigl(\Phi(A^{p_0})^{1/2}\Psi(B^{p_0})\Phi(A^{p_0})^{1/2}\bigr) \lambda^\alpha\bigl(\Phi(A^{p_1})^{1/2}\Psi(B^{p_1})\Phi(A^{p_1})^{1/2}\bigr), \label{F-3.2} \end{align} or equivalently, \begin{align} &s\bigl(\Phi(A^{p_\alpha})^{1/2}\Psi(B^{p_\alpha})^{1/2}\bigr) \nonumber\\ &\qquad\prec_{w(\log)}s^{1-\alpha}\bigl(\Phi(A^{p_0})^{1/2}\Psi(B^{p_0})^{1/2}\bigr) s^\alpha\bigl(\Phi(A^{p_1})^{1/2}\Psi(B^{p_1})^{1/2}\bigr). \label{F-3.3} \end{align} In particular, for every $A,B\in\mathbb{M}_n^+$, \begin{equation}\label{F-3.4} s(A^{p_\alpha}B^{p_\alpha})\prec_{(\log)} s^{1-\alpha}(A^{p_0}B^{p_0})s^\alpha(A^{p_1}B^{p_1}). \end{equation} \end{thm} \begin{proof} Let $C^*(I,A)$ be the commutative $C^*$-subalgebra of $\mathbb{M}_n$ generated by $I,A$. We may consider, instead of $\Phi$, the composition of the trace-preserving conditional expectation from $\mathbb{M}_n$ onto $C^*(I,A)$ and $\Phi|_{C^*(I,A)}:C^*(I,A)\to\mathbb{M}_l$, which is completely positive. Hence one can assume that $\Phi$ is completely positive and similarly for $\Psi$.
The weak log-majorization \eqref{F-3.2} means that \begin{align} &\prod_{i=1}^k\lambda_i\bigl(\Phi(A^{p_\alpha})^{1/2}\Psi(B^{p_\alpha}) \Phi(A^{p_\alpha})^{1/2}\bigr) \nonumber\\ &\quad\le\prod_{i=1}^k \lambda_i^{1-\alpha}\bigl(\Phi(A^{p_0})^{1/2}\Psi(B^{p_0})\Phi(A^{p_0})^{1/2}\bigr) \lambda_i^\alpha\bigl(\Phi(A^{p_1})^{1/2}\Psi(B^{p_1})\Phi(A^{p_1})^{1/2}\bigr) \label{F-3.5} \end{align} for every $k=1,\dots,l$. The case $k=1$ is Proposition \ref{P-2.2} in the case where $\varphi_0(x):=x^{p_0}$ and $\varphi_1(x):=x^{p_1}$ so that $\varphi_\alpha(x)=x^{p_\alpha}$. Next, for each $k$ with $2\le k\le l$ we consider the $k$-fold tensor product $$ \Phi^{\otimes k}:\mathbb{M}_n^{\otimes k}=B((\mathbb{C}^n)^{\otimes k})\to \mathbb{M}_l^{\otimes k}=B((\mathbb{C}^l)^{\otimes k}), $$ and similarly for $\Psi^{\otimes k}$. Let $P_\wedge$ be the orthogonal projection from $(\mathbb{C}^l)^{\otimes k}$ onto the $k$-fold antisymmetric tensor Hilbert space $(\mathbb{C}^l)^{\wedge k}$. Since $\Phi$ and $\Psi$ are assumed completely positive, one can define positive linear maps \begin{align*} \Phi^{(k)}&:=P_\wedge\Phi^{\otimes k}(\cdot)P_\wedge: \mathbb{M}_n^{\otimes k}\to B((\mathbb{C}^l)^{\wedge k}), \\ \Psi^{(k)}&:=P_\wedge\Psi^{\otimes k}(\cdot)P_\wedge: \mathbb{M}_m^{\otimes k}\to B((\mathbb{C}^l)^{\wedge k}). \end{align*} For every $X\in\mathbb{M}_n$ we note that $\Phi^{(k)}(X^{\otimes k})=P_\wedge\Phi(X)^{\otimes k}P_\wedge$ is nothing but the $k$-fold antisymmetric tensor power $\Phi(X)^{\wedge k}$ of $\Phi(X)$. 
By applying the case $k=1$ shown above to $A^{\otimes k}$ and $B^{\otimes k}$ we have \begin{align*} &\lambda_1\bigl(\Phi^{(k)}((A^{\otimes k})^{p_\alpha})^{1/2} \Psi^{(k)}((B^{\otimes k})^{p_\alpha})\Phi^{(k)}((A^{\otimes k})^{p_\alpha})^{1/2}\bigr) \\ &\quad\le\lambda_1^{1-\alpha}\bigl(\Phi^{(k)}((A^{\otimes k})^{p_0})^{1/2} \Psi^{(k)}((B^{\otimes k})^{p_0})\Phi^{(k)}((A^{\otimes k})^{p_0})^{1/2}\bigr) \\ &\qquad\qquad\lambda_1^\alpha\bigl(\Phi^{(k)}((A^{\otimes k})^{p_1})^{1/2} \Psi^{(k)}((B^{\otimes k})^{p_1})\Phi^{(k)}((A^{\otimes k})^{p_1})^{1/2}\bigr). \end{align*} Since $\Phi^{(k)}((A^{\otimes k})^{p_\alpha})=\Phi(A^{p_\alpha})^{\wedge k}$ and $\Psi^{(k)}((B^{\otimes k})^{p_\alpha})=\Psi(B^{p_\alpha})^{\wedge k}$, the above left-hand side is $$ \lambda_1\Bigl(\bigl(\Phi(A^{p_\alpha})^{1/2}\Psi(B^{p_\alpha}) \Phi(A^{p_\alpha})^{1/2}\bigr)^{\wedge k}\Bigr) =\prod_{i=1}^k\lambda_i\bigl(\Phi(A^{p_\alpha})^{1/2}\Psi(B^{p_\alpha}) \Phi(A^{p_\alpha})^{1/2}\bigr) $$ and the right-hand side is \begin{align*} &\lambda_1^{1-\alpha}\Bigl(\bigl(\Phi(A^{p_0})^{1/2}\Psi(B^{p_0}) \Phi(A^{p_0})^{1/2}\bigr)^{\wedge k}\Bigr) \lambda_1^\alpha\Bigl(\bigl(\Phi(A^{p_1})^{1/2}\Psi(B^{p_1}) \Phi(A^{p_1})^{1/2}\bigr)^{\wedge k}\Bigr) \\ &\quad=\prod_{i=1}^k\lambda_i^{1-\alpha}\bigl(\Phi(A^{p_0})^{1/2}\Psi(B^{p_0}) \Phi(A^{p_0})^{1/2}\bigr) \lambda_i^\alpha\bigl(\Phi(A^{p_1})^{1/2}\Psi(B^{p_1})\Phi(A^{p_1})^{1/2}\bigr). \end{align*} Hence we have \eqref{F-3.5} for every $k=1,\dots,l$, so \eqref{F-3.2} follows. Since $\lambda(\Phi(A^{p_\alpha})^{1/2}\Psi(B^{p_\alpha})\Phi(A^{p_\alpha})^{1/2}) =s^2(\Phi(A^{p_\alpha})^{1/2}\Psi(B^{p_\alpha})^{1/2})$, it is clear that \eqref{F-3.2} and \eqref{F-3.3} are equivalent. When $\Phi=\Psi=\mathrm{id}$ and $A,B$ are replaced with $A^2,B^2$, \eqref{F-3.3} reduces to \eqref{F-3.4}.
\end{proof} \begin{remark}\label{R-3.2}\rm It is not known whether a modification of \eqref{F-3.3} $$ s(\Phi(A^{p_\alpha})\Psi(B^{p_\alpha})) \prec_{w(\log)}s^{1-\alpha}(\Phi(A^{p_0})\Psi(B^{p_0}))s^\alpha(\Phi(A^{p_1})\Psi(B^{p_1})) $$ holds true or not. \end{remark} By reducing \eqref{F-3.2} to the case $(p_0,p_1)=(0,1)$ we have \begin{cor}\label{C-3.3} Let $0\le\alpha\le1$. Then for every $A\in\mathbb{M}_n^+$ and $B\in\mathbb{M}_m^+$, \begin{align} &\lambda\bigl(\Phi(A^\alpha)^{1/2}\Psi(B^\alpha)\Phi(A^\alpha)^{1/2}\bigr) \nonumber\\ &\quad\prec_{w(\log)} \lambda^{1-\alpha}\bigl(\Phi(I_n)^{1/2}\Psi(I_m)\Phi(I_n)^{1/2}\bigr) \lambda^\alpha\bigl(\Phi(A)^{1/2}\Psi(B)\Phi(A)^{1/2}\bigr). \label{F-3.6} \end{align} Consequently, if $\Phi(I_n)^{1/2}\Psi(I_m)\Phi(I_n)^{1/2}\le I_l$, then \begin{equation}\label{F-3.7} \lambda\bigl(\Phi(A^\alpha)^{1/2}\Psi(B^\alpha)\Phi(A^\alpha)^{1/2}\bigr) \prec_{w(\log)}\lambda^\alpha\bigl(\Phi(A)^{1/2}\Psi(B)\Phi(A)^{1/2}\bigr). \end{equation} \end{cor} The last log-majorization with $\Phi=\Psi=\mathrm{id}$ and also \eqref{F-3.4} with $(p_0,p_1)=(0,1)$ give Araki's log-majorization \eqref{F-1.3} or $s(A^\alpha B^\alpha)\prec_{(\log)}s^\alpha(AB)$ for $0\le\alpha\le1$. By letting $\alpha=1/r$ with $r\ge1$ and replacing $A,B$ with $A^r,B^r$ one can rephrase \eqref{F-3.6} as \begin{align} &\lambda^r\bigl(\Phi(A)^{1/2}\Psi(B)\Phi(A)^{1/2}\bigr) \nonumber\\ &\quad\prec_{w(\log)} \lambda^{r-1}\bigl(\Phi(I_n)^{1/2}\Psi(I_m)\Phi(I_n)^{1/2}\bigr) \lambda\bigl(\Phi(A^r)^{1/2}\Psi(B^r)\Phi(A^r)^{1/2}\bigr) \label{F-3.8} \end{align} for all $r\ge1$. Also, when $\Phi(I_n)^{1/2}\Psi(I_m)\Phi(I_n)^{1/2}\le I_l$, \eqref{F-3.7} is rewritten as \begin{equation}\label{F-3.9} \lambda^r\bigl(\Phi(A)^{1/2}\Psi(B)\Phi(A)^{1/2}\bigr)\prec_{w(\log)} \lambda\bigl(\Phi(A^r)^{1/2}\Psi(B^r)\Phi(A^r)^{1/2}\bigr),\qquad r\ge1. 
\end{equation} A norm $\|\cdot\|$ on $\mathbb{M}_n$ is called a {\it unitarily invariant norm} (or a {\it symmetric norm}) if $\|UXV\|=\|X\|$ for all $X,U,V\in\mathbb{M}_n$ with $U,V$ unitaries. \begin{cor}\label{C-3.4} Let $p_0$, $p_1$, and $p_\alpha$ for $0\le\alpha\le1$ be as in Theorem \ref{T-3.1}. Let $\|\cdot\|$ be any unitarily invariant norm and $r>0$. Then for every $A\in\mathbb{M}_n^+$ and $B\in\mathbb{M}_m^+$, \begin{align} &\big\|\,\big|\Phi(A^{p_\alpha})^{1/2}\Psi(B^{p_\alpha})^{1/2}\big|^r\big\| \nonumber\\ &\qquad\le\big\|\,\big|\Phi(A^{p_0})^{1/2}\Psi(B^{p_0})^{1/2}\,\big|^r\big\|^{1-\alpha} \big\|\,\big|\Phi(A^{p_1})^{1/2}\Psi(B^{p_1})^{1/2}\big|^r\big\|^\alpha. \label{F-3.10} \end{align} In particular, for every $A,B\in\mathbb{M}_n^+$, $$ \|\,|A^{p_\alpha}B^{p_\alpha}|^r\| \le\|\,|A^{p_0}B^{p_0}|^r\|^{1-\alpha}\|\,|A^{p_1}B^{p_1}|^r\|^\alpha. $$ \end{cor} \begin{proof} We may assume that $0<\alpha<1$. Let $\psi$ be the symmetric gauge function on $\mathbb{R}^l$ corresponding to the unitarily invariant norm $\|\cdot\|$, so $\|X\|=\psi(s(X))$ for $X\in\mathbb{M}_l$. Recall \cite[IV.1.6]{Bh1} that $\psi$ satisfies the H\"older inequality $$ \psi(a_1b_1,\dots,a_lb_l)\le \psi^{1-\alpha}\Bigl(a_1^{1\over1-\alpha},\dots,a_l^{1\over1-\alpha}\Bigr) \psi^\alpha\Bigl(b_1^{1\over\alpha},\dots,b_l^{1\over\alpha}\Bigr) $$ for every $a,b\in[0,\infty)^l$. Also, it is well-known (see, e.g., \cite[Proposition 4.1.6 and Lemma 4.4.2]{Hi2}) that $a\prec_{w(\log)}b$ implies the weak majorization $a\prec_wb$ and so $\psi(a)\le\psi(b)$. 
Hence it follows from the weak log-majorization in \eqref{F-3.3} that \begin{align*} &\big\|\,\big|\Phi(A^{p_\alpha})^{1/2}\Psi(B^{p_\alpha})^{1/2}\big|^r\big\| \\ &\quad=\psi\Bigl(s^r\bigl(\Phi(A^{p_\alpha})^{1/2}\Psi(B^{p_\alpha})^{1/2}\bigr)\Bigr) \\ &\quad\le\psi\Bigl(s^{(1-\alpha)r}\bigl(\Phi(A^{p_0})^{1/2}\Psi(B^{p_0})^{1/2}\bigr) s^{\alpha r}\bigl(\Phi(A^{p_1})^{1/2}\Psi(B^{p_1})^{1/2}\bigr)\Bigr) \\ &\quad\le\psi^{1-\alpha}\Bigl(s^r\bigl(\Phi(A^{p_0})^{1/2}\Psi(B^{p_0})^{1/2}\bigr)\Bigr) \psi^\alpha\Bigl(s^r\bigl(\Phi(A^{p_1})^{1/2}\Psi(B^{p_1})^{1/2}\bigr)\Bigr) \\ &\quad\le\big\|\,\big|\Phi(A^{p_0})^{1/2}\Psi(B^{p_0})^{1/2}\,\big|^r\big\|^{1-\alpha} \big\|\,\big|\Phi(A^{p_1})^{1/2}\Psi(B^{p_1})^{1/2}\big|^r\big\|^\alpha. \end{align*} \end{proof} The norm inequality in \eqref{F-3.10} is a H\"older-type inequality, showing the log-convexity of the function $$ p\in[0,\infty)\longmapsto\big\|\,\big|\Phi(A^p)^{1/2}\Psi(B^p)^{1/2}\big|^r\big\|. $$ \begin{cor}\label{C-3.5} Let $\|\cdot\|$ be a unitarily invariant norm. If $\Phi(I_n)^{1/2}\Psi(I_m)\Phi(I_n)^{1/2}\le I_l$, then for every $A\in\mathbb{M}_n^+$ and $B\in\mathbb{M}_m^+$, $$ \big\|\bigl\{\Phi(A^p)^{1/2}\Psi(B^p)\Phi(A^p)^{1/2}\bigr\}^{1/p}\big\|\le \big\|\bigl\{\Phi(A^q)^{1/2}\Psi(B^q)\Phi(A^q)^{1/2}\bigr\}^{1/q}\big\| \quad\mbox{if $0<p\le q$}. $$ Furthermore, if $\Phi$ and $\Psi$ are unital and $A,B>0$, then $$ \big\|\bigl\{\Phi(A^p)^{1/2}\Psi(B^p)\Phi(A^p)^{1/2}\bigr\}^{1/p}\big\| $$ decreases to $\|\exp\{\Phi(\log A)+\Psi(\log B)\}\|$ as $p\searrow0$. \end{cor} \begin{proof} Let $0<p\le q$. By applying \eqref{F-3.7} to $A^q$, $B^q$ and $\alpha=p/q$ we have $$ \lambda^{1/p}\bigl(\Phi(A^p)^{1/2}\Psi(B^p)\Phi(A^p)^{1/2}\bigr) \prec_{w(\log)}\lambda^{1/q}\bigl(\Phi(A^q)^{1/2}\Psi(B^q)\Phi(A^q)^{1/2}\bigr), $$ which implies the desired norm inequality.
Under the additional assumptions on $\Phi,\Psi$ and $A,B$ as stated in the corollary, the proof of the limit formula is standard, using $$ \Phi(A^p)^{1/2}=I_l+{p\over2}\,\Phi(\log A)+o(p),\quad \Psi(B^p)=I_l+p\Psi(\log B)+o(p) $$ as $p\to0$. \end{proof} \begin{remark}\label{R-3.6}\rm When $\Phi=\mathrm{id}$ and $\Psi=Z^*\cdot Z$ with a contraction $Z\in\mathbb{M}_n$, it follows from \eqref{F-3.9} that, for every $A,B\in\mathbb{M}_n^+$, \begin{equation}\label{F-3.11} \lambda^r\bigl(A^{1/2}Z^*BZA^{1/2}\bigr)\prec_{w(\log)} \lambda\bigl(A^{r/2}Z^*B^rZA^{r/2}\bigr),\qquad r\ge1, \end{equation} which is \cite[Corollary 2.3]{BL}. Although the form of \eqref{F-3.9} is seemingly more general than that of \eqref{F-3.11}, it is in fact easy to see that \eqref{F-3.9} follows from \eqref{F-3.11} conversely. Indeed, we may assume as in the proof of Theorem \ref{T-3.1} that $\Phi$ and $\Psi$ are completely positive. Then, via the Stinespring representation (see, e.g., \cite[Theorem 3.1.2]{Bh2}), we may further assume that $\Phi=V^*\cdot V$ with an operator $V:\mathbb{C}^l\to\mathbb{C}^n$ and $\Psi=W^*\cdot W$ with an operator $W:\mathbb{C}^l\to\mathbb{C}^m$. The assumption $\Phi(I)^{1/2}\Psi(I)\Phi(I)^{1/2}\le I$ is equivalent to $\|WV^*\|_\infty\le1$. One can see that $$ \Phi(A^r)^{1/2}\Psi(B^r)\Phi(A^r)^{1/2}=(V^*A^rV)^{1/2}(W^*B^rW)(V^*A^rV)^{1/2} $$ is unitarily equivalent to $A^{r/2}VW^*B^rWV^*A^{r/2}$, and thus \eqref{F-3.11} implies \eqref{F-3.9}. Here, it should be noted that the proof of \eqref{F-3.11} in \cite{BL} is valid even though $Z=WV^*$ is an $m\times n$ (not necessarily square) matrix. In this way, the log-majorization in \eqref{F-3.9} is equivalent to \cite[Corollary 2.3]{BL}. Similarly, Corollary \ref{C-3.5} is equivalent to \cite[Corollary 2.2]{BL}. The author is indebted to J.-C.~Bourin for the remark here.
\end{remark} \section{More inequalities for operator means} The log-majorization obtained in \cite{AH} for the weighted geometric means says that, for every $0\le\alpha\le1$ and every $A,B\in\mathbb{M}_n^+$, \begin{equation}\label{F-4.1} \lambda(A^r\,\#_\alpha\,B^r)\prec_{(\log)}\lambda^r(A\,\#_\alpha\,B), \qquad r\ge1, \end{equation} or equivalently, $$ \lambda^q(A\,\#_\alpha\,B)\prec_{(\log)}\lambda(A^q\,\#_\alpha\,B^q), \qquad0\le q\le1. $$ The essential first step to prove this is the operator norm inequality $$ \|A^r\,\#_\alpha\,B^r\|_\infty\le\|A\,\#_\alpha\,B\|_\infty^r,\qquad r\ge1, $$ which is equivalent to that $A\,\#_\alpha\,B\le I$ $\Rightarrow$ $A^r\,\#_\alpha\,B^r\le I$ for all $r\ge1$. By taking the inverse when $A,B>0$, this is also equivalent to that $A\,\#_\alpha\,B\ge I$ $\Rightarrow$ $A^r\,\#_\alpha\,B^r\ge I$ for all $r\ge1$. The last implication was recently extended in \cite[Lemmas 2.1, 2.2]{Wa} to the assertion stating the equivalence between the following two conditions for $f\in\mathrm{OM}_{+,1}$: \begin{itemize} \item[(i)] $f(x)^r\le f(x^r)$ for all $x\ge0$ and $r\ge1$; \item[(ii)] for every $A,B\in\mathbb{M}_n^+$, $A\,\sigma_f\,B\ge I$ $\Rightarrow$ $A^r\,\sigma_f\,B^r\ge I$ for all $r\ge1$. \end{itemize} We note that the above conditions are also equivalent to \begin{itemize} \item[(iii)] for every $A,B\in\mathbb{M}_n^+$, $$ \lambda_n(A^r\,\sigma_f\,B^r)\ge\lambda_n^r(A\,\sigma_f\,B),\qquad r\ge1; $$ or equivalently, for every $A,B\in\mathbb{M}_n^+$, $$ \lambda_n(A^q\,\sigma_f\,B^q)\le\lambda_n^q(A\,\sigma_f\,B),\qquad0<q\le1. $$ \end{itemize} The next proposition extends the above result to the form involving positive linear maps. Below let $\Phi$ and $\Psi$ be positive linear maps as before. \begin{prop}\label{P-4.1} Assume that $f\in\mathrm{OM}_{+,1}$ satisfies the above condition {\rm(i)}. 
Then for every $A\in\mathbb{M}_n^+$ and $B\in\mathbb{M}_m^+$, \begin{equation}\label{F-4.2} \bigl(\max\{\|\Phi(I_n)\|_\infty,\|\Psi(I_m)\|_\infty\}\bigr)^{r-1} \lambda_l\bigl(\Phi(A^r)\,\sigma_f\,\Psi(B^r)\bigr) \ge\lambda_l^r\bigl(\Phi(A)\,\sigma_f\,\Psi(B)\bigr) \end{equation} for all $r\ge1$. \end{prop} \begin{proof} By continuity we may assume that $\Phi$ and $\Psi$ are strictly positive. Let $0<q\le1$. Since $\Phi(I_n)^{-1/2}\Phi(\cdot)\Phi(I_n)^{-1/2}$ is a unital positive linear map, it is well-known \cite[Proposition 2.7.1]{Bh2} that $$ \Phi(I_n)^{-1/2}\Phi(A^q)\Phi(I_n)^{-1/2}\le \bigl(\Phi(I_n)^{-1/2}\Phi(A)\Phi(I_n)^{-1/2}\bigr)^q $$ so that \begin{align*} \Phi(A^q)&\le\Phi(I_n)^{1/2}\bigl(\Phi(I_n)^{-1/2}\Phi(A)\Phi(I_n)^{-1/2}\bigr)^q \Phi(I_n)^{1/2} \\ &=\Phi(I_n)\,\#_q\,\Phi(A)\le(\|\Phi(I_n)\|_\infty I_l)\,\#_q\,\Phi(A) \\ &=\|\Phi(I_n)\|_\infty^{1-q}\Phi(A)^q \end{align*} and similarly $$ \Psi(B^q)\le\|\Psi(I_m)\|_\infty^{1-q}\Psi(B)^q. $$ By the joint monotonicity of $\sigma_f$ we have \begin{align} \Phi(A^q)\,\sigma_f\,\Psi(B^q) &\le\bigl(\|\Phi(I_n)\|_\infty^{1-q}\Phi(A)^q\bigr)\,\sigma_f\, \bigl(\|\Psi(I_m)\|_\infty^{1-q}\Psi(B)^q\bigr) \nonumber\\ &\le\bigl(\max\{\|\Phi(I_n)\|_\infty,\|\Psi(I_m)\|_\infty\}\bigr)^{1-q} \bigl(\Phi(A)^q\,\sigma_f\,\Psi(B)^q\bigr). \label{F-4.3} \end{align} Therefore, \begin{align*} \lambda_l\bigl(\Phi(A^q)\,\sigma_f\,\Psi(B^q)\bigr) &\le\bigl(\max\{\|\Phi(I_n)\|_\infty,\|\Psi(I_m)\|_\infty\}\bigr)^{1-q} \lambda_l\bigl(\Phi(A)^q\,\sigma_f\,\Psi(B)^q\bigr) \\ &\le\bigl(\max\{\|\Phi(I_n)\|_\infty,\|\Psi(I_m)\|_\infty\}\bigr)^{1-q} \lambda_l^q\bigl(\Phi(A)\,\sigma_f\,\Psi(B)\bigr) \end{align*} by using the property (iii) above. Now, for $r\ge1$ let $q:=1/r$. 
By replacing $A,B$ with $A^r,B^r$, respectively, we obtain $$ \lambda_l\bigl(\Phi(A)\,\sigma_f\,\Psi(B)\bigr) \le\bigl(\max\{\|\Phi(I_n)\|_\infty,\|\Psi(I_m)\|_\infty\}\bigr)^{1-{1\over r}} \lambda_l^{1/r}\bigl(\Phi(A^r)\,\sigma_f\,\Psi(B^r)\bigr), $$ which yields \eqref{F-4.2}. \end{proof} When $\sigma_f$ is the weighted geometric mean $\#_\alpha$, one can improve Proposition \ref{P-4.1} to the log-supermajorization result as follows: \begin{prop}\label{P-4.2} Let $0\le\alpha\le1$. Then for every $A\in\mathbb{M}_n^+$ and $B\in\mathbb{M}_m^+$, \begin{equation}\label{F-4.4} \bigl(\|\Phi(I_n)\|_\infty\,\#_\alpha\,\|\Psi(I_m)\|_\infty\bigr)^{r-1} \lambda\bigl(\Phi(A^r)\,\#_\alpha\,\Psi(B^r)\bigr) \prec^{w(\log)}\lambda^r\bigl(\Phi(A)\,\#_\alpha\,\Psi(B)\bigr) \end{equation} for all $r\ge1$. Consequently, if $\|\Phi(I_n)\|_\infty\,\#_\alpha\,\|\Psi(I_m)\|_\infty\le1$, then $$ \lambda\bigl(\Phi(A^r)\,\#_\alpha\,\Psi(B^r)\bigr) \prec^{w(\log)}\lambda^r\bigl(\Phi(A)\,\#_\alpha\,\Psi(B)\bigr),\qquad r\ge1. $$ \end{prop} \begin{proof} When $\sigma_f=\#_\alpha$, inequality \eqref{F-4.3} is improved as $$ \Phi(A^q)\,\#_\alpha\,\Psi(B^q) \le\bigl(\|\Phi(I_n)\|_\infty\,\#_\alpha\,\|\Psi(I_m)\|_\infty\bigr)^{1-q} \bigl(\Phi(A)^q\,\#_\alpha\,\Psi(B)^q\bigr) $$ for $0<q\le1$, and hence \eqref{F-4.2} is improved as $$ \bigl(\|\Phi(I_n)\|_\infty\,\#_\alpha\,\|\Psi(I_m)\|_\infty\bigr)^{r-1} \lambda_l\bigl(\Phi(A^r)\,\#_\alpha\,\Psi(B^r)\bigr) \ge\lambda_l^r\bigl(\Phi(A)\,\#_\alpha\,\Psi(B)\bigr) $$ for all $r\ge1$. One can then prove the asserted log-supermajorization result in the same way as in the proof of Theorem \ref{T-3.1} with use of the antisymmetric tensor power technique, where the identity $\lambda_l(X^{\wedge k})=\prod_{i=l-k+1}^l\lambda_i(X)$ for $X\in\mathbb{M}_l^+$ is used instead of $\lambda_1(X^{\wedge k})=\prod_{i=1}^k\lambda_i(X)$ in the previous proof. The details may be omitted here. 
\end{proof} In particular, when $\Phi=\Psi=\mathrm{id}$, \eqref{F-4.4} reduces to \eqref{F-4.1} since, for $A,B>0$, the log-supermajorization $\lambda(A^r\,\#_\alpha\,B^r)\prec^{w(\log)}\lambda^r(A\,\#_\alpha\,B)$ implies the log-majorization \eqref{F-4.1}. The notion of symmetric anti-norms was introduced in \cite{BH1,BH2} with the notation $\|\cdot\|_!$. Recall that a non-negative continuous functional $\|\cdot\|_!$ on $\mathbb{M}_n^+$ is called a {\it symmetric anti-norm} if it is positively homogeneous, superadditive (instead of subadditive as in the case of usual norms) and unitarily invariant. Among others, a symmetric anti-norm is typically defined from a symmetric norm $\|\cdot\|$ on $\mathbb{M}_n$ and $p>0$ in such a way that, for $A\in\mathbb{M}_n^+$, $$ \|A\|_!:=\begin{cases}\|A^{-p}\|^{-1/p} & \text{if $A$ is invertible}, \\ 0 & \text{otherwise}. \end{cases} $$ A symmetric anti-norm defined in this way is called a {\it derived anti-norm}, see \cite[Proposition 4.6]{BH2}. By \cite[Lemma 4.10]{BH2}, similarly to Corollary \ref{C-3.5}, we have \begin{cor}\label{C-4.3} Let $0\le\alpha\le1$ and assume that $\|\Phi(I_n)\|_\infty\,\#_\alpha\,\|\Psi(I_m)\|_\infty\le1$. Then for every $A\in\mathbb{M}_n^+$ and $B\in\mathbb{M}_m^+$ and for any derived anti-norm $\|\cdot\|_!$ on $\mathbb{M}_l^+$, $$ \big\|\{\Phi(A^p)\,\#_\alpha\,\Psi(B^p)\}^{1/p}\big\|_!\ge \big\|\{\Phi(A^q)\,\#_\alpha\,\Psi(B^q)\}^{1/q}\big\|_!,\quad\mbox{if $0<p\le q$}. $$ \end{cor} \begin{problem}\label{Q-4.4}\rm It seems that our generalization of Ando-Hiai type log-majorization is not as complete as that of Araki's log-majorization in Section 3. Although the form of \eqref{F-4.4} bears some resemblance to that of \eqref{F-3.8}, they also have significant differences. For one thing, $\prec^{w(\log)}$ arises in \eqref{F-4.4} while $\prec_{w(\log)}$ in \eqref{F-3.8}, which should be reasonable since the directions of log-majorization are opposite between them. 
For another, the factor $\bigl(\|\Phi(I_n)\|_\infty\,\#_\alpha\,\|\Psi(I_m)\|_\infty\bigr)^{r-1}$ in \eqref{F-4.4} is apparently much worse than $\lambda^{r-1}\bigl(\Phi(I_n)^{1/2}\Psi(I_m)\Phi(I_n)^{1/2}\bigr)$ in \eqref{F-3.8}. One might expect the better factor $\|\Phi(I_n)\,\#_\alpha\,\Psi(I_m)\|_\infty^{r-1}$ or even $\lambda^{r-1}(\Phi(I_n)\,\#_\alpha\,\Psi(I_m))$. Indeed, a more general and interesting problem is the $\#_\alpha$-version of \eqref{F-3.2}, i.e., for $p_0,p_1\ge0$, $0\le\theta\le1$ and $p_\theta:=(1-\theta)p_0+\theta p_1$, $$ \lambda^{1-\theta}(\Phi(A^{p_0})\,\#_\alpha\,\Psi(B^{p_0})) \lambda^\theta(\Phi(A^{p_1})\,\#_\alpha\,\Psi(B^{p_1}))\prec^{w(\log)} \lambda(\Phi(A^{p_\theta})\,\#_\alpha\,\Psi(B^{p_\theta}))\,? $$ When $\Phi=\Psi=\mathrm{id}$, the problem becomes \begin{equation}\label{F-4.5} \lambda^{1-\theta}(A^{p_0}\,\#_\alpha\,B^{p_0}) \lambda^\theta(A^{p_1}\,\#_\alpha\,B^{p_1})\prec_{(\log)} \lambda(A^{p_\theta}\,\#_\alpha\,B^{p_\theta})\,? \end{equation} \end{problem} \begin{example}\rm Here is a sample computation for the last problem when $A,B$ are $2\times2$ and $\alpha=1/2$. Thanks to continuity and homogeneity, we may assume that $A,B\in\mathbb{M}_2^+$ are invertible with determinant $1$. So we write $A=aI+\mathbf{x}\cdot\sigma$ and $B=bI+\mathbf{y}\cdot\sigma$ with $a,b>0$, $\mathbf{x},\mathbf{y}\in\mathbb{R}^3$, $\det A=a^2-|\mathbf{x}|^2=1$ and $\det B=b^2-|\mathbf{y}|^2=1$, where $|\mathbf{x}|^2:=x_1^2+x_2^2+x_3^2$ and $\mathbf{x}\cdot\sigma:=x_1\sigma_1+x_2\sigma_2+x_3\sigma_3$ with Pauli matrices $\sigma_i$, i.e., $\sigma_1=\begin{bmatrix}0&1\\1&0\end{bmatrix}$, $\sigma_2=\begin{bmatrix}0&-i\\i&0\end{bmatrix}$, $\sigma_3=\begin{bmatrix}1&0\\0&-1\end{bmatrix}$. 
For \eqref{F-4.5} in this situation, it suffices, thanks to \cite[Proposition 3.11]{Mo} (also \cite[Proposition 4.1.12]{Bh2}), to show that \begin{equation}\label{F-4.6} p\ge0\longmapsto\lambda_1\Biggl({A^p+B^p\over\sqrt{\det(A^p+B^p)}}\Biggr) =\biggl({\lambda_1(A^p+B^p)\over\lambda_2(A^p+B^p)}\biggr)^{1/2} \end{equation} is a log-concave function. Let $e^\alpha=a+|\mathbf{x}|$ and $e^\beta=b+|\mathbf{y}|$, so $e^{-\alpha}=a-|\mathbf{x}|$, $|\mathbf{x}|=\sinh\alpha$, and similarly for $|\mathbf{y}|$. Then a direct computation yields $$ A^p+B^p=(\cosh(\alpha p)+\cosh(\beta p))I +\biggl[{\sinh(\alpha p)\over\sinh\alpha}\,\mathbf{x}+{\sinh(\beta p)\over\sinh\beta}\,\mathbf{y}\biggr] \cdot\sigma, $$ whose eigenvalues are $$ \cosh(\alpha p)+\cosh(\beta p)\pm\bigl[\sinh^2(\alpha p)+\sinh^2(\beta p) +2c\sinh(\alpha p)\sinh(\beta p)\bigr]^{1/2} $$ with $c:={\mathbf{x}\cdot\mathbf{y}\over|\mathbf{x}|\,|\mathbf{y}|}\in[-1,1]$. Although numerical computations say that \eqref{F-4.6} is a log-concave function of $p\ge0$ for any $\alpha,\beta\ge0$ and $c\in[-1,1]$, it does not seem easy to give a rigorous proof. \end{example} In the rest of the paper we present one more log-majorization result. Let $E\in\mathbb{M}_n$ be an orthogonal projection with $\dim E=l$. A particular case of \eqref{F-3.2} is \begin{equation}\label{F-4.7} \lambda(EA^{(1-\theta)p_0+\theta p_1}E)\prec_{w(\log)} \lambda^{1-\theta}(EA^{p_0}E)\lambda^\theta(EA^{p_1}E),\qquad0\le\theta\le1 \end{equation} for every $A\in\mathbb{M}_n^+$. As a complementary version of this we show the following: \begin{prop}\label{P-4.6} Let $p_0,p_1\ge0$ and $0\le\theta\le1$. Then for every $\alpha\in(0,1]$ and $A\in\mathbb{M}_n^+$, \begin{equation}\label{F-4.8} \bigl(\lambda_i(A^{(1-\theta)p_0+\theta p_1}\,\#_\alpha\,E)\bigr)_{i=1}^l\prec^{w(\log)} \bigl(\lambda_i^{1-\theta}(A^{p_0}\,\#_\alpha\,E) \lambda_i^\theta(A^{p_1}\,\#_\alpha\,E)\bigr)_{i=1}^l. 
\end{equation} \end{prop} The form of this log-majorization is similar to that of the problem \eqref{F-4.5}. Although the directions of those are opposite, there is no contradiction between those two; indeed, the log-majorization of \eqref{F-4.5} is taken for matrices in $\mathbb{M}_n^+$ while that of \eqref{F-4.8} is for $l\times l$ matrices restricted to the range of $E$. First, we give a lemma in a setting of more general operator means. Let $f$ be an operator monotone function on $[0,\infty)$ such that $f(0)=0$, and let $\sigma_f$ be the operator mean corresponding to $f$ due to Kubo-Ando theory. An operator monotone function dual to $f$ is defined by $f^\perp(x):=x/f(x)$, $x>0$, and $f^\perp(0):=\lim_{x\searrow0}f^\perp(x)$. \begin{lemma}\label{L-4.7} Let $f$ and $f^\perp$ be as stated above. Then for every $A\in\mathbb{M}_n^+$ with $A>0$, $$ A\,\sigma_f\,E=(Ef^\perp(EA^{-1}E)E)^{-1}, $$ where the inverse in the right-hand side is defined on the range of $E$ (i.e., in the sense of generalized inverse). \end{lemma} \begin{proof} For $k=0,1,2,\dots$ we have $$ A^{-1/2}E(EA^{-1}E)^kEA^{-1/2}=(A^{-1/2}EA^{-1/2})^{k+1}. $$ Define a function $\widehat f$ on $[0,\infty)$ by $\widehat f(x):=f(x)/x$ for $x>0$ and $\widehat f(0):=0$. Note that the eigenvalues of $EA^{-1}E$ and those of $A^{-1/2}EA^{-1/2}$ are the same including multiplicities. By approximating $\widehat f$ by polynomials on the eigenvalues of $EA^{-1}E$, we have $$ A^{-1/2}E\widehat f(EA^{-1}E)EA^{-1/2}=A^{-1/2}EA^{-1/2}\widehat f(A^{-1/2}EA^{-1/2}) =f(A^{-1/2}EA^{-1/2}) $$ since the assumption $f(0)=0$ implies that $f(x)=x\widehat f(x)$ for all $x\in[0,\infty)$. Therefore, $$ E\widehat f(EA^{-1}E)E=A^{1/2}f(A^{-1/2}EA^{-1/2})A^{1/2}=A\,\sigma_f\,E. $$ Moreover, it is easy to verify that $(Ef^\perp(EA^{-1}E)E)^{-1}=E\widehat f(EA^{-1}E)E$. \end{proof} \noindent {\it Proof of Proposition \ref{P-4.6}.}\enspace Since the result is trivial when $\alpha=1$, we may assume that $0<\alpha<1$. 
Moreover, we may assume by continuity that $A$ is invertible. When $f(x)=x^\alpha$, note that $\sigma_f=\#_\alpha$ and $f^\perp(x)=x^{1-\alpha}$. Hence by Lemma \ref{L-4.7} we have $$ A^p\,\#_\alpha\,E=(EA^{-p}E)^{\alpha-1},\qquad p\ge0, $$ where $(E\cdot E)^{\alpha-1}$ is defined on the range of $E$. This implies that, for every $k=1,\dots,l$, $$ \prod_{i=l-k+1}^l\lambda_i(A^p\,\#_\alpha\,E) =\Biggl(\prod_{i=1}^k\lambda_i(EA^{-p}E)\Biggr)^{\alpha-1} $$ so that \eqref{F-4.8} immediately follows from \eqref{F-4.7} applied to $A^{-1}$.\qed \bigskip Similarly to Corollary \ref{C-3.4}, by Proposition \ref{P-4.6} and \cite[Lemma 4.10 and (4.4)]{BH2} we see that if $A\in\mathbb{M}_n^+$ and $\|\cdot\|_!$ is a derived anti-norm on $\mathbb{M}_l^+$, then $\|A^p\,\#_\alpha\,E\|_!$ is a log-concave function of $p\ge0$, where $A^p\,\#_\alpha\,E$ is considered as an $l\times l$ matrix restricted to the range of $E$. \subsection*{Acknowledgments} This research was supported in part by Grant-in-Aid for Scientific Research (C)21540208.
\section{\label{sec:Introduction}Introduction} Observations of neutrino oscillations have established that lepton flavor is not strictly conserved. In the context of the Standard Model (SM), however, charged lepton flavor violating (CLFV) effects are too small to be observed \cite{CLFV}. Massive or massless weakly interacting neutral bosons $X$ such as axions \cite{Axion1,Axion2,Axion3,Axion4} and majorons \cite{Majoron1,Majoron2,Majoron3} have been suggested in extensions of the SM, including models with dark matter candidates, baryogenesis, and solutions to the strong CP problem. Wilczek suggested one such model \cite{Familon}, which may lead to CLFV with the boson $X$ emitted in flavor-changing interactions. Such new bosons have been sought by experiments using kaon \cite{KXH1,KXH2,KX01,KX02,KX03, Hou}, pion \cite{SINDRUM,Picciotto}, and muon decays \cite{Derenzo, Doug, PSI, TWIST, Jodidio}. When decay products from a massive boson $X_H$ are not detected due to, for example, a long lifetime, the CLFV two-body muon decay ${\mu}^+{\to}e^+X_H$ can be sought by searching for extra peaks in the positron energy spectrum of the muon decay ${\mu}^+{\to}e^+{\nu}\bar{\nu}$. The mass of the boson $m_{X_{H}}$ can be reconstructed using the equation \begin{equation}\label{eq:mass} m_{X_H}=\sqrt{m_{\mu}^2+m_e^2-2m_{\mu}E_e}, \end{equation} where $m_{\mu}$ and $m_e$ are the masses of the muon and the positron, respectively, and $E_e$ is the total energy of the decay positron. Two-body muon decays ${\mu}^+{\to}e^+X_H$ were searched for by Derenzo \cite{Derenzo} using a magnetic spectrometer; experimental limits\footnote{All limits quoted in this paper are at the 90\% confidence level.} on the branching ratio ${\Gamma}({\mu}^+{\to}e^+X_H)/{\Gamma}({\mu}^+{\to}e^+{\nu}\bar{\nu})<2{\times}10^{-4}$ were set in the mass region from 98.1 to 103.5 MeV/$c^2$. 
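To get a numerical feel for Eq.~\eqref{eq:mass}, it can be inverted to give the positron energy at which a peak from a boson of mass $m_{X_H}$ would appear. The following sketch is illustrative only: the mass values are standard PDG inputs rather than numbers quoted in this paper, and the function names are ours.

```python
import math

# Muon and positron masses in MeV/c^2 (standard PDG values, assumed here)
M_MU = 105.6584
M_E = 0.5110

def m_x(e_e):
    """Boson mass reconstructed from the total decay-positron energy."""
    return math.sqrt(M_MU**2 + M_E**2 - 2.0 * M_MU * e_e)

def e_positron(m_xh):
    """Inverse relation: positron energy for a given boson mass."""
    return (M_MU**2 + M_E**2 - m_xh**2) / (2.0 * M_MU)

# The massless limit reproduces the two-body kinematic endpoint near
# 52.8 MeV, while a heavy boson pushes the peak to low positron energy.
print(round(e_positron(0.0), 2))   # endpoint, ~52.83 MeV
print(round(e_positron(90.0), 2))  # m_XH = 90 MeV/c^2, ~14.5 MeV
```

Note that a 90 MeV/$c^2$ boson corresponds to a peak near 14.5 MeV, i.e., in the low-energy part of the positron spectrum, which is why high-mass searches probe the region well below the Michel endpoint.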
Exotic muon decays were also sought as a byproduct of the ${\pi}^+{\rightarrow}e^+{\nu}$ branching ratio measurement \cite{Bryman} by Bryman and Clifford \cite{Doug} using a NaI(T$\ell$) calorimeter, resulting in upper limits on the branching ratio ${\lesssim}3{\times}10^{-4}$ in the mass range from 39.3 to 93.4 MeV/$c^2$. Muon decay in the mass region up to the kinetic limit was studied by Bilger {\it et al}. \cite{PSI} using a germanium detector. The most sensitive experiment done so far by Bayes {\it et al}. \cite{TWIST} gave limits from $10^{-5}$ to $10^{-6}$ in the mass range from 3.2 to 86.6 MeV/$c^2$. Figure \ref{fig:FinalResult} shows a summary of the present status of the search for ${\mu}^+{\rightarrow}e^+X_H$ decay with upper limits in the mass region from 45 to 105 MeV/$c^2$. A massless boson $X_0$ was also searched for by Jodidio {\it et al}. \cite{Jodidio}, and the upper limit on the branching ratio was found to be ${\Gamma}({\mu}^+{\rightarrow}e^+X_0)/{\Gamma}({\mu}^+{\to}e^+{\nu}\bar{\nu})<2.6{\times}10^{-6}$. The present work was carried out with data from the PIENU experiment principally designed to measure the branching ratio ${\Gamma}[{\pi}^+{\to}e^+{\nu}({\gamma})]/{\Gamma}[{\pi}^+{\to}{\mu}^+{\nu}({\gamma})]$ using pion decays at rest \cite{PIENU}. A 75 MeV/$c$ ${\pi}^+$ beam from the TRIUMF M13 channel \cite{M13} was degraded by two thin plastic scintillator beam counters. Pion tracking was performed by two multiwire proportional chambers and two silicon strip detectors. The pion beam was stopped in an 8 mm thick plastic scintillator target. Positrons from ${\pi}^+{\to}{e}^+{\nu}$ decays and ${\mu}^+{\to}e^+{\nu}\bar{\nu}$ decays following ${\pi}^+{\rightarrow}{\mu}^+{\nu}$ decays were measured by two thin plastic scintillators used as telescope counters and a calorimeter consisting of a 48 cm (dia.) ${\times}$ 48 cm (length) single crystal NaI(T$\ell$) detector surrounded by pure CsI crystals \cite{PIENUNIMA}. 
A silicon strip detector and a multiwire proportional chamber were used to reconstruct tracks of decay positrons and define the acceptance. The energy resolution of the calorimeter was 2.2\% (FWHM) for 70 MeV positrons. A total of $1.9{\times}10^8$ muon decays were used to search for the decay ${\mu}^+{\rightarrow}e^+X_H$ with lifetime ${\tau}_X>10^{-9}$ s. The energy resolution is improved by a factor of two and the statistics are an order of magnitude larger compared to the previous TRIUMF experiment \cite{Doug}. The present experiment is also sensitive to a higher mass region than that of Ref. \cite{TWIST}. \begin{figure}[] \includegraphics[width=8.5cm]{FinalResult.eps} \caption{\label{fig:FinalResult}Summary of the experimental upper limits on the ${\mu}^+{\rightarrow}e^+X_H$ branching ratio. The filled red circles with the thin solid red line show the results of this work. The limits represented by the dotted blue line, thick dashed black line, thick solid gray line, and thin solid green line are from Refs. \cite{TWIST, Doug, Derenzo, PSI}, respectively.} \end{figure} \section{Analysis} \begin{figure}[] \includegraphics[width=8.5cm]{MichelResi.eps} \caption{\label{fig:MichelResi} The muon decay energy spectra from data taken before (a) and after (b) November, 2010 fit to polynomial functions (solid red line). The inset boxes show the residuals in the low energy region with statistical uncertainties (black circles) and a hypothetical signal (from MC) with the branching ratio $5.0{\times}10^{-5}$ for $m_{X_H}=90$ MeV/$c^2$ (red histograms). The bumps at 3 MeV were due to the low energy positrons that hit the telescope counters but did not reach the calorimeter (see text).} \end{figure} The data in the PIENU experiment were taken in runs occurring from 2009 to 2012. Because the energy calibration system for the CsI crystals was not available before November 2010, the data were divided into two sets, before and after that date. 
Pions were identified using energy loss information in the beam counters. Any events with extra hits in the beam and telescope counters were rejected. To ensure the events were from muon decay, the late time region $>200$ ns after the pion stop was selected. A solid angle cut of about 15\% was used for the data set after November 2010. A tighter acceptance cut (corresponding to about 10\% solid angle) was applied to the data taken before November 2010 to minimize electromagnetic shower leakage. Figure \ref{fig:MichelResi} shows the muon decay energy spectra for those two data sets where $E_{\rm sum}$ is the sum of energies observed in the calorimeter, telescope counters, and silicon strip detector including positron annihilation but excluding approximately 1.5 MeV energy loss in the target and inactive materials. The bumps at about 3 MeV in the low energy region of the spectra were due to positrons which hit the telescope counters but did not enter the calorimeter; positron annihilation in the last telescope scintillator resulted in one 0.511 MeV photon depositing energy in the calorimeter. The two muon decay energy spectra were each fit to smooth 6th order polynomial functions in the energy region $E_{\rm sum}=6$ to $43$ MeV but excluding a region from -1.75 to +1.25 MeV around a possible signal peak where the search was to be performed. Then, for each $m_{X_H}$, the spectra were fit simultaneously to the polynomial functions with fixed fitting parameters obtained in the initial procedure plus a peak signal shape for the decay ${\mu}^+{\to}e^+X_H$. To combine the two data sets, a common branching ratio was used as a free parameter in the fit. The validity of the fit procedure was confirmed using the simulated muon decay energy spectrum and the signal peak with the branching ratio $1.0{\times}10^{-4}$ at several energies. The polynomial function fit without any added signal shape resulted in ${\chi}^2/{\rm d.o.f}=1.09$ (${\rm d.o.f}=282$). 
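The two-step procedure just described, a smooth polynomial background fit with the signal window excluded followed by a one-parameter signal-amplitude fit with the background frozen, can be sketched schematically. Everything in the snippet below (the spectrum shape, statistics, window widths, and the Gaussian stand-in for the peak shape) is invented for illustration and is not the experiment's actual data or detector response.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy energy spectrum in 0.5 MeV bins over 6-43 MeV (illustrative only)
edges = np.arange(6.0, 43.5, 0.5)
centers = 0.5 * (edges[:-1] + edges[1:])
shape = centers**2 * (1.0 - centers / 60.0)        # smooth "background" curve
counts = rng.poisson(shape / shape.sum() * 1.0e6).astype(float)

def fit_peak(y, e0, sigma=0.5):
    """Step 1: 6th-order polynomial background fitted with the window
    (e0 - 1.75, e0 + 1.25) excluded.  Step 2: one free signal amplitude
    fitted on top of the frozen background (linear least squares)."""
    t = (centers - centers.mean()) / (centers.max() - centers.min())
    excl = (centers > e0 - 1.75) & (centers < e0 + 1.25)
    coef = np.polyfit(t[~excl], y[~excl], deg=6)
    bkg = np.polyval(coef, t)
    sig = np.exp(-0.5 * ((centers - e0) / sigma) ** 2)  # toy peak shape
    w = 1.0 / np.maximum(y, 1.0)                        # ~ 1 / sigma_i^2
    return np.sum(w * (y - bkg) * sig) / np.sum(w * sig**2)

# A fake peak injected at 20 MeV is recovered by the amplitude fit, while
# the background-only spectrum gives an amplitude consistent with zero.
inj = 2000.0 * np.exp(-0.5 * ((centers - 20.0) / 0.5) ** 2)
print("signal recovered:", abs(fit_peak(counts + inj, 20.0) - 2000.0) < 500.0)
```

Scanning `e0` over the search range in 0.5 MeV steps mimics the repeated fits of the actual analysis; the real procedure of course uses MC-derived signal shapes rather than a Gaussian.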
The signal shapes were produced by a Monte Carlo (MC) simulation \cite{Geant4} that reproduced the peak of the decay ${\pi}^+{\rightarrow}e^+{\nu}$ at 69.8 MeV. This procedure was repeated in the range $E_{\rm sum}=$ 8.5 to 40.5 MeV (corresponding to the actual decay positron energy $E_{e}=$ 10 to 42 MeV) with 0.5 MeV steps. \section{Results and Conclusion} No extra peaks due to the CLFV muon decay ${\mu}^+{\to}e^+X_H$ with a lifetime ${\tau}_{X}>10^{-9}$ s were observed, and upper limits on the branching ratio ${\Gamma}({\mu}^+{\to}e^+X_H)/{\Gamma}({\mu}^+{\to}e^+{\nu}\bar{\nu})$ from $10^{-5}$ to $10^{-4}$ were set for the mass region $m_{X_H}=$ 47.8 to 95.1 MeV/$c^2$ as shown in Fig. \ref{fig:FinalResult}. Statistics were the dominant source of uncertainty on the branching ratios. Systematic uncertainties and acceptance effects approximately canceled in the ratio of the fit amplitude of signal events to the number of total muon decays. Improved and new limits in the mass region from 87.0 MeV/$c^2$ to 95.1 MeV/$c^2$ were set. \begin{acknowledgments} This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC, number SAPPJ-2017-00033), by the Research Fund for the Doctoral Program of Higher Education of China, by a CONACYT doctoral fellowship from Mexico, and by JSPS KAKENHI Grant numbers 18540274, 21340059, 24224006, 17H01128, 19K03888 in Japan. We are grateful to Brookhaven National Laboratory for the loan of the crystals, and to the TRIUMF operations, detector, electronics and DAQ groups for their engineering and technical support. We would also like to thank R. Bayes and A. Olin for providing the experimental data in Ref. \cite{TWIST}. \end{acknowledgments}
\section{Introduction} The main result of this paper is the following theorem. \begin{theorem}\label{THE theorem} Let $T$ and $T'$ be two length-$n$ reflection factorizations of a Coxeter element of the complex reflection group $G_6$. Then, $T$ and $T'$ are in the same Hurwitz orbit if and only if they have the same multiset of conjugacy classes. \end{theorem} Theorem \ref{THE theorem} is a particular case of the following conjecture of Lewis and Reiner. \begin{conjecture}[{\cite[Conj. 6.3]{LR}}]\label{L and R} In a well-generated finite complex reflection group, two reflection factorizations of a Coxeter element lie in the same Hurwitz orbit if and only if they share the same multiset of conjugacy classes. \end{conjecture} This conjecture arose from generalizing a theorem of Bessis \cite[Prop.~1.6.1]{Bessis}, which is identical to Conjecture \ref{L and R}, but only makes the claim for shortest reflection factorizations of a Coxeter element. Conjecture \ref{L and R} was proven for real reflection groups by Lewis and Reiner \cite{LR}, for the groups $G_4$ and $G_5$ by Peterson \cite{Zach}, and for the infinite families of complex reflection groups by Lewis \cite{infinite}. As in the proofs of Peterson and Lewis--Reiner, we prove Theorem \ref{THE theorem} by induction. We begin in Section \ref{section2} by giving background information and defining the important objects used. We first look at complex reflection groups as a whole and then consider the intricacies of $G_6$. In Section 3, we give the proof. We start by making some key observations about the outcomes of applying Hurwitz moves and then begin constructing our inductive argument by using the idea of a marked element to move from one reflection factorization to one of a shorter length. By checking finite instances in Sage \cite{Sage}, we prove our base cases, and are able to fully construct our inductive argument, giving us a proof of Theorem~\ref{THE theorem}. 
\subsection*{Acknowledgements} We would like to thank Joel Lewis for his continued mentoring and support. Dounia Lazreq would also like to thank the Luther Rice Undergraduate Research Fellowship for supporting this project. \section{General Background}\label{section2} \subsection{Complex Reflection Groups} Let $V$ be a vector space over the field $\mathbb{C}$. Given a linear transformation $t:V\rightarrow V$, there are several subspaces of $V$ that can be found by considering how $t$ acts on $V$. One such subspace is the \textit{fixed space} of vectors that are unchanged when $t$ is applied, denoted by $\operatorname{fix}(t) = \{v\in V: t(v) = v\}$. We define $t$ to be a \textit{generalized reflection} if $\dim(\operatorname{fix}(t)) = \dim(V) -1$. In this case, $\operatorname{fix}(t)$ is called a \textit{reflecting hyperplane} of $t$. The objects that we are working with in this paper are \textit{complex reflection groups}. The group $G$ is defined to be a complex reflection group (CRG) if it is a finite group of transformations $t:V\rightarrow V$, where there is a subset $P \subset G$ of reflections of $G$ such that every element of $G$ can be produced by multiplying together elements of $P$. Choosing an appropriate basis, we can also write a CRG $G$ as a finite group of $\dim(V)\times \dim(V)$ matrices with complex entries. If we take some element $c$ of a CRG $G$, then the tuple $(r_1, r_2, \ldots , r_n)$ of reflections in $G$ is a \textit{reflection factorization} of $c$ if $c = r_1 \cdot r_2 \cdots r_n$, where $n$ is the \textit{length} of the factorization. If the elements $x$ and $y$ are both in the group $G$, and $x = qyq^{-1}$, where $q$ is also an element of $G$, then $x$ and $y$ are \textit{conjugate} to each other in the group. Notice that this divides the elements of the group into classes. A \textit{conjugacy class} of a CRG is a set of elements of the group such that any two elements in the same conjugacy class are conjugate to each other. 
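The fixed-space criterion above can be checked mechanically: $t$ is a generalized reflection precisely when $t - I$ has rank one, since $\operatorname{fix}(t) = \ker(t - I)$. A small numerical sketch, where the matrices chosen are illustrative examples and not tied to any particular group discussed later:

```python
import numpy as np

def is_reflection(t, tol=1e-9):
    """True iff fix(t) = ker(t - I) has dimension dim(V) - 1,
    i.e. iff t - I has rank exactly one."""
    n = t.shape[0]
    return np.linalg.matrix_rank(t - np.eye(n), tol=tol) == 1

zeta = np.exp(2j * np.pi / 3)   # primitive third root of unity
refl = np.diag([1.0, zeta])     # order-3 generalized reflection: fixes a line
rot = np.diag([zeta, zeta])     # scalar transformation: fixes only the origin
print(is_reflection(refl), is_reflection(rot))  # True False
```

This also illustrates that, unlike real reflections, a complex (generalized) reflection need not have order two: `refl` above has order three, since its non-unit eigenvalue is a third root of unity.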
A complex reflection group that can't be written as $$G\times H = \left\{\left[\begin{array}{c|c} g & 0 \\ \hline 0 & h \end{array}\right] \textrm{where } g \in G \text{ and } h \in H\right\},$$ where $G$ and $H$ are themselves complex reflection groups, is called \textit{irreducible}. There is a classification of these irreducible groups that gives us a few infinite families and many exceptional cases \cite{ST}. Our theorem statement makes a claim about a type of element appearing in some of these groups, called Coxeter elements. To understand what these are, we need to understand a few key properties of complex reflection groups. Consider a CRG $G$ acting irreducibly on a vector space $V$. The \textit{rank} of $G$ is the dimension of $V$; we write $n = \dim(V)$. Choosing an appropriate basis, we can write $G$ as a group of $n\times n$ matrices. We say that $G$ is \textit{well-generated} if $G$ is of rank $n$ and there exists a set $P$ of exactly $n$ reflections such that these $n$ reflections generate all of $G$. That is, every element of $G$ is some product of elements of $P$. The \textit{order} of an element $r\in G$ is the smallest positive integer $m$ such that $r^m$ is the identity. If $t$ is an element of $G$ of order $k$ and $\lambda$ is an eigenvalue of $t$, then $\lambda$ is a $k$th \textit{root of unity}. That is, $\lambda^k = 1$. The element $g\in G$ is said to be \textit{Springer regular} if it has an eigenvector $v$ that does not lie in any of the reflecting hyperplanes of the reflections of $G$. If $g$ furthermore has order $k$, then we say that $g$ is \textit{$k$-regular}. If a complex reflection group of rank $n$ is well-generated, the \textit{Coxeter number} $h$ of $G$ is the largest integer such that there exists an $h$-regular element in the group. These $h$-regular elements are called \textit{Coxeter elements}. 
\subsection{Hurwitz Moves} Given an element $c$ of the CRG $G$ and a reflection factorization $F = (r_1, \ldots, r_{i-1},\hspace{2pt} r_i,\hspace{2pt} r_{i+1}, \ldots, r_n)$ of $c$, we define a \textit{Hurwitz move} at position $i$, where $1\leq i\leq n-1$, to be the following operation: \[\sigma_i(F) = (r_1, \ldots, r_{i-1},\hspace{12pt} r_{i+1},\hspace{12pt} r_{i+1}^{-1}\hspace{1pt} r_i\hspace{1pt} r_{i+1},\hspace{12pt}r_{i+2}, \ldots, r_n).\] Applying a Hurwitz move to a factorization produces a new factorization that multiplies to the same element $c$. If $r_i$ is a reflection in conjugacy class $\mathcal{K}$, then $ r_{i+1}^{-1}r_ir_{i+1}$ is also a reflection in conjugacy class $\mathcal{K}$. The \textit{Hurwitz orbit} of such a factorization is the set of all other distinct factorizations that can be reached by applying some number of Hurwitz moves to the original factorization. With these facts, we have the following proposition. \begin{proposition}[{Peterson, {\cite[Prop.~2.2]{Zach}}}] \label{ZP permutations} A reflection factorization with a given multiset of conjugacy classes has in its Hurwitz orbit factorizations with all possible permutations of those conjugacy classes. \end{proposition} \subsection{The Group $\boldsymbol{G_6}$} Of the $34$ exceptional groups, $6$ are real reflection groups (for which the conjecture was proved in \cite{LR}) and $8$ are not well generated, so there are $20$ complex reflection groups for which we hope to prove Conjecture \ref{L and R}; for two of these it has already been proven \cite{Zach}. In this paper we are focusing on the group $G_6$. We define this group by the generators $A$ and $B$ where $A^3 = I = B^2$ and $ABABAB = BABABA$. We denote the complex third root of unity as $\zeta = e^{\frac{2 \pi i}{3}}$, and the complex twelfth root of unity as $\gamma = e^{\frac{2 \pi i}{12}}$. 
More concretely, one can take for $A$ and $B$ the matrices $$A=\begin{pmatrix} 1 & 0 \\ 0 & \zeta\end{pmatrix} \text{ and } B=\frac{1}{3}\begin{pmatrix} \gamma^{11}-\gamma^7 & -2\gamma^{11}-\gamma^7\\ 2\gamma^{11}+4\gamma^7 & \gamma^7-\gamma^{11} \end{pmatrix}.$$ The group $G_6$ has four Coxeter elements, one of which is $$C = AB = \frac{1}{3}\begin{pmatrix} \gamma^{11}-\gamma^7 & -2\gamma^{11}-\gamma^7 \\ 2\gamma^{11}-2\gamma^7 & 2\gamma^{11}+\gamma^7 \end{pmatrix}.$$ For purposes of explicit calculations, as in the proofs of Propositions~\ref{basic facts} and~\ref{7 base case} below, we computed with the single Coxeter element $C$ mentioned here. From \cite[Proposition 1.4]{RRS}, this suffices to prove these results for any Coxeter element: one can use an appropriate reflection automorphism to transfer the necessary statements from any Coxeter element to any other. In our proof it is also helpful for us to consider the CRG $G_4$. This group is defined by two generators, $A'$ and $B'$ where $A'^3 = I = B'^3$ and $A'B'A' = B'A'B'$. Concretely, we can take these generators of $G_4$ to be $$A'= A = \begin{pmatrix} 1 & 0 \\ 0 & \zeta\end{pmatrix} \text{ and } B'=\frac{1}{3}\begin{pmatrix} \zeta-\zeta^2 & 3\zeta^2\\ -2\zeta^2 & -\zeta-2\zeta^2 \end{pmatrix}.$$ \begin{defn} \label{sub-conjugacy class} Suppose that a set of reflections $X$ has a subset of reflections $Y$ that are all in the same conjugacy class $C$. If there are elements in $Y$ that are in different conjugacy classes when only considered in the subgroup generated by $Y$, then these conjugacy classes are called \textit{sub-conjugacy classes}. \end{defn} The following proposition gives us a list of basic facts about the complex reflection group $G_6$. \begin{proposition} \label{basic facts} The following are true. \begin{enumerate} \item The complex reflection group $G_6$ has $48$ elements, $14$ of which are reflections. 
The set of these reflections, which we denote by $\mathcal{R}$, can be split up by conjugacy class into the following three sets of reflections: \begin{align*} \mathcal{R}_1 & = \left\{A , ABA(AB)^{-1}, BAB^{-1} , (BA)^{-1}ABA\right\} \\ \mathcal{R}_2 & = \left\{A^{-1} , ABA^{-1}(AB)^{-1} , BA^{-1}B^{-1} , (BA)^{-1}A^{-1}BA\right\} \\ \mathcal{S} & = \left\{B, ABA^{-1} , A^{-1}BA , (AB)^{-1}BAB , \right. \\ & \hspace{1in} \left. BAB(BA)^{-1}, (A^{-1}BA)^{-1}BA^{-1}BA\right\} \end{align*} where $\mathcal{R} = \mathcal{R}_1 \cup \mathcal{R}_2 \cup \mathcal{S}$. \item Let $\mathcal{R}' = \mathcal{R}_1 \cup \mathcal{R}_2$. Then $\mathcal{R}'$ generates the CRG $G_4$. The set $\mathcal{S}$ generates a group isomorphic to $G(4,2,2)$, as defined in our proof of this statement. \item The conjugacy class $\mathcal{S}$ of $G_6$ contains three sub-conjugacy classes, $\mathcal{S}_1$, $\mathcal{S}_2$, and $\mathcal{S}_3$, where \begin{gather*} \hspace{10pt}\mathcal{S}_1 = \{B, (A^{-1}BA)^{-1}BA^{-1}BA\},\\ \hspace{40pt}\mathcal{S}_2 = \{ABA^{-1}, BAB(BA)^{-1}\},\text{ and }\\ \hspace{10pt}\mathcal{S}_3 = \{A^{-1}BA, (AB)^{-1}BAB\}. \end{gather*} Each reflection in $\mathcal{S}$ only commutes with reflections in its sub-conjugacy class. \item All elements of $\mathcal{R}'$ are of order $3$, and all elements of $\mathcal{S}$ are of order $2.$ \item By the order of the elements and our choice of the element $A$, $\det(A) = \zeta$, $\det(B) = -1$, and $\det(C) = -\zeta$. \item Consider the pair $(x,y)$ where $x$ is a reflection of $G_6$ in conjugacy class $\mathcal{R}_1$ and $y$ is a reflection of $G_6$ in conjugacy class $\mathcal{R}_2$. If $x$ and $y$ are inverses, then applying a Hurwitz move to this pair simply commutes the two elements. 
However, if $x$ and $y$ are not inverses, then we have a length $4$ Hurwitz orbit \[ \hspace{35pt}(x, y) \overset{\sigma}{\to} (y, x')\overset{\sigma}{\to} (x', y') \overset{\sigma}{\to} (y', x) \overset{\sigma}{\to} (x, y), \] where two additional elements, $x'$ from conjugacy class $\mathcal{R}_1$ and $y'$ from conjugacy class $\mathcal{R}_2$, are introduced in the orbit. \item Consider the pair $(x,y)$ where $x$ is a reflection of $G_6$ in conjugacy class $\mathcal{R}_1$ or $\mathcal{R}_2$ and $y$ is a reflection of $G_6$ in the conjugacy class $\mathcal{S}$. Then the Hurwitz orbit has length $6$: \begin{multline*} \hspace{40pt}(x, y) \overset{\sigma}{\to} (y, x')\overset{\sigma}{\to} (x', y') \overset{\sigma}{\to} (y', x'')\\ \overset{\sigma}{\to} (x'', y'') \overset{\sigma}{\to} (y'', x) \overset{\sigma}{\to} (x, y), \end{multline*} where four additional elements are introduced in the orbit: $x'$ and $x''$ from the same conjugacy class as $x$, and $y'$ and $y''$ from the same conjugacy class as $y$. \end{enumerate} \end{proposition} \begin{proof} First we prove (2). The group $G(4,2,2)$ belongs to the infinite family $G(m, p, n)$ of finite complex reflection groups (named by Shephard--Todd \cite{ST}); it consists of the sixteen $2 \times 2$ monomial matrices with nonzero entries $\pm i$ and $\pm 1$ such that these nonzero entries multiply to $\pm1$. To show that $\mathcal{S}$ generates a group isomorphic to $G(4,2,2)$, we use the change of basis matrix $$M = \frac{1}{2}\begin{pmatrix} 2 & -\gamma^4-\gamma^7-2\gamma^{11} \\ \gamma^4+\gamma^{11} & -\gamma^4-\gamma^7 \end{pmatrix}.$$ This conjugates $B$ to $\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$, $ABA^{-1}$ to $\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}$, and $A^{-1}BA$ to $\begin{pmatrix}0 & -i \\ i & 0\end{pmatrix}$. These conjugated matrices generate $G(4,2,2)$.
All the other parts can be verified by checking the finitely many instances in which they occur, which we did by direct calculation in Sage \cite{Sage}. \end{proof} \begin{lemma} \label{ordering of classes} A reflection factorization with a given multiset of conjugacy classes has in its Hurwitz orbit a factorization in which, reading the reflections from left to right, first we see all the reflections from $\mathcal{R}_1$, then all the reflections from $\mathcal{R}_2$, and then all the reflections from $\mathcal{S}$. \end{lemma} \begin{proof} This follows directly from Proposition \ref{ZP permutations}, as all permutations of conjugacy classes can be reached through Hurwitz moves. \end{proof} \begin{proposition} \label{mod 3 conj classes} In a reflection factorization of the Coxeter element $C$, let $x$ be the number of elements from $\mathcal{R}_1$, $y$ be the number of elements from $\mathcal{R}_2$, and $z$ be the number of elements from $\mathcal{S}$. If we write $x = 3x'+s$ and $y = 3y'+t$ where $x', y', s, t$ are integers such that $0\leq s,t<3$, then $1\equiv (s+2t)\pmod 3$ and $z \equiv 1 \pmod 2$. \end{proposition} \begin{proof} We know by Lemma \ref{ordering of classes} that given a reflection factorization of $C$ we can perform Hurwitz moves to attain a reflection factorization of the form $$(a_1, \ldots, a_x, a'_1, \ldots, a'_y, b_1, \ldots, b_z)$$ where $a_i\in \mathcal{R}_1$, $a'_i\in \mathcal{R}_2$, and $b_i\in \mathcal{S}$. Then, $$\det(C) = \det(a_1)\cdots \det(a_x)\det(a'_1)\cdots \det(a'_y)\det(b_1) \cdots \det(b_z).$$ The determinant of any reflection in $\mathcal{R}_1$ is $\zeta$, of any reflection in $\mathcal{R}_2$ is $\zeta^2$, and of any reflection in $\mathcal{S}$ is $-1$. Additionally, the determinant of $C$ is $-\zeta$. Thus, the above equation implies $$-\zeta = \zeta^x\zeta^{2y}(-1)^z.$$ We see that we must have $z \equiv 1\pmod 2$.
With this, we can then simplify the equation to $$\zeta = \zeta^{x+2y}, \text{ i.e., } x+2y \equiv 1 \pmod 3.$$ Writing $x+2y = 3x'+s+6y'+2t = 3(x'+2y')+(s+2t)$, this is equivalent to $1\equiv (s+2t)\pmod 3$, as needed. \end{proof} \section{The Proof} \subsection{Local Results} Our goal is to find ways to take a length-$\ell$ factorization and relate it to one of length $\ell-1$ or $\ell-2$. This allows us to apply the principle of induction. We begin by addressing patterns that arise in reflection factorizations when Hurwitz moves are applied, depending on the conjugacy classes of the elements they are applied to. \begin{defn} Given a tuple of reflections $(x_1, \ldots, x_n)$, if we are able to perform Hurwitz moves to get a run of $n$ identical elements, such as $(\ldots, t,t, \ldots, t, \ldots)$, we call this $n$-tuple within the tuple of reflections a \textit{perfect $n$-tuple}. In the case when $n = 2$, we have a \textit{perfect pair}, and when $n = 3$ we have a \textit{perfect triple}. \end{defn} We wish to show that if we have some perfect tuple, it may be replaced by a single reflection when performing Hurwitz moves. This allows us to relate a reflection factorization with one of shorter length so that we may apply the principle of induction. In $\mathcal{R}_1$ and $\mathcal{R}_2$, all reflections have order $3$, so a perfect pair $(t,t)$ in $\mathcal{R}_1$ can be replaced by the single reflection $t^2$, which lies in $\mathcal{R}_2$, and vice versa. In $\mathcal{S}$, all reflections have order $2$, so a perfect pair would multiply to the identity, which is not a reflection. Thus, for these reflections we want a perfect triple $(t,t,t)$, which can be replaced by the original element $t$. Using these relationships we can make the following observations.
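As an aside, the determinant values just used ($\det A = \zeta$, $\det B = -1$, $\det C = -\zeta$), together with the defining relations of $G_6$, can be checked numerically against the explicit matrices given earlier. The following plain-Python sketch uses floating-point complex arithmetic with a tolerance of $10^{-9}$ (our choice); the last check verifies that $(AB)^3$ is the scalar matrix $iI$, hence central, which is equivalent to the braid relation.

```python
import cmath

z = cmath.exp(2j * cmath.pi / 3)    # zeta
g = cmath.exp(2j * cmath.pi / 12)   # gamma

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def close(X, Y, tol=1e-9):
    return all(abs(X[i][j] - Y[i][j]) < tol for i in range(2) for j in range(2))

I = [[1, 0], [0, 1]]
A = [[1, 0], [0, z]]
B = [[(g**11 - g**7) / 3, (-2 * g**11 - g**7) / 3],
     [(2 * g**11 + 4 * g**7) / 3, (g**7 - g**11) / 3]]
C = mul(A, B)

ABA = mul(A, mul(B, A))
BAB = mul(B, mul(A, B))
C3 = mul(C, mul(C, C))

assert close(mul(A, mul(A, A)), I)          # A^3 = I
assert close(mul(B, B), I)                  # B^2 = I
assert close(mul(ABA, BAB), mul(BAB, ABA))  # ABABAB = BABABA
assert abs(det(A) - z) < 1e-9               # det A = zeta
assert abs(det(B) + 1) < 1e-9               # det B = -1
assert abs(det(C) + z) < 1e-9               # det C = -zeta
assert close(C3, [[1j, 0], [0, 1j]])        # (AB)^3 = i*I is central
```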
\begin{lemma} \label{4 pair in 6} Given a reflection factorization $T$ in $G_6$ of Coxeter element $C$, if there are at least three elements from the set of reflections $\mathcal{R}'$ represented, then we are able to find a perfect pair $(t,t)$ in the Hurwitz orbit of $T$, for some $t$ in $\mathcal{R}'$. \end{lemma} \begin{proof} Consider a reflection factorization $T$ of a Coxeter element of $G_6$. We know by Lemma \ref{ordering of classes} that we can assume this factorization to be sorted by conjugacy class, where the elements from $\mathcal{R}'$ appear first and the elements from $\mathcal{S}$ follow. In the factorization, we let the elements from $\mathcal{R}'$ consist of $m$ elements from $\mathcal{R}_1$ and $n$ elements from $\mathcal{R}_2$. First we consider when $m$ or $n$ is zero, that is, when the elements from $\mathcal{R}'$ are in fact from a single conjugacy class, $\mathcal{R}_1$ or $\mathcal{R}_2$. First consider $m = 0\neq n$. If $n>4$, the result holds by the pigeonhole principle, and we cannot have $n=3$, by Proposition \ref{mod 3 conj classes}. Thus we only need to check the case where $n = 4$, where the factorization is \[(a_1, a_2, a_3, a_4, b_1, \ldots, b_k)\] where $a_i \in \mathcal{R}_2$, all the $a_i$ are distinct, and $b_i \in \mathcal{S}$. If we perform Hurwitz moves on position $4$ between the initial elements $a_4$ and $b_1$, by Proposition \ref{basic facts}(7) we introduce two new elements from $\mathcal{R}_2$, which by the pigeonhole principle must be two of $a_1, a_2, a_3$, all of which are already in the tuple. Thus, we have two identical elements, so we are able to form a $(t,t)$ pair. The case $n = 0\neq m$ is identical. Now we consider both $m$ and $n$ not equal to zero. As $n+m \geq 3$, there must be at least two elements from one conjugacy class. Without loss of generality, let $n\geq 2$. Using Hurwitz moves we rearrange the tuple so that we have $$(\ldots, x, y_1, z_1, b_1,\ldots, b_k)$$ where $x\in\mathcal{R}_1$ and $y_1, z_1 \in \mathcal{R}_2$. If $y_1 = z_1$, we are done.
Otherwise, consider $y_1 \neq z_1$. If $x$ and $y_1$ are inverses, we can easily remedy this by performing a Hurwitz move on the pair $(y_1, z_1)$. So, we assume $x$ and $y_1$ are not inverses. We then perform Hurwitz moves between the pairs $(x, y_1)$ and $(z_1, b_1)$. By parts (6) and (7) of Proposition \ref{basic facts}, the orbit of $(x, y_1)$ will produce one additional element $y_2$ from conjugacy class $\mathcal{R}_2$, and the orbit of $(z_1, b_1)$ will produce two additional elements $z_2, z_3$ from conjugacy class $\mathcal{R}_2$. Therefore we have five elements, $y_1, y_2, z_1, z_2, z_3$, all in the conjugacy class $\mathcal{R}_2$, which contains only four elements. By the pigeonhole principle there are $i,j$ such that $y_i = z_j$, giving us our $(t,t)$ pair. \end{proof} \begin{lemma} \label{422 perfect tuple} Given a tuple of elements of $\mathcal{S}$ of length $m \geq 3$, if $n<m$ elements of the tuple, $t_1, \ldots, t_n$, are in the same sub-conjugacy class, and if there is at least one other element $s$ in the tuple not in that sub-conjugacy class, then there exists a perfect $n$-tuple. \end{lemma} \begin{proof} First, we just consider $t_1, t_2$, and $s$. If $t_1 = t_2$ we can simply rearrange the tuple using Hurwitz moves so that $t_1$ and $t_2$ are next to each other, giving us a perfect pair. Otherwise, we consider $t_1 \neq t_2$. As $s$ is not in the same sub-conjugacy class as $t_1$ and $t_2$, we can rearrange the tuple so that we have $$(\ldots, t_1, s', t_2, \ldots)$$ where $s'$ is the element $s$ after being acted on by the appropriate Hurwitz moves. We can then perform a Hurwitz move between $s'$ and $t_1$. By Proposition \ref{basic facts}(3), the sub-conjugacy classes of $\mathcal{S}$ have two elements which only commute with each other, so this gives us the tuple $$(\ldots, s', t_2, t_2, \ldots)$$ with a perfect pair $(t_2, t_2)$.
If we wish to have the pair $(t_1, t_1)$, we can take this factorization, move $s'$ to be the rightmost of the three, where it will be conjugated to some element $s''$, and then move it back to the leftmost position, so we have $$(\ldots, s'', t_1, t_1, \ldots)$$ giving us the perfect pair $(t_1, t_1)$. Thus, given two elements $t_1$ and $t_2$, we can not only find a perfect pair, but we can choose whether it is a pair of $t_1$ or $t_2$. We can rewrite either result as $(\ldots, r, t, t, \ldots)$. We then introduce the element $t_3$ and rearrange the tuple using Hurwitz moves to get $$(\ldots, t, t, r', t_3, \ldots).$$ We apply the same process as before to get $$(\ldots, r'', t, t, t, \ldots).$$ We can continue to introduce the elements of $t_1, \ldots, t_n$ one by one until we have $$(\ldots, q, \underbrace{t, t, \ldots, t}_{n}, \ldots)$$ where $q\in\mathcal{S}$, giving us a perfect $n$-tuple, as needed. \end{proof} \begin{corollary} \label{422 n+1 tuple} Let $n$ be a nonnegative integer. For any tuple consisting of elements of $\mathcal{S}$ with length $3n +1$, we are able to find a perfect $(n+1)$-tuple. \end{corollary} \begin{proof} When $n = 0$, there is only one element and the result is trivial. Consider $n\geq 1$. In a tuple of $3n+1$ elements, all of which are from one of three sub-conjugacy classes, we have in the worst case, without loss of generality, $n$ elements in $\mathcal{S}_1$, $n$ elements in $\mathcal{S}_2$, and $n+1$ elements in $\mathcal{S}_3$. By Lemma \ref{422 perfect tuple}, we are able to produce a perfect $(n+1)$-tuple using Hurwitz moves. \end{proof} In our proof of Theorem \ref{THE theorem}, to show that any two reflection factorizations of $C$ are in the same orbit, we take a canonical factorization from each orbit and show that all factorizations in that orbit can reach this canonical factorization through Hurwitz moves. This canonical form is defined by the following.
\begin{defn} We say that a reflection factorization of the form $$(\underbrace{A, \ldots, A}_{n}, \underbrace{A^{-1}, \ldots, A^{-1}}_{m}, \underbrace{B, \ldots, B}_{k})$$ is a \textit{standard factorization} and we denote it by $[n,m,k]$. \end{defn} Given a reflection factorization of $C$ that has $n$ elements in the set $\mathcal{R}_1$, $m$ elements in the set $\mathcal{R}_2$, and $k$ elements in the set $\mathcal{S}$, we wish to show that it is in the same Hurwitz orbit as the standard factorization $[n,m,k]$. \subsection{Marked Factorizations} \begin{defn} [{\cite[Defn.\ 4.17]{Zach}}] \label{marked element} A \textit{marked element} in a reflection factorization is an element $t$ that has been marked, denoted as $t^*$. A \textit{marked factorization} $\hat{T}$ is a factorization which contains a marked element. \end{defn} Now we need to be able to apply Hurwitz moves to this marked element, so we define how Hurwitz moves act on these elements. \begin{defn} [{\cite[Defn.\ 4.18]{Zach}}] A \textit{marked Hurwitz move}, $\sigma^*$, is defined as follows for marked reflection factorizations: \begin{equation} (\ldots, t_i, t_{i+1},\ldots) \xrightarrow{\sigma_i^*} (\ldots, t_{i+1}, t_{i+1}^{-1}\cdot t_i \cdot t_{i+1}, \ldots) \end{equation} \begin{equation} (\ldots, t_i, t_{i+1}^*, \ldots) \xrightarrow{\sigma_i^*} (\ldots, t_{i+1}^*, t_{i+1}^{-1}\cdot t_i \cdot t_{i+1},\ldots) \end{equation} \begin{equation} (\ldots, t_i^*, t_{i+1}, \ldots) \xrightarrow{\sigma_i^*} (\ldots, t_{i+1}, (t_{i+1}^{-1}\cdot t_i \cdot t_{i+1})^*, \ldots) \end{equation} \end{defn} So, a marked Hurwitz move is identical to a Hurwitz move, while also shifting the position of the marking $^*$. Thus, all previously made statements about Hurwitz moves are also true for the marked Hurwitz move. In our inductive proof of the theorem, we do one of the following. \begin{enumerate} \item Take a perfect pair $(t,t)$ in $\mathcal{R}_1$ and replace it with the marked element $(t^2)^*$.
\item Take a perfect pair $(t,t)$ in $\mathcal{R}_2$ and replace it with the marked element $(t^2)^*$. \item Take a perfect triple $(t,t,t)$ in $\mathcal{S}$ and replace it with the marked element $(t^3)^* = t^*$. \end{enumerate} By Lemma \ref{4 pair in 6} and Corollary \ref{422 n+1 tuple}, this will be possible whenever our original factorization has at least $3$ reflections in $\mathcal{R}'$ or $7$ reflections in $\mathcal{S}$. By Proposition \ref{mod 3 conj classes}, we can see that we are guaranteed one of these cases for all factorizations of length $8$ and greater. For factorizations of length $7$ or less, we check the theorem using Sage, and these factorizations will function as our base case. \begin{proposition} \label{7 base case} The conjecture is true for reflection factorizations up to and including length $7$. \end{proposition} \begin{proof} As the number of elements here is finite, we are able to perform a finite number of calculations in Sage to prove the statement. For a given length $\ell$, we check with Sage how many reflection factorizations of $C$ there are in total. Then, we consider the possible standard forms $[n,m,k]$, as limited by Proposition \ref{mod 3 conj classes}. Using Sage, we then computed the length of the orbits of each possible standard form. In all cases, the sum of these orbit sizes is equal to the total number of reflection factorizations, so the conjecture holds for length $\ell$. We verified this fact for $2\leq \ell\leq 7$. \end{proof} To prove our theorem inductively, we must show that performing Hurwitz moves on a shortened (marked) factorization has the same effect as performing certain Hurwitz moves on the corresponding longer factorization.
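The Sage computations invoked here are small enough to mirror in plain Python. The following sketch (our own reimplementation, using floating-point matrices with entries rounded for hashing) generates $G_6$ by closure from the explicit matrices $A$ and $B$ given earlier, and recovers the counts used throughout: $48$ group elements and $14$ reflections, falling into conjugacy classes of sizes $4$, $4$, and $6$.

```python
import cmath

z = cmath.exp(2j * cmath.pi / 3)   # zeta
g = cmath.exp(2j * cmath.pi / 12)  # gamma

A = ((1, 0), (0, z))
B = (((g**11 - g**7) / 3, (-2 * g**11 - g**7) / 3),
     ((2 * g**11 + 4 * g**7) / 3, (g**7 - g**11) / 3))

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv(X):
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return ((X[1][1] / d, -X[0][1] / d), (-X[1][0] / d, X[0][0] / d))

def key(X):
    # hashable fingerprint of a matrix, robust to floating-point noise
    return tuple((round(e.real, 6), round(e.imag, 6)) for row in X for e in row)

# generate G_6 by closure under right-multiplication by the generators
group = {key(A): A, key(B): B}
frontier = [A, B]
while frontier:
    X = frontier.pop()
    for gen in (A, B):
        Y = mul(X, gen)
        if key(Y) not in group:
            group[key(Y)] = Y
            frontier.append(Y)
assert len(group) == 48  # |G_6| = 48

# a reflection is a non-identity element with eigenvalue 1, i.e. det(X - I) = 0
ident = ((1, 0), (0, 1))
def is_reflection(X):
    if key(X) == key(ident):
        return False
    return abs((X[0][0] - 1) * (X[1][1] - 1) - X[0][1] * X[1][0]) < 1e-6

refl = [X for X in group.values() if is_reflection(X)]
assert len(refl) == 14

# conjugacy classes of reflections: expect sizes 4, 4, 6
classes, seen = [], set()
for r in refl:
    if key(r) in seen:
        continue
    orbit = {key(mul(mul(inv(h), r), h)) for h in group.values()}
    seen |= orbit
    classes.append(len(orbit))
assert sorted(classes) == [4, 4, 6]
```

An orbit enumeration in the spirit of the base-case check would proceed the same way, with factorization tuples keyed by the fingerprints above.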
\begin{lemma} \label{moving marked elements} Let $T$ be a reflection factorization $T = (\ldots, t, t, \ldots, t, \ldots)$ with a perfect $n$-tuple $(t,t,\ldots,t)$, and let $\hat{T}$ be the marked factorization resulting from letting $(t,t,\ldots,t) = (t^n)^*$, so that we have $\hat{T} = (\ldots, (t^n)^*, \ldots)$. Suppose that the marked factorization $\hat{S}$ is obtained from $\hat{T}$ by performing some Hurwitz moves, and that $\hat{S}$ has marked element $(s^n)^*$. Suppose further that this marked element has a unique expansion \[s^n =\underbrace{s \cdots s}_{n}\] and let $S$ be the factorization that we get by replacing $(s^n)^*$ with the $n$-tuple $(s, \ldots, s)$. Then, there is a series of Hurwitz moves that can be performed on $T$ to obtain the factorization $S$. \end{lemma} \begin{proof} By induction, it suffices to prove the case where $\hat{T}$ is related to $\hat{S}$ by a single Hurwitz move $\sigma_i$. If this Hurwitz move does not involve the marked element, then the effect of performing a Hurwitz move in $\hat{T}$ is identical to performing a Hurwitz move in $T$ and so the result follows immediately in this case. Otherwise, we first consider the following move $$ \hat{T} = (\ldots ,s, (t^n)^*, \ldots) \xrightarrow{\sigma_i^*} (\ldots ,(t^n)^*, t^{-n}\cdot s\cdot t^n, \ldots) =: \hat{S}. $$ Thus, $T = (\ldots, s, t,\ldots, t, \ldots)$ and $S = (\ldots, t, \ldots, t, t^{-n}\cdot s\cdot t^n, \ldots)$. We can then perform the following Hurwitz moves on $T$: \begin{equation*} \begin{split} T = (\ldots, s, t, \ldots, t, \ldots) & \xrightarrow{\sigma_i} (\ldots, t, t^{-1}\cdot s\cdot t, t, \ldots, t, \ldots)\\ & \xrightarrow{\sigma_{i+1}} (\ldots, t, t, t^{-2}\cdot s\cdot t^2, t, \ldots, t, \ldots) \\ & \vdots \\ & \xrightarrow{\sigma_{i+n-1}} (\ldots, t, \ldots, t, t^{-n}\cdot s\cdot t^n, \ldots) = S, \end{split} \end{equation*} as needed.
We now consider the move $$\hat{T} = (\ldots , (t^n)^*,s, \ldots) \xrightarrow{\sigma_i^*} (\ldots ,s, (s^{-1}\cdot t^n \cdot s)^* , \ldots) =: \hat{S}.$$ Thus, $T = (\ldots, t, \ldots, t, s, \ldots)$ and $S = (\ldots, s, s^{-1}\cdot t \cdot s,\ldots , s^{-1}\cdot t \cdot s, \ldots)$. We can then perform the following Hurwitz moves on $T$: \begin{equation*} \begin{split} T = (\ldots, t, \ldots, t, s, \ldots) & \xrightarrow{\sigma_{i+n-1}} (\ldots, t, \ldots, t, s, s^{-1}\cdot t \cdot s,\ldots)\\ & \xrightarrow{\sigma_{i+n-2}} (\ldots, t, \ldots, t, s, s^{-1}\cdot t \cdot s, s^{-1}\cdot t \cdot s,\ldots)\\ & \vdots\\ & \xrightarrow{\sigma_{i}} (\ldots, s, s^{-1}\cdot t\cdot s, \ldots, s^{-1}\cdot t\cdot s, \ldots) = S, \end{split} \end{equation*} as needed. \end{proof} \subsection{Proof of the Theorem} We now have all the facts needed to prove our main theorem by induction on the length of the reflection factorizations of $C$. We restate it here for convenience. \begin{recap} Let $T$ and $T'$ be two length $n$ reflection factorizations of a Coxeter element of the complex reflection group $G_6$. Then, $T$ and $T'$ are in the same Hurwitz orbit if and only if they have the same multiset of conjugacy classes. \end{recap} \begin{proof} We know that Hurwitz moves preserve conjugacy class, so the forward direction holds. Now we consider the backward direction, which we prove by induction on the length of the reflection factorizations $T$ and $T'$. The base cases, factorizations of length $7$ or less, were checked in Proposition \ref{7 base case}. Assume that the theorem holds for all factorizations of length $\ell$ or less. Consider a factorization $T$ of length $\ell + 1$. We wish to show that $T$ is in the same Hurwitz orbit as the standard factorization with the same multiset of conjugacy classes, $[n,m,k]$.
If $T$ has $7$ or more elements from the conjugacy class $\mathcal{S}$, then by Corollary \ref{422 n+1 tuple} we are able to find a perfect triple $(t,t,t)$ in the Hurwitz orbit of $T$. From this perfect triple we are able to create the marked element $(t^3)^* = t^*$, giving us the marked factorization $\hat{T}$. As $\hat{T}$ is of length $\ell-1$, by the inductive hypothesis it is in the same Hurwitz orbit as the standard factorization $[n,m,k-2]$. Thus, the marked element here is the element $B^*$, which when replaced with $(B,B,B)$ gives us the standard factorization $[n,m,k]$. By Lemma \ref{moving marked elements} this means that the standard factorization is in the same Hurwitz orbit as $T$. If $T$ has fewer than $7$ elements from $\mathcal{S}$, then by Proposition \ref{mod 3 conj classes} there must be at least $3$ elements from $\mathcal{R}'$, since we are considering factorizations of length $\ell+1\geq 8$. By Lemma \ref{4 pair in 6} we know that we are able to find a perfect pair $(t^2,t^2)$ in a factorization in the Hurwitz orbit of $T$, so without loss of generality, we may assume that such a pair already exists in $T$. From this perfect pair we are able to create the marked element $(t^4)^* = t^*$, giving us the marked factorization $\hat{T}$. As $\hat{T}$ has length $\ell$, by the inductive hypothesis it is in the same Hurwitz orbit as the standard factorization with the corresponding multiset of conjugacy classes. The marked element would then be either the element $(A^{-1})^*$ or $A^*$. If we have $(A^{-1})^*$, we have \[(\underbrace{A, \ldots, A,}_{n-2} \underbrace{A^{-1}, \ldots, (A^{-1})^*, \ldots, A^{-1},}_{m+1} \underbrace{B, \ldots, B}_{k}).\] We expand the marked element to $(A,A)$: \[(\underbrace{A, \ldots, A,}_{n-2} \underbrace{A^{-1}, \ldots, A,A, \ldots, A^{-1},}_{m+2} \underbrace{B, \ldots, B}_{k}),\] where these two elements can be easily moved to the $\mathcal{R}_1$ section of the factorization, as $A$ and $A^{-1}$ commute under Hurwitz moves.
For the same reasons, if we have $A^*$, we have \[(\underbrace{A, \ldots,A^*, \ldots, A,}_{n+1} \underbrace{A^{-1}, \ldots, A^{-1}}_{m-2}, \underbrace{B, \ldots, B}_{k}).\] We expand the marked element to $(A^{-1},A^{-1})$: \[(\underbrace{A, \ldots,A^{-1}, A^{-1}, \ldots, A,}_{n+2} \underbrace{A^{-1}, \ldots, A^{-1}}_{m-2}, \underbrace{B, \ldots, B}_{k}),\] where this pair can be easily moved to the $\mathcal{R}_2$ section of the factorization if it is not already there. These expansions give us the standard factorization $[n,m,k]$ of length $\ell+1$. By Lemma \ref{moving marked elements} this means that the standard factorization is in the same Hurwitz orbit as $T$. As $T$ is arbitrary, and $T'$ has the same multiset of conjugacy classes as $T$, $T'$ is also in the same Hurwitz orbit as $[n,m,k]$. Thus, $T$ and $T'$ are in the same Hurwitz orbit, as needed. \end{proof}
\section{Introduction} \label{sec:intro} The structure of the QCD ground state is reflected in its observable hadron spectrum. In vacuum, the formation of quark and gluon condensates leads to the generation of hadron masses and the spontaneous breaking of chiral symmetry (SBCS). The latter induces mass splittings of ca.~0.5\,GeV for chiral partners in the light-hadron spectrum, {\it e.g.}, between $\pi$-$\sigma$ or $\rho$-$a_1$. In a hot medium, chiral symmetry is restored across a region around a pseudo-critical temperature of $T_{\rm pc}$$\simeq$160\,MeV~\cite{Borsanyi:2010bp,Bazavov:2011nk}. A long-standing question is how this restoration manifests itself in the hadron spectrum, {\it i.e.}, what its observable consequences are. Dilepton data from ultra-relativistic heavy-ion collisions (URHICs)~\cite{Arnaldi:2008fw,Adamova:2006nu,Geurts:2012rv} are now providing strong evidence that the $\rho$ resonance ``melts'' when the system passes through the pseudo-critical region~\cite{Rapp:2013ema}, while experimental access to the in-medium $a_1$ spectral functions (e.g., via $a_1\to\pi\gamma$) remains elusive. Thus, to test whether the $\rho$ melting in the vector channel signals chiral restoration, a theoretical evaluation of the in-medium axialvector spectral function is needed. A straightforward approach to calculate the in-medium axialvector spectral function, by using a chiral Lagrangian paralleling the treatment of the $\rho$ meson, turns out to be challenging~\cite{Urban:2001uv}. For example, the widely used scheme of implementing the $\rho$ and $a_1$ mesons into the pion Lagrangian through a local gauging procedure causes considerable problems in describing the vacuum spectral functions as measured in hadronic $\tau$ decays~\cite{Barate:1998uf,Ackerstaff:1998yj}, which led some groups to abandon the local gauging procedure~\cite{Urban:2001ru,Parganlija:2010fz}.
In the present work, we adopt a more modest approach to this problem, by utilizing in-medium sum rules. Specifically, we adopt the well-known Weinberg sum rules (WSRs)~\cite{Das:1967ek,Weinberg:1967kj,Kapusta:1993hq} which relate (moments of) the difference between vector and axialvector spectral functions to operators signifying SBCS. Using available calculations of the in-medium $\rho$ spectral function together with temperature-dependent order parameters as an input, we ask whether {\em a} (not necessarily {\em the}) axialvector spectral function can be found to satisfy the in-medium sum rules. To tighten our constraints, we simultaneously employ finite-temperature QCD sum rules (QCDSRs)~\cite{Shifman:1978bx,Shifman:1978by} in vector and axialvector channels, which additionally involve chirally invariant condensates. Related works have been carried out, e.g., in the low-temperature limit~\cite{Marco:2001dh,Holt:2012wr}, for heavy-quark channels~\cite{Hilger:2011cq}, or focusing on chirally odd condensates in the vector channel only~\cite{Hilger:2010cn}. The present analysis builds on our previous work~\cite{Hohler:2012xd} where QCD and Weinberg sum rules have been tested in vacuum with vector and axialvector spectral functions that accurately fit hadronic $\tau$-decays. The combination of four WSRs turned out to be a rather sensitive probe of the spectral functions, allowing us, {\it e.g.}, to deduce the presence of an excited axialvector meson, $a_1'$. This makes for a promising tool at finite temperature ($T$), aided by an experimentally tested in-medium vector spectral function and in-medium condensates from lattice QCD (lQCD). In the absence of reliable microscopic models for the $a_1$ and the excited states, the price to pay is the {\it a priori} unknown in-medium behavior of these states.
However, with guidance from model-independent chiral mixing theorems to constrain the $T$ dependence of the higher states, one can still hope for a sensitive test of the in-medium $a_1$ spectral function, and to gain novel insights into (the approach to) chiral restoration in the $IJ^P=11^\pm$ chiral multiplet. This is the main objective of our work. The Letter is organized as follows. We recall the in-medium QCDSRs and WSRs in Sec.~\ref{sec:sumrule} and specify the $T$ dependence of their ``right-hand sides'' (condensates) in Sec.~\ref{sec:cond}. The finite-$T$ axial-/vector spectral functions (``left-hand sides'') are detailed in Sec.~\ref{sec:FTspec}, followed by quantitative sum rule analyses in Sec.~\ref{sec:results}. We conclude in Sec.~\ref{sec:conc}. \section{Finite Temperature Sum Rules} \label{sec:sumrule} The basic quantity figuring into WSRs and QCDSRs is the isovector current-current correlator in the vector ($V$) and axialvector ($A$) channels, \begin{equation} \Pi_{V,A}^{\mu\nu}(q^2) = - i \int d^4 x \ e^{i x q} \avg{T \vec{J}_{V,A}^{\mu}(x) \vec{J}_{V,A}^{\nu}(0) } \ . \end{equation} In the quark basis with two light flavors, the currents read $\vec{J}_V^\mu = \bar{q} \vec{\tau} \gamma^\mu q$ and $\vec{J}_A^\mu = \bar{q} \vec{\tau} \gamma^\mu \gamma_5 q$ ($\vec{\tau}$: isospin Pauli matrices). From here on, we focus on charge-neutral states (isospin $I_3$=0) and drop isospin indices. In vacuum, the correlators can be decomposed into 4D transverse and longitudinal components as \begin{equation} \Pi_{V,A}^{\mu\nu}(q^2) = \Pi_{V,A}^T(q^2) \left(-g^{\mu\nu} + \frac{q^\mu q^\nu}{q^2}\right) + \Pi_{V,A}^L(q^2) \frac{q^\mu q^\nu}{q^2} \ . \end{equation} Vector-current conservation implies $\Pi_V^L(q^2)$=0, while the pion pole induces the partial conservation of the axialvector current (PCAC), \begin{equation} \Pi_A^L (q^2) = f_\pi^2 q^2 \delta(q^2-m_\pi^2) \ .
\end{equation} Lorentz symmetry breaking at finite $T$ splits the 4D-transverse polarization functions into 3D-transverse and 3D-longitudinal parts. From here on, we focus on vanishing 3-momentum ($\vec{q}$=0), for which the 3D components are degenerate. We define pertinent spectral functions as \begin{equation} \rho_{V,A} = -\frac{{\rm Im}\Pi_{V,A}^{T}}{\pi} \ , \ \rho_{\bar A}= \rho_{A} - \frac{{\rm Im}\Pi_{A}^{L}}{\pi} \ . \end{equation} The QCDSRs equate a dispersion integral on the left-hand-side (LHS) to an operator product expansion (OPE) on the right-hand-side (RHS); for the axial-/vector channels they read~\cite{Hatsuda:1992bv,Leupold:1998bt,Zschocke:2002mn} \begin{eqnarray} &&\!\!\!\!\frac{1}{M^2}\!\int_0^\infty \!ds \frac{\rho_{V,\bar{A}}(s)}{s} e^{-s/M^2} = \frac{1}{8\pi^2} \left(1+\frac{\alpha_s}{\pi}\right) +\frac{m_q \langle\bar{q}q\rangle}{M^4} \nonumber\\& &\!\!\!\!+\frac{1}{24 M^4}\langle\frac{\alpha_s}{\pi} G_{\mu\nu}^2\rangle - \frac{\pi \alpha_s}{M^6} \frac{(56,-88)}{81} \langle \mathcal{O}_4^{V,A} \rangle \\ & &\!\!\!\!+\sum_h \frac{\langle \mathcal{O}^{d=4,\tau=2}_h \rangle_T}{M^4}+\frac{\langle\mathcal{O}^{d=6,\tau=2}_h \rangle_T}{M^6}+\frac{\langle \mathcal{O}^{d=6, \tau=4}_h \rangle_T}{M^6} \ldots \ , \nonumber \end{eqnarray} where the space-like $q^2$ is traded for the Borel mass $M^2$ by a standard Borel transform. On the RHS, we include all operators up to dimension-6, i.e., the common scalar operators already present in the vacuum (quark, gluon, and 4-quark condensates, $\avg{\bar{q}q}$, $\avg{\frac{\alpha_s}{\pi} G^2_{\mu\nu}}$, and $\langle\mathcal{O}_4^{V,A}\rangle$, respectively), as well as non-scalar operators induced by thermal hadrons ($h$), organized by dimension ($d$) and twist ($\tau$). The $T$ dependencies are detailed in Sec.~\ref{sec:cond}. The WSRs relate moments of the difference between the vector and axialvector spectral functions to chiral order parameters. 
Their formulation at finite $T$ was first carried out in Ref.~\cite{Kapusta:1993hq}. Subtracting the two channels of the finite-$T$ QCDSRs from one another, Taylor-expanding the Borel exponential, and equating powers of $M^2$ on each side of the sum rule yields \begin{eqnarray} ({\rm WSR}\, 1)& \quad \int_0^\infty \!ds \, \frac{\Delta\rho(s)}{s} = f_\pi^2 \ , \label{eq:WSR1} \\ ({\rm WSR}\, 2)& \quad \int_0^\infty\! ds\, \Delta\rho(s) = f_\pi^2 m_\pi^2 = -2 m_q \langle \bar{q}q \rangle \ , \label{eq:WSR2}\\ ({\rm WSR}\, 3)& \quad \int_0^\infty ds s \Delta\rho(s) = - 2 \pi \alpha_s \langle \mathcal{O}_4^{SB} \rangle \ ,\label{eq:WSR3} \end{eqnarray} where $\Delta \rho = \rho_V - \rho_A$. The chiral breaking 4-quark condensate is given by the axial-/vector ones as \begin{equation} \label{eq:q4sbdef} \avg{\mathcal{O}_4^{SB}} = \frac{16}{9}\left( \frac{7}{18} \avg{\mathcal{O}_4^V} + \frac{11}{18} \avg{\mathcal{O}_4^A}\right) \, . \end{equation} Since the WSRs only contain chiral order parameters, they are particularly sensitive to chiral symmetry restoration, whereas the QCDSRs are channel specific thus providing independent information. \section{In-Medium Condensates} \label{sec:cond} We now turn to the $T$ dependence of each condensate figuring into the QCDSRs. 
To leading order in the density of a hadron $h$ in the heat bath, the in-medium condensate associated with a given operator $\mathcal{O}$ can be approximated by \begin{equation} \label{eq:opT} \langle \mathcal{O} \rangle_T \simeq \langle \mathcal{O}\rangle_0 + d_h \int \frac{d^3 k}{\left(2 \pi\right)^3 2 E_h} \langle h(\vec{k})|\mathcal{O}|h(\vec{k})\rangle n_h(E_h) \ , \end{equation} where $\langle \mathcal{O}\rangle_0$ is the vacuum value of the operator, $\langle h(\vec{k})|\mathcal{O}|h(\vec{k})\rangle$ its hadronic matrix element, $E_h^2$=$m_h^2+\vec{k}^2$, and $d_h$, $m_h$, and $n_h$ are the hadron's spin-isospin degeneracy, mass, and thermal distribution function (Bose ($n_b$) or Fermi ($n_f$)), respectively. Working at zero baryon chemical potential ($\mu_B$=0), we absorb anti-baryons into the degeneracy factor of baryons. Corrections to Eq.~(\ref{eq:opT}) figure via multi-hadron matrix elements of the operator. We approximate the medium by a hadron resonance gas (HRG) including all confirmed states with mass $m_h$$\leq$~2\,GeV~\cite{pdg}. For the temperatures of interest here, $T$$\lesssim$\,170\,MeV, the HRG is known to reproduce the equation of state from lQCD quite well~\cite{Karsch:2003vd}. Since the calculation of the in-medium $\rho$ spectral function is also based on HRG degrees of freedom, the OPE and spectral function sides of the sum rules are evaluated in the same basis. For the subsequent discussion, we define the integrals \begin{equation} I_n^h = d_h \int \frac{d^3 k}{(2\pi)^3 E_h} k^{2n-2} n_{h}(E_h) \ . \end{equation} Note that $m_h I_1^h$ is the scalar density, $\varrho_s^h$. 
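The thermal integrals $I_n^h$ are straightforward to evaluate numerically. A minimal stdlib-only sketch (midpoint quadrature, single-particle Bose/Fermi occupancies, GeV units; the momentum cutoff and grid size are illustrative choices, not part of the formalism):

```python
import math

def I_n(n, m, T, deg, boson=True, kmax=3.0, steps=2000):
    """Leading-density thermal integral (GeV units)
         I_n^h = d_h int d^3k/((2 pi)^3 E) k^(2n-2) n_h(E),
       by midpoint quadrature; deg = d_h is the spin-isospin degeneracy."""
    dk = kmax / steps
    tot = 0.0
    for i in range(steps):
        k = (i + 0.5) * dk
        E = math.sqrt(m * m + k * k)
        x = math.exp(-E / T)
        occ = x / (1.0 - x) if boson else x / (1.0 + x)   # Bose / Fermi
        tot += k ** (2 * n) / E * occ    # extra k^2 from the d^3k measure
    return deg * tot * dk / (2.0 * math.pi ** 2)

# pion scalar density rho_s^pi = m_pi * I_1^pi at T = 150 MeV (d_pi = 3)
m_pi, T = 0.1396, 0.150
rho_s_pi = m_pi * I_n(1, m_pi, T, deg=3)
print(f"rho_s^pi(T=150 MeV) = {rho_s_pi:.3e} GeV^3")
```

The angular integration over $d^3k$ yields the $1/(2\pi^2)$ prefactor; heavier resonances in the HRG are obtained from the same routine with their masses and degeneracies.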
\subsection{Quark Condensate} The HRG correction to the quark condensate is~\cite{Gerber:1988tt,Leupold:2006ih} \begin{equation} \label{eq:q2T} \begin{split} &\frac{\avg{\bar{q}q}_T}{\avg{\bar{q}q}_0} = 1 - \frac{\varrho_s^\pi}{2 m_\pi f_\pi^2} - \frac{\varrho_s^K}{4 m_K f_K^2} - \frac{\varrho_s^\eta}{6 m_\eta f_\eta^2} - \frac{\varrho_s^{\eta'}}{ 3 m_{\eta'} f_{\eta'}^2} \\ & \qquad \quad \quad - \sum_B \frac{\sigma_B}{f_\pi^2 m_\pi^2} \varrho_s^B- \sum_M \frac{\sigma_M}{f_\pi^2 m_\pi^2} \varrho_s^M- \alpha T^{10} \ . \end{split} \end{equation} The Goldstone boson contribution can be inferred from current algebra (with decay constants given in Tab.~\ref{tab:parq2T}). The contributions from baryons ($B$) and other mesons ($M$) can be derived from the HRG partition function via $\partial \ln Z/\partial m_q$, which is nothing but the in-medium condensate. They are determined by their $\sigma$-terms, which to lowest order are given by the (current) quark masses, $m_q$, of the light valence quarks in the hadron~\cite{Gasser:1990ce}. However, important contributions arise from the hadron's pion cloud~\cite{Jameson:1992ep,Birse:1992he}. We write \begin{equation} \sigma_h = \sigma_q^{\rm bare} + \sigma_\pi^{\rm cloud} \equiv \sigma_0\, m_q\, (N_q-N_s) \label{sigh} \end{equation} where $N_q$ ($N_s$) is the number of all (strange) valence quarks in $h$. We adjust the proportionality constant to $\sigma_0$=2.81 to recover the recent nucleon value, $\sigma_N$=59\,MeV~\cite{MartinCamalich:2010fp}, and assume it to be universal for all hadrons. This leads to fair agreement with estimates of $\sigma_h$ for other ground-state baryons~\cite{MartinCamalich:2010fp}. Note that the decomposition of the $\sigma$ terms into quark core and pion cloud effects parallels the medium effects of the $\rho$ spectral function~\cite{Rapp:2012zq}. Our HRG results reproduce lQCD ``data''~\cite{Borsanyi:2010bp} for $T$$\lesssim$140\,MeV, see Fig.~\ref{fig:cond}(a).
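The leading (pion-only) piece of Eq.~(\ref{eq:q2T}) can be evaluated with a free thermal pion gas. A sketch (GeV units; heavier hadrons, the strange Goldstone bosons, and the $\alpha T^{10}$ term are deliberately left out, so the suppression shown here is only a lower bound on the full HRG effect):

```python
import math

# Pion-only piece of the condensate ratio: <qq>_T/<qq>_0 ~ 1 - rho_s^pi/(2 m_pi f_pi^2)
m_pi, f_pi = 0.1396, 0.0924

def rho_s_pion(T, kmax=3.0, steps=2000):
    """Scalar density of a free pion gas (d_pi = 3), midpoint quadrature."""
    dk = kmax / steps
    s = 0.0
    for i in range(steps):
        k = (i + 0.5) * dk
        E = math.sqrt(m_pi * m_pi + k * k)
        s += k * k / E / (math.exp(E / T) - 1.0)
    return 3.0 * m_pi * s * dk / (2.0 * math.pi ** 2)

ratios = {}
for T in (0.100, 0.140, 0.170):
    ratios[T] = 1.0 - rho_s_pion(T) / (2.0 * m_pi * f_pi ** 2)
    print(f"T = {T * 1000:.0f} MeV: <qq>_T/<qq>_0 (pions only) = {ratios[T]:.3f}")
```

The pion-only suppression stays below roughly 20\% even at $T$=170\,MeV, which illustrates why the heavier HRG states and the phenomenological $T^{10}$ term are needed to drive the condensate toward zero.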
To improve the agreement at higher $T$ without affecting the low-$T$ behavior, we introduced a term $\alpha T^{10}$ on the RHS of Eq.~(\ref{eq:q2T}), with $\alpha$=1.597~$\cdot 10^7$\,GeV$^{-10}$. The quark condensate then vanishes slightly above $T$=170\,MeV, signaling the breakdown of our approach. Choosing a somewhat higher power in $T$ (with accordingly adjusted $\alpha$) has no significant impact on our results, while a smaller power adversely affects the agreement with lQCD data at low $T$. \begin{figure}[t!] \centering \includegraphics[width=.42\textwidth]{q2cond3.eps} \includegraphics[width=.42\textwidth]{q4cond.eps} \caption{Temperature dependence of: (a) the quark condensate relative to its vacuum value, compared to thermal lQCD data~\cite{Borsanyi:2010bp}; (b) axial-/vector 4-quark condensates relative to their vacuum values, compared to the quark condensate.} \label{fig:cond} \end{figure} \begin{table}[!b] \begin{center} \begin{tabular}{|c|cccccc|} \hline Parameter & $f_\pi$ & $f_K$ &$f_\eta$ & $f_{\eta'}$ & $m_q$ & $m_\pi$ \\ \hline Value (MeV) & 92.4 & 113 &124 & 107 & 7 & 139.6 \\ \hline \end{tabular} \end{center} \vspace{-0.3cm} \caption{Numerical values of key parameters figuring into Eq.~(\ref{eq:q2T}). For hadron masses not listed we take averages from the particle data group~\cite{pdg}.} \label{tab:parq2T} \end{table} \subsection{Gluon Condensate} For the gluon condensate, the contributions from pions and nucleons have been evaluated in Refs.~\cite{Cohen:1991nk,Hatsuda:1992bv,Zschocke:2002mn}. The HRG effect can be inferred from the trace anomaly, \begin{equation} \theta^\mu_\mu = -\frac{9}{8} \frac{\alpha_s}{\pi}G^2_{\mu\nu} + \sum\limits_{q} m_q \bar qq \ , \end{equation} by calculating $\Delta \! \avg{\theta^\mu_\mu} = \epsilon -3P = \sum_h m_h \varrho_s^h$ to obtain \begin{equation} \!\!\Delta\!\avg{\frac{\alpha_s}{\pi}G^2_{\mu\nu}} = -\frac{8}{9} \left[ \Delta \! 
\avg{\theta^\mu_\mu}-2 m_q \Delta\!\avg{\bar{q}q} - m_s \Delta\!\avg{\bar{s}s}\right] . \end{equation} The change in light-quark condensate is taken from Eq.~(\ref{eq:q2T}). For the strange-quark condensate, we assume its suppression from individual resonances to scale with the valence strange-quark content of each hadron $h$, paralleling the procedure of determining the $\sigma$-term for each hadron. One has \begin{equation} m_s \Delta\!\avg{\bar{s}s} = \sum_h \frac{N_s}{N_q-N_s} \left(2 m_q \Delta\!\avg{\bar{q}q}_h \right), \end{equation} where $\Delta\!\avg{\bar{q}q}_h$ is from Eq.~(\ref{eq:q2T}). The HRG suppression of the gluon condensate reaches 13\% at $T$=170\,MeV. \subsection{Four-Quark Condensates} For the medium dependence of the vector and axialvector 4-quark condensates induced by Goldstone bosons, we adopt the results from current algebra~\cite{Hatsuda:1992bv}. For the non-Goldstone bosons and baryons, arguments based on the large-$N_c$ limit~\cite{Leupold:2005eq,Leupold:2006ih} suggest a factorization approximation, i.e., the medium effect linear in their (scalar) density amounts to a factor of 2 times the reduction in the quark condensate, with the same factorization parameter as in vacuum (we have checked that an increase of the in-medium factorization parameter by a factor of 2 has a negligible impact on the OPEs and thus on the resulting spectral functions). The $T$ dependence of the vector and axialvector 4-quark condensates then takes the form \begin{eqnarray} \label{eq:V4qFT} &&\frac{\langle \mathcal{O}_4^{V,A} \rangle_T}{\langle \mathcal{O}_4^{V,A} \rangle_0} = 1 - \frac{(12/7, 12/11)}{m_\pi f_\pi^2} \varrho_s^\pi - \frac{(9/14, 9/22)}{m_K f_K^2} \varrho_s^K \nonumber\\ &&\quad \ \ - \sum_B \frac{2\sigma_B}{f_\pi^2 m_\pi^2} \varrho_s^B -\sum_M \frac{2\sigma_M}{f_\pi^2 m_\pi^2} \varrho_s^M + \beta_{V,A} T^{10} \ . \end{eqnarray} As for the quark condensate, we augmented the $T$ dependence by a term $\beta_{V,A} T^{10}$.
Since thermal lQCD data are not available for 4-quark condensates, we adjusted $\beta_{V,A}$ for each channel to make them vanish at the same temperature as the quark condensate, resulting in $\beta_V$=$3.05 \cdot 10^7 {\rm GeV}^{-10}$ and $\beta_A$=$1.74 \cdot 10^7 {\rm GeV}^{-10}$. The $T$ dependence of the chiral breaking 4-quark condensate follows from the axial-/vector ones via Eq.~(\ref{eq:q4sbdef}); relative to the quark condensate, their initial fall-off is faster but slows down above $T$$\simeq$140\,MeV, cf.~Fig.~\ref{fig:cond}(b). \subsection{Non-Scalar Condensates} Hadrons in the heat bath also induce non-scalar condensates. For our QCDSR analysis the relevant ones are of dimension-4 twist-2, $\avg{\mathcal{O}^{d=4,\tau=2}}_T$, dimension-6 twist-2, $\avg{\mathcal{O}^{d=6,\tau=2}}_T$, and dimension-6 twist-4, $\avg{\mathcal{O}^{d=6,\tau=4}}_T$. We adopt their $T$ dependence as elaborated in Refs.~\cite{Hatsuda:1992bv,Leupold:1998bt,Zschocke:2002mn}, with the contribution of each hadron given by \begin{eqnarray} \langle \mathcal{O}^{d=4, \tau=2}_h \rangle_T &=& \frac{A_2^h}{4}\left(m_h^2 I_1^h+\frac{4}{3} I_2^h\right),\nonumber\\ \langle \mathcal{O}^{d=6, \tau=2}_h \rangle_T &=& -\frac{5 A_4^h}{24}\left(m_h^4 I_1^h + 4 m_h^2 I_2^h +\frac{16}{5} I_3^h\right),\nonumber\\ \langle \mathcal{O}^{d=6, \tau=4}_h \rangle_T &=& \frac{B_2^h}{4}\left(m_h^2 I_1^h+\frac{4}{3} I_2^h\right). \end{eqnarray} The parameters $A_2$ and $A_4$, which control the twist-2 operators, are related to moments of parton distribution functions for the $u$ and $d$ quarks in the hadron, \begin{equation} \label{eq:an} A_n = 2 \int_0^1 dx x^{n-1} (\bar{q}(x)+q(x)) \ . \end{equation} One can think of $A_2$ as twice the momentum fraction of the up and down quarks in the hadron, with $A_4$ a higher moment. Their values are reasonably well known for the pion and nucleon, $A_2^\pi = 0.97$, $A_4^{\pi}$=0.255, $A_2^N$=1.12, $A_4^N$=0.12, while there is substantial uncertainty for other hadrons.
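Eq.~(\ref{eq:an}) can be illustrated with a toy valence-type light-quark distribution $q(x)\propto x^{a}(1-x)^{b}$ (a Regge-like shape chosen here for illustration, not a fitted PDF), normalized to two valence quarks with $\bar q = 0$:

```python
import math

# Toy illustration of A_n = 2 * int_0^1 dx x^(n-1) [qbar(x) + q(x)]
a_pow, b_pow = -0.5, 1.0    # illustrative small-x and large-x powers

def beta_fn(p, q):
    """Euler beta function B(p, q) = Gamma(p)Gamma(q)/Gamma(p+q)."""
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

norm = 2.0 / beta_fn(a_pow + 1.0, b_pow + 1.0)   # int q(x) dx = 2 valence quarks

def q_dist(x):
    return norm * x ** a_pow * (1.0 - x) ** b_pow

def A(n, steps=20000):
    """Midpoint evaluation of the n-th moment (integrand is regular for n >= 2)."""
    dx = 1.0 / steps
    return 2.0 * sum(((i + 0.5) * dx) ** (n - 1) * q_dist((i + 0.5) * dx)
                     for i in range(steps)) * dx

A2, A4 = A(2), A(4)
print(f"toy moments: A_2 = {A2:.3f}, A_4 = {A4:.3f}")
```

This toy shape happens to land near the $A_2$$\simeq$0.8 quoted below for non-strange mesons but substantially overestimates $A_4$, underlining how sensitive the higher moments are to the assumed PDF shape.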
For baryons, we assume $A_2$ and $A_4$ to be identical to the nucleon values, but weighted by the light-quark fraction; {\it e.g.}, the $A_2$ of the $\Lambda$ is $\frac{2}{3}A_2^N$. The kaons and etas are approximated with the pion's parton distribution functions, reduced by the strange-quark content. For other mesons, Eq.~(\ref{eq:an}) is used with the nucleon parton distribution functions, rescaled by the valence-quark content and also reduced by the strange-quark content. This gives $A_2$=0.801 and $A_4$=0.086 for non-strange mesons. The $B_2$'s are related to integrals of the twist-4 part of the spin-averaged (longitudinal) structure function, $F_{2(L)}^{\tau=4}$~\cite{Choi:1993cu,Leupold:1998bt}. For the nucleon, it has been extracted as $B_2^N$=$-$0.247\,GeV$^2$. Since there is no empirical information for other hadrons, we assume their $B_2$ to be the same as for the nucleon (suppressed by the strange-quark content); varying it by a factor of 2 produces no noticeable changes in the final spectral functions. Gluonic contributions are believed to be numerically insignificant~\cite{Hatsuda:1992bv,Leupold:1998bt} and have been neglected. \section{Finite Temperature Spectral Functions} \label{sec:FTspec} Our starting point is the vacuum axial-/vector spectral functions of Ref.~\cite{Hohler:2012xd}\footnote{The normalization used in Eq.~(25) of Ref.~\cite{Hohler:2012xd} for the Breit-Wigner width of the $a_1$ peak contained a (small) imaginary contribution; we have corrected this and recover the same level of agreement with the experimental data and sum rules after a minor modification of the parameters.}. They are composed of contributions from the ground state ($\rho$ and $a_1$ peaks), a first excited state ($\rho'$ and $a_1'$), and a chirally invariant (i.e., identical) continuum for both channels.
The vacuum $\rho$ is taken from the microscopic model of Ref.~\cite{Urban:1998eg}, while $a_1$, $\rho'$ and $a_1'$ are parameterized with Breit-Wigner functions. For the present analysis, we have slightly modified the vacuum parameters of the $\rho'$ to shift its threshold to higher energies. This prevents its low-mass tail from reaching well below 1\,GeV, where the $\tau$-decay data do not exhibit any 4$\pi$ contributions. The modification to the $\rho'$ formfactor is compensated by a small modification of the mass and width of the $a_1'$ so as to recover a near-perfect agreement with WSR-1 and WSR-2. The re-evaluation of the vacuum QCDSRs requires a 4-quark factorization parameter of $\kappa$=2.1 in $\avg{\mathcal{O}_4^{SB}}=\frac{16}{9}\kappa \avg{\bar qq}^2$, and a gluon condensate of $\avg{\frac{\alpha_s}{\pi} G^2_{\mu\nu}}$=0.017\,GeV$^4$. The updated vacuum spectral functions, shown in Fig.~\ref{fig:vacsf}, are very similar to the ones in Ref.~\cite{Hohler:2012xd}. \begin{figure}[tb] \centering \includegraphics[width=.4\textwidth]{rhospec2.eps} \includegraphics[width=.4\textwidth]{a1spec2.eps} \caption{Vacuum spectral functions in the vector (top) and axialvector (bottom) channels, compared to experimental data for hadronic $\tau$ decays~\cite{Barate:1998uf}; the total spectral function in each channel (solid curve) is composed of a ground state (dotted curve), excited resonance (dashed curve), and a universal continuum (dot-dashed curve).} \label{fig:vacsf} \end{figure} Finite-temperature effects in the spectral functions are implemented as follows. For the $\rho$ meson, we employ the microscopic calculations using hadronic effective theory~\cite{Rapp:1999us} at vanishing baryon chemical potential. This is the key input to our analysis, as these spectral functions are consistent with dilepton data in URHICs~\cite{Rapp:2013ema}, and thus provide a direct link to experiment.
The only amendment we allow is a reduction of the vector-dominance coupling strength (as routinely done in QCDSR analyses~\cite{Hatsuda:1992bv,Zschocke:2002mn,Leupold:1997dg,Leupold:2001hj}). Optimal agreement with the QCDSR requires a reduction of up to 7\% at $T$=170\,MeV. For the $a_1$ meson, the lack of quantitative calculations at finite $T$ leads us to parameterize the medium modifications of its spectral function. We introduce four parameters which control the $a_1$ peak's location, width, and strength in medium. For the $a_1$ mass, we write $M^T_{a_1} = M_{a_1} (1-\delta M_{a_1}(T)/M_{a_1})$, and for the current coupling $C^T_{a_1} = C_{a_1} (1-\delta C_{a_1}(T)/C_{a_1})$. The width is increased and extended below the vacuum threshold by adding the following term to the vacuum width, $\Gamma_{a_1}(s)$, \begin{equation} \Delta \Gamma_{a_1}(s) = \left(\Gamma_1^T + \frac{s}{M_{a_1}^2} \Gamma_2^T \right) \left(\frac{\Lambda_{a_1}^2 + M_{a_1}^2}{\Lambda_{a_1}^2+s}\right)^2 \end{equation} where $\Gamma_1^T$ and $\Gamma_2^T$ are $T$-dependent constants, and the last factor is a formfactor with the same scale, $\Lambda_{a_1}$, as in vacuum. The resulting ground-state axialvector spectral function in medium takes the form \begin{equation} \rho_{a_1}(s, T) = \frac{1}{\pi} {C}^T_{a_1} \frac{\sqrt{s} \, \Gamma_{a_1}^T(s,T)}{(s-M_{a_1}^{T 2})^{2} + s \Gamma_{a_1}^T(s,T)^2} \ , \end{equation} with $\Gamma_{a_1}^T(s,T) = \Gamma_{a_1}(s) + \Delta \Gamma_{a_1}(s)$. The temperature dependence of the excited states is even less well known. Instead of introducing additional parameters for their in-medium Breit-Wigners (which are hard to control), we instead apply the model-independent low-temperature effect known as chiral mixing~\cite{Dey:1990ba,Steele:1996su} to the $\rho'$ and $a_1'$ states. However, in the spirit of the HRG, we go beyond the mixing induced by thermal pions alone by including the effect from the virtual pion cloud of the thermal hadrons.
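The 4-parameter $a_1$ ansatz above can be sketched as follows. The vacuum width used here and all numerical values are illustrative placeholders, not the microscopic vacuum width or the fitted $T$-dependent parameters:

```python
import math

# Relativistic Breit-Wigner with shifted mass M_a1^T, rescaled coupling C_a1^T,
# and the added width dGamma(s) = (G1 + (s/M_a1^2) G2) * ((L^2+M_a1^2)/(L^2+s))^2
M_a1, Lam, C_a1 = 1.26, 1.0, 0.01   # GeV, GeV, toy coupling normalization

def Gamma_vac(s):
    """Toy s-dependent vacuum width opening at a 3-pi threshold (illustrative)."""
    thr = (3 * 0.1396) ** 2
    return 0.35 * (s - thr) / s if s > thr else 0.0

def rho_a1(s, dM=0.0, dC=0.0, G1=0.0, G2=0.0):
    MT, CT = M_a1 - dM, C_a1 - dC
    dG = (G1 + s / M_a1 ** 2 * G2) * ((Lam ** 2 + M_a1 ** 2) / (Lam ** 2 + s)) ** 2
    G = Gamma_vac(s) + dG
    return CT / math.pi * math.sqrt(s) * G / ((s - MT ** 2) ** 2 + s * G * G)

peak_vac = rho_a1(M_a1 ** 2)                   # on the vacuum peak
below_vac = rho_a1(0.09)                       # below threshold: vanishes in vacuum
below_med = rho_a1(0.09, G1=0.15, G2=0.10)     # in-medium low-mass shoulder
print(peak_vac, below_vac, below_med)
```

The key qualitative feature is visible already here: switching on $\Gamma_1^T$, $\Gamma_2^T$ populates the region below the vacuum threshold, which is what allows the $a_1$ strength to reach down toward the $\rho$.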
This effect has been worked out for the pion cloud of the nucleon in cold nuclear matter~\cite{Chanfray:1998hr,Krippa:1997ss}. To extend it to other hadrons (not including the non-pion Goldstone bosons), we define a mixing parameter \begin{equation} \hat{\epsilon}_h (T) = \frac{4}{3}\frac{\sigma^{\rm cloud}_\pi}{f_\pi^2 m_\pi^2}\varrho_s^h \ . \end{equation} The total mixing parameter, $\hat{\epsilon}$, is the sum of the individual $\hat{\epsilon}_h$ plus that of the pion, $\hat{\epsilon}_\pi =2\varrho_s^\pi/(3m_\pi f_\pi^2)$. As with the quark condensates, we introduce an additional $T^{10}$-term to render $\hat{\epsilon}=1/2$ at the temperature where $\avg{\bar{q}q}_T = 0$. The in-medium spectral functions for the excited axial-/vector states then follow as \begin{equation} \begin{split} &\rho_{V'}(T) = [1-\hat{\epsilon}(T)]\rho_{V'}^{\rm vac}+\hat{\epsilon}(T) \, \rho_{A'}^{\rm vac} + \frac{1}{2}\, \hat{\epsilon}(T) \,\rho_{a_1}^{\rm vac} \, , \\ &\rho_{A'}(T) = [1-\hat{\epsilon}(T)] \rho_{A'}^{\rm vac} + \hat{\epsilon}(T) \rho_{V'}^{\rm vac}\, . \end{split} \end{equation} The $a_1$ contribution to the excited vector channel admixes only the part which is not included in the microscopic calculation of the $\rho$, see Ref.~\cite{vanHees:2007th} for details. Our approximate extension of the mixing beyond the low-$T$ pion gas limit is carried out only to linear order in the (scalar) hadron densities, but is in line with the in-medium treatment of the condensates. However, neither finite-momentum nor finite-mass effects of the (virtual) pions have been accounted for. The chirally invariant continuum is assumed to be $T$-independent (e.g., chiral mixing would not affect it). Lastly, we need to address the $T$ dependence of the 4D longitudinal part of the axialvector spectral function, i.e., the pion pole.
We approximate the pion mass by the leading-order prediction of chiral perturbation theory, \begin{equation} m_\pi^2(T) = m_\pi^2\left(1+\frac{1}{4}\hat{\epsilon}_\pi(T)\right), \end{equation} {\it i.e.}, induced by the pion gas only. This produces a weak $T$ dependence as expected for a Goldstone boson. Assuming the Gell-Mann--Oakes--Renner relation to hold at finite $T$ allows us to infer $f_\pi(T)$ from the above-constructed $T$-dependence of the quark condensate. To summarize this section, we have supplemented a microscopic model for the $\rho$ spectral function with a 4-parameter ansatz for the in-medium $a_1$, chiral mixing for the excited states, and a weakly $T$-dependent pion mass from chiral perturbation theory. We now investigate whether this setup can satisfy QCDSRs and WSRs. \section{Finite-Temperature Sum Rule Analysis} \label{sec:results} \begin{figure*}[!t] \centering \includegraphics[width=.9\textwidth]{FTSFs.eps} \caption{Finite-temperature vector (black curve) and axialvector (red curve) spectral functions.} \label{fig:sf} \end{figure*} Let us start by describing the quantitative criteria which govern the numerical values of the in-medium $a_1$ parameters introduced in the previous section.
\begin{table}[!b] \begin{center} \begin{tabular}{|c|cccccc|} \hline $T$ [MeV] & 0 & 100 & 140 & 150 & 160 & 170 \\ \hline $d_V (\%)$ & 0.59 & 0.43 & 0.44 & 0.49 & 0.57 & 0.67 \\ $d_A (\%)$ & 0.49 & 0.48 & 0.56 & 0.59 & 0.55 & 0.56 \\ \hline $d_{\rm WSR1} (\%)$& $\sim 0$ & 0.003 & 0.04 & 0.04 & -0.004 & 0.004 \\ $d_{\rm WSR2} (\%)$& $\sim 0$ & -0.0002 & -0.0008 & -0.002 & -0.0003 & -0.005\\ $d_{\rm WSR3} (\%)$& 200 & 181 & 258 & 372 & 585 & 11600\\ \hline $r_{-1}$ & 1 & 0.96 & 0.72 & 0.57 & 0.37 & 0.14\\ $r_{0}$ & 1 & 0.93 & 0.66 & 0.50 & 0.31 & 0.12\\ $r_{1}$ & 1 & 0.91 & 0.64 & 0.50 & 0.32 & 0.15\\ \hline \end{tabular} \end{center} \caption{Summary of deviation measures for QCDSRs (upper 2 lines) and WSRs (lower 6 lines) at finite temperature.} \label{tab:results} \end{table} To evaluate the QCDSRs, we adopt the conventional method of Refs.~\cite{Leinweber:1995fn,Leupold:1997dg} to calculate an average deviation between the LHS and RHS over a suitable Borel window, referred to as a $d$-value. The same procedure and Borel window criteria as for the vacuum analysis in Ref.~\cite{Hohler:2012xd} are adopted. A $d$-value of below 1\% has been argued to reasonably bracket remaining uncertainties in the matching procedure~\cite{Leupold:1997dg}; we adopt this as our figure of merit in both $A$ and $V$ channels below. To evaluate the WSRs, we define a similar measure of deviation between the two sides as \begin{equation} d_{\rm WSR} = \frac{{\rm LHS} - {\rm RHS}}{{\rm RHS}} \ . \end{equation} This measure is much simpler than the QCDSR analog because it does not involve any Borel window. However, it also has its subtleties. The integrands of the LHS of each WSR are oscillatory functions with appreciable cancelations to yield the RHS (cf.~Fig.~2 in Ref.~\cite{Hohler:2012xd}), especially for the higher moments. 
Since we only use a finite number of moments (3), this could, in principle, lead to ``fine-tuned solutions'' to the WSRs where the oscillations are still large, and thus $\rho_V(s)\ne \rho_A(s)$ even close to restoration. To probe this behavior (and thus the sensitivity to any ``artificial'' fine tuning), we introduce an ``absolute-value'' version of the LHS by \begin{equation} \tilde{w}_n(T) \equiv \int_0^\infty ds \ s^{n} \ |\Delta \rho(s;T)| \ . \end{equation} Though these moments are not directly related to chiral order parameters, they should diminish toward restoration. We define pertinent ratios $r_n = \tilde{w}_n(T)/\tilde{w}_n(T=0)$. Our analysis proceeds as follows. We first evaluate the QCDSR for the vector channel. With a small reduction in the vector-dominance coupling, we find acceptable $d_V$ values ranging from 0.43\% to 0.67\% for all $T$=0--170\,MeV (cf.~Tab.~\ref{tab:results}). This is a nontrivial result by itself. For the axialvector channel, the QCDSRs and two WSRs are used simultaneously to search for in-medium $a_1$ parameters which minimize \begin{equation} f = d_{\rm WSR1}^2 + d_{\rm WSR2}^2 + d_A^2 \ , \label{f} \end{equation} while requiring a smooth $T$ dependence. The finite-$T$ axialvector spectral functions obtained in this way are shown in Fig.~\ref{fig:sf}. For all cases, the percentage deviation of WSR-1 and WSR-2 is below 0.1\%, and $d_A$ remains below 0.6\%. Deviations of WSR-3 are much larger, but comparable to the vacuum up to $T$$\simeq$150\,MeV. At $T$=160 and especially 170\,MeV, the magnitude of the RHSs is small and enters into the denominator of $d_{\rm WSR}$, thus greatly magnifying residual deviations. The $r_n$ measures decrease monotonically with $T$, suggesting acceptable deviations even for WSR-3. We therefore conclude that our spectral functions are compatible with both QCDSRs and WSRs.
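The interplay of the first two WSRs can also be seen in a simple narrow-resonance toy: saturating $\Delta\rho$ with $\delta$-function $\rho$ and $a_1$ poles in the chiral limit, WSR-1 and WSR-2 fix the pole couplings, and WSR-3 then predicts the chiral-breaking 4-quark moment. A sketch with standard vacuum masses and $f_\pi$ (not the fitted parameters of this work, so the numbers are only indicative):

```python
import math

# Delta rho(s) = F_rho^2 delta(s - m_rho^2) - F_a1^2 delta(s - m_a1^2), GeV units
m_rho, m_a1, f_pi = 0.775, 1.26, 0.0924

# WSR-2 with m_pi -> 0 forces F_rho^2 = F_a1^2 =: F2; WSR-1 then fixes F2 via
#   F2 (1/m_rho^2 - 1/m_a1^2) = f_pi^2
F2 = f_pi ** 2 / (1.0 / m_rho ** 2 - 1.0 / m_a1 ** 2)

# WSR-3 in turn predicts the chiral-breaking 4-quark moment:
#   int ds s Delta rho = F2 (m_rho^2 - m_a1^2) = -2 pi alpha_s <O_4^SB>
wsr3_moment = F2 * (m_rho ** 2 - m_a1 ** 2)

print(f"F_rho^2 = F_a1^2 = {F2:.5f} GeV^2")
print(f"WSR-3 moment = {wsr3_moment:.5f} GeV^4")
```

The moment comes out negative, consistent with the sign required by WSR-3 for a positive $\avg{\mathcal{O}_4^{SB}}$; in the full analysis the continuum, finite widths, and the excited states modify these numbers substantially.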
To probe the uncertainties in our method, we depict in Fig.~\ref{fig:axialband} ranges of axialvector spectral functions with relaxed constraints, at an intermediate temperature of $T$=150\,MeV. The dashed lines border a regime of spectral functions which are obtained by requiring only $d_A$=1\% for the axialvector QCDSR (the band could be larger if all spectral functions with $d_A$$<$1\% were included). From this collection of curves, we then select those whose agreement with WSR-1 is within 1\%, producing a much narrower (shaded) region bordered by dotted lines. The {\em combined} constraints of QCDSRs and WSRs are thus shown to noticeably increase the selectivity of the in-medium axialvector spectral function. \begin{figure}[!t] \centering \includegraphics[width=.45\textwidth]{axialbands.eps} \caption{Regions of axialvector spectral functions at $T$=150\,MeV when requiring agreement with the QCDSR only at $d_A$=1\% (dashed lines), and additionally with WSR-1 at $|d_{\rm WSR1}|$$\leq$1\% (dotted lines). The solid line corresponds to a minimal $f$ value from Eq.~(\ref{f}).} \label{fig:axialband} \end{figure} A visual inspection of the in-medium spectral functions supports the trend toward restoration, cf.~Fig.~\ref{fig:sf}: the $a_1$ peak gradually merges into the $\rho$ while the excited states become degenerate somewhat earlier through chiral mixing. The $\rho$-$a_1$ merging is largely dictated by the WSRs, but the concrete shape close to chiral restoration is more sensitive to the QCDSRs.
Note that our analysis not only complies with a ``trivial'' degeneracy at the restoration point, but also provides a systematic temperature evolution, starting from the vacuum, compatible with current best estimates for the $T$-dependent chiral order parameters and condensates (at $T$=170\,MeV, our condensates are close to zero, undershooting the lQCD data for the 2-quark condensate; our axialvector spectral function at this temperature is thus more of an illustration of the expected degeneracy at higher $T$ where $\avg{\bar qq}_T$$\simeq$0). The in-medium $a_1$ mass shift is consistent with a leading $T^4$ behavior, in line with model-independent constraints from the chiral Lagrangian. Our analysis also suggests that the approach toward restoration ``burns off'' the chiral mass splitting between the $\rho$ and $a_1$, while ``bare'' masses of $m_0$$\simeq$0.8\,GeV essentially persist, similar to Ref.~\cite{Urban:2001uv}. \section{Conclusion} \label{sec:conc} The objective of this work was to test whether in-medium vector spectral functions which describe dilepton data in heavy-ion collisions are compatible with chiral symmetry restoration. Toward this end, we deployed QCD and Weinberg sum rules in a combined analysis of vector and axialvector spectral functions, using lattice-QCD and the hadron resonance gas to estimate the in-medium condensates and chiral order parameters, and chiral mixing to treat the $T$ dependence of excited states. We first found that the QCDSR in the vector channel is satisfied with a small (order 5\%) amendment of vector dominance. We then introduced a 4-parameter ansatz for the in-medium $a_1$ spectral function and found that a smooth reduction of its mass (approaching the $\rho$ mass) and a large increase in width (accompanied by a low-mass shoulder) can satisfy the axialvector QCDSR and 3 WSRs over the entire temperature range $T$=0--170\,MeV, ultimately merging with the vector channel.
This establishes a direct connection between dileptons and chiral restoration, and thus the answer to the question raised at the outset is positive. Our findings remain to be scrutinized by microscopic calculations of the $a_1$ spectral function. Work in this direction is ongoing. \acknowledgments This work is supported by the US-NSF under grant No.~PHY-1306359 and by the A.-v.-Humboldt Foundation (Germany).
\section{Introduction} In this paper we study the decay rate of solutions of the scalar wave equation in the Schwarzschild geometry with spherically symmetric initial data which is smooth and compactly supported outside the event horizon. We prove that these solutions decay at the rate $t^{-3}$, and at the rate $t^{-4}$ for momentarily static initial data, as predicted earlier by Price \cite{Price}, though not rigorously proved there. In \cite{Kronthaler} we have already shown pointwise decay for solutions with the same kind of initial data, not necessarily spherically symmetric. To this end, we derived an integral spectral representation for the solutions, applying Hilbert space methods, in terms of special solutions of the Schr\"odinger equation, the so-called Jost solutions. In order to set up some notation, recall that in Schwarzschild coordinates $(t,r,\vartheta,\varphi)$, the Schwarzschild metric takes the form \begin{eqnarray} \label{schwarzschildgeometrie} ds^2 & \hspace{-3mm} = \hspace{-2mm} & g_{ij} \: dx^{i} dx^{j} \nonumber \\ & \hspace{-3mm} = \hspace{-2mm} & \left(1- \frac{2M}{r} \right)\: dt^2 - \left(1- \frac{2M}{r}\right)^{-1} dr^2 - r^2(d\vartheta ^2 + \sin^2 \vartheta \: d\varphi^2) \end{eqnarray} with $r>0,\: 0 \leq \vartheta \leq \pi ,\: 0 \leq \varphi < 2\pi$. The metric has two singularities at $r=0$ and $r=2M$. The latter is called the \textit{event horizon} and can be removed by a coordinate transformation. We consider the scalar wave equation in the region $ r> 2M $ outside the event horizon, which is given by \begin{equation} \label{eq: Wellengleichung Grundform} \square \phi := g^{ij} \nabla_{i} \nabla_{j} \phi = \frac{1}{\sqrt{-g}} \frac{\partial}{\partial x^{i}} \left( \sqrt{-g} \:g^{ij} \frac{\partial }{\partial x^{j}} \right) \phi = 0 \end{equation} where $g$ denotes the determinant of the metric $g_{ij}$. We now state our main result.
\begin{thm1} Consider the Cauchy problem of the scalar wave equation in the Schwarzschild geometry $$ \square \phi = 0\; , \quad (\phi_0, i \partial_t \phi_0)(0,r,x) = \Phi_0(r,x)$$ for smooth spherically symmetric initial data $\Phi_0 \in C^{\infty}_0 ( (2M,\infty) \times S^2)^2 $ which is compactly supported outside the event horizon. Let $\Phi (t) = (\phi (t), i \partial_t \phi (t)) \in C^\infty(\field{R} \times (2M, \infty) \times S^2)^2$ be the unique global solution which is compactly supported for all times $t$. Then for fixed $r$ there is a constant $c= c(r,\Phi_0)$ such that for large $t$ \begin{equation*} |\phi(t)| \leq \frac{c}{t^3} \; . \end{equation*} Moreover, for momentarily static initial data, i.e.\ $\partial_t \phi_0 \equiv 0$, the solution $\phi(t)$ satisfies \begin{equation*} |\phi(t)| \leq \frac{c}{t^4} \; . \end{equation*} \end{thm1} There has been significant work in the study of linear hyperbolic equations in black hole spacetimes. The first major contribution to this topic was made in 1957, when Regge and Wheeler studied the linearized equations for perturbations of the Schwarzschild metric \cite{ReggeWheeler}. This work was continued in \cite{Visch,Zerilli}, while more recently the decay of the perturbation and all of its derivatives was shown in \cite{Friedman} using a theorem by Wilcox. By heuristic arguments, in 1972 Price \cite{Price} obtained evidence for polynomial decay of solutions of the scalar wave equation in Schwarzschild, where the power depends explicitly on the angular mode. In 1973, Teukolsky \cite{Teu1} derived, by means of the Newman-Penrose formalism, a single master equation that describes, in the Kerr background, the evolution of a test scalar field ($s=0$), a test neutrino field ($s=\pm1/2$), a test electromagnetic field ($s= \pm 1$) and linearized gravitational waves ($s= \pm 2$). Here, the parameter $s$ is also called the spin weight of the field.
Note that in the case $s \neq 0$ it is quite a complicated task to recover all the components of the corresponding field from a solution of this equation. For further details see \cite{Cha,Whiting1}. In two subsequent papers \cite{Teu2,Teu3}, Teukolsky and Press discussed the physical consequences of these perturbations. Although any linearized perturbation is governed by this equation, its rigorous analysis remains a subtle point. For instance, in the case $s\neq0$ \textit{complex} coefficients are involved, which makes the analysis very complicated. Hence, until now there have been only a few rigorous results in this case. In \cite{Finster3} local decay was proven for the Dirac equation ($s=\frac{1}{2}$) in the Kerr geometry (in the massless and massive case). Moreover, a precise decay rate has been specified in the massive case \cite{Finster4}. More recently, there has been a linear stability result for the Schwarzschild geometry under electromagnetic and gravitational perturbations \cite{Finster5}. This result relies on the mode analysis, which has been carried out in \cite{Whiting2}. More work has been done on the case $s=0$, where the Teukolsky equation reduces to the scalar wave equation. In the Schwarzschild case, Kay and Wald \cite{KayWald} proved a time-independent $L^\infty$-bound for solutions of the Klein-Gordon equation. In \cite{Dafermos}, a mathematical proof is given for the decay of solutions with spherically symmetric initial data, as predicted by Price \cite{Price}; the rate obtained there is not sharp, however. For general initial data, the same authors derived another decay result \cite{Dafermos2}. Pointwise decay in the Kerr geometry was proven rigorously \cite{Finster1,Finster2}. Furthermore, Morawetz and Strichartz-type estimates for a massless scalar field without charge in a Reissner-Nordstr\"om background with a naked singularity are developed in \cite{Stalker}.
In \cite{Blue}, a Morawetz-type inequality was proven for the semi-linear wave equation in Schwarzschild, which is also expected to yield decay rates. In this paper we first recapitulate the framework and some notation of the foregoing paper \cite{Kronthaler}. Afterwards, we give an explicit expansion of the Jost solutions $\grave{\phi}$ of the Schr\"odinger equation, which were also derived in \cite{Kronthaler}. At the end we show how to derive the exact decay rate from this expansion. \section{Preliminaries} We usually replace the Schwarzschild radius $r$ by the Regge-Wheeler coordinate $u \in \field{R}$ given by \begin{equation} \label{reggewheelercoord} u(r) := r + 2M\: \log \left(\frac{r}{2M} -1 \right) \: . \end{equation} After having separated the angular modes $l,m$ using spherical harmonics, it is convenient to write the Cauchy problem in Hamiltonian formalism \begin{equation} \label{Hamiltonform allgemein} i \partial_t \Psi = H \Psi \; , \quad \Psi \big|_{t=0} = \Psi_0 \end{equation} where $\Psi=(\psi,i\partial_t \psi)^T$ is a two-component vector representing the wave function and its first time derivative, and $H$ is the Hamiltonian \begin{equation} \label{Hamiltonian allgemein} \left(% \begin{array}{cr} 0 & 1 \\ -\partial_u^2 + V_l(u) & 0 \\ \end{array}% \right) \; , \end{equation} with the potential \begin{equation} \label{potential} V_l(u) = \left( 1- \frac{2M}{r} \right) \left(\frac{2M}{r^3} + \frac{l(l+1)}{r^2} \right) \: . \end{equation} Constructing the resolvent of the operator $H$ and using Stone's formula, we derived an integral spectral representation for the solutions of the Cauchy problem of the following form \begin{eqnarray} \Psi(t,u)=e^{-itH} \Psi_0 (u) =\hspace*{70mm} \nonumber \\ - \frac{1}{\pi} \int_{\mathbb{R}} e^{-i \omega t} \left( \int_{\textrm{supp}\: \Psi_0} \mathrm{Im} \!
\left( \frac{\acute{\phi}_{\omega l}(u) \grave{\phi}_{\omega l}(v)}{w(\acute{\phi}_{\omega l},\grave{\phi}_{\omega l})}\right) \left(% \begin{array}{lc} \omega & 1 \\ \omega^2 & \omega \\ \end{array}% \right) \Psi_0(v) dv \right) \: d\omega \; , \label{eq: Darstellung der Loesung l=0} \end{eqnarray} where the integrand is in $L^1$ with respect to $\omega$. At this point, the functions $\acute{\phi},\grave{\phi}$ play an important role. These functions form a fundamental system of the Schr\"odinger equation \begin{equation} \label{Schroedinger equation} \left( -\partial_u^2 +V_\omega(u) \right) \phi(u) = 0 \end{equation} with the potential \begin{equation} \label{schroedinger potential} V_\omega(u) = - \omega^2 + V_l(u) = - \omega^2 + \left( 1- \frac{2M}{r} \right) \left(\frac{2M}{r^3} + \frac{l(l+1)}{r^2} \right) \end{equation} subject to the boundary conditions \begin{eqnarray} \lim_{u \rightarrow -\infty} e^{-i \omega u} \acute{\phi}_{\omega}(u) =1 \; , \quad & \displaystyle \lim_{u \rightarrow -\infty} \left( e^{-i \omega u} \acute{\phi}_{\omega}(u) \right)' = 0 \label{boundarycond3} \\ \lim_{u \rightarrow +\infty} e^{i \omega u} \grave{\phi}_{\omega}(u) =1 \; , \quad & \displaystyle \hspace{4mm} \lim_{u \rightarrow +\infty} \left( e^{i \omega u} \grave{\phi}_{\omega}(u) \right)' = 0 \; . \label{boundarycond4} \end{eqnarray} Here, $\Im (\omega) <0$. We derived these solutions using the corresponding integral equation, the so-called Jost equation, which is given by \begin{equation} \label{Jost equation bound cond -unendlich} \phi_\omega (u) = e^{i \omega u} + \int _{-\infty}^u \frac{1}{\omega} \sin(\omega(u-v)) V_l(v) \phi_\omega(v) \:dv \; , \end{equation} in the case of boundary conditions at $-\infty$ (an analogous equation holds for the boundary conditions at $\infty$).
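Since the potential (\ref{potential}) is given only implicitly through $r(u)$, its qualitative behavior can be spot-checked numerically. The following sketch (illustrative only, not part of the rigorous argument; Python with the standard library, helper names are ours, and $M=1$ is an assumed value) inverts the Regge-Wheeler coordinate (\ref{reggewheelercoord}) by bisection and confirms that $V_0$ is exponentially small near the event horizon $u \to -\infty$ and behaves like $2M/u^3$ for large $u$.

```python
import math

M = 1.0  # Schwarzschild mass, set to 1 for this check (assumption)

def regge_wheeler(r):
    """u(r) = r + 2M log(r/(2M) - 1), defined for r > 2M."""
    return r + 2.0 * M * math.log(r / (2.0 * M) - 1.0)

def r_of_u(u):
    """Invert u(r) by bisection; u(r) is strictly increasing on (2M, infinity)."""
    lo = 2.0 * M * (1.0 + 1e-15)
    hi = 2.0 * M + abs(u) + 10.0
    while regge_wheeler(hi) < u:   # enlarge the upper bracket if necessary
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if regge_wheeler(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def V(u, l=0):
    """Potential V_l(u) = (1 - 2M/r)(2M/r^3 + l(l+1)/r^2) with r = r(u)."""
    r = r_of_u(u)
    return (1.0 - 2.0 * M / r) * (2.0 * M / r ** 3 + l * (l + 1) / r ** 2)
```

For instance, $u^3\, V(u)$ approaches $2M$ for large $u$ when $l=0$, while $V(-50)$ is already far below machine-relevant size, reflecting the exponential decay toward the horizon.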
The solution $\acute{\phi}$ was constructed with the series ansatz \begin{equation} \label{Reihenentwicklung von phi 1} \acute{\phi}_\omega = \sum_{k=0}^\infty \phi_\omega^{(k)} \; , \end{equation} together with the iteration scheme \begin{equation} \label{Iterationsschema fuer phi 1} \left. \begin{array}{rcl} \phi_\omega^{(0)} (u) & = & e^{i \omega u} \\ \ & \vdots & \ \\ \phi_\omega^{(k+1)}(u) & = & \displaystyle \int_{-\infty}^u \frac{1}{\omega} \sin(\omega(u-v)) V_l(v) \phi_\omega^{(k)}(v) \:dv \\ \end{array} \right\} \begin{array}{c} \\ \\ . \\ \end{array} \end{equation} Using this, we have proven that the solutions $\acute{\phi}_\omega(u)$ are analytic with respect to $\omega$ for fixed $u$ in the region $\Im(\omega)<0$. Moreover, $\acute{\phi}_\omega$ can be analytically extended to the region $\Im (\omega) \leq \frac{1}{4M}$, whereas the solution $\omega^l \grave{\phi}_\omega$ can only be extended continuously to the real axis. Thus, in order to obtain the exact decay rates, it is important to analyze the behavior of $\grave{\phi}_\omega$ with respect to $\omega$ on the real axis in more detail. \section{Expansion of the Jost solutions $\grave{\phi}_\omega$} Since the $\omega$-dependence of the Jost solutions $\grave{\phi}_\omega$ plays an essential role in the analysis of the integral representation, we present in this section a method to expand these solutions at the critical point $\omega =0$.
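As an illustrative numerical aside (not part of the rigorous argument), the iteration scheme above can be carried out on a grid. In the following Python/NumPy sketch, an ad hoc exponentially decaying toy potential $0.3\,\mathrm{sech}^2 u$ stands in for $V_l$, and all grid parameters are our own choices; the accumulated Neumann series is then checked against the Schr\"odinger equation (\ref{Schroedinger equation}) by finite differences.

```python
import numpy as np

omega = 1.0
u = np.arange(-20.0, 20.0, 0.01)      # grid for the Regge-Wheeler coordinate
du = u[1] - u[0]
V = 0.3 / np.cosh(u) ** 2             # toy decaying potential (illustrative)

def cumint(f):
    """Cumulative trapezoidal integral, starting from the left grid end."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * du)))

phi = np.exp(1j * omega * u)          # phi^(0), boundary condition at -infinity
total = phi.copy()
for _ in range(25):
    # sin(w(u-v))/w = [sin(wu)cos(wv) - cos(wu)sin(wv)]/w, split for vectorization
    a = cumint(np.cos(omega * u) * V * phi)
    b = cumint(np.sin(omega * u) * V * phi)
    phi = (np.sin(omega * u) * a - np.cos(omega * u) * b) / omega
    total += phi

# residual of -phi'' + (V - omega^2) phi = 0 on interior grid points
d2 = (total[2:] - 2.0 * total[1:-1] + total[:-2]) / du ** 2
res = -d2 + (V[1:-1] - omega ** 2) * total[1:-1]
```

Since $\int |V|\,dv = 0.6 < 1$ here and $|\sin(\omega s)/\omega| \leq 1/|\omega|$, the iterates shrink geometrically, mirroring the convergence mechanism of the series ansatz.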
We start with an explicit calculation: \begin{lemma} \label{lemma: zur Reihenentwicklung} For all $u>0$, $\omega \in \field{R} \setminus \{0\}$, $\varepsilon >0$, $q \in \field{N}_0$ and $p \in \field{N}$, \begin{eqnarray} \int_u^\infty e^{-2 i \omega x - \varepsilon x} \frac{\log^q(x)}{x^p}\: dx = \sum \limits_{m=0}^q \begin{pmatrix} q\\ m \end{pmatrix} \log^{q-m} (u) \nonumber \Bigg\{ \left(2 i \omega+\varepsilon \right)^{p-1} \hspace*{1cm} \\ \hspace*{2mm} \times \left[ \frac{(-1)^{p-1}}{(p-1)!} \frac{(-1)^{m+1}}{m+1} \log^{m+1}\left[(2 i \omega+\varepsilon) u\right] + \sum \limits_{k=0}^m c_k(m) \log^k \left[(2 i \omega+\varepsilon) u\right]\right] \nonumber\\ - u^{-p+1}\sum \limits_{k=0,k\neq p-1}^\infty \frac{(-1)^k (-1)^m \: m!}{(k-p+1)^{m+1} k!} \left[(2 i \omega+\varepsilon) u\right]^k \Bigg\} \;, \hspace*{10mm} \label{eq: Lemma zur Reihenentwicklung} \end{eqnarray} where the coefficients $c_k$ involve the coefficients $a_0,...,a_q$ of the series expansion of the $\Gamma$-function at $1-p$. \end{lemma} \begin{proof} In order to prove this, we write the integral as $\lambda$-derivatives, \begin{equation} \label{eq: Umschreiben der log mit Funktional} \int_u^\infty e^{-2 i \omega x - \varepsilon x} \frac{\log^q(x)}{x^p}\: dx = \frac{d^q}{d\lambda^q} F_p(\lambda)\bigg|_{\lambda = 0}\; , \end{equation} with the generating functional, $$F_p(\lambda) = \int _u^\infty e^{(-2 i \omega -\varepsilon) x } \frac{1}{x^{p- \lambda}} \: dx = u^{-p+\lambda+1} \int_1^\infty e^{(-2 i \omega- \varepsilon)u v} \frac{1}{v^{p - \lambda}} \: dv \; ,$$ where in the last step we introduced the new integration variable $v = \frac{x}{u}$. In the following we will write $z = (2 i \omega +\varepsilon)u$ for reasons of convenience. 
The integral on the right-hand side is also known as the exponential integral $E_{p-\lambda} (z)$, with the series expansion $$ E_{p- \lambda} (z) = \Gamma( 1- p +\lambda) \: z^{p- \lambda -1}- \sum \limits _{k=0}^\infty \frac{(-1)^k}{(k-p +\lambda +1)k!}\: z^k \; ,$$ for small $\lambda \neq 0$ [cf. \cite{Wo}]. Using the series expansion of the $\Gamma$-function at $1-p \in \field{Z} \setminus \field{N}$, where the $\Gamma$-function has a pole of first order, we obtain \begin{eqnarray} F_p(\lambda) = u^{-p+\lambda+1} \left[ \left( \frac{(-1)^{p-1}}{(p-1)! \: \lambda} + \sum \limits _{n=0}^\infty a_n \lambda^n \right) z^{p-\lambda -1} \right. \hspace*{3cm} \nonumber\\ \left. - \sum \limits _{k=0}^\infty \frac{(-1)^k}{(k-p +\lambda +1)k!}\: z^k \right] \nonumber\\ = u^{-p+\lambda+1}\left[ z^{p-1} \left( \frac{(-1)^{p-1} }{(p-1)!} \left( \frac{z^{- \lambda} - 1}{\lambda} \right) + z^{-\lambda} \sum \limits _{n=0}^\infty a_n \lambda^n \right)\hspace*{10mm} \right. \nonumber \\ \hspace*{1cm} \left. - \sum \limits _{k=0 , k \neq p-1}^\infty \frac{(-1)^k}{(k-p +\lambda +1)k!}\: z^k \right]\;. \label{eq: Reihendarstellung fuer Funktional} \end{eqnarray} Using $z^{-\lambda} = e^{-\lambda \log z}$, we immediately obtain the formulas \begin{eqnarray*} \frac{d^n}{d \lambda^n} \left( \frac{z^{-\lambda}-1}{\lambda} \right)\bigg|_{\lambda = 0} & = &\frac{(-1)^{n+1} \log^{n+1}(z)}{n+1} \\ \frac{d^m}{d \lambda^m} \left( u^\lambda \right) \bigg|_{\lambda = 0} & =& \log^m (u) \\ \frac{d^m}{d \lambda^m} \left( z^{-\lambda} \right) \bigg|_{\lambda = 0} & =& (-1)^m \log^m (z) \; . \end{eqnarray*} Inserting (\ref{eq: Reihendarstellung fuer Funktional}) into (\ref{eq: Umschreiben der log mit Funktional}) and using these formulas, one directly verifies the claim.
\end{proof} In exactly the same way, one proves an analogous lemma for the case $ p \in \field{Z} \setminus \field{N}$: \begin{lemma} \label{lemma: zur Reihenentwicklung mit p<0} For all $u>0$, $\omega \in \field{R} \setminus \{0\}$, $\varepsilon >0$, $q \in \field{N}_0$ and $p \in \field{Z} \setminus \field{N}$, \begin{eqnarray} \int_u^\infty e^{-2 i \omega x - \varepsilon x} \frac{\log^q(x)}{x^p}\: dx = \hspace*{70mm} \nonumber\\ \sum \limits_{m=0}^q \begin{pmatrix} q\\ m \end{pmatrix} \log^{q-m} (u) \nonumber \Bigg\{ \left(2 i \omega+\varepsilon \right)^{p-1} \sum \limits_{k=0}^m c_k(m) \log^k \left[(2 i \omega+\varepsilon) u\right] \hspace*{5mm} \\- u^{-p+1}\sum \limits_{k=0}^\infty \frac{(-1)^k (-1)^m \: m!}{(k-p+1)^{m+1} k!} \left[(2 i \omega+\varepsilon) u\right]^k \Bigg\} \; , \label{eq: Lemma zur Reihenentwicklung mit p<0} \end{eqnarray} where the coefficients $c_k$ involve the coefficients $a_0,...,a_q$ of the series expansion of the $\Gamma$-function at $1-p$. \end{lemma} Compared to Lemma \ref{lemma: zur Reihenentwicklung}, the logarithmic term here is of lower order, since the $\Gamma$-function has no singularity at the positive integers $1-p$. In order to apply this lemma to our integral representation, we have to derive an asymptotic expansion of the potential $V_l(u)$ at $+ \infty$. To this end, we have the following \begin{lemma} \label{lemma: asymptotische Entwicklung von V_l} For the potential $\displaystyle V_l(u) = \left(1- \frac{2M}{r(u)} \right) \left( \frac{2M}{r(u)^3} + \frac{l(l+1)}{r(u)^2} \right)$ we have the asymptotic expansion \begin{equation} \label{eq: asymptotische Entwicklung von V_l} V_l(u) = \sum \limits _{p=2}^k \sum \limits _{q=0}^{p-2} c_{pq} \frac{\log^q (u)}{u^p} + c_{k+1,k-1} \frac{\log^{k-1}(u)}{u^{k+1}} + \mathcal{O} \left(\frac{\log^{k-2}(u)}{u^{k+1}}\right) \; , \end{equation} as $ u \rightarrow \infty$, with $k \geq 2$ and real coefficients $c_{pq}$, where e.g.
the first coefficients are given by $$ \begin{array} {l} c_{20} = l(l+1) \; , \quad c_{31} = 4 l (l+1) M \; , \\ \vspace*{-2mm} \\ c_{30} = 2 M - 2M l (l+1) (1 + 2\log(2)) - 4M l (l+1) \log(M) \; , \\ \vspace*{-2mm} \\ c_{42} = 12 l (l+1) M^2 \; , \\ \vspace*{-2mm} \\ c_{41} =-4 M^2(-3 + l(l+1)(5 + 8 \log(8)) + 6 l(l+1)\log(M)) \; ,\;...\;\;. \end{array} $$ Furthermore, in the case $l=0$ the coefficients $c_{n,n-2}$ vanish. \end{lemma} \begin{proof} First, we have to find an expression for $r$ in terms of the Regge-Wheeler coordinate $u$. Remember that $ u = r + 2 M \log (\frac{r}{2M}-1)$, which is equivalent to $$ e^{\frac{u}{2M}-1} = \left( \frac{r}{2M}-1 \right)e^{\frac{r}{2M}-1}\; .$$ In order to solve this equation for $r$, we use the principal branch of the Lambert $W$ function, denoted by $W(z)$; this is the inverse function of $f(x) =x e^x$ on the positive real axis [cf. \cite{LambertW}]. Hence, we obtain \begin{equation} \label{eq: r ausgedrueckt durch u} r = 2M +2M \: W(e^{\frac{u}{2M}-1}) \; . \end{equation} Moreover, for $W$ we have the asymptotic expansion \begin{equation} \label{eq: asymptotische Entwicklung der LambertW} W(z) = \log z -\log(\log z) + \sum \limits_{k=0}^\infty \sum \limits_{m=0}^\infty c_{km} \: (\log (\log z))^{m+1} (\log z)^{-k-m-1} \; , \end{equation} as $ z \rightarrow \infty$. Here, the coefficients $c_{km}$ are given by $c_{km} = \frac{1}{m!} (-1)^k \begin{bmatrix} k +m \\ k+1 \end{bmatrix}$, where $ \begin{bmatrix} k +m \\ k+1 \end{bmatrix} $ is a Stirling cycle number. In particular, applying this expansion to (\ref{eq: r ausgedrueckt durch u}), we get the series representation \begin{eqnarray*} r(u) =2M +2M\Big[ \frac{u}{2M}-1 -\log\left(\frac{u}{2M}-1\right) \hspace*{3cm} \\ + \sum \limits_{k=0}^\infty \sum \limits_{m=0}^\infty c_{km} \: \left(\log \left(\frac{u}{2M}-1\right)\right)^{m+1} \left(\frac{u}{2M}-1\right)^{-k-m-1}\Big] \; .
\end{eqnarray*} This allows us to expand the powers $\frac{1}{r^2}, \frac{1}{r^3}$ and $\frac{1}{r^4}$ to any order in $u/2M -1$ using the method of the geometric series. Together with the expansion $$ \log \left(\frac{u}{2M}-1\right) = \log \left[\frac{u}{2M}\left(1 -\frac{2M}{u}\right) \right] = \log u -\log (2M) -\sum \limits _{n=1}^\infty \frac{1}{n} \left( \frac{2M}{u} \right)^n , $$ which holds for $u>2M$, the result follows. \end{proof} These two lemmas let us expand the solution $\grave{\phi}_\omega(u)$ in the following way. \begin{lemma} \label{lemma: Entwicklung von acute phi,l=0} For $l=0$, $\omega \in \field{R} \setminus \{0\}$ and fixed $u>0$, the fundamental solution $\grave{\phi}_\omega(u)$ can be represented as \begin{equation} \label{eq: Entwicklung von acute phi,l=0} \grave{\phi}_\omega(u)= e^{-i \omega u} + g_0(\omega,u) + 2 i \omega \log(2 i \omega) g_1(\omega,u) +2 i \omega g_2(\omega,u) \; , \end{equation} where the functions $g_0,g_1$ and $g_2$ are $C^1(\field{R})$ with respect to $\omega$. \end{lemma} In order to prove this, we need the following lemma \begin{lemma} \label{lemma: abschaetzung der omegaabl der greensfkt} For all $u \in \field{C}$ and $ n\in \field{N}_0$, \begin{equation} \label{eq: absch von ableitung von sinus} \Big| \partial^n_u \left(\frac{1}{u} \sin u \right) \Big| \leq \frac{2^{n+1}}{1 + |u|} e^{|\Im u|} \; . \end{equation} Moreover, if $\omega \neq 0$ and $v \geq u >0$, \begin{equation} \label{eq: abschaetzung der omegaabl der greensfkt} \Big| \partial^n_\omega \big[ \frac{1}{\omega} \sin(\omega(u-v))\big] \Big| \leq \frac{C(n) \: v^{n+1}}{1 +| \omega v|} e^{v |\Im \omega| + u \Im \omega} \; , \end{equation} for some constant $C(n)$, which is just depending on $n$. 
\end{lemma} \begin{proof} In the case $|u| \geq 1$, Euler's formula for the sine function inductively yields $$ (1 +|u|) \Big| \partial^n_u \left( \frac{1}{u} \sin u \right) \Big| \leq 2^{n+1} e^{| \Im u|} \; .$$ For $|u|<1$, we rewrite $ (1/u) \sin u$ as an integral in order to obtain the estimate $$(1 +|u|)\Big| \partial^n_u \left( \frac{1}{u} \sin u \right) \Big| = (1 + |u|) \Big| \frac{1}{2} \int_{-1}^1 (i \tau)^n e^{i u \tau} \: d \tau \Big | \leq 2 e^{|\Im u|}\; ,$$ which shows the first claim. As a consequence, we get for $ \omega \neq 0$ and all $n \in \field{N}$ the estimate \begin{eqnarray} \nonumber \Big| \partial^n_\omega \left( \frac{1}{\omega} \sin (\omega u) \right) \Big| & = & \Big| u^{n+1} \partial^n_{\omega u} \left(\frac{1}{\omega u} \sin (\omega u) \right) \Big| \\ & \leq & \frac{2^{n+1} |u|^{n+1}}{1 + |\omega u|} e^{|\Im (\omega u)|} \; . \label{eq: abschaetzung greensfkt zwischenschritt} \end{eqnarray} In order to show (\ref{eq: abschaetzung der omegaabl der greensfkt}), we use the identity \begin{equation} \label{identity} \frac{1}{\omega} \sin(\omega(u-v)) = \frac{1}{\omega} \left( \sin (\omega u) e^{ i \omega v} - \sin (\omega v) e^{i \omega u} \right) \end{equation} and apply (\ref{eq: absch von ableitung von sinus}) with $n=0$, \begin{eqnarray} \bigg| \frac{1}{\omega} \sin(\omega(u-v)) \bigg| \: \leq \: \frac{1}{| \omega |} \left( \big|\sin (\omega u) e^{ i \omega v} \big| + \big|\sin (\omega v) e^{i \omega u} \big| \right) \nonumber\\ \leq \frac{2 |u|}{1+ |\omega u|} \, e^{|u \Im \omega|} e^{-v \Im \omega} + \frac{2 |v|}{1+ |\omega v|} \, e^{|v \Im \omega|} e^{-u \Im \omega} \; . \label{eq: Zwischenschritt im Lemma zur abschaetzung von G} \end{eqnarray} Due to the assumption $v\geq u \geq 0$, we know that $|v| \geq |u|$ and thus $$ \frac{2 |u|}{1+ |\omega u|} \leq \frac{2 |v|}{1+ |\omega v|}\; , \quad \; u | \Im \omega | + v \Im \omega \geq v | \Im \omega | + u \Im \omega \; .
$$ Using these inequalities in (\ref{eq: Zwischenschritt im Lemma zur abschaetzung von G}), the claim follows for $n=0$. Using the identity (\ref{identity}) once again, we get \begin{eqnarray*} \Big| \partial_\omega \left( \frac{1}{\omega} \sin(\omega(u-v)) \right) \Big| \leq \Big| \frac{1}{\omega} \left( \sin (\omega u) (-iv) e^{ - i \omega v} - \sin (\omega v) (-i u) e^{-i \omega u} \right) \Big| \\ + \:\Big| \partial_\omega \left(\frac{1}{\omega} \sin (\omega u) \right) e^{ - i \omega v} - \partial_\omega \left( \frac{1}{\omega} \sin (\omega v) \right) e^{-i \omega u} \Big|\; . \end{eqnarray*} Using the estimates (\ref{eq: abschaetzung greensfkt zwischenschritt}) and (\ref{eq: absch von ableitung von sinus}) for $n=0$ together with the assumption $v \geq u>0$, we see as before that the first term is bounded by $$ \Big| \frac{1}{\omega} \left( \sin (\omega u) (-iv) e^{ - i \omega v} - \sin (\omega v) (-i u) e^{-i \omega u} \right) \Big| \leq \frac{4 v^2}{1+ | \omega v|} e^{ v |\Im \omega| + u \Im \omega} \;. $$ For the second term, we use (\ref{eq: abschaetzung greensfkt zwischenschritt}), \begin{eqnarray*} \Big| \partial_\omega \left(\frac{1}{\omega} \sin (\omega u) \right) e^{ - i \omega v} - \partial_\omega \left( \frac{1}{\omega} \sin (\omega v) \right) e^{-i \omega u} \Big| \hspace*{20mm}\\ \leq \frac{4u^2}{1 + |\omega u|} e^{u | \Im \omega | + v \Im \omega} +\frac{4v^2}{1 + |\omega v|} e^{v | \Im \omega| + u \Im \omega} \; , \end{eqnarray*} and obtain, due to the assumption $v\geq u>0$, $$\leq \frac{8 v^2}{1+ | \omega v|} e^{ v |\Im \omega|+u \Im \omega} \; . \hspace*{30mm}$$ Thus, we have shown (\ref{eq: abschaetzung der omegaabl der greensfkt}) for $n=1$. We proceed inductively to conclude the proof.
\end{proof} Note that the estimate (\ref{eq: abschaetzung der omegaabl der greensfkt}) remains valid in the limit $0 \neq \omega \rightarrow 0$ for all $n$, because $$\lim_{\omega \rightarrow 0} \partial_\omega^n \left( \frac{1}{\omega} \sin(\omega (u-v)) \right) = \left\{\begin{array}{cll} \displaystyle (-1)^{n/2} \frac{1}{n+1} (u-v)^{n+1}&, & \textrm{if } n \textrm{ even,} \vspace*{2mm} \\ 0 & , & \textrm{if } n \textrm{ odd.} \end{array} \right. $$ \begin{proof}[Proof of Lemma \ref{lemma: Entwicklung von acute phi,l=0}:] First, remember that the solution $\grave{\phi}_\omega(u)$ is given by the perturbation series $$\grave{\phi}_\omega (u) = \sum_{k=0}^\infty \phi_\omega^{(k)} (u) \; ,$$ where the summands follow the iteration scheme \begin{equation} \label{eq: iterationsschema fuer grave phi}\phi_\omega^{(0)} (u) = e^{ - i \omega u}\; , \; \phi_\omega^{(k+1)}(u) = - \int_{u}^\infty \frac{1}{\omega} \sin(\omega(u-v))V_0(v) \phi_\omega^{(k)}(v) \:dv \; , \end{equation} with potential $ \displaystyle V_0(u) = \left(1- \frac{2M}{r(u)} \right) \frac{2M}{r(u)^3}$. According to Lemma \ref{lemma: asymptotische Entwicklung von V_l}, this potential can be represented for large $u$ as $\displaystyle V_0(u) = \frac{c_{30}}{u^3} +h(u)$, with $\displaystyle h(u) = \mathcal{O} \left( \frac{\log u}{u^4}\right)$. Next, we split this iteration scheme up. To this end, we define \begin{equation} \label{eq: definition phi tilde im beweis der entwicklung} \tilde{\phi}^{(1)}_\omega (u) := -\int_{u}^\infty \frac{1}{\omega} \sin (\omega(u-v)) h(v) e^{-i \omega v} \: dv \; , \end{equation} and analogously, \begin{equation} \label{eq: definition phi hut im beweis der entw.} \hat{\phi}^{(1)}_\omega(u):= - \int_{u}^\infty \frac{1}{\omega} \sin (\omega(u-v)) \frac{c_{30}}{v^3} e^{-i \omega v} \: dv \; . \end{equation} Thus, obviously $\phi_\omega^{(1)}(u) = \hat{\phi}_\omega^{(1)}(u) + \tilde{\phi}_\omega^{(1)}(u)$. 
Now we iterate these two functions $$ \tilde{\phi}_\omega^{(k+1)}(u) := - \int_{u}^\infty \frac{1}{\omega} \sin(\omega(u-v))V_0(v) \tilde{\phi}_\omega^{(k)}(v) \:dv \; , \; k \geq 1 \; ,$$ analogously for $\hat{\phi}_\omega^{(k+1)}(u)$. Hence, we have the formal decomposition \begin{equation} \label{eq: formal decomposition} \grave{\phi}_\omega (u) = e^{-i \omega u} + \sum_{k=1}^\infty \hat{\phi}_\omega^{(k)}(u) + \sum_{k=1}^\infty \tilde{\phi}_\omega^{(k)}(u) \;. \end{equation} Both series are well-defined. In order to show this, we use the bound \begin{equation} \label{eq: alte Ungleichung fuer greensfkt} \Big| \frac{1}{\omega} \sin(\omega(u-v)) \Big| \: \leq \:\frac{4|v|}{1 + |\omega v|} \; , \end{equation} from Lemma \ref{lemma: abschaetzung der omegaabl der greensfkt} for real $\omega$ [Note that this estimate is also valid for the case $v\geq u>0$]. Hence, we get inductively the estimates \begin{eqnarray*} \big| \hat{\phi}_\omega^{(k+1)} (u) \big| \leq \hat{R}_\omega (u) \frac{P_\omega(u)^k}{k!} \; , \\ \big| \tilde{\phi}_\omega^{(k+1)} (u) \big| \leq \tilde{R}_\omega (u) \frac{P_\omega(u)^k}{k!} \; , \end{eqnarray*} for all $k \geq 0$, where the functions $\hat{R}, \tilde{R}$ and $P$ are given by \begin{eqnarray*} \hat{R}_\omega (u) & := & \int_u^\infty \frac{4 v}{1 + |\omega| v} \Big|\frac{c_{30}}{v^3} \Big| \: dv \; ,\\ \tilde{R}_\omega (u) & := & \int_u^\infty \frac{4 v}{1 + |\omega| v} | h(v) | \: dv \; ,\\ P_\omega(u) & := & \int_u^\infty \frac{4 v}{1 + |\omega| v} | V_0 (v) | \: dv \; . \end{eqnarray*} Thus, the series $\sum \hat{\phi}_\omega^{(k)}(u)$ as well as $\sum \tilde{\phi}_\omega^{(k)}(u)$ converge locally uniformly with respect to $u$ and $\omega$. In the next step we show that, for fixed $u>0$, $\sum \tilde{\phi}_\omega^{(k)}(u)$ is $C^1(\field{R})$ with respect to $\omega$. 
To this end, it suffices to prove that each summand $ \tilde{\phi}_\omega^{(k)}, \; k\geq 1,$ is $C^1$ and that the series $\sum \partial_\omega \tilde{\phi}_\omega^{(k)}$ converges locally uniformly in $\omega$. Due to the estimates (\ref{eq: abschaetzung der omegaabl der greensfkt}),(\ref{eq: alte Ungleichung fuer greensfkt}), we have the inequality \begin{eqnarray} \Big| \partial_\omega \left[ \frac{1}{\omega} \sin (\omega(u-v)) h(v) e^{-i\omega v} \right] \Big| \leq \hspace*{5cm} \nonumber \\ \leq \Big| \frac{12 \: v^2}{1 +| \omega |v} h(v) \Big| + \Big| \frac{4 v^2}{1 + |\omega| v} h(v)\Big| = \frac{16 v^2}{1 + |\omega| v} |h(v)|\;. \label{eq: Abschaetzung Integralkern tilde phi 1} \end{eqnarray} Hence, the second term is an integrable bound, uniformly in $\omega$, for the first derivative of the integrand. It follows that $\tilde{\phi}_\omega^{(1)}(u)$ is $C^1$ with respect to $\omega$, bounded by \begin{eqnarray*} \big| \partial_\omega \tilde{\phi}_\omega^{(1)}(u) \big| \leq \int_u^\infty \frac{16 v^2}{1 + |\omega| v} |h(v)| \:dv =: \tilde{R}_\omega^{(1)}(u) \; . \end{eqnarray*} Together with the estimate \begin{eqnarray*} \tilde{R}_\omega (u) \leq \frac{1}{4 u} \int_u^\infty \frac{16 v^2}{1 +|\omega| v} |h(v)| \:dv \leq \frac{1}{u} \tilde{R}_\omega^{(1)}(u) \; , \end{eqnarray*} one shows inductively that $\tilde{\phi}_\omega^{(k+1)} (u)$ is $C^1$ with respect to $\omega$, bounded by \begin{eqnarray*} \big| \partial_\omega \tilde{\phi}_\omega^{(k+1)} (u) \big| \leq \tilde{R}_\omega^{(1)} (u) \frac{(4 P_\omega(u))^k}{k!} \; . \end{eqnarray*} This yields that the sum $\sum \partial_\omega \tilde{\phi}_\omega^{(k)}$ converges locally uniformly in $\omega$. Hence, the sum $\sum \tilde{\phi}_\omega^{(k)}(u)$ is $C^1(\field{R})$ with respect to $\omega$. According to the decomposition (\ref{eq: formal decomposition}), it remains to analyze the $\omega-$dependence of $\sum \hat{\phi}_\omega^{(k)}(u)$. 
To this end, we compute the first summand: \begin{eqnarray*} \hat{\phi}_\omega^{(1)} (u) & = & \frac{1}{2 i \omega} \int_u^\infty \left( e^{-i \omega(u-v)} - e^{i \omega (u-v)}\right) e^{-i \omega v} \frac{c_{30}}{v^3} \: dv \\ & = & \frac{1}{2 i \omega} e^{-i \omega u} \int_u^\infty \frac{c_{30}}{v^3} \: dv - \frac{1}{2 i \omega} e^{i \omega u} \int_u^\infty \frac{c_{30}}{v^3} e^{-2 i \omega v} \: dv \; . \end{eqnarray*} Integrating the second term by parts, we obtain \begin{eqnarray*} & = & \frac{1}{2 i \omega} \left( e^{-i \omega u} \frac{c_{30}}{2 u^2} - e^{i \omega u} \frac{c_{30}}{2 u^2} e^{-2 i \omega u} + e^{i \omega u} \int_u^\infty \frac{c_{30}}{-2v^2} (-2 i \omega) e^{-2 i \omega v} \: dv \right) \\ & = & e^{i \omega u} \int_u^\infty \frac{c_{30}}{2v^2} \: e^{-2 i \omega v} \: dv \; . \end{eqnarray*} The series expansion of Lemma \ref{lemma: zur Reihenentwicklung} in the limit $\varepsilon \rightarrow 0$ yields \begin{eqnarray} \label{eq: Entwicklung von hut phi 1} \hat{\phi}_\omega^{(1)} (u) = \frac{c_{30}}{2} e^{i \omega u} \bigg\{ 2 i \omega \big( \log \left(2 i \omega u\right) + c_0 \big) \hspace*{30mm} \nonumber \\ - u^{-1} \sum \limits_{k=0,k\neq 1}^\infty \frac{(-1)^k }{(k-1) k!} \left(2 i \omega u\right)^k \bigg\} \; . \end{eqnarray} Intuitively, the only term which is not $C^1$ is the term involving $2 i \omega \log (2 i \omega u)$. More precisely, defining \begin{eqnarray} \label{eq: Definition hut psi 1} \hat{\psi}_\omega^{(1)} (u) := \hat{\phi}_\omega^{(1)} (u) - c_{30} e^{i \omega u} i\omega \log \left(2 i \omega u\right) \; , \end{eqnarray} and iterating this by \begin{equation*} \hat{\psi}_\omega^{(k+1)} (u) := - \int_u^\infty \frac{1}{\omega} \sin(\omega(u-v)) V_0(v) \hat{\psi}_\omega^{(k)}(v) \: dv \; ,\; k \geq 1, \end{equation*} we show next that the sum $\sum \hat{\psi}_\omega^{(k)} (u)$ is $C^1$ with respect to $\omega$. By definition this holds for the initial function $\hat{\psi}^{(1)}_\omega (u)$. 
In order to prove this for the sum, we apply the same method as above. To this end, we need good estimates for the initial functions $\hat{\psi}_\omega^{(1)} (u)$ and $ \partial_\omega \hat{\psi}_\omega^{(1)} (u)$. Estimating the integral representation of $\hat{\phi}^{(1)}_\omega (u)$, we obtain for arbitrary $u>0$ and $\omega \in \field{R}$, \begin{eqnarray*} \Big| \hat{\psi}_\omega^{(1)} (u) + c_{30} e^{i \omega u} i\omega \log \left(2 i \omega u\right) \Big| = \Big| \hat{\phi}_\omega^{(1)} (u) \Big| \leq \int_u^\infty \Big| \frac{c_{30}}{2 v^2} \Big| \: dv = \frac{c_{30}}{2u} \; . \end{eqnarray*} On the other hand, looking at the series in (\ref{eq: Entwicklung von hut phi 1}), we obtain for all $u \leq \frac{1}{| \omega|} $ the estimate \begin{eqnarray} \big| \hat{\psi}^{(1)}_\omega (u) \big| = \Big| \frac{c_{30}}{2} e^{i \omega u} \Big\{ 2 i \omega c_0 - \frac{1}{u} \sum \limits_{k=0,k\neq 1}^\infty \frac{(-1)^k }{(k-1) k!} \left(2 i\omega u\right)^k \Big \} \Big| \leq \frac{\tilde{c}}{u} \; , \label{eq: Abschaetzung hut psi 1 bereich leq 1/omega} \end{eqnarray} with a suitable constant $\tilde{c}$. Thus, we get for all $u>0$ and $ \omega \in \field{R}$ the estimate \begin{equation} \label{eq: abschaetzung hut psi 1} \big| \hat{\psi}^{(1)}_\omega (u) \big| \leq \frac{c}{u} + c |\omega| |\log (2 i \omega u) | \: 1_{[\frac{1}{|\omega|} , \infty)} (u) \; , \end{equation} where $c$ is chosen suitably and $1_{.}(.)$ denotes the characteristic function. 
In order to estimate the derivative $\partial_\omega \hat{\psi}^{(1)}_\omega (u)$, we use in the domain $ u \geq \frac{1}{|\omega|}$, $ |\omega| \neq 0$, the following bound for $\partial_\omega \hat{\phi}^{(1)}_\omega (u)$ [see also (\ref{eq: Abschaetzung Integralkern tilde phi 1})], \begin{eqnarray*} \nonumber \Big| \partial_\omega \left( \hat{\phi}^{(1)}_\omega (u)\right) \Big| & \leq & \int_u^\infty \frac{16 v^2}{1 + |\omega| v} \Big| \frac{c_{30}}{v^3} \Big| \: dv \\ & \leq & \frac{16}{|\omega|} \int_u^\infty \frac{|c_{30}|}{v^2} \: dv \leq \frac{16 c_{30}}{|\omega| u} \leq 16 \: c_{30} \; . \end{eqnarray*} Together with the analogue of estimate (\ref{eq: Abschaetzung hut psi 1 bereich leq 1/omega}) in the region $u \leq \frac{1}{|\omega|}$, we obtain the bound \begin{eqnarray} \label{eq: abschaetzung partial omega hut psi 1} \big| \partial_\omega \hat{\psi}_\omega^{(1)} (u) \big| \leq \tilde{c} + \tilde{c}(1 + u |\omega|) |\log(2i \omega u)| \: 1_{[\frac{1}{|\omega|}, \infty) } (u) \; , \end{eqnarray} where $u>0,\omega \in \field{R}$ and $\tilde{c}$ is an appropriate constant. For simplicity, we choose $c = \tilde{c}$ such that both inequalities (\ref{eq: abschaetzung hut psi 1}), (\ref{eq: abschaetzung partial omega hut psi 1}) hold.
Using these inequalities, we show by induction, in the same way as above, that $\hat{\psi}^{(k)}_\omega (u)$ is $C^1$ with respect to $\omega$ and obeys the estimates \begin{eqnarray} \big| \hat{\psi}_\omega^{(k)} (u) \big| & \leq & \frac{c}{u} \frac{P_\omega (u)^{k-1}}{(k-1)!} + \frac{c}{u}\: r (|\omega|) \frac{P_\omega (u)^{k-2}}{(k-2)!} \; , \label{eq: abschaetzung hut psi k}\\ \big| \partial_\omega \big( \hat{\psi}_\omega^{(k)} (u) \big) \big| & \leq & c \frac{\big(4 P_\omega (u)\big)^{k-1}}{(k-1)!} + 5 c\: r (|\omega|) \frac{\big(4 P_\omega (u)\big)^{k-2}}{(k-2)!} \; , \label{eq: abschaetzung partial omega hut psi k} \end{eqnarray} for all $k \geq 2,u>0$ and $\omega \in \field{R}$, where $r$ is given by \begin{equation*} r(|\omega|) := \int _{\frac{1}{|\omega|}}^\infty \frac{4| \omega| v^2}{1 + |\omega| v} \: | V_0(v) | | \log(2 i \omega v)| \: dv \; . \end{equation*} Due to (\ref{eq: abschaetzung hut psi k}),(\ref{eq: abschaetzung partial omega hut psi k}), the sums $\sum \hat{\psi}^{(k)}_\omega (u)$ and $\sum \partial_\omega \hat{\psi}^{(k)}_\omega (u)$ converge locally uniformly in $\omega$. Hence, we conclude that $\sum \hat{\psi}^{(k)}_\omega (u)$ is well defined and continuously differentiable with respect to $ \omega$. Thus, it remains to look at the term we get by the iteration of \begin{equation*} \vartheta^{(1)}_\omega (u) := c_{30} e^{i\omega u} i \omega \log(2 i \omega u) = i c_{30} \: \omega \log(2 i \omega) e^{i\omega u} + i c_{30} \: \omega\log(u) e^{i \omega u} \; . \end{equation*} To this end, we split up the iteration, exactly as we did for the iteration of $\phi^{(k)}_\omega (u)$, i.e. 
we define \begin{eqnarray*} \tilde{\vartheta}^{(2)}_\omega (u) & := & -\int_{u}^\infty \frac{1}{\omega} \sin (\omega(u-v)) h(v) \vartheta^{(1)}_\omega (v) \: dv \; , \\ \hat{\vartheta}^{(2)}_\omega(u) & := & - \int_{u}^\infty \frac{1}{\omega} \sin (\omega(u-v)) \frac{c_{30}}{v^3} \vartheta^{(1)}_\omega (v) \: dv \; , \end{eqnarray*} and iterate these functions, $$ \tilde{\vartheta}_\omega^{(k+1)}(u) := - \int_{u}^\infty \frac{1}{\omega} \sin(\omega(u-v))V_0(v) \tilde{\vartheta}_\omega^{(k)}(v) \:dv \; , \; k \geq 2 \; ,$$ analogously for $\hat{\vartheta}_\omega^{(k+1)}(u)$. Next, in exactly the same way as for $\tilde{\phi}^{(k)}$, one sees that $$\sum_{k=2}^\infty \tilde{\vartheta}_\omega^{(k)}(u) =2 i \omega \log (2 i \omega) \:f_1(\omega,u) +2 i \omega f_2 (\omega,u) \; ,$$ where $f_1(.,u)$ and $f_2(.,u)$ are $C^1$ with respect to $\omega$. Finally, by an exact calculation \begin{eqnarray*} \hat{\vartheta}^{(2)}_\omega (u) = i c_{30}^2 \omega \log(2 i \omega)e^{-i \omega u} \int_u^\infty e^{2i \omega v} \frac{1}{2v^2} \: dv \hspace*{10mm}\\ + i c_{30}^2 \omega e^{-i\omega u} \int_u^\infty e^{2 i \omega v} \left( \frac{1}{4v^2} + \frac{\log v}{2 v^2} \right) \: dv \; , \end{eqnarray*} together with the series expansion of Lemma \ref{lemma: zur Reihenentwicklung} in the limit $\varepsilon \rightarrow 0$ we obtain \begin{eqnarray*} \hat{\vartheta}^{(2)}_\omega (u) = \frac{1}{4} i c_{30}^2 \: \omega (1 + 2 \log(2 i \omega)) e^{-i \omega u} \hspace*{55mm} \\ \times \bigg[ (-2 i \omega) \big( \log \left(-2 i \omega u\right) + c_0 \big) - u^{-1} \sum \limits_{k=0,k\neq 1}^\infty \frac{(-1)^k }{(k-1) k!} \left(- 2 i \omega u\right)^k \bigg] \\ + \frac{1}{2} i c_{30}^2 \: \omega e^{-i\omega u} \Bigg[ \sum \limits_{m=0}^1 \begin{pmatrix} 1\\ m \end{pmatrix} \log^{1-m} (u) \nonumber \bigg\{ (-2 i \omega ) \times \hspace*{25mm} \\ \left( \frac{(-1)^{m+2}}{m+1} \log^{m+1}\left(-2 i\omega u\right) + \sum \limits_{k=0}^m c_k \log^k (-2 i\omega u)\right) \nonumber\\ - 
u^{-1}\sum \limits_{k=0,k\neq 1}^\infty \frac{(-1)^k (-1)^m \: m!}{(k-1)^{m+1} k!} (-2 i \omega u)^k \bigg\} \Bigg] \; . \end{eqnarray*} Proceeding in the same way as for $\sum \hat{\psi}^{(k)}_\omega (u)$ [i.e. we omit the $\log \omega$-terms in the square brackets, and iterate these functions], we again get terms of the form $$ 2i\omega \log (2 i \omega) \:f_3(\omega,u) +2i \omega f_4 (\omega,u) $$ with continuously differentiable functions $f_3(.,u),f_4(.,u)$. So after simplifications there remain terms of the form $$(2i \omega)^2 \log^s(2i\omega) \log^r(-2i \omega) \log^t(u) e^{-i \omega u}\;.$$ These are obviously $C^1$ with respect to $\omega$, and so is their iteration, since the additional order in $\omega$ directly yields integrable bounds for all $\omega$. This completes the proof. \end{proof} Note that one can apply this idea of the proof to the case $l\geq 1$. This yields a similar result, but requires considerably more involved calculations, owing to the construction of the Jost solutions $\grave{\phi}_\omega$ [cf. \cite[Section 5]{Kronthaler}]. \section{The decay rate for spherically symmetric initial data} According to the integral representation (\ref{eq: Darstellung der Loesung l=0}), the solution of the Cauchy problem for compactly supported smooth initial data $\Psi_0 \in C_0^\infty (\field{R})^2$ has the pointwise representation \begin{eqnarray*} \Psi(t,u)=e^{-itH} \Psi_0 (u) =\hspace*{70mm}\\ - \frac{1}{\pi} \int_{\mathbb{R}} e^{-i \omega t} \left( \int_{\textrm{supp}\: \Psi_0} \mathrm{Im} \! \left( \frac{\acute{\phi}_{\omega l}(u) \grave{\phi}_{\omega l}(v)}{w(\acute{\phi}_{\omega l},\grave{\phi}_{\omega l})}\right) \left(% \begin{array}{lc} \omega & 1 \\ \omega^2 & \omega \\ \end{array}% \right) \Psi_0(v) dv \right) \: d\omega \; . \end{eqnarray*} Our goal now is to use the Fourier representation (\ref{eq: Darstellung der Loesung l=0}) in order to obtain detailed decay rates.
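The underlying mechanism, namely that the large-$t$ decay is dictated by the regularity of the integrand at $\omega=0$, can be illustrated with a toy model (Python/NumPy sketch; the Gaussian cutoff and all constants are illustrative choices of ours and not the actual kernel of (\ref{eq: Darstellung der Loesung l=0})). Watson-lemma-type heuristics give $\int_{\mathbb{R}} \omega^2 \log|\omega|\, e^{-\omega^2} e^{-i\omega t}\, d\omega \sim 2\pi\, t^{-3}$, consistent with a $t^{-3}$ decay as in the Price law for $l=0$.

```python
import numpy as np

def kernel(w):
    """Toy spectral kernel: w^2 log|w| singularity at w = 0, Gaussian cutoff."""
    out = np.zeros_like(w)
    nz = w > 0
    out[nz] = w[nz] ** 2 * np.log(w[nz]) * np.exp(-w[nz] ** 2)
    return out

def I(t, dw=1e-4, wmax=8.0):
    """I(t) = int over R of w^2 log|w| e^{-w^2} e^{-iwt} dw (even -> cosine)."""
    w = np.arange(0.0, wmax, dw)
    f = kernel(w) * np.cos(w * t)
    # trapezoidal rule, written out to stay independent of NumPy version
    return 2.0 * dw * (f.sum() - 0.5 * (f[0] + f[-1]))

# the w^2 log|w| singularity forces I(t) of order t^{-3} for large t
vals = {t: I(t) for t in (20.0, 40.0)}
```

Doubling $t$ reduces the integral by roughly a factor of $8$, as expected for a $t^{-3}$ law; a smooth integrand, by contrast, would decay faster than any power.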
To this end, we have to analyze the integral kernel, hence essentially \begin{equation} \label{eq: essentieller part vom Integralkern} \mathrm{Im} \! \left(\frac{\acute{\phi}_{\omega}(u) \grave{\phi}_{\omega}(v)}{w(\acute{\phi}_{\omega },\grave{\phi}_{\omega })}\right)\; . \end{equation} Since we already know that $\acute{\phi}_\omega$ is analytic on a neighborhood of the real line, it remains to understand $\grave{\phi}_\omega$ at the point $\omega = 0$. To this end, we would like to use an expansion as in Lemma \ref{lemma: Entwicklung von acute phi,l=0}; however, this expansion is not sufficient for our purpose. Thus, we apply a similar method to obtain \begin{lemma} \label{lemma: bessere Entwicklung von gravephi for l=0} For $l=0$, $\omega \in \field{R} \setminus \{0\}$, $n\geq 3$ and fixed $u>0$, we get for the fundamental solution $\grave{\phi}_\omega(u)$ the representation \begin{equation}\label{eq: bessere Entwicklungvon gravephi for l=0} \grave{\phi}_\omega (u) = e^{-i\omega u} + g_0(\omega,u)+ \sum_{i\geq j+k=1}^n (2i\omega)^i \log^j(2i\omega)\log^k(-2i\omega) g_{ijk}(\omega,u) \; , \end{equation} where the functions $g_0,g_{ijk} \in C^n(\field{R})$ with respect to $\omega$. \end{lemma} In order to prove this, we need the following lemma. \begin{lemma} \label{lemma: Restiteration ist C3} Let $u>0$, $n \in \field{N}$ and $h \in C^\infty(\field{R}_+)$ be a smooth function satisfying $\int_u^\infty v^{n+1} |h(v)| \:dv < \infty$. \\ Then: \begin{enumerate} \item $$ f^{(1)}_\omega (u) := - \int_u^\infty \frac{1}{\omega} \sin(\omega(u-v)) h(v) e^{-i \omega v} \: dv$$ is $C^n(\field{R})$ with respect to $\omega$. \item For all $k \geq 1$, $$ f^{(k+1)}_\omega (u) := - \int_u^\infty \frac{1}{\omega} \sin(\omega (u-v)) V_0(v) f^{(k)}_\omega (v) \:dv$$ is $C^n(\field{R})$ with respect to $\omega$, and the series $ \sum_{k\geq1} \partial_\omega^m f^{(k)}_\omega (u)$, $m \leq n$, converge locally uniformly.
\end{enumerate} In particular, $\sum f^{(k)}_\omega (u)$ is $C^n(\field{R})$ with respect to $\omega$. \end{lemma} \begin{proof} This is shown in exactly the same way as the statement that the functions $\tilde{\phi}^{(k)}_\omega$ in the proof of Lemma \ref{lemma: Entwicklung von acute phi,l=0} as well as the series are $C^1$ with respect to $\omega$. In order to show the differentiability up to the $n$-th order, we use the estimates of Lemma \ref{lemma: abschaetzung der omegaabl der greensfkt}. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma: bessere Entwicklung von gravephi for l=0}] Since the calculations are rather involved, we first treat the case $n=3$. To this end, we split up the iteration scheme (\ref{eq: iterationsschema fuer grave phi}) of the fundamental solutions in the following way. According to Lemma \ref{lemma: asymptotische Entwicklung von V_l}, we can write the potential $V_0$ as $$ V_0(v) = \sum_{p=3}^5 \sum_{q=0}^{p-3} c_{pq}\frac{\log^q(v)}{v^p} + r_6(v) \; ,$$ where $r_6$ is a smooth function for $v\geq u$ behaving asymptotically at infinity as $\mathcal{O}\left( \frac{\log^3(v)}{v^6}\right)$. Thus, defining $$\tilde{\phi}^{(1)}_\omega (u) := -\int_u^\infty \frac{1}{\omega}\sin(\omega(u-v)) r_6(v) e^{-i \omega v} \: dv$$ and for all $k\geq1$ $$ \tilde{\phi}^{(k+1)}_\omega (u):= - \int_u^\infty \frac{1}{\omega} \sin(\omega (u-v)) V_0(v) \tilde{\phi}^{(k)}_\omega (v) \:dv\; ,$$ Lemma \ref{lemma: Restiteration ist C3} yields that $\sum \tilde{\phi}^{(k)}_\omega \in C^3(\field{R})$ with respect to $\omega$ and is a contribution to $g_0(\omega,u)$ in the statement of the lemma.
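Note why Lemma \ref{lemma: Restiteration ist C3} is indeed applicable here with $n=3$ (a brief check, using only the stated asymptotics of $r_6$): since $r_6(v) = \mathcal{O}\left( \frac{\log^3(v)}{v^6}\right)$, we have \begin{eqnarray*} \int_u^\infty v^{4} \, |r_6(v)| \: dv \; \leq \; c \int_u^\infty \frac{\log^3(v)}{v^2} \: dv \; < \; \infty \; , \end{eqnarray*} so the integrability condition $\int_u^\infty v^{n+1} |h(v)| \:dv < \infty$ is satisfied for $h=r_6$ and $n=3$.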
Thus, we have to compute the remaining term $$\hat{\phi}^{(1)}_\omega (u) := -\int_u^\infty \frac{1}{\omega}\sin(\omega(u-v)) \sum_{p=3}^5 \sum_{q=0}^{p-3} c_{pq}\frac{\log^q(v)}{v^p} e^{-i \omega v} \: dv.$$ We do this essentially in the same way as we computed the terms $\hat{\phi}^{(1)}, \hat{\vartheta}^{(2)}$ in the proof of Lemma \ref{lemma: Entwicklung von acute phi,l=0}. We split up the $\sin(\omega(u-v))$ with Euler's formula and integrate by parts and obtain \begin{eqnarray} \label{eq: Integraldarstellung von phihut3} = -e^{i \omega u} \int_u^\infty \left( \frac{c_{30}}{-2 v^2} + \sum_{p=3}^4 \sum_{q=0}^{p-2} \tilde{c}_{pq} \frac{\log^q v}{v^p} \right) e^{-2 i \omega v} \: dv \; , \end{eqnarray} where the coefficients $\tilde{c}_{pq}$ depend on the integral functions of the terms $\log^r v / v^s$. Now, we apply Lemma \ref{lemma: zur Reihenentwicklung} in the limit $\varepsilon \searrow 0$ and get \begin{eqnarray} \nonumber = & & e^{i \omega u} \frac{c_{30}}{2} \left\{ 2 i \omega \log(2i\omega u) - \frac{1}{u} \sum_{k=0}^\infty d_k(2 i \omega u)^k \right\} \\ \nonumber & + & e^{ i \omega u} \frac{c_{41}}{3} \bigg\{ (2i \omega)^2 \left( \frac{1}{4}\log^2(2 i \omega u) + \log(2 i \omega u) (c -\frac{1}{2} \log u) \right) \\ \nonumber & & \hspace*{40mm} +\ (c + c \log u) \frac{1}{u^2} \sum_{k=0}^\infty d_k (2i\omega u)^k\bigg\} \\ \nonumber & + & e^{i\omega u} \bigg\{ (2i \omega)^3 \sum_{s+t=1}^3 c \log^s(2i\omega u) \log^t u \\ \label{eq: expansion of phihut3} & & \hspace*{40mm} +\sum_{m=0}^2 c \log^m u \frac{1}{u^3} \sum_{k=0}^\infty d_k(2i \omega u)^k \bigg\} \end{eqnarray} with appropriate constants $c$ and $d_k$, which are of the form $$ d_k = \frac{(-1)^k (-1)^m \: m!}{(k-p+1)^{m+1} k!} \; . \qquad \textrm{[cf. 
Lemma \ref{lemma: zur Reihenentwicklung}]} $$ Since the series terms are obviously $C^3(\field{R})$ with respect to $\omega$, this expression of $\hat{\phi}_\omega^{(1)}(u)$ fits into the desired expansion (\ref{eq: bessere Entwicklungvon gravephi for l=0}). In the next step we have to iterate (\ref{eq: expansion of phihut3}). To this end, we treat each term in the curly brackets separately. We illustrate this with the first term, which we denote by \begin{eqnarray} \label{eq: def alpha 1} \alpha^{(1)} (u) & := & e^{i \omega u} \frac{c_{30}}{2} \bigg\{ 2 i \omega \log(2i\omega u) - \frac{1}{u} \sum_{k=0}^\infty d_k(2 i \omega u)^k \bigg\} \\ \nonumber & = & e^{i \omega u} \int_u^\infty \frac{c_{30}}{2 v^2} e^{-2i\omega v} \: dv \; . \end{eqnarray} In order to derive sufficient bounds for all $u>0$, we use different methods for the regions $|\omega| u\geq 1$ and $|\omega| u<1$. First, let $u$ be such that $|\omega| u\geq 1$; integrating by parts, we get: \begin{eqnarray} \alpha^{(1)} (u)& = & e^{i \omega u} \int_u^\infty \frac{c_{30}}{2 v^2}\frac{1}{(-2 i \omega)^3} \partial_v^3 e^{-2i\omega v} \: dv \nonumber\\ & = & \frac{c_{30}}{4 i \omega} e^{- i \omega u} \frac{1}{u^2} -\frac{c_{30}}{(2i \omega)^2} e^{- i \omega u} \frac{1}{u^3} + \frac{3c_{30}}{(2i \omega)^3} e^{- i \omega u} \frac{1}{u^4} \label{eq: part int bei alpha 1} \\ & &- e^{ i \omega u} \int_u^\infty \frac{12}{v^5} \frac{1}{(2 i \omega)^3} e^{- 2 i \omega v} \: dv \; .
\nonumber \end{eqnarray} Using this expression and elementary integral estimates, we get for all $u>0$ satisfying $|\omega| u \geq 1$ the bounds \begin{eqnarray}\nonumber \big| \alpha^{(1)} (u) \big| & \leq & c \frac{1}{|\omega|u^2} \:, \\ \nonumber \big|\partial_\omega \alpha^{(1)} (u) \big| & \leq & c \frac{1}{|\omega|u} \:, \\ \nonumber \big|\partial_\omega^2 \alpha^{(1)} (u) \big| & \leq & c \frac{1}{|\omega|} \:, \\ \label{eq: Abschaetzung fuer alpha bereich u omega geq1} \big|\partial_\omega^3 \alpha^{(1)} (u) \big| & \leq & c \frac{u}{|\omega|} \:, \end{eqnarray} with suitable constants $c$. Moreover, comparing the infinite sum of (\ref{eq: def alpha 1}) with the exponential function, one directly sees that it is $C^3$ with respect to $\omega$. It satisfies for all $u>0$ with $|\omega| u <1$ the bounds \begin{eqnarray} \nonumber \Big| \frac{1}{u} \sum_{k=0}^\infty d_k(2 i \omega u)^k \Big| & \leq & \frac{c}{u} \\ \nonumber \bigg| \partial_\omega \bigg( \frac{1}{u} \sum_{k=0}^\infty d_k(2 i \omega u)^k\bigg) \bigg| & \leq & c \\ \nonumber \bigg| \partial_\omega^2 \bigg( \frac{1}{u} \sum_{k=0}^\infty d_k(2 i \omega u)^k \bigg) \bigg| & \leq & c u\\ \label{eq: abschaetzungen fuer summe in alpha omega u leq 1} \bigg| \partial_\omega^3 \bigg( \frac{1}{u} \sum_{k=0}^\infty d_k(2 i \omega u)^k \bigg) \bigg| & \leq & c u^2 \; . \end{eqnarray} Using (\ref{eq: Abschaetzung fuer alpha bereich u omega geq1}) and (\ref{eq: abschaetzungen fuer summe in alpha omega u leq 1}), one verifies that iterating the sum first with $r_6$, followed by the full iteration with the potential $V_0$, yields a $C^3$-function.
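To indicate where the first bound in (\ref{eq: Abschaetzung fuer alpha bereich u omega geq1}) comes from (a sketch, using only the terms of (\ref{eq: part int bei alpha 1}) and the assumption $|\omega| u \geq 1$): estimating each boundary term and the remaining integral separately, \begin{eqnarray*} \big| \alpha^{(1)} (u) \big| & \leq & \frac{|c_{30}|}{4 |\omega| u^2} + \frac{|c_{30}|}{4 \omega^2 u^3} + \frac{3 |c_{30}|}{8 |\omega|^3 u^4} + \frac{12}{8 |\omega|^3} \int_u^\infty \frac{dv}{v^5} \\ & \leq & c \frac{1}{|\omega| u^2} \left( 1 + \frac{1}{|\omega| u} + \frac{1}{(|\omega| u)^2} \right) \; \leq \; c \frac{1}{|\omega| u^2} \; , \end{eqnarray*} since every power of $1/(|\omega| u)$ can be estimated by $1$. The bounds for the $\omega$-derivatives follow in the same manner, each $\omega$-derivative costing one additional factor of $u$.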
For the remaining term $c_{30}/2 e^{i \omega u} 2i \omega \log(2i\omega u)$ we use the identity $\log(2 i \omega u) = \log (2 i \omega) + \log u$ together with Lemma \ref{lemma: Restiteration ist C3} to show that the first iteration with $r_6$ followed by the full iteration with the potential $V_0$ yields a term of the form $ 2i\omega \log(2 i \omega) f_{110}(\omega,u) +\omega f_{100}(\omega,u)$, $f_{110},f_{100} \in C^3(\field{R})$ with respect to $\omega$, fitting into the expansion (\ref{eq: bessere Entwicklungvon gravephi for l=0}). Thus, it remains to compute the integral \begin{equation*} -\int_u^\infty \frac{1}{\omega} \sin(\omega(u-v)) \sum_{p=3}^5 \sum_{q=0}^{p-3} c_{pq}\frac{\log^q(v)}{v^p} \alpha^{(1)} (v) \:dv \; . \end{equation*} As an example, we do this for the term \begin{equation} \label{eq: definition von beta} \beta^{(2)} (u) := -\int_u^\infty \frac{1}{\omega} \sin(\omega(u-v)) \frac{c_{30}}{v^3} \alpha^{(1)} (v) \:dv \; . \end{equation} A lengthy calculation, using Lemma \ref{lemma: zur Reihenentwicklung} and Lemma \ref{lemma: entwicklung des iterationsschemas der unendl summe}, which will be stated and proven afterwards, yields \begin{eqnarray} \nonumber \beta^{(2)}(u) =& \! \! \! & 2 i\omega \log (2 i\omega) e^{-i \omega u} c \bigg\{ (-2i \omega) \log(-2i\omega u) - \frac{1}{u} \sum_{k=0}^\infty d_k (-2i \omega u)^k \bigg\} \\ \nonumber & + \! \! \! & 2 i \omega e^{-i \omega u} \frac{c_{30}^2}{4} \bigg\{ (-2i\omega) \big(-\frac{1}{2} \log^2(-2i\omega u) \\ & & \nonumber \hspace*{4mm}+ \log(-2i\omega u)(c+ \log u)\big) - (c+c\log u) \frac{1}{u}\sum_{k=0}^\infty d_k (-2i\omega u)^k \bigg\} \\ & +\! \! \! & e^{-i \omega u} \bigg\{c (2i\omega)^2 \log(-2i\omega u) + \label{eq: berechnung von beta (2)} \frac{1}{u^2} \sum_{k=0}^\infty d_k(-2i\omega u)^k \bigg\}\;, \end{eqnarray} with suitable constants $c,d_k$. Hence $\beta^{(2)}(u)$ fits into the expansion (\ref{eq: bessere Entwicklungvon gravephi for l=0}).
At this stage the scheme does not yet terminate; a closer look shows, however, that the most irregular term at $\omega=0$, namely $2i\omega \log (2i\omega)$, now appears with a $1/u$ decay, while the other irregularities appear with an additional $\omega$-power. Furthermore, due to the bounds (\ref{eq: Abschaetzung fuer alpha bereich u omega geq1}) together with direct integral estimates, we obtain for all $u$ with $|\omega|u \geq 1$ the bounds \begin{eqnarray} \nonumber \big| \beta^{(2)}(u) \big| & \leq & c\int_u^\infty \frac{v}{1 +|\omega| v}\frac{1}{v^3} \frac{1}{v^2 |\omega|}\:dv \leq c \frac{1}{ u^4| \omega |} \\ \nonumber \big| \partial_\omega \beta^{(2)}(u)\big| & \leq & c\frac{1}{u^3 |\omega|} \\ \nonumber \big| \partial_\omega^2 \beta^{(2)}(u)\big| & \leq &c \frac{1}{u^2 |\omega|} \\ \label{eq: Abschaetzungen fuer beta(2) bereich omega u geq1} \big| \partial_\omega^3 \beta^{(2)}(u)\big| & \leq & c\frac{1}{u |\omega|} \; . \end{eqnarray} Using, in the region $|\omega|u <1$, estimates for the sum-terms in (\ref{eq: berechnung von beta (2)}) analogous to (\ref{eq: abschaetzungen fuer summe in alpha omega u leq 1}), we conclude as before that iterating these first with $r_6$, followed by the full iteration with the potential $V_0$, and summing up yields $C^3$-terms. We split up the remaining $\log$-terms by $\log(-2i \omega u) = \log(-2i \omega) + \log(u)$ and use Lemma \ref{lemma: Restiteration ist C3} to show that applying the same procedure yields terms that fit into (\ref{eq: bessere Entwicklungvon gravephi for l=0}). Hence, we have to analyze the integral \begin{equation*} -\int_u^\infty \frac{1}{\omega} \sin(\omega(u-v)) \sum_{p=3}^5 \sum_{q=0}^{p-3} c_{pq}\frac{\log^q(v)}{v^p} \beta^{(2)} (v) \:dv \; , \end{equation*} and as an example we treat the term \begin{equation} \label{eq: Definition gamma (3)} \gamma^{(3)}(u) := -\int_u^\infty \frac{1}{\omega} \sin(\omega(u-v)) \frac{c_{30}}{v^3} \beta^{(2)} (v) \:dv \; .
\end{equation} Computing this expression with the same methods, one sees that the term with $2 i \omega \log(2 i \omega)$ decays as $1/u^2$ and the $\omega^2 \log^s(\pm 2i\omega)$-terms decay as $\log^t(u)/u$. With bounds analogous to (\ref{eq: Abschaetzungen fuer beta(2) bereich omega u geq1}) and (\ref{eq: abschaetzungen fuer summe in alpha omega u leq 1}), the same procedure applies and yields terms that fit into (\ref{eq: bessere Entwicklungvon gravephi for l=0}). Once again it remains to analyze \begin{equation*} -\int_u^\infty \frac{1}{\omega} \sin(\omega(u-v)) \sum_{p=3}^5 \sum_{q=0}^{p-3} c_{pq}\frac{\log^q(v)}{v^p} \gamma^{(3)} (v) \:dv \; , \end{equation*} and, as an example, \begin{equation*} -\int_u^\infty \frac{1}{\omega} \sin(\omega(u-v))\frac{c_{30}}{v^3} \gamma^{(3)} (v) \:dv \; . \end{equation*} Calculating this, one checks that the $2 i \omega \log(2 i \omega)$-term decays as $1/u^3$, the $\omega^2 \log^s(\pm 2i\omega)$-terms decay as $\log^t(u)/u^2$ and the $\omega^3 \log^m(\pm 2i \omega)$-terms decay as $\log^n(u)/u$. Applying this scheme twice more, all terms which are not $C^3$ with respect to $\omega$ decay at least as $\log^s(u)/u^3$. Subtracting these terms from the full term, we obtain a $C^3$-term which decays at least as $\log^s(u)/u^3$, according to estimates analogous to (\ref{eq: Abschaetzungen fuer beta(2) bereich omega u geq1}) and (\ref{eq: abschaetzungen fuer summe in alpha omega u leq 1}) and estimating $|\omega| $ by $1/u$ in the region $|\omega| u<1$. So Lemma \ref{lemma: Restiteration ist C3} applies for the full iteration with the potential $V_0$ and we get a $C^3$-term. Due to their decay, we are able to iterate the subtracted $\log \omega$-terms also with the full potential $V_0$ and get terms that fit into (\ref{eq: bessere Entwicklungvon gravephi for l=0}). Thus, the scheme can be stopped after finitely many calculations and the lemma is proven for $n=3$.
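The bookkeeping of this scheme may be summarized as follows (schematically; the exponents $s,t$ vary from line to line): after the successive iterations the irregular terms decay as \begin{eqnarray*} 2i\omega \log(2i\omega) & : & u^{-1}\,,\;\; u^{-2}\,,\;\; u^{-3}\,, \;\ldots \\ (2i\omega)^2 \log^s(\pm 2i\omega) & : & \log^t(u)\, u^{-1}\,,\;\; \log^t(u)\, u^{-2}\,, \;\ldots \\ (2i\omega)^3 \log^s(\pm 2i\omega) & : & \log^t(u)\, u^{-1}\,, \;\ldots \end{eqnarray*} i.e. each further iteration improves the decay in $u$ of every irregular term by one power, while new irregularities only enter with an additional $\omega$-power. This is the reason why, after finitely many steps, all terms that are not $C^3$ with respect to $\omega$ decay at least as $\log^s(u)/u^3$.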
For $n \geq 4$ we split the potential as $$ V_0(v) = \sum_{p=3}^{n+2} \sum_{q=0}^{p-3} c_{pq}\frac{\log^q(v)}{v^p} + r_{n+3}(v) \; , $$ and proceed with the same calculations. In (\ref{eq: part int bei alpha 1}) we have to integrate by parts up to the $n$-th order, in order to obtain, as the analogue of estimate (\ref{eq: Abschaetzung fuer alpha bereich u omega geq1}), $$ \big|\partial_\omega^m \alpha^{(1)} (u) \big| \leq \frac{c}{|\omega|} u^{m-2} \; , \quad m \leq n \; .$$ The next difference appears in the estimates (\ref{eq: Abschaetzungen fuer beta(2) bereich omega u geq1}). For $n\geq 4$ these can no longer be obtained by simple integral estimates, due to convergence problems. Thus, we have to subtract from the result of the calculation for $\alpha^{(1)}(u)$ analogous to (\ref{eq: part int bei alpha 1}) the first $n-3$ exact terms of the form $$ \frac{c}{\omega u^2}e^{-i \omega u} + ... +\frac{c}{\omega^{n-3} u^{n-2}} e^{-i \omega u} =: \rho^{(1)}(u) \; ,$$ and get for $m \leq n$ \begin{eqnarray*} \big| \partial_\omega^m \beta^{(2)}(u) \big|& \leq & \hspace*{2.7mm}\Big| \partial_\omega^m \int_u^\infty \frac{1}{\omega} \sin(\omega(u-v)) \frac{c_{30}}{v^3} \left( \alpha^{(1)}(v) - \rho^{(1)} (v) \right) \: dv \Big| \\ & & + \Big| \partial_\omega^m \int_u^\infty \frac{1}{\omega} \sin(\omega(u-v)) \frac{c_{30}}{v^3} \rho^{(1)} (v) \: dv \Big| \\ & \leq & \frac{c}{|\omega|} u^{m-4} \; , \end{eqnarray*} where for the first integral this can be done by elementary integral estimates, and for the second integral we have to integrate the subtracted terms by parts, as we did to obtain the estimates for $\alpha^{(1)}(u)$. Keeping these differences in mind, we can conclude exactly in the same way as for $n=3$, which yields the claim for arbitrary $n$. \end{proof} We now state the missing lemma. \begin{lemma} \label{lemma: entwicklung des iterationsschemas der unendl summe} Let $u>0$ and $\omega \in \field{R} \setminus \{0\}$.
Iterating the infinite sums that appear in the integration in Lemma \ref{lemma: zur Reihenentwicklung} with an arbitrary part of the potential, $\log^q(v)/v^p$ [cf. Lemma \ref{lemma: asymptotische Entwicklung von V_l}], we obtain the identity \begin{eqnarray} - & \hspace*{-2mm}\displaystyle\int_u^\infty \hspace*{-2mm} & \frac{1}{\omega} \sin (\omega (u-v)) \frac{\log^q v}{v^p} \frac{\log^s v}{v^t} e^{\pm i \omega v} \sum_{k=0}^\infty d_k (\pm 2 i \omega v)^k \: dv \nonumber \\ & =& \frac{1}{u^{p+t-2}} e^{\mp i\omega u} \sum_{l=0}^{s+q} c\log^{l} (u) \sum_{k=0}^\infty d_{kl} (\mp 2 i \omega u)^k \nonumber \\ \label{eq: entwicklung des iterationsschemas der unendl summe}& & + (2i \omega)^{p+t-2} e^{\mp i \omega u} \sum_{m=0}^{s+q} \sum_{r=1}^{m+1} c \log^r (\mp 2 i \omega u) \log^{s+q-m} (u) \; , \end{eqnarray} for suitable constants $d_{kl},c$. \end{lemma} \begin{proof} Let us denote $m=q+s \geq 0$ and $n=p+t \geq 4$. In order to compute the integral on the left-hand side of the lemma, we insert a convergence-generating factor \begin{eqnarray} - & \hspace*{-2mm}\displaystyle\int_u^\infty \hspace*{-2mm} & \frac{1}{\omega} \sin (\omega (u-v)) \frac{\log^m v}{v^n} e^{\pm i \omega v} \sum_{k=0}^\infty d_k (\pm 2 i \omega v)^k \: dv \nonumber \\ &=& \lim_{\varepsilon \searrow 0} \int_u^\infty e^{- \varepsilon v} \: \frac{1}{\omega} \sin (\omega (v-u)) \frac{\log^m v}{v^n} e^{\pm i \omega v} \sum_{k=0}^\infty d_k (\pm 2 i \omega v)^k \: dv . \hspace*{10mm} \label{eq: mit konvergenzerzeugendem Faktor} \end{eqnarray} In the next step we interchange the integral and the infinite sum. This can be done for any $\varepsilon > 2|\omega|$ by a dominated convergence argument, if one estimates the modulus of the sum very roughly by $\exp(2|\omega| v)$.
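This rough bound can be made explicit (a sketch): by the form of the coefficients of the exponential integral, $|d_k| \leq c/k!$ with a constant $c$ independent of $k$, so that \begin{eqnarray*} \Big| \sum_{k=0}^\infty d_k (\pm 2 i \omega v)^k \Big| \; \leq \; c \sum_{k=0}^\infty \frac{(2 |\omega| v)^k}{k!} \; = \; c \: e^{2 |\omega| v} \; , \end{eqnarray*} and for $\varepsilon > 2|\omega|$ the additional factor $e^{-\varepsilon v}$ therefore yields an integrable dominating function, uniformly for all partial sums.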
Thus, the two expressions \begin{eqnarray*} \int_u^\infty \frac{1}{\omega} \sin (\omega (v-u)) e^{- \varepsilon v} \frac{\log^m v}{v^n} e^{\pm i \omega v} \sum_{k=0}^\infty d_k (\pm 2 i \omega v)^k \: dv \\ \textrm{and} \hspace*{108mm} \\ \sum_{k=0}^\infty d_k (\pm 2 i \omega)^k \int_u^\infty \frac{1}{\omega} \sin (\omega (v-u)) e^{- \varepsilon v} \frac{\log^m v}{v^n} e^{\pm i \omega v} v^k \: dv \end{eqnarray*} coincide for any $\varepsilon > 2 |\omega|$. Moreover, both expressions are analytic in $\varepsilon$ for $ \Re \varepsilon >0$. So by the identity theorem for analytic functions both expressions coincide for any $\varepsilon >0$. Hence (\ref{eq: mit konvergenzerzeugendem Faktor}) is equal to \begin{eqnarray*} \lim_{\varepsilon \searrow 0} \sum_{k=0}^\infty d_k (\pm 2 i \omega)^k \int_u^\infty \frac{1}{\omega} \sin (\omega (v-u)) e^{- \varepsilon v} \frac{\log^m v}{v^{n-k}} e^{\pm i \omega v} \: dv \; . \end{eqnarray*} Once again we rewrite $\sin(\omega(v-u))$ with Euler's formula and integrate by parts [note that one has to be careful with the $\varepsilon$-terms that are generated by this integration by parts, but in the limit $\varepsilon \searrow 0$ they vanish] to obtain \begin{eqnarray*} = \lim_{\varepsilon \searrow 0} \sum_{k=0}^\infty d_k (\pm 2 i \omega)^k e^{\mp i \omega u} \int_u^\infty e^{\pm 2 i \omega v - \varepsilon v} \sum_{l=0}^m c \frac{\log^l v}{v^{n-k-1}} \: dv \; , \end{eqnarray*} with suitable constants $c$ arising from the integral function of $\log^mv/v^{n-k}$.
Now we apply Lemma \ref{lemma: zur Reihenentwicklung}, Lemma \ref{lemma: zur Reihenentwicklung mit p<0}, take the limit $\varepsilon \searrow 0$ and get \begin{eqnarray*} = \sum_{l=0}^m \sum_{k=0}^\infty d_k (\pm 2 i \omega)^k e^{\mp i \omega u} \sum_{i=0}^l c\begin{pmatrix} l\\ i \end{pmatrix} \log^{l-i} (u) \times \hspace*{35mm}\\ \bigg\{ (\mp 2 i \omega)^{n-k-2} \sum_{j=0}^{i+1} c \log^j[\mp 2i \omega u] - \frac{1}{u^{n-k-2}} \sum_{r=0,r\neq n-k-2}^\infty d_r (\mp2 i \omega u)^r\bigg\}. \end{eqnarray*} We reorder the two infinite sums into one infinite sum and obtain the expression (\ref{eq: entwicklung des iterationsschemas der unendl summe}); this is possible because the structure of the coefficients $d_k,d_r$ of the exponential integral lets us compare the new coefficients to those of the exponential series. \end{proof} Next, we need a similar expansion for the derivative $\grave{\phi}'_\omega (u)$. \begin{lemma} \label{lemma: Entwicklung von phi'} For $l=0$, $\omega \in \field{R} \setminus \{0\}$, $n \geq 3$ and fixed $u>0$, the first $u$-derivative of $\grave{\phi}_\omega (u)$ satisfies the expansion \begin{equation} \label{eq: Entwicklung von phi'} \grave{\phi}'_\omega (u) = -i \omega e^{-i \omega u} + h_0(\omega,u) + \sum_{i\geq j+k=1}^n (2i\omega)^i \log^j(2 i \omega) \log^k(-2i \omega) h_{ijk}(\omega,u) \; , \end{equation} where the functions $h_0,h_{ijk}$ are $C^n(\field{R})$ with respect to $\omega$. \end{lemma} \begin{proof} In order to prove this, we use the fact that $\grave{\phi}'_\omega (u)$ satisfies for $u>0$ an integral equation analogous to (\ref{Jost equation bound cond -unendlich}) \begin{equation*} \grave{\phi}'_\omega (u) = -i \omega e^{- i \omega u} - \int_u^\infty \cos(\omega (u-v)) V_0(v) \grave{\phi}_\omega (v) \: dv \; .
\end{equation*} We estimate $\cos(\omega(u-v))$ and its $\omega-$derivatives for real $\omega$ and $v\geq u>0$ by \begin{equation} \label{eq: Abschaetzung fuer cos omega u-v} \big|\partial_\omega ^n \cos(\omega (u-v)) \big| \leq (v-u)^n \leq (2v)^n \; ,\quad n \in \field{N}_0 \; . \end{equation} Thus, using this estimate for $n=0$ together with the iteration scheme (\ref{eq: iterationsschema fuer grave phi}) for $\grave{\phi}_\omega (u)$, we obtain a well defined iteration scheme for the $u$-derivative: \begin{eqnarray} \grave{\phi}'_\omega (u) &=& \sum_{k=0}^\infty \psi_\omega^{(k)}(u) \; , \quad \textrm{where}\nonumber \\ \psi^{(0)}_\omega (u) &=& -i \omega e^{-i \omega u} = \left(\phi^{(0)}_\omega\right)' (u) \; , \label{eq: Iterationsschema fuer phi'} \\ \psi_\omega^{(k+1)}(u) &=& - \int_u^\infty \cos(\omega (u-v)) V_0(v) \phi_\omega^{(k)} (v) \: dv = \left(\phi^{(k+1)}_\omega\right)'(u)\; , \nonumber \end{eqnarray} with $k\geq0$. Due to this iteration scheme together with the estimates (\ref{eq: Abschaetzung fuer cos omega u-v}) that replace the bounds (\ref{eq: abschaetzung der omegaabl der greensfkt}) and the identity $\cos(\omega (u-v)) = 1/2 (e^{i\omega (u-v)} + e^{i \omega (v-u)})$, we can use the decompositions of the $\phi_\omega^{(k)}$, which we have made in the proof of Lemma \ref{lemma: bessere Entwicklung von gravephi for l=0}. In particular, we apply the procedure of this proof, in order to show the claim. \end{proof} Now, we use the expansions (\ref{eq: bessere Entwicklungvon gravephi for l=0}),(\ref{eq: Entwicklung von phi'}), in order to analyze the $\omega$-depen\-dence of the essential part of the integral kernel $$\mathrm{Im} \! \left(\frac{\acute{\phi}_{\omega}(u) \grave{\phi}_{\omega}(v)}{w(\acute{\phi}_{\omega },\grave{\phi}_{\omega })}\right)\; . $$ At this stage, it is enough to set $n=4$ in (\ref{eq: bessere Entwicklungvon gravephi for l=0}),(\ref{eq: Entwicklung von phi'}) for our purposes. 
Looking at the integral representation (\ref{eq: Darstellung der Loesung l=0}) of the solution, we see that $u \in \field{R}$ is fixed while $v \in \field{R}$ varies in a compact set, the support of our initial data $\Psi_0$. Due to the Picard--Lindel{\"o}f theorem and the analytic dependence of the solutions of the Schr{\"o}dinger equation on the coefficient $\omega$, the expansions (\ref{eq: bessere Entwicklungvon gravephi for l=0}),(\ref{eq: Entwicklung von phi'}) extend to any $u$ and $v$, respectively, on compact sets. Moreover, the following properties follow directly from the construction of the expansions. \begin{corollary} \label{corollary: Beziehung g_ijk g_0} For $4 \geq i=j+k \geq 1$ the functions $g_{ijk},h_{ijk}$ can be constructed such that they obey the equalities \begin{equation} \begin{array}{rclll} g_{ijk} (\omega, u) + o(\omega^\kappa)& = & c_{ijk} \left( e^{-i\omega u} +g_0(\omega,u) \right) \; & \mathrm{and} \\ h_{ijk}(\omega,u)+ o(\omega^\kappa) & = & c_{ijk} \, h_0(\omega,u) \; , & & \textrm{ for } i \; \mathrm{even} \; \\ g_{ijk} (\omega, u) + o(\omega^\kappa)& = &c_{ijk} ( e^{i\omega u} +\overline{g_0(\omega,u)} ) \;& \mathrm{and} \\ h_{ijk}(\omega,u) + o(\omega^\kappa) & =& c_{ijk} \, \overline{h_0(\omega,u)} \; ,& & \textrm{ for } i \; \mathrm{odd} . \end{array} \label{eq: Beziehung gijk g0 hijk h0} \end{equation} where $\kappa$ is an arbitrary integer and the $c_{ijk}$ are real constants, in particular not depending on $u$. \end{corollary} \begin{proof} We show this for the first terms $g_{110},h_{110}$ as an example. In this situation, (\ref{eq: Beziehung gijk g0 hijk h0}) holds because the first term in which $(2i\omega) \log(2i \omega)$ appears comes with the factor $c_{30}/2 \; e^{i \omega u}$, and there are no other terms with this $\omega$-dependence except those generated by it [cf. the calculations (\ref{eq: expansion of phihut3}),(\ref{eq: berechnung von beta (2)})].
Thus, $g_{110} (\omega,u)$ is generated by $e^{i \omega u}$, which is just the complex conjugate of $e^{-i\omega u}$, and this behavior is preserved by the iteration scheme. So any $C^4$-term that is generated is the complex conjugate of a corresponding term of $g_0$. This remains valid until one terminates the iteration scheme with the arguments at the end of the proof of Lemma \ref{lemma: bessere Entwicklung von gravephi for l=0}, which is how the $o(\omega^\kappa)$-term arises. Since one can carry out arbitrarily many iterations, and in each iteration at least a factor $\pm 2i \omega \log(\pm 2i \omega)$ is generated, $\kappa$ can be chosen arbitrarily large. Moreover, looking at the iteration scheme (\ref{eq: Iterationsschema fuer phi'}), the equalities for $h_{110}(\omega,u)$ are a consequence of the arguments for $g_{110}(\omega,u)$, since the calculations concerning this scheme generate no additional highest-order $\log$-terms, i.e. terms with $i=j+k$. \end{proof} In the following assume that $\kappa=5$. We expand the functions $g_{ijk}(\omega,u)$ and $h_{ijk}(\omega,u)$ in their Taylor polynomial with respect to $\omega$ at $\omega = 0$ up to the fourth order: \begin{eqnarray*} g_{ijk} (\omega, u)= \sum_{m=0}^4 \frac{1}{m!} \partial_\omega^m g_{ijk}\big|_{(0,u)} \omega^m + r_{ijk}(\omega,u) \; , \\ h_{ijk} (\omega, u)= \sum_{m=0}^4 \frac{1}{m!} \partial_\omega^m h_{ijk}\big|_{(0,u)} \omega^m + q_{ijk}(\omega,u) \; , \end{eqnarray*} where the remainder terms $r_{ijk}(\omega,u),q_{ijk}(\omega,u) \in C^4(\field{R})$ behave for small $\omega$ as $o(|\omega|^4)$. Note that, due to this fact, any logarithmic irregularity multiplied by $r_{ijk},q_{ijk}$ yields a $C^4$-term with respect to $\omega$.
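The last remark can be seen as follows (a sketch for the irregularity $2i\omega \log(2i\omega)$, for fixed $u$): since $r_{ijk}(\cdot,u) \in C^4(\field{R})$ with $r_{ijk}(\omega,u) = o(|\omega|^4)$, Taylor's theorem yields $\partial_\omega^j r_{ijk}(\omega,u) = o(|\omega|^{4-j})$ for $j \leq 4$. Hence, for $s(\omega) := 2 i \omega \, r_{ijk}(\omega,u)$ one has $\partial_\omega^j s(\omega) = o(|\omega|^{5-j})$, and by the Leibniz rule \begin{eqnarray*} \partial_\omega^4 \big[ s(\omega) \log(2i\omega) \big] = s^{(4)}(\omega) \log(2i\omega) + \sum_{j=0}^{3} \begin{pmatrix} 4 \\ j \end{pmatrix} s^{(j)}(\omega) \: \frac{c_j}{\omega^{4-j}} \; , \end{eqnarray*} with suitable constants $c_j$. Every summand tends to $0$ as $\omega \rightarrow 0$, so the fourth derivative extends continuously to $\omega = 0$; the lower derivatives and the other logarithmic irregularities are treated analogously.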
Moreover, we expand for fixed $u$ the fundamental solution $\acute{\phi}_\omega (u)$ and its $u$-derivative $\acute{\phi}'_\omega (u)$ \begin{eqnarray*} \acute{\phi}_\omega (u) = \sum_{k=0}^\infty c_k(u) \omega^k \; , \quad \acute{\phi}'_\omega (u) = \sum_{k=0}^\infty d_k(u) \omega^k \; , \end{eqnarray*} which exist because both functions are analytic in $\omega$ for fixed $u$. Since the fundamental solutions $\acute{\phi},\grave{\phi}$ are real for $\omega =0$, the coefficients $g_{0}(0,u),h_{0}(0,u),c_0(u)$ and $d_0(u)$ are real for all $u\in \field{R}$. Using all these properties, we expand \begin{equation} \label{eq: essentieller teil des integralkerns ohne im} \frac{\acute{\phi}_\omega(u) \grave{\phi}_\omega (v)}{w(\acute{\phi}, \grave{\phi})} \; , \end{equation} using the ansatz of a geometric series with respect to $\omega$. Note that, according to a result in \cite[Section 6]{Kronthaler}, the Wronskian does not vanish for $\omega = 0$. By a straightforward calculation it is shown that, essentially using (\ref{eq: Beziehung gijk g0 hijk h0}), the terms with the highest logarithmic order, i.e. $(2 i \omega)^i \log^j(2i\omega)\log^k(-2i\omega)$, $i=j+k$, vanish. Thus, we have to pick out the terms $(2i\omega)^2 \log^j(2i \omega) \log^k(-2 i \omega)$ with $j+k=1$, in order to get the lowest regularity. Looking at the calculations (\ref{eq: expansion of phihut3}) and (\ref{eq: berechnung von beta (2)}) [Note that, according to our construction, these are the only possible places where a term with this irregularity appears for the first time.
The others are merely consequences of these and hence contribute to the functions $g_{2jk}$], the desired terms first appear in $\grave{\phi}$ as \begin{equation} \label{eq: wichtigster Term der Entwicklung} e^{-i \omega u} \left((2i\omega)^2 \log(2i \omega) (c+c\log u) - c(2i\omega)^2 \log(-2i\omega) \right) \; , \end{equation} where a $(2i \omega)^2 \log(2i \omega) \log u$ shows up in the first line of (\ref{eq: berechnung von beta (2)}), if one separates $\log(-2i \omega u) = \log(-2i \omega) + \log u$. All other such terms appearing in the second line of (\ref{eq: berechnung von beta (2)}) as well as in the second line of (\ref{eq: expansion of phihut3}) vanish because of their coefficients. Applying the same arguments as before, it follows that $ g_{201}(\omega,u)+o (\omega^\kappa)= c(e^{-i\omega u} + g_0(\omega,u))$ and $ h_{201}(\omega,u) +o(\omega^\kappa)= c h_0(\omega,u)$. Hence, the terms with $(2i\omega)^2 \log(-2 i \omega)$ cancel in the $\omega$-expansion of (\ref{eq: essentieller teil des integralkerns ohne im}). Because of the additional $\log u$-term, we get $$ g_{210}(\omega,u)+o(\omega^\kappa) = c_1( e^{-i\omega u} + g_0(\omega,u)) + c_2 (e^{-i \omega u}\log u+ g(\omega,u) )\; ,$$ and $$ h_{210}(\omega,u) +o(\omega^\kappa) = c_1 h_0(\omega,u) + c_2 h(\omega,u) + \frac{1}{4} c_{30} e^{i \omega u} \; ,$$ with appropriate real constants $c_1,c_2$, where the last term appears by a direct calculation of $\psi^{(1)} (u)$ with the part $c_{30}/ v^3$ of the potential $V_0(v)$. Furthermore, $g(\omega,u),h(\omega,u)$ are $C^4$-functions with respect to $\omega$, where $g(\omega,u)$ is generated by the iteration of $e^{-i \omega u} \log u$ and $h(\omega,u)$ is the corresponding consequence in (\ref{eq: Iterationsschema fuer phi'}). One directly verifies that $g(0,u),h(0,u)$ are real and, in general, non-vanishing.
Putting all this information together, one sees that there appears a term with $(2i\omega)^2 \log(2 i \omega)$ in the $\omega$-expansion of (\ref{eq: essentieller teil des integralkerns ohne im}), which is generated on the one hand by the $g(0,u),h(0,u)$, and on the other hand by the $2i\omega \log(2i\omega)$-part multiplied by the first-order $\omega$-contribution of $\acute{\phi},\acute{\phi}'$. This represents the part with the highest irregularity with respect to $\omega$. Moreover, the related coefficients are purely \textit{real}, depending on $u,v$ and in general non-vanishing. Using the identity $$ \log(2i\omega) = i \frac{\pi}{2} \sign(\omega) + \log(2 |\omega|) \; ,$$ and taking the imaginary part of (\ref{eq: essentieller teil des integralkerns ohne im}), which is just the essential part of our integral kernel, we obtain as the least regular $\omega$-term in the expansion of (\ref{eq: essentieller part vom Integralkern}) at $\omega = 0$ \begin{equation} \label{eq: kleinste regularitaet vom integralkern} c_0(u)g_{20}(v) \: \omega^2 \sign(\omega) \; , \end{equation} where the function $g_{20}(v)$ arises from the foregoing calculation. The symmetry of (\ref{eq: essentieller part vom Integralkern}) with respect to $u,v$ immediately yields $g_{20}(v) = k c_0(v)$ with an appropriate constant $k \neq 0$. \bigskip In the next step we want to use (\ref{eq: kleinste regularitaet vom integralkern}), in order to derive the decay of the solution $\Psi (t,u)$ given by (\ref{eq: Darstellung der Loesung l=0}). To this end, first we have to analyze the behavior of the $\omega$-derivatives of the integrand up to the fourth order for large $|\omega|$.
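Before turning to the large-$|\omega|$ analysis, note that the term (\ref{eq: kleinste regularitaet vom integralkern}) already indicates, on a purely heuristic level, the decay one should expect: for a smooth cutoff $\eta$ with $\eta \equiv 1$ near $\omega=0$, repeated integration by parts away from $\omega = 0$ shows $$ \int_{\field{R}} e^{-i \omega t} \: \omega^2 \sign(\omega) \: \eta(\omega) \: d\omega \; = \; \mathcal{O}(t^{-3}) \; , $$ since $\partial_\omega^3 \big( \omega^2 \sign(\omega) \big) = 4 \delta(\omega)$ in the distributional sense. Thus a non-smooth kernel term of the form $\omega^2 \sign(\omega)$ is compatible with a $t^{-3}$ decay of the corresponding contribution to the Fourier integral (\ref{eq: Darstellung der Loesung l=0}).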
\begin{lemma} \label{lemma: abfall der ome abln des integranden} For $u \in \field{R}$ and compactly supported smooth initial data $\Psi_0 \in C^\infty_0(\field{R})^2$ of the Cauchy problem, the $\omega$-derivatives of the integrand in the integral representation (\ref{eq: Darstellung der Loesung l=0}) \begin{equation} \label{eq: integrand in der integraldarstellung} \partial_\omega^m\left( \int_{\mathrm{supp}\: \Psi_0} \mathrm{Im} \! \left( \frac{\acute{\phi}_{\omega}(u) \grave{\phi}_{\omega}(v)}{w(\acute{\phi}_{\omega },\grave{\phi}_{\omega })}\right) \left(% \begin{array}{lc} \omega & 1 \\ \omega^2 & \omega \\ \end{array}% \right) \Psi_0(v) dv \right) \; , \quad m\in \{0,...,4\} \; , \end{equation} have arbitrary polynomial decay in $\omega$ for $|\omega| \rightarrow \infty$. \end{lemma} \begin{proof} We proceed essentially as in the proof of \cite[Theorem 6.5]{Kronthaler}. To this end, we have to investigate the behavior of $\acute{\phi}_\omega (u),\grave{\phi}_\omega (v)$ in $\omega$ for $u \in \field{R}$ fixed and $v$ in the compact set $\mathrm{supp} \Psi_0$. We start with $\grave{\phi}_\omega$. We assume that $|\omega| \geq 1$ and $u_0 \in \field{R}$ is arbitrary. Obviously, we find for any $v \geq u \geq u_0$ and $m \in \{0,...,4\}$ a constant $C_1(u_0)$ such that \begin{equation} \label{eq: elementare abschaetzung der omega-ableitungen der greensfktn} \Big| \partial_\omega^m \left[ \frac{1}{\omega} \sin(\omega(u-v)) \right] \Big| \leq \frac{1}{|\omega|} C_1(u_0) (1 +|v|)^m \; . 
\end{equation} Furthermore, splitting the potential as $$V_0(v) = \sum_{p=3}^5 \sum_{q=0}^{p-3} c_{pq}\frac{\log^q(v)}{v^p} + r_6(v)$$ and following a calculation analogous to (\ref{eq: part int bei alpha 1}), we obtain for the $\omega$-deriva\-tives of the first iteration $\phi^{(1)}_\omega (u)$ for all $u \geq 1$ and $m \in\{0,...,4\}$ the estimate \begin{equation} \label{eq: absch omega abl 1.Iteration} \Big| \partial_\omega^m \phi_\omega^{(1)} (u) \Big| \leq \frac{1}{|\omega|} C_2 u^{m-2} \; , \end{equation} with an appropriate constant $C_2$. [Note that this is just the analogue of the estimate (\ref{eq: Abschaetzung fuer alpha bereich u omega geq1}).] For all $u<1$ and $m \in\{0,...,4\}$ we get \begin{eqnarray*} \Big| \partial_\omega^m \phi_\omega^{(1)} (u) \Big| & \leq & \Big| \partial_\omega^m \int_u^1 \frac{1}{\omega} \sin(\omega(u-v)) V_0(v) e^{-i \omega v} \: dv \Big| \\ & & + \Big| \partial_\omega^m \int_1^\infty \frac{1}{\omega} \sin(\omega(u-v)) V_0(v) e^{-i \omega v} \: dv\Big| \\ & \leq & \frac{1}{|\omega|} f(m,u) + C_3 \frac{1}{|\omega|} \sum_{k=0}^m |u|^k \; , \end{eqnarray*} where $f$ is a continuous function with respect to $u$ and the second term arises by the same method as we used for the estimate (\ref{eq: absch omega abl 1.Iteration}). Defining $C_4$ by $$ C_4 := \max_{m \in \{0,...,4\}} \max_{u \in [u_0,1]} \left\{ \left(f(m, u) + C_3 \frac{1}{|\omega|} \sum_{k=0}^m |u|^k\right) (1+|u|)^{2-m} \right\} \; ,$$ and $C_5 :=\max(C_2,C_4)$, we obtain for all $u \geq u_0$ and $m\in \{0,...,4\}$ the bound \begin{equation} \label{eq: Schranke omega abl 1.Iteration alle u} \Big| \partial_\omega^m \phi_\omega^{(1)} (u) \Big| \leq \frac{1}{|\omega|} C_5 (1+|u|)^{m-2} \; .
\end{equation} In order to estimate the derivatives of the second iteration $\phi_\omega^{(2)} (u)$ up to the fourth order, we subtract the first exact term from the integration by parts in (\ref{eq: absch omega abl 1.Iteration}), $\displaystyle \frac{c_{30}}{4 i \omega u^2} e^{-i \omega u}$, from the first iteration $\phi_\omega ^{(1)} (u)$ and obtain for $u \geq1$ and $m\leq 4$ the bounds \begin{equation} \label{eq: Fall l=4 absch iteration - exakter term} \Big| \partial_\omega^m \left( \phi^{(1)}_\omega (u) - \frac{c_{30}}{4 i \omega u^2} e^{-i \omega u}\right) \Big| \leq \frac{1}{|\omega|} C u^{m-3}. \end{equation} Thus, in order to estimate the $\omega$-derivatives of the second iteration, we split \begin{eqnarray*} \nonumber \Big| \partial_\omega^m \phi^{(2)}_\omega (u) \Big| & \hspace*{-2mm} \leq \hspace*{-2mm} & \Big| \partial_\omega^m \int_u^\infty \frac{1}{\omega} \sin(\omega(u-v)) V_0(v) \left( \phi^{(1)}_\omega (v) - \frac{c_{30}}{4 i \omega v^2} e^{-i \omega v}\right) \: dv \Big| \\ & & + \Big| \partial_\omega^m \int_u^\infty \frac{1}{\omega} \sin(\omega(u-v)) V_0(v) \frac{c_{30}}{4 i \omega v^2} e^{-i \omega v} \:dv\Big| \; . \end{eqnarray*} Using the estimates (\ref{eq: Fall l=4 absch iteration - exakter term}), (\ref{eq: elementare abschaetzung der omega-ableitungen der greensfktn}), and once again the method of splitting up the potential and integrating by parts for the second integral, we get for $u\geq1$ and $m\leq4$ the bounds \begin{equation} \nonumber \Big| \partial_\omega^m \phi^{(2)}_\omega (u) \Big| \leq \frac{1}{|\omega|} C u^{m-4}, \end{equation} and thus, following the foregoing arguments, for all $u \geq u_0$ (after possibly enlarging $C_5$) the estimates \begin{equation} \label{eq: Schranke omega abl 2.Iteration alle u} \Big| \partial_\omega^m \phi^{(2)}_\omega (u) \Big| \leq \frac{1}{|\omega|} C_5 (1 + |u|)^{m-4} \; . 
\end{equation} Using (\ref{eq: elementare abschaetzung der omega-ableitungen der greensfktn}) and (\ref{eq: Schranke omega abl 2.Iteration alle u}), we obtain for the $\omega$-derivatives of the third iteration for all $u\geq u_0$ \begin{eqnarray} \nonumber \Big| \partial_\omega^m \phi_\omega^{(3)} (u) \Big|&\hspace*{-2mm} \leq \hspace*{-2mm} & \Big| \sum_{k=0}^m \begin{pmatrix} m \\ k \end{pmatrix} \int_u^\infty \partial_\omega^{m-k} \left( \frac{1}{\omega} \sin(\omega(v-u))\right)V_0(v) \partial_\omega^k \phi_\omega^{(2)} (v) \: dv \Big| \\ & \hspace*{-2mm} \leq \hspace*{-2mm} & 16 C_1(u_0) C_5 \frac{1}{|\omega|} \int_u^\infty (1+|v|)^{m-4} \frac{1}{|\omega|}V_0(v) \:dv \label{eq: 2.Iteration Induktionsanfang} \; . \end{eqnarray} Note that interchanging the integral and the $\omega$-derivatives is permitted, because the $\omega$-derivatives of the integrand are integrable due to the estimates (\ref{eq: elementare abschaetzung der omega-ableitungen der greensfktn}), (\ref{eq: Schranke omega abl 2.Iteration alle u}) and the $1/v^3$-decay of the potential $V_0(v)$. We show by induction on $n$ that for all $u\geq u_0$ the inequality \begin{equation} \label{eq: Induktion der absch der ome abln} \Big| \partial_\omega^m \phi_\omega^{(n)} (u) \Big| \leq 16 C_1(u_0) C_5 \frac{1}{|\omega|} Q_\omega(m,u) \frac{1}{(n-3)!}P_\omega(u)^{n-3} ,\quad \forall n \geq 3 \; , \end{equation} holds, where the functions $Q_\omega(m,u)$ and $P_\omega (u)$ are given by the integrals \begin{eqnarray*} Q_\omega(m,u) & := & \int_u^\infty (1 +|v|)^{m-4} \frac{1}{|\omega|} V_0(v) \:dv \\ P_\omega (u) & := & 16 C_1(u_0) C_6 \int_u^\infty \frac{1}{|\omega|} V_0(v) \:dv \; , \end{eqnarray*} where $C_6$ is a constant chosen such that for all $x \geq v \geq u_0$ $$ (1+|x|)^{k-m} \leq C_6 (1+|v|)^{k-m} \; , \quad 0\leq k\leq m \leq 4 \; .$$ The initial step is now given by (\ref{eq: 2.Iteration Induktionsanfang}). So assume that (\ref{eq: Induktion der absch der ome abln}) holds for $n$. 
Then, according to the iteration scheme, \begin{eqnarray*} \Big| \partial_\omega^m \phi_\omega^{(n+1)} (u) \Big| & \leq & \Big| \sum_{k=0}^m \begin{pmatrix} m \\ k \end{pmatrix} \int_u^\infty C_1(u_0) (1+|v|)^{m-k} \frac{1}{|\omega|} V_0(v) \\ & & \times 16 C_1(u_0) C_5 \frac{1}{|\omega|} Q_\omega(k,v) \frac{1}{(n-3)!}P_\omega(v)^{n-3} \:dv \Big| \; . \end{eqnarray*} Using the inequality $$ Q_\omega (k,v) \leq C_6 (1+|v|)^{k-m} Q_\omega(m,v) $$ and the monotonicity of $ Q_\omega $, we obtain \begin{eqnarray*} \Big| \partial_\omega^m \phi_\omega^{(n+1)} (u) \Big| & \hspace*{-2mm}\leq & \hspace*{-2mm} 16 C_1(u_0) C_5 \frac{1}{|\omega|} \int_u^\infty 16 C_1(u_0) (1+|v|)^{m-k}\frac{1}{|\omega|} V_0(v) \\ & & \hspace*{1.5cm} \times C_6 (1+|v|)^{k-m} Q_\omega(m,v) \frac{1}{(n-3)!}P_\omega(v)^{n-3} \:dv \\ &\hspace*{-2mm} \leq & \hspace*{-2mm} 16 C_1(u_0) C_5 \frac{1}{|\omega|} Q_\omega(m,u) \int_u^\infty \frac{dP_\omega}{dv} (v) \frac{1}{(n-3)!}P_\omega(v)^{n-3} \:dv \\ &\hspace*{-2mm} = & \hspace*{-2mm}16 C_1(u_0) C_5 \frac{1}{|\omega|} Q_\omega(m,u) \frac{1}{(n-2)!}P_\omega(u)^{n-2} \; , \end{eqnarray*} and (\ref{eq: Induktion der absch der ome abln}) follows. In particular, we get for all $u\geq u_0$ and $m \leq 4$ the estimate \begin{eqnarray} \Big| \partial_\omega^m \grave{\phi}_\omega (u) - \partial_\omega^m e^{-i \omega u} \Big| & \leq & C_5 \frac{1}{|\omega|} (1+|u|)^{m-2} + \frac{1}{|\omega|} C_5 (1 + |u|)^{m-4} \nonumber\\ & & + 16 C_1(u_0) C_5 \frac{1}{|\omega|} Q_\omega(m,u) e^{P_\omega (u)} \; , \label{eq: absch der ome abln von grave phi} \end{eqnarray} and the right hand side obviously tends to zero as $|\omega| \rightarrow \infty$. 
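For the reader's convenience, we note that the equality in the last line is just an elementary integral identity: observing that $P_\omega(v) \rightarrow 0$ as $v \rightarrow \infty$ and that, up to the sign absorbed in the absolute values (tacitly replacing $V_0$ by $|V_0|$ in the definitions of $P_\omega$ and $Q_\omega$, as one may), $\frac{dP_\omega}{dv}(v) = -16\, C_1(u_0) C_6 \frac{1}{|\omega|} V_0(v)$, we have \begin{equation*} \int_u^\infty \Big| \frac{dP_\omega}{dv} (v) \Big| \frac{1}{(n-3)!} P_\omega(v)^{n-3} \: dv = - \int_u^\infty \frac{d}{dv} \left[ \frac{1}{(n-2)!} P_\omega(v)^{n-2} \right] dv = \frac{1}{(n-2)!} P_\omega(u)^{n-2} \; . \end{equation*} 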
In an analogous way, using the iteration scheme (\ref{Iterationsschema fuer phi 1}) for $\acute{\phi}$, one shows for all $ u\leq u_0$ and $m \in\{0,...,4\}$ \begin{equation} \label{eq: absch der ome abln von acute phi} \Big| \partial_\omega^m \acute{\phi}_\omega (u) - \partial_\omega^m e^{i\omega u } \Big| \leq \sum_{n=1}^\infty \frac{1}{n!} M_\omega(m,u)^n = e^{M_\omega(m,u)}-1 \; , \end{equation} where $M_\omega(m,u)$ is given by $$ M_\omega(m,u) := \frac{C_7}{|\omega|} \int_{- \infty}^u (1+|v|)^m V_0(v) \: dv \; ,$$ with a sufficiently large constant $C_7$. Note that this integral is well defined, and in particular the estimate is obtained more easily, due to the fact that $V_0(v)$ decays exponentially as $v \rightarrow - \infty$. Moreover, the right hand side in (\ref{eq: absch der ome abln von acute phi}) also goes to zero as $|\omega| \rightarrow \infty$. Thus, due to (\ref{eq: absch der ome abln von grave phi}) and (\ref{eq: absch der ome abln von acute phi}), the $\omega$-derivatives of the fundamental solutions up to the fourth order, $\partial_\omega^m \acute{\phi}_\omega(u)$, $\partial_\omega^m\grave{\phi}_\omega(v)$, are controlled for large $|\omega|$ by constants, which depend on $u$ and the support of the initial data $\Psi_0$. Using these results and applying the same arguments to $\acute{\phi}_\omega ',\grave{\phi}_\omega'$, one also shows that the Wronskian $w(\acute{\phi},\grave{\phi})$ behaves as $\mathcal{O}(|\omega|)$ and that the derivatives $\partial_\omega^m w(\acute{\phi},\grave{\phi})$, $m \leq 4$, are bounded by constants as $|\omega| \rightarrow \infty$. 
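Let us briefly make the behavior of the Wronskian plausible. For large $|\omega|$ the fundamental solutions behave, by the above estimates, like plane waves, and a formal computation (with one convention for the Wronskian) gives \begin{equation*} w(\acute{\phi}_\omega, \grave{\phi}_\omega) = \acute{\phi}_\omega \grave{\phi}_\omega' - \acute{\phi}_\omega' \grave{\phi}_\omega \approx e^{i \omega u} \left( -i \omega \right) e^{-i \omega u} - \left( i \omega \right) e^{i \omega u} e^{-i \omega u} = -2 i \omega \; , \end{equation*} in accordance with $w(\acute{\phi},\grave{\phi}) = \mathcal{O}(|\omega|)$. 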
Hence, interchanging in the representation (\ref{eq: integrand in der integraldarstellung}) the differentiation with respect to $\omega$ and the integral, which is permissible because of the compact support of $\Psi_0$, making the substitutions \begin{eqnarray*} \grave{\phi}_\omega (v)\hspace*{-2mm}& =& \hspace*{-2mm}\frac{1}{\omega^2} \left( - \grave{\phi}''_\omega(v) + V_0(v) \grave{\phi}_\omega(v) \right) \; ,\\ \partial_\omega \grave{\phi}_\omega (v)\hspace*{-2mm}& =& \hspace*{-2mm} \frac{-2}{\omega^3} \left( - \grave{\phi}''_\omega(v) + V_0(v) \grave{\phi}_\omega(v) \right) + \frac{1}{\omega^2} \left( - \partial_\omega\grave{\phi}''_\omega(v) + V_0(v) \partial_\omega \grave{\phi}_\omega(v) \right) \end{eqnarray*} as well as the analogous substitutions for the second, third and fourth $\omega$-derivative [note that in the region $|\omega|\geq 1$, $\grave{\phi}_\omega(v)$ is $C^4$ with respect to $\omega$, cf. Lemma \ref{lemma: bessere Entwicklung von gravephi for l=0}, $n=4$], and integrating by parts with respect to $v$, one immediately obtains decay of at least $1/\omega^2$. Thus iterating this procedure, which can be done because $V_0$ and $\Psi_0$ are smooth, yields arbitrary decay in $\omega$, and the lemma is proven. \end{proof} \begin{remark} Since the method of the proof does not depend on the highest order $\omega$-derivative, the statement of Lemma \ref{lemma: abfall der ome abln des integranden} can be extended to arbitrary $m$. The only point where one has to be careful is the derivation of (\ref{eq: Schranke omega abl 2.Iteration alle u}), since for $\omega$-derivatives of higher order one has to calculate and subtract more exact terms than in (\ref{eq: Fall l=4 absch iteration - exakter term}), due to convergence problems. If (\ref{eq: Schranke omega abl 2.Iteration alle u}) is not sufficient to start the induction, one has to iterate this procedure appropriately many times. 
\end{remark} We are now ready to state and prove our main theorem: \begin{theorem} \label{theorem: Haupttheorem abfall l=0} Consider the Cauchy problem for the scalar wave equation in the Schwarzschild geometry $$ \square \phi = 0\; , \quad (\phi_0, i \partial_t \phi_0)(0,r,x) = \Phi_0(r,x)$$ for smooth spherically symmetric initial data $\Phi_0 \in C^{\infty}_0 ( (2M,\infty) \times S^2)^2 $ which is compactly supported outside the event horizon. Let $\Phi (t) = (\phi (t), i \partial_t \phi (t)) \in C^\infty(\field{R} \times (2M, \infty) \times S^2)^2$ be the unique global solution which is compactly supported for all times $t$. Then for fixed $r$ there is a constant $c= c(r,\Phi_0)$ such that for large $t$ \begin{equation} \label{eq: decay normal} |\phi(t)| \leq \frac{c}{t^3} \; . \end{equation} Moreover, for momentarily static initial data, i.e. $\partial_t \phi_0 \equiv 0$, the solution $\phi(t)$ satisfies \begin{equation} \label{eq: decay initially momentarily static} |\phi(t)| \leq \frac{c}{t^4} \; . \end{equation} \end{theorem} \begin{proof} First, we decompose our initial data $\Phi_0$ into spherical harmonics. Due to the spherical symmetry we obtain $\Phi_0(r,\vartheta, \varphi) = \tilde{\Phi}_0(r) Y_{00}(\vartheta,\varphi)$, where $\tilde{\Phi}_0(r) \in C^\infty_0((2M,\infty))^2$. Introducing the Regge-Wheeler coordinate $u(r)$ and making the substitution $\Psi(t,u) = r(u) \tilde{\Phi} (t,r(u))$, our solution has the representation \begin{equation} \nonumber \Phi(t,r,\vartheta,\varphi) = \frac{1}{r} \Psi(t,u(r)) Y_{00}(\vartheta,\varphi) \; , \end{equation} where $\Psi(t,u)$ satisfies \begin{eqnarray} \nonumber \Psi(t,u) = \hspace*{95mm}\\ -\frac{1}{\pi}\int_{\mathbb{R}} e^{-i \omega t} \left( \int_{\mathrm{supp}\: \Psi_0} \mathrm{Im} \! 
\left( \frac{\acute{\phi}_{\omega}(u) \grave{\phi}_{\omega}(v)}{w(\acute{\phi}_{\omega },\grave{\phi}_{\omega })}\right) \left(% \begin{array}{lc} \omega & 1 \\ \omega^2 & \omega \\ \end{array}% \right) \Psi_0(v) dv \right) d\omega, \label{eq: lsg fall sphae symm} \end{eqnarray} with initial data $\Psi_0(u) := r(u) \tilde{\Phi}_0(u)$ and the Jost solutions $\acute{\phi},\grave{\phi}$ in the case $l=0$. According to the detailed analysis of (\ref{eq: essentieller teil des integralkerns ohne im}) with respect to $\omega$, the term \begin{eqnarray*} \mathrm{Im} \! \left( \frac{\acute{\phi}_{\omega}(u) \grave{\phi}_{\omega}(v)}{w(\acute{\phi}_{\omega },\grave{\phi}_{\omega })}\right) - c_0(u) g_{20}(v) \omega^2 \sign(\omega) - c_{32}(u) g_{32}(v) \omega^3 \log^2|\omega| \\ -c_{31}(u) g_{31}(v) \omega^3 \log|\omega| -c_{30}(u) g_{30}(v) \omega^3 \sign(\omega) \end{eqnarray*} is $C^3(\field{R})$ with respect to $\omega$ for fixed $u\in \field{R}$, $v \in \mathrm{supp}\: \Psi_0$, where the $c_{ij} (u)$, $g_{ij}(v)$ denote the appropriate coefficient functions. [Note that these are linearly dependent due to the symmetry of (\ref{eq: essentieller part vom Integralkern}) with respect to $u,v$.] Thus, defining $$ f(\omega,u) := \left(\int_{\mathrm{supp} \: \Psi_0} \mathrm{Im} \! \left( \frac{\acute{\phi}_{\omega}(u) \grave{\phi}_{\omega}(v)}{w(\acute{\phi}_{\omega },\grave{\phi}_{\omega })}\right) \left(% \begin{array}{lc} \omega & 1 \\ \omega^2 & \omega \\ \end{array}% \right) \Psi_0(v) dv \right)_1 \; ,$$ where the subscript denotes the first vector component, the term \begin{eqnarray*} \tilde{f}(\omega,u) := f(\omega,u) - \Big( c_0(u) d_{20}( \psi_0^2)\: \omega^2 \sign(\omega) + c_{32}(u) d_{32}(\psi_0^2) \: \omega^3 \log^2|\omega| \\ +c_{31}(u) d_{31}(\psi_0^2) \: \omega^3 \log|\omega| +c_{30}(u) d_{30}(\psi_0^2) \: \omega^3 \sign(\omega) \Big)\eta (\omega) \\ =: f(\omega,u) -r(\omega,u) \; , \end{eqnarray*} is also $C^3(\field{R})$ with respect to $\omega$. 
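To see that the four subtracted terms are precisely the obstructions to $C^3$-regularity at $\omega = 0$, one may differentiate them directly (an elementary check, which we record for convenience): for $\omega \neq 0$, \begin{equation*} \partial_\omega^2 \big[ \omega^2 \sign(\omega) \big] = 2 \sign(\omega) \; , \qquad \partial_\omega^3 \big[ \omega^3 \sign(\omega) \big] = 6 \sign(\omega) \; , \end{equation*} \begin{equation*} \partial_\omega^3 \big[ \omega^3 \log|\omega| \big] = 6 \log|\omega| + 11 \; , \qquad \partial_\omega^3 \big[ \omega^3 \log^2|\omega| \big] = 6 \log^2|\omega| + 22 \log|\omega| + 12 \; ; \end{equation*} the first two expressions are discontinuous at $\omega = 0$ and the last two diverge logarithmically, whereas all lower-order $\omega$-derivatives of these terms extend continuously to $\omega = 0$. 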
Here, $\psi_0^2$ denotes the second component of the initial data $\Psi_0$, $$d_{ij}(\psi_0^2):= \int_{\mathrm{supp} \: \Psi_0} g_{ij}(v) \psi_0^2(v) \: dv \; , $$ and $\eta (\omega) \in C^\infty_0(\field{R})$ is a smooth cutoff function which is identically equal to $1$ on a neighborhood of $\omega=0$ and vanishes outside a compact set. Moreover, because of Lemma \ref{lemma: abfall der ome abln des integranden} the $\partial_\omega^m \tilde{f}(\omega,u), m\in \{0,1,2,3\}$ have rapid decay for large $|\omega|$ and are in particular $L^1(\field{R})$ with respect to $\omega$. Thus, due to (\ref{eq: lsg fall sphae symm}), the first component of $\Psi$ satisfies \begin{eqnarray*} \psi^1(t,u) & = & -\frac{1}{\pi} \int_\field{R} e^{-i\omega t}\tilde{f}(\omega,u)\:d\omega -\frac{1}{\pi} \int_\field{R} e^{-i\omega t} r(\omega,u) \: d\omega \\ & = & -\frac{1}{(it)^3\pi} \left( \int_\field{R} \tilde{f}(\omega,u) \partial_\omega^3 e^{-i\omega t}\:d\omega + \int_\field{R} r(\omega,u) \partial_\omega^3 e^{-i\omega t} \: d\omega \right) \; . \end{eqnarray*} We write the second integral as $\int_{- \infty}^0 + \int_0^\infty$, integrate each integral by parts three times and obtain \begin{eqnarray*} \psi^1(t,u) &=& \frac{1}{ (it)^3 \pi}\left(4c_0(u) d_{20}(\psi_0^2) + \int_\field{R} e^{-i \omega t} \partial_\omega^3 \tilde{f}(\omega,u)\:d\omega \right. \\ & & \left. +\int_{-\infty}^0 e^{-i \omega t}\partial_\omega^3 r(\omega,u) \: d\omega + \int_0^\infty e^{-i \omega t}\partial_\omega^3 r(\omega,u) \: d\omega \right) \; . \end{eqnarray*} Note that the other boundary terms vanish, because the $\partial_\omega^m \tilde{f}(\omega),m\leq3$ have rapid decay and $ \eta (\omega)\equiv 0$ outside of a compact set. Obviously, all integrals are well defined, and the Riemann-Lebesgue lemma shows the claim in the first case. 
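Let us indicate more explicitly where the constant $4 c_0(u) d_{20}(\psi_0^2)$ comes from. Near $\omega = 0$ we have $r(\omega,u) = c_0(u) d_{20}(\psi_0^2)\, \omega^2 \sign(\omega) + \mathcal{O}(\omega^3 \log^2|\omega|)$, and hence \begin{equation*} \partial_\omega^2 r(0^\pm,u) = \pm 2\, c_0(u)\, d_{20}(\psi_0^2) \; , \end{equation*} while the second $\omega$-derivatives of the $\omega^3$-terms vanish at $\omega = 0$. Integrating by parts three times on each half line, the only non-vanishing boundary contribution is the jump \begin{equation*} \partial_\omega^2 r(0^-,u) - \partial_\omega^2 r(0^+,u) = - 4\, c_0(u)\, d_{20}(\psi_0^2) \; , \end{equation*} which together with the prefactor $-\frac{1}{(it)^3 \pi}$ produces the constant term in the last formula. 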
If the initial data is momentarily static, all the $d_{ij} (\psi_0^2)$ vanish and the entries in the matrix in (\ref{eq: lsg fall sphae symm}) yield an additional $\omega$. Hence, the highest irregular term is $c_0(u) d_{20}(\psi_0^1) \omega^3 \sign(\omega)$, and the same arguments as before conclude the proof. \end{proof} \begin{remark} The decay rates $1/t^3$ and $1/t^4$, respectively, are optimal in the sense that there exist initial data for which they cannot be improved. This is obvious due to the fact that $c_0(u)>0$. \end{remark} \section{Discussion on the case $l \neq0$} According to Price's Law \cite{Price}, the $lm$-component $\Phi^{lm}(t,u) = \frac{1}{r} \Psi^{lm}(t,u)$ of a solution of the Cauchy problem for the scalar wave equation in the Schwarzschild spacetime with compactly supported smooth initial data generally falls off at late times $t$ as $t^{-2l -3}$, and as $t^{-2l-4}$ for momentarily static initial data, respectively. This has been confirmed in the previous section for spherically symmetric initial data, i.e. in the case $l=0$ [cf. Theorem \ref{theorem: Haupttheorem abfall l=0}]. Moreover, there is numerical evidence suggesting that this is indeed correct for $l \neq 0$ \cite{Kar}. We briefly discuss whether the methods of the preceding section still apply when the angular mode $l$ is non-zero. To this end, let us reconsider the construction of the fundamental solutions $\grave{\phi}_{\omega l}$ of the Schr\"odinger equation (\ref{Schroedinger equation}). First, we make some remarks about the fundamental solutions $ \omega^l \grave{\phi}_{\omega l} (u)$ (see also \cite[Section 5]{Kronthaler}). 
The fundamental solutions were constructed as the series \begin{eqnarray} \label{eq: Reihenansatz grave phi l neq 0} \omega^l \grave{\phi}_\omega (u) = \sum_{m=0}^\infty \phi_\omega^{(m)} (u) \; , \end{eqnarray} where the $\phi^{(m)}$ are given by the iteration scheme \begin{eqnarray} \label{eq: Iterationsschema grave phi l neq 0} \phi_\omega^{(m+1)} (u) = - \int_u^\infty S_\omega(u,v) W_l(v) \phi_\omega^{(m)}(v) \:dv \; , \end{eqnarray} with potential, cf. also Lemma \ref{lemma: asymptotische Entwicklung von V_l}, \begin{eqnarray} \label{eq: restpotenzial l neq 0} W_l(u) = V_l(u) - \frac{l(l+1)}{u^2} & = & c_{31} \frac{\log u}{u^3} + \frac{c_{30}}{u^3} + h(u) \; , \\ \textrm{where \ } h(u) & = & \mathcal{O} \left( \frac{\log^2 u}{u^4} \right) \nonumber \end{eqnarray} for large $u$, and Green's function \begin{eqnarray} \label{eq: Greenskern l neq 0 teil 1} S_\omega(u,v) = \frac{(-1)^{l+1}}{\omega} \big(h_1(l,\omega v) h_2(l,\omega u) - h_1(l, \omega u) h_2(l, \omega v)\big) \; , \end{eqnarray} where \begin{eqnarray} \label{eq: Greenskern l neq 0 teil 2} h_1(l, \omega u) = \sqrt{\frac{\pi \omega u}{2}} J_{l+1/2} (\omega u) \; , \quad h_2(l, \omega u) = \sqrt{\frac{\pi \omega u}{2}} J_{-l-1/2} (\omega u) \; , \end{eqnarray} and $J_\nu$ denotes the Bessel function of the first kind. As initial function $\phi_\omega^{(0)}(u)$ we have chosen \begin{equation*} \phi_\omega^{(0)} (u) = \omega^l e^{-i (l+1) \frac{\pi}{2}} \sqrt{\frac{\pi \omega u}{2}} H_{l+ 1/2}^{(2)}(\omega u) \; , \end{equation*} where $H^{(2)}_\nu$ denotes the second Hankel function. Since $l$ is an integer, these functions are directly connected to the spherical Bessel functions and simplify significantly. Namely, $h_1,h_2$ have the following representations [cf. 
\cite[Chapter 10]{AS}] \begin{eqnarray} \label{eq: Polynomialdarstellung fuer spherical Bessel functions l+1/2} h_1 (l, \omega u) = P(l+\frac{1}{2}, \omega u) \sin(\omega u- \frac{1}{2} l \pi) + Q (l+\frac{1}{2}, \omega u) \cos (\omega u - \frac{1}{2} l \pi) \; \;\\ \label{eq: Polynomialdarstellung fuer spherical Bessel functions -l-1/2} h_2 (l, \omega u) = P(l+\frac{1}{2}, \omega u) \cos(\omega u+ \frac{1}{2} l \pi) - Q (l+\frac{1}{2}, \omega u) \sin (\omega u + \frac{1}{2} l \pi) \; \; \end{eqnarray} where $P,Q$ are finite polynomials given by \begin{eqnarray*} P(l+ \frac{1}{2}, \omega u)&=& \sum_{k= 0}^{[\frac{1}{2} l]} \: (-1)^k \frac{(l+\frac{1}{2},2k)}{(2 \omega u)^{2k}} \; ,\\ Q(l+ \frac{1}{2}, \omega u)&=& \sum_{k=0}^{[\frac{1}{2} (l-1)]} (-1)^k \frac{(l+\frac{1}{2},2k+1)}{(2 \omega u)^{2k+1}} \; , \end{eqnarray*} with $$ (l+ \frac{1}{2},k) = \frac{(l+k)!}{k! \: \Gamma(l-k+1)} \; .$$ The initial function can be expressed as \begin{equation} \label{eq: Polynomialdarstellung fuer anfangsfunktion} \phi_\omega^{(0)} (u) = \omega^l e^{-i \omega u} \sum_{k=0}^l \frac{(l+\frac{1}{2},k)}{(2 i \omega u)^k} \; . \end{equation} Due to the recurrence formulas for the derivatives of the Bessel functions, we have the identities \begin{eqnarray*} \partial_\omega h_1(l, \omega u) = u h_1(l-1, \omega u) - \frac{l}{\omega} h_1(l,\omega u) \hspace*{2mm} \; , \\ \partial_\omega h_2(l, \omega u) = - u h_2(l-1, \omega u) - \frac{l}{\omega} h_2(l,\omega u) \; . \end{eqnarray*} As a consequence, \begin{eqnarray*} \partial_\omega S_\omega (u,v) = & \hspace*{-2mm} - \hspace*{-2mm} & \frac{2l+1}{\omega} S_\omega \\ &\hspace*{-2mm}+\hspace*{-2mm}&v \frac{(-1)^{l+1}}{\omega} (h_1(l-1,\omega v) h_2(l,\omega u) + h_1(l,\omega u) h_2(l-1, \omega v) ) \\ &\hspace*{-2mm}+\hspace*{-2mm}&u \frac{(-1)^l}{\omega} (h_1(l,\omega v) h_2(l-1,\omega u) + h_1(l-1,\omega u) h_2(l, \omega v) ) \; . 
\hspace*{1.3mm} \end{eqnarray*} This allows us to derive the necessary estimates for the Green's function $S_\omega (u,v)$. Exploiting the asymptotics, we have already seen in \cite[Section 5]{Kronthaler} that \begin{equation*} | S_\omega (u,v) | \leq C_1 \left(\frac{u}{1 + |\omega| u} \right)^{-l} \left(\frac{v}{1 + |\omega| v} \right)^{l + 1} e^{v| \Im \omega| +u \Im \omega} \end{equation*} holds for $v\geq u >0$ and an appropriate constant $C_1$. In order to derive an estimate for $\partial_\omega S_\omega$ and small $|\omega|$, we make use of \begin{eqnarray*} h_1(l,\omega u) & \sim & k_1 (\omega u)^{l+1} + k_2 (\omega u)^{l+3} \\ h_2(l,\omega u) & \sim & k_3 (\omega u)^{-l} + k_4 (\omega u)^{-l+2} \; , \quad \textrm{if} \; |\omega| u \ll 1 \; , \end{eqnarray*} with certain constants $k_1,...,k_4$ [refer to the series expansion of the Bessel functions \cite[9.1.10]{AS}] to obtain (note that $v\geq u >0$), \begin{eqnarray*} | \partial_\omega S_\omega (u,v)| \leq C_2 \left(\frac{u}{1 + |\omega| u} \right)^{-l} \left(\frac{v}{1 + |\omega| v} \right)^{l + 2} \; , & \textrm{if } |\omega| v \ll 1 \; . \end{eqnarray*} For large arguments $|\omega| u \gg 1$ we use (\ref{eq: Polynomialdarstellung fuer spherical Bessel functions l+1/2}),(\ref{eq: Polynomialdarstellung fuer spherical Bessel functions -l-1/2}) and get by a straightforward calculation \begin{eqnarray*} \partial_\omega S_\omega (u,v) \sim \frac{-2l}{\omega^2} \sin(\omega(u-v)) + \partial_\omega \left[ \frac{1}{\omega} \sin(\omega(u-v)) \right] \; , & \textrm{if } | \omega | u \gg 1 . \end{eqnarray*} Together with (\ref{eq: abschaetzung der omegaabl der greensfkt}), we obtain \begin{eqnarray*} | \partial_\omega S_\omega (u,v) | \leq C_3 \frac{v^2}{1 + |\omega| v} e^{v| \Im \omega| +u \Im \omega} \; , & \textrm{if } | \omega | u \gg 1. 
\end{eqnarray*} Combining these estimates, we find a constant $C$ such that \begin{equation} \label{eq: Abschaetzung fuer partial omega S_omega} | \partial_\omega S_\omega (u,v) | \leq C \left(\frac{u}{1 + |\omega| u} \right)^{-l} \left(\frac{v}{1 + |\omega| v} \right)^{l + 1} v \: e^{v| \Im \omega| +u \Im \omega} \; , \end{equation} for $v\geq u >0$. Moreover, looking at (\ref{eq: Polynomialdarstellung fuer anfangsfunktion}) we get the following bounds for the initial function, \begin{eqnarray} \label{eq: Abschaetzung fuer anfangsfkt} | \phi_\omega^{(0)} (u) | & \leq & C_4 \left( \frac{u}{1+ | \omega| u} \right)^{-l} e^{u \Im \omega } \; , \\ \label{eq: Abschaetzung fuer partial omega anffkt} | \partial_\omega \phi_\omega^{(0)} (u) | & \leq & C_5 \left( \frac{u}{1+ | \omega| u} \right)^{-l} u \: e^{u \Im \omega } \; . \end{eqnarray} These estimates allow us to proceed in exactly the same way as in the proof of Lemma \ref{lemma: Entwicklung von acute phi,l=0}. As an analogue of $\hat{\phi}^{(1)}_\omega (u)$ we obtain the term $$ - \int_u^\infty S_\omega(u,v) \left( c_{31} \frac{\log v}{v^3} + \frac{c_{30}}{v^3} \right) \phi_\omega^{(0)} (v) \:dv,$$ which we calculate using (\ref{eq: Polynomialdarstellung fuer spherical Bessel functions l+1/2}), (\ref{eq: Polynomialdarstellung fuer spherical Bessel functions -l-1/2}) and (\ref{eq: Polynomialdarstellung fuer anfangsfunktion}). Essentially, we get integrals of the form $$ \frac{\omega^l}{(\omega u)^n \omega^{m+k+1}} \left(C_6 e^{i \omega u} \int_u^\infty e^{-2 i \omega v} \frac{\log^q v}{v^{3+k+m}}\:dv + C_7e^{-i \omega u} \int_u^\infty \frac{\log^q v}{v^{3+k+m}} \: dv \right)\; ,$$ where $q \in \{0,1\}, 0 \leq n,m,k \leq l$. Note that the terms involving singularities in $\omega$ cancel, due to the fact that $\omega^l \grave{\phi}_\omega$ is continuous with respect to $\omega$. 
Computing these integrals via Lemma \ref{lemma: zur Reihenentwicklung} (in the limit $\varepsilon \rightarrow 0$), we see (as before) that the only terms not being $C^1$ with respect to $\omega$ are of the form \begin{equation} \label{eq: erste Irregularitaet im Fall l neq 0} e^{i\omega u} \frac{1}{\omega^{m+k+1}} (2i \omega)^{k +m+2} \left( \log^2(2 i \omega u) + \log u \log(2i \omega u) +\log(2i \omega u)\right) \; , \end{equation} modulo coefficients. Now we apply the same iteration scheme with analogous estimates, and all in all we have shown: \begin{lemma} \label{lemma: Entwicklung von acute phi,l geq 1} For $l\geq 1$, $\omega \in \field{R} \setminus \{0\}$ and fixed $u>0$ the fundamental solutions $\omega^l \grave{\phi}_\omega(u)$ have the representation \begin{eqnarray} \nonumber \omega^l \grave{\phi}_\omega(u)= \phi_\omega^{(0)}(u) + g_3(\omega,u) + 2 i \omega \log^2(2 i \omega) g_4(\omega,u) \hspace*{4.7mm} \\ +2 i \omega \log(2 i \omega) g_5(\omega,u) +2 i \omega g_6 (\omega,u)\; , \label{eq: Entwicklung von acute phi,l geq 1} \end{eqnarray} where the functions $g_3,g_4,g_5$ and $g_6$ are $C^1(\field{R})$ with respect to $\omega$. \end{lemma} Hence, we still have finite expressions for the Green's function $S_\omega(u,v)$ as well as for the initial function $\phi_\omega^{(0)}(v)$, which involve essentially the plane waves $e^{ \pm i \omega u},e^{\pm i \omega v}$. Expanding all these expressions and deriving estimates analogous to (\ref{eq: Abschaetzung fuer partial omega S_omega}) and (\ref{eq: Abschaetzung fuer partial omega anffkt}) for higher order $\omega$-derivatives, we can improve Lemma \ref{lemma: Entwicklung von acute phi,l geq 1} in the same way as Lemma \ref{lemma: Entwicklung von acute phi,l=0}, following the arguments of the proof of Lemma \ref{lemma: bessere Entwicklung von gravephi for l=0}. Also, a result similar to Corollary \ref{corollary: Beziehung g_ijk g_0} seems straightforward. 
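To illustrate how explicit these expressions are, consider the example $l=1$: evaluating (\ref{eq: Polynomialdarstellung fuer spherical Bessel functions l+1/2}), (\ref{eq: Polynomialdarstellung fuer spherical Bessel functions -l-1/2}) and (\ref{eq: Polynomialdarstellung fuer anfangsfunktion}) with $P(\frac{3}{2},\omega u) = 1$ and $Q(\frac{3}{2},\omega u) = \frac{1}{\omega u}$ gives \begin{eqnarray*} h_1(1,\omega u) & = & \frac{\sin(\omega u)}{\omega u} - \cos(\omega u) \; , \\ h_2(1,\omega u) & = & - \frac{\cos(\omega u)}{\omega u} - \sin(\omega u) \; , \\ \phi_\omega^{(0)}(u) & = & \omega \, e^{-i \omega u} \left( 1 + \frac{1}{i \omega u} \right) \; , \end{eqnarray*} so that the Green's function and the initial function are indeed built from the plane waves $e^{\pm i \omega u}$ multiplied by polynomials in $(\omega u)^{-1}$. 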
The problem arises when we have to derive an $\omega$-expansion of the essential part of the integral kernel \begin{equation} \label{eq: essentieller teil des Integralkerns l neq 0} \mathrm{Im} \left(\frac{\acute{\phi}_{\omega l} (u) \grave{\phi}_{\omega l}(v)}{w(\acute{\phi}_l,\grave{\phi}_l)} \right) \; . \end{equation} The main difficulty can be seen as follows. If we proceeded in the same way as in the case $l=0$, the lowest irregular term with respect to $\omega$ should appear with the power $\omega^{2l+2}$ [cf. proof of Theorem \ref{theorem: Haupttheorem abfall l=0}] in order to satisfy Price's law. But due to the fact that the first irregularity in $\omega$ looks as follows, $$e^{i \omega u}u^{-l} 2i\omega \big(c\log^2 (2i\omega u)+c \log u \log(2i\omega u) + c\log(2i\omega u)\big) \; ,$$ [cf. equation (\ref{eq: erste Irregularitaet im Fall l neq 0})], we would have to find a systematic way to check that the coefficients in front of the terms of lower regularity vanish. Because of the complexity of the calculations we did not succeed at this point. Thus, following the same arguments as for $l=0$ together with the analogous result to Corollary \ref{corollary: Beziehung g_ijk g_0}, which would involve $2 i \omega \log^2(2i\omega)$ as highest irregularity, we would have to assume $ \omega \log|\omega|$ as the lowest irregular term in the expansion of (\ref{eq: essentieller teil des Integralkerns l neq 0}). Except for this problem, we do not expect any further difficulties in extending Lemma \ref{lemma: abfall der ome abln des integranden} to $l \neq 0$, apart from the complexity of the calculations and the estimates. Thus, for arbitrary $l$ a statement similar to Theorem \ref{theorem: Haupttheorem abfall l=0} follows, but with the decay $|\phi(t)|\leq c/t^2$, and in the case of momentarily static initial data $|\phi(t)| \leq c/t^3$, respectively. 
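To make the resulting decay rate $c/t^2$ plausible, note that for the term $\omega \log|\omega|$ one integration by parts gives (the boundary terms vanish since $\log|\omega| = 0$ at $\omega = \pm 1$) \begin{equation*} \int_{-1}^1 \omega \log|\omega| \, e^{-i \omega t} \: d\omega = \frac{1}{it} \int_{-1}^1 \big( \log|\omega| + 1 \big) e^{-i \omega t} \: d\omega \; , \end{equation*} so that the rate $c/t^2$ reduces to a $c/t$-bound for the Fourier integral of the logarithm. 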
The proof uses essentially the arguments of the proof of Theorem \ref{theorem: Haupttheorem abfall l=0}, with the difference that one basically has to check the inequality $$ \Big|\int_{-1}^1 \log|\omega| e^{-i\omega t} \: d\omega \Big| \leq \frac{c}{t} \: .$$ To this end, one makes the substitution $z = \omega t$ and splits up the integrals to obtain \begin{eqnarray*} \int_{-1}^1 \log|\omega| e^{-i\omega t} \: d\omega & = & \frac{1}{t} \bigg( \int_{-1}^1 \log|z| e^{-i z} \: dz - \log t \int_{-t}^t e^{-iz} \: dz \\ & & + \int_{-t}^{-1} \log (-z) e^{-iz} \: dz + \int_1^t \log z e^{-iz} \: dz \bigg) \; . \end{eqnarray*} Computing the second integral and integrating the last two integrals by parts yields $$ = \frac{1}{t} \bigg( \int_{-1}^1 \log|z| e^{-i z} \: dz + \frac{1}{i} \int_{-t}^{-1} \frac{1}{z} e^{-iz} \: dz + \frac{1}{i} \int_{1}^{t} \frac{1}{z} e^{-iz} \: dz \bigg) \; ,$$ and the inequality follows after integrating the last two integrals by parts once more and applying standard integral estimates. However, in view of Price's law, this result is not satisfactory. \begin{acknowledgment} The author would like to thank Felix Finster, University of Regensburg, for introducing him to this problem and also for his helpful suggestions and remarks. \end{acknowledgment}
The proofs of the main results in the present paper proceed by relating the position of the random walk to some expected hitting times. The latter are analysed (over all environments) using estimates for sums of independent random variables; this relies on (mostly well-known) strong limit theorems. We now give a formal description of the RWRE model that we study here. Fix $\delta \in (0,1/2)$. Let $(\xi_i,Y_i)$, $i \in \mathbb{N}$, be a sequence of i.i.d.~random vectors on some probability space $(\Omega,{\mathcal{F}},{\mathbb{P}})$, such that \begin{eqnarray} \label{ue} {\mathbb{P}} [ \delta \leq \xi_1 \leq 1-\delta ] =1, \end{eqnarray} and $Y_1$ takes values in $[-1,1]$. The condition (\ref{ue}) is sometimes referred to as {\em uniform ellipticity}. Note that we allow $Y_1$ and $\xi_1$ to be dependent. We fix $\alpha > 0$. For a particular realization of the sequence $(\xi_i,Y_i)$, $i\in\mathbb{N}$, we define $p_0=q_0=1/2$ and the quantities $p_n$ and $q_n$, $n=1,2,3,\ldots$ as follows: \begin{eqnarray} \label{1006b} p_n & := & \left\{ \begin{array}{ll} \xi_n +Y_n n^{-\alpha} & ~~~~{\rm if}~~~ (\delta/2) \leq \xi_n +Y_n n^{-\alpha} \leq 1-(\delta/2) \\ \delta/2 & ~~~~{\rm if}~~~ \xi_n +Y_n n^{-\alpha} < (\delta/2) \\ 1-(\delta/2) & ~~~~{\rm if}~~~ \xi_n +Y_n n^{-\alpha} > 1-(\delta/2) \end{array} \right. \nonumber\\ q_n & := & 1-p_n. \end{eqnarray} A particular realization of $(p_n; n\in \mathbb{N})$ specifies our random environment $\omega$, and is given in terms of the $\xi_i$ and $Y_i$ as in (\ref{1006b}). For a given environment $\omega$, the stochastic process $(\eta_t(\omega);t \in \mathbb{Z}^+)$ as defined at (\ref{1006bb}) is an irreducible, aperiodic Markov chain (under $P$); the probability measure $P$ in (\ref{1006bb}) is known as the {\em quenched} measure (the measure given a fixed environment $\omega$).
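As an illustration only, here is one way to sample an environment of the form (\ref{1006b}). The specific laws used below ($\xi_1$ uniform on $[\delta,1-\delta]$, $Y_1$ uniform on $[-1,1]$, independent) are merely one admissible example of the distributions allowed above, not an assumption of the paper.

```python
import random

def make_environment(N, alpha, delta=0.2, seed=0):
    """Sample p_0, ..., p_N as in (1006b): p_n = xi_n + Y_n * n^{-alpha},
    truncated to [delta/2, 1 - delta/2], with p_0 = q_0 = 1/2.
    Illustrative choice of laws: xi_n uniform on [delta, 1 - delta],
    Y_n uniform on [-1, 1], independent of xi_n."""
    rng = random.Random(seed)
    lo, hi = delta / 2, 1 - delta / 2
    p = [0.5]
    for n in range(1, N + 1):
        xi = rng.uniform(delta, 1 - delta)
        y = rng.uniform(-1.0, 1.0)
        p.append(min(max(xi + y * n ** (-alpha), lo), hi))
    return p

p = make_environment(1000, alpha=0.4)
```

Since $n^{-\alpha} \to 0$, the truncation is only active for small $n$, in line with the remark following (\ref{1006b}).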
Under condition (\ref{ue}), we have that there exists $n_0 \in \mathbb{N}$ such that, for a.e.~$\omega$, $(\delta/2) < \xi_n +Y_n n^{-\alpha} < 1-(\delta/2)$ for all $n \geq n_0$ (since the $Y_n$ are bounded). Thus, for all $n \geq n_0$, (\ref{1006b}) implies that, for a.e.~$\omega$, \begin{eqnarray*} p_n = \xi_n +Y_n n^{-\alpha}, ~~~ q_n = 1-\xi_n - Y_n n^{-\alpha}, ~~~ (n \geq n_0). \end{eqnarray*} The conditions on the variables in (\ref{1006b}) ensure that, for a.e.~$\omega$, $(\delta/2) \leq p_n \leq 1 -(\delta/2)$ for all $n$ so that $p_n$ and $q_n$ are true probabilities bounded strictly away from $0$ and $1$, as required by our condition on $\omega$ given just before (\ref{1006bb}). \section{Main results} \label{results} In this section we describe in detail two particular cases of the model formulated in the previous section, along with our main results in each case. Then in Section \ref{sec3} we make further remarks and state some open problems. \subsection{Perturbation of random walk in random environment in Sinai's regime} \label{secrwre} Now we describe our first particular case of the model given in Section \ref{int}. For $n \in \mathbb{N}$ set \begin{eqnarray} \label{0520b} \zeta_n := \log \left( \frac{\xi_n}{1-\xi_n} \right), ~~~ Z_n := \frac{Y_n}{\xi_n (1-\xi_n)}. \end{eqnarray} With ${\mathbb{E}}$ denoting expectation under ${\mathbb{P}}$, suppose that ${\mathbb{E}}[\zeta_1]=0$ and ${\mathrm{Var}}[\zeta_1]>0$ (so our environment is truly random). In order to formulate our results, we introduce some more notation. Set \begin{eqnarray} \label{0520bx} \lambda := {\mathbb{E}} [ Z_1 ] ,\end{eqnarray} and also let \begin{eqnarray} \label{0210f} s^2:= {\mathrm{Var}}[\xi_1], ~~~ \sigma^2 := {\mathrm{Var}}[Y_1]. 
\end{eqnarray} Under our boundedness conditions on $\xi_1$ and $Y_1$, we have $s^2 <\infty$ and $\sigma^2 <\infty$, and under condition (\ref{ue}) we have, ${\mathbb{P}}$-a.s., \begin{eqnarray*} -\infty < \frac{-1}{\delta^2} \leq Z_1 \leq \frac{1}{\delta^2} < \infty.\end{eqnarray*} This model was introduced in \cite{mw} in somewhat more generality, and criteria for transience, recurrence and ergodicity were given (see Theorems 6, 7 of \cite{mw}). In this case, the random environment described in (\ref{1006b}) corresponds to a perturbation of Sinai's regime, in the sense that, in the limit as $n \to \infty$, we have ${\mathbb{E}}[\log(p_n/q_n)] \to 0$. Despite this, the behaviour of this model may be strikingly different to that of Sinai's RWRE (as demonstrated by our results below and also those in \cite{mw}), and depends on the sign of $\lambda$ as defined at (\ref{0520bx}) (the average direction of the perturbation), and $\alpha$ (the size of the perturbation). For the following results, with the definitions at (\ref{0520b}) and (\ref{0210f}), we take $s^2>0$, ${\mathbb{E}}[\zeta_1]=0$, and $\sigma^2 \geq 0$ (so, for example, we permit the case ${\mathbb{P}}[Y_1=b]=1$ for some $b \in [-1,1]$, i.e.~a non-random perturbation of Sinai's RWRE). Of separate interest are the cases $\lambda=0$ and $\lambda \neq 0$ (where $\lambda$ is given by (\ref{0520bx})). The case of most interest to us here is $\lambda \neq 0$, for which the perturbation is on average either towards $0$ ($\lambda>0$) or away from $0$ ($\lambda<0$); this includes the case of a non-random perturbation of Sinai's RWRE. It was shown in \cite{mw} that the critical size of the perturbation is $\alpha=1/2$: for $\alpha<1/2$ the perturbation is large enough to disturb the null-recurrent behaviour; for $\alpha \geq 1/2$ it is too small.
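A back-of-the-envelope expansion (offered here only as intuition for the critical value just quoted, not as a statement from \cite{mw}) explains why $\alpha=1/2$ separates the regimes. Expanding the log-ratios in the potential for large $k$,

```latex
\log\frac{p_k}{q_k} = \zeta_k + Z_k k^{-\alpha} + O(k^{-2\alpha}),
\qquad
\sum_{k=1}^{n} \log\frac{p_k}{q_k}
 \approx \sum_{k=1}^{n} \zeta_k \;+\; \frac{\lambda}{1-\alpha}\, n^{1-\alpha}.
```

The Sinai part $\sum_{k \leq n} \zeta_k$ fluctuates on the scale $n^{1/2}$, while the perturbation contributes a mean drift of order $n^{1-\alpha}$; the drift dominates precisely when $1-\alpha > 1/2$, that is, when $\alpha < 1/2$.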
By Theorem 6 of \cite{mw}, we have that if $\lambda<0$ and $\alpha<1/2$ then $\eta_t(\omega)$ is transient for a.e.~$\omega$; if $\alpha \geq 1/2$ and $\lambda \neq 0$ then $\eta_t(\omega)$ is null-recurrent for a.e.~$\omega$; if $\lambda>0$ and $\alpha< 1/2$ then $\eta_t(\omega)$ is ergodic for a.e.~$\omega$. We obtain logarithmic speeds for the $\lambda \neq 0$ case, for the null-recurrent (Theorem \ref{thm5}), transient (Theorem \ref{thm8}), and ergodic (Theorem \ref{thm21}) regimes. In the case $\lambda =0$, the critical exponent for $\alpha$ of $1/2$ is {\em decreased}, depending on certain higher order analogues of $\lambda$ (see the remark after Theorem 7 of \cite{mw}). Here, of the $\lambda=0$ cases, we will only be concerned (see Theorem \ref{thm3}, below) with the special case where $Y_1/\xi_1 \stackrel{{\rm d}}{=} -Y_1/(1-\xi_1)$, for which $\lambda=0$ and $\eta_t(\omega)$ is null-recurrent for a.e.~$\omega$ for {\em any} $\alpha>0$ (see \cite{mw}, Theorem 5). (Here and subsequently $\stackrel{{\rm d}}{=}$ stands for equality in distribution.) This case is of interest because, despite the presence of a (potentially strong) perturbation, the random walk remains null-recurrent; we show it has logarithmic speed. Our first result is Theorem \ref{thm5} below, which deals with the $\lambda \neq 0$, $\alpha \geq 1/2$ case, for which $\eta_t(\omega)$ is null-recurrent for a.e.~$\omega$ (see above). Recall the definitions of $\lambda$, $s^2$ and $\sigma^2$ from (\ref{0520bx}) and (\ref{0210f}). \begin{theorem} \label{thm5} Suppose ${\mathbb{E}}[\zeta_1]=0$, $s^2 \in (0,\infty)$, $\lambda \neq 0$ and $\sigma^2 \in [0,\infty)$. \begin{itemize} \item[(i)] Suppose $\alpha >1/2$. Then, for a.e.~$\omega$, for any ${\varepsilon}>0$ we have, a.s., \begin{eqnarray} \label{0210ac} 0 \leq \frac{\eta_t(\omega)}{ (\log t)^2} < (\log \log{t})^{2 +{\varepsilon}}, \end{eqnarray} for all but finitely many $t$. \item[(ii)] Suppose $\alpha=1/2$. 
Then, for a.e.~$\omega$, for any ${\varepsilon}>0$ we have, a.s., \begin{eqnarray} \label{0210acd} 0 \leq \frac{\eta_t(\omega)}{ (\log t )^2} < (\log \log{t})^{4+{\varepsilon}}, \end{eqnarray} for all but finitely many $t$. \item[(iii)] On the other hand, for $\alpha \geq 1/2$, for a.e.~$\omega$, for any ${\varepsilon}>0$ we have, a.s., \begin{eqnarray} \label{0920a} \frac{\eta_t(\omega)}{(\log{t})^2} > (\log\log\log{t})^{-1-{\varepsilon}}, \end{eqnarray} for infinitely many $t$. \end{itemize} \end{theorem} Our next result deals with the transient case when $\lambda<0$ and $\alpha \in (0,1/2)$, and gives a reasonably tight envelope which the random walk leaves only finitely often. Although the random walk is transient, it is very slow: we have a striking example of logarithmic transience. \begin{theorem} \label{thm8} Suppose ${\mathbb{E}}[\zeta_1]=0$, $s^2 \in (0,\infty)$, $\lambda < 0$, $\sigma^2 \in [0,\infty)$, and $\alpha \in (0,1/2)$. For a.e.~$\omega$, for any ${\varepsilon}>0$, we have, a.s., \begin{eqnarray} \label{0427c} (\log \log t)^{-(1/\alpha)-{\varepsilon}} < \frac{\eta_t(\omega)}{(\log t)^{1/\alpha}} < (\log \log t)^{(2/\alpha)+{\varepsilon}}, \end{eqnarray} for all but finitely many $t$. \end{theorem} A case of secondary interest is that in which $Y_1/\xi_1 \stackrel{{\rm d}}{=} -Y_1/(1-\xi_1)$. Here $\lambda=0$, and further, $\eta_t(\omega)$ is null-recurrent for a.e.~$\omega$, for any $\alpha>0$ (see Theorem 5 of \cite{mw}). Our next result, Theorem \ref{thm3} below, deals with this case. The condition $Y_1/\xi_1 \stackrel{{\rm d}}{=} -Y_1/(1-\xi_1)$ ensures that although the perturbation may be strong, (roughly speaking) it balances out overall with equal strength to the left and to the right. This intuition is supported by the fact that the random walk remains null-recurrent. Also included is the case ${\mathbb{P}}[Y_1=0]=1$ and $\sigma^2=0$, i.e.~Sinai's regime.
Thus, for our purposes, there is no distinction between the behaviour of the RWRE perturbed from Sinai's regime under condition $Y_1/\xi_1 \stackrel{{\rm d}}{=} -Y_1/(1-\xi_1)$ and that of the RWRE in Sinai's regime itself. \begin{theorem} \label{thm3} Suppose ${\mathbb{E}}[\zeta_1]=0$, $s^2 \in (0,\infty)$, $Y_1/\xi_1 \stackrel{{\rm d}}{=} -Y_1/(1-\xi_1)$, $\sigma^2 \in [0,\infty)$, and $\alpha >0$. \begin{itemize} \item[(i)] For a.e.~$\omega$, for any ${\varepsilon}>0$ we have that, a.s., \begin{eqnarray*} 0 \leq \frac{\eta_t(\omega)}{(\log t)^2} \leq (\log \log t)^{2+{\varepsilon}}, \end{eqnarray*} for all but finitely many $t$. \item[(ii)] On the other hand, for a.e.~$\omega$, for any ${\varepsilon}>0$ we have that, a.s., \begin{eqnarray*} \frac{\eta_t(\omega)}{(\log t)^2} \geq (\log \log \log t)^{-1-{\varepsilon}}, \end{eqnarray*} for infinitely many $t$. \end{itemize} \end{theorem} \noindent \textbf{Remarks. } (a) In the case of Sinai's regime (${\mathbb{P}}[Y_1=0]=1$, $\sigma^2=0$), Theorem \ref{thm3}(i) gives similar bounds to \cite{dere,cmp}, but by comparison to Theorem \ref{thmb} (due to Hu and Shi \cite{hushi}), none of the bounds in Theorem \ref{thm3} is particularly sharp. \\ (b) In the null-recurrent regimes $\lambda \neq 0$, $\alpha \geq 1/2$ (Theorem \ref{thm5}) and $Y_1/\xi_1 \stackrel{{\rm d}}{=} -Y_1/(1-\xi_1)$ (Theorem \ref{thm3}) we see that the position of the random walk is essentially of order $(\log t)^2$, as in Sinai's regime (which is included in Theorem \ref{thm3}). Thus provided we have null-recurrence we have the same speed. On the other hand, in the transient case $\lambda<0$, $\alpha<1/2$ (Theorem \ref{thm8}), the $1/\alpha$ exponent in the speed of transience is in $(2,\infty)$. 
Thus for $\alpha$ increasingly small (i.e.~a stronger perturbation), the speed increases (but is still `slow', i.e.~logarithmic).\\ In the ergodic situations, in addition to our results on the speed of the random walk, in the present paper we also give results on the rate of decay of the stationary distribution $(\pi_n)$, $n\in \mathbb{Z}^+$, of the Markov chain $\eta_t(\omega)$. Some analogous results for non-random environments are given in \cite{mp}. Theorems \ref{thm21} and \ref{thm9} below deal with the ergodic case when $\lambda>0$ and $\alpha \in(0,1/2)$. \begin{theorem} \label{thm21} Suppose ${\mathbb{E}}[\zeta_1]=0$, $s^2 \in (0,\infty)$, $\lambda > 0$, $\sigma^2 \in [0,\infty)$, and $\alpha \in (0,1/2)$. For a.e.~$\omega$, for any ${\varepsilon}>0$, a.s., \begin{eqnarray*} \eta_t (\omega) \leq (1+{\varepsilon}) \left( \frac{1-\alpha}{\lambda} \right)^{1/(1-\alpha)} (\log t)^{1/(1-\alpha)}, \end{eqnarray*} for all but finitely many $t$, and \begin{eqnarray*} \eta_t (\omega) \geq (1-{\varepsilon}) \left( \frac{1-\alpha}{\lambda} \right)^{1/(1-\alpha)} (\log t)^{1/(1-\alpha)}, \end{eqnarray*} for infinitely many $t$. \end{theorem} For $\alpha \in (0,1/2)$, $1/(1-\alpha) \in (1,2)$: this is `slower' than Sinai's regime. \begin{theorem} \label{thm9} Suppose ${\mathbb{E}}[\zeta_1]=0$, $s^2 \in (0,\infty)$, $\lambda > 0$, $\sigma^2 \in [0,\infty)$, and $\alpha \in (0,1/2)$. For a.e.~$\omega$, as $n \to \infty$ \begin{eqnarray} \label{0922a} \pi_n = \exp \left( -\left(\frac{\lambda}{1-\alpha}\right) n^{1-\alpha} [1+o(1)] \right).\end{eqnarray} \end{theorem} \subsection{Simple random walk with random perturbation} \label{sec2} Our second model again fits into the framework of (\ref{1006b}) above, but we now take ${\mathbb{P}}[\xi_1=1/2]=1$ and $\sigma^2:={\mathrm{Var}}[Y_1]>0$. That is, we have a random perturbation of the symmetric simple random walk (SRW). 
In this case, from (\ref{1006b}), we have $p_0=q_0=1/2$ and for $n \in \mathbb{N}$ \begin{eqnarray} \label{1006bc} p_n & := & \left\{ \begin{array}{ll} \frac{1}{2} +Y_n n^{-\alpha} & ~~~~{\rm if}~~~ (\delta/2) \leq \frac{1}{2} +Y_n n^{-\alpha} \leq 1-(\delta/2) \\ \delta/2 & ~~~~{\rm if}~~~ \frac{1}{2} +Y_n n^{-\alpha} < (\delta/2) \\ 1-(\delta/2) & ~~~~{\rm if}~~~ \frac{1}{2} +Y_n n^{-\alpha} > 1-(\delta/2) \end{array} \right. \nonumber\\ q_n & := & 1-p_n. \end{eqnarray} Since the $Y_n$ are bounded, we have that there exists $n_0 \in \mathbb{N}$ such that for a.e.~$\omega$ we have $(\delta/2) < \frac{1}{2} +Y_n n^{-\alpha} < 1-(\delta/2)$ for all $n \geq n_0$. Thus, for a.e.~$\omega$, (\ref{1006bc}) implies that for all $n \geq n_0$ \begin{eqnarray} \label{1003a} p_n = \frac{1}{2} +Y_n n^{-\alpha}, ~~~ q_n = \frac{1}{2} - Y_n n^{-\alpha}, ~~~ (n \geq n_0). \end{eqnarray} The conditions on the variables in (\ref{1006bc}) ensure that, for a.e.~$\omega$, $(\delta/2) \leq p_n \leq 1 -(\delta/2)$ for all $n$ so that $p_n$ and $q_n$ are bounded strictly away from $0$ and $1$. We see that, for a.e.~$\omega$, $(p_n,q_n) \to (1/2,1/2)$ as $n \to \infty$. Thus in the limit $n \to \infty$, we coincide with the symmetric SRW on $\mathbb{Z}^+$. Here we do not study the case ${\mathrm{Var}}[Y_1]=\sigma^2=0$, in which we have a non-random perturbation of the SRW. This is an example of the so-called Lamperti problem after \cite{lamp1} (see also \cite{harris}); for recurrence/transience criteria see \cite{lamp1,mai} and Theorem 2 of \cite{mw}. From now on we assume ${\mathrm{Var}}[Y_1]=\sigma^2>0$. The transience and recurrence properties of the model given by (\ref{1006bc}) were analysed in \cite{mw}.
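Before quoting the precise results from \cite{mw}, we record a heuristic (our own, and only indicative): since $\xi_n \equiv 1/2$, the potential has no Sinai term, and for large $n$,

```latex
\log\frac{p_n}{q_n}
 = \log\frac{\frac{1}{2} + Y_n n^{-\alpha}}{\frac{1}{2} - Y_n n^{-\alpha}}
 = 4 Y_n n^{-\alpha} + O(n^{-3\alpha}),
\qquad
\sum_{k=1}^{n} \log\frac{p_k}{q_k}
 \approx \frac{4\,{\mathbb{E}}[Y_1]}{1-\alpha}\, n^{1-\alpha}
 \quad ({\mathbb{E}}[Y_1] \neq 0,\ \alpha < 1).
```

The mean part diverges exactly when $\alpha<1$, which is consistent with the critical exponent $\alpha=1$ identified in \cite{mw}; when ${\mathbb{E}}[Y_1]=0$ the mean part vanishes and the behaviour is governed by the mean-zero fluctuations of the perturbation.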
From Theorem 3(iv) of \cite{mw}, we have that in this case if ${\mathbb{E}}[Y_1]<0$ and $\alpha<1$ then $\eta_t(\omega)$ is transient for a.e.~$\omega$; if $\alpha>1$ and ${\mathbb{E}}[Y_1] \neq 0$ then $\eta_t(\omega)$ is null-recurrent for a.e.~$\omega$; if ${\mathbb{E}}[Y_1]>0$ and $\alpha<1$ then $\eta_t(\omega)$ is ergodic for a.e.~$\omega$. Thus, in contrast to the perturbation of the {\em random} environment (as in Section \ref{secrwre}), the critical exponent in this case is $\alpha=1$. When ${\mathbb{E}}[Y_1]=0$, recurrence/transience properties depend on the higher moments of $Y_1$ (see the remark after Theorem 3 of \cite{mw}). Of interest to us in the present paper is the case in which the distribution of $Y_1$ is symmetric, that is $Y_1 \stackrel{{\rm d}}{=} -Y_1$ (and ${\mathbb{E}}[Y_1]=0$). In this case (see Theorem 3(iii) of \cite{mw}) $\eta_t(\omega)$ is null-recurrent for a.e.~$\omega$, for {\em any} $\alpha>0$. In this case we obtain our logarithmic behaviour (see Theorem \ref{thm0}), in the domain $\alpha \in (0,1/2)$. We also obtain logarithmic bounds in the ergodic case mentioned above (see Theorem \ref{thm20}). \begin{theorem} \label{thm0} Suppose ${\mathbb{P}}[\xi_1=1/2]=1$, $Y_1 \stackrel{{\rm d}}{=} -Y_1$, $\sigma^2 \in (0,\infty)$, $\alpha\in (0,1/2)$. \begin{itemize} \item[(i)] For a.e.~$\omega$, for any ${\varepsilon}>0$, a.s., \begin{eqnarray} \label{0210a} 0 \leq \frac{\eta_t(\omega)}{ (\log t)^{2/(1-2\alpha)}} \leq (\log \log t)^{(2/(1-2\alpha))+{\varepsilon}}, \end{eqnarray} for all but finitely many $t$. \item[(ii)] On the other hand, for a.e.~$\omega$, for any ${\varepsilon}>0$, a.s., \begin{eqnarray} \label{0920c} \frac{\eta_t(\omega)}{(\log{t})^{2/(1-2\alpha)}} \geq ( \log \log \log t )^{-(1/(1-2\alpha))-{\varepsilon}} , \end{eqnarray} for infinitely many $t$. \end{itemize} \end{theorem} \noindent \textbf{Remark. } Note that for $\alpha \in (0,1/2)$, $2/(1-2\alpha)$ is in $(2,\infty)$. 
In the limit $\alpha \downarrow 0$, we approach Sinai's regime in the sense that, for fixed $\omega$ and each $n$, $(p_n,q_n) \to (\frac{1}{2}+Y_n,\frac{1}{2}-Y_n)$ where \[ {\mathbb{E}} \left [ \log \left(\frac{(1/2)+Y_n}{(1/2)-Y_n} \right) \right] = {\mathbb{E}} [ \log ((1/2)+Y_n) ] - {\mathbb{E}} [ \log ((1/2)-Y_n)]= 0 \] when $Y_1 \stackrel{{\rm d}}{=} -Y_1$. Thus it is not surprising that in the limit $\alpha \downarrow 0$, Theorem \ref{thm0} approaches Theorem \ref{thm3} (which includes Sinai's regime).\\ Theorems \ref{thm20} and \ref{thm10} below deal with the ergodic case when ${\mathbb{E}}[Y_1]>0$ and $\alpha \in(0,1)$. Note that when $\alpha \in (0,1)$, $1/(1-\alpha) \in (1,\infty)$. \begin{theorem} \label{thm20} Suppose ${\mathbb{P}}[\xi_1=1/2]=1$, ${\mathbb{E}}[Y_1] > 0$, $\sigma^2 \in (0,\infty)$, and $\alpha \in (0,1)$. For a.e.~$\omega$, for any ${\varepsilon}>0$, a.s., \begin{eqnarray*} \eta_t (\omega) \leq (1+{\varepsilon}) \left( \frac{1-\alpha}{4 {\mathbb{E}}[Y_1]} \right)^{1/(1-\alpha)} (\log t)^{1/(1-\alpha)}, \end{eqnarray*} for all but finitely many $t$, and \begin{eqnarray*} \eta_t (\omega) \geq (1-{\varepsilon}) \left( \frac{1-\alpha}{4 {\mathbb{E}}[Y_1]} \right)^{1/(1-\alpha)} (\log t)^{1/(1-\alpha)}, \end{eqnarray*} for infinitely many $t$. \end{theorem} The next result gives the rate of decay of the stationary distribution $(\pi_n)$: as in Theorem \ref{thm9}, the decay is sub-exponential. \begin{theorem} \label{thm10} Suppose ${\mathbb{P}}[\xi_1=1/2]=1$, ${\mathbb{E}}[Y_1] > 0$, $\sigma^2 \in (0,\infty)$, and $\alpha \in (0,1)$. For a.e.~$\omega$, as $n \to \infty$ \begin{eqnarray} \label{0922c} \pi_n = \exp \left( -\left(\frac{4 {\mathbb{E}}[Y_1]}{1-\alpha}\right) n^{1-\alpha} [1+o(1)] \right).\end{eqnarray} \end{theorem} \subsection{Further remarks and open problems} \label{sec3} Our results give an indication of the `almost sure' behaviour of $\eta_t(\omega)$, and there is scope for tightening our bounds. 
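As a further informal remark, the decay rate in Theorem \ref{thm10} is easy to probe numerically. By detailed balance for this birth-death chain, $\pi_n q_n = \pi_{n+1} p_{n+1}$, so $\pi_n = \pi_0 \prod_{k=1}^{n} (q_{k-1}/p_k)$. The sketch below (our own sanity check, under the illustrative assumption that $Y_1$ is uniform on $[0.2,0.4]$, so ${\mathbb{E}}[Y_1]=0.3$) compares $\log \pi_n$ against the exponent in (\ref{0922c}).

```python
import math
import random

def log_pi(N, alpha, seed=0):
    """log of the unnormalised stationary measure pi_N, via detailed
    balance: pi_n = pi_0 * prod_{k=1}^n q_{k-1} / p_k.
    Environment as in (1006bc); illustrative assumption: Y_n uniform
    on [0.2, 0.4] (so E[Y_1] = 0.3), with delta/2 = 0.05."""
    rng = random.Random(seed)
    p = [0.5]  # p_0 = q_0 = 1/2
    for n in range(1, N + 1):
        y = rng.uniform(0.2, 0.4)
        p.append(min(max(0.5 + y * n ** (-alpha), 0.05), 0.95))
    lp = 0.0
    for k in range(1, N + 1):
        lp += math.log(1.0 - p[k - 1]) - math.log(p[k])
    return lp

N, alpha, ey = 200000, 0.5, 0.3
predicted = -(4 * ey / (1 - alpha)) * N ** (1 - alpha)  # exponent in (0922c)
ratio = log_pi(N, alpha) / predicted
print(ratio)  # close to 1
```

For $N = 2 \times 10^5$ and $\alpha = 1/2$ the ratio is close to $1$, in line with the $[1+o(1)]$ correction in (\ref{0922c}).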
Also of interest is the so-called {\em annealed} behaviour of the RWRE (averaged over all environments). Sinai's result \cite{sinai} for the random walk in i.i.d.~random environment on $\mathbb{Z}$ with ${\mathbb{E}}[\log(p_1/q_1)]=0$ showed (roughly speaking) that $\eta_t(\omega)$ divided by $(\log{t})^2$ converges in distribution to some random variable as $t \to \infty$. The result is stated in terms of the annealed probability measure ${\mathbb{Q}}$ given by \[ {\mathbb{Q}} [\cdot] = \int_\Omega P [\cdot] \mathrm{d} {\mathbb{P}} [\omega].\] Golosov \cite{gol1} showed that for the RWRE on $\mathbb{Z}^+$ in Sinai's regime \[ {\mathbb{Q}} \left[\frac{\eta_t (\omega)}{(\log t)^2} \leq u \right] \longrightarrow F(u), ~~~ u \in \mathbb{R},\] as $t \to \infty$, where $F$ is a known distribution function. See also \cite{gol2,gol3,kes,cp} for related results. The annealed behaviour of our models is also of interest. In particular, under the conditions of Theorem \ref{thm8} do we have (analogously to the results of Sinai-Golosov \cite{sinai,gol1}) that as $t \to \infty$ \[ {\mathbb{Q}} \left[ \frac{\eta_t(\omega)}{(\log t)^{1/\alpha}} \leq u \right] \longrightarrow G(u), ~~~ u \in \mathbb{R},\] for some $G$? We do not address this question in the present paper. One can obtain $L^p$ analogues of our results, with the methods used here (compare Theorem 3.2 of \cite{cmp}). For example, under the conditions of Theorem \ref{thm5}, analogously to (\ref{0210ac}), for any $p \geq 1$, for any ${\varepsilon}>0$, for a.e.~$\omega$, as $t\to \infty$ \begin{eqnarray*} \frac{\eta_t(\omega)}{ (\log t )^{2+{\varepsilon}} } \to 0, ~\textrm{in} ~L^p. 
\end{eqnarray*} The methods of the present paper are well suited to logarithmic speeds, since they are based on an analysis of the expected hitting times of the random walk; some standard estimates using the submartingale property, Markov's inequality and the (first) Borel-Cantelli lemma lead to some rather sharp results, since these expected times are exponentially large. Of interest would be results for the cases of the SRW with random perturbation that are not covered by the theorems of Section \ref{sec2}. For example, if $Y_1 \stackrel{{\rm d}}{=} -Y_1$ but $\alpha > 1/2$, we expect SRW-like behaviour. On the other hand, if ${\mathbb{E}}[Y_1] \neq 0$, we suspect that $\eta_t(\omega)$ will behave in a similar way to the Lamperti problem mentioned above: roughly speaking, we expect SRW-like behaviour for $\alpha >1$, while in the transient regime ($\alpha<1$ and ${\mathbb{E}}[Y_1]<0$) we have $\eta_t (\omega) \sim t^{1/(1+\alpha)}$. Another open problem is the behaviour of this model when $\alpha=1$ (this case was not covered in \cite{mw}). We hope to address some of these issues in future work. \section{Preliminaries} \label{prelim} Before we prove our main results in Section \ref{secprfs}, we give some preparatory results. First, in Section \ref{strng}, we present some technical lemmas concerning the behaviour of sums of independent random variables; some are well-known results, others we prove. Then, in Section \ref{hitting}, we give the main apparatus of our proofs, based on some hitting time results. \subsection{Some strong theorems for sums of independent random variables} \label{strng} The following result is due to Sakhanenko \cite{sak1,sak2,sak3}, and is contained in Theorem A* of the more readily obtainable paper by Shao \cite{shao}. \begin{lemma} \label{lem1025a} Let $X_1,X_2, \ldots$ be independent random variables with $E[X_i]=0$, ${\mathrm{Var}} [X_i] = \sigma_i^2 \in (0,\infty)$ for $i \in \mathbb{N}$. 
Suppose that the $X_i$ are uniformly bounded, i.e., for some $B \in (0,\infty)$, $P[|X_i| > B] =0$ for all $i$. For $n \in \mathbb{N}$, set \[ s^2_n := \sum_{i=1}^n \sigma_i^2 .\] Then, there exists (possibly on an enlarged probability space) a sequence of independent normal random variables $(W_1,\ldots,W_n)$ with $E[W_i]=0$, ${\mathrm{Var}}[W_i]=\sigma_i^2$ for $1 \leq i \leq n$ such that a.s., \begin{eqnarray*} \left| \sum_{i=1}^n X_i - \sum_{i=1}^n W_i \right| \leq \frac{1}{A} \log (s_n^2) , \end{eqnarray*} for all but finitely many $n$, where $A\in (0,\infty)$ is a constant. \end{lemma} We will need a form of the Law of the Iterated Logarithm. The following result is a consequence of Theorem 7 of \cite{feller}. \begin{lemma} \label{itlog} Let $X_1, X_2, \ldots$ be independent, uniformly bounded random variables with $E[X_i]=0$, ${\mathrm{Var}}[X_i]=\sigma_i^2 \in (0,\infty)$ for $i \in \mathbb{N}$. For $n\in \mathbb{N}$, set $s_n^2 := \sum_{i=1}^n \sigma_i^2$. Suppose that $s_n \to \infty$ as $n \to \infty$. Then, for any ${\varepsilon}>0$, a.s., \begin{eqnarray*} \left| \sum_{i=1}^n X_i \right| \leq s_n ((2+{\varepsilon})\log\log (s_n^2))^{1/2} , \end{eqnarray*} for all but finitely many $n$.
\end{lemma} We will also need the following extension of part of Hirsch's result to independent non-identically distributed random variables. \begin{lemma} \label{lem0601} Let $X_1,X_2,\ldots$ be independent, uniformly bounded random variables with $E[X_i]=0$, ${\mathrm{Var}}[X_i]=\sigma_i^2$ for $i\in\mathbb{N}$, where $0<\sigma_i^2<M<\infty$ for all $i$. Set $s_n^2:= \sum_{i=1}^n \sigma_i^2$ for $n \in \mathbb{N}$. Suppose that $s_n \to \infty$ as $n \to \infty$. For $x \geq 0$, let $a(x)>0$ be a nonincreasing function such that $x^{1/2}a(x)$ is eventually increasing, (\ref{sumn}) holds, and \begin{eqnarray} \label{0531a} \lim_{n \to \infty} \frac{\log{n}}{n^{1/2} a(n)} = 0 .\end{eqnarray} Then, for some constant $C \in (0,\infty)$, a.s., \begin{eqnarray} \label{0210bb} \max_{1 \leq i \leq n} \sum_{j=1}^i X_j \geq C s_n a(s_n^2), \end{eqnarray} for all but finitely many $n$. \end{lemma} \noindent \textbf{Proof. } By Lemma \ref{lem1025a}, we can redefine the $X_i$, $i\in\mathbb{N}$ on a richer probability space along with a sequence of independent normal random variables $W_i$, $i\in\mathbb{N}$ with $E[W_i]=0$ and ${\mathrm{Var}}[W_i]=\sigma_i^2$, such that, a.s., \[ \left| \sum_{j=1}^i X_j - \sum_{j=1}^i W_j \right| \leq A^{-1} \log (s_i^2) \leq C \log i , \] for all but finitely many $i$, for some $A, C\in(0,\infty)$. Thus, a.s., \begin{eqnarray} \label{0531b} \left| \max_{1 \leq i \leq n} \sum_{j=1}^i X_j - \max_{1 \leq i \leq n} \sum_{j=1}^i W_j \right| \leq \max_{1 \leq i \leq n} \left| \sum_{j=1}^i X_j - \sum_{j=1}^i W_j \right| \leq C \log n, \end{eqnarray} for all but finitely many $n$. 
For $n \in \mathbb{N}$, set \begin{eqnarray} \label{0601f} h(n) := \min \{ m \in \mathbb{N}: s_m^2 \geq n \}.\end{eqnarray} There exists a standard Brownian motion $(B(n);n \geq 0)$ and a sequence of independent normal random variables $\delta_n \sim {\mathcal{N}}(0,s_{h(n)}^2-n)$, $n\in\mathbb{N}$, independent of $(B(n);n \geq 0)$, such that \[ B(n) + \delta_n = \sum_{i=1}^{h(n)} W_i ,\] for each $n \in \mathbb{N}$. Now, \[ \max_{1 \leq i \leq h(n)} \sum_{j=1}^i W_j \geq \max_{1 \leq i \leq n} \sum_{j=1}^{h(i)} W_j = \max_{1 \leq i \leq n} \left( B(i) + \delta_i \right).\] Hence \begin{eqnarray} \label{0601a} \max_{1 \leq i \leq h(n)} \sum_{j=1}^i W_j \geq \max_{1 \leq i \leq n} B(i) - \max_{1 \leq i \leq n} \delta_i.\end{eqnarray} Since ${\mathrm{Var}}(\delta_i)=s_{h(i)}^2-i \leq \sigma_{h(i)}^2 <M< \infty$, and $\delta_i$, $i \in \{1,\ldots,n\}$ are independent normal random variables, we have that, a.s., \begin{eqnarray} \label{0601b} \max_{1 \leq i \leq n} \delta_i \leq \log n ,\end{eqnarray} for all but finitely many $n$ (this follows from standard tail bounds on the normal distribution (see e.g.~\cite{durrett}, p.~9) and the Borel-Cantelli lemma). Suppose that $a(\cdot)$ satisfies the conditions of this lemma. Now, for a sequence $Y_1, Y_2,\ldots$ of i.i.d.~normal random variables with $E[Y_1]=0$ and ${\mathrm{Var}}[Y_1]=1$, we have by Lemma \ref{hlem} that a.s., \begin{eqnarray} \label{0601c} \max_{1 \leq i \leq n} B(i) = \max_{1 \leq i \leq n} \sum_{j=1}^i Y_j \geq n^{1/2} a(n) ,\end{eqnarray} for all but finitely many $n$. So from (\ref{0601a}), (\ref{0601b}), (\ref{0601c}) and condition (\ref{0531a}), a.s., \begin{eqnarray} \label{0601d} \max_{1 \leq i \leq h(n)} \sum_{j=1}^i W_j \geq n^{1/2} a(n) - \log n \geq C n^{1/2} a(n) ,\end{eqnarray} for all but finitely many $n$ and some $C \in (0,\infty)$. 
Since $\sigma_i^2>0$ for all $i$, we have from (\ref{0601f}) that $h(s_n^2)=n$; thus by (\ref{0531b}) and (\ref{0601d}) we have that, a.s., \begin{eqnarray*} \max_{1 \leq i \leq n} \sum_{j=1}^i X_j \geq \max_{1 \leq i \leq n} \sum_{j=1}^i W_j - C \log n \geq C' (s_n^2)^{1/2} a(s_n^2) - C \log n ,\end{eqnarray*} for some $C,C' \in(0,\infty)$, for all but finitely many $n$. Then by the conditions on $s_n^2$ and $a(\cdot)$, (\ref{0210bb}) follows. $\square$\\ The next two lemmas will be needed for some more delicate estimates (e.g.~in the proof of Theorem \ref{thm8}) where we need to deal with certain moving sums. The following lemma is a corollary to a result of Hirsch \cite{hirsch}. \begin{lemma} \label{lbd1} Let $X_1,X_2,\ldots$ be independent, uniformly bounded random variables with $E[X_i]=0$, ${\mathrm{Var}}[X_i] =\sigma^2 \in (0,\infty)$ for $i \in \mathbb{N}$. For $x \geq 1$, let $b(x)$ be a nondecreasing, integer-valued function such that for some $\beta >0$ and $x_0 \in (0,\infty)$, $x^\beta \leq b(x) \leq x$ for all $x \geq x_0$. Then for any ${\varepsilon}>0$, a.s., \[ \max_{1 \leq i \leq n} \max_{1 \leq j \leq b(i)} \sum_{k=i-j+1}^i X_k \geq (b(n/2))^{1/2} (\log n)^{-1-{\varepsilon}},\] for all but finitely many $n$. \end{lemma} \noindent \textbf{Proof. } For fixed $i$, note that \[ \max_{1 \leq j \leq b(i)} \sum_{k=i-j+1}^i X_k \stackrel{{\rm d}}{=} \max_{1 \leq j \leq b(i)} \sum_{k=1}^j Y_k,\] where $Y_1,Y_2,\ldots$ are independent random variables with $Y_k \stackrel{{\rm d}}{=} X_{i+1-k}$ for each $k$. Fix ${\varepsilon}>0$. Let $E_i$ denote the event \[ E_i := \left\{ \max_{1 \leq j \leq b(i)} \sum_{k=i-j+1}^i X_k \leq (b(i))^{1/2} (\log b(i))^{-1-{\varepsilon}} \right\}.\] Then Corollary 1 of Hirsch \cite{hirsch} implies that there are absolute constants $C, C' \in (0,\infty)$ such that for all $i \geq x_0$, \[ P [ E_i ] \leq C (\log b(i))^{-1-{\varepsilon}} \leq C' (\log i)^{-1-{\varepsilon}},\] since $b(i) \geq i^\beta$. 
Consider the subsequence $i=2^m$ for $m=1,2,\ldots$. Then \[ \sum_{m=1}^\infty P [E_{2^m}] \leq C \sum_{m=1}^\infty m^{-1-{\varepsilon}} < \infty.\] Hence by the (first) Borel-Cantelli lemma, a.s., there is a finite $m_0$ (with $2^{m_0} \geq x_0$) such that, for all $i=2^m$ with $m \geq m_0$, \[ \max_{1 \leq j \leq b(i)} \sum_{k=i-j+1}^i X_k \geq (b(i))^{1/2} (\log b(i))^{-1-{\varepsilon}} \geq (b(i))^{1/2} (\log i)^{-1-{\varepsilon}},\] since $b(i) \leq i$. Each $n \geq 2$ satisfies $n \in [2^m,2^{m+1})$ for some $m \in \mathbb{N}$; then, a.s., \begin{eqnarray*} \max_{1 \leq i \leq n} \max_{1 \leq j \leq b(i)} \sum_{k=i-j+1}^i X_k \geq \max_{1 \leq i \leq 2^m} \max_{1 \leq j \leq b(i)} \sum_{k=i-j+1}^i X_k \\ \geq \max_{1 \leq j \leq b(2^m)} \sum_{k=2^m-j+1}^{2^m} X_k \geq (b(2^m))^{1/2} (\log (2^m))^{-1-{\varepsilon}},\end{eqnarray*} provided $m \geq m_0$. Hence, since $n \geq 2^m > n/2$, a.s., \[ \max_{1 \leq i \leq n} \max_{1 \leq j \leq b(i)} \sum_{k=i-j+1}^i X_k \geq (b(n/2))^{1/2} (\log n)^{-1-{\varepsilon}},\] for all $n \geq 2^{m_0}$. $\square$ \begin{lemma} \label{ubd1} Let $X_1,X_2,\ldots$ be independent, uniformly bounded random variables with $E [X_i]=0$ for all $i\in\mathbb{N}$. Then there exists $C \in (0,\infty)$ such that, a.s., for all but finitely many $i$ \[ \left| \sum_{k=i-j+1}^i X_k \right| \leq C j^{1/2} (\log i)^{1/2},\] for all $j=1,2,\ldots,i$. \end{lemma} \noindent \textbf{Proof. } For fixed $i$, $Y^i_j := \sum_{k=i-j+1}^i X_k$ is a martingale over $j=1,2,\ldots,i$, with uniformly bounded increments. Hence the Azuma-Hoeffding inequality (see e.g.~\cite{hoef}) implies that for some $B \in (0,\infty)$, for all $j=1,\ldots,i$, for $t>0$, \[ P [ |Y^i_j| \geq t ] \leq 2 \exp ( -B^{-1} j^{-1} t^2 ).\] Thus for a suitable $C<\infty$, for $j \leq i$, $P [ |Y^i_j | \geq C j^{1/2} (\log i)^{1/2} ] \leq i^{-3}$. 
Then \[ \sum_{i=1}^\infty \sum_{j=1}^i P [ | Y^i_j| \geq C j^{1/2} (\log i)^{1/2} ] \leq \sum_{i=1}^\infty i^{-2} < \infty.\] Hence the (first) Borel-Cantelli lemma implies that, a.s., there are only finitely many pairs $(i,j)$ (with $j \leq i$) for which $|Y^i_j| \geq C j^{1/2} (\log i)^{1/2}$. $\square$ \subsection{Hitting times results} \label{hitting} For the proofs of our main results, we will use the expected hitting times for the random walk $\eta_t (\omega)$ as defined at (\ref{1006bb}). For the remainder of this section, we work in the quenched setting (i.e.~with fixed environment $\omega = (p_0,p_1,\ldots)$ throughout). For $0 \leq m < n$, let $\tau_{m,n}$ denote the time when $\eta_t(\omega)$ first hits $n$, starting from $m$. That is, with the convention $\min \emptyset = +\infty$, \begin{eqnarray} \label{1001a} \tau_{m,n} := \min \{ t \geq 0: \eta_t(\omega) = n | \eta_0(\omega)=m \}.\end{eqnarray} For our proofs in Section \ref{secprfs}, we take $\eta_0(\omega)=r=0$ for ease of exposition; the proofs easily extend to general $r \in \mathbb{Z}^+$. For fixed $\omega$, let $T(0):=0$, and for $n\in\mathbb{N}$ let $T(n):=E[ \tau_{0,n}]$. For $i=0,1,2,\ldots$, write $\Delta_i:=T(i+1)-T(i)=E[\tau_{i,i+1}]$, so that $\Delta_i$ is the expected time taken for $\eta_t(\omega)$ to hit $i+1$, starting at $i$. Then standard arguments yield $T(n) = \sum_{i=0}^{n-1} \Delta_i$ with $\Delta_0=1/q_0$ and for $i \geq 1$ \[ \Delta_i = 1+ p_i ( \Delta_{i-1} + \Delta_i).\] We then obtain the following classical result. \begin{lemma} \label{lemexp} Let $\omega$ be fixed. For $n \in \mathbb{N}$, we have that $T(n) = \sum_{i=0}^{n-1} \Delta_i$, and for $i \geq 0$, $\Delta_i$ is given (with the convention that an empty product is 1) by \begin{eqnarray} \label{0420b} \Delta_i & = & \sum_{j=0}^i q_{i-j}^{-1} \prod_{k=i-j+1}^{i} \frac{p_k}{q_k} = \frac{1}{q_i} + \frac{p_i}{q_i q_{i-1}} + \cdots + \frac{p_i p_{i-1} \cdots p_1}{ q_i q_{i-1} \cdots q_1 q_0} . 
\end{eqnarray} \end{lemma} The following fact will be very useful: for a fixed environment, $T(\eta_t (\omega))$ is a submartingale with respect to the natural filtration (for a closely related supermartingale, see \cite{cmp}, equation (6)). In particular, we have the following. \begin{lemma} \label{1002a} For fixed $\omega$, any $t \in \mathbb{Z}^+$ and any $n\in \mathbb{Z}^+$, \begin{eqnarray} \label{1002b} E [ T( \eta_{t+1}(\omega)) - T(\eta_t(\omega)) | \eta_t(\omega) = n ] = 1.\end{eqnarray} \end{lemma} \noindent \textbf{Proof. } For $n \geq 1$, we have \begin{eqnarray*} & & E [ T( \eta_{t+1}(\omega)) - T(\eta_t(\omega)) | \eta_t(\omega) = n ] \\ & = & p_n (T(n-1)-T(n)) +q_n(T(n+1)-T(n)) \\ & = & q_n \Delta_n - p_n \Delta_{n-1} = 1,\end{eqnarray*} by (\ref{0420b}). Also, \[ E [ T( \eta_{t+1}(\omega)) - T(\eta_t(\omega)) | \eta_t(\omega) = 0 ] = q_0 T(1) = 1,\] since $T(1)=\Delta_0=1/q_0$. $\square$\\ We can now state the result that will be our main tool in proving almost sure upper and lower bounds for $\eta_t(\omega)$, using the expected hitting times $T(n)$. \begin{lemma} \label{1002e} For a given environment $\omega$, suppose that there exist two nonnegative, increasing, continuous functions $g$ and $h$ such that \[ g(n) \leq T(n) \leq h(n) ,\] for all $n \in \mathbb{Z}^+$. Then: \begin{itemize} \item[(i)] For any ${\varepsilon}>0$, a.s., for all but finitely many $t$, \begin{eqnarray} \label{001} \eta_t(\omega) \leq g^{-1} ( (2t)^{1+{\varepsilon}} ). \end{eqnarray} \item[(ii)] A.s., for infinitely many $t$, \begin{eqnarray} \label{003} (\eta_t(\omega))^2 h( \eta_t(\omega)) \geq t. \end{eqnarray} \end{itemize} \end{lemma} \noindent \textbf{Remark. } In the transient case we want to do better (for Theorem \ref{thm8}) than part (ii) here, to give a lower bound for $\eta_t(\omega)$ that holds all but finitely often.
See the proof of Theorem \ref{thm8} below.\\ \noindent {\bf Proof of Lemma \ref{1002e}.} Throughout we work in fixed environment $\omega$. First we prove part (i). From (\ref{1002b}), we have that for any $t \in \mathbb{Z}^+$ \[ E [ T(\eta_{t+1}(\omega)) - T(\eta_{t}(\omega))] = \sum_{n=0}^\infty P[ \eta_t(\omega)=n] = 1.\] Then, given that $\eta_0(\omega)=0$, for all $t \in \mathbb{Z}^+$ we have \begin{eqnarray} \label{0210j} E[T(\eta_t(\omega))] = t. \end{eqnarray} To prove (\ref{001}), we modify the idea of the proof of Theorem 3.2 of \cite{cmp}. Since $T(\eta_t(\omega))$ is a nonnegative submartingale (see Lemma \ref{1002a}), Doob's submartingale inequality (see e.g.~\cite{williams}, p.~137) implies that, for $t>0$, for any ${\varepsilon}>0$, \begin{eqnarray} \label{1002d} P \left[ \max_{0 \leq s \leq t} T(\eta_s(\omega)) \geq t^{1+{\varepsilon}} \right] \leq t^{-1-{\varepsilon}} E[ T(\eta_t(\omega)) ] = t^{-{\varepsilon}}, \end{eqnarray} using (\ref{0210j}). Also, given that $T(n) \geq g(n)$ for all $n$, we have, for $t>0$, \begin{eqnarray} \label{1002c} P \left[ \max_{0 \leq s \leq t} T(\eta_s(\omega)) \geq t^{1+{\varepsilon}} \right] \geq P \left[ \max_{0 \leq s \leq t} g(\eta_s(\omega)) \geq t^{1+{\varepsilon}} \right] \nonumber\\ = P \left[ g \left( \max_{0 \leq s \leq t} \eta_s(\omega) \right) \geq t^{1+{\varepsilon}} \right] ,\end{eqnarray} since $g$ is increasing. 
Hence from (\ref{1002d}) and (\ref{1002c}), for $t>0$, \[ P \left[ \max_{0 \leq s \leq t} \eta_s(\omega) \geq g^{-1} ( t^{1+{\varepsilon}}) \right] \leq t^{-{\varepsilon}} .\] Thus along the subsequence $t=2^m$ for $m=0,1,2,\ldots$, the (first) Borel-Cantelli lemma implies that, a.s., the event in the last display occurs only finitely often, and in particular there exists $m_0 <\infty$ such that for all $m \geq m_0$ \[ \max_{0 \leq s \leq 2^m} \eta_s(\omega) \leq g^{-1} ( (2^m)^{1+{\varepsilon}}).\] Every $t$ sufficiently large has $2^m \leq t < 2^{m+1}$ for some $m \geq m_0$; then, a.s., \[ \eta_t (\omega) \leq \max_{0 \leq s \leq t} \eta_s(\omega) \leq \max_{0 \leq s \leq 2^{m+1}} \eta_s(\omega) \leq g^{-1} ( (2^{m+1})^{1+{\varepsilon}}) ,\] for all but finitely many $t$. Now since $2^{m+1} \leq 2t$ and $g^{-1}$ is increasing, (\ref{001}) follows. Now we prove part (ii). Recall the definition of $\tau_{0,n}$ at (\ref{1001a}). By Markov's inequality, we have that for $n \in \mathbb{N}$ \[ P [ \tau_{0,n} > n^2 T(n) ] = P [ \tau_{0,n} > n^2 E[ \tau_{0,n}] ] \leq n^{-2} .\] Then, by the (first) Borel-Cantelli lemma, a.s., $\tau_{0,n} > n^2 T(n)$ for only finitely many $n$. Thus, given that $T(n) \leq h(n)$ for all $n$, we have that a.s., for all but finitely many $n$, $\tau_{0,n} \leq n^2 h(n)$. Given $\omega$, $\eta_t(\omega)$ is an irreducible Markov chain on $\mathbb{Z}^+$, hence $\limsup_{t \to \infty} \eta_t (\omega) = +\infty$ a.s. Thus a.s.~there exists an infinite subsequence of $\mathbb{N}$, $t_1,t_2,t_3,\ldots$ (one can take, for each $i$, $t_i=\tau_{0,i}$, the time of the first visit of $\eta_t$ to $i$), such that $\eta_{t_i}(\omega) \to \infty$ as $i \to \infty$. Hence, a.s., \[ t_i \leq \eta_{t_i}(\omega)^2 h(\eta_{t_i}(\omega)) .\] There are infinitely many such $t_i$, and so we have (\ref{003}).
$\square$ \section{Proofs of main results} \label{secprfs} To prove our main results, we employ the machinery given in the previous section: we obtain, via the results in Section \ref{strng}, suitable functions $g$, $h$ such that $g(n) \leq T(n) \leq h(n)$ (for a.e.~$\omega$), and then apply Lemma \ref{1002e}. We consider $T(n)$ as given in Lemma \ref{lemexp}. Recalling the definition of $\Delta_i$ at (\ref{0420b}), we can write (interpreting an empty sum as zero) for $i \geq 0$ \begin{eqnarray} \label{1001c} \Delta_i = \sum_{j=0}^i q_{i-j}^{-1} \exp \sum_{k=i-j+1}^i \log ( p_k / q_k ) .\end{eqnarray} The following result gives general bounds on $T(n)$. \begin{lemma} \label{lowerbd} For a fixed environment $\omega$, for all $n \geq 1$ \begin{eqnarray} \label{ffff} T(n) \geq \exp \max_{1 \leq i \leq n-1} \sum_{k=1}^i \log (p_k/q_k) , \end{eqnarray} and for some $C\in(0,\infty)$, for all $n \geq 1$, \begin{eqnarray} \label{eeee} T(n) \leq C n^2 \exp \left( \max_{0 \leq i \leq n-1} \sum_{k=1}^i \log (p_k/q_k) + \max_{0 \leq i \leq n-1} \sum_{k=1}^i (-\log (p_k/q_k)) \right) .\end{eqnarray} \end{lemma} \noindent \textbf{Proof. } Since a sum of nonnegative terms is bounded below by its largest term, \begin{eqnarray} \label{ab1} T(n) = \sum_{i=0}^{n-1} \Delta_i \geq \max_{1 \leq i \leq n-1} \Delta_i \geq \max_{1 \leq i \leq n-1} \max_{1 \leq j \leq i} \exp \sum_{k=i-j+1}^i \log (p_k/q_k) ,\end{eqnarray} using (\ref{1001c}) and the fact that $q_{i-j}^{-1} \geq 1$. Now for $i \in \mathbb{N}$ \begin{eqnarray} \label{ab2} \max_{1 \leq j \leq i} \sum_{k=i-j+1}^i \log (p_k/q_k) \geq \sum_{k=1}^i \log (p_k/q_k),\end{eqnarray} so that by (\ref{ab1}) and (\ref{ab2}), \[ T(n) \geq \max_{1 \leq i \leq n-1} \exp \max_{1 \leq j \leq i} \sum_{k=i-j+1}^i \log (p_k/q_k) \geq \max_{1 \leq i \leq n-1} \exp \sum_{k=1}^i \log (p_k/q_k),\] and the lower bound in the lemma follows. 
For the upper bound, we have from (\ref{1001c}) that \begin{eqnarray} \label{ab10} T(n) \leq n \max_{0 \leq i \leq n-1} \Delta_i \leq \delta^{-1} n(n+1) \max_{0 \leq i \leq n-1} \max_{0 \leq j \leq i} \exp \sum_{k=i-j+1}^i \log (p_k/q_k) ,\end{eqnarray} since $q^{-1}_{i-j} \leq \delta^{-1}$ with $\delta$ as at (\ref{ue}). Now \begin{eqnarray} \label{ab20} \max_{0 \leq j \leq i} \sum_{k=i-j+1}^i \log (p_k/q_k) = \sum_{k=1}^i \log (p_k/q_k) + \max_{0 \leq j \leq i} \sum_{k=1}^{i-j} (-\log (p_k/q_k)) \nonumber\\ = \sum_{k=1}^i \log (p_k/q_k) + \max_{0 \leq j \leq i} \sum_{k=1}^{j} (-\log (p_k/q_k)).\end{eqnarray} Thus from (\ref{ab10}) and (\ref{ab20}), for $C \in (0,\infty)$ and all $n \geq 1$ \begin{eqnarray*} T(n) \leq C n^2 \exp \left( \max_{0 \leq i \leq n-1} \sum_{k=1}^i \log (p_k/q_k) + \max_{0 \leq i \leq n-1} \max_{0 \leq j \leq i} \sum_{k=1}^{j} (-\log (p_k/q_k)) \right) .\end{eqnarray*} Then the upper bound in the lemma follows. $\square$\\ We start with the proof of Theorem \ref{thm0} for expository purposes. The proof of Theorem \ref{thm0} will then serve as a prototype for subsequent proofs. As previously mentioned, we take $\eta_0(\omega)=0$ for the purposes of the proofs that follow (without loss of generality). \subsection{Proof of Theorem \ref{thm0}} \label{secprf1} For fixed $\omega$, by Lemma \ref{lemexp}, the expected hitting time $T(n)$ is expressed in terms of $\log (p_n/q_n)$. 
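As an aside (nothing in the proofs depends on it), the closed form (\ref{0420b}) and the identity $q_n \Delta_n - p_n \Delta_{n-1} = 1$ used in the proof of Lemma \ref{1002a} admit a quick numerical sanity check. The following Python sketch (the environment below is an arbitrary illustrative choice) compares (\ref{0420b}) with the defining recursion $\Delta_0 = 1/q_0$, $\Delta_i = 1 + p_i ( \Delta_{i-1} + \Delta_i)$:

```python
import random

def deltas_closed_form(p):
    """Delta_i via (0420b): sum_{j=0}^{i} q_{i-j}^{-1} prod_{k=i-j+1}^{i} p_k/q_k."""
    q = [1.0 - x for x in p]
    out = []
    for i in range(len(p)):
        total, prod = 0.0, 1.0  # prod carries prod_{k=i-j+1}^{i} p_k/q_k
        for j in range(i + 1):
            total += prod / q[i - j]
            prod *= p[i - j] / q[i - j]
        out.append(total)
    return out

def deltas_recursion(p):
    """Delta_0 = 1/q_0, and Delta_i = 1 + p_i (Delta_{i-1} + Delta_i),
    rearranged as Delta_i = (1 + p_i Delta_{i-1}) / q_i."""
    q = [1.0 - x for x in p]
    out = [1.0 / q[0]]
    for i in range(1, len(p)):
        out.append((1.0 + p[i] * out[i - 1]) / q[i])
    return out

random.seed(1)
env = [random.uniform(0.2, 0.8) for _ in range(25)]  # arbitrary environment (p_0, ..., p_24)
closed, recur = deltas_closed_form(env), deltas_recursion(env)
assert all(abs(a - b) <= 1e-9 * b for a, b in zip(closed, recur))
# the identity q_n Delta_n - p_n Delta_{n-1} = 1 behind Lemma 1002a:
q = [1.0 - x for x in env]
assert all(abs(q[n] * closed[n] - env[n] * closed[n - 1] - 1.0) <= 1e-9 * closed[n]
           for n in range(1, len(env)))
```

Agreement of the two computations is exactly the statement that (\ref{0420b}) solves the recursion of Lemma \ref{lemexp}.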
To prepare for the proof, we note that under the conditions of Theorem \ref{thm0} $p_n$ and $q_n$ have the same distribution, so \begin{eqnarray} \label{0608a} {\mathbb{E}} [ \log (p_n/q_n) ] = {\mathbb{E}} [ \log p_n]-{\mathbb{E}} [\log q_n] = 0.\end{eqnarray} By (\ref{1003a}), Taylor's theorem and the boundedness of the $Y_n$, for a.e.~$\omega$, \begin{eqnarray*} \log p_n & = & \log (1/2) + \log ( 1+ 2Y_n n^{-\alpha} ) \\ & = & \log(1/2) + 2Y_n n^{-\alpha} -2Y_n^2 n^{-2\alpha} + O(n^{-3\alpha}) ,\end{eqnarray*} for all $n$ sufficiently large, and \begin{eqnarray*} \log q_n & = & \log (1/2) + \log ( 1 - 2Y_n n^{-\alpha} ) \\ & = & \log(1/2) - 2Y_n n^{-\alpha} -2Y_n^2 n^{-2\alpha} + O(n^{-3\alpha}) ,\end{eqnarray*} so that \begin{eqnarray} \label{0601s} \log (p_n/q_n) = \log p_n - \log q_n = 4Y_n n^{-\alpha} + O(n^{-3\alpha}).\end{eqnarray} Lemma \ref{lem0920} below gives bounds for the expected hitting time $T(n)$, and so prepares us for the proof of Theorem \ref{thm0} via an application of Lemma \ref{1002e}. \begin{lemma} \label{lem0920} Suppose ${\mathbb{P}}[\xi_1=1/2]=1$, $Y_1 \stackrel{{\rm d}}{=} -Y_1$, $\sigma^2 \in (0, \infty)$, and $\alpha \in (0,1/2)$. Then for a.e.~$\omega$, for any ${\varepsilon}>0$, for all but finitely many $n$, \begin{eqnarray} \label{0920e} \exp ( n^{(1-2\alpha)/2} (\log n)^{-1} (\log \log n)^{-1-{\varepsilon}}) \leq T(n) \nonumber \\ \leq \exp ( n^{(1-2\alpha)/2} (\log \log n)^{(1/2)+{\varepsilon}} ).\end{eqnarray} \end{lemma} \noindent \textbf{Proof. } From (\ref{0608a}), ${\mathbb{E}}[\log ( p_k / q_k )]=0$ and from (\ref{0601s}) ${\mathrm{Var}}[\log ( p_k / q_k )]=16\sigma^2 k^{-2\alpha} + o(k^{-2\alpha})$. Hence, for $\alpha \in (0,1/2)$, for all $i$, \begin{eqnarray} \label{1002f} C_1 i^{1-2\alpha} \leq {\mathrm{Var}} \sum_{k=1}^i \log ( p_k / q_k ) \leq C_2 i^{1-2\alpha},\end{eqnarray} for some $C_1, C_2 \in (0,\infty)$ with $C_1 < C_2$. Now we derive the lower bound in (\ref{0920e}). 
By Lemma \ref{lem0601} and (\ref{1002f}), for an appropriate choice of $a(\cdot)$ satisfying the conditions of Lemma \ref{lem0601}, for a.e.~$\omega$, a.s., \begin{eqnarray} \label{ab3} \max_{1 \leq i \leq n-1} \sum_{k=1}^i \log(p_k/q_k) \geq C n^{(1-2\alpha)/2} a( n^{1-2\alpha}),\end{eqnarray} for all but finitely many $n$. For ${\varepsilon}>0$, we take $a(n) = ( \log n )^{-1}(\log \log n)^{-1-{\varepsilon}}$; then $a(\cdot)$ satisfies the conditions of Lemma \ref{lem0601}. Then (\ref{ffff}) and (\ref{ab3}) imply the lower bound in (\ref{0920e}). Now we prove the upper bound in (\ref{0920e}), using (\ref{eeee}). By Lemma \ref{itlog} with (\ref{1002f}) we have that for a.e.~$\omega$, a.s., for all but finitely many $n$, \begin{eqnarray*} \max_{0 \leq i \leq n-1} \sum_{k=1}^i \log (p_k/q_k) < C n^{(1-2\alpha)/2} (\log \log{n})^{1/2}, \\ \max_{0 \leq i \leq n-1} \sum_{k=1}^i (-\log (p_k/q_k)) < C n^{(1-2\alpha)/2} (\log \log{n})^{1/2}, \end{eqnarray*} for some $C\in(0,\infty)$. Thus from (\ref{eeee}) we obtain the upper bound in (\ref{0920e}). $\square$\\ \noindent {\bf Proof of Theorem \ref{thm0}.} First we prove part (i) of Theorem \ref{thm0}. From the lower bound in (\ref{0920e}), we have that, for a.e.~$\omega$, there exists a finite positive constant $C$ (depending on $\omega$) such that, for any ${\varepsilon}>0$, for all $n$ sufficiently large, \begin{eqnarray} \label{0601h} T(n) \geq g(n) := C \exp \left( n^{(1-2\alpha)/2} ( \log n )^{-1}(\log \log n)^{-1-{\varepsilon}} \right). \end{eqnarray} So by (\ref{001}), we have that, for a.e.~$\omega$, for any ${\varepsilon}>0$, a.s., \[ \eta_t (\omega) \leq g^{-1} ( 4t^2) \leq C ((\log t) (\log \log t)^{1+{\varepsilon}})^{2/(1-2\alpha)}, \] for all but finitely many $t$, which gives (\ref{0210a}). Now we prove part (ii).
From the upper bound in (\ref{0920e}), we have that, for any ${\varepsilon}>0$, \[ T(n) \leq h(n) := C \exp ( n^{(1-2\alpha)/2} (\log \log n)^{(1/2)+{\varepsilon}}), \] so that, for all $n$ sufficiently large, \begin{eqnarray} \label{uuu} h^{-1} (n) \geq C (\log n)^{2/(1-2\alpha)} (\log \log \log n)^{-(1+3{\varepsilon})/(1-2\alpha)}.\end{eqnarray} From (\ref{003}) we have that a.s., for infinitely many $t$, \[ h(\eta_t(\omega)) \geq t (\eta_t(\omega))^{-2} \geq C t (\log t)^{-5/(1-2\alpha)},\] by (\ref{0210a}). Thus a.s., for infinitely many $t$, $\eta_t(\omega) \geq h^{-1} ( C t (\log t)^{-5/(1-2\alpha)} )$, which with (\ref{uuu}) yields (\ref{0920c}). $\square$ \subsection{Proofs of Theorems \ref{thm5} and \ref{thm8}} \label{secprf2} To prove Theorems \ref{thm5} and \ref{thm8}, we proceed along the same lines as the proof of Theorem \ref{thm0} in Section \ref{secprf1}, and apply Lemma \ref{1002e}. Theorem \ref{thm8} (the transient case) requires some extra work, both to obtain suitable bounds for $T(n)$ and to prove that the lower bound on the random walk holds all but finitely often. Suppose ${\mathbb{E}}[\zeta_1]=0$, $s^2 \in (0,\infty)$, $\sigma^2 \in [0,\infty)$. Then for a.e.~$\omega$ \begin{eqnarray} \label{0708q} \log \left( \frac{p_n}{q_n} \right) = \zeta_n + \log \left( 1+ \frac{Y_n}{\xi_n} n^{-\alpha}\right) - \log \left( 1 - \frac{Y_n}{1-\xi_n} n^{-\alpha}\right),\end{eqnarray} for all $n \geq n_0$ for a finite absolute constant $n_0$, where $\zeta_i$, $i\in\mathbb{N}$, as defined at (\ref{0520b}) are i.i.d.~with ${\mathbb{E}}[\zeta_1]=0$ and ${\mathrm{Var}}[\zeta_1] \in (0,\infty)$. It follows from (\ref{0708q}) and Taylor's theorem that, for a.e.~$\omega$, for all $n$ sufficiently large, \begin{eqnarray} \label{0427f} \log ( p_n / q_n ) = \zeta_n + Z_n n^{-\alpha}+O(n^{-2\alpha}),\end{eqnarray} where $Z_i$, $i\in\mathbb{N}$, are i.i.d.~with ${\mathbb{E}}[Z_1]=\lambda$ (see (\ref{0520b}) and (\ref{0520bx})). 
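In passing, the expansion (\ref{0427f}) is easy to check numerically in a simplified deterministic setting: taking $\xi_n \equiv \xi$ and $Y_n \equiv y$ constant, the first-order coefficient read off from (\ref{0708q}) is $(y/\xi) + y/(1-\xi) = y/(\xi(1-\xi))$, and the remainder is $O(n^{-2\alpha})$. A minimal Python sketch (the values $\xi = 0.3$, $y = 0.2$, $\alpha = 0.4$ are arbitrary):

```python
import math

xi, y, alpha = 0.3, 0.2, 0.4       # arbitrary illustrative values
zeta = math.log(xi / (1.0 - xi))   # zeta_n when xi_n = xi is constant
z = y / xi + y / (1.0 - xi)        # first-order coefficient read off from (0708q)

def log_ratio(n):
    """log(p_n / q_n) for p_n = xi + y n^{-alpha}, q_n = 1 - p_n."""
    p = xi + y * n ** (-alpha)
    return math.log(p / (1.0 - p))

# the remainder log(p_n/q_n) - zeta - z n^{-alpha} is O(n^{-2 alpha}):
for n in (10, 100, 1000, 10000):
    err = log_ratio(n) - zeta - z * n ** (-alpha)
    assert abs(err) < 0.5 * n ** (-2.0 * alpha)
```

The constant $0.5$ here is simply a comfortable margin for these particular parameter values; the second-order Taylor coefficient is $-y^2/(2\xi^2) + y^2/(2(1-\xi)^2) \approx -0.181$.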
Then by (\ref{0427f}) \begin{eqnarray} \label{0439a} {\mathbb{E}}[\log(p_k/q_k)] = \lambda k^{-\alpha} + O(k^{-2\alpha}), {\mathrm{Var}}[\log(p_k/q_k)] = {\mathrm{Var}}[ \zeta_1 ] + O(k^{-\alpha}) .\end{eqnarray} \begin{lemma} \label{lem0430a} Suppose ${\mathbb{E}}[\zeta_1]=0$, $s^2 \in (0,\infty)$, $\sigma^2 \in [0, \infty)$, $\lambda<0$, and $\alpha \in (0,1/2)$. For a.e.~$\omega$ and any ${\varepsilon}>0$, for all but finitely many $n$, \begin{eqnarray} \label{0427e} \exp ( n^\alpha (\log n)^{-2-{\varepsilon}} ) \leq T(n) \leq \exp ( n^\alpha ( \log n)^{1+{\varepsilon}} ).\end{eqnarray} \end{lemma} \noindent \textbf{Proof. } First we prove the upper bound in (\ref{0427e}). Since $\lambda<0$, we have from (\ref{0439a}) that \begin{eqnarray} \label{tt1} {\mathbb{E}} \sum_{k=i-j+1}^i \log(p_k/q_k) \leq - C ( i^{1-\alpha} - (i-j)^{1-\alpha} ) ,\end{eqnarray} for some $C \in (0,\infty)$. Taylor's theorem implies that for $\alpha \in (0,1)$ \begin{eqnarray} \label{tt2} i^{1-\alpha} - (i-j)^{1-\alpha} = C j i^{-\alpha} (1- \theta (j/i) )^{-\alpha} ,\end{eqnarray} for some $C \in (0,\infty)$ and $\theta \in (0,1)$. Thus it follows from (\ref{tt1}) and (\ref{tt2}) that for all $i \in \mathbb{N}$, and all $j =1,2,\ldots, i$ \begin{eqnarray} \label{0429q} {\mathbb{E}} \sum_{k=i-j+1}^i \log(p_k/q_k) \leq -C j i^{-\alpha} , \end{eqnarray} for some $C\in (0,\infty)$. By Lemma \ref{ubd1} we have that, for some $C \in (0,\infty)$, for a.e.~$\omega$, all but finitely many $i$, and all $j=1,2,\ldots,i$, \begin{eqnarray} \label{0429p} \sum_{k=i-j+1}^i (\log(p_k/q_k)-{\mathbb{E}}[\log(p_k/q_k)]) \leq C j^{1/2} (\log i)^{1/2}. \end{eqnarray} Suppose ${\varepsilon}>0$. 
Then from (\ref{0429p}) with (\ref{0429q}), for a.e.~$\omega$, for $j \geq \lceil i^{2\alpha} (\log i)^{1 + {\varepsilon}}\rceil$ \begin{eqnarray} \label{0430a} \sum_{k=i-j+1}^i \log(p_k/q_k) \leq -C ji^{-\alpha}+C' j^{1/2} (\log i)^{1/2} \leq -C'' ji^{-\alpha},\end{eqnarray} and, for $j \leq \lceil i^{2\alpha} (\log i)^{1 + {\varepsilon}}\rceil$ \begin{eqnarray} \label{0430b} \sum_{k=i-j+1}^i \log(p_k/q_k) \leq C j^{1/2} (\log i )^{1/2},\end{eqnarray} where each inequality holds for all but finitely many $i$. So from (\ref{1001c}), (\ref{0430a}) and (\ref{0430b}) we obtain, for a.e.~$\omega$, for any ${\varepsilon}>0$, for all but finitely many $i$, \begin{eqnarray*} \Delta_i & \leq & \sum_{j=0}^{\lceil i^{2\alpha} (\log i)^{1+{\varepsilon}} \rceil} \exp (C j^{1/2} ( \log i)^{1/2}) + \sum_{j=\lceil i^{2\alpha} ( \log{i})^{1+{\varepsilon}}\rceil}^i \exp( -C' j i^{-\alpha}) \\ & \leq & \exp ( C'' i^{\alpha} (\log i)^{1+{\varepsilon}} ),\end{eqnarray*} for $C'' \in (0,\infty)$. Then the upper bound for $T(n)$ in (\ref{0427e}) follows. We now prove the lower bound in (\ref{0427e}). For ${\varepsilon}>0$ set $k_{\varepsilon} (1):=1$ and for $i >1$ define \begin{eqnarray} \label{1002w} k_{\varepsilon} (i) := \lfloor i^{2 \alpha} (\log i)^{-2-{\varepsilon}} \rfloor .\end{eqnarray} Then, for any $\alpha \in (0,1/2]$ and all $n$ sufficiently large, from (\ref{ab1}), \begin{eqnarray} \label{45a} T(n) \geq \max_{1\leq i \leq n-1} \max_{1 \leq j \leq k_{\varepsilon} (i) } \exp \sum_{k=i-j+1}^i \log (p_k/q_k).\end{eqnarray} Then (\ref{0708q}) and Taylor's theorem imply that there is a constant $C\in (0,\infty)$ such that, for all $k$, $\log(p_k/q_k) = \zeta_k + W_k k^{-\alpha}$, where $|W_k| < C$. Thus for $i \in \mathbb{N}$ and $j=1,2,\ldots,i$, \[ \sum_{k=i-j+1}^i \log (p_k/q_k) \geq \sum_{k=i-j+1}^i \zeta_k - C \sum_{k=i-j+1}^i k^{-\alpha} \geq \sum_{k=i-j+1}^i \zeta_k - C' j i^{-\alpha},\] again using Taylor's theorem (cf (\ref{tt2})). 
Hence by (\ref{45a}) \begin{eqnarray} \label{pp1} T(n)\geq \exp \left( \max_{1\leq i \leq n-1} \max_{1 \leq j \leq k_{\varepsilon} (i) } \sum_{k=i-j+1}^i \zeta_k - C k_{\varepsilon} (n) n^{-\alpha}\right).\end{eqnarray} By Lemma \ref{lbd1}, we have that for any ${\varepsilon}>0$, for a.e.~$\omega$, \[ \max_{1\leq i \leq n-1} \max_{1 \leq j \leq k_{\varepsilon} (i) } \sum_{k=i-j+1}^i \zeta_k \geq ( k_{\varepsilon} (n/2))^{1/2} ( \log n)^{-1-({\varepsilon}/4)} \geq C n^{\alpha} (\log n)^{-2-(3{\varepsilon}/4)},\] for all but finitely many $n$, while $k_{\varepsilon} (n) n^{-\alpha} \leq n^{\alpha} (\log n)^{-2-{\varepsilon}}$. Hence (\ref{pp1}) implies the lower bound in (\ref{0427e}). $\square$ \begin{lemma} Suppose ${\mathbb{E}}[\zeta_1]=0$, $s^2 \in (0,\infty)$, $\sigma^2 \in[0,\infty)$, and $\lambda \neq 0$. \begin{itemize} \item[(i)] Suppose $\alpha > 1/2$. For a.e.~$\omega$ and any ${\varepsilon}>0$, for all but finitely many $n$, \begin{eqnarray} \label{0920d} \exp ( n^{1/2} (\log n)^{-1-{\varepsilon}}) \leq T(n) \leq \exp ( n^{1/2} (\log \log n)^{(1/2)+{\varepsilon}} ).\end{eqnarray} \item[(ii)] Suppose $\alpha = 1/2$. For a.e.~$\omega$ and any ${\varepsilon}>0$, for all but finitely many $n$, \begin{eqnarray} \label{0920dd} \exp ( n^{1/2} (\log n)^{-2-{\varepsilon}}) \leq T(n) \leq \exp ( n^{1/2} (\log \log n)^{(1/2)+{\varepsilon}} ).\end{eqnarray} \end{itemize} \end{lemma} \noindent \textbf{Proof. } To prove the upper bounds in (\ref{0920d}) and (\ref{0920dd}), we apply (\ref{eeee}). For $\lambda \neq 0$, $\alpha \geq 1/2$ we have from (\ref{0439a}) that \[\sum_{k=1}^i {\mathbb{E}}[\log(p_k/q_k)] = O( \max \{ i^{1-\alpha} , \log i \} ),\] so that for some $C \in (0,\infty)$ and all $n$ \begin{eqnarray} \label{ccc1} \max_{0 \leq i \leq n} \sum_{k=1}^i \log (p_k/q_k) \leq \max_{0 \leq i \leq n} \sum_{k=1}^i (\log (p_k/q_k) -{\mathbb{E}}[\log(p_k/q_k)]) +C \max \{ n^{1-\alpha} , \log n \} ;\end{eqnarray} similarly for the second maximum in (\ref{eeee}). 
By Lemma \ref{itlog} and (\ref{0439a}), for a.e.~$\omega$, \[ \max_{0 \leq i \leq n} \sum_{k=1}^i (\log (p_k/q_k) -{\mathbb{E}}[\log(p_k/q_k)]) \leq C n^{1/2} (\log\log n)^{1/2} ,\] for all but finitely many $n$, and since $\alpha \geq 1/2$, (\ref{ccc1}) then implies that for a.e.~$\omega$, \[ \max_{0 \leq i\leq n} \sum_{k=1}^i \log (p_k/q_k) \leq C n^{1/2} (\log \log n)^{1/2},\] for all but finitely many $n$, and similarly for the second maximum in (\ref{eeee}). Then (\ref{eeee}) gives the upper bounds in (\ref{0920d}) and (\ref{0920dd}). Now we prove the lower bounds in (\ref{0920d}) and (\ref{0920dd}). In the case $\alpha > 1/2$, \[ \max_{1 \leq i \leq n-1} \sum_{k=1}^i \log (p_k/q_k) \geq \max_{1 \leq i \leq n-1} \sum_{k=1}^i (\log (p_k/q_k) -{\mathbb{E}}[ \log (p_k/q_k) ]) - C\max \{ n^{1-\alpha} , \log n \} ,\] by a similar argument to (\ref{ccc1}). Lemma \ref{lem0601} implies that for any ${\varepsilon}>0$, for a.e.~$\omega$, \[ \max_{1 \leq i \leq n-1} \sum_{k=1}^i (\log (p_k/q_k) -{\mathbb{E}}[ \log (p_k/q_k) ]) \geq n^{1/2} (\log n)^{-1-{\varepsilon}},\] for all but finitely many $n$; then (\ref{ffff}) implies the lower bound in (\ref{0920d}). Finally, suppose $\alpha=1/2$. Once more we define $k_{\varepsilon}(i)$ by (\ref{1002w}), and follow the argument for (\ref{pp1}). This yields the lower bound in (\ref{0920dd}). $\square$\\ \noindent {\bf Proof of Theorem \ref{thm8}.} For the upper bound in (\ref{0427c}), the lower bound in (\ref{0427e}) implies that, for a.e.~$\omega$, for any ${\varepsilon}>0$ there exists $C\in (0,\infty)$ such that \[ T(n) \geq g(n) := C \exp (n^\alpha (\log n)^{-2-{\varepsilon}}),\] for all $n$ sufficiently large. Then (\ref{001}) gives, for a.e.~$\omega$, for any ${\varepsilon}>0$, a.s., \[ \eta_t(\omega) \leq g^{-1} (4t^2) \leq C (\log t)^{1/\alpha} (\log \log t)^{(2+2{\varepsilon})/\alpha}, \] for all but finitely many $t$. Then the upper bound in (\ref{0427c}) follows. We now want to obtain the lower bound in (\ref{0427c}). 
Recalling the proof of Lemma \ref{1002e}(ii), we were able to show that, along a sequence of first hitting times for the random walk, these times were not too large. This gave us a lower bound that was valid infinitely often. In order to extend this technique to the transient case, and obtain a lower bound valid {\em all but finitely} often, we show in addition that (roughly speaking), in the present case, the time of the last visit of the random walk to a site is not too much greater than the first hitting time. For fixed $\omega$, let $a_n$ denote the probability that the random walk $\eta_t(\omega)$ hits $n$ in finite time, given that it starts at $2n$. For $n \geq 1$ define \begin{eqnarray} \label{ooo1} M_n := 1+ \sum_{j=1}^\infty \prod_{k=1}^{j} \frac{p_{n+k}}{q_{n+k}} = 1+ \sum_{j=1}^\infty \exp \sum_{k=1}^{j} \log \left( \frac{p_{n+k}}{q_{n+k}}\right) .\end{eqnarray} Standard hitting probability arguments yield $a_0=1$, and for $n \geq 1$, if $M_n < \infty$, \begin{eqnarray} \label{ooo2} a_n = M_n^{-1} \sum_{j=n}^\infty \prod_{k=1}^{j} \frac{p_{n+k}}{q_{n+k}} = M_n^{-1} \sum_{j=n}^\infty \exp \sum_{k=1}^{j} \log \left( \frac{p_{n+k}}{q_{n+k}}\right). \end{eqnarray} In the present case ($\lambda<0$, $\alpha \in (0,1/2)$), (\ref{0439a}) holds. Thus for $n, j \in \mathbb{N}$ \begin{eqnarray} \label{ss1} {\mathbb{E}} \sum_{k=1}^j \log (p_{n+k}/q_{n+k}) \leq -C ( (n+j)^{1-\alpha} - n^{1-\alpha} ) ,\end{eqnarray} for some $C \in (0,\infty)$. Here, by Taylor's theorem, for $\alpha \in (0,1)$, \begin{eqnarray} \label{ss3} (n+j)^{1-\alpha} - n^{1-\alpha} = C j (n+\theta j)^{-\alpha} ,\end{eqnarray} for some $C \in (0,\infty)$, $\theta \in (0,1)$. In particular, for $j \geq n$, (\ref{ss1}) and (\ref{ss3}) imply \begin{eqnarray} \label{ss2} {\mathbb{E}} \sum_{k=1}^j \log (p_{n+k}/q_{n+k})\leq - Cj (j (1+\theta))^{-\alpha} \leq -C' j^{1-\alpha},\end{eqnarray} for $C' \in (0,\infty)$. 
Also, by the Azuma-Hoeffding inequality and an argument similar to Lemma \ref{ubd1}, we have that, for a.e.~$\omega$, \[ \sum_{k=1}^j (\log(p_{n+k}/q_{n+k}) -{\mathbb{E}} [ \log(p_{n+k}/q_{n+k})]) \leq C j^{1/2} (\log (jn) )^{1/2} ,\] for all but finitely many $(n,j)$. Thus for all $(n,j)$ we have that, for a.e.~$\omega$, \begin{eqnarray} \label{ss5} \sum_{k=1}^j \log(p_{n+k}/q_{n+k}) \leq C j^{1/2} (\log (jn) )^{1/2},\end{eqnarray} for some $C \in (0,\infty)$. However, we have from (\ref{ss2}) that, for a.e.~$\omega$, there are constants $C,C',C'' \in (0,\infty)$ such that, for all $n\in \mathbb{N}$, and $j \geq n$ \begin{eqnarray} \label{ss4} \sum_{k=1}^j \log(p_{n+k}/q_{n+k}) \leq - C j^{1-\alpha} + C' j^{1/2} ( \log {j} )^{1/2} \leq -C'' j^{1-\alpha},\end{eqnarray} since $\alpha \in (0,1/2)$. Hence, for a.e.~$\omega$, from (\ref{ooo1}), (\ref{ss5}), and (\ref{ss4}), for $n \in \mathbb{N}$, \begin{eqnarray*} M_n \leq \sum_{j=1}^{n} \exp ( C j^{1/2} (\log (jn) )^{1/2} ) + \sum_{j=n}^\infty \exp ( -C' j^{1-\alpha} ) \\ \leq \exp ( C'' n^{1/2} (\log n)^{1/2} ) < \infty.\end{eqnarray*} Further, since $M_n \geq 1$ for all $n$, (\ref{ooo2}) and (\ref{ss4}) imply, for a.e.~$\omega$, for all $n\in \mathbb{N}$, \[ a_n \leq \sum_{j=n}^\infty \exp ( -C j^{1-\alpha} ) \leq \exp ( -C' n^{1-\alpha} ),\] for some $C' \in (0,\infty)$. Thus, for a.e.~$\omega$, $\sum_n a_n <\infty$. The (first) Borel-Cantelli lemma then implies that, for a.e.~$\omega$, a.s., for only finitely many sites $n$ does $\eta_t (\omega)$ return to $n$ after visiting $2n$. Denoting by $\ell_n$ the time of the last visit of $\eta_t(\omega)$ to $n$, we then have that $\ell_n \leq \tau_{0,2n}$ a.s.~for all but finitely many $n$. Suppose $T(n) \leq h(n)$ for all $n$. Following the proof of Lemma \ref{1002e}(ii), we have that a.s., for all but finitely many $n$, $\tau_{0,n} \leq n^2 h(n)$. 
Thus for a.e.~$\omega$, a.s., \begin{eqnarray} \label{bbx} \ell_n \leq \tau_{0,2n} \leq 4n^2 h(2n),\end{eqnarray} for all $n \geq n_0$ for some finite $n_0$ (depending on $\omega$). Moreover, since, for a.e.~$\omega$, $\eta_t(\omega)$ is transient, we have that, for a.e.~$\omega$, a.s., $\eta_t(\omega) \geq n_0$ for all $t$ sufficiently large. Hence from (\ref{bbx}), using the fact that $\ell_{\eta_t(\omega)} \geq t$ for all $t$, we have that for a.e.~$\omega$, a.s., for all but finitely many $t$, \[ t \leq \ell_{\eta_t(\omega)} \leq 4 \eta_t(\omega)^2 h(2 \eta_t(\omega)). \] Then, with the upper bound in (\ref{0427e}), we obtain, for a.e.~$\omega$, for any ${\varepsilon}>0$, a.s., \[ t < \exp ( \eta_t(\omega)^\alpha (\log \eta_t(\omega))^{1+{\varepsilon}} ) ,\] for all but finitely many $t$. This implies the lower bound in (\ref{0427c}). $\square$\\ \noindent {\bf Proof of Theorem \ref{thm5}.} We first prove part (i). Suppose $\alpha>1/2$. From the lower bound on $T(n)$ in (\ref{0920d}), for a.e.~$\omega$, for any ${\varepsilon}>0$, \begin{eqnarray*} T(n) \geq g(n) := C \exp (n^{1/2} (\log n)^{-1-{\varepsilon}}),\end{eqnarray*} for all $n \in \mathbb{N}$. Then (\ref{001}) implies the upper bound in (\ref{0210ac}). For part (ii), when $\alpha=1/2$, the lower bound in (\ref{0920dd}) allows us, this time, to take $g(n) := C \exp (n^{1/2} (\log n)^{-2-{\varepsilon}})$. Then (\ref{001}) gives the upper bound in (\ref{0210acd}). For part (iii) of the theorem, for $\alpha \geq 1/2$, the upper bound on $T(n)$ in (\ref{0920d}) and (\ref{0920dd}) implies that for a.e.~$\omega$ \[T(n) \leq h(n) := C\exp ( n^{1/2} (\log\log n)^{(1/2)+{\varepsilon}} ),\] for all but finitely many $n$; in particular $h^{-1}$ satisfies the lower bound of (\ref{uuu}) with $\alpha =0$. Then (\ref{003}) yields the lower bound in (\ref{0920a}). 
$\square$ \subsection{Proofs of Theorems \ref{thm21} and \ref{thm20}} \label{secprf2a} We now move on to the ergodic cases (Theorems \ref{thm21} and \ref{thm20}). Again we start by bounding $T(n)$. First we deal with the ergodic case of the random perturbation of the simple random walk. \begin{lemma} \label{lem33} Suppose ${\mathbb{P}}[\xi_1=1/2]=1$, ${\mathbb{E}}[Y_1]>0$, $\sigma^2 \in(0,\infty)$, and $\alpha \in (0,1)$. Then for a.e.~$\omega$, as $n \to \infty$ \begin{eqnarray*} T(n) = \exp \left( \frac{4 {\mathbb{E}}[Y_1]}{1-\alpha} n^{1-\alpha} [1+o(1)] \right). \end{eqnarray*} \end{lemma} \noindent \textbf{Proof. } In this case, (\ref{0601s}) holds. We apply a variation of the argument for Lemma \ref{ubd1}. We have that for each $i$ \[ Y^i_j := \sum_{k=i-j+1}^i ( \log(p_k/q_k)- {\mathbb{E}}[\log(p_k/q_k)])\] is a martingale over $j=1,2,\ldots,i$, with increments $|Y^i_{j}-Y^i_{j-1} |$ bounded by \[ |\log(p_{i-j+1}/q_{i-j+1}) | + | {\mathbb{E}}[\log(p_{i-j+1}/q_{i-j+1})] | \leq C (i-j+1)^{-\alpha} =: c_j^i ,\] for some $C \in (0,\infty)$, by (\ref{0601s}). Thus for each $j \leq i$, for $\alpha \in (0,1)$, \[ \sum_{k=1}^j (c_k^i)^2 = C \sum_{k=1}^j (i-k+1)^{-2\alpha} \leq C' i^{1-\alpha}.\] Then for each $i$ and $j \leq i$ the Azuma-Hoeffding inequality implies that \[ {\mathbb{P}} [ |Y^i_j |\geq t ]\leq 2 \exp ( - C t^2 i^{\alpha-1} ),\] for all $t>0$. Hence for any ${\varepsilon}>0$, the Borel-Cantelli lemma implies that \[ \max_{1 \leq j \leq i} |Y_j^i | \leq i^{((1-\alpha)/2)+{\varepsilon}},\] for all but finitely many $i$. 
Also, from (\ref{0601s}), \[ {\mathbb{E}} \sum_{k=i-j+1}^i \log (p_k/q_k) = \frac{4 {\mathbb{E}}[Y_1]}{1-\alpha} \left( i^{1-\alpha} - (i-j)^{1-\alpha} \right) [1+o(1)].\] Hence for all $i$ sufficiently large, since ${\varepsilon}>0$ was arbitrary and $\alpha\in (0,1)$ \begin{eqnarray} \label{gg1} \sum_{k=i-j+1}^i \log (p_k/q_k) = \frac{4 {\mathbb{E}}[Y_1]}{1-\alpha} \left( i^{1-\alpha} - (i-j)^{1-\alpha} \right)[1+o(1)] + o(i^{1-\alpha}).\end{eqnarray} Thus from (\ref{1001c}) and (\ref{gg1}), as $i\to\infty$, \begin{eqnarray*} \Delta_i & = & \exp \left( \frac{4 {\mathbb{E}}[Y_1]}{1-\alpha} i^{1-\alpha} [1+o(1)] \right) \sum_{j=0}^i \exp \left( -Cj^{1-\alpha}[1+o(1)] \right) \\ & = & \exp \left( \frac{4 {\mathbb{E}}[Y_1]}{1-\alpha} i^{1-\alpha} [1+o(1)] \right),\end{eqnarray*} from which the lemma follows. $\square$\\ \noindent {\bf Proof of Theorem \ref{thm20}.} Once again we apply Lemma \ref{1002e}. First we prove the lower bound. From Lemma \ref{lem33} we have that for a.e.~$\omega$, for all $n$ \[ T(n) \geq g(n) := \exp \left( \frac{4 {\mathbb{E}}[Y_1]}{1-\alpha} n^{1-\alpha} [1+o(1)] \right).\] It follows that \[ g^{-1} (n) = \left( \frac{1-\alpha}{4 {\mathbb{E}}[Y_1]} \right)^{1/(1-\alpha)} (\log n)^{1/(1-\alpha)} [1+o(1)].\] Then (\ref{001}) implies that a.s., for all but finitely many $t$, for any ${\varepsilon}>0$ \begin{eqnarray*} \eta_t(\omega) \leq g^{-1} ( (2t)^{1+{\varepsilon}}) \\ = (1+{\varepsilon})^{1/(1-\alpha)} \left( \frac{1-\alpha}{4 {\mathbb{E}}[Y_1]} \right)^{1/(1-\alpha)} (\log t)^{1/(1-\alpha)} [1+o(1)],\end{eqnarray*} and thus we obtain the upper bound in the theorem. 
On the other hand, Lemma \ref{lem33} implies that for a.e.~$\omega$, any ${\varepsilon}>0$, and all $n$ \[ T(n) \leq h(n) := \exp \left( \frac{4 {\mathbb{E}}[Y_1]}{1-\alpha} n^{1-\alpha} [1+o(1)] \right).\] Then (\ref{003}) implies that, a.s., for infinitely many $t$, \[ t \leq (\eta_t (\omega))^2 h( \eta_t(\omega)) \leq \exp \left( \frac{4 {\mathbb{E}}[Y_1]}{1-\alpha} (\eta_t (\omega))^{1-\alpha} [1+o(1)] \right),\] from which the lower bound in the theorem follows. $\square$\\ Now we deal with the ergodic case of the perturbation of Sinai's regime. \begin{lemma} \label{lem34} Suppose ${\mathbb{E}}[\zeta_1]=0$, $ s^2 \in (0,\infty)$, $\lambda>0$, $\sigma^2 \in [0,\infty)$, and $\alpha \in (0,1/2)$. Then for a.e.~$\omega$, as $n \to \infty$ \begin{eqnarray*} T(n) = \exp \left( \frac{\lambda}{1-\alpha} n^{1-\alpha} [1+o(1)] \right). \end{eqnarray*} \end{lemma} \noindent \textbf{Proof. } In this case, we have that (\ref{0439a}) holds (now with $\lambda>0$). Thus \[ {\mathbb{E}} \sum_{k=i-j+1}^i \log (p_k /q_k) = \frac{\lambda}{1-\alpha} \left( i^{1-\alpha} -(i-j)^{1-\alpha} \right) [1+o(1)] .\] Now we can apply Lemma \ref{ubd1} to obtain for a.e.~$\omega$, for all but finitely many $i$, \[ \left| \sum_{k=i-j+1}^i (\log (p_k /q_k) -{\mathbb{E}}[ \log (p_k /q_k)]) \right| \leq C j^{1/2} (\log i)^{1/2},\] for $j=1,2,\ldots,i$. Since $\alpha<1/2$ we have that for a.e.~$\omega$, as $i \to \infty$ \begin{eqnarray} \label{gg2} \sum_{k=i-j+1}^i \log (p_k /q_k) = \frac{\lambda}{1-\alpha} \left( i^{1-\alpha} -(i-j)^{1-\alpha} \right) [1+o(1)] .\end{eqnarray} Hence, from (\ref{1001c}) and (\ref{gg2}), as $i \to \infty$, \begin{eqnarray*} \Delta_i & = & \exp \left(\frac{\lambda}{1-\alpha} i^{1-\alpha} [1+o(1)] \right) \sum_{j=0}^i \exp \left( -C j^{1-\alpha} [1+o(1)] \right)\\ & = & \exp \left(\frac{\lambda}{1-\alpha} i^{1-\alpha} [1+o(1)] \right), \end{eqnarray*} and so the lemma follows. 
$\square$\\ \noindent {\bf Proof of Theorem \ref{thm21}.} The proof follows in a similar way to the above proof of Theorem \ref{thm20}, this time using the bounds in Lemma \ref{lem34} and applying Lemma \ref{1002e} once more. $\square$ \subsection{Proof of Theorem \ref{thm3}} \label{prf3} We now prove Theorem \ref{thm3}. Once more, with the definition of $\zeta_n$ and $Z_n$ at (\ref{0520b}), we have that for a.e.~$\omega$ and $n$ sufficiently large, $\log(p_n/q_n)$ is given by (\ref{0708q}), (\ref{0427f}). In this case ${\mathbb{E}}[\zeta_1]=0$ and $Y_1/\xi_1 \stackrel{{\rm d}}{=} -Y_1/(1-\xi_1)$, which implies that ${\mathbb{E}}[\log(p_n/q_n)]=0$ for all $n$ sufficiently large. \begin{lemma} \label{lem0920q} Suppose ${\mathbb{E}}[\zeta_1]=0$, $s^2 \in (0,\infty)$, $Y_1/\xi_1 \stackrel{{\rm d}}{=} -Y_1/(1-\xi_1)$, $\sigma^2\in[0, \infty)$, and $\alpha>0$. For a.e.~$\omega$, for any ${\varepsilon}>0$, for all but finitely many $n$, \begin{eqnarray} \label{0920g} \exp( n^{1/2} (\log n)^{-1-{\varepsilon}} ) \leq T(n) \leq \exp ( n^{1/2} (\log \log n)^{(1/2)+{\varepsilon}} ).\end{eqnarray} \end{lemma} \noindent \textbf{Proof. } We apply Lemma \ref{lowerbd}. We have that (\ref{0427f}) and (\ref{0439a}) hold in this case. For the upper bound, consider (\ref{eeee}). By Lemma \ref{itlog} we have that for a.e.~$\omega$, for all but finitely many $n$, \begin{eqnarray*} \max_{0 \leq i \leq n-1} \sum_{k=1}^i \log (p_k/q_k) < C n^{1/2} (\log \log n )^{1/2}, \end{eqnarray*} for some $C\in(0,\infty)$, and similarly for the second maximum in (\ref{eeee}). Then (\ref{eeee}) implies the upper bound in (\ref{0920g}). For the lower bound, we use (\ref{ffff}). We apply Lemma \ref{lem0601} with $a(x)= (\log x)^{-1-{\varepsilon}}$ to obtain, for a.e.~$\omega$, for any ${\varepsilon}>0$ \[ \max_{1 \leq i \leq n-1} \sum_{k=1}^i \log (p_k/q_k) \geq n^{1/2} (\log n)^{-1-{\varepsilon}},\] for all but finitely many $n$. With (\ref{ffff}), the lower bound in (\ref{0920g}) follows. 
$\square$\\ \noindent {\bf Proof of Theorem \ref{thm3}.} Again the proof is very similar to that of Theorems \ref{thm0} and \ref{thm5}, this time using Lemma \ref{lem0920q} and Lemma \ref{1002e}. $\square$ \subsection{Proofs of Theorems \ref{thm9} and \ref{thm10}} Finally, we prove the results on the stationary distribution in the ergodic cases given in Theorems \ref{thm9} and \ref{thm10}. Given $\omega$, suppose $\eta_t(\omega)$ is ergodic; then there exists a unique stationary distribution $(\pi_0, \pi_1, \pi_2, \ldots)$. It is straightforward to obtain the result (see, for example, Lemma 5 of \cite{mw}) that, for a given $\omega$ such that $\eta_t(\omega)$ is ergodic, there exists a constant $C\in(0,\infty)$ such that, for all $n \geq 2$, \begin{eqnarray} \label{0922e} \pi_n = C \prod_{k=1}^{n} \frac{q_k}{p_k} = C \exp \left( \sum_{k=1}^n \log (q_k/p_k) \right). \end{eqnarray} \noindent {\bf Proof of Theorem \ref{thm9}.} Here we have that $\log(p_n/q_n)$ is given by (\ref{0427f}), with $\lambda>0$, $\alpha \in (0,1/2)$ and ${\mathbb{E}}[\zeta_1]=0$. In this case the $j=i=n$ case of (\ref{gg2}) implies that \[ \sum_{k=1}^n \log (q_k/p_k) = -\sum_{k=1}^n \log (p_k/q_k) = - \frac{\lambda}{1-\alpha} n^{1-\alpha} [1+o(1)],\] as $n\to \infty$. Then (\ref{0922e}) yields (\ref{0922a}). $\square$\\ \noindent {\bf Proof of Theorem \ref{thm10}.} This time we have that $\log(p_n/q_n)$ is given by (\ref{0601s}), where now ${\mathbb{E}}[Y_1]>0$ and $\alpha \in (0,1)$. In this case the $j=i=n$ case of (\ref{gg1}) implies that \[ \sum_{k=1}^n \log (q_k/p_k) =- \sum_{k=1}^n \log (p_k/q_k) = - \frac{4{\mathbb{E}}[Y_1]}{1-\alpha} n^{1-\alpha} [1+o(1)],\] as $n \to \infty$. Then (\ref{0922e}) yields (\ref{0922c}). $\square$ \begin{center} \textbf{Acknowledgements} \end{center} Some of this work was done when AW was at the University of Durham, supported by an EPSRC doctoral training account, and subsequently at the University of Bath. 
We are grateful to Serguei Popov for useful discussions, and to an anonymous referee for a careful reading of an earlier version of this paper.
\section{Introduction\label{sec:Introduction}} The Higgs mechanism~\cite{Higgs} has been introduced into the Standard Model~\cite{Weinberg,Salam,Glashow} to explain electroweak symmetry breaking and the masses of the fundamental particles. In its simplest form the Standard Model requires a single observable neutral boson, H. The search for the Higgs boson has been one of the main motivations for the construction of the Large Hadron Collider (LHC). The theoretical properties of the Standard Model Higgs boson have been extensively studied~\cite{Hhunters}. Its production mechanisms, couplings and most of the major decays are well understood. The mass of the Higgs boson remains the only free parameter. The lower limit on the mass, $m_{H}>114.4$~GeV at 95\% CL, has been established in the direct searches done by the LEP experiments~\cite{pdg}. The global fits to the numerous data on electroweak processes show a strong preference for a low Higgs mass; the current fits give $m_{H}<186$~GeV at 95\% CL~\cite{pdg}. A low-mass Higgs will decay predominantly to a pair of fermions or a pair of bosons. In the LHC experiments such decays will have to be disentangled from the copious background from QCD processes. One of the most promising channels for its observation at the LHC is the decay $H\to\gamma\gamma$. In the low Higgs mass range, this decay has a relatively small branching fraction but is also expected to have a low background rate. It has been used as a benchmark in the optimization of the ATLAS and CMS detectors and in estimates of the discovery potential~\cite{tdr}. \begin{figure}[h] \begin{center} \includegraphics[width=7cm]{HiggsDiagram2.eps} \end{center} \caption{The Feynman diagram for the Higgs decay with internal conversion. 
\label{fig:Feyn}} \end{figure} In this note we point out that the Higgs decay into two photons may proceed via the internal conversion process analogous to the Dalitz decay of a neutral pion~\cite{KrollWada, Miyazaki} (see Fig.~\ref{fig:Feyn}$\,$). Here, internal conversion refers to the decay of a virtual photon, $\gamma^*$, to a pair of fermions, where the virtual photon mass can range up to the mass of the Higgs. In contrast to the case of a neutral pion, the choice of the fermion type is thus not limited to electrons only but includes all charged leptons and all quarks allowed by the kinematics. The running effective coupling of the virtual photon to the fermion pair has to be evaluated at the mass of the virtual photon, $q$~\cite{Barger}: $$\alpha_{eff}(q^2) = \frac{\alpha_0}{1-\frac{\alpha_0}{3\pi}\sum_{i}e_{i}^2\,\Theta(q^2-4m_{i}^2)\,\ln\left(\frac{q^2}{4m_{i}^2}\right)},$$ \noindent where $\alpha_0=1/137$ and $m_{i}$ denotes the mass of fermion $i$ in the Callan-Symanzik beta function. The easiest way to evaluate the rate for such internal conversions is to calculate the ratio of the Higgs decay rate to a photon and a virtual photon to the decay rate to a pair of real photons, $$\rho = \frac{\Gamma(H\to\gamma\gamma^*)}{\Gamma(H\to\gamma\gamma)},$$ where $\Gamma$ denotes the partial decay width. In this ratio the terms due to the loop integration cancel out. The value of $\rho$ is given by $$\rho = \frac{4}{3\pi} \int_{2m_{f}}^{m_{H}} \alpha_{eff} (q^2) \left( 1- \frac{q^2}{m_{H}^2} \right)^3 \left( 1- \frac{4m_{f}^2}{q^2} \right)^{1/2} \left( 1+ \frac{2m_{f}^2}{q^2} \right) \frac{dq}{q} $$ where $m_{f}$ is the final state fermion mass. The corresponding partial width for each channel is $\Gamma_{i} = \rho\times Br(H\to\gamma\gamma)\times\Gamma_{tot},$ where $\Gamma_{tot}$ denotes the total width. The results for three values of the Higgs mass are listed in Table~\ref{tab:ht} and illustrated in Fig.~\ref{fig:BrFr}. 
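The ratio $\rho$ can be checked by straightforward numerical integration of the formula above. The sketch below (in Python) does this for one channel; the fermion masses, the hadronic thresholds used for the light quarks, and the midpoint quadrature in $\ln q$ are illustrative assumptions, not the code used for the paper's numbers.

```python
# Sketch: numerical evaluation of rho = Gamma(H -> gamma gamma*) / Gamma(H -> gamma gamma)
# for a given fermion channel, following the formulas in the text.
from math import exp, log, pi, sqrt

ALPHA0 = 1.0 / 137.0

# (squared charge e_i^2, mass m_i in GeV) entering the running of alpha_eff;
# the light quarks are assigned their lowest hadronic thresholds (assumed values).
FERMIONS = [(1.0, 0.000511), (1.0, 0.1057), (1.0, 1.777),       # e, mu, tau
            (4.0/9, 0.1396), (1.0/9, 0.1396), (1.0/9, 0.4937),  # u, d, s
            (4.0/9, 1.27), (1.0/9, 4.18)]                       # c, b

def alpha_eff(q2):
    """One-loop running QED coupling at scale q^2 (leading-log vacuum polarization)."""
    s = sum(e2 * log(q2 / (4 * m * m)) for e2, m in FERMIONS if q2 > 4 * m * m)
    return ALPHA0 / (1.0 - ALPHA0 * s / (3.0 * pi))

def rho(m_f, m_h, n=20000):
    """Midpoint rule in t = ln q for the dq/q integral defining rho."""
    a, b = log(2.0 * m_f), log(m_h)
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        q = exp(a + (i + 0.5) * h)
        q2 = q * q
        total += (alpha_eff(q2) * (1.0 - q2 / m_h**2)**3
                  * sqrt(max(0.0, 1.0 - 4.0 * m_f**2 / q2))
                  * (1.0 + 2.0 * m_f**2 / q2))
    return 4.0 * total * h / (3.0 * pi)

print(rho(0.1057, 120.0))  # muon channel at m_H = 120 GeV
```

With these inputs the muon channel comes out near $\rho\simeq0.017$, in the ballpark of the value quoted in Table~\ref{tab:ht}; the electron channel is roughly twice as large because of its larger logarithmic range.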
\begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Higgs mass &\multicolumn{2}{|c|}{$m_H=120$~GeV} & \multicolumn{2}{|c|}{$m_H=150$~GeV} & \multicolumn{2}{|c|}{$m_H=180$~GeV} \\ \hline Channel & $\rho$ & Branching fraction & $\rho$ & Branching fraction & $\rho$ & Branching fraction \\ \hline $H\to e^+e^-\gamma$ & 0.0333 & $71.38\times10^{-6}$ & 0.0340 & $47.12\times10^{-6}$ & 0.0346 & $3.4\times10^{-6}$ \\ $H\to \mu^+\mu^-\gamma$ & 0.0167 & $35.90\times10^{-6}$ & 0.0174 & $24.19\times10^{-6}$ & 0.0180 & $1.79\times10^{-6}$ \\ $H\to \tau^+\tau^-\gamma$ & 0.0078 & $16.77\times10^{-6}$ & 0.0086 & $11.81\times10^{-6}$ & 0.0091 & $0.91\times10^{-6}$ \\ $H\to u \bar{u}\gamma$ & 0.0211 & $45.36\times10^{-6}$ & 0.0220 & $30.58\times10^{-6}$ & 0.0229 & $2.28\times10^{-6}$ \\ $H\to d \bar{d}\gamma$ & 0.0053 & $11.39\times10^{-6}$ & 0.0055 & $7.64\times10^{-6}$ & 0.0057 & $0.57\times10^{-6}$ \\ $H\to s \bar{s}\gamma$ & 0.0040 & $8.38\times10^{-6}$ & 0.0042 & $5.83\times10^{-6}$ & 0.0044 & $0.44\times10^{-6}$ \\ $H\to c \bar{c}\gamma$ & 0.0123 & $26.44\times10^{-6}$ & 0.0132 & $18.35\times10^{-6}$ & 0.0140 & $1.39\times10^{-6}$ \\ $H\to b \bar{b}\gamma$ & 0.0018 & $3.87\times10^{-6}$ & 0.0020 & $2.78\times10^{-6}$ & 0.0022 & $0.22\times10^{-6}$ \\ \hline Total & 0.1022 & $219\times10^{-6}$ & 0.1070 & $148\times10^{-6}$ & 0.1110 & $11\times10^{-6}$ \\ \hline \end{tabular} \caption{Values of the ratio $\rho$ and branching fractions for the Higgs decays to two photons with single internal conversions. 
\label{tab:ht} } \end{center} \end{table} In this calculation we take into account the color factors for the quarks and the charge dependence of the couplings, and assume the lower limit of the mass integration to be equal to the lowest mass of a physical hadron produced in the decay, i.e., the pion mass for the $u$ and $d$ quarks and the kaon mass for the strange quark; we use the Particle Data Group values for the masses of the $c$ and $b$ quarks~\cite{pdg}. \begin{figure}[h] \begin{center} \includegraphics[width=10cm]{HiggsBranchingFrCompare.eps} \end{center} \caption{The shift in the $Br(H\to\gamma\gamma)$ due to the Dalitz decay correction. The dotted line represents the branching fraction without the Dalitz decay correction and the solid line takes into account the correction. \label{fig:BrFr}} \end{figure} As can be seen, in the region of interest to the LHC the total Dalitz decay rate of the neutral Higgs is about 10\% of the $H\to\gamma\gamma$ branching fraction. Future LHC experiments should include the corresponding correction in their respective Monte Carlo programs. Finally, we note that the Higgs Dalitz decay to fermions results in the same final states as the $H\to Z\gamma$ decay. The effects of the interference of the corresponding amplitudes have not yet been evaluated. \section*{Acknowledgments} We thank F.~Paige and W.J.~Marciano for valuable discussions and helpful comments. The work was supported by the U.S. Department of Energy under grant DE-FG03-95ER40908 and the Lightner-Sams Foundation.
\section{Introduction} The behavior of Quantum Chromodynamics (QCD), the gauge theory of the strong interaction, is determined by the magnitude of its coupling $\alpha_{\rm s}$. It is large at low momentum, characterized here by $Q \equiv \sqrt{-q^2}$ with $q^2$ the square of the momentum transferred in the process of electromagnetically probing a hadron. For $Q \ll 1$~GeV, $\alpha_{\rm s} (Q) \gtrsim 1$, which is one of the crucial pieces leading to quark confinement. For $Q \gg 1$~GeV, $\alpha_{\rm s} (Q) \lesssim 0.2$, which enables the use of perturbative computational techniques (perturbative QCD, pQCD) constituting an accurate analytical approximation of QCD. In this domain, $\alpha^{\rm pQCD}_{\rm s}$ is well defined and known within an accuracy of 1\% at $Q=M_{Z^0}=91$~GeV, the $Z^0$ mass, and within a few percent at $Q$ values of a few GeV~\cite{dEnterria:2022hzv}. However, using pQCD at $Q \lesssim 1$~GeV produces a diverging $\alpha^{\rm pQCD}_{\rm s}$ (Landau pole) that prohibits any perturbative expansion in $\alpha^{\rm pQCD}_{\rm s}$ and signals the breakdown of pQCD. In contrast, most nonperturbative methods, including lattice QCD~\cite{Zyla:2020zbs}, the AdS/CFT (Anti-de-Sitter/Conformal Field Theory) duality~\cite{Brodsky:2014yha, Dobado:2019fxe} implemented using QCD's light-front (LF) quantization~\cite{Dirac:1949cp} and a soft-wall AdS potential (Holographic LF QCD, HLFQCD~\cite{Brodsky:2003px}), or solving the Dyson-Schwinger equations (DSEs)~\cite{Maris:2003vk}, yield a finite $\alpha_{\rm s}$. In fact, many theoretical approaches predict that $\alpha_{\rm s}$ ``freezes'' as $Q \to 0$, {\it viz}, it loses its $Q$-dependence~\cite{Deur:2016tte}. There are several possible definitions of $\alpha_{\rm s}$ in the nonperturbative domain ($Q \lesssim 1$~GeV)~\cite{Deur:2016tte}. 
We use here the {\it effective charge} approach that defines $\alpha_{\rm s}$ from the perturbative series of an observable truncated to its first order in $\alpha_{\rm s}$~\cite{Grunberg:1980ja}. Although this definition can be applied at any $Q$ value, it was initially proposed for the pQCD domain where it makes $\alpha_{\rm s}$ the equivalent of the Gell-Mann Low coupling of Quantum Electrodynamics (QED), $\alpha$~\cite{GellMann:1954fq}. With this definition, $\alpha_{\rm s}$ can be evaluated at any $Q$ value, has no low-$Q$ divergence and is analytic around quark mass thresholds. Furthermore, since the first order in $\alpha^{\rm pQCD}_{\rm s}$ of a pQCD approximant is independent of the choice of renormalization scheme (RS), effective charges are independent of RS and gauge choices. This promotes $\alpha_{\rm s}$ from a parameter depending on chosen conventions to an observable, albeit with the caveat that it becomes process-dependent since two observables generally produce different effective charges. Yet, pQCD predictability is maintained because effective charges are related without renormalization scale ambiguities by Commensurate Scale Relations (CSR)~\cite{Brodsky:1994eh}. CSR are known to hold for pQCD and QED since the latter corresponds to the $N_C\to 0$ limit of QCD, with $N_C$ the number of colors. For example, CSR explicitly relate $\alpha_{g_1}$, $\alpha_{F_3}$, $\alpha_{\tau}$ and $\alpha_R$ defined using the generalized Bjorken sum rule~\cite{Bjorken:1966jh}, the Gross-Llewellyn Smith sum rule~\cite{Gross:1969jf}, and the perturbative approximants for the $\tau$-decay rate~\cite{Brodsky:2002nb} and $R_{e^+e^-}$~\cite{Gorishnii:1990vf}, respectively. In fact, the choice of process to define an effective charge is analogous to a RS choice for $\alpha^{\rm pQCD}_{\rm s}$~\cite{Deur:2014qfa}, and the procedure of extracting an effective charge, e.g., from $\tau$-decay is denoted the $\tau$-scheme. 
Here, we discuss the effective charge $\alpha_{g_1}(Q)$ ($g_1$-scheme) extracted using the generalized Bjorken sum rule: \begin{eqnarray} \Gamma_1^{\rm p-n}(Q^2 ) \equiv \int_0^{1^-} \left[ g_1^{\rm p}(x,Q^2)- g_1^{\rm n}(x,Q^2) \right] dx = \frac{g_{\rm A}}{6}\bigg[1-\frac{\alpha^{\rm pQCD}_{{\rm {s}}}(Q)} {\pi}-3.58\left(\frac{\alpha^{\rm pQCD}_{{\rm {s}}}(Q)}{\pi}\right)^2 \nonumber \\ -20.21\left(\frac{\alpha^{\rm pQCD}_{{\rm {s}}}(Q)}{\pi}\right)^{3} - 175.7\left(\frac{\alpha^{\rm pQCD}_{{\rm {s}}}(Q)}{\pi}\right)^{4}+\mathcal O\left(\big(\alpha^{\rm pQCD}_{\rm {s}} \big)^5\right) \bigg]+\sum_{n > 1} \frac{\mu_{2n}}{Q^{2n-2}}, \label{eq:genBj} \end{eqnarray} where $x$ is the Bjorken scaling variable~\cite{Bjorken:1968dy}, $g_{\rm A}=1.2762(5)$~\cite{Zyla:2020zbs} is the nucleon axial charge, $g_1^{\rm p(n)}$ is the longitudinal spin structure function of the proton (neutron) obtained in polarized lepton-nucleon scattering~\cite{Deur:2018roz} and $\mu_{2n}$ are the Operator Product Expansion's (OPE) nonperturbative higher twist (HT) terms. The integral excludes the elastic contribution at $x=1$. The series coefficients are computed for $n_f=3$ and in the $\overline{\rm MS}$ RS for the $n>1$ $\alpha_{\rm {s}}^n$ terms~\cite{Kataev:1994gd}. They originate from the pQCD radiative corrections. Although the expansion~(\ref{eq:genBj}) is only applicable in the perturbative domain, i.e., at distance scales where confinement effects are weak, the HT terms can be related to the latter~\cite{Burkardt:2008ps} and one may picture the terms of Eq.~(\ref{eq:genBj}) as coherently merging together at low $Q$ to produce confinement. The effective charge $\alpha_{g_1}$ is defined from Eq.~(\ref{eq:genBj}) expressed at first order in coupling and twist: \begin{eqnarray} \Gamma_1^{\rm p-n}(Q^2 ) \equiv \frac{g_{\rm A}}{6}\left(1-\frac{\alpha_{g_1}(Q)} {\pi} \right) ~\longrightarrow~ \alpha_{g_1}(Q) \equiv \pi \left(1-\frac{6}{g_{\rm A}}\Gamma_{1}^{\rm p-n}(Q) \right). 
\label{eqn:alphadef} \end{eqnarray} Thus, in the domain where Eq.~(\ref{eq:genBj}) applies, $\alpha_{g_1}$ can be interpreted as a running coupling that not only includes short-distance effects such as vertex correction and vacuum polarization, but all other effects, e.g., pQCD radiative corrections and, in the lower-$Q$ domain of pQCD, HT terms and other nonperturbative effects not formalized by the OPE and therefore not included in Eq.~(\ref{eq:genBj}). The latter come from coherent reactions of the hadron (resonances). In the nonperturbative domain where pQCD radiative corrections and HT effects have merged into global confinement effects, $\alpha_{g_1}$ may approximately retain its interpretation as a coupling if the contribution to $\Gamma_{1}^{\rm p-n}$ of nonresonant reactions continues to dominate, as it does at large $Q$~\cite{Deur:2009zy}. There are several advantages to $\alpha_{g_1}$~\cite{Deur:2016tte}. First, rigorous sum rules constrain $\alpha_{g_{1}}(Q)$ for $Q \to 0$ (the Gerasimov--Drell--Hearn (GDH) sum rule~\cite{Gerasimov:1965et}) and $Q \to \infty$ (the Bjorken sum rule). They provide analytical expressions of $\alpha_{g_{1}}(Q)$ in these limits (blue dashed line and cyan hatched band in Fig.~\ref{fig:alpha}). Furthermore, contributions from $\Delta$ baryons are quenched in $\Gamma_{1}^{\rm p-n}$~\cite{Burkert:2000qm}, enhancing the contribution of nonresonant reactions to $\Gamma_{1}^{\rm p-n}$ relative to the resonance contribution, which helps toward interpreting $\alpha_{g_1}$ as a coupling. If so, $\alpha_{g_1}$ would remain approximately equivalent to the Gell-Mann Low coupling in the nonperturbative domain, a crucial property that is not obvious and may be specific to $\alpha_{g_1}$. Such a property is supported by the agreement between $\alpha_{g_1}$ and calculations of couplings~\cite{Brodsky:2010ur, Binosi:2016nme} using a definition consistent with $\alpha_{g_1}$. 
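In practice, the extraction defined by Eq.~(\ref{eqn:alphadef}) is a one-line map from a measured Bjorken sum to the effective charge. A minimal sketch (the sample input value below is illustrative, not a measurement):

```python
# Sketch: effective charge alpha_g1 from a measured Bjorken sum,
# alpha_g1(Q) = pi * (1 - 6 * Gamma_1^{p-n}(Q) / g_A).
from math import pi

G_A = 1.2762  # nucleon axial charge

def alpha_g1(gamma1_pn):
    """Effective charge from the (elastic-excluded) Bjorken sum Gamma_1^{p-n}(Q)."""
    return pi * (1.0 - 6.0 * gamma1_pn / G_A)

def gamma1_pn(alpha):
    """Inverse map: Bjorken sum implied by a given effective charge."""
    return (G_A / 6.0) * (1.0 - alpha / pi)

# Kinematic constraint Gamma_1^{p-n}(0) = 0 gives the infrared value alpha_g1(0) = pi,
# while the pQCD limit Gamma_1^{p-n} -> g_A/6 gives alpha_g1 -> 0.
print(alpha_g1(0.0), alpha_g1(G_A / 6.0))
```

For instance, an effective charge of $\alpha_{g_1}\simeq3.06$, as measured at the lowest $Q$ points, corresponds to $\Gamma_1^{\rm p-n}\simeq0.005$, illustrating how small the measured sum is near the photon point.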
Former extractions of $\alpha_{g_1}$~\cite{Deur:2005cf} were obtained from experimental data on $\Gamma_{1}^{\rm p-n}$ from CERN~\cite{Adeva:1993km}, DESY~\cite{Airapetian:2000yk}, Jefferson Lab (JLab)~\cite{Deur:2004ti} and SLAC~\cite{Anthony:1996mw}, see Fig.~\ref{fig:alpha}. Since the results reported in Ref.~\cite{Deur:2005cf}, progress occurred on both the experimental and theoretical fronts. Firstly, when Ref.~\cite{Deur:2005cf} was published, the meaning of $\alpha_{g_1}$ in the nonperturbative region was unclear. Thus, the comparison in~\cite{Deur:2005cf} of $\alpha_{g_1}$ to theoretical predictions of the nonperturbative coupling was tentative. This is now better understood: as just discussed, $\alpha_{g_1}$ essentially retains its meaning of effective charge at low $Q$~\cite{Deur:2009zy, Deur:2016tte}. Secondly, new data on $\Gamma_{1}^{\rm p-n}$ have become available from CERN (COMPASS experiment)~\cite{Alekseev:2010hc} and JLab (EG1dvcs experiment)~\cite{Deur:2014vea} at high $Q$, and from JLab (E97110, E03006 and E05111 experiments)~\cite{Deur:2021klh} at very low $Q$. Finally, new theoretical studies of the nonperturbative behavior of $\alpha_{\rm s}$ were conducted, including the first use of the AdS/CFT duality to describe the strong coupling in its nonperturbative domain~\cite{Brodsky:2010ur} and the identification of a process-independent (PI) effective charge $\hat \alpha_{\rm PI}(Q)$ that unifies a large body of research from DSE and lattice QCD to $\alpha_s$~\cite{Binosi:2016nme, Rodriguez-Quintero:2018wma}. Connections between the nonperturbative and perturbative effective charges were made~\cite{Deur:2014qfa, Deur:2017cvd, Deur:2016tte}, which permitted a prediction of $\alpha_{\rm s}$ at the $Z_0$ pole, $\alpha_{\rm s}^{\overline{\rm MS}}(M_z^2)=0.1190\pm0.0006$ at N$^3$LO~\cite{Deur:2016opc} that agrees well with the 2021 Particle Data Group compilation, $\alpha_{\rm s} (M_{\rm Z}) = 0.1179\pm0.0009$~\cite{Zyla:2020zbs}. 
In addition to predicting quantities characterizing hadronic structures~\cite{Brodsky:2014yha, Binosi:2016nme, Cui:2020tdf}, the effective charge helps establish conformal behavior at low $Q$. Through AdS/CFT, this helps the investigation of physics beyond the standard model~\cite{Dobado:2019fxe} and of the quark-gluon plasma~\cite{Janik:2010we}, with applications of the latter to heavy ion collisions~\cite{Busza:2018rrf}, nuclear hydrodynamics~\cite{Florkowski:2017olj} and neutron stars~\cite{Jokela:2018ers}. \\ Here, we report on new experimental data on $\alpha_{g_1}$ extracted from~\cite{Alekseev:2010hc, Deur:2014vea, Deur:2021klh} and how they compare with the latest theory predictions. \section{Experimental extraction of $\alpha_{g_1}$} The new JLab data on $\Gamma_{1}^{\rm p-n}(Q)$ were taken by four experiments. The first experiment, E97110~\cite{Sulkosky:2019zmn}, occurred in Hall A~\cite{Alcorn:2004sb} of JLab. The three others used the CLAS spectrometer~\cite{CLAS:2003umf} in JLab's Hall B and were experiments EG1dvcs~\cite{Prok:2014ltt}, E03006~\cite{Zheng:2021yrn} and E05111~\cite{Adhikari:2017wox} (the two latter being referred to as Experimental Group EG4). The four experiments occurred during the 6 GeV era of JLab, before its 12 GeV upgrade. The experiments used a polarized electron beam with energies ranging from 0.8 to 6 GeV. E97110 studied the spin structures of the neutron and $^3$He using the Hall A polarized $^3$He target with longitudinal and transverse polarization directions~\cite{VINCETHESIS}. EG1dvcs, E03006 and E05111 studied the proton, neutron and deuteron spin structures using the Hall B longitudinally polarized ammonia (NH$_3$ or ND$_3$) target~\cite{Keith:2003ca}. The main purpose of EG1dvcs was exclusive measurements of Deep Virtual Compton Scattering at high $Q$, up to 2.65 GeV ($Q^2=7$~GeV$^2$). 
Therefore, it provided highly precise inclusive $\Gamma_{1}^{\rm p-n}$ data compared to the older data in the same domain~\cite{Adeva:1993km, Anthony:1996mw, Airapetian:2000yk, Deur:2004ti}. E97110, E03006 and E05111 were dedicated to testing Chiral Effective Field Theory predictions by covering very low $Q$ domains: $0.19 \leq Q \leq 0.49 $, $0.11 \leq Q \leq 0.92$ and $0.14 \leq Q \leq 0.70 $~GeV, respectively. To reach low $Q$ while covering the large $x$ range necessary for the $\Gamma_{1}$ integral, a high beam energy (up to 4.4 GeV) was needed and the scattered electrons had to be detected at small angles (down to about $5^\circ$). In Hall A, the low angles were reached {\it via} a supplementary dipole magnet installed in front of the spectrometer~\cite{septum}. In Hall B, a Cherenkov counter designed for high efficiency at small angle was installed in one of the six sectors of CLAS~\cite{Adhikari:2017wox}, whose magnetic field was set to bend the scattered electrons outward. In addition, both the Hall A and B targets were placed about 1~m upstream of their usual positions. The EG1dvcs data on the proton and deuteron were combined to form $\Gamma_{1}^{\rm p-n}$ over the range $0.78 \leq Q \leq 2.18 $~GeV~\cite{Deur:2014vea}. The $\Gamma_{1}^{\rm p-n}$ formed with the E97110 and EG4 data covers the $0.14 \leq Q \leq 0.70$~GeV range~\cite{Deur:2021klh}. The $\alpha_{g_1}$ data, obtained following Eq.~(\ref{eqn:alphadef}), are shown in Fig.~\ref{fig:alpha} and given in Table~\ref{tab:alpha}. Also shown in the figure are the older data presented in Ref.~\cite{Deur:2004ti}, including $\alpha_{F_3}$ extracted from the data of Ref.~\cite{Kim:1998kia} and $\alpha_{g_1(\tau)}$ from the OPAL data on $\tau$-decay~\cite{Brodsky:2002nb}. The effective charge $\alpha_{F_3}$ is nearly identical to $\alpha_{g_1}$~\cite{Deur:2005cf}, and $\alpha_{g_1(\tau)}$ was transformed from the $\tau$-scheme to the $g_1$-scheme using the CSR~\cite{Brodsky:1994eh}. 
Consequently, $\alpha_{F_3}$ and $\alpha_{g_1(\tau)}$ are directly comparable to $\alpha_{g_1}$. We also show in Fig.~\ref{fig:alpha} the theory predictions from AdS/CFT~\cite{Brodsky:2010ur} and DSE~\cite{Binosi:2016nme}. Remarkably, both predictions are parameter-free and gauge-invariant. The AdS/CFT coupling $\alpha^{\rm HLF}_{g_1}$ is obtained in the HLFQCD approach where QCD is quantized using LF coordinates~\cite{Dirac:1949cp}. The use of the HLFQCD approach incorporates the underlying conformal (i.e., scale-invariant) character of QCD at low and large $Q$. The deformation of the AdS$_5$ space is dual to a semiclassical potential that models quark confinement. This potential can be determined with various methods that all lead to the same harmonic oscillator form~\cite{Brodsky:2014yha, deAlfaro:1976vlx, Trawinski:2014msa}. The effective charge $\alpha^{\rm HLF}_{g_1}$ is dual to the product of the AdS$_5$ coupling {\it constant} and the AdS$_5$ space deformation term. Since the latter is dual to the CFT confinement force, the meaning of $\alpha^{\rm HLF}_{g_1}$ is analogous to that of $\alpha_{g_1}$, which at low $Q$ incorporates confinement effects in $\alpha_{\rm s}$. The $Q$-dependence of $\alpha^{\rm HLF}_{g_1}$ is controlled by a single scale, e.g., the proton mass. The coupling is normalized to $\alpha^{\rm HLF}_{g_1}(0)=\pi$ to obey the kinematic constraint that $\Gamma_{1}^{\rm p-n}(0)=0$, i.e., $\alpha_{g_1}(0)=\pi$, see Eq.~(\ref{eqn:alphadef}). This normalization amounts to the RS choice of pQCD~\cite{Deur:2014qfa}. Thus, the $\alpha^{\rm HLF}_{g_1}(Q)$ prediction is parameter-free. Above $Q \simeq 1$~GeV, HLFQCD ceases to be valid because its {\it semiclassical} potential does not include, by definition, the short-distance quantum effects responsible for the running of a coupling. This is remedied by matching HLFQCD and pQCD near $Q\simeq 1$~GeV where both formalisms apply, thereby providing $\alpha^{\rm HLF}_{g_1}(Q)$ at all $Q$~\cite{Deur:2014qfa}. 
The DSE effective charge $\hat \alpha_{\rm PI}$~\cite{Binosi:2016nme} is obtained starting with the Pinch Technique~\cite{Cornwall:1981zr} and Background Field Method~\cite{Abbott:1980hw}. They allow us to define a process-independent QCD coupling in terms of a mathematically reconstructed gluon two-point function analogous to the Gell-Mann Low effective charge of QED. $\hat \alpha_{\rm PI}$ is then computed by combining the solutions of the DSEs with compatible lattice QCD results. The definition of $\hat \alpha_{\rm PI}$ explicitly factors in a renormalization group invariant interaction, thus making it, like $\alpha_{g_1}(Q)$ and $\alpha^{\rm HLF}_{g_1}(Q)$, incorporate confinement~\cite{Binosi:2014aea}. Like them, $\hat \alpha_{\rm PI}(Q)$ freezes at low $Q$ with a predicted infrared fixed-point of $\hat \alpha_{\rm PI}(0)=(0.97\pm 0.04)\pi$. The mechanism at the origin of the freezing in the DSE framework is the emergence of a dynamical gluon mass $m_g(Q)$~\cite{Cornwall:1981zr, Aguilar:2008xm} that (A) regulates the Landau pole and (B) decouples the dynamics at scales $Q \lesssim m_g(0)$, thereby causing the coupling to lose its $Q$-dependence~\cite{Brodsky:2008be}. Like $\alpha^{\rm HLF}_{g_1}$, $\hat \alpha_{\rm PI}$ is parameter-free and gauge-invariant but, in contrast to the former and $\alpha_{g_1}$, $\hat \alpha_{\rm PI}$ is also process-independent. No parameter is varied to predict the infrared fixed-point $\hat \alpha_{\rm PI}(0)$, since it is largely fixed by the value of $m_g(0)$, nor is any matching necessary to ensure agreement with the perturbative determination of $\alpha^{\rm pQCD}_{\rm {g_1}}$ from the renormalization group equations and the Bjorken sum rule. Crucially, the practical determination of $\hat \alpha_{\rm PI}(Q)$ consistently incorporates the extensive information from lattice QCD on the gluon and ghost propagators, thereby connecting this technique to $\alpha_{g_1}$. \begin{figure}[ht!] 
\begin{center} \centerline{\includegraphics[width=0.5\textwidth, angle=0]{alpha_s_plot.pdf}} \end{center} \caption{\footnotesize Effective charge $\alpha_{g_1}(Q)/\pi$ obtained from JLab experiments E03006/E97110~\cite{Deur:2021klh} (solid stars), E03006/E05111~\cite{Deur:2021klh} (solid circles) and EG1dvcs~\cite{Deur:2014vea} (solid triangles) and from COMPASS~\cite{Alekseev:2010hc} (solid square). Inner error bars represent the statistical uncertainties and outer ones the systematic and statistical uncertainties added quadratically. The open symbols show the older world data~\cite{Adeva:1993km, Anthony:1996mw, Airapetian:2000yk, Deur:2004ti} with the error bars the quadratic sum of the systematic and statistical uncertainties. Also shown are the HLFQCD~\cite{Brodsky:2010ur} (red line, using the HLFQCD scale $\kappa=0.534$~GeV~\cite{Sufian:2018cpj}) and DSE~\cite{Binosi:2016nme} (magenta line and hatched band) parameter-free predictions of effective charges. The dashed line and hatched cyan band are $\alpha_{g_1}(Q)/\pi$ obtained from the GDH and Bjorken sum rules, respectively.} \label{fig:alpha} \end{figure} The new data on $\alpha_{g_1}$ agree well with the older data and display a much improved precision over the whole $Q$ range covered. In addition, the data now clearly reach the freezing domain of QCD at very low $Q$. That $\alpha_{g_1}$ freezes could already be inferred from the old data, but only by complementing them with the GDH sum rule and/or the $\alpha_{g_1}(0)=\pi$ constraint. For the first time, the onset of freezing is now visible with data only. One notes that only three of the lowest $Q$ points agree with the GDH expectation. This may signal a fast-arising $Q$-dependence beyond the leading behavior given by GDH. The data agree well with the $\alpha^{\rm HLF}_{g_1}$ and $\hat \alpha_{\rm PI}$ predictions. That such agreements would occur was not obvious and is a significant finding. 
The possible tension between the data and $\hat \alpha_{\rm PI}$ in the range $0.3 \lesssim Q \lesssim 0.5$~GeV may be because $\alpha_{g_1}$ and $\hat \alpha_{\rm PI}$ are not exactly the same effective charges (e.g., at high $Q$, $\sfrac{\alpha_{g_1}}{\hat \alpha_{\rm PI}}\simeq1+0.05\alpha^{\rm pQCD}_{\rm {s}} \neq 1$), but it is noteworthy that it occurs only in the moderately low $Q$ domain where the ghost-gluon vacuum effect, as computed in the Landau gauge, contributes the most to $\hat \alpha_{\rm PI}$. \section{Summary and conclusion} We used the new JLab data and the COMPASS datum on the Bjorken sum to extract the QCD effective charge $\alpha_{g_1}(Q)$ in the $Q$-range $0.14 \leq Q \leq 2.18$~GeV. The new result displays significantly higher precision than the older extractions of $\alpha_{g_1}(Q)$ and improves the low-$Q$ reach by about a factor of 2. The new data show that $\alpha_{g_1}(Q)$ ``freezes'', {\it viz.}, loses its $Q$-dependence, at small $Q$, saturating at an infrared fixed-point $\alpha_{g_1}(Q\simeq0) \simeq \pi$. This was already apparent with the older data when combined with the GDH sum rule expectation, but the new data explicitly display this behavior without needing the sum rule and with significantly higher precision. The freezing of $\alpha_{g_1}(Q)$, together with the smallness of the light quark masses, makes QCD approximately conformal at low $Q$. The conformal behavior vanishes when transiting from the low-$Q$ effective degrees of freedom of QCD (hadrons) to the large-$Q$ fundamental ones (partons), where conformality is then restored (the long-known Bjorken scaling~\cite{Bjorken:1968dy}). This transition is revealed by the drastic change in the value of the effective charge. It occurs at a $Q$ value indicative of the chiral symmetry breaking parameter, $\Lambda_B\simeq 1$~GeV. The breaking of chiral symmetry at low $Q$, one of the crucial properties of QCD, is believed to cause the emergence of the global properties of hadrons. 
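The two mechanisms (A) and (B) attributed above to the dynamical gluon mass can be illustrated with the textbook one-loop coupling: replacing $Q^2$ by $Q^2+4m_g^2$ in the logarithm (a Cornwall-style form~\cite{Cornwall:1981zr}) removes the Landau pole and produces an infrared plateau. This is only a sketch; the values of $\Lambda$, $n_f$ and $m_g$ are illustrative assumptions, not the DSE determination:

```python
import math

LAMBDA = 0.34                  # GeV; illustrative one-loop QCD scale (assumption)
NF = 3                         # number of active flavors (assumption)
BETA0 = 11 - 2 * NF / 3

def alpha_pqcd(Q):
    """One-loop running coupling: diverges at the Landau pole Q -> LAMBDA."""
    return 4 * math.pi / (BETA0 * math.log(Q**2 / LAMBDA**2))

def alpha_frozen(Q, mg=0.5):
    """Cornwall-style coupling with a dynamical gluon mass mg (GeV):
    the pole is regulated and the coupling loses its Q-dependence for Q << mg."""
    return 4 * math.pi / (BETA0 * math.log((Q**2 + 4 * mg**2) / LAMBDA**2))

print(round(alpha_pqcd(91.2), 3))      # 0.125: perturbative value at the Z mass
print(round(alpha_frozen(0.0), 3))     # 0.647: finite infrared value, no Landau pole
print(round(alpha_frozen(0.05), 3))    # nearly identical: the coupling has frozen
```

At large $Q$ the two forms coincide, since $4m_g^2$ becomes negligible in the logarithm, so the regulated coupling matches the perturbative one automatically.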
The new data agree well with sum rule predictions and with the latest predictions from DSE and from an AdS/CFT-based approach. This shows that a strong coupling can be consistently defined in the nonperturbative domain of QCD, namely as an effective charge analogous to the definition used in QED, and that it can then be used to compute a large variety of hadronic quantities and other phenomena in which the strong interaction plays a role. ~ \noindent {\bf Acknowledgments} The authors thank D. Binosi, S. J. Brodsky, Z.-F. Cui, G. F. de T\'eramond, J. Papavassiliou, C. D. Roberts and J. Rodríguez-Quintero for their valuable comments on the manuscript. This work is supported by the U.S.\ Department of Energy, Office of Science, Office of Nuclear Physics, contracts DE-AC05-06OR23177 and DE-FG02-99ER41101. \begin{table}[ht] \centering {\footnotesize \begin{tabular}[t]{|c|c|} \hline $Q$ (GeV) & $\alpha_{g_1} \pm \rm stat. \pm syst.$ \\ \hline \hline 0.143 & 3.064 $\pm 0.043 \pm 0.018 $ \\ \hline 0.156 & 3.129 $\pm 0.046 \pm 0.019 $ \\ \hline 0.171 & 2.955 $\pm 0.046 \pm 0.023 $ \\ \hline 0.187 & 3.083 $\pm 0.044 \pm 0.024 $ \\ \hline 0.204 & 3.022 $\pm 0.049 \pm 0.024 $ \\ \hline 0.223 & 3.002 $\pm 0.052 \pm 0.027 $ \\ \hline 0.243 & 2.988 $\pm 0.055 \pm 0.031 $ \\ \hline 0.266 & 2.947 $\pm 0.060 \pm 0.035 $ \\ \hline 0.291 & 2.983 $\pm 0.065 \pm 0.035 $ \\ \hline 0.317 & 2.961 $\pm 0.062 \pm 0.038 $ \\ \hline 0.347 & 2.730 $\pm 0.070 \pm 0.044 $ \\ \hline 0.379 & 2.853 $\pm 0.077 \pm 0.040 $ \\ \hline 0.414 & 2.745 $\pm 0.076 \pm 0.041 $ \\ \hline 0.452 & 2.779 $\pm 0.090 \pm 0.043 $ \\ \hline 0.494 & 2.451 $\pm 0.094 \pm 0.044 $ \\ \hline 0.540 & 2.397 $\pm 0.092 \pm 0.039 $ \\ \hline 0.590 & 2.349 $\pm 0.101 \pm 0.040 $ \\ \hline 0.645 & 2.431 $\pm 0.109 \pm 0.043 $ \\ \hline 0.704 & 1.996 $\pm 0.131 \pm 0.104 $ \\ \hline \end{tabular} \begin{tabular}[t]{|c|c|} \hline $Q$ (GeV) & $\alpha_{g_1} \pm \rm stat. 
\pm syst.$ \\ \hline \hline 0.187 & 3.016 $\pm 0.009 \pm 0.027 $ \\ \hline 0.239 & 2.973 $\pm 0.015 \pm 0.035 $ \\ \hline 0.281 & 2.952 $\pm 0.021 \pm 0.041 $ \\ \hline 0.316 & 2.929 $\pm 0.017 \pm 0.048 $ \\ \hline 0.387 & 2.815 $\pm 0.021 \pm 0.076 $ \\ \hline 0.447 & 2.704 $\pm 0.025 \pm 0.086 $ \\ \hline 0.490 & 2.575 $\pm 0.031 \pm 0.053 $ \\ \hline \hline 0.775 & 1.743 $\pm 0.007 \pm 0.071 $ \\ \hline 0.835 & 1.571 $\pm 0.007 \pm 0.101 $ \\ \hline 0.917 &1.419 $\pm 0.009 \pm 0.132 $ \\ \hline 0.986 & 1.341 $\pm 0.010 \pm 0.147 $ \\ \hline 1.088 & 1.272 $\pm 0.010 \pm 0.156 $ \\ \hline 1.167 & 1.121 $\pm 0.013 \pm 0.153 $ \\ \hline 1.261 & 0.955 $\pm 0.016 \pm 0.146 $ \\ \hline 1.384 & 0.874 $\pm 0.016 \pm 0.269 $ \\ \hline 1.522 & 0.730 $\pm 0.012\pm 0.280 $ \\ \hline 1.645 & 0.708 $\pm 0.009 \pm 0.257 $ \\ \hline 1.795 & 0.617 $\pm 0.007 \pm 0.254 $ \\ \hline 1.967 & 0.581 $\pm 0.006 \pm 0.223 $ \\ \hline 2.177 & 0.636 $\pm 0.003 \pm 0.187 $ \\ \hline \end{tabular} } \caption{ Data on $\alpha_{g_1}(Q)$ from JLab experiments EG4 (left), EG4/E97110 (top right) and EG1dvcs (bottom right).} \label{tab:alpha} \end{table} \FloatBarrier
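The freezing can also be read off directly from the numbers in Table~\ref{tab:alpha}: an error-weighted average of the lowest-$Q$ EG4 points (statistical and systematic uncertainties added in quadrature) sits within a few percent of the expected fixed point $\pi$. A sketch using the five lowest-$Q$ entries of the table:

```python
import math

# five lowest-Q EG4 points from the table: (Q [GeV], alpha_g1, stat, syst)
low_q = [
    (0.143, 3.064, 0.043, 0.018),
    (0.156, 3.129, 0.046, 0.019),
    (0.171, 2.955, 0.046, 0.023),
    (0.187, 3.083, 0.044, 0.024),
    (0.204, 3.022, 0.049, 0.024),
]

# inverse-variance weights, with stat and syst combined in quadrature
weights = [1.0 / (stat**2 + syst**2) for _, _, stat, syst in low_q]
mean = sum(w * a for w, (_, a, _, _) in zip(weights, low_q)) / sum(weights)
err = 1.0 / math.sqrt(sum(weights))

# -> roughly 3.05 +/- 0.02, a few percent below pi = 3.142
print(f"<alpha_g1(Q < 0.21 GeV)> = {mean:.3f} +/- {err:.3f}")
```

The small residual offset from $\pi$ is expected, since the GDH sum rule implies a leading $Q$-dependence that only vanishes at $Q=0$.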
\section{Introduction} The goal of this paper is to solve the elliptic $N$-body Ruijsenaars model by the Bethe ansatz method. This was done in~\cite{FeVa96} for $N=2$, as a special case of the diagonalization of the transfer matrix of particular highest weight modules associated to the elliptic quantum group $E_{\tau,\gamma}(\mathrm{gl}_2)$. The result was achieved by a dynamical generalization of the algebraic Bethe ansatz (see~\cite{SkTaFa79, Fa82} for a description of the algebraic formulation of the Bethe ansatz). For general $N$, the Ruijsenaars operator can be constructed as the transfer matrix $\mathcal{T}(z)$ associated to the symmetric power of the vector representation of $E_{\tau,\gamma}(\mathrm{gl}_N)$~\cite{FeVa97}. Since $\mathcal{T}(z)$ is a trace over an $N$-dimensional space, we diagonalize it by the nested version of the Bethe ansatz~\cite{KuRe83}. The Ruijsenaars operator is a difference operator which is a $q$-deformation of the Calogero differential operator. In~\cite{FeVa95a} the solution of the elliptic $N$-body Calogero model (in the Bethe ansatz form) was obtained from the integral representation of solutions of the elliptic Knizhnik--Zamolodchikov--Bernard equations by applying the stationary phase method~\cite{EtKi93a, EtKi94, ReVa94}. Similarly, there is a link between the Ruijsenaars model and the qKZB difference equations. The eigenfunctions of the trigonometric Ruijsenaars operator are given in~\cite{FeVa95b} in the form of Etingof--Kirillov traces of intertwining operators~\cite{EtKi93b}. As for the integral representation of solutions of the qKZB equations, it is known for $\mathrm{sl}_2$ only~\cite{FetaVa96}. The structure of the paper is as follows. In section~\ref{2}, we review the construction of commuting transfer matrices associated to the representations of the elliptic quantum group $E_{\tau,\gamma}(\mathrm{gl}_N)$. 
In section~\ref{3}, we recall how the Ruijsenaars operator is related to the transfer matrix $\mathcal{T}(z)$ associated to the symmetric power of the vector representation of $E_{\tau,\gamma}(\mathrm{gl}_N)$. In section~\ref{4}, we explain the idea of the dynamical version of the algebraic nested Bethe ansatz. Finally, in sections~\ref{5} and~\ref{6}, we apply the Bethe ansatz method explicitly; we write down the Bethe equations and the eigenvalues of the transfer matrix $\mathcal{T}(z)$. \section{The elliptic quantum group associated to $\mathrm{gl}_N$} \label{2} We recall the definition of the elliptic quantum group $E_{\tau,\gamma}(\mathrm{gl}_N)$, following~\cite{FeVa97}. Let ${\mathfrak{g}}$ be a Lie algebra, ${\mathfrak{h}}$ a Cartan subalgebra of ${\mathfrak{g}}$ and ${\mathfrak{h}}^\ast$ its dual space. Let $W$ be a finite-dimensional diagonalizable ${\mathfrak{h}}$-module, {\it i.e.} a complex finite-dimensional vector space with a weight decomposition $W=\oplus_{\mu \in {\mathfrak{h}}^\ast} W[\mu]$ such that ${\mathfrak{h}}$ acts on $W[\mu]$ by $x w = \mu(x) w$ for $x \in {\mathfrak{h}}, w \in W[\mu]$. The starting point to define representations of elliptic quantum groups is an $R$-matrix $R(z,\lambda) \in {\mathrm{End}}(W \otimes W)$ depending on two parameters $z \in {\mathbb C}, \lambda \in {\mathfrak{h}}^\ast$ and a solution of the dynamical Yang--Baxter equation \begin{eqnarray} \label{DYB} \lefteqn{R^{(12)}(z_1-z_2, \lambda - \gamma h^{(3)}) \ R^{(13)}(z_1-z_3, \lambda) \ R^{(23)}(z_2-z_3, \lambda - \gamma h^{(1)})} \\ \nonumber & & = \ R^{(23)}(z_2-z_3, \lambda) \ R^{(13)}(z_1-z_3, \lambda - \gamma h^{(2)}) \ R^{(12)}(z_1-z_2, \lambda). \end{eqnarray} In this equation, $\gamma$ is a generic complex parameter, and we use the standard notation \begin{equation} \label{shift} f(\lambda-\gamma h) \ w \ = \ f(\lambda-\gamma \mu) \ w \ \ \ \ \mathrm{if} \ w \in W[\mu] \end{equation} for any complex-valued function $f$ of $\lambda$. 
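The dynamical Yang--Baxter equation~(\ref{DYB}) and the unitarity property can be checked numerically for $N=2$, using the elliptic $R$-matrix and theta function written below in this section. The following sketch is an illustration only; the values of $\tau$, $\gamma$, $z_i$, $\lambda$ are arbitrary generic choices:

```python
import numpy as np

tau, gam = 0.8j, 0.31              # generic tau (Im > 0) and gamma (arbitrary test values)

def theta(z, nmax=30):
    """Jacobi's first theta function, truncated sum (converges fast for Im(tau) > 0)."""
    j = np.arange(-nmax, nmax)
    return -np.sum(np.exp(1j*np.pi*(j + .5)**2*tau + 2j*np.pi*(j + .5)*(z + .5)))

def R(z, lam):
    """Elliptic dynamical R-matrix for N = 2, basis e1 x e1, e1 x e2, e2 x e1, e2 x e2."""
    al = lambda u: theta(z)*theta(u + gam) / (theta(z - gam)*theta(u))
    be = lambda u: -theta(z + u)*theta(gam) / (theta(z - gam)*theta(u))
    l = lam[0] - lam[1]
    M = np.eye(4, dtype=complex)
    M[1, 1], M[2, 2] = al(l), al(-l)
    M[1, 2], M[2, 1] = be(l), be(-l)
    return M

def Rdyn(z, lam, p, q, shift=None):
    """R^{(pq)}(z, lam - gam*h^{(shift)}) acting on (C^2)^{x3}."""
    E = np.zeros((8, 8), dtype=complex)
    for col in range(8):
        idx = [(col >> 2) & 1, (col >> 1) & 1, col & 1]
        le = list(lam)
        if shift is not None:          # f(lam - gam*h) e_k = f(lam - gam*omega_k) e_k
            le[idx[shift]] -= gam
        M = R(z, le)
        for r in range(2):
            for s in range(2):
                out = idx.copy()
                out[p], out[q] = r, s
                E[4*out[0] + 2*out[1] + out[2], col] += M[2*r + s, 2*idx[p] + idx[q]]
    return E

lam, (z1, z2, z3) = (0.23, -0.58), (0.17, -0.43, 0.61)
lhs = Rdyn(z1-z2, lam, 0, 1, shift=2) @ Rdyn(z1-z3, lam, 0, 2) @ Rdyn(z2-z3, lam, 1, 2, shift=0)
rhs = Rdyn(z2-z3, lam, 1, 2) @ Rdyn(z1-z3, lam, 0, 2, shift=1) @ Rdyn(z1-z2, lam, 0, 1)
print(np.max(np.abs(lhs - rhs)))       # negligibly small: eq. (DYB) holds
```

The same helper also verifies $R^{(12)}(z,\lambda)R^{(21)}(-z,\lambda)=\mathrm{Id}$ and $R^{(12)}(0,\lambda)=P^{(12)}$, the properties listed below.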
In what follows we are interested in the case ${\mathfrak{g}}=\mathrm{gl}_N$, and ${\mathfrak{h}}$ the algebra of diagonal complex $N \times N$ matrices, acting on the vector representation $V={\mathbb C}^N$ of $\mathrm{gl}_N$ with standard basis $(e_j)_{j=1,\ldots,N}$. We identify ${\mathfrak{h}}$ and ${\mathfrak{h}}^\ast$ via the trace, and with ${\mathbb C}^N$ via the orthonormal basis $(\omega_j=e_{jj})_{j=1,\ldots,N}$, denoting by $e_{jk}$ the $N \times N$ matrix acting on the standard basis by $e_{jk} e_l = \delta_{kl} e_j$. The weight spaces of the ${\mathfrak{h}}$-module $V$ are $V[\omega_j]={\mathbb C} e_j$. In this case the notation~(\ref{shift}) reads explicitly \begin{displaymath} f(\lambda-\gamma h) \ e_k \ = \ \Gamma_k f(\lambda) \ e_k \end{displaymath} where $\Gamma_k$ is the shift operator by $-\gamma \omega_k$: if $f$ is a complex-valued function of $\lambda$, \begin{displaymath} \Gamma_k f(\lambda) \ = \ f(\lambda-\gamma\omega_k) \ = \ f(\lambda_1, \ldots, \lambda_{k-1}, \lambda_k-\gamma, \lambda_{k+1}, \ldots, \lambda_N). \end{displaymath} Let $R(z,\lambda) \in {\mathrm{End}}({\mathbb C}^N \otimes {\mathbb C}^N)$ be the elliptic solution of the dynamical Yang--Baxter equation~(\ref{DYB}) given by the formula \begin{equation} \label{Rmatrix} R(z,\lambda) \ = \ \sumi{1} e_{ii} \otimes e_{ii} + \sumij{1} \alpha(z,\lambda_i - \lambda_j) \ e_{ii} \otimes e_{jj} + \sumij{1} \beta(z,\lambda_i - \lambda_j) \ e_{ij} \otimes e_{ji}, \end{equation} where \begin{eqnarray*} \alpha(z,\lambda) & = & \frac{\theta(z) \theta(\lambda+\gamma)}{\theta(z-\gamma) \theta(\lambda)}, \ \ \ \ \beta(z,\lambda) = - \frac{\theta(z+\lambda) \theta(\gamma)}{\theta(z-\gamma) \theta(\lambda)}. 
\end{eqnarray*} Here $\theta$ is Jacobi's first theta function: \begin{displaymath} \theta(z) = - \sum_{j \in {\mathbb Z}} \exp \left[ \pi i (j+ \frac{1}{2})^2 \tau + 2 \pi i (j+ \frac{1}{2}) (z+\frac{1}{2}) \right], \end{displaymath} where $\tau$ is a complex parameter with $\mathrm{Im}(\tau) > 0$. $\theta$ is analytic, with zeros at $n+\tau m$, $n,m \in {\mathbb Z}$. It satisfies \begin{displaymath} \theta(-z) \ = \ - \ \theta(z) \ = \ \theta(z+1) \ \ \ \ \mathrm{and} \ \ \ \ \theta(z+\tau) \ = \ - \ \theta(z) \ \mathrm{e}^{-i \pi (2z+\tau)}. \end{displaymath} The $R$-matrix~(\ref{Rmatrix}) has the following properties: \begin{eqnarray} & & R^{(12)}(z,\lambda) \ R^{(21)}(-z,\lambda) \ = \ \mathrm{Id}, \\ & & R^{(12)}(0,\lambda) \ = \ P^{(12)} \ = \ \sum_{i,j=1}^N e_{ij} \otimes e_{ji}, \\ \label{commRh} & & \relax [ R^{(12)}(z,\lambda) , x \otimes \mathrm{Id} + \mathrm{Id} \otimes x ] \ = \ 0, \ \ \ \ \forall x \in {\mathfrak{h}}, \\ \label{shiftR} & & \relax [ \Gamma^{(1)} \Gamma^{(2)}, R^{(12)}(z,\lambda) ] \ = \ 0, \end{eqnarray} where $\Gamma$ is the diagonal $N \times N$ matrix $\mathrm{diag} (\Gamma_i)_{i=1,\ldots,N}$. A representation of the elliptic quantum group $E=E_{\tau,\gamma}(\mathrm{gl}_N)$ is by definition a pair $(W,L)$ where $W$ is a finite-dimensional diagonalizable ${\mathfrak{h}}$-module and $L(z,\lambda)$ is a meromorphic function with values in ${\mathrm{End}}_{\mathfrak{h}}({\mathbb C}^N\otimes W)$, obeying the relation \begin{eqnarray*} \lefteqn{R^{(12)}(z_1-z_2,\lambda-\gamma h^{(3)}) \ L^{(13)}(z_1,\lambda) \ L^{(23)}(z_2,\lambda-\gamma h^{(1)})} \\ & & = \ L^{(23)}(z_2,\lambda) \ L^{(13)}(z_1,\lambda-\gamma h^{(2)}) \ R^{(12)}(z_1-z_2,\lambda) \end{eqnarray*} and commuting with the action of ${\mathfrak{h}}$: \begin{displaymath} [ L(z,\lambda) , x \otimes \mathrm{Id} + \mathrm{Id} \otimes x ] \ = \ 0, \ \ \ \ \forall x \in {\mathfrak{h}}. 
\end{displaymath} An $E$-submodule of an $E$-module $(W,L)$ is a pair $(W',L')$ where $W'$ is an ${\mathfrak{h}}$-submodule of $W$ such that ${\mathbb C}^N\otimes W'$ is invariant under the action of $L(z,\lambda)$, and $L'$ is the restriction of $L$ to this invariant subspace. $E$-submodules are $E$-modules. The basic example of an $E$-module is $({\mathbb C}^N,L)$ with $L(z,\lambda)=R(z-w,\lambda)$. It is called the vector representation with evaluation point $w$ and is denoted by $V(w)$. Other modules can be obtained by taking tensor products: if $(W_1,L_1)$ and $(W_2,L_2)$ are $E$-modules, then so is $(W_1\otimes W_2,L)$, with an ${\mathfrak{h}}$-module structure $x (w_1 \otimes w_2) = x w_1 \otimes w_2 + w_1 \otimes x w_2$ and an $L$-operator $L(z,\lambda)= L_1^{(12)}(z,\lambda-\gamma h^{(3)}) \ L_2^{(13)}(z,\lambda)$. It is useful for what follows to introduce what we call the Lax operator associated to the $E$-module $(W,L)$. It is an $N \times N$ matrix whose elements are operators acting on the space of meromorphic functions of $\lambda\in{\mathfrak{h}}^*$ with values in $W$. It is defined by the formula \begin{equation} \label{defL} \mathcal{L}(z) \ w(\lambda) \ = \ L^{(12)}(z,\lambda) \ \Gamma^{(1)} \ w^{(2)}(\lambda). \end{equation} More explicitly, let us introduce matrix elements by $L(z,\lambda) \ e_j \otimes w = \sum_i e_i \otimes L_{ij}(z,\lambda) \ w$ (and the same for $\mathcal{L}$). The elements $\mathcal{L}_{ij}(z)$ of $\mathcal{L}(z)$ act as follows: \begin{displaymath} \mathcal{L}_{ij}(z) \ w(\lambda) \ = \ L_{ij}(z,\lambda) \ w(\lambda-\gamma\omega_j). \end{displaymath} Using the property~(\ref{shiftR}) of the $R$-matrix, we easily see that $\mathcal{L}$ satisfies the commutation relation \begin{equation} \label{RLL} R^{(12)}(z_1-z_2, \lambda-\gamma h^{(3)}) \ \mathcal{L}^{(13)}(z_1) \ \mathcal{L}^{(23)}(z_2) = \mathcal{L}^{(23)}(z_2) \ \mathcal{L}^{(13)}(z_1) \ R^{(12)}(z_1-z_2, \lambda). 
\end{equation} The trace of the Lax operator $\mathrm{tr}^{(1)} \mathcal{L}^{(13)}(z) = \sum_i \mathcal{L}_{ii}(z)$ leaves the zero weight subspace $W[0]$ of $W$ invariant. The transfer matrix associated to the $E$-module $(W,L)$ is, by definition, \begin{displaymath} \mathcal{T}(z) = \left[ \mathrm{tr}^{(1)} \mathcal{L}^{(13)}(z) \right]_{W[0]}. \end{displaymath} As a consequence of relation~(\ref{RLL}), the transfer matrices commute for different values of the spectral parameters. \section{The Ruijsenaars operator} \label{3} The elliptic Ruijsenaars operator is (up to a conjugation by a function) a difference operator acting on functions of $\lambda \in {\mathfrak{h}}^\ast$: \begin{displaymath} M \ = \ \sumi{1} \prod_{j;j\neq i} \frac{\theta(\lambda_i-\lambda_j+\ell\gamma)}{\theta(\lambda_i-\lambda_j)} \ \Gamma_i, \ \ \ \ \mathrm{with \ a \ coupling \ constant} \ \ell \in {\mathbb N}. \end{displaymath} It can be obtained as the transfer matrix associated to a particular $E$-module that we now introduce. Let $n \in {\mathbb N}$. Let $V^{\otimes n}(0)$ denote the $E$-module $V(0) \otimes V(\gamma) \otimes \cdots \otimes V(\gamma(n-1))$. $V^{\otimes n}(0)$ is the pair $(W,L)$ with $W=({\mathbb C}^N)^{\otimes n}= {\mathbb C}^N \otimes \cdots \otimes {\mathbb C}^N$, and $L$ is the operator on ${\mathbb C}^N \otimes ({\mathbb C}^N)^{\otimes n}$ given by the formula \begin{displaymath} L(z,\lambda) \ = \ R^{(01)}\Big(z,\lambda-\gamma\sum_{j=2}^n h^{(j)}\Big) \ R^{(02)}\Big(z-\gamma,\lambda-\gamma\sum_{j=3}^n h^{(j)}\Big) \ \cdots \ R^{(0n)}\Big(z-\gamma(n-1),\lambda\Big). \end{displaymath} We denote by $S^n({\mathbb C}^N)$ the space of symmetric tensors of $({\mathbb C}^N)^{\otimes n}$, {\it i.e.} the subspace of $({\mathbb C}^N)^{\otimes n}$ invariant under the action of the symmetric group $S_n$. It is proved in~\cite{FeVa97} that $S^n({\mathbb C}^N)$ is an $E$-submodule of $V^{\otimes n}(0)$. 
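For orientation, in the simplest nontrivial case $N=2$, $\ell=1$, the Ruijsenaars operator defined above reduces to the two-body operator diagonalized in~\cite{FeVa96}:

```latex
M \;=\; \frac{\theta(\lambda_1-\lambda_2+\gamma)}{\theta(\lambda_1-\lambda_2)}\,\Gamma_1
\;+\; \frac{\theta(\lambda_2-\lambda_1+\gamma)}{\theta(\lambda_2-\lambda_1)}\,\Gamma_2\,,
```

so that it acts on a function $f$ as $(Mf)(\lambda_1,\lambda_2) = \frac{\theta(\lambda_1-\lambda_2+\gamma)}{\theta(\lambda_1-\lambda_2)}\, f(\lambda_1-\gamma,\lambda_2) + \frac{\theta(\lambda_2-\lambda_1+\gamma)}{\theta(\lambda_2-\lambda_1)}\, f(\lambda_1,\lambda_2-\gamma)$.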
It is called the $n$th symmetric power of the vector representation (with evaluation point $0$) and is denoted by $S^nV(0)$. The zero weight subspace of this module is trivial unless $n$ is a multiple of $N$; if we take $n=N\ell$, the zero weight subspace is one dimensional and is spanned by the sum of the tensors $e_{i_1} \otimes \cdots \otimes e_{i_{N\ell}}$ over all sequences $(i_j)$ such that each integer between $1$ and $N$ occurs precisely $\ell$ times\footnote{By extension we call zero weight a weight which is a multiple of $\omega=\sum_{i=1}^N \omega_i$. Since the $R$-matrix and the Lax operator depend only on differences $(\lambda_i-\lambda_j)$, adding the same constant to each $\lambda_i$ does not change our results.}. Let $\mathcal{T}(z)$ be the transfer matrix associated to $S^{N\ell}V(0)$. If we identify the zero weight subspace of $S^{N\ell}V(0)$ with ${\mathbb C}$, we have \begin{equation} \label{Ruij} \mathcal{T}(z) \ = \ \frac{\theta(z-\gamma\ell)}{\theta(z-\gamma N\ell)} M. \end{equation} Thanks to this result, we can solve the Ruijsenaars model by applying the Bethe ansatz method to the transfer matrix $\mathcal{T}(z)$. Let us remark (although we do not use this result in what follows) that $M$ belongs to a family of $N$ commuting difference operators \begin{displaymath} M_n \ = \ \sum_{I;|I|=n} \ \prod_{^{i \in I}_{j \not\in I}} \frac{\theta(\lambda_i-\lambda_j+\ell\gamma)}{\theta(\lambda_i-\lambda_j)} \ \prod_{i \in I} \Gamma_i, \ \ \ \ \ \ n=1,\ldots,N \end{displaymath} (where $|I|$ denotes the cardinality of a subset $I$ of $\{1,\ldots,N\}$). Each operator $M_n$ can be constructed by considering transfer matrices associated to $E$-modules obtained as symmetric and exterior powers of the vector representation of $E_{\tau,\gamma}(\mathrm{gl}_N)$. 
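Identity~(\ref{Ruij}) can be checked numerically in the smallest case $N=2$, $\ell=1$ (so $N\ell=2$ quantum spaces): one builds $L(z,\lambda)$ from two $R$-matrix factors, applies $L_{ii}(z,\lambda)$ to the zero-weight vector $v_0=e_1\otimes e_2+e_2\otimes e_1$ of $S^2({\mathbb C}^2)$, and compares the resulting scalars with the coefficients of $M$. A sketch (the numerical values of $\tau$, $\gamma$, $z$, $\lambda$ are arbitrary generic choices):

```python
import numpy as np

tau, gam = 0.8j, 0.31                    # generic parameters (arbitrary test values)

def theta(z, nmax=30):                   # Jacobi's first theta function, truncated sum
    j = np.arange(-nmax, nmax)
    return -np.sum(np.exp(1j*np.pi*(j+.5)**2*tau + 2j*np.pi*(j+.5)*(z+.5)))

def R(z, lam):                           # elliptic R-matrix for N = 2
    al = lambda u: theta(z)*theta(u+gam)/(theta(z-gam)*theta(u))
    be = lambda u: -theta(z+u)*theta(gam)/(theta(z-gam)*theta(u))
    l = lam[0] - lam[1]
    M = np.eye(4, dtype=complex)
    M[1, 1], M[2, 2] = al(l), al(-l)
    M[1, 2], M[2, 1] = be(l), be(-l)
    return M

def Rop(z, lam, p, q, shift=None):       # R^{(pq)}(z, lam - gam*h^{(shift)}) on (C^2)^{x3}
    E = np.zeros((8, 8), dtype=complex)
    for col in range(8):
        idx = [(col >> 2) & 1, (col >> 1) & 1, col & 1]
        le = list(lam)
        if shift is not None:
            le[idx[shift]] -= gam
        M = R(z, le)
        for r in range(2):
            for s in range(2):
                out = idx.copy()
                out[p], out[q] = r, s
                E[4*out[0] + 2*out[1] + out[2], col] += M[2*r + s, 2*idx[p] + idx[q]]
    return E

z, lam = 0.47, (0.23, -0.58)
# L(z,lam) = R^{(01)}(z, lam - gam h^{(2)}) R^{(02)}(z - gam, lam); space 0 is auxiliary
L = Rop(z, lam, 0, 1, shift=2) @ Rop(z - gam, lam, 0, 2)

# c_i defined by L_{ii}(z,lam) v0 = c_i v0, with v0 = e1 x e2 + e2 x e1
for i in range(2):
    vin = np.zeros(8, dtype=complex)
    vin[4*i + 1] = vin[4*i + 2] = 1      # e_i (auxiliary) tensor v0 (quantum)
    vout = (L @ vin)[4*i: 4*i + 4]       # should be proportional to v0
    lij = lam[i] - lam[1 - i]
    # eq. (Ruij) with N=2, l=1 predicts:
    pred = theta(z - gam)/theta(z - 2*gam) * theta(lij + gam)/theta(lij)
    print(abs(vout[1] - pred))           # should vanish up to rounding
```

Restricted to the one-dimensional zero weight subspace, the transfer matrix then acts exactly as $\frac{\theta(z-\gamma)}{\theta(z-2\gamma)} M$, the $N=2$, $\ell=1$ case of~(\ref{Ruij}).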
\section{Setting up the Bethe ansatz} \label{4} Let us recall that the algebraic Bethe ansatz is a method for diagonalizing transfer matrices obtained as a trace of a $2 \times 2$ Lax matrix with a spectral parameter. The nested Bethe ansatz is a generalization of the Bethe ansatz to the case of Lax matrices with an $N$-dimensional auxiliary space, for $N>2$; it is achieved in $(N-1)$ steps, each step reducing the dimension of the auxiliary space by one. Here we apply a dynamical version of the nested Bethe ansatz: the Lax operator is a matrix on an auxiliary $N$-dimensional space whose elements are difference operators acting on functions of $N$ parameters $(\lambda_1,\ldots,\lambda_N)$ with values in some vector space. The idea consists in looking for eigenstates $\psi$ obtained by applying ``creation operators'' $B(t_k)$ to a reference state $v$: $ \psi(t_1, \ldots, t_m) = B(t_1) \cdots B(t_m) v$. The $B(t_k)$'s are particular elements of the Lax matrix $\mathcal{L}$, and the reference state $v$ is chosen to be an obvious eigenstate of the elements of the Lax matrix. The problem amounts to finding the conditions on the parameters $t_k$. More precisely, in the case $N=2$, $B(t)$ is the operator $\mathcal{L}_{12}(t)$. In the case $N>2$, it is a little more complicated: the operators $B(t)$ belong to the set $(B_j(t)=\mathcal{L}_{1j}(t))_{j=2, \ldots,N}$, so that $\psi$ is roughly of the form $B_{j_1}(t_1) \cdots B_{j_m}(t_m) v$ and we have to choose not only the right parameters $t_k$, but also the indices $j_k$. Let us now go into details. We want to diagonalize the transfer matrix \begin{equation} \label{genT} \mathcal{T}(z) \ = \ \mathrm{tr}^{(0)} \mathcal{L}(z) \end{equation} where \begin{equation} \label{genL} \mathcal{L}(z) \ = \ R^{(01)}\Big(z,\lambda-\gamma\sum_{j=2}^{N\ell} h^{(j)}\Big) \ \cdots \ R^{(0,N\ell)}\Big(z-\gamma(N\ell-1),\lambda\Big) \ \Gamma^{(0)}. 
\end{equation} This Lax operator is a matrix on an auxiliary $N$-dimensional space (denoted by the index $(0)$) whose elements are difference operators acting on functions of $\lambda=(\lambda_1,\ldots,\lambda_N)$ with values in $W=({\mathbb C}^N)^{\otimes N\ell}$. We denote these elements by: \begin{displaymath} \mathcal{L}(z) \ = \ \left( \begin{array}{cc} A(z) & B_j(z) \\ C_i(z) & D_{ij}(z) \end{array} \right)_{i,j=2, \ldots,N}. \end{displaymath} We denote by $B$ the row vector $(B_2, \ldots, B_N)$ and by $D$ the matrix $(D_{ij})_{i,j=2, \ldots,N}$. \subsection{The form of the Bethe ansatz} Now let us explain more precisely in which form we are looking for the eigenstates of $\mathcal{T}(z)$. First we choose a reference state $v$. We take a joint eigenstate of the operators $A$ and $D_{ii}$ which is furthermore annihilated by all the $D_{ij}$ for $i \neq j$: \begin{displaymath} v \ = \ \underbrace{e_1 \otimes \cdots \otimes e_1}_{N \ell \ \mathrm{factors}} \ \in W. \end{displaymath} \begin{prop} The action of the operators $A$ and $D_{ij}$ on the reference state is given by the following formulae: \begin{eqnarray} \label{Aaction} & & A(z) \ [g(\lambda) \ v] \ = \ g(\lambda-\gamma\omega_1) \ v, \\ \label{Daction} & & D(z) \ [g(\lambda) \ v] \ = \ \sum_{i,j=2}^N e_{ij} \ D_{ij}(z) \ [ g(\lambda) \ v] \ = \ \sumi{2} e_{ii} \ \Phi_i(z,\lambda) \ g(\lambda-\gamma\omega_i)\ v, \end{eqnarray} where $ \displaystyle \Phi_i(z,\lambda) \ = \ \frac{\theta(z)}{\theta(z-N\ell\gamma)} \frac{\theta(\lambda_i-\lambda_1+N\ell\gamma)}{\theta(\lambda_i-\lambda_1)}$. \end{prop} \begin{myproof} We can write $R(z,\lambda)=\sum_{i,j=1}^N e_{ij} \otimes R_{ij}(z,\lambda)$ with \begin{eqnarray*} & & R_{ii}(z,\lambda) = e_{ii} + \sum_{j;j\neq i} \alpha(z,\lambda_i-\lambda_j) \ e_{jj} \ \ \ \ \mathrm{for} \ i=1,\ldots,N, \\ & & R_{ij}(z,\lambda) = \beta(z,\lambda_i-\lambda_j) \ e_{ji} \ \ \ \ \mathrm{for} \ i,j=1,\ldots,N, \ i\neq j. 
\end{eqnarray*} Then we have \begin{eqnarray*} & & L_{ij}(z,\lambda) \ = \ \sum_{j_1,\ldots,j_{N\ell-1}}^N \ R^{(1)}_{i j_1}\Big(z,\lambda-\gamma\sum_{k=2}^{N\ell} h^{(k)}\Big) \ R^{(2)}_{j_1 j_2}\Big(z-\gamma,\lambda-\gamma\sum_{k=3}^{N\ell} h^{(k)}\Big) \ \cdots \\ & & \hspace*{105pt} \cdots \ R^{(N\ell)}_{j_{N\ell-1} j} \Big(z-\gamma(N\ell-1),\lambda\Big). \end{eqnarray*} Let us consider $L_{ij}$, for $i,j \geq 2$, $i\neq j$. In each term of the sum, at least one of the factors is of the form $R_{ik}$ for some $k \neq i$. Since $R_{ik}$ is proportional to $e_{ki}$, $R_{ik} e_1=0$. Thus $D_{ij} \ v = 0$ for $i,j \geq 2$, $i\neq j$. Now if we apply $L_{ii}$ to $v$, the only term which gives a nonzero contribution is the one corresponding to $j_1=\cdots=j_{N\ell-1}=i$. Therefore \begin{displaymath} L_{11}(z,\lambda) \ v \ = \ v \ \ \ \ \mathrm{and} \ \ \ \ L_{ii}(z,\lambda) \ v \ = \ \Phi_i(z,\lambda) \ v \ \ \ \ \mathrm{if} \ i\neq 1, \end{displaymath} with $ \displaystyle \Phi_i(z,\lambda) \ = \ \prod_{k=1}^{N\ell} \alpha(z-\gamma(k-1),\lambda_i-\lambda_1+\gamma(N\ell-k))$. \end{myproof} Then we look for eigenstates in the form \begin{displaymath} \psi(t_1,\ldots,t_m,\lambda) = \sum_{j_1,\ldots,j_m=2}^N B_{j_1}(t_1) \cdots B_{j_m}(t_m) \ v \ g_{j_1,\ldots,j_m}(\lambda), \end{displaymath} with some coefficients $g_{j_1,\ldots,j_m}$ which are functions of $\lambda$ (and depend implicitly on the parameters $t_k$). We impose conditions on these $g_{j_1,\ldots,j_m}$'s so as to consider only the states $\psi$ of weight zero. The vector $v$ has weight $[(N-1)\ell,-\ell,\ldots,-\ell]$ in the basis $(\omega'_j=e_{jj}-\omega/N)$. Since applying the operator $B_j$ to a vector of weight $\mu$ gives a vector of weight $\mu-\omega_1+\omega_j$, we have to apply to $v$ exactly $\ell$ times each of the $B_j$ in order to get a zero weight vector. 
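The weight bookkeeping in this counting argument is elementary and can be sketched explicitly (the values of $N$ and $\ell$ below are an arbitrary illustrative choice):

```python
import numpy as np

N, ell = 4, 3                        # illustrative values
w = np.zeros(N, dtype=int)
w[0] = N * ell                       # weight of v = e1 x ... x e1 (N*ell factors) is N*ell*omega_1

# each B_j (j = 2..N in the text's labels) shifts the weight by omega_j - omega_1;
# apply each of the N-1 creation operators exactly ell times
for j in range(1, N):
    for _ in range(ell):
        w[0] -= 1
        w[j] += 1

print(w)                             # [3 3 3 3]: a multiple of omega, i.e. "zero weight"
```

Any other distribution of the $(N-1)\ell$ operators $B_j$ would leave unequal components, hence a weight that is not a multiple of $\omega$.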
This means that we must take a state of the form \begin{equation} \label{psi} \psi(t_1,\ldots,t_{(N-1)\ell},\lambda) \ = \ B^{(1)}(t_1) \cdots B^{((N-1)\ell)}(t_{(N-1)\ell}) \ v \ g^{(1,\ldots,(N-1)\ell)}(\lambda) \end{equation} where $g^{(1,\ldots,(N-1)\ell)}$ is a function of $\lambda$ with values in the zero weight subspace of $\check{W}=({\mathbb C}^{N-1})^{\otimes (N-1)\ell}$. \subsection{The commutation relations} To evaluate the action of the transfer matrix \begin{displaymath} \mathcal{T}(z) \ = \ A(z) + \mathrm{tr} D(z) \end{displaymath} on the vector $\psi$, we have to push $A$ and $D$ to the right of the $B$ operators, and we need to write the commutation relations~(\ref{RLL}) more explicitly. \begin{prop} The commutation relations can be written in the form \begin{eqnarray} B^{(1)}(z_1) \ B^{(2)}(z_2) & = & B^{(2)}(z_2) \ B^{(1)}(z_1) \ \check{R}^{(12)}(z_1-z_2, \check{\lambda}), \nonumber \\ A(z_1) \ B(z_2) & = & B(z_2) \ A(z_1) \ a(z_2-z_1,\lambda) + B(z_1) \ A(z_2) \ b(z_2-z_1,\lambda), \nonumber \\ \label{comm} D^{(1)}(z_1) \ B^{(2)}(z_2) & = & a^{(1)}(z_1-z_2, \lambda - \gamma h) \ B^{(2)}(z_2) \ D^{(1)}(z_1) \ \check{R}^{(12)}(z_1-z_2, \check{\lambda}) \\ & & \hspace*{20pt} + \ c^{(1)}(z_1-z_2, \lambda - \gamma h) \ B^{(2)}(z_1) \ D^{(1)}(z_2) \ \check{P}^{(12)}, \nonumber \end{eqnarray} where $\check{\lambda}=(\lambda_2, \ldots, \lambda_N)$, $\check{R}$ is the $R$-matrix associated to $\mathrm{gl}_{N-1}$ \begin{displaymath} \check{R}(z,\check{\lambda}) \ = \ \sumi{2} e_{ii} \otimes e_{ii} + \sumij{2} \alpha(z,\lambda_i - \lambda_j) \ e_{ii} \otimes e_{jj} + \sumij{2} \beta(z,\lambda_i - \lambda_j) \ e_{ij} \otimes e_{ji}, \end{displaymath} $\check{P}^{(12)}=\sum_{i,j=2}^N e_{ij} \otimes e_{ji}$ is the permutation operator, and $a(z,\lambda)$, $b(z,\lambda)$, $c(z,\lambda)$ are diagonal $(N-1) \times (N-1)$ matrices with elements $(j=2, \ldots, N)$ \begin{displaymath} [a(z,\lambda)]_{jj} = \frac{1}{\alpha(z,\lambda_j-\lambda_1)}, \ \ \ \ 
[b(z,\lambda)]_{jj} = - \ \frac{\beta(z,\lambda_1-\lambda_j)} {\alpha(z,\lambda_j-\lambda_1)}, \ \ \ \ [c(z,\lambda)]_{jj} = - \ \frac{\beta(z,\lambda_j-\lambda_1)} {\alpha(z,\lambda_j-\lambda_1)}. \end{displaymath} \end{prop} \begin{myproof} If we denote by $R_{ik,jn}$ the elements of the $R$-matrix: \begin{displaymath} R^{(12)}(z,\lambda) \ = \ \sum_{i,j,k,n=1}^N \ R_{ik,jn}(z,\lambda) \ e_{ik} \otimes e_{jn}, \end{displaymath} the element $(ik,jn)$ of the matrix relation~(\ref{RLL}) reads \begin{displaymath} \sum_{r,s=1}^N \ R_{ir,js}(z-w,\lambda-\gamma h) \ \mathcal{L}_{rk}(z) \ \mathcal{L}_{sn}(w) \ = \ \sum_{r,s=1}^N \ \mathcal{L}_{js}(w) \ \mathcal{L}_{ir}(z) \ R_{rk,sn}(z-w,\lambda). \end{displaymath} Taking successively $i=j=1$ and $k,n \geq 2$; $i=j=n=1$ and $k \geq 2$; $j=1$ and $i,k,n \geq 2$, and using the particular form of the $R$-matrix ($R_{ik,jn}=0$ unless $(k,n)=(i,j)$), we get \begin{eqnarray*} & & B_k(z) \ B_n(w) \ = \ \sum_{r,s=2}^N \ B_s(w) \ B_r(z) \ R_{rk,sn}(z-w,\lambda), \\ & & B_k(z) \ A(w) \ = \ B_k(w) \ A(z) \ \beta(z-w,\lambda_1-\lambda_k) \ + \ A(w) \ B_k(z) \ \alpha(z-w,\lambda_k-\lambda_1), \\ & & \vphantom{\sum_{r,s=2}^N} \beta(z-w,(\lambda-\gamma h)_i-(\lambda-\gamma h)_1) \ B_k(z) \ D_{in}(w) \ + \ \alpha(z-w,(\lambda-\gamma h)_i-(\lambda-\gamma h)_1) \ D_{ik}(z) \ B_n(w) \\ & & \hspace*{20pt} = \ \sum_{r,s=2}^N \ B_s(w) \ D_{ir}(z) \ R_{rk,sn}(z-w,\lambda), \end{eqnarray*} which can be written in the form~(\ref{comm}). \end{myproof} Since the coefficients appearing in the commutation relations~(\ref{comm}) depend on $\lambda$ and $h$, we also need to know how the functions $f(\lambda,h)$ pass through the different operators. 
\begin{prop} For any complex-valued function $f$ of $\lambda$ and $h$, we have: \begin{eqnarray} \nonumber A(z) \ f(\lambda,h) & = & f(\lambda-\gamma\omega_1,h) \ A(z), \\ \label{ABDshift} B_i(z) \ f(\lambda,h) & = & f(\lambda-\gamma\omega_i,h+\omega_1-\omega_i) \ B_i(z), \ \ \ \ \mathrm{for} \ i=2, \ldots, N, \\ \nonumber D_{ij}(z) \ f(\lambda,h) & = & f(\lambda-\gamma\omega_j,h+\omega_i-\omega_j) \ D_{ij}(z), \ \ \ \ \mathrm{for} \ i,j=2, \ldots, N. \end{eqnarray} \end{prop} \begin{myproof} For the $\lambda$ dependence it is a straightforward consequence of the definition~(\ref{defL}) of the Lax operator. For the $h$ dependence, it is not difficult to see that since $L$ commutes with the action of ${\mathfrak{h}}$, we also have $[\mathcal{L}^{(12)}(z) , f(h^{(1)}+h^{(2)}) ] = 0 $, or equivalently $\mathcal{L}_{ij}(z) f(h+\omega_j) = f(h+\omega_i) \mathcal{L}_{ij}(z)$. \end{myproof} \section{The first step of the Bethe ansatz} \label{5} With the help of the commutation relations~(\ref{comm}, \ref{ABDshift}), the action of $\mathcal{T}=A+\mathrm{tr} D$ on the vector $\psi$ given by~(\ref{psi}) can be recast in the form \begin{eqnarray*} & & \mathcal{T}(z) \ \psi(t_1,\ldots,t_m,\lambda) \ = \ B^{(1)}(t_1) \cdots B^{(m)} (t_m) \ v \ g_0^{(1,\ldots,m)}(\lambda) \\ & & \hspace*{20pt} + \sum_{k=1}^m B^{(k)}(z) B^{(k+1)}(t_{k+1}) \cdots B^{(m)}(t_m) B^{(1)}(t_1) \cdots B^{(k-1)}(t_{k-1}) \ v \ g_k^{(1,\ldots,m)}(\lambda). \end{eqnarray*} Here and until the end of this section, we write $m \equiv (N-1)\ell$. Assuming that these different terms are linearly independent, we have to impose that $g_k=0$ for $k=1,\ldots,m$, and that $g_0$ is proportional to $g$. The term associated to $g_0$ is usually called the ``wanted term'' and the other ones the ``unwanted terms''. If we carried out the commutation procedure, we would get $2^m$ terms for the $A$ part, and as many for the $D$ part: it is hopeless to apply the process literally to all the terms. 
Nevertheless the ``wanted term'' and the first ``unwanted term'' are easy to obtain. To get the ``wanted term'', we need to keep only the first part of the commutation relations $A(z) B(t) = B(t) A(z) a(t-z) + (\cdots)$, $D^{(1)}(z) B^{(2)}(t) = a^{(1)}(z-t) B^{(2)}(t) D^{(1)}(z) \check{R}^{(12)}(z-t) + (\cdots)$. Between two commutations of $A(z)$ with one $B(t)$, we push a factor $a(t-z)$ to the right, and between two commutations of $D(z)$ with one $B(t)$, we push a factor $\check{R}$ to the right, using the relations~(\ref{ABDshift}). Once $A(z)$ and $D(z)$ are completely to the right, they act on $v \ g(\lambda)$ according to~(\ref{Aaction}, \ref{Daction}). We get eventually \begin{eqnarray*} \lefteqn{g_0^{(1,\ldots,m)}(\lambda) \ = \ \left[ \vphantom{\sum_{k=2}^m} a^{(m)}\Big(t_m-z,\lambda_1-\gamma, \check{\lambda}\Big) \ a^{(m-1)}\Big(t_{m-1}-z,\lambda_1-\gamma,\check{\lambda}+\gamma \check{h}^{(m)}\Big) \ \cdots \right.} \\ & & \cdots \ \left. a^{(1)}\Big(t_1-z,\lambda_1-\gamma,\check{\lambda}+\gamma \sum_{k=2}^m \check{h}^{(j)}\Big) \right] \ g^{(1,\ldots,m)}(\lambda_1-\gamma,\check{\lambda}) \\ & & + \ \mathrm{tr}^{(0)} \left[ \vphantom{\sum_{k=2}^m} a^{(0)}(z-t_1,\lambda_1-\ell\gamma,\check{\lambda}) \ a^{(0)}(z-t_2,\lambda_1-(\ell+1)\gamma,\check{\lambda}) \ \cdots \right. \\ & & \vphantom{\sum_{k=2}^m} \cdots \ a^{(0)}(z-t_m,\lambda_1-(\ell+m-1)\gamma,\check{\lambda}) \ \Phi^{(0)}(z) \ \check{R}^{(0m)}\Big(z-t_m,\check{\lambda}\Big) \\ & & \left. \check{R}^{(0,m-1)}\Big(z-t_{m-1},\check{\lambda}+\gamma \check{h}^{(m)}\Big) \ \cdots \ \check{R}^{(0,1)}\Big(z-t_1,\check{\lambda}+\gamma \sum_{k=2}^m\check{h}^{(k)}\Big) \right] g^{(1,\ldots,m)}(\lambda). \end{eqnarray*} The aim is to reduce the problem to the diagonalization of a transfer matrix $\check{\T}$ of the same kind as $\mathcal{T}$, with the dimension $N$ decreased by one. This is the case if we choose the right dependence of the vector $g$ on the variable $\lambda_1$. 
\begin{prop} If we take $g^{(1,\ldots,m)}(\lambda) = G(\lambda) \ \check{\psi}(\check{\lambda})$ with \begin{equation} \label{G} G(\lambda) \ = \ \mathrm{e}^{c_1 \lambda_1} \prod_{j=2}^N \prod_{p=1}^\ell \theta(\lambda_1-\lambda_j-p\gamma), \end{equation} where $c_1$ is an arbitrary constant, then \begin{eqnarray} \label{wanted} \lefteqn{g_0^{(1,\ldots,m)}(\lambda) \ = \ G(\lambda) \left[ \vphantom{\frac{\theta(z)}{\theta(z-N\ell\gamma)}} S_+(z;t_1,\ldots,t_m) \ \mathrm{e}^{-\gamma c_1} \right.} \\ & & \left. + \ \frac{\theta(z)}{\theta(z-N\ell\gamma)} \ S_-(z;t_1,\ldots,t_m) \ \check{\T}^{(m,\ldots,1)}(z) \right] \ \check{\psi}(\check{\lambda}), \nonumber \end{eqnarray} where $ \displaystyle S_\pm(z;t_1,\ldots,t_m) = \prod_{k=1}^{m} \frac{\theta(z-t_k\pm\gamma)} {\theta(z-t_k)}$ and \begin{eqnarray*} \lefteqn{\check{\Gamma}^{(1,\ldots,m)} \ \check{\T}^{(m,\ldots,1)}(z) \ \left(\check{\Gamma}^{(1,\ldots,m)}\right)^{-1}} \\ & & = \mathrm{tr}^{(0)} \left[ \check{R}^{(0m)}\Big(z-t_m,\check{\lambda}-\gamma\sum_{j=1}^{m-1} \check{h}^{(j)}\Big) \ \cdots \ \check{R}^{(01)}\Big(z-t_1,\check{\lambda}\Big) \ \check{\Gamma}^{(0)} \right]. \end{eqnarray*} Up to a conjugation by $\check{\Gamma}^{(1,\ldots,m)} = (\check{\Gamma}^{(1)} \ \cdots \ \check{\Gamma}^{(m)})$ (where $\check{\Gamma}$ is the shift operator $\check{\Gamma}=\mathrm{diag}(\Gamma_i)_{i=2,\ldots,N}$), $\check{\T}$ is the transfer matrix acting in $\check{W}=({\mathbb C}^{N-1})^{\otimes (N-1)\ell}$ given by an expression similar to ~(\ref{genT}, \ref{genL}). \end{prop} \begin{myproof} Let us look at the first term of $g_0$, which is given by the action of a diagonal matrix on $g$. 
Because $g$ is a zero weight vector, we find that this term is simply a multiple of $g$ given by \begin{eqnarray*} \lefteqn{\left[ a^{(m)} \ a^{(m-1)} \ \cdots \ a^{(1)} \right] g^{(1,\ldots,m)}(\lambda-\gamma\omega_1)} \\ & & = \ \prod_{k=1}^m \frac{\theta(z-t_k+\gamma)}{\theta(z-t_k)} \ \prod_{j=2}^N \frac{\theta(\lambda_1-\lambda_j-\gamma)}{\theta(\lambda_1-\lambda_j- (\ell+1)\gamma)} \ g^{(1,\ldots,m)}(\lambda-\gamma\omega_1). \end{eqnarray*} For this to be exactly $g^{(1,\ldots,m)}(\lambda)$ up to a constant factor, we have to take \begin{displaymath} g^{(1,\ldots,m)}(\lambda) \ = \ G(\lambda) \ \check{\psi}(\check{\lambda}) \end{displaymath} with the function $G$ as in ~(\ref{G}). Now let us simplify the second term. If we compute the $j$th element of the matrix $a(z-t_1,\lambda_1-\ell\gamma,\check{\lambda}) \cdots a(z-t_m,\lambda_1-(\ell+m-1)\gamma,\check{\lambda}) \Phi(z)$, we find \begin{eqnarray*} \lefteqn{\left[ a(z-t_1,\lambda_1-\ell\gamma,\check{\lambda}) \ \cdots \ a(z-t_m,\lambda_1-(\ell+m-1)\gamma,\check{\lambda}) \ \Phi(z)\right]_j} \\ & & = \ \frac{\theta(z)}{\theta(z-(m+\ell)\gamma)} \ \prod_{k=1}^m \frac{\theta(z-t_k-\gamma)}{\theta(z-t_k)} \ \frac{\theta(\lambda_1-\lambda_j-\ell\gamma)}{\theta(\lambda_1-\lambda_j)} \ \Gamma_j \end{eqnarray*} and then it is clear that \begin{eqnarray*} \lefteqn{\left[ a(z-t_1,\lambda_1-\ell\gamma,\check{\lambda}) \ \cdots \ a(z-t_m,\lambda_1-(\ell+m-1)\gamma,\check{\lambda}) \ \Phi(z)\right]_j \ G(\lambda)} \\ & & = \ \frac{\theta(z)}{\theta(z-(m+\ell)\gamma)} \ \prod_{k=1}^m \frac{\theta(z-t_k-\gamma)}{\theta(z-t_k)} \ G(\lambda) \ \Gamma_j. \end{eqnarray*} So when we take the trace we simply get the transfer matrix $\check{\T}$ up to a scalar coefficient. 
\end{myproof} The first ``unwanted term'' is obtained similarly, except that for the commutation of $A(z)$ and $D(z)$ across $B(t_1)$, we keep the second part of the commutation relations: \begin{eqnarray*} \lefteqn{g_1^{(1,\ldots,m)}(\lambda) \ = \ \left[ \vphantom{\sum_{k=2}^m} a^{(m)}\Big(t_m-t_1,\lambda_1-\gamma, \check{\lambda}\Big) \ \cdots \ a^{(2)}\Big(t_2-t_1,\lambda_1-\gamma, \check{\lambda}+\gamma \sum_{k=3}^m \check{h}^{(k)}\Big) \right.} \\ & & \ \left. \ b^{(1)}\Big(t_1-z,\lambda_1-\gamma,\check{\lambda}+\gamma \sum_{k=2}^m \check{h}^{(k)}\Big) \right] \ g^{(1,\ldots,m)}(\lambda_1-\gamma,\check{\lambda}) \\ & & + \ \mathrm{tr}^{(0)} \left[ \vphantom{\sum_{k=2}^m} c^{(0)}(z-t_1,\lambda_1-\ell\gamma,\check{\lambda}) \ a^{(0)}(t_1-t_2,\lambda_1-(\ell+1)\gamma,\check{\lambda}) \ \cdots \right. \\ & & \vphantom{\sum_{k=2}^m} \cdots \ a^{(0)}(t_1-t_m,\lambda_1-(\ell+m-1)\gamma,\check{\lambda}) \ \Phi^{(0)}(t_1) \ \check{R}^{(0m)}\Big(t_1-t_m,\check{\lambda}\Big) \ \cdots \\ & & \left. \cdots \ \check{R}^{(0,2)}\Big(t_1-t_2,\check{\lambda}+\gamma \sum_{k=3}^m\check{h}^{(k)}\Big) \ \check{P}^{(01)} \right] g^{(1,\ldots,m)}(\lambda). \end{eqnarray*} \begin{prop} With $g^{(1,\ldots,m)}(\lambda) = G(\lambda) \ \check{\psi}(\check{\lambda})$ where $G(\lambda)$ is given by~(\ref{G}), we have \begin{eqnarray*} \lefteqn{g_1^{(1,\ldots,m)}(\lambda) \ = \ G(\lambda) \ \frac{\theta(\gamma)}{\theta(t_1-z)} \ X^{(1)}(z,t_1,\lambda) \ \left[ \vphantom{\frac{\theta(t_1)}{\theta(t_1-N\ell\gamma)}} \ K_+(t_1,\ldots,t_m) \ \mathrm{e}^{-\gamma c_1} \right.} \\ & & \left. 
- \ \frac{\theta(t_1)}{\theta(t_1-N\ell\gamma)} \ K_-(t_1,\ldots,t_m) \ \check{\T}^{(m,\ldots,1)}(t_1) \right] \ \check{\psi}(\check{\lambda}) \end{eqnarray*} where \begin{displaymath} K_\pm(t_1,\ldots,t_m) = \prod_{k=2}^{m} \frac{\theta(t_1-t_k\pm\gamma)}{\theta(t_1-t_k)} \end{displaymath} and $X$ is the diagonal $(N-1) \times (N-1)$ matrix of elements $ \displaystyle [X(z,t,\lambda)]_{jj} = \frac{\theta(z-t+\lambda_j-\lambda_1+\ell\gamma)} {\theta(\lambda_j-\lambda_1+\ell\gamma)}$, for $j=2,\ldots,N$. \end{prop} We shall not compute directly the other ``unwanted terms''. To obtain them, let us change the order of the $B$ operators in ~(\ref{psi}), using the commutation relations~(\ref{comm},\ref{ABDshift}). We get \begin{equation} \label{newpsi} \psi(t_1,\ldots,t_m,\lambda) = B^{(2)}(t_2) \cdots B^{(m)}(t_m) B^{(1)}(t_1)\ v \ \check{\mathcal{R}}^{(1;m,\ldots,2)} \ g^{(1,\ldots,m)}(\lambda), \end{equation} where \begin{displaymath} \check{\mathcal{R}}^{(1;m,\ldots,2)} \ = \ \check{R}^{(1m)}\Big(t_1-t_m,\check{\lambda}\Big) \ \cdots \ \check{R}^{(12)}\Big(t_1-t_2,\check{\lambda}+\gamma\sum_{k=3}^m \check{h}^{(k)}\Big). \end{displaymath} Starting with~(\ref{newpsi}) and applying the same procedure as before, we find another expression for the ``wanted term''; let us check that the two results are equivalent. We find now \begin{eqnarray*} \lefteqn{g_0^{(1,\ldots,m)}(\lambda) \ = \ G(\lambda) \ \left( \check{\mathcal{R}}^{(1;m,\ldots,2)} \right)^{-1} \left[ \vphantom{\frac{\theta(z)}{\theta(z-N\ell\gamma)}} S_+(z;t_2,\ldots,t_m,t_1) \ \mathrm{e}^{-\gamma c_1} \right.} \\ & & \left. + \ \frac{\theta(z)}{\theta(z-N\ell\gamma)} \ S_-(z;t_2,\ldots,t_m,t_1) \ \check{\T}^{(1,m,\ldots,2)}(z) \right] \ \check{\mathcal{R}}^{(1;m,\ldots,2)} \ \check{\psi}(\check{\lambda}). 
\end{eqnarray*} Since $S_\pm$ are symmetric functions of $(t_1,\ldots,t_m)$, and thanks to a commutation property of $\check{\T}$ and $\check{\mathcal{R}}$, stated below, it is straightforward to see that this second expression is equal to~(\ref{wanted}). \begin{prop} \begin{equation} \label{TR} \check{\T}^{(1,m,\ldots,2)}(z) \ \check{\mathcal{R}}^{(1;m,\ldots,2)} \ = \ \check{\mathcal{R}}^{(1;m,\ldots,2)} \ \check{\T}^{(m,\ldots,1)}(z). \end{equation} \end{prop} \begin{myproof} We can write $\check{\T}^{(m,\ldots,1)}= \mathrm{tr}^{(0)} \tilde{\mathcal{L}}^{(0;m,\ldots,1)}$ with \begin{eqnarray*} \tilde{\mathcal{L}}^{(0;m,\ldots,1)} & = & \Gaminv{m} \RR{0}{m} \Gaminv{m-1} \RR{0}{m-1} \cdots \Gaminv{1} \RR{0}{1} \Gam{0} \Gam{1} \cdots \Gam{m} \\ & = & \Gam{0} \RR{0}{m} \Gaminv{m} \RR{0}{m-1} \Gaminv{m-1} \cdots \RR{0}{1} \Gam{2} \cdots \Gam{m} \end{eqnarray*} because $\Gam{0} \Gam{j}$ commutes with $\RR{0}{j}$. Using the dynamical Yang--Baxter equation~(\ref{DYB}) in the form \begin{displaymath} \Gam{0} \RR{0}{1} \Gaminv{1} \RR{0}{j} \Gaminv{j} \RR{1}{j} = \RR{1}{j} \Gaminv{j} \RR{0}{j} \Gam{0} \RR{0}{1} \Gaminv{1}, \end{displaymath} one can see that $\tilde{\mathcal{L}}^{(0;1,2)} \check{\mathcal{R}}^{(1;2)} = \check{\mathcal{R}}^{(1;2)} \tilde{\mathcal{L}}^{(0;2,1)}$. Then using the relations \begin{eqnarray*} \check{\mathcal{R}}^{(1;m+1,\ldots,2)} & = & \RR{1}{m+1} \Gaminv{m+1} \check{\mathcal{R}}^{(1;m,\ldots,2)} \Gam{m+1} \\ & = & \Gaminv{1} \Gaminv{m+1} \RR{1}{m+1} \Gam{1} \check{\mathcal{R}}^{(1;m,\ldots,2)} \Gam{m+1}, \end{eqnarray*} it is easy to prove the result by induction on $m$. \end{myproof} Let us note that, at this point, the situation in the usual nested Bethe ansatz is a little simpler. 
Indeed, if no shift enters the definition of the Lax operator, then by cyclicity of the trace $\check{\T}^{(1,m,\ldots,2)}(z)=\check{\T}^{(m,\ldots,1)}(z)$, and $\check{\mathcal{R}}^{(1;m,\ldots,2)}$ is precisely equal to $\check{\T}^{(1,m,\ldots,2)}(t_1)$. In that case, the relation~(\ref{TR}) is just the expression of the commutation of the transfer matrices for different values of the spectral parameter. But here, because of the dynamical nature of the Lax operator, $\check{\mathcal{R}}^{(1;m,\ldots,2)}$ is not a transfer matrix, and $\check{\T}^{(1,m,\ldots,2)}(z)$ is not equal to $\check{\T}^{(m,\ldots,1)}(z)$. It is now clear that the expression of the second ``unwanted term'' is: \begin{eqnarray*} \lefteqn{g_2^{(1,\ldots,m)}(\lambda) \ = \ G(\lambda) \ \frac{\theta(\gamma)}{\theta(t_2-z)} \ X^{(2)}(z,t_2,\lambda) \left[ \vphantom{\frac{\theta(t_2)}{\theta(t_2-N\ell\gamma)}} K_+(t_2,\ldots,t_m,t_1) \ \mathrm{e}^{-\gamma c_1} \right.} \\ & & \left. - \ \frac{\theta(t_2)}{\theta(t_2-N\ell\gamma)} \ K_-(t_2,\ldots,t_m,t_1) \ \check{\T}^{(1,m,\ldots,2)}(t_2) \right] \check{\mathcal{R}}^{(1;m,\ldots,2)} \ \check{\psi}(\check{\lambda}), \end{eqnarray*} which, with the use of relation~(\ref{TR}), can be written \begin{eqnarray*} \lefteqn{g_2^{(1,\ldots,m)}(\lambda) \ = \ G(\lambda) \ \frac{\theta(\gamma)}{\theta(t_2-z)} \ X^{(2)}(z,t_2,\lambda) \ \check{\mathcal{R}}^{(1;m,\ldots,2)}} \\ & & \left[ K_+(t_2,\ldots,t_m,t_1) \ \mathrm{e}^{-\gamma c_1} \ - \ \frac{\theta(t_2)}{\theta(t_2-N\ell\gamma)} \ K_-(t_2,\ldots,t_m,t_1) \ \check{\T}^{(m,\ldots,1)}(t_2) \right] \ \check{\psi}(\check{\lambda}). \end{eqnarray*} The expressions of the other ``unwanted terms'' are obtained by repeating the permutation of the $B$'s several times. 
\begin{prop} The cancellation of all the ``unwanted terms'' is equivalent to the set of relations \begin{displaymath} \check{\T}^{(m,\ldots,1)} (t_k) \ \check{\psi}(\check{\lambda}) \ = \ \mathrm{e}^{-\gamma c_1} \ \frac{\theta(t_k-N\ell\gamma)}{\theta(t_k)} \ \prod_{^{i=1}_{i\neq k}}^{m} \frac{\theta(t_k-t_i+\gamma)}{\theta(t_k-t_i-\gamma)} \ \check{\psi}(\check{\lambda}), \ \ \ \ \forall k=1,\ldots, m. \end{displaymath} \end{prop} \section{The Bethe equations} \label{6} After the first step of the Bethe ansatz described in the preceding section, we are led to the problem of the diagonalization of $\check{\T}^{((N-1)\ell,\ldots,1)}(z)$ for arbitrary values of the parameters $t_i$. Let us call $\check{\psi}^{((N-1)\ell,\ldots,1)}(\check{\lambda})$ the zero weight eigenstate of $\check{\T}^{((N-1)\ell,\ldots,1)}(z)$, with eigenvalue $\check{\varepsilon}(z)$ (both $\check{\varepsilon}$ and $\check{\psi}$ depending implicitly on the parameters $t_i$). If the following conditions are satisfied: \begin{displaymath} \check{\varepsilon} (t_k) \ = \ \mathrm{e}^{-\gamma c_1} \ \frac{\theta(t_k-N\ell\gamma)}{\theta(t_k)} \ \prod_{^{i=1}_{i\neq k}} ^{(N-1)\ell} \frac{\theta(t_k-t_i+\gamma)}{\theta(t_k-t_i-\gamma)}, \ \ \ \ k=1,\ldots, (N-1)\ell, \end{displaymath} the vector $\psi(t_1,\ldots,t_{(N-1)\ell},\lambda)$ given by~(\ref{psi}) will be an eigenstate of $\mathcal{T}(z)$ with eigenvalue \begin{displaymath} \varepsilon(z) \ = \ \prod_{k=1}^{(N-1)\ell} \frac{\theta(z-t_k+\gamma)} {\theta(z-t_k)} \ \mathrm{e}^{-\gamma c_1} \ + \ \frac{\theta(z)}{\theta(z-N\ell\gamma)} \ \prod_{k=1}^{(N-1)\ell} \frac{\theta(z-t_k-\gamma)}{\theta(z-t_k)} \ \check{\varepsilon}(z). \end{displaymath} The second step of the Bethe ansatz thus consists in diagonalizing $\check{\T}^{((N-1)\ell,\ldots,1)}(z)$ by repeating the same procedure. 
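As a consistency check (ours, not part of the original argument), the conditions on $\check{\varepsilon}(t_k)$ are precisely those ensuring that the apparent poles of the eigenvalue $\varepsilon(z)$ at the points $z=t_k$ cancel. Assuming, as is standard for this class of elliptic models, that $\theta$ is odd with a simple zero at the origin, the residue of $\varepsilon(z)$ at $z=t_k$ reads

```latex
% Residue of the eigenvalue at a Bethe root z = t_k
% (assuming \theta odd, \theta(0)=0, \theta'(0) \neq 0):
\begin{displaymath}
\mathrm{Res}_{z=t_k}\, \varepsilon(z) \ = \
\frac{\theta(\gamma)}{\theta'(0)} \left[
\mathrm{e}^{-\gamma c_1} \prod_{^{i=1}_{i\neq k}}^{(N-1)\ell}
\frac{\theta(t_k-t_i+\gamma)}{\theta(t_k-t_i)}
\ - \ \frac{\theta(t_k)}{\theta(t_k-N\ell\gamma)}
\prod_{^{i=1}_{i\neq k}}^{(N-1)\ell}
\frac{\theta(t_k-t_i-\gamma)}{\theta(t_k-t_i)} \
\check{\varepsilon}(t_k) \right],
\end{displaymath}
```

and this vanishes exactly when $\check{\varepsilon}(t_k)$ takes the value prescribed above; the Bethe conditions are thus equivalent to the regularity of $\varepsilon(z)$ at the points $t_k$.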
The total shift $\check{\Gamma}^{(1,\ldots,(N-1)\ell)}$ entering the definition of $\check{\T}^{((N-1)\ell,\ldots,1)}$ has no action because we are looking for zero weight eigenstates only. We seek eigenstates in the form \begin{displaymath} \check{\psi}(u_1,\ldots,u_{(N-2)\ell},\check{\lambda}) = \check{B}^{(1)}(u_1) \cdots \check{B}^{((N-2)\ell)} (u_{(N-2)\ell}) \ \check{v} \ \check{g}^{(1,\ldots,(N-2)\ell)}(\check{\lambda}), \end{displaymath} where $\check{v}$ is the vector of $\check{W}$ given by $\check{v} = e_2 \otimes \cdots \otimes e_2$ and $\check{B}_j=\check{\LL}_{2j}$ for $j=3,\ldots,N$. Everything works out similarly, except the fact that the operators $\check{D}_{ij}=\check{\LL}_{ij}$ for $i,j=3,\ldots,N$ act a little differently on $\check{v}$ because of the parameters $t_k$ in $\check{\T}$: \begin{displaymath} \check{D}(z) \ [g(\check{\lambda}) \ \check{v}] \ = \ \sumi{3} e_{ii} \ \check{\Phi}_i(z,\check{\lambda}) \ g(\check{\lambda}-\check{\omega}_i) \ \check{v}, \end{displaymath} where $ \displaystyle \check{\Phi}_i(z,\check{\lambda}) = \frac{\theta(\lambda_i-\lambda_2+(N-1)\ell\gamma)} {\theta(\lambda_i-\lambda_2)} \ \prod_{k=1}^{(N-1)\ell} \frac{\theta(z-t_k)}{\theta(z-t_k-\gamma)}$. We continue in this way until the $N$th step; at this last step, the diagonalization problem to solve is simply $\psi(\lambda_N-\gamma) = \varepsilon_N \ \psi(\lambda_N)$, the solution of which is (up to a constant) $\psi(\lambda_N) = \mathrm{e}^{c_N \lambda_N}, \varepsilon_N = \mathrm{e}^{-\gamma c_N}$. The whole procedure can be summarized as follows. \begin{mydef} We introduce a set of parameters $(t^{(n)}_1,\ldots,t^{(n)}_{(N-n)\ell})$ for $n=0,\ldots,N-1$, with \begin{displaymath} t^{(0)}_j=(j-1)\gamma, \ \ \ \ \forall j=1,\ldots,N\ell. 
\end{displaymath} We define a set of functions $\varepsilon_n(z)$, $n=1,\ldots,N$, by the recursion relation \begin{eqnarray*} \varepsilon_N(z) & = & \mathrm{e}^{-\gamma c_N}, \\ \varepsilon_n(z) & = & \mathrm{e}^{-\gamma c_n} \prod_{k=1}^{(N-n)\ell} \frac{\theta(z-t_k^{(n)}+\gamma)}{\theta(z-t_k^{(n)})} \ + \prod_{i=1}^{(N-n+1)\ell} \frac{\theta(z-t_i^{(n-1)})}{\theta(z-t_i^{(n-1)}-\gamma)} \prod_{k=1}^{(N-n)\ell} \frac{\theta(z-t_k^{(n)}-\gamma)}{\theta(z-t_k^{(n)})} \varepsilon_{n+1}(z), \end{eqnarray*} where $c_1,\ldots,c_N$ are arbitrary complex parameters. For each $n=1,\ldots,N-1$, we define a Lax matrix by setting: \begin{eqnarray*} & & R\left(n\big|z,\lambda\right) \ = \ \sumi{n} e_{ii} \otimes e_{ii} + \sumij{n} \alpha(z,\lambda_i-\lambda_j) \ e_{ii} \otimes e_{jj} + \sumij{n} \beta(z,\lambda_i-\lambda_j) \ e_{ij} \otimes e_{ji}, \\ & & \mathcal{L}\left(n\big|z\right) \ = \ \left( \prod_{k=1}^{(N-n+1)\ell} R^{(0k)}\Big(n\big|z-t_k^{(n-1)},\lambda-\gamma \sum_{j \bowtie k} h^{(j)} \Big) \right) \ \Gamma(n)^{(0)}, \end{eqnarray*} where the product on $k$ is taken in increasing order if $n$ is odd, and decreasing order if $n$ is even. The notation $j \bowtie k$ means $j>k$ if $n$ is odd, and $j<k$ if $n$ is even. $\Gamma(n)$ is the matrix $\mathrm{diag}(\Gamma_i)_{i=n,\ldots,N}$. The creation operators are the elements of $\mathcal{L}$ given by \begin{displaymath} B_j\left(n\big|t\right) \ = \ \mathcal{L}_{nj}\left(n\big|t\right), \ \ \ \ j=n+1,\ldots,N, \end{displaymath} where the subscripts $nj$ refer to the auxiliary space (0). 
Finally we define the states $\psi_n(\lambda_n,\ldots,\lambda_N)$ by $\displaystyle \psi_N(\lambda_N) = \mathrm{e}^{c_N \lambda_N}$ and the recursion relation (for $n=1,\ldots,N-1$) \begin{displaymath} \psi_n \ = \ B^{(1)}\left(n\big|t_1^{(n)}\right) \ \cdots \ B^{((N-n)\ell)}\left(n\big|t_{(N-n)\ell}^{(n)}\right) \ v_n \ G_n \ \psi_{n+1}, \end{displaymath} where \begin{eqnarray*} & & v_n \ = \ e_n\otimes \cdots \otimes e_n \ \in ({\mathbb C}^{N-n+1})^{\otimes (N-n+1)\ell}, \\ & & G_n(\lambda_n,\ldots,\lambda_N) \ = \ \mathrm{e}^{c_n \lambda_n} \ \prod_{j=n+1}^{N} \prod_{p=1}^{\ell} \ \theta(\lambda_n-\lambda_j-p\gamma). \end{eqnarray*} \end{mydef} \begin{prop} If the parameters $t_k^{(n)}$ are solution of the Bethe equations \begin{eqnarray} & & \prod_{^{j=1}_{j\neq k}}^{(N-n)\ell} \frac{\theta(t_k^{(n)}-t_j^{(n)}+\gamma)} {\theta(t_k^{(n)}-t_j^{(n)}-\gamma)} \ \prod_{i=1}^{(N-n+1)\ell} \frac{\theta(t_k^{(n)}-t_i^{(n-1)}-\gamma)}{\theta(t_k^{(n)}-t_i^{(n-1)})} \ \prod_{i=1}^{(N-n-1)\ell} \frac{\theta(t_k^{(n)}-t_i^{(n+1)})}{\theta(t_k^{(n)}-t_i^{(n+1)}+\gamma)} \\ & & \hspace*{20pt} = \ \mathrm{e}^{\gamma(c_n-c_{n+1})}, \ \ \ \ \ \ \ \ \forall n=1,\ldots,N-1, \ \forall k=1,\ldots,(N-n)\ell, \nonumber \end{eqnarray} then $\psi_1(\lambda_1,\ldots,\lambda_N)$ is a zero weight eigenstate of $\mathcal{T}(z)=\mathrm{tr}^{(0)} \mathcal{L}\left(1\big|z\right)$ with eigenvalue \begin{displaymath} \varepsilon_1(z) = \sum_{i=1}^N \mathrm{e}^{-\gamma c_i} \ \prod_{m=1}^{(N-i)\ell} \frac{\theta(z-t_m^{(i)}+\gamma)}{\theta(z-t_m^{(i)})} \ \prod_{j=1}^{i-1} \ \left( \prod_{m=1}^{(N-j)\ell} \frac{\theta(z-t_m^{(j)}-\gamma)}{\theta(z-t_m^{(j)})} \ \prod_{m=1}^{(N-j+1)\ell} \frac{\theta(z-t_m^{(j-1)})}{\theta(z-t_m^{(j-1)}-\gamma)} \right). 
\end{displaymath} \end{prop} \begin{myproof} The cancellation of all the ``unwanted terms'' is equivalent to the set of equations \begin{displaymath} \varepsilon_{n+1}(t_k^{(n)}) \ = \ \mathrm{e}^{-\gamma c_n} \ \prod_{i=1}^{(N-n+1)\ell} \frac{\theta(t_k^{(n)}-t_i^{(n-1)}-\gamma)}{\theta(t_k^{(n)}-t_i^{(n-1)})} \ \prod_{^{j=1}_{j\neq k}}^{(N-n)\ell} \frac{\theta(t_k^{(n)}-t_j^{(n)}+\gamma)} {\theta(t_k^{(n)}-t_j^{(n)}-\gamma)}, \end{displaymath} for $n=1,\ldots,N-1$ and $ k=1,\ldots,(N-n)\ell$. It is easy to see that \begin{eqnarray*} \varepsilon_n(z) = \sum_{i=n}^N \mathrm{e}^{-\gamma c_i} \ \prod_{m=1}^{(N-i)\ell} \frac{\theta(z-t_m^{(i)}+\gamma)}{\theta(z-t_m^{(i)})} \ \prod_{j=n}^{i-1} \ \left( \prod_{m=1}^{(N-j)\ell} \frac{\theta(z-t_m^{(j)}-\gamma)}{\theta(z-t_m^{(j)})} \ \prod_{m=1}^{(N-j+1)\ell} \frac{\theta(z-t_m^{(j-1)})}{\theta(z-t_m^{(j-1)}-\gamma)} \right), \end{eqnarray*} and so in the expression of $\varepsilon_{n+1}(t_k^{(n)})$ there is only one non-zero term, corresponding to $i=n+1$. \end{myproof} \begin{remark} Since the Ruijsenaars operator $M$ is related to the transfer matrix $\mathcal{T}(z)$ by the relation~(\ref{Ruij}), $\psi_1(\lambda_1,\ldots,\lambda_N)$ is an eigenfunction of $M$ with eigenvalue $\varepsilon = \frac{\theta(z-\gamma N\ell)}{\theta(z-\gamma\ell)} \varepsilon_1(z)$. This quantity does not depend on $z$, so we can evaluate it at $z=0$; there, every term with $i \geq 2$ in the expression of $\varepsilon_1(z)$ contains the vanishing factor $\theta(z-t_1^{(0)})=\theta(z)$, so only the $i=1$ term survives. We find \begin{displaymath} \varepsilon \ = \ \mathrm{e}^{-\gamma c_1} \ \frac{\theta(\gamma N\ell)}{\theta(\gamma\ell)} \ \prod_{m=1}^{(N-1)\ell} \frac{\theta(t_m^{(1)}-\gamma)}{\theta(t_m^{(1)})}. \end{displaymath} \end{remark} \bigskip \begin{center} {\sc Acknowledgement} \end{center} I thank Giovanni Felder for discussions. \bigskip
\section*{Abstract} Let $M$ be a smooth manifold and let $\mbox{$\mathscr{F}$}$ be a codimension one, $C^\infty$ foliation on $M$, with isolated singularities of Morse type. The study and classification of pairs $(M,\mbox{$\mathscr{F}$})$ is a challenging (and difficult) problem. In this setting, a classical result due to Reeb \cite{Reeb} states that a manifold admitting a foliation with exactly two center-type singularities is a sphere. In particular this is true if the foliation is given by a function. Along these lines, a result due to Eells and Kuiper \cite{Ku-Ee} classifies manifolds having a real-valued function admitting exactly three non-degenerate singular points. In the present paper, we prove a generalization of the above-mentioned results. To do this, we first describe the possible arrangements of pairs of singularities and the corresponding codimension one invariant sets, and then we give an elimination procedure for suitable center-saddle and some saddle-saddle configurations (of consecutive indices).\\ In the second part, we investigate whether other classical results, such as the Haefliger and Novikov (Compact Leaf) theorems, proved for regular foliations, still hold true in the presence of singularities. For this purpose, in the singular set, $Sing(\mbox{$\mathscr{F}$})$, of the foliation $\mbox{$\mathscr{F}$}$, we consider {\em{weakly stable}} components, which we define as those components admitting a neighborhood where all leaves are compact. If $Sing(\mbox{$\mathscr{F}$})$ admits only weakly stable components, given by smoothly embedded curves diffeomorphic to $S^1$, we are able to extend Haefliger's theorem. Finally, the existence of a closed curve, transverse to the foliation, leads us to state a Novikov-type result.\\ \section*{Acknowledgements} I am very grateful to prof. Bruno Sc\'ardua for proposing such an interesting subject to me and for his valuable advice. My hearty thanks to prof. Graziano Gentili for his suggestions on the writing of this article. 
\section{Foliations and Morse Foliations}\label{inizio} {\bf{Definition \ref{inizio}.1}} A {\em{codimension $k$, foliated manifold}} $(M,\mbox{$\mathscr{F}$})$ is a manifold $M$ with a differentiable structure, given by an atlas $\{(U_i, \phi_i)\}_{i \in I}$, satisfying the following properties:\\ (1) $\phi_i(U_i)= \mbox{\textrm{B}} ^{n-k} \times \mbox{\textrm{B}} ^k$;\\ (2) in $U_i \cap U_j \neq \varnothing$, we have $\phi_j \circ \phi_i^{-1}(x,y)=(f_{ij}(x,y),g_{ij}(y))$,\\ where $\{f_{ij}\}$ and $\{g_{ij}\}$ are families of, respectively, submersions and diffeomorphisms, defined on natural domains. Given a local chart ({\em{foliated chart}}) $(U, \phi)$, $\forall x \in \mbox{\textrm{B}} ^{n-k}$ and $y \in \mbox{\textrm{B}} ^k$, the set $\phi^{-1}(\cdot , y)$ is a {\em{plaque}} and the set $\phi^{-1}(x,\cdot)$ is a {\em{transverse section}}. The foliated structure of $(M,\mbox{$\mathscr{F}$})$ determines a partition of $M$ into subsets, the {\em{leaves}}, defined by means of an equivalence relation, each endowed with an intrinsic manifold structure. Let $x \in M$; we denote by $\mbox{$\mathscr{F}$}_x$ or $L_x$ the leaf of $\mbox{$\mathscr{F}$}$ through $x$. With the intrinsic manifold structure, $\mbox{$\mathscr{F}$}_x$ turns out to be an immersed (but not embedded, in general) submanifold of $M$.\\ In an equivalent way, a foliated manifold $(M,\mbox{$\mathscr{F}$})$ is a manifold $M$ with a collection of pairs $\{(U_i,g_i)\}_{i \in I}$, where $\{U_i\}_{i \in I}$ is an open covering of $M$, $g_i:U_i \rightarrow \mbox{\textrm{B}} ^k$ is a submersion, $\forall i \in I$, and the $g_i$'s satisfy the cocycle relations, $g_i=g_{ij} \circ g_j$, $g_{ii}=id$, for suitable diffeomorphisms $g_{ij}:\mbox{\textrm{B}} ^k \rightarrow \mbox{\textrm{B}}^k$, defined when $U_i \cap U_j \neq \varnothing$. Each $U_i$ is called a {\em{foliation box}}, and $g_i$ a {\em{distinguished map}}. 
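As a minimal illustration of the definition (our example, not taken from the text), the trivial product foliation is covered by a single foliated chart:

```latex
% Product foliation of M = B^{n-k} x B^k by a single chart:
% the identity chart satisfies (1)-(2) trivially, with one
% distinguished map g, and the leaves are the horizontal slices.
\begin{displaymath}
\phi = \mathrm{id} : M \rightarrow \mbox{\textrm{B}}^{n-k} \times \mbox{\textrm{B}}^k,
\qquad g(x,y)=y, \qquad
\mbox{$\mathscr{F}$}_{(x,y)} = \mbox{\textrm{B}}^{n-k} \times \{y\}.
\end{displaymath}
```

By property (2), every foliated manifold is locally of this form.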
The functions $\gamma_{ij}=\textrm{d}g_{ij}$ are the transition maps \cite{Stee} of a bundle $N \mbox{$\mathscr{F}$} \subset TM$, normal to the foliation. More completely, there exists a G-structure on $M$ \cite{Law}, which is a reduction of the structure group $GL(n,\mbox{ $ \mathbb{R}$})$ of the tangent bundle to the subgroup of the matrices $\left(\begin{array}{c|c} A & B \\\hline 0 & C \end{array}\right)$, where $A \in GL(n-k,\mbox{ $ \mathbb{R}$})$ and $C \in GL(k, \mbox{ $ \mathbb{R}$})$. A codimension one, $C^\infty$ foliation of a smooth manifold $M$, with isolated singularities, is a pair $\mbox{$\mathscr{F}$}=(\mbox{$\mathscr{F}$}^*,Sing(\mbox{$\mathscr{F}$}))$, where $Sing(\mbox{$\mathscr{F}$}) \subset M$ is a discrete subset and $\mbox{$\mathscr{F}$}^*$ is a codimension one, $C^\infty$ foliation (in the ordinary sense) of $M^*=M \setminus Sing(\mbox{$\mathscr{F}$})$. The {\em{leaves}} of $\mbox{$\mathscr{F}$}$ are the leaves of $\mbox{$\mathscr{F}$}^*$ and $Sing(\mbox{$\mathscr{F}$})$ is the {\em{singular set}} of $\mbox{$\mathscr{F}$}$. A point $p$ is a {\em{Morse singularity}} if there is a $C^\infty$ function, $f_p:U_p \subset M \rightarrow \mbox{ $ \mathbb{R}$}$, defined in a neighborhood $U_p$ of $p$, with a (single) non-degenerate critical point at $p$ and such that $f_p$ is a local first integral of the foliation, i.e. the leaves of the restriction $\mbox{$\mathscr{F}$}|_{U_p}$ are the connected components of the level hypersurfaces of $f_p$ in $U_p \setminus \{p \}$. A Morse singularity $p$, of index $l$, is a {\em{saddle}}, if $0<l<n$ (where $n=\dim M$), and a {\em{center}}, if $l=0,n$. We say that the foliation $\mbox{$\mathscr{F}$}$ has a {\em{saddle-connection}} when there exists a leaf accumulated by at least two distinct saddle-points. A {\em{Morse foliation}} is a foliation with isolated singularities, whose singular set consists of Morse singularities, and which has no saddle-connections. 
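For convenience, we recall the local models provided by the Morse Lemma (a standard fact, recalled here rather than taken from the text): in suitable coordinates around a Morse singularity $p$ of index $l$, the local first integral reads

```latex
% Morse Lemma local model at a singularity of index l:
\begin{displaymath}
f_p(x^1,\ldots,x^n) \ = \ -\sum_{i=1}^{l}(x^i)^2 \ + \ \sum_{i=l+1}^{n}(x^i)^2,
\end{displaymath}
```

so that, near a center ($l=0,n$), the leaves are concentric $(n-1)$-spheres, while near a saddle ($0<l<n$) they are the level quadrics of an indefinite form, together with the singular cone $f_p=0$.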
In this way, if a Morse foliation has a (global) first integral, it is given by a Morse function.\\ Of course, the first basic example of a Morse foliation is indeed a foliation defined by a Morse function on $M$. A less evident example is given by the foliation depicted in figure \ref{less}. In the literature, the orientability of a codimension $k$ (regular) foliation is determined by the orientability of the $(n-k)$-plane field tangent to the foliation, $x \rightarrow T_x \mbox{$\mathscr{F}$}_x$. Similarly, transverse orientability is determined by the orientability of a complementary $k$-plane field. A singular, codimension one foliation, $\mbox{$\mathscr{F}$}$, is {\em{transversely orientable}} \cite{Ca-Sca} if it is given by the natural $(n-1)$-plane field associated to a one-form, $\omega \in \Lambda ^1(M)$, which is integrable in the sense of Frobenius. In this case, choosing a Riemannian metric on $M$, we may find a global vector field transverse to the foliation, $X=grad(\omega)$, $\omega X \geq 0$, and $\omega_x X_x=0$ if and only if $x$ is a singularity for the foliation ($\omega(x)=0$). A transversely orientable, singular foliation $\mbox{$\mathscr{F}$}$ of $M$ is a transversely orientable (regular) foliation $\mbox{$\mathscr{F}$}^*$ of $M^*$ in the sense of the classical definition. Vice versa, if $\mbox{$\mathscr{F}$}^*$ is transversely orientable, in general, $\mbox{$\mathscr{F}$}$ is not. Thanks to the Morse Lemma \cite{Mil1}, Morse foliations reduce to a few representative cases. On the other hand, Morse foliations describe a large class among transversely orientable foliations. To see this, let $\mbox{$\mathscr{F}$}$ be a foliation defined by an integrable one-form, $\omega \in \Lambda^1(M)$, with isolated singularities. We proceed with a local analysis; using a local chart around each singularity, we may suppose $\omega \in \Lambda^1(\mbox{ $ \mathbb{R}$}^n)$, $\omega(0)=0$, and 0 is the only singularity of $\omega$. 
We have $\omega(x)= \sum_i h_i(x) dx^i$ and, in a neighborhood of $0 \in \mbox{ $ \mathbb{R}$}^n$, we may write $\omega(x)=\omega_1(x)+O(|x|^2)$, where $\omega_1$ is the linear part of $\omega$, defined by $\omega_1(x)= \sum_{i,j}a_{ij}x^i dx^j$, $a_{ij}=\partial h_j(0)/\partial x^i$. We recall that the integrability of $\omega$ implies the integrability of $\omega_1$ and that the singularity $0 \in \mbox{ $ \mathbb{R}$}^n$ is said to be non degenerate if and only if $(a_{ij}) \in \mbox{ $ \mathbb{R}$}(n)$ is non degenerate; in this latter case $(a_{ij})$ is symmetric: it is the Hessian matrix of some real function $f$, defining the linearized foliation ($\omega_1= \textrm{d}f$). We have \begin{displaymath} \begin{array}{ccc} \{\textrm{transversely orientable foliations, with Morse singularities}\}=\\\{\textrm{foliations, defined by non degenerate linear one-forms}\} \subset \\\{\textrm{foliations, defined by non degenerate one-forms}\}. \end{array}\end{displaymath} Let $(\sigma, \tau)$ be the space $\sigma$ of integrable one-forms in $\mbox{ $ \mathbb{R}$}^n$, with a singularity at the origin, endowed with the $C^1$-Whitney topology, $\tau$. If $\omega,\omega' \in \sigma$, we say that $\omega$ is {\em{equivalent}} to $\omega'$ ($\omega \sim \omega'$) if there exists a diffeomorphism $\phi:\mbox{ $ \mathbb{R}$}^n \rightarrow \mbox{ $ \mathbb{R}$}^n$, $\phi(0)=0$, which sends leaves of $\omega$ into leaves of $\omega'$. Moreover, we say $\omega$ is {\em{structurally stable}}, if there exists a neighborhood $V$ of $\omega$ in $(\sigma,\tau)$ such that $\omega' \sim \omega, \forall \omega' \in V$.\\ {\bf{Theorem \ref{inizio}.2 (Wagneur)}}\cite{Wag} {\em{The one-form $\omega \in \sigma$ is structurally stable if and only if the index of $0 \in Sing(\omega)$ is neither $2$ nor $n-2$.}} Let us denote by $S$ the space of foliations defined by non degenerate one-forms with singularities, whose index is neither $2$ nor $n-2$. 
If $S_1 \subset S$ is the subset of foliations defined by linear one-forms, then we have:\\ {\bf{Corollary \ref{inizio}.3}} {\em{There exists a surjective map,}}$$s: S_1 \rightarrow S/_\sim.$$ \section{Holonomy and Reeb Stability Theorems}\label{uno} It is well known that a basic tool in the study of foliations is the holonomy of a leaf (in the sense of Ehresmann). If $L$ is a leaf of a codimension $k$ foliation $(M,\mbox{$\mathscr{F}$})$, the holonomy $Hol(L,\mbox{$\mathscr{F}$})=\Phi(\pi_1(L))$ is the image of a representation, $\Phi:\pi_1(L) \rightarrow Germ(\mbox{ $ \mathbb{R}$}^k,0)$, of the fundamental group of $L$ into the germs of diffeomorphisms of $\mbox{ $ \mathbb{R}$}^k$, fixing the origin. Let $x \in L$ and $\Sigma _x$ be a section transverse to $L$ at $x$; with abuse of notation, we will write that a diffeomorphism $g: Dom(g) \subset \Sigma_x \rightarrow \Sigma_x$, fixing the origin, is an element of the holonomy group. For codimension one foliations ($k=1$), we may have: {\em{(i)}} $Hol(L,\mbox{$\mathscr{F}$})=\{e \}$, {\em{(ii)}} $Hol(L,\mbox{$\mathscr{F}$})=\{e,g \}$, with $g^2=e, g \neq e$, {\em{(iii)}} $Hol(L,\mbox{$\mathscr{F}$})$ generated by a single $g$, with $g^n \neq e$, $\forall n \neq 0$, where $g$ is an (orientation preserving or reversing) diffeomorphism. In particular, among orientation preserving diffeomorphisms, we might find a $g: \Sigma_x \rightarrow \Sigma_x$, such that $g$ is the identity on one component of $\Sigma_x \setminus \{x \}$ and not the identity on the other; in this case, we say that $L$ has {\em{unilateral holonomy}} (see figure \ref{holonomy} for some examples). 
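A classical germ realizing unilateral holonomy (a standard example of ours, independent of the figure) is

```latex
% A C^\infty diffeomorphism germ at 0 which is the identity
% for y <= 0 but not on any right-neighborhood of 0:
\begin{displaymath}
g(y) \ = \ \left\{ \begin{array}{ll}
y, & y \leq 0, \\
y + \mathrm{e}^{-1/y}, & y > 0,
\end{array} \right.
\end{displaymath}
```

which is $C^\infty$ and flat at $y=0$, since $\mathrm{e}^{-1/y}$ and all its derivatives tend to $0$ as $y \rightarrow 0^+$.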
\begin{figure} \begin{minipage}[t]{.45\linewidth} { \begin{center} { \psfrag{1}{$\mbox{$\mathscr{F}$}_1$} \psfrag{2}{$\mbox{$\mathscr{F}$}_2$} \psfrag{l}{$L$} \psfrag{l0}{$L_0$} \psfrag{l1}{$L_1$} \psfrag{l2}{$L_2$} \includegraphics[scale=.4]{holonomy.eps} \caption{$\mbox{$\mathscr{F}$}_1,\mbox{$\mathscr{F}$}_2$ foliations on $\mbox{ $ \mathbb{R}$} P^2$: $Hol(L,\mbox{$\mathscr{F}$}_1)=\{e\}$, $Hol(L_0,\mbox{$\mathscr{F}$}_1)=\{e,g_0\}$, $g_0^2=e$, $Hol(L_1,\mbox{$\mathscr{F}$}_2)=\{e,g_1\}$, $g_1$ orientation reversing diffeomorphism, $Hol(L_2,\mbox{$\mathscr{F}$}_2)=\{e,g_2\}$, $g_2$ generator of unilateral holonomy.} \label{holonomy} } \end{center} }\end{minipage}% \begin{minipage}[t]{.1\linewidth}{\hspace{.1\linewidth}}\end{minipage}% \begin{minipage}[t]{.45\linewidth} { \begin{center} { \psfrag{1}{$p_2$} \psfrag{2}{$p_3$} \psfrag{3}{$p_1$} \psfrag{4}{$q$} \psfrag{a}{$a$} \includegraphics[scale=.30]{Morse2.eps} \caption{A singular foliation of the sphere $S^2$, which does not admit a first integral. With the same spirit, a singular foliation on $S^3$ may be given.} \label{less} } \end{center} }\end{minipage} \end{figure} We recall the Reeb Stability Theorems (cf., for example, \cite{Cam} or \cite{Mo-Sca}).\\{\bf{Theorem \ref{uno}.1 (Reeb Local Stability)}} {\em{Let $\mbox{$\mathscr{F}$}$ be a $C^1$, codimension $k$ foliation of a manifold $M$ and $F$ a compact leaf with finite holonomy group. There exists a neighborhood $U$ of $F$, saturated in $\mbox{$\mathscr{F}$}$ (also called {\em{invariant}}), in which all the leaves are compact with finite holonomy groups. Further, we can define a retraction $\pi:U \rightarrow F$ such that, for every leaf $F' \subset U$, $\pi|_{F'} : F' \rightarrow F$ is a covering with a finite number of sheets and, for each $y \in F$, $\pi^{-1}(y)$ is homeomorphic to a disk of dimension $k$ and is transverse to $\mbox{$\mathscr{F}$}$. 
The neighborhood $U$ can be taken to be arbitrarily small.}} The last statement means in particular that, in a neighborhood of the point corresponding to a compact leaf with finite holonomy, the space of leaves is Hausdorff. Under certain conditions, the Reeb Local Stability Theorem may replace the Poincar\'e-Bendixson Theorem \cite{Palis} in higher dimensions. This is the case of codimension one, singular foliations $(M^n,\mbox{$\mathscr{F}$})$, with $n \geq 3$, and some center-type singularity in $Sing(\mbox{$\mathscr{F}$})$.\\ {\bf{Theorem \ref{uno}.2 (Reeb Global Stability)}} {\em{Let $\mbox{$\mathscr{F}$}$ be a $C^1$, codimension one foliation of a closed manifold, $M$. If $\mbox{$\mathscr{F}$}$ contains a compact leaf $F$ with finite fundamental group, then all the leaves of $\mbox{$\mathscr{F}$}$ are compact, with finite fundamental group. If $\mbox{$\mathscr{F}$}$ is transversely orientable, then every leaf of $\mbox{$\mathscr{F}$}$ is diffeomorphic to $F$; $M$ is the total space of a fibration $f:M \rightarrow S^1$ over $S^1$, with fibre $F$, and $\mbox{$\mathscr{F}$}$ is the fibre foliation, $\{f^{-1}(\theta)| \theta \in S^1 \}$.}} This theorem holds true even when $\mbox{$\mathscr{F}$}$ is a foliation of a manifold with boundary, which is, a priori, tangent on certain components of the boundary and transverse on other components \cite{God}. In this setting, let $\mbox{ $ \mathbb {H} $ } ^l=\{(x^1, \dots , x^l) \in \mbox{ $ \mathbb{R}$}^l|x^l \geq 0 \}$. 
Taking into account definition \ref{inizio}.1, we say that a foliation of a manifold with boundary is {\em{tangent}}, respectively {\em{transverse}} {\em{to the boundary}}, if there exists a differentiable atlas $\{(U_i, \phi_i)\}_{i \in I}$, such that property (1) of the above-mentioned definition holds for domains $U_i$ such that $U_i \cap \partial M = \varnothing$, while $\phi_i(U_i) = \mbox{\textrm{B}} ^{n-k} \times \mbox{ $ \mathbb {H} $ } ^k$, respectively, $\phi_i(U_i) = \mbox{ $ \mathbb {H} $ } ^{n-k} \times \mbox{\textrm{B}} ^k$ for domains such that $U_i \cap \partial M \neq \varnothing$. Moreover, we require that the changes of coordinates still have the form described in property (2). Recall that $\mbox{$\mathscr{F}$}|_{\partial M}$ is a regular codimension $k-1$ (respectively, $k$) foliation of the $(n-1)$-dimensional boundary. After this, it is immediate to state the definition for foliations which are tangent on certain components of the boundary and transverse on others.\\ Observe that, for foliations tangent to the boundary, we have to replace $S^1$ with $[0,1]$ in the second statement of the Reeb Theorem \ref{uno}.2 (see Lemma \ref{reeb}.6). We say that a component of $Sing(\mbox{$\mathscr{F}$})$ is {\em{weakly stable}} if it admits a neighborhood, $U$, such that $\mbox{$\mathscr{F}$}|_{U}$ is a foliation with all leaves compact. The problem of global stability for a foliation with weakly stable singular components may be reduced to the case of foliations of manifolds with boundary, tangent to the boundary. It is enough to cut off an invariant neighborhood of each singular component. Holonomy is related to transverse orientability by the following:\\ {\bf{Proposition \ref{uno}.3}} {\em{Let $L$ be a leaf of a codimension one (Morse) foliation $(M,\mbox{$\mathscr{F}$})$. If $Hol(L,\mbox{$\mathscr{F}$})=\{e,g\}$, where $g^2=e$, $g \neq e$, then $\mbox{$\mathscr{F}$}$ is non-transversely orientable.
Moreover, if $\pi:M \rightarrow M/{\mbox{$\mathscr{F}$}}$ is the projection onto the space of leaves, then $\partial (M/{\mbox{$\mathscr{F}$}}) \neq \varnothing$ and $\pi(L) \in \partial (M/{\mbox{$\mathscr{F}$}})$}}.\\ {\em{Proof.}} We choose $x \in L$ and a segment $\Sigma_x$, transverse to the foliation at $x$. Then $g: \Sigma_x \rightarrow \Sigma_x$ turns out to be $g(y)= -y$. Let $y \rightarrow N_y$ be a 1-plane field complementary to the tangent plane field $y \rightarrow T_y\mbox{$\mathscr{F}$}_y$. Suppose we may choose a vector field $y \rightarrow X(y)$ such that $N_y= \textrm{span} \{X(y) \}$. Then it should be $X(x)=(\textrm{d}g)_x(X(x))=-X(x)$, a contradiction. Consider the space of leaves near $L$; this space is the quotient of $\Sigma_x$ with respect to the equivalence relation $\sim$ which identifies points of $\Sigma_x$ lying on the same leaf. Then $\Sigma_x/_ \sim$ is a segment of type $(z,x]$ or $[x,z)$, where $\pi^{-1}(x)=L$. Finally, we recall a classical result due to Reeb.\\ {\bf{Theorem \ref{uno}.4 (Reeb Sphere Theorem) \cite{Reeb}}} {\em{A closed manifold, $M$, of dimension $n \geq 3$, admitting a transversely orientable Morse foliation with only centers as singularities, is homeomorphic to the $n$-sphere.}}\\ This result is proved by showing that the foliation considered must be given by a Morse function with only two singular points, so that the thesis follows from Morse theory. Notice that the theorem still holds true for $n=2$, with a different proof. In particular, the foliation need not be given by a function (see figure \ref{nofunction}). \begin{figure}[t!]
\begin{minipage}[t]{.45\linewidth} { \begin{center} { \psfrag{b}{$b$} \psfrag{a}{$a$} \includegraphics[scale=.5]{nofunction.eps} \caption{$n=2$: a singular foliation with center-type singularities, having no first integral.} \label{nofunction} } \end{center} } \end{minipage}% \begin{minipage}[t]{.1\linewidth}{\hspace{.1\linewidth}}\end{minipage}% \begin{minipage}[t]{.45\linewidth} { \begin{center} { \psfrag{1}{$R_1$} \psfrag{2}{$R_2$} \psfrag{3}{$R_3$} \psfrag{s}{$\mbox{$\mathcal{W}$}^s(q))$} \psfrag{i}{$\mbox{$\mathcal{W}$}^u(q)$} \psfrag{p}{$p$} \psfrag{q}{$q$} \psfrag{u}{$U$} \psfrag{l}{$L$} \psfrag{f}{$F$} \includegraphics[scale=.35]{coupling1.eps} \caption{A trivial couple center-saddle $(p,q)$ (Theorem \ref{tre}.5, case {\em{(i)}}).} \label{coupling1} } \end{center} }\end{minipage} \end{figure} \section{Arrangements of singularities}\label{tre} In section \ref{quattro} we will study the elimination of singularities for Morse foliations. To this aim we will describe here how to identify special ``couples'' of singularities and we will study the topology of the neighbouring leaves.\\ {\bf{Definition \ref{tre}.1}} Let $n=\dim M, n \geq 2$. We define the set $\mbox{ $ \mathscr{C}$}(\mbox{$\mathscr{F}$})\subset M$ as the union of center-type singularities and leaves diffeomorphic to $S^{n-1}$ (with trivial holonomy if $n=2$) and for a center singularity, $p$, we denote by $\mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ the connected component of $\mbox{ $ \mathscr{C}$}(\mbox{$\mathscr{F}$})$ that contains $p$.\\ {\bf{Proposition \ref{tre}.2}} {\em{Let $\mbox{$\mathscr{F}$}$ be a Morse foliation on a manifold $M$. 
We have:\\ (1) $\mbox{ $ \mathscr{C}$}(\mbox{$\mathscr{F}$})$ and $\mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ are open in $M$.\\ (2) $\mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}) \cap \mbox{ $ \mathscr{C}$}_q(\mbox{$\mathscr{F}$}) \neq \varnothing$ if and only if $\mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})=\mbox{ $ \mathscr{C}$}_q(\mbox{$\mathscr{F}$})$. $\mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})=M$ if and only if $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})= \varnothing$. In this case the singularities of $\mbox{$\mathscr{F}$}$ are centers and the leaves are all diffeomorphic to $S^{n-1}.$\\ (3) If $q \in Sing(\mbox{$\mathscr{F}$}) \cap \partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$, then $q$ must be a saddle; in this case $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}) \cap Sing(\mbox{$\mathscr{F}$})= \{ q \}$. Moreover, for $n \geq 3$ and $\mbox{$\mathscr{F}$}$ transversely orientable, $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}) \neq \varnothing$ if and only if $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}) \cap Sing(\mbox{$\mathscr{F}$}) \neq \varnothing$. In these hypotheses, $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ contains at least one separatrix of the saddle $q$.\\ (4) $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}) \setminus \{q \}$ is closed in $M \setminus \{q \}$.}}\\ {\em{Proof.}} (1) $\mbox{ $ \mathscr{C}$}(\mbox{$\mathscr{F}$})$ is open by the Reeb Local Stability Theorem \ref{uno}.1. (3) If non-empty, $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}) \cap Sing(\mbox{$\mathscr{F}$})$ consists of a single saddle $q$, as there are no saddle connections. The second part follows by the Reeb Global Stability Theorem for manifolds with boundary and the third by the Morse Lemma. 
(4) By the Transverse Uniformity Theorem (see, for example, \cite{Cam}), it follows that the intrinsic topology of $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}) \setminus \{q \}$ coincides with its natural topology, as induced by $M \setminus \{q \}$. We recall the following (cf., for example, \cite{Mo-Sca}):\\ {\bf{Lemma \ref{tre}.3 (Holonomy Lemma)}} {\em{Let $\mbox{$\mathscr{F}$}$ be a codimension one, transversely orientable foliation on $M$, let $A$ be a leaf of $\mbox{$\mathscr{F}$}$ and $K$ be a compact and path-connected set. If $g:K \rightarrow A$ is a $C^1$ map homotopic to a constant in $A$, then $g$ has a {\em{normal extension}}, i.e. there exist $\epsilon >0$ and a $C^1$ map $G:K \times [0,\epsilon] \rightarrow M$ such that $G_t(x)=G^x(t)=G(x,t)$ has the following properties: {\em{(i)}} $G_0=g$, {\em{(ii)}} $G_t(K) \subset A(t)$ for some leaf $A(t)$ of $\mbox{$\mathscr{F}$}$ with $A(0)=A$, {\em{(iii)}} $\forall x \in K$ the curve $G^x([0, \epsilon])$ is normal to $\mbox{$\mathscr{F}$}$.}}\\ \begin{figure} \begin{minipage}[t]{.45\linewidth} { \begin{center} { \psfrag{1}{$F_1$} \psfrag{2}{$F_2$} \psfrag{p}{$p$} \psfrag{q}{$q$} \psfrag{l}{$L$} \includegraphics[scale=.35]{coupling2.eps} \caption{A saddle $q$ of index $1$ ($n-1$), accumulating to one center $p$ (Theorem \ref{tre}.5, case {\em{(ii)}}).} \label{coupling2} }\end{center} }\end{minipage}% \begin{minipage}[t]{.1\linewidth}{\hspace{.1\linewidth}}\end{minipage}% \begin{minipage}[t]{.45\linewidth} { \begin{center} { \psfrag{p}{$p$} \psfrag{q}{$q$} \psfrag{g}{$\gamma$} \includegraphics[scale=.25]{coupling4.eps} \caption{A saddle $q$ of index $1$ ($n-1$), accumulating to one center $p$ (Theorem \ref{tre}.5, case {\em{(iii)}}).} \label{coupling4} } \end{center} }\end{minipage} \end{figure} \begin{figure}[t!]
\begin{minipage}[t]{.45\linewidth} { \begin{center} { \includegraphics[scale=.22]{sadsaddle.eps} \caption{Two saddles in trivial coupling for the foliation defined by the function $f_\epsilon=- \frac{x^2}{2}+ \frac{y^3}{3}- \epsilon y+ \frac{z^2}{2}$, ($\epsilon >0$).} \label{twosaddles} } \end{center} } \end{minipage}% \begin{minipage}[t]{.1\linewidth}{\hspace{.1\linewidth}}\end{minipage}% \begin{minipage}[t]{.45\linewidth} { \begin{center} { \psfrag{L_1}{$L_1$} \psfrag{L_2}{$L_2$} \psfrag{1}{$p$} \psfrag{2}{$q$} \psfrag{s_1}{$S_1$} \psfrag{s_2}{$S_2$} \psfrag{S}{$\Sigma$} \psfrag{n}{no intersection} \psfrag{L}{legend} \includegraphics[scale=.55]{deadbranch.eps} \caption{A dead branch of a trivial couple of saddles for a foliated manifold $(M^n,\mbox{$\mathscr{F}$})$, $n \geq 3$.} \label{deadbranch} } \end{center} }\end{minipage} \end{figure} \hspace{3ex}For the case of center-saddle pairings, we prove the following description of the separatrix:\\ {\bf{Theorem \ref{tre}.4}} {\em{Let $\mbox{$\mathscr{F}$}$ be a $C^ \infty$, codimension one, transversely orientable, Morse foliation of a compact $n$-manifold, $M$, $n \geq 3$. Let $q$ be a saddle of index $l \notin \{1, n-1 \}$, accumulating to one center $p$. Let $L \subset \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ be a spherical leaf intersecting a neighborhood $U$ of $q$, defined by the Morse Lemma. Then $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}) \setminus \{q \}$ has a single connected component (see figure \ref{LF}) and is homeomorphic to $S^{n-1}/S^{l-1}$.
If $F$ is a leaf such that $F \cap \big (U \setminus \overline { \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})} \big) \neq \varnothing$, then $F$ is homeomorphic to $\mbox{\textrm{B}} ^l \times S^{n-l-1} \cup_ \phi \mbox{\textrm{B}} ^l \times S^{n-l-1}$, where $\phi$ is a diffeomorphism of the boundary (for example, we may have $F \simeq S^l \times S^{n-l-1}$, but also $F \simeq S^{n-1}$, for $l=n/2$).}}\\ {\em{Proof.}} Let $\omega \in \Lambda^1(M)$ be a one-form defining the transversely orientable foliation. We choose a Riemannian metric on $M$ and we consider the transverse vector field $X_x=grad(\omega)_x$. We suppose $||X||=1$. In $U$, we have $X=h \cdot grad (f)$ for some real function $h>0$ defined on $U$. Further, we may suppose that $\partial U$ follows the orbits of $X$ in a neighborhood of $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$.\\ The Morse Lemma gives a local description of the foliation near its singularities; in particular the local topology of a leaf near a saddle of index $l$ is given by the connected components of the level sets of the function $f(x)=-x_1^2- \dots -x_l^2+x_{l+1}^2+ \dots +x_n^2$. If, for $c \geq 0$, we write $f^{-1}(c)=\{(x_1, \dots ,x_n) \in \mbox{ $ \mathbb{R}$}^n|x_1^2+ \dots +x_l^2+c=x_{l+1}^2+ \dots +x_n^2 \}$, it is easy to see that $f^{-1}(0)$ is homeomorphic to a cone over $S^{l-1} \times S^{n-l-1}$ and $f^{-1}(c) \simeq \mbox{\textrm{B}} ^l \times S^{n-l-1}$ ($c>0$). Similarly, we obtain $f^{-1}(c) \simeq \mbox{\textrm{B}} ^{n-l} \times S^{l-1}$ for $c<0$. Therefore, by our hypothesis on $l$, the level sets are connected; in particular the separatrix $S \supset f^{-1}(0)$ is unique and $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})= S \cup \{ q \}$; moreover $U$ is split by $f^{-1}(0)$ into two different components. A priori, a leaf may intersect more than one component.
As $\mbox{$\mathscr{F}$}$ is transversely orientable, the holonomy is an orientation-preserving diffeomorphism, and then a leaf may intersect only non-adjacent components; then this is not the case, in our hypotheses.\\ Let $L$ be a spherical leaf $\subset \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ close enough to $q$. Then $L \cap U \neq \varnothing$ and it is not restrictive to suppose it is given by $f^{-1}(c)$ for some $c<0$. We define the compact set $K=S^{n-1} \setminus \mbox{\textrm{B}} ^{n-l} \times S^{l-1} \simeq L \setminus U$. As $n \geq 3$, the composition $\xymatrix{ K \ar[rr]^ \simeq && L \setminus U \ar@{^{(}->}[rr]^\imath && L }$ is homotopic to a constant in its leaf. By the proof of the Holonomy Lemma \ref{tre}.3, $L \setminus U$ projects diffeomorphically onto $A(\epsilon)=\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$, by means of the constant-speed vector field, $X$. Together with the Morse Lemma, this gives a piecewise description of $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$, which is obtained by piecing the pieces together. It turns out that $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}) \simeq S^{n-1}/S^{l-1}$, a set with the homotopy type of $S^{n-1} \vee S^l$ (where $\vee$ is the wedge sum), simply connected in our hypotheses. Consequently, the map $K \times \{\epsilon \} \rightarrow \partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$, obtained with the extension, admits, in turn, a normal extension. This completes the piecewise description of $F$. In the presence of a saddle of index $1$ or $n-1$, we have:\\ {\bf{Theorem \ref{tre}.5}} {\em{Let $\mbox{$\mathscr{F}$}$ be a $C^ \infty$, codimension one, transversely orientable, Morse foliation of a compact $n$-manifold, $M$, $n \geq 3$. Let $q$ be a saddle of index $1$ or $n-1$ accumulating to one center $p$.
Let $L \subset \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ be a spherical leaf intersecting a neighborhood $U$ of $q$, defined by the Morse Lemma. We may have: {\em{(i)}} $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ contains a single separatrix of the saddle (see figure \ref{coupling1}) and is homeomorphic to $S^{n-1}$; {\em{(ii)}} $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ contains both separatrices $S_1$ and $S_2$ of the saddle (see figure \ref{coupling2}) and is homeomorphic to $S^{n-1}/S^{n-2} \simeq S^{n-1} \vee S^{n-1}$. If this is the case, there exist two leaves $F_i$ ($i=1,2$), such that $F_i$ and $L$ intersect different components of $U \setminus S_i$ and we have that $F_i$ is homeomorphic to $S^{n-1}$ ($i=1,2$); {\em{(iii)}} $q$ is a self-connected saddle (see figure \ref{coupling4}) and $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ is homeomorphic to $S^{n-1}/S^0$. In this case we will refer to the couple $\Big(\overline{\mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})},\mbox{$\mathscr{F}$}|_{\overline{\mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})}}\Big)$ as a {\em{singular Reeb component}}. Moreover, $U \setminus \partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ has three connected components and $L$ intersects two of them. If $F$ is a leaf intersecting the third component of $U \setminus \partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$, then $F$ is homeomorphic to $S^1 \times S^{n-2}$, or to $\mbox{ $ \mathbb{R}$} \times S^{n-2}$.}}\\ {\em{Proof.}} The proof is quite similar to the proof of the previous theorem. Nevertheless, we give a brief sketch here. The three cases arise from the fact that $q$ has two local separatrices, $S_1$ and $S_2$, but $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ does not necessarily contain both of them.
When this is the case, we may have that $S_1$ and $S_2$ belong to distinct leaves, or to the same leaf (in this case all spherical leaves contained in $\mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ intersect two different components of $U \setminus (S_1 \cup S_2)$ ). Using the Morse Lemma, we construct the set $K$ for the application of the Holonomy Lemma \ref{tre}.3. We have, respectively: $K=\overline{\mbox{\textrm{B}} ^{n-1}}$, $K=K_1 \sqcup K_2= S^0 \times \overline{\mbox{\textrm{B}} ^{n-1}}$ (we apply the Holonomy Lemma twice), $K= \overline{\mbox{\textrm{B}} ^1} \times S^{n-2}$. In the first two cases, as $K$ is simply connected, the map $K \rightarrow L$, to be extended, is clearly homotopic to a constant in its leaf. Then $L \setminus U$ projects onto $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ and onto the neighbouring leaves. This completes the piecewise description in cases {\em{(i)}} and {\em{(ii)}}.\\In the third case, piecing the pieces together after a first application of the Holonomy Lemma, we obtain $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})\simeq S^{n-1}/S^0$ and $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}) \setminus \{ q \} \simeq \mbox{\textrm{B}} ^1 \times S^ {n-2}$, simply connected for $n \neq 3$. With a second application of the Holonomy Lemma ($n \neq 3$), $K$ projects diffeomorphically onto any neighbouring leaf, $F$. The same also happens for $n=3$, because a curve $\gamma:S^1 \rightarrow \partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$, as the one depicted in figure \ref{coupling4}, is never a generator of the holonomy, which is locally trivial (a consequence of the Morse Lemma). Nevertheless, there are essentially two ways to piece the pieces together. We may have $F \simeq S^1 \times S^{n-2}$ or $F \simeq \mbox{ $ \mathbb{R}$} \times S^{n-2}$.
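For the reader's convenience, we verify directly the diffeomorphism type of the local level sets used in the two proofs above; the computation is elementary and only sketched. Writing $u=(x_1, \dots ,x_l)$, $v=(x_{l+1}, \dots ,x_n)$, for $c>0$ we have \[ f^{-1}(c)=\{(u,v) \in \mbox{ $ \mathbb{R}$}^l \times \mbox{ $ \mathbb{R}$}^{n-l} \mid |v|^2=|u|^2+c \}, \] and the map $(u,w) \mapsto (u, \sqrt{|u|^2+c}\, w)$ is a diffeomorphism of $\mbox{ $ \mathbb{R}$}^l \times S^{n-l-1}$ onto $f^{-1}(c)$; as $\mbox{ $ \mathbb{R}$}^l$ is diffeomorphic to $\mbox{\textrm{B}} ^l$, we recover $f^{-1}(c) \simeq \mbox{\textrm{B}} ^l \times S^{n-l-1}$. For $c=0$ the same parametrization degenerates along $u=0$, exhibiting $f^{-1}(0)$ as the cone over $S^{l-1} \times S^{n-l-1}$, with vertex at the saddle; the case $c<0$ is symmetric, exchanging the roles of $u$ and $v$.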
The last result gives the motivation for a new concept.\\ {\bf{Definition \ref{tre}.6}} In a codimension one singular foliation $\mbox{$\mathscr{F}$}$ it may happen that, for some leaf $L$ and $q \in Sing(\mbox{$\mathscr{F}$})$, the set $L \cup \{q \}$ is arcwise connected. Let $C=\{q \in Sing(\mbox{$\mathscr{F}$})| L \cup \{q \}\textrm{ is arcwise connected} \}$. If for some leaf $L$ the set $C \neq \varnothing$, we define the corresponding {\em{singular leaf}} \cite{Wag} $S(L)= L \cup C$. In particular, if $\mbox{$\mathscr{F}$}$ is a transversely orientable Morse foliation, each singular leaf is given by $S(L)=L \cup \{ q \}$, for a single saddle-type singularity $q$, either self-connected or not. In the case of a transversely orientable Morse foliation $\mbox{$\mathscr{F}$}$ on $M$ ($n= \dim M \geq 3$), given a saddle $q$ and a separatrix $L$ of $q$, we may define a sort of holonomy map of the singular leaf $S(L)$. This is done in the following way.\\ As the foliation is Morse, in a neighborhood $U \subset M$ of $q$ there exists a (Morse) local first integral $f:U \rightarrow \mbox{ $ \mathbb{R}$}$, with $f(q)=0$. Taking into account the structure of the level sets of the Morse function $f$ (see Theorem \ref{tre}.4 and Theorem \ref{tre}.5) we observe that there are at most three connected components in $U \setminus S(L)= U \setminus \{ f^{-1}(0)\}$ (notice that the number of components depends on the Morse index of $q$).\\ Let $\gamma: [0,1] \rightarrow S(L)$ be a $C^1$ path through the singularity $q$. At first, we consider the case $\gamma([0,1]) \subset U$, $q= \gamma(t)$ for some $0<t<1$. For a point $x \in M \setminus Sing(\mbox{$\mathscr{F}$})$, let $\Sigma_x$ be a transverse section at $x$. The set $\Sigma_x \setminus \{x \}$ is the union of two connected components, $\Sigma^+_x$ and $\Sigma ^-_x$, which we will call {\em{semi-transverse sections at $x$}}.
For $x= \gamma(0) \in S(L)$ we have $f(x)=0$ and we can choose semi-transverse sections at $x$ in such a way that $f(\Sigma^+_x)>0$ and $f(\Sigma^-_x)<0$. We repeat the construction for $y=\gamma(1)$, obtaining four semi-transverse sections, which are contained in (at most) three connected components of $U \setminus S(L)$. As a consequence, at least two of them are in the same component. By our choices, this happens for $\Sigma_x^-$ and $\Sigma_y^-$ (but we cannot exclude that it happens also for $\Sigma_x^+$ and $\Sigma_y^+$). We define the {\em{semi-holonomy map}} $h^-:\Sigma^-_{\gamma(0)} \cup \gamma(0) \rightarrow \Sigma^-_{\gamma(1)} \cup \gamma(1)$ by setting $h^-(\gamma(0))=\gamma(1)$ and $h^-(z)=h(z)$ for $z \in \Sigma^-_{\gamma(0)}$, where $h:\Sigma^-_{\gamma(0)} \rightarrow \Sigma^-_{\gamma(1)}$ is a classic holonomy map (i.e. such that for a leaf $F$, it is $h(F \cap \Sigma^-_{\gamma(0)})=F \cap \Sigma^-_{\gamma(1)}$). In the same way, when this is the case, we define $h^+$.\\ Consider now any curve $\gamma: [0,1] \rightarrow S(L)$. As $\mbox{$\mathscr{F}$}$ is transversely orientable, the choice of a semi-transverse section for the curve $\gamma([0,1]) \cap U$ may be extended continuously to the rest of the curve, $\gamma([0,1]) \setminus U$; with this remark, we use classic holonomy outside $U$. To complete the definition, it is enough to say what a semi-transverse section at the saddle $q$ is. In this way we allow $q \in \gamma(\partial[0,1])$. To this aim, we use the orbits of the transverse vector field, $grad(f)$. By the properties of gradient vector fields, there exist points $t,v$ such that $\alpha(t)=\omega(v)=q$. Let $\Sigma _q^+$ ($\Sigma _q^-$) be the negative (positive) semi-orbit through $t$ ($v$). Each of $\Sigma _q^+$ and $\Sigma _q^-$, transverse to the foliation and such that $\overline {\Sigma _q^+} \cap \overline {\Sigma _q^-}= \{ q \}$, is a {\em{semi-transverse section}} at the saddle $q$.
In this way, the {\em{semi-holonomy of a singular leaf}} $Hol^+(S(L), \mbox{$\mathscr{F}$})$ is a representation of the fundamental group $\pi_1(S(L))$ into the germs of diffeomorphisms of $\mbox{ $ \mathbb{R}$} _{\geq 0}$ fixing the origin, $Germ(\mbox{ $ \mathbb{R}$} _{\geq 0},0)$. Now we consider the (most interesting) case of a self-connected separatrix $S(L)=\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$, with $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ satisfying the description of Theorem \ref{tre}.5, case {\em{(iii)}}. The singular leaf $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$, homeomorphic to $S^{n-1}/S^0$, has the homotopy type of $S^{n-1} \vee S^1$. We have $Hol^+(\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}),\mbox{$\mathscr{F}$})=\{ e, h^-_\gamma \}$, where $\gamma$ is the nontrivial generator of the homotopy, and $h^-_\gamma$ is a map with domain contained in the complement $\complement \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$. The two options $h^-_\gamma=e$, $h^-_\gamma \neq e$ explain the two possible results about the topology of the leaves near the self-connected separatrix. \section{Realization and elimination of pairings of singularities}\label{quattro} Let us describe one of the key points in our work, i.e. the elimination procedure, which allows us to delete pairs of singularities in certain configurations and, in this way, to reduce to simple situations, as in the Reeb Sphere Theorem (\ref{uno}.4). We need the following notion \cite{Ca-Sca}:\\ {\bf{Definition \ref{quattro}.1}} Let $\mbox{$\mathscr{F}$}$ be a codimension one foliation with isolated singularities on a manifold $M^n$.
By a {\em{dead branch}} of $\mbox{$\mathscr{F}$}$ we mean a region $R \subset M$ diffeomorphic to the product $\mbox{\textrm{B}} ^{n-1} \times \mbox{\textrm{B}}^1$, whose boundary, $\partial R \approx \mbox{\textrm{B}} ^{n-1} \times S^0 \cup S^{n-2} \times \mbox{\textrm{B}} ^1$, is the union of two invariant components (pieces of leaves of $\mbox{$\mathscr{F}$}$, not necessarily distinct leaves in $\mbox{$\mathscr{F}$}$) and, respectively, of transverse sections, $\Sigma \approx \{t \} \times \mbox{\textrm{B}}^1$, $t \in S^{n-2}$.\\ Let $\Sigma_i, i=1,2$ be two transverse sections. Observe that the holonomy from $\Sigma_1$ to $\Sigma_2$ is always trivial, in the sense of the Transverse Uniformity Theorem \cite{Cam}, even if $\Sigma_i \cap S(L) \neq \varnothing$ for some singular leaf $S(L)$. In this case we refer to the holonomy of the singular leaf, in the sense above. A first result includes known situations.\\ {\bf{Proposition \ref{quattro}.2}} {\em{Given a foliated manifold $(M^n,\mbox{$\mathscr{F}$})$, with $\mbox{$\mathscr{F}$}$ Morse and transversely orientable, with $Sing(\mbox{$\mathscr{F}$}) \ni p,q$, where $p$ is a center and $q \in \partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$ is a saddle of index 1 or $n-1$, there exists a new foliated manifold $(M,\widetilde{\mbox{$\mathscr{F}$}})$, such that: {\em{(i)}} $\widetilde{\mbox{$\mathscr{F}$}}$ and $\mbox{$\mathscr{F}$}$ agree outside a suitable region $R$ of $M$, which contains the singularities $p,q$; {\em{(ii)}} $\widetilde{\mbox{$\mathscr{F}$}}$ is nonsingular in a neighborhood of $R$.}}\\ {\em{Proof.}} We are in the situations described by Theorem \ref{tre}.5. If we are in case {\em{(i)}}, the couple $(p,q)$ may be eliminated with the technique of the dead branch, as illustrated in \cite{Ca-Sca}. If we are in case {\em{(ii)}}, we observe that the two leaves $F_i, i=1,2$ bound a region, $A$, homeomorphic to an annulus, $S^{n-1} \times [0,1]$.
We may now replace the singular foliation $\mbox{$\mathscr{F}$}|_A$ with the trivial foliation $\widetilde{\mbox{$\mathscr{F}$}}|_A$, given by $S^{n-1} \times \{t \}$, $t \in [0,1]$. If we are in case {\em{(iii)}}, we may replace the singular Reeb component with a regular one, in the spirit of \cite{Ca-Sca}. Even in this case, we may think of the replacement as taking place with the aid of a new sort of dead branch, the {\em{dead branch of the self-connected saddle}}, which we describe in figure \ref{milnor2}, for the case of the foliation of the torus of figure \ref{milnor}, defined by the height Morse function \cite{Mil1}. Observe that the couples $(p,q)$ and $(r,s)$ of this foliation may also be seen as an example of the coupling described in Theorem \ref{tre}.5, case {\em{(ii)}}. In this case the elimination technique and the results are completely different (see figure \ref{milnor4}).\\ {\bf{Definition \ref{quattro}.3}} If the couple $(p,q)$ satisfies the description of Theorem \ref{tre}.5, case {\em{(i)}} (and therefore may be eliminated with the technique of the dead branch), we will say that $(p,q)$ is a {\em{trivial couple}}.\\ \begin{figure} \begin{minipage}[t]{.45\linewidth} { \begin{center} { \psfrag{p}{$p$} \psfrag{q}{$q$} \psfrag{r}{$r$} \psfrag{s}{$s$} \psfrag{v}{$V$} \includegraphics[scale=.38]{milnor3.eps} \caption{On the left: the height function on the plane $V$ defines a foliation of the torus; on the right: a possible description of the foliation.} \label{milnor} } \end{center} }\end{minipage} \begin{minipage}[t]{.1\linewidth}{\hspace{.1\linewidth}}\end{minipage} \begin{minipage}[t]{.45\linewidth} { \begin{center} { \psfrag{1}{$P_1$} \psfrag{2}{$P_2$} \psfrag{p}{$p$} \psfrag{q}{$q$} \psfrag{s}{$\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$} \includegraphics[scale=.38]{milnor2.eps} \caption{On the left, a dead branch for the self-connected saddle $q$ of figure \ref{milnor}; on the right, the foliation obtained after the elimination
of the two couples of singularities.} \label{milnor2} } \end{center} }\end{minipage} \end{figure} \hspace{3ex}A new result is the construction of saddle-saddle situations:\\ {\bf{Proposition \ref{quattro}.4}} {\em{Given a foliation $\mbox{$\mathscr{F}$}$ on an $n$-manifold $M^n$, there exists a new foliation $\widetilde{\mbox{$\mathscr{F}$}}$ on $M$, with $Sing(\widetilde{\mbox{$\mathscr{F}$}})=Sing(\mbox{$\mathscr{F}$}) \cup \{p,q\}$, where $p$ and $q$ are a couple of saddles of consecutive indices, {\em{connecting transversely}} (i.e. such that the stable manifold of $p$, $\mbox{$\mathcal{W}$}^s(p)$, intersects transversely the unstable manifold of $q$, $\mbox{$\mathcal{W}$}^u(q)$).}}\\ {\em{Proof.}} We choose the domain of (any) foliated chart, $(U,\phi)$. Observe that $R'=U$ ($\simeq \phi(U)$) is a dead branch for a foliation $\mbox{$\mathscr{F}$}_{\epsilon '}$, given (up to diffeomorphisms) by the submersion $f_\epsilon= - x_1^2/2- \dots - x_{k-1}^2/2+(x_k^3/3- \epsilon x_k)+ x_{k+1}^2/2+ \dots + x_n^2/2$, for some $\epsilon =\epsilon '<0$. We consider $\mbox{$\mathscr{F}$}_{\epsilon ''}$, given by taking $\epsilon=\epsilon '' >0$ in $f_\epsilon$, which presents a couple of saddles of consecutive indices, and we choose a dead branch $R''$ around them. We also choose a homeomorphism between $R'$ and $R''$ which sends invariant sets of $\mbox{$\mathscr{F}$}_{\epsilon '}$ into invariant sets of $\mbox{$\mathscr{F}$}_{\epsilon ''}$ in a neighborhood of the boundary. With a surgery, we may replace $\mbox{$\mathscr{F}$}_{\epsilon '}$ with $\mbox{$\mathscr{F}$}_{\epsilon ''}$.
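The couple of saddles of $\mbox{$\mathscr{F}$}_{\epsilon ''}$ may be located explicitly; the following computation is a straightforward check. From \[ \frac{\partial f_\epsilon}{\partial x_k}=x_k^2- \epsilon, \qquad \frac{\partial f_\epsilon}{\partial x_i}=-x_i \ (i<k), \qquad \frac{\partial f_\epsilon}{\partial x_i}=x_i \ (i>k), \] we see that, for $\epsilon >0$, the critical points are $P_{\pm}=(0, \dots , \pm \sqrt{\epsilon}, \dots ,0)$, while for $\epsilon <0$ there are none. The Hessian at $P_{\pm}$ is the diagonal matrix $\textrm{diag}(-1, \dots ,-1, \pm 2 \sqrt{\epsilon},1, \dots ,1)$, with $k-1$ negative entries in the first block; hence $P_+$ is a saddle of index $k-1$ and $P_-$ a saddle of index $k$, a couple of consecutive indices, as claimed.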
The converse of the above proposition is preceded by the following\\ {\bf{Remark \ref{quattro}.5}} Given a foliation $\mbox{$\mathscr{F}$}$ on $M^n$ with two complementary saddle singularities $p,q \in Sing(\mbox{$\mathscr{F}$})$, having a strong stable connection $\gamma$, there exist a neighborhood $U$ of $ p, q$ and $\gamma$ in $M^n$, a $\delta \in \mbox{ $ \mathbb{R}$}^+$ and a coordinate system $\phi: U \rightarrow \mbox{ $ \mathbb{R}$}^n$ taking $p$ onto $(0,\dots, \phi^k=-\delta ,\dots,0)$, $q$ onto $(0,\dots, \phi^k=\delta ,\dots,0)$, $\gamma$ onto the $x_k$-axis, $\{x_l=0\}_{l \neq k}$, and such that: {\em{(i)}} the stable manifold of $p$ is tangent to $\phi^{-1}(\{x_l=0\}_{l>k})$ at $p$, {\em{(ii)}} the unstable manifold of $q$ is tangent to $\phi^{-1}(\{x_l=0\}_{l<k})$ at $q$ (we are led to the situation considered in \cite{Mil2}, A First Cancellation Theorem). Thus, using the chart $\phi:U \rightarrow \mbox{ $ \mathbb{R}$}^n$, we may assume that we are on a dead branch of $\mbox{ $ \mathbb{R}$}^n$ and that the foliation $\mbox{$\mathscr{F}$}|_U$ is defined by $f_ \epsilon$, for $\epsilon =\delta^2$. In this way the vector field $grad(f_ \epsilon)$ defines a transverse orientation in $U$.
For a suitable $\mu>0$, the points $r_1=(0,\dots,\phi ^k=-\delta-\mu, \dots,0)$ and $r_2=(0,\dots,\phi ^k=\delta+\mu, \dots,0)$ are such that the modification takes place in a region of $U$ delimited by $L_{r_i}, i=1,2$.\\ {\bf{Proposition \ref{quattro}.6}} {\em{Given a foliation $\mbox{$\mathscr{F}$}$ on $M^n$ with a couple of saddles $p,q$ of complementary indices, having a strong stable connection, there exists a dead branch of the couple of saddles, $R \subset M$, and we can obtain a foliation $\widetilde{\mbox{$\mathscr{F}$}}$ on $M$ such that: {\em{(i)}} $\widetilde{\mbox{$\mathscr{F}$}}$ and $\mbox{$\mathscr{F}$}$ agree on $M \setminus R$; {\em{(ii)}} $\widetilde{\mbox{$\mathscr{F}$}}$ is nonsingular in a neighborhood of $R$; indeed $\widetilde{\mbox{$\mathscr{F}$}}|_R$ is conjugate to a trivial fibration; {\em{(iii)}} the holonomy of $\widetilde{\mbox{$\mathscr{F}$}}$ is conjugate to the holonomy of $\mbox{$\mathscr{F}$}$ in the following sense: given any leaf $L$ of $\mbox{$\mathscr{F}$}$ such that $L \cap (M \setminus R) \neq \varnothing$, then the corresponding leaf $\widetilde{L}$ of $\widetilde{\mbox{$\mathscr{F}$}}$ is such that $Hol( \widetilde{L},\widetilde{\mbox{$\mathscr{F}$}})$ is conjugate to $Hol(L,\mbox{$\mathscr{F}$})$.}}\\ {\bf{Example \ref{quattro}.7 (Trivial Coupling of Saddles)}} Let $M=S^n, n \geq 3$. For $l=1, \dots ,n-2$ we may find a Morse foliation of $M=S^n$, invariant for the splitting $S^n=\overline{\mbox{\textrm{B}} ^{n-l}} \times S^l \cup_\phi S^{n-l-1} \times \overline{\mbox{\textrm{B}} ^{l+1}}$, where $\phi$ is a diffeomorphism of the boundary. In fact, by Theorem \ref{tre}.4 or \ref{tre}.5, case {\em{(iii)}}, $\overline{\mbox{\textrm{B}} ^{n-l}} \times S^l$ admits a foliation with one center and one saddle of index $l$. Similarly, $S^{n-l-1} \times \mbox{\textrm{B}} ^{l+1}$ admits a foliation with a saddle of index $n-l-1$, actually a saddle of index $l+1$, after the attachment.
We may eliminate the trivial couple of saddles and we are led to the well-known foliation of $S^n$, with a couple of centers and spherical leaves.\\ {\bf{Remark \ref{quattro}.8}} The elimination of saddles of consecutive indices is actually a generalization of the elimination of center-saddle couples, $(p,q)$ with $q \in \partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$. Indeed, we may eliminate $(p,q)$ only when the saddle $q$ has index $1$ or $n-1$. This means the singularities of the couple must have consecutive indices and, as $q \in \partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$, there exists an orbit of the transverse vector field having $p$ as $\alpha$-limit (backward) and $q$ as $\omega$-limit (forward), or vice versa. Such an orbit is a strong stable connection. \section{Reeb-type theorems}\label{reeb} We shall now describe how to apply our techniques to obtain some generalizations of the Reeb Sphere Theorem (\ref{uno}.4) to the case of Morse foliations admitting both centers and saddles.\\ A first generalization is based on the following notion:\\ {\bf{Definition \ref{reeb}.1}} We say that an isolated singularity, $p$, of a $C^ \infty$, codimension one foliation $\mbox{$\mathscr{F}$}$ on $M$ is a {\em{stable singularity}} if there exists a neighborhood $U$ of $p$ in $M$ and a $C^ \infty$ function, $f:U \rightarrow \mbox{ $ \mathbb{R}$}$, defining the foliation in $U$, such that $f(p)=0$ and $f^{-1}(a)$ is compact, for $|a|$ small.
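For instance, a center, defined in a local chart by $$f(x)=x_1^2+ \dots +x_n^2,$$ is a stable singularity: $f^{-1}(a)$ is a sphere for $a>0$ small (and empty for $a<0$). On the contrary, a saddle, defined by $g(x)=-x_1^2- \dots -x_j^2+x_{j+1}^2+ \dots +x_n^2$, with $0<j<n$, is not stable: its levels $g^{-1}(a)$ accumulate on the boundary of any small ball centered at the origin and, as Lemma \ref{reeb}.4 below shows, no defining function with compact levels can exist.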
The following characterization of stable singularities can be found in \cite{Ca-Sca}.\\ {\bf{Lemma \ref{reeb}.2}} {\em{An isolated singularity $p$ of a function $f:U \subset \mbox{ $ \mathbb{R}$}^n \rightarrow \mbox{ $ \mathbb{R}$}$ defines a stable singularity for $\textrm{d}f$ if and only if there exists a neighborhood $V \subset U$ of $p$ such that, $\forall x \in V$, we have either $\omega (x) = \{p \}$ or $\alpha (x) = \{p \}$, where $\omega (x)$ (respectively $\alpha (x)$) is the $\omega$-limit (respectively $\alpha$-limit) of the orbit of the vector field $grad (f)$ through the point $x$.}} In particular, we obtain the well-known:\\ {\bf{Lemma \ref{reeb}.3}} {\em{If a function $f:U \subset \mbox{ $ \mathbb{R}$}^n \rightarrow \mbox{ $ \mathbb{R}$}$ has an isolated local maximum or minimum at $p \in U$, then $p$ is a stable singularity for $df$.}} The converse is also true:\\ {\bf{Lemma \ref{reeb}.4}} {\em{If $p$ is a stable singularity, defined by the function $f$, then $p$ is a point of local maximum or minimum for $f$.}}\\ {\em{Proof.}} It follows immediately from Lemma \ref{reeb}.2 and from the fact that $f$ is strictly increasing along the orbits of $grad (f)$. With this notion, we obtain\\ {\bf{Lemma \ref{reeb}.5}} {\em{Let $\mbox{$\mathscr{F}$}$ be a codimension one, singular foliation on a manifold $M^n$. In a neighborhood of a stable singularity, the leaves of $\mbox{$\mathscr{F}$}$ are diffeomorphic to spheres.}}\\ {\em{Proof.}} Let $p \in Sing(\mbox{$\mathscr{F}$})$ be a stable singularity. By Lemma \ref{reeb}.4, we may suppose $p$ is a minimum (otherwise we use $-f$). Using a local chart around $p$, we may suppose we are on $\mbox{ $ \mathbb{R}$}^n$ and we may write the second-order Taylor-Lagrange expansion of the function $f:U \rightarrow \mbox{ $ \mathbb{R}$}$ around $p$. We have $f(p+h)=f(p)+\frac{1}{2} \langle h,H(p+ \theta h)h \rangle,$ where $H$ is the Hessian of $f$ and $0< \theta <1$.
As $p$ is a minimum, it follows that $\langle h,H(p+ \theta h)h \rangle \geq 0$ in $U$. Then $f$ is convex and hence each level set $f^{-1}(c)$ bounds a convex region.\\ We consider the flow $\phi: \mathscr{D}(\phi) \subset \mbox{ $ \mathbb{R}$} \times U \rightarrow U$ of the vector field $grad(f)$. By the properties of gradient vector fields, in our hypothesis, $\mathscr{D}(\phi) \supset (- \infty,0] \times U$ and, $\forall x \in U$, the $\alpha$-limit exists and $\alpha(x)=p$. For any $x \in f^{-1}(c)$, the tangent space, $T_x f^{-1}(c)$, to the level set of $f$ does not contain the radial direction, $\overrightarrow{px}$. Indeed, otherwise, by the convexity of the region bounded by $f^{-1}(c)$, the singularity $p$ would lie on the level set $f^{-1}(c)$, a contradiction, because in this case $p$ would be a saddle. Equivalently, the orbits of the vector field $grad(f)$ are transverse to the spheres centered at $p$. An application of the implicit function theorem shows the existence of a smooth function $x \rightarrow t_x$, that assigns to each point $x \in f^{-1}(c)$ the (negative) time at which $\phi(t,x)$ intersects $S^{n-1}(p, \epsilon)$, where $\epsilon$ is small enough to have $\textrm{B}^{n}(p, \epsilon) \subsetneq R(f^{-1}(c))$, the compact region bounded by $f^{-1}(c)$. The diffeomorphism between the leaf $f^{-1}(c)$ and the sphere $S^{n-1}(p, \epsilon)$ is given by the composition $x \rightarrow \phi(t_x,x)$. The lemma is proved.\\ {\bf{Lemma \ref{reeb}.6}} {\em{Let $\mbox{$\mathscr{F}$}$ be a codimension one, transversely orientable foliation of $M$, with all leaves closed, and let $\pi:M \rightarrow M/\mbox{$\mathscr{F}$}$ be the projection onto the space of leaves.
Then we may choose a foliated atlas on $M$ and a differentiable structure on $M/ \mbox{$\mathscr{F}$}$, such that $M/ \mbox{$\mathscr{F}$}$ is a codimension one compact manifold, locally diffeomorphic to the space of plaques, and $\pi$ is a $C^\infty$ map.}}\\ {\em{Proof.}} At first we notice that the space of leaves $M/\mbox{$\mathscr{F}$}$ (with the quotient topology) is a one-dimensional Hausdorff topological space, as a consequence of the Reeb Local Stability Theorem \ref{uno}.1. As all leaves are closed and have no holonomy, we may choose a foliated atlas $\{(U_i, \phi_i)\}$ such that, for each leaf $L \in \mbox{$\mathscr{F}$}$, the intersection $L \cap U_i$ consists, at most, of a single plaque. Let $\pi:M \rightarrow M/ \mbox{$\mathscr{F}$}$ be the projection onto the space of leaves and $\pi_i:U_i \rightarrow \mbox{ $ \mathbb{R}$}$ the projection onto the space of plaques. With abuse of notation, we may write $\pi_i=p_2 \circ \phi_i$, where $p_2$ is the projection on the second component. As there is a 1-1 correspondence between the quotient spaces $\pi|_{U_i}(U_i)$ and $\pi_i(U_i)$, they are homeomorphic. Let $V \subset M/ \mbox{$\mathscr{F}$}$ be open. The set $\pi^{-1}(V)$ is an invariant open set. We may find a local chart $(U_i,\phi_i)$ such that $\pi(U_i)=V$. We claim that $(V, \pi_i \circ (\pi|_{U_i})^{-1})$ is a chart for the differentiable atlas with the required property. To see this, it is enough to prove that, if $(V, \pi_j \circ (\pi|_{U_j})^{-1})$ is another chart with the same domain, $V$, there exists a diffeomorphism between the two images of $V$, i.e. between $\pi_i \circ (\pi|_{U_i})^{-1}(V)$ and $\pi_j \circ (\pi|_{U_j})^{-1}(V)$. This is not obvious when $U_i \cap U_j= \varnothing$. Indeed, the desired diffeomorphism exists, and it is given by the Transverse Uniformity Theorem \cite{Cam}. Observe that, in coordinates, $\pi$ coincides with the projection on the second factor.\\ {\bf{Lemma \ref{reeb}.7}} {\em{Let $n \geq 2$.
A weakly stable singularity for a foliation $(M^n, \mbox{$\mathscr{F}$})$ is a stable singularity.}}\\ {\em{Proof.}} Let $p$ be a weakly stable singularity and $U$ a neighborhood of $p$ with all leaves compact. We need a local first integral near $p$. As a consequence of the Reeb Local Stability Theorem \ref{uno}.1, we can find an (invariant) open neighborhood $V \subset U$ of $p$, whose leaves all have trivial holonomy. The set $V \setminus \{ p \}$ is open in $M^*=M \setminus Sing(\mbox{$\mathscr{F}$})$. Let $\mbox{$\mathscr{F}$}^*= \mbox{$\mathscr{F}$} \setminus Sing(\mbox{$\mathscr{F}$})$; the projection $\pi^*:M^* \rightarrow M^*/\mbox{$\mathscr{F}$}^*$ is an open map (see, for example, \cite{Cam}). As a consequence of Lemma \ref{reeb}.6, the connected (as $n \geq 2$) and open set $\pi^* (V \setminus \{ p \})$ is a $1$-dimensional manifold with boundary, i.e. it turns out to be an interval, for example $(0,1)$. Now we extend $\pi^*$ smoothly to a map $\pi$ on $U$. In particular, let $W \subsetneq V$ be a neighborhood of $p$. If (for example) $\pi^*(W \setminus \{p \})=(0,b)$ for some $b<1$, we set $\pi(p)=0$. The thesis follows from Lemma \ref{reeb}.3.\\ {\bf{Theorem \ref{reeb}.8}} {\em{Let $M^n$ be a closed $n$-dimensional manifold, $n \geq 3$. Suppose that $M$ supports a $C^ \infty$, codimension one, transversely orientable foliation, $\mbox{$\mathscr{F}$}$, with non-empty singular set, whose elements are all weakly stable singularities. Then $M$ is homeomorphic to the sphere $S^n$.}}\\ {\em{Proof.}} By hypothesis, every $p \in Sing(\mbox{$\mathscr{F}$})$ is a weakly stable singularity, hence a stable singularity. By Lemma \ref{reeb}.5, in an invariant neighborhood $U_p$ of $p$, the leaves are diffeomorphic to spheres.
Now we can proceed as in the proof of the Reeb Sphere Theorem \ref{uno}.4.\\ {\bf{Theorem \ref{reeb}.9 (Classification of codimension one foliations with all leaves compact)}} {\em{Let $\mbox{$\mathscr{F}$}$ be a (possibly singular, with isolated singularities) codimension one foliation of $M$, with all leaves compact. Then all possible singularities are stable. If $\mbox{$\mathscr{F}$}$ is transversely orientable, the space of leaves is diffeomorphic to $[0,1]$ or $S^1$; if it is non transversely orientable, the space of leaves is homeomorphic to $[0,1]$. In particular, the case $S^1$ occurs if and only if $\partial M= Sing(\mbox{$\mathscr{F}$})= \varnothing$. In all the other cases, denoting by $\pi:M \rightarrow [0,1]$ the projection onto the space of leaves, we have $Hol(\pi^{-1}(x), \mbox{$\mathscr{F}$})=\{e \}, \forall x \in (0,1)$. Moreover, if $x=0,1$, we may have: {\em{(i)}} $\pi^{-1}(x) \subset \partial M \neq \varnothing$ and $Hol(\pi^{-1}(x), \mbox{$\mathscr{F}$})=\{e \}$; {\em{(ii)}} $\pi^{-1}(x)$ is a (stable) singularity; {\em{(iii)}} $Hol(\pi^{-1}(x),\mbox{$\mathscr{F}$})=\{e,g \}$, $g\neq e, g^2=e$ (in this case, $\forall y \in (0,1)$, the leaf $\pi^{-1}(y)$ is a two-sheeted covering of $\pi^{-1}(x)$).}}\\ {\em{Proof.}} If $\mbox{$\mathscr{F}$}$ is transversely orientable, by the Reeb Global Stability Theorem \ref{uno}.2 and Lemma \ref{reeb}.6, the space of leaves is either diffeomorphic to $S^1$ or to $[0,1]$. In particular, $M/ \mbox{$\mathscr{F}$} \approx S^1$ if and only if $M$ is closed and $\mbox{$\mathscr{F}$}$ non singular. When this is not the case, $M/ \mbox{$\mathscr{F}$} \approx [0,1]$, and there are exactly two points ($\partial [0,1]$) which come from a singular point and/or from a leaf of the boundary.\\ If $\mbox{$\mathscr{F}$}$ is non transversely orientable, there is at least one leaf with (finite) non-trivial holonomy, which corresponds to a boundary point in $M/ \mbox{$\mathscr{F}$}$ (by Proposition \ref{uno}.3).
By the proof of Lemma \ref{reeb}.6, the projection is not differentiable and the space of leaves $M/ \mbox{$\mathscr{F}$}$, a Hausdorff topological $1$-dimensional space, turns out to be an orbifold (see \cite{Thu}). We pass to the transversely orientable double covering, $p: (\widetilde{M}, \widetilde{\mbox{$\mathscr{F}$}}) \rightarrow (M,\mbox{$\mathscr{F}$})$. The foliation $\widetilde{\mbox{$\mathscr{F}$}}$, pull-back of $\mbox{$\mathscr{F}$}$, has all leaves compact, and singular set empty or with stable components; therefore we apply the first part of the classification to $\widetilde{M}/\widetilde{\mbox{$\mathscr{F}$}}$. Whether $\widetilde{M}/\widetilde{\mbox{$\mathscr{F}$}}$ is diffeomorphic to $S^1$ or to $[0,1]$, the space $M/ \mbox{$\mathscr{F}$}$ is homeomorphic to $[0,1]$, but (clearly) with different orbifold structures. \begin{figure} \begin{minipage}[t]{.45\linewidth} { \begin{center} { \includegraphics[scale=.55]{milnor4.eps} \caption{Elimination technique applied in case {\em{(ii)}} (Theorem \ref{tre}.5) for the foliation of Figure \ref{milnor}.} \label{milnor4} } \end{center} }\end{minipage}% \begin{minipage}[t]{.1\linewidth}{\hspace{.1\linewidth}}\end{minipage}% \begin{minipage}[t]{.45\linewidth} { \begin{center} { \psfrag{3}{$q$} \psfrag{1}{$p_1$} \psfrag{2}{$p_2$} \includegraphics[scale=.75]{rp2.eps} \caption{A foliation of $\mbox{ $ \mathbb{R}$}P^2$ with three singular points.} \label{rp2} } \end{center} }\end{minipage} \end{figure} \hspace{3ex} Before going on with our main generalization of the Reeb Sphere Theorem \ref{uno}.4, which extends a similar result of Camacho and Sc\'ardua \cite{Ca-Sca} concerning the case $n=3$, we need to recall another result, which we are also going to generalize.\\ As we know, the Reeb Sphere Theorem, in its original statement, considers the effects, on the topology of a manifold $M$, determined by the existence on $M$ of a real-valued function with exactly two non-degenerate singular points.
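The model case is, of course, the height function on the round sphere, $$f:S^n \subset \mbox{ $ \mathbb{R}$}^{n+1} \rightarrow \mbox{ $ \mathbb{R}$}, \qquad f(x_1, \dots ,x_{n+1})=x_{n+1},$$ whose only singular points are the two poles, a non-degenerate maximum and a non-degenerate minimum; the leaves of the foliation defined by $f$ are the parallel spheres $\{x_{n+1}=c \}$, $|c|<1$.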
A very similar problem was studied by Eells and Kuiper \cite{Ku-Ee}. They considered manifolds admitting a real-valued function with exactly three non-degenerate singular points, and they obtained very interesting results. Among other things, the obstruction they found on the dimension of $M$ stands out: it must be even and assume one of the values $n=2m= 2,4,8,16$. Moreover, the homotopy type of the manifold turns out to vary among a finite number of cases, including (or reducing to, if $n=2,4$) the homotopy type of the projective plane over the real, complex, quaternion or Cayley numbers.\\ {\bf{Definition \ref{reeb}.10}} In view of the results of Eells and Kuiper \cite{Ku-Ee}, if a manifold $M$ admits a real-valued function with exactly three non-degenerate singular points, we will say that $M$ is an {\em{Eells-Kuiper}} manifold.\\ We have (see \cite{Ca-Sca} for the case $n=3$):\\ {\bf{Theorem \ref{reeb}.11} (Center-Saddle Theorem)} {\em{Let $M^n$, $n \geq 2$, be an $n$-dimensional manifold endowed with a transversely orientable, codimension-one, $C^\infty$ Morse foliation $\mbox{$\mathscr{F}$}$; if $n=2$, the foliation $\mbox{$\mathscr{F}$}$ is assumed to be without holonomy. Let $Sing(\mbox{$\mathscr{F}$})$ be the singular set of $\mbox{$\mathscr{F}$}$, with $\# Sing(\mbox{$\mathscr{F}$})=k+l$, where $k,l$ are the numbers of, respectively, centers and saddles. If $k \geq l+1$, then there are two possibilities:\\ {\em{(1)}} $k=l+2$ and $M$ is homeomorphic to an $n$-dimensional sphere;\\ {\em{(2)}} $k=l+1$ and $M$ is an Eells-Kuiper manifold}}.\\ {\em{Proof.}} If $l=0$, the assertion is proved by the Reeb Sphere Theorem \ref{uno}.4. Let $l \geq 1$; we prove our thesis by induction on the number $l$ of saddles. We set $\mbox{$\mathscr{F}$}_l=\mbox{$\mathscr{F}$}$.\\ So let $l=1$ and $\mbox{$\mathscr{F}$}_1=\mbox{$\mathscr{F}$}$.
By hypothesis, in the set $Sing(\mbox{$\mathscr{F}$})$ there exist at least two centers, $p_1,p_2$, with $p_1 \neq p_2$, and one saddle $q$. We have necessarily $q \in \partial \mbox{ $ \mathscr{C}$}_{p_1}(\mbox{$\mathscr{F}$}) \cap \partial \mbox{ $ \mathscr{C}$}_{p_2}(\mbox{$\mathscr{F}$})$. In fact, if this is not the case and, for example, $q \notin \partial \mbox{ $ \mathscr{C}$}_{p_1}(\mbox{$\mathscr{F}$})$, then (taking into account that, for $n=2$, the foliation $\mbox{$\mathscr{F}$}$ is assumed to be without holonomy) $\partial \mbox{ $ \mathscr{C}$}_{p_1}(\mbox{$\mathscr{F}$})=\varnothing$ and $M=\overline{\mbox{ $ \mathscr{C}$}_{p_1}(\mbox{$\mathscr{F}$})}$, a contradiction. Let $i(q)$ be the Morse index of the saddle $q$.\\ For $n \geq 3$ we apply the results of Theorems \ref{tre}.4 and \ref{tre}.5 to the couples $(p_1,q)$ and $(p_2,q)$. In particular, by Theorem \ref{tre}.5, {\em{(iii)}}, it follows that the saddle $q$ cannot be selfconnected. We now have the following two possibilities:\\ {\em{(a)}} $i(q)=1,n-1$ and $(p_1,q)$ or (and) $(p_2,q)$ is a trivial couple;\\ {\em{(b)}} $i(q) \neq 1,n-1$ and there are no trivial couples.\\ For $n=2$, we necessarily have $i(q)=1$ and, in our hypotheses, $q$ is always selfconnected. With few changes, we adapt Theorem \ref{tre}.5 to this case, obtaining $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}) \simeq S^1$ or $\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$}) \simeq S^1 \vee S^1$; in this latter case we will say that the saddle $q$ is {\em{selfconnected with respect to}} $p$. We obtain:\\ {\em{(a')}} $(p_1,q)$ or (and) $(p_2,q)$ is a trivial couple;\\ {\em{(b')}} $q$ is selfconnected both with respect to $p_1$ and to $p_2$.\\ In cases {\em{(a)}} and {\em{(a')}} we proceed with the elimination of a trivial couple, as stated in Proposition \ref{quattro}.2, obtaining the foliated manifold $(M,\mbox{$\mathscr{F}$}_0)$, with no saddle-type and some center-type singularities.
We apply the Reeb Sphere Theorem \ref{uno}.4 and obtain $\# Sing(\mbox{$\mathscr{F}$}_0)=2$ and $M \simeq S^n$.\\ In case {\em{(b)}} ($n \geq 3$), as a consequence of Theorem \ref{tre}.4, we necessarily have $i(q)=n/2$ (and therefore $n$ must be even!). Moreover $\overline{\mbox{ $ \mathscr{C}$}_{p_1}(\mbox{$\mathscr{F}$})} \approx \overline{\mbox{ $ \mathscr{C}$}_{p_2}(\mbox{$\mathscr{F}$})}$ and $M=\overline{\mbox{ $ \mathscr{C}$}_{p_1}(\mbox{$\mathscr{F}$})} \cup_\phi \overline{\mbox{ $ \mathscr{C}$}_{p_2}(\mbox{$\mathscr{F}$})}$ may be thought of as two copies of the same (singular) manifold glued together along the boundary, by means of the diffeomorphism $\phi$.\\ In case {\em{(b')}} ($n=2$), we obtain the same result as above, i.e. $\overline{\mbox{ $ \mathscr{C}$}_{p_1}(\mbox{$\mathscr{F}$})} \approx \overline{\mbox{ $ \mathscr{C}$}_{p_2}(\mbox{$\mathscr{F}$})}$ and $M=\overline{\mbox{ $ \mathscr{C}$}_{p_1}(\mbox{$\mathscr{F}$})} \cup_\phi \overline{\mbox{ $ \mathscr{C}$}_{p_2}(\mbox{$\mathscr{F}$})}$. We notice that case {\em{(b')}} occurs when the set $\mbox{ $ \mathscr{C}$}_{p_i}(\mbox{$\mathscr{F}$}) \simeq \mbox{\textrm{B}} ^2/S^0$ is obtained by identifying two points of the boundary in a way that reverses the orientation.\\ In cases {\em{(b)}} and {\em{(b')}}, it turns out that $\# Sing (\mbox{$\mathscr{F}$}_1)=3$. Moreover, $\mbox{$\mathscr{F}$}_1$ has a first integral, given by the projection of $M$ onto the space of (possibly singular) leaves: in fact, by Lemma \ref{reeb}.6, the space of leaves is diffeomorphic to a closed interval of $\mbox{ $ \mathbb{R}$}$. In this way $M$ turns out to be an Eells-Kuiper manifold. This ends the case $l=1$.\\ Let $l>1$ (and $\# Sing(\mbox{$\mathscr{F}$}) >3$).
As above, in $Sing(\mbox{$\mathscr{F}$})$ there exist at least one saddle $q$ and two (distinct) centers, $p_1, p_2$, such that $q \in \partial \mbox{ $ \mathscr{C}$}_{p_1}(\mbox{$\mathscr{F}$}) \cap \partial \mbox{ $ \mathscr{C}$}_{p_2}(\mbox{$\mathscr{F}$})$; we are led to the same possibilities {\em{(a)}}, {\em{(b)}} for $n \geq 3$ and {\em{(a')}}, {\em{(b')}} for $n=2$. However, {\em{(b)}} and {\em{(b')}} cannot occur, otherwise $M=\overline{\mbox{ $ \mathscr{C}$}_{p_1}(\mbox{$\mathscr{F}$})} \cup_\phi \overline{\mbox{ $ \mathscr{C}$}_{p_2}(\mbox{$\mathscr{F}$})}$ and $\# Sing(\mbox{$\mathscr{F}$})=3$, a contradiction. Then we may proceed with the elimination of a trivial couple. In this way we obtain the foliated manifold $(M,\mbox{$\mathscr{F}$}_{l-1})$, to which we apply the inductive hypothesis. The theorem is proved, observing that, a posteriori, case {\em{(1)}} holds if $k=l+2$ and case {\em{(2)}} if $k=l+1$.\\ \section{Haefliger-type theorems}\label{haefli} In this section, we investigate the existence of leaves of singular foliations with unilateral holonomy. Taking into account the results of the previous section, for Morse foliations we may state or exclude such an occurrence, according to the following theorem:\\ {\bf{Theorem \ref{haefli}.1}} {\em{Let $\mbox{$\mathscr{F}$}$ be a $C^ \infty$, codimension one, transversely orientable Morse foliation on a compact manifold $M^n$, $n \geq 3$, not necessarily closed. Let $k$ be the number of centers and $l$ the number of saddles.
We have the following possibilities: {\em{(i)}} if $k \geq l+1$, then all leaves are closed in $M \setminus Sing(\mbox{$\mathscr{F}$})$; in particular, if $\partial M \neq \varnothing$ or $k \geq l+2$, each regular leaf of $\mbox{$\mathscr{F}$}$ is diffeomorphic to a sphere, and each singular leaf is homeomorphic to a sphere (more precisely, it is diffeomorphic to a sphere with a pinch at one point); {\em{(ii)}} if $k=l$, there are two possibilities: all leaves are closed in $M \setminus Sing(\mbox{$\mathscr{F}$})$, or there exists some compact (regular or singular) leaf with unilateral holonomy}}.\\ {\bf{Example \ref{haefli}.2}} The foliation of Example \ref{quattro}.7 is an instance of Theorem \ref{haefli}.1, case {\em{(ii)}}, with all leaves closed. The Reeb foliation of $S^3$, and each foliation we may obtain from it by introducing $l=k$ trivial center-saddle couples, are examples of Theorem \ref{haefli}.1, case {\em{(ii)}}, with a leaf with unilateral holonomy. Now we consider other possibilities for $Sing(\mbox{$\mathscr{F}$})$.\\ {\bf{Definition \ref{haefli}.3}} Let $\mbox{$\mathscr{F}$}$ be a $C^ \infty$, codimension one foliation on a compact manifold $M^n, n\geq 3$, with singular set $Sing(\mbox{$\mathscr{F}$}) \neq \varnothing$. We say that $Sing(\mbox{$\mathscr{F}$})$ is {\em{regular}} if its connected components are either isolated points or smoothly embedded curves, diffeomorphic to $S^1$. We extend the definition of stability to regular components, by saying that a connected component $\Gamma \subset Sing(\mbox{$\mathscr{F}$})$ is {\em{(weakly) stable}} if there exists a neighborhood of $\Gamma$ where the foliation has all leaves compact (notice that we can repeat the proof of Lemma \ref{reeb}.7 and obtain that a weakly stable component is a stable component).
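For instance, the singular foliation of $S^3=\{(z_1,z_2) \in \mbox{ $ \mathbb{C}$}^2:|z_1|^2+|z_2|^2=1 \}$ defined by the function $$f(z_1,z_2)=|z_1|^2$$ has regular singular set, whose components are the two circles $\{z_1=0 \}$ and $\{z_2=0 \}$, both stable: the leaves $f^{-1}(t)$, $0<t<1$, are compact tori and the space of leaves is the interval $[0,1]$.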
In the case $Sing(\mbox{$\mathscr{F}$})$ is regular, with stable isolated singularities, when $n \geq 3$ we may exclude a Haefliger-type result, as a consequence of Lemma \ref{reeb}.5 and the Reeb Global Stability Theorem for manifolds with boundary. Then we study the case $Sing(\mbox{$\mathscr{F}$})$ regular, with stable components, all diffeomorphic to $S^1$. Let $J$ be a set such that, for all $j \in J$, the curve $\gamma_j:S^1 \rightarrow M$ is a smooth embedding and $\Gamma_j:= \gamma_j(S^1) \subset Sing(\mbox{$\mathscr{F}$})$ is stable. Then $J$ is a finite set. Indeed, otherwise, selecting a point $x_j \in \Gamma_j$ for each $j \in J$, we would obtain an infinite set $\{x_j \}_{j \in J}$ with an accumulation point, which is impossible, because the singular components are separated. We may regard a singular component $\Gamma_j$ as a {\em{degenerate leaf}}, in the sense that we may associate to it a single point of the space of leaves. We need the following definition.\\ {\bf{Definition \ref{haefli}.4}} Let $\mbox{$\mathscr{F}$}$ be a $C^ \infty$, codimension one foliation on a compact manifold $M$. Let $\overline{D^2}$ be the closed 2-disc and $g:\overline{D^2} \rightarrow M$ a $C^ \infty$ map. We say that $p \in \overline{D^2}$ is a {\em{tangency point of $g$ with $\mbox{$\mathscr{F}$}$}} if $(\textrm{d}g)_p (\mbox{ $ \mathbb{R}$}^2) \subset T_{g(p)} \mbox{$\mathscr{F}$}_{g(p)}.$ We recall a proposition on which Haefliger's theorem is based (cf. \cite{Cam}).\\ {\bf{Proposition \ref{haefli}.5}} {\em{Let $A: \overline{D^2} \rightarrow M$ be a $C^ \infty$ map, such that the restriction $A|_{\partial D^2}$ is transverse to $\mbox{$\mathscr{F}$}$, i.e. $\forall x \in \partial D^2,(\textrm{d}A)_x(T_x (\partial D^2))+ T_{A(x)} \mbox{$\mathscr{F}$}_{A(x)}=T_{A(x)}M$.
Then, for every $\epsilon >0$ and every integer $r \geq 2$, there exists a $C^ \infty$ map, $g: \overline{D^2} \rightarrow M$, $\epsilon$-near $A$ in the $C^r$-topology, satisfying the following properties: {\em{(i)}} $g|_{\partial D^2}$ is transverse to $\mbox{$\mathscr{F}$}$. {\em{(ii)}} For every point $p \in D^2$ of tangency of $g$ with $\mbox{$\mathscr{F}$}$, there exists a foliation box $U$ of $\mbox{$\mathscr{F}$}$ with $g(p) \in U$ and a distinguished map $\pi:U \rightarrow \mbox{ $ \mathbb{R}$}$ such that $p$ is a non-degenerate singularity of $\pi \circ g:g^{-1}(U) \rightarrow \mbox{ $ \mathbb{R}$}$. In particular, there is only a finite number of tangency points of $g$ with $\mbox{$\mathscr{F}$}$, since they are isolated, and they are contained in the open disc $D^2=\{z \in \mbox{ $ \mathbb{R}$}^2:||z||<1\}$. {\em{(iii)}} If $T=\{p_1, \dots, p_t \}$ is the set of tangency points of $g$ with $\mbox{$\mathscr{F}$}$, then $g(p_i)$ and $g(p_j)$ are contained in distinct leaves of $\mbox{$\mathscr{F}$}$, for every $i \neq j$. In particular, the singular foliation $g^*(\mbox{$\mathscr{F}$})$ has no saddle connections.}} We are now able to prove a similar result in the presence of singular components.\\{\bf{Proposition \ref{haefli}.6}} {\em{Let $\mbox{$\mathscr{F}$}$ be a codimension one, $C^ \infty$ foliation on a compact manifold $M^n$, $n \geq 3$, with regular singular set, $Sing(\mbox{$\mathscr{F}$}) = \cup _{j \in J} \Gamma_j \neq \varnothing$, where the $\Gamma_j$ are all stable components diffeomorphic to $S^1$ and $J$ is finite. Let $A: \overline{D^2} \rightarrow M$ be a $C^ \infty$ map, such that the restriction $A|_{\partial D^2}$ is transverse to $\mbox{$\mathscr{F}$}$.
Then, for every $\epsilon >0$ and every integer $r \geq 2$, there exists a $C^ \infty$ map, $g: \overline{D^2} \rightarrow M$, $\epsilon$-near $A$ in the $C^r$-topology, satisfying properties {\em{(i)}} and {\em{(iii)}} of Proposition \ref{haefli}.5, while {\em{(ii)}} is changed into: {\em{(ii')}} for every point $p \in D^2$ of tangency of $g$ with $\mbox{$\mathscr{F}$}$, we have two cases: (1) if $L_{g(p)}$ is a regular leaf of $\mbox{$\mathscr{F}$}$, there exists a foliation box, $U$ of $\mbox{$\mathscr{F}$}$, with $g(p) \in U$, and a distinguished map, $\pi:U \rightarrow \mbox{ $ \mathbb{R}$}$, satisfying properties as in {\em{(ii)}} of Proposition \ref{haefli}.5; (2) if $L_{g(p)}$ is a degenerate leaf of $\mbox{$\mathscr{F}$}$, there exists a neighborhood, $U$ of $p$, and a singular submersion, $\pi:U \rightarrow \mbox{ $ \mathbb{R}$}$, satisfying properties as in {\em{(ii)}} of Proposition \ref{haefli}.5.}}\\ {\em{Proof.}} We start by recalling the idea of the classical proof.\\ We choose a finite covering of $A(\overline{D^2})$ by foliation boxes $\{Q_i\}^r_{i=1}$. In each $Q_i$ the foliation is defined by a distinguished map, the submersion $\pi_i:Q_i \rightarrow \mbox{ $ \mathbb{R}$}$. We choose an atlas, $\{(Q_i, \phi_i)\} ^r_{i=1}$, such that the last component of $\phi_i:Q_i \rightarrow \mbox{ $ \mathbb{R}$}^n$ is $\pi_i$, i.e. $\phi_i=(\phi_i^1, \phi_i^2, \dots , \phi_i^{n-1}, \pi_i)$. We construct the finite cover of $\overline{D^2}$, $\{W_i=A^{-1}(Q_i)\}^r_{i=1}$; the expression of $A$ in coordinates is $A|_{W_i}=(A_i^1, \dots , A_i^{n-1}, \pi_i \circ A)$. We may choose covers of $\overline{D^2}$, $\{U_i \}_{i=1}^r$, $\{V_i \}_{i=1}^r$, such that $\overline{U_i} \subset V_i \subset \overline {V_i} \subset W_i$, $i=1, \dots , r$; then we proceed by induction on the index $i$. Starting with $i=1$ and setting $g_0=A$, we apply a result (\cite{Cam}, Chap. VI, $\S$2, Lemma 1, p.
120) and we modify $g_{i-1}$ into a new function $g_i$, in such a way that $g_i(W_i) \subset Q_i$ and $\pi_i \circ g_i:W_i \rightarrow \mbox{ $ \mathbb{R}$}$ is Morse on the subset $U_i \subset W_i$. At last we set $g=g_r$. In the present case, essentially, it is enough to choose a set of couples, $\{(U_k,\pi_k)\}_{k \in K}$, where $\{U_k\}_{k \in K}$ is an open covering of $M$, $\pi_k:U_k \rightarrow \mbox{ $ \mathbb{R}$}$, for $k \in K$, is a (possibly singular) submersion and, if $U_k \cap U_l \neq \varnothing$ for a couple of indices $k,l \in K$, there exists a diffeomorphism $p_{lk}:\pi_k(U_k \cap U_l) \rightarrow \pi_l(U_k \cap U_l)$, such that $\pi_l=p_{lk}\circ \pi_k$. By hypothesis, there exists the set of couples $\{(U_i,\pi_i)\}_{i \in I}$, where $\{U_i\}_{i \in I}$ is an open covering of $M \setminus Sing(\mbox{$\mathscr{F}$})$ and, for $i \in I$, the map $\pi_i:U_i \rightarrow \mbox{ $ \mathbb{R}$}$ is a distinguished map, defining the foliated manifold $(M \setminus Sing(\mbox{$\mathscr{F}$}), \mbox{$\mathscr{F}$}^*)$. Let $y \in Sing(\mbox{$\mathscr{F}$})$; then $y \in \Gamma_j$, for some $j \in J$. As $y \in M$, there exists a neighborhood $C \ni y$, homeomorphic to an $n$-ball. Let $h:C \rightarrow \mbox{\textrm{B}}^n$ be such a homeomorphism. As the map $\gamma_j:S^1 \rightarrow \Gamma_j$ is a smooth embedding, we may suppose that, locally, $\Gamma_j$ is sent onto a diameter of the ball $\mbox{\textrm{B}}^n$, i.e. $h(C \cap \Gamma_j)=\{x_2=\dots =x_n=0 \}$. For each singular point $z=h^{-1}(b,0,\dots,0)$, the set $D=h^{-1}(\{b\} \times \mbox{\textrm{B}}^{n-1})$, homeomorphic to a small $(n-1)$-ball, is transverse to the foliation at $z$. Moreover, if $z_1 \neq z_2$, then $D_1 \cap D_2=\varnothing$. The restriction $\mbox{$\mathscr{F}$}|_D$ is a singular foliation with an isolated stable singularity at $z$. By Lemma \ref{reeb}.5, the leaves of $\mbox{$\mathscr{F}$}|_D$ are diffeomorphic to $(n-2)$-spheres.
It turns out that $y$ has a neighborhood homeomorphic to the product $(-1,1) \times \mbox{\textrm{B}}^{n-1}$, where the foliation is the image of the singular trivial foliation of $(-1,1) \times \mbox{\textrm{B}}^{n-1}$, given by $(-1,1) \times S^{n-2}\times \{t \}, t \in (0,1)$, with singular set $(-1,1) \times \{0 \}$. Let $\pi_y:U_y \rightarrow[0,1)$ be the projection. If, for a couple of singular points $y,w \in Sing(\mbox{$\mathscr{F}$})$, we have $U_y \cap U_w \neq \varnothing$, we may suppose they belong to the same connected component, $\Gamma_j$. We have $\pi_w \circ \pi_y^{-1}(0)=0$ and, as a consequence of Lemma \ref{reeb}.6, there exists a diffeomorphism between $\pi_y(U_y \cap U_w \setminus \Gamma_j)$ and $\pi_w(U_y \cap U_w \setminus \Gamma_j)$. The same happens if $U_y \cap U_i \neq \varnothing$ for some $U_i \subset M \setminus Sing(\mbox{$\mathscr{F}$})$. It follows that $\pi_y$ is singular on $U_y \cap Sing(\mbox{$\mathscr{F}$})$ and non-singular on $U_y \setminus Sing(\mbox{$\mathscr{F}$})$, i.e. $(d \pi_y)_z=0 \Leftrightarrow z \in U_y \cap Sing(\mbox{$\mathscr{F}$})$. At the end, we set $K=I \cup Sing(\mbox{$\mathscr{F}$})$.\\ Let $g: \overline{D^2} \rightarrow M$ be a map. Then $g$ defines the foliation $g^*(\mbox{$\mathscr{F}$})$, pull-back of $\mbox{$\mathscr{F}$}$, on $\overline{D^2}$. Observe that, if $Sing(\mbox{$\mathscr{F}$})= \varnothing$, then $Sing(g^*(\mbox{$\mathscr{F}$}))=\{ \textrm{tangency points of }g \textrm{ with } \mbox{$\mathscr{F}$} \}$, but in the present case, as $Sing(\mbox{$\mathscr{F}$}) \neq \varnothing$, we have $Sing(g^*(\mbox{$\mathscr{F}$}))=\{ \textrm{tangency points of }g \textrm{ with } \mbox{$\mathscr{F}$} \} \cup g^{-1}(Sing(\mbox{$\mathscr{F}$}))$. Whether $p$ is a point of tangency of $g$ with $\mbox{$\mathscr{F}$}$ or $p \in g^{-1}(Sing(\mbox{$\mathscr{F}$}))$, we have $\textrm{d}(\pi_k \circ g)_p=0$. With this remark, we may follow the classical proof.
As a consequence of Proposition \ref{haefli}.6, we have:\\ {\bf{Theorem \ref{haefli}.7 (Haefliger's theorem for singular foliations)}} {\em{Let $\mbox{$\mathscr{F}$}$ be a codimension one, $C^2$, possibly singular foliation of an $n$-manifold $M$, with $Sing(\mbox{$\mathscr{F}$})$ (empty or) regular and with stable components diffeomorphic to $S^1$. Suppose there exists a closed curve transverse to $\mbox{$\mathscr{F}$}$, homotopic to a point. Then there exists a leaf with unilateral holonomy.}} \section{Novikov-type theorems}\label{novi} We end this article with a result based on the original Novikov Compact Leaf Theorem and on the notion of stable singular set. To this aim, we begin with the following remark. Novikov's statement establishes the existence of a compact leaf for foliations on 3-manifolds with finite fundamental group. This result actually proves the existence of an invariant submanifold, say $N \subset M$, with boundary, such that $\mbox{$\mathscr{F}$}|_N$ contains open leaves whose universal covering is the plane. Moreover these leaves accumulate on the compact leaf of the boundary. In what follows, a submanifold with the above properties will be called a {\em{Novikov component}}. In particular a Novikov component may be a Reeb component, i.e. a solid torus endowed with its Reeb foliation. We recall that two Reeb components, glued together along the boundary by means of a diffeomorphism which sends meridians to parallels and vice versa, give the classical example of the Reeb foliation of $S^3$.\\ If $\mbox{$\mathscr{F}$}$ is a Morse foliation of a 3-manifold, as all saddles have index 1 or 2, we are always in the conditions of Proposition \ref{quattro}.2 and we are thus reduced to considering just two (opposite) cases: {\em{(i)}} all singularities are centers, {\em{(ii)}} all singularities are saddles.
In case {\em{(i)}}, by the proof of the Reeb Sphere Theorem \ref{uno}.4, we know that all leaves are compact; in case {\em{(ii)}}, all leaves may be open and dense, as is shown by an example of a foliation of $S^3$ with Morse singularities and no compact leaves \cite{Ros-Rou}.\\ As in the previous paragraph, we study the case in which $Sing(\mbox{$\mathscr{F}$})$ is regular with stable components, $\Gamma_j, j \in J$, where $J$ is a finite set. We have:\\ {\bf{Theorem \ref{novi}.1}} {\em{Let $\mbox{$\mathscr{F}$}$ be a $C^ \infty$, codimension one foliation on a closed $3$-manifold $M^3$. Suppose $Sing(\mbox{$\mathscr{F}$})$ is (empty or) regular, with stable components. Then we have two possibilities: {\em{(i)}} all leaves of $\mbox{$\mathscr{F}$}$ are compact; {\em{(ii)}} $\mbox{$\mathscr{F}$}$ has a Novikov component.}}\\ {\em{Proof.}} If $Sing(\mbox{$\mathscr{F}$})= \varnothing$, the thesis (case {\em{(ii)}}) follows from Novikov's theorem. Let $Sing(\mbox{$\mathscr{F}$}) \neq \varnothing$. We may suppose that $\mbox{$\mathscr{F}$}$ is transversely orientable (otherwise we pass to the transversely orientable double covering). If $Sing(\mbox{$\mathscr{F}$})$ contains an isolated singularity, as we know, we are in case {\em{(i)}}. Then we suppose $Sing(\mbox{$\mathscr{F}$})$ contains no isolated singularity, i.e. $Sing(\mbox{$\mathscr{F}$})= \bigcup_{j \in J} \Gamma_j$. Set $\mathcal D(\mbox{$\mathscr{F}$})= \{\Gamma_j, j \in J \} \cup \{ \textrm{ compact leaves with trivial holonomy} \}$. By the Reeb Local Stability Theorem \ref{uno}.1, $\mathcal D(\mbox{$\mathscr{F}$})$ is open. We may have $\partial \mathcal D(\mbox{$\mathscr{F}$})= \varnothing$, and then we are in case {\em{(i)}}, or $\partial \mathcal D(\mbox{$\mathscr{F}$})\neq \varnothing$, and in this case it contains a leaf with unilateral holonomy, $F$. It is clear that $F$ bounds a Novikov component, and then we are in case {\em{(ii)}}; indeed, on one side, $F$ is accumulated by open leaves.
If $F'$ is an accumulating leaf, then its universal covering is $p:\mbox{ $ \mathbb{R}$}^2 \rightarrow F'$. Indeed, suppose, by contradiction, that the universal covering of $F'$ is $p:S^2 \rightarrow F'$. By the Reeb Global Stability Theorem for manifolds with boundary, all leaves would be compact, diffeomorphic to $p(S^2)$. This is impossible, since $F$ must have infinite fundamental group, and the proof is concluded. The last result may be reread in terms of the existence of closed curves, transverse to the foliation. We have:\\ {\bf{Lemma \ref{novi}.2}} {\em{Let $\mbox{$\mathscr{F}$}$ be a codimension one, $C^ \infty$ foliation on a closed $3$-manifold $M$, with singular set, $Sing(\mbox{$\mathscr{F}$}) \neq \varnothing$, regular, with stable components. Then $\mbox{$\mathscr{F}$}$ is a foliation with all leaves compact if and only if there exist no closed transversals.}}\\ {\em{Proof.}} (Sufficiency) If the foliation admits an open (in $M \setminus Sing(\mbox{$\mathscr{F}$})$) leaf, $L$, it is well known that we may find a closed curve, intersecting $L$, transverse to the foliation. Conversely (necessity), let $\mbox{$\mathscr{F}$}$ be a foliation with all leaves compact. If necessary, we pass to the transversely orientable double covering $p:(\widetilde{M}, \widetilde{\mbox{$\mathscr{F}$}}) \rightarrow (M,\mbox{$\mathscr{F}$})$. In this way, we apply Lemma \ref{reeb}.6 and obtain, as $Sing(\widetilde{\mbox{$\mathscr{F}$}}) \neq \varnothing$, that the projection onto the space of leaves is a (global) $C^ \infty$ first integral of $\widetilde{\mbox{$\mathscr{F}$}}$, $f:\widetilde{M} \rightarrow [0,1] \subset \mbox{ $ \mathbb{R}$}$. Suppose, by contradiction, that there exists a $C^1$ closed transversal to the foliation $\mbox{$\mathscr{F}$}$, the curve $\gamma:S^1 \rightarrow M$. The lifting of $\gamma^2$ is a closed curve, $\Gamma:S^1 \rightarrow \widetilde{M}$, transverse to $\widetilde{\mbox{$\mathscr{F}$}}$.
The set $f(\Gamma(S^1))$ is compact and therefore has a maximum and a minimum, $m_1,m_2 \in \mbox{ $ \mathbb{R}$}$. This is a contradiction, because $\Gamma$ cannot be transverse to the leaves $\{ f^{-1}(m_1) \}, \{ f^{-1}(m_2)\}$. \begin{figure} \begin{minipage}[t]{.45\linewidth} { \begin{center} { \psfrag{p}{$p$} \psfrag{q}{$q$} \psfrag{L}{$L$} \psfrag{F}{$F$} \psfrag{U}{$U$} \psfrag{S}{$\partial \mbox{ $ \mathscr{C}$}_p(\mbox{$\mathscr{F}$})$} \includegraphics[scale=.25]{neigh3.eps} \caption{$p-q$ is not a trivial coupling when $1<l<n-1$, where $l$ is the index of the saddle $q$.} \label{LF} } \end{center} }\end{minipage}% \begin{minipage}[t]{.1\linewidth}{\hspace{.1\linewidth}}\end{minipage}% \begin{minipage}[t]{.45\linewidth} { \begin{center} { \psfrag{1}{$ST_1$} \psfrag{2}{$ST_2$} \psfrag{g}{$\gamma$} \includegraphics[scale=.35]{nocycle.eps} \caption{A singular foliation of $S^3$, with no vanishing cycles.} \label{nocycle} } \end{center} } \end{minipage} \end{figure} With this result, we may rephrase the previous theorem.\\ {\bf{Corollary \ref{novi}.3}} {\em{Let $\mbox{$\mathscr{F}$}$ be a codimension one, $C^ \infty$ foliation on a $3$-manifold $M$, such that $Sing(\mbox{$\mathscr{F}$})$ is regular with stable components. Then either {\em{(i)}} there are no closed transversals, or equivalently, $\mbox{$\mathscr{F}$}$ is a foliation by compact leaves, or {\em{(ii)}} there exists a closed transversal, or equivalently, $\mbox{$\mathscr{F}$}$ has a Novikov component.}}\\ {\bf{Remark \ref{novi}.4}} In the situation we are considering, we cannot state a singular version of Auxiliary Theorem I (see, for example \cite{Mo-Sca}).
In fact, even though a singular version of Haefliger's Theorem is given, the existence of a closed curve transverse to the foliation, homotopic to a constant, does not lead, in general, to the existence of a vanishing cycle, as is shown by the following counterexample.\\ {\bf{Example \ref{novi}.5}} We consider the foliation of $S^3$ given by a Reeb component, $ST_1$, glued (through a diffeomorphism of the boundary which interchanges meridians with parallels) to a solid torus $ST_2= S^1 \times \overline{D^2}=T^2 \times (0,1) \cup S^1$. The torus $ST_2$ is endowed with the singular trivial foliation $\mbox{$\mathscr{F}$}|_{ST_2}=T^2 \times \{t \}$, for $t \in (0,1)$, where $Sing(\mbox{$\mathscr{F}$}|_{ST_2})=S^1=Sing(\mbox{$\mathscr{F}$})$. As a closed transversal to the foliation, we consider the curve $\gamma:S^1 \rightarrow ST_1 \subset S^3$, drawn in figure \ref{nocycle}. Let $f:\overline{D^2} \rightarrow S^3$ be an extension of $\gamma$; the extension $f$ is assumed to be in general position with respect to $\mbox{$\mathscr{F}$}$, as a consequence of proposition \ref{haefli}.5. As $\gamma(S^1)$ is linked to the singular component $S^1 \subset ST_2$, we have $f(\overline{D^2}) \cap Sing(\mbox{$\mathscr{F}$}) \neq \varnothing$. As a consequence, we find a decreasing sequence of cycles, $\{\beta_n \}$, (the closed curves of the picture) which does not admit a cycle, $\beta_\infty$, such that $\beta_n > \beta _\infty$, for all $n$. In fact, the ``limit'' of the sequence is not a cycle, but the point $f(\overline{D^2}) \cap Sing(\mbox{$\mathscr{F}$})$. \\ {\bf{Example \ref{novi}.6}} The different situations of Theorem \ref{novi}.1 or Corollary \ref{novi}.3 may be exemplified as follows. It is easy to see that $S^3$ admits a singular foliation with all leaves compact (diffeomorphic to $T^2$) and two singular (stable) components linked together, diffeomorphic to $S^1$.
In fact, one can verify that $S^3$ is the union of two solid tori, $ST_1$ and $ST_2$, glued together along the boundary, both endowed with a singular trivial foliation.\\ We construct another foliation on $S^3$, modifying the previous one. We set $\widetilde{ST_1}=S^1 \times \{0 \} \cup T^2 \times (0,1/2]$. In this way, $ST_1= \widetilde{ST_1} \cup T^2 \times (1/2,1]$. We now modify the foliation in $ST_1 \setminus \widetilde{ST_1}$, by replacing the trivial foliation with a foliation with cylindrical leaves accumulating on the two components of the boundary.
\section{Introduction} In the context of the theory of topological graphs and graph drawing, many interesting questions have been raised concerning the adjacency structure of a family of curves in the plane or in another surface \cite{FP10}. In particular, during the past four decades, various important properties of string graphs (i.e., intersection graphs of curves in the plane) have been discovered, and the study of different crossing numbers of graphs and their relations to one another has become a vast area of research. A useful tool in these investigations is the so-called crossing lemma of Ajtai, Chv\'atal, Newborn, Szemer\'edi and Leighton \cite{ACNS}, \cite{L}. It states the following: Given a graph of $n$ vertices and $e>4n$ edges, no matter how we draw it in the plane by not necessarily straight-line edges, there are at least constant times $e^3/n^2$ crossing pairs of edges. \smallskip This lemma has inspired a number of results establishing the existence of many crossing subconfigurations of a given type in sufficiently rich geometric or topological structures \cite{D98}, \cite{Sh03}, \cite{SoT01}, \cite{GNT00}. \smallskip In this note, we will be concerned with families of curves in the plane. By a {\em curve}, we mean a non-self-intersecting continuous arc in the plane, that is, a homeomorphic image of the open interval $(0,1)$. Two curves are said to {\em touch} each other if they have precisely one interior point in common and at this point the first curve does not pass from one side of the second curve to the other. Any other pair of curves with nonempty intersection is called {\em crossing}. A family of curves is in {\em general position} if any two of them intersect in a finite number of points and no three pass through the same point. \smallskip Let $n$ be even, let $t$ be a multiple of $n$, and suppose that $n\le t<{n^2\over 4}$.
Consider a collection $A$ of $n-{2t\over n}>{n\over 2}$ pairwise disjoint curves, and another collection $B$ of ${2t\over n}$ curves such that (i) $A\cup B$ is in general position, (ii) each element of $B$ touches precisely ${n\over 2}$ elements of $A$, and (iii) no two elements of $B$ touch each other. \noindent The family $A\cup B$ consists of $n$ curves such that the number of touching pairs among them is $t$. The only pairs of curves that may cross each other belong to $B$. Thus, the number of crossing pairs is at most ${2t/n \choose 2}\le {2t^2\over n^2}$. See Figure 1. \begin{figure}[h] \begin{center} \includegraphics[scale=0.45]{crtouching1.eps} \end{center} \caption{A set of $n$ curves with $t$ touching pairs and at most ${2t^2\over n^2}$ crossing pairs.} \end{figure} The aim of the present note is to prove that this construction is optimal up to a constant factor, that is, any family of $n$ curves and $t$ touchings has at least constant times ${t^2\over n^2}$ crossing pairs. \medskip \noindent{\bf Theorem.} {\em Consider a family of $n$ curves in general position in the plane which determines $t$ touching pairs and $c$ crossing pairs. If $t\ge 10n$, then we have $c \ge {1\over 10^5}{t^2\over n^2}$. This bound is best possible up to a constant factor.} \medskip We make no attempt to optimize the constants in the theorem. \smallskip Pach, Rubin, and Tardos \cite{PRT16} established a similar relationship between $t$, the number of touching pairs, and $C$, the number of crossing {\em points} between the curves. They proved that $C \ge t(\log\log (t/n))^{\delta}$, for an absolute constant $\delta>0$. Obviously, we have $C\ge c$. There is an arrangement of $n$ red curves and $n$ blue curves in the plane such that every red curve touches every blue curve, and the total number of crossing points is $C=\Theta(n^2\log n)$; cf.~\cite{FFPP10}. Of course, the number of crossing pairs, $c$, can never exceed ${n\choose 2}$. 
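The counting in the construction above can be checked mechanically. The following sketch (illustrative only; the values of $n$ and $t$ are arbitrary choices satisfying the stated constraints) verifies that $A\cup B$ consists of $n$ curves with exactly $t$ touching pairs and at most $2t^2/n^2$ crossing pairs:

```python
from math import comb

def construction_counts(n, t):
    """Count curves, touchings, and the crossing-pair bound for the
    construction: |A| = n - 2t/n pairwise disjoint curves, |B| = 2t/n
    curves, each curve of B touching n/2 curves of A."""
    assert n % 2 == 0 and t % n == 0 and n <= t < n * n // 4
    size_B = 2 * t // n
    size_A = n - size_B
    curves = size_A + size_B                 # total number of curves: n
    touchings = size_B * (n // 2)            # each B-curve touches n/2 A-curves
    max_crossings = comb(size_B, 2)          # only pairs within B may cross
    return curves, touchings, max_crossings

n, t = 100, 1200                             # hypothetical sample values
curves, touchings, max_crossings = construction_counts(n, t)
assert curves == n and touchings == t
assert max_crossings <= 2 * t * t // (n * n)
```

The last assertion reflects the estimate ${2t/n \choose 2}\le {2t^2\over n^2}$ used in the text.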
Between $n$ arbitrary curves, the number of touchings $t$ can be as large as $(\frac34 + o(1)){n\choose 2}$; cf. \cite{PT06}. However, if we restrict our attention to algebraic plane curves of bounded degree, then we have $t=O(n^{3/2})$, where the constant hidden in the notation depends on the degree \cite{ESZ16}. \section{Proof of Theorem} We start with an easy observation. \medskip \noindent {\bf Lemma.} {\em Given a family of $n\ge 3$ curves in general position in the plane, no two of which cross, the number of touchings, $t$, cannot exceed $3n-6$.} \medskip \noindent {\em Proof.} Pick a different point on each curve. Whenever two curves touch each other at a point $p$, connect them by an edge (arc) passing through $p$. In the resulting drawing, any two edges that do not share an endpoint are represented by disjoint arcs. According to the Hanani-Tutte theorem~\cite{Tu70}, this means that the underlying graph is planar, so that its number of edges, $t$, satisfies $t\le 3n-6$. $\Box$ \medskip \noindent {\em Proof of Theorem.} We proceed by induction on $n$. For $n\le 20$, the statement is void. Suppose that $n>20$ and that the statement has already been proved for all values smaller than $n$. We distinguish two cases. \medskip \noindent{\sc CASE A:} $t \le 10n^{3/2}$. \smallskip In this case, we want to establish the stronger statement $$c\ge {1\over 10^4}{t^2\over n^2}.$$ By the assumption, we have \begin{equation}\label{eq0} {1\over 10^4}{t^2\over n^2}\le {n\over 100}. \end{equation} Let $G_t$ (resp., $G_c$) denote the {\em touching graph} (resp., {\em crossing graph}) associated with the curves. That is, the vertices of both graphs correspond to the curves, and two vertices are connected by an edge if and only if the corresponding curves are touching (resp., crossing). 
\smallskip \vskip 0.3cm \begin{figure}[h] \begin{center} \includegraphics[scale=0.4]{crtouching2.eps} \end{center} \caption{Graph $G_c$.} \end{figure} Let $T$ be a minimal vertex cover in $G_c$, that is, a smallest set of vertices of $G_c$ such that every edge of $G_c$ has at least one endpoint in $T$. Let $\tau=|T|$. Let $U$ denote the complement of $T$. Obviously, $U$ is an {\em independent set} in $G_c$. According to the Lemma, the number of edges in $G_t[U]$, the touching graph induced by $U$, satisfies \begin{equation}\label{eq1.5} |E(G_t[U])|< 3|U|\le 3n. \end{equation} By the minimality of $T$, $G_c$ has at least $|T|=\tau$ edges. That is, we have $c \ge \tau$, so we are done if $\tau\ge {1\over 10^4}{t^2\over n^2}$. \smallskip From now on, we can and shall assume that $\tau < {1\over 10^4}{t^2\over n^2}.$ By (\ref{eq0}), we have ${1\over 10^4}{t^2\over n^2}\le {n\over 100}$. Hence, $|T|\le {n\over 100}$ and \begin{equation}\label{eq2.5} |U|=n-|T| \ge {99n\over 100}. \end{equation} Let $U'\subseteq U$ denote the set of all vertices in $U$ that are not isolated in the graph $G_c$. By the definition of $T$, all neighbors of a vertex $v\in U$ in $G_c$ belong to $T$. If $|U'|\ge {1\over 10^4}{t^2\over n^2}$, then we are done, because $c\ge |U'|.$ \smallskip Therefore, we can assume that \begin{equation}\label{eq3} |U'|< {1\over 10^4}{t^2\over n^2}\le {n\over 100}, \end{equation} where the second inequality follows again by (\ref{eq0}). Letting $U_0=U\setminus U'$, by (\ref{eq2.5}) and (\ref{eq3}) we obtain $|U_0|=|U|-|U'|\ge {98n\over 100}$. Clearly, all vertices in $U_0$ are isolated in $G_c$. Suppose that $G_t[T\cup U']$ has at least ${t\over 10}$ edges. Consider the set of curves $T\cup U'$. We have $n_0=|T\cup U'|\le {2n\over 100}$ and, the number of touchings, $t_0=|E(G_t[T\cup U'])|\ge {t\over 10}$. 
Therefore, by the induction hypothesis, for the number of crossings we have $c_0=|E(G_c[T\cup U'])|\ge {1\over 10^5}{t_0^2\over n_0^2}\ge {1\over 10^4}{t^2\over n^2}$ and we are done. Hence, we assume in the sequel that $G_t[T\cup U']$ has fewer than ${t\over 10}$ edges. Consequently, for the number of edges in $G_t$ running between $T$ and $U_0$, we have \begin{equation}\label{eq4} |E(G_t[T,U_0])|\ge t-|E(G_t[T\cup U'])|-|E(G_t[U_0\cup U'])|\ge t-{t\over 10}-3n>{t\over 2}. \end{equation} Here we used the assumption that $t\ge 10n$. Let $\chi=\chi(G_c[T])$ denote the chromatic number of $G_c[T]$. In any coloring of a graph with the smallest possible number of colors, there is at least one edge between any two color classes. Hence, $G_c[T]$ has at least ${\chi\choose 2}\ge {1\over 10^4}{t^2\over n^2}$ edges, and we are done, provided that $\chi>{1\over 70}\cdot{t\over n}$. \smallskip Thus, we can suppose that \begin{equation}\label{eq5} \chi=\chi(G_c[T])\le {1\over 70}\cdot{t\over n}. \end{equation} Consider a coloring of $G_c[T]$ with $\chi$ colors, and denote the color classes by $I_1, I_2, \ldots , I_{\chi}$. Obviously, for every $j$, $I_j\cup U_0$ is an independent set in $G_c$. Therefore, by the Lemma, $G_t[I_j\cup U_0]$ has at most $3n$ edges. Summing up for all $j$ and taking (\ref{eq5}) into account, we obtain $$|E(G_t[T,U_0])|\le \sum_{j=1}^{\chi}|E(G_t[I_j\cup U_0])| \le{1\over 70}\cdot{t\over n}3n\le {t\over 20},$$ contradicting (\ref{eq4}). This completes the proof in CASE A. \medskip \noindent{\sc CASE B:} $t\ge 10n^{3/2}$. \smallskip Set $p={10n^3\over t^2}\le {1 \over 10}$. Select each curve independently with probability $p$. Let ${\bf n'}$, ${\bf t'}$, and ${\bf c'}$ denote the number of selected curves, the number of touching pairs, and the number of crossing pairs between them, respectively. Clearly, \begin{equation}\label{varhatoertek} E[{\bf n'}]=pn, \;\; E[{\bf t'}]=p^2t, \;\; E[{\bf c'}]=p^2c. 
\end{equation} The number of selected curves, ${\bf n'}$, has a binomial distribution; therefore, \begin{equation}\label{n'} {\rm Prob}[|{\bf n'}-pn|>{1\over 4}pn]<{1\over 3}. \end{equation} By Markov's inequality, \begin{equation}\label{c'} {\rm Prob}[{\bf c'}>3p^2c]<{1\over 3}. \end{equation} Consider the touching graph $G_t$. Let $d_1, \ldots, d_n$ denote the degrees of the vertices of $G_t$, and let $e_1, \ldots , e_t$ denote its edges, listed in any order. We say that an edge $e_i$ is {\em selected} (or belongs to the random sample) if both of its endpoints were selected. Let $X_i$ be the {\em indicator variable} for $e_i$, that is, \[ X_i=\left\{ \begin{array}{ll} 1\;\;\;\mbox{ if $e_i$ was selected,}\\ 0\;\;\;\mbox{ otherwise.}\\ \end{array} \right. \] We have $E[X_i]=p^2$. Then ${\bf t'}=\sum_{i=1}^tX_i$. It follows by straightforward computation that for every $i$, $${\rm var}[X_i]=E[(X_i-E[X_i])^2]=p^2-p^4.$$ If $e_i$ and $e_j$ have a common endpoint for some $i\neq j$, then $${\rm cov}[X_i, X_j]=E[X_iX_j]-E[X_i]E[X_j]=p^3-p^4.$$ If $e_i$ and $e_j$ do not have a common vertex, then $X_i$ and $X_j$ are independent random variables and ${\rm cov}[X_i, X_j]=0$. Therefore, we obtain $$\sigma^2={\rm var}[{\bf t'}] =\sum_{i=1}^t{\rm var}[X_i]+\sum_{1\le i\neq j\le t}{\rm cov}[X_i, X_j]$$ $$=(p^2-p^4)t+(p^3-p^4)\sum_{i=1}^nd_i(d_i-1)<p^2t+2p^3nt.$$ From here, we get $\sigma<\sqrt{p^2t}+\sqrt{2p^3nt}<p^2t=E[{\bf t'}]$. By Chebyshev's inequality, $${\rm Prob}[|{\bf t'}-p^2t|\ge \lambda\sigma]\le {1\over \lambda^2}.$$ Setting $\lambda={1\over 4}$, \begin{equation}\label{t'} {\rm Prob}[|{\bf t'}-p^2t|\ge {p^2t\over 4}]\le {1\over 4^2}<{1\over 3}. \end{equation} It follows from (\ref{n'}), (\ref{c'}), and (\ref{t'}) that, with positive probability, we have \begin{equation}\label{meg0} |{\bf n'}-pn|\le{1\over 4}pn,\;\;\;\; {\bf c'}\le 3p^2c,\;\;\;\;|{\bf t'}-p^2t|\le{1\over 4}p^2t.
\end{equation} \smallskip Consider a fixed selection of $n'$ curves with $t'$ touching pairs and $c'$ crossing pairs for which the above three inequalities are satisfied. Then we have $$t'\ge {3\over 4}p^2t={300\over 4}\cdot{n^6\over t^3},$$ $$n'\le {5\over 4}pn={50\over 4}\cdot{n^4\over t^2},$$ and, hence, \begin{equation}\label{meg1} t'\ge {6n^2\over t}n'\ge 10n'. \end{equation} On the other hand, $$t'\le {5\over 4}p^2t={500\over 4}\cdot{n^6\over t^3},$$ $$n'\ge {3\over 4}pn={30\over 4}\cdot{n^4\over t^2},$$ so that \begin{equation}\label{meg2} 10(n')^{3/2}\ge 10\cdot{30^{3/2}\over 4^{3/2}}\cdot{n^6\over t^3}>t'. \end{equation} According to (\ref{meg1}) and (\ref{meg2}), the selected family meets the requirements of the Theorem in CASE A. Thus, we can apply the Theorem in this case to obtain that $c'\ge {1\over 10^4}{t'^2\over n'^2}$. In view of (\ref{meg0}), we have $$3p^2c\ge c',\;\;\;\; t'\ge{3\over 4}p^2t,\;\;\;\; n'\le {5\over 4}pn.$$ Thus, $$3p^2c\ge c'\ge{1\over 10^4}{t'^2\over n'^2}\ge {1\over 10^4}{(3p^2t/4)^2\over (5pn/4)^2} ={1\over 10^4}\left({3\over 5}\right)^2{p^2t^2\over n^2}.$$ Comparing the left-hand side and the right-hand side, we conclude that $$c\ge{1\over 10^5}{t^2\over n^2},$$ as required. This completes the proof of the Theorem. $\Box$ \bigskip \noindent{\bf Acknowledgment.} The work of J\'anos Pach was partially supported by Swiss National Science Foundation Grants 200021-165977 and 200020-162884. G\'eza T\'oth's work was partially supported by the Hungarian National Research, Development and Innovation Office, NKFIH, Grant K-111827.
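As a sanity check, the moment formulas used in CASE B of the proof, $E[{\bf t'}]=p^2t$ and ${\rm var}[{\bf t'}]=(p^2-p^4)t+(p^3-p^4)\sum_i d_i(d_i-1)$, can be verified exactly on a small example by enumerating all vertex subsets; the 5-vertex path and the retention probability below are arbitrary choices made for illustration.

```python
from itertools import product
from fractions import Fraction

# Hypothetical small touching graph: a path on 5 vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
n_vertices = 5
p = Fraction(3, 10)          # retention probability of each curve

# Exact E[t'] and var[t'] by enumerating all 2^n vertex subsets.
E = Fraction(0)
E2 = Fraction(0)
for keep in product([0, 1], repeat=n_vertices):
    prob = Fraction(1)
    for k in keep:
        prob *= p if k else (1 - p)
    # an edge survives iff both endpoints are kept
    t_prime = sum(1 for u, v in edges if keep[u] and keep[v])
    E += prob * t_prime
    E2 += prob * t_prime ** 2
var = E2 - E ** 2

# Closed forms from the text: E[t'] = p^2 t and
# var[t'] = (p^2 - p^4) t + (p^3 - p^4) * sum_i d_i (d_i - 1).
t = len(edges)
deg = [sum(1 for e in edges if v in e) for v in range(n_vertices)]
sum_dd = sum(d * (d - 1) for d in deg)
assert E == p ** 2 * t
assert var == (p ** 2 - p ** 4) * t + (p ** 3 - p ** 4) * sum_dd
```

Exact rational arithmetic (`Fraction`) avoids any floating-point tolerance in the comparison.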
\section{Introduction} Since the discovery of carbon molecules such as fullerenes~\cite{fullerene} and nanotubes,~\cite{nanotube} \textit{sp$^2$} carbon network systems have been attracting much attention. In these systems, the topological structures of the networks critically control the $\pi$ electronic states and material functions. In the case of fullerenes, the relative arrangements of 12 pentagonal rings in the basis of the hexagonal network of carbon atoms are responsible for a variety of electronic properties. Carbon nanotubes further demonstrate that the tubular circumferential (chiral) vector determines whether they are metallic or insulating. In addition to these materials with closed $\pi$ electron networks, nanographites, which are nanometer-sized graphite fragments with open edges around the peripheries, have novel electronic and magnetic properties,~\cite{enoki1,enoki2} which are not seen in bulk graphite. In \textit{sp$^2$} network systems with open edges, the geometrical arrangements of carbon atoms at the edges should play important roles in the $\pi$ electronic states. Basically, there are two edge shapes in a single-layer graphite sheet (graphene), i.e., zigzag and armchair edges (see Fig.~\ref{edge_structure_fig}). Fujita \textit{et al.}~\cite{fujita1,nakada,fujita2} first predicted the existence of the peculiar electronic states localized only at the zigzag edge from tight-binding band calculations for graphene ribbons. This localized state is known as the graphite ``edge state.'' It stems from the topology of the $\pi$ electron networks at the zigzag edge and does not appear at the armchair edge. The flat-band nature of the edge state results in a peak in the local density of states (LDOS) at the Fermi energy ($E_{F}$). When the ribbon width is large enough, the influence of the edge state on the total density of states is negligible.
However, the LDOS near the zigzag edge is strongly affected by the edge state, which would be observable with the scanning tunneling spectroscopy (STS) technique. A similar edge state is also obtained for multilayer ribbons of $\alpha \beta$ stacking from first-principles calculations.~\cite{miyamoto} This indicates that the edge state would exist in more realistic systems, such as step edges at bulk graphite surfaces. \begin{figure}[htbp] \begin{center} \includegraphics[width=10cm]{edge_structure.eps} \caption{Two types of edges for graphite ribbons: (a) zigzag edge and (b) armchair edge. The edges are denoted by the bold lines. The open and closed circles show the B- and A-site carbon atoms, respectively.} \label{edge_structure_fig} \end{center} \end{figure} In the previous STS measurements performed near circular edges of graphite nanopits,~\cite{klusek1,klusek2} a broad maximum was observed in the LDOS near $E_{F}$, which was attributed to the edge state. However, it is rather difficult, in this case, to distinguish between the electronic properties of the zigzag and armchair edges because both types of edges inevitably coexist nearby on an almost equal footing in a circular edge. Scanning tunneling microscopy (STM) measurements have been performed at linear step edges at the surface of highly oriented pyrolytic graphite (HOPG).~\cite{armchair_stm,zigzag_stm} So far, the $(\sqrt{3} \times \sqrt{3}) R 30^{\circ}$ superstructure has been clearly observed near armchair edges,~\cite{armchair_stm} while it is not yet clear whether similar superstructures exist near zigzag edges.~\cite{armchair_stm,zigzag_stm} Here, we investigated the LDOS in the vicinity of linear step edges of both the zigzag and armchair types with STM/STS. A Brief Report of the STS observation of the graphite edge state has already been made elsewhere.~\cite{ass} In this paper, we present more detailed STM/STS data and more advanced theoretical analyses of the edge state.
In the following section, experimental details including sample preparations are described. In Sec. III A, we show typical STM images of graphite surfaces over a large area and higher-resolution images in the vicinity of both the zigzag and armchair edges. The STS data near both edges are presented in Sec. III B. Section IV is devoted to a discussion of the experimental results and of theoretical models to be compared with them. \section{Experimental details} We used two kinds of graphite, i.e., HOPG and \textit{ZYX} exfoliated graphite~\cite{niimi} (hereafter \textit{ZYX}), to find linear step edges. HOPG is synthesized by chemical vapor deposition and subsequent heat treatment under high pressures. It is polycrystalline graphite with ordered $c$-axis orientation (the rocking angle $\theta \leq 0.5^{\circ}$). The \textit{ZYX} samples were made from HOPG by the graphite intercalation technique with HNO$_3$ and by subsequent evacuation of the intercalant at 600 $^{\circ}$C. The samples were then heated at 1500 $^{\circ}$C for 3 h to remove the remnant intercalants. \textit{ZYX} is primarily used as an adsorption substrate for studies of monolayer atoms~\cite{helium,xenon,birgeneau} and molecules,~\cite{hydrogen} due to its large specific surface area and moderately large single-crystalline (platelet) size. It is reported in the previous scattering experiments~\cite{birgeneau} that \textit{ZYX} has a platelet size of 100$-$200 nm, which is an order of magnitude smaller than that of HOPG. Thus, step edges should be found more easily in \textit{ZYX} than in HOPG. Other characteristics of \textit{ZYX} have been published elsewhere.~\cite{niimi} All graphite edges studied here are monoatomic in height with an almost linear shape over a length scale of several tens of nanometers. We believe that the active $\sigma$-orbital bonds at the edges are terminated by hydrogen or other species in air, since we did not intentionally remove them in ultrahigh vacuum (UHV) at elevated temperatures.
The STM/STS measurements were carried out with homemade STMs.~\cite{ULT-STM} The STM images were taken at room temperature in air with a tunnel current ($I$) of 1.0 nA and a typical bias voltage ($V$) of $+0.05$ to $+1.0$ V in the constant current mode. Mechanically sharpened Pt$_{0.8}$Ir$_{0.2}$ and electrochemically etched W wires were used as STM tips. The STS measurements were performed at $T = 77$ K in UHV ($P \leq 2 \times 10^{-7}$ Pa). A tunnel spectrum was obtained by averaging a set of $dI/dV$ vs. $V$ curves measured with the lock-in technique ($f = 71.73$ or 412 Hz, $V_{{\rm mod}} = 1$ or 6 mV) at 100 to 900 grid points over $5 \times 5$ to $15 \times 15$ nm$^2$ areas. None of the results shown here depended on the tip material or the lock-in parameters. \section{Results} \subsection{STM observations of graphite edges} Figure~\ref{stm_graphite_fig} shows representative STM images of three kinds of graphites [(a) Grafoil,~\cite{grafoil} (b) \textit{ZYX}, (c) HOPG]. They show that moderately long linear steps ($\geq 100$ nm) are available at the surfaces of \textit{ZYX} and HOPG. The featureless areas are atomically flat. From STM images taken over a wider area range, we determined the platelet size distributions of \textit{ZYX} and Grafoil. The results are shown in the Appendix. \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm]{stm_graphite.eps} \caption{Typical STM images of three kinds of graphites ($T=300$ K, in air, $I=1.0$ nA, $V=0.5$ V): (a) Grafoil ($120 \times 120$ nm$^{2}$), (b) \textit{ZYX} ($360 \times 360$ nm$^{2}$), and (c) HOPG ($1200 \times 1200$ nm$^{2}$).} \label{stm_graphite_fig} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=10cm]{zigzag_superstructure.eps} \caption{STM images and a cross section near a monoatomic step with zigzag edge at the surface of \textit{ZYX} ($T=300$ K, in air, $I=1.0$ nA, $V=0.1$ V). (a) $30 \times 30$ nm$^{2}$ scan image.
(b) Cross section profile along the arrow in (a). (c) $6 \times 6$ nm$^{2}$ scan image of the square region in (a). The dashed and dot-dashed lines show the edge and the atomic row of B-site atoms, respectively. The diamond and hexagon represent the $(\sqrt{3} \times \sqrt{3}) R 30^{\circ}$ superstructure and the honeycomb one.} \label{zigzag_superstructure_fig} \end{center} \end{figure} In Fig.~\ref{zigzag_superstructure_fig}(a), we present an STM image obtained near a zigzag edge at the surface of \textit{ZYX}. The step edge appears to extend straight over 30 nm. The step height estimated from line profile analysis is 0.33 nm [Fig.~\ref{zigzag_superstructure_fig}(b)], which corresponds to the layer spacing of graphite (0.335 nm). In Fig.~\ref{zigzag_superstructure_fig}(c), we show a higher-resolution image of the square region denoted in Fig.~\ref{zigzag_superstructure_fig}(a). The scan direction is rotated here by $90^{\circ}$ with respect to that in Fig.~\ref{zigzag_superstructure_fig}(a). Although a good atomic resolution is not obtained right on the edge, it can be identified as zigzag type since the atomic row of B-site carbon atoms (the dot-dashed line) is oriented at 60$^{\circ}$ to the edge direction [see Fig.~\ref{edge_structure_fig}(a)]. Note that the edge shown here is probably not a pure zigzag edge but one mingled with a small fraction of armchair edges. A large electronic density of states is observed within 2 nm from the edge. Moreover, two types of superstructures coexist only on the upper terrace. One is the $(\sqrt{3} \times \sqrt{3}) R 30^{\circ}$ superstructure, and the other is the honeycomb one, which consists of six B-site carbon atoms. These superstructures extend over 3$-$4 nm from the edge. The superstructure pattern does not depend on the bias voltage in a range between $+0.05$ and $+1.0$ V. Such superstructures were also obtained at the HOPG surface at 77 K in UHV.
\begin{figure}[htbp] \begin{center} \includegraphics[width=10cm]{armchair_superstructure.eps} \caption{STM images and a cross section near a monoatomic step with armchair edge at the \textit{ZYX} surface ($T=300$ K, in air, $I=1.0$ nA, $V=0.1$ V). (a) $30 \times 30$ nm$^{2}$ scan image. (b) Cross section profile along the arrow in (a). (c) $6 \times 6$ nm$^{2}$ scan image of the square region in (a). The dashed line, dot-dashed line, diamond, and hexagon have the same meanings as in Fig.~\ref{zigzag_superstructure_fig}(c).} \label{armchair_superstructure_fig} \end{center} \end{figure} In Fig.~\ref{armchair_superstructure_fig}, we show STM images near an armchair edge at the \textit{ZYX} surface. The edge is a monoatomic step edge extending straight over 30 nm [Figs.~\ref{armchair_superstructure_fig}(a) and 4(b)]. It is identified as armchair type from the fact that the atomic row of B-site carbon atoms [the dot-dashed line in Fig.~\ref{armchair_superstructure_fig}(c)] is oriented at 90$^{\circ}$ to the edge direction [see Fig.~\ref{edge_structure_fig}(b)]. As in the case of the zigzag edge, both the $(\sqrt{3} \times \sqrt{3}) R 30^{\circ}$ and honeycomb superstructures are observed extending over 3$-$4 nm from the edge. A similar coexistence of the superstructures has been reported by Giunta and Kelty~\cite{armchair_stm} for an armchair step edge at the HOPG surface. \subsection{STS observations of graphite edges} STS data in the vicinity of single step edges at the \textit{ZYX} and HOPG surfaces were obtained at 77 K in UHV. The scan directions were fixed parallel to the edges. Figures~\ref{zigzag_armchair_sts1_fig}(a) and 5(b) show tunnel spectra measured in the bias voltage range of $|V|\leq 0.4$ V near zigzag edges at the surfaces of \textit{ZYX} and HOPG, respectively. In order to obtain better signal-to-noise ratios, 8 to 30 $dI/dV$ curves taken at fixed distances ($d$) from the edge were averaged.
A clear peak appears at negative bias voltages from $-100$ to $-20$ mV for $0 < d < 3$ nm. It grows as the tip approaches the edges on the terrace ($d > 0$) but suddenly disappears when it moves across the edges ($d < 0$). Such behavior does not depend on the graphite sample (\textit{ZYX} or HOPG). Since the tunnel current was unstable at $|d| < 0.5$ nm for some reason, we could not obtain reliable spectra right on the edge. It should also be noted that the definition of $d = 0$ is somewhat arbitrary ($\pm 0.5$ nm) because of the degraded spatial resolution in that region. \begin{figure*}[htbp] \begin{center} \includegraphics[width=12cm]{zigzag_armchair_sts1.eps} \caption{$dI/dV$ curves measured at $|V|\leq 0.4$ V near zigzag edges and armchair edges at the surfaces of \textit{ZYX} [(a),(c)] and HOPG [(b),(d)] ($T=77$ K, in UHV). The numbers denote the distances from the edge ($d=0$).} \label{zigzag_armchair_sts1_fig} \end{center} \end{figure*} In contrast to the case of the zigzag edge, we obtained qualitatively different spectra near armchair edges. As is shown in Figs.~\ref{zigzag_armchair_sts1_fig}(c) and (d), the tunnel spectra near the armchair edges do not have such a peak within the experimental errors. These spectra are essentially independent of $d$. Therefore, the LDOS peak depending on $d$ in Figs.~\ref{zigzag_armchair_sts1_fig}(a) and (b) should correspond to the graphite edge state that has been theoretically predicted to exist only for the zigzag edge.~\cite{fujita1,nakada,fujita2,miyamoto} In Fig.~\ref{zigzag_armchair_sts1_fig}(c), there is a bump-like structure at positive voltages ($0.08 \leq V \leq 0.1$ V). It is probably due to a local electrostatic potential induced by the tip. In STS experiments at graphite surfaces,~\cite{matsui} we often observed similar LDOS bumps depending on the tip conditions but not on the spatial position. 
In previous STS measurements near circular edges,~\cite{klusek1,klusek2} it was claimed that a 0.2-eV-wide LDOS peak, which appears in the positive energy range of 0.02$-$0.25 eV as $d \to 0$, should correspond to the edge state. However, these results are not consistent with ours in terms of peak energy and width. Although we do not know the reason for the discrepancy between the previous data and ours, the complicated structure of the circular edges might be responsible for it. \begin{figure}[htbp] \begin{center} \includegraphics[width=7cm]{peak_height.eps} \caption{Semilog plot of the distance ($d$) dependence of the peak heights at five zigzag edges associated with the graphite edge state. Each plot is vertically shifted for clarity. The lines show exponential fittings of the data.} \label{peak_height_fig} \end{center} \end{figure} Figure~\ref{peak_height_fig} is a semilog plot of the $dI/dV$ peak heights observed near five different zigzag edges as a function of $d$. Note that the peak heights shown here were obtained by subtracting a smooth background from the raw spectra. From this plot, we can determine the decay length ($\xi$) of the localized state. The average $\xi$ was estimated to be $1.2 \pm 0.3$ nm. In a wide bias voltage range between $-1.0$ and $+1.0$ V, other interesting properties are found. The spectra obtained far away ($d \geq 3.5$ nm) from the edges are ``V-shaped'' ones, which are characteristic of graphite. However, the LDOS decreases selectively in a voltage range of $|V|\geq 0.6$ V within a distance of 2 nm from the zigzag edge [Fig.~\ref{zigzag_armchair_sts2_fig}(a)]. A similar decrease has been observed near the circular edges.~\cite{klusek1} On the other hand, the LDOS decreases in the whole voltage range for $d < 2.5$ nm near the armchair edge [Fig.~\ref{zigzag_armchair_sts2_fig}(b)]. 
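The decay length quoted above follows from the exponential fits in Fig.~\ref{peak_height_fig}; a minimal sketch of such a fit, assuming peak heights of the form $h(d) = A\,e^{-d/\xi}$ and using hypothetical data generated with $\xi = 1.2$ nm:

```python
import math

def fit_decay_length(distances, heights):
    """Least-squares fit of log h = log A - d / xi on the semilog data;
    returns (A, xi). Distances in nm, heights in arbitrary units."""
    n = len(distances)
    logs = [math.log(h) for h in heights]
    mean_d = sum(distances) / n
    mean_l = sum(logs) / n
    slope = sum((d - mean_d) * (l - mean_l)
                for d, l in zip(distances, logs))
    slope /= sum((d - mean_d) ** 2 for d in distances)
    intercept = mean_l - slope * mean_d
    return math.exp(intercept), -1.0 / slope

# Hypothetical peak heights decaying with xi = 1.2 nm, A = 2.0 (arb. units)
distances = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
heights = [2.0 * math.exp(-d / 1.2) for d in distances]
A, xi = fit_decay_length(distances, heights)
```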
\begin{figure}[htbp] \begin{center} \includegraphics[width=6cm]{zigzag_armchair_sts2.eps} \caption{$dI/dV$ curves measured at $|V|\leq 1.0$ V near (a) a zigzag edge and (b) an armchair edge at the \textit{ZYX} surface ($T=77$ K, in UHV). The numbers denote the distances from the edge ($d=0$).} \label{zigzag_armchair_sts2_fig} \end{center} \end{figure} \section{Theoretical simulations and discussions} \subsection{Two types of superstructures} In this section, we discuss the origin of the two types of superstructures observed in the experimental STM images near both the zigzag and armchair edges. These superstructures have also been observed by other authors in the vicinity of defects on HOPG surfaces such as grain boundaries,~\cite{grain_boundaries} deposited metal particles,~\cite{adsorbed_metal1,adsorbed_metal2,adsorbed_metal3,adsorbed_metal4,adsorbed_metal5} or holes made by sputtering.~\cite{hole1,hole2} They are attributed to an interference between incident and scattered electron wave functions.~\cite{adsorbed_metal1} However, the topology of the defect sites was not known in these experiments, and the experimental results were explained by arbitrary combinations of the incident and scattered wave functions.~\cite{adsorbed_metal3,adsorbed_metal4} On the other hand, the zigzag and armchair edges have well-defined structures on the atomic scale. In systems with these edges, the pattern and periodicity of the superstructures should be obtained without any assumptions. We theoretically analyzed the local electronic states near both the zigzag and armchair edges and compared the calculated results with the STM images described in the previous section. Our simulations were made on a double-layer graphene system with the zigzag or armchair edge, which is more realistic than the graphene ribbon or multilayer ribbons. The bottom layer is an infinite graphene without edges. 
The top layer consists of periodically arranged graphene ribbons with the zigzag or armchair edge (zigzag or armchair ribbons), whose widths are 15.7 and 8.6 nm, respectively. The spacing between the ribbons is long enough, and the periodic boundary conditions are imposed along the ribbon directions. The electronic states of these graphite layers were calculated by the density-functional derived nonorthogonal tight-binding model.~\cite{Frauenheim98} We assumed that carbon atoms at the edges are hydrogen terminated, and took into account only the $\pi$ orbital at each carbon site which is relevant to the electronic states near $E_{F}$. The LDOS at each atomic site was obtained by diagonalizing the Hamiltonian and overlap matrices associated with the $\pi$ orbitals at the $\Gamma$ point. Figure~\ref{perfect_edge_cal_fig} shows spatial variations of the calculated electronic states in the vicinity of the zigzag and armchair edges. The radii of the circles plotted on the B-sites in the figure represent integrals ($I_{\rm cal}$) of the calculated LDOS over an energy range between $E_{F}$ and $+0.1$ eV. Note that $I_{\rm cal}$ corresponds to the local tunnel currents at $V=+0.1$ V in the experimental STM images. Figure~\ref{perfect_edge_cal_fig}(a) indicates the existence of the localized electronic state in the vicinity of the zigzag edge with $\xi \sim 0.5$ nm. In this system, no superstructures appear anywhere. Conversely, Fig.~\ref{perfect_edge_cal_fig}(b) does not show such a localized state near the armchair edge but a honeycomb superstructure persisting far beyond 5 nm from the edge. These calculated results for the perfect edges are inconsistent with our experimental observations. \begin{figure}[htbp] \begin{center} \includegraphics[width=6cm]{perfect_edge_cal.eps} \caption{Simulations for tunnel currents at the B-sites near (a) the perfect zigzag and (b) armchair edges by the nonorthogonal tight-binding model. 
The radii of the circles on the sites indicate integrals of the calculated LDOS in an energy range between $E_{F}$ and $+0.1$ eV. The white and black dashed lines represent the zigzag and armchair edge, respectively.} \label{perfect_edge_cal_fig} \end{center} \end{figure} Hence, we have calculated the LDOS near the zigzag (armchair) edges which are mingled with small amounts of armchair (zigzag) edges. We show two examples of such edge patterns in Figs.~\ref{mixture_edge_cal_fig}(a) and (b). In this case, both the $(\sqrt{3} \times \sqrt{3}) R 30^{\circ}$ and honeycomb superstructures appear on the terrace with the zigzag edges slightly admixed with armchair edges. The superstructures extend over 4$-$5 nm from the edge and have complicated distributions in the direction parallel to the edge. In spite of the admixing of armchair edges, the localized state still remains near the zigzag edge, but its decay length becomes longer ($\xi \sim 1.2$ nm) than that for the perfect zigzag edge ($\xi \sim 0.5$ nm). This calculation reproduces fairly well the spatial extensions of the two types of superstructures and the localized state observed in the experiment [see Figs.~\ref{zigzag_superstructure_fig}(c) and~\ref{peak_height_fig}]. Unfortunately, we could not clearly observe the atomic arrangement of the zigzag edge in the experiment. Such observations will be made in future work. Nevertheless, the spatial distributions of the two types of superstructures and the localized state strongly indicate that the edge in Fig.~\ref{zigzag_superstructure_fig}(c) is mingled with a small amount of armchair edges. 
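The edge-state localization discussed here can be illustrated with a toy model (not the nonorthogonal density-functional-derived model used in our calculations): in the simplest nearest-neighbor $\pi$-band picture, a zigzag ribbon at fixed momentum $ka$ along the edge maps onto a chain with alternating hoppings $2t\cos(ka/2)$ and $t$, which hosts a near-zero-energy state localized at the edge sites for $2\pi/3 < |ka| < \pi$:

```python
import numpy as np

def zigzag_chain_hamiltonian(n_sites, ka):
    """Effective pi-band Hamiltonian of a zigzag ribbon at fixed momentum
    ka along the edge: a chain with alternating hoppings t1 = 2 cos(ka/2)
    and t2 = 1 (energies in units of the nearest-neighbor hopping t)."""
    t1, t2 = 2.0 * np.cos(ka / 2.0), 1.0
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        hop = t1 if i % 2 == 0 else t2
        H[i, i + 1] = H[i + 1, i] = hop
    return H

ka = 0.9 * np.pi                        # inside the flat-band region
H = zigzag_chain_hamiltonian(20, ka)
energies, states = np.linalg.eigh(H)
idx = int(np.argmin(np.abs(energies)))  # state closest to E_F = 0
# Weight of this state on the two outermost (edge) sites
edge_weight = states[0, idx] ** 2 + states[-1, idx] ** 2
```

For $ka = 0.9\pi$ the weak hopping is $t_1 \approx 0.31\,t$, and the state nearest $E_F$ carries most of its weight on the edge sites, mimicking the localized edge state of the full calculation.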
\begin{figure}[htbp] \begin{center} \includegraphics[width=6cm]{mixture_edge_cal.eps} \caption{Simulations for tunnel currents at the B-sites near (a) a zigzag edge with a small amount of armchair edges and (b) an armchair edge with a small amount of zigzag edges by the non-orthogonal tight-binding model.} \label{mixture_edge_cal_fig} \end{center} \end{figure} The coexistence of the two types of superstructures is also seen in the calculation for the armchair edges slightly admixed with zigzag edges in Fig.~\ref{mixture_edge_cal_fig}(b). This again reproduces the feature of the experimental image in Fig.~\ref{armchair_superstructure_fig}(c). However, the experimental spatial extensions of the superstructures (3$-$4 nm) are shorter than the calculated ones (far beyond 5 nm). This may be due to the three-dimensional character of the experimental system, which is not fully taken into account in the present calculations. \subsection{Graphite edge state} In order to examine theoretically the detailed features of the LDOS near the step edges around $E_F$ ($ | E | \leq 0.2$ eV), we performed first-principles calculations based on the density functional theory with the generalized gradient approximation. We adopt the exchange-correlation potential introduced by Perdew \textit{et al}.~\cite{Perdew96} The cutoff of the plane-wave basis set is assumed to be 20.25 Ry. The calculations were performed on the double-layer graphene system described in the previous section. The zigzag and armchair ribbon widths of the top layer are 1.565 and 0.862 nm, respectively. The height of the ribbons is 0.1189 nm for both cases. The distance between the top layer and the infinite bottom layer is 0.3356 nm. The edge carbon atoms are terminated by H atoms or OH groups. The lengths of the C-C, C-H, C-O, and O-H bonds are fixed to 0.14226, 0.110, 0.140, and 0.100 nm, respectively. We adopt the Vanderbilt-type ultrasoft pseudopotentials~\cite{Vander} for C, H, and O atoms. 
The theoretical LDOS shown below is obtained by integrating the LDOS calculated at the lattice point over the volume of each atom. \begin{figure*}[htbp] \begin{center} \includegraphics[width=15cm]{theoretical_ldos2.eps} \caption{First-principles calculations of LDOS for different edges and terminations. The different colors correspond to different sites denoted in the sketches of the double-layer graphene system on which the calculations were performed. (a) H-terminated zigzag edge, (b) H-terminated armchair edge, and (c) OH-terminated zigzag edge.} \label{theoretical_ldos_fig} \end{center} \end{figure*} Figures~\ref{theoretical_ldos_fig}(a) and (b) show the calculated LDOS at the B-sites for the zigzag and armchair ribbons whose edges are H terminated. In the former case, the LDOS peak due to the edge state appears near $E_F$ at $d=0$, and rapidly decreases with increasing $d$. The peak width of the theoretical LDOS at $d=0$ is about 80 meV, which is consistent with the experimental ones at $d=0.5$ nm [see Figs.~\ref{zigzag_armchair_sts1_fig}(a) and (b)]. Note again that the definition of $d = 0$ in this experiment is somewhat arbitrary ($\pm 0.5$ nm). On the other hand, such a peak does not appear in the armchair case. Therefore, we conclude that the LDOS peak experimentally observed just below $E_F$ in Figs.~\ref{zigzag_armchair_sts1_fig}(a) and (b) originates from the graphite edge state. Although the experimental edge state appears at negative energies ($-100$ to $-20$ mV), the theoretical one obtained in the first-principles calculations appears at a slightly higher energy, more or less close to $E_{F}$. Recently, Sasaki \textit{et al.}~\cite{energy_shift_theory} have shown that, within the tight-binding approximation, the edge state shifts to the negative side ($E < 0$) if the next-nearest-neighbor hopping process is taken into account. Next, we discuss the influence of terminating atoms and/or molecules on the edge state. 
The graphite edge state was originally predicted for H-terminated graphene ribbons.~\cite{fujita1,nakada,fujita2} Since the edges observed in our STS measurements had been exposed to air before being loaded into the UHV chamber, it is possible that the edges are terminated either by hydrogen or hydroxide. We thus performed the same first-principles calculations for an OH-terminated zigzag edge as well. In Fig.~\ref{theoretical_ldos_fig}(c), we show the calculated LDOS for the OH-terminated zigzag ribbons on the infinite graphene. Although the peak width is slightly larger than that for the H-terminated zigzag ribbons, the LDOS peak still remains near $E_{F}$. Therefore, the LDOS peak associated with the graphite edge state is not strongly affected by the different terminations in air, i.e., H atom or OH group. Recently, Kobayashi \textit{et al.}~\cite{h-terminated} have made STM/STS measurements near zigzag and armchair edges with H and ambient terminations. They found, for the zigzag edge with H termination, an LDOS peak similar in width and energy to that obtained in this work. Note that the $d$ dependence of the peak was not studied systematically in their experiment. We observed two characteristically different decreases of the LDOS at larger magnitudes of voltage on approaching the zigzag and armchair edges, as discussed in the last paragraph of Sec. III B. Near the zigzag edge, we observed that the LDOS is suppressed selectively at $|V| \geq$ 0.6 V. Klusek \textit{et al.}~\cite{klusek1,klusek2,klusek3} observed a similar LDOS suppression near circular edges at $|V| \geq$ 0.6$-$0.8 V, and claimed that it is associated with the $\pi$ band splitting due to multilayer interaction at the $P$ point in the two-dimensional Brillouin zone.~\cite{splitting1,splitting2} However, this explanation does not clarify why such suppression becomes prominent with decreasing $d$. 
At this moment, we also have no reasonable explanation for the other type of LDOS decrease (at $|V| \geq$ 0.3 V) near the armchair edge. \section{Conclusions} We have studied the electronic local density of states (LDOS) near single step edges at graphite surfaces. In scanning tunneling microscopy measurements, the $(\sqrt{3} \times \sqrt{3}) R 30^{\circ}$ and honeycomb superstructures were observed over 3$-$4 nm from both the zigzag and armchair edges. Calculations based on a density-functional derived nonorthogonal tight binding model show that admixing of the two types of edges is responsible for the experimental coexistence of these superstructures. Scanning tunneling spectroscopy measurements near the zigzag edges reveal the existence of a clear peak in the LDOS at several tens of meV below the Fermi energy. The peak amplitude grows as we approach the edge on the terrace, but suddenly diminishes across the edge. No such peak was observed near the armchair edges. The first-principles calculations for the zigzag and armchair ribbons on infinite graphene sheets reproduce these experimental results well. Therefore, we conclude that the LDOS peak experimentally observed only at the zigzag edge corresponds to the graphite edge state theoretically predicted in the previous calculations on graphene ribbons. The decay length of the edge state in this experiment is about 1.2 nm, which is consistent with calculations for a zigzag edge slightly mingled with armchair edges. \begin{acknowledgments} One of us (H.F.) thanks the late Mitsutaka Fujita for stimulating his interest in the graphite edge state. The authors are grateful to H. Akisato for useful comments on this manuscript. This work was financially supported by a Grant-in-Aid for Scientific Research from MEXT, Japan and the ERATO Project of JST. Y.N. and T.M. acknowledge the JSPS for financial support. \end{acknowledgments}
\section*{Acknowledgments} \camreadyadd{This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq, Brazil), PIIC UFES and Fundação de Amparo à Pesquisa do Espírito Santo - Brasil (FAPES) – grant 84412844. The authors thank NVIDIA Corporation for the donation of the GPUs used in this research.} \bibliographystyle{cag-num-names} \section{Introduction} Autonomous driving has been a very active research area in the past few years~\cite{badue2019selfdriving,DBLP:conf/ijcnn/BerrielTCGBSO18,berriel2017cag, mutz_large-scale_2016, lyrio_image-based_2015, berriel_ego-lane_2017}. For an autonomous vehicle to be safe, it has to be aware of its surroundings, which includes detecting pedestrians, traffic signs, and traffic lights. In this work, the focus is on traffic light detection. The goal of traffic light detection is to localize (with a bounding box) each traffic light in an input image and recognize its state (e.g., green, yellow or red). The accurate detection of traffic lights is essential for autonomous vehicles that are intended to travel on public streets, otherwise, the chances of an accident rise considerably~\cite{tl_running}. Hence, there have been numerous works tackling this problem. Most traffic lights follow a similar pattern: three bulbs (one for each state) in a black case~\cite{jensen2016vision}. Because of this pattern, the first methods proposed for traffic light detection relied on hand-crafted feature engineering. Those features were designed mainly based on colors~\cite{diaz2015robust,gomez2014traffic} and shapes~\cite{trehard2014tracking}. Nevertheless, this approach has limited robustness, since hand-crafted features tend to overfit. 
To increase robustness and generalization, a learning-based approach may be used, such as SVM~\cite{jang2014multiple}, AdaBoost~\cite{gong2010recognition}, or JointBoost~\cite{haltakov2015semantic}. In particular, deep neural networks (DNNs) have gained traction in recent years, outperforming traditional methods in an end-to-end manner~\cite{jensen2017evaluating}. \begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{images/overview_v1.pdf} \end{center} \caption{Overview of the proposed method. Artificial traffic scenes are blended in natural backgrounds, composing a dataset to train a deep detector. Then, the trained model is ready to detect and recognize real traffic lights.} \label{fig:overview} \end{figure*} DNNs have been applied to many problems in autonomous driving, such as the detection of traffic signs~\cite{tabelini2019ijcnn} and pedestrians~\cite{ouyang2013joint,sarcinelli2019cag}. One of the reasons is that state-of-the-art deep detectors (e.g., Faster R-CNN~\cite{ren2015faster} and YOLO~\cite{redmon2016you}) can be applied to different tasks with little tuning. Nonetheless, their use for the traffic light detection task has only recently been enabled, with the release of large public datasets containing traffic light annotations~\cite{gonzalez2018annotated,fregin2018driveu}. This highlights one of the main issues with deep learning: the need for large amounts of annotated data. The processes of acquiring and annotating data for the target domain (i.e., the one in which the application is expected to operate) are usually very expensive, requiring several working hours (in this case, for image acquisition and manual annotation, conducted by humans). This is even more critical for the detection task, since each traffic light in the image has to be marked with a bounding box, along with its state. 
\added{Some works try to alleviate the need for data with the use of generative adversarial networks (GANs)~\cite{arruda_cross-domain_2019}; however, this usually requires some level of annotation. For example, in the case of using GANs to generate artificial traffic scenes (backgrounds) without traffic lights, it would require training images of the environment with weak annotations indicating the presence/absence of traffic lights. Otherwise, the network could inappropriately learn to generate traffic lights as background.} Moreover, traffic light datasets are typically imbalanced, given that the yellow state is less likely to be found. Since most networks struggle with imbalanced data, extra effort on handling the data distribution between the classes of interest is usually necessary. To address the class imbalance and annotation effort issues in the context of traffic sign detection, Torres \textit{et al.}~\cite{tabelini2019ijcnn} proposed to combine arbitrary non-traffic-related natural images as background with templates (i.e., pictograms) of traffic signs to train a deep detector. This work aims to adapt the concept introduced by Torres \textit{et al.}~\cite{tabelini2019ijcnn} to the traffic light domain, which has its own set of challenges (e.g., vehicles' tail-lights and other lights, in general, can hinder detection). Therefore, this work also investigates the effect of blending non-realistic synthetically generated samples (using low-quality computer graphics) of traffic context on the natural backgrounds to enhance the detection performance. To evaluate the proposed method, experiments were performed using several traffic light databases from the literature. Experimental results showed that an adapted application of the method \cite{tabelini2019ijcnn} (using templates) yields low results (average mAP of 26.78\%). 
However, they showed that the method can be further enhanced by replacing the templates with the synthetic context, yielding an average mAP of 50.08\%, 4 p.p. higher than that obtained with real-world data training. These results indicate the feasibility of training deep neural networks without real-world samples from the target domain, which corroborates the results of \cite{tabelini2019ijcnn}, and that adding low-quality context to the training image backgrounds can improve the results even further. Moreover, the results also show that the method can boost (by more than 11 p.p. of mAP, on average) the results obtained with real-world data training when using the proposed method to augment the training data. \section{Related works} The literature on traffic light detection is roughly categorized into model- and learning-based methods. Most of the first works used model-based methods, which usually rely on features such as the color~\cite{li2018traffic} and shape~\cite{gomez2014traffic} of traffic lights. However, most recent works propose using learning-based methods, which can be more robust to real-world cases. The first learning-based works applied classical methods such as Histogram of Oriented Gradients (HoG) and Support Vector Machines (SVM)~\cite{barnes2015exploiting,jensen2015traffic}. More recently, deep neural networks (DNNs) have been shown to be quite effective in various autonomous driving tasks, including traffic light detection. Some works show that generic object detectors, such as YOLO~\cite{redmon2016you} and Faster R-CNN~\cite{ren2015faster}, are effective for traffic light detection~\cite{possatti2019traffic}. Behrendt \textit{et al.}~\cite{behrendt2017deep} modified the training procedure of YOLO~\cite{redmon2016you} to better handle issues more specific to traffic light detection, such as the small size of the objects of interest. 
In~\cite{pon2018hierarchical}, the authors modify the architecture and the mini-batch selection mechanism of Faster R-CNN~\cite{ren2015faster} to train it to detect traffic lights and signs simultaneously. Although detection methods based on deep learning are effective, they require large datasets, which are expensive to annotate. Moreover, they suffer from the intrinsic data imbalance of traffic light detection datasets. In this context, some works have been proposed to reduce the data imbalance and the effort required to build datasets. First, there are several tools~\cite{vott2018microsoft, scalabel2018berkeley} that attempt to mitigate the costs of annotating databases for detection tasks. In addition, many people are investigating (semi-)automatic techniques to aid the annotation process, some of them including human-in-the-loop~\cite{wang2018cvpr}. Second, there are some works~\cite{oquab2015cvpr, sangineto2018pami} on weakly-supervised object detection that try to leverage the massive amounts of data annotated for classification to perform detection tasks. Moreover, few-shot learning~\cite{chen2018aaai, kang2018arxiv} has also been applied to reduce the need for large amounts of collected data. Lastly, data imbalance is a well-known issue and its impact on learning-based methods has been widely investigated even before deep learning~\cite{he2008tkde}. The standard tricks, which usually provide limited robustness, can be roughly categorized into two techniques: data re-sampling and cost-sensitive learning. In the deep learning context, some works~\cite{huang2016cvpr, wag2017neurips, zhou2018kdd} have investigated the effectiveness of these methods when dealing with imbalanced data, as well as how to learn deep representations that take the imbalance into account. 
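As a concrete illustration of the cost-sensitive branch of these tricks, the sketch below (class counts are hypothetical) computes inverse-frequency loss weights normalized so that every class contributes equally to the total loss:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Cost-sensitive class weights: weight(c) = n / (k * count(c)),
    where n is the total number of samples and k the number of classes,
    so that each class contributes equally to a weighted loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# Hypothetical annotation counts with the typical traffic light
# imbalance: the yellow state is the rare one.
labels = ["red"] * 50 + ["green"] * 45 + ["yellow"] * 5
weights = inverse_frequency_weights(labels)
```

The rarer the class, the larger its weight; multiplying each sample's loss by its class weight makes the 5 yellow samples carry the same aggregate importance as the 50 red ones.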
In~\cite{tabelini2019ijcnn}, the authors propose a method for training deep traffic sign detectors that does not require any real images from the target domain, where templates are overlaid on arbitrary natural images to generate training samples. Nonetheless, there is still a need for further investigation and for solutions capable of handling these issues altogether, particularly when it comes to traffic light detection. To fill this gap, we propose a method to generate synthetic data that requires neither human-made annotations nor real data, unlike previous works, and can be used to train deep traffic light detectors with performance on par with models trained on datasets of real images. Our method is compared to a state-of-the-art method~\cite{tabelini2019ijcnn}, which also requires neither human-made annotations nor real data, and to vanilla traffic light detection using deep learning \cite{ren2015faster} and real data from traffic scenes. \section{Traffic light detection using synthetic context} In order to avoid the arduous processes of acquiring and annotating real-world data, the proposed approach combines simple and non-realistic artificial 3D shapes with natural images. The artificial 3D shapes serve as foreground, describing the main elements of a traffic scene (e.g., traffic lights, lanes, poles, and vehicles), while the non-traffic-related natural images are used as background. The artificial foregrounds with traffic light annotations (i.e., the bounding box and the state) are generated automatically with simple and non-realistic computer graphics techniques. By combining a large variety of artificial foregrounds and natural backgrounds, a training dataset is built. This image collection is then used to train a deep detector to localize and classify traffic lights in real-world image samples. 
Figure~\ref{fig:overview}\footnote{This image presents modified images from COCO~\cite{lin2014microsoft} dataset which can be freely shared and modified under the Attribution License, available in \url{https://creativecommons.org/licenses/by/3.0/}. The figure, as others further presented in this work, also presents a Udacity's~\cite{gonzalez2018annotated} image free to be published under the MIT license.} summarizes the proposed method. \subsection{Backgrounds} The background image set comprises a large variety of natural scenes, like food on the table, animals in nature, people playing sports, and more. Therefore, an attractive option is to exploit large publicly available datasets, both because of their diversity (in case of a general object recognition/detection dataset) and their size. Moreover, given its use and to avoid the inclusion of false positives in the training data, we constrain this set to have only non-traffic-related images, filtering images that may fall into this category. \subsection{Foregrounds} \label{foregrounds} Traffic context scenes can be simulated by modeling and combining a few basic traffic elements, such as roads, poles, vehicles, and traffic lights. The main idea is to reproduce a driver's view of a road intersection signalized by traffic lights in a simple manner. For the generation of the foreground image set, the open-source \textit{Processing} environment~\cite{processing} was used. \textit{Processing} is a software sketchbook oriented to program computer graphics applications, which enables combining different artificial elements by controlling a set of empirically/randomly selected parameters. 
The most relevant parameters control the position of the observing camera; the directions the road intersection can lead to; the number of lanes of the roads; the presence or not of a crosswalk; the number, colors, and positions of the vehicles in each lane; the road side where the traffic light poles are placed; the format of the poles; the number, position, angle, and state of each traffic light on the pole; and, finally, the direction of light. Most traffic elements are modeled through simple geometric forms such that not much time is required to implement them. The components of a scene are described in the following paragraphs. \paragraph{Road} A road is composed of stretches represented by a rectangle of $20H\times{lW}$ each, where $l$ corresponds to the number of lanes in the stretch and is uniformly drawn from $\{2, 4, 6\}$. Roads have two directions of travel, each with $\frac{l}{2}$ lanes. Over the stretch, thin rectangles are also used to represent lane separators and crosswalks. Four different stretches can be generated. They can be referred to as ``south'', ``west'', ``north'', and ``east'' stretches, as illustrated by Figure~\ref{fig:road}. Each scene presents a road with two to four stretches. The probabilities of generating each stretch are: 100\% for the south, 80\% for the north and west, and 100\% for the east if the previous two are not generated, otherwise 80\%. The west, north, and east stretches are generated similarly to the south stretch, but they are respectively rotated by 90, 180, or -90 degrees around their extremity so that they result in a road intersection with the configuration depicted by Figure~\ref{fig:road}. 
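The stretch-selection rules above can be sketched as follows (a minimal illustration of the stated probabilities; function and variable names are ours, not part of the \textit{Processing} sketch):

```python
import random

def sample_road(rng):
    """Draw the stretches and lane count of one scene following the
    stated rules: south always; north and west each with 80%; east with
    100% if neither north nor west was drawn, otherwise with 80%."""
    stretches = ["south"]
    if rng.random() < 0.8:
        stretches.append("north")
    if rng.random() < 0.8:
        stretches.append("west")
    east_prob = 0.8 if len(stretches) > 1 else 1.0
    if rng.random() < east_prob:
        stretches.append("east")
    lanes = rng.choice([2, 4, 6])  # number of lanes, drawn uniformly
    return stretches, lanes

rng = random.Random(0)
scenes = [sample_road(rng) for _ in range(1000)]
```

Note that the conditional rule for the east stretch guarantees every scene has between two and four stretches, as stated above.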
\begin{figure} \begin{center} \includegraphics[width=0.65\columnwidth]{images/road.pdf} \end{center} \caption{Illustration of a road composed of four stretches: south stretch, west stretch, north stretch, and east stretch.} \label{fig:road} \end{figure} \paragraph{Traffic light poles} Poles are represented by cylinders and are randomly positioned on the left or right side of the road stretch's extremes. Their axes are orthogonal to the road and may have a horizontal extension at their top. A pole's axis can have up to one traffic light, while its horizontal extension can have up to two. \paragraph{Traffic lights} The design of the modeled traffic lights, exemplified in Figure~\ref{fig:models3d}, corresponds to the three-bulb vertical black model, and each traffic light is assigned a red, yellow, or green state with equal probability in order to avoid the traditional imbalance over the yellow state. A traffic light model is composed of a black box intercepted by three spheres representing the bulbs. The following models are generated: one fully-lighted bulb and a lighted timer in the central bulb (Figure~\ref{fig:timer}); one fully-lighted bulb and two off bulbs (Figure~\ref{fig:lighted}); and one bulb with only a lighted arrow (Figure~\ref{fig:arrow}). The figure shows each of the following components of the 3D models: 1.~\textit{Traffic light's body}: black box; 2.~\textit{Off-state bulb}: dark sphere; 3.~\textit{Fully-lighted bulb}: sphere simulating an emitting colored material in a random tone of the color of the traffic light state; 4.~\textit{Bulb containing timer}: a timer is represented by two digits, composed of five dark-colored segments each. The segments are lighted randomly; therefore, the final figure might not be a real digit. 
The segments are modeled by boxes (with emitting material if lighted) and the bulb has intensified transparency so that the timer inside it can be visualized; 5.~\textit{Bulb containing arrow}: similar to the previous one, but with lighted segments picturing an arrow directed to the left or right; 6.~\textit{Bulb covering}: transversal segments of cylinder covering the bulbs. \begin{figure} \centering \subfloat[]{\includegraphics[width=0.14\textwidth]{images/models1.pdf}% \label{fig:timer}} \subfloat[]{\includegraphics[width=0.14\textwidth]{images/models2.pdf}% \label{fig:lighted}} \subfloat[]{\includegraphics[width=0.14\textwidth]{images/models3.pdf}% \label{fig:arrow}} \caption{Examples of modeled traffic lights: (a)~traffic lights containing a timer; (b)~traffic lights with fully-lighted bulbs; (c)~directional traffic lights. Represented components: 1.~Traffic light's body; 2.~Off-state bulb; 3.~Fully-lighted bulb; 4.~Bulb with timer; 5.~Bulb with arrow; 6.~Bulb covering.} \label{fig:models3d} \end{figure} \paragraph{Vehicles} Vehicles are introduced into the scene by loading (from a publicly available object file\footnote{\url{https://free3d.com/3d-model/bmw-x5-1542.html}.}) a 3D car model without texture and placing several instances into the sketch. Each instance is positioned on the road, between the lane separators, and has its color randomly set so that the generated scene comprises vehicles with different colors. Back headlights are also modeled by placing a yellow box with emitting material inside a red translucent box; they are added to each instance so that the deep detector can distinguish them from traffic lights. Dark ellipses under the vehicles simulate their shadow over the pavement. \paragraph{Light} A directional light is added in random directions, varying along and laterally to the road, but always coming from above the road. Therefore, each scene has its own illumination configuration.
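The illumination setup can be sketched as follows (the component ranges are our assumptions; the text only fixes that the light always comes from above the road):

```python
import random

def sample_light_direction():
    """Random direction for the scene's directional light (unit vector)."""
    dx = random.uniform(-1.0, 1.0)      # lateral to the road (assumed range)
    dz = random.uniform(-1.0, 1.0)      # along the road (assumed range)
    dy = -random.uniform(0.5, 1.0)      # negative: light comes from above
    norm = (dx * dx + dy * dy + dz * dz) ** 0.5
    return (dx / norm, dy / norm, dz / norm)
```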
\paragraph{Camera} The camera position is defined to simulate the view of a driver in the scene. The camera is placed in the center of one of the up to three right-hand lanes of the south road stretch ($0.5W$, $1.5W$ or $2.5W$, since the lane width is $W$). The position along the road is set to be within a certain distance ($8.9H$-$21H$) from the road intersection where the traffic lights are. This approach generates traffic lights viewed from different distances throughout the scenes. The position over the road is set to represent the height of a driver's eye ($200$-$300$ pixels distant from the road surface). The camera points toward the center of the road intersection. \\ When each traffic light associated with the south stretch is placed at a position of the tridimensional scene, the bidimensional positions of its frontal face's extremities in the final image are calculated and defined as its bounding box coordinates. These traffic lights are also labeled according to their states. Two procedures are considered to avoid labeling traffic lights with a high level of occlusion: (I) if 50\% or more of the bounding box area is out of the image limits, the traffic light is not labeled; (II) a tridimensional bounding box is considered for the vehicles in the south stretch, and the bidimensional positions of its vertices in the image are calculated. The four obtained extreme positions define the vehicle's 2D bounding box. If a vehicle's bounding box overlaps 50\% or more of the area of a traffic light's bounding box, the vehicle is not placed on the scene. Once complete, the scene is saved as an image, such as shown in Figure~\ref{fig:overview} (``Artificial 3D traffic contexts''). The background of the synthetic image (region without information) is set as transparent so that it shows the non-traffic-related natural image as background after blending.
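The two occlusion filters above reduce to a single overlap-fraction helper (a sketch using axis-aligned $(x_1, y_1, x_2, y_2)$ boxes; the function names are ours):

```python
def covered_fraction(inner, outer):
    """Fraction of the area of box `inner` covered by box `outer`."""
    x1 = max(inner[0], outer[0]); y1 = max(inner[1], outer[1])
    x2 = min(inner[2], outer[2]); y2 = min(inner[3], outer[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = (inner[2] - inner[0]) * (inner[3] - inner[1])
    return inter / area if area else 0.0

def keep_label(light_box, image_w, image_h):
    # Filter (I): discard the label when >= 50% of the box lies outside
    # the image, i.e. when at most 50% of it is inside.
    return covered_fraction(light_box, (0, 0, image_w, image_h)) > 0.5

def keep_vehicle(vehicle_box, light_boxes):
    # Filter (II): do not place a vehicle whose box covers >= 50% of the
    # area of any traffic light's box.
    return all(covered_fraction(lb, vehicle_box) < 0.5 for lb in light_boxes)
```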
\subsection{Data generation} \label{sec:data_generation} The assembling of a training image starts by randomly selecting a foreground image to be blended into a random background image. Once background and foreground are selected, they undergo an image augmentation process in order to increase data variability within the dataset. Basically, the augmentation comprises four steps: individual brightness transformations of background and foreground, foreground histogram noising, blending and blurring. All augmentation parameters were selected empirically to avoid degenerated images. \added{Both foreground and background images are rescaled to the final dimensions of $1280\times960$ pixels to aid the visual inspection of the images when choosing the range of the augmentation parameters.} The brightness transformation involves adding and multiplying random values to the original images' pixel values. For the background image, a real value sampled from the interval $[-120, 120)$ is added to each channel of all pixels. The resulting values are then multiplied by a coefficient within $[0.75, 1.25)$. Consider $A_B$ and $A_F$ as the values added to the pixels of the selected background and foreground, respectively. The same sequence of operations is applied to both background and foreground images, with the exception that $A_F = A_B + 40$, so that the foreground tends to be slightly highlighted over the background. Since the foreground is artificially generated by a simple computer graphics process, the color of each shape is originally uniform along its whole extension. To make the scene more realistic and increase data variability, a histogram noising process is performed on the selected foreground. A random integer value within $[-15, 15)$ is added in a pixel-wise fashion to each channel of the image, so that adjacent pixels have different color intensities even when composing the same shape.
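A minimal NumPy sketch of the brightness transformation and histogram noising (clipping to $[0, 255]$ and drawing the multiplicative coefficient independently per image are our assumptions; only the additive offsets and the sampling ranges come from the text):

```python
import numpy as np

def brightness_and_noise(background, foreground, rng):
    """Apply the paired brightness transform and the foreground noising."""
    a_b = rng.uniform(-120, 120)          # additive term for the background
    a_f = a_b + 40                        # foreground slightly highlighted
    out = []
    for image, add in ((background, a_b), (foreground, a_f)):
        mult = rng.uniform(0.75, 1.25)    # multiplicative coefficient
        image = (image.astype(np.float32) + add) * mult
        out.append(np.clip(image, 0, 255))
    bg, fg = out
    # Histogram noising: per-pixel, per-channel integer noise in [-15, 15).
    fg = np.clip(fg + rng.integers(-15, 15, size=fg.shape), 0, 255)
    return bg.astype(np.uint8), fg.astype(np.uint8)
```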
Besides color uniformity, the artificial shapes have sharp edges, i.e., there is no smooth transition between the scene elements, such as the crosswalk and the road pavement. To address this, a Gaussian blur filtering is applied to the foreground, in which the standard deviation of the filter is randomly drawn from the range $[0, 3)$. A smooth transition is also desirable between the foreground's limits and the background. Therefore, the blending procedure was performed considering a mask with smooth transitions at the borders of the objects. The smooth transition was generated by applying a sequence of erosion operations, each of which introduces a different level of opacity. More specifically, let $M_F$ denote the original foreground's mask (encoded as a 0-1 float image) and $M$ a new foreground's mask to be generated. First, $M$ is defined as $M = \frac{M_F}{3}$. Then, $M_F$ is eroded twice by a $3\times3$ square kernel and $\frac{1}{3}$ of each result is added to $M$. This makes $M$ a mask in which the object's border pixels are less intense. The blending procedure produces an image $I$ which results from overlapping the background $B$ with the foreground $F$ according to this new mask, i.e., $I = (1 - M)B + MF$. Finally, the blended image is submitted to another blur filtering in order to increase the smoothness of the final arrangement and also the data variability. The standard deviation is once more randomly selected from the range $[0, 3)$. \section{Experimental methodology} \label{experimental_methodology} The experimental evaluation aims to assess the utility of a synthetic dataset generated by the proposed approach. In this context, the performance of a deep detector trained only on the synthetic-generated data is measured on several well-known datasets, and compared with reasonable baselines as well as with a deep detector trained with real-world traffic scenes, which can be viewed as an empirical upper-bound for cross-database experiments.
The following subsections describe the training, validation and test datasets used for the experiments, the base datasets used to assemble them, the metrics for performance evaluation, the experimental setup, the conducted experiments and the computational resources used to run them. \added{The concept of \textit{box-validation} (used to provide fair comparisons among different annotation schemes) is also introduced. Table~\ref{table:datasets} summarizes information about the used datasets that are described in the next subsections. The foreground images, the source code for the generation of the datasets and the trained models are publicly available\footnote{\camreadyadd{\url{https://github.com/Jpvmello/traffic-light-detection-synthetic-context}.}}.} \begin{table*}[ht] \centering \caption{\added{Details of the datasets (synthetic, real-world, and hybrid) with their respective training, validation, box-validation, and test sets.}} \label{table:datasets} \resizebox{\textwidth}{!}{% \begin{tabular}{lll|ccc|ccc} \toprule \multirow{2}{*}{\textbf{Dataset}} & \multirow{2}{*}{\textbf{Base Dataset}} & \multirow{2}{*}{\textbf{Set}} & \multicolumn{3}{c}{\textbf{Images}} & \multicolumn{3}{|c}{\textbf{Traffic lights}} \\ & & & \textbf{Size} & \textbf{Positives} & \textbf{Negatives} & \textbf{Red} & \textbf{Green} & \textbf{Yellow} \\ \midrule \multirow{2}{*}{Fully Contextualized} & \multirow{2}{*}{COCO + context} & Training & \multirow{2}{*}{1280$\times$960} & 70,000 & 0 & 46,432 & 46,619 & 47,064 \\ & & Validation & & 7,000 & 0 & 4,701 & 4,702 & 4,543 \\ \multirow{2}{*}{Uncontextualized} & \multirow{2}{*}{COCO + 3D models} & Training & \multirow{2}{*}{1280$\times$960} & 70,000 & 0 & 46,432 & 46,619 & 47,064 \\ & & Validation & & 7,000 & 0 & 4,701 & 4,702 & 4,543 \\ \multirow{2}{*}{Templates Only} & \multirow{2}{*}{COCO + 2D templates} & Training & \multirow{2}{*}{1280$\times$960} & 70,000 & 0 & 46,432 & 46,619 & 47,064 \\ & & Validation & & 7,000 & 0 & 4,701 & 4,702 & 4,543 
\\ \multirow{2}{*}{Positive Backgrounds} & \multirow{2}{*}{BDD100K positives + 3D models} & Training & \multirow{2}{*}{1280$\times$960} & 70,000 & 0 & 46,432 & 46,619 & 47,064 \\ & & Validation & & 7,000 & 0 & 4,701 & 4,702 & 4,543 \\ \multirow{2}{*}{Negative Backgrounds} & \multirow{2}{*}{BDD100K negatives + 3D models} & Training & \multirow{2}{*}{1280$\times$960} & 70,000 & 0 & 46,432 & 46,619 & 47,064 \\ & & Validation & & 7,000 & 0 & 4,701 & 4,702 & 4,543 \\ Real-world Reference & DTLD & Training & 1280$\times$960 & 70,000 & 0 & 56,521 & 58,993 & 46,496 \\ \midrule \multirow{2}{*}{LISA\_train+test} & \multirow{2}{*}{LISA} & Box-validation & \multirow{2}{*}{640$\times$480} & 1,277 & 125 & 2,199 & 1,509 & 100 \\ & & Test & & 18,971 & 4,615 & 29,731 & 20,889 & 1,402 \\ \multirow{2}{*}{LISA\_test} & \multirow{2}{*}{LISA} & \multirow{2}{*}{Test} & \multirow{2}{*}{640$\times$480} & \multirow{2}{*}{7,473} & \multirow{2}{*}{3,481} & \multirow{2}{*}{9,846} & \multirow{2}{*}{7,717} & \multirow{2}{*}{457} \\ & & & & & & & & \\ \multirow{2}{*}{Udacity-} & \multirow{2}{*}{Udacity} & Box-validation & \multirow{2}{*}{1920$\times$1200} & 447 & 1,052 & 666 & 463 & 29 \\ & & Test & & 4,027 & 9,474 & 6,176 & 3,777 & 199 \\ \multirow{2}{*}{Udacity+} & \multirow{2}{*}{Udacity} & Box-validation & \multirow{2}{*}{1920$\times$1200} & 447 & 1,052 & 663 & 463 & 28 \\ & & Test & & 4,027 & 9,474 & 6,162 & 3,811 & 178 \\ \multirow{2}{*}{LaRA} & \multirow{2}{*}{LaRA} & Box-validation & \multirow{2}{*}{640$\times$480} & 593 & 524 & 512 & 361 & 6 \\ & & Test & & 5,339 & 4,723 & 4,768 & 3,020 & 52 \\ \multirow{2}{*}{Proprietary} & \multirow{2}{*}{Proprietary} & Box-validation & \multirow{2}{*}{1280$\times$960} & 269 & 130 & 191 & 296 & 19 \\ & & Test & & 2,426 & 1,172 & 1,622 & 2,627 & 247 \\ \bottomrule \end{tabular} } \end{table*} \subsection{Backgrounds datasets}
\label{sec:natural_backgrounds_dataset} This subsection describes the Microsoft Common Objects in Context (COCO) collection~\cite{lin2014microsoft}, from which the background images are selected, and the Berkeley DeepDrive (BDD100K), a dataset with real-world traffic scenes used as backgrounds for comparison with the use of non-traffic-related backgrounds. \subsubsection{Microsoft Common Objects in Context (COCO)} The training partition of the 2017 version of COCO is used as a source of non-traffic-related natural images for the proposed method. The dataset comprises a total of 328k images with 91 labeled categories of common objects. For the experiments, the dataset was filtered not to contain the following traffic elements: ``traffic light'', ``bicycle'', ``car'', ``bus'', ``motorcycle'', ``truck'', and ``stop sign''. From the filtered set, a subset of 37k randomly-selected images whose smallest dimension is equal to or greater than 120 pixels composes the natural data used as background for the experiments (30k designated for training and 7k for validation). Images were rescaled to a height of at least 480 pixels and a width of at least 640 pixels, without altering their aspect ratios. Subsequently, the central pixels were cropped so that the images have dimensions of $640\times480$ pixels. Since COCO is available online, it is assumed that any person using the proposed method, or a variation of it, could have access to the data, including its filter tags. \subsubsection{Berkeley DeepDrive (BDD100K)} The BDD100K dataset~\cite{bdd100kberkeley,yu2020bdd100k} consists of 100k 40-second videos from more than 50k driving rides conducted in different cities of the USA. The dataset is prepared to be used for ten different tasks, including detection, segmentation and tracking of elements in the traffic domain.
For image tasks, which are the focus of this work, the frame corresponding to the 10\textsuperscript{th} second of each video is annotated, resulting in an image dataset with 100k images with dimensions of $1280\times720$ pixels. For this work, only images from daytime and dawn/dusk scenes were considered. They were rescaled to be 960 pixels high and had their central $1280\times960$ pixels cropped. The resulting set was split into one subset comprising only the positive images (with at least one labeled traffic light) and another containing only the negative images (with no labeled traffic lights). The resulting set of positive backgrounds has 23,850 images, while its counterpart contains 23,941 images. From each of these sets, 7k random images were designated for validation and the remaining ones for training. \subsection{Training and validation datasets} This subsection describes the training and validation datasets assembled for the experiments. They include (i) synthetic-generated datasets, denoted as Fully Contextualized (Ours), Uncontextualized and Templates Only, (ii) a real-world traffic dataset, an adaptation of the DriveU Traffic Light Dataset (DTLD), used to train a strong baseline denoted as Real-world Reference, and (iii) hybrid datasets denoted as Positive Backgrounds and Negative Backgrounds, combining synthetic foregrounds with traffic-related backgrounds from BDD100K. \subsubsection{Fully Contextualized (Ours)} \label{sec:fully_contextualized} The Fully Contextualized dataset is also denoted as Ours since it refers to the dataset generated through the proposed method. COCO backgrounds and samples from a set of 20k foregrounds generated with dimensions of $640\times480$ pixels were randomly combined to assemble the training set, according to the data generation process described in Section~\ref{sec:data_generation}.
This process resulted in a set with the arbitrary number of 70k images with dimensions of $1280\times960$ pixels and 46,432 traffic lights labeled as red, 46,619 as green and 47,064 as yellow. The corresponding validation set, containing 7k images, was assembled in a similar manner. However, no backgrounds or foregrounds are repeated, i.e., each of the 7k validation backgrounds is combined with a unique sample from a set of 7k generated foregrounds (different from the ones designated for the training set). This validation set contains 4,701 traffic lights labeled as red, 4,702 as green and 4,543 as yellow. \subsubsection{Uncontextualized} \label{sec:uncontextualized} This dataset (both training and validation sets) has the exact same traffic scenes of the Fully Contextualized dataset (Section~\ref{sec:fully_contextualized}), but drawing only the traffic lights instead of the whole traffic context. Some samples are shown in Figure~\ref{fig:context_levels}. \subsubsection{Templates Only} The Templates Only dataset is paired with the Uncontextualized dataset (presented in Section~\ref{sec:uncontextualized}) in both training and validation sets in terms of labeling (classes and dimensions) and augmentation, but it replaces the 3D traffic light models with randomly generated 2D templates with a fully-lighted bulb (Figure~\ref{fig:lighted2d}), with a timer (Figure~\ref{fig:timer2d}), or with a directional arrow (Figure~\ref{fig:arrow2d}). They were designed to look as similar as possible to the faces of the 3D models represented in Figure~\ref{fig:models3d}, reproducing the ranges of possible width and height, the diameter of the bulbs, the possible color tones, among other properties.
\begin{figure} \centering \subfloat[]{\includegraphics[width=0.14\textwidth]{images/timer2dnew.pdf}% \label{fig:timer2d}} \subfloat[]{\includegraphics[width=0.14\textwidth]{images/normal2dnew.pdf}% \label{fig:lighted2d}} \subfloat[]{\includegraphics[width=0.14\textwidth]{images/arrow2dnew.pdf}% \label{fig:arrow2d}} \caption{Examples of modeled templates: (a)~traffic lights containing a timer; (b)~traffic lights with fully-lighted bulbs; (c)~directional traffic lights.} \label{fig:templates2d} \end{figure} Before blending, each template's perspective is transformed so that the sides farthest from the center of the final image are smaller than the respective opposite sides. Then, it is rotated by an angle in degrees uniformly drawn from $[-3.6, 3.6]$, resized to fit the respective paired label and placed in a transparent image to be blended into a background image, maintaining the paired label's coordinates and dimensions. \subsubsection{\added{Positive Backgrounds and Negative Backgrounds}} \added{The Positive Backgrounds dataset is also paired with the Uncontextualized dataset, but replaces the COCO non-traffic-related backgrounds with the positive traffic-related backgrounds from BDD100K. The Negative Backgrounds dataset is generated equivalently, except that it uses the negative BDD100K backgrounds set.} \subsubsection{Real-world Reference: DriveU Traffic Light Dataset (DTLD)} DTLD~\cite{fregin2018driveu} contains 43,132 images with dimensions of $2048\times1024$ pixels and more than 230k annotations of traffic lights. Its images are part of recordings produced in 11 German cities. The annotations provide information about the following features of the traffic lights: (i) face orientation, (ii) occlusion and relevancy to the route of the vehicle which was used to record the dataset, (iii) axial orientation, (iv) number of light bulbs, (v) state and (vi) pictogram (fully-lighted, arrow, pedestrian, etc.).
The dataset's images were processed and filtered before use. First, the top 960 rows and the central 1280 columns of each image were cropped. Then, the cropped area was rescaled to half its dimensions, resulting in an image with dimensions of $640\times480$ pixels without distortion. Next, the dataset was filtered to select only images containing traffic lights with the features of interest, namely: (i) frontal face view, (ii) all levels of relevance, (iii) vertical, (iv) three bulbs, (v) red, yellow and green, and (vi) with fully-lighted or arrow bulbs. The filtered dataset has 33,374 images and a total of 25,956, 47,919 and 2,572 traffic lights in the red, green and yellow states, respectively. To compensate for class imbalance, the dataset was reorganized to replicate images from the least frequent classes as described next. Let $Y$, $R$ and $G$ be subsets of images containing, respectively: at least one yellow traffic light; at least one red and no yellow traffic lights (note that green may also occur); and at least one green and no yellow traffic lights (red also possible). Balance was achieved by repeatedly selecting one image from each set and assigning it to a new set, until a total of 70k images was selected. This results in a final training dataset with 46,496 annotated yellow traffic lights, 56,521 red and 58,993 green. For the experiments, the same augmentations applied to the COCO backgrounds, described in Section~\ref{sec:data_generation}, were also applied to this dataset. \added{A validation set based on DTLD was not assembled. Instead, a best-case scenario was considered for each test dataset. For this, the Real-world Reference was validated on each box-validation set, which comprises a small portion of its respective test dataset, as further described in Section~\ref{sec:test_and_boxval_datasets}.
Therefore, its validation was conducted with a dataset-dependent positive bias so that the proposed method's performance can be compared with the best performance that the Real-world Reference can provide on each test set.} \subsection{Test and box-validation datasets} \label{sec:test_and_boxval_datasets} To evaluate the performance of the trained models, four datasets of real traffic images (LISA, Udacity, LaRA, and a proprietary one) were used, as described in the following subsections. \added{Each of the mentioned datasets is split into two subsets: a minority set with randomly selected 10\% of the dataset's positive and negative images (referred to as the \emph{box-validation} set), and a majority set comprising the rest of the images (referred to as the \emph{test set}). In an overview, the box-validation set -- as a sample of the test set -- serves to guide the adjustment of the predicted bounding boxes in order to compensate for inaccurate annotations (such as those in Figure~\ref{fig:annotations}) in the respective real-world test set. The use of the box-validation set is explained in more detail in Section~\ref{sec:eval}}. \subsubsection{LISA\_train+test and LISA\_test: Laboratory for Intelligent \& Safe Automobiles (LISA)} The LISA traffic light dataset~\cite{jensen2018lisa,jensen2016vision,philipsen2015traffic} contains day- and night-time traffic-related images with dimensions of $1280\times960$ pixels collected in San Diego, California, USA. The dataset is originally divided into train and test sets. The train set is subdivided into 13 day clips and 5 night clips, while the test set into 2 day sequences and 2 night sequences. Their images were rescaled to half of their original dimensions, i.e., $640\times480$ pixels. For this work, both the original daytime train and test sets were used to compose the box-validation and test sets. Traffic lights labeled as ``stop'' or ``stopLeft'' were considered as red. Those annotated as ``go'' or ``goLeft'' were redefined as green.
Finally, those annotated as ``warning'' or ``warningLeft'' were taken as yellow. From the 14,034 images that compose the used original training day clips, 12,775 are positive. A total of 37,809 traffic lights are considered, of which 22,084 are red, 14,681 are green and 1,045 are yellow. In turn, the original test day sequences contain 10,954 images, of which 7,473 are positive, and present 9,846 red traffic lights, 7,717 green and 457 yellow. For the experiments, two test sets are considered: one corresponding to LISA's original test set (denoted as LISA\_test) and another composed of both the original train (excluding the images used for validation) and test sets (denoted as LISA\_train+test). \subsubsection{Udacity- and Udacity+} The Udacity public repository~\cite{gonzalez2018annotated} provides two datasets with several types of annotation. However, only the second dataset was annotated for traffic lights, so only its images are used for evaluation. This dataset has 15k images with dimensions of $1920\times1200$ pixels, referring to daytime traffic scenes from Mountain View, California. Traffic lights labeled as occluded were not considered, resulting in 4,474 positive images and a total of 8,232 labeled red traffic lights, 5,639 green and 278 yellow. However, almost 3k of the traffic lights were originally labeled more than once, causing overlapping bounding boxes. Thus, two distinct annotation patterns were considered to decide on overlapping bounding boxes: the first one, denoted as Udacity-, considers only the boxes with the smallest area, while the second one, denoted as Udacity+, considers the boxes with the largest area. There are small differences in the number of traffic lights labeled with each state between the two patterns, due to the occurrence of overlapping boxes from more than one traffic light, but approximately 6.8k are labeled as red, 4.2k as green and slightly more than 200 as yellow.
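The two annotation patterns can be reproduced with a simple pairwise filter (a sketch with $(x_1, y_1, x_2, y_2)$ boxes; the exact de-duplication rule used for the dataset is an assumption):

```python
def box_area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def boxes_overlap(a, b):
    # True when the two boxes share a positive intersection area.
    return min(a[2], b[2]) > max(a[0], b[0]) and min(a[3], b[3]) > max(a[1], b[1])

def resolve_duplicates(boxes, keep="smallest"):
    """Keep one box per overlapping pair: the 'smallest' one (Udacity-)
    or the 'largest' one (Udacity+)."""
    alive = [True] * len(boxes)
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if alive[i] and alive[j] and boxes_overlap(boxes[i], boxes[j]):
                smaller_first = box_area(boxes[i]) <= box_area(boxes[j])
                if smaller_first == (keep == "smallest"):
                    alive[j] = False    # drop the larger (or smaller) box
                else:
                    alive[i] = False
    return [b for b, a in zip(boxes, alive) if a]
```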
\subsubsection{\textit{La Route Automatis{\'e}e} (LaRA)} The LaRA traffic lights dataset~\cite{charette2013traffic} comprises 11,179 images with dimensions of $640\times480$ pixels from a video acquired in the traffic of Paris, France. Approximately 55\% of its images (5,932) are positive. It has a total of 5,280 annotations of red traffic lights (as ``stop''), 3,381 of green (as ``go'') and only 58 in the yellow state (as ``warning''). \subsubsection{Proprietary dataset} A proprietary dataset \camreadyadd{(Intelligent Autonomous Robotic Automobile (IARA)\footnote{\camreadyadd{Available at \url{https://drive.google.com/drive/folders/1iATG5suB9bHnFi9x6XaWtjG-uzwsJ8kb}.}})} was also used for this work. It is composed of 3,997 traffic-related images with dimensions of $1280\times960$ pixels captured by a camera. This image set comprises 2,695 positive images. The dataset has a total of 1,813, 2,923 and 266 traffic lights in the red, green and yellow states, respectively. \subsection{Experimental setup} The models were trained using a publicly available Tensorflow implementation\footnote{\url{https://github.com/endernewton/tf-faster-rcnn}.} of a consolidated state-of-the-art object detector, Faster R-CNN~\cite{ren2015faster}, with the also state-of-the-art ResNet-101~\cite{he2016deep} feature extractor, given the satisfactory results obtained by Torres \textit{et al.}~\cite{tabelini2019ijcnn}. Faster R-CNN comprises a Convolutional Neural Network that proposes regions of interest as candidates of possible objects and two fully-connected networks, one for bounding box regression and another for classification. The anchor box scale and ratio sets were empirically defined as $\{2, 4, 8, 16, 32\}$ and $\{0.5, 1, 2\}$, respectively. The minimum overlap threshold of regions of interest is set to zero. Each model is trained for 70k iterations, so that each image is iterated once, with a batch size of 1.
\added{This number of iterations was also empirically verified to be enough for convergence.} The learning rate is defined as $10^{-3}$ during the first 50k iterations and then decreased to 10\% of its original value for the rest of the training. To increase the range of traffic light sizes during the training stage, the set of training image scales was defined as $\{480, 960\}$, i.e., each image is resized during training so that its smallest dimension becomes equal to one of those two values (randomly sampled). As the training samples have final dimensions of $1280\times960$, each image either keeps its original dimensions or is rescaled to half its dimensions. In turn, the test images were scaled during evaluation so that their smallest dimension is equal to 960, without changing their aspect ratios. In other words, images from the LISA, LaRA and the proprietary datasets were rescaled to $1280\times960$ pixels, while Udacity images were rescaled to $1536\times960$ pixels. \subsection{Evaluation metrics and procedures} \label{sec:eval} The metrics adopted for the evaluation of the models were the F1-score and the mean Average Precision (mAP), both derived from precision, i.e., the fraction of predicted bounding boxes that are correct, and recall, i.e., the fraction of ground truths that are correctly predicted. To be considered correct, a prediction box must have an Intersection-over-Union (IoU) equal to or greater than 0.5 with a ground truth box, along with the correct classification. Precision and recall themselves may also be reported to support the analysis of some results. The F1-score represents the harmonic mean of precision and recall. The higher the F1-score, the better the correspondence between predictions and ground truths. The mean Average Precision depends on the individual Average Precisions (AP) obtained for each class (traffic light state).
Basically, the AP of a class represents the area under the cumulative precision-recall curve~\cite{everingham2010pascal}. Then, the mAP is calculated as the arithmetic mean of all APs. \added{The calculation of the F1-score is based on an optimal confidence threshold computed over a validation set as described next. On the other hand, the mAP is calculated over all predictions, i.e., the confidence threshold is set to zero and no validation data is required. Finally, the mAP is also used as the selection metric in the box-validation, as described last.} \paragraph{\added{Validation}} The validation sets are used to investigate the confidence threshold that yields the best F1-score. Thresholds from 0.01 to 1.0 in steps of 0.01 were considered, i.e., for each step, only predictions with a confidence score equal to or higher than the threshold were kept. Once the best confidence threshold was found, it was adopted as an optimal parameter of the final application, being used to calculate the corresponding F1-score for each test dataset. \paragraph{\added{Box validation}} For the box-validation, the evaluation metrics were computed for different proportions of the prediction bounding boxes to confirm the effectiveness of the models. For each dataset's box-validation set, the metrics were calculated considering the areas of the prediction boxes multiplied by a factor $f = 0.4, 0.5, 0.6, \ldots, 1.9$, with the boxes' centers and aspect ratios preserved, so as to find the factor that makes the prediction boxes best fit the ground truths and, therefore, yields the best evaluation results. Then, the metrics were calculated for the test sets considering the box proportion that yielded the highest mAP on the respective box-validation set (adopting the value of $f$ closest to 1.0 as a tiebreaker).
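Scaling a prediction box's area by a factor $f$ while preserving its center and aspect ratio amounts to scaling each side by $\sqrt{f}$ (a sketch; the $(x_1, y_1, x_2, y_2)$ box format and the function name are ours):

```python
def scale_box_area(box, f):
    """Return `box` with its area multiplied by `f`, keeping the center
    and the aspect ratio (so each side is scaled by sqrt(f))."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    s = f ** 0.5
    half_w, half_h = (x2 - x1) * s / 2.0, (y2 - y1) * s / 2.0
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```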
It is worth emphasizing that this procedure has a different purpose from a conventional validation step: the latter intends to find the optimal model for the task, whereas the box-validation focuses on compensating for inaccurate annotations that can mislead the performance assessment. To enable a fair evaluation, this procedure was repeated for each of the methods being compared, including the model trained with real data. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{images/udacity_annotations1.pdf} \end{center} \caption{Sample from the Udacity dataset with the original ground truth bounding boxes. Note that their area considerably exceeds the actual traffic lights' areas.} \label{fig:annotations} \end{figure} \subsection{Experiments} The experiments aim at evaluating whether a traffic light detector can be trained without real data from the target domain, as well as investigating the influence of context on the learning process and the effectiveness of using the proposed method as data augmentation. \subsubsection{Context impact analysis} \label{sec:context_analysis} These experiments aim to evaluate the deep detector's capability of learning from totally uncontextualized data, as well as to investigate the impact of context on the detector's performance. For this, models based on the Fully Contextualized and Uncontextualized datasets were trained, validated, box-validated and tested to have their performance compared against each other.
\begin{figure} \centering \subfloat[Fully Contextualized]{\includegraphics[width=0.48\columnwidth]{images/ctx.pdf}% \label{fig:with_context}} \subfloat[Uncontextualized]{\includegraphics[width=0.48\columnwidth]{images/noctx.pdf}% \label{fig:with_outcontext}} \caption{Samples from the Fully Contextualized and Uncontextualized datasets.} \label{fig:context_levels} \end{figure} \subsubsection{Templates \textnormal{versus} 3D models} The impact of using the traffic lights' 3D models was measured by comparing it against the use of simple 2D templates, as in~\cite{tabelini2019ijcnn}. For this, the test results yielded by a model trained, validated and box-validated with the Templates Only dataset are compared to those obtained with the Uncontextualized-based model. \subsubsection{\added{Background domain analysis}} \added{Torres \textit{et al.}~\cite{tabelini2019ijcnn} argue that using images from the problem domain as backgrounds may impair the detector performance, as the target object may occur in the image and also be treated as background. To confirm this issue in the traffic light application, experiments were conducted to compare the performance of models trained with backgrounds from the problem domain containing or not containing traffic lights (i.e., the Positive Backgrounds and Negative Backgrounds datasets, respectively) against their equivalent version trained with non-traffic-related backgrounds (i.e., the Uncontextualized dataset).} \subsubsection{Use of the proposed method as data augmentation} Experiments were also performed to evaluate the use of the proposed method as data augmentation, i.e., as an improvement for models based on real traffic scenes. The following experiments were conducted: \begin{itemize} \item Context + Real-world: the Fully Contextualized training set was mixed with the Real-world Reference training set, creating a new set with 140k images.
Then, a model was trained using this set as input for 140k iterations, so that each image was iterated once. Evaluation was performed not only on the final trained model but also on the checkpoint model corresponding to the first 70k iterations, since this was the number of iterations used to train the proposed method and the Real-world Reference; \item Fine tuning on real data: a 70k-iterations training on the Real-world Reference training set was performed to fine-tune the model previously trained with the Fully Contextualized set. \end{itemize} To determine the best confidence threshold for testing these experiments, the Fully Contextualized validation set was used. \subsubsection{Synthetic \textnormal{versus} real-world data} To compare the results achieved when training with synthetic data against those for real-world data, the Real-world Reference model was trained, validated, box-validated and tested so that its test results could be compared with those obtained with the model trained on the Fully Contextualized dataset. Exceptionally for the Real-world Reference, the best confidence threshold was not obtained through evaluation on a specific validation set, but obtained individually for each box-validation set. This is expected to bias the results in favor of the Real-world Reference model, so that the performance of the Fully Contextualized-based model is compared to the best possible case obtained with the reference for each dataset.
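The data-mixing schedule of the augmentation experiment above (concatenating the synthetic and real training sets, shuffling, and evaluating both an intermediate checkpoint and the final model) can be sketched as follows. The helper names and the fixed seed are illustrative assumptions, not the authors' training code.

```python
import random

def mix_training_sets(synthetic_images, real_images, seed=42):
    """Concatenate the synthetic and real training sets and shuffle them.

    With one image per iteration, a run of len(result) iterations then sees
    each image exactly once, as in the 140k-iterations experiment.
    """
    mixed = list(synthetic_images) + list(real_images)
    random.Random(seed).shuffle(mixed)  # fixed seed keeps the schedule reproducible
    return mixed

def checkpoint_iterations(total_iters, intermediate):
    """Iterations at which the model is evaluated: an intermediate checkpoint
    (e.g. 70k, the baselines' training length) and the final iteration."""
    return sorted({intermediate, total_iters})
```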
\subsection{Computational resources} \removed{The preparation of the training datasets was processed on an Intel\textsuperscript{\textregistered} Core\textsuperscript{\texttrademark} i5-7200U CPU (2.50GHz, 8GB RAM), an Intel\textsuperscript{\textregistered} Xeon\textsuperscript{\textregistered} CPU X5690 (3.47GHz, 50GB RAM) and an Intel\textsuperscript{\textregistered} Xeon\textsuperscript{\textregistered} CPU E5606 (2.13GHz, 22GB RAM).} The training and inference processes were performed on an Intel\textsuperscript{\textregistered} Core\textsuperscript{\texttrademark} i7-4770 CPU (3.40GHz, 16GB RAM) and an NVIDIA TITAN Xp GPU with 12GB of memory, which performs one training iteration in approximately 0.35 seconds \added{and inference on an image in about 0.13 seconds}. \section{Results and discussion} Figure~\ref{fig:test_results} shows the mAP and F1-score results obtained on the test datasets for all trained models, considering the value $f$ of the bounding boxes' area multiplier (one $f$ for each of the evaluated methods) which yielded the highest results on their respective box-validation sets. \begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{images/results_v3.pdf} \end{center} \caption{mAP and best F1-score across the test datasets for all trained models.} \label{fig:test_results} \end{figure*} \subsection{Context impact analysis} The results in Figure~\ref{fig:test_results} reveal that the model trained with the Fully Contextualized dataset (Ours\added{; green bar}) performs better than the one trained with its counterpart (Uncontextualized\added{; orange bar}) for all test datasets and both evaluation metrics. In all test cases, the presence of context yields an mAP increase of approximately 2 to 6 p.p. The smallest gain occurs for LISA\_test (from 44.46 to 46.68\%) and the highest for Udacity+ (from 40.16 to 45.74\%). Meanwhile, the F1-score presents a significant increase for some test datasets.
For the LaRA dataset, for example, the F1-score increases by more than 16 p.p. (from 39.60 to 55.78\%) with the insertion of context into the scene, while the Proprietary dataset presents an increase of more than 14 p.p. (from 49.24 to 63.36\%). For the remaining datasets, the F1-score presents less pronounced increases (a minimum of 4.36 p.p. for LISA\_test and a maximum of 9.33 p.p. for Udacity+), but no decrease in any case. A direct analysis of the precision and recall results, considering the confidence thresholds which yield the best F1-scores on the validation sets, reveals that the superior performance of the model trained with Fully Contextualized data is in almost all cases associated with significant increases in precision, while the recall variation is less pronounced. For the Proprietary dataset, for example, the precision increases from 46.58 to 71.56\%, while the recall varies from 58.19 to 56.88\%. Overall, the increase in precision is around 10--25 p.p. depending on the dataset, while the recall does not vary by more than 2.5 p.p. The only exception is LISA\_test, which suffers a decrease in recall of approximately 9 p.p.; its precision increase is, however, higher than 15 p.p. \subsection{Templates \textnormal{versus} 3D models} Figure~\ref{fig:test_results} shows that the model trained with Uncontextualized data \added{(orange bars)} significantly outperforms the training with Templates Only \added{(blue bars)} in all scenarios. The differences in mAP range from 15.01 p.p., for LISA\_train+test, to 29.61 p.p., for LaRA, while in F1-score they range from 6.74 p.p., for Udacity+, to 13.26 p.p., for the Proprietary dataset. This experiment shows that training with 3D models yields better results than training with 2D templates despite their similar appearance. In fact, the tridimensional aspect makes the traffic light models far more realistic and, therefore, provides a better match with real-world traffic lights.
\subsection{\added{Background domain analysis}} \added{Overall, the results shown in Figure~\ref{fig:test_results} evidence that the performance of the model based on the Positive Backgrounds dataset (light-gray bars) is considerably inferior to that of the Uncontextualized-based model (orange bars). The differences in mAP range from 13.42 (for Udacity+) to 47.73 p.p. (for LaRA); the F1-score presents little difference for the Udacity-based models only, while for the remaining datasets the difference ranges from 20.78 (for LISA\_test) to 35.63 p.p. (for LaRA). Analysed individually, the model does not perform well: its mAP is limited to 26.74\% and its F1-scores are not higher than 37.62\% (both for Udacity+). On the other hand, the Negative Backgrounds model (dark-gray bars) performs comparably to the Uncontextualized model. The most evident differences occur for LaRA and the Proprietary dataset: while the Uncontextualized performance is superior by 13.69 p.p. in mAP and 8.56 p.p. in F1-score for the former, it is inferior by 5.46 p.p. in mAP and 7.90 p.p. in F1-score for the latter.} \added{The results confirm that using domain-related backgrounds containing the target object impairs performance, as stated in~\cite{tabelini2019ijcnn}. However, domain-related backgrounds without the target objects yield performance comparable to backgrounds from non-related domains.
This motivates even further the use of the proposed method, since it requires no real-world data from the problem domain.} \subsection{Use of the proposed method as data augmentation} According to Figure~\ref{fig:test_results}, the comparison between the training with the Real-world Reference \added{(red bars)} and with mixed data (context + Real-world Reference; \added{purple and brown bars for 70k and 140k iterations, respectively}) reveals, in general, that using synthetic data as a complement to real-world data improves or, at least, preserves performance, with a slight decrease only for LaRA's F1-score. Doubling the number of iterations (from 70k to 140k) kept the results of the augmentation nearly unaltered. The highest gain occurs for Udacity+, with more than +21 p.p. in mAP and +10 p.p. in F1-score. The worst performance occurs for the F1-scores of LISA\_test and LaRA. In turn, fine-tuning the context-based model with real-world data \added{(pink bars)} is less promising. Its performance is considerably worse than mixing data for LISA\_test, Udacity-, Udacity+ and LaRA, and comparable for the remaining datasets. Also, the comparison between the fine-tuning and the Real-world Reference reveals that, in almost all cases, they perform comparably, indicating that the knowledge from real-world data may strongly predominate over the synthetic data. When comparing the performance of the proposed method (Fully Contextualized; \added{green bars}) with the augmentation models, it is noticeable that, in general, the performance is also improved for all non-Udacity-based datasets when augmentation is applied, with a gain of up to 18.38 p.p. in mAP for the 70k-iterations data mixing (tested on LaRA) and 10.86 p.p. in F1-score for the 140k-iterations data mixing (tested on LISA\_test).
Fine tuning tested on LaRA is an exception in F1-score (46.37\% from fine tuning against 55.78\% from the Fully Contextualized model), but compensates in mAP (61.44 against 52.55\%). In addition, although augmentation does not provide improvements for Udacity compared to the Fully Contextualized model, the mixed-data model with 70k iterations yields nearly the same mAP and little degradation in F1-score, limited to 6.06 p.p. (Udacity+; Fully Contextualized against mixed data with 70k iterations). Overall, the results show that the proposed method tends to be effective as data augmentation. Additionally, it is preferable to train models with both real-world and synthetic data, in which the deep detector learns both patterns simultaneously, rather than to train with synthetic data and then refine the learned pattern by fine-tuning with real-world data. \subsection{Synthetic \textnormal{versus} real-world data and discussion on the proposed method performance} The model trained with the proposed method \added{(green bars)} outperforms the Real-world Reference \added{(red bars)} for Udacity-, Udacity+ and LISA\_train+test, although the difference for the latter is negligible (1.7 p.p. in mAP and 0.56 p.p. in F1-score). For LISA\_test and LaRA, the real-world model outperforms the proposed method, but the highest difference is limited to 11.29 p.p. in mAP and 4.87 p.p. in F1-score, obtained for LaRA. Considering the results obtained for all test datasets, the proposed method achieves an average mAP of 50.08\% $\pm$ 5.99\% and an average F1-score of 55.93\% $\pm$ 5.50\%, while the Real-world Reference yields average mAP and F1-score of, respectively, 45.61\% $\pm$ 18.26\% and 52.21\% $\pm$ 13.1\%. \camreadyadd{ Overall, these results are comparable to recent deep learning approaches for traffic light detection~\cite{possatti2019traffic,pon2018hierarchical,kim2018efficient}.
They report mAP ranging from 38 to 55\% in an intra-database scenario, which tends to be less challenging than the investigated cross-database scenario. Despite its relevance, the latter scenario is usually overlooked in the literature and, in this work, it is covered by the Real-world Reference. Given the results obtained with both the Ours and the Real-world Reference models, it is evidenced that the proposed method achieves performance competitive with state-of-the-art methods, and that the choice of the real-world baseline was satisfactory even with the intrinsic challenges of the cross-database approach.} The results suggest that the proposed method has the potential to match a model trained with real-world data, without any manual effort for the acquisition and annotation of data. Moreover, the results were further enhanced by mixing synthetic and real-world data, yielding an average mAP of 56.63\% $\pm$ 11.77\% and an average F1-score of 57.55\% $\pm$ 9.93\% for the 70k-iterations model. Figure~\ref{fig:predictions} shows a visual example of the proposed method's results. The light red boxes represent the original ground truth boxes. The cyan boxes correspond to the original predictions, which fit best with the real traffic light face area. The red boxes followed by the confidence score represent the prediction bounding boxes with area multiplied by the optimal factor, making the prediction boxes' areas closer to the ground truth. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{images/predictions_v1.pdf} \end{center} \caption{Example of inference result for an image from the Udacity dataset. The light red boxes represent the original ground truth boxes. The cyan boxes correspond to the original prediction.
The red boxes followed by the confidence score represent the adjusted predictions after box-validation.} \label{fig:predictions} \end{figure} \section{Conclusion} Detecting and recognizing traffic elements, particularly traffic lights, is essential for autonomous vehicles to drive safely and in accordance with traffic legislation. Although deep detectors are effective solutions for traffic light detection, the acquisition and annotation of training data demand significant time and effort. Also, real data is subject to imbalance, since yellow-state traffic lights are less likely to be recorded than red and green ones. The proposed method tackles these issues by generating artificial data that simulate a simple traffic context. The results show that it is possible to obtain reasonable performance by just inserting artificial traffic lights into natural non-traffic-related contexts. Results become even better when a simple traffic context is modeled and added to the scene. The experiments showed that this proposal yields an average mAP and an average F1-score of approximately 50\% and 56\%, respectively, each nearly 4 p.p. higher than the respective results obtained by training with real-world traffic data. It is also clear that training with datasets built with little generation effort (related to the construction of the traffic scene) and no annotation effort provides results comparable to those obtained with exhaustively annotated real training data. Although some traffic light datasets are available, the application of this principle may enhance performance in cases where it is necessary to detect traffic light models distinct from those covered by the available datasets. For example, models with more than three bulbs and/or in horizontal orientation may be added. Such flexibility enables the application of the detector in cities with different traffic light patterns without having to acquire and annotate a new dataset.
\section{#1}\indent} \renewcommand{\a}{\alpha} \renewcommand{\b}{\beta} \newcommand{\delta}{\delta} \newcommand{\mu}{\mu} \renewcommand{\lambda}{\lambda} \newcommand{\nu}{\nu} \renewcommand{\r}{\rho} \newcommand{\sigma}{\sigma} \newcommand{\abs}[1]{\left|\,#1\,\right|} \newcommand{\set}[1]{{\left\{ #1 \right\}}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\inv}[1]{#1^{-1}} \newcommand{\inte}[2]{\int_{#1}^{#2}} \newcommand{\operatorname{tr}}{\operatorname{tr}} \newcommand{\pd}[2][1]{\ifnum#1=1 \frac{\partial}{\partial {#2}} \else \frac{\partial^#1}{\partial {#2}^{#1}}\fi} \newcommand{\dpd}[2][1]{\ifnum#1=1 \dfrac{\partial}{\partial {#2}} \else \frac{\partial^#1}{\partial {#2}^{#1}}\fi} \newcommand{\td}[2][1]{\ifnum#1=1 \frac{d}{d{#2}} \else \frac{d^#1}{d{#2}^{#1}}\fi} \renewcommand{\d}{\partial} \newcommand{\anticomm}[2]{\left\{#1,#2\right\}} \newcommand{\bigg|}{\bigg|} \newcommand{\bra}[1]{\left\langle #1 \right|} \newcommand{\ket}[1]{\left| #1 \right\rangle} \newcommand{\mathscr{L}}{\mathscr{L}} \newcommand{d^4\theta}{d^4\theta} \newcommand{{p_0}}{{p_0}} \newcommand{\overline{\psi}}{\overline{\psi}} \newcommand{\overline{u}}{\overline{u}} \newcommand{\overline{v}}{\overline{v}} \renewcommand{\arraystretch}{1.2} \newcommand{\order}[1]{\mathcal{O}({#1})} \newcommand{\gamma} \newcommand{\G}{\Gamma}{\gamma} \newcommand{\G}{\Gamma} \newcommand{\varepsilon}{\varepsilon} \newcommand{\phi} \newcommand{\F}{\Phi}{\phi} \newcommand{\F}{\Phi} \newcommand{\omega}{\omega} \newcommand{\xi}{\xi} \newcommand{\eta}{\eta} \renewcommand{\d}{\partial} \newcommand{\nabla}{\nabla} \renewcommand{\o}{\over} \newcommand{\sqrt}{\sqrt} \renewcommand{\(}{\left(} \renewcommand{\)}{\right)} \newcommand{\mathrm{STr}}{\mathrm{STr}} \newcommand{\mathrm{Sym}}{\mathrm{Sym}} \newcommand{\mathrm{diag}}{\mathrm{diag}} \newcommand{\mathcal{N}}{\mathcal{N}} \newcommand{\!\phantom{|}_M\langle 0}{\!\phantom{|}_M\langle 0} \newcommand{0 \rangle_M}{0 \rangle_M} 
\newcommand{\!\phantom{|}_R\langle 0}{\!\phantom{|}_R\langle 0} \newcommand{0 \rangle_R}{0 \rangle_R} \newcommand{\!\phantom{|}_L\langle 0}{\!\phantom{|}_L\langle 0} \newcommand{0 \rangle_L}{0 \rangle_L} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \def{\rlap{1} \hskip 1.6pt \hbox{1}}{{\rlap{1} \hskip 1.6pt \hbox{1}}} \newcommand{{\,\lower0.9pt\vbox{\hrule \hbox{\vrule height 0.2 cm \hskip 0.19 cm \vrule height 0.2 cm}\hrule}\,}}{{\,\lower0.9pt\vbox{\hrule \hbox{\vrule height 0.2 cm \hskip 0.19 cm \vrule height 0.2 cm}\hrule}\,}} \newcommand{\ {\rm Tr}\ }{\ {\rm Tr}\ } \renewcommand{\v}[1]{\vec{#1}} \def\href#1#2{#2} \textheight 22.4cm \textwidth 15.5cm \topmargin -1cm \oddsidemargin 5mm \evensidemargin 5mm \renewcommand{\baselinestretch}{1} \usepackage{color} \newcommand{\netta}[1]{\textcolor{blue}{(#1)}} \usepackage{xcolor} \usepackage[citebordercolor=green, linkbordercolor={ 0 0 1}, linktocpage=true]{hyperref} \begin{document} \begin{titlepage} \begin{NoHyper} \hfill \vbox{ \halign{#\hfil \cr } } \vspace*{20mm} \begin{center} {\Large \bf Towards a Reconstruction of General Bulk Metrics} \vspace*{15mm} \vspace*{1mm} Netta Engelhardt and Gary T. Horowitz \vspace*{1cm} \let\thefootnote\relax\footnote{[email protected], [email protected]} {Department of Physics, University of California\\ Santa Barbara, CA 93106, USA} \vspace*{1cm} \end{center} \begin{abstract} We prove that the metric of a general holographic spacetime can be reconstructed (up to an overall conformal factor) from distinguished spatial slices -- ``light-cone cuts'' -- of the conformal boundary. Our prescription is covariant and applies to bulk points in causal contact with the boundary. Furthermore, we describe a procedure for determining the light-cone cuts corresponding to bulk points in the causal wedge of the boundary in terms of the divergences of correlators in the dual field theory. 
Possible extensions for determining the conformal factor and including the cuts of points outside of the causal wedge are discussed. We also comment on implications for subregion/subregion duality. \end{abstract} \end{NoHyper} \end{titlepage} \tableofcontents \vskip 1cm \begin{spacing}{1.2} \section{Introduction}\label{sec:intro} One of the most intriguing aspects of gauge/gravity duality~\cite{Mal97,Wit98a, GubKle98} is its implication that spacetime geometry is emergent. The metric is not a fundamental variable of quantum string theory with asymptotically Anti-de Sitter (AdS) boundary conditions: it is rather an object which emerges in the appropriate limits. The quantum structure from which spacetime emerges, however, remains mysterious. There is an active research program devoted to reconstruction of the bulk metric from the dual field theory. Much of this program has focused on recovering the bulk geometry from various measures of quantum entanglement in the dual field theory (starting with~\cite{Van09, Van10}). This approach is particularly appealing since entanglement entropy is dual to the area of bulk extremal surfaces ~\cite{RyuTak06, HubRan07}: it may be possible to reconstruct the metric by comparing entanglement entropy for different regions. To our knowledge, the most developed approaches along these lines are ``hole-ography'' and the (related) construction of kinematic space~\cite{ BalChoCze, MyeHea14, CzeLam, CzeLamMcC15a, BalCzeCho, CzeDonSul, MyeRaoSug}. However, this method of bulk reconstruction suffers from some drawbacks: it is at this time understood in only a limited class of cases, and in particular it is not formulated for generic holographic spacetimes in more than $2+1$ dimensions\footnote{Generalizations to higher dimensions are limited to highly symmetric setups~\cite{MyeRaoSug}.}, and it is subject to a set of no-go theorems and constraints discussed in~\cite{Hub12, EngWal13, EngFis15}. 
Other approaches to reconstruction, e.g.~\cite{deHSolSke00, HamKab06, Kab11, ChrSke16} often assume the bulk equations of motion. We provide an alternative approach for reconstructing the bulk metric from field theory data. The reconstruction we propose is based on a new way of identifying bulk points in terms of distinguished boundary spatial slices, the ``light-cone cuts''. We will give a complete prescription for recovering the bulk conformal metric, i.e. the metric up to an overall conformal rescaling, just from the location of the light-cone cuts. (We believe it should also be possible to obtain the conformal factor, but this is still under investigation.) We will then show that the light-cone cuts themselves may be found from the divergence structure of boundary $n$-point functions, using the work of~\cite{MalSimZhi}. This approach is completely well defined for (most) bulk points in the causal wedge of the entire asymptotic boundary, i.e. points which have both past and future causal contact with the asymptotic boundary. As we will discuss, the former part of the reconstruction is also valid outside of this region, including some points inside a black hole event horizon; it is not yet clear how to extend the latter part. We emphasize that our approach is covariant and well-defined for any holographic spacetime of any dimension. We make no assumptions about the matter content; in particular, we do not assume the null energy condition. \begin{figure}[t] \centering \includegraphics[width=8cm]{MiddleCut.pdf} \caption{The intersection of the lightcone (up to caustics) of a bulk point $p$ with the asymptotic boundary defines the past and future cuts of $p$, $C^{\pm}(p)$. The cuts are complete spatial slices of the asymptotic boundary.} \label{fig:pfcutIntro} \end{figure} The starting point of our procedure is the construction of a unique spatial slice of the boundary geometry from any bulk point, provided that the point is within causal contact of the boundary. 
This slice is the intersection of the (past or future) light cone of the bulk point with the asymptotic boundary (up to caustics), as illustrated in Fig.~\ref{fig:pfcutIntro}. Clearly, not every boundary spatial slice corresponds to the light cone of a bulk point. We call the special slices which do correspond to bulk points ``light-cone cuts'', or ``cuts'' for short. Our approach is similar in spirit to the program initiated by Newman in the 1970's \cite{New76, HanNewPen} for asymptotically flat spacetimes. In particular, it was shown in \cite{KozNew83} that the conformal metric of an asymptotically flat spacetime can be recovered from similar light-cone cuts at null infinity. A crucial new ingredient in our approach is the use of the dual field theory to determine the cuts. We show that a light-cone cut corresponds to a unique bulk point, and give a prescription for reconstructing the conformal metric from the set of light-cone cuts. In this way, we show that the space of cuts acts as an auxiliary spacetime, filling a similar role in this causal reconstruction as de Sitter space does in the geodesic reconstruction of~\cite{CzeLamMcC15a}. Our approach to reconstruction is local: we determine the conformal metric pointwise. We also present partial results for determining the causal separation for points at finite separation directly from the behavior of their cuts. The causal relations between certain points can be determined from the type of intersection of their cuts. The above reconstruction of the conformal metric from the space of light-cone cuts applies to points in causal contact (either to the future or past) of the boundary. This includes points inside event horizons, and is more general than requiring that points lie in the causal wedge, which requires causal contact both to the future and past. See Fig.~\ref{fig:nofuturecutIntro} for an example. 
\begin{figure}[t] \centering \includegraphics[width=8cm]{NoFutureCut.pdf} \caption{The point $p$ lies inside the event horizon (dotted line) of a collapsing star. The boundary of the future of $p$ never intersects the asymptotic boundary. $p$ has only a past cut. More generally, the interior of an event horizon lies at least partly inside the boundary's domain of influence, but (by definition) not within the boundary's causal wedge.} \label{fig:nofuturecutIntro} \end{figure} More importantly, for (most) points inside the causal wedge, we can complete the reconstruction by determining the light-cone cuts without reference to the bulk. This can be done using results from~\cite{MalSimZhi} (which was based on earlier work by~\cite{PolSus99, GarGid09, HeePen09, Pen10, OkuPen11}), where it was shown how to determine a bulk point in $AdS_{d+1}$ from the singularities in time-ordered Lorentzian $(d+2)$-point correlators. The correlator diverges when all the boundary points are null related to a single vertex point, where the vertex can lie on the boundary or in the bulk. In cases where the vertex lies in the bulk (and there is no analogous boundary vertex) the result is a ``bulk-point'' singularity, which can be used to identify the bulk point. Extending this to Lorentzian $(d+3)$-point correlators in excited states corresponding to asymptotically AdS spacetimes yields a construction of light-cone cuts from the field theory. More general prescriptions may exist for finding cuts for points outside of the causal wedge that have only a past or only a future cut. This remains to be investigated. The paper is structured as follows: in Sec.~\ref{sec:goodcut}, we define light-cone cuts more precisely, state some of their properties, and calculate them in a simple example. 
Sec.~\ref{sec:confmetric} gives (i) a way of recovering the bulk conformal metric at any bulk point in the domain of influence of the boundary from the set of light-cone cuts, and (ii) a prescription for finding the light-cone cuts associated to points within the causal wedge of the boundary. In Sec.~\ref{sec:globalrecon}, we give a partial procedure for determining the causal separation between two bulk points which need not be in some local neighborhood of one another. In Sec.~\ref{sec:discussion} we discuss possible ways of obtaining the conformal factor, $1/N$ corrections, implications for subregion/subregion duality, and possible extensions for future work. \section{Light-cone Cuts}\label{sec:goodcut} Recall that the chronological past of a point $p$, $I^-(p)$, is defined to be the set of all points $q$ that can be connected to $p$ by a future-directed timelike curve. The causal past $J^-(p)$ is defined similarly, with ``timelike" replaced by ``timelike or null". Let $M$ be an asymptotically AdS metric with conformal boundary $\partial M$. We assume that $M$ is at least $C^{2}$, maximally extended, connected and AdS hyperbolic: there are no closed causal curves, and for any two points $(p,q)$, $J^+(p)\cap J^-(q)$ is compact after conformally compactifying the AdS boundary~\cite{Wal12}. While many of our results may be generalized to asymptotically AdS spacetimes with two boundaries, we will assume in this paper that $\partial M$ is connected for simplicity. All conventions, unless otherwise stated, are as in~\cite{Wald}. The \textit{past light-cone cut} of a bulk point $p\in M$, or past cut for short, denoted $C^{-}(p)$, is defined as the intersection of the boundary of the past of $p$ with $\partial M$: $C^{-}(p)\equiv \partial{J}^{-}(p)\cap \partial M$. The future cut of $p$ is defined similarly: $C^{+}(p)\equiv \partial{J}^{+}(p) \cap\partial M$. See Fig.~\ref{fig:pfcutIntro} for an illustration. 
When a statement applies equivalently to past or future cuts, we will simply denote the cuts in question as $C(p)$. These cuts are essentially the intersection of the light cone of a bulk point with the asymptotic boundary: a cut is not the entire light cone, however, since null geodesics can focus due to gravitational lensing. When geodesics cross, they produce caustics, which cause the null geodesic to leave the boundary of the past or future of $p$. The possible existence of caustics implies that the cuts need not be smooth cross-sections of the boundary. In general, they will be continuous, but may contain cusps where they fail to be $C^1$. We expect the cusps to be a measure zero subset of any cut, and we will assume this to be the case. We will now state three results about the correspondence between light-cone cuts and bulk points. For pedagogical reasons, we will present our results without proof, and provide the proofs in Appendix~\ref{appendix}. The following proposition holds for any bulk point in causal contact with the boundary (either to the future or past):\\ \noindent \textbf{Proposition:} (1) $C(p)$ is a complete spatial slice of the boundary $\partial M$, (2) For every point $p\in I^{+}[\partial M]$, there is precisely one past cut and for every point $p\in I^{-}[\partial M]$ there is precisely one future cut, (3) $C(p)\cap C(q)$ contains a nonempty open set if and only if $ p=q$.\\ \noindent The proposition immediately implies that all points in the domain of influence of the boundary have at least one cut, past or future, and at most both: points in the causal wedge have both a past and a future cut. It establishes a one-to-one map from light-cone cuts to bulk points. For past cuts, this map covers all of $I^{+}[\partial M]$, while for future cuts it covers all of $I^{-}[\partial M]$. This map does not always cover the entire spacetime since there may exist points without any causal contact with the boundary. 
\subsection{Example: the cuts of AdS}\label{subsec:example} In this section we provide a concrete example of a set of cuts, specifically those of AdS$_{d+1}$. For simplicity, we derive them from symmetries, although they can also be obtained by solving for the null geodesics\footnote{These cuts may also be derived from field theory data, via the prescription in Sec.~\ref{subsec:findingcuts}.}. Setting the AdS length scale to one, AdS can be obtained from the space of unit timelike vectors $P^{a}$ in a vector space with metric of signature $(2,d)$: $P_a P^a = -1$. This is a hyperboloid with closed timelike curves. After finding the cuts for this space, we may pass to the covering space to obtain the usual causally well behaved definition of AdS. Since the identification does not affect the relation between points and their cuts, we will refer to the hyperboloid as AdS. The boundary at infinity is represented by null vectors $\ell^a$ in the vector space up to scaling: $\ell^a \equiv \lambda \ell^a$. The light cone of a point $P^a$ in AdS intersects infinity at the points where $P_a \ell^a = 0$. This can be seen from the fact that if $\ell^a$ is orthogonal to $P^a$, then $P^a + s \ell^a$ with $s\in (0,\infty)$ is a curve in AdS going from $P^a$ to the boundary. This curve is null since its tangent vector $\ell^{a}$ is null. Thus the cut of a point $P^a$ just consists of the orthogonal null vectors. 
To make this more explicit, we introduce coordinates $(T_1, T_2, X^i)$ with $i = 1, \cdots, d$ so AdS is given by \begin{equation}\label{eq:AdS} -T_{1}^{2}-T_{2}^{2} +X_{i} X^{i}=-1, \end{equation} These coordinates are related to AdS global coordinates via the following map: \begin{equation} r^{2}=X_{i}X^{i} , \ \ T_{1}= \sqrt{r^{2}+1} \ \sin t, \ \ T_{2}= \sqrt{r^{2}+1}\ \cos t \label{eq:globalcoord} \end{equation} \noindent where the angular coordinates transform in the obvious way\footnote{The AdS metric in these coordinates takes the familiar form: \begin{equation}ds^{2} = -(r^{2}+1)dt^{2} +(r^{2}+1)^{-1}dr^{2} + r^{2}d\Omega^{2}. \end{equation}}. We will take the boundary at $r\rightarrow \infty$ to be in the conformal frame of the Einstein Static Universe, or the static cylinder, with metric: \begin{equation} \label{eq:ESU} ds^{2}=-dt^{2} + d\Omega^{2}, \end{equation} \noindent where the time and angle coordinates are the same as those of AdS. To connect this with the abstract definition of the cut, let $T_1^a, T_2^a, X_i^a$ be an orthonormal basis corresponding to the above coordinates. To begin, suppose $P^a = \cos t_0 \ T_2^a + \sin t_0\ T^a_1$. This corresponds to the point $t=t_0, r=0$ in AdS. The orthogonal timelike direction is $\xi^a = -\sin t_0\ T_2^a + \cos t_0\ T^a_1$, so $\ell^a$ is the sum of $\xi^a$ and any unit vector in the $X_i^a$ directions. To describe the cut, we note from \eqref{eq:globalcoord} that $\tan t_{\infty} = T_1/T_2$, where $t_{\infty}$ is the value of the time coordinate $t$ at the cut. Since $T_1$ and $T_2$ are just the coefficients of the corresponding basis vectors in $\ell^a$ we have \begin{equation} \tan t_{\infty} = \frac{T_1}{T_2} = - \cot t_0 \end{equation} So the cut is simply given by $t_{\infty} = t_0 \pm \pi/2$, where the plus sign refers to future cuts and the minus sign refers to past cuts. 
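As a quick numerical sanity check of the statement $t_{\infty} = t_0 \pm \pi/2$, one can work directly in the embedding space. The sketch below is our own illustration (the coordinate ordering $(T_1, T_2, X^i)$, the choice $d=3$, and the value of $t_0$ are arbitrary): it builds $P^a$ at $r=0$, forms the null vectors $\ell^a = \pm\xi^a + n^a$, and reads off the boundary time via $t = \arctan(T_1/T_2)$.

```python
import numpy as np

# Embedding-space metric of signature (2, d); coordinates ordered (T1, T2, X1..Xd).
d = 3
eta = np.diag([-1.0, -1.0] + [1.0] * d)

def dot(u, v):
    return u @ eta @ v

t0 = 0.7
# Bulk point at r = 0, global time t0:  P = sin(t0) T1 + cos(t0) T2.
P  = np.concatenate(([np.sin(t0), np.cos(t0)], np.zeros(d)))
xi = np.concatenate(([np.cos(t0), -np.sin(t0)], np.zeros(d)))  # unit timelike, orthogonal to P

n = np.zeros(d); n[0] = 1.0                    # any unit spatial direction
ell_future = xi + np.concatenate(([0.0, 0.0], n))
ell_past = -xi + np.concatenate(([0.0, 0.0], n))

for ell in (ell_future, ell_past):
    assert abs(dot(ell, ell)) < 1e-12          # null
    assert abs(dot(P, ell)) < 1e-12            # orthogonal to P, so it lies on the cut

# Boundary time of the cut: t_inf = atan2(T1, T2) evaluated on ell.
t_future = np.arctan2(ell_future[0], ell_future[1])
t_past = np.arctan2(ell_past[0], ell_past[1])
print(t_future - t0, t_past - t0)              # pi/2 and -pi/2
```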
Of course this result could have been obtained just from spherical symmetry and the fact that light rays take $\pi/2$ coordinate time to get to the boundary. The advantage of this approach is that it yields the cuts for points off the axis just as easily. To see this, let $P^a = \sqrt{1 + r_0^2} \ T_2^a + r_0\ X_1^a$. This corresponds to a point at $t_0=0$ and $r=r_0$ in the direction $X_1$. To find the orthogonal null vectors, we expand $\ell^a = c_1 T_1^a + c_2 T_2^a + d_i X_i^a$. Imposing $P\cdot \ell = 0$ and $\ell\cdot \ell = 0$, and solving for $\ell^{a}$, we obtain \begin{equation} \label{eq:AdScuts} \tan t_{\infty}(\theta) = \frac{c_1}{c_2} = \frac{\sqrt{1+r_0^{2}\sin^{2}\theta}}{r_0 \cos\theta}. \end{equation} where $\theta$ is the angle with the $X^{1}$ axis. This cut is tilted with respect to a cut at constant $t_{\infty}$. The cut for an arbitrary point can be obtained from this one by time translations and rotations, so there is a $(d+1)$-dimensional space of cuts, labeled by $t_0, r_0,$ and the $(d-1)$-directions of the tilt. Since there are no caustics in AdS, these cuts are all smooth. Note that in the limit $r_0\to \infty$ (which corresponds to the bulk point approaching the boundary), the cut reduces to $t_\infty= \pm \theta$; this is just a null cross-section of the boundary. For pure AdS, there is a simple relation between the behavior of the cuts and the global causal relation of the corresponding points\footnote{We say that two points are spacelike separated if there exists no causal curve between them; null separated if there exists a null achronal curve between them but no timelike curve between them; timelike separated if there exists a timelike curve between them.}: $C(p)$ and $C(q)$ do not intersect if and only if $p$ and $q$ are timelike related, $C(p)$ and $C(q)$ intersect at precisely one point if and only if they are null related, $C(p)$ and $C(q)$ intersect at more than one point if and only if $p$ and $q$ are spacelike related. 
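The off-axis cut \eqref{eq:AdScuts} is equally easy to verify numerically. The sketch below (our own illustration; the values of $r_0$ and $\theta$ are arbitrary) imposes $P\cdot\ell = 0$ and $\ell\cdot\ell = 0$ in the embedding space and compares $c_1/c_2$ against the closed-form expression.

```python
import numpy as np

# Embedding-space metric of signature (2, d); coordinates ordered (T1, T2, X1..Xd).
d = 3
eta = np.diag([-1.0, -1.0] + [1.0] * d)
dot = lambda u, v: u @ eta @ v

r0 = 1.3
# Bulk point at t = 0, radius r0 along X1:  P = sqrt(1 + r0^2) T2 + r0 X1.
P = np.concatenate(([0.0, np.sqrt(1 + r0**2), r0], np.zeros(d - 1)))

theta = 0.4                                    # angle from the X1 axis
u = np.zeros(d); u[0], u[1] = np.cos(theta), np.sin(theta)  # unit spatial direction

# Solve P.ell = 0 and ell.ell = 0 for the (T1, T2) components of ell:
c2 = r0 * u[0] / np.sqrt(1 + r0**2)            # from P.ell = 0
c1 = np.sqrt(1.0 - c2**2)                      # from ell.ell = 0 (future branch)
ell = np.concatenate(([c1, c2], u))
assert abs(dot(ell, ell)) < 1e-12 and abs(dot(P, ell)) < 1e-12

rhs = np.sqrt(1 + r0**2 * np.sin(theta)**2) / (r0 * np.cos(theta))
assert np.isclose(c1 / c2, rhs)                # reproduces tan t_inf(theta)
```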
These results are intuitively clear and follow from the results in the appendix (and the fact that there are no caustics in AdS). We will see that only some of these results extend to general asymptotically AdS spacetimes. There is an intrinsic way of characterizing these light-cone cuts of AdS. The light cone of a point in AdS is shear-free. Given any smooth spacelike cross-section of the boundary, the shear of the congruence of ingoing orthogonal null geodesics from that cross-section always vanishes asymptotically as ${\cal O}(1/r^2)$. Demanding that the $r^{-2}$ contribution to the shear vanishes yields a differential equation for the cross-sections with solutions given precisely by \eqref{eq:AdScuts}. In the case of asymptotically flat spacetimes, this was the starting point for H-space: the set of asymptotically shear-free cuts of null infinity (see e.g.~\cite{New76, HanNewPen} and \cite{HSpaceRev} for a review). However, this characterization is less useful for us, since in generic asymptotically AdS spacetimes, light-cone cuts are not asymptotically shear-free in this sense. We will discuss another intrinsic way to characterize the light-cone cuts in the next section; instead of geometric constructs such as shear, we will make use of a tool not available to~\cite{New76, HanNewPen}: the dual field theory. \section{Reconstruction of the Conformal Metric}\label{sec:confmetric} In this section we will first consider bulk points in the past or future of the conformal boundary and show how to reconstruct the bulk conformal metric given the set of cuts. We can work with either past or future cuts, but will focus on past cuts for definiteness. We will then describe a way of constructing the light-cone cuts associated with (most) points in the causal wedge from field theory data. There are indications that more general procedures may exist. 
\subsection{Reconstructing the conformal metric from the cuts} \label{subsec:reconcausal} Under the assumption that we have been provided with the set of light-cone cuts, past or future, we would like to determine the conformal metric. Below we do this in two steps: the first is a result showing that the conformal metric at a point is fixed by any open subset of the point's light cone; the second is a prescription, making use of a result proven in the appendix, for constructing the conformal metric from the set of cut locations. The conformal metric at a point is simply the metric up to an overall positive constant: $g_{\mu\nu} \equiv \lambda^2 g_{\mu\nu}$. Clearly, two conformally related metrics have identical light cones. Conversely, the conformal metric at a point is uniquely fixed by the light cone at that point. Since an open set of a cut fixes the entire cut, an even stronger result is true: the conformal metric is uniquely fixed by any open subset of the light cone. This result was proven in~\cite{KozNew83}; here we give a different argument which will be useful below in reconstructing the metric from the cuts. In $d+1$ dimensions, we may take any $d+1$ linearly independent past- or future-pointing null vectors $\ell_{i}$ at a point $p$, and view them as a basis of the tangent space at $p$. This is always possible, since there is no vector orthogonal to all of the null vectors. The vectors $\ell_i$ may lie anywhere on the lightcone of $p$: we need only an open subset of the lightcone to find this basis. By definition, the $\ell_{i}$ all have zero norm, but unknown inner products; the conformal metric at $p$ is precisely fixed by these inner products. To determine them, take a new collection of null vectors, $\eta_k$, and expand them in terms of $\ell_i$: \begin{equation} \eta_{k} = \sum\limits_{i} M_{ki}\ell_{i}. 
\end{equation} Each $\eta_{k}$ has zero norm by definition; this yields a set of algebraic equations for the inner products $\ell_i \cdot \ell_j$: \begin{equation}0= \eta_{k}\cdot \eta_{k} = \sum\limits_{i,j} M_{ki}M_{kj} (\ell_{i}\cdot \ell_{j}) \qquad {\rm no\ sum\ on \ }k. \end{equation} While it is not generally true that such equations must have a solution, we are guaranteed a solution precisely because these equations describe a Lorentzian metric which by construction exists. By choosing enough vectors $\eta_k$, we will find a solution which is unique up to an overall constant rescaling of all inner products. This determines the conformal metric at $p$.\footnote{Repeating this construction at each point yields a smooth tensor field, which in particular includes complete information about all of its derivatives.} We will now implement this approach to recover the conformal metric from the cuts. Suppose that we are given the set of past cuts ${\cal M}$. From the proposition, this is a $(d+1)$-dimensional space representing all bulk points in $I^+[\partial M]$. We now define a conformal metric on ${\cal M}$. To do so, we use the following result (proven in the Appendix): \\ \noindent \textbf{Theorem 1:} If $C(p)$ and $C(q)$ intersect at precisely one point, and both cuts are $C^{1}$ at this point, then $p$ and $q$ are null-separated.\\ \noindent The crux of the proof is in the uniqueness of the inward-directed orthogonal null geodesics $\gamma$ from every $C^1$ point of $C(p)$. If $C(p)$ and $C(q)$ are tangent at a regular point of both cuts, $\gamma$ must lie on the boundary of both $J^-(p)$ and $J^-(q)$. This is only possible if $\gamma$ goes through both $p$ and $q$, so the points $p$ and $q$ must be null-separated. The result proved in the Appendix is actually stronger, and shows that there exists a cut tangent to $C(p)$ for every bulk point along an achronal null geodesic from $p$ to the boundary. 
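The linear-algebra step at the start of this section can be made concrete. The following toy sketch (our own illustration, using flat $3+1$-dimensional space as the "unknown" metric) picks four null basis vectors $\ell_i$, expands extra null vectors $\eta_k$ in that basis, and recovers the inner products $\ell_i\cdot\ell_j$, and hence the conformal metric, from the resulting homogeneous system.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])           # "unknown" metric to recover up to scale

def random_null():
    n = rng.normal(size=3); n /= np.linalg.norm(n)
    return np.concatenate(([1.0], n))          # future-pointing null vector (1, n)

# d+1 = 4 null basis vectors l_i, plus extra null vectors eta_k.
L = np.column_stack([random_null() for _ in range(4)])
extras = [random_null() for _ in range(6)]

pairs = list(combinations(range(4), 2))        # unknowns: l_i . l_j for i < j (l_i . l_i = 0)
rows = []
for v in extras:
    M = np.linalg.solve(L, v)                  # expansion coefficients of eta_k in the basis
    rows.append([2 * M[i] * M[j] for (i, j) in pairs])

# The null space of the homogeneous system gives the inner products up to overall scale.
_, s, Vt = np.linalg.svd(np.array(rows))
g_rec = Vt[-1]

g_true = np.array([L[:, i] @ eta @ L[:, j] for (i, j) in pairs])
ratio = g_true / g_rec
assert np.allclose(ratio, ratio[0])            # recovered Gram matrix matches up to one constant
```

As expected, the equations fix the Gram matrix of the null basis, and therefore the conformal metric, only up to a single overall rescaling.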
Theorem 1 endows the space of cuts with a natural Lorentzian structure, inherited from the bulk Lorentzian structure: given a point $P$ in ${\cal M}$, i.e., a cut $C(p)$, the set of all other cuts which are tangent to $C(p)$ at a regular point $x$ forms a null curve in ${\cal M}$; this null curve precisely corresponds to the unique null bulk generator shared by all the cuts which are tangent at the regular point $x$. See Fig.~\ref{fig:nullgeodesic} for an illustration. More generally, the set of all cuts which are tangent to $C(p)$ at any regular point forms a null hypersurface in ${\cal M}$. The tangent vectors to this null hypersurface at $P$ form (a part of) the light cone of $P$, see Fig.~\ref{fig:spaceofcuts}, just as the unique null generators fired from all regular points of $C(p)$ form a subset of the bulk lightcone of $p$. To reconstruct the bulk conformal metric at $p$, we need only recover it at $P$. \begin{figure}[t] \centering \includegraphics[width=8cm]{NullGeodesicCuts.pdf} \caption{Cuts corresponding to a null bulk geodesic. The cuts are all tangent at the point $x$ at which the null geodesic reaches the boundary.} \label{fig:nullgeodesic} \end{figure} \begin{figure}[t] \centering \subfigure[]{ \includegraphics[width=6cm]{SpaceOfCutsLeft.pdf} \label{subfig:crossing} } \hspace{1cm} \subfigure[]{ \includegraphics[width=6cm]{SpaceOfCutsRight.pdf} \label{subfig:sandwich} } \caption{(a): $\partial J^{-}(p)$ will generally have caustics and some isolated $C^{0}$ points on the cut $C(p)$. At any regular point $x$, there is a null achronal geodesic $\gamma$ from $p$ all the way to $C(p)$. 
(b): In the space of cuts ${\cal M}$, a point $P$ corresponds to a cut $C(p)$; the null curve $\gamma$ of $\partial J^{-}(p)$ corresponds to a null curve $\gamma$, where points $Q$ on $\gamma$ are cuts $C(q)$ which are tangent to $C(p)$ at $x$.} \label{fig:spaceofcuts} \end{figure} By the reasoning above, a set of $d+1$ regular points of $C(p)$, with cuts tangent to each of the regular points, maps to a set of $d+1$ linearly independent null vectors on the lightcone of $P$. As argued in the beginning of this section, this uniquely determines the conformal metric at $P$, and therefore immediately also the conformal metric at $p$\footnote{Note that generically, the entire lightcone cannot be recovered in this way due to $C^0$ points on the cut arising from caustics in the bulk, but that is not required to fix the conformal metric at $P$.}; the additional null vectors $\eta_{k}$ required to determine the conformal metric at $P$ may be obtained from cuts which are tangent to $C(p)$ at other points. We emphasize that given the past cuts, we can recover the conformal metric at all points in $I^+[\partial M]$ this way, even points inside black holes. The fact that $q$ might be on a caustic of $\partial I^-(p)$ for some point $p$ is not an obstacle to finding the conformal metric at $q$. \subsection{Finding the light-cone cuts}\label{subsec:findingcuts} The distinguishing characteristic of a cut $C(p)$ is that every point on $C(p)$ is null-related to the same point $p$ in the bulk. More generally, $C^{-}(p)$ and $C^{+}(p)$ are the past and future cuts of a bulk point $p$ whenever there exist null geodesics from every point on $C^{-}(p), C^{+}(p)$ to $p$. A similar structure was used recently to identify a bulk point in $AdS_{d+1}$ from $(d+2)$-point time-ordered Lorentzian correlation functions in the dual field theory~\cite{MalSimZhi}. 
In general, $n$-point functions have divergences when all points are null separated from an interaction vertex and energy-momentum conservation holds at the vertex. In~\cite{MalSimZhi}, it was shown that there are cases where the correlators diverge due to an interaction vertex in the bulk that was null separated from the boundary points, but there is no analogous vertex point on the boundary to explain the divergence. Such singularities were termed ``bulk-point singularities''. These bulk-point singularities can be used to uniquely specify a bulk point. In $d+1$ bulk dimensions, there is a $d$-dimensional subspace of points which are connected to a point on the boundary by a future-directed null geodesic. Given $d+1$ boundary points, then, these null subspaces can intersect at most at a point. Conservation of energy momentum at the vertex requires one more boundary point, so singularities of $(d+2)$-point correlators can be used to fix a bulk point. Singularities of higher point correlators can also be used to fix bulk points. Although the analysis in~\cite{MalSimZhi} was restricted to vacuum correlation functions dual to pure AdS, similar behavior should occur for correlation functions in excited states corresponding to asymptotically AdS spacetimes \cite{MalPC}. In fact, the generalization to asymptotically AdS spacetimes has an advantage. In higher dimensional $AdS_{d+1}$, it can be difficult to show that the singularity in the correlator is in fact due to a bulk-point singularity and not an ordinary null field theory singularity. It is crucial for our construction to work that the singularity be sourced by a bulk null separation. To construct bulk points from correlation function singularities, we must show that there is no boundary point which is null related to all $d+2$ points in the correlation function. In \cite{MalSimZhi}, this was shown for $d = 2,3$. 
This is, in fact, simpler to show in any dimension away from pure AdS: when the spacetime is not exactly AdS, null geodesics take longer to pass through the bulk than they do on the boundary~\cite{GaoWal00}. Thus, the bulk-point singularities away from AdS will occur when the boundary points are separated in time by more than the light travel time on the boundary, which immediately precludes the possibility of a null related boundary vertex point\footnote{This relies on gravitational delay in the bulk, which is true under assumption of the Averaged Null Curvature Condition, a condition expected to be obeyed by low energy supergravity limits of string theory.}. Using these bulk-point singularities, we can identify the past and future cuts of most bulk points in the causal wedge. To determine the cut, we want to move some of the boundary points while keeping the vertex point fixed in the bulk. In order to keep energy momentum conserved at the vertex we need one more boundary point. So we start by taking two boundary points $z_1$ and $z_2$ which are spacelike related to one another and $d+1$ other boundary points $x_{1},\cdots ,x_{d+1}$ where the $x_{i}$ are to the future of $z_{1}, z_{2}$ and spacelike related to each other (see Fig.~\ref{fig:findingcuts2}). Consider deforming the set of points $(z_1, z_2, x_j)$ until the $(d+3)$-point correlation function diverges: \begin{equation} \label{eq:divergence} \left \langle \mathcal{O}(z_1)\mathcal{O}(z_2) \mathcal{O}(x_{1})\cdots \mathcal{O}(x_{d+1})\right\rangle\rightarrow \infty, \end{equation} The divergence in Eq.~\ref{eq:divergence} will occur whenever the points $(z_1, z_2, x_j)$ are all null-related to a vertex $y$ in the bulk or the boundary, and high energy test quanta fired from $z_1, z_2$ scatter at $y$ to the $x_{i}$, where energy and momentum are conserved at $y$. 
In terms of the global time on the static cylinder, any two points on the boundary which are null separated must have time separation less than $\pi$. By taking $x_i$ to be more than a time $\pi$ to the future of $z_1$ and $z_2$, we can ensure that they will not be null related to a point on the boundary. Note that there cannot be more than one vertex in the bulk: energy momentum conservation requires at least two incoming and two outgoing quanta at each vertex, and there are only two incoming quanta. We now fix the points ${x_i}$, which fixes a point $y$ in the bulk, and vary $z_1$ and $z_2$ (requiring that they remain on a spacelike boundary slice). See Fig.~\ref{fig:findingcuts2} for an illustration. The collection of all points $z_1, z_2$ satisfying Eq.~\ref{eq:divergence} will trace out the cut $C^{-}(y)$. Here, however, we must issue a caveat: since energy and momentum must be conserved at the vertex, we may not recover the entire cut this way. Generically, the existence of caustics means that only part of the light cone of $y$ is connected to $C^{-}(y)$ by null geodesics. Let us call this subset of the light cone $N$. We can recover parts of the cut corresponding to pairs of points in $N$ whenever energy and momentum can be conserved at $y$.\footnote{It is possible that more of the cut can be recovered by considering singularities in higher point correlators.} Fortunately, this restriction does not affect our bulk reconstruction. Since we are fixing the $d+1$ future points, we still get a one-to-one map between our partial cuts and bulk points. Furthermore, we can still construct the nearby cuts that are tangent at $C^1$ points, which determines the conformal metric at that point. By reversing the above construction, we may similarly construct the future cut $C^{+}(y)$. 
\begin{figure}[t] \centering \includegraphics[width=8cm]{findingcuts.pdf} \caption{A Landau diagram of a bulk-point singularity in a 7-point function: $z_1$, $z_2$, and the $x_{i}$ are all null-separated from a bulk point $y$ so that high energy test particles from $z_1, z_2$ scatter at $y$, conserving energy and momentum. To find the past cut of $y$, we vary $z_1, z_2$ in a spatial direction while keeping the 7-point function singular.} \label{fig:findingcuts2} \end{figure} If $N$ is too small, then we cannot recover any of the cut, and our prescription for obtaining cuts will fail. This happens, for example, close to a black hole. Consider a static spherical AdS black hole for simplicity, and let $r_n$ denote the radius of the closed null geodesic around the black hole. Then for $r<r_n$, most of the light rays fall into the black hole. Less than half the light cone makes it out to the boundary, and our construction is insufficient. For $r $ slightly larger than $ r_n$, more than half of the light cone makes it out to $\partial M$, but not all of the null geodesics stay on the boundary of $J^-(p)$ and reach $C(p)$. This should not be a problem since we expect singularities in the correlators for all boundary points which are related to the vertex by a null geodesic, even if the null geodesic is not achronal. Thus we should be able to recover part of the cut for all $r > r_n$. However, points with $r<r_n$ constitute a ``shadow region" around a black hole where we cannot recover the cuts from field theory correlators in this way. The divergence in the correlation function should be present in the large $N$ and large $\lambda$ limit of any holographic field theory (potentially including perturbative corrections in $1/N$ and $1/\lambda$). These divergences are expected to disappear at finite $N$ and $\lambda$, in agreement with the intuition that nonperturbative stringy and quantum effects fuzz out bulk points. 
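For concreteness, the radius $r_n$ of the circular null orbit can be computed explicitly for a Schwarzschild-AdS$_4$ black hole with $f(r) = 1 - 2M/r + r^2$ (AdS radius set to one; the choice of $M$ and the bisection solver below are our own illustration). Circular null orbits of $ds^2 = -f\,dt^2 + f^{-1}dr^2 + r^2 d\Omega^2$ sit where $(f/r^2)' = 0$, i.e. $r f'(r) = 2f(r)$, which here reduces to $6M/r - 2 = 0$, so $r_n = 3M$, independent of the cosmological constant in four dimensions.

```python
# Photon-orbit radius r_n for Schwarzschild-AdS_4, f(r) = 1 - 2M/r + r^2.
M = 0.8

def orbit_condition(r):
    f = 1.0 - 2.0 * M / r + r * r
    fp = 2.0 * M / (r * r) + 2.0 * r
    return r * fp - 2.0 * f          # simplifies to 6M/r - 2

# Simple bisection: the condition is positive at small r and negative at large r.
lo, hi = 1e-3, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if orbit_condition(mid) > 0.0:
        lo = mid
    else:
        hi = mid
r_n = 0.5 * (lo + hi)
# r_n converges to 3M = 2.4, the same radius as in the asymptotically flat case.
```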
We emphasize that we have described just one way of obtaining the cuts from field theory data; other procedures may exist. If the bulk has certain symmetries, for instance, there will be a distinguished set of cuts that correspond to fixed points of these symmetries. As an example, consider the field theory dual to a spherically symmetric collapse in the bulk. The dual field theory undergoes thermalization, and the field theory stress tensor is spherically symmetric. There are preferred cuts on the boundary which are invariant under this spherical symmetry. These cuts are precisely the past cuts of bulk points at the origin, which are fixed points of the symmetry. Such cuts include points that lie inside the black hole event horizon. This approach admittedly does not yield a complete reconstruction of the conformal metric inside the black hole interior, and it requires the strong assumption of spherical symmetry, but it indicates that there may be additional ways of obtaining cuts. \section{Some Global Causal Relations from Cuts}\label{sec:globalrecon} The set of cuts contains more information about the causal relations between bulk points than we used in Sec.~\ref{subsec:reconcausal}, where the focus was on null separations. In this section we describe some further results. We will consider bulk points in the causal wedge of the entire boundary, and assume that we are given the complete set of past and future cuts of these points. We note that some of the results below only use one set of cuts; those results hold everywhere within the boundary's domain of influence. Let $\{C^{+}(p), C^{-}(p)\}, \ \{C^{+}(q), C^{-}(q)\}$ be two distinct pairs of cuts, with corresponding bulk points $p$, $q$. 
We start with a definition:\\ \noindent \textbf{Definition:} $C(p)$ and $C(q)$ {\it cross} if $C(p)\cap I^{+}(C(q))\neq \varnothing$ and $C(q)\cap I^{+}(C(p))\neq \varnothing$.\\ \noindent This is the case when the intersection $C(p)\cap C(q)$ divides $C(p)$ and $C(q)$ each into two or more connected (nonempty) components. The following result (which is proved in the Appendix) tells us $p$ and $q$ are spacelike separated under the following conditions:\\ \begin{figure}[t!] \centering \subfigure[]{ \includegraphics[width=5cm]{crossing.pdf} \label{subfig:crossing} } \hspace{1cm} \subfigure[]{ \includegraphics[width=5cm]{sandwich.pdf} \label{subfig:sandwich} } \caption{Two points $p$ and $q$ are spacelike separated if \subref{subfig:crossing} their cuts cross, or \subref{subfig:sandwich} $C^\pm(q)$ both lie between $C^+(p)$ and $C^-(p)$.} \label{fig:spacelike} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=5.5cm]{BagOfGold.pdf} \caption{A bag of gold geometry, where the region behind the horizon contains part of de Sitter space~\cite{FreHub05}. Points in the dS region are in the causal shadow: they have no corresponding boundary cuts. Within the causal wedge, points that are spacelike separated at some large distance will feature the more unusual cut configuration of Fig.~\ref{subfig:sandwich}.} \label{fig:bagofgold} \end{figure} \noindent \textbf{Theorem 2:} $p$ and $q$ are spacelike separated if one of the following is true: \begin{enumerate} \item $C(p)$ and $C(q)$ cross, where $C(p)$ and $C(q)$ are either both past or both future cuts (Fig.~\ref{subfig:crossing}). \item $C^{\pm}(q)$ both lie between $C^{+}(p)$ and $C^{-}(p)$ (Fig.~\ref{subfig:sandwich}). 
\end{enumerate} \begin{figure}[t] \centering \includegraphics[width=5cm]{Timelike.pdf} \caption{A point $q$ is in the future of $p$ whenever $C^-(q)$ lies to the future of $C^+(p)$.} \label{fig:timelike} \end{figure} \noindent Case (1) is in agreement with expectation: we expect that two points are spacelike-separated if their pasts or futures intersect, but are not proper subsets of one another. Case (2) is more unusual, but can arise e.g. when one point is close to a bifurcation surface of a black hole. Consider the bag of gold geometry shown in Fig.~\ref{fig:bagofgold}, as constructed in~\cite{FreHub05}: $p$ clearly has future-directed outgoing null geodesics which reach the boundary at late time. Since the cut must be spacelike, the entire future cut $C^+(p)$ exists at late time. Note that while some null geodesics enter the black hole and do not make it out to infinity, there are others which curve around the horizon and reach the boundary, so $C^+(p)$ is always a complete spacelike slice of $\partial M$, as required by the proposition in section 2. Similarly, there are past-directed null geodesics which reach the boundary at early time, so $C^-(p)$ exists at early time. Any point farther from the horizon, like $q$ in the figure, will have $C^{\pm}(q)$ both lying between $C^{+}(p)$ and $C^{-}(p)$. We now consider timelike separated points in the bulk. There is a simple condition on the cuts which ensures that the bulk points are timelike separated: there is a future-directed timelike path from $p$ to $q$ ($q\in I^{+}(p)$) if $C^{-}(q)$ is in the (chronological) future of $C^{+}(p)$, see Fig~\ref{fig:timelike}. This follows since there is a past-directed causal path from $q$ to $C^{-}(q)$, another from $C^{-}(q)$ to $C^{+}(p)$, and another from $C^{+}(p)$ to $p$. Since this causal path contains a timelike segment from $C^{-}(q)$ to $C^{+}(p)$, $p$ and $q$ are timelike related. This condition is limited to points separated by a sufficiently long time. 
To determine when two points are timelike separated with shorter time differences is more difficult. It is true that if $q\in I^{-}(p)$, then $C^-(q) \subset I^{-}[C^-(p)]$ and the cuts do not intersect. The converse, however, is false: if $\partial J^-(p)$ contains a line of caustics, and $q$ is a point on this line, then there is a null geodesic $\gamma_1$ from $p$ to $q$. These two points are therefore null related. By definition, for any point $r \in C^-(q)$, there is a null geodesic $\gamma_2$ from $q$ to $r$. Combining $\gamma_{2}$ with $\gamma_1$, we obtain a broken null geodesic from $p$ to $r$; this broken null geodesic can always be lengthened by rounding out the corner\footnote{If the two geodesics happen to join smoothly without a corner, $r$ lies on the continuation of $\gamma_1$. Since this geodesic encounters a caustic, $r$ must still be in the past of $p$.}. This means that the entire cut $C^-(q)$ is in the past of $p$, and $C^-(q) \subset I^{-}[C^-(p)]$. See Fig.~\ref{fig:Caustics}. Since these two cuts do not intersect even though the points are null related, we can move $q$ slightly to the future or past and get nonintersecting cuts for spacelike, timelike or null separated points. \begin{figure}[t] \centering \includegraphics[width=6cm]{CausticsProof} \caption{When there are caustics, $C(q)$ can lie in the past of $C(p)$ even though $p$ and $q$ are null related.} \label{fig:Caustics} \end{figure} Finally, we note that some spacetimes have points which have no causal contact at all with the boundary (e.g. points inside the bag of gold geometry~\cite{FreHub05} shown in Fig.~\ref{fig:bagofgold}, or points deep in the throat of the shockwave geometries of~\cite{SheSta14}). It is not clear at this time if there is any generalization of the notion of a light-cone cut that would apply to these points. \section{Discussion}\label{sec:discussion} We have presented a new approach for reconstructing the bulk spacetime from its dual field theory. 
Given a $d$-dimensional field theory, there is a $(d+1)$-dimensional space of light-cone cuts which represent the points of the holographic spacetime. The cuts can be determined from singularities in correlation functions, and the bulk metric (up to a conformal rescaling) is simply recovered from the location of the cuts. This procedure works for most points in both past and future causal contact with the boundary, i.e. the causal wedge of the entire boundary. (There is a ``shadow region" around e.g. a static black hole where our procedure fails to determine the cuts.) There are many open questions related to possible extensions of these results. We have shown that the conformal metric can be recovered from cuts associated with any points that are within the domain of influence of the boundary. The problem now is to find new ways to recover the cuts from field theory data. We would like to determine cuts in the shadow region around black holes as well as points outside the causal wedge. As discussed in Sec.~\ref{subsec:findingcuts}, symmetries can facilitate finding such cuts. This suggests that there may be a more general prescription for obtaining the light-cone cuts in such spacetimes, in keeping with recent arguments that bulk reconstruction is possible beyond the causal wedge, all the way to the entanglement wedge~\cite{CzeKar12, Wal12, HeaHub14, JafSuh14, JafLew15, DonHar16}. Another important extension is the determination of the conformal factor from boundary data. In cases when the conformal factor is analytic, we may easily obtain the conformal factor from a Fefferman-Graham expansion near the boundary, where the free coefficients are fixed by field theory expectation values. In general, any analytic spacetime metric can be obtained in this way, but we emphasize that this construction is more general: the conformal metric need not be analytic; only the conformal factor must be. 
We have focused on the metric, but it is also interesting to ask whether other bulk fields can be recovered in a natural way from light-cone cuts. One possible direction to explore is suggested by the similarity between our construction and twistor theory. The latter involves considering the space of all null geodesics in Minkowski space, and identifying a spacetime point by the sphere of null geodesics passing through it. Spacetime fields are then encoded in certain singular holomorphic functions on (complexified) twistor space. Possibly some equally elegant prescription for describing spacetime fields exists on the space of light-cone cuts. Other extensions include the generalization to spacetimes where the boundary is not connected, such as the two-sided black hole. It is plausible that many of the proofs would carry over to such cases, but we have not yet shown this rigorously. There are also hints of connections between light-cone cuts and bulk singularities, which should be explored further. The existence of past cuts which have no corresponding future cut (or vice versa) is an indication that some future-directed (past-directed) null geodesics never reach the asymptotic boundary. Generically this indicates null geodesic incompleteness in the bulk, although exceptions may exist (e.g. null geodesics trapped in orbit). We emphasize that our construction is entirely covariant, and that proofs of the necessary results rely exclusively on causal structure and continuity arguments. No assumptions have been made regarding the bulk equations of motion or matter content besides those which follow from field theory causality. This naturally raises the question of which, if any, of our results remain valid when the bulk undergoes quantum and stringy corrections. The reconstruction procedure remains valid so long as there is a well-defined notion of bulk causal structure. This is true even with perturbative quantum ($1/N$ ) or stringy ($1/\lambda$) corrections. 
The work of~\cite{MalSimZhi} suggests that bulk-point singularities exist perturbatively, so the procedure of Sec.~\ref{subsec:findingcuts} should also work when perturbative corrections are included. The result would be the expectation value of the conformal metric, rather than a metric operator. Nonperturbative quantum physics in the bulk of course remains mysterious, and it is not clear that there is a good notion of causality. Certainly we do not expect that there is a sharply defined notion of points and distances. In particular,~\cite{MalSimZhi} showed that bulk-point singularities vanish nonperturbatively, so the correlation function prescription of Sec.~\ref{subsec:findingcuts} will no longer work at finite $\lambda, N$. \begin{figure}[t] \centering \includegraphics[width=8cm]{RegionSubregion.pdf} \caption{The conformal metric of a region deep in the bulk is contained in a time strip of the boundary consisting of cuts of points in the bulk region.} \label{fig:regionsubregion} \end{figure} Finally, our approach to bulk reconstruction suggests a new type of subregion/subregion duality. This phrase is usually interpreted as meaning that a region of the boundary such as a causal diamond is dual to a bulk region anchored on the asymptotic causal diamond~\cite{BouFre12, BouLei12, CzeKar12, HubRan12}. By considering light-cone cuts, one sees that a time strip of the boundary can describe a region in the domain of influence deep in the bulk, as illustrated in Fig.~\ref{fig:regionsubregion}. (To obtain the cuts from the correlation functions, one would need two time strips.) It would be interesting to investigate this different formulation of subregion/subregion duality in more depth. \end{spacing} \section*{Acknowledgements} It is a pleasure to thank S. Fischetti, D. Garfinkle, D. Harlow, S. Hartnoll, J. Maldacena, D. Marolf, and A. Wall for discussions. This work was supported in part by NSF grant PHY-1504541. 
The work of NE was also supported by the NSF Graduate Research Fellowship under grant DE-1144085 and by funds from the University of California.
\section{Introduction}\label{sec:intro} Many methods in statistics, machine learning, and applied mathematics require the generation of samples from a certain target distribution. For example, in Bayesian statistics, the most crucial step is to sample from the posterior distribution of the unknown parameters. In this case, the target distribution is often partially known without the scaling constant. Numerical integration is another important tool used in many computational solutions in finance, engineering, science, etc. It approximates the multidimensional integral $I=\mathbb{E}_{\bm X\sim \mu}[f(\bm X)]=\int_{\Omega} f(\bm x) \mu(\dd \bm x)$ by the sample mean $\hat{I}_N=\frac{1}{N}\sum_{i=1}^N f(\bm x_i)$, where $f(\bm x)$ is the integrand, $\mu$ is the probability measure with the support region $\Omega$, and the $\bm x_i$'s are i.i.d. samples following the distribution $\mu$. For numerical integration, the target distribution is fully specified. Statistical design of experiments is also related to this area. One such instance is the uniform space-filling design \citep{fang2000uniform}, in which the design points should approximate the uniform distribution. Broadly speaking, many supervised and unsupervised learning tasks involve the problem of generating samples from a distribution. In this case, however, the target distribution is the empirical distribution of the observed data. Among these tasks, generative learning models \citep{harshvardhan2020comprehensive} have gained a lot of attention and popularity due to the wide application of generative adversarial networks (or GANs) \citep{creswell2018generative,goodfellow2014generative} and variational autoencoders (or VAEs) \citep{kingma2013auto}. The essential task of generative learning is to generate new samples from the empirical distribution of training data in a parametric or nonparametric fashion.
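The sample-mean estimator above takes only a few lines of code. Below is a minimal Python sketch (the function names are ours, not from any library), estimating $\mathbb{E}[X^2]=1$ for $X\sim\mathcal{N}(0,1)$; the error decays at the usual $O(1/\sqrt{N})$ Monte Carlo rate.

```python
import random

def mc_integrate(f, sampler, n):
    """Approximate I = E[f(X)] by the sample mean over n i.i.d. draws from `sampler`."""
    return sum(f(sampler()) for _ in range(n)) / n

random.seed(0)
# E[X^2] = Var(X) = 1 for X ~ N(0, 1).
estimate = mc_integrate(lambda x: x * x, lambda: random.gauss(0.0, 1.0), 100_000)
```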
In recent decades, variational inference (VI) has become an important and popular tool in machine learning, statistics, applied mathematics \citep{jordan1999introduction,WibisonoE7351,blei2017variational,mnih2016variational, gorbach2018scalable}, etc. In short, the main goal of a VI method is to generate samples to approximate a target distribution. Naturally, VI is strongly tied to these aforementioned research areas. First and foremost, variational Bayesian inference \citep{fox2012tutorial,blei2017variational} is an alternative to the classic MCMC approach to approximate a posterior distribution. Compared with classic MCMC, VI methods are less computationally intensive and thus more suitable for analyzing large datasets, and they can be used whenever there is a need to explore many models \citep{blei2006variational}. The core idea of a VI approach is to convert the inference problem into an optimization problem by minimizing a certain dissimilarity functional that measures the difference between any distribution and the posterior distribution. Its drawback, compared to MCMC, is that the minimal solution is not guaranteed to converge to the posterior distribution, which depends on many aspects including the distribution family, i.e., the feasible region of the minimization. Even so, its computational advantage has propelled the development of many VI-based supervised and unsupervised learning methods, such as Bayesian neural networks \citep{grave2011practical,welling2017multiplicative, wu2019deterministic,shridhar2019comprehensive}, Gaussian process models \citep{king2006fast, nguyen2013efficient, nguyen2014automated, shetha2015sparse, damianou2016variational, cheng2017variational}, and generative learning models \citep{kingma2014semi,hu2018unifying}. In this paper, we propose a new variational inference approach by minimizing the kernel discrepancy via the energetic variational approach \citep{wang2021particle}.
Essentially, we generate samples, or \emph{particles} in the more conventional VI terminology, to approximate various target distributions that are fully specified, partially specified up to the normalizing constant, or empirically available from data. In Quasi-Monte Carlo (QMC), a low-discrepancy sequence is a sequence that leads to a small kernel discrepancy value so as to approximate the uniform distribution, such as the Sobol sequences \citep{sobol1976uniformly}. Usually, such sequences are computed sequentially following an explicit formula \citep{caflisch1998monte}. Inspired by the low-discrepancy sequence, we name the particles generated by the proposed variational approach \emph{low-discrepancy points}. We use the word ``points'' instead of ``sequence'' to indicate that the points are not generated sequentially and are not just for the uniform distribution. \subsection{Related Works}\label{sub:related} As mentioned above, the core idea of VI is to minimize a user-specified dissimilarity functional that measures the difference between two distributions. Many dissimilarity functionals, such as the Kullback–Leibler (KL-)divergence and the more general $f$-divergence \citep{csiszar2004information,zhang2019variational}, the Wasserstein distance \citep{villani2021topics}, the kernel Stein discrepancy (KSD) \citep{liu2016kernelized, chen2018stein}, and the more general kernel discrepancy, have been used in the literature. If the target distribution is known up to the intractable normalizing constant, the KL-divergence is commonly used \citep{liu2016stein, blei2017variational, ma2019sampling, heng2021gibbs}. For example, the Langevin Monte Carlo (LMC) \citep{welling2011bayesian,cheng2018underdamped,bernton2018langevin} and the Stein Variational Gradient Descent (SVGD) \citep{liu2016stein} can be considered as discretizations of the Wasserstein gradient flow \citep{jordan1998variational} of the KL-divergence.
However, the KL-divergence is only suitable for target distributions whose density functions take the form $\frac{1}{Z} \exp(-V({\bm x}))$. Moreover, the KL-divergence based algorithms require repeated evaluation of the gradient of the target distribution, which can be computationally costly if the target distribution is complicated to compute. Kernel discrepancy is another popular dissimilarity functional. In machine learning, kernel discrepancy is better known as \emph{Maximum Mean Discrepancy} or \emph{MMD}. It is suited to cases in which the target distribution is compactly supported or the target is only known in the form of training data \citep{li2015generative, li2017mmd}. Besides, minimizing MMD does not require the repeated evaluation of the density of the target distribution. For these reasons, we choose the kernel discrepancy or MMD as the objective functional. We defer the detailed review of kernel discrepancy/MMD and its related literature to Section \ref{sec:background}. Another important aspect of a VI approach is the minimization method. As reviewed by \cite{blei2017variational}, the complexity of the minimization is largely decided by the distribution family $\mathcal{Q}$, i.e., the set of feasible distributions to approximate the target distribution. It can be a family of parametric distributions. If an independence assumption is properly imposed, the mean-field approach and coordinate descent can be used \citep{blei2017variational} for the minimization. Sometimes, the parametric distribution is too restrictive, and thus flow-based VI methods have been developed, in which $\mathcal{Q}$ consists of distributions obtained by a series of smooth transformations from a tractable initial reference distribution.
Examples include normalizing flow VI methods \citep{rezende2015variational, kingma2016improved, salman2018deep} and particle-based VI methods (ParVIs) \citep{liu2016stein, liu2017stein, liu2018riemannian, chen2018unified, liu2019understanding, chen2019projected,wang2021particle}. The proposed approach belongs to the ParVIs category. Among all the ParVIs, SVGD \citep{liu2017stein} is one of the most popular early works, and we choose it for comparison. \subsection{Our Contributions}\label{sub:contribution} In this paper, we propose a deterministic method of generating a set of low-discrepancy points that minimizes the kernel discrepancy via the general energetic variational inference (EVI) framework \citep{wang2021particle}. Compared to some existing works that also minimize MMD, the proposed approach applies to many scenarios, including the cases when the target distribution is fully known, partially known up to the normalizing constant, or empirically given by the data, whereas most existing MMD methods \citep{gretton2012kernel,li2015generative,liu2016kernelized,li2017mmd,mak2018support,cheng2021neural,hofert2021quasi} focus on the two-sample problem, in which the target distribution is not given but only training data are available. This explains why most of these methods are applied to generative learning models. Another contribution is the combination of the EVI framework with the MMD functional. As shown in Section \ref{sec:method} and discussed in Section \ref{sec:con}, EVI transforms the minimization problem into a dynamic system, which can be solved by many different numerical schemes. Due to limited space, we only choose the simplest explicit Euler scheme for illustration. The rest of the paper is organized as follows. Section \ref{sec:background} gives the necessary background on the kernel discrepancy and the EVI framework.
In Section \ref{sec:method}, we apply the general EVI framework to minimize the kernel discrepancy and obtain a dynamic ODE system to update the particles iteratively. Using the explicit Euler numerical scheme to solve this ODE system, we introduce the EVI-MMD algorithm. In Section \ref{sec:num}, three numerical examples are used to compare the proposed approach with some competitors. We conclude the paper with a discussion on the extension of the proposed approach in Section \ref{sec:con}. \section{Background}\label{sec:background} We first review the concept of kernel discrepancy, which is better known as MMD in the machine learning literature, and then explain the EVI framework. The two combined are the foundation of the proposed low-discrepancy points generation method. \subsection{Kernel Discrepancy or MMD}\label{sub:mmd} Before its wide recognition in machine learning as MMD, kernel discrepancy has been an important concept in the QMC literature and was promoted as a goodness-of-fit statistic and a quality measure for statistical experimental design by many works in the 1990s and early 2000s, such as \cite{hickernell1998generalized,hickernell1999goodness,fang2000uniform,fang2000miscellanea,fang2002centered,hickernell2002uniform}. One of the main reasons that kernel discrepancy is so influential in so many different areas is that it can be interpreted in different ways. \cite{hickernell2016trio} and \cite{li2020transformed} explained \emph{three identities} of kernel discrepancy. First, it can be considered as a norm on a Hilbert space of measures, which has to include the Dirac measure. Second, it is commonly used as a deterministic cubature error bound for Monte Carlo methods. Third, it is the root mean squared cubature error, where the kernel function is also the covariance function for a stochastic process. Here we review it using the second identity and then generalize it and connect it with MMD.
Let $\Omega\subset \mathbb{R}^d$ be the domain of a probability measure $\mu$, which has density $\rho(\bm x)$ and cumulative distribution function $F(\bm x)$. The three concepts, measure, density, and CDF, are used interchangeably in the rest of the paper to refer to the distribution. Let $(\mathcal{H}, \ip[\mathcal{H}]{\cdot}{\cdot})$ be a reproducing kernel Hilbert space (RKHS) of functions $f:\Omega \rightarrow \mathbb{R}$. By definition, the reproducing kernel, $K$, is the unique function defined on $\Omega \times \Omega$ with the properties that $K(\cdot, \bm x)\in \mathcal{H}$ for any $\bm x \in \Omega$ and $f(\bm x)=\ip[\mathcal{H}]{K(\cdot,\bm x)}{f}$. The second property implies that $K$ reproduces function values via the inner product. It can be verified that $K$ is symmetric in its arguments and positive definite. A cubature method approximates the integral $I=\int_{\Omega} f(\bm x)\rho(\bm x)\dd \bm x=\mathbb{E}_{\bm X \sim \mu}[f(\bm X)]$ of an $f\in \mathcal{H}$ by the sample mean \[\hat{I}_N=\frac{1}{N}\sum_{i=1}^N f(\bm x_i),\quad \text{where } \bm x_i \overset{\text{iid}}{\sim} F.\] Let $\mathcal{X}=\{\bm x_i\}_{i=1}^N$ be the set of the i.i.d. samples following the distribution $F(\bm x)$. To measure the quality of the approximation, define the cubature error as \[ \text{err}(f,\mathcal{X})=I-\hat{I}_N=\int_{\Omega} f(\bm x)\rho(\bm x)\dd \bm x-\frac{1}{N}\sum_{i=1}^N f(\bm x_i)=\int_{\Omega} f(\bm x)\dd [F(\bm x)-F_{\mathcal{X}}(\bm x)], \] where $F_{\mathcal{X}}$ is the empirical CDF based on the sample $\mathcal{X}$.
Under modest assumptions on the reproducing kernel, based on the Cauchy-Schwarz inequality, the tight, worst-case cubature error bound is \[ |\text{err}(f, \mathcal{X})|\leq \|f\|_{\mathcal{H}}D(\mathcal{X}, F, K), \] where $\|f\|_{\mathcal{H}}$ is the norm of the function $f$ based on the inner product of the RKHS $\mathcal{H}$ and $D(\mathcal{X}, F, K)$ is the kernel discrepancy whose square is equal to \begin{align}\nonumber D^2(\mathcal{X}, F, K)&=\int_{\Omega \times \Omega} K(\bm x,\bm y) \dd [F(\bm x)-F_{\mathcal{X}}(\bm x)] \dd [F(\bm y)-F_{\mathcal{X}}(\bm y)]\\\label{eq:mmdv1} &=\int_{\Omega\times \Omega} K(\bm x,\bm y)\dd F(\bm x)\dd F(\bm y)-\frac{2}{N}\sum_{i=1}^N\int_{\Omega} K(\bm x_i, \bm y)\dd F(\bm y)+\frac{1}{N^2} \sum_{i,j=1}^NK(\bm x_i, \bm x_j). \end{align} Recall that the kernel discrepancy is also a norm on a Hilbert space of measures, i.e., the first identity mentioned earlier. More specifically, this Hilbert space of measures, denoted by $\mathcal{M}$, is the closure of the pre-Hilbert space, and its inner product is defined as \[ \ip[\mathcal{M}]{\nu_1}{\nu_2}=\int_{\Omega\times \Omega} K(\bm x,\bm y)\nu_1(\dd x) \nu_2(\dd y). \] For the given kernel $K$, the Hilbert space contains all measures $\nu$ such that $\|\nu\|_{\mathcal{M}}$ is finite. Please see \cite{hickernell2016trio} or \cite{li2020transformed} for the detailed definitions of the RKHS $\mathcal{H}$, $\mathcal{M}$, and the derivation of \eqref{eq:mmdv1}. To make sure the cubature error is small for any function $f\in \mathcal{H}$, $\mathcal{X}$ should minimize the discrepancy $D^2(\mathcal{X}, F, K)$, i.e., have a low discrepancy. Based on the above definitions, we formally introduce low-discrepancy points.
\begin{definition}[Low-discrepancy points] For any given measure $\mu \in \mathcal{M}$, the set of low-discrepancy points with respect to $\mu$ is defined as \begin{equation}\label{optimization} \mathcal{X}^*=\{ \bm x_i^*\}_{i=1}^{N}= \mathop{\arg\min}_{\mathcal{X}\subset \Omega } D^2(\mathcal{X},F, K), \end{equation} where $F$ is the CDF corresponding to $\mu$. \end{definition} The kernel discrepancy can be more generally defined by \begin{equation}\label{eq:mmdv2} D^2(\nu_1, \nu_2, K)=\int_{\Omega\times \Omega}K(\bm x, \bm y)[\nu_1(\dd \bm x)-\nu_2(\dd \bm x)][\nu_1(\dd \bm y)-\nu_2(\dd \bm y)], \end{equation} measuring the difference between any $\nu_1, \nu_2\in \mathcal{M}$. \cite{gretton2012kernel} defined the maximum mean discrepancy (MMD) as \[ \text{MMD}(\mathcal{H}, \nu_1,\nu_2)=\sup_{f\in \mathcal{H}}(\mathbb{E}_{\bm x\sim \nu_1}[f(\bm x)]-\mathbb{E}_{\bm y\sim \nu_2}[f(\bm y)]), \] and under the same definition of $\mathcal{H}$ and $\mathcal{M}$, the square of MMD is \[ \text{MMD}^2(\mathcal{H}, \nu_1,\nu_2)=\mathbb{E}_{\bm x,\bm x' \sim \nu_1}[K(\bm x, \bm x')]-2\mathbb{E}_{\bm x\sim \nu_1,\bm y\sim \nu_2}[K(\bm x,\bm y)]+\mathbb{E}_{\bm y\sim \nu_2, \bm y'\sim \nu_2}[K(\bm y, \bm y')], \] which is equivalent to $D^2(\nu_1,\nu_2,K)$ in \eqref{eq:mmdv2}. Therefore, in the rest of the paper, we use kernel discrepancy and MMD interchangeably. Kernel discrepancy has many desirable properties, one of which concerns the convergence of distributions. In fact, $\text{MMD}(\mathcal{H}, \nu_1,\nu_2)=0$ if and only if $\nu_1=\nu_2$, provided that $\Omega$ is a compact metric space and, more importantly, $K$ is a universal kernel and thus $\mathcal{H}$ is a universal RKHS \citep{gretton2012kernel}. Simply put, a universal kernel \citep{micchelli2006universal} is one that is rich enough that $\mathcal{H}$ and $\mathcal{M}$ are sufficiently large. Lower-order polynomial kernels, such as the linear and second-order polynomial kernels, are not universal.
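For two empirical samples, the squared MMD above reduces to plain averages of kernel evaluations. Below is a minimal one-dimensional sketch with the Gaussian kernel (the helper names are ours); it returns exactly zero when the two samples coincide and a strictly positive value when they differ.

```python
import math

def gauss_k(x, y, h):
    """Gaussian kernel K(x, y) = exp(-(x - y)^2 / (2 h^2))."""
    return math.exp(-((x - y) ** 2) / (2.0 * h * h))

def mmd2(xs, ys, h=1.0):
    """Biased (V-statistic) estimate of MMD^2 = E[K(x,x')] - 2 E[K(x,y)] + E[K(y,y')]."""
    kxx = sum(gauss_k(a, b, h) for a in xs for b in xs) / len(xs) ** 2
    kxy = sum(gauss_k(a, b, h) for a in xs for b in ys) / (len(xs) * len(ys))
    kyy = sum(gauss_k(a, b, h) for a in ys for b in ys) / len(ys) ** 2
    return kxx - 2.0 * kxy + kyy

xs = [0.1 * i for i in range(-10, 11)]   # points spread around 0
shifted = [x + 3.0 for x in xs]          # same shape, shifted mean
```

Since the Gaussian kernel is shift-invariant, the two within-sample terms coincide here, and the discrepancy between `xs` and `shifted` is driven entirely by the small cross-sample kernel values.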
MMD induced by the second-order polynomial kernel can distinguish two distributions only in terms of mean and variance, and the linear kernel can only do so in terms of the mean. On the other hand, the Gaussian kernel is universal, and thus the MMD based on it can be used as a metric for measures \citep{micchelli2006universal,fukumizu2007kernel}. Therefore, with a proper kernel, if $D^2(\mathcal{X}_N,F,K)\rightarrow 0$ as $N\rightarrow \infty$, then $F_{\mathcal{X}_N}\rightarrow F$. For fixed $N$, if $D^2(\mathcal{X},F,K)\rightarrow 0$ as $n \rightarrow \infty$ ($n$ denotes the algorithm's iteration index), then $F_{\mathcal{X}}\rightarrow F$. Kernel discrepancy is also related to energy statistics \citep{szekely2013energy} and support points \citep{mak2018support}. If we let $K(\bm x,\bm y)=-\|\bm x-\bm y\|_2$, the kernel discrepancy becomes the energy distance, and the low-discrepancy points become support points. \subsection{Energetic Variational Inference}\label{sub:evi} Motivated by the energetic variational approaches for modeling the dynamics of non-equilibrium thermodynamical systems \citep{Giga2017}, the energetic variational inference (EVI) framework uses a continuous energy-dissipation law to specify the dynamics of minimizing the objective function in machine learning problems. Under the EVI framework, a practical algorithm can be obtained by introducing a suitable discretization to the continuous energy-dissipation law. This idea was introduced and applied to variational inference by \cite{wang2021particle}. It can also be applied to other machine learning problems similarly to \cite{trillos2018bayesian} and \cite{weinan2020machine}. We first introduce the EVI using the continuous formulation.
Let $\bm \phi_t$ be the dynamic flow map $\bm \phi_t: \mathbb{R}^d \rightarrow \mathbb{R}^d$ that continuously transforms the $d$-dimensional distribution from an initial distribution toward the target one; we require the map $\bm \phi_t$ to be smooth and one-to-one. The functional $\mathcal{F}(\bm \phi_t)$ is a user-specified divergence or other machine learning objective functional, such as the KL-divergence in \cite{wang2021particle}. By analogy with a thermodynamic system, $\mathcal{F}(\bm \phi_t)$ is the Helmholtz free energy. Following the First and Second Laws of thermodynamics \citep{Giga2017} (with the kinetic energy set to zero), \begin{equation}\label{eq:energydiss} \frac{\dd}{\dd t} \mathcal{F}(\bm \phi_t) = - \triangle(\bm \phi_t, \dot{\bm \phi}_t), \end{equation} where $\triangle(\bm \phi_t, \dot{\bm \phi}_t)$ is a user-specified functional representing the rate of energy dissipation, and $\dot{\bm \phi}_t$ is the derivative of $\bm \phi_t$ with respect to time $t$. So $\dot{\bm \phi}_t$ can be interpreted as the ``velocity'' of the transformation. Each variational formulation gives a natural path of decreasing the objective functional $\mathcal{F}(\bm \phi_t)$ toward an equilibrium. The dissipation functional should satisfy $\triangle(\bm \phi_t, \dot{\bm \phi}_t)\geq 0$ so that $\mathcal{F}(\bm \phi_t)$ decreases with time. As discussed in \cite{wang2021particle}, there are many ways to specify $\triangle(\bm \phi_t, \dot{\bm \phi}_t)$, and the simplest among them is a quadratic functional in terms of $\dot{\bm \phi}_t$, \[ \triangle(\bm \phi_t,\dot{\bm \phi}_t)=\int_{\Omega_t}\rho_{[\bm \phi_t]}\|\dot{\bm \phi}_t\|_2^2 \dd \bm x, \] where $\rho_{[\bm \phi_t]}$ denotes the pdf of the current distribution, which is the initial distribution transformed by $\bm \phi_t$, $\Omega_t$ is the current support, and $\| {\bm a} \|_2^2 = {\bm a}^\top {\bm a}$ for any ${\bm a} \in \mathbb{R}^d$.
This simple quadratic functional is appealing since it has a simple functional derivative, i.e., \begin{equation*} \frac{\delta \triangle(\bm \phi_t,\dot{\bm \phi}_t)}{\delta \dot{\bm \phi}_t}= 2\rho_{[\bm \phi_t]} \dot{\bm \phi}_t, \end{equation*} where $\delta$ is the variation operator, i.e., the functional derivative. With the specified energy-dissipation law \eqref{eq:energydiss}, the energetic variational approach derives the dynamics of the system through two variational procedures, the Least Action Principle (LAP) and the Maximum Dissipation Principle (MDP), which leads to \begin{equation*} \frac{\delta \frac{1}{2}\triangle}{\delta \dot{\bm \phi}_t} = - \frac{\delta \mathcal{F}}{\delta \bm \phi_t}. \end{equation*} The approach is motivated by the seminal works of Rayleigh \citep{strutt1871some} and Onsager \citep{onsager1931reciprocal,onsager1931reciprocal2}. Using the quadratic $\triangle(\bm \phi_t, \dot{\bm \phi}_t)$, the dynamics of decreasing $\mathcal{F}$ is \begin{equation}\label{eq:EVIv1} \rho_{[\bm \phi_t]} \dot{\bm \phi}_t = - \frac{\delta \mathcal{F}}{\delta \bm \phi_t}. \end{equation} In general, this continuous formulation is difficult to solve, since the manifold of $\bm \phi_t$ is of infinite dimension. Naturally, there are different approaches to approximate an infinite-dimensional manifold by a finite-dimensional manifold. One such approach, as used in \cite{wang2021particle}, is to use particles (or samples) to approximate the continuous distribution $\rho_{[\bm \phi_t]}$ with kernel regularization. If this approximation is applied to \eqref{eq:EVIv1}, after the LAP and MDP variational steps, we call it the ``variation-then-approximation'' approach. If this approximation is applied to \eqref{eq:energydiss} directly, before any variational steps, we call it the ``approximation-then-variation'' approach.
The latter leads to a discrete version of the energy-dissipation law, i.e., \begin{equation}\label{eq:EVIv2} \frac{\dd}{\dd t} \mathcal{F}_h(\{\bm x_i(t)\}_{i=1}^N)=-\triangle_h(\{\bm x_i(t)\}_{i=1}^N, \{\dot{\bm x}_i(t)\}_{i=1}^N). \end{equation} Here $\{\bm x_i(t)\}_{i=1}^N$ denotes the locations of the $N$ particles at time $t$, and $\dot{\bm x}_i(t)$ is the derivative of $\bm x_i$ with respect to $t$, i.e., the velocity of the $i$th particle as it moves toward the target distribution. The subscript $h$ of $\mathcal{F}$ and $\triangle$ denotes the bandwidth parameter of the kernel function used in the kernelization operation. Applying the variational steps to \eqref{eq:EVIv2}, we obtain the dynamics of decreasing $\mathcal{F}$ at the particle level, \begin{equation}\label{eq:EVIv3} \frac{\delta \frac{1}{2}\triangle_h}{\delta \dot{\bm x}_i(t)} = - \frac{\delta \mathcal{F}_h}{\delta \bm x_i}, \quad \text{for }i=1, \ldots, N. \end{equation} This leads to an ODE system for $\{\bm x_i(t)\}_{i=1}^N$ that can be solved by different numerical schemes. The solution is the particles approximating the target distribution. Due to limited space, we can only briefly review the EVI framework and explain it intuitively. Readers can find the rigorous and concrete explanation in \cite{wang2021particle}, which also suggests many different ways to specify the energy-dissipation law, to approximate the continuous formulation, and to solve the ODE system. \section{Low-Discrepancy Points via EVI}\label{sec:method} In this section, we first apply the EVI framework to minimize the kernel discrepancy and derive the ODE system of the particles, and then introduce the EVI-MMD algorithm, which uses the explicit Euler method to discretize the ODEs in time and the AdaGrad algorithm to determine the step size. The EVI-MMD algorithm solves the ODE system and generates the low-discrepancy points. We also discuss the choice of the kernel function and how to adjust outliers from the initial distribution. \subsection{EVI-MMD}\label{sub:continuous} Given the target probability measure $\mu$ with CDF $F$ and a proper reproducing kernel, we choose the squared kernel discrepancy $D^2(\mathcal{X}_N, F, K)$ defined in \eqref{eq:mmdv1} as the Helmholtz free energy and the quadratic dissipation functional to set up the energy-dissipation law.
Specifically, \begin{equation}\label{eq:evi-mmd1} \mathcal{F}_h(\{{\bm x}_i(t)\}_{i=1}^N)=D^2(\{{\bm x}_i(t)\}_{i=1}^N, F, K), \quad \triangle_h(\{{\bm x}_i(t)\}_{i=1}^N,\{\dot{{\bm x}}_i(t)\}_{i=1}^N)=\frac{1}{N}\sum_{i=1}^N \|\dot{{\bm x}}_i(t)\|_2^2. \end{equation} For the kernel discrepancy defined in \eqref{eq:mmdv1}, which directly compares the empirical distribution of $\mathcal{X}_N$ and $F$, the approximation-then-variation approach makes more sense, since the free energy is already approximated by $N$ particles. Based on this setup, the discrete energy-dissipation law for the low-discrepancy points is \begin{equation}\label{eq:evi-mmd2} \frac{\dd}{\dd t} D^2(\{{\bm x}_i(t)\}_{i=1}^N, F, K)=-\frac{1}{N}\sum_{i=1}^N\|\dot{{\bm x}}_i(t)\|_2^2. \end{equation} Next, we derive the variations of $D^2(\{{\bm x}_i(t)\}_{i=1}^N, F, K)$ and $\frac{1}{N}\sum_{i=1}^N\|\dot{{\bm x}}_i(t)\|_2^2$ with respect to ${\bm x}_i$ and $\dot{{\bm x}}_i$, respectively: \begin{align} \label{eq:dDiss} \frac{\delta \frac{1}{2} \triangle_h}{\delta \dot{{\bm x}}_i}&=\frac{1}{N} \dot{{\bm x}}_i(t),\\ \label{eq:dF} \frac{\delta \mathcal{F}_h}{\delta {\bm x}_i}&=\frac{2}{N^2}\sum_{k\neq i}^N [\nabla_{\bm x}K(\bm x_i, \bm x_k)]- \frac{2}{N}\int_{\Omega}\left[\nabla_{\bm x}K(\bm x_i, \bm y)\right]\dd F(\bm y). \end{align} The derivations of the two variations are included in Appendix \ref{sec:derivation}. Following \eqref{eq:EVIv3}, we obtain the ODE system \eqref{eq:EVI-MMD} that provides the dynamics of minimizing $D^2(\{{\bm x}_i(t)\}_{i=1}^N, F, K)$: \begin{equation}\label{eq:EVI-MMD} \dot{{\bm x}}_i(t)=2\int_{\Omega}\left[\nabla_{\bm x}K(\bm x_i, \bm y)\right]\dd F(\bm y)-\frac{2}{N}\sum_{k\neq i}^N [\nabla_{\bm x}K(\bm x_i, \bm x_k)], \quad\text{for }i=1,\ldots, N. \end{equation} Note that the gradient $\nabla_{{\bm x}} K({\bm x},{\bm y})$ is taken with respect to the first argument ${\bm x}$ of the kernel.
We choose a symmetric kernel function as in most machine learning methods, i.e., $K({\bm x},{\bm y})=K({\bm y},{\bm x})$. In particular, if $K({\bm x},{\bm y})$ takes the radial basis form $K({\bm x},{\bm y})=r(\|{\bm x}-{\bm y}\|_2)$ with a certain positive function $r(\cdot)$, then $\nabla_{{\bm x}} K({\bm x},{\bm y})=-\nabla_{{\bm y}} K({\bm x},{\bm y})$. There are two terms on the right side of \eqref{eq:EVI-MMD}. Intuitively, the first term $\int_{\Omega}\left[\nabla_{\bm x}K(\bm x_i, \bm y)\right]\dd F(\bm y)$ can be interpreted as a \emph{driving force} that guides particles toward the target distribution, and the second term $\frac{1}{N}\sum_{k\neq i}^N [\nabla_{\bm x}K(\bm x_i, \bm x_k)]$ is a \emph{repulsive force} that stops particles from collapsing. We have borrowed this interpretation from \cite{ba2019towards}, in which the authors interpreted the Stein variational gradient descent (SVGD) and MMD-descent in the same way; both share some similar structures with \eqref{eq:EVI-MMD}. We will remark on the difference between the proposed EVI-MMD and MMD-descent in Section \ref{sub:kernel}. Also, in Section \ref{sub:kernel}, we will discuss how to compute the integral in the driving force term efficiently. A practical algorithm can be obtained by solving the ODE system numerically. Indeed, most optimization algorithms can be viewed as an approximation of a continuous ODE or SDE, which in turn provides a theoretical foundation for these optimization methods \citep{cheng2018underdamped, wibisono2016variational}. In this paper, we adopt the simplest explicit Euler discretization of the ODE \eqref{eq:EVI-MMD}, which leads to \begin{equation} {\bm x}_i^{n+1}= {\bm x}_i^{n}- \tau_n \bm v_i^n, \quad i = 1, 2, \ldots, N.
\label{eq:forwardeuler} \end{equation} where $\tau_n$ is the step size of the $n$th iteration, and \begin{equation}\label{eq:v} \bm v_i^n = \left(\frac{2}{N}\sum_{k\neq i}^N [\nabla_{\bm x}K(\bm x_i^n, \bm x_k^n)]-2\int_{\Omega}\left[\nabla_{\bm x}K(\bm x_i^n, \bm y)\right]\dd F(\bm y)\right). \end{equation} Interestingly, the explicit Euler discretization turns out to be the standard gradient descent of the kernel discrepancy. A drawback of the explicit Euler approach is that the step size has to be small so that the algorithm converges stably to the true solution under some modest conditions. On the other hand, a small step size can require many iterations until convergence and thus makes the algorithm less efficient. To overcome this dilemma between stability and efficiency, we use the AdaGrad (Adaptive Gradient) algorithm \citep{duchi2011adaptive} to determine the step size. AdaGrad updates each particle $\bm x_i$ as follows, \begin{align}\label{eq:adagrad} {\bm x}_i^{n+1}&= {\bm x}_i^n - \bm \eta_t\bm v_i^n,\\ \label{eq:learningrate} \bm \eta_t&=\eta_0\mbox{diag}[\bm G_i^n + \epsilon_0 {\bm I}_d]^{-\frac{1}{2}}. \end{align} Here $\bm \eta_t$ is commonly called the \emph{learning rate} in machine learning. It is a diagonal matrix that updates the step size in the direction of $\bm v_i^n$ elementwise, which is an improvement over the constant step size $\tau_n$ for all elements of $\bm v_i^n$ in the explicit Euler scheme. The scalar $\eta_0$ is the initial learning rate, a user-specified tuning parameter. The matrix $\bm G_i^n$ is $\bm G_i^n=\sum_{j=1}^{n}\bm v_i^j (\bm v_i^j)^\top$ and $\bm I_d$ is the identity matrix of the same size. Adding $\epsilon_0\bm I_d$ to $\bm G_i^n$ regularizes it to avoid singularity. Note that updating the learning rate via \eqref{eq:learningrate} only needs the diagonal entries of $\bm G_i^n$.
Therefore, a computationally cheaper way to update $\bm \eta_t$ is \begin{equation}\label{eq:learningrate2} \bm \eta_t=\eta_0\mbox{diag}\left[\left( \textstyle{\sum}_{j=1}^n (v_{i,1}^j)^2+\epsilon_0\right)^{-1/2},\ldots,\left( \textstyle{\sum}_{j=1}^n (v_{i,d}^j)^2+\epsilon_0\right)^{-1/2}\right], \end{equation} where the operation $\mbox{diag}$ arranges the $d$ elements into a diagonal matrix. Applying other numerical schemes to solve the ODE system, such as the implicit Euler \citep{wang2021particle}, will lead to different optimization algorithms. We will explore this in the future. \subsection{Kernel Function}\label{sub:kernel} The remaining question is how to compute the integral contained in the driving force term. If the target distribution $F(\bm y)$ is given by the training data $\{ {\bm y}_k \}_{k=1}^M$ as in the two-sample problem, the driving force term can be computed directly by \begin{equation}\label{eq:drivesample} \mathbb{E}_{{\bm y}\sim F}[\nabla_{{\bm x}} K({\bm x}_i, {\bm y})]=\int \nabla_{{\bm x}} K({\bm x}_i,{\bm y})\dd F(\bm y)=\frac{1}{M} \sum_{k = 1}^M \nabla_{{\bm x}} K({\bm x}_i, {\bm y}_k) \end{equation} for $i=1,\ldots, N$. In this case, many different types of symmetric and positive definite kernels can be used. In practice, if $M$ is too large, we can randomly sample a subset of the training data to estimate each $\mathbb{E}_{{\bm y}\sim F}[\nabla_{{\bm x}} K({\bm x}_i, {\bm y})]$. Next, we discuss the case in which the target distribution is given analytically. The problem is trivial if the target distribution is a known distribution that is easy to sample from. Otherwise, it is challenging to estimate the driving force term efficiently. To solve this problem, we look into the term $\mathbb{E}_{{\bm y}\sim F}[\nabla_{{\bm x}} K({\bm x}, {\bm y})]$ more closely for any ${\bm x}$. Let $\rho({\bm y})$ be the pdf of $F({\bm y})$.
If $K({\bm x},{\bm y})$ is the Gaussian kernel, $K({\bm x},{\bm y})=\exp\left(-\frac{\|{\bm x}-{\bm y}\|_2^2}{2h^2}\right)$, then its gradient is \begin{equation}\label{eq:gpkernel} \nabla_{{\bm x}} K({\bm x}, {\bm y}) = - \frac{{\bm x} - {\bm y}}{h^2} \exp\left( -\frac{\|\bm x-\bm y\|_2^2}{2h^2} \right). \end{equation} Then the integral of the gradient of the kernel is \begin{equation}\label{eq:intgrad} \int_{\Omega} \nabla_{{\bm x}} K({\bm x}, {\bm y}) \rho({\bm y}) \dd {\bm y} = \int_{\Omega} - \frac{{\bm x} - {\bm y}}{h^2} \exp\left( -\frac{\|\bm x-\bm y\|_2^2}{2h^2} \right) \rho({\bm y}) \dd {\bm y}. \end{equation} If $\Omega=\mathbb{R}^d$, the integral becomes \begin{equation}\label{eq:intgrad2} \int_{\mathbb{R}^d} \nabla_{{\bm x}} K({\bm x}, {\bm y}) \rho({\bm y}) \dd {\bm y}=(\sqrt{2\pi}h)^d\mathbb{E}_{{\bm y}}\left[-\frac{{\bm x}-{\bm y}}{h^2}\rho({\bm y})\right], \end{equation} where the expectation is with respect to $\bm y$ following the normal distribution $\mathcal{N}({\bm x}, h^2\bm I_d)$. If $\Omega\subsetneq \mathbb{R}^d$, we can still use \eqref{eq:intgrad2} to approximate $\int_{\Omega} \nabla_{{\bm x}} K({\bm x}, {\bm y})\rho({\bm y}) \dd {\bm y}$ because the bandwidth parameter $h$ is typically much smaller than the scale of $\Omega$ (to be discussed next) and the probability mass outside of $\Omega$ is negligible. To sum up, if we choose the Gaussian kernel, we can estimate the driving force term by the following cubature, \begin{equation} \mathbb{E}_{{\bm y}\sim F({\bm y})}[\nabla_{{\bm x}}K({\bm x}_i,{\bm y})]\approx -(\sqrt{2\pi}h)^d\frac{1}{L}\sum_{l=1}^L\frac{{\bm x}_i-{\bm y}_l}{h^2}\rho({\bm y}_l), \end{equation} where ${\bm y}_l$ for $l=1,\ldots, L$ are iid samples following $\mathcal{N}({\bm x}_i,h^2\bm I_d)$.
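As a sanity check, the cubature above can be sketched in a few lines of NumPy. The function name `driving_force` and the helper `std_normal` are our own illustrative choices, not names from the paper's code; the comparison against the closed-form value uses the fact that, for a 1-d standard normal target, the integral can be computed analytically.

```python
import numpy as np

def driving_force(x, rho, h, L=200, rng=None):
    """Monte Carlo estimate of E_{y~F}[grad_x K(x, y)] for the Gaussian
    kernel K(x, y) = exp(-||x - y||^2 / (2 h^2))."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    y = x + h * rng.standard_normal((L, d))   # y_l ~ N(x, h^2 I_d)
    w = rho(y)[:, None]                       # target pdf at the samples
    # -(sqrt(2 pi) h)^d * (1/L) * sum_l (x - y_l)/h^2 * rho(y_l)
    return -((np.sqrt(2 * np.pi) * h) ** d) * np.mean((x - y) / h**2 * w, axis=0)

# Example target: standard normal in d = 1. The estimate vanishes at x = 0
# by symmetry and points toward the mode (is negative) at x > 0.
std_normal = lambda y: np.exp(-np.sum(y**2, axis=1) / 2) / np.sqrt(2 * np.pi)
```

For this target the exact value is $-x/(1+h^2)\cdot h/\sqrt{1+h^2}\cdot e^{-x^2/(2(1+h^2))}$, which the Monte Carlo estimate reproduces up to sampling noise.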
In practice, we can first generate $L\times d$ samples $y_{l,k}$ from the univariate $\mathcal{N}(0,h^2)$ for $k=1,\ldots,d$ and shift the mean of ${\bm y}_l$ to ${\bm x}_i$ to reduce the computational costs. Alternatively, to reduce the bias of the samples from a single batch, we can generate $L$ samples in each iteration for each ${\bm x}_i$. Considering both accuracy and efficiency, we choose the latter approach and set $L=200$ in the numerical examples. One can see that this idea applies to other universal kernel functions as long as the gradient $\nabla_{{\bm x}}K({\bm x},{\bm y})$ is proportional to a known pdf that is also easy to sample from. Other than the Gaussian kernel, the exponential (or Laplacian) kernel also meets these conditions. Since the Gaussian kernel is one of the most widely used kernel functions in machine learning, we choose the Gaussian kernel in the numerical examples. To facilitate the description of the algorithm, we define the terms representing the driving and repulsive forces for each particle in the $n$th iteration as follows. \begin{align} \label{eq:driving}\text{driving}_i^n&=-\frac{2(\sqrt{2\pi}h)^d}{L} \sum_{l = 1}^L \frac{\bm x_i^n - {\bm y}_l}{h^2} \rho({\bm y}_l),\\ \label{eq:repulsive}\text{repulsive}_i^n&= -\frac{2}{N}\sum_{k\neq i}^N \frac{\bm x_i^n-\bm x_k^n}{h^2}\exp\left(-\frac{\|\bm x_i^n-\bm x_k^n\|_2^2}{2h^2}\right). \end{align} Then the gradient $\bm v_i^n$ in \eqref{eq:v} and \eqref{eq:adagrad} is $\bm v_i^n=\text{repulsive}_i^n-\text{driving}_i^n$. As in most ParVI methods, the performance of the EVI-MMD algorithm is affected by the tuning parameter $h$. So far, there has not been a clear guideline in the literature on how to choose $h$ to minimize MMD. For SVGD, \cite{liu2016stein} suggested using the median trick, i.e., setting $h^2 = \text{med}^2 / \log N$, where $\text{med}$ is the median of the pairwise distances between the particles in the current iteration.
Therefore, the median trick updates $h$ in each iteration. However, the median trick only works for SVGD. Some more sophisticated methods have been introduced to select $h$ \citep{liu2019understanding,wang2019stein} for SVGD. Despite these guidelines, choosing $h$ still largely depends on the application. Most existing works select all the tuning parameters, including $h$, in a trial-and-error fashion. In this paper, we set $h$ to be a constant throughout the algorithm. First, $h$ has to be sufficiently small. If $h$ is too large, the driving force in \eqref{eq:driving} would be too small in size to exert any ``attraction'' that moves the particles toward the target distribution, and the repulsive force in \eqref{eq:repulsive} would be too small as well, causing the particles to collapse. The typical range of $h$ should match the range of the pairwise distances between particles, as suggested in \cite{peyre2019computational}. We can estimate this range from the initial distribution of the particles, then adjust its value based on multiple trials and choose the $h$ value with the best result. Although \emph{ad hoc}, this way of selecting $h$ often performs better and more efficiently in numerical experiments than adaptive methods such as the median trick. This practice of fixing $h$ is not rare in the VI literature. For example, \cite{li2015generative} used a constant bandwidth parameter in the kernel function to train a deep generative model by minimizing MMD. The EVI-Im algorithm proposed in \cite{wang2021particle} also sets $h$ in this way. We choose the other tuning parameters of the EVI-MMD algorithm in a trial-and-error fashion as well. These settings have performed reasonably well in many numerical studies we have conducted. \begin{remark}\label{re:MMD-descent} We want to compare the EVI-MMD with the existing MMD-descent approach.
MMD-descent denotes the approach of applying gradient descent directly to MMD, which leads to \[ \frac{{\bm x}_i^{n+1}-{\bm x}_i^n}{\tau}=\Delta({\bm x}_i^n)=-\mathbb{E}_{{\bm y}\sim F}[\nabla_{{\bm y}}K({\bm x}_i^n,{\bm y})]+\frac{1}{N} \sum_{j=1}^N \nabla_{{\bm x}_j} K({\bm x}_j^n,{\bm x}_i^n), \] using the notation of this paper. One recent review of MMD-descent can be found in \cite{ba2019towards}. Ignoring the constant, this updating scheme is the same as the ODE system of EVI-MMD in \eqref{eq:EVI-MMD}. But EVI-MMD is fundamentally different from MMD-descent. First, EVI-MMD originates from the EVI framework. Only under this particular combination of settings, namely the simple quadratic dissipation law $\triangle(\bm \phi_t, \dot{\bm \phi}_t)=\int_{\Omega_t}\rho_{[\bm \phi_t]}\|\dot{\bm \phi}_t\|^2 \dd {\bm x}$, the ``approximation-then-variation'' order, and the explicit Euler scheme, does EVI-MMD reduce to the same updating formula as MMD-descent. As suggested in \cite{wang2021particle}, other versions of EVI-MMD algorithms can be developed by choosing different dissipation functionals, switching the order of approximation and variation, and using implicit Euler or second-order numerical schemes to solve the ODE system. It is a future research direction we plan to pursue. Second, both EVI-MMD and MMD-descent face the same bottleneck: computing the driving force term. Our answer is to estimate it by taking advantage of the Gaussian kernel. On the other hand, many MMD-descent methods bypass this problem via two options. The first option is to use MMD only for the two-sample problem, in which the target distribution is given by the training data. So the driving force can be easily calculated using \eqref{eq:drivesample}. This explains why MMD is mostly applied to generative learning models, which pose two-sample problems.
The second option is to use the Stein kernel based on the target distribution, so that the driving force becomes \[ -\mathbb{E}_{{\bm y}\sim F}[\nabla_{{\bm y}}K({\bm x}_i,{\bm y})]=\mathbb{E}_{{\bm y}\sim F}[\nabla_{{\bm y}} \log \rho({\bm y}) K({\bm x}_i,{\bm y})]. \] Without the context of the application, it is not possible to comment on which kernel is better, the Gaussian kernel or the Stein kernel. Theoretically, this depends on which RKHS based on either kernel suits the application better. In terms of computational efficiency, this also depends on how easy it is to compute $\mathbb{E}_{{\bm y}\sim F}[\nabla_{{\bm y}} \log \rho({\bm y}) K({\bm x},{\bm y})]$, which circles back to the same bottleneck problem if $\rho({\bm y})$ or $F$ is not a known distribution. \end{remark} \subsection{Initial Outlier Detection and Adjustment}\label{sub:oulier} A drawback of the Gaussian kernel is that it can only capture the interaction between particles in a relatively small neighborhood, due to the fast decay of the exponential function and the small value of $h$. During the early iterations of the algorithm, if a particle sits in a region where the density of the target distribution is small, in other words, if the particle is an outlier to the target distribution, then the driving force $\mathbb{E}_{{\bm y}\sim F}[\nabla_{{\bm x}}K({\bm x}_i,{\bm y})]$ of this particle would be small in size. As a result, it would take more iterations to move this particle to the high-density region of the target distribution, or worse, the driving force is too small to make any significant movement and thus the particle remains an outlier when the algorithm terminates. In Appendix \ref{sec:driving}, we give a more rigorous explanation of the driving force term. To overcome the outlier issue, we propose a heuristic way to identify the outliers and then adjust them. This stage (denoted as Stage 1) is included in the EVI-MMD algorithm and is performed before the AdaGrad optimization stage (denoted as Stage 2).
In Stage 1, given the initial particles, we first detect the outliers based on the driving force value. As discussed above, an outlier has a small driving force, so we detect the outliers according to the $l_2$ norm, $\|\text{driving}_i^n\|_2$. With a pre-specified tolerance \texttt{tol}, we label a particle as an outlier whenever $\|\text{driving}_i^n\|_2\leq \texttt{tol}$. Once an outlier is detected, we adjust it by enlarging the size of the driving force. Specifically, we force the driving force to have a pre-specified size \texttt{a} while keeping its direction, i.e., \[ \text{driving}_i^n \leftarrow \texttt{a} \frac{\text{driving}_i^n}{\|\text{driving}_i^n\|}. \] Then, we update all particles using the explicit Euler scheme, ${\bm x}_i^{n+1}={\bm x}_i^n-\tau \bm v_i^n$. So far we have set the two tuning parameters \texttt{tol} and \texttt{a} based on experience and trial-and-error. Generally, \texttt{tol} should be on the same scale as the smallest norm of the driving forces, and \texttt{a} should be around the median of the driving-force norms over all particles. We fix the step size $\tau$ throughout Stage 1, and it should be slightly larger than the initial learning rate $\eta_0$ of the AdaGrad optimization. The number of iterations of Stage 1 is denoted by $B$, and it is relatively small compared to the total number of iterations of both stages, denoted as \texttt{maxIter}. Since the purpose of the first stage is to quickly move the outliers to the high-density region, this way of choosing $\tau$ and $B$ is intuitively reasonable and effective in practice. \begin{remark} The convergence issue caused by outliers, or more generally by the initial distribution, has been identified in the literature before. \cite{arbel2019maximum} studied the convergence of the continuous MMD gradient flow using the forward Euler discretization.
Using the continuous formulation, the authors proved that if $\| \rho_t({\bm x}) - \rho({\bm x})\|_{\dot{H}^{-1} (\rho_t) } \leq C$ for all $t > 0$, then $\mathcal{F}(\rho_t) \leq C / (C \mathcal{F}(\rho_0)^{-1} + 4t)$. Here $\|\cdot\|_{\dot{H}^{-1} (\rho_t)}$ is the weighted homogeneous $H^{-1}$ norm. The details can be found in \cite{arbel2019maximum}. Intuitively, this result implies that the convergence rate of the MMD depends on the ``distance'' between $\rho_t$, the current density function, and the target distribution $\rho$. However, as pointed out in \cite{arbel2019maximum}, the condition for the convergence rate is difficult to guarantee even if the ``distance'' between the initial distribution $\rho_0$ and the target distribution $\rho$ is small. The outlying particles with a small driving force described above correspond to the case when the ``distance'' between the empirical distribution of the current particles and the target distribution is large. Therefore, the outliers can lead to slow convergence or even failure to converge to the target distribution. In \cite{arbel2019maximum}, the authors proposed injecting noise to improve the robustness of the algorithm to outliers, whereas we propose the remedy of manually pushing the particles toward a region of higher probability density of the target distribution. \end{remark} We present the EVI-MMD approach with outlier detection and adjustment in Algorithm \ref{alg:EVI-MMD}. The algorithm requires a list of tuning parameters. As we have explained, we set the tuning parameters mostly based on intuition, experience, and experimentation, which is a common practice in nearly all VI methods. How to set the tuning parameters is worthy of further investigation in the future. An important detail in Algorithm \ref{alg:EVI-MMD} concerns the second for-loop (indexed by $i$) through the particles in both Stage 1 and Stage 2.
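The detect-and-adjust step of Stage 1 can be sketched in vectorized form. This is our own illustrative sketch, not the paper's code: the arrays `driving` and `repulsive` are assumed to be precomputed as in \eqref{eq:driving} and \eqref{eq:repulsive}, and the helper names are hypothetical.

```python
import numpy as np

def stage1_adjust(driving, tol, a):
    """Rescale small driving forces (outliers) to have norm `a`.

    driving : (N, d) array of driving-force vectors, one per particle
    tol     : outlier threshold on the l2 norm
    a       : target norm for adjusted driving forces
    """
    norms = np.linalg.norm(driving, axis=1, keepdims=True)
    outlier = norms <= tol                            # ||driving_i||_2 <= tol
    scale = np.where(outlier, a / np.maximum(norms, 1e-12), 1.0)
    return driving * scale                            # keep direction, enlarge size

def stage1_step(x, driving, repulsive, tau, tol, a):
    """One explicit-Euler update x <- x - tau * (repulsive - driving)."""
    v = repulsive - stage1_adjust(driving, tol, a)
    return x - tau * v
```

The `np.where` keeps non-outlier particles untouched, so one vectorized call replaces the per-particle if-statement of Stage 1.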
Although written as a for-loop, all the particles can be updated simultaneously by array operations or parallel computing, and we have used array operations in our code. The most computationally costly step is estimating the driving force for each particle in every iteration, which depends on the sample size $L$ or $M$. \begin{algorithm} \caption{The EVI-MMD Algorithm with Adjustment to Outliers}\label{alg:EVI-MMD} \begin{algorithmic}[1] \Require \\ \begin{itemize} \item [] $N$: total number of particles; \item [] $B$: number of iterations of Stage 1; \item [] $h$: bandwidth parameter of the Gaussian kernel function; \item [] $L$: the number of samples generated from $\mathcal{N}({\bm x}_i,h^2\bm I_d)$; \item [] \texttt{maxIter}: the total number of iterations of both Stage 1 and 2; \item [] \texttt{tol}: tolerance for outlier detection; \item [] \texttt{a}: adjustment parameter of the driving force of an outlier; \item [] $\tau$: step size of explicit Euler in Stage 1; \item [] $\eta_0$: initial learning rate of AdaGrad in Stage 2; \item [] $\rho_0$: an initial distribution with support $\Omega$ to generate initial particles. \end{itemize} \State Generate initial particles $\{\bm x_i^0\}_{i=1}^N$ from distribution $\rho_0$. \Statex \textit{Stage 1: Initial Outlier Detection and Adjustment.} \For{$n=1:B$} \For{$i=1:N$} \State Calculate the terms representing driving and repulsive force in \eqref{eq:driving} and \eqref{eq:repulsive}. \If{$\|\text{driving}_i^n\|\leq \texttt{tol}$} \State $\text{driving}_i^n \leftarrow \texttt{a}\frac{\text{driving}_i^n}{\|\text{driving}_i^n\|}$. \EndIf \State $\bm v_i^n = \text{repulsive}^n_i-\text{driving}_i^n$ \State ${\bm x}_i^{n+1}={\bm x}_i^n-\tau\bm v_i^n$ \EndFor \EndFor \Statex \textit{Stage 2: Adaptive Gradient Euler.} \For{$n=1:(\texttt{maxIter}-B)$} \State Reset the learning rate to $\eta_0$. \For{$i=1:N$} \State Calculate the terms representing driving and repulsive force in \eqref{eq:driving} and \eqref{eq:repulsive}.
\State $\bm v_i^n=\text{repulsive}^n_i-\text{driving}_i^n$ \State Compute $\bm \eta_t$ using \eqref{eq:learningrate2}. \State $\bm x_i^{n+1}=\bm x_i^n-\bm \eta_t\bm v_i^n$. \EndFor \EndFor \end{algorithmic} \end{algorithm} \section{Numerical Examples}\label{sec:num} We demonstrate the performance of the proposed EVI-MMD algorithm through three examples. They cover three scenarios in which the target distribution is fully specified, partially specified up to the normalizing constant, and empirically specified through training data. The last scenario is shown by a generative learning model using EVI-MMD. We also compare the EVI-MMD with the Monte Carlo cubature for a numerical integration problem. Through three toy examples, we compare EVI-MMD to three alternatives: EVI-Im, SVGD, and the stochastic Langevin Monte Carlo (LMC) of \cite{welling2011bayesian}. \subsection{Toy Examples}\label{sub:toy} We specify the following three target distributions of $d=2$ dimensions. \begin{enumerate} \item Star-shaped five-component Gaussian mixture distribution: \[ \rho({\bm x})=\frac{1}{5}\sum_{i=1}^5 N(\bm x |\bm \mu_i,\bm \Sigma_i), \] where, writing $\bm R$ for the rotation matrix \[ \bm R=\left[ \begin{array}{rr} \cos\left(\frac{2\pi}{5}\right) & -\sin\left(\frac{2\pi}{5}\right)\\ \sin\left(\frac{2\pi}{5}\right) & \cos\left(\frac{2\pi}{5}\right) \end{array} \right], \] the parameters are \[ \bm \mu_i= \bm R^{i-1} \left[ \begin{array}{c} 1.5 \\ 0 \end{array} \right], \quad \bm \Sigma_i=\bm R^{i-1} \left[ \begin{array}{cc} 1 & 0 \\ 0 & 0.01 \end{array} \right] \left(\bm R^{i-1}\right)^{\top}.
\] \item Eight-component Gaussian mixture distribution: \[ \rho({\bm x})=\frac{1}{8}\sum_{i=1}^8 N(\bm x|\bm \mu_i,\bm \Sigma), \] where $\bm \mu_1=(0,4)$, $\bm \mu_2=(2.8,2.8)$, $\bm \mu_3=(4,0)$, $\bm \mu_4=(-2.8,2.8)$, $\bm \mu_5=(-4,0)$, $\bm \mu_6=(-2.8,-2.8)$, $\bm \mu_7=(0,-4)$, $\bm \mu_8=(2.8,-2.8)$, and $\bm \Sigma=\left[\begin{array}{rr} 0.2 & 0 \\ 0 & 0.2 \end{array}\right]$. \item Wave-shaped distribution: \[ \rho({\bm x})=C^{-1}\exp\left(-0.1x_1^2-(x_2-\sin(\pi x_1))^2\right). \] \end{enumerate} Although the first two distributions are both Gaussian mixture distributions, the second distribution is more challenging since the effective support regions of the Gaussian components are separated, unlike in the star-shaped distribution. The wave-shaped distribution contains an unknown normalizing constant $C$. Unlike the approaches based on the KL-divergence, the EVI-MMD does require the normalizing constant to calculate the driving force term \eqref{eq:driving}. So we estimate $C$ using the Newton-Cotes quadrature implemented in Mathematica by \cite{Mathematica}, and the result is $C\approx 9.93$. For all three toy examples, we set the number of particles to $N=200$. The initial distribution is a standard 2-dimensional Gaussian distribution. All the algorithms under comparison are terminated at \texttt{maxIter}=1000, by which point we are sure they have all reached convergence. For the EVI-MMD algorithm, we set $\tau=1$, $\eta_0=0.1$, $h=0.2$, $B=20$, $\texttt{tol}=0.002$, $L=200$, and $\texttt{a}=1$ by default. For the eight-component Gaussian mixture distribution, due to the separated effective support regions, there are more outliers among the initial particles, so we set $\texttt{a}=2$ to accelerate the adjustment step. For the star-shaped distribution, we find that $\tau=0.5$ leads to a more stable decrease of the MMD.
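To make the setup concrete, the Stage-2 (AdaGrad) part of the algorithm for an analytically given target can be sketched as follows. This is our own simplified illustration rather than the paper's implementation: for brevity we use a hypothetical two-component 2-d Gaussian mixture instead of the five- or eight-component targets, skip the Stage-1 adjustment, and use smaller values of $N$, $L$, and the iteration count than in the experiments above.

```python
import numpy as np

rng = np.random.default_rng(1)
mus = np.array([[1.5, 0.0], [-1.5, 0.0]])      # component means (illustrative)
s2 = 0.25                                      # component variance

def rho(y):
    """pdf of the two-component Gaussian mixture, evaluated row-wise."""
    comps = [np.exp(-np.sum((y - m) ** 2, axis=1) / (2 * s2)) / (2 * np.pi * s2)
             for m in mus]
    return np.mean(comps, axis=0)

N, d, h, L, eta0, eps0 = 100, 2, 0.5, 100, 0.1, 1e-8
x = rng.standard_normal((N, d))                # initial particles
x0 = x.copy()
G = np.zeros((N, d))                           # AdaGrad accumulators (diagonal)

for _ in range(300):
    # driving force for every particle, estimated with y_l ~ N(x_i, h^2 I_d)
    y = x[:, None, :] + h * rng.standard_normal((N, L, d))
    w = rho(y.reshape(-1, d)).reshape(N, L, 1)
    driving = -2 * (np.sqrt(2 * np.pi) * h) ** d * \
        np.mean((x[:, None, :] - y) / h**2 * w, axis=1)
    # repulsive force between particles (the k = i term vanishes)
    diff = x[:, None, :] - x[None, :, :]
    kern = np.exp(-np.sum(diff**2, axis=2) / (2 * h**2))
    repulsive = -(2 / N) * np.sum(diff / h**2 * kern[:, :, None], axis=1)
    v = repulsive - driving
    G += v**2                                  # elementwise AdaGrad statistics
    x -= eta0 * v / np.sqrt(G + eps0)          # x <- x - eta_t * v
```

After a few hundred iterations the particles concentrate around the two modes, so the average target density at the particle locations is noticeably higher than at the initial positions.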
The tuning parameters of the other algorithms are specified as their respective optimal ones based on our experimentation and/or previous publications. For the EVI-Im and SVGD methods, we set $\eta_0=0.1$ and $h=0.2$. The tuning parameters of LMC are $a=0.1$, $b=1$ and $c=0.55$. The three rows of sub-figures in Figure \ref{fig:toys} illustrate how the particles are moved by the EVI-MMD algorithm at the $n=5$ iteration, the $n=100$ iteration, and the last iteration when the algorithm is terminated. We can see that most of the particles have been aligned to the high-density region (highlighted by the yellow color) around the 100th iteration. To compare the performance of EVI-MMD with the EVI-Im, SVGD, and LMC algorithms, we compute the MMD$^2$, or $D^2(\mathcal{X},F,K)$, in each iteration for all four methods. Note that the EVI-MMD algorithm itself does not need to compute the MMD$^2$; we do so only for this comparison. Figure \ref{fig:toymmd} shows the decay of the MMD$^2$ within the first five seconds for all four algorithms. The LMC and EVI-MMD have the fastest decay of the MMD$^2$ and perform equally well for the star-shaped and wave-shaped distributions. But LMC is less stable than EVI-MMD, as the MMD$^2$ of LMC fluctuates around zero in later iterations. For the most challenging case, the eight-component Gaussian mixture distribution, LMC is slightly better than EVI-MMD. The SVGD has the worst performance, as its MMD$^2$ does not converge to zero. SVGD was shown in \cite{wang2021particle} to be less effective when the target distribution has separated support regions. In this example, the SVGD has missed several of the eight components of the Gaussian mixture.
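For reference, the MMD$^2$ values reported in Figure \ref{fig:toymmd} can be obtained with a plug-in (V-statistic) estimator of $D^2(\mathcal{X},F,K)$ between the particle set and a large sample from the target. The helper below is our own sketch, assuming the Gaussian kernel used throughout this paper.

```python
import numpy as np

def mmd2(x, y, h):
    """V-statistic estimate of MMD^2 between samples x (N, d) and y (M, d)
    with the Gaussian kernel K(u, v) = exp(-||u - v||^2 / (2 h^2))."""
    def gram(a, b):
        # squared pairwise distances via the expansion ||a||^2 + ||b||^2 - 2 a.b
        d2 = (np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :]
              - 2.0 * a @ b.T)
        return np.exp(-np.maximum(d2, 0.0) / (2 * h**2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()
```

The V-statistic is biased but always nonnegative, which makes it convenient for plotting the decay curves.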
\begin{figure}[htb] \centering \begin{subfigure}{\linewidth} \includegraphics[width=0.26\linewidth]{star5-eps-converted-to} \includegraphics[width=0.26\linewidth]{eg5-eps-converted-to} \includegraphics[width=0.48\linewidth,height=124pt]{wave5-eps-converted-to} \end{subfigure} \begin{subfigure}{\linewidth} \includegraphics[width=.26\linewidth]{star100-eps-converted-to} \includegraphics[width=.26\linewidth]{eg100-eps-converted-to} \includegraphics[width=.48\linewidth,height=124pt]{wave100-eps-converted-to} \end{subfigure} \begin{subfigure}{\linewidth} \includegraphics[width=.26\linewidth]{star1000-eps-converted-to} \includegraphics[width=.26\linewidth]{eg1000-eps-converted-to} \includegraphics[width=.48\linewidth,height=124pt]{wave1000-eps-converted-to} \end{subfigure} \caption{The particles generated by the EVI-MMD algorithm at $n=5$, $n=100$, and $n=1000$ iterations in the three rows of the sub-figures, respectively, for the three target distributions.} \label{fig:toys} \end{figure} \begin{figure}[htb] \centering \begin{subfigure}{\linewidth} \includegraphics[width=.33\linewidth]{desstar}\hfill \includegraphics[width=.33\linewidth]{deseg} \hfill \includegraphics[width=.33\linewidth]{deswave} \end{subfigure} \caption{From left to right, the three sub-figures show the decreasing MMD$^2$ in the first five seconds of the four algorithms under comparison for the star-shaped distribution, eight-component Gaussian mixture distribution, and the wave-shaped distribution.} \label{fig:toymmd} \end{figure} \subsection{Numerical Integration}\label{sub:intigration} As reviewed in Section \ref{sub:mmd}, kernel discrepancy can be interpreted as part of the upper bound of the numerical integration error. So we demonstrate the performance of EVI-MMD through a numerical integration for Keister's example given as follows. \begin{equation*} I=\int_{\mathbb{R}^d} \cos(\|{\bm x}\|_2)\exp(-\|{\bm x}\|_2^2)\dd {\bm x}. 
\end{equation*} The exact value can be calculated using the formula given in \cite{jagadeeswaran2019fast}. Rounded to four decimal digits, the integral values are $I_{d=2}=1.8082$ and $I_{d=5}=1.1353$ for $d=2$ and $d=5$, respectively. Keister's integral can be easily approximated using the classic Monte Carlo cubature. Rewrite the integral as follows \[ I=\int_{\mathbb{R}^d} \cos(\|{\bm x}\|_2)\exp(-\|{\bm x}\|_2^2)\dd {\bm x} = \int_{\mathbb{R}^d} \pi^{\frac{d}{2}} \cos\left(\|{\bm x}/\sqrt{2}\|_2\right) \rho({\bm x})\dd {\bm x}, \] where $\rho({\bm x})$ is the pdf of a standard multivariate Gaussian distribution of $d$ dimensions. Then $\hat{I}_N=\frac{1}{N}\sum_{i=1}^N \pi^{\frac{d}{2}}\cos\left(\|{\bm x}_i/\sqrt{2} \|_2\right)$, where $\{{\bm x}_i\}_{i=1}^N$ are iid samples following $\mathcal{MVN}({\bf 0}, \bm I_d)$. The absolute relative error of the cubature is \[ \text{err}_r(f, \mathcal{X})=\left\vert\frac{I-\hat{I}_N}{I}\right\vert, \quad\text{where }f({\bm x})=\pi^{\frac{d}{2}}\cos\left(\|{\bm x}/\sqrt{2}\|_2\right). \] We compare two approaches of generating random samples for the cubature. One is the classic Monte Carlo sampling that directly generates $N$ samples from the Gaussian distribution. The other is the low-discrepancy points generated by the EVI-MMD method. The initial distribution of the EVI-MMD is a uniform distribution in $[-1,1]^d$. The number of samples $N$ is set the same for both approaches. We vary $N$ from 50 to 250 for $d=2$, and from 100 to 500 for $d=5$. The tuning parameters of EVI-MMD are set to $L=200$, $\tau=0.05$, $\eta_0=0.05$, $B=5$, $\texttt{tol}=0.001$, and $\texttt{a}=1$ for both dimensions, but $h=0.5$ for $d=2$ and $h=1.2$ for $d=5$. Figure \ref{fig:intbox} compares the two methods with different sample sizes for $d=2$ and $d=5$. For each $N$, we use a box plot to show the distribution of absolute relative errors of 20 replications of cubatures computed from 20 sets of samples.
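The classic Monte Carlo baseline in this comparison takes only a few lines; the sketch below is ours, with the hypothetical function name `keister_mc`, and it uses a much larger $N$ than the experiment above simply to make the relative error visibly small.

```python
import numpy as np

def keister_mc(d, N, rng):
    """Classic Monte Carlo cubature for Keister's integral:
    I_hat = (1/N) * sum_i pi^{d/2} * cos(||x_i / sqrt(2)||_2), x_i ~ N(0, I_d)."""
    x = rng.standard_normal((N, d))
    f = np.pi ** (d / 2) * np.cos(np.linalg.norm(x / np.sqrt(2.0), axis=1))
    return f.mean()

rng = np.random.default_rng(0)
I2_hat = keister_mc(2, 200000, rng)     # reference value: I_{d=2} = 1.8082
err2 = abs(I2_hat - 1.8082) / 1.8082    # absolute relative error err_r
```

Because the estimator is unbiased, its relative error shrinks at the usual $O(N^{-1/2})$ Monte Carlo rate, which is the behavior the box plots quantify.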
The EVI-MMD significantly outperforms the classic Monte Carlo cubature and has much smaller bias. Since it is essentially an optimization approach, the variance is very small as well. \begin{figure}[htb] \centering \begin{subfigure}{\linewidth} \includegraphics[width=.5\linewidth]{2dintegration} \includegraphics[width=.5\linewidth]{5dintegration} \end{subfigure} \caption{The absolute relative error of 20 replications of cubatures of the Keister's example for the $d=2$ (left) and $d=5$ (right) case.} \label{fig:intbox} \end{figure} \subsection{Generative Learning Model}\label{sub:gen} Generative learning models have been widely used in various machine learning applications. They can solve both supervised and unsupervised learning problems. Simply put, a generative learning model generates new samples based on the training data. More advanced generative learning models are combined with deep neural networks \citep{jabbar2021survey}, Naive Bayes, Gaussian mixture models, hidden Markov models, etc. \citep{harshvardhan2020comprehensive}. In this example, we use the simplest nonparametric generative learning setup and apply the EVI-MMD to the famous benchmark MNIST dataset \citep{deng2012mnist}. Each data point in the MNIST dataset is a $28\times 28$ pixel image of a handwritten digit from 0 to 9, and each pixel value is in $[0,1]$. For a simple demonstration, we randomly choose 1000 images from the entire MNIST dataset as the training data. Then we generate $N=100$ new images using the EVI-MMD approach. Essentially, this is a two-sample problem. In a certain sense, it is simpler than the previous examples, as the driving force can be directly estimated from the training data using \eqref{eq:drivesample}. On the other hand, this is not an easy problem because the dimension $d=28^2=784$ is extremely high considering the training data only has 1000 entries. We specify the EVI-MMD as follows.
The initial $N=100$ particles are sampled from a uniform distribution in $[0,1]^{784}$. The other tuning parameters are set to $\tau=0.1$, $\eta_0=0.05$, $B=20$, $\texttt{tol}=0.1$, $\texttt{a}=10$, and $h=1.1$. We terminate the algorithm at $\texttt{maxIter}=30$, and AdaGrad converges within 10 iterations. The EVI-MMD does not need many iterations to decrease the MMD$^2$ to nearly zero because each image is highly sparse. Figure \ref{fig:gen} compares side-by-side 100 of the 1000 training images and the $N=100$ low-discrepancy points generated by EVI-MMD. Both the training images and the low-discrepancy points are arranged in a $10\times 10$ panel. We can see that the digits generated by the EVI-MMD are similar to the training data. Note that the EVI-MMD is by no means the best approach for the MNIST benchmark example. Readers can find many more sophisticated generative learning approaches that return better results. However, the EVI-MMD is probably the simplest by comparison, and its results are adequate. \begin{figure}[htb] \centering \begin{subfigure}{\linewidth} \includegraphics[width=.45\linewidth]{train}\hfill \includegraphics[width=.45\linewidth]{result}\hfill \end{subfigure} \caption{Visual comparison between the 100 randomly sampled digits from the training data and 100 digits generated by the EVI-MMD.} \label{fig:gen} \end{figure} \section{Conclusion}\label{sec:con} In this paper, we develop a variational inference approach to generate low-discrepancy points to approximate a target distribution by minimizing the kernel discrepancy, alternatively known as maximum mean discrepancy (MMD). The minimization of MMD is solved by the general energetic variational inference (EVI) framework first introduced by \cite{wang2021particle}. Specifically, we use the quadratic dissipation functional of the EVI and apply the particle approximation to the continuous energy-dissipation law, which is then followed by the variation procedure.
This leads to a dynamic system that moves the particles from their initial positions to the target distribution. Using the explicit Euler scheme to solve this dynamic system, we obtain a special algorithm based on the EVI framework to minimize MMD, which we call the EVI-MMD algorithm. Compared to the existing MMD-descent approaches, the EVI-MMD algorithm can be applied to target distributions beyond the two-sample problem. More importantly, if we change some settings of the EVI framework, new algorithms can be developed. For example, another possible dissipation functional is \begin{equation} \triangle_h = \dot{\bf X}^{\rm T} {\bm A} \dot{\bf X}, \quad {\bf X} \in \mathbb{R}^{Nd} ~\text{is the vectorized} ~ \{ {\bm x}_i \}_{i=1}^N, \end{equation} where ${\bm A} \in \mathbb{R}^{Nd \times Nd} $ is some positive-definite matrix. A simple case of ${\bm A}$ is the circular convolution matrix \begin{equation*} {\bm A} = \begin{bmatrix} 1+2\sigma & -\sigma & 0 & ... & -\sigma\\ -\sigma& 1+2\sigma & -\sigma & ... &0 \\ 0& -\sigma & 1+2\sigma & ...&0 \\ ...& ...& ...& ... & ...\\ -\sigma & 0 & 0 &... & 1+2\sigma \end{bmatrix}, \end{equation*} which corresponds to the Laplacian smoothing approach proposed in \cite{osher2018laplacian}. Potentially, this dissipation functional can prevent the solution from being trapped at saddle points and local minima. Some other dissipation functionals and numerical schemes for the EVI framework are suggested in \cite{liu2020variational} and \cite{wang2021particle}. We will explore which EVI algorithm minimizes the MMD most efficiently in future work. A remaining issue for the EVI-MMD approach is how to specify the list of tuning parameters. Similar to many machine learning methods, we select the tuning parameters based on intuition and experimentation. A more systematic approach is called for, and we plan to investigate this topic in the future. \begin{center} {\Large\bf Acknowledgment} \end{center} L.
Kang's work is partially supported by the National Science Foundation Grant DMS-1916467. Y. Wang and C. Liu are partially supported by the National Science Foundation Grants DMS-1759536 and DMS-1950868. \begin{center} {\Large\bf Appendix: Derivations and Extra Discussion} \end{center} \setcounter{figure}{0} \setcounter{table}{0} \setcounter{lem}{0} \setcounter{theorem}{0} \setcounter{proposition}{0} \setcounter{definition}{0} \setcounter{section}{0} \makeatletter \renewcommand{\thefigure}{A\@arabic\c@figure} \renewcommand{\thetable}{A\@arabic\c@table} \renewcommand{\thelem}{A\@arabic\c@lem} \renewcommand{\theproposition}{A\@arabic\c@proposition} \renewcommand{\thetheorem}{A\@arabic\c@theorem} \renewcommand{\thedefinition}{A\@arabic\c@definition} \renewcommand{\thesection}{A\@arabic\c@section} \makeatother \section{Derivations of \eqref{eq:dDiss} and \eqref{eq:dF}}\label{sec:derivation} Let $\bm x_i^{\epsilon}=\bm x_i+\epsilon\bm s_i$ for any direction $\bm s_i$. We need to obtain the first-order variation $\frac{\delta \mathcal{F}}{\delta {\bm x}_i}$. First we need to calculate \begin{equation*} \begin{aligned} \frac{\dd \mathcal{F}}{\dd \epsilon}\Big\vert_{\epsilon=0}&=-\frac{2}{N}\int_{\Omega}\frac{\dd}{\dd \epsilon}\Big\vert_{\epsilon=0}K(\bm x_i^{\epsilon}, \bm y)\dd F(\bm y)+\frac{2}{N^2}\sum_{k\neq i}^N \frac{\dd K(\bm x_i^{\epsilon}, \bm x_k)}{\dd \epsilon}\Big\vert_{\epsilon=0}\\ &=-\frac{2}{N}\int_{\Omega}\left[\nabla_{\bm x}K(\bm x_i, \bm y)\right]^\top \bm s_i\dd F(\bm y)+\frac{2}{N^2}\sum_{k\neq i}\left[\nabla_{\bm x} K(\bm x_i, \bm x_k)\right]^\top \bm s_i\\ &=-\frac{2}{N}\left[\int_{\Omega}(\nabla_{\bm x}K(\bm x_i, \bm y))\dd F(\bm y)\right]^\top \bm s_i+\frac{2}{N^2}\sum_{k\neq i}\left[\nabla_{\bm x} K(\bm x_i, \bm x_k)\right]^\top \bm s_i.
\end{aligned} \end{equation*} Therefore, \begin{equation*} \frac{\delta \mathcal{F}}{\delta \bm x_i}=-\frac{2}{N}\int_{\Omega}\left[\nabla_{\bm x}K(\bm x_i, \bm y)\right]\dd F(\bm y)+\frac{2}{N^2}\sum_{k\neq i}\nabla_{\bm x} K(\bm x_i, \bm x_k). \end{equation*} Similarly, for any direction $\bm s_i$, \[ \frac{\dd \frac{1}{2}\triangle_h}{\dd \epsilon}\Big\vert_{\epsilon=0}=\frac{1}{2N}\frac{\dd}{\dd \epsilon}(\dot{{\bm x}}_i+\epsilon \bm s_i)^\top (\dot{{\bm x}}_i+\epsilon \bm s_i)\Big\vert_{\epsilon=0}=\frac{1}{N}(\dot{{\bm x}}_i+\epsilon \bm s_i)^\top \bm s_i\Big\vert_{\epsilon=0}=\frac{1}{N}\dot{{\bm x}}_i^\top \bm s_i \] and \[ \frac{\delta \frac{1}{2}\triangle_h}{\delta \dot{{\bm x}}_i}=\frac{1}{N}\dot{{\bm x}}_i. \] \section{Discussion of the driving force term}\label{sec:driving} We show that if a particle is an outlier, its corresponding driving force term is small, which makes the algorithm stagnate. First, we give a rigorous definition of an outlier. Recall that the driving force term is defined (up to the overall $-$ sign) as \begin{equation*} \text{driving}_i=\int_{\Omega}-\left[\frac{{\bm x}_i-{\bm y}}{h^2}\exp(-\frac{\|{\bm x}_i-{\bm y}\|_2^2}{2h^2}) \rho(\bm y) \right] \dd \bm y. \end{equation*} Given $\epsilon >0$, define \begin{equation*} S_1:=\{\bm y ~ | ~ \rho(\bm y)>\epsilon\}, \quad S_2:=\left\{\bm y ~ \Big\vert ~ \frac{\|{\bm x}_i-{\bm y}\|_{\infty}}{h^2}\exp\left(-\frac{\|{\bm x}_i-{\bm y}\|_2^2}{2h^2}\right)>\epsilon\right\}. \end{equation*} We introduce the following definition of an $\epsilon$-outlier. \begin{definition} A particle $\bm x_i$ is an {\bf $\epsilon$-outlier} if there exists a constant $\epsilon$ such that $S_1\cap S_2 = \varnothing$. \end{definition} The support region $\Omega$ can be split into three parts, $S_1$, $S_2$ and $S_3=(S_1\cup S_2)^c$; by De Morgan's law, $S_3=S_1^c\cap S_2^c$. 
Accordingly, we can split the driving force term into integrals over these three regions and denote them by $\bm l_{S_1}$, $\bm l_{S_2}$ and $\bm l_{S_3}$. \begin{proposition} If a particle $\bm x_i$ is an $\epsilon$-outlier, then the driving force term satisfies \[ \|{\rm driving}_i\|_{\infty} \leq (C + 2) \epsilon \quad \text{and} \quad \|{\rm driving}_i\|_2 \leq \sqrt{d}(C + 2) \epsilon, \] where $C = 2 \pi^{d/2} / \Gamma (d/2)$ with $\Gamma$ being the Gamma function. \end{proposition} \begin{proof} Applying the triangle inequality, we have \[ \|\text{driving}_i\|_{\infty} \leq \| \bm l_{S_1} \|_{\infty}+\|\bm l_{S_2}\|_{\infty}+\|\bm l_{S_3} \|_{\infty}. \] Since the integration of $\bm l_{S_1}$ is over $S_1$ and $S_1 \cap S_2 =\varnothing$, we have $S_1\subseteq S_2^c$, so \begin{equation*} \begin{aligned} \|\bm l_{S_1}\|_{\infty} & \leq \int_{S_1}\frac{\left\| {\bm x}_i - {\bm y}\right\|_{\infty}}{h^2}\exp(-\frac{\|{\bm x}_i-{\bm y}\|_2^2}{2h^2}) \rho(\bm y) \dd \bm y \leq \epsilon \int_{S_1} \rho(\bm y) \dd \bm y \leq \epsilon. \end{aligned} \end{equation*} Similarly, using the facts that $S_2\subseteq S_1^c$ and $S_3\subseteq S_2^c$, we can show \begin{align*} & \|\bm l_{S_2}\|_{\infty}\leq \epsilon \int_{S_2}\frac{\left\| \bm y - \bm x_{i}\right\|_{\infty}}{h^2}\exp(-\frac{\|\bm y-\bm x_i\|_2^2}{2h^2}) \dd \bm y \leq \epsilon\int_{S_2}\frac{\left\| \bm y - \bm x_{i}\right\|_2}{h^2}\exp(-\frac{\|\bm y-\bm x_i\|_2^2}{2h^2}) \dd \bm y \leq C \epsilon \\ & \|\bm l_{S_3}\|_{\infty} \leq \int_{S_3}\frac{\left\| \bm y - \bm x_{i}\right\|_{\infty}}{h^2}\exp(-\frac{\|\bm y-\bm x_i\|_2^2}{2h^2}) \rho(\bm y) \dd \bm y\leq \epsilon, \end{align*} where $C = 2 \pi^{d/2} / \Gamma (d/2)$. It follows that if a particle is an $\epsilon$-outlier, then $\|\text{driving}_i\|_{\infty} \leq (2 + C) \epsilon$. 
Since $\|{\bm x}\|_2 \leq \sqrt{d}\|{\bm x}\|_{\infty}$ for any ${\bm x} \in \mathbb{R}^d$, we have the upper bound for $\|\text{driving}_i\|_2$. \end{proof} \bibliographystyle{asa}
\section{Introduction}\label{sec:Intro} Compelling evidence for an abundant, non-baryonic, non-luminous (dark) matter component has been collected in the last decades \cite{PdG}. Yet the nature of the dark matter (DM) remains unknown, and the quest for an answer ranks among the main open questions of experimental particle physics, astrophysics and cosmology. Weakly Interacting Massive Particles (WIMPs) \cite{ref1,ref2} are credible, theoretically appealing DM candidates. If these massive relics of the early universe do exist, they are expected to be gravitationally bound to the baryonic visible matter. A direct search for WIMPs in the mass range from a few GeV/c$^2$ to a few TeV/c$^2$ can be based on the detection of nuclear recoils induced by WIMP elastic scattering. Cross-sections are not expected to exceed those of weak processes. The kinetic energy of scattered nuclei, and consequently their range in dense matter, is determined by the WIMP mass and by its velocity relative to a terrestrial target. In the Standard Halo Model the WIMP speed in the galaxy is assumed to follow a Maxwellian distribution, with zero average value for each velocity component. The motion of the Solar System through the galaxy, however, creates an apparent wind of dark matter particles, blowing opposite to the direction of the Sun's motion toward the Cygnus constellation. The intensity of this wind, i.e.~the WIMP flux, is expected to be time-modulated by the Earth's motion in the Solar System, with an annual period and a maximum rate in summer \cite{Spergel}. The speed of the Earth in the Solar System is small compared to the speed of the Sun in the Milky Way, so the amplitude of the annual modulation is of the order of a few percent. The DAMA experiment \cite{DAMA} at LNGS has indeed reported a signal with clear evidence of an annual modulation, a possible indication of a DM-induced signal. 
This signal, although statistically extremely significant ($>8$ standard deviations), is controversial because many experiments have already partially or totally excluded the region allowed by DAMA. The DAMA results therefore remain an intriguing puzzle. Figure~\ref{fig:StateOfArt} shows the upper limits and contour regions for the WIMP spin-independent cross sections, normalized to the scattering on a single nucleon, as a function of the WIMP mass. The constraints from SUSY models with the inclusion of LHC results are also shown. The figure was made with the \texttt{dmtools} web page~\cite{dmtool}. \begin{figure} \centering \includegraphics[width=0.65\linewidth]{figs/StateOfArt_dmtools} \caption{WIMP cross sections (normalized to a single nucleon) for spin-independent couplings versus mass. The DAMA/LIBRA~\cite{DAMA} and CoGeNT~\cite{COGENT} contour regions indicate possible signal events. The 90$\%$ C.L.~upper limits for the CRESST-II~\cite{CRESST}, CDMS+EDELWEISS~\cite{EDELWEISS}, XENON100~\cite{Xenon100} and LUX~\cite{LUX} experiments are shown as solid curves. The green region indicates the predictions from the Minimal Supersymmetric Standard Model (MSSM) integrated with constraints set by LHC experiments~\cite{MSSM}. } \label{fig:StateOfArt} \end{figure} On the other hand, the angular distribution of the scattered nuclei is peaked around the direction of the apparent dark matter wind. The directional modulation is expected to be stronger than the annual modulation, with a rate of forward-scattered nuclei one order of magnitude higher than that of backward-scattered nuclei. Since background sources are expected to be isotropic, the detection of a signal with a preferred direction would provide a powerful discrimination. Directional experiments intend to exploit this effect by measuring the direction of nuclear recoils, and hence the WIMP direction. 
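The size of the forward/backward asymmetry induced by the Sun's motion can be illustrated with a short Monte Carlo of the Standard Halo Model kinematics. The sketch below is illustrative only: the halo velocity dispersion parameter ($v_0 = 220$ km/s) and the Sun's speed through the halo ($\sim 230$ km/s) are common benchmark values, not numbers taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

V0 = 220.0   # km/s, Maxwellian dispersion parameter (benchmark assumption)
VE = 230.0   # km/s, lab speed through the halo (benchmark assumption)

# Sample WIMP velocities in the galactic rest frame (isotropic Maxwellian,
# each component ~ N(0, v0/sqrt(2))), then boost to the laboratory frame.
n = 200_000
v_gal = rng.normal(0.0, V0 / np.sqrt(2.0), size=(n, 3))
v_lab = v_gal - np.array([0.0, 0.0, VE])   # lab moves along +z, toward Cygnus

# Angle of the incoming WIMP with respect to the average wind direction (-z).
cos_theta = -v_lab[:, 2] / np.linalg.norm(v_lab, axis=1)

forward = np.mean(cos_theta > 0.0)   # fraction arriving from the Cygnus side
print(f"forward fraction: {forward:.3f}, backward: {1.0 - forward:.3f}")
```

With these benchmark numbers roughly nine WIMPs out of ten arrive from the Cygnus hemisphere, which is the origin of the order-of-magnitude forward/backward asymmetry quoted above.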
In the above sketched WIMP scenario, the key points for the design of an experiment searching for DM with a directional approach are the expected event rate and the expected angular and energy distributions of the recoiling nuclei. The expected event rate does not exceed 1 event/kg/year. Such extremely low rates require strong background suppression. The WIMP mean velocity inside our galaxy is a few hundred kilometers per second at the location of our Solar System. For these velocities, WIMPs interact with ordinary matter mainly via elastic scattering on nuclei. With expected WIMP masses in the range from 10 GeV to 10 TeV, typical nuclear recoil energies are of the order of 1 $\div$ 100 keV. The expected nuclear recoil energy spectrum decreases almost exponentially with energy. To exploit directionality with light- and medium-mass scattered nuclei, the required spatial accuracy is in the sub-mm domain for gaseous detectors and in the sub-$\mu$m range for solid detectors. In the first case the low event rate sets the requirement of very large volumes, while in the second case an extremely high resolution is required in order to cope with the very short range of the recoiling nuclei. Experiments for dark matter searches based on solid or liquid targets are not able to measure the direction of nuclear recoils. They search for a WIMP signal as an excess of events over the expected background, possibly with an annual modulation of the event rate, if sensitive enough. Gaseous detectors, on the other hand, are capable of reconstructing the three-dimensional tracks of nuclear recoils, but their mass and the corresponding sensitivity are rather limited. Current gas-based detectors such as DRIFT \cite{DRIFT}, NEWAGE \cite{NEWAGE}, DMTPC \cite{DMTPC} and MIMAC \cite{MIMAC} make use of low-pressure CF$_4$ with a fiducial mass ranging from 3 to 140 g \cite{DRIFT}, thus providing limits only on the spin-dependent WIMP-proton cross-section. 
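The keV scale of the recoil energies quoted above follows from standard two-body elastic kinematics: the maximum energy transferred to a nucleus of mass $m_N$ by a WIMP of mass $m_\chi$ and speed $v$ is $E_R^{\max} = 2\mu^2 v^2 / m_N$, with $\mu$ the WIMP-nucleus reduced mass. A minimal numerical sketch follows; the 544 km/s figure is a commonly used galactic escape speed, an assumption not taken from this paper.

```python
C_LIGHT = 299_792.458          # speed of light, km/s
AMU_GEV = 0.9315               # 1 atomic mass unit in GeV/c^2

def max_recoil_keV(m_chi_GeV, target_A, v_kms):
    """Maximum recoil energy in elastic WIMP-nucleus scattering:
    E_R^max = 2 mu^2 v^2 / m_N, with mu the reduced mass."""
    m_N = target_A * AMU_GEV
    mu = m_chi_GeV * m_N / (m_chi_GeV + m_N)
    beta2 = (v_kms / C_LIGHT) ** 2
    return 2.0 * mu**2 * beta2 / m_N * 1e6   # GeV -> keV

for m_chi in (10.0, 100.0):
    for name, A in (("C", 12), ("O", 16), ("Br", 80), ("Ag", 108)):
        e = max_recoil_keV(m_chi, A, 544.0)  # ~ galactic escape speed
        print(f"m_chi = {m_chi:5.0f} GeV  {name:2s}: E_R^max = {e:6.1f} keV")
```

Even at the escape speed, the endpoint energies stay within a few hundred keV, and at the typical speeds of a few hundred km/s the recoils fall in the 1 $\div$ 100 keV window stated above.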
The use of a solid target would allow the exploration of lower cross sections in the phase space indicated by recent limits from direct search experiments, the challenge being the shorter track length, $O$(100 nm), resulting from the WIMP-nucleus scattering. The Nuclear Emulsions for WIMP Search (NEWS) project presented here aims at the direct detection of dark matter candidates by measuring the direction of WIMP-induced nuclear recoils. For this challenge, the detector exploits new-generation nuclear emulsions with nanometric grains. An R$\&$D program conducted by Nagoya University in collaboration with the Fujifilm Company has established the production of films with nanometric grains for an ultra-high spatial resolution. We report the results of this R$\&$D and the corresponding development of new fully automated scanning systems capable of detecting such short tracks, with improved optical technologies overcoming the diffraction limit of conventional systems. We have studied the detection efficiency for nanometric tracks, using ion implantation systems to reproduce nuclear tracks of the same length as expected from WIMP-induced nuclear recoils. A section of this document is devoted to the measurements of the neutron yield from intrinsic film radioactivity and, more in general, to the discussion of potential background sources. Given that nuclear emulsions are time-insensitive, the detector will be placed on a standard equatorial telescope to keep its orientation fixed toward the Cygnus constellation. The choice of appropriate shielding materials and of the detector layout is also discussed. Finally, we propose the design and construction of a one-kilogram detector for a pilot experiment, acting as a demonstrator of the technology and aiming at scaling it up to a larger experiment. The construction, run and data analysis are planned on a time scale of about six years. 
This experiment will demonstrate the potential of the technique and will start constraining the parameter space outlined by the DAMA experiment. \section{NIT: Nano Imaging Tracker} \label{sec:NIT} After decades of remarkable experimental applications, nuclear emulsions still maintain their attraction as ionizing-particle detectors of unmatched spatial and angular resolution. The first application of fully automated scanning systems to a large-scale experiment was for the CHORUS experiment~\cite{CHORUS}. Impressive achievements with new-generation systems, more than one order of magnitude faster, allowed the design of the OPERA experiment~\cite{OPERA}, while current developments of the technology inspire the design of high-statistics neutrino experiments with large active targets~\cite{SHIP}. Nuclear emulsions are made of silver halide crystals embedded in a gelatine matrix. When light falls on the emulsion, or ionizing particles pass through it, some of the halide crystals are modified in such a way that they are turned into grains of silver when the emulsion is immersed in a reducing bath (the so-called \emph{developer}). The modifications in the grains caused by the action of light or radiation are invisible, and the effect is referred to as the \emph{formation of a latent image}. After development, a silver halide emulsion is placed in a second bath, called \emph{fixer}, which dissolves the unaffected grains of silver halide but leaves the small black granules of silver. Finally, the plate is washed and dried~\cite{Emulsion1,Emulsion2,Chap2HandbookOf}. The primary function of the gelatine is to provide a three-dimensional framework which serves to locate the small crystals of the halide and to prevent them from migrating during development and fixation. The three-dimensional trajectory of a traversing particle can be reconstructed with an optical microscope by connecting all the silver grains produced after development. 
The size of silver halide crystals in standard emulsions ranges from 0.1 $\mu$m to 1 $\mu$m. The sensitivity of the emulsion strongly depends on the size of the crystals: the larger the grain, the higher the emulsion sensitivity to ionizing radiation. Due to the low recoil energy of a WIMP-scattered nucleus, the expected track length is of the order of a few hundred nanometers. State-of-the-art emulsions produced by the Fuji Co.~\cite{OPERAemulsion} for the OPERA experiment, with a linear crystal dimension of 200 nm, are therefore not suitable for dark matter searches. The R$\&$D performed at Nagoya University, in collaboration with Fuji Co. experts, led to the production of novel emulsion films with grain diameters down to a few tens of nm, one order of magnitude smaller than conventional ones. The so-called Nano Imaging Trackers (NIT) and Ultra-Nano Imaging Trackers (U-NIT) have grains of 44.2 and 24.8 nm diameter, respectively (see Figure~\ref{fig:grains}). NIT films have a linear density of crystals of about 11 crystals/$\mu$m~\cite{NIT}, while U-NIT show 29 crystals/$\mu$m~\cite{U-NIT}. They make the reconstruction of trajectories with path lengths shorter than 100 nm possible, if analyzed by means of microscopes with sufficient resolution. \\ \begin{figure}[tbph] \centering\includegraphics[width=1.0\linewidth]{figs/NIT_grain_distribution_2} \caption{Distribution of the crystal diameter measured with an electron microscope for NIT (left) and U-NIT (right) emulsions. The measurements refer to three different batches.} \label{fig:grains} \end{figure} \begin{figure}[tbph] \centering\includegraphics[width=0.8\linewidth]{figs/production-machine_old} \caption{NIT gel production machine.} \label{fig:production-machine} \end{figure} NIT are produced in three steps using a dedicated machine (see Figure~\ref{fig:production-machine}). 
First, the AgBr crystal growth is obtained by mixing AgNO$_3$ and NaBr in a thermostatic bath, exploiting the following reaction: \begin{equation} \mbox{AgNO}_3 + \mbox{NaBr} \rightarrow \mbox{AgBr} + \mbox{Na}^+ + \mbox{NO}_3^- \end{equation} Polyvinyl alcohol (PVA) is then added to ensure the uniformity of the grain size of the crystals. NaI, with a concentration of 4\% mol, is also used in order to increase the quantum efficiency in the activation of the crystals. Next, in the desalination phase, the AgBr crystals are mixed with the gelatin while the residual extra ions (Na$^+$, NO$_3^-$) are extracted by means of a reduction process. A homogeneous crystal distribution is obtained with a centrifugation process at 1000 rpm and $50^\circ$C. Finally, the emulsion gel obtained with this procedure (see Figure~\ref{fig:gel}, left) is mixed with ultra-pure water and poured on a rigid support (usually plastic or glass), as shown in the right picture of Figure~\ref{fig:gel}. The production machine is able to produce up to 3 kg of NIT emulsion gel per week. The mass fractions of the NIT constituents and the chemical composition of NIT emulsions are reported in Tables~\ref{tab:composition} and~\ref{tab:constituents}, respectively. The emulsion composition has been carefully determined for light elements by an elemental analyser (YANACO MT-6) with an uncertainty of 0.3\%. The mass fractions of silver and bromine have been measured by an energy-dispersive X-ray analysis with an uncertainty of 2\%. The density amounts to 3.43 g/cm$^{3}$. \begin{figure}[tbph] \centering\includegraphics[width=1.0\linewidth]{figs/gel1} \caption{Left: emulsion gel. 
Right: emulsion gel poured on a glass support.} \label{fig:gel} \end{figure} \begin{table}[htpb] \centering \begin{tabular}{c|c} \hline Constituent & Mass Fraction \\ \hline AgBr-I & 0.78 \\ Gelatin & 0.17 \\ PVA & 0.05 \\ \hline \end{tabular} \caption{Constituents of NIT emulsions.} \label{tab:composition} \end{table} \begin{table}[htpb] \centering \begin{tabular}{c|c|c} \hline Element & Mass Fraction & Atomic Fraction \\ \hline Ag & 0.44 & 0.10 \\ Br & 0.32 & 0.10 \\ I & 0.019 & 0.004 \\ C & 0.101 & 0.214 \\ O & 0.074 & 0.118 \\ N & 0.027 & 0.049 \\ H & 0.016 & 0.410 \\ S & 0.003 & 0.003 \\ \hline \end{tabular} \caption{Elemental composition of NIT emulsions.} \label{tab:constituents} \end{table} During the whole lifetime of the emulsion and before the development, sensitive crystals can be randomly activated by thermal excitation, resulting in the production of random dark grains: this so-called \emph{fog} (of the order of $1 \div 10 \slash(10 \mu$m$)^3$ for OPERA emulsions) represents a potentially dangerous source of background when looking for very short tracks ($O$(100 nm)) made of only two consecutive dark grains. In this case, indeed, if the fog density is too high, the probability that two fog grains are close enough to mimic a signal track is not negligible. A recent R$\&$D program led to a new chemical development procedure resulting in a suppression of the fog density by one order of magnitude: using a low-temperature ($5^\circ$C) developer based on MAA-1, a fog density of $\sim0.1 \slash(10 \mu$m$)^3$ has been achieved. Moreover, fog grains show a rather different contrast and shape with respect to radiation-sensitized grains. These important features can be exploited to enhance the signal-to-background ratio, as will be explained in Section~\ref{sec:read-out}. 
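The impact of the residual fog on a two-grain track search can be gauged with a back-of-the-envelope Poisson estimate: assuming randomly placed fog grains at the quoted density of $\sim 0.1\slash(10\,\mu$m$)^3$, count how often two of them fall within a track-length distance of each other. The 200 nm pair radius below is an assumed value, chosen only to match the $O$(100 nm) track scale.

```python
import math

# Fog (randomly developed single grains) can fake a two-grain track when two
# fog grains happen to lie within a track-length distance of each other.
fog_density_per_um3 = 0.1 / 10**3      # ~0.1 grains per (10 um)^3, from the text
pair_radius_um = 0.2                   # assumed max two-grain separation (200 nm)

v_pair = 4.0 / 3.0 * math.pi * pair_radius_um**3     # search volume per grain
p_neighbor = fog_density_per_um3 * v_pair            # P(another fog grain nearby)

grains_per_cm3 = fog_density_per_um3 * 1e12          # 1 cm^3 = 1e12 um^3
chance_pairs_per_cm3 = 0.5 * grains_per_cm3 * p_neighbor  # each pair counted once

print(f"P(neighbor within {pair_radius_um * 1000:.0f} nm) = {p_neighbor:.2e}")
print(f"chance fog pairs per cm^3 ~ {chance_pairs_per_cm3:.0f}")
```

Even at the suppressed fog density, this crude estimate leaves of order a hundred accidental pairs per cm$^3$, which is why the additional contrast and shape discrimination mentioned above is needed on top of the fog reduction.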
\section{Experimental concept} \label{sec:expConcept} NEWS is a very innovative approach for a high-sensitivity experiment aiming at the directional detection of WIMPs: the detector is based on recent developments of the nuclear emulsion technology that allow an extremely high spatial resolution to be reached. The detector is conceived as a bulk of nuclear emulsions acting both as a target and as a tracking device, surrounded by a shield (see Section~\ref{sec:set-up}) to reduce the external background. The detector will be placed on an equatorial telescope in order to compensate for the Earth's rotation, thus keeping the orientation towards the Cygnus constellation fixed. The emulsion films will lie with their surface permanently parallel to the expected average WIMP wind direction. Figure~\ref{fig:wimp_direction} shows the distribution of the WIMP incoming angle, in the laboratory frame, projected on a plane containing the average WIMP wind direction. The majority of WIMPs are directed forward, with a peak at zero. The superimposed red curve shows the same angle if one is not sensitive to the forward/backward direction. The angular distribution of the trajectories of WIMP-scattered nuclei is therefore expected to be anisotropic. \begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{figs/wimp_angle_2D} \caption{WIMP 2-dim angle distribution on a plane containing the average WIMP wind direction (blue curve). The red curve shows the same angle if one is not sensitive to the forward/backward direction.} \label{fig:wimp_direction} \end{figure} The presence in the emulsion gel of lighter nuclei such as carbon, oxygen and nitrogen, in addition to the heavier nuclei of silver and bromine, is a key feature of the NEWS project, resulting in a good sensitivity to WIMPs of both light and heavy masses. The sensitivity indeed strongly depends on the minimum detectable track length. 
The path length of the recoil track depends in turn on the kinetic energy of the scattered nucleus, the kinematics being determined both by the mass of the incident WIMP and by that of the target nucleus. The correlation between the track length of the recoiling nucleus and its kinetic energy is shown in Figure~\ref{fig:correlation} for the different target nuclei. A WIMP with a mass of about 100 GeV/c$^2$ prefers Ag and Br as targets, producing e.g.~Br recoils with an average kinetic energy of about 50 keV. Although Ag and Br are the most effective targets for WIMP masses in this range, the detection capability is reduced since their ranges are shorter than those of lighter elements at the same energy. Instead, for a WIMP with a mass around 10 GeV/c$^2$, the kinematics favours lighter nuclei that, for a given kinetic energy, have a longer range. Therefore, the contribution of the C, N and O ions is essential for WIMP masses around 10 GeV/c$^2$. \begin{figure}[tbph] \centering \includegraphics[width=0.6\linewidth]{figs/correlation} \caption{Correlation between the track length of the recoiled nuclei and their kinetic energy, for different target nuclei in NIT emulsions.} \label{fig:correlation} \end{figure} The estimated WIMP rates are of the order of 1 event$\slash$kg$\slash$year, much lower than the usual radioactive backgrounds. For this reason, the detector has to be placed underground to be protected from the cosmic-ray-induced background. Moreover, a careful control of the radioactive contamination of the materials used for the detector construction and a precise estimation of the corresponding induced background are needed. We will discuss in detail the most relevant background sources for the WIMP search with an emulsion-based detector on the mass scale of a few kilograms. After the exposure, the emulsion films composing the target will be developed and the whole detector volume will be analyzed using fully automated scanning systems. 
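For rate estimates, the elemental mass fractions of Table~\ref{tab:constituents} translate directly into numbers of target nuclei per kilogram of emulsion. A minimal sketch, assuming standard atomic masses (the only inputs taken from this paper are the measured mass fractions):

```python
# Number of target nuclei per kilogram of NIT emulsion, from the measured
# elemental mass fractions (Table "Elemental composition of NIT emulsions").
N_A = 6.02214076e23   # Avogadro's number

mass_fraction = {"Ag": 0.44, "Br": 0.32, "I": 0.019, "C": 0.101,
                 "O": 0.074, "N": 0.027, "H": 0.016, "S": 0.003}
atomic_mass = {"Ag": 107.87, "Br": 79.904, "I": 126.90, "C": 12.011,
               "O": 15.999, "N": 14.007, "H": 1.008, "S": 32.06}  # g/mol

nuclei_per_kg = {}
for el, w in mass_fraction.items():
    # grams of the element in 1 kg, divided by its molar mass, times N_A
    nuclei_per_kg[el] = w * 1000.0 / atomic_mass[el] * N_A
    print(f"{el:2s}: {nuclei_per_kg[el]:.2e} nuclei/kg")
```

The heavy Ag and Br nuclei dominate the target by mass, while the light C, N and O nuclei still contribute a comparable number of scattering centers per kilogram, consistent with the dual light/heavy-mass sensitivity discussed above.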
The read-out (see Section~\ref{sec:read-out}) is performed in two phases. In the first phase, a fast scanning is performed (see Section \ref{sec:optical-read-out}) by means of an improved version of the optical microscopes used for the scanning of the OPERA films (\cite{ESS,S-UTS}). This step achieves a fast pre-selection of the candidate signal tracks with a relatively low spatial resolution (200 nm). In order to resolve the nanometric grains belonging to a signal track and to enhance the signal-to-background ratio, a further scanning of the pre-selected tracks with an ultra-high-resolution scanning system is foreseen (see Section~\ref{sec:plasmon}). The final resolution for the reconstruction of nuclear recoil tracks is estimated to be between $10$ and $20$ nm in position and better than $15^\circ$ in angle. \section{Read-out technique} \label{sec:read-out} In the NEWS experiment the expected WIMP signal will consist of short-path, anisotropically distributed nuclear recoils over an isotropically distributed background. The search for signal candidates requires the scanning of the whole emulsion volume. The read-out system therefore has to fulfill two main requirements: a fast, completely automated scanning system is needed to analyse the target volume over a time scale comparable with the exposure, and the spatial resolution has to be improved by more than one order of magnitude compared to that achieved with standard emulsion films, reaching the challenging value of a few tens of nanometers, in order to ensure high efficiency and purity in the selection of signal candidates. The analysis of NIT emulsions is performed with a two-step approach: a fast scanning with state-of-the-art resolution for the signal preselection, followed by a pin-point check of the preselected candidates with unprecedented nanometric resolution to further enhance the signal-to-noise ratio and perform very accurate measurements of the range and the recoil direction. 
These two steps are discussed in the next sub-sections. \subsection{Optical microscopy for candidate selection} \label{sec:optical-read-out} \begin{figure}[htbp] \centering% \subfigure[Japanese prototype\label{fig:mic_Nagoya}]% {\includegraphics[width=0.5\linewidth]{figs/mic_Nagoya.jpg}}\qquad\qquad \subfigure[Prototype developed by INFN groups \label{fig:mic_LNGS}]% {\includegraphics[angle=-90,width=0.5\linewidth]{figs/mic_LNGS2}}\qquad\qquad \caption{Optical scanning systems modified for the analysis of NIT. \ref{fig:mic_Nagoya} Prototype installed at Nagoya University. \ref{fig:mic_LNGS} Prototype installed at the LNGS and Naples scanning laboratories. \label{fig:mics}} \end{figure} The members of the NEWS Collaboration have state-of-the-art expertise in large-scale fast automated scanning with a spatial resolution of about $1\mu$m and an angular resolution of about 1 mrad, as currently applied in the OPERA experiment \cite{OPERAhowTo}: the European Scanning System (ESS~\cite{ESS}) in Europe and the Super-Ultra Track Selector (S-UTS~\cite{S-UTS}) in Japan. In the last years an R\&D program aimed at improving the ESS performance was carried out by INFN groups, leading to prototypes with a resolution improved by one order of magnitude and a speed of almost 200 cm$^2$/h~\cite{ESS-new}. A new system (the Super-Ultra Track Selector) is being developed in Japan, aiming at increasing the scanning speed up to 5000 cm$^2$/h. Stepping into the nano-imaging domain requires substantial upgrades of the OPERA-style scanning systems. New prototypes (see Figure~\ref{fig:mics}) were already set up both in Japan and in Italy, featuring: \begin{itemize} \item higher magnification of the objective lens, from 50x to 100x \item higher numerical aperture, from 0.8 to 1.45 \item higher optical contrast (illumination by reflected light instead of transmitted light) 
\item light with green or blue wavelength to improve the resolution \item high pixel-to-micron ratio ($\sim$ 28 nm/pixel), one order of magnitude better than the systems used in OPERA \item high-resolution (4 Mpx) and high-speed (563 fps) CMOS camera. \end{itemize} In parallel with the hardware improvements, a new acquisition software and a new tracking algorithm have been developed: the high data rate (1.7 GB/s), a factor of 4 higher than that of the ESS due to the improved sensor resolution, has required the use of latest-generation acquisition boards (Matrox Radient eCL SFCL/DFCL). As a consequence, a more powerful computing system, exploiting a GPU (Graphics Processing Unit) based architecture, has been implemented. In order to evaluate the performance of the new scanning systems, extensive tests were performed with exposures of NIT to slow ions and neutron beams. The results are discussed below. The starting point of the emulsion scanning is the image analysis, which collects clusters made of dark grains at several depths across the emulsion plate thickness. Given the intrinsic resolution of the optical microscope ($\sim$ 200 nm), the sequence of several grains making a track of a few hundred nanometers appears as a single cluster. Therefore, the key element to distinguish clusters made of several grains from clusters made of a single grain produced by thermal excitation (fog) is the analysis of their shape. A cluster made of several grains indeed tends to have an elliptical shape with the major axis along the direction of the trajectory, while a cluster produced by a single grain tends to have a spherical shape. In order to simulate the effect of a WIMP-induced nuclear recoil and to measure the efficiency and the resolution of the new optical prototype, a test beam with low-velocity ions was performed. We used both a Kr ion beam with energies of 200 and 400 keV~\cite{ShapeAnalysis} and a C ion beam with energies of 60, 80 and 100 keV. 
Kr and C ions of such energies produce tracks in emulsion with lengths in the range 100$\div$300 nm. These ions were implanted in the emulsions using a low-speed ion implantation facility at Nagoya University. When analysed with the optical microscope, the submicrometric tracks produced by Kr and C ions appear as shown in Figure~\ref{fig:KrIon-ShapeAnalysis}. Although the silver grains belonging to the tracks are not distinguishable and appear as a single cluster, the elongated form of the cluster is clearly visible~\cite{ShapeAnalysis2}. An elliptical fit of the cluster shape allows a clear separation between fog grains and signal tracks: the latter are expected to have an ellipticity larger than a given threshold, typically 1.25 or higher (see the left plots of Figures~\ref{fig:shapeAnalysis60} and \ref{fig:shapeAnalysis80}). The angular distributions of 60 and 80 keV C ions are reported in the right plots of Figure~\ref{fig:shapeAnalysis60} and Figure~\ref{fig:shapeAnalysis80}, respectively. A peak corresponding to the direction of the implanted ions is clearly visible; the width of the distribution corresponds to the angular resolution, amounting to 360 mrad. The angular resolution is given by the convolution of the intrinsic resolution with the angular deviations caused by the scattering in the material. For low-energy (below 100 keV) tracks, the scattering cannot be neglected. In order to evaluate the intrinsic angular resolution of the scanning system, we analysed an emulsion sample exposed to a 2.8 MeV neutron beam at the Fusion Neutronics Source (FNS) of the Japan Atomic Energy Agency (JAEA). In this case the track length distribution of the neutron-induced proton recoils shows a wider range, up to a few hundred micrometers. A sample of tracks with lengths of the order of a few tens of micrometers and made of a sequence of several elliptical clusters was selected, the scattering effect being negligible for them. 
The same ellipticity cut applied in the previous analysis was used for the selection of the clusters. For each cluster, the angular difference $\Delta \theta$ between its major axis and the fitted track was evaluated (see Figure~\ref{fig:angularResMethod}). The distribution of $\Delta \theta$ shows a Gaussian shape, as shown in Figure~\ref{fig:angularResAndrey}, with a width corresponding to the intrinsic angular resolution and amounting to 230 mrad. This value represents the intrinsic angular resolution achieved with fully automated scanning systems, by far the best resolution achieved with direction-sensitive detectors in this energy range. A simulation shows that this result is compatible with the measurement reported above when the scattering contribution is included. \begin{figure}[tbph] \centering \includegraphics[width=0.6\linewidth]{figs/KrIon-ShapeAnalysis_2} \caption{Kr ions implanted in NIT films. The image is taken with an optical microscope. The selection of candidate tracks is based on the elliptical fit of the clusters.} \label{fig:KrIon-ShapeAnalysis} \end{figure} \begin{figure}[tbph] \centering \includegraphics[width=1.0\linewidth]{figs/shapeAnalysis60} \caption{Left: scatter plot of major and minor axes for clusters analysed with an elliptical fit in a 60 keV C ion test beam. Signal tracks are shown as red dots, fog grains in blue. Right: angular distribution of 60 keV C ion tracks selected by the ellipticity cut.} \label{fig:shapeAnalysis60} \end{figure} \begin{figure}[tbph] \centering \includegraphics[width=1.0\linewidth]{figs/shapeAnalysis80} \caption{Left: scatter plot of major and minor axes for clusters analysed with an elliptical fit in an 80 keV C ion test beam. Signal tracks are shown as red dots, fog grains in blue.
Right: angular distribution of 80 keV C ion tracks selected by the ellipticity cut.} \label{fig:shapeAnalysis80} \end{figure} \begin{figure}[tbph] \centering \subfigure[\label{fig:angularResMethod}] {\includegraphics[width=0.48\linewidth]{figs/angularResMethod}} \subfigure[\label{fig:angularResAndrey}] {\includegraphics[width=0.51\linewidth]{figs/Resolution_protons}} \caption{Left: sketch of the method used for the evaluation of the intrinsic angular resolution. Right: intrinsic angular resolution of the optical scanning system.} \label{fig:AngularResolution} \end{figure} Tracks selected with the shape analysis were validated using the X-ray microscope~\cite{NakaX-ray}. This technique features a higher resolution (of the order of 60 nm) but a slower scanning speed compared with optical microscopy: the analysis of a few hundred $\mu$m$^2$ takes about 100 s. X-ray microscopy can therefore be used only to check a sample of already selected candidate tracks: the X-ray analysis was used to demonstrate the principle of the selection by elliptical shape analysis and to measure the efficiency achievable with optical microscopy. The comparison of optical and X-ray images of candidate tracks is reported in Figure~\ref{fig:x-ray_confirmation}: the high resolution of the X-ray microscope makes it possible to resolve the grains belonging to submicrometric tracks, thus providing the final discrimination between signal and background. In Figure~\ref{fig:eff_vs_length} the detection efficiency of the optical system as a function of the track length is shown: the efficiency is obtained by first selecting a set of multi-grain tracks with the X-ray microscope and then scanning them with the optical one and applying the shape analysis. In this test an optical microscope with a pixel-to-micron ratio of 55 nm/pixel was used. Results show that the efficiency reaches 100$\%$ above 180-200 nm.
In Figure~\ref{fig:eff_vs_energy} the efficiency as a function of the recoil energy for C ions of 60, 80 and 100 keV is shown: the MC simulation (red line) describes the data (blue points) well. It is worth noting that the capability of reconstructing low-energy tracks (E $<$ 40 keV), corresponding to shorter path lengths, although with a lower efficiency, could significantly enhance the sensitivity to low WIMP mass regions. The scanning speed of the prototype currently used for the shape analysis is about 25 mm$^2$/h. \begin{figure}[tbph] \centering \includegraphics[width=0.8\linewidth]{figs/x-ray_confirmation_2} \caption{Comparison between tracks of a few hundred nanometers in length reconstructed with the optical microscope and with the X-ray microscope.} \label{fig:x-ray_confirmation} \end{figure} \begin{figure}[tbph] \centering \includegraphics[width=0.6\linewidth]{figs/eff} \caption{Efficiency of the elliptical fit analysis versus the track length when an ellipticity of 1.25 is used as a threshold.} \label{fig:eff_vs_length} \end{figure} \begin{figure}[tbph] \centering \includegraphics[width=0.8\linewidth]{figs/Eff-vs-Energy} \caption{Efficiency of the elliptical fit analysis versus the C ion energy when an ellipticity of 1.25 is used as a threshold. The MC simulation (red line) describes the data (blue points) well.} \label{fig:eff_vs_energy} \end{figure} \subsection{Beyond the limits of the optical scanning for candidate validation} \label{sec:plasmon} The use of optical microscopes allows the reconstruction of tracks down to 200 nm. X-ray microscopy can overcome this limit, though it is extremely slow compared with automated optical systems. Since speed is an issue in the analysis of a large-mass detector, NEWS aims at improving the spatial resolution of optical microscopy without resorting to X-ray microscopes.
The basic idea is to exploit the resonance effect occurring when nanometric metal grains are dispersed in a dielectric medium~\cite{ResonantLightScattering}. The polarization dependence of the resonance frequencies strongly reflects the shape anisotropy and can be used to infer the presence of non-spherical nanometric silver grains. Figure~\ref{fig:resonantLight} shows the results of resonant light scattering from individual Ag nanoparticles~\cite{ResonantLightScattering}: spherical particles show no dependence on the incident polarization, while a deformed sphere is sensitive to it. \begin{figure}[tbph] \centering \includegraphics[width=1.0\linewidth]{figs/resonantLight} \caption{Scattered-light spectra from individual Ag particles with spherical (left) and spheroidal (right) shape \cite{ResonantLightScattering}. The inset shows the 300 $\times$ 300 nm$^2$ SEM image of the particle. Arrows indicate the polarization of the incident light. A dependence of the response on the light polarization is observed for particles with ellipsoidal shape.} \label{fig:resonantLight} \end{figure} NEWS will use this technology to retrieve track information in NIT emulsions beyond the optical resolution. Images of the same cluster taken at different polarization angles show a displacement of the position of its barycenter. The analysis of the displacements makes it possible to distinguish clusters made of a single grain from those made of two (or more) grains. \begin{figure}[tbph] \centering \includegraphics[width=1.0\linewidth]{figs/plasmon_prototype} \caption{Schematic view of the optical path instrumented with a polarizer to obtain a nanometric resolution with optical microscopes.} \label{fig:plasmon_prototype} \end{figure} \begin{figure}[tbph] \centering \includegraphics[width=1.0\linewidth]{figs/plasmon_analysis1} \caption{Application of resonant light scattering to an elliptical cluster with ellipticity 1.27.
Left plot: $dx$ and $dy$ are the displacements of the cluster barycenter for a given polarization in pixel units (1 pixel = 55 nm). Right plot: track slope fit and track length, about 90 nm.} \label{fig:plasmon_analysis1} \end{figure} \begin{figure}[tbph] \centering \includegraphics[width=1.0\linewidth]{figs/plasmon_resolution} \caption{Position accuracy of about 10 nm on the $x$ (left) and $y$ (right) coordinates obtained with resonant light scattering.} \label{fig:plasmon_resolution} \end{figure} In order to study the polarized light effect, several tests have been performed on NIT samples exposed to 100 keV C ions. Optical microscopes have been equipped with a polarization filter, as shown in Figure~\ref{fig:plasmon_prototype}. The polarization direction can be changed by rotating the polarizer; the rotation is currently done by hand, while its automation is being designed. Images of the same clusters were taken by rotating the polarizer by 180$^\circ$ in steps of 10$^\circ$. An example of the analysis performed on a cluster with ellipticity 1.27 is reported in Figure~\ref{fig:plasmon_analysis1}. For all the images, the displacement ($dx$, $dy$) of the cluster barycenter in the $x$ and $y$ coordinates is measured in pixel units (1 pixel $=$ 55 nm). A displacement exceeding the position accuracy of a single grain is evidence for a cluster made of two consecutive grains and therefore produced by a signal track. From the analysis of $dy$ versus $dx$ it is possible to retrieve the track length and slope. In this case, the measured track length is 1.5 pixels, corresponding to about 90 nm. The evaluation of the position accuracy was performed by analysing images of single grains. An unprecedented accuracy of about 10 nm is achieved in both coordinates, as shown in Figure~\ref{fig:plasmon_resolution}. The tests performed demonstrate that this technology is very promising and that it can replace the X-ray microscope.
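The $dy$-versus-$dx$ analysis described above can be sketched as follows. The function name, the simple least-squares fit and the use of the displacement extent as a length estimator are assumptions for illustration, not the actual analysis chain:

```python
import math

def track_from_displacements(dx, dy, pixel_nm=55.0):
    """Estimate track slope and length from the barycenter displacements
    (in pixels) measured at different polarizer angles.
    Sketch: slope from a least-squares fit of dy versus dx, length from
    the total extent of the displacements, converted to nm."""
    n = len(dx)
    mx, my = sum(dx) / n, sum(dy) / n
    sxx = sum((x - mx) ** 2 for x in dx)
    sxy = sum((x - mx) * (y - my) for x, y in zip(dx, dy))
    slope = sxy / sxx
    span = math.hypot(max(dx) - min(dx), max(dy) - min(dy))
    return slope, span * pixel_nm
```

For displacements spanning about 1.5 pixels, this kind of estimate returns a track length of roughly 90 nm, as in the example quoted in the text.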
Resonant light scattering has, in fact, the great advantage of achieving a nanometric resolution with optical microscopes. The validation of the candidates identified by the shape analysis will be performed in the same scanning laboratory, without moving the samples to a dedicated laboratory for the X-ray analysis. Moreover, optical microscopes are characterized by a much faster scanning speed with respect to X-ray microscopes, since they profit from all the R$\&$D performed in the last decades for both the OPERA and the NEWS experiments. \section{Expected Background} \label{sec:expected-bkg} The final sensitivity of low-energy rare event searches is strongly limited by the background induced by radioactivity. Two main categories have to be taken into account: the environmental or external background and the intrinsic one. The flux of the former can be significantly reduced by placing the detector underground, to absorb the cosmic radiation, and by designing an appropriate shield against the natural radioactivity. The latter is an irreducible source of radiation: it is therefore crucial to control the radioactivity of the materials used for the construction of both the detector and the shield, as well as of the structure of the apparatus. Background sources for dark matter searches are $\alpha$ and $\beta$ particles, $\gamma$-rays and neutron-induced recoils, while NIT are essentially not sensitive to minimum ionizing particles (MIP). The main sources of $\alpha$-particles are the U and Th radioactive chains and radon. The $\alpha$-particles produced in these processes have energies of the order of MeV and their range in emulsion is of the order of tens of microns, by far longer than WIMP-induced nuclear recoils. $\alpha$-particles can therefore be identified and discarded in the emulsions by an upper cut on the track length. However, the radon progeny $^{214}$Pb, $^{214}$Bi and $^{210}$Bi emit energetic $\beta$ and $\gamma$ radiation.
To prevent radon contamination, the detector has to be kept sealed from the air and continuously flushed with boil-off nitrogen. The $\gamma$ radiation due to environmental radioactivity constitutes a non-negligible contribution to the total background budget. In Figure~\ref{fig:gamma-bkg} the measured $\gamma$ flux in the LNGS underground halls is shown~\cite{BrunoPhDThesis,arneodo,wulandari}. Passive or active shielding (usually water, copper or lead) can be used to suppress the external $\gamma$-radiation down to the level of ppb or ppt. The thickness \emph{l} required to reduce the external flux by a factor $f > 1$ can be estimated assuming exponential damping, $\emph{l} = \lambda (E_\gamma) \times \log f$, where $\lambda (E_\gamma)$ is the energy-dependent attenuation length and $E_\gamma$ is the $\gamma$-ray energy. A relevant source of background is represented by $\beta$-rays produced in $^{14}$C decay. Given the carbon content of the emulsions and the $^{14}$C activity, a rejection power R$_{\beta}\leq10^{-8}$ is required in order to make it negligible (i.e.~less than one background track/kg/year). The current rejection power for tracks made of two crystals is R$_{\beta}=10^{-6}$. In order to further improve the rejection, three possible improvements are under investigation. The first one is based on the different energy deposition per path length of WIMP-induced recoils and electrons~\cite{gamma-response}: the response of emulsions can be tuned by dedicated chemical treatments (e.g.~Tetrazolium compound~\cite{tetraz}). The second possibility is to exploit the response of $\beta$-rays to polarized light scattering: grains induced by $\beta$-rays might indeed be less sensitive to polarized light. Finally, a reduction of the background can be achieved by performing a cryogenic exposure and exploiting the phonon effect. Preliminary tests at $\sim 100$ K show an upper limit of R$_{\beta}<10^{-7}$ for tracks made of two crystals.
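The exponential-damping relation above can be turned into a quick numerical estimate of the required shield thickness. The attenuation length used in the example is an illustrative value, not one quoted in the text:

```python
import math

def shield_thickness_cm(attenuation_length_cm, reduction_factor):
    """Thickness needed to damp the external gamma flux by a factor f,
    assuming pure exponential attenuation: l = lambda(E_gamma) * ln(f)."""
    return attenuation_length_cm * math.log(reduction_factor)

# Illustrative: with an assumed attenuation length of 2 cm, suppressing
# the flux by a factor 10^3 requires roughly 14 cm of material.
thickness = shield_thickness_cm(2.0, 1e3)
```

The logarithmic dependence means that each additional order of magnitude of suppression costs a fixed extra thickness of about $2.3\,\lambda(E_\gamma)$.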
\begin{figure}[tbph] \centering\includegraphics[width=0.7\linewidth]{figs/gamma-flux} \caption{$\gamma$-flux measured in the underground LNGS halls \cite{BrunoPhDThesis,arneodo,wulandari}.} \label{fig:gamma-bkg} \end{figure} Neutron-induced recoils rank as the main background source because they are not distinguishable from the expected WIMP signal, except for their isotropic angular distribution and for track lengths largely exceeding the range expected for WIMP-induced recoils. Indeed, while neutron-induced proton recoils can be as long as a few hundred microns, the maximum length of a WIMP-induced nuclear recoil is smaller than 1 $\mu$m even for large ($O$(TeV)) WIMP masses. Three types of neutron sources affect underground experiments: radiogenic neutrons in the MeV range, produced in ($\alpha$, n) and spontaneous fission reactions by the intrinsic radioactive contaminants of the detector; cosmogenic neutrons, with an energy spectrum extending to GeV energies, induced by muons penetrating underground through the rock; and neutrons induced by environmental radioactivity. In Figure~\ref{fig:neutron-flux} the neutron flux measured in the LNGS underground halls is shown \cite{BrunoPhDThesis}: for neutron energies of the order of a few MeV (\emph{fast} neutrons) the flux ranges from $10^{-6}$ to $10^{-10}$ cm$^{-2}$ s$^{-1}$ MeV$^{-1}$. Light materials are effective moderators for fast neutrons: polyethylene (PE, C$_2$H$_4$) is commonly used to reduce the external neutron flux. \begin{figure}[tbph] \centering\includegraphics[width=0.7\linewidth]{figs/neutron-flux} \caption{The neutron flux measured in the underground LNGS halls \cite{BrunoPhDThesis}.} \label{fig:neutron-flux} \end{figure} While the external neutron flux can be reduced to a reasonable level with an appropriate shielding, the intrinsic emulsion radioactivity would be responsible for an irreducible neutron yield through ($\alpha$, n) and $^{238}$U spontaneous fission reactions.
In order to estimate this contribution, a sample of each component of the nuclear emulsion (AgBr, gelatin and PVA) was analysed by the Chemistry Service of the Laboratori Nazionali del Gran Sasso (LNGS, Italy) with the Inductively Coupled Plasma Mass Spectrometry (ICP-MS) technique~\cite{ICP-MS} and at the low background facility STELLA (SubTErranean Low Level Assay) of the LNGS~\cite{STELLA} with germanium detectors. The complementary use of these techniques allows the determination of both the Uranium and Thorium activities and the verification of the secular equilibrium hypothesis. The measured activities are reported in Table~\ref{tab:activities-MS} for all the constituents. The upper limits on PVA are evaluated at 95$\%$ CL. By weighting the measured activity of each constituent by its mass fraction, the total activity of the nuclear emulsion can be calculated. Using the contamination measured with mass spectrometry, the $^{238}$U activity amounts to $23\pm 7$ mBq kg$^{-1}$, while the $^{232}$Th one is $5.1\pm 1.5$ mBq kg$^{-1}$. The reported errors are dominated by the 30$\%$ uncertainty in the radioactive contamination measurements. By assuming a null contribution from PVA, the previous contaminations are reduced by $\sim 2\%$. The $\gamma$ spectrometry gives comparable results for the AgBr sample. For the gelatin the measurements provide comparable results for the $^{232}$Th chain, while the measured concentration of $^{226}$Ra in the $^{238}$U chain is about 20 times smaller than that of the parent isotope, with a measured value of $2.4\pm 0.6$ mBq kg$^{-1}$. This measurement suggests a break in the secular equilibrium of the decay chain at this point. Therefore secular equilibrium is assumed for the upper part of the chain, using the activity measured by mass spectrometry, while for the lower part the nuclides are considered in equilibrium with $^{226}$Ra and the activity measured with $\gamma$-spectroscopy is used.
The nuclear emulsion activity for nuclides of the $^{226}$Ra sub-chain is therefore $15\pm 5$ mBq kg$^{-1}$~\cite{intrisicBkgPaper}. \begin{table}[tph] \begin{center} \begin{tabular}{c|c|c} \hline Nuclide & Contamination [10$^{-9}$ g g$^{-1}$] & Activity [mBq kg$^{-1}$] \\ \hline \multicolumn{3}{c}{AgBr-I} \\ \hline $^{232}$Th & 1.0 & 4.1 \\ $^{238}$U & 1.5 & 18.5 \\ \hline \multicolumn{3}{c}{Gelatin} \\ \hline $^{232}$Th & 2.7 & 11.0 \\ $^{238}$U & 3.9 & 48.1 \\ \hline \multicolumn{3}{c}{PVA} \\ \hline $^{232}$Th & $< 0.5$ & $< 2.0$ \\ $^{238}$U & $< 0.7$ & $< 8.6$ \\ \hline \end{tabular} \end{center} \caption{Results obtained by ICP-MS in terms of contamination and activity for the different constituents of the nuclear emulsion. The estimated uncertainty is 30$\%$. The upper limits on PVA are evaluated at 95$\%$ CL.} \label{tab:activities-MS} \end{table} The measured activity was used to determine the neutron yield both through a semi-analytical calculation~\cite{refCalcFabio1,refCalcFabio2} and through a MC simulation based on the SOURCES code~\cite{SOURCES}. Results are reported in Table~\ref{tab:resNeutronYield}. The two approaches give comparable results and the flux due to the intrinsic radioactive contamination is expected to be of the order of $1.2 \pm 0.4$ neutrons per year per kilogram of nuclear emulsion. The energy spectrum of the produced neutrons, as calculated with SOURCES, is reported in Figure~\ref{fig:SOURCES-spectrum}.
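The mass-fraction weighting used above can be checked numerically. The mass fractions below are assumed illustrative values (they are not quoted in the text) chosen to roughly reproduce the reported totals; the activities are the measured ones from Table~\ref{tab:activities-MS}:

```python
def weighted_activity(activities_mbq_kg, mass_fractions):
    """Total specific activity as the mass-fraction-weighted sum of the
    constituents' activities (mBq/kg)."""
    return sum(a * w for a, w in zip(activities_mbq_kg, mass_fractions))

# Assumed illustrative mass fractions for AgBr, gelatin and PVA
fractions = [0.78, 0.17, 0.05]
u238_total = weighted_activity([18.5, 48.1, 8.6], fractions)   # ~23 mBq/kg
th232_total = weighted_activity([4.1, 11.0, 2.0], fractions)   # ~5.1 mBq/kg
```

With these assumed fractions the weighted sums land close to the quoted $23$ and $5.1$ mBq kg$^{-1}$; the $30\%$ measurement uncertainty dominates in any case.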
\begin{figure}[tbph] \centering\includegraphics[width=0.7\linewidth]{figs/neutron_spectrum} \caption{Total neutron energy spectrum (black line); in red the contribution from $^{238}$U spontaneous fission is shown, while in blue and green the contributions from ($\alpha$,n) reactions due to nuclides in the $^{238}$U and $^{232}$Th chains, respectively, are displayed \cite{intrisicBkgPaper}.} \label{fig:SOURCES-spectrum} \end{figure} In order to estimate the detectable background due to radiogenic neutrons produced by the intrinsic radioactive contamination of the nuclear emulsions, a GEANT4-based simulation was performed. Simulated neutrons have an isotropic angular distribution and are uniformly distributed in a target where emulsions are arranged in a stack with a surface of $25 \times 25$ cm$^2$ and a thickness of 0.5 cm; their energy spectrum was generated according to Figure~\ref{fig:SOURCES-spectrum}. The fraction of interacting neutrons is 20.4$\%$: they can produce either a proton recoil or a nuclear recoil. In the former case the track length in emulsion extends up to several hundred $\mu$m (see Figure \ref{fig:proton_recoils}), while nuclear recoils show shorter track lengths, not exceeding 3 $\mu$m for light nuclei (C, N, O) and 1 $\mu$m for heavy nuclei (Ag, Br, I) (see Figure \ref{fig:nuclear_recoils}). The overall fraction of neutron-induced recoils contributing to the background is computed by counting recoil tracks with lengths above the read-out threshold. Moreover, an upper limit on the track length can be introduced, since the signal is expected to be below 1 $\mu$m even for large ($O$(TeV)) WIMP masses (see Figure \ref{fig:maximumRange-vs-WIMPmass}). The fractions of neutron-induced recoils below this cut, as a function of the read-out threshold, are reported in Table~\ref{tab:recoils1}: only a fraction, from 5\% to 10\%, contributes to the background.
A further reduction of $\sim 70\%$ of the neutron-induced background can be achieved by exploiting the directionality information with the cut $-1 < \phi < 1$. Under these assumptions, the detectable neutron-induced background would be 0.02 $\div$ 0.03 events per year per kilogram. \begin{figure}[tbph] \centering\includegraphics[width=1.0\linewidth]{figs/proton_recoils_G4} \caption{Track length (left) and energy spectrum (right) for proton recoils produced by elastic (blue curve) and inelastic (red curve) processes.} \label{fig:proton_recoils} \end{figure} \begin{figure}[tbph] \centering\includegraphics[width=1.0\linewidth]{figs/nuclear_recoils_G4} \caption{Track length (left) and energy spectrum (right) for heavy (blue curve) and light (red curve) nuclei.} \label{fig:nuclear_recoils} \end{figure} \begin{figure}[tbph] \centering\includegraphics[width=0.7\linewidth]{figs/maximum_range_vs_WIMPmass} \caption{Maximum range expected for nuclear recoils as a function of the WIMP mass for the various nuclei.} \label{fig:maximumRange-vs-WIMPmass} \end{figure} \begin{table}[tph] \begin{center} \begin{tabular}{c|c|c} \hline Process & SOURCES simulation & Semi-analytical calculation \\ & [kg$^{-1}$ y$^{-1}$] & [kg$^{-1}$ y$^{-1}$] \\ \hline ($\alpha$, n) from $^{232}$Th chain & 0.12 & $0.10 \pm0.03$ \\ ($\alpha$, n) from $^{238}$U chain & 0.27 & $0.26 \pm 0.08$ \\ Spontaneous fission & 0.79 & $0.8 \pm 0.3$ \\ \hline Total flux & 1.18 & $1.2 \pm 0.4$ \\ \hline \end{tabular} \end{center} \caption{Neutrons per kilogram per year due to ($\alpha$, n) and spontaneous fission reactions in the nuclear emulsion, evaluated with the SOURCES code and with a semi-analytical calculation using the measured $^{238}$U and $^{232}$Th contaminations as input.} \label{tab:resNeutronYield} \end{table} The neutron-induced background due to the intrinsic radioactive contamination allows the design of an emulsion detector with an exposure of about 10 kg year.
A careful selection of the emulsion components and a better control of their production could further increase the radiopurity, thus allowing an increase of the detector mass and of the exposure time. In particular, since the activity of the gelatin is higher than that of the other emulsion components (see Table~\ref{tab:activities-MS}) and since PVA shows a very low radioactivity level, we are studying a possible replacement of the gelatin with PVA. In nuclear emulsion-based detectors the instrumental background is due to the so-called \emph{fog} grains, i.e.~dark grains produced by thermal excitation. The fog density determines the probability of random coincidences of two or more fog grains mimicking a WIMP-induced nuclear recoil. The measured value of the fog density for current NIT samples is about 0.1 grains/(10$\mu$m)$^3$. The number of background tracks due to random coincidences of fog grains depends on the minimum number of grains required to build a track and increases with the track length, as shown in the left plot of Figure~\ref{fig:combinatorial_bkg}, where the instrumental background for a 1 kg emulsion target is reported. In NIT (U-NIT) emulsions a track made of 2 grains has an average length of about 100 nm (50 nm). The number of background tracks corresponding to this track length amounts to 10$^4$ (10$^3$), as outlined by the red arrows on the plot. \\ In order to make the combinatorial background smaller than one, the coincidence of at least 3 grains has to be required. In NIT (U-NIT) emulsions a track made of a sequence of 3 grains has on average a path length of about 200 nm (100 nm): the corresponding background level is 0.3 tracks ($4\times10^{-3}$ tracks). \\ The right plot in Figure~\ref{fig:combinatorial_bkg} shows the number of background tracks as a function of the fog density in NIT emulsions, if 2-grain tracks are accepted: the background can be considered negligible only by reducing the fog density from the current value to 10$^{-3}$~grains/(10$\mu$m)$^3$.
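The scaling of this combinatorial background can be illustrated with a simple Poisson estimate: each fog grain must find $k-1$ further grains within a search volume set by the track length. This is an order-of-magnitude sketch with assumed geometry (a spherical search volume, an emulsion volume of $\sim 3\times10^{14}$ $\mu$m$^3$ per kg, i.e.~an assumed density near 3.3 g/cm$^3$), not the detailed calculation behind Figure~\ref{fig:combinatorial_bkg}:

```python
import math

def random_coincidences(fog_density_um3, track_len_um, volume_um3, k=2):
    """Order-of-magnitude count of random k-grain alignments: each of the
    N fog grains must find (k-1) further grains inside a sphere of radius
    equal to the track length (geometric prefactors ignored)."""
    n_grains = fog_density_um3 * volume_um3
    v_search = 4.0 / 3.0 * math.pi * track_len_um ** 3
    return n_grains * (fog_density_um3 * v_search) ** (k - 1)

# 0.1 grains/(10 um)^3 = 1e-4 um^-3 fog density, ~3e14 um^3 for 1 kg
n2 = random_coincidences(1e-4, 0.1, 3.0e14, k=2)   # 2 grains within 100 nm
n3 = random_coincidences(1e-4, 0.2, 3.0e14, k=3)   # 3 grains within 200 nm
```

Even this crude model reproduces the quoted orders of magnitude, $\sim 10^4$ fake 2-grain tracks against a sub-unity number of fake 3-grain tracks, and makes explicit why the 2-grain option requires a fog density lower by two orders of magnitude.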
Preliminary tests show that a value of 0.03 grains/(10$\mu$m)$^3$ can be obtained using purified gelatin. Further purification might lead to lower fog values. This research line will be followed in collaboration with the firm producing the gelatin. \\ In order to further reduce the fog density, two possible improvements are under study. The first one exploits the response of fog grains to polarized light scanning: fog grains indeed show both a different image contrast and a different size with respect to grains sensitized by a nuclear recoil. This effect is essentially due to the different $dE/dx$ of the two processes and offers a powerful discrimination against this kind of background. The second is to operate the detector at low temperature (from simple refrigeration down to a cryogenic regime of $\sim 80$~K) or to apply dedicated chemical treatments. \begin{figure}[tbph] \centering\includegraphics[width=1.0\linewidth]{figs/combinatorial_bkg_arrow} \caption{Left: number of background tracks in 1 kg of NIT emulsions as a function of the track length for tracks made of two (continuous red line) and three fog grains (dashed blue line). Right: number of background tracks in 1 kg of NIT emulsions as a function of the fog density for 50 nm (continuous green line), 100 nm (dashed black line) and 200 nm (dotted-dashed magenta line) thresholds in the track length. Only tracks made of two grains are considered here.} \label{fig:combinatorial_bkg} \end{figure} Finally, the requirement of a background-free experiment sets the necessity of operating in a clean environment in order to avoid surface contamination. Moreover, in order to reduce the activation risk of the detector materials, an underground location for the emulsion production and handling facilities is required. The construction of a (dark) clean room in the Gran Sasso underground laboratory is therefore needed.
\begin{table}[tph] \begin{center} \begin{tabular}{c | c } \hline Threshold [nm] & Fraction \\ \hline 50 & 0.100 \\ 100 & 0.075 \\ 150 & 0.060 \\ 200 & 0.052 \\ \hline \end{tabular} \end{center} \caption{Fraction of detectable neutron-induced recoils as a function of the read-out threshold.} \label{tab:recoils1} \end{table} \section{Experimental set-up} \label{sec:set-up} As a first phase of the project, we plan to perform a pilot experiment with a 1 kg detector exposed for one year. Details of the related schedule will be examined in Section~\ref{sec:schedule}. A detector with a mass of one kg of NIT can be made of 50 $\mu$m thick films assembled in a stack of 100 planes with a surface of $25 \times 25$ cm$^2$. We are considering the option of embedding OPERA-like films between two consecutive NIT planes: the OPERA-like films would act as a monitoring system to register, with micrometric accuracy and high sensitivity, all the radiation integrated by the detector during the exposure. Being composed of the same raw materials, the intrinsic radioactivity of the OPERA-like films would be of the same order of magnitude as that of NIT, and therefore tolerable for a 1 kg detector. The emulsion planes are placed with their surface parallel to the expected WIMP wind direction. We might consider placing an equivalent amount of emulsion films in an orthogonal plane. These films would act as a control sample: in case a signal were found in the first sample, and only in this case, the scanning of these films would be performed to demonstrate that the signal found is not an artefact. To maintain the detector with a fixed orientation towards the Cygnus constellation, it will be installed on an Equatorial Telescope (see Figure~\ref{fig:detector}), which cancels out the effect of the Earth's rotation, thus keeping the detector pointed at a fixed position in the sky.
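The constant drive rate such a mount must provide follows directly from the length of the sidereal day; the short sketch below also shows how quickly a tracking interruption would eat into a 1-degree pointing budget:

```python
SIDEREAL_DAY_S = 86164.1   # one full apparent rotation of the sky, in seconds

def polar_axis_rate_arcsec_s():
    """Constant drive rate of the Polar Axis: 360 degrees per sidereal day."""
    return 360.0 * 3600.0 / SIDEREAL_DAY_S   # ~15.04 arcsec/s

def drift_deg(stopped_s):
    """Pointing error accumulated if the drive stops for `stopped_s` seconds."""
    return polar_axis_rate_arcsec_s() * stopped_s / 3600.0
```

At the sidereal rate of about 15 arcsec/s, a drive interruption of roughly four minutes already produces a 1-degree pointing error, which sets the scale for the encoder monitoring and the calibration accuracy discussed below.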
An equatorial telescope has two axes: the so-called Polar Axis, parallel to the rotation axis of the Earth and pointed towards the North celestial pole, and the Declination Axis, perpendicular to the polar one. The motion of the Earth can be cancelled out by driving the Polar Axis at a constant speed, synchronised with the apparent daily motion of the sky. The Polar Axis will be motorized and both axes will be equipped with precise encoders to constantly check the position of the mechanics with high accuracy. The detector will therefore be pointed towards the Cygnus constellation and kept in that direction with an accuracy better than 1 degree. A calibration procedure of the telescope will be performed before the installation in the underground laboratory. To ensure a precise synchronization of the mount with the apparent daily motion of the sky, it is necessary to tune the response of the mechanics and to correct for any possible periodic error. The calibration procedure foresees several steps. The mount will first be tested in an external laboratory using an optical telescope mounted on it and aligned with the Polar Axis: the telescope will be used, during the night, to point at a star in the Cygnus constellation. Using dedicated software and an imaging CCD camera attached to the prime focus of the telescope, the mount will be guided to keep the star centered in the field of view of the CCD camera. The software will record all the guiding parameters, such as the position of the star and the corrections applied to the Polar and Declination Axes. This procedure will be repeated during several nights and all the collected data will be analyzed in order to derive and apply the necessary corrections to the mechanics and to the electronic system.
In a second phase the mount will be used throughout the whole day to compensate the apparent daily motion: the position during the night will then be measured in order to evaluate the pointing accuracy, given by the difference between the nominal and measured positions. This measurement will provide a fine tuning of the position of both the Polar and the Declination Axes. Finally the mount will be moved underground to its final position: profiting from the presence, in the underground halls, of already existing high-precision reference points, the mount will be aligned with high accuracy in the north-south direction so that the Polar Axis is parallel to the rotation axis of the Earth. The design and construction of the equatorial telescope will be carried out in collaboration with specialized firms. A screening of all the materials used in the construction of the telescope is foreseen in order to evaluate their intrinsic radioactivity. A detailed simulation of all the components of the telescope is planned. A large telescope supporting both the target and the surrounding shield is considered (see Figure~\ref{fig:detector}). This configuration allows a light shield to be built while ensuring a low background contamination from the telescope itself. \begin{figure}[tbph] \centering\includegraphics[width=0.7\linewidth]{figs/Mount_1} \caption{Schematic view of the detector structure.} \label{fig:detector} \end{figure} In Figure~\ref{fig:detector} a schematic view of the detector structure is shown: a stack of NIT films is placed at the center of a plexiglass sphere with a diameter of 30 cm. A sphere of polyethylene acts as a shield against the external neutron background. The target and the shielding are installed on the equatorial telescope. The target emulsions are arranged in such a way as to have the film surface parallel to the WIMP wind.
According to a preliminary evaluation, a layer of 50 cm of polyethylene will reduce the external neutron flux by a factor of the order of $10^{4}$: considering an integrated flux of the order of $\phi_n \sim 2 \times 10^{-6}$ cm$^{-2}$ s$^{-1}$, for a target with an exposed surface of $25 \times 25$ cm$^2$ and a thickness of 0.5 cm this corresponds to a residual flux of the order of 1 neutron/kg/year, the same order of magnitude as the intrinsic neutron contamination. More accurate evaluations of the polyethylene thickness sufficient to provide the required background rejection power are under study. The addition of a thin ($1\div 2$ cm) layer of Cadmium to capture thermalised neutrons is also under study. As explained in Section~\ref{sec:expected-bkg} NIT have a high electron rejection power: a proper chemical treatment makes it possible to reach a reduction factor of the order of 10$^{-6}$ in the sensitivity to electrons. For this reason the use of high-Z shielding materials (Pb and Cu) against the external $\gamma$ flux is not foreseen at the moment. Both the passive shield and the emulsion target will be enclosed in a sealed plexiglass box maintained in a High Purity (HP) Nitrogen atmosphere in slight overpressure with respect to the external environment to prevent radon contamination. The shape of the shield surrounding the detector will be optimized in order to obtain the lightest and most efficient structure. Two solutions are under study, either a parallelepiped box containing the shielding and the emulsion target, or a spherical one. In the first case the weight of the PE layer is of the order of 1.8 ton, while in the spherical option the weight is $\sim$1.14 ton. Even though the latter option ensures a lighter and symmetrical shielding, the final choice will depend on the cost and on the technical implementation of the design. Nevertheless, we are also investigating a completely different approach based on the use of water as shielding material against the external background.
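The residual-rate figure quoted above can be cross-checked with a back-of-envelope calculation. The NIT emulsion density ($\sim$3.2 g/cm$^3$) is an assumption not stated in this section; with it, the quoted geometry corresponds to roughly 1 kg of target, and the surviving flux lands at a few neutrons per kg per year, consistent with the stated order of magnitude:

```python
# Back-of-envelope check of the residual neutron rate after 50 cm of PE.
# Assumption: NIT emulsion density ~3.2 g/cm^3 (not given in this section).
phi_n = 2e-6            # external neutron flux, cm^-2 s^-1
reduction = 1e-4        # attenuation factor from ~50 cm of polyethylene
area = 25.0 * 25.0      # exposed target surface, cm^2
thickness = 0.5         # target thickness, cm
density = 3.2           # assumed NIT density, g/cm^3
seconds_per_year = 3.156e7

mass_kg = area * thickness * density / 1000.0             # ~1 kg of emulsion
neutrons_per_year = phi_n * reduction * area * seconds_per_year
rate = neutrons_per_year / mass_kg                        # neutrons / kg / yr
print(f"target mass ~{mass_kg:.1f} kg, residual rate ~{rate:.1f} n/kg/yr")
```

The result is of order a few neutrons/kg/year, i.e. the same order of magnitude as the intrinsic neutron contamination, which is exactly why a thicker shield (or a Cd liner) is being studied rather than required.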
A preliminary layout is shown in Figure~\ref{fig:WaterOption}: the emulsion detector is hermetically enclosed inside a spherical container made of low-Z material (teflon or polyethylene) with a diameter of 55~cm. The inner volume is flushed with N$_2$. The container is mounted on a long shaft and positioned in the centre of a tank (diameter 5~m, height 5~m) filled with ultra-pure water. The shaft is made of a light, low-radioactivity material (e.g.~aluminum) and is aligned with Earth's rotation axis. The constant orientation of the target with respect to the Cygnus constellation is maintained thanks to the slow rotation of the shaft with a period of one sidereal day. All the mechanics needed to maintain the orientation and the rotation of the shaft is mounted outside the water tank. The immersed part can be constructed so as to keep its mean density close to 1~g/cm$^3$. In this way the mechanical load becomes negligible, thus simplifying the design and providing great flexibility in the selection of materials. This solution can be more flexible and cheaper, allowing much larger masses to be held without changing either the mechanics of the telescope or the shielding. A detailed simulation of the shielding and a study of the mechanics requirements, together with an estimation of the costs, are ongoing. \begin{figure}[thp] \centering\includegraphics[width=1.0\linewidth]{figs/Mount_2} \caption{Schematic view of the detector structure for the water shielding: the detector holder is placed in the centre of the tank. Its orientation toward the Cygnus constellation is kept by the rotating pivot mounted with one edge above the water surface.
Only pure and low-Z materials are used for the immersed part.} \label{fig:WaterOption} \end{figure} \begin{figure}[thp] \centering\includegraphics[width=0.85\linewidth]{figs/layout_CleanRoom_100m2_v3} \caption{Sketch of the planimetry of the NIT production and development facility.} \label{fig:emulsion-facility} \end{figure} \begin{figure}[thp] \centering\includegraphics[width=0.7\linewidth]{figs/CSfacility} \caption{A picture of the existing OPERA CS facility in hall B.} \label{fig:CSemulsion-facility} \end{figure} \begin{figure}[thp] \centering\includegraphics[width=0.7\linewidth]{figs/planimetria_csFacility} \caption{Planimetry of the existing OPERA CS facility in hall B.} \label{fig:CSemulsion-facility-planimetry} \end{figure} \subsection{Emulsion production and development facility} The layout of the facility we intend to build is shown in Figure~\ref{fig:emulsion-facility}. The total surface is about 100 m$^2$ and it is divided into four parts: emulsion gel production, emulsion gel pouring, film development and chemical solution preparation.\\ Once produced, the gel will be sealed in an envelope flushed and filled with N$_2$ and moved to the pouring station, where a glove box flushed with HP Nitrogen will be installed. After the pouring, the films will be sealed in an envelope flushed and filled with N$_2$ and stored underground until the exposure. \\ All the operations involving the emulsion production and development require a dark room environment.\\ In order to minimize the surface contamination and the activation risk, the facility will be hosted in a clean room located underground. A class 1'000 clean room is required for the emulsion production, pouring and chemical solution preparation; a class 100'000 clean room will be installed in the area devoted to the development.\\ An air conditioning system will be installed in order to stabilize and monitor the temperature, $(20 \pm 1)^\circ$C, and the humidity, $(60\pm 5)\%$, of the clean room.
A demineralized water treatment plant and a chemical waste system are also required. For the film development and the pouring activity foreseen in the first year of the project an excellent starting point is the existing OPERA emulsion handling facility shown in Figure~\ref{fig:CSemulsion-facility}. The facility, currently hosted in Hall B, is made of three rooms: a control room, a handling room and a development room, for a total surface of $\sim$ 50 m$^2$ (see Figure~\ref{fig:CSemulsion-facility-planimetry}). The handling room will be equipped with a pouring station and a development station. The installation of two systems for the temperature control is also foreseen. The scanning of the exposed films will be performed in the existing OPERA scanning facilities in Italy, Russia, Turkey and Japan. In Italy the scanning laboratories are located at LNGS, Naples and Bari with 13, 5 and 3 OPERA microscopes, respectively. A few more microscopes are currently located in the Russian and Turkish scanning laboratories. Moreover at LNGS and Naples two additional microscopes, partially upgraded for the scanning of NIT and for polarized light analysis, are available. An equivalent scanning power is hosted at Nagoya University. \section{Physics reach} The 90$\%$ C.L. upper limit in case of null observation is shown in Figure~\ref{fig:sensitivity1Kg} for an exposure of 1 kg$\cdot$year of NIT emulsions, with a minimum detectable track length ranging from 200 nm down to 50 nm and in the hypothesis of zero background. Even without including the directionality discrimination of the signal, and assuming a negligible background level is reached, such an experiment would cover a large part of the parameter space indicated by the DAMA/LIBRA results with a small (1 kg) detector mass, using a powerful and complementary approach.
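The counting statistics behind such zero-background upper limits can be sketched in a few lines. With zero observed events and no expected background, the classical 90\% C.L. Poisson upper limit on the signal mean is about 2.3 events, which for a 1 kg$\cdot$year exposure translates directly into an event-rate limit (the conversion to a cross-section limit, which depends on the halo model and nuclear form factors, is not attempted here):

```python
# Zero-background, zero-observed-events Poisson upper limit (sketch).
# Solve exp(-mu) = 1 - CL for the signal mean mu.
import math

cl = 0.90
mu_limit = -math.log(1.0 - cl)           # ~2.30 signal events at 90% C.L.

exposure_kg_yr = 1.0                     # the 1 kg * year pilot exposure
rate_limit = mu_limit / exposure_kg_yr   # events / (kg * year)
print(f"90% C.L.: {mu_limit:.2f} events -> {rate_limit:.2f} evts/kg/yr")
```

This is why the sensitivity scales inversely with exposure in the zero-background regime: a 10 kg$\cdot$year exposure would push the same 2.3-event limit to a tenfold smaller rate.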
It is worth noting that the sensitivity strongly depends on the final detection threshold: as explained in Section~\ref{sec:expected-bkg} the current threshold value of 200 nm is limited only by the fog density. A reduction of the fog density, or its discrimination through the use of the optical microscope with polarized light, would allow the threshold to be lowered to 100 nm. In order to lower the threshold down to 50 nm, the use of the U-NIT technology is needed. Moreover we are conservatively assuming zero efficiency below the threshold value while, as shown in Figure~\ref{fig:eff_vs_length}, the efficiency is not negligible even for shorter tracks. Taking this effect into account would enhance the sensitivity to low WIMP masses. \begin{figure}[hbtp] \centering \includegraphics[width=0.6\linewidth]{figs/NEWS_sensitivity1Kg_JP} \caption{The 90$\%$ C.L. upper limits for a NIT detector with an exposure of 1 kg $\times$ year, a threshold ranging from 200 nm down to 50 nm, in the zero background hypothesis. The directionality information is not included.} \label{fig:sensitivity1Kg} \end{figure} \section{Schedule, Cost Estimate, Organization} \subsection{Time schedule} \label{sec:schedule} \begin{sidewaysfigure}[hbtp] \includegraphics[width=1.2\linewidth]{figs/gantt_v7} \caption{Gantt diagram with the different phases of the project.} \label{fig:gantt} \end{sidewaysfigure} On a time scale of six years we intend to perform the first exposure with a target mass of 1 kg and the corresponding analysis of the data taken. In Figure~\ref{fig:gantt} a detailed plan of all the phases of the project is reported. At the beginning of 2016 we plan to construct a prototype shield and the equipment for the emulsion pouring. The above-mentioned phases have to be completed in nine months in order to perform a first test to benchmark the level of intrinsic radioactivity of the emulsions.
For this measurement, we will use the gelatine produced at Nagoya University and perform the pouring underground. We will perform an exposure of a 10 g detector surrounded by the prototype shield. The detector exposure, together with the analysis of the emulsion films, will last nine months. The results of this test will provide a measurement of the background, intended to cross-check the estimates based on simulation and on measurements of intrinsic radioactivity. In parallel, tests with radioactive sources are foreseen to characterize the response to external radioactivity. We are considering the possibility of obtaining raw materials for the emulsion production within European countries, provided that their intrinsic radioactivity does not exceed the level measured in Japanese samples. This would allow a reduction of the activation processes induced during transportation. This activity will take place in 2016. The measurement of the intrinsic radioactivity of the different emulsion components and of the prototype shield materials will be performed from June 2016 to the end of 2017. Tests of the activation due to cosmic rays during transportation will be performed by bringing samples back and forth between Italy and Japan. The design of the gel production machine will start in September 2016, while the design of the clean room will be carried out with the help of specialized firms, starting from January 2017. The construction of the clean room, the pouring facility and the gel production machine will start in January 2018 and will last six months. As soon as the film production machine is operational in the underground laboratory and the gelatine is produced, the measurement of intrinsic radioactivity will be performed. If the gelatine satisfies the required radioactivity level, its pouring will be performed. The design of the equatorial telescope and the choice of the materials are expected to start early in 2016 and last 18 months.
The construction of the telescope will start at the beginning of 2018 and last six months. In the second part of 2018 the surface calibration measurements and the underground telemetry will be carried out. The construction of the shield and of the target holding will start in 2018. Once the equatorial telescope installation is finalised, the detector commissioning will start. We plan to finalize the upgrade of the read-out system on a prototype microscope, exploiting in particular the resonant light scattering technique. This activity will start at the beginning of 2016. In June 2017 a Technical Design Report will be submitted. The upgrade of all the available OPERA systems will start in the second half of 2018 and last 27 months. By March 2020 we plan to have the final equipment installed on a number of microscopes adequate for the analysis of 1 kg of NIT emulsion in one year. Once the whole film production is completed, the run with the 1 kg mass detector will start. The data taking will last one year: from October 2019 to October 2020. The emulsion films will be developed soon after the exposure. The scanning and the analysis of the emulsion films will start once the upgrade of all the read-out systems is completed, and are expected to be finished by the end of 2021. \subsection{Costs} The cost for the construction of the clean room (75 m$^2$ class 1'000 and 25 m$^2$ class 100'000) is estimated to be around 200 k\officialeuro. As explained in Section~\ref{sec:set-up} the clean room will host the production machines, the pouring and the development facilities.\\ The cost of the production machine is of the order of 200 k\officialeuro. The pouring and the development facilities will cost about 18 k\officialeuro and 50 k\officialeuro, respectively. The above mentioned costs will be shared according to a MoU to be signed between the parties. In case the production is carried out at Nagoya University, Japan will cover the corresponding costs.
Japan will cover the costs for all the emulsion components. \\ A first estimate of the cost for the equatorial telescope is 15 k\officialeuro for the design and 240 k\officialeuro for the construction; the cost for the construction of the shielding amounts to about 15 k\officialeuro. \\ Finally the upgrade of the read-out systems will be needed. Japan will cover the cost for the realization of its own scanning systems. The construction of the microscope prototype in Europe costs about 300 k\officialeuro; the hardware and computing upgrade of each OPERA microscope amounts to about 30 k\officialeuro. Depending on the final scanning speed, from 10 to 14 systems will be modified for the high resolution and high speed scanning of NIT, for a total cost ranging from $\sim$ 300 k\officialeuro to $\sim$ 420 k\officialeuro. \\ An expense of 80 k\officialeuro is expected for the maintenance of the microscopes and 120 k\officialeuro for the consumables. In Table \ref{tab:costs} a summary of the expected costs is reported. \begin{table}[htb] \centering \begin{tabular} {c | c | c} \hline \hline Category & Cost [k\officialeuro] & Assignment \\ \hline Clean Room & 200 & EU \\ NIT production machine & 200 & JP \\ Pouring facility & 18 & EU \\ Development facility & 50 & EU \\ Equatorial Telescope & 255 & EU \\ Shielding & 15 & EU\\ EU Prototype Microscope & 30 & EU \\ EU Microscopes Upgrade & 300 & EU \\ EU Microscope Maintenance & 80 & EU \\ JP Microscopes Upgrade & 300 & JP \\ Consumables & 120 & EU\\ \hline TOTAL & 1468 & \\ \hline \hline \end{tabular} \caption{Summary of the expected costs.} \label{tab:costs} \end{table} \subsection{Collaboration} NEWS is at present a collaboration between Italy, Japan, Russia and Turkey. The involved groups are: \begin{itemize} \item University and INFN Bari, Italy \item Lab. Naz.
Gran Sasso, Italy \item University and INFN Naples, Italy \item University and INFN Rome, Italy \item Nagoya University and KM Institute, Japan \item Chiba University, Japan \item JINR Dubna, Russia \item Moscow State University, Moscow, Russia \item Lebedev Physical Institute, Moscow, Russia \item METU, Ankara, Turkey \end{itemize} All the above mentioned groups are leaders in emulsion scanning, having gathered experience with emulsion analysis in the OPERA experiment. The scanning and the analysis of the exposed emulsions will be shared according to the available scanning power of each group. The development of the prototype of the new read-out system, both for hardware and software, is shared between LNGS and Naples, while it is entirely carried out at Nagoya University for the Japanese one. The LNGS and Napoli groups are in charge of the design of the telescope, the construction of the prototype and the calibration measurements. The same groups will perform the intrinsic background measurements, the studies of the environmental background and the design of the detector shielding and structure. The Russian groups will perform radioactivity studies. The design and the realization of the local underground facilities will be shared between LNGS and Japan. The simulation of the detector response, efficiency and resolution, as well as of the expected sensitivity, is shared between LNGS, Naples and Nagoya. The responsibility for the emulsion production, development and handling is currently assigned to the Nagoya group. \section{Conclusions and outlook} \begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{figs/NEWSsensitivity_10-100Kg_50-100nm_JP_2} \caption{Sensitivity at 90$\%$ C.L., in the zero background hypothesis, for an experiment with a mass of 10 kg (green) and 100 kg (blue) for two values of detection threshold: 100 nm (dashed lines) and 50 nm (solid lines).
} \label{fig:NEWSsensitivity_10-100Kg_50-100nm_JP} \end{figure} NEWS is meant to be the first experiment with a solid target for directional dark matter searches: the use of a nuclear emulsion based detector, acting both as target and tracking device, would make it possible to explore the low cross section region in the phase space indicated by DAMA. The novel emulsion technology, based on the use of nuclear emulsions with nanometric AgBr crystals (NIT), makes it possible to record the sub-micrometric tracks produced by WIMP scattering off a target nucleus. The presence, among the emulsion components, of both light and heavy nuclei results in an enhanced sensitivity to both light and heavy WIMP masses. The read-out of tracks with lengths of the order of 100 nm is possible thanks to an R$\&$D carried out on the scanning systems currently used for the analysis of the OPERA emulsions. The use of improved optics and mechanics made it possible to reach a spatial and angular resolution of the order of 100 nm and 235 mrad, respectively, with a tracking efficiency approaching 100$\%$ for tracks with lengths longer than 180 nm. The new optical microscope has a scanning speed of about 25 mm$^2$/h, allowing a fast preselection of the candidate signal tracks with the shape analysis method. The final signal confirmation is obtained with a powerful optical microscope equipped with a light polarizer: exploiting the different response of non-spherical grain clusters to different polarization angles, an unprecedented spatial resolution of 10 nm is obtained. This resolution makes it possible to resolve grains belonging to tracks a few hundred nanometers long, thus providing the final signal confirmation with a very high signal-to-noise ratio. The intrinsic radioactivity of nuclear emulsions has been measured and a detailed MC simulation has been performed: the estimated neutron yield allows the design of an experiment with masses of the order of 10 kg while keeping this background negligible.
A careful evaluation of the external background sources has been performed, allowing a proper shielding to be designed. The final experimental set-up foresees the use of an equatorial telescope holding both the emulsion target and the shielding. We plan to perform a pilot experiment with a 1 kg mass target on a time scale of six years: even using a rather small detector mass we would be able to explore the region indicated by the DAMA experiment with a powerful and complementary approach (see Figure~\ref{fig:sensitivity1Kg}). The present intrinsic radioactivity level allows the target mass and exposure time to be scaled up by one order of magnitude. A careful selection of the emulsion components and a better control of their production could further increase the radiopurity, thus allowing a larger detector mass. The reduction of the fog density and further developments of the optical microscopy with polarized light would allow the detection threshold to be reduced down to 50 nm. Improvements both in the mechanics (use of a piezoelectric-driven objective) and in the image acquisition (use of multiple image sensors) already envisage the possibility of analysing a volume of 100 kg or larger with such a resolution. Moreover, further improvements both in the microscope hardware and in the analysis software will make it possible to fully exploit the intrinsic emulsion capability of recording 3D tracks. In Figure~\ref{fig:NEWSsensitivity_10-100Kg_50-100nm_JP} the upper limit in case of null observation for an experiment with a mass of 10 (green) and 100 (blue) kg and for a detection threshold of 50 (solid lines) and 100 (dashed lines) nm is shown at 90 $\%$ C.L. and in the zero background hypothesis. The proposed program would open a new window in the DM search. These developments will likely also have an impact on nano-imaging applications in physics, biology and medicine.
\section{Introduction} \vspace{-0.2cm} Recent breakthroughs in deep neural networks (DNNs) have motivated an explosive demand for intelligent edge devices. Many of them, such as autonomous vehicles and healthcare wearables, require real-time and on-site learning to enable them to proactively learn from new data and adapt to dynamic environments. The challenge for such on-site learning is that the massive and growing cost of state-of-the-art (SOTA) DNNs stands at odds with the limited resources available at the edge devices, raising a major concern even when training in the cloud using powerful GPUs/CPUs \cite{Strubell2019Energy, you2020drawing}. To address the above challenge towards efficient DNN training, low-precision training has been developed, recognizing that the training time/energy efficiency is a quadratic function of DNNs' adopted precision \cite{wang2018training}. While such methods have shown promising training efficiency, they all adopt \emph{(i)} a \textbf{static} quantization strategy, i.e., the precisions are fixed during the whole training process, and \emph{(ii)} the \textbf{same} quantization precision for all training samples, limiting their achievable efficiency. In parallel, it has been recognized that different stages along DNNs' training trajectory require different optimization and hyperparameters, and not all inputs and layers are equally important/useful for training: \cite{achille2018critical} finds that DNNs which learn to fit different patterns at different training stages tend to have better generalization capabilities, supporting the common practice of training DNNs starting with a large learning rate and annealing it when the model's fit to the training data plateaus; \cite{layers_equal} reveals that some layers critically need to be intensively updated to improve the model accuracy, while others are insensitive; and \cite{veit2016residual, greff2016highway} show that different samples might activate different sub-models.
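The quadratic relation between precision and training cost mentioned above can be made concrete with a toy cost model. Assuming, as is common for bit-serial hardware, that the cost of one multiply-accumulate scales with the product of the two operand bit-widths (an illustrative assumption, not a claim from the paper):

```python
# Toy cost model: MAC cost proportional to the product of operand bit-widths,
# normalized to a 32-bit baseline. Illustrative assumption only.
def relative_mac_cost(bits_a, bits_b, baseline=32):
    return (bits_a * bits_b) / (baseline * baseline)

print(relative_mac_cost(8, 8))    # 8-bit training: ~16x cheaper than FP32
print(relative_mac_cost(3, 6))    # a very low-precision early stage, cheaper still
```

Under this model an 8-bit/8-bit MAC costs 1/16 of a 32-bit one, which is the intuition behind the quadratic efficiency gain and behind starting training at even lower precisions.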
Inspired by the prior art, we propose FracTrain, which \textbf{for the first time} advocates a progressive and dynamic quantization strategy during training. Specifically, we make the following contributions: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item We first propose \textbf{progressive fractional quantization} (PFQ) training, in which the precision of activations, weights, and gradients increases gradually and does not reach the precision of SOTA static low-precision DNN training until the final training stage. We find that a lower precision for the early training stage, together with a higher precision for the final stage, consistently strikes a good balance between training space exploration and final accuracy, while leading to large computational and energy savings. Both heuristic and principled variants of PFQ are effective. \item We then introduce \textbf{dynamic fractional quantization} (DFQ) training, which automatically adapts the precisions of different layers to the inputs. Its core idea is to hypothesize layer-wise quantization (to different precisions) as intermediate ``soft'' choices between fully utilizing and completely skipping a layer. DFQ's finer-grained dynamic capability consistently favors much better trade-offs between accuracy and training cost across different DNN models, datasets, and tasks, while being realized with gating functions that have a negligible overhead ($<$ 0.1\%). \item We finally integrate PFQ and DFQ into one unified framework termed \textbf{FracTrain}, which is \textbf{the first} to adaptively squeeze out training cost at the finest bit level, \textit{temporally} and \textit{spatially}, during training. Extensive experiments show that FracTrain can aggressively boost DNN training efficiency while achieving a comparable or even better accuracy over the most competitive SOTA baseline.
Interestingly, FracTrain's effectiveness across various settings coincides with recent findings \cite{achille2018critical,li2019towards, layers_equal, greff2016highway, wang2018skipnet} that \emph{(i)} different stages of DNN training call for different treatments and \emph{(ii)} not all layers are equally important for training convergence. \end{itemize} \section{Prior works} \noindent\textbf{Accelerating DNN training.} Prior works attempt to accelerate DNN training in resource-rich scenarios via communication-efficient distributed optimization and larger mini-batch sizes \cite{goyal,jia2018highly,you2018imagenet,Akiba2017ExtremelyLM}. For example, \cite{jia2018highly} combined distributed training with a mixed precision framework to train AlexNet in 4 minutes. While distributed training can reduce training time, it increases the energy cost. In contrast, we target energy-efficient training for achieving in-situ, resource-constrained learning. \noindent\textbf{Low-precision training.} Pioneering works have shown that DNNs can be trained under low precision \cite{banner2018scalable, wang2018training, MixPT, gupta2015deep, sun2019hybrid}, instead of full precision. First, distributed efficient learning reduces the communication cost of aggregation operations using quantized gradients \cite{seide20141,de2017understanding,terngrad,signSGD}, which however cannot reduce training costs as these methods mostly first compute full-precision gradients and then quantize them. Second, low-precision training achieves a better trade-off between accuracy and efficiency towards on-device learning. For example, \cite{wang2018training, sun2019hybrid} and \cite{banner2018scalable, yang2019training} introduced an 8-bit floating-point data format and an 8-bit integer representation to reduce training cost, respectively. FracTrain explores an orthogonal perspective, and can be applied on top of them to further boost training efficiency.
\noindent\textbf{Dynamic/efficient DNN training.} More recently, \textit{dynamic inference} \cite{8050797, wang2018skipnet, blockdrop, convnet-aig, gaternet, gao2018dynamic, hua2019channel, DDI} was developed to reduce the average inference cost, and was then extended to the most fine-grained bit level~\cite{shen2020fractional, song2020drq}. While energy-efficient training is more complicated than and different from inference, many insights from the latter can be lent to the former. For example, \cite{prunetrain} recently accelerated the empirical convergence via active channel pruning during training; \cite{e^2_train} integrated stochastic data dropping, selective layer updating, and predictive low-precision to reduce over 80\% of the training cost; and \cite{biggest_loser} accelerated training by skipping samples that lead to low loss values in each iteration. Inspired by these works, our proposed FracTrain pushes a new dimension of dynamic training by temporally and spatially skipping unnecessary bits during the training process. \section{The proposed techniques} \vspace{-0.1in} This section describes our proposed efficient DNN training techniques, including PFQ (Section \ref{sec:PFQ}), DFQ (Section \ref{sec:DFQ}), and FracTrain that unifies PFQ and DFQ (Section \ref{sec:PDFQ}). \vspace{-0.3cm} \subsection{Progressive Fractional Quantization (PFQ)} \label{sec:PFQ} \vspace{-0.2cm} \begin{figure*}[t] \vspace{-0.1in} \centering \includegraphics[width=\textwidth]{images/PFQ-method-nips.pdf} \vspace{-0.2in} \caption{\textbf{(a)} A high-level view of the proposed PFQ vs.
SOTA low-precision training, where PFQ adopts a four-stage precision schedule to gradually increase the precision of weights, activations, and gradients up to that of the static baseline, which here employs 8-bit for both the forward and backward paths, denoted as FW-8/BW-8, and \textbf{(b)} the corresponding training loss trajectory.} \vspace{-0.15in} \label{fig:pfq} \end{figure*} \textbf{Hypothesis.} The proposed PFQ draws inspiration from \emph{(i)} \cite{pmlr-v97-rahaman19a,xu2019frequency}, which argue that DNNs first learn low-complexity (lower-frequency) functional components before absorbing high-frequency features, with the former being more robust to perturbations, and \emph{(ii)} \cite{li2019towards}, which shows that training DNNs starting with a large initial learning rate helps to learn more generalizable patterns faster and better, i.e., faster convergence and higher accuracy. We hypothesize that the precision of DNNs can achieve similar effects, i.e., a lower precision in the early training stage fits the observed behavior of learning lower-complexity, coarse-grained patterns, while increasing the precision along the training trajectory gradually captures higher-complexity, fine-grained patterns. In other words, staying at lower precisions implies larger quantization noise at the beginning, which can inject more perturbations to favor more robust exploration of the optimization landscape. Therefore, it is expected that PFQ can boost training efficiency while not hurting, or even helping, the model's generalization performance. \textbf{Design of PFQ.} We propose PFQ, which realizes the aforementioned hypothesis in a principled manner by developing a simple yet effective indicator to automate PFQ's precision schedule, as described in \textit{Algorithm} \ref{alg:PFQ}.
Specifically, we measure the difference of the normalized loss function in consecutive epochs, and increase the precisions when the loss difference over the previous five epochs is smaller than a preset threshold $\epsilon$; we also scale $\epsilon$ by a decaying factor $\alpha$ to better identify turning points as the loss curve proceeds to the final plateau. The proposed indicator adapts $\epsilon$ proportionally w.r.t. the loss function's peak magnitude, and thus can generalize to different datasets, models, and tasks. In addition, it has negligible overhead ($<$ 0.01\% of the total training cost). Note that PFQ schedules precision during training based on prior works' insights on DNN training: \emph{(i)} gradients often require a higher precision than weights and activations~\cite{zhou2016dorefa}, and \emph{(ii)} more precise updates (i.e., a higher precision) at the end of the training process are necessary for better convergence~\cite{zhu2019towards}. Fig.~\ref{fig:pfq} (a) shows a high-level view of PFQ as compared to static low-precision training, with Fig.~\ref{fig:pfq} (b) plotting an example of the corresponding training loss trajectory. In the example of Fig.~\ref{fig:pfq}, we adopt a four-stage precision schedule for the early training phase (here referring to the phase before the first learning rate annealing at the 80-th epoch): the first stage assigns 3-bit and 6-bit precisions to the forward (i.e., weights and activations) and backward (i.e., gradients) paths, denoted as FW-3/BW-6; the final stage employs a precision of FW-8/BW-8, which is the same as that of the static low-precision training baseline; and the intermediate stages are assigned precisions that uniformly interpolate between those of the first and final stages. PFQ in this example achieves 63.19\% computational cost savings over the static training baseline, while improving the accuracy by 0.08\%.
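The loss-plateau indicator described above can be sketched in a few lines. The window size and default values of $\epsilon$ and $\alpha$ below are illustrative placeholders, not the paper's tuned settings:

```python
# Sketch of the PFQ precision-switch indicator: signal a switch to the next
# precision stage when the normalized loss change over the last `window`
# epochs drops below eps, then decay eps for the next (harder) switch.
# Default eps/alpha/window values are illustrative, not from the paper.
def make_pfq_indicator(eps=0.01, alpha=0.5, window=5):
    losses = []

    def should_switch(epoch_loss):
        nonlocal eps
        losses.append(epoch_loss)
        if len(losses) <= window:
            return False
        # Normalize by the peak loss so the rule transfers across tasks.
        diff = abs(losses[-1] - losses[-1 - window]) / max(losses)
        if diff < eps:
            eps *= alpha        # tighten the threshold for the next switch
            losses.clear()      # restart the window in the new stage
            return True
        return False

    return should_switch
```

Feeding the per-epoch training loss into `should_switch` yields `True` exactly at the stage boundaries, at which point the caller moves to the next entry of the precision schedule $\{bit_i\}$.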
Note that we assume an integer-based quantization format~\cite{banner2018scalable} in this example. \textbf{Insights.} PFQ reveals a new, progressively ``fractional'' quantization along the training trajectory: SOTA low-precision DNN training that quantizes or skips the whole model can be viewed as the two ``extremes'' of quantization (i.e., full vs. zero bits), while training with intermediate precisions ``fractionally'' quantizes/trains the model. As shown in Fig.~\ref{fig:pfq} and validated in Section \ref{sec:exp_pfq}, PFQ can automatically squeeze out unnecessary bits from the early training stages to simultaneously boost training efficiency and accuracy, while being simple enough for easy adoption. \begin{figure}[!t] \begin{minipage}{0.48\textwidth} \centering \begin{algorithm}[H] \caption{Progressive Fractional Quantization Training } \label{alg:PFQ} \begin{algorithmic}[1] \STATE Initialize the precision schedule \{$bit_i$\} (initial $i$ = 0), indicating threshold $\epsilon$, decaying factor $\alpha$, training epoch $max\_epoch$ \WHILE{epoch < $max\_epoch$ } \STATE Training with precision setting $bit_i$ for one epoch \STATE Calculate normalized loss difference $Loss\_diff$ between consecutive epochs \IF{$Loss\_diff < \epsilon$} \STATE $i \leftarrow i + 1$ (switch to next $bit_{i+1}$) \STATE decay $\epsilon$ by $\alpha$ (prepare for next switch) \ENDIF \ENDWHILE \end{algorithmic} \end{algorithm} \end{minipage} \hfill \begin{minipage}{0.48\textwidth} \centering \begin{algorithm}[H] \caption{FracTrain: Integrating PFQ and DFQ} \label{alg:FracTrain} \begin{algorithmic}[1] \STATE Initialize target $cp$ schedule \{$cp_i$\} (initial $i$ = 0), indicating threshold $\epsilon$, decaying factor $\alpha$, training epoch $max\_epoch$ \WHILE{epoch < $max\_epoch$ } \FOR{training one epoch} \STATE Optimize Eq. (\ref{objective function}) with DFQ \STATE Adaptively flip the sign of $\beta$ in Eq.
(\ref{objective function}) \ENDFOR \STATE Get $Loss\_diff$ as in \textit{Algorithm}~\ref{alg:PFQ} \IF{$Loss\_diff < \epsilon$} \STATE $i \leftarrow i + 1$ (switch to next $cp_{i+1}$) \STATE decay $\epsilon$ by $\alpha$ (prepare for next switch) \ENDIF \ENDWHILE \end{algorithmic} \end{algorithm} \end{minipage} \vspace{-1em} \end{figure} \vspace{-0.2cm} \subsection{Dynamic Fractional Quantization (DFQ)} \vspace{-0.1cm} \label{sec:DFQ} \textbf{Hypothesis.} We propose DFQ to dynamically adapt the precisions of activations and gradients in an input-dependent manner. Note that SOTA DNN hardware accelerators have shown that dynamic precision schemes are \textbf{hardware friendly}. For example, \cite{Sharma_2018} developed a bit-flexible DNN accelerator that comprises bit-level processing units to dynamically match the precision of each layer. With such dedicated accelerators, DFQ's training cost savings would be maximized. \begin{wrapfigure}{r}{0.5\textwidth} \vspace{-0.18in} \centering \includegraphics[width=0.5\textwidth]{images/dfq-method-nips.pdf} \caption{Illustrating the proposed DFQ on top of the $i$-th DNN layer/block, where the orange circle $G$ indicates a recurrent neural network (RNN) gate and Q($\cdot$) denotes a quantization operation. } \vspace{-0.35in} \label{fig:dfq} \end{wrapfigure} \textbf{Design of DFQ.} To the best of our knowledge, DFQ is the first attempt to integrate binary selective layer update and quantization into one unified training framework in order to dynamically construct intermediate ``soft'' variants of selective layer update. Fig.
\ref{fig:dfq} illustrates our DFQ framework, in which the operation of a DNN layer can be formulated as: \begin{equation} F_{i} = \sum_{n=1}^{N-1} G_{i}^nC_{i}^{b_n}(F_{i-1}) + G_{i}^0F_{i-1} \label{skipping layer} \end{equation} where we denote \emph{(i)} the output and input of the $i$-th layer as $F_{i}$ and $F_{i-1}$, respectively, \emph{(ii)} the convolution operation of the $i$-th layer executed with $k$ bits as $C_{i}^k$, where a gating network $G_{i}$ determines the fractional quantization precision, \emph{(iii)} $G_{i}^n$ $\in \{0, 1\}$ as the $n$-th entry of $G_{i}$, and \emph{(iv)} $b_n$ as the precision option of the $n$-th entry (e.g., $n$ = 0 or $N-1$ represents precisions of zero or full bits). Note that only one of the precisions $b_n$ ($n=0,\dots,N-1$) is activated during each iteration. For designing the gating network, we follow \cite{e^2_train} to incorporate a light-weight RNN per layer/block, which takes the same input as its corresponding layer and outputs soft-gating probabilistic indicators. The highest-probability precision option is selected for training at each iteration. The RNN gates have a negligible overhead, e.g., $<$0.1\% of the computational cost of the base layer/block. \textbf{Training of DFQ.} To train DFQ, we incorporate a cost regularization into the training objective: \begin{equation} \min \limits_{(W_{base}, W_G)}\,\, L(W_{base}, W_G)\,\, +\, \beta \times cp(W_{base}, W_G) \label{objective function} \end{equation} where $L$, $cp$, and $\beta$ denote the accuracy loss, the cost-aware loss, and the weighting coefficient that trades off the accuracy and training cost, respectively, and $W_{base}$ and $W_G$ denote the weights of the backbone and gating networks, respectively. The cost-aware loss $cp$ in this paper is defined as the ratio of the computational cost between the quantized and full-precision models in each training iteration.
To achieve a specified $cp$, DFQ automatically controls the sign of $\beta$: if $cp$ is higher than the specified one, $\beta$ is set to be positive, enforcing the model to reduce its training cost by suppressing $cp$ in Eq. (\ref{objective function}); if $cp$ is below the specified one, the sign of $\beta$ is flipped to be negative, encouraging the model to increase its training cost. In the end, $cp$ will stabilize around the specified value. \textbf{Insights.} DFQ unifies two efficient DNN training mindsets, i.e., dynamic selective layer update and static low-precision training, and enables a ``fractional'' quantization of layers during training, in contrast to either a full execution (selected) or complete non-execution (bypassed) of layers. Furthermore, DFQ introduces \textit{input-adaptive quantization} at training for the first time, and automatically learns to adapt the precision of different layers' activations and gradients, in contrast to the current practice of low-precision training \cite{banner2018scalable, yang2019training, zhou2016dorefa} that fixes layer-wise precision during training regardless of inputs. In effect, the selective layer update in \cite{e^2_train} can be viewed as a coarse-grained version of DFQ, i.e., it only allows selecting between full bits (executing without quantization) and zero bits (bypassing). \subsection{FracTrain: unifying PFQ and DFQ} \label{sec:PDFQ} PFQ and DFQ explore two orthogonal dimensions for adaptive quantization towards efficient training: ``\textit{temporally}'' along the training trajectory, and ``\textit{spatially}'' over the model layout. It is hence natural to integrate them into one unified framework, termed FracTrain, which aims to maximally squeeze out unnecessary computational cost at the finest granularity.
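The sign-flipping cost controller described above can be sketched as follows; the function name and the illustrative magnitude `beta_mag` are our own assumptions, not part of the paper's notation:

```python
def dfq_objective(task_loss, cp, target_cp, beta_mag=0.1):
    """Sketch of DFQ's cost-regularized objective (Eq. 2) with the
    adaptive sign flip: when the current compute ratio `cp` exceeds
    the specified target, beta is positive and penalizes cost; when
    `cp` falls below the target, beta turns negative and rewards more
    compute, so `cp` stabilizes around `target_cp` over training.
    `beta_mag` is an illustrative weighting magnitude."""
    beta = beta_mag if cp > target_cp else -beta_mag
    return task_loss + beta * cp
```

In a real training loop, this scalar would be the quantity backpropagated through both the backbone and gating networks, with `cp` computed from the precision branches selected in the current iteration.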
The integration of PFQ and DFQ in FracTrain is straightforward and can be simply realized by applying PFQ to DFQ-based models: the DFQ-based training process is automatically divided into multiple stages controlled by the PFQ indicator in \textit{Algorithm}~\ref{alg:PFQ}, and each stage is assigned a different target $cp$, thus squeezing more bit savings from the early training stages. \textit{Algorithm}~\ref{alg:FracTrain} summarizes the design of our FracTrain. \section{Experiments} \vspace{-0.2cm} We first present our experiment setup in Section \ref{exp:setup} and the ablation studies of FracTrain (i.e., evaluating PFQ and DFQ) in Sections \ref{sec:exp_pfq} and \ref{sec:exp_dfq}, respectively, and then evaluate FracTrain under different training settings in Sections \ref{sec:exp_cdpt} and \ref{sec:adaptation}. Finally, we discuss the connections of FracTrain with recent theoretical analyses of DNN training and the ML accelerators that can support FracTrain. \subsection{Experiment setup} \label{exp:setup} \textbf{Models, datasets, and baselines.} \underline{Models \& Datasets: } We consider a total of \textbf{six DNN models} (i.e., ResNet-18/34/38/74 \cite{he2016deep}, MobileNetV2 \cite{sandler2018mobilenetv2}, and Transformer-base~\cite{vaswani2017attention}) and \textbf{four datasets} (i.e., CIFAR-10/100~\cite{krizhevsky2009learning}, ImageNet~\cite{imagenet_cvpr09}, and WikiText-103~\cite{merity2016pointer}). \underline{Baselines: } We evaluate FracTrain against three SOTA static low-precision training techniques, including SBM~\cite{banner2018scalable}, WAGEUBN~\cite{yang2019training}, and DoReFa~\cite{zhou2016dorefa}, and perform ablation studies of FracTrain (i.e., evaluation of PFQ and DFQ) over SBM~\cite{banner2018scalable}, which is the most competitive baseline based on both their reported results and our own experiments.
Note that we keep all the batch normalization layers~\cite{ioffe2015batch} in floating-point precision in all experiments for both our techniques and the baselines, which is a common convention in the literature. \textbf{Training settings.} For training, we follow the SOTA settings in \cite{wang2018skipnet} for experiments on CIFAR-10/100 and ImageNet, for which more details are provided in the supplement. For the hyperparameters of FracTrain, we simply calculate $\epsilon$ and $\alpha$ (0.05 and 0.3, respectively) from the normalized loss around the turning points for a four-stage PFQ on ResNet-38 with CIFAR-10 (see Fig. 1), and then apply them to all experiments. The resulting $\epsilon$ and $\alpha$ work for all the experiments, showing FracTrain's insensitivity to its hyperparameters. \textbf{Evaluation metrics.} We evaluate PFQ, DFQ, and FracTrain in terms of the following cost-aware metrics of training costs, in addition to the model \textit{accuracy (Acc)}: \emph{(i)} \underline{\textit{Computational Cost (CC):}} Inspired by~\cite{zhou2016dorefa} and following~\cite{shen2020fractional}, we calculate the computational cost of DNNs using the effective number of MACs, i.e., (\# of MACs) $\times$ $Bit_{a}/32$ $\times$ $Bit_{b}/32$ for a dot product between $a$ and $b$, where $Bit_{a}$ and $Bit_{b}$ denote the precisions of $a$ and $b$, respectively. As such, this metric is proportional to the total number of bit operations. \emph{(ii)} \underline{\textit{Energy and Latency:}} Since the \textit{CC} might not align well with the actual energy/latency on real hardware \cite{yang2017designing}, we also consider training energy and latency, characterized using BitFusion \cite{sharma2018bit}, a SOTA cycle-accurate simulator based on Register-Transfer-Level (RTL) implementations, in a commercial CMOS technology, of a SOTA DNN accelerator that supports arbitrary precisions.
Since backpropagation can be viewed as two convolution processes (for computing the gradients of weights and activations, respectively), we estimate the training energy and latency by executing all three convolution processes (one forward and two backward) sequentially in BitFusion. Note that we apply BitFusion to both our FracTrain and all the integer-only baselines to make sure that the adopted hardware parameters (e.g., dataflows) are the same for a fair comparison. \subsection{FracTrain ablation study: evaluate PFQ} \label{sec:exp_pfq} This subsection evaluates PFQ over the most competitive baseline, SBM \cite{banner2018scalable}. \begin{figure*}[!t] \centering \includegraphics[width=\textwidth]{images/PFQ-results-nips.pdf} \caption{Comparing PFQ (blue) with the most competitive baseline, SBM~\cite{banner2018scalable} (red), in terms of model accuracy vs. the total number of training MACs on ResNet-38/74 models with CIFAR-10/100. Note that we use FW-6/BW-8 to denote FW(6,6,6,6)/BW(8,8,8,8) for short.} \label{fig:PFQ-resnet} \vspace{-1em} \end{figure*} \textbf{PFQ on ResNet-38/74 and CIFAR-10/100.} Fig. \ref{fig:PFQ-resnet} compares the accuracy vs. the total number of MACs of SBM and PFQ on ResNet-38/74 and CIFAR-10/100 under four different precision schemes. We have \textbf{two observations}. \underline{First}, PFQ consistently outperforms SBM \cite{banner2018scalable} by reducing the training cost while achieving a comparable or even better accuracy. Specifically, PFQ reduces the training cost by 22.7\% $\sim$ 73.2\% while offering a comparable or better accuracy (-0.08\% $\sim$ +0.34\%), compared to SBM. For example, when training ResNet-74 on CIFAR-100, PFQ of FW(3,4,6,8)/BW(6,8,12,16) achieves 73.2\% computational savings and a better (+0.28\%) accuracy over SBM of FW(8,8,8,8)/BW(16,16,16,16). \underline{Second}, PFQ achieves larger computational cost savings when the models target a higher accuracy and thus require a higher precision.
Experiments under more precision settings are provided in the supplement. \begin{wrapfigure}{r}{0.45\textwidth} \vspace{-1.5em} \centering \includegraphics[width=\linewidth]{images/pfq_sensitivity.pdf} \vspace{-1.6em} \caption{PFQ of FW(3,4,6,8)/BW(6,6,8,8) vs. SBM on ResNet-38 and CIFAR-100 under various $\epsilon$ (see shapes) and $\alpha$ (see colors).} \label{fig:pfg_sensitivity} \vspace{-1.5em} \end{wrapfigure} \textbf{Sensitivity to hyperparameters in PFQ.} To verify the sensitivity of PFQ to its hyperparameters, we evaluate PFQ of FW(3,4,6,8)/BW(6,6,8,8) for training ResNet-38 on CIFAR-100 as shown in Fig.~\ref{fig:pfg_sensitivity} under various $\epsilon$ (different shapes) and $\alpha$ (different colors). We can see that a good accuracy-efficiency trade-off can be achieved by PFQ in a large range of hyperparameter settings as compared with its static baselines, showing PFQ’s insensitivity to hyperparameters. It is intuitive that (1) $\epsilon$ and $\alpha$ control the accuracy-efficiency trade-off, and (2) a larger $\epsilon$ and $\alpha$ (i.e., faster precision increase) lead to a higher training cost and higher accuracy. We also show PFQ’s insensitivity to its precision schedule under three different precision schedule strategies in the supplement. \begin{wraptable}{r}{0.6\textwidth} \vspace{-0.18in} \centering \caption{PFQ vs. SBM on MobileNetV2 and CIFAR-10/100.} \resizebox{0.6\textwidth}{!}{ \begin{tabular}{c|c|c|c|c} \toprule Precision Setting & DataSet & Acc ($\Delta$Acc) & MACs & Comp. 
Saving \\ \midrule \multirow{2}[1]{*}{\tabincell{c}{FW(4,4,6,8)/\\BW(6,6,8,8)}} & CIFAR-10 & 93.77 (+0.04\%) & 1.22E+14 & 17.16\% \\ & CIFAR-100 & 74.84 (+0.09\%) & 6.33E+13 & 27.41\% \\ \midrule \multirow{2}[1]{*}{\tabincell{c}{FW(4,4,6,8)/\\BW(6,8,10,12)}} & CIFAR-10 & 93.69 (+0.03\%) & 1.40E+14 & 27.30\% \\ & CIFAR-100 & 75.07 (+0.37\%) & 6.69E+13 & 44.94\% \\ \midrule \multirow{2}[1]{*}{\tabincell{c}{FW(4,4,6,8)/\\BW(6,8,12,16)}} & CIFAR-10 & 93.90 (-0.10\%) & 1.87E+14 & 21.96\% \\ & CIFAR-100 & 74.94 (+0.03\%) & 7.60E+13 & 49.84\% \\ \bottomrule \end{tabular}% } \vspace{-0.1in} \label{tab:mbv2}% \end{wraptable}% \textbf{PFQ on MobileNetV2 and CIFAR-10/100.} We also evaluate PFQ on compact DNN models. Table \ref{tab:mbv2} shows that PFQ's benefits even extend to training compact models such as MobileNetV2. Specifically, as compared to SBM under three precision schedule schemes, PFQ achieves computational cost savings of 17.2\% $\sim$ 27.3\% and 27.4\% $\sim$ 49.8\%, while having a comparable or better accuracy of -0.10\% $\sim$ +0.04\% and +0.03\% $\sim$ +0.37\%, respectively, on the CIFAR-10 and CIFAR-100 datasets. For experiments using MobileNetV2 in this paper, we adopt a fixed precision of FW-8/BW-16 for the depthwise convolution layers, as MobileNetV2's accuracy is sensitive to the precision of its separable convolutions. \begin{wraptable}{r}{0.6\textwidth} \centering \caption{PFQ vs.
SBM on ResNet-18/ImageNet and Transformer-base/WikiText-103.} \resizebox{0.6\textwidth}{!}{ \begin{tabular}{ccccc} \toprule \multicolumn{1}{c}{Model / Dataset} & Method & \multicolumn{1}{c}{Precision Setting} & Acc (\%) / Perplexity & \multicolumn{1}{c}{MACs} \\ \midrule \multirow{3}[4]{*}{\tabincell{c}{ResNet-18\\ImageNet}} & SBM & \multicolumn{1}{c}{FW-8 / BW-16} & 69.51 & 3.37E+15 \\ & PFQ & \multicolumn{1}{c}{FW(3,4,6,8) / BW(6,8,12,16)} & 69.68 & 2.64E+15 \\ \cmidrule{2-5} & & \textbf{PFQ Improv.} & \textbf{+0.17} & \textbf{21.44\%} \\ \midrule \multirow{3}[4]{*}{\tabincell{c}{Transformer-base\\WikiText-103}} & SBM & \multicolumn{1}{c}{FW-8 / BW-16} & 31.55 & 2.81E+14 \\ & PFQ & \multicolumn{1}{c}{FW(3,4,6,8) / BW(6,8,12,16)} & 31.49 & 1.57E+14 \\ \cmidrule{2-5} & & \textbf{PFQ Improv.} & \textbf{-0.06} & \textbf{44.0\%} \\ \bottomrule \end{tabular}% } \label{tab:pfq-imagenet}% \vspace{-1em} \end{wraptable}% \textbf{PFQ on ImageNet and WikiText-103.} We then evaluate PFQ on \emph{(i)} a large vision dataset ImageNet and \emph{(ii)} a language modeling dataset WikiText-103 to verify its general effectiveness. Table \ref{tab:pfq-imagenet} shows that PFQ again outperforms SBM on both tasks: PFQ reduces the computational cost by 21.44\% on the relatively small ResNet-18 while improving the accuracy by 0.17\% on ImageNet and decreases the computational cost by 44.0\% on Transformer-base/WikiText-103 while improving the perplexity by 0.06, as compared to the competitive SBM baseline. Notably, \textbf{PFQ even achieves a higher accuracy than the SOTA floating-point training technique} \cite{he2016deep} under most of the aforementioned experiments including ResNet-38/74 on CIFAR-10/100 and ResNet-18 on ImageNet, demonstrating PFQ's excellent generalization performance. 
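All MAC counts reported in this section follow the effective-MAC metric defined in Section~\ref{exp:setup}, with backpropagation counted as two extra convolutions. A minimal sketch of that accounting is given below; the helper names and the simplification that each path uses a single operand precision are ours, not the paper's exact per-tensor bookkeeping:

```python
def effective_macs(n_macs, bit_a, bit_b):
    """Effective number of MACs for a dot product between operands
    quantized to bit_a and bit_b bits: (# of MACs) * bit_a/32 * bit_b/32,
    i.e., proportional to the total number of bit operations."""
    return n_macs * (bit_a / 32) * (bit_b / 32)


def training_step_cost(n_macs, fw_bits, bw_bits):
    """Per-layer cost of one training step: one forward convolution
    (weights x activations) plus two backward convolutions (gradients
    w.r.t. weights and w.r.t. activations).  Using one shared precision
    per path is our simplification for illustration."""
    forward = effective_macs(n_macs, fw_bits, fw_bits)
    backward = 2 * effective_macs(n_macs, bw_bits, bw_bits)
    return forward + backward
```

For instance, under this metric an FW-8/BW-8 step costs 1/16 of its full-precision counterpart, which is why 8-bit static training already yields large savings before PFQ/DFQ squeeze out further bits.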
\subsection{FracTrain ablation study: evaluate DFQ} \label{sec:exp_dfq} This subsection evaluates the proposed DFQ over SBM \cite{banner2018scalable} on three DNN models (ResNet-38/74 and MobileNetV2) and two datasets (CIFAR-10 and CIFAR-100), as shown in Fig. \ref{fig:exp_dfq}. We can see that DFQ surpasses SBM in two aspects: \underline{First}, DFQ always demands less computational cost (i.e., a smaller total number of MACs) to achieve the same or even better accuracy, on both the larger models ResNet-38/74 and the compact model MobileNetV2; \underline{Second}, while the static training baselines' performance deteriorates under very low computational costs, DFQ maintains decent accuracies, indicating that DFQ can achieve a better allocation of precision during training. Specifically, DFQ reduces the computational cost by 54.5\% with a comparable accuracy (-0.11\%), or boosts the accuracy by 22.7\% while reducing the computational cost by 28.5\%. Note that DFQ significantly outperforms the selective layer update in~\cite{e^2_train}, e.g., achieving 7.3$\times$ computational savings with a higher accuracy on ResNet-38 and CIFAR-10, validating our hypothesis that DFQ's intermediate ``soft'' variants of selective layer update favor better trade-offs between accuracy and training costs. More details on the experiment settings of Fig. \ref{fig:exp_dfq} are provided in the supplement. \begin{figure}[t!] \centering \includegraphics[width=0.9\textwidth]{images/dfq-nips.pdf} \vspace{-0.1in} \caption{Comparing DFQ with SBM~\cite{banner2018scalable} on ResNet-38/74/MobileNetV2 and CIFAR-10/100.
} \label{fig:exp_dfq} \end{figure} \begin{table*}[!t] \centering \caption{The training accuracy, computational cost, energy, and latency of FracTrain, SBM~\cite{banner2018scalable}, WAGEUBN~\cite{yang2019training}, and DoReFa~\cite{zhou2016dorefa}, when training the ResNet-38/74 models on CIFAR-10/100.} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{clrcc|cc} \toprule \multicolumn{1}{l}{Model / Dataset} & Method & \multicolumn{1}{l}{Precision Setting} & Acc (\%) & \multicolumn{1}{c}{MACs} & Energy (kJ) & \multicolumn{1}{l}{ Latency (min)} \\ \midrule \multirow{6}[4]{*}{\tabincell{c}{ResNet-38\\CIFAR-10}} & WAGEUBN & \multicolumn{1}{c}{FW-8 / BW-8} & 91.81 & 1.11E+14 & 31.63 & 292.74 \\ & DoReFa & \multicolumn{1}{c}{FW-8 / BW-8} & 92.02 & 1.21E+14 & 34.34 & 317.83 \\ & SBM & \multicolumn{1}{c}{FW-8 / BW-8} & \textbf{92.66} & 1.18E+14 & 33.55 & 310.47 \\ & DFQ & \multicolumn{1}{c}{$cp$=3} & 92.49 & 4.96E+13 & 23.97 & 230.32 \\ & FracTrain & \multicolumn{1}{c}{$cp$-1.5/2/2.5/3} & 92.54 & \textbf{3.97E+13} & \textbf{22.93} & \textbf{221.13} \\ \cmidrule{2-7} & \textbf{FracTrain Improv.} & & \textbf{-0.12} & \textbf{64.4 $\sim$ 67.2\%} & \textbf{27.5 $\sim$ 33.2\%} & \textbf{24.5 $\sim$ 30.4\%} \\ \midrule \multirow{6}[4]{*}{\tabincell{c}{ResNet-38\\CIFAR-100}} & WAGEUBN & \multicolumn{1}{c}{FW-8 / BW-8} & 67.95 & 1.15E+14 & 32.76 & 303.19 \\ & DoReFa & \multicolumn{1}{c}{FW-8 / BW-8} & 68.63 & 1.01E+14 & 25.31 & 234.19 \\ & SBM & \multicolumn{1}{c}{FW-8 / BW-8} & 69.78 & 6.88E+13 & 19.55 & 180.93 \\ & DFQ & \multicolumn{1}{c}{$cp$=3} & 69.81 & 3.23E+13 & 16.22 & 155.91 \\ & FracTrain & \multicolumn{1}{c}{$cp$-1.5/2/2.5/3} & \textbf{69.82} & \textbf{2.66E+13} & \textbf{15.74} & \textbf{151.87} \\ \cmidrule{2-7} & \textbf{FracTrain Improv.} & & \textbf{+0.04} & \textbf{61.3 $\sim$ 77.0\%} & \textbf{19.6 $\sim$ 51.9\%} & \textbf{16.1 $\sim$ 49.9\%} \\ \midrule \multirow{6}[4]{*}{\tabincell{c}{ResNet-74\\CIFAR-10}} & WAGEUBN & \multicolumn{1}{c}{FW-8 / BW-8} & 91.35 & 
2.38E+14 & 68.21 & 629.58 \\ & DoReFa & \multicolumn{1}{c}{FW-8 / BW-8} & 91.16 & 2.33E+14 & 66.84 & 616.90 \\ & SBM & \multicolumn{1}{c}{FW-8 / BW-8} & 93.04 & 2.62E+14 & 75.01 & 692.29 \\ & DFQ & \multicolumn{1}{c}{$cp$=3} & \textbf{93.09} & 7.33E+13 & 35.88 & 343.76 \\ & FracTrain & \multicolumn{1}{c}{$cp$-1.5/2/2.5/3} & 92.97 & \textbf{5.85E+13} & \textbf{33.40} & \textbf{321.98} \\ \cmidrule{2-7} & \textbf{FracTrain Improv.} & & \textbf{+0.05} & \textbf{74.9 $\sim$ 77.6\%} & \textbf{50.0 $\sim$ 55.5\%} & \textbf{47.8 $\sim$ 53.5\%} \\ \midrule \multirow{6}[4]{*}{\tabincell{c}{ResNet-74\\CIFAR-100}} & WAGEUBN & \multicolumn{1}{c}{FW-8 / BW-8} & 69.61 & 1.34E+14 & 38.46 & 354.93 \\ & DoReFa & \multicolumn{1}{c}{FW-8 / BW-8} & 69.31 & 1.79E+14 & 51.28 & 473.24 \\ & SBM & \multicolumn{1}{c}{FW-8 / BW-8} & 71.01 & 1.40E+14 & 40.08 & 369.94 \\ & DFQ & \multicolumn{1}{c}{$cp$=3} & 70.58 & 6.72E+13 & 32.89 & 315.11 \\ & FracTrain & \multicolumn{1}{c}{$cp$-1.5/2/2.5/3} & \textbf{71.03} & \textbf{5.46E+13} & \textbf{32.23} & \textbf{310.67} \\ \cmidrule{2-7} & \textbf{FracTrain Improv.} & & \textbf{+0.02} & \textbf{59.3 $\sim$ 69.5\%} & \textbf{16.2 $\sim$ 37.1\%} & \textbf{12.5 $\sim$ 34.4\%} \\ \bottomrule \end{tabular}% } \label{tab:fractrain}% \end{table*}% \subsection{FracTrain over SOTA low-precision training} \label{sec:exp_cdpt} \vspace{-0.1cm} We next evaluate FracTrain over three SOTA low-precision training baselines, including SBM~\cite{banner2018scalable}, DoReFa \cite{zhou2016dorefa}, and WAGEUBN \cite{yang2019training}. Here we consider standard training settings. FracTrain's bit allocation visualizations are provided in the supplement. \begin{wraptable}{r}{0.5\textwidth} \centering \caption{FracTrain vs.
SBM on ImageNet.} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{ccccc} \toprule \multicolumn{1}{c}{Model} & Method & \multicolumn{1}{c}{Precision Setting} & Acc (\%) & \multicolumn{1}{c}{MACs} \\ \midrule \multirow{3}[4]{*}{\tabincell{c}{ResNet-18}} & SBM & \multicolumn{1}{c}{FW-8 / BW-16} & 69.51 & 3.37E+15 \\ & FracTrain & \multicolumn{1}{c}{$cp$=3/3.5/4/4.5} & 69.44 & 1.59E+15 \\ \cmidrule{2-5} & & \textbf{FracTrain Improv.} & \textbf{-0.07} & \textbf{52.85\%} \\ \midrule \multirow{3}[4]{*}{\tabincell{c}{ResNet-34}} & SBM & \multicolumn{1}{c}{FW-8 / BW-16} & 73.34 & 7.18E+15 \\ & FracTrain & \multicolumn{1}{c}{$cp$=3/3.5/4/4.5} & 73.03 & 3.45E+15 \\ \cmidrule{2-5} & & \textbf{FracTrain Improv.} & \textbf{-0.31} & \textbf{51.90\%} \\ \bottomrule \end{tabular}% } \label{tab:fractrain_imagenet}% \vspace{-1em} \end{wraptable}% \textbf{Accuracy and training costs.} Table \ref{tab:fractrain} compares FracTrain in terms of accuracy and various training costs (computational cost, energy, and latency) against the three baselines when training \underline{ResNet-38/74 with CIFAR-10/100}, where we use \textit{FracTrain Improv.} to record the performance improvement of FracTrain over \textbf{the strongest competitor} among the three SOTA baselines. We can see that FracTrain \textbf{consistently outperforms} all competitors by reducing training computational cost, energy, and latency, while improving the accuracy in most cases, under all the models and datasets. Specifically, FracTrain can achieve training cost savings of 59.3\% $\sim$ 77.6\%, 16.2\% $\sim$ 55.5\%, and 12.5\% $\sim$ 53.5\% in terms of the computational cost, energy, and latency, while leading to a comparable or even better accuracy (-0.12\% $\sim$ +1.87\%). Furthermore, we also evaluate FracTrain using \underline{ResNet-18/34 on ImageNet} (see Table \ref{tab:fractrain_imagenet}).
Again, we can see that FracTrain reduces the training computational cost (by 52.85\% and 51.90\% respectively) while achieving a comparable accuracy. \begin{figure}[bt] \centering \includegraphics[width=\textwidth]{images/fractrain_lc-NIPS.pdf} \caption{The testing accuracy's evolution along the training trajectories of different low-precision training schemes on ResNet-38/74 with CIFAR-10, where the x-axis captures the total computational cost up to each epoch.} \vspace{-0.15in} \label{fig:fractrain} \end{figure} \textbf{Training trajectory.} Fig. \ref{fig:fractrain} visualizes the testing accuracy's trajectories of FracTrain ($cp$=1.5/2/2.5/ 3), DFQ ($cp$=3), and the three baselines as the training computational cost increases on the ResNet-38/74 models and CIFAR-10 dataset, where DFQ-CP3 denotes the DFQ training with $cp=$3\%. We can see that FracTrain reaches the specified accuracy given the least training computational cost. \begin{table*}[!htbp] \centering \caption{Adaptation \& fine-tuning training performance of the proposed PFQ, DFQ, FracTrain, and the SBM baseline~\cite{banner2018scalable} when training ResNet-38 on CIFAR-100's subsets.} \resizebox{0.98\textwidth}{!}{ \begin{tabular}{clccc|cc} \toprule \multirow{2}[4]{*}{Model / Dataset} & \multirow{2}[4]{*}{Method} & \multicolumn{1}{c}{\multirow{2}[4]{*}{Precision Setting}} & \multicolumn{2}{c}{Adaptation} & \multicolumn{2}{c}{Fine-tuning} \\ \cmidrule{4-7} & & & Acc (\%) & \multicolumn{1}{c}{MACs} & Acc (\%) & MACs \\ \midrule \multirow{5}[4]{*}{\tabincell{c}{ResNet-38\\CIFAR-100}} & SBM & FW-8 / BW-8 & 77.44 & 9.96E+13 & 64.83 & 1.22E+14 \\ & PFQ & FW(3,4,6,8) / BW(6,6,8,8) & 77.76 & 7.87E+13 & 64.72 & 8.68E+13 \\ & DFQ & $cp$=3 & \textbf{78.08} & 4.96E+13 & \textbf{65.01} & \textbf{3.95E+13} \\ & FracTrain & $cp$-1.5/2/2.5/3 & 77.52 & \textbf{3.12E+13} & 64.53 & 4.45E+13 \\ \cmidrule{2-7} & \multicolumn{2}{l}{\textbf{FracTrain Improv.}} & \textbf{+0.08\%} & \textbf{68.7\%} & \textbf{-0.3\%} & 
\textbf{67.6\%} \\ \bottomrule \end{tabular}% } \label{tab:adaptation}% \end{table*}% \subsection{FracTrain on adaptation \& fine-tuning scenarios} \label{sec:adaptation} To evaluate the potential capability of FracTrain for on-device learning~\cite{li2020halo}, we consider training settings of both adaptation and fine-tuning, where the detailed settings are described in the supplement. Table~\ref{tab:adaptation} compares the proposed PFQ, DFQ, and FracTrain with the SBM baseline in terms of accuracy and the computational cost in the adaptation \& fine-tuning stage, i.e., the highest accuracy achieved during retraining and the corresponding computational cost. We can see that PFQ, DFQ, and FracTrain can all achieve a better or comparable accuracy over SBM, while leading to a large computational cost savings. Specifically, FracTrain reduces the training cost by $68.7\%$ and $67.6\%$ while offering a better (+0.08\%) or comparable (-0.3\%) accuracy, as compared to the SBM baseline for adaptation and fine-tuning training, respectively. \subsection{Discussions} \textbf{Connections with recent theoretical findings.} There have been growing interests in understanding and optimizing DNN training. For example, \cite{pmlr-v97-rahaman19a,xu2019frequency} advocate that DNN training first learns low-complexity (lower-frequency) functional components and then high-frequency features, with the former being less sensitive to perturbations; \cite{achille2018critical} argues that important connections and the connectivity patterns between layers are first discovered at the early stage of DNN training, and then becomes relatively fixed in the latter training stage, which seems to indicate that critical connections can be learned independent of and also ahead of the final converged weights; and \cite{li2019towards} shows that training DNNs with a large initial learning rate helps the model to memorize easier-to-fit and more generalizable pattern faster and better. 
Those findings regarding DNN training seem to be consistent with the effectiveness of our proposed FracTrain. \textbf{ML accelerators to support FracTrain.} Both dedicated ASIC accelerators~\cite{lee20197, kim20201b} and FPGA accelerators (e.g., EDD~\cite{li2020edd}) can leverage the lower average precision required by FracTrain to reduce both the data movement and computation costs during training. As an illustrative example, we implement FracTrain on FPGA to evaluate its real-hardware benefits, following the design in EDD~\cite{li2020edd}, which adopts a recursive architecture for mixed-precision networks (i.e., the same computation unit is reused by different precisions) and dynamic logic to perform dynamic scheduling. The evaluation results for ResNet-38/ResNet-74 on CIFAR-100, evaluated on a SOTA FPGA board (Xilinx ZC706~\cite{zc706}), show that FracTrain leads to 34.9\%/36.6\% savings in latency and 30.3\%/24.9\% savings in energy as compared with FW-8/BW-8, while achieving a slightly better accuracy as shown in Table~\ref{tab:fractrain}. \section{Conclusion} \vspace{-0.2cm} We propose a framework called FracTrain for efficient DNN training, aiming to squeeze out computational savings from the most redundant bit level along the training trajectory and per input. We integrate two dynamic low-precision training methods in FracTrain: Progressive Fractional Quantization and Dynamic Fractional Quantization. The former gradually increases the precision of weights, activations, and gradients during training until reaching the final training stage; the latter automatically adapts the precisions of different layers' activations and gradients in an input-dependent manner. Extensive experiments and ablation studies verify that our methods can notably reduce the computational cost of training while achieving a comparable or even better accuracy. Our future work will strive to identify more theoretical grounds for such adaptively quantized training.
\section*{Broader impact} Our FracTrain framework can have a broad social impact through its contribution to efficient DNN training, which in turn supports the popularization of Artificial Intelligence in daily life. Efficient DNN training techniques are necessary for two reasons. First, recent breakthroughs in deep neural networks (DNNs) have motivated an explosive demand for intelligent edge devices. Many of them, such as autonomous vehicles and healthcare wearables, require real-time and on-site learning to proactively learn from new data and adapt to dynamic environments. The challenge for such on-site learning is that the massive and growing cost of state-of-the-art (SOTA) DNNs stands at odds with the limited resources available on edge devices. With the development of efficient training techniques, on-site learning becomes more efficient and economical, enabling pervasive intelligent computing systems such as smartphones and smartwatches that deeply influence the lifestyle of the whole society. Second, despite the substantially growing need for on-device learning, current practices mostly train DNN models in a cloud server and then deploy the pre-trained models to the devices for inference, due to the large gap between the devices' constrained resources and the highly complex training process. However, according to a recent survey, training a single DNN can generate as much carbon dioxide as five cars over their lifetimes, which is extremely environmentally unfriendly. Efficient training techniques will notably help mitigate the negative ecological impact of training DNNs at data centers as the AI field evolves, further boosting the high-speed development of AI and deepening its influence on society.
Therefore, as the proposed FracTrain framework has been verified to be effective on various applications, its contribution to the efficient training field will directly bring positive impacts to society. However, as efficient training techniques enable more pervasive AI-driven applications, personal data privacy may become a concern that requires the help of privacy-protecting techniques or regulations. \section*{Acknowledgement} We would like to thank Mr. Xiaofan Zhang at UIUC for his useful discussions and suggestions in our evaluation of FracTrain's training savings on FPGA when implemented using their EDD design. The work is supported by the National Science Foundation (NSF) through the Real-Time Machine Learning (RTML) program (Award numbers: 1937592, 1937588). \section{Ablation study on different PFQ strategies} To evaluate the general effectiveness of the proposed PFQ, here we compare three different PFQ strategies, including one heuristic PFQ (termed Manual-PFQ) and two principled PFQ variants (termed Auto-PFQ). In particular, for the heuristic PFQ, we uniformly split the training process into four stages, while the principled PFQ variants (i.e., two-stage/four-stage Auto-PFQ) differ in the number of stages with progressive precisions as controlled by the loss indicator (see Section 3.1). From the results shown in Fig. \ref{fig:PFQ-resnet-appendix}, we can observe that: \underline{First}, both the heuristic and principled PFQ outperform SBM \cite{banner2018scalable} by reducing the training cost while achieving a comparable or even better accuracy. Specifically, the Manual-PFQ with four stages and the Auto-PFQ with two and four stages reduce the training cost by 14.8\% $\sim$ 64.0\%, 8.2\% $\sim$ 63.1\%, and 22.7\% $\sim$ 73.2\%, respectively, with a comparable or better accuracy of -0.07\% $\sim$ +0.57\%, -0.04\% $\sim$ +0.32\%, and -0.08\% $\sim$ +0.34\%, respectively, compared with SBM.
For example, when training ResNet-74 on CIFAR-100, the Auto-PFQ with a precision schedule of FW(3,4,6,8)/BW(6,8,12,16) achieves 73.2\% computational savings over SBM with FW-8/BW-16, with a better (+0.28\%) accuracy. \underline{Second}, progressive quantization along the training trajectory, i.e., PFQ, is in general effective for efficient DNN training regardless of the precision schedule design. For example, in the experiments corresponding to Fig. \ref{fig:PFQ-resnet-appendix}, all three PFQ variants can reduce the training cost, while not hurting, or even improving, the accuracy, \textbf{under different precision schedule schemes with both heuristic and principled designs}. \begin{figure*}[bht] \centering \includegraphics[width=\textwidth]{images/PFQ-results-nips-supple.pdf} \caption{Comparing PFQ with the most competitive baseline, SBM \cite{banner2018scalable} (red), in terms of model accuracy vs. the total number of training MACs on ResNet-38/74 with CIFAR-10/100, where three variants of PFQ under four precision settings are considered. Note that we use FW-6/BW-8 to denote FW(6,6,6,6)/BW(8,8,8,8) for short.} \label{fig:PFQ-resnet-appendix} \vspace{-0.5em} \end{figure*} \section{FracTrain on larger and deeper models} To verify the scalability of FracTrain on larger and deeper models, we further apply FracTrain to ResNet-110/ResNet-164 on CIFAR-10/CIFAR-100 (Table~\ref{tab:deep}) and find that again FracTrain consistently outperforms the FW8/BW8 baseline (SBM~\cite{banner2018scalable}), achieving 38.69\% $\sim$ 67.3\% computational savings with slightly higher accuracy (+0.05\% $\sim$ +0.25\%). \begin{table}[htb] \centering \caption{Comparing FracTrain with SBM~\cite{banner2018scalable} on ResNet-110/164 on CIFAR-10/100. 
} \label{tab:deep} \resizebox{0.8\textwidth}{!} { \begin{tabular}{ccccc} \hline \multirow{2}[2]{*}{ \tabincell{c}{Method} } & \multicolumn{2}{c}{ \vspace{-0.1em} ResNet-110 } & \multicolumn{2}{c}{ ResNet-164 } \\ \cmidrule{2-5} & CIFAR-10 & CIFAR-100 & CIFAR-10 & CIFAR-100 \\ \hline FW8/BW8 & 93.38 & 72.11 & 93.72 & 74.55 \\ FracTrain & 93.51 \textcolor{blue}{($\uparrow$0.13\%)} & 72.19 \textcolor{blue}{($\uparrow$0.08\%)}& 93.77 \textcolor{blue}{($\uparrow$0.05\%)}& 74.8 \textcolor{blue}{($\uparrow$0.25\%)} \\ \hline Comp. Saving & 67.3\% & 45.17\% & 38.69\% & 43.6\% \\ \hline \end{tabular} } \end{table} \begin{wrapfigure}{r}{0.4\textwidth} \vspace{-2em} \centering \includegraphics[width=0.4\textwidth]{images/bit_allocation.pdf} \caption{Bit allocations of FracTrain ($cp$-1.5/2/2.5/3) on ResNet-38/CIFAR-100 at different training epochs: (a) 5-th, (b) 45-th and (c) 85-th epoch.} \vspace{-2em} \label{fig:bit_allocation} \end{wrapfigure} \section{Visualization of bit allocations in FracTrain} \textbf{Settings.} In Fig.~\ref{fig:bit_allocation}, we visualize the bit allocations of FracTrain ($cp$-1.5/2/2.5/3) at the 5-th, 45-th, and 85-th epoch across all the blocks (all layers within a block share the same precision option) on ResNet-38/CIFAR-100, under which FracTrain achieves a 0.04\% higher accuracy and a 61.3\% reduction in computational cost over the baseline SBM~\cite{banner2018scalable} (see the main content's Table 3). Note that due to the input-adaptive property of FracTrain, here we show the precision option with the highest probability of being selected by each block, averaged over all the images (a total of 50,000) from the training dataset. \textbf{Observations and insights.} First, at the early training stage when FracTrain is specified with a small target $cp$, FracTrain allocates more bits to the shallow blocks of the network with smaller widths (i.e., number of output channels), which seems to compensate for the lighter computation in those blocks. 
This observation is consistent with that in the SOTA layer-wise quantization work HAQ~\cite{wang2019haq} under constrained model size. Second, as FracTrain learns to switch to a larger target $cp$ towards the end of the training, more bits will be allocated to the last several blocks for better convergence. We can see that FracTrain automatically learns to balance the task accuracy and training efficiency during training by dynamically allocating bits, progressively along the training trajectory and spatially across different blocks. \section{Detailed training settings on CIFAR-10/100, ImageNet, and WikiText-103} \textbf{Model structure and optimizer.} For ResNet-18/34, we follow the model definition in~\cite{he2016deep}; and for ResNet-38/74, we follow the model definition in~\cite{wang2018skipnet}. For MobileNetV2 on CIFAR-10/100 and the Transformer-base model on WikiText-103, we follow the ones in~\cite{e^2_train} and~\cite{merity2016pointer}, respectively. For the experiments on all the datasets, we adopt an SGD optimizer with a momentum of 0.9 and a weight decay factor of 1e-4, following \cite{wang2018skipnet}. \textbf{Training on CIFAR-10/100.} We adopt a batch size of 128, and the learning rate (LR) is initially set to 0.1 and then decayed by a factor of 10 at both the 80-th and 120-th epochs out of the total 160 epochs, as in \cite{wang2018skipnet}. \textbf{Training on ImageNet.} We adopt a batch size of 256, and the LR is initially set to 0.1 and then decayed by a factor of 10 every 30 epochs out of the total 90 epochs, following \cite{wang2018skipnet}. \textbf{Training on WikiText-103.} We train the basic transformer \cite{vaswani2017attention} on WikiText-103, which consists of 100M tokens and a vocabulary of around 260K. We use a dropout rate of 0.1, and the Adam optimizer with $\beta_1=0.9$, $\beta_2=0.98$ and $\epsilon=10^{-9}$. Each training batch contains a set of 1024 tokens with a sequence length of 256. We train the model for a total of 50,000 steps, following \cite{ott2019fairseq}. 
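The CIFAR-10/100 step learning-rate schedule described above can be sketched as a small helper (milestones and values are taken from the text; the function name is ours):

```python
def cifar_lr(epoch, base_lr=0.1, milestones=(80, 120), decay=10.0):
    """Step LR schedule: start at 0.1 and divide by 10 at epochs 80 and 120."""
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr / (decay ** passed)
```

The ImageNet schedule would follow the same pattern with `milestones=(30, 60)` over 90 epochs.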
\section{Settings of FracTrain on adaptation \& fine-tuning scenarios} To evaluate the potential capability of FracTrain for on-device learning, we consider the training settings of adaptation and fine-tuning, defined as: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \setlength{\parskip}{0pt} \item \textbf{Adaptation.} We split the CIFAR-100 training dataset into two non-overlapping subsets, each containing 50 non-overlapping classes, and first pre-train the model on one subset using full precision. Then, starting from the pre-trained model, we retrain it on the other subset to see how efficiently the model can adapt to the new task. The same splitting is applied to the test set for accuracy validation. \vspace{0.5em} \item \textbf{Fine-tuning.} We split the CIFAR-100 training dataset into two non-overlapping subsets, each containing all the classes. As in the adaptation setting, we first pre-train the model on the first subset using full precision, and then retrain it from the pre-trained model on the other subset, expecting to see continued growth in performance. We use the same test set for accuracy validation. \end{itemize} \section{More details for the experiments of the main content's Fig. 5} In Fig. 5, each experiment result corresponds to one $cp$ setting, which ranges from 1\% to 6\% for experiments with ResNet-38/74 and 3\% to 6\% for experiments with MobileNetV2. For the experiments with ResNet-38/74, DFQ considers seven precision options (including FW-0/BW-0, FW-2/BW-6, FW-3/BW-6, FW-4/BW-6, FW-4/BW-12, FW-6/BW-8, and FW-6/BW-12); and for the experiments with MobileNetV2, DFQ adopts five precision options (including FW-0/BW-0, FW-4/BW-8, FW-6/BW-8, FW-6/BW-10, FW-8/BW-8), where FW-0/BW-0 means skipping the computation of the whole block and reusing the activations from the previous layer/block, as in SLU~\cite{e^2_train}. 
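The two data splits can be sketched at the label level as follows (helper names are ours; the actual experiments perform the corresponding split on the CIFAR-100 training images):

```python
def adaptation_split(num_classes=100):
    """Adaptation: two non-overlapping subsets of 50 classes each."""
    classes = list(range(num_classes))
    half = num_classes // 2
    return classes[:half], classes[half:]

def finetune_split(sample_labels):
    """Fine-tuning: two sample subsets, each still covering all classes.
    Alternating samples is one simple way to keep every class in both halves."""
    return sample_labels[0::2], sample_labels[1::2]
```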
\section{Determining $cp$ for FracTrain's different stages} Here we explain how to determine $cp_i$ in Algorithm 2, i.e., the $cp$ value for each of FracTrain's stages, given an overall goal of $cp$ (the computation percentage over the full precision models, denoted as $cp_{total}$ hereafter) for the whole training process. Specifically, we adopt a simple and intuitive strategy to derive $cp_i$ from $cp_{total}$: \begin{equation} cp_{total} = \frac{1}{M} \sum_{i=0}^{M-1} cp_{i}\,, \quad \text{where} \quad cp_{i+1} = cp_{i} + \Delta_{cp}, \label{eqn:cp} \end{equation} where $M$ is the total number of stages. In this principled way, we can easily determine the $cp$ value of each stage for achieving the specified $cp_{total}$. In all our experiments, we simply adopt a step size of 0.5, i.e., $\Delta_{cp}=0.5$. Once $cp_i$ for the $i$-th stage is specified, $\beta$ in Eq.~(2) will adaptively flip its sign to achieve the $cp_i$ constraint.
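This rule can be sketched directly (variable names are ours; the arithmetic simply solves the averaging constraint of Eq.~(\ref{eqn:cp}) for an arithmetic progression):

```python
def stage_cps(cp_total, num_stages, delta_cp=0.5):
    """Per-stage cp_i whose mean equals cp_total, stepping by delta_cp.
    Since the cp_i form an arithmetic progression, their mean equals
    cp_0 + delta_cp * (M - 1) / 2, which fixes the starting value cp_0."""
    cp0 = cp_total - delta_cp * (num_stages - 1) / 2.0
    return [cp0 + i * delta_cp for i in range(num_stages)]
```

For example, with $cp_{total}=3$, $M=4$ and $\Delta_{cp}=0.5$, the stages get $cp$ values $2.25, 2.75, 3.25, 3.75$, whose mean is exactly 3.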
\section{Introduction} The Large Hadron Collider (LHC), starting its operation within a year, is expected to probe a hitherto unexplored domain of particles and forces beyond the standard model. It can not only clarify some of the many mysteries of the standard model but also perhaps provide a glimpse of other new physics in the TeV energy range. The sense of expectation generated by this in the particle physics community has led to a burst of theoretical activity designed to explore a great many theoretical concepts on which the LHC can perhaps throw light. They include ideas such as extra dimensions, supersymmetry, new strong forces, new Higgs bosons, new quarks and leptons, etc. In this paper, we explore the possibility that the LHC can throw light on a new kind of color sextet Higgs field (denoted by $\Delta_{u^cu^c}$). Existence of such fields would indicate a fundamentally different direction for unification from the conventional grand unified theories. Indeed, there is a class of partial unification theories based on the supersymmetric $SU(2)_L\times SU(2)_R\times SU(4)_c$ model \cite{ps} where $\Delta_{u^cu^c}$ fields appear with masses in the TeV range even though the gauge symmetry breaking scale is in the range of $10^{11}$ GeV, due to the existence of accidental symmetries \cite{chacko}. These models are interesting since they not only unify quarks and leptons but also implement the seesaw mechanism for small neutrino masses \cite{seesaw} and therefore provide a theoretical basis to contemplate the existence of TeV-scale diquark Higgs fields. They make the unique prediction that $\Delta_{u^cu^c}$ fields couple only to the right-handed up-type quarks of all generations. These particles are also connected to baryon number violating processes such as neutron-anti-neutron oscillation \cite{chacko}. 
Discovery of these particles would point towards quark-lepton unification at an intermediate scale rather than at the commonly assumed grand unification scale of $10^{16}$ GeV. An interesting point about these particles is that they can be produced and detected at the LHC. Their couplings are, however, constrained by low energy observations. In this letter we explore this topic, and the main results of our investigation are: \begin{itemize} \item the present experimental information on $D^0-\overline{D^0}$ mixing can be satisfied by setting to zero only the diagonal coupling of the $\Delta_{u^cu^c}$ to the charm quarks; \item the remaining couplings can be large enough that the production rate in $pp$ collisions is significant, and there are observable signals for the diquark Higgs field via its double top and single top plus jet production. Note also that a $pp$ colliding machine such as the LHC is more favorable for the production of these kinds of fields than a $p\overline{p}$ machine, e.g., the Tevatron; \item the $\Delta_{u^cu^c}$ coupling matrix to quarks can be a direct measure of the neutrino mass matrix if the neutrino masses have an inverted hierarchy within our scheme, providing a unique way to probe the lepton flavor structure using the LHC. \end{itemize} {\bf Brief overview of model with naturally light $\Delta_{u^cu^c}$: } In order to theoretically motivate our study of color sextet Higgs fields, we discuss how these ``light mass'' particles can naturally arise in a class of supersymmetric seesaw models for neutrino masses \cite{seesaw}. The seesaw mechanism extends the standard model with three right-handed neutrinos and adds large Majorana masses for them. The fact that the seesaw scale is much lower than the Planck scale suggests that there may be a symmetry protecting this scale. A natural symmetry is local B-L symmetry, whose breaking leads to the right-handed Majorana neutrino masses. 
A gauge theory that accommodates this scenario is the left-right symmetric model based on the gauge group $SU(2)_L\times SU(2)_R\times U(1)_{B-L}\times SU(3)_c$. Being quark-lepton symmetric, this model easily lends itself to quark-lepton unification a la Pati-Salam into the gauge group $SU(2)_L\times SU(2)_R\times SU(4)_c$ \cite{ps}. It has already been shown \cite{chacko} that within a supersymmetric Pati-Salam scheme, if $SU(4)_c$ color is broken not by $SU(2)_{L,R}$ doublet fields as was suggested in \cite{ps} but rather by triplets as proposed in \cite{goran}, then despite the high seesaw scale of around $10^{11}$ GeV or so, there are light (TeV mass) sextet diquarks of the type $\Delta_{u^cu^c}$. To show this more explicitly, recall that the quarks and leptons in this model are unified and transform as $\psi:({\bf 2,1,4})\oplus \psi^c:({\bf 1,2},\overline{\bf 4})$ representations of $SU(2)_L\times SU(2)_R\times SU(4)_c$. For the Higgs sector, we choose $\phi_1:(\bf{2,2,1})$ and $\phi_{15}:(\bf{2,2,15})$ to give mass to the fermions, and the $\Delta^c:({\bf 1,3,10})\oplus \overline{\Delta}^c:({\bf 1,3},\overline{\bf 10})$ to break the $B-L$ symmetry. The diquarks mentioned above are contained in the $\Delta^c:(\bf{1,3,10})$ multiplet. The renormalizable superpotential for this model has a large global symmetry of $U(30,c)$ and, on gauge symmetry breaking, leaves all diquark Higgs fields and a pair of doubly charged Higgs bosons light. In this theory, the gauge couplings become non-perturbative in the $10-100$ TeV range and do not yield a high seesaw scale, as may be desirable. On the other hand, if we add an extra $B-L$ neutral triplet Higgs field $\Omega:(\bf{1,3,1})$ to this theory, the symmetry of the theory gets lowered, and this helps to greatly reduce the number of light diquark states. 
The reduction of the global symmetry can be seen from the superpotential of this model, $ W~=~W_Y~+~W_H$, where \begin{eqnarray} W_H&=&\lambda_1 S( \Delta^c\overline{\Delta}^c-M_\Delta^2) +\mu_{i}{\rm Tr}\,(\phi_i\phi_i) \,,\\ W_Y&=&h_1\psi\phi_1 \psi^c + h_{15} \psi\phi_{15} \psi^c + f \psi^c\Delta^c \psi^c. \label{Yukawa} \end{eqnarray} Note that since we do not have parity symmetry in the model, the Yukawa couplings $h_1$ and $h_{15}$ need not be symmetric matrices. This superpotential has $U(10,c)\times SU(2)$ global symmetry. When the neutral component of $(\bf{1,3,10+\overline{10}})$ picks up a VEV, this symmetry breaks down to $U(9,c)\times U(1)$, leaving 21 complex massless scalar fields. Since the gauge symmetry also breaks down from $SU(2)_R\times SU(4)_c$ to $SU(3)_c\times U(1)_Y$, nine of these are absorbed, leaving 12 complex massless states, which are the sextet $\Delta_{u^cu^c}$ (the submultiplet of the $\Delta^c$ in Eq. (\ref{Yukawa})) plus its complex conjugate states from the ${\bf \overline{10}}$ representation above. Once supersymmetry breaking effects are taken into account and higher dimensional terms \begin{eqnarray} && \lambda_A \frac{(\Delta^c\overline{\Delta}^c)^2}{M_{P\!\ell}} +\lambda_B\frac{(\Delta^c{\Delta^c})(\overline{\Delta}^c\overline{\Delta}^c)}{ M_{P\!\ell}} \nonumber \\ &+& \lambda_C \Delta^c\overline{\Delta}^c\Omega + \lambda_D \frac{{\rm Tr}\,(\phi_1\Delta^c \overline{\Delta}^c\phi_{15})}{M_{P\!\ell}} \, , \end{eqnarray} are included, these $\Delta_{u^cu^c}$ fields pick up a mass of order $\lambda_B\frac{v^2_{BL}}{M_{Pl}}$, which for $v_{BL}\sim 10^{11}$ GeV is naturally in the 100 GeV to TeV range. We denote the mass of $\Delta_{u^cu^c}$ by $m_\Delta$. {\bf Phenomenological constraints on $\Delta_{u^cu^c}$ couplings to quarks:} The magnitudes of the couplings of the diquark Higgs to up-type quarks are important for its LHC signal as well as other manifestations in the domain of rare processes. 
As is clear from Eq.~(\ref{Yukawa}), the sextet $\Delta_{u^cu^c}$ couplings to quarks, $f_{ij}$, are also directly related to the neutrino masses, which provides a way to probe neutrino masses from LHC observations. Due to the existence of other parameters, current neutrino observations do not precisely pin down the $f_{ij}$. There are, however, other constraints on them. To study these constraints, we define the $\Delta_{u^cu^c}$ couplings ($f_{ij}$) in a basis where the up-type quarks are mass eigenstates. A major constraint on them comes from the $D^0-\overline{D^0}$ mixing, which is caused by the exchange of the $\Delta_{u^cu^c}$ field: \begin{eqnarray} M_{D^0-\overline{D^0}}~=~\frac{f_{11}f_{22}}{4m^2_{\Delta}}\overline{c}\gamma_\mu (1-\gamma_5)u \overline{c}\gamma^\mu(1-\gamma_5) u ; \end{eqnarray} The present observations \cite{ddbar} imply the transition mass $\Delta M_D$ for $D^0-\overline{D^0}$ to lie in the range $8.5\times 10^{-15} \leq \Delta M_D\leq 1.9\times 10^{-14}$ in GeV units. In our model, we can estimate this to be \begin{eqnarray} \Delta M_D\simeq \frac{f_{11}f_{22}}{4m^2_{\Delta}}f^2_DM_D \end{eqnarray} which implies that $\frac{f_{11}f_{22}}{4m^2_{\Delta}}\leq 10^{-12}$ GeV$^{-2}$; for a TeV diquark mass, which is in the range of our interest, this implies $f_{11}f_{22}\leq 4\times 10^{-6}$. If we assume that $f_{11}\gg f_{22}$, then for $f_{11}\sim 0.1$ or so, $f_{22}$ is close to zero, which we assume to be the case in our phenomenological analysis \cite{pakvasa}. The next constraint comes from decays to non-strange final states, e.g., $D\rightarrow \pi\pi$, which are suppressed compared to the decays with strange final states. This bound, however, is weak. The present limits on such non-strange final states are at the level of $B\leq 10^{-4}$ \cite{PDG}, which implies $f_{11}f_{12}\leq 4\times 10^{-2}$ for $m_\Delta$ in the few hundred GeV to TeV range. This will be easily satisfied if $f_{11}\sim f_{12}\sim 0.2$. 
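The quoted bound can be checked with a few lines of arithmetic (the $D$-meson decay constant $f_D \approx 0.2$ GeV and mass $M_D \approx 1.87$ GeV are inputs we assume here, not stated in the text; the result agrees with the $f_{11}f_{22}\lesssim 4\times 10^{-6}$ estimate at the order-of-magnitude level):

```python
# Order-of-magnitude check of the D0-D0bar mixing bound on f11*f22.
# Tree-level diquark exchange with vacuum insertion gives, schematically,
#   Delta M_D ~ (f11*f22 / (4 m_Delta^2)) * f_D^2 * M_D   (GeV units).
# f_D ~ 0.2 GeV and M_D ~ 1.865 GeV are assumed inputs (not from the text).
def f11f22_bound(delta_m_max=1.9e-14, m_delta=1.0e3, f_D=0.2, M_D=1.865):
    """Largest f11*f22 compatible with the measured D0-D0bar transition mass."""
    return 4.0 * m_delta**2 * delta_m_max / (f_D**2 * M_D)
```

For $m_\Delta = 1$ TeV this gives $f_{11}f_{22} \lesssim 10^{-6}$, consistent with the quoted $4\times 10^{-6}$ within the uncertainty of the hadronic inputs.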
{\bf Collider phenomenology:} Since the diquark Higgs couples to a pair of up-type quarks, it can be produced at high energy hadron colliders such as the Tevatron and the LHC through the annihilation of a pair of up quarks. Clearly, a proton-proton collider leads to a higher production rate for $\overline{\Delta}_{u^cu^c}$ compared to a proton-anti-proton colliding machine. As a signature of diquark production at hadron colliders, we concentrate on its decay channels which include at least one anti-top quark (top quark for the anti-diquark Higgs case) in the final state. The top quark has a large mass and decays electroweakly before hadronizing. Due to this characteristic feature, which distinguishes it from other quarks, top quarks can be used as an ideal tool \cite{TopPhys} to probe other new physics beyond the standard model \cite{tp}. Since the diquark couples only to up-type quarks, once it is produced, its decay gives rise to the production of double top quarks ($\overline{\Delta}_{u^cu^c} \rightarrow tt$) or a single top quark + jet ($\overline{\Delta}_{u^cu^c} \rightarrow t u$ or $t c$). These processes have no standard model counterpart, and the signature of diquark production would be cleanly distinguished from the standard model background. We leave detailed collider studies of the signal events of diquark (anti-diquark) Higgs production and the standard model background events for future work. Instead, as a conservative treatment, we calculate the resonant production of diquark and anti-diquark Higgs at the Tevatron and the LHC and compare it to $t \overline{t}$ production in the standard model. The reason is that to observe resonant production of $\Delta_{u^cu^c}$ and measure its mass, it is necessary to reconstruct the invariant mass of the final state. 
In the double top quark production, if one uses the leptonic decay mode of a top quark, $t \rightarrow b W^+ \rightarrow b \ell^+ \nu$, for the identification of the top quark, with one missing neutrino, and the hadronic decay mode for the other top quark to reconstruct the invariant mass \cite{topinvmass}, it becomes difficult to tell $t$ from $\overline{t}$. Note, however, that if one can use leptonic decay modes for both tops, one can distinguish $tt$ from $t\overline{t}$ through the charges of the produced leptons. First, we give basic formulas for our study of diquark Higgs production at hadron colliders. The fundamental processes in question are $u u \rightarrow \overline{\Delta}_{u^cu^c} \rightarrow tt, tu, tc$ ($\overline{u} \overline{u} \rightarrow \Delta_{u^cu^c} \rightarrow \overline{t} \overline{t}, \overline{u} \overline{t}, \overline{c} \overline{t}$ for anti-diquark Higgs production). From Eq.~(\ref{Yukawa}), the cross section is found to be \begin{eqnarray} && \frac{ d \sigma( uu \rightarrow\overline{ \Delta}_{u^cu^c} \rightarrow tt)} {d \cos \theta} = \frac{|f_{11}|^2 \; |f_{33}|^2}{16 \pi} \nonumber\\ &\times & \frac{\hat{s}-2 m_t^2} {(\hat{s}-m_{\Delta}^2)^2 + m_{\Delta}^2 \Gamma_{\rm tot}^2 } \sqrt{1- \frac{4 m_t^2}{\hat{s}}}, \nonumber \\ && \frac{d \sigma( uu \rightarrow \overline{\Delta}_{u^cu^c} \rightarrow ut, ct)}{d \cos \theta} = \frac{|f_{11}|^2 \; |f_{13,23}|^2}{8 \pi \hat{s}} \nonumber\\ &\times & \frac{(\hat{s}- m_t^2)^2} {(\hat{s}-m_{\Delta}^2)^2 + m_{\Delta}^2 \Gamma_{\rm tot}^2 } . 
\label{CrossParton} \end{eqnarray} Here, we have neglected all quark masses except for the top quark mass ($m_t$), $\cos \theta$ is the scattering angle, and $\Gamma_{\rm tot}$ is the total decay width of the diquark Higgs, which is the sum of the partial decay widths, \begin{eqnarray} \Gamma (\overline{\Delta}_{u^cu^c} \to uu, cc) &=& \frac{3}{16 \pi} |f_{11,22}|^2 \; m_{\Delta}, \nonumber \\ \Gamma (\overline{\Delta}_{u^cu^c} \to tt) &=& \frac{3}{16 \pi} |f_{33}|^2 \; m_{\Delta} \nonumber \\ & \times & \sqrt{1-\frac{4 m_t^2}{m_{\Delta}^2}} \left( 1- \frac{2 m_t^2}{m_{\Delta}^2} \right), \nonumber \\ \Gamma (\overline{\Delta}_{u^cu^c} \to uc) &=& \frac{3}{8 \pi} |f_{12}|^2 \; m_{\Delta}, \nonumber \\ \Gamma (\overline{\Delta}_{u^cu^c} \to u t, c t) &=& \frac{3}{8 \pi} |f_{13,23}|^2 \; m_{\Delta} \left( 1- \frac{m_t^2}{m_{\Delta}^2} \right)^2. \end{eqnarray} Note that the cross section is independent of the scattering angle because the diquark Higgs is a scalar. With these cross sections at the parton level, we study the diquark production at the Tevatron and the LHC. At the Tevatron, the total production cross section of an up-type quark pair ($u_i u_j$ where $u_{1,2,3}=u,c,t$) through the diquark Higgs in the s-channel is given by \begin{eqnarray} && \sigma (p \overline{p} \to u_i u_j) = \int dx_1 \int dx_2 \int d \cos \theta \nonumber \\ &\times & f_u(x_1, Q^2) f_{\overline{u}}(x_2, Q^2) \nonumber \\ &\times & \frac{d \sigma(u u \to \overline\Delta_{u^cu^c} \to u_i u_j; \hat{s}=x_1 x_2 E_{\rm CMS}^2)}{d \cos \theta}, \label{CrossTevatron} \end{eqnarray} where $f_u(x_1, Q^2)$ and $ f_{\overline{u}}(x_2, Q^2)$ denote the parton distribution functions, and $E_{\rm CMS}$ is the collider energy. Note that one parton distribution function is for the up quark and the other is for the sea up quark, since the latter comes from an anti-proton (for a proton-anti-proton system such as the Tevatron). 
This fact indicates that at the Tevatron the production cross section of the diquark Higgs is the same as that of the anti-diquark Higgs, reflecting the fact that the total baryon number of the initial $p \overline{p}$ state is zero. At the LHC, the total production cross section of an up-type quark pair is given by \begin{eqnarray} && \sigma (p p \to u_i u_j) = \int dx_1 \int dx_2 \int d \cos \theta \nonumber \\ &\times& f_u(x_1, Q^2) f_u(x_2, Q^2) \nonumber \\ &\times & \frac{d \sigma(u u \to \overline\Delta_{u^cu^c} \to u_i u_j; \hat{s}=x_1 x_2 E_{\rm CMS}^2)}{d \cos \theta}. \end{eqnarray} Here, both parton distribution functions are for the up quark in a proton (both valence quarks), corresponding to the proton-proton system at the LHC. The total production cross section of an up-type anti-quark pair ($\overline{u}_i \overline{u}_j$) is obtained by replacing the parton distribution functions with those for antiquarks. The initial $p p$ state has a positive baryon number, so that the production cross section of the diquark Higgs is much larger than that of the anti-diquark Higgs at the LHC. The dependence of the cross section on the final state invariant mass $M_{u_i u_j}$ is described as \begin{eqnarray} && \frac{d \sigma (p p \to u_i u_j)}{d M_{u_i u_j}} = \int d \cos \theta \int^1_{ \frac{M_{u_i u_j}^2}{E_{\rm CMS}^2}} dx_1 \frac{2 M_{u_iu_j}}{x_1 E_{\rm CMS}^2} \nonumber\\ &\times& f_u(x_1, Q^2) f_u \left( \frac{M_{u_iu_j}^2}{x_1 E_{\rm CMS}^2}, Q^2 \right) \nonumber \\ &\times & \frac{d \sigma(u u \to \overline\Delta_{u^cu^c} \to u_i u_j)} {d \cos \theta}. \label{CrossLHC} \end{eqnarray} The production cross section of the diquark Higgs and its branching ratio to final state up-type quarks depend on the coupling $f_{ij}$. This coupling is, in general, a free parameter in the model, and in the following analysis, we take an example for $f_{ij}$, \begin{eqnarray} f_{ij}= \left[ \begin{array}{ccc} 0.3 & 0 & 0.3 \\ 0 & 0 & 0 \\ 0.3 & 0 & 0.3 \end{array} \right] . 
\end{eqnarray} In this example, the phenomenological constraints on $f_{ij}$ discussed in the previous section are satisfied with $f_{12}=f_{22}=0$. This example gives rise to the processes $ uu \to tt, ut$ that we are interested in. Let us first examine the lower bound on the diquark Higgs mass from Tevatron experiments. We refer to the current experimental data on the top quark pair production cross section \cite{CDF}, \begin{eqnarray} \sigma(t \overline{t}) = 7.3 \pm 0.5({\rm stat}) \pm 0.6({\rm syst}) \pm 0.4({\rm lum}) \; {\rm pb}, \end{eqnarray} and impose a constraint on the double top quark and single top quark production cross sections through the diquark Higgs in the s-channel. Since most of the measured $\sigma_{t\overline{t}}$ value can be understood as the standard model effect, any possible new physics contribution should lie within the uncertainty range of $\sigma_{t\overline t}$; we therefore take the following conservative bound: \begin{eqnarray} \sigma( p \overline{p} \to \Delta_{u^cu^c} \to tt, ut) \lesssim 1.5~{\rm pb}. \end{eqnarray} In our numerical analysis, we employ CTEQ5M \cite{CTEQ} for the parton distribution functions with the factorization scale $Q=m_t=172$ GeV. Fig.~1 shows the total cross section of $tt$ and $tu$ productions as a function of the diquark Higgs mass, with $E_{\rm CMS} = 1.96$ TeV. The lower bound is found to be $m_{\Delta} \gtrsim$ 470 GeV. \begin{figure}[t] \includegraphics[width=0.8\columnwidth]{Fig1.eps} \caption{ The cross sections of $tt$ (dotted line) and $tj$ (dashed line) productions mediated by the diquark Higgs in s-channel at Tevatron with $E_{\rm CMS}=1.96$ TeV. } \end{figure} \begin{figure}[t] \includegraphics[width=0.8\columnwidth]{Fig2.eps} \caption{ The differential cross sections for $tj$ (dashed line), $tt$ (dotted line), $\overline{t}j$ (dashed-dotted line) and $\overline{t}\overline{t}$ (dashed-dotted-dotted line) as a function of the invariant mass of the final state $M_{u_iu_j}$. 
The left peak corresponds to $m_\Delta=600$ GeV and the right one to $m_\Delta=1$ TeV. The solid line is the standard model $t\overline{t}$ background. } \label{Fig2} \end{figure} \begin{figure}[t] \includegraphics[width=0.8\columnwidth]{Fig3.eps} \caption{ Angular distribution of the cross section for $m_{\Delta}=600$ GeV with $M_{\rm cut}=550$ GeV, together with the $t \overline{t}$ production in the standard model. The same line convention as in Fig.~\ref{Fig2} has been used. }\label{Fig3} \end{figure} Next we investigate the diquark and anti-diquark Higgs production at the LHC with $E_{\rm CMS}=14$ TeV. The differential cross sections for each process with $m_{\Delta}=600$ GeV and 1 TeV are depicted in Fig.~2, together with the $t \overline{t}$ production cross section in the standard model. We can see that the peak cross sections for the $tt$ and $tu$ productions exceed the standard model cross section, while the $\overline{t} \overline{t}$ and $\overline{t} \overline{u}$ cross sections fall below it. This discrepancy between the production cross sections of the diquark and anti-diquark Higgs at the LHC is direct evidence of the non-zero baryon number of the diquark Higgs. The charge of the lepton from the leptonic decay of a top or anti-top quark distinguishes top quarks from anti-top quarks. Counting the number of top quark events and anti-top quark events from their leptonic decay modes would therefore reveal the non-zero baryon number of the diquark Higgs. The angular distribution of the final states carries information about the spin of the intermediate state. As shown in Eq.~(\ref{CrossParton}), there is no angular dependence of the diquark Higgs production cross section, because the diquark Higgs is a scalar particle. On the other hand, the top quark pair production in the standard model is dominated by the gluon fusion process, and the differential cross section shows peaks in the forward and backward regions. 
Therefore, the signal of the diquark Higgs production is enhanced in the region of large scattering angle (in the center of mass frame of the colliding partons). Imposing a lower cut on the invariant mass, $M_{\rm cut}$, the angular dependence of the cross section is described as \begin{eqnarray} && \frac{d \sigma (p p \to u_i u_j) } {d \cos \theta} = \int_{M_{\rm cut}}^{E_{\rm CMS}} d M_{u_i u_j} \int^1_{ \frac{M_{u_i u_j}^2}{E_{\rm CMS}^2} } dx_1 \nonumber \\ &\times& \frac{2 M_{u_iu_j}}{x_1 E_{\rm CMS}^2} f_u(x_1, Q^2) f_u \left( \frac{M_{u_iu_j}^2}{x_1 E_{\rm CMS}^2}, Q^2 \right) \nonumber \\ &\times & \frac{d \sigma(u u \to \overline\Delta_{u^cu^c} \to u_i u_j)} {d \cos \theta}. \label{DCrossLHC} \end{eqnarray} The results for $m_{\Delta}=600$ GeV with $M_{\rm cut}=550$ GeV are depicted in Fig.~3, together with the standard model result. Here the lower cut on the invariant mass, close to the diquark Higgs mass, dramatically reduces the standard model cross section compared to the diquark Higgs signal. We now discuss the connection of the coupling $f_{ij}$ to the neutrino mass. Once the $B-L$ symmetry is broken by $\langle \Delta^c \rangle$ along the $\nu^c\nu^c$ direction, right-handed neutrinos acquire masses through the Yukawa coupling in Eq.~(\ref{Yukawa}), and their mass matrix is proportional to $f_{ij}$. Therefore, $f_{ij}$ is related to neutrino oscillation data through the (type I) see-saw mechanism, which unfortunately involves unknown Dirac Yukawa couplings. When we impose the left-right symmetry on the model, $\Delta^c$ is accompanied by $\overline{\Delta} : ({\bf 3,1},\overline{\bf 10})$, which adds a new term to the superpotential, $ f \psi \overline{\Delta}^c_L \psi$, with the same Yukawa coupling $f_{ij}$. Through this Yukawa coupling, the type II see-saw mechanism can generate Majorana masses for the left-handed neutrinos. When the type II see-saw contributions dominate, the light neutrino mass matrix becomes proportional to $f_{ij}$. 
In this case, there is a direct relation between the collider physics involving diquark Higgs production and neutrino oscillation data. For the type II see-saw dominance, a sample value for $f_{ij}$ that fits neutrino observations is given by \begin{eqnarray} f_{ij}= \left[ \begin{array}{ccc} 0.27 & -0.48 & -0.47 \\ -0.48 & 0 & -0.38 \\ -0.47 & -0.38 & 0.2 \end{array} \right] . \end{eqnarray} Again, this Yukawa coupling matrix is consistent with the phenomenological constraints discussed in the previous section. The type II see-saw gives the light neutrino mass matrix via $m_\nu = f v_T$ with $v_T= \langle \overline{\Delta} \rangle$. For $v_T=0.1$ eV, it predicts the neutrino oscillation parameters to be: \begin{eqnarray} && \Delta m_{12}^2 = 8.9 \times 10^{-5} \; {\rm eV}^2, \; \; \Delta m_{23}^2 = 3 \times 10^{-3} \; {\rm eV}^2, \nonumber \\ && \sin^2 \theta_{12} = 0.32, \; \; \sin^2 2 \theta_{23} = 0.99, \; \; |U_{e3}| = 0.2, \nonumber \end{eqnarray} which are all consistent with the current neutrino oscillation data \cite{PDG}. Here the resulting light neutrino mass spectrum is inverted hierarchical. For $f_{22} \ll 1$, as required by the $D^0-\overline{D^0}$ mixing data, analytic and numerical studies show that only the inverted hierarchical mass spectrum can reproduce the observed neutrino oscillation data in the type II seesaw case. We have performed the same analysis as before for this case and find the lower bound on the diquark Higgs mass from Tevatron data to be $m_{\Delta} \gtrsim$ 450 GeV, which is a little milder than before. In this case, only the peak cross section of the single top + jet production exceeds the $t \overline{t}$ production cross section of the standard model. The differential cross section of Eq.~(\ref{DCrossLHC}) is independent of the scattering angle, and we find $d \sigma/d \cos \theta = 60.6$ pb for the single top + jet production for $m_{\Delta}=600$ GeV with $M_{\rm cut}=550$ GeV. 
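The quoted mass splittings can be cross-checked by diagonalizing the sample coupling matrix. The following standalone sketch (helper names are ours; $v_T = 0.1$ eV is taken from the text) solves the characteristic cubic of the symmetric $3\times 3$ matrix with Vi\`ete's trigonometric formula, recovering $\Delta m^2_{12}\approx 8.9\times 10^{-5}$ eV$^2$ and $\Delta m^2_{23}\approx 3\times 10^{-3}$ eV$^2$; the mixing angles would additionally require the eigenvectors:

```python
import math

# Check of the sample type-II seesaw texture: m_nu = f * v_T, v_T = 0.1 eV.
F = [[ 0.27, -0.48, -0.47],
     [-0.48,  0.00, -0.38],
     [-0.47, -0.38,  0.20]]

def sym3_eigenvalues(a):
    """Real eigenvalues of a symmetric 3x3 matrix via its characteristic cubic."""
    tr = a[0][0] + a[1][1] + a[2][2]
    minors = (a[1][1] * a[2][2] - a[1][2] * a[2][1]
              + a[0][0] * a[2][2] - a[0][2] * a[2][0]
              + a[0][0] * a[1][1] - a[0][1] * a[1][0])
    det = (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
           - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
           + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    # Characteristic polynomial: x^3 + b x^2 + c x + d = 0.
    b, c, d = -tr, minors, -det
    # Depress with x = t - b/3, then apply Viete's trigonometric solution
    # (valid for a symmetric matrix: three real roots, p < 0).
    p = c - b * b / 3.0
    q = 2.0 * b**3 / 27.0 - b * c / 3.0 + d
    r = 2.0 * math.sqrt(-p / 3.0)
    phi = math.acos(max(-1.0, min(1.0, 3.0 * q / (p * r))))
    return sorted(r * math.cos(phi / 3.0 - 2.0 * math.pi * k / 3.0) - b / 3.0
                  for k in range(3))

v_T = 0.1  # eV
masses = sorted(abs(e) * v_T for e in sym3_eigenvalues(F))  # ascending
# Inverted ordering: the lightest state sits apart from a near-degenerate pair.
dm_atm = masses[1]**2 - masses[0]**2  # atmospheric-scale splitting
dm_sol = masses[2]**2 - masses[1]**2  # solar-scale splitting of the heavy pair
```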
Finally, we comment on the spin polarization of the final-state top (anti-top) quark. Because of its large mass, the top quark decays before hadronizing, and the information on the top quark spin polarization is directly transferred to its decay products, resulting in significant angular correlations between the top quark polarization axis and the direction of motion of the decay products \cite{TopSpin}. Measuring the top spin polarization provides information on the chiral nature of the top quark at its interaction vertex. It has been shown that measuring top spin correlations can increase the sensitivity to a new particle at the Tevatron \cite{SpinCorr1} and the LHC \cite{SpinCorr2}. In the diquark Higgs production, it is very interesting to measure the polarization of the top (anti-top) quark in the single top production. Only the right-handed top quark couples to the diquark Higgs, so the top quark produced from diquark Higgs decay is in a right-handed state, while the top quark from single top production through electroweak processes in the standard model is purely left-handed. \acknowledgments We would like to thank K. S. Babu, T. Han, K. Smolek and C. P. Yuan for useful discussions. The works of R.N.M. and H.B.Y. are supported by the National Science Foundation Grant No. PHY-0652363. The work of N.O. is supported in part by the Grant-in-Aid for Scientific Research from the Ministry of Education, Science and Culture of Japan (\#18740170).
\section*{Highlights} \begin{itemize} \item Robust monolithic solver with adaptive under-relaxation and arc-length method. \item Snap-back behaviour is captured with a phase-field fracture energy-based dissipation constraint. \item PHT-splines within the IGA framework are utilized for adaptive mesh refinement. \end{itemize} \bigskip \begin{abstract} \noindent The phase-field fracture free-energy functional is non-convex with respect to the displacement and the phase field. This results in poor performance of conventional monolithic solvers like the Newton-Raphson method. In order to circumvent this issue, researchers opt for alternate minimization (staggered) solvers. Staggered solvers are robust for phase-field based fracture simulations as the displacement and the phase-field sub-problems are convex in nature. Nevertheless, the staggered solver requires a very large number of iterations (of the order of thousands) to converge. In this work, a robust monolithic solver is presented for the phase-field fracture problem. The solver adopts a fracture energy-based arc-length method and an adaptive under-relaxation scheme. The arc-length method enables the simulation to overcome critical points (snap-back, snap-through instabilities) during the loading of a specimen. The use of an under-relaxation scheme stabilizes the solver by preventing divergence due to an ill-behaved stiffness matrix. The efficiency of the proposed solver is further amplified with an adaptive mesh refinement scheme based on PHT-splines within the framework of isogeometric analysis. The numerical examples presented in the manuscript demonstrate the efficacy of the solver. All the codes and data-sets accompanying this work will be made available on GitHub (\url{https://github.com/rbharali/IGAFrac}). 
\end{abstract} \keywords{phase-field fracture, brittle material, monolithic solver, arc length method, variational damage, IGA, PHT-splines} \newpage \section{Introduction}\label{sec:1} The seminal work of Francfort and Marigo \cite{Francfort1998} led to the emergence of the phase-field based fracture model as an alternative fracture modelling technique. Therein, the Griffith fracture criterion was cast into a variational setting with certain limitations: no concept of an internal length scale and no maximum allowable stresses. Later, \cite{Bourdin2000,Bourdin2007} proposed a computationally convenient framework of the Francfort and Marigo model, adopting a scalar auxiliary variable that interpolates between fully fractured and intact material states. In this context, the Ambrosio-Tortorelli regularization of the Mumford-Shah functional \cite{mumford1989optimal} was utilized. Based on the minimization of the global energy functional, the phase-field model eliminates the need for tedious tracking of the fracture path and the remeshing techniques frequently observed in discrete fracture models like XFEM \cite{bordas2011performance}. Furthermore, the phase-field model for fracture has proven its capabilities to handle topologically complex fracture patterns (branching, kinking and merging of cracks) \cite{Bourdin2000}. Soon after the inception of the phase-field based fracture model, the concept was cast into a thermodynamically consistent framework in \cite{miehe2010b}, adopting an energy-based fracture driving criterion. This work was later extended towards a generic stress-based fracture driving criterion in \cite{miehe2015449}, ductile fracture with plasticity models in \cite{ambati2015ductile,miehe2015486}, anisotropic fracture \cite{TEICHTMEISTER20171,BLEYER2018213}, hydraulic fracture \cite{WILSON2016264,chukwudozie2019variational}, and desiccation cracking \cite{cajuhi2018phase,HEIDER2020112647}, in a non-exhaustive list of single-scale brittle fracture applications. 
In the context of multi-scale modelling, the overlapping domain decomposition techniques were adopted in \cite{patil2018adaptive,nguyen2019multiscale,triantafyllou2020generalized,gerasimov2018non,noii2020adaptive}, while \cite{fantoni2020phase,he2020numerical,Bharali2021} adopted the hierarchical modelling technique in the FE$^2$ sense \cite{feyel2000fe2}. Despite its popularity in several multi-physics domains, the phase-field model has its own set of computational challenges in its implementation in fracture analysis. These include (\textit{1}) a non-convex free-energy functional with respect to the coupled field variables, (\textit{2}) a variational inequality due to the fracture irreversibility constraint, and (\textit{3}) the need to resolve the smeared fracture zone with an extremely fine mesh. The coupled fields can be solved using either a monolithic solver or a staggered solver. Provided that the non-linear solver converges, the monolithic solution scheme is more efficient and faster than the staggered one. However, the non-convex energy functional generally leads to poor convergence and loss of robustness of the monolithic solver. In order to circumvent this, \cite{Gerasimov2016} proposed a line search technique which included a negative search direction, while \cite{Heister2015} proposed convexification of the energy functional based on a linear extrapolation of the phase-field for the momentum balance equation. Other methods developed in this context include the arc-length solvers \cite{vignollet2014phase,may2015numerical,singh2016fracture}, the modified Newton-Raphson method \cite{wick2017modified}, the error-oriented Newton-Raphson method \cite{Wick2017a}, and trust region methods \cite{kopanivcakova2020recursive}. Nevertheless, the development of a robust monolithic solver still remains an active research area in the phase-field fracture community. 
As an alternative approach, \cite{Miehe2010} suggested the use of a staggered solution scheme, since the energy functional is convex with respect to either of the coupled fields if the other one is held constant. The second computational challenge pertains to the variational inequality formulation that arises from the fracture irreversibility constraint. In this context, \cite{Gerasimov2016,GERASIMOV2019990} opted for a simple penalization technique, \cite{Heister2015} adopted the primal-dual active set method, whereas \cite{Wick2017a,wick2017modified} used an Augmented Lagrangian method based on the Moreau-Yoshida indicator function. More recent approaches include the micromorphic approach that transforms the phase-field into a local variable \cite{miehe2016phasemicromorphic,Bharali2022micromorphic}, and the slack variable approach \cite{Bharali2022slack}. Alternatively, a heuristic approach was proposed by \cite{Miehe2010}, replacing the fracture driving energy with its maximum value over the loading history. However, this method is not variationally consistent with the phase-field fracture energy functional \cite{GERASIMOV2019990,DeLorenzis2021}. Finally, the phase-field model for fracture analysis requires an extremely fine mesh to resolve the smeared region. A simple and straightforward but computationally expensive way would be to use a uniformly refined mesh. If the fracture path is known $\it{a\;priori}$, a certain part of the computational domain may also be pre-refined. The latter case is more applicable when it comes to benchmark models from the literature. However, in scenarios where the fracture path is not known in advance, adaptive mesh refinement techniques are the preferred option. 
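To make the alternate-minimization idea concrete, consider a toy single-DOF AT2-type energy (our own illustration, not from the cited works): $E(u,d) = \tfrac{1}{2}(1-d)^2 k u^2 - Pu + \tfrac{1}{2} G_c d^2$. Each sub-problem is convex with a closed-form minimizer, and the staggered iteration simply alternates between the two updates.

```python
# Staggered (alternate minimization) sketch on a toy 1-DOF AT2-type energy:
#   E(u, d) = 0.5*(1-d)^2*k*u^2 - P*u + 0.5*Gc*d^2
# Holding d fixed, E is quadratic in u; holding u fixed, E is quadratic in d,
# so each sub-minimization is explicit.
k, Gc, P = 1.0, 1.0, 0.2   # illustrative values
u, d = 0.0, 0.0
for it in range(1000):
    u_new = P / ((1.0 - d)**2 * k)               # minimize over u at fixed d
    d_new = k * u_new**2 / (Gc + k * u_new**2)   # minimize over d at fixed u
    converged = abs(u_new - u) < 1e-12 and abs(d_new - d) < 1e-12
    u, d = u_new, d_new
    if converged:
        break
```

For this mild load level the iteration converges in a few dozen sweeps; for realistic discretizations the same scheme may need thousands of sweeps, which is exactly the robustness/cost trade-off of the staggered approach.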
In this context, the elements are marked based on either a critical threshold value defined over the phase-field parameter \cite{Heister2015,goswami2020adaptive}, or a local increase in the tensile energy \cite{klinsmann2015assessment}. Other adaptive refinement schemes include the recovery-based error indicator \cite{jansari2019adaptive}, $\it{a\;posteriori}$ error estimation based on the dual-weighted residual method \cite{wick2016goal}, the finite cell method \cite{nagaraja2019phase}, and the dual mesh concept for the two coupled fields with different mesh refinement indicators \cite{goswami2019adaptive}. To overcome the issues discussed in the current state-of-the-art, we propose a novel monolithic solver, which is based on an adaptive under-relaxation scheme and is integrated with a fracture energy-based arc-length method. Although under-relaxation strategies result in a decreased rate of convergence of the solver, for the phase-field model they guarantee that the divergence issues arising due to the erratic behaviour of the Jacobian \cite{gerasimov2018non} are circumvented. The fracture energy-based arc-length method adopted in this work is displacement controlled, which provides the flexibility to take larger displacement steps while accurately capturing the brutal nature of the crack growth. In this work, we have studied displacement-controlled fracture, where the load $\it{vs.}$ displacement curve in the post-peak regime encounters a discontinuity and the representative point drops onto the lower branch with negative slope, indicating that both load and displacement must decrease to obtain a controlled crack extension. Such observations are often neglected in staggered solvers, but these phenomena are captured accurately using our fast and efficient monolithic solver. Besides capturing the possible snap-back behaviour, the arc-length method also results in an adaptive time-stepping procedure, hence larger energy dissipation is permitted. 
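The adaptive under-relaxation ingredient can be sketched on a scalar model problem (our own illustration; the actual solver applies this to the coupled displacement/phase-field system together with the arc-length constraint): the relaxation factor is halved whenever the trial step increases the residual, and grown back towards a full Newton step after every accepted update.

```python
import math

def relaxed_newton(f, df, x0, tol=1e-10, max_iter=200):
    """Newton iteration with an adaptive under-relaxation factor beta:
    beta is halved until the residual decreases, and recovered towards 1
    after each accepted step (a simplified stand-in for the adaptive
    scheme used in the monolithic solver)."""
    x, beta = x0, 1.0
    for _ in range(max_iter):
        r = f(x)
        if abs(r) < tol:
            return x
        dx = -r / df(x)
        while abs(f(x + beta * dx)) >= abs(r) and beta > 1e-8:
            beta *= 0.5              # under-relax on residual growth
        x += beta * dx
        beta = min(1.0, 2.0 * beta)  # recover towards a full Newton step
    return x

# f(x) = atan(x): the full Newton step diverges from x0 = 2, while the
# under-relaxed iteration converges to the root x = 0.
root = relaxed_newton(math.atan, lambda x: 1.0 / (1.0 + x * x), 2.0)
```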
The adaptive scheme enables a dynamically changing mesh which in turn allows the refinement to remain local at singularities and high gradients. The adaptive $\it{h}$-refinement technique is implemented using polynomial splines over hierarchical T-meshes (PHT-splines). The PHT-splines possess a very efficient local refinement algorithm and they also inherit the properties of adaptivity and locality of T-splines. Moreover, in all the examples, the crack is not initialized but it is allowed to nucleate naturally. The penalization approach is adopted to treat the variational inequality formulation. The remainder of the article is structured as follows: Section \ref{sec:2} introduces the reader to the phase-field model for fracture analysis, its underlying energy functional and the pertinent Euler-Lagrange equations. Subsequently, in Section \ref{sec:3}, the isogeometric analysis framework and the discrete equations for the phase-field fracture model are introduced. Section \ref{sec:4} presents the main contribution of this work, the robust monolithic solver. The numerical benchmark problems are addressed in Section \ref{sec:5}, followed by concluding remarks in Section \ref{sec:6}. \section{Phase-field model for fracture}\label{sec:2} \subsection{The energy functional} Let $\hbox{$\Omega$} \in \mathbb{R}^{\text{dim}}$ (dim $= 1,2,3$) be a domain occupied by a fracturing continuum. A discrete representation of fracture is shown in Figure \ref{fig:sec:2-1:continuumpotato_a}, where the fracture $\mathcal{C}$ may be represented by a cohesive zone fracture interface. Its diffused counterpart, obtained through the phase-field regularisation, is presented in Figure \ref{fig:sec:2-1:continuumpotato_b}. Here, the fracture is represented by an auxiliary variable, the phase-field parameter $\hbox{$\varphi$} \in [0,1]$, within a diffusive (smeared) zone of width $\hbox{$l$}>0$, where $\hbox{$l$}$ denotes a length scale parameter that controls the width of the diffused zone. 
The bounds over $\hbox{$\varphi$}$, zero and one indicate the intact material state and total loss of integrity, respectively. Furthermore, the surface $\hbox{$\Gamma$}$ of both, the discrete and the diffused fracture continuum is decomposed into a Dirichlet boundary, $\surfarg{D}{u}$ and a Neumann boundary, $\surfarg{N}{u}$, such that $\hbox{$\Gamma$} = \surfarg{D}{u} \cup \surfarg{N}{u}$ and $\surfarg{D}{u} \cap \surfarg{N}{u} = \emptyset$. \begin{figure*}[ht!] \begin{subfigure}[b]{0.49\textwidth} \begin{tikzpicture}[scale=0.8] \coordinate (K) at (0,0); \draw [fill=black!10,line width=1pt] (K) plot [smooth cycle,tension=0.7] coordinates {(3,1) (7,1) (8,3) (7,4.475) (5,4.5) (2,4) (1.7,2.5)}; \node[ ] at (3.25,2.15) {$\hbox{$\Omega$}$}; \draw[line width=1.5pt,black] (5,0.75) to (5,2.5); \node[ ] at (5.35,2.0) {$\hbox{$\mathcal{C}$}$}; \draw (K) [line width=2.5pt,black] plot [smooth, tension=0.8] coordinates {(3,4.3) (1.7,3.7) (1.7,2.5)}; \node[ ] at (1.15,4) {$\surfarg{D}{u}$}; \node[ ] at (6.75,5.05) {$\surfarg{N}{u}$}; \end{tikzpicture} \caption{discrete crack} \label{fig:sec:2-1:continuumpotato_a} \end{subfigure} % \begin{subfigure}[b]{0.49\textwidth} \begin{tikzpicture}[scale=0.8] \coordinate (K) at (0,0); \draw [fill=black!10,line width=1pt] (K) plot [smooth cycle,tension=0.7] coordinates {(3,1) (7,1) (8,3) (7,4.475) (5,4.5) (2,4) (1.7,2.5)}; \node[ ] at (3.25,2.15) {$\hbox{$\Omega$}$}; \fill[black, path fading=fade out, draw=none] (5,2.5) circle (0.3); \draw[line width=0.1pt,black] (5,0.75) to (5,2.5); \draw[line width=0.1pt,black] (5,0.75) to (5,2.5); \shade [top color=black,bottom color=black!10,shading angle=90] (5,0.78) rectangle (5.3,2.5); \shade [top color=black!10,bottom color=black,shading angle=90] (4.7,0.78) rectangle (5,2.5); \draw[->,line width=1pt,black] (4.3,1.7) to (4.85,1.7); \draw[<-,line width=1pt,black] (5.15,1.7) to (5.7,1.7); \node[ ] at (5.85,1.7) {$\hbox{$l$}$}; \draw (K) [line width=2.5pt,black] plot [smooth, tension=0.8] coordinates 
{(3,4.3) (1.7,3.7) (1.7,2.5)}; \node[ ] at (1.15,4) {$\surfarg{D}{u}$}; \node[ ] at (6.75,5.05) {$\surfarg{N}{u}$}; \end{tikzpicture} \caption{diffused (smeared) crack} \label{fig:sec:2-1:continuumpotato_b} \end{subfigure} \caption{A solid, $\hbox{$\Omega$} \in \mathbb{R}^2$, embedded with (a) discrete crack $\hbox{$\mathcal{C}$}$ and (b) diffused (smeared) crack, with Dirichlet and Neumann boundaries indicated as $\surfarg{D}{u}$ and $\surfarg{N}{u}$ respectively. Figure reproduced from \cite{Bharali2021}.} \label{fig:sec:2:continuumpotato} \end{figure*} A general form of the energy functional for the phase-field fracture model, shown in Figure \ref{fig:sec:2-1:continuumpotato_b}, is given by, \begin{equation}{\label{eqn:sec2:EFunc}} \displaystyle E(\hbox{$\mathbf{u}$},\hbox{$\varphi$}) = \int_{\hbox{$\Omega$}}^{} g(\hbox{$\varphi$}) \Psi^f(\hbox{$\bepsilon$}(\hbox{$\mathbf{u}$})) \: \text{d}\hbox{$\Omega$} + \int_{\hbox{$\Omega$}}^{} \Psi^r(\hbox{$\bepsilon$}(\hbox{$\mathbf{u}$})) \: \text{d}\hbox{$\Omega$} + \int_{\hbox{$\Omega$}}^{} \dfrac{\hbox{$G_c$}}{c_w \hbox{$l$}} \left( w(\hbox{$\varphi$}) + \hbox{$l$}^2 |\hbox{\boldmath$\nabla$} \hbox{$\varphi$}|^2 \right) \: \text{d}\hbox{$\Omega$}, \end{equation} \noindent in the absence of any external loading (body and traction forces). Here, $g(\hbox{$\varphi$})$ is a monotonically decreasing stress-degradation function attached to the fracture driving strain energy $\Psi^f(\hbox{$\bepsilon$}(\hbox{$\mathbf{u}$}))$, and $\Psi^r(\hbox{$\bepsilon$}(\hbox{$\mathbf{u}$}))$ is the residual strain energy. Moreover, $\hbox{$G_c$}$ is the Griffith fracture energy, which is a material parameter, and $c_w$ is a normalisation constant associated with the choice of the locally dissipated fracture energy function, $w(\hbox{$\varphi$})$. Finally, $\hbox{$\bepsilon$}(\hbox{$\mathbf{u}$})$ is the symmetric part of the deformation gradient, where $\hbox{$\mathbf{u}$}$ is the displacement field. 
The choice of $w(\hbox{$\varphi$}) = \hbox{$\varphi$}^2$ ($c_w=2$) denotes the AT2 phase-field model, while the choice $w(\hbox{$\varphi$}) = \hbox{$\varphi$}$ ($c_w=8/3$) is often referred to as the AT1 model. The phase-field model for fracture allows great flexibility in the choice of the degradation function $g(\hbox{$\varphi$})$ and the locally dissipated energy function $w(\hbox{$\varphi$})$, albeit with some restrictions. The degradation function must satisfy the following criteria: $g(0) = 1$, $g(1) = 0$, and $g'(1) = 0$, to ensure that the fracture driving energy reaches zero for a fully developed crack, $\textit{i.e.,}$ for $\hbox{$\varphi$} = 1$. Nevertheless, several researchers have proposed different combinations of degradation functions and locally dissipated fracture energy functions, some of which are presented in Table \ref{tab:sec:2:degradationfunctions}. \begin{table}[ht!] \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{@{}lll@{}} \toprule \multicolumn{1}{c}{Type} & \multicolumn{1}{c}{$g(\hbox{$\varphi$})$} & \multicolumn{1}{c}{Contribution} \\ \midrule Quadratic & $(1-\hbox{$\varphi$})^2$ & Bourdin et al. \cite{Bourdin2000} \\ Cubic & $s((1-\hbox{$\varphi$})^3-(1-\hbox{$\varphi$})^2)+3(1-\hbox{$\varphi$})^2 - 2(1-\hbox{$\varphi$})^3$ & Borden et al. \cite{borden2016} \\ Rational & $\dfrac{(1-\hbox{$\varphi$})^p}{(1-\hbox{$\varphi$})^p + a_1\hbox{$\varphi$} + a_1 a_2\hbox{$\varphi$}^2 + a_1 a_2 a_3\hbox{$\varphi$}^3}$ & Wu \cite{wu2017} \\ \bottomrule \end{tabular} \caption{Stress-degradation functions, popular in the phase-field fracture literature} \label{tab:sec:2:degradationfunctions} \end{table} \begin{comment} \begin{table}[ht!] 
\centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{@{}lll@{}} \toprule \multicolumn{1}{c}{Abbreviated name} & \multicolumn{1}{c}{$w(\hbox{$\varphi$})$} & \multicolumn{1}{c}{$c_w$} \\ \midrule AT1 & $\hbox{$\varphi$}$ & $8/3$ \\ AT2 & $\hbox{$\varphi$}^2$ & $2$ \\ PFCZM & $2\hbox{$\varphi$} - \hbox{$\varphi$}^2$ & $\pi$ \\ \bottomrule \end{tabular} \caption{Locally dissipated fracture energy functions, popular in the phase-field fracture literature} \label{tab:sec:2:localdisspationfunction} \end{table} \end{comment} For AT1 and AT2 brittle fracture, the quadratic degradation function proposed in \cite{Bourdin2000} is most commonly adopted. However, it is observed that the AT2 model lacks an initial elastic stage. In order to obtain a linear pre-peak response with the AT2 model, researchers opt for the cubic degradation function proposed in \cite{borden2016}, with $0 < s \leq 1$ determining the slope of $g(\hbox{$\varphi$})$ in the undamaged state. For quasi-brittle fracture, \cite{wu2017} developed a rational degradation function with model parameters $a_1$, $a_2$, $a_3$, and $p$. With these parameters, different traction-separation laws can be obtained. The reader is referred to \cite{wu2017} for more on this aspect. In this work, the AT2 model is adopted along with the quadratic and cubic degradation functions. \begin{comment} Based on their choices, different traction-separation laws can be implemented, as shown in Table \ref{tab:sec:2:TSL_quasi_brittle}. \begin{table}[ht!] 
\centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{@{}lclll@{}} \toprule \multicolumn{1}{c}{Traction-separation law} & \multicolumn{1}{c}{$a_1$} & \multicolumn{1}{c}{$a_2$} & \multicolumn{1}{c}{$a_3$} & \multicolumn{1}{c}{$p$} \\ \midrule Linear & $\dfrac{4 E_0 \hbox{$G_c$}}{\pi \hbox{$l$} f_t^2}$ & $-0.5$ & $0.0$ & $2.0$ \\ Exponential & " & $2^{5/3}-3$ & $0.0$ & $2.5$ \\ Cornellisen & " & $1.3868$ & $0.9106$ & $2.0$ \\ \bottomrule \end{tabular} \caption{Traction-separation laws for quasi-brittle PFCZM} \label{tab:sec:2:TSL_quasi_brittle} \end{table} \end{comment} Furthermore, the choice of the fracture driving strain energy, $\Psi^f(\hbox{$\bepsilon$}(\hbox{$\mathbf{u}$}))$, and the residual, $\Psi^r(\hbox{$\bepsilon$}(\hbox{$\mathbf{u}$}))$, is also not unique. Table \ref{tab:sec:2:strainenergysplit} presents a few commonly adopted fracture driving and residual energies. The first model, proposed in \cite{Bourdin2000,Bourdin2007}, assumes the fracture to be driven by the strain energy density $\Psi$, without accounting for tension-compression asymmetry. Such a model predicts similar fracture in tension and compression. However, most researchers have adopted the notion that fracture is driven by the tensile strain energy density. In this context, \cite{Miehe2010} adopted a spectral decomposition of the strain energy density function. This yielded the tensile and compressive strain energies as the fracture driving and residual energies, respectively. In an alternative approach, \cite{wu2017} proposed the energy associated with the maximum principal stress $\sigma_1$ as the fracture driving energy. With reference to Table \ref{tab:sec:2:strainenergysplit}, $E$, $\lambda$ and $\mu$ are material constants corresponding to the Young's modulus, Lam\'e constant and shear modulus, respectively. The trace operator is given by $tr$, while $\langle \cdot \rangle_{\pm}$ represents the positive/negative Macaulay brackets. 
Furthermore, $\hbox{$\bepsilon$}^{\pm}$ indicates the tensile/compressive strain tensors, obtained through a spectral decomposition of the strain tensor. \begin{table}[ht!] \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{@{}llll@{}} \toprule \multicolumn{1}{c}{Type} &\multicolumn{1}{c}{$\Psi^f$} & \multicolumn{1}{c}{$\Psi^r$} & \multicolumn{1}{c}{Contribution} \\ \midrule No Split & $\frac{1}{2} \lambda tr^2(\hbox{$\bepsilon$}) + \mu \, \hbox{$\bepsilon$} \colon \hbox{$\bepsilon$}$ & $0$ & Bourdin et al. \cite{Bourdin2000,Bourdin2007} \\ Spectral & $\frac{1}{2} \lambda \langle tr(\hbox{$\bepsilon$}) \rangle_+^2 + \mu \, \hbox{$\bepsilon$}^+ \colon \hbox{$\bepsilon$}^+$ & $\frac{1}{2} \lambda \langle tr(\hbox{$\bepsilon$}) \rangle_-^2 + \mu \, \hbox{$\bepsilon$}^- \colon \hbox{$\bepsilon$}^-$ & Miehe et al. \cite{Miehe2010} \\ Rankine & $\frac{1}{2 E} \langle \sigma_1 \rangle^2_+ $ & - & Wu \cite{wu2017} \\ \bottomrule \end{tabular}% \caption{Strain energy density decompositions in phase-field based fracture analysis.} \label{tab:sec:2:strainenergysplit} \end{table} The additive decomposition of the strain energy density into a fracture driving energy and a residual energy renders the displacement sub-problem non-linear. In order to preserve a linear displacement sub-problem, \cite{Ambati2015} proposed a `\textit{hybrid}' approach. With this approach, the degradation function $g(\hbox{$\varphi$})$ is applied on the entire strain energy density $\Psi$ instead of $\Psi^f$ in the momentum balance equation. As a consequence, the variational consistency of the problem is lost. Nevertheless, the formulation is consistent w.r.t. thermodynamic principles. This formulation, referred to as the `hybrid' phase-field fracture model, is adopted in this work. 
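The spectral split of Miehe et al. can be sketched numerically (our own illustration): the symmetric strain tensor is diagonalized, the tensile/compressive tensors are assembled from its positive/negative eigenvalues, and the two resulting energies sum back to the unsplit strain energy density.

```python
import numpy as np

def spectral_split_energy(eps, lam, mu):
    """Tensile/compressive strain energies from the spectral split:
    eps^{+/-} are assembled from the positive/negative eigenvalues of the
    (symmetric) strain tensor, and the trace is split with Macaulay brackets."""
    w, V = np.linalg.eigh(eps)
    eps_p = V @ np.diag(np.maximum(w, 0.0)) @ V.T
    eps_m = V @ np.diag(np.minimum(w, 0.0)) @ V.T
    tr = np.trace(eps)
    psi_p = 0.5 * lam * max(tr, 0.0)**2 + mu * np.tensordot(eps_p, eps_p)
    psi_m = 0.5 * lam * min(tr, 0.0)**2 + mu * np.tensordot(eps_m, eps_m)
    return psi_p, psi_m

# Illustrative 2D strain state with mixed-sign principal strains.
lam, mu = 1.0, 1.0
eps = np.array([[ 0.02,  0.01],
                [ 0.01, -0.03]])
psi_p, psi_m = spectral_split_energy(eps, lam, mu)
# Consistency check: the split energies add up to the unsplit density.
psi_total = 0.5 * lam * np.trace(eps)**2 + mu * np.tensordot(eps, eps)
```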
\subsection{Euler-Lagrange equations} \label{sec:Euler_Largrange} The Euler-Lagrange equations for the phase-field model are obtained upon taking the first variation of the energy functional (\ref{eqn:sec2:EFunc}) w.r.t. its solution fields: the vector-valued displacement, $\hbox{$\mathbf{u}$}$, and the scalar-valued phase-field, $\hbox{$\varphi$}$. Incorporating the hybrid formulation \cite{Ambati2015}, and with appropriately defined test and trial spaces, the complete problem statement assumes the form: \smallskip \begin{problem}\label{Problem1} Find ($\hbox{$\mathbf{u}$}$, $\hbox{$\varphi$}$) $\in \mathbb{U} \times \mathbb{P}$ with \begin{subequations} \begin{align} E'(\hbox{$\mathbf{u}$},\hbox{$\varphi$};\hbox{$\delta\mathbf{u}$}) & = \int_{\hbox{$\Omega$}}^{} \bigg( g(\hbox{$\varphi$}) \dfrac{\partial \Psi(\hbox{$\bepsilon$}(\hbox{$\mathbf{u}$}))}{\partial \hbox{$\bepsilon$}} \bigg) \colon \hbox{$\bepsilon$}(\hbox{$\delta\mathbf{u}$}) \: \normalfont\text{d}\hbox{$\Omega$} = 0 \hspace{9.15em}\forall \: \hbox{$\delta\mathbf{u}$} \in \mathbb{U}^0, \label{eqn:sec:2:P1MomentumBalance} \\ E'(\hbox{$\mathbf{u}$},\hbox{$\varphi$};\hat{\hbox{$\varphi$}}) & = \int_{\hbox{$\Omega$}}^{} \bigg( g'(\hbox{$\varphi$}) \Psi^f(\hbox{$\bepsilon$}(\hbox{$\mathbf{u}$})) + \dfrac{\hbox{$G_c$}}{c_w \, \hbox{$l$}} w'(\hbox{$\varphi$}) \bigg) (\hat{\hbox{$\varphi$}} - \hbox{$\varphi$}) \: \normalfont\text{d}\hbox{$\Omega$} \label{eqn:sec:2:P1PfEvolution} \\ \nonumber & + \int_{\hbox{$\Omega$}}^{} \dfrac{2 \hbox{$G_c$} \hbox{$l$}}{c_w} \hbox{\boldmath$\nabla$}\hbox{$\varphi$} \cdot \hbox{\boldmath$\nabla$}(\hat{\hbox{$\varphi$}} - \hbox{$\varphi$}) \: \normalfont\text{d}\hbox{$\Omega$} \geq 0 \hspace{11em} \forall \: \hat{\hbox{$\varphi$}} \in \mathbb{P}. 
\end{align} \end{subequations} \noindent considering the Dirichlet boundary conditions $\hbox{$\mathbf{u}$}^p$ on $\surfarg{D}{u}$ and $\hbox{$\varphi$}^p$ on $\surfarg{D}{\hbox{$\varphi$}}$, and Neumann boundary condition $\trac{u}$ on $\surfarg{N}{u}$. Moreover, the trial and test spaces are given by \begin{subequations} \begin{align} \mathbb{U} & = \{ \hbox{$\mathbf{u}$} \in [H^1(\hbox{$\Omega$})]^{\normalfont\text{dim}}| \hbox{$\mathbf{u}$} = \hbox{$\mathbf{u}$}^p \text{ on } \surfarg{D}{u} \}, {\label{eqn:sec:2:P1disp_trialspace}} \\ \mathbb{U}^0 & = \{ \hbox{$\mathbf{u}$} \in [H^1(\hbox{$\Omega$})]^{\normalfont\text{dim}}| \hbox{$\mathbf{u}$} = \mathbf{0} \text{ on } \surfarg{D}{u} \}, {\label{eqn:sec:2:P1disp_testspace}} \\ \mathbb{P} & = \{ \hbox{$\varphi$} \in [H^1(\hbox{$\Omega$})] | \hbox{$\varphi$} \geq {}^{n}\hbox{$\varphi$}, \: \hbox{$\varphi$} = \hbox{$\varphi$}^p \text{ on } \surfarg{D}{\hbox{$\varphi$}} \}. {\label{eqn:sec2:P1pf_space}} \end{align} \end{subequations} The left superscript $n$ in (\ref{eqn:sec2:P1pf_space}) refers to the previous step in (pseudo) time. {\color{black}\hfill $\blacksquare$} \end{problem} Problem \ref{Problem1} belongs to the variational inequality category (see Equation (\ref{eqn:sec:2:P1PfEvolution}) and the test/trial space (\ref{eqn:sec2:P1pf_space})). The treatment of variational inequalities is not new in the phase-field fracture literature. A review of the different approaches ensuring fracture irreversibility is presented in Section \ref{sec:1}. 
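As a concrete illustration of the penalization route mentioned above (a toy of our own, not the paper's implementation), consider a single-point AT2-type phase-field update with the constraint $\varphi \geq {}^{n}\varphi$ enforced by a quadratic penalty: the stationarity condition is piecewise linear, so the minimizer is explicit, and the constraint violation decays as the penalty parameter grows.

```python
def pf_update_penalized(psi, phi_n, Gc_over_l, gamma):
    """One-point AT2-type phase-field update with irreversibility
    phi >= phi_n enforced by the penalty 0.5*gamma*<phi_n - phi>_+^2.
    Stationarity of E(phi) = (1-phi)^2*psi + 0.5*(Gc/l)*phi^2 + penalty
    is piecewise linear in phi, so the minimizer is available in closed form."""
    phi_u = 2.0 * psi / (2.0 * psi + Gc_over_l)   # unconstrained minimizer
    if phi_u >= phi_n:
        return phi_u                               # penalty inactive
    return (2.0 * psi + gamma * phi_n) / (2.0 * psi + Gc_over_l + gamma)

# Loading step: large driving energy, the constraint is inactive.
phi_free = pf_update_penalized(psi=2.0, phi_n=0.5, Gc_over_l=1.0, gamma=1e3)

# Unloading step: the driving energy drops, the unconstrained minimizer would
# "heal" the crack, and an increasing penalty pins phi at the history value.
phi_n = 0.5
viols = [phi_n - pf_update_penalized(psi=0.1, phi_n=phi_n, Gc_over_l=1.0,
                                     gamma=g) for g in (1e1, 1e3, 1e5)]
```

The residual violation scales as $1/\gamma$, which is the usual accuracy/conditioning trade-off of penalty methods.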
Adopting the history-variable approach \cite{Miehe2010}, in conjunction with appropriately defined test and trial spaces, the complete problem statement takes the form: \smallskip \begin{problem}\label{Problem2} Find ($\hbox{$\mathbf{u}$}$, $\hbox{$\varphi$}$) $\in \mathbb{U} \times \mathbb{P}$ with \begin{subequations} \begin{align} E'(\hbox{$\mathbf{u}$},\hbox{$\varphi$};\hbox{$\delta\mathbf{u}$}) & = \int_{\hbox{$\Omega$}}^{} \bigg( g(\hbox{$\varphi$}) \dfrac{\partial \Psi(\hbox{$\bepsilon$}(\hbox{$\mathbf{u}$}))}{\partial \hbox{$\bepsilon$}} \bigg) \colon \hbox{$\bepsilon$}(\hbox{$\delta\mathbf{u}$}) \: \normalfont\text{d}\hbox{$\Omega$} = 0 \hspace{9.15em}\forall \: \hbox{$\delta\mathbf{u}$} \in \mathbb{U}^0, \label{eqn:sec:2-2:P2MomentumBalance} \\ E'(\hbox{$\mathbf{u}$},\hbox{$\varphi$};\hbox{$\delta\varphi$}) & = \int_{\hbox{$\Omega$}}^{} \bigg( g'(\hbox{$\varphi$}) \mathcal{H} + \dfrac{\hbox{$G_c$}}{c_w \, \hbox{$l$}} w'(\hbox{$\varphi$}) \bigg) \hbox{$\delta\varphi$} \: \normalfont\text{d}\hbox{$\Omega$} \label{eqn:sec:2-2:P2PfEvolution} \\ \nonumber & + \int_{\hbox{$\Omega$}}^{} \dfrac{2 \hbox{$G_c$} \hbox{$l$}}{c_w} \hbox{\boldmath$\nabla$}\hbox{$\varphi$} \cdot \hbox{\boldmath$\nabla$}\hbox{$\delta\varphi$} \: \normalfont\text{d}\hbox{$\Omega$} = 0 \hspace{9.15em} \forall \: \hbox{$\delta\varphi$} \in \mathbb{P}^0, \end{align} \end{subequations} \noindent considering the Dirichlet boundary conditions $\hbox{$\mathbf{u}$}^p$ on $\surfarg{D}{u}$ and $\hbox{$\varphi$}^p$ on $\surfarg{D}{\hbox{$\varphi$}}$. 
Moreover, the trial and test spaces are given by \begin{subequations} \begin{align} \mathbb{U} & = \{ \hbox{$\mathbf{u}$} \in [H^1(\hbox{$\Omega$})]^{\normalfont\text{dim}}| \hbox{$\mathbf{u}$} = \hbox{$\mathbf{u}$}^p \text{ on } \surfarg{D}{u} \}, {\label{eqn:sec:2-2:P2disp_trialspace}} \\ \mathbb{U}^0 & = \{ \hbox{$\mathbf{u}$} \in [H^1(\hbox{$\Omega$})]^{\normalfont\text{dim}}| \hbox{$\mathbf{u}$} = \mathbf{0} \text{ on } \surfarg{D}{u} \}, {\label{eqn:sec:2-2:P2disp_testspace}} \\ \mathbb{P} & = \{ \hbox{$\varphi$} \in [H^1(\hbox{$\Omega$})] | \hbox{$\varphi$} = \hbox{$\varphi$}^p \text{ on } \surfarg{D}{\hbox{$\varphi$}} \}, {\label{eqn:sec2:pf_P2trialspace}} \\ \mathbb{P}^0 & = \{ \hbox{$\varphi$} \in [H^1(\hbox{$\Omega$})] | \hbox{$\varphi$} = 0 \text{ on } \surfarg{D}{\hbox{$\varphi$}} \}. {\label{eqn:sec2:P2pf_testspace}} \end{align} \end{subequations} \noindent The history-variable $\mathcal{H}$ is defined as the maximum fracture driving energy $\Psi^f$ over the entire loading history. Mathematically, \begin{equation}\label{eqn:sec2:P2hist} \mathcal{H} = \operatorname{max} ({}^{n}\mathcal{H},\Psi^f). \end{equation} The left superscript $n$ in (\ref{eqn:sec2:P2hist}) refers to the previous step in (pseudo) time. {\color{black}\hfill $\blacksquare$} \end{problem} \smallskip \begin{remark} The history-variable approach in Problem \ref{Problem2} results in a variational equality problem, with relaxed test and trial spaces for the phase-field (cf. Problems \ref{Problem1} and \ref{Problem2}). \end{remark} \section{Isogeometric analysis and Discrete equations}\label{sec:3} \subsection{Isogeometric analysis and NURBS} Isogeometric analysis (IGA) \cite{hughes2005isogeometric} allows an exact representation of complex geometries, such as spheres, ellipsoids, paraboloids and hyperboloids. The representation is carried out using piecewise polynomial (rational) functions, with Non-Uniform Rational B-splines (NURBS) being the most commonly adopted. 
The smoothness of the NURBS basis functions is advantageous in problems with multi-faceted surfaces, which can trigger traction oscillations when simulated using conventional geometry discretizations. Furthermore, IGA also offers ease in obtaining higher-order continuous basis functions with NURBS. As a consequence, it is appealing for higher-order Partial Differential Equations (PDEs). However, NURBS-based modeling is often recognized to have significant flaws in constructing watertight geometries using tensor-product meshes. Also, the scale and scope of refining procedures cause the tensor-product structure of NURBS to be inefficient, leading to erroneous error estimation and improper implementation of adaptivity algorithms. In the context of the phase-field fracture model, a NURBS-based simulation was carried out in \cite{BORDEN2014100} for the fourth-order phase-field fracture model, albeit without mesh refinement. The restrictions of the NURBS-based models were mitigated using T-splines, while keeping the recognizable structure of NURBS algorithms. T-splines alleviate the deficiencies of NURBS by generating a single patch of watertight geometry that can be refined and coarsened locally. The implementation of T-splines within the framework of IGA has gained a lot of attention. The B\'ezier extraction \cite{scott2011isogeometric} of the basis makes it suitable for efficient integration into existing finite element programs. However, the linear independence of T-splines is not assured in generic T-meshes. The concept of analysis-suitable T-splines was proposed in \cite{buffa2010linear}, which retains the essential mathematical properties of NURBS, such as linear independence and partition of unity, under certain restrictions on the T-mesh, while giving a highly localized and efficient refining algorithm. As an alternative to T-splines, PHT-splines were proposed, which are a generalization of B-splines over hierarchical T-meshes. 
The local refinement algorithm for PHT-splines is extremely efficient and easy to implement. In this section, the basic concepts of PHT-spline-based IGA are discussed; these are then used as a discretization technique to solve the phase-field fracture problem. In one dimension, the PHT-spline representation takes the form \begin{equation}\label{eq:T_mesh} \mathbb{T}(\xi) = \sum_{i=1}^{n_{cp}}\mathcal{N}_{i,p}(\xi)\mathbf{P}_i, \end{equation} where $\mathcal{N}_{i,p}(\xi)$ indicates the cubic B-spline basis functions with $C^1$ continuity defined over the knot vector $\Xi$, and $n_{cp}$ is the total number of control points defined over the control mesh used to determine the scaffolding of the geometry. Furthermore, $p$ denotes the order of the polynomial, and $\mathbf{P}_i \in \mathcal{R}^d$ is the set of control points in $d$ dimensions for the B-spline curve with the knot vector $\Xi$. The initial set of knot vectors is denoted by the set of vertices $\Xi^{d}$ corresponding to each spatial direction in the parameter space, $\hat {\Omega} = \left[0,1\right]^d$, and is given by: \begin{equation}\label{eq:knot_vector} \Xi^{d} = \{{\xi_0^d, \xi_0^d},{\xi_1^d, \xi_1^d},{\xi_2^d, \xi_2^d},\ldots,{\xi_{n_i-1}^d, \xi_{n_i-1}^d},{\xi_{n_i}^d, \xi_{n_i}^d}\}. \end{equation} \noindent Here, ${\xi_i^d}\leq{\xi_{i+1}^d}$, and ${\xi_0^d}\leq{\xi_{1}^d} =0$, ${\xi_{n_i-1}^d}\leq{\xi_{n_i}^d}=1$. Moreover, $n_i =n_{cp} +p+1$ represents the number of knots in each parametric direction. The knot vector is uniform when the distance between consecutive knots is constant. For each interior vertex $\xi_i$, the associated basis functions have the local support $\left[\xi_{i-1},\xi_{i+1}\right]$. For cubic polynomials, one of the distinguishing aspects of PHT-splines is that they maintain $C^1$ continuity: the start and end knots are repeated $p+1$ times, while each interior knot has multiplicity two. 
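The B-spline basis functions $\mathcal{N}_{i,p}$ entering (\ref{eq:T_mesh}) can be evaluated with the standard Cox--de Boor recursion. The following Python sketch is purely illustrative (the function names and pointwise evaluation strategy are our own, not part of the implementation described here); it evaluates all cubic basis functions of the example knot vector used later in Figure \ref{fig:Bez_operator} and illustrates the partition-of-unity property at an interior point.

```python
import numpy as np

def bspline_basis(knots, p, xi):
    """All degree-p B-spline basis functions at xi via the Cox-de Boor
    recursion. Valid for xi strictly inside the knot interval."""
    U = np.asarray(knots, dtype=float)
    m = len(U) - 1                       # number of knot spans
    # degree 0: indicator function of each (half-open) knot span
    N = np.array([1.0 if U[i] <= xi < U[i + 1] else 0.0 for i in range(m)])
    for k in range(1, p + 1):            # elevate the degree up to p
        Nk = np.zeros(m - k)
        for i in range(m - k):
            left = 0.0 if U[i + k] == U[i] else \
                (xi - U[i]) / (U[i + k] - U[i]) * N[i]
            right = 0.0 if U[i + k + 1] == U[i + 1] else \
                (U[i + k + 1] - xi) / (U[i + k + 1] - U[i + 1]) * N[i + 1]
            Nk[i] = left + right
        N = Nk
    return N                             # length = n_cp = len(U) - p - 1

# cubic example knot vector from the Bezier-extraction figure:
# n_cp + p + 1 = 10 knots for n_cp = 6 basis functions
Xi = [0, 0, 0, 0, 1/3, 2/3, 1, 1, 1, 1]
N = bspline_basis(Xi, 3, 0.5)
```

At any interior point the six basis functions are non-negative and sum to one, in line with the partition-of-unity property mentioned above.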
By repeatedly inserting vertices, all the knot spans can be brought to the same refinement level, hence converting a PHT-spline to a B-spline. B-splines are non-local, in the sense that a B-spline basis function typically spans more than one element. However, in a finite element framework, a local representation of the B-splines within each element is desired. This local representation of the B-spline is extracted using the B\'ezier decomposition technique. The B\'ezier extraction operator is generated using information from the knot vectors and does not rely on the control points or basis functions. Bernstein polynomials have an edge over NURBS basis functions in terms of implementation because the Bernstein basis functions are the same for all elements, as observed from Figure \ref{fig:Bez_operator}. Following this idea, the B\'ezier extraction allows for the pre-computation of the Bernstein basis on the reference element. During the simulation, the Bernstein basis functions can be mapped to each element with minimal effort. \begin{figure}[ht!] \centering \includegraphics[width = 0.55\textwidth]{Figures/Bezier_Extraction.pdf} \caption{B\'ezier extraction operator implemented for a cubic B-spline curve with $\Xi$ = [0,0,0,0,1/3,2/3,1,1,1,1]. Here, a rational B\'ezier curve $\mathbf{B}(\bar \xi) = \left\{\beta_{i,p}\right\}_{i=1}^{p+1}$ with its associated control points $\mathbf{Q} = \left\{Q_i\right\}_{i=1}^{p+1}$ is defined in a reference space $\bar \Omega = [-1,1]$, where $\beta_{i,p}$ are rational Bernstein polynomial basis functions. The B-spline basis functions $\mathbf{N}(\bar \xi)$ are obtained from the Bernstein basis functions $\mathbf{B}(\bar \xi)$ using the linear B\'ezier extraction operator $\mathbf{C}$. To obtain the B\'{e}zier control points, the mapping is reversed \cite{goswami2021phase}.} \label{fig:Bez_operator} \end{figure} The initial discretization, designated as level 0, is a tensor-product mesh. 
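The element-local relation $\mathbf{N}(\xi) = \mathbf{C}\,\mathbf{B}(\bar{\xi})$ behind Figure \ref{fig:Bez_operator} can be verified numerically. The sketch below is an illustration only: instead of the knot-insertion algorithm of \cite{scott2011isogeometric}, it recovers the extraction operator of the middle element of the example knot vector by sampling both bases at $p+1$ points and solving a small linear system. Since the restriction of each cubic B-spline to an element is itself a cubic polynomial, interpolation at $p+1$ distinct points reproduces it exactly.

```python
import numpy as np
from math import comb

def bspline_basis(knots, p, xi):
    """Cox-de Boor evaluation of all degree-p B-spline basis functions."""
    U = np.asarray(knots, dtype=float)
    m = len(U) - 1
    N = np.array([1.0 if U[i] <= xi < U[i + 1] else 0.0 for i in range(m)])
    for k in range(1, p + 1):
        N = np.array([
            (0.0 if U[i + k] == U[i] else
             (xi - U[i]) / (U[i + k] - U[i]) * N[i]) +
            (0.0 if U[i + k + 1] == U[i + 1] else
             (U[i + k + 1] - xi) / (U[i + k + 1] - U[i + 1]) * N[i + 1])
            for i in range(m - k)])
    return N

def bernstein(p, xbar):
    """Degree-p Bernstein basis on the reference element [-1, 1]."""
    t = 0.5 * (xbar + 1.0)
    return np.array([comb(p, i) * (1 - t) ** (p - i) * t ** i
                     for i in range(p + 1)])

p, Xi = 3, [0, 0, 0, 0, 1/3, 2/3, 1, 1, 1, 1]
a, b = 1/3, 2/3                              # middle element of the mesh
xbar = np.array([-0.9, -0.3, 0.4, 0.8])      # p+1 sample points in [-1, 1]
xi = a + 0.5 * (xbar + 1.0) * (b - a)        # mapped into the element [a, b]
A = np.array([bernstein(p, s) for s in xbar])        # (p+1, p+1)
Y = np.array([bspline_basis(Xi, p, s) for s in xi])  # (p+1, n_cp)
C = np.linalg.solve(A, Y).T                  # extraction operator (n_cp, p+1)
```

As a consistency check, since both bases form a partition of unity on the element, every column of $\mathbf{C}$ sums to one, and $\mathbf{C}\,\mathbf{B}(\bar{\xi})$ matches the B-spline basis at any point of the element.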
By splitting the elements into $2^{\text{dim}}$ sub-elements using the cross-insertion approach, the coarse meshes at level $k$ are refined to obtain finer meshes at level $(k+1)$, following the principle of adaptivity. In the hierarchical approach, the basis functions for an element on the coarse mesh are replaced by a set of basis functions generated over the refined element. For a detailed implementation of the B\'ezier extraction operator and the method to obtain the control points on the refined mesh, the reader is referred to \cite{hennig2016bezier}. \subsection{Discrete phase-field fracture equations} The IGA proposed for the phase-field fracture model in this work is similar to classical Finite Element Analysis (FEA), the only difference being the basis functions used. For the phase-field fracture model, the Euler-Lagrange equations in Problem \ref{Problem2} are used as the starting point for discretization. Considering the displacement and the phase-field at the control points ($\tilde{\hbox{$\mathbf{u}$}}_i,\tilde{\hbox{$\varphi$}}_i$) as the fundamental unknowns, the corresponding continuous fields ($\hbox{$\mathbf{u}$}$,$\hbox{$\varphi$}$) are approximated as, \begin{equation}\label{eqn:discrete_disp_pf} \hbox{$\mathbf{u}$} = \sum\limits_{i = 1}^m {N_i^{\boldsymbol{u}}\tilde{\boldsymbol{u}}_i}\;\;\;\text{ , } \hbox{$\varphi$} = \sum\limits_{i = 1}^m N_i^{\varphi}\tilde{\varphi}_i. \end{equation} \noindent In the above equation, $N_i^{\boldsymbol{u}}$ and $N_i^{\hbox{$\varphi$}}$ are the basis functions for the displacement and the phase-field, associated with the $i^{\text{th}}$ control point. The total number of control points is given by $m$. 
The spatial derivatives of the basis functions $N_i^{\boldsymbol{u}}$ and $N_i^{\hbox{$\varphi$}}$ in a two-dimensional case are computed as, \begin{equation}\label{eqn:Bmatrices} \mathbf{B}_i^{\boldsymbol{u}} = \left[ {\begin{array}{*{20}{c}} {\begin{array}{*{20}{c}} {{N_{i,x}}} \\ 0 \\ {{N_{i,y}}}\\ \end{array}}&{\begin{array}{*{20}{c}} 0 \\ {{N_{i,y}}} \\ {{N_{i,x}}} \\ \end{array}} \end{array}} \right] \text{ , } \mathbf{B}_i^{\hbox{$\varphi$}} = \left[ {\begin{array}{*{20}{c}} {{N_{i,x}}} \\ {{N_{i,y}}} \end{array}} \right]. \end{equation} \noindent Here, the subscripts $,x$ and $,y$ indicate spatial derivatives in the $x$ and $y$ directions respectively. Using (\ref{eqn:Bmatrices}), the strain $\hbox{$\bepsilon$}$, and the gradient of the phase-field $\hbox{\boldmath$\nabla$}\hbox{$\varphi$}$ are defined as, \begin{equation}\label{eqn:strain_gradpf} \hbox{$\bepsilon$} = \sum\limits_{i = 1}^m {\mathbf{B}_i^{\boldsymbol{u}} \tilde{\boldsymbol{u}}_i}\;\;\;\text{ , } \hbox{\boldmath$\nabla$}\hbox{$\varphi$} = \sum\limits_{i = 1}^m {\mathbf{B}_i^{\hbox{$\varphi$}} \tilde{\hbox{$\varphi$}}_i}. \end{equation} \noindent The discrete phase-field fracture problem is obtained upon inserting (\ref{eqn:discrete_disp_pf}-\ref{eqn:strain_gradpf}) in the Euler-Lagrange equations from Problem \ref{Problem2}. Thereafter, (\ref{eqn:sec:2-2:P2MomentumBalance}) and (\ref{eqn:sec:2-2:P2PfEvolution}) are interpreted as internal force vectors, with the stiffness matrix derived from their derivatives. This notation is consistent with \cite{de2012nonlinear}. 
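As a minimal illustration of (\ref{eqn:Bmatrices}) and (\ref{eqn:strain_gradpf}), the sketch below (illustrative names; not the actual implementation) assembles $\mathbf{B}_i^{\boldsymbol{u}}$ and $\mathbf{B}_i^{\hbox{$\varphi$}}$ from basis-function derivatives and accumulates the Voigt strain and the phase-field gradient over the control points supporting a material point.

```python
import numpy as np

def b_matrices(dNdx, dNdy):
    """Per-control-point strain-displacement matrix (Voigt order: xx, yy, xy)
    and phase-field gradient matrix, following the B-matrix definitions."""
    Bu = np.array([[dNdx, 0.0],
                   [0.0, dNdy],
                   [dNdy, dNdx]])
    Bphi = np.array([dNdx, dNdy])
    return Bu, Bphi

def strain_and_gradphi(dNdx, dNdy, u_cp, phi_cp):
    """eps = sum_i Bu_i u_i  and  grad(phi) = sum_i Bphi_i phi_i."""
    eps, gphi = np.zeros(3), np.zeros(2)
    for dx, dy, ui, pi in zip(dNdx, dNdy, u_cp, phi_cp):
        Bu, Bphi = b_matrices(dx, dy)
        eps += Bu @ ui       # accumulate the Voigt strain
        gphi += Bphi * pi    # accumulate the phase-field gradient
    return eps, gphi
```

For a single control point with $N_{,x}=1$, $N_{,y}=0$ and control values $\tilde{\mathbf{u}}=(2,3)$, $\tilde{\varphi}=5$, this yields $\boldsymbol{\epsilon}=(2,0,3)$ and $\nabla\varphi=(5,0)$, as expected from the matrix definitions.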
This allows the presentation of the phase-field fracture problem within the incremental iterative framework as: \smallskip \begin{dproblem}\label{Problem3} Compute the solution increment ($\Delta\tilde{\hbox{$\mathbf{u}$}}$, $\Delta\tilde{\hbox{$\varphi$}}$)$_{i+1}$ in the current iteration $i+1$ using \begin{subequations} \begin{equation}\label{dproblem1:discSystem} \underbrace{ \begin{bmatrix} \mathbf{K}^{\boldsymbol{uu}} & \mathbf{K}^{\boldsymbol{u}\varphi} \\ \mathbf{K}^{\varphi\boldsymbol{u}} & \mathbf{K}^{\varphi\varphi} \end{bmatrix}_{i}}_{\text{Stiffness matrix}} \begin{Bmatrix} \Delta \tilde{\hbox{$\mathbf{u}$}} \\ \Delta \tilde{\hbox{$\varphi$}} \end{Bmatrix}_{i+1} = \underbrace{\begin{Bmatrix} \mathbf{f}^{ext,\boldsymbol{u}} \\ \mathbf{f}^{ext,\varphi} \end{Bmatrix}_{i} - \begin{Bmatrix} \mathbf{f}^{int,\boldsymbol{u}} \\ \mathbf{f}^{int,\varphi} \end{Bmatrix}_{i}}_{\text{Residual}}, \end{equation} \noindent and update the solution fields, \begin{equation}\label{dproblem1:solUpdate} \begin{Bmatrix} \tilde{\hbox{$\mathbf{u}$}} \\ \tilde{\hbox{$\varphi$}} \end{Bmatrix}_{i+1} = \begin{Bmatrix} \tilde{\hbox{$\mathbf{u}$}} \\ \tilde{\hbox{$\varphi$}} \end{Bmatrix}_{i} + \begin{Bmatrix} \Delta \tilde{\hbox{$\mathbf{u}$}} \\ \Delta \tilde{\hbox{$\varphi$}} \end{Bmatrix}_{i+1}, \end{equation} \noindent until convergence is achieved. 
The local element stiffness matrices are given by: \begin{equation}\label{eq:stiff_comp} \displaystyle \begin{split} \mathbf{K}^{\boldsymbol{uu}} & = \int_{\Omega} \left[\mathbf{B}^{\boldsymbol{u}}\right]^T \bigg( g(\hbox{$\varphi$}) \dfrac{\partial^2 \Psi}{\partial\hbox{$\bepsilon$}^2} \bigg) \left[\mathbf{B}^{\boldsymbol{u}}\right]\,d\Omega, \\ \mathbf{K}^{\boldsymbol{u}\varphi} & = \int_{\Omega} \left[\mathbf{B}^{\boldsymbol{u}}\right]^T \bigg( g'(\hbox{$\varphi$}) \dfrac{\partial\Psi}{\partial \hbox{$\bepsilon$}} \bigg) \left[N^{\varphi} \right] \,d\Omega, \\ \mathbf{K}^{\varphi\boldsymbol{u}} & = \int_{\Omega} \left[N^{\varphi}\right]^T \bigg( g'(\hbox{$\varphi$}) \dfrac{\partial \mathcal{H}}{\partial \hbox{$\bepsilon$}} \bigg) \left[\mathbf{B}^{\boldsymbol u}\right]\;d\Omega, \\ \mathbf{K}^{\varphi\varphi} & = \int_{\Omega}\left\{\left[\mathbf{B}^{\varphi}\right]^T \left( \dfrac{2 \hbox{$G_c$} \hbox{$l$}}{c_w} \right) \left[\mathbf{B}^{\varphi}\right] + \left[N^{\varphi}\right]^T \left(g''(\hbox{$\varphi$}) \mathcal{H} +\dfrac{\hbox{$G_c$}}{c_w \hbox{$l$}} w''(\varphi) \right) \left[N^{\varphi}\right] \right\} \;d\Omega, \\ \end{split} \end{equation} \noindent and the local internal force vectors are computed as \begin{equation}\label{eq:fint_comp} \begin{split} \mathbf{f}^{int,\boldsymbol{u}} & = \int_{\Omega} \left[\mathbf{B}^{\boldsymbol{u}}\right]^T \bigg( g(\hbox{$\varphi$}) \dfrac{\partial \Psi}{\partial\hbox{$\bepsilon$}} \bigg) \,d\Omega,\\ \mathbf{f}^{int,\varphi} & = \int_{\Omega}\left\{\left[\mathbf{B}^{\varphi}\right]^T \left( \dfrac{2 \hbox{$G_c$} \hbox{$l$}}{c_w} \right) \left[\mathbf{B}^{\varphi}\right] \tilde{\hbox{$\varphi$}} + \left[N^{\varphi}\right]^T \left(g'(\hbox{$\varphi$}) \mathcal{H} +\dfrac{\hbox{$G_c$}}{c_w \hbox{$l$}} w'(\varphi) \right) \right\} \;d\Omega. \end{split} \end{equation} The external force vectors $\mathbf{f}^{ext,\boldsymbol{u}}$ and $\mathbf{f}^{ext,\varphi}$ are considered equal to zero. 
{\color{black}\hfill $\blacksquare$} \end{subequations} \end{dproblem} \section{A monolithic solution technique}\label{sec:4} \subsection{Incremental fracture energy-based arc-length method} Since the inception of the dissipation-based arc-length solver in \cite{Verhoosel2009dissip}, variants thereof have been utilized for phase-field based fracture modeling \cite{vignollet2014phase,may2015numerical,singh2016fracture}. Motivated by these studies, in particular the fracture-path-controlled path-following method proposed in \cite{singh2016fracture}, a fracture energy-based arc-length method is proposed in this work. From the phase-field fracture energy functional (\ref{eqn:sec2:EFunc}), the energy associated with fracture is identified as: \begin{equation}{\label{eqn:sec2:EFrac}} \displaystyle G(\hbox{$\varphi$}) = \int_{\hbox{$\Omega$}}^{} \dfrac{\hbox{$G_c$}}{c_w \hbox{$l$}} \left( w(\hbox{$\varphi$}) + \hbox{$l$}^2 |\hbox{\boldmath$\nabla$} \hbox{$\varphi$}|^2 \right) \: \text{d}\hbox{$\Omega$}, \end{equation} \noindent and its incremental form is given by \begin{equation}{\label{eqn:sec2:EFracIncr}} \displaystyle \Delta G(\hbox{$\varphi$},\Delta\hbox{$\varphi$}) = \int_{\hbox{$\Omega$}}^{} \dfrac{\hbox{$G_c$}}{c_w \hbox{$l$}} \left( w'(\hbox{$\varphi$})\Delta\hbox{$\varphi$} + 2 \hbox{$l$}^2 \hbox{\boldmath$\nabla$} \hbox{$\varphi$} \cdot \hbox{\boldmath$\nabla$} \Delta \hbox{$\varphi$} \right) \: \text{d}\hbox{$\Omega$}. \end{equation} \noindent Thereafter, an arc-length constraint equation $\Lambda(\hbox{$\varphi$},\Delta\hbox{$\varphi$})$ is devised, limiting the incremental phase-field fracture energy in (\ref{eqn:sec2:EFracIncr}) to a certain value $\Delta \tau$. Mathematically, \begin{equation}\label{eqn:arclength_cons} \Lambda^{(\hbox{$\varphi$},\Delta\hbox{$\varphi$})} := \displaystyle \Delta G(\hbox{$\varphi$},\Delta\hbox{$\varphi$}) - \Delta \tau = 0. 
\end{equation} Within a displacement-controlled loading scenario, an additive decomposition of the displacement is carried out as, \begin{equation} \tilde{\hbox{$\mathbf{u}$}} = \mathbf{C} \tilde{\hbox{$\mathbf{u}$}}_f + \tilde{\hbox{$\mathbf{u}$}}_p + \tilde{\lambda} \hat{\hbox{$\mathbf{u}$}}. \end{equation} \noindent Here, $\mathbf{C}$ is a constraint matrix \cite{Shephard1984linear}, and $\tilde{\hbox{$\mathbf{u}$}}_f$, $\tilde{\hbox{$\mathbf{u}$}}_p$ and $\hat{\hbox{$\mathbf{u}$}}$ are the free, prescribed and unit displacement vectors, respectively. The load level $\tilde{\lambda}$ scales only the unit displacement vector $\hat{\hbox{$\mathbf{u}$}}$. With this setup, the discrete problem assumes the form: \begin{dproblem}\label{Problem4} Compute the solution increment ($\Delta\tilde{\hbox{$\mathbf{u}$}}$, $\Delta\tilde{\hbox{$\varphi$}}$, $\Delta\tilde{\lambda}$)$_{i+1}$ in the current iteration $i+1$ using \begin{subequations} \begin{equation} \underbrace{ \begin{bmatrix} \mathbf{K}^{\boldsymbol{uu}} & \mathbf{K}^{\boldsymbol{u}\varphi} & \mathbf{K}^{\boldsymbol{u}\lambda} \\ \mathbf{K}^{\varphi\boldsymbol{u}} & \mathbf{K}^{\varphi\varphi} & \mathbf{K}^{\varphi\lambda} \\ \mathbf{0} & \mathbf{K}^{\lambda\varphi} & \mathbf{0} \end{bmatrix}_{i}}_{\text{Stiffness matrix}} \begin{Bmatrix} \Delta \tilde{\hbox{$\mathbf{u}$}} \\ \Delta \tilde{\hbox{$\varphi$}} \\ \Delta \tilde{\lambda} \end{Bmatrix}_{i+1} = - \underbrace{\begin{Bmatrix} \mathbf{f}^{int,\boldsymbol{u}} \\ \mathbf{f}^{int,\varphi} \\ \Lambda^{(\varphi,\Delta\varphi)} \end{Bmatrix}_{i}}_{\text{Residual}} \end{equation} \noindent and update the solution fields, \begin{equation}\label{eq:arcL_solUpdate} \begin{Bmatrix} \tilde{\hbox{$\mathbf{u}$}} \\ \tilde{\hbox{$\varphi$}} \\ \tilde{\lambda} \end{Bmatrix}_{i+1} = \begin{Bmatrix} \tilde{\hbox{$\mathbf{u}$}} \\ \tilde{\hbox{$\varphi$}} \\ \tilde{\lambda} \end{Bmatrix}_{i} + \begin{Bmatrix} \Delta \tilde{\hbox{$\mathbf{u}$}} \\ \Delta \tilde{\hbox{$\varphi$}} \\ \Delta 
\tilde{\lambda} \end{Bmatrix}_{i+1}, \end{equation} \noindent until convergence is achieved. The local element stiffness matrices $\mathbf{K}^{\boldsymbol{uu}}$, $\mathbf{K}^{\boldsymbol{u}\varphi}$, $\mathbf{K}^{\varphi\boldsymbol{u}}$ and $\mathbf{K}^{\varphi\varphi}$, and the local internal force vectors $\mathbf{f}^{int,\boldsymbol{u}}$ and $\mathbf{f}^{int,\varphi}$ remain the same as those presented in Problem \ref{Problem3}. The additional matrices are given by \begin{equation}\label{eq:addl_stiff_comp} \displaystyle \begin{split} \mathbf{K}^{\boldsymbol{u}\lambda} & = \mathbf{K}^{\boldsymbol{uu}} \: \hat{\hbox{$\mathbf{u}$}}, \\ \mathbf{K}^{\varphi\lambda} & = \mathbf{K}^{\varphi\boldsymbol{u}} \: \hat{\hbox{$\mathbf{u}$}}, \\ \mathbf{K}^{\lambda\varphi} & = \dfrac{\hbox{$G_c$}}{c_w \hbox{$l$}} \int_{\Omega}\left\{ \left[N^{\varphi}\right]^T \big( w''(\hbox{$\varphi$})\Delta\hbox{$\varphi$} + w'(\hbox{$\varphi$}) \big) + 2 \hbox{$l$}^2 \left[\mathbf{B}^{\varphi}\right]^T \cdot \big( \hbox{\boldmath$\nabla$}\Delta\hbox{$\varphi$} + \hbox{\boldmath$\nabla$}\hbox{$\varphi$} \big) \right\} \;d\Omega, \end{split} \end{equation} \noindent and the arc-length constraint residual is computed as \begin{equation}\label{eq:addl_fint_comp} \begin{split} \Lambda^{(\varphi,\Delta\varphi)} & = \int_{\hbox{$\Omega$}}^{} \dfrac{\hbox{$G_c$}}{c_w \hbox{$l$}} \left( w'(\hbox{$\varphi$})\Delta\hbox{$\varphi$} + 2 \hbox{$l$}^2 \hbox{\boldmath$\nabla$} \hbox{$\varphi$} \cdot \hbox{\boldmath$\nabla$} \Delta \hbox{$\varphi$} \right) \: \text{d}\hbox{$\Omega$} - \Delta\tau. \end{split} \end{equation} \end{subequations} {\color{black}\hfill $\blacksquare$} \end{dproblem} In this work, the phase-field fracture problem is solved using both the conventional displacement-controlled solution scheme (Problem \ref{Problem3}) and the arc-length method (Problem \ref{Problem4}). 
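A compact way to realize Problem \ref{Problem4} is to assemble the bordered system directly and hand it to a standard linear solver. The sketch below is a toy dense version with made-up block sizes; a production code would instead exploit the block structure and sparsity of the stiffness matrix.

```python
import numpy as np

def arclength_update(Kuu, Kup, Kpu, Kpp, Kul, Kpl, Klp, fu, fp, Lam):
    """One Newton update of the bordered arc-length system: assemble the
    (n+1)x(n+1) augmented matrix of Problem 4 and solve for the
    increments (du, dphi, dlambda)."""
    nu, nphi = Kuu.shape[0], Kpp.shape[0]
    n = nu + nphi + 1
    A = np.zeros((n, n))
    A[:nu, :nu] = Kuu
    A[:nu, nu:nu + nphi] = Kup
    A[:nu, -1] = Kul
    A[nu:nu + nphi, :nu] = Kpu
    A[nu:nu + nphi, nu:nu + nphi] = Kpp
    A[nu:nu + nphi, -1] = Kpl
    A[-1, nu:nu + nphi] = Klp          # remaining last-row blocks are zero
    r = np.concatenate([fu, fp, [Lam]])
    dx = np.linalg.solve(A, -r)        # Newton step: A dx = -residual
    return dx[:nu], dx[nu:nu + nphi], dx[-1]
```

The returned increments are then used in the solution update of (\ref{eq:arcL_solUpdate}).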
Algorithm \ref{alg:dispToArc} presented in Appendix \ref{appA:algorithms} combines both within a single monolithic solution strategy. The time-stepping commences with a displacement-controlled solution scheme and the Newton-Raphson method. Upon convergence, the incremental dissipation $\Delta G$ is computed using (\ref{eqn:sec2:EFracIncr}). If $\Delta G$ is greater than a certain user-defined switch energy, the method is switched to the arc-length method. Additionally, $\Delta \tau$ is set to the switch energy, and $\Delta \lambda$ is set to zero. Upon achieving convergence in a certain step, $\Delta \tau$ is incremented, subject to a maximum value of $\Delta\tau_{max}$. \subsection{Adaptive under-relaxation scheme} An under-relaxation scheme introduces a scalar parameter $\beta \in (0,1]$ such that the solution field is updated using, \begin{equation}\label{eq:underRelaxSolUpdate} \begin{Bmatrix} \tilde{\hbox{$\mathbf{u}$}} \\ \tilde{\hbox{$\varphi$}} \\ \tilde{\lambda} \end{Bmatrix}_{i+1} = \begin{Bmatrix} \tilde{\hbox{$\mathbf{u}$}} \\ \tilde{\hbox{$\varphi$}} \\ \tilde{\lambda} \end{Bmatrix}_{i} + \beta \begin{Bmatrix} \Delta \tilde{\hbox{$\mathbf{u}$}} \\ \Delta \tilde{\hbox{$\varphi$}} \\ \Delta \tilde{\lambda} \end{Bmatrix}_{i+1}. \end{equation} \noindent When $\beta$ is set to one, the Newton-Raphson method is recovered; otherwise, the method may be referred to as a modified Newton approach. Under-relaxation schemes are usually robust; however, they may reduce the rate of convergence of the problem \cite{STORVIK2021113822}. For the phase-field fracture problem, the under-relaxation scheme is adopted to prevent divergence due to an ill-behaved stiffness matrix. Algorithm \ref{alg:dispToArcUnderRelax} in Appendix \ref{appA:algorithms} presents the under-relaxation adopted in this work. In this scheme, $\beta$ starts with a value of one, corresponding to a full Newton-Raphson update. 
When convergence is not achieved, the value of $\beta$ is reduced by a factor of 1.25. This reduction is carried out twice before performing a reduction in the prescribed incremental dissipation $\Delta\tau$. The motivation behind this is to retry the current dissipation step with smaller solution increments within the iterative process of the Newton-Raphson method, thereby preventing divergence due to an ill-behaved stiffness matrix. \subsection{Adaptive mesh refinement and solution transfer} The phase-field fracture model requires an extremely fine mesh to resolve the crack zone in the computational domain, $\hbox{$\Omega$}$. A sharp crack topology is recovered in the limit $\hbox{$l$} \to 0$ \cite{Bourdin2000}. With fracture length-scales $\hbox{$l$}$ very small compared to $|\hbox{$\Omega$}|$, a uniformly refined mesh enormously increases the required resources in terms of computing power and storage. In this work, the novel monolithic solver is integrated with an efficient adaptive mesh refinement (AMR) scheme. The elements of the mesh are chosen for refinement based on a critical threshold value of $\hbox{$\varphi$}$, $\hbox{$\varphi$}_{\text{threshold}}$ \cite{Heister2015,goswami2019adaptive}, which is typically referred to as the refinement indicator. To locally refine the crack path, polynomial splines over hierarchical T-meshes (PHT-splines) are used within the framework of IGA. The PHT-splines possess a very efficient and easy-to-implement local refinement algorithm. The hierarchical approach replaces the basis functions for an element on the coarse mesh with a set of basis functions constructed over the refined element. The refinement of an element originally defined on the coarse mesh is restricted by a pre-decided maximum number of allowable refinements, to avoid repeated refinements of elements already in the cracked domain. 
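The threshold-based marking and refinement into children elements described in this section can be sketched as follows (hypothetical element records; in the actual implementation the children additionally receive new basis functions and projected field values):

```python
def refine_step(elements, phi_threshold=0.2, max_level=4, dim=2):
    """One adaptive refinement pass: active elements whose maximum
    phase-field value exceeds the refinement indicator are deactivated
    and replaced by 2**dim children, one hierarchical level finer."""
    children = []
    for e in elements:
        marked = (e["active"] and e["phi_max"] > phi_threshold
                  and e["level"] < max_level)
        if marked:
            e["active"] = False          # parent leaves the active mesh
            for _ in range(2 ** dim):
                # children inherit the parent state until projection
                children.append({"level": e["level"] + 1,
                                 "active": True,
                                 "phi_max": e["phi_max"]})
    elements.extend(children)
    return elements

mesh = [{"level": 0, "active": True, "phi_max": 0.5},   # gets refined
        {"level": 4, "active": True, "phi_max": 0.9},   # at max level
        {"level": 0, "active": True, "phi_max": 0.1}]   # below threshold
mesh = refine_step(mesh)
```

The cap on the hierarchical level mimics the pre-decided maximum number of allowable refinements, so elements inside the fully cracked zone are not refined indefinitely.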
An adaptive h-refinement scheme is adopted in this work, in which the order of the basis functions remains constant throughout the refinement process. Within the simulation, a series of hierarchical meshes evolves. The mesh at the onset of the simulation is denoted as the base mesh or Level 0 mesh. At any hierarchical level, say `k', some (parent) elements are marked for refinement, following which they are sub-divided into $2^{\text{dim}}$ (children) elements. Once a parent element is refined, it becomes inactive and its children take its place in the computational domain as active elements. Finally, for computational efficiency, the basis functions are computed only for the children elements upon refinement, and not for the entire computational domain. To avoid re-computing the problem from the beginning, a transfer of variables is required: for each refined element, the field variables are projected from the coarser mesh onto the finer mesh. The discretized variables include the field variables $\hbox{$\mathbf{u}$}$ and $\hbox{$\varphi$}$, computed at the control points, which need to be transferred to the new elements. The projection of the field variables from a coarser mesh to a finer mesh is implemented using a technique similar to that described in \cite{hennig2016bezier}. For computing the new control points, instead of projecting the geometrical information at the basis vertex, we project $\hbox{$\varphi$}$ on the finer mesh. \section{Numerical experiments}\label{sec:5} In this section, numerical experiments are carried out on representative fracture problems. These include a tapered bar under tension, a specimen with an eccentric hole under tension, and a single-edge-notched specimen under tensile and, subsequently, shear loading. For all problems, the geometry, material properties and loading conditions are presented in the respective sub-sections. 
The load-displacement curves and the phase-field fracture topology at the final step of the analysis are also presented therein. A residual-based convergence criterion is adopted in this work. More specifically, the Newton-Raphson iterations are terminated when the norm of the residual is less than $1e-3$. \subsection{Tapered Bar under Tension (TBT)}\label{sec5:TBtension} The first example in the numerical study section is a tapered bar under tensile loading, as shown in Figure \ref{sec5:fig:TPTdiagram}. The bar is $5$ [mm] long, with widths of $0.75$ [mm] and $2$ [mm] at the fixed end and at the prescribed loading edge, respectively. The loading is applied in the form of a prescribed displacement increment $\tilde{\lambda}\hat{\hbox{$\mathbf{u}$}}$, where $\hat{\hbox{$\mathbf{u}$}}$ is a unit load vector and $\tilde{\lambda}$ is the load factor. When the analysis is started, the displacement-controlled approach is adopted and $\tilde{\lambda}$ is incremented in steps of $1e-2$ [mm]. Following the switch to the arc-length method, $\tilde{\lambda}$ becomes an unknown variable, and is solved for using the arc-length constraint equation. The additional parameters required for the analysis are presented in Table \ref{sec5:table:TBTparams}. 
\begin{figure}[ht] \begin{minipage}[b]{0.45\linewidth} \centering \begin{tikzpicture}[scale=0.5] \coordinate (K) at (0,0); \draw[line width=0.75pt,black] (0,0) to (0,-0.75); \draw[line width=0.75pt,black] (0,-0.75) to (10,-2); \draw[line width=0.75pt,black] (10,-2) to (10,2); \draw[line width=0.75pt,black] (10,2) to (0,0.75); \draw[line width=0.75pt,black] (0,0.75) to (0,0); \draw[->,line width=1.5pt,black] (10.2,0.0) to (10.8,0.0); \node[ ] at (11.5,0.0) {$\tilde{\lambda}\hat{\hbox{$\mathbf{u}$}}$}; \draw[line width=0.75pt,black] (0,-1.25) to (0,1.25); \draw[line width=0.75pt,black] (0,1.25) to (-0.25,1.0); \draw[line width=0.75pt,black] (0,1.00) to (-0.25,0.75); \draw[line width=0.75pt,black] (0,0.75) to (-0.25,0.50); \draw[line width=0.75pt,black] (0,0.50) to (-0.25,0.25); \draw[line width=0.75pt,black] (0,0.25) to (-0.25,0.00); \draw[line width=0.75pt,black] (0,0.00) to (-0.25,-0.25); \draw[line width=0.75pt,black] (0,-0.25) to (-0.25,-0.50); \draw[line width=0.75pt,black] (0,-0.50) to (-0.25,-0.75); \draw[line width=0.75pt,black] (0,-0.75) to (-0.25,-1.00); \draw[line width=0.75pt,black] (0,-1.00) to (-0.25,-1.25); \draw[line width=0.75pt,black] (0,-1.25) to (-0.25,-1.50); \end{tikzpicture} \caption{TBT experiment} \label{sec5:fig:TPTdiagram} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \centering \begin{tabular}{ll} \hline \textbf{Parameters} & \textbf{Value} \\ \hline Model & AT2 \\ $\Psi^f$ & No Split \\ $\lambda$ & 0.0 [MPa] \\ $\mu$ & 50.0 [MPa] \\ $\hbox{$G_c$}$ & 1.0 [N/mm] \\ $\hbox{$l$}$ & 0.25 [mm] \\ $\hbox{$\varphi$}_{\text{threshold}}$ & 0.2 \\ $\Delta\tau_{max}$ & 0.0125 [N] \\ \hline \end{tabular} \captionof{table}{Model parameters} \label{sec5:table:TBTparams} \end{minipage} \end{figure} \begin{figure}[!ht] \begin{subfigure}[t]{0.45\textwidth} \centering \begin{tikzpicture}[thick,scale=0.95, every node/.style={scale=0.95}] \begin{axis}[legend style={draw=none}, legend columns = 2, transpose legend, 
ylabel={Load\:[N]},xlabel={Displacement\:[mm]}, xmin=0, ymin=0, xmax=0.45, ymax=25, yticklabel style={/pgf/number format/.cd,fixed,precision=2}, every axis plot/.append style={very thick}] \pgfplotstableread[col sep = comma]{./Data/TaperBar/lodi_quad.txt}\Adata; \pgfplotstableread[col sep = comma]{./Data/TaperBar/lodi_cubics0_01.txt}\Bdata; \pgfplotstableread[col sep = comma]{./Data/TaperBar/lodi_cubics0_1.txt}\Cdata; \pgfplotstableread[col sep = comma]{./Data/TaperBar/lodi_cubics1.txt}\Ddata; \addplot [ color=black, mark=*, mark size=0.15pt, ] table [ x expr=\thisrowno{1}, y expr=\thisrowno{0} ] {\Adata}; \addlegendentry{Quadratic} \addplot [ color=red, mark=*, mark size=0.5pt, ] table [ x expr=\thisrowno{1}, y expr=\thisrowno{0} ] {\Cdata}; \addlegendentry{Cubic: $s = 0.1$} \addplot [ color=blue, mark=*, mark size=0.25pt, ] table [ x expr=\thisrowno{1}, y expr=\thisrowno{0} ] {\Ddata}; \addlegendentry{Cubic: $s = 1.0$} \end{axis} \end{tikzpicture} \caption{ } \label{sec5:fig:TBT_lodi} \end{subfigure} \hfill % \begin{subfigure}[t]{0.45\textwidth} \centering \begin{tikzpicture} \node[inner sep=0pt] () at (0,0) {\includegraphics[width=5.5cm,trim=8.6cm 2.38cm 2.65cm 7.15cm, clip]{Figures/TaperBar/Step78.png}}; \node[inner sep=0pt] () at (-1.3,-3.05) {\begin{axis}[ hide axis, scale only axis, height=0pt, width=0pt, colorbar horizontal, point meta min=0, point meta max=1, colorbar style={ width=4.85cm, xtick={0,0.5,1.0}, xticklabel style = {yshift=-0.075cm} }] \addplot [draw=none] coordinates {(0,0)}; \end{axis}}; \node[inner sep=0pt] () at (0,-3.75) {$\hbox{$\varphi$}$}; \end{tikzpicture} \caption{ } \label{sec5:fig:TBT_failure} \end{subfigure} \caption{Figure (a) presents the load-displacement curves for the tapered bar under tension. The legend entries correspond to the choice of degradation functions. 
Figure (b) shows the distribution of the phase-field variable at the final step of the analysis.} \end{figure} Figure \ref{sec5:fig:TBT_lodi} presents the load-displacement curves for the tapered bar under tension. For both the quadratic and the cubic degradation functions, the specimen exhibits a snap-back behaviour upon localisation. Furthermore, consistent with other studies in the literature, the cubic degradation function demonstrates a linear pre-peak behaviour for small values of $s$ ($<1$). It is also observed that the use of the cubic degradation function yields a higher peak load compared to the quadratic degradation function. This behaviour can be explained with analytical studies on a 1D bar \cite{pham2011}. Finally, in Figure \ref{sec5:fig:TBT_failure}, the phase-field fracture topology at the final step of the analysis is presented, where the fracture is seen to appear at the fixed end. \subsection{Specimen with an eccentric hole under tension (EH)} A unit square (in mm) embedded with an eccentric hole \cite{May2016}, as shown in Figure \ref{sec5:fig:EHdiagram}, is considered for numerical study. The hole has a radius of 0.2 [mm] and is centered at (0.6,0.0). A quasi-static loading is applied at the top boundary in the form of a prescribed displacement increment $\tilde{\lambda}\hat{\hbox{$\mathbf{u}$}}$, where $\hat{\hbox{$\mathbf{u}$}}$ is a unit load vector and $\tilde{\lambda}$ is the load factor. When the analysis is started, the displacement-controlled approach is adopted and $\tilde{\lambda}$ is incremented in steps of $1e-4$ [mm]. Following the switch to the arc-length method, $\tilde{\lambda}$ becomes an unknown variable, and is solved for using the arc-length constraint equation. The bottom boundary remains fixed. 
The additional parameters required for the analysis are presented in Table \ref{sec5:table:EHparams}. \begin{figure}[ht] \begin{minipage}[b]{0.45\linewidth} \centering \begin{tikzpicture}[scale=0.7] \coordinate (K) at (0,0); \draw [fill=black!20] (-2.5,-2.5) rectangle (2.5,2.5); \draw [fill=white] (0.5,0) circle (1.0); \draw[line width=0.75pt,black] (2.5,2.75) to (-2.5,2.75); \draw[->,line width=1.5pt,black] (0.0,3.0) to (0.0,3.5); \node[ ] at (0,3.85) {$\tilde{\lambda}\hat{\hbox{$\mathbf{u}$}}$}; \draw[line width=0.75pt,black] (-2.5,-2.5) to (-2.75,-2.75); \draw[line width=0.75pt,black] (-2.25,-2.5) to (-2.5,-2.75); \draw[line width=0.75pt,black] (-2.0,-2.5) to (-2.25,-2.75); \draw[line width=0.75pt,black] (-1.75,-2.5) to (-2.0,-2.75); \draw[line width=0.75pt,black] (-1.5,-2.5) to (-1.75,-2.75); \draw[line width=0.75pt,black] (-1.25,-2.5) to (-1.5,-2.75); \draw[line width=0.75pt,black] (-1.0,-2.5) to (-1.25,-2.75); \draw[line width=0.75pt,black] (-0.75,-2.5) to (-1.0,-2.75); \draw[line width=0.75pt,black] (-0.5,-2.5) to (-0.75,-2.75); \draw[line width=0.75pt,black] (-0.25,-2.5) to (-0.5,-2.75); \draw[line width=0.75pt,black] (0.0,-2.5) to (-0.25,-2.75); \draw[line width=0.75pt,black] (0.25,-2.5) to (0.0,-2.75); \draw[line width=0.75pt,black] (0.5,-2.5) to (0.25,-2.75); \draw[line width=0.75pt,black] (0.75,-2.5) to (0.5,-2.75); \draw[line width=0.75pt,black] (1.0,-2.5) to (0.75,-2.75); \draw[line width=0.75pt,black] (1.25,-2.5) to (1.0,-2.75); \draw[line width=0.75pt,black] (1.5,-2.5) to (1.25,-2.75); \draw[line width=0.75pt,black] (1.75,-2.5) to (1.5,-2.75); \draw[line width=0.75pt,black] (2.0,-2.5) to (1.75,-2.75); \draw[line width=0.75pt,black] (2.25,-2.5) to (2.0,-2.75); \draw[line width=0.75pt,black] (2.5,-2.5) to (2.25,-2.75); \end{tikzpicture} \caption{EH experiment} \label{sec5:fig:EHdiagram} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \centering \begin{tabular}{ll} \hline \textbf{Parameters} & \textbf{Value} \\ \hline Model & AT2 \\ $\Psi^f$ & 
Rankine \\ $\lambda$ & 121.154 [GPa] \\ $\mu$ & 80.769 [GPa] \\ $\hbox{$G_c$}$ & 2700 [N/m] \\ $\hbox{$l$}$ & 2e-2 [mm] \\ $\hbox{$\varphi$}_{\text{threshold}}$ & 0.2 \\ $\Delta\tau_{max}$ & 0.05 [N/mm$^2$] \\ \hline \end{tabular} \captionof{table}{Model parameters} \label{sec5:table:EHparams} \end{minipage} \end{figure} \begin{figure}[!ht] \begin{subfigure}[t]{0.45\textwidth} \centering \begin{tikzpicture}[thick,scale=0.95, every node/.style={scale=0.95}] \begin{axis}[legend style={draw=none}, legend columns = 2, transpose legend, ylabel={Load\:[N]},xlabel={Displacement\:[mm]}, xmin=0, ymin=0, xmax=0.008, ymax=1600, yticklabel style={/pgf/number format/.cd,fixed,precision=2}, every axis plot/.append style={very thick}] \pgfplotstableread[col sep = comma]{./Data/EH/lodi_quad.txt}\Adata; \pgfplotstableread[col sep = comma]{./Data/EH/lodi_cubics0_1.txt}\Cdata; \pgfplotstableread[col sep = comma]{./Data/EH/lodi_cubics1.txt}\Ddata; \addplot [ color=black, mark=*, mark size=0.25pt, ] table [ x expr=\thisrowno{1}, y expr=\thisrowno{0} ] {\Adata}; \addlegendentry{Quadratic} \addplot [ color=red, mark=*, mark size=0.25pt, ] table [ x expr=\thisrowno{1}, y expr=\thisrowno{0} ] {\Cdata}; \addlegendentry{Cubic: $s = 0.1$} \addplot [ color=blue, mark=*, mark size=0.25pt, ] table [ x expr=\thisrowno{1}, y expr=\thisrowno{0} ] {\Ddata}; \addlegendentry{Cubic: $s = 1.0$} \end{axis} \end{tikzpicture} \caption{ } \label{sec5:fig:EH_lodi} \end{subfigure} \hfill % \begin{subfigure}[t]{0.45\textwidth} \centering \begin{tikzpicture} \node[inner sep=0pt] () at (0,0) {\includegraphics[width=5.5cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/EH/Step235.png}}; \node[inner sep=0pt] () at (-1.3,-3.05) {\begin{axis}[ hide axis, scale only axis, height=0pt, width=0pt, colorbar horizontal, point meta min=0, point meta max=1, colorbar style={ width=4.85cm, xtick={0,0.5,1.0}, xticklabel style = {yshift=-0.075cm} }] \addplot [draw=none] coordinates {(0,0)}; \end{axis}}; \node[inner sep=0pt] () 
at (0,-3.75) {$\hbox{$\varphi$}$}; \end{tikzpicture} \caption{ } \label{sec5:fig:EH_failure} \end{subfigure} \caption{Figure (a) presents the load-displacement curves for the unit square specimen with an eccentric hole under tension. The legend entries correspond to the choice of degradation functions. Figure (b) shows the distribution of the phase-field variable at the final step of the analysis.} \end{figure} Figure \ref{sec5:fig:EH_lodi} presents the load-displacement curves for the unit square specimen with an eccentric hole under tension. The different curves correspond to the choice of the degradation function, quadratic and cubic. It is observed that the specimen exhibits a linear pre-peak behaviour with the cubic degradation function for $s<1$. This linear stage is missing for the quadratic degradation function and for the cubic degradation function with $s=1$. However, irrespective of the choice of the degradation function, two snap-back behaviours are observed. The first occurs at the onset of localization from the hole towards the right edge of the specimen, and the second at the onset of localization from the hole towards the left edge. Next, in Figure \ref{sec5:fig:EH_failure}, the phase-field fracture topology at the final step of the analysis is presented. The fracture topology is similar to those observed in the literature \cite{May2016}. The refined meshes corresponding to the different stages in the evolution of the phase-field are shown in Figure \ref{sec5:fig:adap_mesh_EH}.
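Table \ref{sec5:table:EHparams} lists the Rankine criterion as the crack-driving energy $\Psi^f$. As a rough, stand-alone numerical illustration of how this choice can differ from a spectral (tensile/compressive) split, the sketch below evaluates one common form of each driving energy for a plane pure-shear strain state. The exact expressions adopted in this work may differ; the Lam\'e constants are those of the table, converted to N/mm$^2$.

```python
import numpy as np

def spectral_tensile_energy(eps, lam, mu):
    """Tensile strain energy of a spectral split (one common form):
    psi+ = lam/2 * <tr eps>^2 + mu * sum_i <eps_i>^2."""
    e = np.linalg.eigvalsh(eps)
    tr = np.trace(eps)
    return 0.5 * lam * max(tr, 0.0)**2 + mu * np.sum(np.maximum(e, 0.0)**2)

def rankine_driving_energy(eps, lam, mu):
    """Rankine-type crack-driving energy (one common form):
    <sigma_1>^2 / (2 E), with sigma_1 the major principal stress."""
    sig = lam * np.trace(eps) * np.eye(2) + 2.0 * mu * eps
    s1 = np.linalg.eigvalsh(sig)[-1]              # max principal stress
    E = mu * (3.0 * lam + 2.0 * mu) / (lam + mu)  # Young's modulus
    return max(s1, 0.0)**2 / (2.0 * E)

lam, mu = 121.154e3, 80.769e3     # [N/mm^2], from the table above
gamma = 1e-3                      # engineering shear strain
eps = np.array([[0.0, gamma / 2], [gamma / 2, 0.0]])  # pure shear
# The two driving energies disagree already for this simple state,
# which is one way the resulting crack paths can differ.
print(spectral_tensile_energy(eps, lam, mu), rankine_driving_energy(eps, lam, mu))
```

Both quantities are positive for pure shear but take different values, so the two choices localize damage differently even before any boundary effects enter.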
\begin{figure}[!ht] \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/EH/Ref/Step39.png}}; \end{tikzpicture} \caption{ } \end{subfigure} % \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/EH/Ref/Step97.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/EH/Ref/Step127.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/EH/Ref/Step209.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.08\textwidth} \begin{tikzpicture} \begin{axis}[% hide axis, scale only axis, height=1.75\linewidth, width=0.01\linewidth, point meta min=0.0, point meta max=1.0, colorbar right, colorbar sampled, colorbar style={ separate axis lines, samples=256, }, every colorbar/.append style={ height=\pgfkeysvalueof{/pgfplots/parent axis height}, ytick={0,0.5,1},yticklabels={0,0.5\:\,$\hbox{$\varphi$}$,1} } ] \addplot [draw=none] coordinates {(0,0)}; \end{axis} \end{tikzpicture} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=2.45cm 6.40cm 8.15cm 0.5cm, clip]{Figures/EH/Ref/Step39.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=2.45cm 6.40cm 8.15cm 0.5cm, 
clip]{Figures/EH/Ref/Step97.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=2.45cm 6.40cm 8.15cm 0.5cm, clip]{Figures/EH/Ref/Step127.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=2.45cm 6.40cm 8.15cm 0.5cm, clip]{Figures/EH/Ref/Step209.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \caption{Figures (a-d) show the evolution of the phase-field, and the corresponding refined meshes are shown in Figures (e-h) for the eccentric hole. The colors in the latter figures represent the different patches in the IGA context.} \label{sec5:fig:adap_mesh_EH} \end{figure} \subsection{Single Edge Notched specimen under Tension (SENT)}\label{sec5:SENtension} A unit square specimen (in mm) with a horizontal notch midway along the height is considered, as shown in Figure \ref{sec5:fig:SENTdiagram}. The length of the notch is equal to half of the edge length of the plate (shown in red). The notch is modelled explicitly in the finite element mesh. A quasi-static loading is applied at the top boundary in the form of a prescribed displacement increment $\tilde{\lambda}\hat{\hbox{$\mathbf{u}$}}$, where $\hat{\hbox{$\mathbf{u}$}}$ is a unit load vector and $\tilde{\lambda}$ is the load factor. When the analysis is started, the displacement-control approach is adopted and $\tilde{\lambda}$ is incremented in steps of $1e-4$ [mm]. Following the switch to the arc-length method, $\tilde{\lambda}$ becomes an unknown variable, and is solved for using the arc-length constraint equation. Furthermore, the bottom boundary remains fixed. The additional parameters required for the analysis are presented in Table \ref{sec5:table:SENTparams}.
\begin{figure}[ht] \begin{minipage}[b]{0.45\linewidth} \centering \begin{tikzpicture}[scale=0.7] \coordinate (K) at (0,0); \draw[line width=0.75pt,black] (-2.5,-2.5) to (2.5,-2.5); \draw[line width=0.75pt,black] (2.5,-2.5) to (2.5,2.5); \draw[line width=0.75pt,black] (2.5,2.5) to (-2.5,2.5); \draw[line width=0.75pt,black] (-2.5,2.5) to (-2.5,-2.5); \draw[line width=1.5pt,red] (-2.5,0) to (0,0); \draw[line width=0.75pt,black] (2.5,2.75) to (-2.5,2.75); \draw[->,line width=1.5pt,black] (0.0,3.0) to (0.0,3.5); \node[ ] at (0,3.85) {$\tilde{\lambda}\hat{\hbox{$\mathbf{u}$}}$}; \draw[line width=0.75pt,black] (-2.5,-2.5) to (-2.75,-2.75); \draw[line width=0.75pt,black] (-2.25,-2.5) to (-2.5,-2.75); \draw[line width=0.75pt,black] (-2.0,-2.5) to (-2.25,-2.75); \draw[line width=0.75pt,black] (-1.75,-2.5) to (-2.0,-2.75); \draw[line width=0.75pt,black] (-1.5,-2.5) to (-1.75,-2.75); \draw[line width=0.75pt,black] (-1.25,-2.5) to (-1.5,-2.75); \draw[line width=0.75pt,black] (-1.0,-2.5) to (-1.25,-2.75); \draw[line width=0.75pt,black] (-0.75,-2.5) to (-1.0,-2.75); \draw[line width=0.75pt,black] (-0.5,-2.5) to (-0.75,-2.75); \draw[line width=0.75pt,black] (-0.25,-2.5) to (-0.5,-2.75); \draw[line width=0.75pt,black] (0.0,-2.5) to (-0.25,-2.75); \draw[line width=0.75pt,black] (0.25,-2.5) to (0.0,-2.75); \draw[line width=0.75pt,black] (0.5,-2.5) to (0.25,-2.75); \draw[line width=0.75pt,black] (0.75,-2.5) to (0.5,-2.75); \draw[line width=0.75pt,black] (1.0,-2.5) to (0.75,-2.75); \draw[line width=0.75pt,black] (1.25,-2.5) to (1.0,-2.75); \draw[line width=0.75pt,black] (1.5,-2.5) to (1.25,-2.75); \draw[line width=0.75pt,black] (1.75,-2.5) to (1.5,-2.75); \draw[line width=0.75pt,black] (2.0,-2.5) to (1.75,-2.75); \draw[line width=0.75pt,black] (2.25,-2.5) to (2.0,-2.75); \draw[line width=0.75pt,black] (2.5,-2.5) to (2.25,-2.75); \end{tikzpicture} \caption{SENT experiment} \label{sec5:fig:SENTdiagram} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \centering \begin{tabular}{ll} 
\hline \textbf{Parameters} & \textbf{Value} \\ \hline Model & AT2 \\ $\Psi^f$ & No Split \\ $\lambda$ & 121.154 [GPa] \\ $\mu$ & 80.769 [GPa] \\ $\hbox{$G_c$}$ & 2700 [N/m] \\ $\hbox{$l$}$ & 2e-2 [mm] \\ $\hbox{$\varphi$}_{\text{threshold}}$ & 0.2 \\ $\Delta\tau_{max}$ & 0.025 [N] \\ \hline \end{tabular} \captionof{table}{Model parameters} \label{sec5:table:SENTparams} \end{minipage} \end{figure} Figure \ref{sec5:fig:SENT_lodi} presents the load-displacement curves for the single edge notched specimen under tension. The different curves correspond to the choice of the degradation function, quadratic and cubic. Similar to the previous section, it is observed that the specimen exhibits a linear pre-peak behaviour with the cubic degradation function for $s<1$. The quadratic degradation function does not exhibit a linear pre-peak behaviour. Moreover, beyond the first snap-back behaviour, the post-peak branches of all curves are similar. Next, in Figure \ref{sec5:fig:SENT_failure}, the phase-field fracture topology at the final step of the analysis is presented. The fracture topology is consistent with those observed in the literature, for instance, \cite{Miehe2010}. The refined meshes corresponding to the different stages in the evolution of the phase-field are shown in Figure \ref{sec5:fig:adap_mesh_SENT}.
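The role of the parameter $s$ can be made concrete by looking at the slope of the degradation function at $\hbox{$\varphi$}=0$. The sketch below assumes a commonly used cubic form with $g'(0)=-s$ (the exact cubic adopted in this work may differ) and verifies numerically that the quadratic function has $g'(0)=-2$ while the cubic one has $g'(0)=-s$; for small $s$ the stiffness therefore degrades negligibly at low strains, which is consistent with the linear pre-peak stage observed for $s<1$.

```python
def g_quadratic(phi):
    """Standard quadratic degradation: g(0) = 1, g(1) = 0, g'(0) = -2."""
    return (1.0 - phi)**2

def g_cubic(phi, s):
    """A commonly used cubic degradation (assumed form):
    g(0) = 1, g(1) = 0, g'(1) = 0 and g'(0) = -s."""
    w = 1.0 - phi
    return s * (w**3 - w**2) + 3.0 * w**2 - 2.0 * w**3

# Slope at phi = 0 via a central difference
h = 1e-6
slope = lambda g: (g(h) - g(-h)) / (2.0 * h)
print(slope(g_quadratic))                  # ~ -2: immediate degradation
print(slope(lambda p: g_cubic(p, 0.1)))    # ~ -0.1: nearly elastic pre-peak
```

The nonzero slope $g'(0)=-1$ of the cubic function at $s=1$ also matches the observation above that this case, like the quadratic one, shows no linear pre-peak stage.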
\begin{figure}[!ht] \begin{subfigure}[t]{0.45\textwidth} \centering \begin{tikzpicture}[thick,scale=0.95, every node/.style={scale=0.95}] \begin{axis}[legend style={draw=none}, legend columns = 2, transpose legend, ylabel={Load\:[N]},xlabel={Displacement\:[mm]}, xmin=0, ymin=0, xmax=0.0075, ymax=1300, yticklabel style={/pgf/number format/.cd,fixed,precision=2}, every axis plot/.append style={very thick}] \pgfplotstableread[col sep = comma]{./Data/SENT/lodi_quad.txt}\Adata; \pgfplotstableread[col sep = comma]{./Data/SENT/lodi_cubics0_01.txt}\Bdata; \pgfplotstableread[col sep = comma]{./Data/SENT/lodi_cubics0_1.txt}\Cdata; \pgfplotstableread[col sep = comma]{./Data/SENT/lodi_cubics1.txt}\Ddata; \addplot [ color=black, mark=*, mark size=0.25pt, ] table [ x expr=\thisrowno{1}, y expr=\thisrowno{0} ] {\Adata}; \addlegendentry{Quadratic} \addplot [ color=red, mark=*, mark size=0.25pt, ] table [ x expr=\thisrowno{1}, y expr=\thisrowno{0} ] {\Cdata}; \addlegendentry{Cubic: $s = 0.1$} \addplot [ color=blue, mark=*, mark size=0.25pt, ] table [ x expr=\thisrowno{1}, y expr=\thisrowno{0} ] {\Ddata}; \addlegendentry{Cubic: $s = 1.0$} \end{axis} \end{tikzpicture} \caption{ } \label{sec5:fig:SENT_lodi} \end{subfigure} \hfill % \begin{subfigure}[t]{0.45\textwidth} \centering \begin{tikzpicture} \node[inner sep=0pt] () at (0,0) {\includegraphics[width=5.5cm,trim=8.35cm 1.25cm 2.65cm 6.25cm, clip]{Figures/SENT/Step101.png}}; \node[inner sep=0pt] () at (-1.3,-3.05) {\begin{axis}[ hide axis, scale only axis, height=0pt, width=0pt, colorbar horizontal, point meta min=0, point meta max=1, colorbar style={ width=4.85cm, xtick={0,0.5,1.0}, xticklabel style = {yshift=-0.075cm} }] \addplot [draw=none] coordinates {(0,0)}; \end{axis}}; \node[inner sep=0pt] () at (0,-3.75) {$\hbox{$\varphi$}$}; \end{tikzpicture} \caption{ } \label{sec5:fig:SENT_failure} \end{subfigure} \caption{Figure (a) presents the load-displacement curves for the single edge notched specimen under tension. 
The legend entries correspond to the choice of degradation functions. Figure (b) shows the distribution of the phase-field variable at the final step of the analysis.} \end{figure} \begin{figure}[!ht] \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/SENT/Ref/Step32.png}}; \end{tikzpicture} \caption{ } \end{subfigure} % \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/SENT/Ref/Step55.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/SENT/Ref/Step100.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/SENT/Ref/Step148.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.08\textwidth} \begin{tikzpicture} \begin{axis}[% hide axis, scale only axis, height=1.75\linewidth, width=0.01\linewidth, point meta min=0.0, point meta max=1.0, colorbar right, colorbar sampled, colorbar style={ separate axis lines, samples=256, }, every colorbar/.append style={ height=\pgfkeysvalueof{/pgfplots/parent axis height}, ytick={0,0.5,1},yticklabels={0,0.5\:\,$\hbox{$\varphi$}$,1} } ] \addplot [draw=none] coordinates {(0,0)}; \end{axis} \end{tikzpicture} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=2.45cm 6.40cm 8.15cm 0.5cm, clip]{Figures/SENT/Ref/Step32.png}}; \end{tikzpicture} \caption{ } 
\end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=2.45cm 6.40cm 8.15cm 0.5cm, clip]{Figures/SENT/Ref/Step55.png}}; \end{tikzpicture} \caption{} \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=2.45cm 6.40cm 8.15cm 0.5cm, clip]{Figures/SENT/Ref/Step100.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=2.45cm 6.40cm 8.15cm 0.5cm, clip]{Figures/SENT/Ref/Step148.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \caption{Figures (a-d) show the evolution of the phase-field, and the corresponding refined meshes are shown in Figures (e-h) for the single edge notched specimen under tension. The colors in the latter figures represent the different patches in the IGA context.} \label{sec5:fig:adap_mesh_SENT} \end{figure} Furthermore, sensitivity studies are carried out with respect to the choice of the prescribed maximum dissipation $\Delta\tau_{max}$ and the phase-field threshold for mesh refinement. The results in Appendix \ref{appA:sensitivity} present the values of $\Delta\tau_{max}$ for which similar load-displacement curves are obtained. Next, it is also observed that the adaptive mesh refinement technique with different values of $\hbox{$\varphi$}_{th.}$ yields load-displacement curves similar to that obtained on a fixed mesh. Finally, in Appendix \ref{appA:comparison}, the proposed monolithic solver is compared with the conventional alternate minimization solver and the quasi-Newton Raphson method \cite{Heister2015}. For the SENT problem, it is observed that all methods/solvers yield similar peak loads and pre-peak behaviour.
However, the proposed monolithic solver augmented with the arc-length method captures snap-back behaviours, which is not possible with the alternate minimization and quasi-Newton Raphson methods. \subsection{Single Edge Notched specimen under Shear (SENS)} The shear test is carried out on the single edge notched specimen by loading in the horizontal direction, as shown in Figure \ref{sec5:fig:SENSdiagram}. The model parameters are presented in Table \ref{sec5:table:SENSparams}. Similar to the SENT model, a quasi-static loading is applied at the top boundary in the form of a prescribed displacement increment $\tilde{\lambda}\hat{\hbox{$\mathbf{u}$}}$, where $\hat{\hbox{$\mathbf{u}$}}$ is a unit load vector and $\tilde{\lambda}$ is the load factor. When the analysis is started, the displacement-control approach is adopted and $\tilde{\lambda}$ is incremented in steps of $5e-4$ [mm]. Following the switch to the arc-length method, $\tilde{\lambda}$ becomes an unknown variable, and is solved for using the arc-length constraint equation. The bottom boundary of the specimen is fixed, while the left and the right edges are restricted in the vertical direction.
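Why snap-back requires an indirect control parameter can be illustrated with a toy one-dimensional analogue; this is not the fracture-energy arc-length constraint of this work, merely a self-contained sketch. An elastic spring in series with a softening spring produces a load-displacement curve on which both the global displacement and the load decrease simultaneously, a branch that a monotonically prescribed global displacement can never reach, whereas a local control parameter traces it directly.

```python
import numpy as np

# Elastic spring (stiffness k) in series with a softening spring:
# equilibrium force F = f_s(u_s); total displacement U = u_s + F / k.
k, f_t, k_s, u_c = 2.0, 1.0, 10.0, 0.1   # illustrative values
u_p = f_t / k_s                           # opening at peak load

def f_s(u_s):
    """Linear response up to the peak load, then exponential softening."""
    return k_s * u_s if u_s <= u_p else f_t * np.exp(-(u_s - u_p) / u_c)

# Path-following with the LOCAL opening u_s as the control parameter:
u_loc = np.linspace(0.0, 0.6, 400)
F = np.array([f_s(u) for u in u_loc])
U = u_loc + F / k                          # global (measured) displacement

# Snap-back: consecutive states where BOTH U and F decrease.
dU, dF = np.diff(U), np.diff(F)
snap_back = bool(np.any((dU < 0) & (dF < 0)))
print("snap-back detected:", snap_back)
```

In the actual solver, the dissipated fracture energy per step plays the role of the local control parameter, and the load factor $\tilde{\lambda}$ is solved for alongside the displacement and phase-field unknowns.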
\begin{figure}[ht] \begin{minipage}[b]{0.45\linewidth} \centering \begin{tikzpicture}[scale=0.7] \coordinate (K) at (0,0); \draw[line width=0.75pt,black] (-2.5,-2.5) to (2.5,-2.5); \draw[line width=0.75pt,black] (2.5,-2.5) to (2.5,2.5); \draw[line width=0.75pt,black] (2.5,2.5) to (-2.5,2.5); \draw[line width=0.75pt,black] (-2.5,2.5) to (-2.5,-2.5); \draw[line width=1.5pt,red] (-2.5,0) to (0,0); \draw[line width=0.75pt,black] (2.5,2.75) to (-2.5,2.75); \draw[->,line width=1.5pt,black] (0.1,3.25) to (0.8,3.25); \node[ ] at (-0.5,3.25) {$\tilde{\lambda}\hat{\hbox{$\mathbf{u}$}}$}; \draw[line width=0.75pt,black] (-2.5,-2.5) to (-2.75,-2.75); \draw[line width=0.75pt,black] (-2.25,-2.5) to (-2.5,-2.75); \draw[line width=0.75pt,black] (-2.0,-2.5) to (-2.25,-2.75); \draw[line width=0.75pt,black] (-1.75,-2.5) to (-2.0,-2.75); \draw[line width=0.75pt,black] (-1.5,-2.5) to (-1.75,-2.75); \draw[line width=0.75pt,black] (-1.25,-2.5) to (-1.5,-2.75); \draw[line width=0.75pt,black] (-1.0,-2.5) to (-1.25,-2.75); \draw[line width=0.75pt,black] (-0.75,-2.5) to (-1.0,-2.75); \draw[line width=0.75pt,black] (-0.5,-2.5) to (-0.75,-2.75); \draw[line width=0.75pt,black] (-0.25,-2.5) to (-0.5,-2.75); \draw[line width=0.75pt,black] (0.0,-2.5) to (-0.25,-2.75); \draw[line width=0.75pt,black] (0.25,-2.5) to (0.0,-2.75); \draw[line width=0.75pt,black] (0.5,-2.5) to (0.25,-2.75); \draw[line width=0.75pt,black] (0.75,-2.5) to (0.5,-2.75); \draw[line width=0.75pt,black] (1.0,-2.5) to (0.75,-2.75); \draw[line width=0.75pt,black] (1.25,-2.5) to (1.0,-2.75); \draw[line width=0.75pt,black] (1.5,-2.5) to (1.25,-2.75); \draw[line width=0.75pt,black] (1.75,-2.5) to (1.5,-2.75); \draw[line width=0.75pt,black] (2.0,-2.5) to (1.75,-2.75); \draw[line width=0.75pt,black] (2.25,-2.5) to (2.0,-2.75); \draw[line width=0.75pt,black] (2.5,-2.5) to (2.25,-2.75); \draw[fill=gray!50] (-2.75,1.25) -- (-2.25,1.25) -- (-2.5,1.75)-- (-2.75,1.25); \draw[fill=black!75] (-2.5,1.05) circle (0.2); \draw[line width=1pt,black] 
(-2.25,0.8) to (-2.75,0.8); \draw[fill=gray!50] (-2.75,-1.5) -- (-2.25,-1.5) -- (-2.5,-1.)-- (-2.75,-1.5); \draw[fill=black!75] (-2.5,-1.7) circle (0.2); \draw[line width=1pt,black] (-2.25,-1.95) to (-2.75,-1.95); \draw[fill=gray!50] (2.75,1.25) -- (2.25,1.25) -- (2.5,1.75)-- (2.75,1.25); \draw[fill=black!75] (2.5,1.05) circle (0.2); \draw[line width=1pt,black] (2.25,0.8) to (2.75,0.8); \draw[fill=gray!50] (2.75,-1.5) -- (2.25,-1.5) -- (2.5,-1.)-- (2.75,-1.5); \draw[fill=black!75] (2.5,-1.7) circle (0.2); \draw[line width=1pt,black] (2.25,-1.95) to (2.75,-1.95); \end{tikzpicture} \caption{SENS experiment} \label{sec5:fig:SENSdiagram} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \centering \begin{tabular}{ll} \hline \textbf{Parameters} & \textbf{Value} \\ \hline Model & AT2 \\ $\Psi^f$ & Rankine \\ $\lambda$ & 121.154 [GPa] \\ $\mu$ & 80.769 [GPa] \\ $\hbox{$G_c$}$ & 2700 [N/m] \\ $\hbox{$l$}$ & 2e-2 [mm] \\ $\hbox{$\varphi$}_{\text{threshold}}$ & 0.1 \\ $\Delta\tau_{max}$ & 0.025 [N] \\ \hline \end{tabular} \captionof{table}{Model parameters} \label{sec5:table:SENSparams} \end{minipage} \end{figure} Figure \ref{sec5:fig:SENS_lodi} presents the load-displacement curves for the single edge notched specimen under shear. The different curves correspond to the choice of the degradation function, quadratic and cubic. It is observed that the specimen exhibits a linear pre-peak behaviour with the cubic degradation function for $s<1$. This linear stage is not exhibited by the quadratic degradation function and the cubic degradation function with $s=1$. However, irrespective of the choice of the degradation function, two snap-back behaviours are observed. The first occurs at the onset of localization, whereas the second appears when the crack has reached the bottom edge. Next, in Figure \ref{sec5:fig:SENS_failure}, the phase-field fracture topology at the final step of the analysis is presented.
The fracture topology differs from that presented in \cite{Miehe2010}, and the reason lies in the choice of the fracture driving energy $\Psi^f$. In \cite{Miehe2010}, the fracture is driven by the tensile strain energy obtained through spectral decomposition, whereas in this work, the Rankine criterion \cite{wu2017} is adopted. The refined meshes corresponding to the different stages in the evolution of the phase-field are shown in Figure \ref{sec5:fig:adap_mesh_SENS}. \newpage \begin{figure}[!ht] \begin{subfigure}[t]{0.45\textwidth} \centering \begin{tikzpicture}[thick,scale=0.95, every node/.style={scale=0.95}] \begin{axis}[legend style={draw=none}, legend columns = 2, transpose legend, ylabel={Load\:[N]},xlabel={Displacement\:[mm]}, xmin=0, ymin=0, xmax=0.014, ymax=800, yticklabel style={/pgf/number format/.cd,fixed,precision=2}, every axis plot/.append style={very thick}] \pgfplotstableread[col sep = comma]{./Data/SENS/lodi_quadn.txt}\Adata; \pgfplotstableread[col sep = comma]{./Data/SENS/lodi_cubics0_1n.txt}\Cdata; \pgfplotstableread[col sep = comma]{./Data/SENS/lodi_cubics1n.txt}\Ddata; \addplot [ color=black, mark=*, mark size=0.25pt, ] table [ x expr=\thisrowno{1}, y expr=\thisrowno{0} ] {\Adata}; \addlegendentry{Quadratic} \addplot [ color=red, mark=*, mark size=0.25pt, ] table [ x expr=\thisrowno{1}, y expr=\thisrowno{0} ] {\Cdata}; \addlegendentry{Cubic: $s = 0.1$} \addplot [ color=blue, mark=*, mark size=0.25pt, ] table [ x expr=\thisrowno{1}, y expr=\thisrowno{0} ] {\Ddata}; \addlegendentry{Cubic: $s = 1.0$} \end{axis} \end{tikzpicture} \caption{ } \label{sec5:fig:SENS_lodi} \end{subfigure} \hfill % \begin{subfigure}[t]{0.45\textwidth} \centering \begin{tikzpicture} \node[inner sep=0pt] () at (0,0) {\includegraphics[width=5.5cm,trim=8.35cm 1.25cm 2.65cm 6.25cm, clip]{Figures/SENS/Step126.png}}; \node[inner sep=0pt] () at (-1.3,-3.05) {\begin{axis}[ hide axis, scale only axis, height=0pt, width=0pt, colorbar horizontal, point meta min=0, point meta max=1, colorbar
style={ width=4.85cm, xtick={0,0.5,1.0}, xticklabel style = {yshift=-0.075cm} }] \addplot [draw=none] coordinates {(0,0)}; \end{axis}}; \node[inner sep=0pt] () at (0,-3.75) {$\hbox{$\varphi$}$}; \end{tikzpicture} \caption{ } \label{sec5:fig:SENS_failure} \end{subfigure} \caption{Figure (a) presents the load-displacement curves for the single edge notched specimen under shear. The legend entries correspond to the choice of degradation functions. Figure (b) shows the distribution of the phase-field variable at the final step of the analysis.} \end{figure} \begin{figure}[!ht] \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/SENS/Ref/Step21.png}}; \end{tikzpicture} \caption{ } \end{subfigure} % \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/SENS/Ref/Step40.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/SENS/Ref/Step85.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=8.35cm 1.5cm 2.65cm 6.25cm, clip]{Figures/SENS/Ref/Step126.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.08\textwidth} \begin{tikzpicture} \begin{axis}[% hide axis, scale only axis, height=1.75\linewidth, width=0.01\linewidth, point meta min=0.0, point meta max=1.0, colorbar right, colorbar sampled, colorbar style={ separate axis lines, samples=256, }, every colorbar/.append style={ height=\pgfkeysvalueof{/pgfplots/parent axis height}, 
ytick={0,0.5,1},yticklabels={0,0.5\:\,$\hbox{$\varphi$}$,1} } ] \addplot [draw=none] coordinates {(0,0)}; \end{axis} \end{tikzpicture} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=2.45cm 6.40cm 8.15cm 0.5cm, clip]{Figures/SENS/Ref/Step21.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=2.45cm 6.40cm 8.15cm 0.5cm, clip]{Figures/SENS/Ref/Step40.png}}; \end{tikzpicture} \caption{} \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=2.45cm 6.40cm 8.15cm 0.5cm, clip]{Figures/SENS/Ref/Step85.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \begin{tikzpicture}[scale=0.5] \node[inner sep=0pt] () at (0,0) {\includegraphics[width=3.0cm,trim=2.45cm 6.40cm 8.15cm 0.5cm, clip]{Figures/SENS/Ref/Step126.png}}; \end{tikzpicture} \caption{ } \end{subfigure} \caption{Figures (a-d) show the evolution of the phase-field, and the corresponding refined meshes are shown in Figures (e-h) for the single edge notched specimen under shear. The colors in the latter figures represent the different patches in the IGA context.} \label{sec5:fig:adap_mesh_SENS} \end{figure} \section{Concluding Remarks}\label{sec:6} In this work, we have proposed a robust monolithic solver to accurately forecast the material behavior, which is essential for predicting damage progression and estimating possible failure paths. The literature on variational phase-field based fracture modeling still lacks a reliable, efficient, and simple monolithic solver capable of capturing the pre- and post-peak behaviours accurately.
The proposed fully monolithic solver adopts a fracture energy-based arc-length method and an adaptive under-relaxation technique to bridge this gap. The proposed solver utilizes an adaptive mesh refinement scheme using polynomial splines over hierarchical T-meshes (PHT-splines) within the framework of IGA. The PHT-splines possess a very efficient and easy-to-implement local refinement algorithm, which makes them a natural choice for capturing quantities of local interest. The combination of the proposed solver with an adaptive mesh refinement technique could facilitate the application of this approach to more complex structures with sophisticated constitutive laws. In the four test cases presented in this work, the crack is allowed to nucleate on its own, and the solver captures the post-peak snap-back effects, which is not possible with the alternate minimization solver \cite{Bourdin2007,miehe2010b} and the quasi-monolithic scheme \cite{Heister2015}. Further extensions of this work may include complex multiphysics problems (e.g., porous media, corrosion), dynamic fracture, and the unified phase-field fracture model \cite{wu2017} for quasi-brittle fracture. Also, plasticity models could be incorporated for ductile fracture. \paragraph{Declaration of Competing Interest:} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. \paragraph{Acknowledgment:} The first author (R.B) is thankful to Elias B\"orjesson for many helpful discussions on the arc-length method. The financial support from the Swedish Research Council for Sustainable Development (FORMAS) under Grant 2018-01249 and the Swedish Research Council (VR) under Grant 2017-05192 is gratefully acknowledged. \bibliographystyle{unsrt}
\section{Introduction} \label{sec:intro} This paper introduces an approach based on polynomial interpolation to obtain mathematically rigorous results about existence of solutions of nonlinear ordinary differential equations (ODEs). Our motivation for the present work is threefold. First, we believe that polynomial interpolation techniques are versatile and can lead to efficient and general computational methods to approximate solutions of ODEs with complicated (non polynomial) nonlinearities. Second, while polynomial interpolation techniques have been used to produce computer-assisted proofs in ODEs, their applicability to produce proofs is sometimes limited by the formulation of the problem itself. More precisely, a standard way to prove (both theoretically and computationally) existence of solutions of systems of ODEs is to reformulate the problem into an integral equation (often in the form of a Picard operator) and then to apply the contraction mapping theorem to get existence. If one is interested in producing computer-assisted proofs using that approach, the analytic estimates required to perform the proofs depend on the amount of regularity gained by applying the integral operator. This observation motivated the development of what we call the {\em a priori bootstrap}, which consists of reformulating the original ODE problem into one of looking for the fixed point of a higher order smoothing Picard-like operator. Third, we believe (and hope) that our proposed method can be adapted to study infinite dimensional continuous dynamical systems (e.g. partial differential equations and delay differential equations) for which spectral methods may sometimes be difficult to apply (for instance because of the shape of the spatial domains or because the differential operators are difficult to invert in a given spectral basis).
It is important to realize that computer-assisted arguments to study differential equations are by now standard, and that providing a complete overview of the literature would require a tremendous effort and is outside the scope of this paper. However, we encourage the reader to read the survey papers \cite{MR1962787,MR1420838,MR2652784,Jay_Konstantin_Survey,MR1849323,notices_jb_jp}, the books \cite{MR2807595,MR3467671} and to consult the webpage of the CAPD group \cite{capd} to get a flavour of the extraordinary recent advances made in the field. More closely related to the present work are methods based on the contraction mapping theorem using the \emph{radii polynomial approach} (first introduced in \cite{MR2338393}), which has been developed in the last decade to study fixed points, periodic orbits, invariant manifolds and connecting orbits of ODEs, partial differential equations and delay differential equations (see for instance \cite{MR3353132,BerShe15,MR3437754,CasLesMir16,MR3623202,MR2592879}). The numerics and a posteriori analysis in those works mainly use spectral methods like Fourier and Chebyshev series, and Taylor methods. First order polynomial (piecewise linear) approximations were also used using the radii polynomial approach (see \cite{MR3323206,MR2821596,MR3207723}), but more seldom, mainly because the numerical cost was higher and the accuracy was lower than for spectral methods. The computational cost of these low order methods is essentially due to the above mentioned low gain of regularity of the Picard operators chosen to perform the computer-assisted proofs. In an attempt to address the low gain of regularity problem, we present here a new technique that we call \emph{a priori bootstrap} which, when combined with the use of higher order interpolation, significantly improves the efficiency of computer-assisted proofs with polynomial interpolation methods. 
We stress that the limitations that affected the previous works using interpolation were not solely due to the use of first order methods, and that the \emph{a priori bootstrap} is crucial (that is, just increasing the order of the interpolation does not significantly improve the results in those previous works). This point is illustrated in Section~\ref{sec:applications}. While we believe that one of the advantages of our proposed method is the versatility of the polynomial interpolations to tackle problems with complicated (non polynomial) nonlinearities, we hasten to mention the existence of previous powerful methods which have been developed in rigorous computing to study such problems. For instance, {\em automatic differentiation} (AD) techniques provide a beautiful and efficient means of computing solutions of nonlinear problems (e.g. see \cite{MR2807595,MR633878,MR2146523}) and are often combined with Taylor series expansions to prove existence of solutions of differential equations with non polynomial nonlinearities (e.g. see \cite{MR1962787, MR1652147, MR2644324,MR1930946, MR1961956, MR2049869,Ru99a}). Also, in the recent work \cite{MR3545977}, the ideas of AD are combined with Fourier series to prove existence of periodic orbits in non polynomial vector fields. Independently of AD techniques, a method involving Chebyshev series to approximate the nonlinearities has been proposed recently in \cite{MR3124898}. Finally, the fast Fourier transform (FFT) algorithm is used in \cite{FHL_KAM} to control general nonlinearities in the context of computer-assisted proofs in KAM theory. In this paper we consider $\phi:\mathbb{R}^n\to\mathbb{R}^n$ a $C^{{\vect{r}}}$ vector field (not necessarily polynomial) with ${{\vect{r}}} \ge 1$, and we present a rigorous numerical procedure to study problems of the form \begin{equation} \label{eq:general_problem} \left\{ \begin{aligned} &\frac{du}{dt}(t)=\phi(u(t)), \quad t\in[0,\tau],\\ &BV(u(0),u(\tau))=0. \end{aligned} \right.
\end{equation} We treat three special cases for $BV$, corresponding to an initial value problem, a problem of looking for periodic orbits and a problem of looking for connecting orbits. We also note that, as for the already existing spectral methods, the technique presented here extends easily to treat parameter dependent versions of~\eqref{eq:general_problem} (e.g. using ideas from \cite{MR2630003,MR3125637,MR3464215}). For the sake of simplicity, we expose all the general arguments for the initial value problem only, that is for \begin{equation} \label{eq:general_IVP} \left\{ \begin{aligned} &\frac{du}{dt}(t)=\phi(u(t)), \quad t\in[0,\tau],\\ &u(0)=u_0, \end{aligned} \right. \end{equation} given an integration time $\tau > 0$ and an initial condition $u_0 \in \mathbb{R}^n$. We explain how~\eqref{eq:general_IVP} needs to be modified for different problems as we introduce them in Sections~\ref{sec:applications} and~\ref{sec:ABC}. Our paper is organized as follows. In Section~\ref{sec:a_priori_bootstrap}, we start by presenting our \emph{a priori bootstrap} technique, together with a \emph{piecewise} reformulation of the operator that we use throughout this work. We then recall in Section~\ref{sec:general} some definitions and error estimates about polynomial interpolation, and explain how to combine them with our \emph{a priori bootstrap} formulation to get computer-assisted proofs. The precise estimates needed for the proofs are then derived in Section~\ref{sec:bounds}, and their dependency with respect to the \emph{a priori bootstrap} and to the parameters of the polynomial interpolation is commented in Section~\ref{sec:para}. This discussion is complemented by several examples in Section~\ref{sec:applications}, where we apply our technique to validate solutions for the Lorenz system. Finally we give another example of application in Section~\ref{sec:ABC}, where we prove the existence of some specific orbits for ABC flows. 
\section{Reformulations of the Cauchy problem} \label{sec:a_priori_bootstrap} \subsection{A priori bootstrap} One of the usual strategies used to study~\eqref{eq:general_IVP}, both theoretically and numerically, is to recast it as a fixed point problem, as in the following lemma. \begin{lemma} Consider the standard Picard operator \begin{equation*} f:\left\{ \begin{aligned} \mathcal{C}^0([0,1],\mathbb{R}^n) &\to \mathcal{C}^1([0,1],\mathbb{R}^n) \\ u &\mapsto\ f(u), \end{aligned} \right. \end{equation*} where \begin{equation} \label{eq:standard_Picard} f(u)(t) \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, u_0+\tau\int_0^t \phi(u(s))ds,\quad \text{for all}~t\in [0,1]. \end{equation} Then $u$ is a fixed point of $f$ if and only if $v:t\mapsto u(\frac{t}{\tau})$ is a solution of~\eqref{eq:general_IVP}. \end{lemma} In previous works using this reformulation, the limiting factor in the estimates needed to apply the contraction mapping theorem was a consequence of the fact that $f$ only gains one order of regularity, that is maps $\mathcal{C}^0$ into $\mathcal{C}^1$. This fact will be made precise in Section~\ref{sec:bounds} where we derive the estimates in question and in Section~\ref{sec:para} where we discuss how those estimates affect the effectiveness of our technique. To circumvent this limitation, we propose a different reformulation that we call \emph{a priori bootstrap}. This approach provides operators which gain more regularity, and therefore lead to sharper analytic estimates. First we introduce some notations. The following definition allows concisely describing the higher order equations obtained by taking successive derivatives of~\eqref{eq:general_IVP}. 
\begin{definition} Consider the sequence of vector fields $\left( \phi^{[p]} \right)_{0\leq p\leq {\vect{r}}+1}$ with $\phi^{[p]} : \mathbb{R}^n \to \mathbb{R}^n$, \begin{equation*} \phi^{[0]}(u) \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, u\quad \text{and}\quad \phi^{[p+1]}(u) \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, D\phi^{[p]}(u)\phi(u)\quad \text{for all } u \in \mathbb{R}^n \text{ and for all } p = 0,\dots,{\vect{r}}. \end{equation*} \end{definition} \begin{lemma} For any $1\leq p\leq {\vect{r}}+1$, $u$ solves~\eqref{eq:general_IVP} if and only if $u$ solves the following Cauchy problem \begin{equation} \label{eq:order_p_ODE} \left\{ \begin{aligned} & \frac{d^pu}{dt^p}(t)=\phi^{[p]}(u(t)), \quad t\in[0,\tau],\\ & \frac{d^qu}{dt^q}(0)=\phi^{[q]}(u_0), \quad \text{for all } q = 0,\dots,p-1. \end{aligned} \right. \end{equation} \end{lemma} \begin{proof} The direct implication is trivial. To prove the converse implication, we consider $e \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \frac{du}{dt}-\phi(u)$ and show that it solves a linear ODE of order $p-1$, with initial conditions $\frac{d^qe}{dt^q}(0)=0$ for all $q=0,\dots,p-2$, which implies that $e \equiv 0$. \end{proof} Integrating the $p^{th}$ order Cauchy problem \eqref{eq:order_p_ODE} $p$ times leads to a new fixed point operator which now maps $\mathcal{C}^0$ into $\mathcal{C}^p$. \begin{lemma} \label{lem:operator_order_p} Let $1\leq p\leq {\vect{r}}+1$ and consider the Picard-like operator \begin{equation*} \tilde g:\left\{ \begin{aligned} \mathcal{C}^0([0,1],\mathbb{R}^n) &\to \mathcal{C}^p([0,1],\mathbb{R}^n) \\ u &\mapsto\ \tilde g(u), \end{aligned} \right.
\end{equation*} where \begin{equation} \label{eq:tilde_g_original} \tilde g(u)(t) \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \sum_{q=0}^{p-1}\tau^q\frac{t^q}{q!} \phi^{[q]}(u_0)+\tau^p\int_0^t \frac{(t-s)^{p-1}}{(p-1)!}\phi^{[p]}(u(s))ds,\quad \text{for all}~t\in [0,1]. \end{equation} Then $u$ is a fixed point of $\tilde g$ if and only if $v:t\mapsto u(\frac{t}{\tau})$ is a solution of~\eqref{eq:order_p_ODE} (and thus of~\eqref{eq:general_IVP}). \end{lemma} \begin{proof} If $u$ is a fixed point of $\tilde g$, an elementary computation yields that $v$ solves~\eqref{eq:order_p_ODE}. Conversely, if $v$ solves~\eqref{eq:order_p_ODE} then Taylor's formula with integral remainder shows that $u$ is a fixed point of $\tilde g$. \end{proof} It is worth noting that, in the same framework of rigorous computation as the one used here, approximations using piecewise linear functions were used in \cite{MR2821596} to prove existence of homoclinic orbits for the Gray-Scott equation. In that case the system of ODEs considered is of order $2$, and therefore the equivalent integral operator is very similar to $\tilde g$ in \eqref{eq:tilde_g_original} for $p=2$. Similarly, piecewise linear functions were used in \cite{MR3207723} to prove existence of connecting orbits in the Lorenz equations. In that case, the standard Picard operator \eqref{eq:standard_Picard} was used. Now that we have an operator which provides a gain of several orders of regularity, it becomes interesting to consider polynomial interpolation of higher order, and again this will be detailed in Section~\ref{sec:para} and applied in Section~\ref{sec:applications}. \subsection{Piecewise reformulation of the Picard-like operator} \label{sec:reformulation} We finish this section with a last equivalent formulation of the initial value problem~\eqref{eq:general_IVP}, which will be the one used in the present paper to perform the computer-assisted proofs.
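Before turning to this piecewise reformulation, we note that the fields $\phi^{[p]}$ from the definition above need not be derived by hand. The following is a minimal scalar sketch of the recursion $\phi^{[0]}(u)=u$, $\phi^{[p+1]}=D\phi^{[p]}\cdot\phi$; the choice of the illustrative field $\phi(u)=u^2$ and the use of polynomial coefficient arrays are ours, and in dimension $n>1$ the derivative $D\phi^{[p]}$ is of course a Jacobian matrix rather than a scalar derivative.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Illustrative scalar field phi(u) = u^2, stored as coefficients [c0, c1, c2].
phi = np.array([0.0, 0.0, 1.0])

def bootstrap(phi, p):
    """Return the coefficients of phi^{[p]}, via
    phi^{[0]}(u) = u and phi^{[q+1]} = (d/du phi^{[q]}) * phi."""
    psi = np.array([0.0, 1.0])            # phi^{[0]}(u) = u
    for _ in range(p):
        psi = P.polymul(P.polyder(psi), phi)
    return psi

print(bootstrap(phi, 3))                  # -> [0. 0. 0. 0. 6.], i.e. 6*u^4
```

For $\phi(u)=u^2$ the recursion gives $\phi^{[p]}(u)=p!\,u^{p+1}$, so the degree (and the cost of evaluating $\phi^{[p]}$) grows with the level $p$ of bootstrap, a trade-off discussed later in the paper.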
Given $m\in\mathbb{N}$, we introduce the mesh of $[0,1]$ \begin{equation*} \Delta_m\,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \{t_0,t_1,\ldots,t_m\}, \end{equation*} where $t_0=0<t_1<\ldots<t_m=1$. Then we consider $\mathcal{C}^0_{\Delta_{m}}([0,1],\mathbb{R}^n)$ (respectively $\mathcal{C}^k_{\Delta_{m}}([0,1],\mathbb{R}^n)$) the space of piecewise continuous (respectively $\mathcal{C}^k$) functions having possible discontinuities only on the mesh $\Delta_m$. More precisely, we use the following definition. \begin{definition} For $k\in\mathbb{N}$, we say that $u\in\mathcal{C}^k_{\Delta_{m}}([0,1],\mathbb{R}^n)$ if $u_{ |_{(t_{j},t_{j+1})}}\in\mathcal{C}^k((t_{j},t_{j+1}),\mathbb{R}^n)$ and can be extended to a $\mathcal{C}^k$ function on $[t_j,t_{j+1}]$, for all $j=0,\ldots,m-1$. \end{definition} We then introduce \[ g:\left\{ \begin{aligned} \mathcal{C}^0_{\Delta_m}([0,1],\mathbb{R}^n) &\to \mathcal{C}^p_{\Delta_m}([0,1],\mathbb{R}^n) \\ u &\mapsto g(u), \end{aligned} \right. \] defined on the interval $(t_j,t_{j+1})$ ($j=0,\dots,m-1$) by \begin{equation} \label{eq:piecewise_g} g(u)(t) \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \sum_{q=0}^{p-1}\tau^q\frac{(t-t_j)^q}{q!} \phi^{[q]}(u(t_j^-))+\tau^p\int_{t_j}^t \frac{(t-s)^{p-1}}{(p-1)!}\phi^{[p]}(u(s))ds, \end{equation} where $u(t_j^-)$ denotes the left limit of $u$ at $t_j$, and $u(t_0^-)$ must be replaced by $u_0$ (this last convention will be used throughout the paper). \begin{remark} \label{rem:piecewise_g} {\em We point out that our computer-assisted proof is based on the operator $g$ (defined in~\eqref{eq:piecewise_g}), which differs slightly from the operator $\tilde g$ (defined in~\eqref{eq:tilde_g_original}), which was used in previous studies such as~\cite{MR3323206,MR2821596,MR3207723}. The only difference is that the integral in $g$ is, in some sense, reset at each $t_j$.
We introduce this \emph{piecewise} reformulation because it allows for sharper estimates (see Remark~\ref{rem:sharper_estimates}). } \end{remark} We finally introduce $G:\mathcal{C}^0_{\Delta_m}([0,1],\mathbb{R}^n) \to \mathcal{C}^0_{\Delta_m}([0,1],\mathbb{R}^n)$ as \begin{equation*} G(u) \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, g(u)-u. \end{equation*} \begin{lemma} Let $u\in\mathcal{C}^0_{\Delta_m}([0,1],\mathbb{R}^n)$. Then $G(u)=0$ if and only if $v:t\mapsto u(\frac{t}{\tau})$ solves~\eqref{eq:general_IVP}. \end{lemma} \begin{proof} This result is similar to Lemma~\ref{lem:operator_order_p}. The only additional property that we need to check is that, if $u\in\mathcal{C}^0_{\Delta_m}([0,1],\mathbb{R}^n)$ satisfies $G(u)=0$, then $u$ cannot be discontinuous. Indeed, if $G(u)=0$ then $g(u)=u$, and for all $j\in\{1,\ldots,m-1\}$ one has \begin{equation*} u(t_j^-)=\lim\limits_{t\to t_j^+} g(u)(t) = \lim\limits_{t\to t_j^+} u(t) = u(t_j^+). \qedhere \end{equation*} \end{proof} At this point, it might seem as if defining $G$ on $\mathcal{C}^0_{\Delta_m}([0,1],\mathbb{R}^n)$ brings unnecessary complications, and that we should simply define it on $\mathcal{C}^0([0,1],\mathbb{R}^n)$. While this is indeed a possibility, it will quickly become apparent that the present choice is more convenient, both for theoretical and numerical considerations (see Remark~\ref{rem:why_discontinuous}). Finding a zero of $G$ is the formulation of our initial problem~\eqref{eq:general_IVP} that we are going to use in the rest of this paper. \section{General framework for the polynomial interpolation} \label{sec:general} \subsection{Preliminaries} Given a mesh $\Delta_m$ as defined in Section~\ref{sec:reformulation}, we introduce the refined mesh $\Delta_{m,k}$ where, for all $j\in\{0,\ldots,m-1\}$ we add $k-1$ points between $t_{j}$ and $t_{j+1}$.
More precisely, we suppose that these points are the Chebyshev points (of the second kind) between $t_{j}$ and $t_{j+1}$, that is we add the following points: \begin{equation*} t_{j,l} \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, t_{j}+\frac{x_l^k+1}{2}(t_{j+1}-t_{j}), \quad \text{for } l = 1,\dots,k-1, \end{equation*} where \begin{equation*} x_l^k \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \cos\theta_l^k,\quad \theta_l^k \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \frac{k-l}{k}\pi,\quad \text{for } l = 0, \dots,k. \end{equation*} Notice that the above definition extends to $t_{j,0}=t_{j}$ and $t_{j,k}=t_{j+1}$, and that $k=1$ corresponds to the mesh used in previous studies with first order interpolation (e.g. see \cite{MR2821596,MR3207723,MR3323206}). We then introduce the subspace $S_{m,k}^n \subset \mathcal{C}^0_{\Delta_{m}}([0,1],\mathbb{R}^n)$ of piecewise polynomial functions of degree $k$ on $\Delta_m$ \begin{equation*} S_{m,k}^n\,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \left\{ u\in\mathcal{C}^0_{\Delta_{m}}([0,1],\mathbb{R}^n): \ u_{ |_{(t_{j},t_{j+1})}} \text{ is a polynomial of degree } k \text{ for all } j=0,\dots,m-1 \right\}. \end{equation*} Next, we define the projection operator \begin{equation*} \Pi_{m,k}^n : \left\{ \begin{aligned} \mathcal{C}^0_{\Delta_{m}}([0,1],\mathbb{R}^n) &\to S_{m,k}^n \\ u &\mapsto \bar u = \Pi_{m,k}^n(u), \end{aligned} \right. \end{equation*} where $\bar u$ is the function in $S_{m,k}^n$ that matches the values of $u$ on the mesh $\Delta_{m,k}$. Notice that $u$ can have discontinuities at the points $t_j$, therefore the matching of $u$ and $\bar u$ at those points must be understood as \begin{equation*} u(t_j^-)=\bar u(t_j^-) \quad \text{and}\quad u(t_j^+)=\bar u(t_j^+). \end{equation*} In the sequel we will need to control the error between a function $u$ and its interpolation $\bar u$.
This is the content of the following propositions, where $\left\Vert\cdot\right\Vert_{\infty}$ denotes the sup norm on $[0,1]$. \begin{proposition} \label{prop:interpolation_error1} For all $u\in\mathcal{C}^{k+1}_{\Delta_{m}}([0,1],\mathbb{R})$, \begin{equation*} \left\Vert (Id-\Pi^1_{m,k})u\right\Vert_{\infty}=\left\Vert u-\bar u\right\Vert_{\infty} \leq C_{k}\max\limits_{0\leq j<m}\left((t_{j+1}-t_{j})^{k+1}\max\limits_{t\in[t_{j},t_{j+1}]}\left\vert \frac{d^{k+1} u}{dt^{k+1}}(t)\right\vert\right), \end{equation*} where \begin{equation*} C_{k} \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \frac{1}{(k+1)!2^{2k}}. \end{equation*} \end{proposition} \begin{proposition} \label{prop:interpolation_error2} Fix $l\in\mathbb{N}$ such that $1\leq l\leq k$. For all $u\in\mathcal{C}^l_{\Delta_{m}}([0,1],\mathbb{R})$, \begin{equation*} \left\Vert (Id-\Pi^1_{m,k})u\right\Vert_{\infty}=\left\Vert u-\bar u\right\Vert_{\infty} \leq \tilde C_{k,l}\max\limits_{0\leq j<m}\left((t_{j+1}-t_{j})^l \max\limits_{t\in[t_{j},t_{j+1}]}\left\vert \frac{d^{l} u}{dt^{l}}(t)\right\vert\right), \end{equation*} where \begin{equation*} \tilde C_{k,l} \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \min\left[\left(1+\Lambda_k\right)\left(\frac{\pi}{4}\right)^l\frac{(k+1-l)!}{(k+1)!},\frac{1}{l!2^l}\sum_{q=0}^{\left[\frac{l-1}{2}\right]}\frac{1}{4^q}\binom{l-1}{2q}\binom{2q}{q}\right], \end{equation*} $\Lambda_k$ being the Lebesgue constant (see for instance~\cite{MR3012510}), and $\left[\frac{l-1}{2}\right]$ denoting the integer part of $\frac{l-1}{2}$. \end{proposition} \begin{remark} {\em More information about the Lebesgue constant, and in particular sharp computable upper bounds for it, can be found in the Appendix, together with references and proofs of the two above propositions. 
} \end{remark} \subsection{Finite dimensional projection} \label{sec:numerical_implementation} To get an approximate zero of $G$ (and thus an approximate solution of~\eqref{eq:general_IVP}), we are going to look for a zero of $\bar G\,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \Pi^n_{m,k} G_{ |S^n_{m,k}}$. But first, we need a convenient way to represent the elements of $S^n_{m,k}$. Here and in the sequel, we use the exponent ${}^{(i)}$ to denote the $i$-th component of a vector in $\mathbb{R}^n$, but we will work with all the components at once as often as possible to avoid burdening the notations with this exponent ${}^{(i)}$. Let us introduce the set of indexes \begin{equation*} \mathcal{E}^n_{m,k} \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \left\lbrace (i,j,l)\in\mathbb{N}^3,\ 1\leq i\leq n,\ 0\leq j\leq m-1,\ 0\leq l\leq k\right\rbrace. \end{equation*} Perhaps the most natural way to characterize an element $\bar u$ of $S^n_{m,k}$ is to give all the values $\bar u^{(i)}(t_{j,l})$ for $(i,j,l)\in \mathcal{E}^n_{m,k}$. However, we will also use another representation, more suited to numerical computations, which consists of decomposing $\bar u$ on the Chebyshev basis. That is, we write \begin{equation} \label{eq:cheb_representation} \bar u^{(i)}(t)=\sum_{l=0}^k \bar u^{(i)}_{j,l} T_{l}\left(\frac{t-t_j}{t_{j+1}-t_j}-\frac{t_{j+1}-t}{t_{j+1}-t_j}\right),\quad \text{for all } j = 0,\dots,m-1 \text{ and } t \in (t_j,t_{j+1}), \end{equation} where $T_{l}$ is the $l$-th Chebyshev polynomial of the first kind. We can thus also describe uniquely any function $u$ belonging to $S^n_{m,k}$ by the family of Chebyshev coefficients $\left(\bar u^{(i)}_{j,l}\right)_{(i,j,l)\in \mathcal{E}^n_{m,k}}$. \begin{remark} \label{rem:why_discontinuous} {\em Let us mention how considering functions with possible discontinuities on the mesh points in $\Delta_m$ comes in handy. 
By restricting ourselves to functions in $\mathcal{C}^0([0,1],\mathbb{R}^n)$, we would need additional constraints on the Chebyshev coefficients to impose the continuity at each of the mesh points $t_j$ ($j = 1,\dots,m-1$) and keep track of them in all computations. Instead, the choice of working with $\mathcal{C}^0_{\Delta_{m}}([0,1],\mathbb{R}^n)$ allows avoiding these additional constraints. } \end{remark} For $u\in\mathcal{C}^0_{\Delta_{m}}([0,1],\mathbb{R}^n)$ we have \begin{align} \label{eq:projected_G} &G(u)\left(t_{j,l}\right) = \sum_{q=0}^{p-1}\tau^q\frac{(t_{j,l}-t_j)^q}{q!} \phi^{[q]}(u(t_j^-))+\tau^p\int_{t_j}^{t_{j,l}} \frac{(t_{j,l}-s)^{p-1}}{(p-1)!} \phi^{[p]}(u(s))ds -u(t_{j,l}), \nonumber\\ & \hspace{8.3cm} \text{for all}~j = 0,\dots,m-1 \text{ and}~l = 0,\dots,k. \end{align} We recall that all the values $G^{(i)}(u)\left(t_{j,l}\right)$ for $(i,j,l)\in\mathcal{E}^n_{m,k}$ uniquely characterize $\Pi^n_{m,k}G(u)$. Using the isomorphisms $\bar u\mapsto \left(\bar u^{(i)}(t_{j,l})\right)_{(i,j,l)\in \mathcal{E}^n_{m,k}}$ and $\bar u\mapsto \left(\bar u^{(i)}_{j,l}\right)_{(i,j,l)\in \mathcal{E}^n_{m,k}}$ to identify $S^n_{m,k}$ and $\mathbb{R}^{nm(k+1)}$, we can in fact see $\bar G\,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \Pi^n_{m,k} G_{ |S^n_{m,k}}$ as a function from $\mathbb{R}^{nm(k+1)}$ to itself, that associates to the coefficients $\left(\bar u^{(i)}_{j,l}\right)_{(i,j,l)\in \mathcal{E}^n_{m,k}}$ the values $\left(G^{(i)}(\bar u)(t_{j,l})\right)_{(i,j,l)\in \mathcal{E}^n_{m,k}}$. Thus we can numerically find a zero $\bar u$ of $\bar G$, which is going to be our approximate solution. We note that we use these identifications between $S^n_{m,k}$ and $\mathbb{R}^{nm(k+1)}$ throughout the present work. Our objective is now to \emph{validate} this numerical solution $\bar u$, that is to prove that within a given neighbourhood of $\bar u$ lies a true zero $u$ of $G$.
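The two identifications used above, nodal values on $\Delta_{m,k}$ versus Chebyshev coefficients, are easy to convert into one another on each subinterval. In practice this conversion is performed with the FFT; the small collocation system below is an equivalent (if slower) sketch, on a single subinterval rescaled to $[-1,1]$ and with an illustrative degree $k$.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

k = 8                                     # illustrative degree
l = np.arange(k + 1)
x = np.cos((k - l) * np.pi / k)           # Chebyshev points of the second kind x_l^k

def values_to_coeffs(vals):
    # Solve the square collocation system sum_q c_q T_q(x_l) = vals_l.
    return np.linalg.solve(C.chebvander(x, k), vals)

def coeffs_to_values(coeffs):
    # Evaluate the Chebyshev series at the nodes x_l^k.
    return C.chebval(x, coeffs)

vals = np.exp(x)                          # nodal values of some smooth function
c = values_to_coeffs(vals)
err = np.max(np.abs(coeffs_to_values(c) - vals))   # round trip: ~ machine precision
```

The coefficient representation also yields the crude sup-norm bound $\max_t \vert \bar u^{(i)}(t)\vert \le \sum_l \vert \bar u^{(i)}_{j,l}\vert$, since $\vert T_l\vert \le 1$ on $[-1,1]$.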
\subsection{Back to a fixed point formulation} \label{sec:operator} We consider the space $\mathcal X^n \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \mathcal{C}^0_{\Delta_{m}}([0,1],\mathbb{R}^n)$ and its decomposition $\mathcal X^n=\mathcal X^n_{m,k}\oplus \mathcal X^n_{\infty}$, where \begin{equation*} \mathcal X^n_{m,k} \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, S_{m,k}^n \quad \text{and}\quad \mathcal X^n_{\infty} \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, (Id-\Pi_{m,k}^n)\mathcal{C}^0_{\Delta_{m}}([0,1],\mathbb{R}^n). \end{equation*} We already have a projection onto $\mathcal X^n_{m,k}$ \begin{equation*} \Pi^n_{m,k}:\left\{ \begin{aligned} \mathcal X^n &\to \mathcal X^n_{m,k} \\ u &\mapsto \bar u = \Pi^n_{m,k}(u), \end{aligned} \right. \end{equation*} and we also define its complementary \begin{equation*} \Pi^n_{\infty}:\left\{ \begin{aligned} \mathcal X^n &\to \mathcal X^n_{\infty} \\ u &\mapsto \Pi^n_{\infty}(u) = u-\bar u = (Id - \Pi^n_{m,k}) (u) . \end{aligned} \right. \end{equation*} We then define the norms \begin{equation*} \left\Vert \Pi^n_{m,k}(u)\right\Vert_{\mathcal X^n_{m,k}}\,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \max\limits_{(i,j,l)\in\mathcal{E}^n_{m,k}}\left\vert \bar u^{(i)}(t_{j,l})\right\vert \quad \text{and} \quad \left\Vert \Pi^n_{\infty}(u)\right\Vert_{\mathcal X^n_{\infty}}\,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \max\limits_{1\leq i\leq n}\left\Vert u^{(i)}-\bar u^{(i)}\right\Vert_{\infty}. 
\end{equation*} On $\mathcal X^n$ we consider the norm \begin{equation*} \left\Vert u\right\Vert_{\mathcal X^n} \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \max\left(\left\Vert \Pi^n_{m,k}(u)\right\Vert_{\mathcal X^n_{m,k}},\frac{1}{r_{\infty}}\left\Vert \Pi^n_{\infty}(u)\right\Vert_{\mathcal X^n_{\infty}}\right), \end{equation*} where $r_{\infty}$ is a positive parameter. Notice that for all $r_{\infty}>0$, $\left(\mathcal X^n,\Vert\cdot\Vert_{\mathcal X^n}\right)$ is a Banach space. For any $r,r_{\infty}>0$, we denote by $B_{\mathcal X^n}(r,r_{\infty})$ the closed neighbourhood of $0$ defined as \begin{equation*} B_{\mathcal X^n}(r,r_{\infty}) = \left\lbrace u\in\mathcal X^n : \left\Vert u\right\Vert_{\mathcal X^n}\leq r \right\rbrace. \end{equation*} Suppose that we now have computed a numerical zero $\bar u$ of $\bar G$. We define $A_{m,k}^{\dag}=D\bar G\left(\bar u\right)$ and consider $A_{m,k}$ an injective numerical inverse of $A_{m,k}^{\dag}$. Finally, we introduce the \emph{Newton-like} operator $T:\mathcal X^n\to \mathcal X^n$ defined by \begin{equation*} T(u) \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \left(\Pi^n_{m,k}-A_{m,k}\Pi^n_{m,k}G\right)u + \Pi^n_{\infty}\left(G(u)+u\right). \end{equation*} Notice that the fixed points of $T$ are in one-to-one correspondence with the zeros of $G$. We now give a finite set of sufficient conditions, that can be rigorously checked on a computer using interval arithmetic, to ensure that $T$ is a contraction on a given ball around $\bar u$. If those conditions are satisfied, the Banach fixed point theorem then yields the existence and local uniqueness of a zero of $G$. This is the content of the following statement (based on \cite{MR1639986}, see also \cite{MR2338393} for a detailed proof).
\begin{theorem} \label{th:rad_pol} Let \begin{equation} \label{eq:def_y} y \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, T (\bar u) - \bar u, \end{equation} and \begin{equation} \label{eq:def_z} z=z(u_1,u_2)\,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, DT(\bar u+u_1)u_2, \quad \text{for all}~u_1,u_2\in B_{\mathcal X^n}(r,r_{\infty}). \end{equation} Assume that we have bounds $Y$ and $Z(r,r_{\infty})$ satisfying \begin{equation} \label{eq:condition_Y} \left\vert \left(\Pi^n_{m,k} y\right)^{(i)}_{j,l} \right\vert \leq Y^{(i)}_{j,l},\quad \text{for all}~(i,j,l)\in\mathcal{E}^n_{m,k}, \end{equation} \begin{equation} \label{eq:condition_Yinfty} \left\Vert \left(\Pi^n_{\infty} y\right)^{(i)}\right\Vert_{\infty} \leq Y^{(i)}_{\infty},\quad \text{for all}~1\leq i\leq n, \end{equation} \begin{equation} \label{eq:condition_Z} \sup\limits_{u_1,u_2\in B_{\mathcal X^n}(r,r_{\infty})} \left\vert \left(\Pi^n_{m,k} z(u_1,u_2)\right)^{(i)}_{j,l} \right\vert \leq Z^{(i)}_{j,l}(r,r_{\infty}),\quad \text{for all}~(i,j,l)\in\mathcal{E}^n_{m,k}, \end{equation} and \begin{equation} \label{eq:condition_Zinfty} \sup\limits_{u_1,u_2\in B_{\mathcal X^n}(r,r_{\infty})} \left\Vert \left(\Pi^n_{\infty} z(u_1,u_2)\right)^{(i)}\right\Vert_{\infty} \leq Z^{(i)}_{\infty}(r,r_{\infty}),\quad \text{for all}~1\leq i\leq n. 
\end{equation} If there exist $r,r_{\infty}>0$ such that \begin{equation} \label{eq:condition_p_finite} p^{(i)}_{j,l}(r,r_{\infty})\,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, Y^{(i)}_{j,l}+Z^{(i)}_{j,l}(r,r_{\infty})-r<0,\quad \text{for all}~(i,j,l)\in\mathcal{E}^n_{m,k} \end{equation} and \begin{equation} \label{eq:condition_p_infty} p^{(i)}_{\infty}(r,r_{\infty})\,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, Y^{(i)}_{\infty}+Z^{(i)}_{\infty}(r,r_{\infty})-r_{\infty} r<0,\quad \text{for all}~1\leq i\leq n, \end{equation} then there exists a unique zero of $G$ within the set $\bar u + B_{\mathcal X^n}(r,r_{\infty}) \subset \mathcal X^n$. \end{theorem} The quantities $p^{(i)}_{j,l}(r,r_{\infty})$ and $p^{(i)}_{\infty}(r,r_{\infty})$ given respectively in \eqref{eq:condition_p_finite} and \eqref{eq:condition_p_infty} are called the {\em radii polynomials}. In the next section, we show how to obtain bounds $Y$ and $Z$ satisfying~\eqref{eq:condition_Y}-\eqref{eq:condition_Zinfty}. Before doing so, let us make a quick remark about the different representations and norms we can use on $\mathcal X^n_{m,k}$. \begin{remark} {\em As explained in Section~\ref{sec:numerical_implementation}, in practice we will mostly work with $\bar u\in \mathcal X^n_{m,k}$ represented by its Chebyshev coefficients as in~\eqref{eq:cheb_representation}. However, there are going to be instances where the values $\bar u^{(i)}(t_{j,l})$ are needed, for instance to compute $\left\Vert \bar u\right\Vert_{\mathcal X^n_{m,k}}$. We point out that numerically, passing from one representation to the other can be done easily by using the Fast Fourier Transform. One other important point is that, at some point in the next section we are going to need upper bounds for $\left\Vert \bar u^{(i)}\right\Vert_{\infty}$. 
To get such a bound from our finite dimensional data, we have two options, namely \begin{equation} \label{eq:bound_infty_from_sum} \max\limits_{t\in[t_j,t_{j+1}]}\left\vert \bar u^{(i)}(t)\right\vert \leq \sum_{l=0}^k \left\vert \bar u^{(i)}_{j,l} \right\vert,\quad \text{for all}~j=0,\dots,m-1, \end{equation} or \begin{equation} \label{eq:bound_infty_from_max} \max\limits_{t\in[t_j,t_{j+1}]}\left\vert \bar u^{(i)}(t)\right\vert \leq \Lambda_k \max\limits_{0\leq l\leq k} \left\vert \bar u^{(i)}(t_{j,l}) \right\vert,\quad \text{for all}~j = 0,\dots,m-1. \end{equation} If $\bar u$ is given, then~\eqref{eq:bound_infty_from_sum} is usually better, whereas~\eqref{eq:bound_infty_from_max} is better if $\bar u$ is any function in a given ball of $\mathcal X^n_{m,k}$. Notice that~\eqref{eq:bound_infty_from_sum} simply follows from the fact that the Chebyshev polynomials satisfy $\vert T_l(t)\vert \leq 1$ for all $t\in[-1,1]$ and all $l\in\mathbb{N}$. For more information about the bound~\eqref{eq:bound_infty_from_max}, see the Appendix and the references therein. } \end{remark} \section{Formula for the bounds} \label{sec:bounds} In this section, we give formulas for $Y^{(i)}_{j,l}$, $Y^{(i)}_{\infty}$, $Z^{(i)}_{j,l}$ and $Z^{(i)}_{\infty}$ satisfying the assumptions~\eqref{eq:condition_Y}-\eqref{eq:condition_Zinfty} of Theorem~\ref{th:rad_pol}. To make the exposition clearer, we focus strictly on the derivation of the different bounds in this section. In particular, the discussion of the impact of the level of \emph{a priori bootstrap} (that is, the value of $p$) and of the order of polynomial approximation (that is, the value of $k$) is deferred to Section~\ref{sec:para}. \subsection{The \boldmath$Y$\unboldmath~bounds} In this section we derive the $Y$ bounds, which measure the \emph{defect} associated with a numerical solution $\bar u$, that is how close $G(\bar u)$ is to $0$. We start with the \emph{finite dimensional} part.
\begin{proposition} \label{prop:Y_finite} Let $y$ be defined as in~\eqref{eq:def_y} and consider \begin{equation*} Y^{(i)}_{j,l} \geq \left\vert \left( A_{m,k}\bar G(\bar u) \right)^{(i)}_{j,l} \right\vert, \quad \text{for all}~(i,j,l)\in\mathcal{E}^n_{m,k}, \end{equation*} where $\bar G(\bar u)$ is here seen as the vector $\left(G^{(i)}(\bar u)(t_{j,l})\right)_{(i,j,l)\in \mathcal{E}^n_{m,k}}$. Then~\eqref{eq:condition_Y} holds. \end{proposition} \begin{proof} Simply notice that $\Pi^n_{m,k} y=-A_{m,k}\Pi^n_{m,k} G(\bar u)$. \end{proof} \begin{remark} \label{rem:error_integrals} {\em The above bound is not completely satisfactory, in the sense that it is not directly implementable. Indeed, to compute $Y^{(i)}_{j,l}$ we need to evaluate (or at least to bound) the quantities $G^{(i)}(\bar u)(t_{j,l})$. In particular (see~\eqref{eq:projected_G}), we need to evaluate the integrals \begin{equation*} \int_{t_j}^{t_{j,l}} (t_{j,l}-s)^{p-1} \phi^{[p]}(\bar u(s))ds=\left(\frac{t_{j+1}-t_j}{2}\right)^p\int_{-1}^{x^k_l}(x^k_l-s)^{p-1}\Psi(s)ds, \end{equation*} where \begin{equation*} \Psi(s) \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \phi^{[p]}\left(\sum_{l'=0}^{k}\bar u_{j,l'} T_{l'}(s)\right). \end{equation*} If $\phi$ is a non polynomial vector field, we use a Taylor approximation of order $k_0$ of $\Psi$ to get an approximate value of the integral by computing \begin{equation*} \sum_{q=0}^{k_0}\frac{1}{q!}\frac{d^q\Psi}{ds^q}(0)\int_{-1}^{x^k_l}(x^k_l-s)^{p-1}s^qds. \end{equation*} Notice that this quantity can be evaluated explicitly. The error made in this approximation is then controlled as follows \begin{align*} &\left\Vert \int_{-1}^{x^k_l}(x^k_l-s)^{p-1}\Psi(s)ds - \sum_{q=0}^{k_0}\frac{1}{q!}\frac{d^q\Psi}{ds^q}(0)\int_{-1}^{x^k_l}(x^k_l-s)^{p-1}s^qds \right\Vert \leq \\ &\hspace{7cm} \frac{1}{(k_0+1)!}\max_{s\in[-1,1]}\left\| \frac{d^{k_0+1}\Psi}{ds^{k_0+1}}(s)\right\| \int_{-1}^{x_l^k}(x_l^k-s)^{p-1}\vert s\vert^{k_0+1}ds.
\end{align*} Notice that this error term is effective, since $\max_{s\in[-1,1]}\left\Vert\frac{d^{k_0+1}\Psi}{ds^{k_0+1}}(s)\right\Vert$ can be bounded using interval arithmetic. Therefore, the quantity $Y^{(i)}_{j,l}$ that we end up implementing is of the form \begin{equation*} Y^{(i)}_{j,l} = \left| \left( A_{m,k}\hat G(\bar u) \right)^{(i)}_{j,l} \right| + \left\vert \left( \left\vert A_{m,k}\right\vert G_{\epsilon}(\bar u) \right)^{(i)}_{j,l} \right\vert, \end{equation*} where the vector $\hat G(\bar u)$ contains the approximate integrals and the vector $G_{\epsilon}(\bar u)$ contains the error bounds for these approximations. Here and in the sequel, absolute values applied to a matrix, like $\vert A_{m,k}\vert$, must be understood component-wise. We point out that, in practice, if the mesh $\Delta_m$ is refined enough, then $\bar u$ is not going to vary much on each subinterval $[t_j,t_{j+1}]$, and thus we can get rather precise approximations even with a lower order $k_0$ for the Taylor expansion. We mention that when the vector field $\phi$ is polynomial, $\Psi$ has a finite Taylor expansion, and therefore the integrals can in fact be computed exactly (i.e. we can get $G_{\epsilon}(\bar u)=0$). } \end{remark} We now turn our attention to the second part of the $Y$ bound. \begin{proposition} \label{prop:Y_infty} Let $y$ be defined as in~\eqref{eq:def_y} and consider \begin{equation*} Y^{(i)}_{\infty} \geq C_k \tau^p \max\limits_{0\leq j<m}\left((t_{j+1}-t_{j})^{k+1}\max\limits_{t\in[t_{j},t_{j+1}]}\left\vert \frac{d^{k+1-p}}{dt^{k+1-p}}(\phi^{[p]})^{(i)}(\bar u(t))\right\vert\right),\quad \text{for all}~1\leq i\leq n. \end{equation*} Then~\eqref{eq:condition_Yinfty} holds. \end{proposition} \begin{proof} We have $\Pi^n_{\infty}y=\Pi^n_{\infty}(G(\bar u)+\bar u)=\Pi^n_{\infty}g(\bar u)$.
Since $\frac{d^p}{dt^p}g(\bar u)=\tau^p \phi^{[p]}(\bar u)$, we have that $\frac{d^{k+1}}{dt^{k+1}}g(\bar u)=\tau^p \frac{d^{k+1-p}}{dt^{k+1-p}}\phi^{[p]}(\bar u)$ and Proposition~\ref{prop:interpolation_error1} yields \begin{align*} \left\Vert\left(\Pi^n_{\infty}y\right)^{(i)} \right\Vert_{\infty} &\leq C_k \tau^p \max\limits_{0\leq j<m}\left((t_{j+1}-t_{j})^{k+1}\max\limits_{t\in[t_{j},t_{j+1}]}\left\vert \frac{d^{k+1-p}}{dt^{k+1-p}}(\phi^{[p]})^{(i)}(\bar u(t))\right\vert\right). \qedhere \end{align*} \end{proof} \begin{remark} \label{rem:error_max} {\em A comment similar to the one in Remark~\ref{rem:error_integrals} applies here. Indeed, the bound given in Proposition~\ref{prop:Y_infty} is not directly implementable because of the term \begin{equation*} \max\limits_{t\in[t_{j},t_{j+1}]}\left\vert \frac{d^{k+1-p}}{dt^{k+1-p}}(\phi^{[p]})^{(i)}(\bar u(t))\right\vert, \end{equation*} but we can again get an explicit bound for this quantity by using a low order Taylor approximation and interval arithmetic. In the particular case where the vector field $\phi$ is polynomial, an explicit bound can also be obtained \emph{via} the Chebyshev coefficients of the polynomial $\frac{d^{k+1-p}}{dt^{k+1-p}}(\phi^{[p]})^{(i)}(\bar u)$, as in~\eqref{eq:bound_infty_from_sum}. } \end{remark} \subsection{The \boldmath$Z$\unboldmath~bounds} In this section we derive the $Z$ bounds, which measure the contraction rate of $T$ on the ball of radius $r$ around $\bar u$. We begin with the finite dimensional part, that is the projection on $\mathcal X^n_{m,k}$. Let $z$ be defined as in~\eqref{eq:def_z}.
Then \begin{align*} \Pi^n_{m,k} z =\ & \Pi^n_{m,k}\left(DT(\bar u+u_1)u_2\right)\\ =\ & \Pi^n_{m,k} u_2 -A_{m,k}\Pi^n_{m,k}\left(DG(\bar u+u_1)u_2\right) \\ =\ & \Pi^n_{m,k} u_2 -A_{m,k}D\Pi^n_{m,k}G(\bar u+u_1) u_2 \\ =\ & \left(Id-A_{m,k}A_{m,k}^{\dag}\right)\Pi^n_{m,k} u_2 -A_{m,k}\left(D\Pi^n_{m,k}G(\bar u+u_1)u_2-A_{m,k}^{\dag}\Pi^n_{m,k} u_2\right) \\ =\ & \left(Id-A_{m,k}A_{m,k}^{\dag}\right)\Pi^n_{m,k} u_2 -A_{m,k}\left(D\Pi^n_{m,k}G(\bar u)u_2-A_{m,k}^{\dag}\Pi^n_{m,k} u_2\right) \\ & -A_{m,k}\left(D\Pi^n_{m,k}G(\bar u+u_1)-D\Pi^n_{m,k}G(\bar u)\right)u_2, \end{align*} where $A_{m,k}$ and $A_{m,k}^{\dag}$ are defined as in Section~\ref{sec:operator}. We are going to bound each term separately as \begin{align} \nonumber \left\vert \left(\Pi^n_{m,k} z\right)^{(i)}_{j,l}\right\vert \leq\ & \underbrace{\left\vert \left(\left(Id-A_{m,k}A_{m,k}^{\dag}\right)\Pi^n_{m,k} u_2\right)^{(i)}_{j,l}\right\vert}_{\leq \left(Z_0(r)\right)^{(i)}_{j,l}} + \underbrace{\left\vert \left(A_{m,k}\left(D\Pi^n_{m,k}G(\bar u)u_2-A_{m,k}^{\dag}\Pi^n_{m,k} u_2\right)\right)^{(i)}_{j,l}\right\vert}_{\leq \left(Z_1(r,r_{\infty})\right)^{(i)}_{j,l}} \\ & + \underbrace{\left\vert \left(A_{m,k}\left(D\Pi^n_{m,k}G(\bar u+u_1)-D\Pi^n_{m,k}G(\bar u)\right)u_2\right)^{(i)}_{j,l}\right\vert}_{\leq \left(Z_2(r,r_{\infty})\right)^{(i)}_{j,l}}. \label{eq:Z_bounds_splitting} \end{align} \subsubsection{The bound \boldmath$Z_0(r)$\unboldmath} The computation of the bounds $\left(Z_0(r)\right)^{(i)}_{j,l}$ estimating the first of the terms in the splitting \eqref{eq:Z_bounds_splitting} is rather straightforward and is simply a control on the precision of the numerical inverse. 
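For readers who wish to experiment, the componentwise control behind $Z_0(r)$ can be illustrated by the following small sketch. It is only schematic: it uses plain floating point arithmetic instead of the interval arithmetic (INTLAB) used in the actual proofs, and a small random matrix stands in for $A^{\dag}_{m,k}$.

```python
import numpy as np

# Illustrative sketch only: plain floats instead of interval arithmetic,
# and a made-up matrix standing in for A^dag_{m,k}.  Z_0(r) is a
# componentwise control on the defect of the numerical inverse A_{m,k}.
rng = np.random.default_rng(0)
N = 50                                   # stands in for n*m*(k+1)
E = 0.1 * rng.standard_normal((N, N))
A_dag = np.eye(N) + E                    # hypothetical matrix to be inverted
A = np.eye(N) - E                        # crude (first-order Neumann) numerical inverse

r = 1e-3
ones = np.ones(N)                        # the vector 1_{X^n_{m,k}}
Z0 = np.abs(np.eye(N) - A @ A_dag) @ ones * r   # componentwise Z_0(r)

# For any u2 whose components are all bounded by r, the residual term
# (Id - A A^dag) u2 is controlled componentwise by Z0.
u2 = r * (2.0 * rng.random(N) - 1.0)
assert np.all(np.abs((np.eye(N) - A @ A_dag) @ u2) <= Z0)
```

The point of the deliberately crude inverse is that the residual $Id-A_{m,k}A^{\dag}_{m,k}$ is visibly nonzero, as it is in practice for large systems.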
\begin{proposition} \label{prop:Z_0} Let $u_2\in B_{\mathcal X^n}(r,r_{\infty})$, define the vector $\mathbf{1}_{\mathcal X^n_{m,k}}\in\mathbb{R}^{nm(k+1)}$ by $\left(\mathbf{1}_{\mathcal X^n_{m,k}}\right)^{(i)}_{j,l}=1$ for all $(i,j,l)\in\mathcal{E}^n_{m,k}$ and let \begin{equation*} \left(Z_0(r)\right)^{(i)}_{j,l} \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \left(\left\vert Id-A_{m,k}A_{m,k}^{\dag}\right\vert\mathbf{1}_{\mathcal X^n_{m,k}}\right)^{(i)}_{j,l}r,\quad \text{for all}~(i,j,l)\in\mathcal{E}^n_{m,k}. \end{equation*} Then, \begin{equation*} \left\vert \left(\left(Id-A_{m,k}A_{m,k}^{\dag}\right)\Pi^n_{m,k} u_2\right)^{(i)}_{j,l}\right\vert \leq \left(Z_0(r)\right)^{(i)}_{j,l}, \quad \text{for all}~(i,j,l)\in\mathcal{E}^n_{m,k}. \end{equation*} \end{proposition} \subsubsection{The bound \boldmath$Z_1(r,r_\infty)$\unboldmath} We now construct the bounds $\left(Z_1(r,r_\infty)\right)^{(i)}_{j,l}$ estimating the second term in the splitting \eqref{eq:Z_bounds_splitting}. \begin{proposition} \label{prop:Z_1} Let $u_2\in B_{\mathcal X^n}(r,r_{\infty})$, consider $\rho=\left(\rho^{(i)}_{j,l}\right)_{(i,j,l)\in\mathcal{E}^n_{m,k}}$ such that \begin{equation*} \rho^{(i)}_{j,l} \geq r_{\infty}r\frac{\tau^p}{p!}(t_{j,l}-t_j)^{p}\max\limits_{s\in[t_j,t_{j+1}]}\left\vert D(\phi^{[p]})^{(i)}(\bar u(s))\right\vert \mathbf{1}_n, \quad \text{for all}~(i,j,l)\in\mathcal{E}^n_{m,k}, \end{equation*} where $\mathbf{1}_{n}$ is the vector of size $n$ whose components all are equal to $1$. Let \begin{equation*} Z_1(r,r_{\infty})\,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \left\vert A_{m,k}\right\vert \rho. \end{equation*} Then \begin{equation*} \left\vert \left(A_{m,k}\left(D\Pi^n_{m,k}G(\bar u)u_2-A_{m,k}^{\dag}\Pi^n_{m,k} u_2\right)\right)^{(i)}_{j,l}\right\vert \leq \left(Z_1(r,r_{\infty})\right)^{(i)}_{j,l}, \quad \text{for all}~(i,j,l)\in\mathcal{E}^n_{m,k}. 
\end{equation*} \end{proposition} \begin{proof} By definition of $A_{m,k}^{\dag}$ and $\bar G$, we have that \begin{equation*} D\Pi^n_{m,k}G(\bar u)\Pi^n_{m,k} u_2 = A_{m,k}^{\dag}\Pi^n_{m,k} u_2,\quad \text{for all}~u_2\in \mathcal X^n. \end{equation*} Therefore, we can rewrite \begin{align*} A_{m,k}\left(D\Pi^n_{m,k}G(\bar u)u_2-A_{m,k}^{\dag}\Pi^n_{m,k} u_2\right) &= A_{m,k} D\Pi^n_{m,k}G(\bar u)\left(u_2-\Pi^n_{m,k} u_2\right)\\ &= A_{m,k} D\Pi^n_{m,k}G(\bar u)\Pi^n_{\infty} u_2, \end{align*} and we only need to prove that \begin{equation*} \left\vert D\Pi^n_{m,k}G(\bar u)\Pi^n_{\infty} u_2\right\vert^{(i)}_{j,l} \leq \rho^{(i)}_{j,l}, \quad \text{for all}~(i,j,l)\in\mathcal{E}^n_{m,k}. \end{equation*} Remembering~\eqref{eq:projected_G} and using that $\Vert \Pi^n_{\infty} u_2\Vert_{\mathcal X^n_{\infty}}\leq r_{\infty}r$, we estimate for all $(i,j,l)\in\mathcal{E}^n_{m,k}$, \begin{align*} \left\vert D\Pi^n_{m,k}G(\bar u)\Pi^n_{\infty} u_2\right\vert^{(i)}_{j,l} &\leq r_{\infty}r\tau^p\int_{t_j}^{t_{j,l}}\frac{(t_{j,l}-s)^{p-1}}{(p-1)!}\left\vert D(\phi^{[p]})^{(i)}(\bar u(s))\right\vert \mathbf{1}_{n} ds \nonumber\\ &\leq r_{\infty}r\frac{\tau^p}{p!}(t_{j,l}-t_j)^{p}\max\limits_{s\in[t_j,t_{j+1}]}\left\vert D(\phi^{[p]})^{(i)}(\bar u(s))\right\vert \mathbf{1}_n. \end{align*} \end{proof} Notice that Remark~\ref{rem:error_max} also applies here. \begin{remark} \label{rem:sharper_estimates} {\em Had we used the operator $\tilde g$ (see~\eqref{eq:tilde_g_original}) instead of $g$ (see~\eqref{eq:piecewise_g}), we would have gotten a bound like \begin{align*} \left\vert D\Pi^n_{m,k}G(\bar u)\Pi^n_{\infty} u_2\right\vert^{(i)}_{j,l} &\leq r_{\infty}r\tau^p\int_{0}^{t_{j,l}}\frac{(t_{j,l}-s)^{p-1}}{(p-1)!}\left\vert D(\phi^{[p]})^{(i)}(\bar u(s))\right\vert \mathbf{1}_{n} ds, \end{align*} which is clearly worse because one has to consider the whole integral from $0$ to $t_{j,l}$ instead of just from $t_j$ to $t_{j,l}$.
} \end{remark} \subsubsection{The bound \boldmath$Z_2(r,r_\infty)$\unboldmath} We finally construct the bounds $\left(Z_2(r,r_\infty)\right)^{(i)}_{j,l}$ estimating the last term in the splitting \eqref{eq:Z_bounds_splitting}. \begin{proposition} \label{prop:Z_2} Let $u_1,u_2\in B_{\mathcal X^n}(r,r_{\infty})$, and consider $\varrho=\left(\varrho^{(i)}_{j,l}\right)_{(i,j,l)\in\mathcal{E}^n_{m,k}}$ such that \begin{align*} \varrho^{(i)}_{j,l} \geq & \sum_{q=1}^{p-1}\frac{\tau^q}{q!}(t_{j,l}-t_j)^q \sum_{\delta=0}^{q(d-1)-1}\frac{1}{(1+\delta)!}\left\vert D^{2+\delta}(\phi^{[q]})^{(i)}(\bar u(t_j^-))\right\vert\left(\mathbf{1}_n^{2+\delta}\right)r^{2+\delta} \\ & \ +\frac{\tau^p}{p!}(t_{j,l}-t_j)^p \sum_{\delta=0}^{p(d-1)-1}\frac{1}{(1+\delta)!}\max\limits_{s\in[t_j,t_{j+1}]}\left\vert D^{2+\delta}(\phi^{[p]})^{(i)}(\bar u(s))\right\vert\left(\mathbf{1}_n^{2+\delta}\right)((\Lambda_k+r_{\infty})r)^{2+\delta}. \end{align*} Let \begin{equation*} Z_2(r,r_{\infty}) \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \left\vert A_{m,k}\right\vert \varrho. \end{equation*} Then \begin{equation*} \left\vert \left(A_{m,k}\left(D\Pi^n_{m,k}G(\bar u+u_1)-D\Pi^n_{m,k}G(\bar u)\right)u_2\right)^{(i)}_{j,l}\right\vert \leq \left(Z_2(r,r_{\infty})\right)^{(i)}_{j,l}, \quad \text{for all}~(i,j,l)\in\mathcal{E}^n_{m,k}. \end{equation*} \end{proposition} \begin{remark} In the above proposition, $\left\vert D^{2+\delta}(\phi^{[p]})^{(i)}(\bar u(s))\right\vert\left(\mathbf{1}_n^{2+\delta}\right)$ must be understood as the evaluation of the $(2+\delta)$-linear form $\left\vert D^{2+\delta}(\phi^{[p]})^{(i)}(\bar u(s))\right\vert$ at the vectors $(\mathbf{1}_n,\ldots,\mathbf{1}_n)$, that is \begin{align*} \left\vert D^{2+\delta}(\phi^{[p]})^{(i)}(\bar u(s))\right\vert\left(\mathbf{1}_n^{2+\delta}\right) = \sum_{1\leq j_1,\ldots, j_{\delta+2}\leq n}\left\vert\partial_{j_1\ldots j_{\delta+2}}(\phi^{[p]})^{(i)}(\bar u(s))\right\vert .
\end{align*} \end{remark} \begin{proof} ({\em of Proposition~\ref{prop:Z_2}}) We only have to prove that \begin{equation*} \left\vert \left( \left(D\Pi^n_{m,k}G(\bar u+u_1)-D\Pi^n_{m,k}G(\bar u)\right)u_2\right)^{(i)}_{j,l} \right\vert \leq \varrho^{(i)}_{j,l}, \quad \text{for all}~(i,j,l)\in\mathcal{E}^n_{m,k}. \end{equation*} Using~\eqref{eq:bound_infty_from_max} we have that \begin{equation*} \Vert u_2^{(i)}\Vert_{\infty} \leq \Vert \Pi^1_{m,k} u_2^{(i)}\Vert_{\infty}+\Vert \Pi^1_{\infty} u_2^{(i)}\Vert_{\infty} \leq (\Lambda_k+r_{\infty})r. \end{equation*} Then we estimate for all $(i,j,l)\in\mathcal{E}^n_{m,k}$, \begin{align*} &\left\vert \left(D\Pi^n_{m,k}G(\bar u+u_1)-D\Pi^n_{m,k}G(\bar u)\right)u_2\right\vert^{(i)}_{j,l} \\ & \leq \sum_{q=1}^{p-1}\frac{\tau^q}{q!}(t_{j,l}-t_j)^q \left\vert \left(D(\phi^{[q]})^{(i)}(\bar u(t_j^-)+u_1(t_j^-)) - D(\phi^{[q]})^{(i)}(\bar u(t_j^-))\right)(u_2(t_j^-))\right\vert \\ & \quad +\tau^p\int_{t_j}^{t_{j,l}} \frac{(t_{j,l}-s)^{p-1}}{(p-1)!}\left\vert\left(D(\phi^{[p]})^{(i)}(\bar u(s)+u_1(s))-D(\phi^{[p]})^{(i)}(\bar u(s))\right)(u_2(s))\right\vert ds \\ & \leq \sum_{q=1}^{p-1}\frac{\tau^q}{q!}(t_{j,l}-t_j)^q \sum_{\delta=0}^{q(d-1)-1}\frac{1}{(1+\delta)!}\left\vert D^{2+\delta}(\phi^{[q]})^{(i)}(\bar u(t_j^-))\right\vert\left(\mathbf{1}_n^{2+\delta}\right)r^{2+\delta} \\ & \quad +\frac{\tau^p}{p!}(t_{j,l}-t_j)^p \sum_{\delta=0}^{p(d-1)-1}\frac{1}{(1+\delta)!}\max\limits_{s\in[t_j,t_{j+1}]}\left\vert D^{2+\delta}(\phi^{[p]})^{(i)}(\bar u(s))\right\vert\left(\mathbf{1}_n^{2+\delta}\right)((\Lambda_k+r_{\infty})r)^{2+\delta}. \qedhere \end{align*} \end{proof} Notice that Remark~\ref{rem:error_max} also applies here. \subsubsection{The \boldmath$Z_\infty$\unboldmath~bound} We are left with the \emph{remainder part} of the $Z$ bound, which we treat in this section. \begin{proposition} \label{prop:Z_infty} Let $u_1,u_2\in B_{\mathcal X^n}(r,r_{\infty})$ and $z$ as in~\eqref{eq:def_z}. 
Define for all $i\in\{1,\ldots,n\}$ \begin{align*} &Z^{(i)}_{\infty}(r,r_{\infty})\geq\\ &\quad \tau^p C^{opt}_{k,p} \sum_{\delta=0}^{p(d-1)}\max\limits_{0\leq j<m}\left((t_{j+1}-t_{j})^p\frac{1}{\delta!}\max\limits_{t\in[t_{j},t_{j+1}]}\left\vert D^{1+\delta}(\phi^{[p]})^{(i)}(\bar u(t))\right\vert\left(\mathbf{1}_n^{1+\delta}\right)\right)\left((\Lambda_k+r_{\infty})r\right)^{1+\delta}, \end{align*} where $C^{opt}_{k,p}$ is one of the two constants given by Propositions~\ref{prop:interpolation_error1} and \ref{prop:interpolation_error2}, namely \begin{equation*} C^{opt}_{k,p} = \begin{cases} C_k &\text{if } p=k+1,\\ \tilde C_{k,p} &\text{if } p\leq k. \end{cases} \end{equation*} Then~\eqref{eq:condition_Zinfty} holds. \end{proposition} \begin{proof} We need to estimate \begin{equation*} \Pi^n_{\infty}z = \Pi^n_{\infty}\left(DT(\bar u+u_1)u_2\right) = \Pi^n_{\infty}\left(Dg(\bar u+u_1)u_2\right). \end{equation*} For any continuous function $\gamma$, one has \begin{equation*} \frac{d^{p}}{dt^{p}}\int_{t_j}^t \frac{(t-s)^{p-1}}{(p-1)!} \gamma(s) ds = \gamma(t), \end{equation*} thus we get, for all $1\leq i\leq n$ \begin{align*} &\left\Vert \Pi^n_{\infty}\left(Dg^{(i)}(\bar u+u_1)u_2\right) \right\Vert_{\infty} \\ &\quad = \left\Vert \Pi^n_{\infty}\left(t\mapsto \tau^p\int_{t_j}^t \frac{(t-s)^{p-1}}{(p-1)!} D(\phi^{[p]})^{(i)}(\bar u(s)+u_1(s))u_2(s)ds\right) \right\Vert_{\infty}\\ &\quad \leq \tau^p C^{opt}_{k,p} \max\limits_{0\leq j<m}\left((t_{j+1}-t_{j})^p\max\limits_{t\in[t_{j},t_{j+1}]}\left\vert D(\phi^{[p]})^{(i)}(\bar u(t)+u_1(t))u_2(t)\right\vert\right) \\ &\quad \leq \tau^p C^{opt}_{k,p} \sum_{\delta=0}^{p(d-1)}\max\limits_{0\leq j<m}\left((t_{j+1}-t_{j})^p\frac{1}{\delta!}\max\limits_{t\in[t_{j},t_{j+1}]}\left\vert D^{1+\delta}(\phi^{[p]})^{(i)}(\bar u(t))\right\vert\left(\mathbf{1}_n^{1+\delta}\right)\right)\left((\Lambda_k+r_{\infty})r\right)^{1+\delta}. \qedhere \end{align*} \end{proof} Notice that Remark~\ref{rem:error_max} also applies here. 
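All of the $Z$ bounds above involve derivatives of the bootstrapped vector field $\phi^{[p]}$. As a concrete illustration, the following sketch computes $\phi^{[2]}=D\phi\,\phi$ for the Lorenz field considered in Section~\ref{sec:applications}, and sanity-checks it against a directional finite difference. This is only an illustrative computation in plain floating point, not part of the validation code, and the initial point is the one from~\eqref{eq:initial_value}.

```python
import numpy as np

# Illustrative sketch: phi^[2] = D(phi) phi is the field whose flow gives the
# second time-derivative of a solution, as used by the a priori bootstrap.
# Parameters are the standard Lorenz values (sigma, beta, rho) = (10, 8/3, 28).
sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0

def phi(u):
    x, y, z = u
    return np.array([sigma * (y - x), rho * x - y - x * z, -beta * z + x * y])

def Dphi(u):                              # Jacobian of the Lorenz field
    x, y, z = u
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

def phi2(u):                              # phi^[2] = Dphi(u) phi(u)
    return Dphi(u) @ phi(u)

u = np.array([-14.68, -11.0, 37.67])      # initial value from the examples section
h = 1e-7
fd = (phi(u + h * phi(u)) - phi(u)) / h   # directional finite difference of phi
assert np.allclose(phi2(u), fd, rtol=1e-4, atol=1e-3)
```

Since the Lorenz field is quadratic, the finite difference agrees with $\phi^{[2]}$ up to an $O(h)$ error, which the tolerances above comfortably absorb.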
\subsection{The radii polynomials and interval arithmetic} The following proposition sums up what has been proven up to now in this section, namely that we have derived bounds that satisfy the requirements~\eqref{eq:condition_Y} to \eqref{eq:condition_Zinfty} from Theorem~\ref{th:rad_pol}. \begin{proposition} Let $y$ and $z$ be defined as in~\eqref{eq:def_y} and \eqref{eq:def_z}. Then, the bound defined in Proposition~\ref{prop:Y_finite} satisfies~\eqref{eq:condition_Y} and the one from Proposition~\ref{prop:Y_infty} satisfies \eqref{eq:condition_Yinfty}. Also, consider the bounds defined in Propositions~\ref{prop:Z_0} to \ref{prop:Z_2}. Then \begin{equation*} Z^{(i)}_{j,l}(r,r_{\infty})=\left(Z_0(r)\right)^{(i)}_{j,l}+ \left(Z_1(r,r_{\infty})\right)^{(i)}_{j,l}+\left(Z_2(r,r_{\infty})\right)^{(i)}_{j,l}, \end{equation*} satisfies~\eqref{eq:condition_Z} and finally the bound from Proposition~\ref{prop:Z_infty} satisfies~\eqref{eq:condition_Zinfty}. \end{proposition} Notice that, the way these bounds are defined, they are polynomials in $r$ and $r_{\infty}$, whose coefficients are all positive and can be computed explicitly with the help of the computer, since they depend on the numerical data of an approximate solution $\bar u$. Also, we make sure to control possible round-off errors by using interval arithmetic (in our case INTLAB~\cite{Ru99a}). In practice, we first consider $r_{\infty}$ so that it satisfies the constraint~\eqref{eq:constraint_rinfty} introduced in the next section. If no such positive $r_{\infty}$ exists, we increase $m$ and/or $k$ and/or $p$ and try again. Once $r_{\infty}$ is fixed, we try to find a positive $r$ such that the last conditions~\eqref{eq:condition_p_finite} and~\eqref{eq:condition_p_infty} of Theorem~\ref{th:rad_pol} hold. If there is no such positive $r$, we increase $m$ and/or $k$ and/or $p$ and try again.
If we finally find a positive $r$ satisfying \eqref{eq:condition_p_finite} and~\eqref{eq:condition_p_infty}, then we have proven that Theorem~\ref{th:rad_pol} applies, that is, there exists a unique zero of $G$ in $B_{\mathcal X^n}(r,r_{\infty})$. In Sections~\ref{sec:applications} and~\ref{sec:ABC}, we give several examples where the procedure described just above is successfully used to validate solutions of an initial value problem, as well as periodic solutions and heteroclinic orbits. But before doing so, we discuss in the next section the role of the parameters $k$, $m$ and $p$, and how they influence the bounds. \section{About the choice of the parameters} \label{sec:para} In this section, we explain how the parameters $k$, $m$ and $p$ should be chosen, and in particular we highlight how the \emph{a priori bootstrap} (that is taking $p\geq 2$) helps improve the efficiency of the computer-assisted procedure we propose. The discussion will be rather informal, but we hope it helps the reader understand the results of the various comparisons presented in Section~\ref{sec:applications}. Also, to make things slightly simpler we assume here that the grid $\Delta_m$ is uniform, therefore in the estimates each instance of $t_{j+1}-t_j$ can be replaced by $\frac{1}{m}$. Our main constraint is that we want the method to be successful while minimizing the size of our numerical data, that is the dimension of our finite dimensional space $\mathcal X^n_{m,k}$, which is $nm(k+1)$. Since $n$ is fixed by the dimension of the vector field $\phi$, we want to minimize the product $m(k+1)$. As we see in the examples of Section~\ref{sec:applications}, the usual limiting factor when trying to satisfy the radii polynomial inequalities \eqref{eq:condition_p_finite} and~\eqref{eq:condition_p_infty} is to get the order one term (in $r$) to be negative.
For the finite part (that is \eqref{eq:condition_p_finite}), that basically means having \begin{equation*} r_{\infty}\frac{\alpha}{p!}\left(\frac{\tau}{m}\right)^p<1, \end{equation*} (see Proposition~\ref{prop:Z_1}), and for the remainder part (that is \eqref{eq:condition_p_infty}) we get a condition like \begin{equation*} C^{opt}_{k,p}\left(\frac{\tau}{m}\right)^p\beta(\Lambda_k+r_{\infty})<r_{\infty}, \end{equation*} where $\alpha$ and $\beta$ are constants depending on the numerical solution $\bar u$ and on the vector field $\phi$, but not on the parameters $k$, $m$ and $p$ that we can tune. This leads to \begin{equation} \label{eq:constraint_rinfty} \beta C^{opt}_{k,p}\Lambda_k\left(\frac{\tau}{m}\right)^p < \left(1-\beta C^{opt}_{k,p}\left(\frac{\tau}{m}\right)^p\right)r_{\infty} < \left(1-\beta C^{opt}_{k,p}\left(\frac{\tau}{m}\right)^p\right)\frac{p!}{\alpha}\left(\frac{m}{\tau}\right)^p. \end{equation} We want to be able to choose an $r_{\infty}$ satisfying the above inequalities, and a necessary and sufficient condition for that is \begin{equation*} \beta C^{opt}_{k,p}\Lambda_k\left(\frac{\tau}{m}\right)^p < \left(1-\beta C^{opt}_{k,p}\left(\frac{\tau}{m}\right)^p\right)\frac{p!}{\alpha}\left(\frac{m}{\tau}\right)^p, \end{equation*} which we can rewrite as \begin{equation} \label{eq:necessary_condition} \left(\frac{\tau}{m}\right)^p C^{opt}_{k,p} \left(\beta +\frac{\alpha\beta}{p!} \Lambda_k\left(\frac{\tau}{m}\right)^p\right) < 1. \end{equation} Remember that we want~\eqref{eq:necessary_condition} to be satisfied, while minimizing the product $m(k+1)$. When $p$ is fixed, and $k$ becomes large, notice that $C^{opt}_{k,p}$ is decreasing like $\frac{\ln(k)}{k^p}$. However, satisfying~\eqref{eq:necessary_condition} requires, roughly speaking, decreasing $\left(\frac{\tau}{m}\right)^p C^{opt}_{k,p}$ as much as possible. This suggests two things, which we confirm in our explicit examples of Section~\ref{sec:applications}.
First, that it is slightly better to increase $m$ than $k$ (because of the $\ln(k)$ factor) and second, that if we take $p$ equal to 2 or more (that is if we use \emph{a priori bootstrap}) then we can satisfy~\eqref{eq:necessary_condition} while taking $m(k+1)$ much smaller than if we had $p$ equal to 1. Finally, we point out that taking $k=p-1$ seems optimal for the condition~\eqref{eq:necessary_condition} given by the order one term. Indeed, increasing $k$ from $p-1$ to $p$ increases the total number of coefficients, but brings no gain with respect to~\eqref{eq:necessary_condition} since \begin{equation*} C^{opt}_{p-1,p}=C_{p-1}<\tilde C_{p,p}=C^{opt}_{p,p}. \end{equation*} However, for the proof to succeed (that is for \eqref{eq:condition_p_finite} and \eqref{eq:condition_p_infty} to be satisfied) we also need small enough $Y$ and $Y_{\infty}$ bounds. Looking more precisely at $Y_{\infty}$, we see that it is of the form \begin{equation*} C_k \frac{1}{m^{k+1}}\gamma, \end{equation*} where $\gamma$ depends on the numerical data $\bar u$ and also on $k$, but the dependency on $k$ is much less important than in the $C_k \frac{1}{m^{k+1}}$ term, so we neglect it here. Looking back to the definition of $C_k$ in Proposition~\ref{prop:interpolation_error1}, we see that the term that we want to be small is of the form \begin{equation*} \frac{1}{(k+1)!}\frac{4}{(4m)^{k+1}}. \end{equation*} Therefore, if we really need to decrease the $Y_{\infty}$ bound, increasing $k$ is drastically better than increasing $m$. That is why, in practice we often take $k=p$, even though $k=p-1$ would be enough to satisfy~\eqref{eq:necessary_condition}. Finally we point out that, if we are not simply focused on getting an existence result, but also care about having sharp error bounds, then we should definitely take care of having small $Y$ and $Y_{\infty}$ bounds, which, as we will show in the next section, can be achieved by slightly increasing $k$ (that is taking $k>p$).
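To fix ideas, the last step of the procedure described above, namely finding a positive $r$ for which the radii polynomial is negative, can be sketched as follows in a scalar quadratic model case. The coefficients below are made up for illustration; in practice the inequalities are componentwise and are checked with interval arithmetic.

```python
import numpy as np

# Schematic sketch with made-up numbers: the bounds yield a "radii polynomial"
#   P(r) = Y - (1 - z1) r + z2 r^2,
# with Y, z1, z2 > 0 and z1 < 1, and the proof succeeds for any r > 0 with
# P(r) < 0.  The admissible radii form the interval between the two roots.
Y, z1, z2 = 1e-8, 0.3, 5.0               # hypothetical Y and Z coefficients

def radii_poly(r):
    return Y - (1.0 - z1) * r + z2 * r * r

disc = (1.0 - z1) ** 2 - 4.0 * z2 * Y    # discriminant; > 0 means proof succeeds
assert disc > 0, "proof fails: no admissible radius"
r_min = ((1.0 - z1) - np.sqrt(disc)) / (2.0 * z2)   # sharpest error bound
r_max = ((1.0 - z1) + np.sqrt(disc)) / (2.0 * z2)   # largest uniqueness radius
r = 1.01 * r_min                                    # any r in (r_min, r_max) works
assert radii_poly(r) < 0
```

Note that $r_{min}$ scales like $Y/(1-z_1)$, which is why small $Y$ and $Y_{\infty}$ bounds translate into sharp validation radii.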
In the next section, we present several comparisons for different choices of parameters that confirm the heuristics presented in this section. \section{Examples of applications for the Lorenz system} \label{sec:applications} In this section, we consider the Lorenz system, that is \begin{equation*} \phi(x,y,z)=\begin{pmatrix} \sigma(y-x)\\ \rho x -y -xz\\ -\beta z +xy \end{pmatrix}, \end{equation*} with standard parameter values $(\sigma,\beta,\rho)=(10,\frac{8}{3},28)$. Here, we first consider the initial value problem~\eqref{eq:general_IVP}, and use the bounds derived above to try to validate orbits of various lengths with different parameters, to highlight the significant improvement made possible by the \emph{a priori bootstrap} technique (that is, taking $p\geq 2$). Then, we show that the \emph{a priori bootstrap} also allows us to validate more interesting solutions (from a dynamical point of view), namely periodic orbits and connecting orbits. \subsection{Comparisons for the initial value problem} \label{sec:IVP} The aim of this section is to showcase the improvements allowed by the use of \emph{a priori bootstrap}, and to confirm the heuristics of Section~\ref{sec:para}. To do so, we fix an initial condition (chosen close to the attractor of the Lorenz system) \begin{equation} \label{eq:initial_value} u_0=\begin{pmatrix} -14.68 \\ -11 \\ 37.67 \end{pmatrix}, \end{equation} and do two kinds of comparisons. First, we try to validate the longest possible orbits for $p=1,2,3$ at various values of $m$ and $k$. We recall that by validating, we mean getting the existence of a true solution near a numerical one, by checking that the hypotheses of Theorem~\ref{th:rad_pol} hold. To make the comparison fair, we fix the total number of coefficients used for the numerical approximation, that is the dimension of $\mathcal X^n_{m,k}$, given by $nm(k+1)$.
This quantity is usually the bottleneck of our approach, since we need to store and invert the matrix $A^{\dag}_{m,k}$ which is of size $nm(k+1)\times nm(k+1)$. Here, we take $nm(k+1)=14000$ (or as close as possible to $14000$). The computations were made on a laptop with 8GB of RAM, and of course $nm(k+1)$ could be taken larger on a computer with more memory. The first set of results is given in Table~\ref{table:p1} (we recall that we work with the Lorenz system, therefore $n=3$). \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $k$ & $m$ & $nm(k+1)$ & $\tau_{max}$ & $r$ \\ \hline 1 & 2333 & 13998 & \textcolor{red}{0.69} & $2.3233\times 10^{-5}$ \\ 2 & 1556 & 14004 & 0.64 & $1.1524\times 10^{-7}$ \\ 3 & 1167 & 14004 & 0.58 & $8.6805\times 10^{-9}$ \\ \hline \end{tabular} \caption{Comparisons for $p=1$. $\tau_{max}$ is the longest integration time for which the proof succeeds, and $r$ is the associated validation radius, that is a bound of the distance (in $\mathcal{C}^0$ norm) between the numerical data used and the true solution.} \end{center} \label{table:p1} \end{table} In all cases, the proof fails for longer times $\tau$, because~\eqref{eq:necessary_condition} is no longer satisfied. We see here that, as announced in Section~\ref{sec:para}, it is better to take $k$ as small as possible to get the longest possible orbit, but that increasing $k$ helps reduce the $Y_{\infty}$ bound, and thus the validation radius $r$. We see that simply increasing the order of the polynomial interpolation (given by $k$) yields better accuracy but does not really help to prove longer orbits. However, we are going to show in the next examples (see Table~\ref{table:p2}) that combining \emph{a priori bootstrap} (that is taking $p\geq 2$) with higher order polynomial interpolation does allow us to obtain much longer orbits. \begin{table}[h!]
\begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $k$ & $m$ & $nm(k+1)$ & $\tau_{max}$ & $r$ \\ \hline 1 & 2333 & 13998 & 0.97 & $1.5718\times 10^{-3}$ \\ 2 & 1556 & 14004 & \textcolor{red}{5.6} & $8.4373\times 10^{-5}$ \\ 3 & 1167 & 14004 & 5.5 & $8.4184\times 10^{-8}$ \\ 4 & 933 & 13995 & 4.9 & $7.9190\times 10^{-9}$ \\ \hline \end{tabular} \caption{Comparisons for $p=2$. $\tau_{max}$ is the longest integration time for which the proof succeeds, and $r$ is the associated validation radius, that is a bound of the distance (in $\mathcal{C}^0$ norm) between the numerical data used and the true solution.} \label{table:p2} \end{center} \end{table} First, comparing the $k=1$ case when $p=1$ and $p=2$, we see that using \emph{a priori bootstrap} allows us to obtain a slightly longer orbit, even for linear interpolation. Also, even for the longest possible orbit in that case ($\tau=0.97$), we still have much room to satisfy~\eqref{eq:necessary_condition} (the quantity given by~\eqref{eq:necessary_condition} is $\ll 1$). However, we cannot get a longer orbit in that case even with $p=2$, because the $Y_{\infty}$ bound becomes too large. This can be dealt with by increasing $k$, and we see that we can then get much longer orbits. To finish this set of comparisons, we show that doing one more iteration of the \emph{a priori bootstrap} process (that is taking $p=3$ instead of $p=2$) still improves the results and allows us to obtain longer orbits (see Table~\ref{table:p3}). \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $k$ & $m$ & $nm(k+1)$ & $\tau_{max}$ & $r$ \\ \hline 2 & 1556 & 14004 & 5.6 & $7.6716\times 10^{-4}$ \\ 3 & 1167 & 14004 & \textcolor{red}{8.1} & $9.3043\times 10^{-6}$ \\ 4 & 933 & 13995 & \textcolor{red}{8.1} & $8.8204\times 10^{-8}$ \\ 5 & 778 & 14004 & 8.0 & $1.6175\times 10^{-8}$ \\ 9 & 467 & 14010 & 7.9 & $1.3748\times 10^{-8}$ \\ 19 & 233 & 13980 & 6.9 & $2.2998\times 10^{-8}$ \\ \hline \end{tabular} \caption{Comparisons for $p=3$.
$\tau_{max}$ is the longest integration time for which the proof succeeds, and $r$ is the associated validation radius, that is a bound of the distance (in $\mathcal{C}^0$ norm) between the numerical data used and the true solution.} \end{center} \label{table:p3} \end{table} We sum up this set of comparisons by displaying the longest orbit obtained with $p=1$, $p=2$ and $p=3$ (see Figure~\ref{fig:longest_orbit}). \begin{figure} [h!] \begin{center} \subfigure{\includegraphics[width=7.5cm]{longest_p1.pdf}} \subfigure{\includegraphics[width=7.5cm]{longest_p2.pdf}} \subfigure{\includegraphics[width=7.5cm]{longest_p3.pdf}} \end{center} \vspace{-.4cm} \caption{The longest orbits we are able to validate, with a total number of coefficients of approximately 14000. In blue for $p=1$, in green for $p=2$ and in red for $p=3$. The initial value is given by~\eqref{eq:initial_value}.} \label{fig:longest_orbit} \end{figure} We then finish this section with another set of comparisons, where we now fix the length of the orbit, here $\tau=2$, and instead look for the minimal total number of coefficients for which we can validate this orbit (for different values of $p$). The aim of this experiment is to show that using \emph{a priori bootstrap} enables us to validate solutions that one would not be able to validate without using it. Indeed, we are going to see that taking $p$ greater than one allows us to use far fewer coefficients to validate the solutions. Thus, if for a given solution, the proof without \emph{a priori bootstrap} requires more coefficients than our computer can handle, one can reduce this number by using \emph{a priori bootstrap} and then possibly validate the orbit. For instance, still with the initial condition given by~\eqref{eq:initial_value}, we cannot validate the orbit of length $\tau=2$ without \emph{a priori bootstrap} (that is with $p=1$), at least not with fewer than $14000$ coefficients.
However, the next table of results shows that we can validate it with $p=2$, and also using even fewer coefficients with $p=3$. \begin{table}[H] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & $k=1$ & $k=2$ & $k=3$ & $k=4$ \\ $p=2$ & no proof & $m=416$ & $m=415$ & $m=377$ \\ & no proof & \textcolor{red}{$nm(k+1)=3744$} & $nm(k+1)=4980$ & $nm(k+1)=5655$ \\ \hline & $k=2$ & $k=3$ & $k=4$ & $k=5$ \\ $p=3$ & $m=470$ & $m=125$ & $m=110$ & $m=99$ \\ & $nm(k+1)=4230$ & \textcolor{red}{$nm(k+1)=1500$} & $nm(k+1)=1650$ & $nm(k+1)=1782$ \\ \hline \end{tabular} \caption{Minimal number of coefficients needed to validate the orbit of length $\tau=2$, starting from $u_0$ given in~\eqref{eq:initial_value}.} \end{center} \end{table} \subsection{Validation of a periodic orbit} \label{sec:periodic} To study periodic orbits, instead of an initial value problem, the system~\eqref{eq:general_problem} has to be slightly modified into a boundary value problem \begin{equation} \label{eq:periodic_problem} \left\{ \begin{aligned} &u'(t)=\phi(u(t)), \quad t\in[0,\tau],\\ &u(0)=u(\tau), \\ &\left\langle u(0)-u_0,v_0\right\rangle = 0, \end{aligned} \right. \end{equation} where $\tau$ is now an unknown of the problem, and where $u_0,v_0 \in \mathbb R^n$. The last equation is sometimes called \emph{Poincaré phase condition} and serves to isolate the periodic orbit. As for the initial value problem, we can then consider an equivalent integral formulation (possibly with \emph{a priori bootstrap}) and define an equivalent fixed point operator $T$ very similar to the one introduced in Section~\ref{sec:general}. The additional phase condition and the fact that $\tau$ is now a variable only require minor modifications of $T$ and of the bounds derived in Section~\ref{sec:bounds} (see for instance \cite{BerShe15}). Using \emph{a priori bootstrap}, we are able to validate fairly complicated periodic orbits (see Figure~\ref{fig:periodic}). \begin{figure}[h!]
\begin{center} \includegraphics[width=10cm]{periodic.pdf} \end{center} \vspace{-.3cm} \caption{A validated periodic orbit of the Lorenz system, whose period $\tau$ is approximately $11.9973$. We used two iterations of \emph{a priori bootstrap}, that is $p=3$, for the validation. If we want to minimize the total number of coefficients to do the validation, we can take $k=3$ and $m=602$ (which makes $7225$ coefficients in total), and we then get a validation radius of $1.5627\times 10^{-4}$. It is possible to get a significantly lower validation radius, at the expense of a slight increase in the total number of coefficients: for instance with $k=5$ and $m=495$ (which makes $8911$ coefficients in total), we get a validation radius of $4.7936\times 10^{-9}$.} \label{fig:periodic} \end{figure} \subsection{Validation of a connecting orbit} \label{sec:connecting} In this section, we present a computer-assisted proof of existence of a connecting orbit in the Lorenz system for the standard parameter values $(\sigma,\beta,\rho)=(10,\frac{8}{3},28)$. It is well known that at these parameter values, the Lorenz system admits a transverse connecting orbit between $\left(\sqrt{\beta(\rho-1)},\sqrt{\beta(\rho-1)},\rho-1\right)$ and the origin. While computer-assisted proofs of connecting orbits were already investigated several times using topological and analytical approaches (see~\cite{MR2821596,MR3207723,MR1961956,MR1236201,MR2157844,MR2302059,MR2339601,MR945967,MR1726672,MR2060531,MR2494688,MR2505658,MR2173545,MR1661847,MR2388394}), a paper of particular relevance to the present work is~\cite{MR3207723}, where a particular case of our method is developed, with only linear interpolation and no \emph{a priori bootstrap} (that is $k=1$ and $p=1$). While the authors in \cite{MR3207723} were able to validate several connecting orbits for the Lorenz system, they could not validate the aforementioned connecting orbit for the standard parameter values.
In fact, one of the main motivations for the present work was to improve the setting of~\cite{MR3207723} to be able to validate more complicated orbits. As we showcased in Section~\ref{sec:IVP}, using \emph{a priori bootstrap} enables us to validate significantly more complicated orbits for the initial value problem, and this is also true for connecting orbits. Indeed we are able to validate the standard connecting orbit for the Lorenz system, with parameter values $(\sigma,\beta,\rho)=(10,\frac{8}{3},28)$. Before presenting the results, we briefly describe how to modify~\eqref{eq:general_IVP} to be able to handle connecting orbits. Compared to an initial value problem on a given time interval, or to a periodic orbit, connecting orbits present an additional difficulty: they are defined on an infinite time interval (from $-\infty$ to $+\infty$). To circumvent this difficulty and get back to a time interval of finite length, which is more suited to numerical computations (and to computer-assisted proofs), we are going to use local stable and unstable manifolds of the fixed points. By a computer-assisted method very similar to the one presented here, we first compute and validate local parameterizations of the unstable manifold at $\left(\sqrt{\beta(\rho-1)},\sqrt{\beta(\rho-1)},\rho-1\right)$ and of the stable manifold at the origin. Since the main object of this work is not the rigorous computation of those manifolds, we simply assume that they are given (with validation radii) and do not give more details here. The interested reader can find more information about the computations and validations of these parameterizations in~\cite{MR3437754,MR3518609} and the references therein, and also more detailed examples of their usage to get connecting orbits in~\cite{MR2821596,MR3207723,MR3353132,BerShe15}.
We denote by $P$ a local parameterization of the stable manifold of the origin, and by $Q$ a local parameterization of the unstable manifold of $\left(\sqrt{\beta(\rho-1)},\sqrt{\beta(\rho-1)},\rho-1\right)$. We point out that both manifolds are two-dimensional. We then want to solve \begin{equation} \label{eq:connecting_problem} \left\{ \begin{aligned} &u'(t)=\phi(u(t)), \quad t\in[0,\tau],\\ &u(0)=Q(\varphi), \\ &u(\tau)=P(\theta), \end{aligned} \right. \end{equation} where $\varphi$ and $\theta$ are each one-dimensional parameters, the parameter in the other dimension being fixed to isolate the solution. Notice that $\tau$ is now an unknown of the system. As for the initial value problem, we can then consider an equivalent integral formulation (possibly with \emph{a priori bootstrap}) and define an equivalent fixed point operator $T$ very similar to the one introduced in Section~\ref{sec:general}. The additional equation $u(\tau)=P(\theta)$ and the fact that we have three additional variables $\tau$, $\theta$ and $\varphi$ only require minor modifications of $T$ and of the bounds derived in Section~\ref{sec:bounds} (see for instance \cite{BerShe15,MR3207723}). Using $p=3$, $k=3$ and $m=1150$ (that is, a total number of 13803 coefficients) we are then able to rigorously compute a solution of~\eqref{eq:connecting_problem} (see Figure~\ref{fig:connecting_orbit}). \begin{figure}[h!] \begin{center} \includegraphics[width=12cm]{connecting_orbit2.pdf} \end{center} \vspace{-.3cm} \caption{Validated connecting orbit for the Lorenz system, with parameters $(\sigma,\beta,\rho)=(10,\frac{8}{3},28)$. The local stable manifold of the origin is in blue, the local unstable manifold of $\left(\sqrt{\beta(\rho-1)},\sqrt{\beta(\rho-1)},\rho-1\right)$ in yellow, and the green connection between them (of length $\tau\simeq 17.3$) is validated using polynomial interpolation, with \emph{a priori bootstrap} ($p=3$).
The proof gives a validation radius of $r=3.1340\times 10^{-5}$.} \label{fig:connecting_orbit} \end{figure} \section{Examples of applications for ABC flows} \label{sec:ABC} In this section, we apply our method to the non-polynomial vector field \begin{equation*} \phi_{A,B,C}(x,y,z) \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \begin{pmatrix} A\sin(z) + C\cos(y) \\ B\sin(x) + A\cos(z) \\ C\sin(y) + B\cos(x) \end{pmatrix}, \quad A,B,C \in \mathbb{R}. \end{equation*} The map $\phi_{A,B,C}$ is usually referred to as the Arnold-Beltrami-Childress (ABC) vector field, and gives a prime example of complex steady incompressible periodic flows in 3D (see~\cite{Arnold_ABC,MR851673,ref2_ABC} and the references therein). The main point of this section is to briefly illustrate the applicability of our technique to non-polynomial vector fields. We plan to study ABC flows more thoroughly with the help of our a posteriori validation method in a future work. Recently, the existence of orbits that are periodic up to a shift of $2\pi$ in one coordinate has been proven in the cases $A=B=C=1$ and $0<A\ll 1,\ B=C=1$ \cite{MR3549020,MR3580814}. Applying the method developed in this paper, we were able to complete these results by proving the following statements. \begin{theorem} \label{th:variable_A} For all $A=0.1,0.2,\ldots,1$, with $B=C=1$, there exists $\tau_A\in[\tau^-_A,\tau^+_A]$ (see Table~\ref{table:tau_A}) and a solution $(x,y,z)$ of the ABC flow such that \begin{equation*} x(t+\tau_A)=x(t)+2\pi,\quad y(t+\tau_A)=y(t),\quad z(t+\tau_A)=z(t),\quad\forall~t\in\mathbb{R}.
\end{equation*} \end{theorem} \begin{proof} The proof is done by running \verb+script_proofs_A11.m+ (available at~\cite{webpage_AprioriBootstrap}), which for each $A=0.1,0.2,\ldots,1$ computes an approximate solution, then computes bounds satisfying~\eqref{eq:condition_Y}-\eqref{eq:condition_Zinfty} as described in Section~\ref{sec:bounds}, and finally finds positive $r_{\infty}$ and $r$ such that~\eqref{eq:condition_p_finite}-\eqref{eq:condition_p_infty} hold. \end{proof} \begin{theorem} \label{th:4pi} For $A=B=C=1$, there exists $\tau\in[7.797656,7.797666]$ and a solution $(x,y,z)$ of the ABC flow such that \begin{equation*} x(t+\tau)=x(t)+4\pi,\quad y(t+\tau)=y(t),\quad z(t+\tau)=z(t),\quad\forall~t\in\mathbb{R} \end{equation*} and $x(\cdot+\tau)\neq x(\cdot)+2\pi$. \end{theorem} \begin{proof} The proof is done by running \verb+script_proofs_111.m+ (available at~\cite{webpage_AprioriBootstrap}), which computes an approximate solution, then computes bounds satisfying~\eqref{eq:condition_Y}-\eqref{eq:condition_Zinfty} as described in Section~\ref{sec:bounds}, and finally finds positive $r_{\infty}$ and $r$ such that~\eqref{eq:condition_p_finite}-\eqref{eq:condition_p_infty} hold. \end{proof} The solutions given by Theorem~\ref{th:variable_A} are represented in Figure~\ref{fig:variable_A}, and the solution given by Theorem~\ref{th:4pi} is represented in Figure~\ref{fig:4pi}.
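As a quick sanity check on the vector field itself (an illustrative floating-point sketch of our own, not part of the validated computations), one can confirm numerically that $\phi_{A,B,C}$ is divergence-free: each component is independent of the variable it is differentiated with respect to.

```python
import math

def abc_field(x, y, z, A=1.0, B=1.0, C=1.0):
    """The Arnold-Beltrami-Childress vector field."""
    return (A * math.sin(z) + C * math.cos(y),
            B * math.sin(x) + A * math.cos(z),
            C * math.sin(y) + B * math.cos(x))

def divergence(x, y, z, h=1e-6, **params):
    """Central finite-difference approximation of div(phi) at (x, y, z)."""
    dfx = (abc_field(x + h, y, z, **params)[0] - abc_field(x - h, y, z, **params)[0]) / (2 * h)
    dfy = (abc_field(x, y + h, z, **params)[1] - abc_field(x, y - h, z, **params)[1]) / (2 * h)
    dfz = (abc_field(x, y, z + h, **params)[2] - abc_field(x, y, z - h, **params)[2]) / (2 * h)
    return dfx + dfy + dfz

# Each component of phi does not depend on the variable it is differentiated
# with respect to, so the divergence vanishes identically.
print(divergence(0.3, 1.1, -0.7))         # 0.0
print(divergence(-2.0, 0.4, 5.0, A=0.5))  # 0.0
```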
\begin{table}[htbp] \centering \begin{tabular}{|c||c|c|c|c|c|} \hline $A$ & $1$ & $0.9$ & $0.8$ & $0.7$ & $0.6$ \\ \hline $\tau^-_A$ & $3.23527736$ & $3.41779635$ & $3.62512508$ & $3.86405419$ & $4.14464726$ \\ \hline $\tau^+_A$ & $3.23527746$ & $3.41779647$ & $3.62512521$ & $3.86405436$ & $4.14464749$ \\ \hline \end{tabular} \begin{tabular}{|c||c|c|c|c|c|} \hline $A$ & $0.5$ & $0.4$ & $0.3$ & $0.2$ & $0.1$ \\ \hline $\tau^-_A$ & $4.48269179$ & $4.90491344$ & $5.46177978$ & $6.26680147$ & $7.67945129$ \\ \hline $\tau^+_A$ & $4.48269213$ & $4.90491401$ & $5.46178092$ & $6.26680442$ & $7.67946552$ \\ \hline \end{tabular} \caption{The intervals $[\tau^-_A,\tau^+_A]$, for $A=0.1,0.2,\ldots,1$, in which the \emph{period} $\tau_A$ of the solution described in Theorem~\ref{th:variable_A} is proved to lie.} \label{table:tau_A} \end{table} \begin{figure} [htbp] \begin{center} \subfigure{\includegraphics[width=7.5cm]{orbit_A11.pdf}} \subfigure{\includegraphics[width=7.5cm]{projected_A11.pdf}} \end{center} \vspace{-.4cm} \caption{These are the orbits described in Theorem~\ref{th:variable_A}. The color varies from blue for $A=1$ to red for $A=0.1$. Each proof was done with $p=2$, $k=2$ and $m=50$, and gave a validation radius varying from $r=4.8313\times 10^{-8}$ to $r=7.4012\times 10^{-6}$.} \label{fig:variable_A} \end{figure} \begin{figure} [htbp] \begin{center} \subfigure{\includegraphics[width=7.5cm]{orbit_111.pdf}} \subfigure{\includegraphics[width=7.5cm]{projected_111.pdf}} \end{center} \vspace{-.4cm} \caption{This is the orbit described in Theorem~\ref{th:4pi}. The proof was done with $p=2$, $k=2$ and $m=300$, and gave a validation radius $r=4.0458\times 10^{-6}$.} \label{fig:4pi} \end{figure} \section*{Appendix} For the sake of completeness, we give here some properties of the Lebesgue constant $\Lambda_k$, as well as proofs of Proposition~\ref{prop:interpolation_error1} and Proposition~\ref{prop:interpolation_error2}.
We will assume here that $u$ is defined and smooth on $[-1,1]$. The corresponding estimates on $[t_j,t_{j+1}]$ can then easily be deduced by rescaling. We recall that $\Lambda_k$ is defined as the norm of the interpolation operator mapping $\mathcal{C}^0([-1,1],\mathbb{R})$ to itself, and associating to a continuous function $u$ its interpolation polynomial $P_k(u)$ of order $k$. Of course this operator (and its norm) depends on the interpolation points, which in this work are the Chebyshev interpolation points of the second kind \begin{equation*} x_l^k=\cos\left(\frac{k-l}{k}\pi\right),\quad \text{for all}~l = 0,\dots , k. \end{equation*} Introducing the basis consisting of the {\em Lagrange functions} \begin{equation*} L^k_i(x) \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \prod_{j\neq i}\frac{x-x^k_j}{x^k_i-x^k_j}, \end{equation*} we have that the {\em Lagrange interpolation polynomial of order $k$} is given by \begin{equation} \label{eq:Lagrange_polynomial} P_k(u)(x)=\sum_{i=0}^k u(x^k_i)L^k_i(x). \end{equation} One can then show (see for instance~\cite{MR3012510}) that \begin{equation} \label{eq:formula_Lambdak} \Lambda_k=\sup_{x\in[-1,1]} \sum_{i=0}^k \vert L^k_i(x)\vert, \end{equation} and therefore we get \begin{equation*} \sup_{x\in[-1,1]} \vert P_k(u)(x)\vert \leq \Lambda_k \max\limits_{0\leq i \leq k} \vert u(x^k_i) \vert, \end{equation*} which is exactly~\eqref{eq:bound_infty_from_max}. Since we used~\eqref{eq:bound_infty_from_max} and Proposition~\ref{prop:interpolation_error2} several times in Section~\ref{sec:bounds}, the bounds that we obtained there depend on the Lebesgue constant $\Lambda_k$. Therefore we need computable (and as sharp as possible) upper bounds for $\Lambda_k$. One possibility is to use the well-known bound (again see for instance~\cite{MR3012510}) \begin{equation} \label{eq:bound_Lambdak} \Lambda_k \leq 1+\frac{2}{\pi}\ln(k+1). \end{equation} However, we can do better, at least when $k$ is odd.
In that case, it has been shown (see \cite{MR0194799}) that \begin{equation*} \Lambda_k = \frac{1}{k}\sum_{l=0}^{k-1}\cot \left( \frac{2l+1}{4k}\pi \right) , \end{equation*} and this formula can be evaluated rigorously using interval arithmetic. Unfortunately, there is no such formula for $k$ even. For small values ($k=2$ and $k=4$) we computed $\Lambda_k$ by hand using~\eqref{eq:formula_Lambdak}, and for $k\geq 6$ we used~\eqref{eq:bound_Lambdak} (it is also known that $\Lambda_k \sim \frac{2}{\pi}\ln(k)$, therefore~\eqref{eq:bound_Lambdak} is sharp for large $k$). We now turn our attention to the interpolation estimates of Section~\ref{sec:general}. We point out that the analogue of Proposition~\ref{prop:interpolation_error1} for the Chebyshev interpolation points of the first kind is very standard, and can be found in many textbooks. However, the case of the Chebyshev interpolation points of the second kind is seldom discussed, therefore we include a proof here for the sake of completeness (which is nothing but a slight adaptation of the \emph{standard} proof). \medskip \noindent \textit{Proof of Proposition~\ref{prop:interpolation_error1}.} We consider the polynomial $W_k(x)=\prod_{l=0}^k (x-x_l^k)$ and use the standard interpolation error estimate for a function $u\in\mathcal{C}^{k+1}$ (see for instance \cite{MR1656150}), \begin{equation*} \left\Vert u-P_k(u)\right\Vert_{\infty} \leq \frac{\left\Vert W_k\right\Vert_{\infty}}{(k+1)!}\left\Vert u^{(k+1)} \right\Vert_{\infty}. \end{equation*} To prove Proposition~\ref{prop:interpolation_error1}, we only have to show that $\left\Vert W_k\right\Vert_{\infty}\leq \frac{1}{2^{k-1}}$ (the remaining factor $\frac{1}{2^{k+1}}$ coming from the rescaling).
Introducing, for $k\in\mathbb{N}$, the $k$-th Chebyshev polynomial of the second kind $U_k$, defined by \begin{equation*} U_k(\cos(\theta))=\frac{\sin(k\theta)}{\sin(\theta)}, \end{equation*} we have that \begin{equation} \label{def:W_k} W_k(x)=(x-1)(x+1)\frac{U_k(x)}{2^{k-1}}. \end{equation} Indeed, the right hand side of~\eqref{def:W_k} is a monic polynomial of degree $k+1$ that has the same zeros as $W_k$. We can then rewrite \begin{align*} W_k(x) &= \frac{1}{2^{k-1}}(x-1)(x+1)\frac{\sin(k\arccos(x))}{\sqrt{1-x^2}} \\ &= -\frac{1}{2^{k-1}}\sqrt{1-x^2}\sin(k\arccos(x)), \end{align*} and since $\left\vert \sqrt{1-x^2}\sin(k\arccos(x))\right\vert\leq 1$ we end up with \begin{equation*} \left\Vert W_k\right\Vert_{\infty} \leq \frac{1}{2^{k-1}}, \end{equation*} with equality when $k$ is odd (the supremum then being attained at $x=0$), so Proposition~\ref{prop:interpolation_error1} is proven. \hfill $\qed$ \medskip \noindent \textit{Proof of Proposition~\ref{prop:interpolation_error2}.} The first part of the bound, namely \begin{equation*} \left(1+\Lambda_k\right)\left(\frac{\pi}{4}\right)^l\frac{(k+1-l)!}{(k+1)!}, \end{equation*} comes from a combination of the Lebesgue constant and Jackson's Theorem, and can be found in~\cite{MR1656150}. However, it does not give a very sharp interpolation error estimate for small values of $k$ and $l$, therefore we derive here the second part of the bound, namely \begin{equation*} \frac{1}{l!2^l}\sum_{q=0}^{\left[\frac{l-1}{2}\right]}\frac{1}{4^q}\binom{l-1}{2q}\binom{2q}{q} \end{equation*} that can be used in those cases. Letting $u(x)=x^p$ (with $p\in\{0,\ldots,k\}$) in \eqref{eq:Lagrange_polynomial} leads to \begin{equation} \label{eq:lagrange_basis} x^p=\sum_{i=0}^k (x^k_i)^p L^k_i(x), \quad \text{for all}~x\in\mathbb{R}. \end{equation} We now fix a function $u\in\mathcal{C}^{l}$. Using~\eqref{eq:lagrange_basis} with $p=0$ (that is $1=\sum_{i=0}^k L^k_i(x)$), we get \begin{equation*} P_k(u)(x)-u(x)=\sum_{i=0}^k u(x^k_i)L^k_i(x)- u(x) \left( \sum_{i=0}^k L^k_i(x) \right) =\sum_{i=0}^k \left(u(x^k_i)-u(x)\right)L^k_i(x).
\end{equation*} Using Taylor's formula, we then get \begin{align*} P_k(u)(x)-u(x) &= \sum_{i=0}^k \left(\sum_{p=1}^{l-1}\frac{(x^k_i-x)^p}{p!}u^{(p)}(x) + \frac{(x^k_i-x)^l}{l!}u^{(l)}(y_i) \right)L^k_i(x) \\ &= \sum_{p=1}^{l-1} \frac{u^{(p)}(x)}{p!}\sum_{i=0}^k (x^k_i-x)^pL^k_i(x) + \sum_{i=0}^k\frac{(x^k_i-x)^l}{l!}u^{(l)}(y_i)L^k_i(x), \end{align*} for some $y_i$ in $[-1,1]$. Then, expanding the $(x^k_i-x)^p$ terms and using again~\eqref{eq:lagrange_basis}, we get that \begin{align*} \sum_{i=0}^k (x^k_i-x)^pL^k_i(x) &= \sum_{q=0}^p\binom{p}{q}(-x)^{p-q}\sum_{i=0}^k (x^k_i)^qL^k_i(x) \\ &= \sum_{q=0}^p\binom{p}{q}(-x)^{p-q}x^q \\ &= (x-x)^p \\ &= 0, \end{align*} and thus \begin{align*} P_k(u)(x)-u(x) = \sum_{i=0}^k\frac{(x^k_i-x)^l}{l!}u^{(l)}(y_i)L^k_i(x). \end{align*} Letting \begin{equation*} \lambda^k_i \,\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{def}}}}{=}\, \prod_{j\neq i}\frac{1}{x^k_i-x^k_j}, \end{equation*} we can easily observe that $L_i^k(x) = \lambda^k_i W_k(x)/(x-x_i^k)$, and therefore \begin{align*} \left\vert P_k(u)(x)-u(x)\right\vert &\leq \frac{\left\Vert u^{(l)}\right\Vert_{\infty}}{l!}\sum_{i=0}^k\vert x^k_i-x\vert^l\vert L^k_i(x)\vert \\ &= \frac{\left\Vert u^{(l)}\right\Vert_{\infty}}{l!}\left\vert W_k(x)\right\vert \sum_{i=0}^k\vert\lambda_i^k \vert \vert x^k_i-x\vert^{l-1}. \end{align*} According to \cite{MR3012510}, in the case where the points $x^k_i$ are the Chebyshev interpolation points of the second kind, we have \[ \lambda^k_i = \begin{cases} (-1)^i\frac{2^{k-1}}{k}, & i = 1,\dots,k-1,\\ (-1)^i\frac{2^{k-1}}{2k}, &i = 0,k. \end{cases} \] Remembering that $\left\Vert W_k\right\Vert_{\infty}\leq\frac{1}{2^{k-1}}$, we get \begin{equation*} \left\vert P_k(u)(x)-u(x)\right\vert \leq \frac{\left\Vert u^{(l)}\right\Vert_{\infty}}{l!} \frac{1}{k} \left(\frac{(1+x)^{l-1}}{2}+\frac{(1-x)^{l-1}}{2} + \sum_{i=1}^{k-1} \vert x^k_i-x\vert^{l-1}\right).
\end{equation*} The function \begin{equation*} x\mapsto \frac{(1+x)^{l-1}}{2}+\frac{(1-x)^{l-1}}{2} + \sum_{i=1}^{k-1} \vert x^k_i-x\vert^{l-1} \end{equation*} is even and increasing on $[0,1]$, therefore its maximum is reached at $x=1$ and we get \begin{equation*} \left\vert P_k(u)(x)-u(x)\right\vert \leq \frac{\left\Vert u^{(l)}\right\Vert_{\infty}}{l!} \frac{1}{k} \left(2^{l-2} + \sum_{i=1}^{k-1} \left(1-\cos \frac{i\pi}{k}\right)^{l-1}\right). \end{equation*} Then, we compute \begin{align*} 2^{l-2} + \sum_{i=1}^{k-1} \left(1-\cos \frac{i\pi}{k}\right)^{l-1} & = \sum_{i=0}^{k} \left(1-\cos \frac{i\pi}{k}\right)^{l-1} - 2^{l-2} \\ & = \sum_{i=0}^{k} \sum_{q=0}^{l-1}\binom{l-1}{q}(-1)^q\cos^q \frac{i\pi}{k} - 2^{l-2} \\ & = \sum_{q=0}^{\left[\frac{l-1}{2}\right]}\binom{l-1}{2q}\sum_{i=0}^{k}\cos^{2q} \frac{i\pi}{k} - 2^{l-2} \\ & = \sum_{q=0}^{\left[\frac{l-1}{2}\right]}\binom{l-1}{2q}\sum_{i=0}^{k}\frac{1}{4^q}\left(\binom{2q}{q}+2\sum_{j=0}^{q-1}\binom{2q}{j}\cos 2(q-j)\frac{i\pi}{k}\right) - 2^{l-2} \\ & = \sum_{q=0}^{\left[\frac{l-1}{2}\right]}\binom{l-1}{2q}\frac{1}{4^q}\left((k+1)\binom{2q}{q}+2\sum_{j=0}^{q-1}\binom{2q}{j}\right) - 2^{l-2} \\ & = \sum_{q=0}^{\left[\frac{l-1}{2}\right]}\binom{l-1}{2q}\frac{1}{4^q}\left(k\binom{2q}{q}+4^q\right) - 2^{l-2} \\ & = k\sum_{q=0}^{\left[\frac{l-1}{2}\right]}\frac{1}{4^q}\binom{l-1}{2q}\binom{2q}{q}. \end{align*} We end up with \begin{equation*} \left\vert P_k(u)(x)-u(x)\right\vert \leq \frac{\left\Vert u^{(l)}\right\Vert_{\infty}}{l!} \sum_{q=0}^{\left[\frac{l-1}{2}\right]}\frac{1}{4^q}\binom{l-1}{2q}\binom{2q}{q}, \end{equation*} and Proposition~\ref{prop:interpolation_error2} is proven (the missing $\frac{1}{2^l}$ factor coming from the time rescaling). \hfill $\qed$
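The quantities appearing in this appendix, the Lebesgue constant $\Lambda_k$, the sup norm of the nodal polynomial $W_k$, and the moduli of the weights $\lambda^k_i$, can all be cross-checked numerically. The sketch below is a plain floating-point illustration of our own (the rigorous computations use interval arithmetic); only the absolute values of the weights enter the final estimate, a global sign being irrelevant there.

```python
import math

def cheb2_nodes(k):
    """Chebyshev points of the second kind, x_l^k = cos((k-l)/k * pi)."""
    return [math.cos((k - l) / k * math.pi) for l in range(k + 1)]

def lagrange_basis(k, i, x):
    """Lagrange function L_i^k evaluated at x."""
    xs = cheb2_nodes(k)
    L = 1.0
    for j in range(k + 1):
        if j != i:
            L *= (x - xs[j]) / (xs[i] - xs[j])
    return L

def lebesgue_constant(k, samples=20001):
    """Approximate Lambda_k = sup_x sum_i |L_i^k(x)| by dense sampling."""
    return max(sum(abs(lagrange_basis(k, i, -1 + 2 * s / (samples - 1)))
                   for i in range(k + 1)) for s in range(samples))

def lebesgue_constant_odd(k):
    """Closed formula (1/k) sum_l cot((2l+1)pi/(4k)), valid for odd k."""
    return sum(1.0 / math.tan((2 * l + 1) * math.pi / (4 * k)) for l in range(k)) / k

def sup_W(k, samples=100001):
    """Approximate sup norm of W_k(x) = prod_l (x - x_l^k) on [-1, 1]."""
    xs = cheb2_nodes(k)
    def W(x):
        p = 1.0
        for xl in xs:
            p *= x - xl
        return p
    return max(abs(W(-1 + 2 * s / (samples - 1))) for s in range(samples))

def bary_weight(k, i):
    """lambda_i^k = prod_{j != i} 1/(x_i^k - x_j^k), from the definition."""
    xs = cheb2_nodes(k)
    w = 1.0
    for j in range(k + 1):
        if j != i:
            w /= xs[i] - xs[j]
    return w

def bary_weight_modulus(k, i):
    """|lambda_i^k| from the closed form: 2^(k-1)/k, halved at the endpoints."""
    w = 2 ** (k - 1) / k
    return w / 2 if i in (0, k) else w

print(lebesgue_constant(5), lebesgue_constant_odd(5),
      1 + 2 / math.pi * math.log(6))  # both approx 1.989, below the bound 2.141
print(sup_W(7), 1 / 2 ** 6)          # equal for odd k
print([abs(bary_weight(6, i)) for i in range(7)])
print([bary_weight_modulus(6, i) for i in range(7)])
```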
\section{Introduction} \label{sec:sample1} With so many applications, tools, and online platforms booming in today's technological era, the amount of data being collected is rapidly increasing. To effectively handle and access this massive amount of data, valuable information extraction tools must be developed. Fetching and accessing data from tabular forms is one of the sub-areas of the information extraction field that requires attention. Several industries around the world, particularly the banking and insurance industries, rely heavily on paperwork and documentation. Tables are commonly utilized for everything from recording client information to responding to their requirements. This information is then sent as a document (hard copy) to other departments for approval, where miscommunication can occasionally result in problems when grabbing data from tables. Instead, once the original data has been acquired and authorized, we can directly scan such documents into tables and work on the digitized data. Table detection and structure recognition is an essential task in image analysis for automatically extracting information from tables in a digital way. Table detection and extraction from images or documents is difficult because of the format of the document and the variety of table layouts, as shown in Fig. \ref{fig:examples_images}. Recently, deep learning has had a significant impact on computer vision, especially on image-based approaches for table detection, information extraction, and analysis. A few studies have been conducted on the identification of tables in documents \cite{8270123,traquair2019deep,gilani2017table,tran2015table,7490132}. However, significantly less work has been put into detecting table structures, and the table structure is frequently characterized by the rows and columns of a table \cite{mao2003document,kara2020holistic,sarkar2019document}.
Deep learning with convolutional neural networks (CNNs) \cite{lecun1995convolutional} has recently achieved state-of-the-art results in many tasks, including object detection \cite{zhao2019object}, face recognition \cite{lawrence1997face}, sequence-to-sequence learning \cite{gehring2017convolutional,Abdallah_Abdelrahman}, speech recognition \cite{abdel2014convolutional}, semantic segmentation \cite{paszke2016enet}, image classification \cite{li2014medical}, and handwritten text recognition \cite{Abdallah_2020,nurseitov2020hkr,Daniyar_2020}. Table detection \cite{8270123,sarkar2019document,mao2003document} is demanding because models need to distinguish tables from text and other figures. The presence of split columns or rows, as well as nested tables or embedded figures, makes the detection of a table even more difficult. \begin{figure}[h!] \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth,height=7cm]{0a03b663558c30610cf6e7927dcacb9c-40.png} \caption{} \label{fig:example1} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth,height=7cm]{0a1db02693aa6694f4f03b0d1982db62.jpg} \caption{} \label{fig:example2} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth,height=7cm]{00a37a568966d8cb0d2dfaa339106465-17.png} \caption{} \label{fig:example3} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth,height=7cm]{0a4bca9d6922d720da5da387c767dfef-23.png} \caption{} \label{fig:example4} \end{subfigure} \caption{Electronic image examples in various formats and layouts from our dataset} \label{fig:examples_images} \end{figure} In this paper, we propose a new dataset called the Table Net Detection and Classification Dataset (TNCR) that can be used for table detection and for classification of tables into 5 different classes. We also train deep learning models to solve the two tasks and compare them. Table detection is performed by using instance segmentation on each image.
Each instance of a segmented table is detected at the pixel level in the image. In addition, we used the same model for classifying the segmented tables into 5 different classes. The main contributions of our research are summarized as follows: \begin{itemize} \item First, this work presents a new dataset for table detection and table classification. It contains images of different quality for training and testing. The images are real, not generated from LaTeX or Word documents. Our dataset contains 9428 images with 5 different labels for table classification (Full lined, No lines, Merged cells, Partial lined, Partial lined merged cells). \item Second, we present a brief description of deep learning models for object detection and classification and report comparative results. For a better understanding of model performance, COCO performance metrics over IoUs ranging from 50\% to 95\% are reported for each model. \item Third, we built many robust baselines using state-of-the-art models with end-to-end deep neural networks to test the effectiveness of our dataset. We compared state-of-the-art object detection models such as Cascade R-CNN \cite{cai2019cascade}, Cascade Mask R-CNN \cite{cai2019cascade}, Cascade RPN \cite{vu2019cascade}, Hybrid Task Cascade \cite{chen2019hybrid}, and YOLO \cite{redmon2018yolov3} with different backbones: ResNet-50 \cite{he2016deep}, ResNet-101 \cite{he2016deep}, and ResNeXt-101 \cite{xie2017aggregated}. Some models are trained with different learning schedules (1x, 20e, and 2x). \end{itemize} The rest of the paper is structured as follows: Section \ref{relatedwork} presents related work on existing datasets and a brief history of the machine learning and deep learning methods used for table detection and structure recognition. Section \ref{dataset} describes our dataset for table detection and classification.
Section \ref{sec:methodology} provides a detailed description of the models and methodology used for object detection on TNCR. Section \ref{Experiments_Results} presents experimental results with a comprehensive analysis of table detection using different models, and a summary of the paper and future work are given in Section \ref{Conclusion}. \section{Related Work} \label{relatedwork} \subsection{Existing Datasets} The ICDAR2013 dataset \cite{gobel2013icdar} contains 150 tables: 75 tables in 27 EU excerpts and 75 tables in 40 US Government excerpts. Table regions are rectangular areas of a page defined by their coordinates. Because a table can span multiple pages, multiple regions can belong to the same table. ICDAR2013 is split into two sub-tasks, table detection (location) and table structure recognition. The goal of the table structure recognition task is to compare methods for determining table cell structure given accurate location information. The UNLV table dataset \cite{shahab2010open} consists of 2889 pages of scanned document images collected from various sources (magazines, newspapers, business letters, annual reports, etc.). The scanned images are available in bitonal, greyscale, and fax formats, with resolutions of 200 and 300 DPI. Along with the original dataset, which contains manually marked zones, there is ground truth data; zone types are provided in text format. The Marmot dataset \cite{fang2012dataset} ground truths were extracted using the semi-automatic ground-truthing tool ``Marmot'' from a total of 2000 pages in PDF format. The dataset is made up of Chinese and English pages in a roughly 1:1 ratio. The Chinese pages were chosen from over 120 e-Books from the Founder Apabi library's diverse subject areas, with no more than 15 pages chosen from each book. The English pages were crawled from the Citeseer website.
The DeepFigures dataset \cite{siegel2018extracting} contains documents with tables and figures from arXiv.com and the PubMed database. The DeepFigures dataset is focused on large-scale table/figure detection and cannot be used for table structure recognition. The TableBank dataset \cite{li2020tablebank} is a new dataset for table detection and structure recognition which consists of 417K high-quality labeled tables in a variety of domains, as well as their original documents. ICDAR2019 \cite{dejean_herve_2019_2649217} proposed a dataset for table detection (TRACK A) and table recognition (TRACK B). The dataset is divided into two types, historical and modern. It contains 1600 images for training and 839 images for testing. The historical type contains 1200 images in tracks A and B for training and 499 images for testing. The modern type contains 600 images in tracks A and B for training and 340 images for testing. \subsection{Table detection and structure detection} The goal of table detection is to locate tables in a document using bounding boxes, and the goal of table structure recognition is to determine a table's row and column layout information. Table detection has been studied since the early 1990s. Katsuhiko \cite{itonori1993table} explains how to recognize table structure from document images using a new method. Each cell in a table is represented by a row and column pair arranged regularly in two dimensions. Its coordinates are explicitly found even when some ruled lines are missing. As a result, he assumed that the table structure is defined by an arrangement of text blocks, that is, an arrangement of rows and columns, with ruled lines indicating their relationship. This procedure consists of two steps: expanding the bounding boxes of the cells and assigning row and column numbers to each edge.
Wonkyo Seo et al. \cite{seo2015junction} propose novel junction detection and labeling approaches to increase accuracy, where junction detection involves finding candidates for cell corners and junction labeling involves inferring their connections. Chandran and Kasturi \cite{chandran1993structural} proposed a method for table structure detection in which the document is scanned to extract all horizontal and vertical lines; these lines are then used to approximate the table's dimensions. Thomas and Dengel \cite{inproceedings} propose a novel method for recognizing table structures and analyzing layouts. The analysis of the detected layout components is based on the creation of a tile structure, which reliably recognizes row- and/or column-spanning cells as well as sparse tables. The whole method is domain agnostic, may ignore textual contents if desired, can therefore be applied to any mixed-mode document (with or without tables) in any language, and even works with low-quality OCR documents (e.g. facsimiles). The rapid development of machine learning in computer vision has had a significant impact on data-driven image-based table detection approaches; in 1998, Kieninger and Dengel \cite{inproceedings} proposed the first unsupervised machine learning method for the table detection task. In 2002, Cesarini Francesca et al. \cite{cesarini2002trainable} proposed a supervised machine learning algorithm based on a hierarchical representation using the MXY tree. The presence of a table is inferred by looking for parallel lines in the page's MXY tree. This hypothesis is then supported by the presence of perpendicular lines or white spaces in the area between the parallel lines. Finally, based on proximity and similarity criteria, located tables can be merged.
Machine learning algorithms have also been used for other tasks in table detection and structure recognition, such as feature extraction with support vector machines (SVMs), proposed by Kasar \cite{6628801}, and sequence labeling, by Silva et al. \cite{e2009learning}. Silva proposed hidden Markov models (HMMs) for table location through interdependent classification using probabilistic graphical models; this work shows how to incorporate different document structure finders into the HMM. Using machine learning algorithms for table detection led to improved accuracy. Deep learning plays an important role in computer vision, and it has had a significant impact on table detection in scanned images. For document analysis, convolutional neural networks (CNNs) are the top candidate among deep learning approaches to image processing. CNNs for object detection have been widely implemented in document analysis and image processing \cite{kara2019deep,kara2020holistic,arif2018table,gilani2017table}. Faster R-CNN \cite{ren2015faster} has shown a good impact on table detection and achieved state-of-the-art performance on ICDAR-2013. Shoaib et al. \cite{8540832} proposed a method combining a deformable CNN with Faster R-CNN. Deformable convolution bases its receptive field on the input, allowing it to shape its receptive field to match the input. The network can then adapt to tables with any layout thanks to this adaptation of the receptive field. CascadeTabNet \cite{prasad2020cascadetabnet} is a deep learning-based end-to-end solution that uses a single convolutional neural network (CNN) model to solve both the table detection and structure recognition problems. CascadeTabNet presents a Cascade Mask Region-based CNN High-Resolution Network (Cascade Mask R-CNN HRNet) model that simultaneously detects table regions and classifies the detected tables.
DeepDeSRT \cite{8270123} consists of two steps: the first is a deep learning method for table detection that fine-tunes a pre-trained Faster R-CNN model, and the second is a deep learning method for table structure recognition that fine-tunes the FCN proposed by Shelhamer et al. \cite{long2015fully}, trained on Pascal VOC \cite{everingham2010pascal}. For both table detection and structure recognition, TableNet \cite{paliwal2019tablenet} proposed a novel end-to-end deep learning model. To segment out the table and column regions, the model takes advantage of the interdependence between the twin tasks of table detection and table structure recognition. Then, from the identified tabular sub-regions, semantic rule-based row extraction is performed. On the publicly available ICDAR 2013 and Marmot table datasets, the proposed model and extraction approach were evaluated, yielding state-of-the-art results. Kavasidis et al. \cite{kavasidis2018saliency} proposed a fully convolutional neural network for table and chart detection that overcomes the shortcomings of existing methods. The paper proposes a saliency-based fully convolutional neural network that performs multi-scale reasoning on visual cues, followed by a fully connected conditional random field (CRF) for localizing tables and charts in digital/digitized documents. Leipeng Hao et al. \cite{7490132} proposed a novel method for detecting tables in PDF documents using convolutional neural networks, one of the most widely used deep learning models. The proposed method begins by selecting some table-like areas using some loose rules, and then builds and refines convolutional networks to determine whether the selected areas are tables or not. \section{Table Net Detection and Classification Dataset (TNCR)} \label{dataset} Tables in documents are of different types; they differ from each other in structure and form.
The variety of table types poses a challenge for the neural network; after analyzing all the tables we have, we classified them into 5 groups: \begin{enumerate} \item Full lined: a table with complete lines and without merged cells (Fig. \ref{fig:Full Line}). All cells are delimited by lines, and all columns and rows are bounded by lines on both sides; the length of all horizontal lines is equal to the width of the table, and the length of the vertical lines is equal to its height. \item No lines: a table that has no lines at all, the opposite of the ``Full lined'' class (Fig. \ref{fig:nolines}). \item Merged cells: a table that looks similar to the ``Full lined'' class, but has at least one merged cell (Fig. \ref{fig:merged_cells}). A merged cell arises when two or more cells of a full lined table are concatenated and their contents are not delimited. \item Partial lined: a table that is missing some lines and has no merged cells (Fig. \ref{fig:partial_lined}). A partial lined table is a full lined table with one or more lines missing; visually there are pronounced columns, the column structure is clearly visible, and the vertical side lines are absent. \item Partial lined merged cells: a table that is missing some lines and has merged cells (Fig. \ref{fig:partial_lined_merged_cells}). \end{enumerate} \begin{figure}[h!]
\begin{subfigure}{0.5\textwidth} \includegraphics[width=\linewidth,keepaspectratio]{full_lined.png} \caption{An example of the ``Full lined'' class} \label{fig:Full Line} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\linewidth,keepaspectratio]{nolines.png} \caption{An example of the ``No lines'' class} \label{fig:nolines} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\linewidth,keepaspectratio]{merged_cells.png} \caption{An example of the ``Merged cells'' class} \label{fig:merged_cells} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\linewidth,keepaspectratio]{partial_lined.png} \caption{An example of the ``Partial lined'' class} \label{fig:partial_lined} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\linewidth,keepaspectratio]{partial_lined_merged_cells.png} \caption{An example of the ``Partial lined merged cells'' class} \label{fig:partial_lined_merged_cells} \end{subfigure} \caption{Samples from the dataset} \label{fig:samples_datase} \end{figure} Fig. \ref{fig:histogram_of_dataset_before_balance} shows the number of tables in each class of the dataset. For three of the classes (No lines, Partial lined merged cells and Partial lined) there were initially not enough tables for a balanced dataset, so additional tables had to be found in the public domain. A first model, a plain Faster R-CNN \cite{ren2015faster} implemented with the luminoth library, was trained on the unbalanced dataset and used to parse PDF documents from the site accessdata.fda.gov: of 875,026 parsed PDF pages, the model recognized 225,154 pages containing tables. The missing tables for the three classes were taken from these pages and the dataset was re-partitioned. Statistics after re-partitioning are shown in Fig. \ref{fig:histogram_of_dataset_aftere_balance}. \begin{figure}[h!]
\begin{subfigure}{0.5\textwidth} \includegraphics[width=\linewidth,keepaspectratio]{num_before_balance.png} \caption{Dataset before re-partitioning} \label{fig:histogram_of_dataset_before_balance} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\linewidth,keepaspectratio]{num_after_balance.png} \caption{Dataset after re-partitioning} \label{fig:histogram_of_dataset_aftere_balance} \end{subfigure} \caption{Histogram of dataset} \label{fig:samples} \end{figure} \section{Methodology} \label{sec:methodology} In this section, we describe the object detection and classification methodology, including the different methods and models that we used for table detection and classification. \subsection{Cascade R-CNN} A natural next step beyond the R-CNN family is to improve the quality of segmentation and object detection, that is, to make predictions that are more accurate at the pixel level. It is difficult for object detection CNNs to accurately detect objects of varying quality and size in an image, because models are trained with a single threshold $u$ on the Intersection over Union (IoU), typically requiring an overlap of at least 50\% for a proposal to count as a positive example. This low threshold lets many bad proposals through the Region Proposal Network (RPN) and also makes the network specialize in proposals with overlap around $u = 0.50$. To address this problem, Cai \cite{cai2019cascade} proposed Cascade R-CNN, which sets up a multistage network with $u$ increasing at each stage. It uses the same architecture as Faster R-CNN, but chains several such detectors in sequence, as seen in Fig. \ref{fig:CascadeR-CNN}. In Faster R-CNN the RPN outputs proposals, which are then classified and assigned a bounding box, and those with $u < 0.50$ are discarded. Instead of stopping at this stage, Cascade R-CNN uses the output bounding boxes of the first stage as new region proposals.
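To make the role of the IoU threshold $u$ concrete, the following is a toy sketch (the function names are our own, and the real Cascade R-CNN additionally refines each surviving box with a learned regressor at every stage) of computing IoU for axis-aligned boxes and filtering proposals through stages with increasing thresholds:

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def cascade_filter(proposals, gt_box, thresholds=(0.5, 0.6, 0.7)):
    """Keep only the proposals whose IoU with the ground truth passes
    each successive stage threshold, mimicking the increasing u."""
    for u in thresholds:
        proposals = [p for p in proposals if iou(p, gt_box) >= u]
    return proposals
```

A proposal covering only half of a ground-truth table (IoU 0.5) survives the first stage but is rejected by the stricter later stages, which is precisely how the cascade specializes in high-quality boxes.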
The second stage increases $u$ and further refines the output. This is repeated in a third stage and could be repeated for as long as memory allows; however, the authors found that the result does not improve beyond three stages. The key point is that, because the network is trained end-to-end, the stages following the initial Faster R-CNN become increasingly good at discarding the low-quality proposals of the previous stage, and hence produce better-quality bounding boxes at the final stage. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth,height=.60\textwidth]{Cascade_R-CNN.JPG} \caption{Cascade R-CNN} \label{fig:CascadeR-CNN} \end{figure} Fig. \ref{fig:CascadeR-CNN} illustrates the Cascade R-CNN architecture, a multi-stage extension of the Faster R-CNN architecture. Cascade R-CNN concentrates on the detection sub-network and adopts the RPN of the Faster R-CNN architecture for proposal detection, although it is not limited to this proposal mechanism and other options are possible. In the first stage, a proposal sub-network processes the entire image with a backbone network, such as ResNet \cite{he2016deep}, and a proposal head (“H0”) generates preliminary detection hypotheses, known as object proposals. In the second stage, a region-of-interest detection sub-network (“H1”), denoted a detection head, processes these hypotheses and assigns a final classification score (“C”) and a bounding box (“B”) to each one. The entire detector is learned end-to-end using a multi-task loss with bounding box regression and classification components. \subsection{Cascade Mask R-CNN} Cascade Mask R-CNN is obtained from Cascade R-CNN in the same way that Mask R-CNN is obtained from Faster R-CNN: by adding a segmentation branch in parallel to the bounding box regression and classification branches, as seen in Fig. \ref{fig:CascadeMaskR-CNN}.
This is because segmentation is a pixel-wise operation and is not necessarily improved by having a well-defined bounding box. The authors propose adding the mask-segmentation branch in the first stage, since this is the least computationally heavy option. Note that in Mask R-CNN the segmentation branch is added in parallel to a single detection branch, whereas Cascade R-CNN has several detection branches. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth,height=.60\textwidth]{Cascade_mask_R-CNN.JPG} \caption{Cascade Mask R-CNN} \label{fig:CascadeMaskR-CNN} \end{figure} \subsection{Cascade RPN} Fig. \ref{fig:Cascade RPN} depicts the architecture of a two-stage Cascade RPN \cite{vu2019cascade}. Cascade RPN uses adaptive convolution to align the features to the anchors. Because the anchor center offsets are zero, the adaptive convolution is set to perform dilated convolution in the first stage. Since the dilated convolution maintains the spatial order of the features, the features of the first stage are “bridged” to the following stages. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth,height=.60\textwidth]{Cascade_RPN.JPG} \caption{Cascade RPN} \label{fig:Cascade RPN} \end{figure} \subsection{Hybrid Task Cascade (HTC)} The Hybrid Task Cascade (HTC) \cite{chen2019hybrid} is an instance segmentation cascade architecture. The main idea is to improve information flow by incorporating cascading and multi-tasking at each stage, and to leverage spatial context to improve accuracy even further. In particular, HTC designs a cascaded pipeline for progressive refinement. As seen in Fig. \ref{fig:Hybrid Task Cascade}, it stands out from other frameworks in several ways: \begin{itemize} \item Instead of running bounding box regression and mask prediction in parallel, it interleaves them.
\item It includes a direct path that reinforces the information flow between mask branches by feeding the mask features of the previous stage to the current one. \item By combining an additional semantic segmentation branch with the box and mask branches, it aims to exploit more contextual information. \end{itemize} \begin{figure}[ht!] \centering \includegraphics[width=\textwidth,height=.60\textwidth]{htc.JPG} \caption{Hybrid Task Cascade} \label{fig:Hybrid Task Cascade} \end{figure} \subsection{YOLO} YOLOv3 \cite{redmon2018yolov3} uses logistic regression to predict the objectness of each bounding box. This value should be 1 if the bounding box prior overlaps a ground truth object by a greater amount than any other bounding box prior. If a bounding box prior is not the best but overlaps a ground truth object by more than a threshold, YOLOv3 ignores the prediction; a threshold of 0.5 is employed. YOLOv3 assigns only one prior bounding box to each ground truth object, and a bounding box prior that is not assigned to a ground truth object incurs no loss for its coordinate or class predictions. \section{Experiments Results} \label{Experiments_Results} \subsection{Dataset and Metrics performance} The TNCR dataset can serve as a basis for research on table detection, structure recognition and table classification. It contains 5 different classes of tables, which can help researchers to detect and classify tables even when no row or column lines are present. In this work we also perform preprocessing for tabular cell recognition in the TNCR dataset; the representation of a table in a machine-readable format, where its layout is encoded according to a pre-defined standard, is known as table structure recognition \cite{jiang2021tabcellnet,zhong2019image}. The TNCR dataset is split into training, validation and testing sets: we carefully split each class of the dataset into 70\% for training, 15\% for validation and 15\% for testing, as shown in Table
\ref{tab:split_dataset}. \begin{table}[h!] \caption{Training, validation and testing splits of the dataset.} \begin{tabular}{|c|c|c|c|c|c|} \hline & Full lined & No lines & Merged cells & Partial lined & Partial lined merged cells \\ \hline Training & 1888 & 1469 & 1409 & 965 & 804 \\ \hline Validation & 405 & 315 & 302 & 207 & 173 \\ \hline Testing & 405 & 315 & 302 & 207 & 172 \\ \hline Total & 2698 & 2099 & 2013 & 1379 & 1149 \\ \hline \end{tabular} \label{tab:split_dataset} \end{table} To evaluate our table detection results, we calculate the average precision (AP), average recall (AR) and F1-score in the same way as the standard COCO evaluation metrics, at different Intersection over Union (IoU) thresholds. Precision, recall and the F1 score are calculated as follows: \begin{equation} \textrm{Average Precision (AP)} = \frac{\textrm{True Positive (TP)}}{\textrm{True Positive (TP)}+\textrm{False Positive (FP)}} \end{equation} \begin{equation} \textrm{Average Recall (AR)} = \frac{\textrm{True Positive (TP)}}{\textrm{True Positive (TP)}+\textrm{False Negative (FN)}} \end{equation} \begin{equation} \textrm{F1-score} = \frac{2 \cdot \textrm{AP} \cdot \textrm{AR}}{\textrm{AP}+\textrm{AR}} \end{equation} We define true positive (TP) detection results consistently and use them to compute precision and recall: a detected region counts as a true positive only if it includes the table header and all instances, so that the entire ground-truth table is captured \cite{luo2021deep}, and the area within the bounding box must be free of any noise that would detract from the purity of the tabular region. In the confusion matrix of each model, false positives (FP) are detections that are not tables, and false negatives (FN) are actual tables with incorrect bounding boxes or no bounding boxes at all. The AP, AR and F1-score metrics are then computed from these confusion matrix elements.
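The three metrics above reduce to a few lines of code; the following is an illustrative sketch (the function name is ours), not the evaluation code used for the experiments:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from true positives, false positives
    and false negatives, mirroring the AP, AR and F1-score equations."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For the 50\%:95\% columns of the result tables, the same quantities are simply averaged over IoU thresholds from 0.5 to 0.95 in steps of 0.05.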
To compute the evaluation metrics, we used different IoU thresholds for the overlapping area between the detection result and the ground truth. IoU is used to determine whether a table region has been correctly detected and to measure the overlap of the detected boxes. \subsection{Experiment Settings} All of the proposed and tested models have been implemented using the MMDetection library \cite{chen2019mmdetection} for PyTorch. MMDetection is an object detection toolbox that includes a large number of object detection and instance segmentation methods, together with related components and modules, and has gradually developed into a unified platform covering many popular detection methods and modern modules. The experiments were performed on the Google Colaboratory platform with 3 Tesla V100-SXM GPUs with 16 GB of GPU memory each and 16 GB of RAM; we also ran experiments on a machine with 2$\times$ ``Intel(R) Xeon(R) E-5-2680'' CPUs and 4$\times$ ``NVIDIA Tesla k20x'' GPUs. All models have been trained and tested with images scaled to a fixed size of 1300$\times$1500 with a batch size of 16. SGD is used as the optimizer with a momentum of 0.9, a weight decay of 0.0001 and a learning rate of 0.02. All models utilize a Feature Pyramid Network (FPN) neck. \subsection{Results} The evaluation results of table detection with the Cascade Mask R-CNN model and different backbones are shown in Table \ref{tab:Cascade Mask R-CNN}. The ResNeXt-101-64x4d backbone achieves the highest F1 score of 0.844 over 50\%:95\% and maintains the highest F1 score at the individual IoUs. The ResNeXt-101-32x4d backbone achieves the lowest performance at IoUs of 95\%, 90\% and 50\%:95\%, while the Resnet-101 backbone with a 1$\times$ Lr schedule shows the lowest performance at IoUs of 50\% to 85\%. Benchmarks are frequently assessed at 50\% IoU or a mean average of 50\% to 95\% IoU.
As a result, at 50\% IoU, ResNeXt-101-64x4d backbone has the highest precision and recall (0.891 and 0.975, respectively). \begin{table}[h!] \caption{Cascade Mask R-CNN} \begin{center} \begin{adjustbox}{width=1\textwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Backbone}& \multirow{2}{*}{Lr schd}& \multirow{2}{*}{} & \multicolumn{11}{|c|}{ IoU } \\ \cline{4-14} & & & 50\% & 55\% & 60\% & 65\% & 70\% & 75\% & 80\% & 85\% & 90\% & 95\% & 50\%:95\% \\ \hline \multirow{3}{*}{Resnet-50} & \multirow{3}{*}{1x} & Precision & 0.709 & 0.708 & 0.708 & 0.706 & 0.704 & 0.701 & 0.690 & 0.675 & 0.650 & 0.557 & 0.633\\ & & Recall & 0.778 & 0.777 & 0.776 & 0.775 & 0.774 & 0.770 & 0.760 & 0.747 & 0.725 & 0.647 & 0.713 \\ & & F1-Score & 0.741 & 0.740 & 0.740 & 0.738 & 0.737 & 0.733 & 0.723 & 0.709 & 0.685 & 0.598 & 0.670 \\ \hline \multirow{3}{*}{Resnet-50} & \multirow{3}{*}{20e} & Precision & 0.713 & 0.713 & 0.711 & 0.711 & 0.709 & 0.707 & 0.702 & 0.688 & 0.663 & 0.587 & 0.650 \\ & & Recall & 0.775 & 0.775 & 0.774 & 0.773 & 0.773 & 0.769 & 0.764 & 0.752 & 0.729 & 0.663 & 0.719 \\ & & F1-Score & 0.742 & 0.742 & 0.741 & 0.740 & 0.739 & 0.736 & 0.731 & 0.718 & 0.694 & 0.622 & 0.682 \\ \hline \multirow{3}{*}{Resnet-101} & \multirow{3}{*}{1x} & Precision & 0.701 & 0.699 & 0.699 & 0.698 & 0.696 & 0.692 & 0.684 & 0.673 & 0.653 & 0.570 & 0.635 \\ & & Recall & 0.776 & 0.776 & 0.775 & 0.774 & 0.773 & 0.768 & 0.757 & 0.75 & 0.731 & 0.659 & 0.718 \\ & & F1-Score & \textbf{0.736}* & \textbf{0.735}* & \textbf{0.735}* & \textbf{0.734}* & \textbf{0.732}* & \textbf{0.728}* & \textbf{0.718}* & \textbf{0.709}* & 0.689 & 0.611 & 0.673 \\ \hline \multirow{3}{*}{Resnet-101} & \multirow{3}{*}{20e} & Precision & 0.803 & 0.802 & 0.799 & 0.796 & 0.788 & 0.781 & 0.766 & 0.734 & 0.674 & 0.468 & 0.636 \\ & & Recall & 0.968 & 0.967 & 0.964 & 0.961 & 0.953 & 0.945 & 0.931 & 0.903 & 0.849 & 0.669 & 0.819 \\ & & F1-Score & 0.877 & 0.876 & 0.873 & 0.870 & 0.862 & 0.855 & 0.840 & 0.809 & 
0.751 & 0.550 & 0.715 \\ \hline \multirow{3}{*}{ResNeXt-101-32x4d} & \multirow{3}{*}{1x} & Precision & 0.761 & 0.760 & 0.751 & 0.740 & 0.735 & 0.728 & 0.696 & 0.665 & 0.591 & 0.383 & 0.572 \\ & & Recall & 0.954 & 0.953 & 0.944 & 0.936 & 0.931 & 0.925 & 0.890 & 0.859 & 0.799 & 0.583 & 0.769 \\ & & F1-Score & 0.846 & 0.845 & 0.836 & 0.826 & 0.821 & 0.814 & 0.781 & 0.749 & \textbf{0.679}* & \textbf{0.462}* & \textbf{0.656}* \\ \hline \multirow{3}{*}{ResNeXt-101-64x4d} & \multirow{3}{*}{1x} & Precision & 0.891 & 0.891 & 0.889 & 0.886 & 0.885 & 0.881 & 0.871 & 0.853 & 0.822 & 0.703 & 0.797\\ & & Recall & 0.975 & 0.975 & 0.973 & 0.970 & 0.969 & 0.965 & 0.958 & 0.942 & 0.917 & 0.820 & 0.898 \\ & & F1-Score & \textbf{0.931} & \textbf{0.931} & \textbf{0.929} & \textbf{0.926} & \textbf{0.925} & \textbf{0.921} & \textbf{0.912} & \textbf{0.895} & \textbf{0.866} & \textbf{0.757} & \textbf{0.844}\\ \hline \end{tabular} \end{adjustbox} \end{center} \label{tab:Cascade Mask R-CNN} \end{table} Table \ref{tab:CascadeR-CNN} reports the results for the Cascade R-CNN model, proposed by \cite{cai2019cascade}, with different backbones. The ResNeXt-101-64x4d backbone achieves the highest F1 score of 0.841 over 50\%:95\% and maintains the highest F1 score at the individual IoUs. The Resnet-50 backbone with a 1$\times$ Lr schedule achieves the lowest performance at most IoUs, and the Resnet-101 backbone with a 1$\times$ Lr schedule also shows lower performance at IoUs of 65\% to 70\%. CascadeTabNet, proposed by \cite{prasad2020cascadetabnet}, combines Cascade Mask R-CNN with a High-Resolution Net (HRNet) backbone and achieved a 1.0 F1 score on the ICDAR2013 dataset. Tables \ref{tab:Cascade Mask R-CNN} and \ref{tab:CascadeR-CNN} show that ResNeXt-101 leads to an improvement over Resnet-101 and Resnet-50, with an F1 score of 0.931 at 50\% IoU compared to 0.877 and 0.742 respectively for Cascade Mask R-CNN. \begin{table}[h!]
\caption{Cascade R-CNN} \begin{center} \begin{adjustbox}{width=1\textwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Backbone}& \multirow{2}{*}{Lr schd}& \multirow{2}{*}{} & \multicolumn{11}{|c|}{ IoU } \\ \cline{4-14} & & & 50\% & 55\% & 60\% & 65\% & 70\% & 75\% & 80\% & 85\% & 90\% & 95\% & 50\%:95\% \\ \hline \multirow{3}{*}{Resnet-50} & \multirow{3}{*}{1x} & Precision & 0.699 & 0.698 & 0.698 & 0.697 & 0.695 & 0.689 & 0.682 & 0.667 & 0.637 & 0.528 & 0.613 \\ & & Recall & 0.776 & 0.698 & 0.698 & 0.775 & 0.772 & 0.765 & 0.758 & 0.745 & 0.719 & 0.623 & 0.699 \\ & & F1-Score & \textbf{0.735}* & \textbf{0.698}* & \textbf{0.698}* & \textbf{0.733}* & \textbf{0.731}* & \textbf{0.725}* & \textbf{0.717}* & \textbf{0.703}* & \textbf{0.675}* & \textbf{0.571}* & \textbf{0.653}* \\ \hline \multirow{3}{*}{Resnet-50} & \multirow{3}{*}{20e} & Precision & 0.709 & 0.709 & 0.707 & 0.707 & 0.705 & 0.703 & 0.697 & 0.682 & 0.650 & 0.553 & 0.631\\ & & Recall & 0.776 & 0.776 & 0.774 & 0.773 & 0.771 & 0.767 & 0.762 & 0.751 & 0.721 & 0.640 & 0.708\\ & & F1-Score & 0.740 & 0.740 & 0.738 & 0.738 & 0.736 & 0.733 & 0.728 & 0.714 & 0.683 & 0.593 & 0.667\\ \hline \multirow{3}{*}{Resnet-101} & \multirow{3}{*}{1x} & Precision & 0.700 & 0.699 & 0.699 & 0.697 & 0.695 & 0.691 & 0.686 & 0.672 & 0.648 & 0.547 & 0.624 \\ & & Recall & 0.776 & 0.776 & 0.776 & 0.774 & 0.771 & 0.766 & 0.761 & 0.750 & 0.727 & 0.636 & 0.706 \\ & & F1-Score & 0.736 & 0.735 & 0.735 & \textbf{0.733}* & \textbf{0.731}* & 0.726 & 0.721 & 0.708 & 0.685 & 0.588 & 0.662 \\ \hline \multirow{3}{*}{Resnet-101} & \multirow{3}{*}{20e} & Precision & 0.711 & 0.711 & 0.710 & 0.709 & 0.708 & 0.704 & 0.693 & 0.680 & 0.657 & 0.572 & 0.642 \\ & & Recall & 0.776 & 0.776 & 0.775 & 0.774 & 0.772 & 0.769 & 0.756 & 0.745 & 0.723 & 0.649 & 0.712 \\ & & F1-Score & 0.742 & 0.742 & 0.741 & 0.740 & 0.738 & 0.735 & 0.723 & 0.711 & 0.688 & 0.608 & 0.675\\ \hline \multirow{3}{*}{ResNeXt-101-32x4d} & \multirow{3}{*}{1x} & 
Precision & 0.710 & 0.708 & 0.706 & 0.705 & 0.702 & 0.700 & 0.692 & 0.681 & 0.663 & 0.564 & 0.637\\ & & Recall & 0.780 & 0.778 & 0.777 & 0.776 & 0.772 & 0.770 & 0.763 & 0.753 & 0.735 & 0.651 & 0.716\\ & & F1-Score & 0.743 & 0.741 & 0.739 & 0.738 & 0.735 & 0.733 & 0.725 & 0.715 & 0.697 & 0.604 & 0.674\\ \hline \multirow{3}{*}{ResNeXt-101-64x4d} & \multirow{3}{*}{1x} & Precision & 0.894 & 0.894 & 0.892 & 0.892 & 0.890 & 0.886 & 0.877 & 0.862 & 0.831 & 0.703 & 0.798 \\ & & Recall & 0.971 & 0.971 & 0.970 & 0.959 & 0.967 & 0.963 & 0.954 & 0.943 & 0.914 & 0.810 & 0.891\\ & & F1-Score & \textbf{0.930} & \textbf{0.930} & \textbf{0.929} & \textbf{0.924} & \textbf{0.926} & \textbf{0.922} & \textbf{0.913} & \textbf{0.900} & \textbf{0.870} & \textbf{0.752} & \textbf{0.841}\\ \hline \end{tabular} \end{adjustbox} \end{center} \label{tab:CascadeR-CNN} \end{table} To demonstrate the effectiveness of Cascade RPN \cite{vu2019cascade}, we adopted Fast R-CNN and Cascade RPN (CRPN) for table detection; Table \ref{tab:CascadeRPN} shows the results. The Fast R-CNN method achieves an F1 score of 0.804 over 50\%:95\% IoU and performs better for table detection than CRPN, which achieves an F1 score of 0.609 over the same range. We also measured the average recall (AR), the mean of recall over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is used to assess the quality of region proposals; at 50\% IoU, recall reaches 0.994 for Fast R-CNN and 0.962 for CRPN. \begin{table}[h!]
\caption{Cascade RPN} \begin{center} \begin{adjustbox}{width=1\textwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method}&\multirow{2}{*}{Backbone}& \multirow{2}{*}{Lr schd}& \multirow{2}{*}{} & \multicolumn{11}{|c|}{ IoU } \\ \cline{5-15} & & & & 50\% & 55\% & 60\% & 65\% & 70\% & 75\% & 80\% & 85\% & 90\% & 95\% & 50\%:95\% \\ \hline \multirow{3}{*}{Fast R-CNN} & \multirow{3}{*}{Resnet-50} & \multirow{3}{*}{1x} & Precision & 0.894 & 0.892 & 0.892 & 0.888 & 0.887 & 0.880 & 0.864 & 0.838 & 0.792 & 0.603 & 0.749 \\ && & Recall & 0.994 & 0.993 & 0.992 & 0.987 & 0.985 & 0.978 & 0.964 & 0.941 & 0.901 & 0.744 & 0.869 \\ && & F1-Score & 0.941 & 0.939 & 0.939 & 0.934 & 0.933 & 0.926 & 0.911 & 0.886 & 0.842 & 0.666 & 0.804\\ \hline \multirow{3}{*}{CRPN} & \multirow{3}{*}{Resnet-50} & \multirow{3}{*}{1x} & Precision & 0.884 & 0.882 & 0.871 & 0.870 & 0.863 & 0.854 & 0.837 & 0.773 & 0.683 & 0.521 & 0.553 \\ && & Recall & 0.962 & 0.959 & 0.958 & 0.956 & 0.949 & 0.932 & 0.919 & 0.885 & 0.813 & 0.697 & 0.679 \\ && & F1-Score & 0.921 & 0.918 & 0.912 & 0.910 & 0.903 & 0.891 & 0.876 & 0.825 & 0.742 & 0.596 & 0.609 \\ \hline \end{tabular} \end{adjustbox} \end{center} \label{tab:CascadeRPN} \end{table} Table \ref{tab:HTC} reports the results for the Hybrid Task Cascade (HTC) \cite{chen2019hybrid}, whose interleaved box and mask branches, inter-stage mask information flow and additional semantic segmentation branch were described above. It shows that the Resnet-50 backbone with a 1$\times$ Lr schedule achieves the highest F1 score of 0.840 over 50\%:95\% and maintains the highest F1 score at the individual IoUs.
The Resnet-50 backbone with a $20e$ Lr schedule achieves the lowest performance over the 50\% to 95\% IoU range, while the Resnet-101 backbone achieves a 2.8\% improvement over Resnet-50 with the $20e$ Lr schedule at 50\%:95\%. The ResNeXt-101-32x4d and ResNeXt-101-64x4d backbones suffered from overfitting on this dataset. \begin{table}[h!] \caption{Hybrid Task Cascade} \begin{center} \begin{adjustbox}{width=1\textwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Backbone}& \multirow{2}{*}{Lr schd}& \multirow{2}{*}{} & \multicolumn{11}{|c|}{ IoU } \\ \cline{4-14} & & & 50\% & 55\% & 60\% & 65\% & 70\% & 75\% & 80\% & 85\% & 90\% & 95\% & 50\%:95\% \\ \hline \multirow{3}{*}{Resnet-50} & \multirow{3}{*}{1x} & Precision & 0.886 & 0.884 & 0.883 & 0.882 & 0.879 & 0.874 & 0.863 & 0.838 & 0.790 & 0.687 & 0.787 \\ & & Recall & 0.993 & 0.991 & 0.991 & 0.990 & 0.986 & 0.980 & 0.968 & 0.947 & 0.906 & 0.809 & 0.901 \\ & & F1-Score & \textbf{0.936} & \textbf{0.934} & \textbf{0.933} & \textbf{0.932} & \textbf{0.929} & \textbf{0.923} & \textbf{0.912} & \textbf{0.889} & \textbf{0.844} & \textbf{0.743} & \textbf{0.840}\\ \hline \multirow{3}{*}{Resnet-50} & \multirow{3}{*}{20e} & Precision & 0.860 & 0.858 & 0.857 & 0.856 & 0.848 & 0.842 & 0.828 & 0.804 & 0.746 & 0.523 & 0.691 \\ & & Recall & 0.989 & 0.987 & 0.986 & 0.985 & 0.975 & 0.969 & 0.955 & 0.929 & 0.872 & 0.696 & 0.843 \\ & & F1-Score & \textbf{0.919}* & \textbf{0.917}* & \textbf{0.916}* & \textbf{0.915}* & \textbf{0.907}* & \textbf{0.901}* & \textbf{0.886}* & \textbf{0.861}* & \textbf{0.804}* & \textbf{0.597}* & \textbf{0.759}*\\ \hline \multirow{3}{*}{Resnet-101} & \multirow{3}{*}{1x} & Precision & 0.867 & 0.866 & 0.864 & 0.860 & 0.856 & 0.849 & 0.836 & 0.817 & 0.771 & 0.576 & 0.722 \\ & & Recall & 0.992 & 0.991 & 0.989 & 0.983 & 0.977 & 0.970 & 0.957 & 0.940 & 0.902 & 0.741 & 0.867 \\ & & F1-Score & 0.925 & 0.924 & 0.922 & 0.917 & 0.912 & 0.905 & 0.892 & 0.874 & 0.831 & 0.648 & 0.787 \\ \hline \hline \end{tabular} \end{adjustbox}
\end{center} \label{tab:HTC} \end{table} Table \ref{tab:YOLO} shows the performance of YOLO for table detection. YOLO performs worse than all of the other models and is not well suited to table detection. We trained YOLOv3 with the DarkNet-53 backbone at three input scales (320, 416 and 608). DarkNet-53 at the 320 scale achieves an F1 score of 0.492 over 50\%:95\%; at 95\% IoU all scales perform very poorly, with F1 scores as low as 0.042. \begin{table}[h!] \caption{YOLO} \begin{center} \begin{adjustbox}{width=1\textwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Backbone}& \multirow{2}{*}{Scale}& \multirow{2}{*}{} & \multicolumn{11}{|c|}{ IoU } \\ \cline{4-14} & & & 50\% & 55\% & 60\% & 65\% & 70\% & 75\% & 80\% & 85\% & 90\% & 95\% & 50\%:95\% \\ \hline \multirow{3}{*}{DarkNet-53 } & \multirow{3}{*}{320} & Precision & 0.838 & 0.834 & 0.831 & 0.824 & 0.800 & 0.726 & 0.650 & 0.495 & 0.249 & 0.047 & 0.443 \\ & & Recall & 0.937 & 0.935 & 0.932 & 0.927 & 0.909 & 0.862 & 0.799 & 0.679 & 0.461 & 0.171 & 0.554 \\ & & F1-Score & \textbf{0.884} & \textbf{0.881} & \textbf{0.878} & \textbf{0.872} & \textbf{0.851} & \textbf{0.788} & \textbf{0.716} & \textbf{0.572} & \textbf{0.323} & \textbf{0.073} & \textbf{0.492} \\ \hline \multirow{3}{*}{DarkNet-53 } & \multirow{3}{*}{416} & Precision & 0.846 & 0.840 & 0.839 & 0.835 & 0.819 & 0.776 & 0.706 & 0.532 & 0.279 & 0.039 & 0.443 \\ & & Recall & 0.947 & 0.942 & 0.941 & 0.937 & 0.918 & 0.891 & 0.834 & 0.707 & 0.478 & 0.130 & 0.538 \\ & & F1-Score & 0.893 & 0.888 & 0.887 & 0.883 & 0.865 & 0.829 & 0.764 & 0.607 & 0.352 & 0.059 & 0.485 \\ \hline \multirow{3}{*}{DarkNet-53 } & \multirow{3}{*}{608} & Precision & 0.841 & 0.835 & 0.829 & 0.821 & 0.800 & 0.773 & 0.713 & 0.555 & 0.229 & 0.026 & 0.433 \\ & & Recall & 0.955 & 0.948 & 0.943 & 0.935 & 0.919 & 0.899 & 0.856 & 0.739 & 0.448 & 0.115 & 0.535\\ & & F1-Score & \textbf{0.894}* & \textbf{0.887}* & \textbf{0.882}* & \textbf{0.874}* & \textbf{0.855}* & \textbf{0.831}* & \textbf{0.777}* &
\textbf{0.633}* & \textbf{0.303}* & \textbf{0.042}* & \textbf{0.478}*\\ \hline \end{tabular} \end{adjustbox} \end{center} \label{tab:YOLO} \end{table} \section{Conclusion and future work} We introduce the TNCR dataset, a new image-based table analysis dataset collected from real images, to aid research in table detection, structure recognition and classification for document analysis. To evaluate performance on TNCR, we use the majority of modern object detection models as baselines, testing each model for table detection at every IoU threshold from 50\% to 95\%. Several combinations were evaluated, and the one that performed best by a clear margin was chosen. Table detection is much more difficult than cell structure detection, and our experiments show that image-based table detection and recognition with deep learning is a promising research direction. We anticipate that the TNCR dataset will unleash the power of deep learning in the table analysis task, while also encouraging more customized network structures to make significant progress. Cascade Mask R-CNN, Cascade R-CNN, Cascade RPN, Hybrid Task Cascade (HTC) and YOLO achieve F1 scores of 0.844, 0.841, 0.804, 0.840 and 0.492 respectively. For future work, due to the presence of a large amount of tabular data in documents, the structure recognition task is critical in terms of its applicability in business and finance, and we intend to expand the dataset by adding more real labelled images. We will also improve our table detection model to address persistent issues with recognizing structures that are in close proximity to other elements of interest in an image, and we plan to balance the classes of the dataset for the classification task. \bibliographystyle{elsarticle-num}
\section{Introduction} The now retired NASA Kepler Space Telescope is responsible for observations leading to the confirmation of hundreds of multi-planet systems \citep{Lissauer2014, Rowe2014}. Of these systems, as many as six percent are thought to be compact \citep{Wu2019}, containing planets that are much more closely spaced than the inner planets of our own Solar System. These discoveries have naturally led to many questions being asked about the long-term stability of compact exoplanet systems. Indeed, it is even possible that compact planetary embryos existed interior to Venus's current orbit that have subsequently been \rev{expelled} from this region due to orbital instabilities \citep{Volk2015}. Within the class of observed compact systems, a large population of planets have been observed with a mass \citep{Mayor2011} and radius \citep{Petigura2013} between that of Earth and Neptune. Moreover, the observed orbital architecture is such that mutual inclinations are small, typically in the region of $1^{\circ}$ to $2^{\circ}$ \citep{Fabrycky2014}, while eccentricities are also found to be small\rev{, on average $\Bar{e} \approx 0.04$} \citep{Xie2016}. An archetypal example of these systems, albeit containing six planets, is Kepler-11 \citep{Lissauer2011}. Exoplanet systems with orbital spacings much greater than that required for stability are also present in the Kepler data set. It is \rev{a favoured hypothesis} that this orbital architecture is a result of dynamical instabilities in much more compact systems leading to close encounters and orbital reconfiguration \citep{Pu2015}. Understanding of the stability and evolution of compact exoplanet systems is therefore not only important for making sense of observations but also for understanding the planetary formation process as a whole. Characterisation of the stability of three or more planet systems can be approached in several ways. 
Analytical models have been built that can predict the lifetime of three planet systems based upon resonance overlap \citep{Wisdom1980, Quillen2011, Petit2020}. Recently, machine learning approaches have also been developed that, after being guided by a training set of $10^9$ year integrations, can use far shorter integrations to predict with surprisingly high accuracy which given exoplanet systems will remain stable for a billion orbital period\rev{s} \citep{Tamayo2016, Tamayo2020}. However, the most common approach to the problem, and the one employed in this paper, is the use of n-body simulation \citep{Chambers1999, Smith2009, Obertas2017, Hussain2020, Lissauer2021}. The majority of studies performed take a subset of the possible input parameter space for a compact, near-circular, near co-planar system of a given number of planets and then evolve this system forward in time checking for either the first close approach, typically specified as one Hill radius, $r_H$, or waiting for an orbital crossing to occur: this is then termed the instability event. Throughout this work we will use orbital crossing as our definition of an instability event and refer to the time at this point as the crossing time. \citet{Rice2018} found that systems containing four Neptune-size \rev{and Neptune-mass} planets \rev{initially located at $1$~AU} can continue to evolve after an instability event for over ten million dynamical periods before a collision of planets, meaning that the commonly used instability metric may not capture the entire evolution of the system. Given that the manner in which these planets collide determines the final orbital architecture it is important properly to understand this phase of the exoplanet system life cycle. 
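For a coplanar pair of orbits, the crossing criterion used throughout this work can be stated very simply: the osculating ellipses overlap radially once the apocentre of the inner orbit reaches past the pericentre of the outer one. A minimal sketch of this check (our own illustrative function, not the detection code used in the integrations, and ignoring mutual inclination):

```python
def orbits_cross(a_inner, e_inner, a_outer, e_outer):
    """True once two coplanar osculating orbits overlap radially, i.e. the
    apocentre of the inner orbit exceeds the pericentre of the outer one."""
    apocentre_inner = a_inner * (1.0 + e_inner)
    pericentre_outer = a_outer * (1.0 - e_outer)
    return apocentre_inner >= pericentre_outer
```

Two initially circular orbits at 1.0 and 1.1 AU do not cross; once secular and resonant interactions pump the eccentricities to only a few percent, the radial ranges overlap and the condition is satisfied.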
Our study builds upon the work done by \citet{Rice2018} and \citet{Lissauer2021} by considering the post-instability evolution of compact, Earth-analogue, three-planet systems across a large range of initial orbital separations equally spaced in units of mutual Hill radii. We create three integration suites called the standard suite, perturbed suite and inclined suite, and perform $4,835$ integrations each in the first two and a further \rev{$16,800$} in the final one. We continue integrations up until the time of first collision between planets or for $10^8$ or $10^9$ orbits depending on the experiment. In section \ref{sec: simulations} of this paper we describe the methodology used for our integrations including the initial conditions for each integration suite, the integration packages used and the termination criteria. Section \ref{sec: standard suite results} contains the results of all standard suite integrations: section \ref{sec: time scale to planet planet collision} details the time scales for orbital crossing and collision between pairs of planets, and presents collision probabilities over time for various initial configurations of systems; the effects of small changes in initial orbital longitudes upon these results are then examined in section \ref{sec: sensitivity to initial conditions}; and, finally, section \ref{sec: which planets collide} examines the probabilities of particular pairs of planets colliding. Section \ref{sec: inclined integration suite} introduces the results of inclined suite integrations: in section \ref{sec: dynamic heating} we explore the heating of what are initially dynamically cold systems that eventually enables orbital crossing and collision; here, we find that the three-planet Earth-mass systems behave in a similar manner to the four-planet Neptune-mass case but follow a different power law.
Section \ref{sec: inclined time scale to planet planet collision} examines the time scales leading to collision in the inclined case and shows that the survival time after crossing can be a non-trivial fraction of the main-sequence lifetime of stars. In addition, this section also examines how the lifetimes of systems depend on the distance from the innermost planet to the star and on the initial inclination. We summarise our findings in section \ref{sec: conclusions}. \section{Methods} \label{sec: simulations} We have chosen to simulate three-planet systems comprising analogues from our own Solar System. The central body in each of our systems is a one solar mass star, $m_0$ = 1 $\textrm{M}_\odot$. Each of the planets within the systems is of Earth mass, $m_j$ = 1 $\textrm{M}_\oplus$ where $j \in \{1,2,3\}$, with a planetary radius also equal to that of Earth, $\textrm{R}_p = \textrm{R}_\oplus$. Planets are placed on initially circular orbits orbiting the star in a common direction with the innermost planet located at $1~\textrm{AU}$. Time throughout this work is provided in units of the initial orbital period of the innermost planet; this means that the crossing time is invariant to rescaling of the system so long as the initial orbital period ratios between bodies are maintained along with the mass ratios of planets and star. \subsection{Initial semi-major axes} Initial semi-major axes $a_j$ of systems are evenly spaced in terms of mutual Hill radii. The mutual Hill radii \rev{are defined as} \begin{equation} r_{H_{j, j+1}} = \left( \dfrac{m_j + m_{j+1}}{m_0 + \sum^{j-1}_{k=1} m_k} \right)^{\frac{1}{3}} \left( \dfrac{a_{j} + a_{j+1}}{2}\right). \end{equation} This allows for a dimensionless value $\beta$ to be defined to specify the even spacing of adjacent planetary orbits in units of their mutual Hill radii as \begin{equation} \beta \equiv \dfrac{a_{j+1} - a_j}{r_{H_{j, j+1}}}.
\end{equation} Therefore, the initial semi-major axes of adjacent planets are chosen to be such that \begin{equation} \begin{split} &a_{j+1} = a_j + \beta r_{H_{j, j+1}} \\ &= a_j\left[1 + \dfrac{\beta}{2} \left( \dfrac{m_j + m_{j+1}}{m_0 + \sum^{j-1}_{k=1} m_k} \right)^{\frac{1}{3}} \right]\left[1 - \dfrac{\beta}{2} \left( \dfrac{m_j + m_{j+1}}{m_0 + \sum^{j-1}_{k=1} m_k} \right)^{\frac{1}{3}} \right]^{-1}. \end{split} \label{eq: semimajor axis spacing} \end{equation} The innermost planet is placed such that it has a semi-major axis of $1\, \textrm{AU}$, and all other semi-major axes are chosen through Eq.~\eqref{eq: semimajor axis spacing}. We refer to this configuration as a \emph{system at $1$ AU}. Likewise, later on, when results are generalised to include systems with an innermost planet located at $0.25$ AU with other planets spaced as per Eq.~\eqref{eq: semimajor axis spacing} we refer to it as a \emph{system at $0.25$ AU}. \begin{figure*} \centering \includegraphics[width=0.975\textwidth]{images/orbital_crossing.pdf} \caption{Plot showing the crossing time, $t_c$, and impact time, $t_i$, for all integrations in the standard suite \rev{for systems at $1$~AU}. Simulations are run for up to $10^9$ orbits in general, but some are terminated at $10^8$ orbits to save on computation. Orbits are specified by the initial period of the innermost planet. Impacts that take place before a crossing are highlighted \rev{by a} green \rev{diamond} whereas systems that did not cross within the maximum simulation time are marked with a red triangle.
\rev{Models fitted to the crossing and impact times according to Eq.~\ref{eq: model} are shown as a dashed black and a dashed red line, respectively.}} \label{fig: orbital crossing at 1AU} \end{figure*} \subsection{Stopping criteria and integration packages} We have opted to use the Terrestrial Exoplanet Simulator (TES) \footnote{Code available at \url{https://github.com/PeterBartram/TES}} package to perform our integrations \citep{Bartram2021a}. TES is a new numerical integration package written in C++ for propagating exoplanet systems. This package combines an integrator that follows Brouwer's law \citep{Brouwer1937} with a new special perturbation method to allow for reduced run-times and decreased numerical error resulting in, e.g., improved energy conservation. Additionally, this tool has been designed to allow for integration all the way to collision of terrestrial mass planets to machine precision. TES can be run using C++ directly, or through a python interface allowing for ease of use and for multiple integrations to be performed in parallel. Throughout our simulations we have opted to use TES with a non-dimensional tolerance of $1 \times 10^{-8}$ which has ensured that the relative energy error in all simulations, even after collision, and for the longest lived systems, is maintained below $1 \times 10^{-13}$. To validate our own results, we also repeated all of our standard suite integrations making use of \rev{IAS15} \citep{Rein2014} within the REBOUND package \citep{Rein2012}. The results from this comparison can be found in Appendix \ref{appendix: integrator comparison}. As mentioned before, time is measured by periods of the innermost planet in the system throughout this work, meaning that all times are specified in units of orbits or dynamical periods. Integrations run until either a collision is detected or the simulation reaches a maximum time of $10^8$ or $10^9$ dynamical periods, depending on the experiment. 
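For concreteness, the Hill-radius spacing of Eq.~\eqref{eq: semimajor axis spacing} and the golden-ratio mean anomalies used later can be generated with a short Python sketch. This is our own illustration, not the TES or REBOUND input format; the function names, and the Earth mass expressed in solar masses, are our assumptions.

```python
import math

M_EARTH = 3.0035e-6  # approximate Earth mass in solar masses (assumption)

def hill_spaced_axes(beta, masses, m_star=1.0, a_inner=1.0):
    """Semi-major axes spaced by beta mutual Hill radii, following
    a_{j+1} = a_j (1 + beta*x/2) / (1 - beta*x/2), with
    x = ((m_j + m_{j+1}) / (m_0 + sum of interior planet masses))^(1/3)."""
    a = [a_inner]
    interior_mass = m_star  # m_0 plus all planets interior to the current pair
    for j in range(len(masses) - 1):
        x = ((masses[j] + masses[j + 1]) / interior_mass) ** (1.0 / 3.0)
        a.append(a[-1] * (1.0 + 0.5 * beta * x) / (1.0 - 0.5 * beta * x))
        interior_mass += masses[j]
    return a

def golden_ratio_anomalies(n_planets):
    """Initial mean anomalies M_j = 2*pi*j*lambda (mod 2*pi),
    with lambda the golden ratio, avoiding special orientations."""
    lam = 0.5 * (1.0 + math.sqrt(5.0))
    return [(2.0 * math.pi * j * lam) % (2.0 * math.pi)
            for j in range(1, n_planets + 1)]

# Example: a three-Earth system at beta = 5 with the innermost planet at 1 AU.
axes = hill_spaced_axes(beta=5.0, masses=[M_EARTH] * 3)
anomalies = golden_ratio_anomalies(3)
```

These elements would then be converted to a state vector and handed to the integrator, mirroring the MERCURY-based pipeline described below.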
In order to detect an orbital crossing, the orbital elements of each planet are calculated at every step within each integration. These are then compared to determine the time at which the apoapsis of a planet crosses the periapsis of the exterior adjacent planet. We define this as the \emph{crossing time} and denote it $t_c$. Moreover, at each step, the mutual separations of each of the planets are calculated so that collisions can be detected. The metric of two planets coming within $2 \textrm{R}_\oplus$ of one another is used for collision detection. We define the time at which this occurs as the \emph{impact time} and denote it $t_i$. We also define the \emph{post-crossing survival time}, $t_s$, of a system to be the time that the system persists without a collision after the point of orbital crossing: $$t_s \equiv t_i - t_c.$$ All encounters closer than any experienced previously are recorded such that it is possible to generalise the collision results to systems with planetary radii greater than that of the Earth or, equivalently, initial orbital radii closer than $1~\textrm{AU}$. \rev{We use this generalisation to consider systems at $0.25$~AU and $1$~AU for all integration suites.} We also define the time of closest encounter prior to collision as the \emph{closest encounter time}, $t_e$. To ensure bit-wise identical initial conditions as in \citet{Lissauer2021}, initial conditions are specified as orbital elements which are then entered into the MERCURY \citep{Chambers1999} integration package in order to generate an initial state vector which is then provided to either TES or REBOUND. Table~\ref{tab: time symbols} contains a summary of all symbols related to simulation event times.
\begin{table} \centering \caption{Summary of all simulation event time symbols used.} \label{tab: time symbols} \begin{tabular}{lc} \hline Symbol & Definition \\ \hline $t_c$ & crossing time \\ $t_i$ & impact time \\ $t_s$ & post-crossing survival time \\ $t_e$ & closest encounter time \\ \hline \end{tabular} \end{table} \subsection{Standard integration suite} \label{sec: standard suite} The first suite of integrations is composed of $4,835$ orbital configurations and is termed our standard suite. In this suite systems are on initially circular, co-planar orbits with an initial mean anomaly for the $j^{\textrm{th}}$ planet of $M_j = 2 \pi j \lambda$ radians, where $\lambda \equiv \dfrac{1}{2}\left( 1+ \sqrt{5}\right)$, i.e., the golden ratio; these values are chosen merely to avoid special orientations. As we wish to study the effects of the initial spacing of planets upon impact timescales we choose a high resolution in $\beta$ such that there are $1 \times 10^{3}$ integrations per unit $\beta$ over the range $\beta = [3.\rev{46}5, 8.3]$. Generally, integrations are terminated after $10^\rev{9}$ orbits if a collision is not encountered. However, in certain areas we have chosen to \rev{limit} integrations to $10^\rev{8}$ orbits \rev{to save on computation}; these regions are clearly marked on any plots. \subsection{Perturbed integration suite} \label{sec: perturbed suite} The second integration suite is termed our perturbed suite and is also composed of $4,835$ integrations. The only difference between the initial conditions of the standard suite and the perturbed suite is that in the latter case the innermost planet is perturbed by $100~\textrm{m}$ along its orbital arc. We strictly terminate integration at $1 \times 10^{8}$ orbital period\rev{s} of the innermost planet in this suite. This suite is used to examine the effects of very small changes in initial conditions upon crossing and impact time.
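The $\beta$ sampling and the $100$~m perturbation can be made explicit with a little arithmetic. For a circular orbit, a shift of $\Delta s$ along the orbital arc corresponds to a mean anomaly offset of $\Delta M = \Delta s / a$; the sketch below (our own illustration, assuming the perturbation is applied as such an offset) quantifies both.

```python
AU_IN_M = 1.495978707e11  # metres per astronomical unit

# Standard suite: 1000 integrations per unit beta over [3.465, 8.3],
# giving 4,835 orbital configurations in total.
n_runs = 4835
betas = [3.465 + k * 1.0e-3 for k in range(n_runs)]

# Perturbed suite: the innermost planet is shifted 100 m along its arc,
# equivalent to a mean anomaly offset for a circular orbit at 1 AU.
delta_M = 100.0 / AU_IN_M  # radians, roughly 6.7e-10
```

The tiny size of $\Delta M \sim 10^{-9}$ radians underlines how sensitive the crossing and impact times examined later are to initial conditions.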
\subsection{Inclined integration suite} \label{sec: inclined suite} The final integration suite is the inclined suite and is composed of \rev{$16,800$} integrations. \rev{Of these, $15$ did not complete in the available CPU time and have been excluded from the dataset. This is equivalent to $0.09\%$ of inclined integrations, and we therefore do not believe this will have biased the dataset in any meaningful way.} We choose initial conditions across a subset of the available parameter space manually rather than randomly and perform integrations for a maximum simulation time of $1 \times 10^{8}$ orbital periods of the innermost planet. To make best use of computational resources we limit this study to the range $\beta = [3.5, 6.3]$ and perform experiments uniformly spaced in $\beta$ with fifty values per unit $\beta$. At each value of $\beta$ we perform one hundred and twenty experiments where the initial values of semi-major axis, eccentricity and mean longitude are the same as in the standard suite. Planets are, however, inclined relative to each other in one of four ways: one of inner, middle or outer planet inclined above the orbital plane of the system, and also with the middle planet above and the outer planet below. For each such configuration of relative inclination fifteen initial values of inclination are logarithmically spaced between $i_0 = 0.06^\circ$ and $i_0 = 0.58^\circ$, yielding an initial orbital height ranging from $0.10 ~ r_H$ to $r_H$. The distribution of initial inclinations within this range is such that ten values are used between $i_0 = 0.24^\circ$ and $i_0 = 0.58^\circ$ and five values are used over the region $i_0 = 0.06^\circ$ and $i_0 = 0.24^\circ$. Finally, two values are chosen for the ascending nodes $\Omega$: either according to the golden ratio in Section \ref{sec: simulations} such that $M_j = \Omega_j$ or equally spaced such that $\Omega_j = [0^\circ, 120^\circ, 240^\circ]$. 
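The quoted orbital heights are consistent with measuring the vertical excursion $a \sin i_0$ against the single-planet Hill radius $a\,(m/3m_0)^{1/3}$; the sketch below checks both endpoints under that assumption. The piecewise log spacing of the grid is our own illustration, as the exact grid values are not listed.

```python
import math

M_EARTH = 3.0035e-6  # approximate Earth mass in solar masses (assumption)

def orbital_height_in_hill_radii(i_deg, a=1.0, m=M_EARTH, m_star=1.0):
    """Vertical excursion a*sin(i) in units of the single-planet
    Hill radius r_H = a * (m / (3 m_star))**(1/3)."""
    r_hill = a * (m / (3.0 * m_star)) ** (1.0 / 3.0)
    return a * math.sin(math.radians(i_deg)) / r_hill

def log_spaced(lo, hi, n, endpoint=True):
    """n logarithmically spaced values from lo towards hi."""
    m = n - 1 if endpoint else n
    return [lo * (hi / lo) ** (k / m) for k in range(n)]

# Fifteen inclinations: five over [0.06, 0.24) deg, ten over [0.24, 0.58] deg.
incs = log_spaced(0.06, 0.24, 5, endpoint=False) + log_spaced(0.24, 0.58, 10)
```

Evaluating the endpoints gives heights of roughly $0.10\,r_H$ at $i_0 = 0.06^\circ$ and $r_H$ at $i_0 = 0.58^\circ$, matching the range quoted above.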
The full state vector of each simulation is output to file once every ten thousand orbital periods; additionally, each planetary flyby closer than any other previously observed is also recorded. \begin{figure} \centering \includegraphics[width=0.475\textwidth]{images/ti_before_tc_cumsum.pdf} \caption{C\rev{umulative sum of integrations with a collision before orbital crossing} for various initial values of the semi-major axis of the innermost planet. The flat region between $\beta = 7$ and $\beta = 8$ is due to systems not experiencing an orbital crossing within the maximum simulation time \rev{ in that region and integrations being terminated early (see the red triangles in Figure~\ref{fig: orbital crossing at 1AU}). A large fraction of integrations with an initial spacing $\beta > 6.3$ were stopped at $10^8$ orbits, so results beyond this value cannot be considered to have been drawn from a uniform sample.}} \label{fig: cumsum ti before tc} \end{figure} \begin{figure*} \centering \includegraphics[width=0.975\textwidth]{images/beta_vs_ts_dashed.pdf} \caption{Post-crossing survival time of systems initially at $1$ AU against $\beta$. Blue dots indicate the same pair both crossed orbits and collided; orange indicates the pair that collided was not the pair that crossed; green indicates a collision between the inner and outer planets. The $t_s$ model (\rev{bold} dashed black) is fitted to all data points \rev{with a survival time greater than two orbits}. The insets show the planet separation for the marked systems between crossing time\rev{, $t_c$,} (dashed orange) and collision time\rev{, $t_i$,} (dashed green).
Additionally, the Hill radius at $1$ AU is shown (dashed red).} \label{fig: survival time against beta with satellite images} \end{figure*} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{images/survival_distributions.pdf} \caption{Normalised histograms of \rev{post-crossing survival time,} log($t_s$)\rev{, for different regions of initial spacing, $\beta$}. The top row of plots is for systems initially at $1$ AU while the bottom one is at $0.25$ AU. Log-skew-normal probability density functions, shown in orange, are fitted to the data through a maximum likelihood estimator. The mean $\mu$, standard deviation $\sigma$, and the skew $\zeta$ are included for each distribution as $\mathcal{N}(\mu, \sigma, \zeta)$. Systems that did not experience a crossing were excluded from these distributions.} \label{fig: survival distributions} \end{figure*} \section{Standard integration suite} \label{sec: standard suite results} This section contains results from the standard integration suite described in section \ref{sec: standard suite}. \rev{Additionally, the results of the perturbed integration suite, described in section \ref{sec: perturbed suite}, are analysed in subsection \ref{sec: sensitivity to initial conditions}.} \subsection{Timescale to planet-planet collision} \label{sec: time scale to planet planet collision} The crossing and impact times for the standard suite are plotted in Fig.~\ref{fig: orbital crossing at 1AU}. Inspection of the crossing time with respect to the initial orbital spacing shows the clear upwards trend present in other works \citep{Smith2009,Obertas2017,Hussain2020,Tamayo2020,Lissauer2021}. We also capture the large scale variations about the trend which for the most part are a result of mean motion resonances as discussed in \citet{Obertas2017}. 
Additionally, we replicate the finding of \citet{Lissauer2021} in the discovery of a highly stable configuration around $\beta=5.74$ which they attribute to the distance of this configuration from any strong resonances. Throughout this work we used a linear logarithmic fit of the form \begin{equation} \textrm{log}\rev{_{10}} \left(t \right) = b\rev{'} \beta' + c\rev{'} \label{eq: model} \end{equation} in several places, where $\beta' = \beta-2\sqrt{3}$ is used to reduce the dependency of \rev{the y-intercept} upon the slope. We fit this model to three datasets, taking $t = t_c$, $t_i$ or $t_s$, and state explicitly which is used in each case. \rev{Unless otherwise stated, w}e only include data points in the region $\beta = [3.465, 6.3]$ in the fits to avoid biasing the results due to systems that did not experience an orbital crossing within the maximum simulation time. For $t=t_c$, over this region, we find that $b\rev{'} = 1.352$ and $c\rev{'} = 2.067$ which is in strong agreement with \citet{Lissauer2021} and confirms the functionality of the TES tool. For impact times $t_i$ we find $b\rev{'} = 1.192$ and $c\rev{'} = 2.42$. Figure \ref{fig: orbital crossing at 1AU} highlights that the post-crossing survival time is very small compared to the crossing time for the majority of systems observed. The log scale of the plot and the relatively small magnitude of $t_s$ means the bulk of the impact time data points are hidden in this figure. The only exception is in the region of small $\beta$ where the ratio $\nicefrac{t_i}{t_c}$ is large due to the relatively small size of $t_c$. Finally, it can be seen that for a small subset of integrations collisions can occur before an orbital crossing has taken place.
A cumulative sum showing the number of occurrences is shown in Fig.~\ref{fig: cumsum ti before tc} where we believe that the increase between systems at $1~$AU and $0.25~$AU is not dependent purely on the physical cross-sectional area of planets but rather on the enhanced cross-sectional area due to gravitational focusing \citep{Safronov1972}. It is likely that a symplectic integrator, configured to use the standard step size of $\nicefrac{1}{20}$ of the smallest dynamical period, would miss these collisions. However, given the small number of occurrences relative to the number of integrations typically performed in stability studies, it is unlikely that these missed collisions will have biased the datasets in any statistically meaningful way. \begin{figure} \includegraphics[width=0.49\textwidth]{images/tc_vs_ts.pdf} \caption{Post-crossing survival time\rev{, $t_s$, against orbital crossing time, $t_c$, for} systems initially at $1$ AU. Blue dots indicate the same pair both crossed orbits and collided; orange indicates the pair that collided was not the pair that crossed; green indicates a collision between the inner and outer planets. The $t_s$ model (dashed black) is fitted to all data points \rev{with a survival time greater than two orbits}.} \label{fig: survival times} \end{figure} \begin{table} \centering \caption{\rev{Fitted model coefficients for $t_s$ against $\beta$ and $t_c$. Plotted models are fitted only to the long-lived population, \emph{long}; fitted models for the full dataset, \emph{all}, are included as well. $PCC$ is the Pearson correlation coefficient.
$\sigma$ is the standard deviation of the dataset from the fitted model.}} \label{tab: model coefficients} \begin{tabular}{llcccccc} \hline $t_s$ model & dataset & $b$ & $c$ & $b'$ & $c'$ & $PCC$ & $\sigma$ \\ \hline \multirow{2}{*}{Figure~\ref{fig: survival time against beta with satellite images}} & long & $-$ & $-$ & $0.111$ & $2.84$ & $0.197$ & $0.680$ \\ & all & $-$ & $-$ & $0.165$ & $2.496$ & $0.176$ & $1.13$ \\ \multirow{2}{*}{Figure~\ref{fig: survival times}} & long & $0.0781$ & $2.693$ & $-$ & $-$ & $0.183$ & $0.682$ \\ & all & $0.118$ & $2.27$ & $-$ & $-$ & $0.167$ & $1.13$ \\ \hline \end{tabular} \end{table} Figure~\ref{fig: survival time against beta with satellite images} shows the post-crossing survival time for all systems within the standard suite against $\beta$; Figure~\ref{fig: survival times} is identical but plotted against $t_c$. We find two main populations of post-crossing survival times present: those surviving for less than two orbits, and those surviving for more than ten orbits with very few outliers in between. Within the long-surviving population, it can be seen that there is a clear increase in the post-crossing survival time of systems with respect to both $\beta$ and $t_c$. \rev{We fit models of the form of Eq.~\eqref{eq: model} to both the long-lived population and the population in its entirety; we call these datasets \emph{long} and \emph{all}, respectively. The model coefficients $b'$ and $c'$ can be found in the top two rows of Table~\ref{tab: model coefficients}. Similarly, we fit linear models to the two datasets present in Fig.~\ref{fig: survival times} for $\mathrm{log}_{10}(t_s)$ against $\mathrm{log}_{10}(t_c)$. The model coefficients $b$ and $c$ can be found in the bottom two rows of Table~\ref{tab: model coefficients}. In all cases, we calculate the Pearson correlation coefficient (PCC) and also calculate the standard deviation, $\sigma$, of the data minus the fitted model, e.g.
$\sigma(\mathrm{log}_{10}(t_s)-(b'\beta'+c'))$. } Clearly, there is a tendency for systems to persist for longer after an orbital crossing when the initial mutual spacing between them is greater, with a difference of a factor of three in median post-crossing survival time over the entire $\beta$ range. However, even given this increase, the post-crossing survival time for systems simulated did not ever exceed one million orbits. Given that this represents roughly one ten-thousandth of the main sequence lifetime of \rev{solar-mass} stars it is possible, although very unlikely, that we could observe a \rev{compact} exoplanet system that has undergone an orbital crossing but has not yet experienced a collision between planets, even if it were a truly co-planar system. In the case of the short\rev{-}lived population, there is a further subdivision of different behaviours: those systems that experience a collision almost immediately following a crossing, e.g. those for which $t_s < 10^{-1}$, and those which persist for longer than this but less than a couple of orbits. In the former case, we have observed that the trajectories of two planets about the star simply cross, leading to straightforward collisions, and also triggering an orbital crossing in the process. However, in the latter case, we find that the trajectories of the planets about the star are such that a very close encounter occurs which causes the two planets to become temporarily gravitationally captured. These two planets then remain within approximately a Hill radius of one another before finally experiencing a fatal collision a fraction of an orbit later. These behaviours are shown in the satellite images in Fig.~\ref{fig: survival time against beta with satellite images}. It can be seen here that temporary gravitational capture is not the cause of collision in the case of the outliers with a post-crossing survival time between two and ten orbits.
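Fits of the form of Eq.~\eqref{eq: model} reduce to linear least squares in log space. A minimal numpy sketch is shown below on synthetic stand-in data (generated here, for illustration only, from the \emph{long}-dataset coefficients reported in Table~\ref{tab: model coefficients}); in the real analysis the arrays would hold the measured $\beta$ and $t_s$ values.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for measured initial spacings and survival times.
beta = rng.uniform(3.465, 6.3, size=2000)
log_ts = (0.111 * (beta - 2.0 * math.sqrt(3.0)) + 2.84
          + rng.normal(0.0, 0.68, size=2000))

# Shifting to beta' = beta - 2*sqrt(3) reduces the covariance between
# the fitted slope and intercept.
beta_prime = beta - 2.0 * math.sqrt(3.0)
b_prime, c_prime = np.polyfit(beta_prime, log_ts, 1)

pcc = np.corrcoef(beta_prime, log_ts)[0, 1]
resid_sigma = np.std(log_ts - (b_prime * beta_prime + c_prime))
```

The same recipe applies to the $\mathrm{log}_{10}(t_s)$ against $\mathrm{log}_{10}(t_c)$ fits, simply swapping the abscissa.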
To give consideration to whether these results generalise to other systems of planets we have calculated $t_i$ and $t_s$ for a system with the inner planet initially placed at $0.25$ AU. \rev{This is equivalent to artificially inflating the radius of all planets in systems at $1$~AU by a factor of four. When thought of this way, this is akin to placing planets} with a radius approximately the same as that of Neptune at $1$ AU; $t_c$ is invariant to the initial location of the inner planet. The probability\rev{, calculated as the cumulative fraction of systems that have experienced collisions over the total number of systems,} of collision over time for both settings is shown in Fig.~\ref{fig: collision probabilities}. The separation between dashed and solid lines indicates that a given collision probability is reached sooner in systems composed of planets with a larger radius. The difference in time remains about constant over all values of $\beta$, even if the log scale suggests otherwise. \begin{figure} \includegraphics[width=0.495\textwidth]{images/collision_probability_one_AU.pdf} \caption{Probability of having experienced a collision over time for various regions of \rev{initial spacing,} $\beta$. \rev{The probability is calculated as the cumulative fraction of systems that have experienced collisions over the total number of systems.} Solid lines show the probabilities for systems initially at $1$ AU while the dashed lines are initially at $0.25$ AU.} \label{fig: collision probabilities} \end{figure} Figure \ref{fig: survival distributions} contains \rev{normalised histograms} of $t_s$ within different regions of $\beta$ for systems with the inner planet initially at $1$ AU and $0.25$ AU. We find that the distribution of post-crossing survival times follows a log-skew-normal distribution across all systems; we confirmed this using a Kolmogorov-Smirnov test with a significance level of $\alpha=0.005$.
The skew-normal distribution is a generalisation of the normal distribution that allows the class to be extended to include distributions with non-zero skewness through the addition of a shape parameter \citep{Azzalini1999}. Log-skew-normal probability density functions, shown in orange, are fitted to the data through a maximum likelihood estimator. We calculated the mean $\mu$, standard deviation $\sigma$, and the skew $\zeta$ for each distribution; we use the Fisher-Pearson coefficient of skewness throughout. We find that $\mu$ increases with increasing $\beta$ range, and also find the same pattern for $\sigma$ in all but one case. In all cases, $\zeta$ is negative indicating a skew towards shorter post-crossing survival times as compared to a normal distribution. \rev{This means that there is a preference for systems to collide sooner rather than later after an orbital crossing as compared to the most frequent survival times. There is a slow build-up in the number of systems experiencing collisions over time after an orbital crossing but a much sharper cut-off after the peak density of collisions. This highlights the difficulty for systems to persist for long timescales after an orbital crossing in the co-planar case.} Systems with a shorter mean post-crossing survival time show a skew of a smaller magnitude than those with a longer survival time, e.g. at $1$ AU $\zeta=-0.14$ for $\beta < 4.0$ whereas for $\beta \geq 4.0$ the smallest, in magnitude, value observed is $\zeta=-0.57$. We find that the distributions of post-crossing survival times at $0.25$ AU are less skewed than those at $1$ AU, indicating that the survival times of systems in this case are closer to a log-normal distribution.
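A fit of this kind can be reproduced with scipy's skew-normal implementation. The sketch below uses synthetic draws standing in for the measured $\mathrm{log}_{10}(t_s)$ values of one $\beta$ bin; the generating parameters are arbitrary and scipy availability is assumed.

```python
import numpy as np
from scipy import stats

# Synthetic log10 survival times from a left-skewed skew-normal,
# standing in for the measured log10(t_s) of one beta bin.
log_ts = stats.skewnorm.rvs(-3.0, loc=3.0, scale=1.0, size=2000,
                            random_state=1)

# Maximum-likelihood fit of the skew-normal shape, location and scale.
shape, loc, scale = stats.skewnorm.fit(log_ts)

mu = np.mean(log_ts)
sigma = np.std(log_ts)
zeta = stats.skew(log_ts)  # Fisher-Pearson coefficient of skewness
```

A negative fitted shape parameter and a negative $\zeta$ correspond to the leftward skew reported above, i.e. a tail of systems colliding sooner after crossing than the most frequent survival time.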
\begin{table*} \centering \caption{Comparison of \emph{crossing times} of systems using identical values of \rev{initial spacing, $\beta$, in mutual Hill radii} for the standard and perturbed initial longitudes.} \label{tab:crossing table} \begin{tabular}{lcccccc} \hline Interval: & [$3.465$, $3.999$] & [$4.0$, $4.999$] & [$5.0$, $5.999$] & [$6.0$, $6.33$] & \hspace{0.3cm} & [$3.46\rev{5}$, $6.33$]\\ \hline number of runs in the range & $535$ & $1000$ & $1000$ & $331$ && $2866$ \\ $\langle \log_{10} t_c\textrm{(standard)} - \log_{10} t_c\textrm{(perturbed)} \rangle$ & $0.006$ & $-0.001$ & $-0.011$ & $-0.014$ && $-0.004$ \\ $\langle |\log_{10} t_c\textrm{(standard)} - \log_{10} t_c\textrm{(perturbed)}| \rangle$ & $0.039$ & $0.182$ & $0.306$ & $0.356$ && $0.219$ \\ $t_c\textrm{(perturbed)} < 0.5\, t_c\textrm{(standard)}$ & $7$ ($1.31\%$) & $92$ ($9.20\%$) & $200$ ($20.00\%$) & $75$ ($22.66\%$) && $374$ ($13.05\%$) \\ $0.5\, t_c\textrm{(standard)} < t_c\textrm{(perturbed)} < 2\, t_c\textrm{(standard)}$ & $524$ ($97.94\%$) & $812$ ($81.20\%$) & $580$ ($58.00\%$) & $173$ ($52.27\%$) && $2089$ ($72.89\%$) \\ $t_c\textrm{(standard)} < 0.5\, t_c\textrm{(perturbed)}$ & $4$ ($0.75\%$) & $96$ ($9.60\%$) & $220$ ($22.00\%$) & $83$ ($25.08\%$) && $403$ ($14.06\%$) \\ within 10\% of standard systems & $398$ ($74.39\%$) & $217$ ($21.70\%$) & $100$ ($10.00\%$) & $27$ ($8.16\%$) && $742$ ($25.89\%$) \\ within 1\% of standard systems & $333$ ($62.24\%$) & $68$ ($6.80\%$) & $10$ ($1.00\%$) & $7$ ($2.11\%$) && $418$ ($14.58\%$) \\ \hline \end{tabular} \end{table*} \begin{table*} \centering \caption{Comparison of \emph{collision times} of systems using identical values of \rev{initial spacing, $\beta$, in mutual Hill radii} for the standard and perturbed initial longitudes both with the innermost planet initially at \emph{$1$ AU}.} \label{tab:collision table} \begin{tabular}{lcccccc} \hline Interval: & [$3.465$, $3.999$] & [$4.0$, $4.999$] & [$5.0$, $5.999$] & [$6.0$, $6.33$] & \hspace{0.3cm} & [$3.46\rev{5}$, $6.33$]\\ \hline number of runs in the range & $535$ & $1000$ & $1000$ & $331$ && $2866$ \\ $\langle \log_{10} t_i\textrm{(standard)} - \log_{10} t_i\textrm{(perturbed)} \rangle$ & $0.015$ & $-0.012$ & $-0.010$ & $-0.012$ && $-0.006$ \\ $\langle |\log_{10} t_i\textrm{(standard)} - \log_{10} t_i\textrm{(perturbed)}| \rangle$ & $0.429$ & $0.302$ & $0.294$ & $0.349$ && $0.328$ \\ $t_i\textrm{(perturbed)} < 0.5\, t_i\textrm{(standard)}$ & $145$ ($27.10\%$) & $189$ ($18.90\%$) & $185$ ($18.50\%$) & $76$ ($22.96\%$) && $595$ ($20.76\%$) \\ $0.5\, t_i\textrm{(standard)} < t_i\textrm{(perturbed)} < 2\, t_i\textrm{(standard)}$ & $260$ ($48.60\%$) & $614$ ($61.40\%$) & $598$ ($59.80\%$) & $175$ ($52.87\%$) && $1647$ ($57.47\%$) \\ $t_i\textrm{(standard)} < 0.5\, t_i\textrm{(perturbed)}$ & $130$ ($24.30\%$) & $197$ ($19.70\%$) & $217$ ($21.70\%$) & $80$ ($24.17\%$) && $624$ ($21.77\%$) \\ within 10\% of standard systems & $108$ ($20.19\%$) & $110$ ($11.00\%$) & $99$ ($9.90\%$) & $25$ ($7.55\%$) && $342$ ($11.93\%$) \\ within 1\% of standard systems & $79$ ($14.77\%$) & $17$ ($1.70\%$) & $8$ ($0.80\%$) & $5$ ($1.51\%$) && $109$ ($3.80\%$) \\ \hline \end{tabular} \end{table*} \begin{table*} \centering \caption{Comparison of \emph{collision times} of systems using identical values of \rev{initial spacing, $\beta$, in mutual Hill radii} for the standard and perturbed initial longitudes both with the innermost planet initially at \emph{$0.25$ AU}.} \label{tab: collision table 0.25 au} \begin{tabular}{lcccccc} \hline Interval: & [$3.465$, $3.999$] & [$4.0$, $4.999$] & [$5.0$, $5.999$] & [$6.0$, $6.33$] & \hspace{0.3cm} & [$3.46\rev{5}$, $6.33$]\\ \hline number of runs in the range & $535$ & $1000$ & $1000$ & $331$ && $2866$ \\ $\langle \log_{10} t_i\textrm{(standard)} - \log_{10} t_i\textrm{(perturbed)} \rangle$ & $-0.005$ & $-0.010$ & $-0.010$ & $-0.011$ && $-0.009$ \\ $\langle |\log_{10} t_i\textrm{(standard)} - \log_{10} t_i\textrm{(perturbed)}| \rangle$ & $0.297$ & $0.243$ & $0.301$ & $0.353$ && $0.286$ \\ $t_i\textrm{(perturbed)} < 0.5\, t_i\textrm{(standard)}$ & $98$ ($18.32\%$) & $141$ ($14.10\%$) & $187$ ($18.70\%$) & $75$ ($22.66\%$) && $501$ ($17.48\%$) \\ $0.5\, t_i\textrm{(standard)} < t_i\textrm{(perturbed)} < 2\, t_i\textrm{(standard)}$ & $335$ ($62.62\%$) & $701$ ($70.10\%$) & $592$ ($59.20\%$) & $176$ ($53.17\%$) && $1804$ ($62.94\%$) \\ $t_i\textrm{(standard)} < 0.5\, t_i\textrm{(perturbed)}$ & $102$ ($19.07\%$) & $158$ ($15.80\%$) & $221$ ($22.10\%$) & $80$ ($24.17\%$) && $561$ ($19.57\%$) \\ within 10\% of standard systems & $142$ ($26.54\%$) & $130$ ($13.00\%$) & $99$ ($9.90\%$) & $25$ ($7.55\%$) && $396$ ($13.82\%$) \\ within 1\% of standard systems & $108$ ($20.19\%$) & $19$ ($1.90\%$) & $10$ ($1.00\%$) & $7$ ($2.11\%$) && $144$ ($5.02\%$) \\ \hline \end{tabular} \end{table*} \subsection{Sensitivity to initial conditions} \label{sec: sensitivity to initial conditions} To examine the sensitivity to initial conditions of the results of our simulations we use our perturbed suite of integrations described in section \ref{sec: perturbed suite}. The crossing and collision times of each integration between the standard suite and the perturbed suite are compared to determine the effect of the perturbation. Table \ref{tab:crossing table} contains the results of that comparison for the \emph{time of orbital crossing}. Tables \ref{tab:collision table} and \ref{tab: collision table 0.25 au} contain the same comparison but for the \emph{impact time} of systems at $1$ AU and $0.25$ AU, respectively. In general, the comparison between crossing times in Table \ref{tab:crossing table} aligns closely with \cite{Lissauer2021}. Percentages between the two studies rarely differ by more than a few points despite the different integration tools used: TES and MERCURY. One notable difference between the two studies is in the \rev{initially wider spaced} systems. In the regions $\beta = [5.0, 5.999]$ and $\beta = [6.0, 6.33]$ we find roughly double the number of initial orbital spacings where the standard and perturbed suite integrations experience orbital crossing times within $10\%$ of one another.
Given the precise orbital evolution required in order for standard and perturbed suite systems to experience a crossing at the same time, it is unlikely that numerical error would ever cause an increase in this statistic. We therefore take this as an indication that TES has maintained a higher precision than the symplectic Wisdom-Holman \citep{Wisdom1991} scheme within MERCURY. To further validate TES in this setting, we have also repeated the standard suite integrations with \rev{IAS15} from the REBOUND package for comparison. We find very good agreement in results between the two routines. Detailed results from this experiment are included in Appendix \ref{appendix: integrator comparison}. The right-most summary column for the full range of $\beta = [3.465, 6.33]$ in Table \ref{tab:collision table} shows there is a marked decrease in the number of collisions occurring within a factor of two, and within ten and one percent of one another, as compared to the orbital crossing times in Table \ref{tab:crossing table}. The largest reduction is seen in the within\rev{-}a\rev{-}factor\rev{-}of\rev{-}two row, where a reduction of over $15$ \rev{percentage points} highlights the sensitivity to close approaches in this setting. The majority of this difference in collision times is seen in the \rev{initially closely spaced} systems, where a reduction of \rev{almost} $50$ \rev{percentage points} can be seen for integrations finishing within $10$\% of one another. However, once the crossing times exceed approximately $1 \times 10^4$ orbits at $\beta = 5$, the effect of the perturbation disappears and values between crossing and collision times for the two data sets converge.

\begin{figure}
\centering
\includegraphics[width=0.475\textwidth]{images/collisions_bar_plot_quarter_au.pdf}
\caption{Probability of collision per pair of planets, broken down by the pair of orbits that initially crossed and \rev{initial spacing,} $\beta$, range.
\rev{Probability is calculated as the fraction of collisions between a given pair of planets over the total number of collisions.} The top pane\rev{l} is for systems initially at $1$ AU while the bottom pane\rev{l} is initially at $0.25$ AU. Inner and outer refer to the innermost and outermost pairs of planets, respectively. Extrema refers to the pair comprising the innermost and outermost planets.}
\label{fig: proability collision from crossing 1 AU}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=0.475\textwidth]{images/pairs_colliding_distribution.pdf}
\caption{\rev{Post-crossing survival time} distribution of collisions between different pairs of planets. Blue bars indicate the same pair both crossed orbits first and collided; orange indicates that the pair that collided was the other neighbouring pair; green indicates a collision between the inner and outer planets. The top pane\rev{l} is initially at $1$ AU while the bottom pane\rev{l} is initially at $0.25$ AU.}
\label{fig: pairs involved in collision distribution}
\end{figure}

\subsection{Which planets collide?}
\label{sec: which planets collide}

We find a slight asymmetry in the prevalence of orbital crossings \rev{in our standard integration suite}, \rev{with the innermost pair triggering $48\%$ of crossings compared to $52\%$ for the outermost pair.} These percentages were calculated using $n=4,835$ integrations, and the expected stochastic variation about the mean, i.e. $50\%$, is therefore approximately $\rev{0.72}\%$ \citep{Dobrovolskis2007}. In the following, we designate the specific pair of planets that collide as the \emph{collision pair}, and analogously we refer to the pair of planets that experienced an orbital crossing as the \emph{crossing pair}. We find that across all values of $\beta$ a collision between two planets is almost twice as likely if the same two planets were also involved in the orbital crossing.
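The quoted ${\sim}0.72\%$ scatter follows from simple binomial counting statistics: for an expected fair split ($p=0.5$) over $n=4835$ crossings, the one-sigma scatter of the observed fraction is $\sqrt{p(1-p)/n}$. A quick check (the function name is illustrative):

```python
import math

def stochastic_variation(p, n):
    """One-sigma scatter of a binomial proportion: sqrt(p(1-p)/n)."""
    return math.sqrt(p * (1.0 - p) / n)

# n = 4835 crossings with an expected 50/50 split between the innermost
# and outermost pairs gives a scatter of roughly 0.72 percentage points
sigma = stochastic_variation(0.5, 4835)
```

The observed 48/52 split is therefore a few sigma from an exactly even split, consistent with calling it a slight asymmetry.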
Figure \ref{fig: proability collision from crossing 1 AU} highlights clearly that this is the case, with between $48$\% and $62$\% of collision events occurring between the crossing pair \rev{for systems at $1$~AU depending on the initial orbital spacing}. Moreover, these percentages appear to be invariant as to whether the inner or outer pair was involved in the orbital crossing. A clear trend can be seen with respect to $\beta$, where an increase in the initial orbital spacing between planets leads to an increased probability of collision between the crossing pair. Figures \ref{fig: survival time against beta with satellite images} and \ref{fig: survival times} are coloured based on the collision and crossing pair for each system. As first crossings can only ever occur between neighbouring planets, it is possible to use only three colours for this: blue for \emph{same-pair systems}, whereby the same pair was involved in both the first orbit crossing and the collision; orange for \emph{other-neighbouring-pair systems}, to indicate that the colliding pair was the neighbouring pair that did not first cross; and green for \emph{extrema-pair systems}, to indicate a collision between the inner and outer planets. Across the whole range of $\beta$ it can be seen that for systems with a $t_s$ below the $t_s$ model fit line, collisions are predominantly between the first crossing pair. Figure \ref{fig: pairs involved in collision distribution} shows how these three combinations of events are distributed over time. Collisions that take place within a single orbit of orbit crossing are almost exclusively found in same-pair systems, due either to simple immediate collisions or to the temporary gravitational binding of planets discussed previously. Same-pair collisions are the most likely outcome for all systems at $1$ AU, shown in the top panel, until $t_s \approx 10^4$ orbits, followed by other-\rev{neighbouring-}pair systems, with extrema-pair systems being the least likely.
However, after this period the probability of collision between any combination of planets becomes almost identical, indicating that the mixing of planetary orbits after crossing is sufficient to overcome the increased probability of same-pair integrations due to the initial orbital configuration. Interestingly, the peaks of other-\rev{neighbouring-}pair and extrema-pair systems do not align; instead, the former peaks first. This can be understood as the mixing process taking longer to cause the inner and outer planets' orbits to overlap than to excite the middle planet enough to cross the orbits of both of its neighbours. In the bottom panel, it can be seen that at $0.25$ AU the behaviour is similar; however, the number of collisions taking place within a single orbit roughly doubles.

\section{Inclined Integration Suite}
\label{sec: inclined integration suite}

In the co-planar case, no systems survived for more than a million orbits after the first orbital crossing. However, \citet{Rice2018} observed a number of non co-planar systems that survived for their maximum simulation time of ten million orbits. Therefore, we now go on to examine the behaviour in the non co-planar case described by the inclined suite of initial conditions in section \ref{sec: inclined suite}. As a reminder, these initial conditions include fifteen initial inclinations ranging from an initial orbital height of $0.10 ~ r_H$ to $r_H$.

\begin{figure}
\centering
\includegraphics[width=0.475\textwidth]{images/dynamic_heating.pdf}
\caption{Inclination and eccentricity growth for individual systems from the inclined suite with $\beta = 5.98$. Only eighty configurations are included to aid clarity. Systems are shown in purple until they experience an orbital crossing and in grey thereafter. \rev{The RMS inclination and eccentricity values for all systems that have experienced an orbital crossing are shown (dashed blue).
A linear model fitted to the mean of all systems that have experienced an orbital crossing is also shown (solid green).}}
\label{fig: rms eccenticity inclination}
\end{figure}

\subsection{Dynamic heating}
\label{sec: dynamic heating}

The systems studied in the inclined integration suite begin with modest inclinations and no eccentricities, making them dynamically cold. Figure \ref{fig: rms eccenticity inclination} shows how the system heats up over time by plotting the root-mean-square (RMS) inclination and eccentricity over time. We calculate the mean over all runs that have experienced an orbital crossing, and fit a linear model to this mean, \rev{which is shown as the solid green line}. Individual integrations are shown in purple until they experience an orbital crossing and in grey thereafter. For clarity, in Fig.~\ref{fig: rms eccenticity inclination} results of individual integrations are only shown for eighty integrations in the inclined suite for $\beta = 5.98$. \citet{Rice2018} found that, for four-planet Neptune-mass systems, there are two distinct growth modes of RMS eccentricity before and after an instability event: eccentricity evolves rapidly to a quasi-equilibrium at a value of ${\sim}10^{-2}$, at which point encounters begin. After a period of mixing as a result of close approaches, systems transition into a new evolutionary phase during which eccentricity growth follows a power-law form approximately $\propto t^{\nicefrac{1}{6}}$. In the three-planet Earth-mass case, our systems reach a quasi-equilibrium value of $e \approx 10^{-3}$ before a period of chaotic mixing and rapid growth, which finally settles into the new growth phase approximately $\propto t^{\nicefrac{1}{6}}$. The RMS inclinations in Fig.~\ref{fig: rms eccenticity inclination}, on the other hand, while similar, differ from the four-planet Neptune-mass case.
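A growth index such as the $e \propto t^{\nicefrac{1}{6}}$ quoted here is simply the slope of the RMS curve in log-log space. A minimal sketch of recovering such an index, using synthetic data rather than the actual simulation output:

```python
import numpy as np

def powerlaw_index(t, y):
    """Least-squares slope of log10(y) against log10(t);
    for y proportional to t**k this recovers k."""
    slope, _ = np.polyfit(np.log10(t), np.log10(y), 1)
    return slope

# synthetic eccentricity growth following e ~ t**(1/6),
# starting from the quasi-equilibrium value ~1e-3
t = np.logspace(4, 8, 50)
e = 1e-3 * (t / t[0]) ** (1.0 / 6.0)
k = powerlaw_index(t, e)
```

With real data one would restrict the fit to times after the chaotic mixing phase, since the pre-crossing plateau would otherwise bias the slope downwards.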
We also observe that the inclination of the systems remains at roughly the initial value until the first encounter, at which point they are rapidly excited before entering a new growth mode. This rapid excitation is in keeping with the findings of \cite{Matsumoto2017}. These behaviours can be seen in the horizontal inclination lines in the population of systems before crossing and in the power-law growth in the population afterward. \citet{Rice2018} stated that the trend towards long-lived systems depends upon only the RMS inclination being greater than the averaged ratio of Hill radius to semi-major axis; this quantity is called the critical inclination and is marked on this plot. We also find this to be the case across all systems within our inclined suite: any systems that have experienced orbital crossing and have their RMS inclination damped below this threshold rapidly experience a collision. The key difference between our simulations and the four-planet, Neptune-mass case is that the power-law growth rate appears to be $\propto t^{\nicefrac{1}{4}}$ as opposed to $\propto t^{\nicefrac{1}{3}}$. We offer two possible explanations for this: (1) our data set could be biased due to the non-random initial conditions used; or (2) there could be an underlying dependence between either the planetary mass or the number of planets within the system and the growth rate. Further investigation is needed to distinguish between these two possibilities.

\begin{figure}
\centering
\includegraphics[width=0.475\textwidth]{images/tc_inclined.pdf}
\caption{Time to orbital crossing against $\beta$ for the inclined integration suite. The minimum, maximum and mean values of the \rev{one hundred and twenty} integrations performed at each value of $\beta$ are shown.
Additionally, the $t_c$ model is fitted to the mean values.} \label{fig: tc inclined mean min max} \end{figure} \begin{figure} \centering \includegraphics[width=0.475\textwidth]{images/ts_inclined.pdf} \caption{Post-crossing survival time of inclined integration suite with systems at $1$ AU. Colours \rev{of data points represent the initial inclinations, with darker colours representing higher inclinations}. The twenty-three systems that persisted for the full $10^8$ orbits are highlighted via a red triangle, independent of their initial inclination. Note that most of these surviving systems had their initial orbital crossing in far less than $10^8$ years, so they survived for almost $10^8$ years post-crossing before the simulation was terminated and appear as triangles at the top of the plot; the two exceptions, which survived for $< 3 \times 10^7$ years, both had initial orbital separations $\beta > 5.3$. } \label{fig: ts inclined} \end{figure} \begin{figure} \centering \includegraphics[width=0.475\textwidth]{images/ti_probability_inclined.pdf} \caption{Probability of having experienced a collision over time for various regions of $\beta$ in the inclined integration suite. \rev{The probability is calculated as the cumulative fraction of systems that have experienced collisions over the total number of systems.} Solid lines show the probabilities for systems initially at $1$ AU while the dashed lines are initially at $0.25$ AU.} \label{fig: ti inclined probability} \end{figure} \begin{figure} \centering \includegraphics[width=0.475\textwidth]{images/inclined_runs_probability.pdf} \caption{Distribution of post\rev{-crossing} survival times in the inclined integration suite for systems after a close encounter. Each plot contains data from $1120$ integrations across the entire inclined $\beta$ range where $\beta = 3.5-6.3$. The upper two plots, in cyan, are for systems initially at $1~$AU and the lower two plots, in grey, are for $0.25~$AU. 
The two leftmost plots contain data for systems with the minimum initial inclination, $i_0 = 0.06^\circ$, whereas the two rightmost plots contain data for systems with the maximum initial inclination, $i_0 = 0.58^\circ$. Two systems survived for the full simulation time after an orbital crossing in the low inclination case at $1~$AU whereas one survived in the high inclination case. No systems in the $0.25~$AU case survived for the full simulation duration after an orbital crossing in any of our integrations.} \label{fig: inclined survival probabilities four panel} \end{figure} \begin{figure} \centering \includegraphics[width=0.475\textwidth]{images/inclined_initial_inc_vs_ts.pdf} \caption{\rev{Median} of the log post-crossing survival time for each value of initial inclination within the inclined suite represented by the orbital height as a fraction of the Hill radius. There are fifteen values of inclination used meaning that each data point plotted is the average of up to $1120$ integrations; \rev{the only systems excluded are those that did not experience a collision in the maximum integration time}.} \label{fig: inclined ts vs initial orbital height} \end{figure} \begin{figure} \centering \includegraphics[width=0.475\textwidth]{images/inclined_r_planet_vs_ts.pdf} \caption{Median and maximum post-crossing survival time for systems as a function of the radius of planets \rev{relative to the Hill radius at $1$~AU} for systems in the inclined integration suite \rev{at $1$~AU}. Simulation times are capped at $10^8$ orbits.} \label{fig: planet radius vs ts} \end{figure} \begin{figure} \centering \includegraphics[width=0.495\textwidth]{images/min_miss.pdf} \caption{Time between closest encounter prior to impact and impact against the distance between the surfaces of the planets involved for systems \rev{at $1$~AU} in the inclined integration suite. The post-crossing survival time of each system is indicated through colouring. 
The grey shaded area indicates impacts that are possibly due to temporary gravitational capture, which are excluded from the fitted model \rev{shown as a bold dashed black line}. \rev{The horizontal dashed black line shows the Hill radius at $1$~AU.}}
\label{fig: closest encounter}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=0.495\textwidth]{images/inc_vs_ts.pdf}
\caption{Time between closest encounter prior to impact and impact against the time\rev{-}averaged inclination range, i.e. the difference between the smallest and largest inclinations, for systems \rev{at $1$~AU} in the inclined integration suite. The closest encounter experienced by a system is indicated through colouring.}
\label{fig: inclination vs encounter time}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=0.495\textwidth]{images/ecc_vs_ts.pdf}
\caption{Time between closest encounter prior to impact and impact against the time\rev{-}averaged maximum eccentricity for systems \rev{at $1$~AU} in the inclined integration suite. The closest encounter experienced by a system is indicated through colouring.}
\label{fig: eccentricity vs encounter time}
\end{figure}

\subsection{Time scale to planet-planet collision}
\label{sec: inclined time scale to planet planet collision}

Figure \ref{fig: tc inclined mean min max} shows the crossing time for systems within our inclined suite. We find a large variance in crossing time across the inclined suite, with a difference between the maximum and minimum crossing times at each value of $\beta$ as large as two orders of magnitude in many cases. The spikes seen in Fig.~\ref{fig: orbital crossing at 1AU} are also present in some of our inclined cases. A model of the type in Eq.~\ref{eq: model} is fitted to the mean values of crossing time observed at each value of $\beta$, yielding coefficients $b' = \rev{1.39}$ and $c' = \rev{2.18}$. These values are in very good agreement with those from the standard suite.
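The coefficient fit can be sketched as an ordinary least-squares fit of $\log_{10} t_c$ against $\beta$. Here we assume the model of Eq.~\ref{eq: model} takes the common linear form $\log_{10} t_c = b'\beta + c'$, and we generate synthetic data from the quoted coefficients rather than using the real simulation output:

```python
import numpy as np

def fit_crossing_model(beta, t_cross):
    """Least-squares fit of log10(t_c) = b*beta + c, the usual linear
    relation between initial spacing and the log of the crossing time."""
    b, c = np.polyfit(beta, np.log10(t_cross), 1)
    return b, c

# synthetic crossing times built from the coefficients quoted in the text
beta = np.linspace(3.5, 6.3, 20)
tc = 10.0 ** (1.39 * beta + 2.18)
b, c = fit_crossing_model(beta, tc)
```

In practice the fit is applied to the mean crossing time at each value of $\beta$, which suppresses the roughly two-orders-of-magnitude scatter between individual runs.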
This is, however, where the similarities between the co-planar and inclined cases end. Figure \ref{fig: ts inclined} shows the post-crossing survival time for systems within the inclined suite; the times are much higher than in the co-planar case, where the longest surviving system after crossing survived for roughly one million orbits. Here, the majority of systems survive for longer than this and, in fact, there are twenty-three systems that do not experience any collision at all within the maximum simulation time (100 million orbits), equivalent to $0.14\%$ of all integrations. Given that the post-crossing survival time is now approaching one percent of the lifetime of the Sun, it is far less unlikely that we could actually observe an inclined system between a crossing and a collision. However, at $0.25$~AU no integrations survived for the full simulation duration after an orbital crossing. Figure \ref{fig: ti inclined probability} shows the probability of a collision across all integrations within the inclined suite. \rev{The probability is calculated as the cumulative fraction of systems that have experienced collisions divided by the total number of systems.} Results are included for systems initially at $1$ AU as well as at $0.25$ AU. Decreasing the initial distance to the star by this amount is identical to having artificially inflated the planetary radius $\textrm{R}_p$ by a factor of four, i.e., made $\textrm{R}_p$ approximately equal to that of Neptune, \rev{whilst keeping the innermost planet initially at $1$~AU}. It is therefore expected that the collision probability over time should increase with decreasing initial distance to the star. However, the increase is striking: for Earth analogues the probability of a collision for a given system after one million orbital periods is roughly 50\%, but for a Neptune radius (1~M$_\oplus$) planet at 1 AU that probability increases to over 75\% across all $\beta$ ranges, reaching almost 90\% in all but one range.
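The cumulative collision probability described above is straightforward to compute from a list of collision times; a minimal sketch with toy numbers (not the actual dataset), where systems that never collided are entered as infinity:

```python
import numpy as np

def collision_probability(t_collide, times):
    """Cumulative fraction of systems that have collided by each time in
    `times`; surviving systems are represented by np.inf entries."""
    tc = np.sort(np.asarray(t_collide, dtype=float))
    return np.searchsorted(tc, times, side="right") / len(tc)

# toy example: three of four systems have collided by t = 1e6 orbits
p = collision_probability([1e2, 1e4, 1e5, np.inf], [1e6])
```

Using `searchsorted` on the sorted collision times gives the cumulative count at each requested epoch without an explicit loop.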
Furthermore, for the $1$ AU systems it can be seen that the various $\beta$ regions converge after roughly a million orbits. This indicates that the evolution after the first close encounter has reconfigured the system such that any \rev{prior} collision probabilities due to initial orbital spacing are lost. To understand this, we can look at the collision probability \rev{in Figure \ref{fig: ti inclined probability} at one million orbits for $0.25$ AU systems. These systems are equivalent to a Neptune radius planet being placed at $1$~AU, and roughly $90\%$ have experienced a collision within this timescale. We can therefore infer that the same roughly $90\%$ of Earth radius planets at $1$~AU must have experienced a close encounter within $4 \textrm{R}_p$. The loss of prior collision probabilities due to orbital spacing after this point in time therefore appears to be driven by these particularly close encounters.} Figure \ref{fig: inclined survival probabilities four panel} contains the distribution of post-crossing survival times for two subsets of the inclined suite results: the subsets each contain $1120$ configurations, one at the minimum initial inclination ($0.06^\circ$) and the other at the maximum initial inclination ($0.58^\circ$). It can be seen that the distributions are different at each initial orbital radius and inclination. Firstly, the population of collisions taking place within several orbits of an orbital crossing decreases with increasing initial inclination. In both of the most highly inclined cases there is only a single peak present in the distribution; however, this distribution is much more negatively skewed in systems initially at $1$~AU. In the lowest inclination cases there are two peaks present in addition to the one caused by immediate collisions. One \rev{peak is} collocated with those found in the more inclined case. The second peak is centered at approximately $t_s = 10^{2.5}$.
In the co-planar case we have seen that the distribution of post-crossing survival times is centered at approximately $10^{2.5}$ orbits, and it is also known that if the inclination is below the critical threshold $i = r_H$ the number of collisions occurring within a factor of three of the orbital crossing increases \citep{Rice2018}. Both of these factors combined explain the appearance of this second peak. Additionally, a larger proportion of systems at $0.25$~AU experience a collision in this second peak. The effect of increased initial inclination across the whole inclined integration suite can be seen in Fig.~\ref{fig: inclined ts vs initial orbital height}, where an increase in inclination, shown here in terms of orbital height, leads to a moderate increase in the median post-crossing survival times for \rev{systems} at \rev{both} 0.25~AU \rev{and} 1~AU. The RMS inclination in compact three-body systems has been seen to stay approximately constant up until the time of the first close encounter, which means that observed inclinations of actual planetary systems could in fact provide information about the probable survival times of systems after an orbital crossing. The parameter that dominates the post-crossing survival time of systems in the inclined suite is the \rev{ratio of the planetary radius to the Hill radius at $1$~AU}. Figure \ref{fig: planet radius vs ts} shows the median of the log post-crossing survival times for all systems in the suite \rev{at $1$~AU}. We find almost two orders of magnitude difference in the average survival time of systems with planets where $\nicefrac{\textrm{R}_p}{r_H} = 0.017$ as compared to systems with planets where $\nicefrac{\textrm{R}_p}{r_H} = 0.004$. This outweighs the effect of initial inclination on the survival times.
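The two quoted $\nicefrac{\textrm{R}_p}{r_H}$ values can be recovered directly from the Hill radius of an Earth-mass planet at $1$~AU. A quick check, using standard rounded physical constants (they reproduce the quoted ratios to within rounding):

```python
AU = 1.495978707e11   # m
M_SUN = 1.98892e30    # kg
M_EARTH = 5.9722e24   # kg
R_EARTH = 6.371e6     # m
R_NEPTUNE = 2.4622e7  # m

def hill_radius(a, m, m_star=M_SUN):
    """Hill radius of a planet of mass m at semi-major axis a."""
    return a * (m / (3.0 * m_star)) ** (1.0 / 3.0)

r_H = hill_radius(1.0 * AU, M_EARTH)  # ~0.01 AU for an Earth mass at 1 AU
ratio_earth = R_EARTH / r_H           # ~0.004, as quoted in the text
ratio_neptune = R_NEPTUNE / r_H       # ~0.017: Neptune radius, Earth mass
```

This also makes explicit why moving the system to $0.25$~AU is equivalent to inflating $\textrm{R}_p$ fourfold: $r_H$ scales linearly with $a$, so the ratio $\nicefrac{\textrm{R}_p}{r_H}$ quadruples.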
Interestingly, systems surviving for the full $10^8$ orbits can be seen all the way down to a value of $\nicefrac{\textrm{R}_p}{r_H} = 0.0157$, where a rapid decrease in the lifetime of the longest-lived systems is seen. This is equivalent to a planet initially located at $1$ AU with a radius $3.5$ times that of Earth. In addition to the dependence of the post-crossing survival time upon the orbital elements of the system, we also find a correlation with the distance of the closest approach. Figure~\ref{fig: closest encounter} shows the time taken for a collision to occur after the closest encounter experienced prior to it, at a time denoted $t_e$, against the distance between the surfaces of the planets. Data points in the shaded grey area are excluded from our fitted models, and this area corresponds to the boundary seen in Figure~\ref{fig: survival time against beta with satellite images} at approximately eight orbits. Here, we see a strong negative correlation, where a least squares model fitted to the log of $t_i-t_e$ and the miss distance of the encounter has a slope of $-0.26$ with a $y$-intercept of $1.6$. Ergo, the closer an encounter experienced by a system, the longer it is likely to survive afterwards. In this plot, each point is also coloured according to the post-crossing survival time of the system. Looking vertically from top to bottom at the colouring, it can also be seen that the absolute post-crossing survival time of systems depends upon the miss distance of the closest encounter. It seems that for planetary systems to survive for a long time after an orbital crossing they must risk collision. We find that the closest encounters are responsible for driving the largest changes in both inclination and eccentricity, and we believe that it is the increase in inclination that causes the trend seen in Figure~\ref{fig: closest encounter}.
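Taken at face value, the fitted relation gives a simple predictor for the delay between the closest encounter and the eventual impact. A sketch using the quoted coefficients (the miss-distance units follow whatever normalisation is used in Figure~\ref{fig: closest encounter}, which we do not restate here):

```python
def time_to_impact(miss_distance):
    """Predicted t_i - t_e (in orbits) from the quoted fitted trend:
    log10(t_i - t_e) = 1.6 - 0.26 * miss_distance."""
    return 10.0 ** (1.6 - 0.26 * miss_distance)

# the negative slope means closer encounters precede *longer* delays to impact
grazing = time_to_impact(0.0)
distant = time_to_impact(5.0)
```

The predictor only applies outside the shaded region of Figure~\ref{fig: closest encounter}, since impacts within roughly eight orbits of the encounter were excluded from the fit.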
Figure~\ref{fig: inclination vs encounter time} shows the time taken for a collision to occur after the closest encounter experienced prior to it against the time\rev{-}averaged inclination range, i.e. the difference between the maximum and minimum inclinations. Systems with the largest inclination range survive for the longest after a close encounter, and the minimum miss distance, indicated through colouring, is key to increasing this range. Figure~\ref{fig: eccentricity vs encounter time} is identical, except that it shows the time\rev{-}averaged maximum eccentricity in a system. Again, the minimum miss distance can be seen to be responsible for the increases in eccentricity. \rev{These increases in eccentricity will also work to increase the lifetime of systems through a reduction in the effect of gravitational focusing on the combined physical/gravitational cross-sectional area of planets \citep{Safronov1972}.}

\begin{figure}
\centering
\includegraphics[width=0.475\textwidth]{images/inclined_pairs_colliding_distribution.pdf}
\caption{Time distribution of collisions between different pairs of planets in the inclined integration suite. Cyan bars indicate the same pair both crossed orbits and collided; dark grey indicates that the pair that collided was not the pair that crossed; yellow indicates a collision between the inner and outer planets. The top pane\rev{l} is initially at $1$ AU while the bottom pane\rev{l} is initially at $0.25$ AU.}
\label{fig: inclined pairs involved in collision distribution}
\end{figure}

\subsection{Which planets collide?}

Figure \ref{fig: inclined pairs involved in collision distribution} is the equivalent to Fig. \ref{fig: pairs involved in collision distribution}, but for the inclined suite. Similarly to the co-planar case, we find that collisions within a single orbit are almost exclusively between the same pair involved in the crossing.
We also find an increase in the number of collisions within this time frame in the $0.25$ AU case compared to the $1$ AU case. However, here the increase is more substantial, at a factor of approximately $3$. The distributions of survival times for systems surviving after crossing for longer than a single orbit appear very different to the co-planar case. Nonetheless, some similarities in behaviour are present: in both the co-planar and inclined case there is a peak of same-pair collisions present between $10^2$ and $10^3$ orbits. Adjusting for the number of systems in each suite, we find that the fraction of systems colliding at this point is roughly five times smaller at $1$ AU in the inclined case. The time period for mixing in the inclined case is approximately $10^4$ orbits, slightly longer than in the co-planar case, after which collisions between any pair of planets become equally likely.

\section{Conclusions}
\label{sec: conclusions}

We performed more than $25,000$ integrations of compact three-planet systems with the TES integration tool for a maximum time of $10^9$ orbits of the innermost planet or until the first collision of planets. We chose to focus our attention on the effects of orbital spacing and therefore distributed system configurations across a wide range of initial values evenly spaced in $\beta$. Efforts were initially focused on the co-planar case, where it is easier to isolate the effects of increasing $\beta$, but were then extended to include the inclined case as well. We find in the co-planar suite that planetary systems are doomed after an orbital crossing: they rapidly experience a collision within a maximum observed time of less than one million orbits. However, despite this prognosis, we found that systems with a wider initial spacing of planets do survive longer, exhibiting a median post-crossing survival time following a slope $\textrm{log}_{10}(t_s) \propto 0.12 \, \beta$.
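The quoted slope implies only a modest stretch in median survival time across the spacings studied; a quick sketch of the implied multiplicative factor:

```python
def survival_scaling(delta_beta, slope=0.12):
    """Multiplicative change in median post-crossing survival time implied
    by the fitted trend log10(t_s) proportional to 0.12 * beta."""
    return 10.0 ** (slope * delta_beta)

# factor across the full range of spacings studied, beta = 3.465 to 6.33
factor = survival_scaling(6.33 - 3.465)
```

A factor of roughly two across the whole $\beta$ range is small compared to the order-of-magnitude effects of inclination and planetary radius discussed below.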
Additionally, we show that three distinct populations of post-instability impact behaviour are present, with very few outliers:
\begin{enumerate}
\item immediate collisions within a tenth of an orbit,
\item prompt collisions between a tenth of an orbit and two orbits,
\item those surviving for much longer than ten orbits.
\end{enumerate}
The pathology of these different behaviours has been identified, and each of them is also observed in the inclined suite. The probabilities of a collision between specified planetary pairs were also calculated, and it was found that collisions take place between the same pair of planets that initially crossed in the majority of cases, ranging from $48\%$ to $62\%$ depending on the region of $\beta$. These probabilities increase further depending on the radius of the planet, with Neptune-radius planets experiencing probabilities as high as $76\%$. Despite this increase in probabilities in the co-planar case, the post-crossing survival time depends only weakly upon the planetary radius, causing an increase of only $10^3$ orbits. In the inclined suite, however, we observe that the planetary radius is the main driver of the post-crossing survival time. We find a decrease in median post-crossing survival time of almost two orders of magnitude between Earth and Neptune radius planets. Additionally, the initial orbital inclinations have been shown to also influence the post-crossing survival times across the full range of $\beta$ by as much as an order of magnitude. We also looked at the RMS eccentricity and inclination growth of all systems within our inclined suite after an orbital crossing. Here, we replicate the eccentricity growth rate $e \propto t^{\nicefrac{1}{6}}$ found in other studies. We do, however, find the growth rate of the inclination to be $i \propto t^{\nicefrac{1}{4}}$ instead of the $i \propto t^{\nicefrac{1}{3}}$ observed in previous work.
Finally, we have shown that systems that experience the closest encounters also survive for the longest, and planetary systems that wish to survive must therefore live dangerously.

\section*{Acknowledgements}
We would like to acknowledge the funding provided by the Engineering and Physical Sciences Research Council (EPSRC) Centre for Doctoral Training in Next Generation Computational Modelling grant EP/L015382/1 that has made this research possible. Additionally, we would like to acknowledge the use of the IRIDIS High Performance Computing Facility, and associated support services, at the University of Southampton. JJL was supported through NASA's PSD ISFM program. \rev{We would like to thank an anonymous reviewer for their detailed comments that helped improve this manuscript.}

\section*{Data Availability}
We have made available all data generated from our integrations. This includes both the data files containing crossing and collision timings for all systems, and periodic state vector data. The dataset is available via \href{https://doi.org/10.5258/SOTON/D1623}{https://doi.org/10.5258/SOTON/D1623}.

\bibliographystyle{mnras}
\section{Adopted forms of the IMF and the CMF}
The IMF is not an observable quantity, but rather an analytical description of the mass distribution among a newly-formed stellar population (Kroupa 2013). Modern forms of the IMF adopt either a log-normal distribution at low masses with a power-law tail above 1~M$_{\odot}$ (Chabrier et al.\ 2003, 2005) or a continuous set of several ``broken'' power-laws (Kroupa 2001, 2013). Above $\sim$ 0.2 M$_{\odot}$, the Chabrier and Kroupa IMFs agree, and the integrated mass of a stellar population is the same using either IMF formalism (Chomiuk \& Povich 2011). Below 0.2 M$_{\odot}$ the form of the IMF is still very uncertain and the subject of much debate.
~~~~~The CMF has a shape similar to the Chabrier and Kroupa IMFs but is shifted towards larger masses by a factor ${\sim}3$, which has generally been interpreted as a core-to-star conversion efficiency of ${\sim}30\%$ (see Fig.~2).
~~~~~The similar shape of the IMF and CMF has led to the belief that there is an intrinsic mapping between these two quantities. However, this one-to-one correspondence does not have much theoretical grounding (see below).

\section{Are the IMF and CMF Universal?}
Over the last decade, there has been growing evidence of a variable IMF, as opposed to the common assumption that the IMF of the Milky Way is universal (Kroupa 2002; Bastian et al.\ 2010, Fig.~1). These claims come from a wide variety of approaches, including stellar population analysis (e.g. van Dokkum $\&$ Conroy 2010; Ferreras et al.\ 2013), gravitational lensing (Treu et al.\ 2010), and dynamical models (Cappellari et al.\ 2012). A notable Galactic example of an exception to a universal IMF is the Taurus Molecular Cloud, which shows an excess of 0.6--0.8 M$_{\odot}$ stars.
Other examples are the massive clusters Westerlund 1 (Lim et al.\ 2013), Quintuplet (Hussman et al.\ 2012), Arches (Hosek et al.\ 2019), and the young nuclear star clusters (Lu et al.\ 2013), although these could depart from a {\em{standard IMF}} as a consequence of mass segregation.\\ ~~~~~From the extragalactic point of view, since 2010 there has been a flurry of IMF studies focusing on early-type elliptical galaxies. These studies have found either an over-abundance of high-mass stars (``top-heavy'' IMF, e.g. Dav\'{e} et al.\ 2008) or an over-abundance of low-mass stars (``bottom-heavy'' IMF, e.g. van Dokkum $\&$ Conroy 2010). In all cases, it is important to keep in mind that to determine the IMF of a stellar population, one has to go through a complicated process consisting of several steps: (1) measure the Luminosity Function (LF) of a complete sample of stars that lie in a defined volume; (2) convert the LF into a present-day mass function (PDMF), using a mass-magnitude relationship; and (3) correct the PDMF for the star-formation history, stellar evolution, galactic structure, cluster dynamical evolution and binarity to obtain the individual-star IMF. Each of these steps is affected by potential biases and pitfalls that can lead to highly uncertain results. \begin{figure}[h] \centering \includegraphics[width=0.55\textwidth]{Offner_etal.jpg} \caption{Recent IMF estimates for 8 star-forming regions. The error bars represent the Poisson error for each data point. The solid lines are the log-normal form proposed by Chabrier (2005) for the IMF, normalized to best follow the data. From Offner et al. (2014).} \label{fig:offner2014} \end{figure} ~~~~~The most recent CMF determinations (Fig.~2) have been obtained with Herschel (e.g., Andr\'{e} et al.\ 2010; Konyves et al.\ 2015; Olmi et al.\ 2018) and ALMA (Motte et al.\ 2018).
The Herschel data support the conclusions of early studies performed in the $\rho$-Oph and Serpens molecular clouds (Motte et al.\ 1998; Johnstone et al.\ 2000; Testi $\&$ Sargent 1998), which suggested that the CMF can be described, similarly to the IMF, by $dN \sim M_{\rm core}^{-1.5}\, dM$ below 0.5~M$_{\odot}$ and by $dN \sim M_{\rm core}^{-\alpha}\, dM$ with $\alpha \simeq 2$--$2.5$ at higher core masses. However, the recent ALMA observations in the mini Galactic starburst W43 appear to show a departure from a standard IMF, with a much shallower CMF. \\ \begin{figure}[h] \centering \includegraphics[width=0.55\textwidth]{Andre_etal2014.jpg} \caption{Core mass function (histogram with error bars) of the prestellar cores identified with Herschel in Aquila (Konyves et al.\ 2015; Andr\'{e} et al.\ 2010). The Kroupa and Chabrier IMFs and the typical mass spectrum of CO clumps are shown for comparison.} \label{fig:andre2014} \end{figure} ~~~~~From an analytical standpoint, Inutsuka (2001) and Hennebelle \& Chabrier (2008) applied the Press-Schechter formalism, and Shadmehri \& Elmegreen (2011) used the ISM power spectrum with a density cutoff to obtain the clump mass function. A major uncertainty with this method, and with the conversion of a theoretical clump mass function into a stellar mass function, is the unknown multiplicity and mass function of stars inside each clump, for which there are few observations. Often clumps contain several stars, and the one-to-one correspondence between clump mass and stellar mass is lost. Numerical simulations of star formation usually obtain the IMF through a more dynamical process involving long-term accretion onto cores along filaments (e.g., Haugb{\o}lle et al.\ 2018; Bate 2019). In these models, the instantaneous CMF and the final stellar IMF are not one-to-one either. ~~~~~Importantly, the CMF has barely been measured across different environments, especially outside the Solar neighborhood. The ALMA-IMF project (PI: F.
Motte), now underway, is an attempt to measure the CMF in fifteen star-forming regions across the Galaxy. We also note that many current dust measurements of the CMF are somewhat uncertain, due to intrinsic challenges such as temperature determinations and whether each core will form one star or more (for more details see Offner et al.\ 2014). The difficulties in assigning emission to a single object using automated routines, and the potential biases in the resulting CMFs, are discussed in Pineda et al.\ (2009). \section{Does Environment Sculpt the IMF?} The question of how environment shapes the IMF can be reduced to two principal variables: metallicity and stellar feedback. \subsection{Metallicity Effects} Metallicity sets the opacity of the cloud, which governs the minimum mass of a gravitationally bound core, the cooling rate of cores, and the maximum possible stellar mass (Eddington luminosity). Therefore, we expect metallicity to play a pivotal role in shaping the IMF. Indeed, recent observations of early-type galaxies find that their local IMFs become increasingly bottom-heavy (i.e., more low-mass stars) in those galaxies that are metal rich (Martin-Navarro et al.\ 2015). \subsection{Feedback Effects} According to simulations by Krumholz et al.\ (2016), radiative heating is the main driver of the characteristic IMF mass. These simulations show that when radiative heating increases, the efficiency of fragmentation is reduced, leading to a top-heavy IMF. Conversely, Conroy \& van Dokkum (2012) suggest that a pivotal role is played by radiative ambient pressure, which is responsible for giving rise to bottom-heavy IMFs (at increasing pressure) as observed in elliptical galaxies with a history of starburst-generating mergers. An additional effect is represented by kinetic feedback. Stellar winds, protostellar outflows/jets, and ionization all likely affect the efficiency of star formation (e.g. Li $\&$ Nakamura 2006).
However, it is still a matter of debate whether, and how, they ultimately affect stellar masses. For instance, it is thought that outflows slow the star formation rate (e.g. Dale $\&$ Bonnell 2008; Wang et al.\ 2010), but it is unclear if this has any effect on the stellar mass distribution. Likewise for ionization, several studies (e.g. Dale $\&$ Bonnell 2012; Walch et al.\ 2013) have shown that ionizing radiation can provide either negative or positive feedback, in the sense of suppressing or triggering star formation, but none of these have been conclusive in demonstrating the impact on the IMF. \section{Open Questions} {\bf{\underline{Overarching Questions:}}} \begin{enumerate} \item{To what extent can we assume the IMF is universal?} \item{Does the CMF map directly onto the IMF in all environments?} \item{How does environment shape the CMF and IMF?} \end{enumerate} \noindent {\bf{\underline{More Specific Questions:}}}\\ \begin{enumerate} \item{What physical mechanism(s) suppresses the formation of brown dwarfs? Can this lead to a better understanding of the distinction between brown dwarfs and giant planets?} \item{Hierarchical collapse models and many observations suggest that giant molecular cloud (GMC) complexes make stars over an extended time period. Can we observe time-evolution in the CMF?} \item{Massive stars do not seem to obey the CMF--IMF mapping. The CMF appears lognormal at high masses, rather than following a Salpeter power-law slope. Do star formation efficiencies change at higher masses, or are cloud mergers required to form the most massive stars?} \item{How do binary/multiple stellar systems arise from the CMF? What determines whether gravitationally-bound cores fragment further?} \item{Can we reconcile observations of bottom-heavy IMFs in elliptical galaxies that were once starbursts with top-heavy IMFs in young massive clusters (YMCs)?
What are the implications for Pop III stars?} \end{enumerate} \section{Observational Goals and Recommendations} To answer the questions above, we outline the following observational goals and recommendations: \begin{itemize} \item{{\bf{Observational Goal---IMF:}} To achieve an accurate measurement of the IMF in diverse environments and explore potential variations with metallicity and feedback, we require observations of numerous YMCs and associations distributed at increasing distances across the Galaxy and beyond, such as Taurus ($d=180$ pc); Orion (400 pc); M17 (1.6 kpc); W3/4/5 (2.0 kpc, outer Galaxy); NGC 7538 (2.8 kpc), NGC 3603 (7 kpc), and the Large and Small Magellanic Clouds (50--60 kpc).\\ \hspace{0.5truecm} {\bf{Recommendation:}} While {\em Gaia} can provide information on distances and velocities for the stars in these star-forming complexes, we need high spatial-resolution (${<}0.1''$), wide-field imaging and spectroscopy at visual and particularly NIR wavelengths to allow stellar age and mass determinations in both unobscured and obscured regions up to several degrees wide on the sky. This type of information can be obtained by the {\em Cosmological Advanced Survey Telescope for Optical and ultraviolet Research} ({\em CASTOR}, Cot\'e et al.\ 2012) and by {\em WFIRST}.
These facilities, combined with LSST ($u,g,r,i,z, Y$) and Euclid ($R,I,Z, Y, J, H$), will be ideal for studies of the IMF, thanks to their wavelength coverage (0.15 - 0.4 $\mu$m for {\em CASTOR}/Visible Imager and Spectrometer and 0.4 - 2 $\mu$m for {\em WFIRST}/WFI) and large FOV (0.67 deg$^{2}$ for {\em CASTOR}/Visible Imager, and 0.25 deg$^{2}$ for {\em WFIRST}/WFI).} \item{{\bf{Observational Goal---CMF:}} Along the same lines as for the IMF, we advocate for surveys of Galactic and extra-galactic GMC complexes (as described above) to investigate potential variations of the CMF with environment.\\ \hspace{0.5truecm} {\bf{Recommendation:}} Interferometric observations (ALMA, EVLA, SMA) will provide high-resolution observations of targeted regions. However, the {\em Origins Space Telescope (OST)} will be uniquely capable of performing statistical measurements of the CMF and protostellar luminosity functions in distant/obscured Galactic regions, including starburst-like environments. What makes {\em OST}\ ideal for this task is its unique imaging and mapping capabilities of the far-IR cold dust emission peak in dense, prestellar clumps and cores. This can be achieved through the combination of (1) a large FOV for efficient scanning of extended regions on the sky; (2) sufficiently high angular resolution to resolve 0.1 pc cores at a distance of a few kpc; and (3) sufficiently high sensitivity to detect 0.1~M$_{\odot}$ cores at 2--3 kpc. The 5.9-m {\em OST}\ mirror achieves a resolution of ${\sim}6''$ at 50~$\mu$m, which is comparable to the angular resolution at shorter wavelengths of {\em Spitzer}/IRAC ($2''$) and {\em Spitzer}/MIPS ($6''$). The Far-Infrared Imager and Polarimeter instrument, FIP, can map 1~deg$^{2}$ of the sky in 100~hrs while achieving a 5-$\sigma$ sensitivity of ${\sim}1~\mu$Jy.
We note that the baseline concept for the {\em OST}/FIP instrument has two bands---50 and 250~$\mu$m---but an optional upscope would add the 100 and 500~$\mu$m channels. We recommend the inclusion of these additional bands that would better constrain the peak of the cold dust emission. } \end{itemize} \noindent Importantly, currently existing (e.g. {\em HST}, the Magellan telescope, etc.) or planned facilities (e.g., the {\em James Webb Space Telescope}) will be able to carry out imaging and multi-object spectroscopy of targeted Galactic YMCs. While such observations will be useful for IMF studies, the reach of these measurements will be limited by the small FOVs, which do not allow efficient mapping of large (i.e. of the order of deg$^{2}$) star-forming complexes across the Galaxy. \pagebreak \textbf{References}\\ \small{ Andr\'e, P., et al.\, 2010, A$\&$A, 518, 102\\ Bastian, N., et al.\, 2010, ARA$\&$A, 48, 339\\ Bate, M., 2019, MNRAS, 484, 2341\\ Cappellari, M., et al.\, 2012, Nature, 484, 485\\ Chabrier, G., et al.\, 2003, PASP, 115, 763\\ Chabrier, G., 2005, {\em{The Initial Mass Function 50 Years Later}}, vol. 327 of Astrophysics and Space Science Library, ed. by Corbelli, Palla $\&$ Zinnecker, pp. 41-50, Springer, Dordrecht\\ Chomiuk, L. $\&$ Povich, M., 2011, AJ, 142, 197\\ Conroy, C. $\&$ van Dokkum, P. G., 2012, ApJ, 760, 71\\ Dale, J. E. $\&$ Bonnell, I. A., 2008, MNRAS, 391, 2\\ Dale, J. E. $\&$ Bonnell, I. A., 2012, MNRAS, 422, 1352\\ Dav\'{e}, R., 2008, MNRAS, 385, 147\\ Elmegreen, B. G., 2011, ApJ, 564, 773\\ Ferreras, I., et al.\, 2013, MNRAS, 429, 15\\ Haugb{\o}lle, T., Padoan, P., Nordlund, \AA., 2018, ApJ, 854, 35\\ Hennebelle, P. $\&$ Chabrier, G., 2008, ApJ, 684, 395\\ Hosek, M., et al.\, 2019, ApJ, 870, 44\\ Hussman, B., et al.\, 2012, A$\&$A, 540, 57\\ Inutsuka, S.-I.
2001, ApJ, 559, L149\\ Johnstone, D., et al.\, 2000, ApJ, 545, 327\\ Konyves, V., et al.\, 2015, A$\&$A, 584, 81\\ Kroupa, P., 2001, MNRAS, 322, 231\\ Kroupa, P., 2002, Science, 295, 82\\ Kroupa, P., 2013, {\em{The Stellar and Sub-Stellar Initial Mass Function of Simple and Composite Populations}}, p. 115\\ Krumholz, M., et al.\, 2016, MNRAS, 458, 1671\\ Li, Z.-Y. $\&$ Nakamura, F., 2006, ApJL, 640, 187\\ Lim, B., et al.\, 2013, AJ, 145, 2, 46\\ Lu, J. R., et al.\, 2013, ApJ, 764, 155\\ Martin-Navarro, I., et al.\, 2015, ApJL, 806, 31\\ Motte, F., et al.\, 1998, A$\&$A, 336, 150\\ Motte, F., et al.\, 2018, Nature Astronomy, 2, 478\\ Offner, S., et al.\, 2014, {\em{The Origin and Universality of the Initial Mass Function}}, Protostars and Planets VI, ed. Beuther, Klessen, Dullemond, Henning, University of Arizona Press, 914 pp., p. 53-75\\ Olmi, L., et al.\, 2018, MNRAS, 480, 1831\\ Pineda, J. E., Rosolowsky, E., Goodman, A. A., 2009, ApJ, 699, 134\\ Shadmehri, M. $\&$ Elmegreen, B. G., 2011, MNRAS, 410, 788\\ Testi, L. $\&$ Sargent, A. I., 1998, ApJL, 508, 91\\ Treu, T., et al.\, 2010, ApJ, 709, 1195\\ van Dokkum, P. G. $\&$ Conroy, C., 2010, Nature, 468, 940\\ Walch, S., et al.\, 2013, MNRAS, 435, 917\\ Wang, P., et al.\, 2010, ApJ, 709, 27\\ } \end{document}
\section{Introduction}\label{sec:intro} Policy gradient (PG) methods, or more generally direct policy search methods, have long been recognized as one of the foundations of reinforcement learning (RL) \cite{sutton2018reinforcement}. Specifically, PG methods directly search for the optimal policy parameter that maximizes the long-term return in Markov decision processes (MDPs), following the policy gradient ascent direction \cite{williams1992simple,sutton2000policy}. This search direction can be more efficient using a preconditioning matrix, e.g., using the natural PG direction \cite{kakade2002natural}. These methods have achieved tremendous empirical successes recently, especially boosted by the power of (deep) neural networks for policy parametrization \cite{schulman2015trust,lillicrap2015continuous,mnih2016asynchronous,schulman2017proximal}. These successes are primarily attributed to the fact that PG methods naturally incorporate \emph{function approximation} for policy parametrization, in order to handle massive and even continuous state-action spaces. In practice, the policy gradients are usually estimated via samples using Monte-Carlo rollouts and bootstrapping \cite{williams1992simple,baxter2001infinite}. Such stochastic PG methods notoriously suffer from very high variances, which not only destabilize but also slow down the convergence. Several conventional approaches have been advocated to reduce the variance of PG methods, e.g., by adding a baseline \cite{sutton2000policy,wu2018variance}, or by using function approximation for estimating the value function, namely, developing actor-critic algorithms \cite{konda2000actor,peters2008natural,bhatnagar2009natural}. 
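The variance-reduction effect of a baseline can be illustrated in a minimal score-function setting. The sketch below is not from the paper: it uses a hypothetical one-step Gaussian "policy" with a quadratic reward, and shows that subtracting a constant baseline leaves the REINFORCE-style gradient estimate (essentially) unbiased while reducing its variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative one-step setting (an assumption, not the paper's MDP):
# policy pi_theta = N(theta, 1) over actions a, reward r(a) = -(a - 3)^2.
# Score function: d/dtheta log pi_theta(a) = a - theta.
# True gradient of E[r] at theta = 0 is -2*(theta - 3) = 6.
theta = 0.0
n = 100_000

a = rng.normal(theta, 1.0, size=n)       # actions sampled from the policy
r = -(a - 3.0) ** 2                      # rewards
score = a - theta                        # score-function values

baseline = r.mean()                      # a simple constant baseline
g_plain = r * score                      # vanilla REINFORCE estimates
g_base = (r - baseline) * score          # baselined estimates

# Both estimators have (nearly) the same mean ...
print(g_plain.mean(), g_base.mean())
# ... but the baseline substantially reduces the variance.
print(g_plain.var(), g_base.var())
```

The same mechanism underlies state-dependent baselines and critics; only the choice of baseline changes.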
More recently, motivated by the advances of variance-reduction techniques in stochastic optimization \cite{johnson2013accelerating, allen2016variance, reddi2016stochastic,defazio2014saga}, there has been surging interest in developing \emph{variance-reduced} PG methods \cite{xu2017stochastic,papini2018stochastic,xu2019improved,xu2019sample,yuan2020stochastic}, which are shown to be faster. In contrast to the empirical successes of PG methods, their theoretical convergence guarantees, especially \emph{non-asymptotic global} convergence guarantees, have not been addressed satisfactorily until very recently \cite{fazel2018global,zhang2019global,wang2019neural,bhandari2019global,agarwal2019optimality}. By \emph{non-asymptotic global} convergence, here we mean the convergence behavior of PG methods from any initialization, and the quality of the point they converge to (which usually enjoys global optimality up to some compatible function approximation error due to policy parametrization), after a finite number of iterations/samples. These recent prominent guarantees normally go beyond the folklore \emph{first-order} stationary-point convergence\footnotemark[1] that one would expect from a \emph{stochastic nonconvex optimization} perspective of solving RL with PG methods. Special landscapes of the RL objective, though nonconvex, have enabled convergence to even globally optimal values. On the other hand, none of the aforementioned variance-reduced PG methods \cite{xu2017stochastic,papini2018stochastic,xu2019improved,xu2019sample,yuan2020stochastic} have been shown to enjoy these desired global convergence properties. It remains unclear whether these methods can converge beyond first-order stationary policies. \footnotetext[1]{That is, finding a parameter $\theta$ such that $\|\nabla J(\theta)\|^2\leq \varepsilon$, where $J$ is the expected return.
} Motivated by these advances and the questions that remain to be answered, we aim in this paper to improve the convergence of PG and natural PG (NPG) methods, and their variance-reduced variants, under general smooth policy parametrizations. Our contributions are summarized as follows. \vspace{9pt} \noindent{\textbf{Contributions.}} With a focus on the conventional Monte-Carlo-based PG methods, we propose a general framework for analyzing their \emph{global convergence}. Our contribution is three-fold: first, we establish the global convergence up to compatible function approximation errors due to policy parametrization, for a variance-reduced PG method SRVR-PG \cite{xu2019sample}; second, we improve the global convergence of NPG methods established in \cite{agarwal2019optimality}, from $\mathcal{O}\left(\varepsilon^{-4}\right)$ to $\mathcal{O}\left(\varepsilon^{-3}\right)$; third, we propose a new variance-reduced algorithm based on NPG, and establish its global convergence with an efficient sample-complexity. These improvements are based on a framework that integrates the advantages of previous analyses on (variance reduced) PG and NPG, and rely on a (mild) assumption that the Fisher information matrix induced by the policy parametrization is positive definite (see Assumption \ref{assump: strong convexity}). A comparison of previous results and our improvements is laid out in Table \ref{table: summary of results}. \vspace{6pt} \noindent{\textbf{Related Work.}} \vspace{1pt} \noindent{\textbf{Global Convergence of (Natural) PG.}} Recently, there has been a surging research interest in investigating the global convergence of PG and NPG methods, which is beyond the folklore convergence to first-order stationary policies. In the special case with linear dynamics and quadratic reward, \cite{fazel2018global} shows that PG methods with random search converge to the globally optimal policy with linear rates. 
In \cite{zhang2019global}, with a simple reward-reshaping, PG methods have been shown to converge to second-order stationary-point policies. \cite{bhandari2019global} shows that for finite MDPs and several control tasks, the nonconvex RL objective has no suboptimal local minima. \cite{wang2019neural} proves that (natural) PG methods converge to the globally optimal value when overparametrized neural networks are used for function approximation. \cite{agarwal2019optimality} provides a fairly general characterization of global convergence for these methods, and a basic sample complexity result for sample-based NPG updates. It is also worth noting that trust-region policy optimization (TRPO) \cite{schulman2015trust}, as a variant of NPG, also enjoys global convergence with overparametrized neural networks \cite{liu2019neural}, and for regularized MDPs \cite{shani2019adaptive}. Very recently, for actor-critic algorithms, a series of non-asymptotic convergence results have also been established \cite{xu2020improving,xu2020non,wu2020finite,hong2020two}, with global convergence guarantees when natural PG/PPO are used in the actor step. \vspace{3pt} \vspace{-3pt} \noindent{\textbf{Variance-Reduction (VR) for PG.}} Conventional approaches to reduce the high variance in PG methods include using (natural) actor-critic algorithms \cite{konda2000actor,peters2008natural,bhatnagar2009natural}, and adding baselines \cite{sutton2000policy,wu2018variance}. The idea of variance reduction (VR) was first proposed to accelerate stochastic minimization. VR algorithms such as SVRG \cite{johnson2013accelerating, allen2016variance, reddi2016stochastic}, SAGA \cite{defazio2014saga}, SARAH \cite{nguyen2017sarah}, and Spider \cite{fang2018spider} achieve acceleration over SGD in both convex and nonconvex settings. SVRG is also accelerated by applying a positive definite preconditioner that captures the curvature of the objective \cite{liu2019acceleration}.
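For context on the VR machinery referenced above, here is a minimal SVRG-style sketch on a toy least-squares problem; the problem instance, step size, and epoch count are illustrative assumptions, not taken from any of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy finite-sum least squares (an assumed instance):
# f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2, minimized at x_star.
n, d = 200, 5
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star

def grad_i(x, i):
    # Gradient of the i-th summand.
    return A[i] * (A[i] @ x - b[i])

x = np.zeros(d)
eta = 0.005
for epoch in range(50):
    x_snap = x.copy()                      # snapshot point
    mu = A.T @ (A @ x_snap - b) / n        # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Semi-stochastic gradient: unbiased, with variance that vanishes
        # as both x and x_snap approach the minimizer.
        v = grad_i(x, i) - grad_i(x_snap, i) + mu
        x -= eta * v

print(np.linalg.norm(x - x_star))
```

The recursive estimators of SARAH/Spider (and hence SRVR-PG) replace the snapshot gradient with the previous iterate's estimate, but the control-variate idea is the same.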
Inspired by these successes in stochastic optimization, VR is also incorporated into PG methods \cite{xu2017stochastic}, with empirical validations for acceleration, and analyzed rigorously in \cite{papini2018stochastic}. Then, \cite{xu2019improved} improves the sample complexity of SVRPG, and \cite{xu2019sample} proposes a new SRVR-PG method that uses a recursively updated semi-stochastic policy gradient, which leads to an improved sample complexity of $\cO(\varepsilon^{-1.5})$ over previous works. More recently, \cite{yuan2020stochastic} proposes a new STORM-PG method, which blends momentum in the update and matches the sample complexity of \cite{xu2019sample}, and \cite{pham2020hybrid} applies the idea of SARAH and considers a more general setting with regularization. Finally, heavy-ball momentum has also been applied to PG methods \cite{huang2020momentum}. We highlight that all these sample complexity results are for first-order stationary-point convergence (which might have arbitrarily bad performance: see \eqref{equ: stationary convergence definition}), in contrast to the more desired global convergence guarantees (up to some function approximation errors that can be small) that we are interested in.
\begin{table}[H] \begin{center} \begin{tabular}{cccc} \begin{tabular}[c]{@{}c@{}}NPG\\\cite{agarwal2019optimality}\end{tabular} & \begin{tabular}[c]{@{}c@{}}NPG\\ \cite{wang2019neural}\end{tabular} & \begin{tabular}[c]{@{}c@{}}TRPO\\ \cite{liu2019neural}\end{tabular} & \begin{tabular}[c]{@{}c@{}}TRPO\\ \cite{shani2019adaptive}\end{tabular} \\ \hline \addlinespace[0.2cm] $\mathcal{O}(\varepsilon^{-4})$ & $\mathcal{O}(T_{TD}\varepsilon^{-2})$ \footnotemark[1] & $\mathcal{O}(\varepsilon^{-8})$ & $\mathcal{O}(\varepsilon^{-4})$ \\ \hline \end{tabular} \vskip 0.5cm \begin{tabular}{cccc} \begin{tabular}[c]{@{}c@{}}NPG\\ \eqref{equ: NPG update_2}\end{tabular} & \begin{tabular}[c]{@{}c@{}}PG\\ \eqref{equ: PG update}\end{tabular} & \begin{tabular}[c]{@{}c@{}}SRVR-PG \cite{xu2019sample}\\ (Algorithm \ref{alg: SRVR-PG})\end{tabular} & \begin{tabular}[c]{@{}c@{}}SRVR-NPG\\ (Algorithm \ref{alg: SRVR-NPG})\end{tabular} \\ \hline \addlinespace[0.2cm] $\cO(\varepsilon^{-3})$ &$\mathcal{O}({\sigma^2}{\varepsilon^{-4}})$ & $\mathcal{O}\left((W+\sigma^2)\varepsilon^{-3}\right)$ & $\mathcal{O}\left((W+\sigma^2)\varepsilon^{-2.5}+\varepsilon^{-3}\right)$ \\ \hline \end{tabular} \end{center} \caption{Comparison of sample complexities of several methods to reach global optimality up to some compatible function approximation error (see \eqref{equ: compatible function approximation error}). Our results are listed in the second table (See App. \ref{app: previous results} for their derivations). We compare the number of trajectories to reach $\varepsilon-$optimality in expectation, up to some inherent error due to the function approximation for policy parametrization (see \eqref{equ: global convergence definition}). 
$\sigma^2$ is an upper bound for the variance of gradient estimator (see Assumption \ref{assump: variance}), and $W$ is an upper bound for the variance of importance weight (see Assumption \ref{assump: importance sampling}).} \label{table: summary of results} \vspace{-5pt} \end{table} \footnotetext[1]{In \cite{wang2019neural}, $T_{TD}$ iterations of temporal difference updates are needed at each iteration; $T_{TD}$ can be large for wide neural networks. See App. \ref{app: previous results} for details.} \vspace{-5pt} \vspace{-10pt} \section{Preliminaries} \vspace{-5pt} We first introduce some preliminaries regarding both the MDPs and policy gradient methods. \vspace{-5pt} \subsection{Markov Decision Processes}\label{sec:prelim_MDP} \vspace{-5pt} Consider a discounted Markov decision process defined by a tuple $(\cS,\cA,\PP,r,\gamma)$, where $\cS$ and $\cA$ denote the state and action spaces of the agent, $\PP(s'\given s,a):\cS\times\cA\to \cP(\cS)$ is the Markov kernel that determines the transition probability from $(s,a)$ to state ${s}'$, $\gamma\in(0,1)$ is the discount factor, and $r:\cS\times\cA\to [-R,R]$ is the bounded reward function. At each time $t$, the agent executes an action $a_t\in\cA$ given the current state $s_t\in\cS$, following a possibly stochastic policy $\pi:\cS\to \cP(\cA)$, i.e., $a_t\sim \pi(\cdot\given s_t)$. Then, given the state-action pair $(s_t,a_t)$, the agent observes a reward $r_t=r(s_t,a_t)$. Thus, under any policy $\pi$, one can define the \emph{state-action value} function $Q^{\pi}:\cS\times\cA\to\RR$ as \$ Q^{\pi}(s,a):=\EE_{a_t\sim \pi(\cdot\given s_t),s_{t+1}\sim \PP(\cdot\given s_t,a_t)}\bigg(\sum_{t=0}^\infty \gamma^t r_t\bigggiven s_0=s,a_0=a\bigg). \$ One can also define the \emph{state-value} function $V^\pi:\cS\to \RR$, and the \emph{advantage} function $A^\pi:\cS\times\cA\to \RR$, under policy $\pi$, as $V^\pi(s):=\EE_{a\sim \pi(\cdot\given s)}[Q^\pi(s,a)]$ and $A^\pi(s,a):=Q^\pi(s,a)-V^\pi(s)$, respectively.
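These definitions can be checked on a small tabular MDP. The sketch below uses a randomly generated toy MDP (purely illustrative) to compute $V^\pi$ from the Bellman equation, form $Q^\pi$ and $A^\pi$, and verify that the advantage averages to zero under the policy:

```python
import numpy as np

rng = np.random.default_rng(2)
S, A_n, gamma = 4, 3, 0.9                      # toy sizes (assumptions)

P = rng.dirichlet(np.ones(S), size=(S, A_n))   # P[s, a] = next-state distribution
r = rng.uniform(-1.0, 1.0, size=(S, A_n))      # bounded rewards
pi = rng.dirichlet(np.ones(A_n), size=S)       # a fixed stochastic policy

# Policy-induced transition matrix and reward vector.
P_pi = np.einsum('sa,sat->st', pi, P)
r_pi = np.einsum('sa,sa->s', pi, r)

# V^pi solves the Bellman equation (I - gamma * P_pi) V = r_pi.
V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

# Q^pi(s, a) = r(s, a) + gamma * sum_s' P(s'|s, a) V(s').
Q = r + gamma * np.einsum('sat,t->sa', P, V)
Adv = Q - V[:, None]                           # advantage function

# Sanity check: the advantage averages to zero under the policy.
print(np.einsum('sa,sa->s', pi, Adv))          # ~ zeros
```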
Suppose that the initial state $s_0$ is drawn from some distribution $\rho$. Then, the goal of the agent is to find the optimal policy that maximizes the expected discounted return, namely, \#\label{equ: def_obj} \max_{\pi}~~J(\pi):=\EE_{s_0\sim\rho}[V^\pi(s_0)]. \# In practice, both the state and action spaces $\cS$ and $\cA$ can be very large. Thus, the policy $\pi$ is usually parametrized as $\pi_\theta$ for some parameter $\theta\in\RR^d$, using, for example, deep neural networks. As such, the goal of the agent is to maximize $J(\pi_\theta)$ in the space of the parameter $\theta$, which naturally induces an optimization problem. Such a problem is in general nonconvex \cite{zhang2019global, agarwal2019optimality}, making it challenging to find the globally optimal policy. For notational convenience, let us denote $J(\pi_\theta)$ by $J(\theta)$. Many of the previous works focus on establishing stationary convergence of policy gradient methods. That is, finding a $\theta$ that satisfies \begin{align} \label{equ: stationary convergence definition} \|\nabla J(\theta)\|^2\leq \varepsilon. \end{align} Obviously, such a $\theta$ may not lead to a large $J(\theta)$. Instead, we are interested in finding a $\theta$ such that \begin{align} \label{equ: global convergence definition} J^{\star} - J(\theta) \leq \cO(\sqrt{\varepsilon_{\text{bias}}})+\varepsilon, \end{align} where $J^{\star}=\max_{\pi} J(\pi)$, and the $\cO(\sqrt{\varepsilon_{\text{bias}}})$ term reflects the inherent error related to the possibly limited expressive power of the policy parametrization $\pi_{\theta}$ (see Assumption \ref{assump: compatible error} for the definition). \subsection{(Natural) Policy Gradient Methods}\label{sec:prelim_NPG} To solve the optimization problem \eqref{equ: def_obj}, one standard way is via the policy gradient (PG) method \cite{sutton2000policy}. Specifically, let $\tau_i=\{s^i_0,a^i_0,s^i_1,\cdots\}$ denote the data of a sampled trajectory under policy $\pi_\theta$. 
Then, a stochastic PG ascent update is given as \begin{align} \label{equ: PG update} \theta^{k+1} = \theta^{k} + \eta \cdot \frac{1}{N}\sum_{i=1}^N g(\tau_i\given \theta^k), \end{align} where $\eta>0$ is a stepsize, $N$ is the number of trajectories, and $g(\tau_i \given\theta^k)$ estimates $\nabla J(\theta^k)$ using the trajectory $\tau_i$. Common unbiased estimators of PG include REINFORCE \cite{williams1992simple}, using the policy gradient theorem \cite{sutton1992reinforcement}, and GPOMDP \cite{baxter2001infinite}. The commonly used GPOMDP estimator is given by \#\label{equ:GPOMDP_surro} g(\tau_i\given \theta)= \sum_{h=0}^{\infty} \left(\sum_{t=0}^{h}\nabla_{\theta}\log \pi_{\theta}(a^i_t\given s^i_t)\right)\left(\gamma^h r(s^i_h, a^i_h)\right), \# where $\nabla_{\theta}\log \pi_{\theta}(a^i_t\given s^i_t)$ is the \emph{score function}. If the expectation of this infinite sum exists, then \eqref{equ:GPOMDP_surro} becomes an unbiased estimate of the policy gradient of the objective $J(\theta)$ defined in \eqref{equ: def_obj}. This unbiasedness is established in App. \ref{sec:help_lemma} for completeness. In practice, a \emph{truncated} version of GPOMDP is used to approximate the infinite sum in \eqref{equ:GPOMDP_surro}, as \begin{align} \label{equ: truncated GPOMDP estimator} g(\tau_i^H\given \theta) &= \sum_{h=0}^{H-1} \left(\sum_{t=0}^{h}\nabla_{\theta}\log \pi_{\theta}(a^i_t\given s^i_t)\right)\left(\gamma^h r(s^i_h, a^i_h)\right), \end{align} where $\tau_i^H=\{s^i_0,a^i_0,s^i_1,\cdots,s^i_{H-1},a^i_{H-1}, s^i_H\}$ is a truncation of the full trajectory $\tau_i$ of length $H$. \eqref{equ: truncated GPOMDP estimator} is thus a biased stochastic estimate of $\nabla J(\theta)$, with the bias being negligible for a large enough $H$.
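As a concrete instance of the truncated GPOMDP estimator, the following sketch implements it for a hypothetical tabular softmax policy on a toy MDP (the MDP, parametrization, and sample counts are illustrative assumptions; only the estimator's structure follows the displayed equation):

```python
import numpy as np

rng = np.random.default_rng(3)
S, A_n, gamma, H = 3, 2, 0.9, 20               # toy sizes and horizon

P = rng.dirichlet(np.ones(S), size=(S, A_n))   # P[s, a] = next-state distribution
r = rng.uniform(0.0, 1.0, size=(S, A_n))
theta = rng.normal(size=(S, A_n))              # tabular softmax parameters

def pi(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def score(s, a):
    # grad_theta log pi_theta(a|s) for the tabular softmax policy.
    g = np.zeros_like(theta)
    g[s] = -pi(s)
    g[s, a] += 1.0
    return g

def gpomdp():
    """One truncated-GPOMDP gradient estimate from a single H-step rollout."""
    g = np.zeros_like(theta)
    cum_score = np.zeros_like(theta)
    s = 0                                      # fixed initial state
    for h in range(H):
        a = rng.choice(A_n, p=pi(s))
        cum_score += score(s, a)               # sum of scores up to step h
        g += (gamma ** h) * r[s, a] * cum_score
        s = rng.choice(S, p=P[s, a])
    return g

est = np.mean([gpomdp() for _ in range(500)], axis=0)
print(est.shape)                               # (3, 2)
```

Note that for the softmax parametrization the score sums to zero over actions at each state, so every estimate (and their average) inherits zero row sums.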
For notational simplicity, we denote the $H$-horizon trajectory distribution induced by the initial state distribution $\rho$ and policy $\pi_\theta$ as $p^H_{\rho}(\cdot \given \theta)$, that is, \[ p^H_{\rho}(\tau^H\given\theta) = \rho (s_0)\prod_{h=0}^{H-1} \pi_{\theta}(a_h\given s_h) \PP(s_{h+1}\given a_{h},s_{h}). \] Hereafter, unless otherwise stated, we refer to this \emph{$H$-horizon trajectory} simply as \emph{trajectory}, drawn from $p^H_{\rho}(\cdot\given\theta)$. As a significant variant of PG, NPG \cite{kakade2002natural} also incorporates a preconditioning matrix $F_{\rho}(\theta)$, leading to the following update \# &F_{\rho}(\theta)=\EE_{s\sim d^{\pi_\theta}_{\rho}}[F_s(\theta)], \qquad \theta^{k+1} =\theta^k+\eta\cdot F^{\dagger}_\rho(\theta^k)\nabla J(\theta^k), \label{equ: NPG update} \# where $F_s(\theta) = \EE_{a\sim \pi_{\theta}(\cdot\given s)}\left[\nabla_{\theta}\log\pi_{\theta}(a\given s)\nabla_{\theta}\log\pi_{\theta}(a\given s)^\top \right]$ is the Fisher information matrix of $\pi_{\theta}(\cdot\given s)\in \cP(\cA)$, $F^{\dagger}_\rho(\theta^k)$ is the Moore-Penrose pseudoinverse of $F_\rho(\theta^k)$, and $d^{\pi_\theta}_{\rho}\in\cP(\cS)$ is the state visitation measure induced by policy $\pi_\theta$ and initial distribution $\rho$, which is defined as \[ d^{\pi_\theta}_{\rho}(s)\coloneqq (1-\gamma)\EE_{s_0\sim\rho}\sum_{t=0}^\infty\gamma^t \PP(s_t=s\given s_0,\pi_\theta). 
\] The NPG update \eqref{equ: NPG update} can also be written as \cite{kakade2002natural,agarwal2019optimality} \#\label{equ: NPG update_2} \theta^{k+1}=\theta^k+\eta \cdot w^k,\quad~~ \text{with~~~~} w^k\in\argmin_{w\in\RR^d}~~~L_{\nu^{\pi_{\theta}}_{\rho}}(w; \theta), \# where $L_{\nu^{\pi_{\theta}}_{\rho}}(w; \theta)$ is the \textit{compatible function approximation error} defined by \begin{align} \label{equ: compatible function approximation error} L_{\nu^{\pi_{\theta}}_{\rho}}(w; \theta)=\EE_{(s,a)\sim \nu^{\pi_{\theta}}_{\rho}}\left[\big(A^{\pi_{\theta}}(s,a)-(1-\gamma)w^\top\nabla_{\theta}\log\pi_{\theta}(a\given s)\big)^2\right]. \end{align} Here, $\nu^{\pi_{\theta}}_{\rho}(s,a) = d^{\pi_\theta}_{\rho}(s)\pi(a\given s)$ is the \emph{state-action} visitation measure induced by $\pi_\theta$ and initial state distribution $\rho$, which can also be written as \begin{align} \label{equ: nu with nu_0} \nu^{\pi_{\theta}}_{\rho}(s,a)\coloneqq (1-\gamma)\EE_{s_0\sim\rho}\sum_{t=0}^\infty\gamma^t \PP(s_t=s,a_t=a\given s_0,\pi_\theta). \end{align} For convenience, we will denote $\nu^{\pi_{\theta}}_{\rho}$ by $\nu^{\pi_{\theta}}$ hereafter. In other words, the NPG update direction $w^k$ is given by the minimizer of a stochastic optimization problem. In practice, one obtains an approximate NPG update direction $w^k$ by SGD (see Procedure \ref{alg: SGD for NPG subproblem}). Regarding the NPG update \eqref{equ: NPG update_2}, we make the following standing assumption on the Fisher information matrix induced by $\pi_\theta$ and $\rho$. \begin{assumption} \label{assump: strong convexity} For all $\theta\in\Rd$, the Fisher information matrix induced by policy $\pi_{\theta}$ and initial state distribution $\rho$ satisfies \$ F_{\rho}(\theta)=\EE_{(s,a)\sim \nu^{\pi_{\theta}}_{\rho}}\left[\nabla_{\theta}\log\pi_{\theta}(a\given s)\nabla_{\theta}\log\pi_{\theta}(a\given s)^\top \right]\succcurlyeq \mu_F \cdot I_d \$ for some constant $\mu_F>0$. 
\end{assumption} Assumption \ref{assump: strong convexity} essentially states that $F_{\rho}(\theta)$ behaves well as a preconditioner in the NPG update \eqref{equ: NPG update_2}. This is a common (and minimal) requirement for the convergence of preconditioned algorithms in both convex and nonconvex settings in the optimization realm, for example, the quasi-Newton algorithms \cite{broyden1970convergence, fletcher1970new, goldfarb1970family, shanno1970conditioning}, and their stochastic variants \cite{byrd2016stochastic,moritz2016linearly, gower2016stochastic, wang2017stochastic, liu2019acceleration}. In the RL realm, one common example of a policy parametrization that can satisfy this assumption is the Gaussian policy \cite{williams1992simple,duan2016benchmarking,papini2018stochastic,xu2019sample}, where $\pi_\theta(\cdot\given s)=\cN(\mu_\theta(s),\Sigma)$ with the mean parametrized linearly as $\mu_\theta(s)=\phi(s)^\top \theta$. Here, $\phi(s)$ denotes a feature matrix of proper dimensions, $\theta$ is the coefficient vector, and $\Sigma\succ 0$ is a fixed covariance matrix. In this case, the Fisher information matrix at each $s$ becomes $\phi(s)\Sigma^{-1}\phi(s)^{\top}$, independent of $\theta$, and it is uniformly lower bounded (in the positive-definite sense) if $\phi(s)$ has full row rank, namely, the feature vectors (the rows of $\phi(s)$) are linearly independent, which is a common requirement in linear function approximation settings \cite{tsitsiklis1997analysis,melo2008analysis,sutton2009fast}. See App. \ref{sec:append_justify_Fisher} for more detailed justifications, as well as discussions on more general policy parametrizations. In the pioneering NPG work \cite{kakade2002natural}, $F(\theta)$ is directly assumed to be positive definite, as it is in the follow-up works on natural actor-critic algorithms \cite{peters2008natural,bhatnagar2009natural}.
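The equivalence between the preconditioned form \eqref{equ: NPG update} and the least-squares form \eqref{equ: NPG update_2} is easy to check numerically. The following sketch (Python/NumPy, with synthetic score vectors and advantages standing in for samples from $\nu^{\pi_{\theta}}_{\rho}$; all variable names are illustrative) compares $F^{\dagger}_{\rho}\nabla J$ with the minimizer of the compatible function approximation error:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, gamma = 5, 400, 0.9

# Synthetic samples: psi_i stands for the score grad log pi(a_i|s_i),
# A_i for the advantage A(s_i, a_i).
psi = rng.normal(size=(n, d))
A = rng.normal(size=n)

# Empirical Fisher matrix F = E[psi psi^T] and policy gradient
# grad J = E[psi * A] / (1 - gamma).
F = psi.T @ psi / n
grad_J = psi.T @ A / (n * (1.0 - gamma))

# Route 1: preconditioned direction w = F^+ grad J.
w_precond = np.linalg.pinv(F) @ grad_J

# Route 2: minimizer of the empirical compatible function approximation
# error (1/n) * sum_i (A_i - (1 - gamma) * w^T psi_i)^2, a linear
# least-squares problem.
w_lsq, *_ = np.linalg.lstsq((1.0 - gamma) * psi, A, rcond=None)

assert np.allclose(w_precond, w_lsq)
```

Both routes return the same direction; the algorithms below instead approximate the least-squares route by SGD, which avoids forming $F_{\rho}$ explicitly.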
In fact, this way, $F(\theta)$ defines a valid Riemannian metric on the parameter space, which has been used for interpreting the desired convergence properties of natural gradient methods \cite{amari1998natural,martens2014new}. In a recent version of \cite{agarwal2019optimality}, a related assumption (specifically, Assumption 6.5, item 3) is made to establish the global convergence of NPG: it is assumed that $\lambda_{\textrm{min}}(F_{\rho}(\theta))$ is not too small compared with the Fisher information matrix induced by a fixed comparator policy. This is implied by our Assumption \ref{assump: strong convexity}. To sum up, positive definiteness of the Fisher preconditioning matrix is a common and not very restrictive requirement. In Sec. \ref{sec: theory}, we shall see that under Assumption \ref{assump: strong convexity}, the stationary convergence of NPG can be analyzed, and NPG enjoys a better sample complexity of $\mathcal{O}(\varepsilon^{-3})$ in terms of its global convergence, compared with the existing sample complexity of $\mathcal{O}(\varepsilon^{-4})$ in \cite{agarwal2019optimality}. In addition, interestingly, PG and its variance-reduced version SRVR-PG also enjoy global convergence, although the Fisher information matrix does not appear explicitly in their updates. \vspace{-5pt} \section{Variance-Reduced Policy Gradient Methods} \label{sec:variance reduction} \vspace{-5pt} Recently, \cite{xu2019sample} proposed an algorithm called Stochastic Recursive Variance Reduced Policy Gradient (SRVR-PG, see Algorithm \ref{alg: SRVR-PG}), which applies variance reduction to PG. It achieves a sample complexity of $\cO(\varepsilon^{-1.5})$ to find an $\varepsilon$-stationary point, compared with the $\cO(\varepsilon^{-2})$ sample complexity of stochastic PG. However, it remained unclear whether SRVR-PG converges globally. In this work, we provide an affirmative answer to this question by showing that SRVR-PG has a sample complexity of $\cO(\varepsilon^{-3})$ to find an $\varepsilon$-optimal policy, up to some compatible function approximation error due to the policy parametrization. We also propose a new algorithm called SRVR-NPG, which incorporates variance reduction into NPG and is described in Algorithm \ref{alg: SRVR-NPG}. In Sec. \ref{sec: theory}, we provide a sample complexity for its global convergence, which is comparable to our improved NPG result.
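The recursive estimator at the heart of SRVR-PG can be illustrated outside the RL setting. Below is a minimal sketch on a toy stochastic quadratic (all names are ours): a checkpoint gradient is formed with a large batch of size $N$, and each subsequent step adds a small-batch difference of gradients evaluated at consecutive iterates on the \emph{same} minibatch, as in the update $u_t = u_{t-1} + \frac{1}{B}\sum_b (g_t - g_{t-1})$. No importance weighting is needed here since the sampling distribution is fixed, and in this linear-gradient toy the correction is exactly noise-free, which exaggerates the variance-reduction effect:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10
theta_star = rng.normal(size=d)

def stoch_grad(theta, batch):
    # Gradient of f(theta) = 0.5 * E||theta - theta_star - xi||^2 estimated
    # from noise samples xi; the exact gradient is theta - theta_star.
    return theta - theta_star - batch.mean(axis=0)

theta = np.zeros(d)
eta, N, B, m = 0.2, 200, 10, 20

# Checkpoint: large-batch estimate u_0 (analogue of line 3 of SRVR-NPG).
u = stoch_grad(theta, rng.normal(size=(N, d)))
errs_vr, errs_plain = [], []
for t in range(m):
    theta_new = theta - eta * u
    batch = rng.normal(size=(B, d))
    # Recursive update on the SAME minibatch: the correlated noise in the
    # two gradient evaluations cancels.
    u = u + stoch_grad(theta_new, batch) - stoch_grad(theta, batch)
    theta = theta_new
    errs_vr.append(np.linalg.norm(u - (theta - theta_star)))
    errs_plain.append(np.linalg.norm(
        stoch_grad(theta, rng.normal(size=(B, d))) - (theta - theta_star)))

# The recursive estimator tracks the exact gradient far more tightly than
# a fresh size-B minibatch estimate does.
assert np.mean(errs_vr) < np.mean(errs_plain)
```

In the RL setting the two gradient evaluations in the difference use trajectories sampled under the newer policy, which is why the importance weights of \eqref{equ: importance weight} are needed to keep the estimator unbiased.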
\begin{algorithm}[t] \caption{Stochastic Recursive Variance Reduced Natural Policy Gradient (SRVR-NPG)} \label{alg: SRVR-NPG} \textbf{Input:} number of epochs $S$, epoch size $m$, stepsize $\eta$, batch size $N$, minibatch size $B$, truncation horizon $H$, initial parameter $\theta^0_m=\theta_0\in\Rd.$ \begin{algorithmic}[1] \For{$j\leftarrow 0,...,S-1$}{} \State{$\theta^{j+1}_0=\theta^j_m$;} \State{Sample $\{\tau^H_i\}_{i=1}^N$ from $p_{\rho}^H(\cdot\given \theta^{j+1}_0)$ and calculate $u^{j+1}_0=\frac{1}{N}\sum_{i=1}^N g(\tau^H_i\given \theta^{j+1}_0)$;} \State{$w^{j+1}_0 = \texttt{SRVR-NPG-SGD}(\nu^{\pi_{\theta^{j+1}_0}}, \pi_{\theta^{j+1}_0}, u^{j+1}_0)$;} \Comment{$w^{j+1}_0\approx w^{j+1}_{0,\star} = F^{-1}_{\rho}(\theta^{j+1}_0)u^{j+1}_0$; } \State{$\theta^{j+1}_{1} = \theta^{j+1}_0+\eta w^{j+1}_0;$} \For{$t \leftarrow 1,...,m-1$}{} \State{Sample $B$ trajectories $\{\tau^H_b\}_{b=1}^{B}$ from $p_{\rho}^H(\cdot\given\theta^{j+1}_t)$;} \State{$u^{j+1}_t=u^{j+1}_{t-1}+\frac{1}{B}\sum_{b=1}^B \left(g(\tau^H_b\given \theta^{j+1}_t)-g_w(\tau^H_b\given \theta^{j+1}_{t-1})\right)$;} \State{$w^{j+1}_t = \texttt{SRVR-NPG-SGD}(\nu^{\pi_{\theta^{j+1}_t}}, \pi_{\theta^{j+1}_t}, u^{j+1}_t)$;} \Comment{$w^{j+1}_t\approx w^{j+1}_{t,\star} = F^{-1}_{\rho}(\theta^{j+1}_t)u^{j+1}_t$; } \State{$\theta^{j+1}_{t+1} = \theta^{j+1}_t+\eta w^{j+1}_t;$} \EndFor \EndFor \State\Return{$\theta_{\text{out}}$ chosen uniformly from $\{\theta^{j}_{t}\}_{j=1,...,S;\, t=0,...,m-1}$} \end{algorithmic} \end{algorithm} In line 8 of Algorithm \ref{alg: SRVR-NPG}, $g_w(\tau^H_b\given\theta^{j+1}_{t-1})$ is a weighted gradient estimator given by \begin{align} \label{equ: weighted GPOMDP} g_w(\tau^H_b\given \theta^{j+1}_{t-1}) &= \sum_{h=0}^{H - 1}w_{0:h} (\tau^H_b \given \theta^{j+1}_{t-1}, \theta^{j+1}_{t}) \left(\sum_{t'=0}^{h}\nabla_{\theta}\log \pi_{\theta^{j+1}_{t-1}}(a^b_{t'}\given s^b_{t'})\right)\left(\gamma^h r(s^b_h, a^b_h)\right), \end{align} where the importance weight factor $w_{0:h}(\tau^H_b\given \theta^{j+1}_{t-1}, \theta^{j+1}_{t})$ is defined by \begin{align} \label{equ: importance weight} w_{0:h}(\tau^H_b\given \theta^{j+1}_{t-1}, \theta^{j+1}_{t})= \prod_{h'=0}^h \frac{\pi_{\theta^{j+1}_{t-1}}(a_{h'}\given s_{h'})}{\pi_{\theta^{j+1}_t}(a_{h'}\given s_{h'})}. \end{align} This importance sampling makes $u^{j+1}_t$ an unbiased estimator of $\nabla J^H(\theta^{j+1}_t)$. In lines 4 and 9 of Algorithm \ref{alg: SRVR-NPG}, $w^{j+1}_t$ is produced by \texttt{SRVR-NPG-SGD} (see Procedure \ref{alg: SGD for SRVR-NPG subproblem}), which applies SGD\footnotemark[1] to solve the following subproblem: \begin{align} \label{equ: SRVR-NPG subproblem} w^{j+1}_{t}\approx \argmin_{w}\left\{\mathbb{E}_{(s,a)\sim \nu^{j+1}_t }\left[ \left(w^\top\nabla_{\theta}\log \pi_{\theta^{j+1}_t}(a\given s)\right)^2\right]-2\langle w, u^{j+1}_t\rangle\right\}, \end{align} where $\nu^{j+1}_{t}$ is the state-action visitation measure induced by $\pi_{\theta^{j+1}_t}$. The exact update direction given by \eqref{equ: SRVR-NPG subproblem} is $F^{-1}_{\rho}(\theta^{j+1}_t)u^{j+1}_t$, and as in NPG, $F_{\rho}(\theta^{j+1}_t)$ serves as a preconditioner. \footnotetext[1]{Following \cite{agarwal2019optimality}, we apply SGD \cite{bach2013non} to make a fair comparison. One can also apply the SA algorithm \cite{nemirovski2009robust} and AC-SA algorithm \cite{ghadimi2012optimal}.} \vspace{-5pt} \section{Theoretical Results} \label{sec: theory} \vspace{-5pt} Before presenting the global convergence results, we first introduce some standard assumptions. \begin{assumption} \label{assump: variance} The truncated GPOMDP estimator $g(\tau^H\given \theta)$ defined in \eqref{equ: truncated GPOMDP estimator} satisfies $\text{Var}\left(g(\tau^H\given\theta)\right)\coloneqq \E [\|g(\tau^H\given \theta) - \E[g(\tau^H\given \theta)]\|^2]\leq \sigma^2$ for any $\theta$ and $\tau^H\sim p^H_{\rho}(\cdot\given \theta)$.
\end{assumption} \begin{assumption} \label{assump: conditions on score function} The score function is bounded and Lipschitz continuous: \begin{enumerate} \item $\|\nabla_{\theta}\log \pi_{\theta}(a\given s)\|\leq G$ for any $\theta$ and $(s,a)\in\cS\times \cA$. \item $\|\nabla_{\theta}\log \pi_{\theta_1}(a\given s)-\nabla_{\theta}\log \pi_{\theta_2}(a\given s)\|\leq M\|\theta_1-\theta_2\|$ for any $\theta_1, \theta_2$ and $(s,a)\in\cS\times \cA$. \end{enumerate} \end{assumption} \begin{assumption} \label{assump: importance sampling} For the importance weight $w_{0:h}(\tau^H\given\theta_1, \theta_2)$ defined in \eqref{equ: importance weight}, there exists $W>0$ such that \[ \text{Var}\left(w_{0:h}(\tau^H\given\theta_1, \theta_2)\right)\leq W, \,\,\,\forall \theta_1,\theta_2\in\Rd,\, \tau^H\sim p^H_{\rho}(\cdot\given\theta_2). \] \end{assumption} Assumptions \ref{assump: variance}, \ref{assump: conditions on score function} and \ref{assump: importance sampling} are standard in the analysis of PG methods and their variance-reduced variants \cite{agarwal2019optimality, papini2018stochastic, xu2019improved,xu2019sample}. They can be verified for simple policy parametrizations such as Gaussian policies; see \cite{papini2018stochastic, pirotta2013adaptive,cortes2010learning} for more justifications. Following Assumption 6.5 of \cite{agarwal2019optimality}, we assume that the policy parametrization $\pi_{\theta}$ achieves a good function approximation, as measured by the \textit{transferred compatible function approximation error}.
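Assumption \ref{assump: importance sampling} concerns the weights $w_{0:h}$ of \eqref{equ: importance weight}. The reweighting identity behind them, $\EE_{a\sim\pi_2}[w(a)f(a)]=\EE_{a\sim\pi_1}[f(a)]$ with $w=\pi_1/\pi_2$, can be checked exactly on a toy discrete policy; the sketch below (names are illustrative) also computes $\EE[w^2]-1$, the variance bounded by $W$:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Two softmax "policies" over 4 actions at a single state (illustrative).
pi_old = softmax(rng.normal(size=4))   # target: pi_{theta_{t-1}}
pi_new = softmax(rng.normal(size=4))   # sampling: pi_{theta_t}
f = rng.normal(size=4)                 # any per-action quantity

# Importance weight w(a) = pi_old(a) / pi_new(a), one factor of w_{0:h}.
w = pi_old / pi_new

# Reweighting samples from pi_new reproduces the expectation under pi_old
# (computed here as exact sums rather than Monte Carlo estimates).
lhs = np.sum(pi_new * w * f)
rhs = np.sum(pi_old * f)
assert np.isclose(lhs, rhs)

# Since E_{pi_new}[w] = 1, the variance of w is E[w^2] - 1; this is the
# per-factor quantity whose trajectory-level analogue is bounded by W.
var_w = np.sum(pi_new * w**2) - 1.0
assert var_w >= 0.0
```

The variance grows with the mismatch between the two policies, which is why $W$ enters the minibatch size $B$ in the results below.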
\begin{assumption} \label{assump: compatible error} For any $\theta\in\Rd$, the \textit{transferred compatible function approximation error} satisfies \begin{align} \label{equ: minimal compatible function approximation error} L_{\nu^{\star}}(w^{\theta}_{\star}; \theta)= \EE_{(s,a)\sim \nu^{\star}}\left[\big(A^{\pi_{\theta}}(s,a)-(1-\gamma)(w^{\theta}_{\star})^\top\nabla_{\theta}\log\pi_{\theta}(a\given s)\big)^2\right]\leq \varepsilon_{\text{bias}}, \end{align} where $\nu^{\star}(s,a) = d^{\pi^{\star}}_{\rho}(s) \cdot \pi^{\star}(a \given s)$ is the state-action distribution induced by an optimal policy $\pi^{\star}$ that maximizes $J(\pi)$, and $w^{\theta}_{\star} = \argmin_{w\in\Rd} L_{\nu^{\pi_{\theta}}_{\rho}}(w; \theta)$ is the exact NPG update direction at $\theta$. \end{assumption} $\varepsilon_{\text{bias}}$ reflects the error in approximating the advantage function by the score function; it measures the capacity of the parametrization $\pi_{\theta}$. When $\pi_{\theta}$ is the softmax parametrization, we have $\varepsilon_{\text{bias}}=0$ \cite{agarwal2019optimality}. When $\pi_{\theta}$ is a restricted parametrization, $\varepsilon_{\text{bias}}$ is often positive, as $\pi_{\theta}$ may not contain all stochastic policies. For rich neural parametrizations, $\varepsilon_{\text{bias}}$ is very small \cite{wang2019neural}. \vspace{-5pt} \subsection{A General Framework for Global Convergence} \vspace{-5pt} Inspired by the global convergence analysis of NPG in \cite{agarwal2019optimality}, we present a general framework that relates the global convergence rate of an algorithm to i) its stationary convergence rate on $J(\theta)$, and ii) the difference between its update directions and the exact NPG update directions. \begin{proposition} \label{prop: global convergence} Let $\{\theta^k\}_{k=1}^K$ be generated by a general update of the form \[ \theta^{k+1} = \theta^k +\eta w^k, \,\,\,\,\, k = 0,1,\ldots,K-1.
\] Furthermore, let $w^k_{\star} = F_{\rho}^{-1}(\theta^k)\nabla J(\theta^k)$ be the exact NPG update direction at $\theta^k$. Then, we have \begin{align} J(\pi^{\star})-\frac{1}{K}\sum_{k=0}^{K-1}J(\theta^k) &\leq \frac{\sqrt{\varepsilon_{\text{bias}}}}{1-\gamma} + \frac{1}{\eta K} \mathbb{E}_{s\sim d^{\pi^{\star}}_\rho} \left[\text{KL}\left(\pi^{\star}(\cdot\given s)|| \pi_{\theta^0}(\cdot\given s)\right)\right]\nonumber\\ &\,\,\, +\frac{M\eta}{2K}\sum_{k=0}^{K-1}\|w^k\|^2 + \frac{G}{K}\sum_{k=0}^{K-1} \|w^k-w^k_{\star}\|, \label{equ: global convergence} \end{align} where $\pi^{\star}$ is an optimal policy that maximizes $J(\pi)$. \end{proposition} The detailed proof of this global convergence framework can be found in App. \ref{app: global}. To convey the high-level idea, one starts from the $M$-smoothness of the score function to get \begin{align*} &\mathbb{E}_{s\sim d^{\pi^{\star}}_{\rho}} \left[\text{KL}\left(\pi^{\star}(\cdot\given s)|| \pi_{\theta^{k}}(\cdot\given s)\right)-\text{KL}\left(\pi^{\star}(\cdot\given s)|| \pi_{\theta^{k+1}}(\cdot \given s)\right)\right]\\ &\geq \eta \mathbb{E}_{s\sim d^{\pi^{\star}}_{\rho}}\mathbb{E}_{a\sim \pi^{\star} (\cdot \given s)} [\nabla_{\theta}\log \pi_{\theta^k}(a\given s)\cdot w^k_{\star}] \\ &\,\,\, + \eta \mathbb{E}_{s\sim d^{\pi^{\star}}_{\rho}}\mathbb{E}_{a\sim \pi^{\star} (\cdot \given s)} [\nabla_{\theta}\log \pi_{\theta^k}(a\given s)\cdot (w^k - w^k_{\star})] -\frac{M\eta^2}{2}\|w^k\|^2. \end{align*} On the other hand, the renowned Performance Difference Lemma \cite{kakade2002approximately} tells us that \begin{align*} \mathbb{E}_{s\sim d^{\pi^{\star}}_{\rho}}\mathbb{E}_{a\sim \pi^{\star} (\cdot \given s)} [A^{\pi_{\theta^k}}(s,a)] = (1-\gamma) \left(J^{\star}-J(\theta^k)\right).
\end{align*} To connect the advantage term $\mathbb{E}_{s\sim d^{\pi^{\star}}_{\rho}}\mathbb{E}_{a\sim \pi^{\star} (\cdot \given s)} [A^{\pi_{\theta^k}}(s,a)]$ with the inner product term $\mathbb{E}_{s\sim d^{\pi^{\star}}_{\rho}}\mathbb{E}_{a\sim \pi^{\star} (\cdot \given s)} [\nabla_{\theta}\log \pi_{\theta^k}(a\given s)\cdot w^k_{\star}]$, we invoke Assumption \ref{assump: compatible error}: \[ \mathbb{E}_{s\sim d^{\pi^{\star}}_{\rho}}\mathbb{E}_{a\sim \pi^{\star} (\cdot \given s)}\left[\big(A^{\pi_{\theta}}(s,a)-(1-\gamma)(w^{\theta}_{\star})^\top\nabla_{\theta}\log\pi_{\theta}(a\given s)\big)^2\right]\leq \varepsilon_{\text{bias}}, \quad \text{for any $\theta\in\Rd$.} \] The final result follows from a telescoping sum on $k = 0,1,...,K-1$. Several remarks are in order. The first term on the right-hand side of \eqref{equ: global convergence} reflects the function approximation error due to the parametrization $\pi_{\theta}$, and the second term is of the form $\cO(\frac{1}{K})$. The third term depends on the stationary convergence. With Assumption \ref{assump: strong convexity}, it can be shown that\footnotemark[1] $\frac{1}{K}\sum_{k=0}^{K-1}\EE[\|w^k\|^2]\rightarrow 0$ for both NPG and SRVR-NPG. The proof follows from an optimization perspective and is inspired by the stationary convergence analysis of stochastic PG (see App. \ref{sec: stationary convergence}). \footnotetext[1]{The stationary convergence of SRVR-PG has been established in \cite{xu2019sample}. } With Assumption \ref{assump: strong convexity}, we can also show that the last term of \eqref{equ: global convergence} is small. Take stochastic PG as an example; then, we have $w^k = \frac{1}{N}\sum_{i=1}^N g(\tau^H_i|\theta^k)$, and \begin{align*} \frac{1}{K}\sum_{k=0}^{K-1}\|w^k - w^k_{\star}\| &\leq \frac{1}{K}\sum_{k=0}^{K-1}\|w^k-\nabla {J}(\theta^k)\| + \frac{1}{K}\sum_{k=0}^{K-1}\left(1+\frac{1}{\mu_F}\right)\|\nabla {J}(\theta^k)\|. 
\end{align*} When $H$ and $N$ are large enough, $w^k$ is a low-variance estimator of $\nabla J^H(\theta^k)$, and $\nabla J^H(\theta^k)$ is close to $\nabla J(\theta^k)$; together, these make the first term above small. The second term also goes to $0$ as $\theta^k$ approaches stationarity. \vspace{-5pt} \subsection{Global Convergence Results} \vspace{-5pt} By applying Proposition \ref{prop: global convergence} to the PG, NPG, SRVR-PG, and SRVR-NPG updates and analyzing their stationary convergence, we obtain their global convergence rates. In the following, we only keep the dependence on $\sigma^2$ (the variance of the gradient estimator), $W$ (the variance of the importance weights), $\frac{1}{1-\gamma}$ (the effective horizon) and $\varepsilon$ (the target accuracy). The specific parameter choices and sample complexities, as well as the proofs, can be found in the appendix. \begin{theorem} \label{thm: PG global convergence} In the stochastic PG \eqref{equ: PG update} with the truncated GPOMDP estimator \eqref{equ: truncated GPOMDP estimator}, take $\eta=\frac{1}{4L_J}$, $K=\cO\left(\frac{1}{(1-\gamma)^{2}\varepsilon^2}\right)$, $N=\cO\left(\frac{\sigma^2}{\varepsilon^2}\right)$, and $H =\cO\left(\log(\frac{1}{(1-\gamma)\varepsilon})\right)$. Then, we have \begin{align*} \begin{split} J(\pi^{\star})-\frac{1}{K}\sum_{k=0}^{K-1} \EE[J(\theta^k)]&\leq \frac{\sqrt{\varepsilon_{\text{bias}}}}{1-\gamma}+\varepsilon. \end{split} \end{align*} In total, stochastic PG samples $\mathcal{O}\left(\frac{\sigma^2}{(1-\gamma)^2\varepsilon^4}\right)$ trajectories. \end{theorem} \begin{remark} $L_J = \frac{MR}{(1-\gamma)^2}$ is the Lipschitz constant of $\nabla J$; see Lemma \ref{lem: smoothness of objective} for details. \end{remark} \begin{remark} Theorem \ref{thm: PG global convergence} improves the result of \cite[Thm. 6.11]{agarwal2019optimality} from (impractical) full gradients to sample-based stochastic gradients.
\end{remark} \begin{theorem} \label{thm: NPG global convergence} In the NPG update \eqref{equ: NPG update_2}, let us apply $\cO\left(\frac{1}{(1-\gamma)^4\varepsilon^2}\right)$ iterations of SGD as in Procedure \ref{alg: SGD for NPG subproblem} to obtain an update direction. In addition, take $\eta = \frac{\mu_F^2}{4G^2L_J}$ and $K=\cO\left(\frac{1}{(1-\gamma)^2\varepsilon}\right)$. Then, \begin{align*} \begin{split} J^{\star}-\frac{1}{K}\sum_{k=0}^{K-1} \EE[J(\theta^k)]&\leq \frac{\sqrt{\varepsilon_{\text{bias}}}}{1-\gamma}+\varepsilon. \end{split} \end{align*} In total, NPG samples $\mathcal{O}\left(\frac{1}{(1-\gamma)^6\varepsilon^3}\right)$ trajectories. \end{theorem} \begin{remark} Compared with \cite[Coro. 6.10]{agarwal2019optimality}, Theorem \ref{thm: NPG global convergence} improves the sample complexity of NPG by $\cO(\varepsilon^{-1})$. This is because our stationary convergence analysis of NPG allows for a constant stepsize $\eta$, while \cite[Coro. 6.10]{agarwal2019optimality} applies a stepsize of $\eta=\mathcal{O}(1/\sqrt{K})$. It is worth noting that the $\cO(\sqrt{\varepsilon_{\text{bias}}})$ term is the same as in \cite{agarwal2019optimality}, and we also apply averaged SGD \cite{bach2013non} to solve the NPG subproblem \eqref{equ: NPG update_2}. \end{remark} \begin{theorem} \label{thm: SRVR-PG global convergence} In SRVR-PG (Algorithm \ref{alg: SRVR-PG}), take $\eta=\frac{1}{8L_J}$, $S=\mathcal{O}\left(\frac{1}{(1-\gamma)^{2.5}\varepsilon}\right)$, $m=\mathcal{O}\left(\frac{(1-\gamma)^{0.5}}{\varepsilon}\right)$, $B=\mathcal{O}\left(\frac{W}{(1-\gamma)^{0.5}\varepsilon}\right)$, $N=\mathcal{O}\left(\frac{\sigma^2}{\varepsilon}\right)$, and $H =\cO\left(\log(\frac{1}{(1-\gamma)\varepsilon})\right)$. Then, we have \begin{align*} \begin{split} J^{\star}-\frac{1}{Sm}\sum_{j=0}^{S-1}\sum_{t=0}^{m-1} \EE[J(\theta^{j+1}_t)]&\leq \frac{\sqrt{\varepsilon_{\text{bias}}}}{1-\gamma} +\varepsilon.
\end{split} \end{align*} In total, SRVR-PG samples $\mathcal{O}\left(\frac{W+\sigma^2}{(1-\gamma)^{2.5}\varepsilon^3}\right)$ trajectories. \end{theorem} \begin{remark} Theorem \ref{thm: SRVR-PG global convergence} establishes the global convergence of SRVR-PG proposed in \cite{xu2019sample}, for which only stationary convergence was shown. Also, compared with stochastic PG, SRVR-PG enjoys a better sample complexity thanks to its faster stationary convergence. \end{remark} \begin{theorem} \label{thm: SRVR-NPG global convergence} In SRVR-NPG (Algorithm \ref{alg: SRVR-NPG}), let us apply $\cO\left(\frac{1}{(1-\gamma)^4\varepsilon^2}\right)$ iterations of SGD as in Procedure \ref{alg: SGD for SRVR-NPG subproblem} to obtain an update direction. In addition, take $\eta=\frac{\mu_F}{16L_J}$, $S=\cO\left(\frac{1}{(1-\gamma)^{2.5}\varepsilon^{0.5}}\right)$, $m= \cO\left(\frac{(1-\gamma)^{0.5}}{\varepsilon^{0.5}}\right)$, $B=\cO\left(\frac{W}{(1-\gamma)^{0.5}\varepsilon^{1.5}}\right)$, $N=\cO\left(\frac{\sigma^2}{\varepsilon^2}\right)$, and $H =\cO\left(\log(\frac{1}{(1-\gamma)\varepsilon})\right)$. Then, \begin{align*} \begin{split} J^{\star}-\frac{1}{Sm}\sum_{j=0}^{S-1}\sum_{t=0}^{m-1} \EE[J(\theta^{j+1}_t)]&\leq \frac{\sqrt{\varepsilon_{\text{bias}}}}{1-\gamma}+\varepsilon. \end{split} \end{align*} In total, SRVR-NPG samples $\cO\left(\frac{W+\sigma^2}{(1-\gamma)^{2.5}\varepsilon^{2.5}} + \frac{1}{(1-\gamma)^{6}\varepsilon^{3}}\right)$ trajectories. \end{theorem} \begin{remark} Compared with SRVR-PG, our SRVR-NPG has a better dependence on $W$ and $\sigma^2$, which could be large in practice (especially $W$). The current sample complexity of SRVR-NPG is not better than our (improved) result for NPG since, in our analysis, the advantage of variance reduction is offset by the cost of solving the subproblems. \end{remark} \section{Numerical Experiments} \label{sec: experiments} In this section, we compare the numerical performances of stochastic PG, NPG, SRVR-PG, and SRVR-NPG.
Specifically, we test on the benchmark reinforcement learning environments Cartpole and Mountain Car. Our implementation is based on the implementations of SVRPG\footnotemark[1] and SRVR-PG\footnotemark[2], and can be found in the supplementary material. \footnotetext[1]{\url{https://github.com/Dam930/rllab}} \footnotetext[2]{\url{https://github.com/xgfelicia/SRVRPG}} For both tasks, we apply a Gaussian policy of the form $\pi_{\theta}(a\given s) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(\mu_{\theta}(s) - a)^2}{2\sigma^2}\right)$, where the mean $\mu_{\theta}(s)$ is modeled by a neural network with Tanh as the activation function. For the Cartpole problem, we apply a neural network of size $32\times 1$ and a horizon of $H = 100$. In addition, each training algorithm uses $5000$ trajectories in total. For the Mountain Car problem, we apply a neural network of size $64\times 1$ and take $H = 1000$. Each algorithm is allowed $3000$ trajectories. The numerical performance comparison, as well as the settings of algorithm-specific parameters, can be found in Figures \ref{fig: cartpole} and \ref{fig: mountain car}. In App. \ref{app: implementation details}, we provide more implementation details. \newlength{\halfwidth} \setlength{\halfwidth}{\dimexpr 0.45\textwidth-\tabcolsep} \begin{figure}[htbp] \centering \begin{tabular}{@{}p{\halfwidth}p{\halfwidth}@{}} \centering \raisebox{-\height}{\includegraphics[width=1\linewidth]{new_compare_cartpole.png}}& \raisebox{-\height}{\includegraphics[width=1\linewidth]{compare_mc.png}}\\ \caption{Numerical performances on Cartpole. For PG, SRVR-PG and SRVR-NPG, we report the undiscounted average return averaged over 10 runs. For NPG, we report the average return over 40 runs. Overall, SRVR-NPG has the best performance.} \label{fig: cartpole}& \caption{Numerical performances on Mountain Car. For PG, SRVR-PG and SRVR-NPG, we report the undiscounted average return averaged over 10 runs. For NPG, we report the averaged return over 40 runs.
Overall, NPG has the best performance.} \label{fig: mountain car} \end{tabular} \end{figure} \vspace{-5pt} \section{Concluding Remarks} \label{sec:conclusions} \vspace{-5pt} In this work, we have introduced a framework for analyzing the global convergence of (natural) PG methods and their variance-reduced variants, under the assumption that the Fisher information matrix is positive definite. We have established the sample complexity for the global convergence of stochastic PG and its variance-reduced variant SRVR-PG, and improved the sample complexity of NPG. In addition, we have introduced SRVR-NPG, which incorporates variance reduction into NPG, and enjoys both a global convergence guarantee and an efficient sample complexity. Our improved analysis hinges on exploiting the advantages of previous analyses of (variance-reduced) PG and NPG methods, which may be of independent interest, and can be used to design faster variance-reduced NPG methods in the future. \newpage \onecolumn \section*{Broader Impact} The results of this paper improve the performance of policy-gradient methods for reinforcement learning, as well as our understanding of existing methods. Through reinforcement learning, our study will also benefit several research communities such as machine learning and robotics. We do not believe that the results in this work will cause any ethical issue, or put anyone at a disadvantage in our society. \section*{Acknowledgements} Yanli Liu and Wotao Yin were partially supported by the Office of Naval Research (ONR) Grant N000141712162. Yanli Liu was also supported by a UCLA Dissertation Year Fellowship. Kaiqing Zhang and Tamer Ba\c{s}ar were supported in part by the US Army Research Laboratory (ARL) Cooperative Agreement W911NF-17-2-0196, and in part by the Office of Naval Research (ONR) MURI Grant N00014-16-1-2710. We would like to thank Rui Yuan for his suggestions to improve the proof of Lemma B.1 and Proposition G.1. \bibliographystyle{plain}
\section{Trigger strategy in CMS \pbpb\ running \label{sec:hlt_intro}} The key component in exploiting the CMS capabilities in heavy-ion collisions is the trigger system, which is crucial in accessing rare probes such as high $E_T$ jets and photons, $Z^0$ bosons, $D$ and $B$ mesons, and high-mass dileptons from the decay of quarkonia. The unique CMS trigger architecture only employs two trigger levels: The Level-1 trigger is implemented using custom electronics and inspects events at the full bunch crossing rate. All further online selection is performed in the High-Level Trigger (HLT) using a large cluster of commodity workstations (the ``filter farm'') running ``offline'' reconstruction algorithms on fully assembled event information. The trigger system was designed to deal with the unprecedented luminosities in LHC \pp\ running, yielding an expected event rate of 40~MHz, with 25 superimposed \pp\ collisions for each event. Out of the 40~MHz \pp\ event rate, a 100~kHz data stream will be selected by the Level-1 trigger for further processing in the HLT. The HLT will reduce the 100~kHz input stream to 150~Hz of events written to permanent storage. \begin{figure}[htb] \centering \includegraphics[width=0.48\textwidth]{cProductionRates_col_20061221} \includegraphics[width=0.48\textwidth]{cSigMBvsHLTRatesInt_v9_col_20061221} \caption{\label{fig:prodrates} Left: Expected production rates in minimum bias \pbpb\ collisions at $\sqrt{s_{_{NN}}} = 5.5$~TeV, assuming design luminosity. Right: Expected rates to tape for jets, \ensuremath{J/\psi}\ and $\Upsilon$\ channels for minimum bias (dashed lines) and HLT triggered data sets (solid lines).} \end{figure} The \pbpb\ design luminosity $L = 10^{27}$~cm$^{-2}$s$^{-1}$ at the beginning of a store is smaller than the \pp\ design luminosity by 7 orders of magnitude. The corresponding initial \pbpb\ collision rate is $\approx 8$~kHz, and the average collision rate over the duration of a store will be $\approx 3$~kHz.
Therefore, even the maximal event rate for \pbpb\ collisions is much smaller than the 100~kHz input rate for the HLT in \pp\ collisions after the Level-1 selection. This allows a trigger strategy in \pbpb\ running that can be summarized as follows: Every \pbpb\ collision identified by the Level-1 trigger will be sent to the HLT filter farm. At the HLT, the full event information will be available for each event. All rejection of \pbpb\ collisions will be based on the outcome of HLT trigger algorithms that are identical to the corresponding offline algorithms or optimized versions of the offline algorithms. Therefore, algorithms like the offline jet finder will be run on each \pbpb\ event in the CMS interaction region, optimizing the CMS physics reach. This strategy relies on the fact that the HLT in its final configuration will provide sufficient input bandwidth to accept all collision events and sufficient computing power to run full offline algorithms on all events. The event size and computing time constraints were evaluated using full GEANT 4 based simulations of \pbpb\ events. For events with a charged hadron multiplicity reaching $dN/d\eta \approx 3000$ in central collisions, the event size was found to be approximately linear in the charged hadron multiplicity, ranging from 330~kByte/event for a $b=12$~fm sample to 8.5~MByte/event for a $b=0$~fm sample. Averaging over impact parameter and adjusting for additional noise, backgrounds and diagnostic information, we obtain an event size of 2.5~MByte per minimum bias event for running at design luminosity. Including all uncertainties, we expect that the bandwidth of 225~MByte/sec will allow a rate of \pbpb\ events to mass storage between $10$ and $100$~Hz. A large part of this uncertainty will only be resolved once the first LHC data are taken, underscoring the need for a flexible high-level trigger scheme. The HLT online computing farm is expected to consist of about $1500$ servers.
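The quoted bandwidth and CPU figures can be cross-checked with simple arithmetic (our own sketch; the per-server CPU count at the end is inferred from the quoted numbers, not stated in the text):

```python
# Bandwidth-limited rate to tape and implied CPU capacity of the filter farm.
event_size_mb = 2.5       # average minimum-bias event size, MByte
bandwidth = 225.0         # output bandwidth to mass storage, MByte/s
rate_to_tape = bandwidth / event_size_mb
print(f"rate to tape ~ {rate_to_tape:.0f} Hz")  # 90 Hz, inside the quoted 10-100 Hz window

collision_rate = 8000.0   # start-of-store collision rate, Hz
budget_s = 1.5            # quoted CPU budget per event at 8 kHz, seconds
cpu_equivalents = collision_rate * budget_s
servers = 1500
# ~8 CPU-equivalents per server would reproduce the quoted budget (inferred).
print(f"{cpu_equivalents:.0f} CPU-equivalents, ~{cpu_equivalents / servers:.0f} per server")
```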
The projected CPU budget per event, in units of today's 1.8~GHz Opteron CPUs on which our timing measurements were performed, will be $\approx$~1.5~s at the beginning of each store (8~kHz collision rate), and $\approx$~4~s averaged over the duration of the store (3~kHz collision rate). We measured the timing of three key algorithms in the CMS ORCA framework: the jet finding algorithm, the stand-alone muon finder using muon chamber information and the full muon finder including the silicon tracker information. Averaging over the Glauber impact parameter distribution, the execution time of the modified iterative cone jet finding algorithm, including background subtraction, is $\langle t \rangle = 250$~ms. More than 50\% of the execution time was spent in unpacking the calorimeter data. The stand-alone muon algorithm has an average execution time of $\langle t \rangle = 80 \pm 20$~ms, using muon candidates from the Level-1 trigger. Both algorithms therefore fit comfortably into the CPU budget per event discussed above. The full muon algorithm extends the tracks found in the muon system to the silicon tracker and provides a significant improvement in momentum resolution and background rejection. This is particularly important for low $p_T$ dimuons, which are expected to take up the largest fraction of the output bandwidth to tape. This algorithm is called for less than 2\% of all events. The current $L3$ execution time corresponds to about $10\pm 3$~s per minimum bias event. Work on porting the present offline algorithms to a new framework, CMSSW, and optimizing or significantly modifying the present algorithms for use in the HLT event selection is ongoing. \begin{figure}[htb] \begin{center} \centerline{ \resizebox{75mm}{!}{\includegraphics{yprimeovery}} \resizebox{75mm}{!}{\includegraphics{trigRaa}}} \caption{\label{fig:raa} Left: $\Upsilon$' over $\Upsilon$ ratio vs $p_T$.
Statistics correspond to $10^6$~s of data taking (one nominal year of LHC heavy ion running). Shown in comparison to the statistical uncertainty are calculations in different theoretical scenarios. Right: The nuclear modification factor $R_{AA}$ as a function of $p_T$ for charged particles, for minimum bias data (left panel) and for data triggered on high-$E_T$ jets (right panel), for $10^6$~s of data taking.} \end{center} \end{figure} \section{HLT simulation results \label{sec:hlt_simulation}} \label{SimChain} Using results from event size and timing studies, a simulation chain was set up to translate the production cross-sections into rates to tape. The simulations used parametrizations of the acceptance, efficiency and background rates of the offline jet finding and muon finding algorithms which are expected to form the basis of the HLT algorithms. Details of the model calculations and reconstruction algorithms can be found in \cite{hitdr}. In Figure~\ref{fig:prodrates} (right) we show the rates of signal events to tape for minimum bias running (no event selection in HLT) in comparison to those for event selection using the HLT. The rates were calculated for design average luminosity, using a trigger table that devoted about 30\% of the output bandwidth to dimuon channels and about 35\% to jet channels. Using the HLT, a gain in statistics of more than an order of magnitude is achieved for jets at large $E_T$ and for dimuons. Correspondingly, the usable range in $E_T$ ($p_T$) for the jet and dimuon measurements is extended by more than a factor of $2$ and $3$, respectively. Note that, for this comparison, the HLT rate for each process was only counted in the corresponding trigger stream. Two key examples of the physics benefit of the HLT for quarkonium and jet related measurements are shown below. The left plot of Fig.~\ref{fig:raa} shows the ratio of ${\Upsilon}$' to $\Upsilon$ yields as a function of transverse momentum.
The projected statistical resolution is compared to four model calculations. This measurement, which relies on the added statistics provided by the HLT selection, allows a clear distinction of the different scenarios, and may therefore serve as a sensitive probe of the initial QCD medium. In the right plot of Fig.~\ref{fig:raa} we show the nuclear modification factor $R_{AA}$ for events selected by an HLT trigger on high $E_T$ jets. Compared to a minimum bias data set, the triggered sample extends the useful range in $p_T$ by more than a factor of 2.5, to more than 200~GeV/c. Predictions for $R_{AA}$ in Pb+Pb collisions at the LHC have been made for several models of the parton energy loss in the QCD medium. The predictions differ most markedly in the high $p_T$ region, which can only be measured with high precision in the jet-triggered event sample. In summary, the flexibility of the HLT system will allow us to allocate bandwidth to trigger channels differentially as a function of rapidity, $y$, and $p_T$ of the trigger object and as a function of collision centrality, using full offline algorithms. This sophisticated triggering system will be critical in maximizing the overall physics reach of CMS in heavy ion running.
\section{Introduction} It is well known that orthogonal polynomials have a great history and continuing important applications in mathematics, physics, engineering and beyond \cite{sze,sim1,sim2,deift,mehta}. Boundary value problems for analytic functions form a living research field with a beautiful and rich theory as well as diverse and interesting applications. The field also has a fascinating history, which can be traced back to the origins of function theory and came into view after 1851 via Riemann's famous doctoral dissertation. Riemann's treatment of these problems was heuristic. It was Hilbert who first proposed a partly rigorous approach to attack the problems in the linear case. The main defect of Hilbert's approach lies in ignoring the indices of the problems, as pointed out by F. Noether. For this reason, these problems are nowadays usually called Riemann-Hilbert problems (for short, RHPs). Over the past thirty years, a remarkable development is that one can construct some (usually $2\times2$) matrix-valued RHPs to characterize many different types of orthogonal polynomials with respect to general weight functions or probability measures. These RHPs are usually called Riemann-Hilbert characterizations (simply, RH characterizations; or more simply, RHCs) for the corresponding orthogonal polynomials. In fact, RHPs appear in many different settings, and there are many systematic approaches to formulating RH characterizations for interesting problems in modern studies. Nevertheless, RH characterizations for orthogonal polynomials come ``out of the blue'' according to Deift's view \cite{deift1}. In this regard, the first breakthrough was due to Fokas, Its and Kitaev \cite{fik}. There they proposed the RH characterization for orthogonal polynomials on the real line (simply, OPRL).
More precisely, they formulated the following $2\times2$ matrix-valued RHP for a $2\times2$ matrix-valued function $\mathcal{Y}: \mathbb{C}\setminus \mathbb{R}\rightarrow \mathbb{C}^{2\times 2}$ satisfying \begin{equation} (\mbox{RHC for OPRL})\,\, \begin{cases} \mathcal{Y}\,\, \mbox{is analytic in}\,\,\mathbb{C}\setminus \mathbb{R},\vspace{2mm}\\ \mathcal{Y}^{+}(x)=\mathcal{Y}^{-}(x)\left( \begin{array}{cc} 1 & w(x) \\ 0 & 1 \\ \end{array} \right) \,\,\mbox{for} \,\,x\in \mathbb{R},\vspace{2mm}\\ \mathcal{Y}(z)=\left(I+O\left(\frac{1}{z}\right)\right)\left( \begin{array}{cc} z^{n} & 0 \\ 0 & z^{-n} \\ \end{array} \right) \,\,\mbox{as}\,\, z\rightarrow \infty, \end{cases} \end{equation} where $w$ is a weight function on $\mathbb{R}$ and $I$ is the $2\times2$ identity matrix. In \cite{bdj}, Baik, Deift and Johansson proposed the RH characterization for orthogonal polynomials on the unit circle (concisely, OPUC). That is, for a $2\times2$ matrix-valued function $Y: \mathbb{C}\setminus \partial \mathbb{D}\rightarrow \mathbb{C}^{2\times 2}$, the following conditions are fulfilled: \begin{equation} (\mbox{RHC for OPUC})\,\, \begin{cases} Y\,\, \mbox{is analytic in}\,\,\mathbb{C}\setminus \partial \mathbb{D},\vspace{2mm}\\ Y^{+}(t)=Y^{-}(t)\left( \begin{array}{cc} 1 & t^{-n}w(t) \\ 0 & 1 \\ \end{array} \right) \,\,\mbox{for} \,\,t\in \partial \mathbb{D},\vspace{2mm}\\ Y(z)=\left(I+O(\frac{1}{z})\right)\left( \begin{array}{cc} z^{n} & 0 \\ 0 & z^{-n} \\ \end{array} \right) \,\,\mbox{as}\,\, z\rightarrow \infty, \end{cases} \end{equation} where $\mathbb{D}$ is the unit disc, $\partial \mathbb{D}$ is the unit circle, $w$ is a weight function on $\partial \mathbb{D}$ and $I$ is the $2\times2$ identity matrix. With respect to orthogonal trigonometric polynomials (simply, OTP), in \cite{dd08}, Du and the author constructed the RH characterization for them. 
More precisely, it is the following $2\times2$ matrix-valued RHP: for a $2\times2$ matrix-valued function $\mathfrak{Y}: \mathbb{C}\setminus \partial \mathbb{D}\rightarrow \mathbb{C}^{2\times 2}$ satisfying \begin{equation} (\mbox{RHC for OTP})\,\, \begin{cases} \mathfrak{Y}\,\, \mbox{is analytic in}\,\,\mathbb{C}\setminus \partial \mathbb{D},\vspace{2mm}\\ \mathfrak{Y}^{+}(t)=\mathfrak{Y}^{-}(t)\left( \begin{array}{cc} 1 & t^{-2n}w(t) \\ 0 & 1 \\ \end{array} \right) \,\,\mbox{for} \,\,t\in \partial \mathbb{D},\vspace{2mm}\\ \mathfrak{Y}(z)=\left(I+O(\frac{1}{z})\right)\left( \begin{array}{cc} z^{2n} & 0 \\ 0 & z^{-2n+1} \\ \end{array} \right) \,\,\mbox{as}\,\, z\rightarrow \infty,\vspace{2mm}\\ \mathfrak{Y}_{11}(0)=\mathfrak{Y}_{21}(0)=0, \end{cases} \end{equation} where $\mathbb{D}$ is the unit disc, $\partial \mathbb{D}$ is the unit circle, $w$ is a weight function on $\partial \mathbb{D}$ and $I$ is the $2\times2$ identity matrix. Comparing the above RH characterizations, the innovation from OPRL to OPUC is that the $(1,2)$ entry of the jump matrix becomes $t^{-n}w$ in place of $w$. From OPUC to OTP, besides the $(1,2)$ entry $t^{-2n}w$ replacing $t^{-n}w$ in the jump matrix, there is a further, completely new innovation: in the matrix governing the growth condition at $\infty$, the $(1,1)$ entry $z^{n}$ is replaced by $z^{2n}$ and the $(2,2)$ entry $z^{-n}$ by $z^{-2n+1}$; moreover, the $(1,1)$ and $(2,1)$ entries are prescribed to vanish at the origin. Based on these innovations, a remarkable fact is that the RHP (1.3) characterizes both OTP and OPUC. For this reason, Du and the author discovered and established the mutual representation theorem for OTP and OPUC, which serves as a bridge connecting these two otherwise isolated classes of orthogonal polynomials. However, it should be pointed out that the RHC (1.2) for OPUC can also serve as a RHC for OTP upon replacing $n$ with $2n-1$ (see Remark 3.2 in \cite{dd08}).
These two RHCs can be transformed into each other by an explicit $2\times2$ matrix-valued multiplier (see the uniqueness part of the proof of Theorem 3.1 in \cite{dd08}). Nevertheless, we still use the RHP (1.3) as the RH characterization for OTP in the present paper. In addition, for general orthogonal polynomials (simply, GOP), the author also formulated a semi-conjugate $2\times2$ matrix-valued boundary value problem to characterize orthogonal polynomials on an arbitrary smooth Jordan curve in $\mathbb{C}$ (see \cite{d}). However, it is not a RHP since the semi-conjugate operator appears. More precisely, for a $2\times2$ matrix-valued function $\mathrm{Y}: \mathbb{C}\setminus \Gamma\rightarrow \mathbb{C}^{2\times 2}$, the following conditions are satisfied: \begin{equation} (\mbox{RHC for GOP})\,\, \begin{cases} \mathrm{Y}\,\, \mbox{is analytic in}\,\,\mathbb{C}\setminus \Gamma,\vspace{2mm}\\ (\mathrm{D}\mathrm{Y})^{+}(t)=(\mathrm{D}\mathrm{Y})^{-}(t)\left( \begin{array}{cc} 1 & w(t)s^{\prime}(t) \\ 0 & 1 \\ \end{array} \right) \,\,\mbox{for} \,\,t\in \Gamma,\vspace{2mm}\\ \mathrm{Y}(z)=\left(I+O(\frac{1}{z})\right)\left( \begin{array}{cc} z^{n} & 0 \\ 0 & z^{-n} \\ \end{array} \right) \,\,\mbox{as}\,\, z\rightarrow \infty, \end{cases} \end{equation} where $\Gamma$ is an arbitrary smooth Jordan curve in $\mathbb{C}$ oriented counterclockwise, $s(t)$ is the arc-length function, $w$ is a weight function on $\Gamma$, $I$ is the $2\times2$ identity matrix and the semi-conjugate operator $\mathrm{D}$ is defined by \begin{equation} ({\mathrm{D}}\mathrm{Y})(z)=\left( \begin{array}{ll} \overline{\mathrm{Y}_{11}(z)} \hspace{2mm} \mathrm{Y}_{12}(z) \\ \overline{\mathrm{Y}_{21}(z)} \hspace{2mm} \mathrm{Y}_{22}(z) \end{array} \right) \quad {\mathrm{for}} \quad \mathrm{Y}(z)= \left( \begin{array}{ll} \mathrm{Y}_{11}(z) \hspace{2mm} \mathrm{Y}_{12}(z) \\ \mathrm{Y}_{21}(z) \hspace{2mm} \mathrm{Y}_{22}(z) \end{array} \right), \end{equation} in which $\overline{z}$ is the conjugate
complex number of $z$. As applications of the mutual representation theorem, four-term recurrences, Christoffel-Darboux formulae and some properties of zeros for OTP were obtained in \cite{dd08}. In fact, by the mutual representation theorem, some important theorems (such as the Favard, Baxter, Geronimus, Rakhmanov, Szeg\"o and strong Szeg\"o theorems) in the theory of OPUC can be established for OTP. This is one of the themes of the present paper. At present, together with the nonlinear steepest descent method due to Deift and Zhou \cite{dz}, Riemann-Hilbert problems are mainly applied to asymptotic analysis problems in integrable systems, orthogonal polynomials, combinatorics and random matrices, etc. \cite{bdj,deift,mehta}. However, beyond asymptotic analysis \cite{bdj,dd06,dd08,dz}, Riemann-Hilbert problems can also be applied to analytic and algebraic problems, such as deriving difference equations, differential equations and so on \cite{deift,deift1}. As an example of the subtlety and power of RHPs in this direction, a new proof is given in Section 4 for the Szeg\"o recursions of OPUC and the four-term recurrences of OTP by using their RH characterizations. As a byproduct, some new identities on Cauchy integrals for both OPUC and OTP, as well as on Hilbert transforms for OTP, are also obtained. This paper is organized as follows. In Section 2, definitions and notation are introduced for OPUC and OTP, together with some of their associated coefficients such as the Verblunsky coefficients; the mutual representation theorem and some of its consequences are also given. In Section 3, some theorems for OTP are obtained by means of the mutual representation theorem, including the Favard, Baxter, Geronimus, Rakhmanov, Szeg\"o and strong Szeg\"o theorems, which are important in the theory of OPUC. However, the Favard theorem in this section is only obtained in a weak form.
As stated above, in Section 4, some identities such as the Szeg\"o recursions of OPUC and the four-term recurrences of OTP are obtained by using the RH characterizations for OPUC and OTP, respectively. The final section is mainly devoted to proving a Favard theorem stronger than the one in Section 3. \section{Mutual representation and its consequences} Let $\mathbb{D}$ be the unit disc in the complex plane, $\partial\mathbb{D}$ be the unit circle and $\mu$ be a nontrivial probability measure on $\partial \mathbb{D}$ (i.e. nonnegative, with infinite support and $\mu(\partial \mathbb{D})=1$). Throughout this paper, by decomposition, we always write \begin{equation} d\mu(\tau)=w(\tau)\frac{d\tau}{2\pi i\tau}+d\mu_{s}(\tau), \end{equation} where $\tau\in\partial \mathbb{D}$, $w(\tau)=2\pi i\tau d\mu_{ac}/d\tau$ in which $d\mu_{ac}$ is the absolutely continuous part of $d\mu$, and $d\mu_{s}$ is the singular part of $d\mu$. We introduce two classes of inner products. One is complex, defined as follows: \begin{equation} \langle f, g\rangle_{\mathbb{C}}=\int_{\partial \mathbb{D}}\overline{f(\tau)}g(\tau)d\mu(\tau) \end{equation} with norm $||f||_{\mathbb{C}}=[\int_{\partial \mathbb{D}}|f(\tau)|^{2}d\mu(\tau)]^{1/2}$, where $f, g$ are complex integrable functions on $\partial \mathbb{D}$. The other is real and defined by \begin{equation} \langle f, g\rangle_{\mathbb{R}}=\int_{\partial \mathbb{D}}f(\tau)g(\tau)d\mu(\tau) \end{equation} with norm $||f||_{\mathbb{R}}=[\int_{\partial \mathbb{D}}|f(\tau)|^{2}d\mu(\tau)]^{1/2}$, where $f, g$ are real integrable functions on $\partial \mathbb{D}$.
By the complex inner product (2.2), applying Gram-Schmidt procedure to the following system \begin{equation*} \{1,z,z^{2},\ldots,z^{n},\ldots\}, \end{equation*} where $z\in \mathbb{C}$, we get the unique system $\{\Phi_{n}(z)\}$ of monic orthogonal polynomials on the unit circle with respect to $\mu$ satisfying \begin{equation} \langle\Phi_{n},\Phi_{m}\rangle_{\mathbb{C}}=\kappa_{n}^{-2}\delta_{nm}\,\,\,\text{with}\,\,\, \kappa_{n}>0. \end{equation} Then the orthonormal polynomials $\varphi_{n}(z)$ on the unit circle satisfy \begin{equation} \langle\varphi_{n},\varphi_{m}\rangle_{\mathbb{C}}=\delta_{nm}\,\,\,\text{and}\,\,\, \varphi_{n}(z)=\kappa_{n}\Phi_{n}(z). \end{equation} For any polynomial $Q_{n}$ of order $n$, its reversed polynomial $Q_{n}^{*}$ is defined by \begin{equation} Q_{n}^{*}(z)=z^{n}\overline{Q_{n}(1/\overline{z})}. \end{equation} One famous property of OPUC is Szeg\"o recurrence \cite{sze}, i.e. \begin{equation} \Phi_{n+1}(z)=z\Phi_{n}(z)-\overline{\alpha}_{n}\Phi^{*}_{n}(z), \end{equation} where $\alpha_{n}=-\overline{\Phi_{n+1}(0)}$ are called Verblunsky coefficients. It is well known that $\alpha_{n}\in \mathbb{D}$ for $n\in \mathbb{N}\cup\{0\}$. By convention, $\alpha_{-1}=-1$ (see \cite{sim1}). Szeg\"o recurrence (2.7) is extremely useful in the theory of OPUC. Especially, Verblunsky coefficients play an important role in many interesting problems for OPUC (see \cite{sim1,sim2}). 
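The construction above is easy to check numerically. The following sketch (our own illustration, not from the paper) builds the monic OPUC for the toy probability measure $d\mu=(1+\cos\theta)\,d\theta/2\pi$ by Gram-Schmidt on a fine grid and verifies the Szeg\"o recurrence (2.7) together with $\alpha_{n}=-\overline{\Phi_{n+1}(0)}$; the measure and the grid size are arbitrary choices.

```python
import numpy as np

# Discretize dmu = (1 + cos t) dt/(2*pi); for trigonometric-polynomial
# integrands of low degree the midpoint rule on N points is exact.
N = 4096
t = 2 * np.pi * (np.arange(N) + 0.5) / N
tau = np.exp(1j * t)
w = (1 + np.cos(t)) / N              # discretized measure, sums to 1

def ip(f, g):                        # complex inner product (2.2)
    return np.sum(np.conj(f) * g * w)

nmax = 6
Phi, coef = [], []                   # grid values / monomial coefficients of Phi_n
for n in range(nmax + 1):
    c = np.zeros(n + 1, dtype=complex); c[n] = 1.0
    p = tau ** n
    for m in range(n):               # subtract projections onto Phi_0..Phi_{n-1}
        proj = ip(Phi[m], p) / ip(Phi[m], Phi[m])
        p = p - proj * Phi[m]
        c[:m + 1] -= proj * coef[m]
    Phi.append(p); coef.append(c)

alphas = [-np.conj(coef[n + 1][0]) for n in range(nmax)]  # alpha_n = -conj(Phi_{n+1}(0))
resid = 0.0
for n in range(nmax):
    # Szego recurrence (2.7): Phi_{n+1}(z) = z Phi_n(z) - conj(alpha_n) Phi_n^*(z),
    # using Phi_n^*(z) = z^n conj(Phi_n(z)) on |z| = 1.
    Phi_star = tau ** n * np.conj(Phi[n])
    resid = max(resid, np.max(np.abs(Phi[n + 1] - (tau * Phi[n] - np.conj(alphas[n]) * Phi_star))))
print(alphas[0].real, resid)         # alpha_0 = 1/2 for this measure; float-level residual
```

For this measure a direct computation gives $\Phi_{1}(z)=z-\tfrac12$, so $\alpha_{0}=\tfrac12$, which the script reproduces.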
Using the real inner product (2.3) and Gram-Schmidt procedure to the following over $\mathbb{R}$ linearly independent ordered set \begin{equation} \Big\{1, \frac {z-z^{-1}}{2i}, \frac {z+z^{-1}}{2}, \ldots, \frac{z^{n}-z^{-n}}{2i}, \frac{z^{n}+z^{-n}}{2}, \ldots \Big\}, \end{equation} where $z\in \mathbb{C}\setminus\{0\}$, we get the unique system \begin{equation} \{1, b_{1}\pi_{1}(z), a_{1}\sigma_{1}(z), \ldots, b_{n}\pi_{n}(z), a_{n}\sigma_{n}(z), \ldots\} \end{equation} of the ``monic" orthogonal Laurent polynomials (concisely, OLP) of the first class on the unit circle with respect to $\mu$ fulfilling \begin{equation} \langle\pi_{m},\sigma_{n}\rangle_{\mathbb{R}}=0, \langle\pi_{m},\pi_{n}\rangle_{\mathbb{R}}=\langle\sigma_{m},\sigma_{n}\rangle_{\mathbb{R}}=\delta_{mn},\,\,\,m,n=1,2,\ldots \end{equation} and \begin{equation} a_{n}\sigma_{n}(z)=\frac{z^{n}+z^{-n}}{2}-\beta_{n}b_{n}\pi_{n}(z)-\imath_{n}a_{n-1}\sigma_{n-1}(z) -\jmath_{n}b_{n-1}\pi_{n-1}(z)+\text{lower order} \end{equation} as well as \begin{equation} b_{n}\pi_{n}(z)=\frac{z^{n}-z^{-n}}{2i}-\varsigma_{n}a_{n-1}\sigma_{n-1}(z) -\zeta_{n}b_{n-1}\pi_{n-1}(z)+\text{lower order}, \end{equation} where $a_{n},b_{n}>0$, which are respectively the norms of the ``monic" orthogonal Laurent polynomials of the first class given by right hand sides of (2.11) and (2.12), \begin{equation} \beta_{n}=\langle\frac{z^{n}+z^{-n}}{2},b_{n}^{-1}\pi_{n}\rangle_{\mathbb{R}}, \end{equation} \begin{equation} \imath_{n}=\langle\frac{z^{n}+z^{-n}}{2},a_{n-1}^{-1}\sigma_{n-1}\rangle_{\mathbb{R}},\,\, \jmath_{n}=\langle\frac{z^{n}+z^{-n}}{2},b_{n-1}^{-1}\pi_{n-1}\rangle_{\mathbb{R}} \end{equation} and \begin{equation} \varsigma_{n}=\langle\frac{z^{n}-z^{-n}}{2i},a_{n-1}^{-1}\sigma_{n-1}\rangle_{\mathbb{R}},\,\, \zeta_{n}=\langle\frac{z^{n}-z^{-n}}{2i},b_{n-1}^{-1}\pi_{n-1}\rangle_{\mathbb{R}}. \end{equation} Throughout, as a convention, take $\sigma_{0}=1$, $\pi_{0}=0$ and $\beta_{0}=0$ as well as $a_{0}=b_{0}=1$. 
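A simple numerical sanity check (our own, not from the paper) for the Lebesgue measure $d\mu=d\theta/2\pi$: there $\sin n\theta$ and $\cos n\theta$ are already orthogonal under the real inner product (2.3), so $b_{n}\pi_{n}=\sin n\theta$, $a_{n}\sigma_{n}=\cos n\theta$ and $\beta_{n}=0$, giving $a_{n}=b_{n}=1/\sqrt{2}$; this is consistent with Theorem 2.3 below, since $\Phi_{n}(z)=z^{n}$ and $\kappa_{n}=1$ in this case.

```python
import numpy as np

# Discretized Lebesgue probability measure dt/(2*pi) on the circle.
N = 2048
t = 2 * np.pi * (np.arange(N) + 0.5) / N
w = np.ones(N) / N

def ipr(f, g):                           # real inner product (2.3)
    return np.sum(f * g * w)

for n in range(1, 5):
    a_n = np.sqrt(ipr(np.cos(n * t), np.cos(n * t)))
    b_n = np.sqrt(ipr(np.sin(n * t), np.sin(n * t)))
    beta_n = ipr(np.cos(n * t), np.sin(n * t)) / b_n ** 2   # (2.13); vanishes here
    # kappa_{2n}^2 = (1/4)[a_n^{-2}(1 + beta_n^2) + b_n^{-2}], which must equal 1.
    kappa2 = 0.25 * (a_n ** -2 * (1 + beta_n ** 2) + b_n ** -2)
    assert abs(a_n - 2 ** -0.5) < 1e-12 and abs(b_n - 2 ** -0.5) < 1e-12
    assert abs(beta_n) < 1e-12 and abs(kappa2 - 1.0) < 1e-10
print("Lebesgue case: a_n = b_n = 1/sqrt(2), beta_n = 0, kappa_{2n}^2 = 1")
```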
Indeed, identifying the unit circle with the interval $[0,2\pi)$ via the map $\theta\rightarrow e^{i\theta}$, we get the orthonormal trigonometric polynomials of the first class $\pi_{n}(\theta)$ and $\sigma_{n}(\theta)$ for the over $\mathbb{R}$ linearly independent ordered trigonometric system \begin{equation} \{1, \sin\theta, \cos\theta, \ldots, \sin n\theta, \cos n\theta, \ldots\} \end{equation} by the above process when $z=e^{i\theta},\,\theta\in [0,2\pi)$. As noted in the introduction, by the uniqueness of the solution of the RHC (1.3), we have the following mutual representation theorem for OPUC and OTP. \begin{thm}[\!\!\cite{dd08}] Let $\mu$ be a nontrivial probability measure on the unit circle $\partial \mathbb{D}$, $\{1, \pi_{n}, \sigma_{n}\}$ be the unique system of the orthonormal Laurent polynomials of the first class on the unit circle with respect to $\mu$, and $\{\Phi_{n}\}$ be the unique system of the monic orthogonal polynomials on the unit circle with respect to $\mu$. Then for any $z\in \mathbb{C}$ and $n\in \mathbb{N}$, \begin{equation} \Phi_{2n-1}(z)=z^{n-1}[a_{n}\sigma_{n}(z)+(\beta_{n}+i)b_{n}\pi_{n}(z)] \end{equation} and \begin{equation} \kappa^{2}_{2n}\Phi^{*}_{2n}(z)=\frac{1}{2}z^{n}[a^{-1}_{n}(1+\beta_{n}i)\sigma_{n}(z) -ib^{-1}_{n}\pi_{n}(z)], \end{equation} where $\kappa_{n}$ is the leading coefficient of the orthonormal polynomial of order $n$ on the unit circle with respect to $\mu$, $\kappa_{n}=\|\Phi_{n}\|^{-1}_{\mathbb{C}}$, $\Phi_{n}^{*}$ is the reversed polynomial of $\Phi_{n}$, and $a_{n}, b_{n}, \beta_{n}$ are given in (2.11)-(2.13).
\end{thm} Denote $\Lambda_{n}=-\frac{1}{2}[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}]i$. Then, by (2.17) and (2.18), we obtain \begin{thm} \begin{equation} a_{n}\sigma_{n}(z)=-\frac{1}{2}z^{-n}[\Lambda_{n}^{-1}b_{n}^{-2}iz\Phi_{2n-1}(z)-(1-\beta_{n}i)\Phi_{2n}^{*}(z)] \end{equation} and \begin{equation} b_{n}\pi_{n}(z)=-\frac{1}{2}z^{-n}[\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)z\Phi_{2n-1}(z)-i\Phi_{2n}^{*}(z)] \end{equation} for $n\in \mathbb{N}$ and $z\in \mathbb{C}\setminus\{0\}$. \end{thm} As some consequences, we have \begin{thm} [\!\!\cite{dd08}] \begin{equation} \kappa_{2n}^{2}=\frac{1}{4}[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}] \end{equation} for $n\in \mathbb{N}\cup\{0\}$. \end{thm} \begin{thm} \begin{equation} \alpha_{2n-1}=\frac{1}{4}\kappa_{2n}^{-2}[b_{n}^{-2}-a_{n}^{-2}(1-\beta_{n}^{2})]-\frac{1}{2}\kappa_{2n}^{-2}a_{n}^{-2} \beta_{n}i \end{equation} and \begin{equation} \alpha_{2n-2}=\frac{1}{2}(\imath_{n}+\beta_{n-1}\varsigma_{n}-\zeta_{n})-\frac{i}{2}(\jmath_{n}-\imath_{n}\beta_{n-1} +\varsigma_{n}) \end{equation} for $n\in \mathbb{N}$. \end{thm} \begin{proof} For (2.22), we refer to \cite{dd08}. (2.23) follows from (2.11), (2.12), (2.17) and the fact $\alpha_{2n-2}=-\overline{\Phi_{2n-1}(0)}.$ \end{proof} Since $\kappa_{n}^{2}/\kappa_{n+1}^{2}=1-|\alpha_{n}|^{2}$ for $n\in \mathbb{N}\cup\{0\}$, by Theorems 2.3 and 2.4, we get \begin{thm} \begin{equation} \kappa_{2n-1}^{2}=[a_{n}^{2}+b_{n}^{2}(1+\beta^{2}_{n})]^{-1} \end{equation} for $n\in \mathbb{N}$. \end{thm} Therefore, by (2.21) and (2.24), we obtain \begin{thm} \begin{equation} \lim_{n\rightarrow \infty}a_{n}b_{n}=\frac{1}{2}\exp\Big(\frac{1}{2\pi i}\int_{\partial \mathbb{D}}\log w(\tau)\frac{d\tau}{\tau}\Big) \end{equation} and \begin{equation} \lim_{n\rightarrow \infty}[a_{n}^{2}+b_{n}^{2}(1+\beta^{2}_{n})]=\exp\Big(\frac{1}{2\pi i}\int_{\partial \mathbb{D}}\log w(\tau)\frac{d\tau}{\tau}\Big).
\end{equation} \end{thm} \begin{proof} Since (see \cite{sze,sim1}) \begin{equation} \lim_{n\rightarrow \infty}\kappa_{n}^{-2}=\exp\Big(\frac{1}{2\pi i}\int_{\partial \mathbb{D}}\log w(\tau)\frac{d\tau}{\tau}\Big), \end{equation} then (2.26) follows from (2.24) whereas (2.25) follows from \begin{equation} \kappa_{2n-1}^{2}\kappa_{2n}^{2}=\frac{1}{4}a_{n}^{-2}b_{n}^{-2} \end{equation} which holds by (2.21) and (2.24). \end{proof} In addition, we also have \begin{thm} \begin{align} &[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}][a_{n+1}^{2}+b_{n+1}^{2}(1+\beta_{n+1}^{2})]+(\imath_{n+1}+\beta_{n}\varsigma_{n+1}-\zeta_{n+1})^{2}\nonumber\\ &+(\jmath_{n+1}-\imath_{n+1}\beta_{n} +\varsigma_{n+1})^{2}=4 \end{align} for $n\in \mathbb{N}\cup\{0\}$. \end{thm} \begin{proof} It immediately follows from (2.21), (2.23) and (2.24) since $\kappa_{2n}^{2}/\kappa_{2n+1}^{2}=1-|\alpha_{2n}|^{2}$ for $n\in \mathbb{N}\cup\{0\}$. \end{proof} In the rest of this section, we give another identity on the coefficients $a_{n}, b_{n},\beta_{n}$ of OTP and $\alpha_{n}, \kappa_{n}$ of OPUC. The main idea will also be used in Section 5 below. To do so, we need the following simple facts. \begin{lem} Let $\Phi_{n}$ be the monic orthogonal polynomial on the unit circle of order $n$ with respect to $\mu$, and $\Phi^{*}_{n}$ be the reversed polynomial of $\Phi_{n}$, then \begin{equation} \langle 1,\Phi_{n}^{*}\rangle_{\mathbb{R}}=\int_{\partial \mathbb{D}}\Phi_{n}^{*}(\tau)d\mu(\tau)=\kappa_{n}^{-2} \end{equation} and \begin{equation} \langle 1,z\Phi_{n}\rangle_{\mathbb{R}}=\int_{\partial \mathbb{D}}\tau\Phi_{n}(\tau)d\mu(\tau)=\alpha_{n}^{-1}\Big(\kappa_{n}^{-2}-\kappa_{n+1}^{-2}\Big), \end{equation} where the Verblunsky coefficient $\alpha_{n}$ is restricted in $\mathbb{D}\setminus\{0\}$. 
\end{lem} \begin{proof} By (2.6), we have \begin{align} \int_{\partial \mathbb{D}}\Phi_{n}^{*}(\tau)d\mu(\tau)=\int_{\partial \mathbb{D}}\tau^{n}\overline{\Phi_{n}(\tau)}d\mu(\tau)=\langle z^{n},\Phi_{n}\rangle_{\mathbb{C}}=\langle \Phi_{n},\Phi_{n}\rangle_{\mathbb{C}}. \end{align} Thus (2.30) holds on account of (2.4). For $\alpha_{n}\in\mathbb{D}\setminus\{0\}$, by Szeg\"o recurrence (2.7) (or see (4.12) below), \begin{align} \int_{\partial \mathbb{D}}\tau\Phi_{n}(\tau)d\mu(\tau)=\alpha_{n}^{-1}\Big[\int_{\partial \mathbb{D}}\Phi_{n}^{*}(\tau)d\mu(\tau)-\int_{\partial \mathbb{D}}\Phi_{n+1}^{*}(\tau)d\mu(\tau)\Big]. \end{align} Therefore, (2.31) follows from (2.30). \end{proof} \begin{thm} \begin{align} &\alpha_{2n-1}\beta_{n}+\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)(1+\alpha_{2n-1})\kappa_{2n-1}^{-2}\nonumber \\-&\frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)+\alpha_{2n-1}b_{n}^{-2}i\Big]\kappa_{2n}^{-2}=0 \end{align} for $n\in \mathbb{N}$. \end{thm} \begin{proof} In the case of $\alpha_{2n-1}=0$, since $\kappa_{2n-1}=\kappa_{2n}$ by $\kappa_{2n-1}^{2}/\kappa_{2n}^{2}=1-|\alpha_{2n-1}|^{2}$, it is easy to get (2.34). So in what follows, we always assume that $\alpha_{2n-1}\in \mathbb{D}\setminus\{0\}$. 
By Theorem 2.2, Lemma 2.8 and Szeg\"o recurrence, we have \begin{align} \langle z^{n},b_{n}\pi_{n}\rangle_{\mathbb{R}}=&\langle z^{n},-\frac{1}{2}z^{-n}[\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)z\Phi_{2n-1}-i\Phi_{2n}^{*}]\rangle_{\mathbb{R}}\nonumber\\ =&-\frac{1}{2}\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)\langle 1,z\Phi_{2n-1}\rangle_{\mathbb{R}}+\frac{i}{2}\langle 1,\Phi_{2n}^{*}\rangle_{\mathbb{R}}\nonumber\\ =&-\frac{1}{2}\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}\kappa_{2n-1}^{-2}+\Big[\frac{1}{2}\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+\frac{i}{2}\Big]\kappa_{2n}^{-2}\nonumber \end{align} and \begin{align} \langle z^{-n},b_{n}\pi_{n}\rangle_{\mathbb{R}}=&\langle z^{-n},-\frac{1}{2}z^{-n}[\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)z\Phi_{2n-1}-i\Phi_{2n}^{*}]\rangle_{\mathbb{R}}\nonumber\\ =&-\frac{1}{2}\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)\langle 1,z^{-(2n-1)}\Phi_{2n-1}\rangle_{\mathbb{R}}+\frac{i}{2}\langle 1,z^{-2n}\Phi_{2n}^{*}\rangle_{\mathbb{R}}\nonumber\\ =&-\frac{1}{2}\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)\langle z^{(2n-1)},\Phi_{2n-1}\rangle_{\mathbb{C}}+\frac{i}{2}\overline{\langle 1,\Phi_{2n}\rangle}_{\mathbb{R}}\nonumber\\ =&-\frac{1}{2}\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)\langle \Phi_{2n-1},\Phi_{2n-1}\rangle_{\mathbb{C}}\nonumber\\ =&-\frac{1}{2}\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)\kappa_{2n-1}^{-2}. \end{align} Thus \begin{align} \beta_{n}=&\langle\frac{z^{n}+z^{-n}}{2},b_{n}^{-1}\pi_{n}\rangle_{\mathbb{R}}=b_{n}^{-2}\langle\frac{z^{n}+z^{-n}}{2},b_{n}\pi_{n}\rangle_{\mathbb{R}}\nonumber\\ =&-\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big)\kappa_{2n-1}^{-2}\nonumber\\ &+\frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i\Big]\kappa_{2n}^{-2}. \end{align} Multiplying by $\alpha_{2n-1}$ on two sides of (2.36), (2.34) immediately follows. 
\end{proof} \begin{rem} Noting \begin{equation} \Lambda_{n}=-\frac{1}{2}[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}]i=-2\kappa_{2n}^{2}i, \end{equation} we can also get (2.34) by directly invoking (2.21), (2.22) and (2.24) together. \end{rem} \section{Favard, Baxter, Geronimus, Rakhmanov and Szeg\"o theorems} In the present section, some theorems are obtained for orthogonal trigonometric polynomials, such as Favard, Baxter, Geronimus, Rakhmanov theorems and so on, which play important roles in the theory of OPUC \cite{sim1,sim2}. \subsection{Weak Favard Theorem} We begin with a weak Favard Theorem for OTP. Favard theorem for OPRL is about the orthogonality of a system of polynomials which satisfies a three-term recurrence with appropriate coefficients \cite{sze,ma}. Its OPUC version is well-known and also called Verblunsky theorem \cite{sim1,enzg}, that is, if $\{\alpha_{n}^{(0)}\}_{n=0}^{\infty}$ is a sequence of complex numbers in $\mathbb{D}$, then there exists a unique measure $d\mu$ such that $\alpha_{n}(d\mu)=\alpha_{n}^{(0)}$, where $\alpha_{n}(d\mu)$ are the associated Verblunsky coefficients of $d\mu$. For orthogonal trigonometric polynomials, we have the following Favard theorem in a weak form. \begin{thm} Let $\{(a_{n}^{(0)},b_{n}^{(0)},\beta_{n}^{(0)})\}_{n=0}^{\infty}$ with $a_{0}^{(0)},b_{0}^{(0)}=1$ and $\beta_{0}^{(0)}=0$ be a system of three-tuples of real numbers satisfying \begin{align} &[(a_{n}^{(0)})^{2}+(b_{n}^{(0)})^{2}(1+(\beta_{n}^{(0)})^{2})] [(a_{n+1}^{(0)})^{2}+(b_{n+1}^{(0)})^{2}(1+(\beta_{n+1}^{(0)})^{2})]<4(a_{n}^{(0)})^{2}(b_{n}^{(0)})^{2} \end{align} with $a_{n}^{(0)},b_{n}^{(0)}>0$ for $n\in \mathbb{N}\cup\{0\}$, then there exists a nontrivial probability measure $d\mu$ on $\partial \mathbb{D}$ such that $a_{n}(d\mu)=a_{n}^{(0)}$, $b_{n}(d\mu)=b_{n}^{(0)}$ and $\beta_{n}(d\mu)=\beta_{n}^{(0)}$, where $a_{n}(d\mu),b_{n}(d\mu),\beta_{n}(d\mu)$ are associated coefficients of $d\mu$ defined by (2.11)-(2.13). 
\end{thm} \begin{proof} For $n\in \mathbb{N}\cup\{0\}$, define \begin{equation} \kappa_{2n}^{(0)}=\frac{1}{2}\Big[(a_{n}^{(0)})^{-2}\big(1+(\beta_{n}^{(0)})^{2}\big)+(b_{n}^{(0)})^{-2}\Big]^{\frac{1}{2}} \end{equation} and \begin{equation} \kappa_{2n+1}^{(0)}=\Big[(a_{n+1}^{(0)})^{2}+(b_{n+1}^{(0)})^{2}\big(1+(\beta_{n+1}^{(0)})^{2}\big)\Big]^{-\frac{1}{2}}. \end{equation} Let \begin{equation} \alpha_{2n-1}^{(0)}=\frac{1}{4}(\kappa_{2n}^{(0)})^{-2}\Big[(b_{n}^{(0)})^{-2}-(a_{n}^{(0)})^{-2}\big(1-(\beta_{n}^{(0)})^{2}\big)\Big] -\frac{1}{2}(\kappa_{2n}^{(0)})^{-2}(a_{n}^{(0)})^{-2} (\beta_{n}^{(0)})i, \end{equation} then $\alpha_{2n-1}^{(0)}\in \mathbb{D}$ since \begin{equation} \Big|\alpha_{2n-1}^{(0)}\Big|^{2}=\frac{(\kappa_{2n}^{(0)})^{4}-\frac{1}{4}(a_{n}^{(0)})^{-2}(b_{n}^{(0)})^{-2}} {(\kappa_{2n}^{(0)})^{4}}=1-\frac{(\kappa_{2n-1}^{(0)})^{2}}{(\kappa_{2n}^{(0)})^{2}} \end{equation} and $a_{n}^{(0)},b_{n}^{(0)}>0$. Note that (3.1) is equivalent to \begin{equation} \frac{\kappa_{2n}^{(0)}}{\kappa_{2n+1}^{(0)}}<1. \end{equation} Arbitrarily choose a sequence $\{\alpha_{2n}^{(0)}\}_{n=0}^{\infty}$ such that \begin{equation} \Big|\alpha_{2n}^{(0)}\Big|=\sqrt{1-(\kappa_{2n}^{(0)})^{2}\big/(\kappa_{2n+1}^{(0)})^{2}} \end{equation} and fix it, then $\alpha_{2n}^{(0)}\in\mathbb{D}$ for $n\in \mathbb{N}\cup\{0\}$ by (3.6). Therefore, for this fixed sequence $\{\alpha_{n}^{(0)}\}_{n=0}^{\infty}$, by Verblunsky theorem, there exists a unique nontrivial probability measure $d\mu$ on $\partial \mathbb{D}$ such that \begin{equation} \alpha_{n}(d\mu)=\alpha_{n}^{(0)} \end{equation} for $n\in \mathbb{N}\cup\{0\}$. Then for $n\in \mathbb{N}\cup\{0\}$, \begin{equation} \kappa_{n}(d\mu)=\kappa_{n}^{(0)} \end{equation} since $\kappa_{n}(d\mu)=\prod_{j=0}^{n-1}(1-|\alpha_{j}(d\mu)|^{2})^{-\frac{1}{2}}$ (see \cite{sim1}). 
Suppose that $\{\Phi_{n}(d\mu,z)\}_{n=0}^{\infty}$ is the sequence of monic orthogonal polynomials on the unit circle with respect to $d\mu$, set \begin{equation} \Sigma_{n}(z)=-\frac{1}{2}z^{-n}[(\Lambda_{n}^{(0)})^{-1}(b_{n}^{(0)})^{-2}iz\Phi_{2n-1}(d\mu,z) -(1-\beta_{n}^{(0)}i)\Phi_{2n}^{*}(d\mu,z)] \end{equation} and \begin{equation} \Pi_{n}(z)=-\frac{1}{2}z^{-n}[(\Lambda_{n}^{(0)})^{-1}(a_{n}^{(0)})^{-2}(1+\beta_{n}^{(0)}i)z\Phi_{2n-1}(d\mu,z) -i\Phi_{2n}^{*}(d\mu,z)] \end{equation} for $n\in \mathbb{N}$ and $z\in \mathbb{C}\setminus\{0\}$, where $\Lambda_{n}^{(0)}=-\frac{1}{2}\Big[(a_{n}^{(0)})^{-2}\big(1+(\beta_{n}^{(0)})^{2}\big)+(b_{n}^{(0)})^{-2}\Big]i$. Obviously, \begin{equation} \Lambda_{n}^{(0)}=-2(\kappa_{2n}^{(0)})^{2}i. \end{equation} By Szeg\"o recurrence and (3.8), \begin{equation} z\Phi_{2n-1}(d\mu,z)=\Phi_{2n}(d\mu,z)+\overline{\alpha^{(0)}_{2n-1}}\Phi^{*}_{2n-1}(d\mu,z). \end{equation} Hence by the orthogonality of $\Phi_{n}(d\mu, z)$ and $\Phi_{n}^{*}(d\mu, z)$, we get \begin{equation} \langle z^{\pm j}, \Sigma_{n}\rangle_{\mathbb{R}}=\langle z^{\pm j}, \Pi_{n}\rangle_{\mathbb{R}}=0,\,\,\,\,j=0,1,\ldots,n-1. 
\end{equation} Moreover, \begin{equation} \langle z^{n}, \Sigma_{n}\rangle_{\mathbb{R}}=(a_{n}^{(0)})^{2}\overline{\alpha^{(0)}_{2n-1}}+\frac{1}{2}(\kappa_{2n}^{(0)})^{-2}(1-\beta_{n}^{(0)}i), \end{equation} \begin{equation} \langle z^{-n}, \Sigma_{n}\rangle_{\mathbb{R}}=(a_{n}^{(0)})^{2}, \end{equation} \begin{equation} \langle z^{n}, \Pi_{n}\rangle_{\mathbb{R}}=(b_{n}^{(0)})^{2}(\beta_{n}^{(0)}-i)\overline{\alpha^{(0)}_{2n-1}}+\frac{1}{2}(\kappa_{2n}^{(0)})^{-2}i, \end{equation} and \begin{equation} \langle z^{-n}, \Pi_{n}\rangle_{\mathbb{R}}=(b_{n}^{(0)})^{2}(\beta_{n}^{(0)}-i) \end{equation} follow from (3.9), (3.12) and the fact $||\Phi_{n}(d\mu)||_{\mathbb{R}}^{2}=||\Phi_{n}^{*}(d\mu)||_{\mathbb{R}}^{2}=[\kappa_{n}(d\mu)]^{-2}$ as well as $(\kappa_{2n-1}^{(0)})^{2}(\kappa_{2n}^{(0)})^{2}=\frac{1}{4}(a_{n}^{(0)})^{-2}(b_{n}^{(0)})^{-2}$. By (3.4), \begin{equation} \overline{\alpha^{(0)}_{2n-1}}-1=-\frac{1}{2}(\kappa_{2n}^{(0)})^{-2}(a_{n}^{(0)})^{-2}(1-\beta_{n}^{(0)}i) \end{equation} and \begin{equation} \overline{\alpha^{(0)}_{2n-1}}+1=\frac{1}{2}(\kappa_{2n}^{(0)})^{-2}\Big[(a_{n}^{(0)})^{-2}(\beta_{n}^{(0)})^{2} +(b_{n}^{(0)})^{-2}\Big]+\frac{1}{2}(\kappa_{2n}^{(0)})^{-2}(a_{n}^{(0)})^{-2}\beta_{n}^{(0)}i. \end{equation} So \begin{equation} \langle \frac{z^{n}+z^{-n}}{2}, \Sigma_{n}\rangle_{\mathbb{R}}=(a_{n}^{(0)})^{2},\,\,\,\langle \frac{z^{n}-z^{-n}}{2i}, \Pi_{n}\rangle_{\mathbb{R}}=(b_{n}^{(0)})^{2} \end{equation} and \begin{equation} \langle \frac{z^{n}-z^{-n}}{2i}, \Sigma_{n}\rangle_{\mathbb{R}}=0 \end{equation} as well as \begin{equation} \langle \frac{z^{n}+z^{-n}}{2}, \Pi_{n}\rangle_{\mathbb{R}}=(b_{n}^{(0)})^{2}\beta_{n}^{(0)}. \end{equation} In addition, it is easy to check that the coefficients of $z^{n}$ and $z^{-n}$ in $\Pi_{n}(z)$ are respectively $\frac{1}{2i}$ and $-\frac{1}{2i}$, whereas both of those in $\Sigma_{n}(z)-\beta_{n}^{(0)}\Pi_{n}(z)$ are $\frac{1}{2}$.
Noting (3.14) and (3.22), this fact means that $\Sigma_{n}(z)$ and $\Pi_{n}(z)$ are just the ``monic" orthogonal Laurent polynomials of the first class on the unit circle with respect to $d\mu$, i.e. \begin{equation} \Sigma_{n}(z)=a_{n}(d\mu)\sigma_{n}(d\mu,z)\,\,\,\,\,\text{and}\,\,\,\,\,\Pi_{n}(z)=b_{n}(d\mu)\pi_{n}(d\mu,z). \end{equation} Since \begin{equation} \langle a_{n}(d\mu)\sigma_{n}(d\mu), a_{n}(d\mu)\sigma_{n}(d\mu)\rangle_{\mathbb{R}}=a_{n}^{2}(d\mu), \end{equation} \begin{equation} \langle b_{n}(d\mu)\pi_{n}(d\mu), b_{n}(d\mu)\pi_{n}(d\mu)\rangle_{\mathbb{R}}=b_{n}^{2}(d\mu) \end{equation} and \begin{equation} \langle \frac{z^{n}+z^{-n}}{2}, b_{n}(d\mu)\pi_{n}(d\mu)\rangle_{\mathbb{R}}=b_{n}^{2}(d\mu)\beta_{n}(d\mu), \end{equation} by (3.21) and (3.23) we obtain \begin{equation} a_{n}(d\mu)=a_{n}^{(0)},\,\,\,b_{n}(d\mu)=b_{n}^{(0)},\,\,\,\beta_{n}(d\mu)=\beta_{n}^{(0)}. \end{equation} \end{proof} \begin{rem} For a sequence of three-tuples $(a_{n}^{(0)},b_{n}^{(0)},\beta_{n}^{(0)})$ fulfilling (3.1) alone, the measure $d\mu$ realizing (3.28) is not unique since, as the above proof shows, such a sequence determines the Verblunsky coefficients with odd subscripts but only the moduli of those with even subscripts. For $n\in \mathbb{N}$, set \begin{equation} \imath_{n}(d\mu)=\langle\frac{z^{n}+z^{-n}}{2},(a_{n-1}^{(0)})^{-1}\sigma_{n-1}(d\mu)\rangle_{\mathbb{R}}, \end{equation} \begin{equation} \jmath_{n}(d\mu)=\langle\frac{z^{n}+z^{-n}}{2},(b_{n-1}^{(0)})^{-1}\pi_{n-1}(d\mu)\rangle_{\mathbb{R}}, \end{equation} \begin{equation} \varsigma_{n}(d\mu)=\langle\frac{z^{n}-z^{-n}}{2i},(a_{n-1}^{(0)})^{-1}\sigma_{n-1}(d\mu)\rangle_{\mathbb{R}}, \end{equation} and \begin{equation} \zeta_{n}(d\mu)=\langle\frac{z^{n}-z^{-n}}{2i},(b_{n-1}^{(0)})^{-1}\pi_{n-1}(d\mu)\rangle_{\mathbb{R}}.
\end{equation} Then the measure $d\mu$ is unique for the sequence of seven-tuples \begin{equation} (a_{n}^{(0)},b_{n}^{(0)},\beta_{n}^{(0)},\imath_{n}(d\mu),\jmath_{n}(d\mu),\varsigma_{n}(d\mu),\zeta_{n}(d\mu)) \end{equation} satisfying (3.1) by Theorem 2.4 and Verblunsky theorem. Since $d\mu$ is partly dependent on $(a_{n}^{(0)},b_{n}^{(0)},\beta_{n}^{(0)})$ and $\imath_{n}(d\mu),\jmath_{n}(d\mu),\varsigma_{n}(d\mu),\zeta_{n}(d\mu)$ are dependent on $d\mu$, $a_{n}^{(0)}$ and $b_{n}^{(0)}$, the sequence of seven-tuples (3.33) satisfying (3.1) is partly dependent on the sequence of three-tuples $(a_{n}^{(0)},b_{n}^{(0)},\beta_{n}^{(0)})$ fulfilling (3.1). Considering the uniqueness of $d\mu$ for the sequence of (3.33) with (3.1), we say that $d\mu$ is selectively unique for the sequence $\{(a_{n}^{(0)},b_{n}^{(0)},\beta_{n}^{(0)})\}_{n=0}^{\infty}$ satisfying (3.1) and $a_{n}^{(0)},b_{n}^{(0)}>0$ as well as $a_{0}^{(0)},b_{0}^{(0)}=1$ and $\beta_{0}^{(0)}=0$. In Section 5, we will give a strong Favard theorem which illuminates in detail the relation between the uniqueness of $d\mu$ and a sequence of seven-tuples, $\{(a_{n}^{(0)}, b_{n}^{(0)},\beta_{n}^{(0)},\imath_{n}^{(0)},\jmath_{n}^{(0)},\varsigma_{n}^{(0)},\zeta_{n}^{(0)})\}$, with some additional properties. \end{rem} Similarly, by Theorems 2.3, 2.4, 2.7 and using the corresponding theorems for OPUC, we also have Baxter, Geronimus, Rakhmanov, Szeg\"o and the strong Szeg\"o theorems for OTP in what follows. \subsection{Baxter Theorem} Let \begin{equation} c_{n}=\int_{\partial \mathbb{D}}\overline{\tau}^{n}d\mu(\tau), \,\,\,n\in \mathbb{N}\cup\{0\} \end{equation} be the moments of $\mu$. Baxter theorem for OPUC states that $\sum_{n=0}^{\infty}|\alpha_{n}|<\infty$ if and only if $\sum_{n=0}^{\infty}|c_{n}|<\infty$ and $d\mu(\tau)=w(\tau)\frac{d\tau}{2\pi i\tau}$ with $w(\tau)$ continuous and $\min_{\tau\in\partial \mathbb{D}}w(\tau)>0$. For orthogonal trigonometric polynomials, we have Baxter theorem as follows.
\begin{thm} Let $\mu$ be a nontrivial probability measure on $\partial \mathbb{D}$, $a_{n}, b_{n}, \beta_{n}$ be the associated coefficients given in (2.11)-(2.13) and $c_{n}$ be the moments of $\mu$ defined by (3.34), then \begin{align} &\sum_{n=0}^{\infty}\sqrt{1-\frac{1}{4}[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}][a_{n+1}^{2}+b_{n+1}^{2}(1+\beta_{n+1}^{2})]} \nonumber\\ +&\sum_{n=0}^{\infty}\sqrt{\frac{a_{n}^{4}+b_{n}^{4}(1+\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\beta_{n}^{2}-1)} {a_{n}^{4}+b_{n}^{4}(1+\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\beta_{n}^{2}+1)}}<\infty \end{align} is equivalent to $\sum_{n=0}^{\infty}|c_{n}|<\infty$ and $d\mu(\tau)=w(\tau)\frac{d\tau}{2\pi i\tau}$ with $w(\tau)$ continuous and $\min_{\tau\in\partial \mathbb{D}}w(\tau)>0$. \end{thm} \subsection{Geronimus Theorem} To discuss Geronimus theorem, it is necessary to introduce some basic notions of the Schur algorithm (see \cite{sim1}). An analytic function $F$ on $\mathbb{D}$ is called a Carath\'eodory function if and only if $F(0)=1$ and $\Re F(z)>0$ on $\mathbb{D}$. An analytic function $f$ on $\mathbb{D}$ is called a Schur function if and only if $\sup_{z\in \mathbb{D}}|f(z)|<1$. Let \begin{equation} F(z)=\int_{\partial\mathbb{D}}\frac{\tau+z}{\tau-z}d\mu(\tau) \end{equation} be an associated Carath\'eodory function of $\mu$, then \begin{equation} f(z)=\frac{1}{z}\frac{F(z)-1}{F(z)+1} \end{equation} is a Schur function related to $\mu$. Starting with a Schur function $f_{0}$, the Schur algorithm provides an approach to successively map one Schur function to another by a series of transforms of the form \begin{equation} \begin{cases} f_{n+1}(z)=\displaystyle\frac{1}{z}\frac{f_{n}(z)-\gamma_{n}}{1-\overline{\gamma}_{n}f_{n}(z)},\\[4mm] \gamma_{n}=f_{n}(0). \end{cases} \end{equation} $f_{n}$ are called Schur iterates and $\gamma_{n}$ are called Schur parameters associated to $f_{0}$.
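The iteration (3.38) is easy to run on truncated Taylor series. The sketch below (a numerical illustration with our own naming, not part of the formal development) computes the Schur parameters of $f_{0}(z)=z/2$ by truncated power-series arithmetic; by Geronimus theorem these coincide with the Verblunsky coefficients of the underlying measure.

```python
import numpy as np

def schur_parameters(f0_coeffs, n_params):
    """Schur algorithm (3.38) on truncated Taylor series: given the Taylor
    coefficients of a Schur function f_0 at z = 0, return the Schur
    parameters gamma_n = f_n(0)."""
    N = len(f0_coeffs)
    f = np.asarray(f0_coeffs, dtype=complex)
    gammas = []
    for _ in range(min(n_params, N)):
        g = f[0]
        gammas.append(g)
        num = f.copy()
        num[0] -= g                          # f_n(z) - gamma_n
        den = -np.conj(g) * f
        den[0] += 1.0                        # 1 - conj(gamma_n) f_n(z)
        inv = np.zeros(N, dtype=complex)     # truncated series for 1/den
        inv[0] = 1.0 / den[0]
        for k in range(1, N):
            inv[k] = -np.dot(den[1:k + 1], inv[k - 1::-1]) / den[0]
        prod = np.convolve(num, inv)[:N]     # (f_n - gamma_n)/(1 - conj(gamma_n) f_n)
        f = np.concatenate((prod[1:], [0.0]))  # divide by z to get f_{n+1}
    return np.array(gammas)

# f_0(z) = z/2 gives gamma_0 = 0, gamma_1 = 1/2 and gamma_n = 0 afterwards
print(schur_parameters([0, 0.5, 0, 0, 0, 0], 4))
```

Each division by $z$ consumes one order of accuracy, so only the first few parameters are reliable for a given truncation length.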
Due to Schur, it is well known that there is a one-to-one correspondence between the set of Schur functions which are not finite Blaschke products and the set of sequences $\{\gamma_{n}\}_{n=0}^{\infty}$ in $\mathbb{D}$. Geronimus theorem for OPUC asserts that if $\mu$ is a nontrivial probability measure on $\partial \mathbb{D}$, the Schur parameters $\{\gamma_{n}\}_{n=0}^{\infty}$ associated to $f_{0}$ related to $\mu$ defined by (3.36) and (3.37) are identical to the Verblunsky coefficients $\{\alpha_{n}\}_{n=0}^{\infty}$. For orthogonal trigonometric polynomials, we have Geronimus theorem as follows. \begin{thm} Let $\mu$ be a nontrivial probability measure on $\partial \mathbb{D}$, if $\gamma_{n}$ are Schur parameters and $a_{n}$, $b_{n}$, $\beta_{n}$, $\imath_{n}$, $\jmath_{n}$, $\varsigma_{n}$, $\zeta_{n}$ are coefficients associated to $\mu$ defined by (2.11)-(2.15), then \begin{equation} \gamma_{2n-1}=\frac{a_{n}^{2}-b_{n}^{2}(1-\beta_{n}^{2})}{a_{n}^{2}+b_{n}^{2}(1+\beta_{n}^{2})} -\frac{2b_{n}^{2}\beta_{n}}{a_{n}^{2}+b_{n}^{2}(1+\beta_{n}^{2})}i \end{equation} and \begin{equation} \gamma_{2n-2}=\frac{1}{2}(\imath_{n}+\beta_{n-1}\varsigma_{n}-\zeta_{n})-\frac{i}{2}(\jmath_{n}-\imath_{n}\beta_{n-1} +\varsigma_{n}) \end{equation} for $n\in \mathbb{N}$. \end{thm} \subsection{Rakhmanov Theorem and Szeg\"o Theorem} Let $d\mu$ have the decomposition form (2.1), $\{\alpha_{n}\}_{n=0}^{\infty}$ be the Verblunsky coefficients of $\mu$, Rakhmanov theorem for OPUC states that if $w(\tau)>0$ for a.e. $\tau\in \partial\mathbb{D}$, then $\lim_{n\rightarrow\infty}|\alpha_{n}|=0$. Its OTP version is as follows. \begin{thm} Let $\mu$ be a nontrivial probability measure on $\partial \mathbb{D}$ with the decomposition form (2.1), $a_{n}, b_{n}, \beta_{n}$ be the associated coefficients of $\mu$ given in (2.11)-(2.13). If $w(\tau)>0$ for a.e.
$\tau\in \partial\mathbb{D}$, then \begin{equation} \lim_{n\rightarrow\infty}\frac{a_{n}^{4}+b_{n}^{4}(1+\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\beta_{n}^{2}-1)} {a_{n}^{4}+b_{n}^{4}(1+\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\beta_{n}^{2}+1)}=0 \end{equation} and \begin{equation} \lim_{n\rightarrow\infty}\frac{1}{4}[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}][a_{n+1}^{2}+b_{n+1}^{2}(1+\beta_{n+1}^{2})]=1. \end{equation} \end{thm} Szeg\"o theorem for OPUC shows that \begin{equation} \prod_{n=0}^{\infty}(1-|\alpha_{n}|^{2})=\exp\Big(\frac{1}{2\pi i}\int_{\partial \mathbb{D}}\log w(\tau)\frac{d\tau}{\tau}\Big). \end{equation} In particular, \begin{equation} \sum_{n=0}^{\infty}|\alpha_{n}|^{2}<\infty\Longleftrightarrow \frac{1}{2\pi i}\int_{\partial \mathbb{D}}\log w(\tau)\frac{d\tau}{\tau}>-\infty. \end{equation} Its analog for OTP is \begin{thm} Let $\mu$ be a nontrivial probability measure on $\partial \mathbb{D}$ with the decomposition form (2.1), $a_{n}, b_{n}, \beta_{n}$ be the associated coefficients of $\mu$ given in (2.11)-(2.13). Then \begin{equation} \prod_{n=0}^{\infty}\frac{a_{n+1}^{2}+b_{n+1}^{2}(1+\beta_{n+1}^{2})}{a_{n}^{2}+b_{n}^{2}(1+\beta_{n}^{2})}= \exp\Big(\frac{1}{2\pi i}\int_{\partial \mathbb{D}}\log w(\tau)\frac{d\tau}{\tau}\Big). \end{equation} In particular, \begin{align} &\sum_{n=0}^{\infty}\left\{1-\frac{1}{4}[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}][a_{n+1}^{2}+b_{n+1}^{2}(1+\beta_{n+1}^{2})]\right\} \nonumber\\ +&\sum_{n=0}^{\infty}\frac{a_{n}^{4}+b_{n}^{4}(1+\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\beta_{n}^{2}-1)} {a_{n}^{4}+b_{n}^{4}(1+\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\beta_{n}^{2}+1)}<\infty \end{align} is equivalent to $\displaystyle\frac{1}{2\pi i}\int_{\partial \mathbb{D}}\log w(\tau)\frac{d\tau}{\tau}>-\infty$.
\end{thm} \subsection{The Strong Szeg\"o Theorem} Let $d\mu$ have the decomposition form (2.1) satisfying the Szeg\"o condition \begin{equation} \frac{1}{2\pi i}\int_{\partial \mathbb{D}}\log w(\tau)\frac{d\tau}{\tau}>-\infty. \end{equation} It is customary to introduce the Szeg\"o function \begin{equation} D(z)=\exp\Big(\frac{1}{4\pi i}\int_{\partial \mathbb{D}}\frac{\tau+z}{\tau-z}\log w(\tau)\frac{d\tau}{\tau}\Big). \end{equation} It is easy to get that $D(z)$ is analytic and nonvanishing in $\mathbb{D}$, lies in the Hardy space $H^{2}(\mathbb{D})$ and $\lim_{r\uparrow1}D(r\tau)=D(\tau)$ for a.e. $\tau\in\partial \mathbb{D}$ as well as $|D(\tau)|^{2}=w(\tau)$. Let \begin{equation} D(z)=\exp\Big(\frac{1}{2}\hat{L}_{0}+\sum_{n=1}^{\infty}\hat{L}_{n}z^{n}\Big),\,\,\,z\in \mathbb{D}. \end{equation} Due to Ibragimov, the sharpest form of the strong Szeg\"o theorem for OPUC (see \cite{sim1}) says that \begin{equation} \sum_{n=0}^{\infty}n|\alpha_{n}|^{2}<\infty\Longleftrightarrow d\mu_{s}=0 \,\,\,\text{and} \,\,\,\sum_{n=0}^{\infty}n|\hat{L}_{n}|^{2}<\infty. \end{equation} The corresponding result for OTP is stated in the following theorem. \begin{thm} Let $\mu$ be a nontrivial probability measure on $\partial \mathbb{D}$ with the decomposition form (2.1) satisfying the Szeg\"o condition (3.47), $a_{n}, b_{n}, \beta_{n}$ be the associated coefficients of $\mu$ given in (2.11)-(2.13), and $\{\hat{L}_{n}\}_{n=0}^{\infty}$ be the Taylor coefficients of the logarithm of the Szeg\"o function $D(z)$ at $z=0$ which are defined by (3.48) and (3.49).
Then \begin{align} &\sum_{n=0}^{\infty}2n\left\{1-\frac{1}{4}[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}][a_{n+1}^{2}+b_{n+1}^{2}(1+\beta_{n+1}^{2})]\right\} \nonumber\\ +&\sum_{n=0}^{\infty}(2n-1)\left\{\frac{a_{n}^{4}+b_{n}^{4}(1+\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\beta_{n}^{2}-1)} {a_{n}^{4}+b_{n}^{4}(1+\beta_{n}^{2})^{2}+2a_{n}^{2}b_{n}^{2}(\beta_{n}^{2}+1)}\right\}<\infty \end{align} is equivalent to $d\mu_{s}=0$ and $\sum_{n=0}^{\infty}n|\hat{L}_{n}|^{2}<\infty$. \end{thm} In the above, by the mutual representation theorem for OTP and OPUC, we obtain some classical theorems for orthogonal trigonometric polynomials corresponding to the ones for orthogonal polynomials on the unit circle. In fact, by this theorem, we can obtain many more results for orthogonal trigonometric polynomials. For example, the important and useful Bernstein-Szeg\"o measure can be expressed in terms of orthogonal trigonometric polynomials as follows \begin{equation*} d\mu_{n}= \begin{cases} \displaystyle\frac{a_{m}^{2}+b_{m}^{2}(1+\beta^{2}_{m})}{|a_{m}\sigma_{m}(\theta)+(\beta_{m}+i)b_{m}\pi_{m}(\theta)|^{2}} \frac{d\theta}{2\pi},\,\,\,n=2m-1,\\[3mm] \displaystyle\frac{a_{m}^{2}b_{m}^{2}}{a_{m}^{2}+b_{m}^{2}(1+\beta^{2}_{m})}\frac{1}{|a_{m}^{-1}(\beta_{m}-i)\sigma_{m}(\theta) -b_{m}^{-1}\pi_{m}(\theta)|^{2}} \frac{d\theta}{2\pi},\,\,\,n=2m. \end{cases} \end{equation*} \section{Identities from Riemann-Hilbert Characterizations} In this section, by applying the corresponding RH characterizations, we obtain some identities for OPUC and OTP including Szeg\"o recursions for OPUC, four-term recurrences for OTP and some new identities on Cauchy integrals for OPUC and OTP as well as Hilbert transforms for OTP. Let $H(\partial \mathbb{D})$ denote the set of all complex-valued and H\"older continuous functions defined on $\partial \mathbb{D}$. For simplicity, we always assume that the weight function $w\in H(\partial \mathbb{D})$ in what follows.
\subsection{The case of OPUC} The RH characterization for OPUC is uniquely solvable as follows \begin{thm}[\!\!\cite{bdj,dd06}] The RHP (1.2) has a unique solution given by \begin{equation} Y(z)=\left( \begin{array}{cc} \Phi_{n}(z) & C[\tau^{-n}\Phi_{n}w](z) \\ -\kappa_{n-1}^{2}\Phi_{n-1}^{*}(z) & -\kappa_{n-1}^{2}C[\tau^{-n}\Phi_{n-1}^{*}w](z) \\ \end{array} \right), \end{equation} where $\Phi_{n}$ is the monic orthogonal polynomial on the unit circle of order $n$ with respect to the weight $w$, $\Phi_{n-1}^{*}$ is the reversed polynomial of $\Phi_{n-1}$, $\kappa_{n-1}$ is given as in (2.4), and $C$ is the Cauchy integral operator. \end{thm} Consider the Schwarz reflection of $Y$ defined by \begin{equation} Y_{1}(z)=\overline{Y\left(\frac{1}{\overline{z}}\right)},\,\,\,\,z\in \mathbb{C}\setminus\partial \mathbb{D}, \end{equation} then $Y_{1}^{+}(t)=\overline{Y^{-}(t)}$ and $Y_{1}^{-}(t)=\overline{Y^{+}(t)}$ for $t\in \partial \mathbb{D}$. Therefore, by using the boundary condition in RHP (1.2), we have \begin{equation} Y_{1}^{+}(t)=Y_{1}^{-}(t)\left( \begin{array}{cc} 1 & -t^{n}w(t) \\ 0 & 1 \\ \end{array} \right),\,\,\, t\in \partial \mathbb{D}. \end{equation} By a direct evaluation, \begin{equation} \lim_{z\rightarrow \infty}Y_{1}(z)=\left( \begin{array}{cc} -\alpha_{n-1} & \kappa_{n}^{-2} \\ -\kappa_{n-1}^{2} & -\overline{\alpha}_{n-1} \\ \end{array} \right). \end{equation} Moreover, by the growth condition at $\infty$ in RHP (1.2), we have \begin{equation} \lim_{z\rightarrow 0}Y_{1}(z)\left( \begin{array}{cc} z^{n} & 0 \\ 0 & z^{-n} \\ \end{array} \right)=I. \end{equation} Let \begin{align} Y_{2}(z)= \left( \begin{array}{cc} -\overline{\alpha}_{n-1} & -\kappa_{n}^{-2} \\ -\kappa_{n-1}^{2} & \alpha_{n-1} \\ \end{array} \right)Y_{1}(z)\left( \begin{array}{cc} z^{n} & 0 \\ 0 & -z^{-n} \\ \end{array} \right).
\end{align} Noting \begin{equation} 1-|\alpha_{n-1}|^{2}=\left(\frac{\kappa_{n-1}}{\kappa_{n}}\right)^{2}, \end{equation} by (4.4), (4.6) and simple calculations, \begin{equation} \lim_{z\rightarrow \infty}Y_{2}(z)\left( \begin{array}{cc} z^{-n} & 0 \\ 0 & z^{n} \\ \end{array} \right)=I. \end{equation} By (4.3) and (4.8), $Y_{2}$ satisfies the RH characterization (1.2) for OPUC, viz. \begin{equation} (\mbox{RHP for $Y_{2}$})\,\, \begin{cases} Y_{2}\,\, \mbox{is analytic in}\,\,\mathbb{C}\setminus \partial \mathbb{D},\vspace{2mm}\\ Y_{2}^{+}(t)=Y_{2}^{-}(t)\left( \begin{array}{cc} 1 & t^{-n}w(t) \\ 0 & 1 \\ \end{array} \right) \,\,\mbox{for} \,\,t\in \partial \mathbb{D},\vspace{2mm}\\ Y_{2}(z)=\left(I+O(\frac{1}{z})\right)\left( \begin{array}{cc} z^{n} & 0 \\ 0 & z^{-n} \\ \end{array} \right) \,\,\mbox{as}\,\, z\rightarrow \infty. \end{cases} \end{equation} By the uniqueness, $Y_{2}(z)=Y(z)$ for $z\in \mathbb{C}\setminus \partial \mathbb{D}$. Namely, \begin{equation} Y(z)=\left( \begin{array}{cc} -\overline{\alpha}_{n-1} & -\kappa_{n}^{-2} \\ -\kappa_{n-1}^{2} & \alpha_{n-1} \\ \end{array} \right)\overline{Y\left(\frac{1}{\overline{z}}\right)}\left( \begin{array}{cc} z^{n} & 0 \\ 0 & -z^{-n}\\ \end{array} \right),\,\,z\in \mathbb{C}\setminus\partial\mathbb{D}. 
\end{equation} By the above arguments, we have \begin{thm} Let $\Phi_{n}$, $\Phi_{n-1}^{*}$, $\alpha_{n-1}$, $\kappa_{n}$ be as above, then \begin{enumerate} \item [(A)] The identities \begin{equation} \Phi_{n}(z)=-\overline{\alpha}_{n-1}\Phi_{n}^{*}(z)+\frac{\kappa_{n-1}^{2}}{\kappa_{n}^{2}}z\Phi_{n-1}(z) \end{equation} and \begin{equation} \Phi_{n}^{*}(z)=\Phi_{n-1}^{*}(z)-\alpha_{n-1}z\Phi_{n-1}(z) \end{equation} hold for $z\in \mathbb{C}$; \item [(B)] The identities \begin{align} C[\Phi_{n}w](z)=&z^{n}\Big[\overline{\alpha}_{n-1}\overline{C[\Phi_{n}w]\left(\frac{1}{\overline{z}}\right)} -\frac{\kappa_{n-1}^{2}}{\kappa_{n}^{2}}\overline{C[\Phi_{n-1}^{*}w]\left(\frac{1}{\overline{z}}\right)}+\frac{1}{\kappa_{n}^{2}}\Big] \end{align} and \begin{align} C[\Phi_{n-1}^{*}w](z)=&-z^{n}\Big[\overline{C[\Phi_{n}w]\left(\frac{1}{\overline{z}}\right)} +\alpha_{n-1}\overline{C[\Phi_{n-1}^{*}w]\left(\frac{1}{\overline{z}}\right)}-\frac{1+\alpha_{n-1}}{\kappa_{n-1}^{2}}\Big] \end{align} hold for $z\in \mathbb{C}\setminus\partial\mathbb{D}$. \end{enumerate} \end{thm} \begin{proof} (4.11) and (4.12) are obtained by identifying the 11 and 21 entries in left hand side with the ones in right hand side of (4.10). By the orthogonality, it is easy to get that (see \cite{dd06}) \begin{equation} C[\tau^{-n}\Phi_{n}w](z)=z^{-n}C[\Phi_{n}w](z) \end{equation} and \begin{equation} -\kappa_{n-1}^{2}C[\tau^{-n}\Phi_{n-1}^{*}w](z)=z^{-n}\Big(-\kappa_{n-1}^{2}C[\Phi_{n-1}^{*}w](z)+1\Big). \end{equation} By identifying the 12 and 22 entries in two sides of (4.10), (4.13) and (4.14) respectively follow from (4.15) and (4.16). \end{proof} \begin{rem} The identities (4.11) and (4.12) are just the classical Szeg\"o recursions. They are equivalent to each other. 
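These recursions are easy to check numerically. The sketch below (our own illustration, not part of the formal development; coefficient arrays are in ascending powers of $z$) generates monic $\Phi_{n}$ from prescribed Verblunsky coefficients via the standard forward form $\Phi_{n+1}(z)=z\Phi_{n}(z)-\overline{\alpha}_{n}\Phi_{n}^{*}(z)$ and confirms the companion recurrence of the shape (4.12) at each step:

```python
import numpy as np

def szego_step(phi, alpha):
    """One forward Szego step Phi_{n+1}(z) = z Phi_n(z) - conj(alpha_n) Phi_n^*(z);
    polynomials are coefficient arrays in ascending powers of z."""
    phi_star = np.conj(phi[::-1])          # reversed polynomial Phi_n^*
    return np.concatenate(([0.0], phi)) - np.conj(alpha) * np.concatenate((phi_star, [0.0]))

rng = np.random.default_rng(0)
alphas = rng.uniform(-0.5, 0.5, 6) + 1j * rng.uniform(-0.5, 0.5, 6)  # points in D

phi = np.array([1.0 + 0.0j])               # Phi_0 = 1
for a in alphas:
    phi_next = szego_step(phi, a)
    # companion recurrence: Phi_{n+1}^* = Phi_n^* - alpha_n z Phi_n
    lhs = np.conj(phi_next[::-1])
    rhs = np.concatenate((np.conj(phi[::-1]), [0.0])) - a * np.concatenate(([0.0], phi))
    assert np.allclose(lhs, rhs)
    phi = phi_next
```

Since the subtracted term never touches the leading coefficient, monicity is preserved automatically at every step.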
\end{rem} \subsection{The case of OTP} The RH characterization for OTP is uniquely solvable as follows \begin{thm}[\!\!\cite{dd08}] The RHP (1.3) has a unique solution given by \begin{equation} \mathfrak{Y}(z)=\left( \begin{array}{cc} z^{n}L(\sigma_{n}, \pi_{n})(z) & C[\tau^{-n}L(\sigma_{n}, \pi_{n})w](z) \\ z^{n}\mathcal{L}(\sigma_{n-1}, \pi_{n-1})(z) & C[\tau^{-n}\mathcal{L}(\sigma_{n-1}, \pi_{n-1})w](z) \\ \end{array} \right), \end{equation} where \begin{equation} L(\sigma_{n}, \pi_{n})(z)=\lambda_{1,n}a_{n}\sigma_{n}(z)+\lambda_{2,n}b_{n}\pi_{n}(z), \end{equation} \begin{equation} \mathcal{L}(\sigma_{n-1}, \pi_{n-1})(z)=\lambda_{3,n-1}a_{n-1}\sigma_{n-1}(z)+\lambda_{4,n-1}b_{n-1}\pi_{n-1}(z) \end{equation} in which $\sigma_{n}$ and $\pi_{n}$ are the orthonormal Laurent polynomials of the first class on the unit circle with respect to the weight $w$, $a_{n}, b_{n}, \beta_{n}$ are given in (2.11)-(2.13), $\lambda_{1,n}=1$, $\lambda_{2,n}=\beta_{n}+i$, $\lambda_{3,n-1}=-\frac{1}{2}a_{n-1}^{-2}(1+\beta_{n-1} i)$, $\lambda_{4,n-1}=\frac{1}{2}b_{n-1}^{-2}i$, and $C$ is the Cauchy integral operator. \end{thm} Set \begin{equation} \mathfrak{Y}_{1}(z)=\overline{\mathfrak{Y}\left(\frac{1}{\overline{z}}\right)},\,\,\,\,z\in \mathbb{C}\setminus\partial \mathbb{D}, \end{equation} then $\mathfrak{Y}_{1}^{+}(t)=\overline{\mathfrak{Y}^{-}(t)}$ and $\mathfrak{Y}_{1}^{-}(t)=\overline{\mathfrak{Y}^{+}(t)}$ for $t\in \partial \mathbb{D}$. Therefore, by the boundary and growth conditions in RHP (1.3), we have \begin{equation} \mathfrak{Y}_{1}^{+}(t)=\mathfrak{Y}_{1}^{-}(t)\left( \begin{array}{cc} 1 & -t^{2n}w(t) \\ 0 & 1 \\ \end{array} \right),\,\,\, t\in \partial \mathbb{D} \end{equation} and \begin{equation} \lim_{z\rightarrow 0}\mathfrak{Y}_{1}(z)\left( \begin{array}{cc} z^{2n} & 0 \\ 0 & z^{-2n+1} \\ \end{array} \right) =I. 
\end{equation} Moreover, by straightforward calculations, we have \begin{equation} \lim_{z\rightarrow \infty}\mathfrak{Y}_{1}(z)\left( \begin{array}{cc} z & 0 \\ 0 & 1 \\ \end{array} \right) =\triangle=\left( \begin{array}{cc} \triangle_{11} & \triangle_{12} \\ \triangle_{21} & \triangle_{22} \\ \end{array} \right), \end{equation} where \begin{align} \triangle_{11}&=-\frac{1}{2}(\imath_{n}+\beta_{n-1}\varsigma_{n}-\zeta_{n})+\frac{i}{2}(\jmath_{n}-\imath_{n}\beta_{n-1} +\varsigma_{n})\nonumber\\ &=-\alpha_{2n-2}\,\, (\mbox{by Theorem 2.4}),\\ \triangle_{12}&=a_{n}^{2}+b_{n}^{2}(1+\beta_{n}^{2})=\kappa_{2n-1}^{-2}\,\, (\mbox{by Theorem 2.5}),\\ \triangle_{21}&=-\frac{1}{4}\Big(a_{n-1}^{-2}(1+\beta_{n-1}^{2})+b_{n-1}^{-2}\Big)=-\kappa_{2n-2}^{2}\,\, (\mbox{by Theorem 2.3}),\\ \triangle_{22}&=-\frac{1}{2}(\imath_{n}+\varsigma_{n}\beta_{n-1}-\zeta_{n})-\frac{i}{2}(\jmath_{n}-\imath_{n}\beta_{n-1}+\varsigma_{n})=-\overline{\alpha}_{2n-2}. \end{align} Let \begin{align} \mathfrak{Y}_{2}(z) =\left( \begin{array}{cc} \triangle_{22} & -\triangle_{12} \\ \triangle_{21} & -\triangle_{11} \\ \end{array} \right)\mathfrak{Y}_{1}(z)\left( \begin{array}{cc} z^{2n+1} & 0 \\ 0 & -z^{-2n+1} \\ \end{array} \right). \end{align} Noting (or by Theorem 2.7) \begin{equation} \det\triangle=|\alpha_{2n-2}|^{2}+\left(\frac{\kappa_{2n-2}}{\kappa_{2n-1}}\right)^{2}=1, \end{equation} by (4.23) and (4.28), \begin{equation} \lim_{z\rightarrow \infty}\mathfrak{Y}_{2}(z)\left( \begin{array}{cc} z^{-2n} & 0 \\ 0 & z^{2n-1} \\ \end{array} \right)=I. \end{equation} Thus $\mathfrak{Y}_{2}$ satisfies the following RHP (i.e. 
the RH characterization (1.3) for OTP) \begin{equation} (\mbox{RHP for $\mathfrak{Y}_{2}$})\,\, \begin{cases} \mathfrak{Y}_{2}\,\, \mbox{is analytic in}\,\,\mathbb{C}\setminus \partial \mathbb{D},\vspace{2mm}\\ \mathfrak{Y}_{2}^{+}(t)=\mathfrak{Y}_{2}^{-}(t)\left( \begin{array}{cc} 1 & t^{-2n}w(t) \\ 0 & 1 \\ \end{array} \right) \,\,\mbox{for} \,\,t\in \partial \mathbb{D},\vspace{2mm}\\ \mathfrak{Y}_{2}(z)=\left(I+O(\frac{1}{z})\right)\left( \begin{array}{cc} z^{2n} & 0 \\ 0 & z^{-2n+1} \\ \end{array} \right) \,\,\mbox{as}\,\, z\rightarrow \infty,\vspace{2mm}\\ (\mathfrak{Y}_{2})_{11}(0)=(\mathfrak{Y}_{2})_{21}(0)=0. \end{cases} \end{equation} By the uniqueness of RHP (1.3), $\mathfrak{Y}_{2}(z)=\mathfrak{Y}(z)$ for any $z\in \mathbb{C}\setminus \partial \mathbb{D}$. That is, \begin{align} \mathfrak{Y}(z) =\left( \begin{array}{cc} \triangle_{22} & -\triangle_{12} \\ \triangle_{21} & -\triangle_{11} \\ \end{array} \right)\overline{\mathfrak{Y}\left(\frac{1}{\overline{z}}\right)}\left( \begin{array}{cc} z^{2n+1} & 0 \\ 0 & -z^{-2n+1} \\ \end{array} \right),\,\,\,z\in \mathbb{C}\setminus\partial \mathbb{D}. \end{align} In order to derive some identities for OTP, we introduce reflectional sets, reflectional and auto-reflectional functions for the unit circle $\partial \mathbb{D}$. \begin{defn} A set $\Sigma$ is called a reflectional set for the unit circle $\partial \mathbb{D}$, or simply a reflectional set, if $z\in\Sigma$ implies $1/z\in\Sigma$; in this case $z$ and $1/z$ are called reflections of each other. For example, $\mathbb{C} \setminus\{0\}$ is a reflectional set for the unit circle. \end{defn} \begin{defn} If $f$ is defined on a reflectional set $\Sigma$, set \begin{equation} f_{*}(z)=\overline{f\left(1/\overline{z}\right)},\,\,z\in\Sigma, \end{equation} then $f_{*}$ is called the reflectional function, or simply the reflection, of $f$ for the unit circle in $\Sigma$.
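As a quick sanity check (a numerical illustration of our own, not part of the formal development), one can verify at random points that the basic trigonometric Laurent polynomials are fixed by the map $f\mapsto f_{*}$:

```python
import numpy as np

def reflect(f, z):
    """Reflection of f for the unit circle: f_*(z) = conj(f(1/conj(z)))."""
    return np.conj(f(1.0 / np.conj(z)))

n = 3
cos_like = lambda z: (z**n + z**(-n)) / 2      # (z^n + z^-n)/2
sin_like = lambda z: (z**n - z**(-n)) / (2j)   # (z^n - z^-n)/(2i)

rng = np.random.default_rng(1)
zs = rng.normal(size=5) + 1j * rng.normal(size=5)   # random points in C \ {0}
for z in zs:
    assert np.isclose(reflect(cos_like, z), cos_like(z))   # auto-reflectional
    assert np.isclose(reflect(sin_like, z), sin_like(z))
```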
\end{defn} \begin{defn} If $f$ is defined on a reflectional set $\Sigma$ such that \begin{equation} f(z)=f_{*}(z),\,\,z\in\Sigma, \end{equation} then $f$ is called an auto-reflectional function for the unit circle in $\Sigma$, or simply an auto-reflection. \end{defn} \begin{lem} Let $\mu$ be a nontrivial probability measure on the unit circle $\partial \mathbb{D}$, $\{1, \sigma_{n}, \pi_{n}\}$ be the unique system of orthonormal Laurent polynomials of the first class on the unit circle with respect to $\mu$, then $\sigma_{n}, \pi_{n}$ are auto-reflectional for the unit circle in $\mathbb{C}\setminus\{0\}$. \end{lem} \begin{proof} It follows immediately from the facts that $\displaystyle\frac{z^{n}+z^{-n}}{2}$ and $\displaystyle\frac{z^{n}-z^{-n}}{2i}$ are auto-reflectional for the unit circle in $\mathbb{C}\setminus\{0\}$ and that all of the coefficients are real-valued. \end{proof} With the above preliminaries, we have \begin{thm} Let $\sigma_{n}$, $\pi_{n}$, $a_{n}$, $b_{n}$, $\triangle_{kl}$, $\lambda_{m,n}$ be as above, then \begin{enumerate} \item [($\mathcal{A}$)] The identities \begin{align} &(\lambda_{1,n}z^{-1}-\overline{\lambda}_{1,n}\triangle_{22})a_{n}\sigma_{n}(z)+(\lambda_{2,n}z^{-1}-\overline{\lambda}_{2,n}\triangle_{22})b_{n}\pi_{n}(z)\nonumber\\ =&-\triangle_{12}\Big(\overline{\lambda}_{3,n-1}a_{n-1}\sigma_{n-1}(z)+\overline{\lambda}_{4,n-1}b_{n-1}\pi_{n-1}(z)\Big) \end{align} and \begin{align} &(\lambda_{3,n-1}z^{-1}+\overline{\lambda}_{3,n-1}\triangle_{11})a_{n-1}\sigma_{n-1}(z)+(\lambda_{4,n-1}z^{-1}+\overline{\lambda}_{4,n-1}\triangle_{11})b_{n-1}\pi_{n-1}(z)\nonumber\\ =&\triangle_{21}\Big(\overline{\lambda}_{1,n}a_{n}\sigma_{n}(z)+\overline{\lambda}_{2,n}b_{n}\pi_{n}(z)\Big) \end{align} hold for $z\in \mathbb{C}\setminus\{0\}$; \item [($\mathcal{B}$)] The identities \begin{align} &C[\tau^{-n}(\lambda_{1,n}a_{n}\sigma_{n}+\lambda_{2,n}b_{n}\pi_{n})w](z)\nonumber\\
=&z^{-2(n-1)}\Big[\triangle_{22}C[\tau^{n-1}(\overline{\lambda}_{1,n}a_{n}\sigma_{n}+\overline{\lambda}_{2,n}b_{n}\pi_{n})w](z)\nonumber\\ &-\triangle_{12}C[\tau^{n-1}(\overline{\lambda}_{3,n-1}a_{n-1}\sigma_{n-1}+\overline{\lambda}_{4,n-1}b_{n-1}\pi_{n-1})w](z)\Big] \end{align} and \begin{align} &C[\tau^{-n}(\lambda_{3,n-1}a_{n-1}\sigma_{n-1}+\lambda_{4,n-1}b_{n-1}\pi_{n-1})w](z)\nonumber\\ =&z^{-2(n-1)}\Big[\triangle_{21}C[\tau^{n-1}(\overline{\lambda}_{1,n}a_{n}\sigma_{n}+\overline{\lambda}_{2,n}b_{n}\pi_{n})w](z)\nonumber\\ &-\triangle_{11}C[\tau^{n-1}(\overline{\lambda}_{3,n-1}a_{n-1}\sigma_{n-1}+\overline{\lambda}_{4,n-1}b_{n-1}\pi_{n-1})w](z)\Big] \end{align} hold for $z\in \mathbb{C}\setminus(\partial\mathbb{D}\cup\{0\})$. \end{enumerate} \end{thm} \begin{proof} By Lemma 4.8, (4.35) and (4.36) are obtained by identifying the 11 and 21 entries on the LHS with those on the RHS of (4.32). Since \begin{equation} \overline{C[\tau^{-n}L(\sigma_{n}, \pi_{n})w]\left(\frac{1}{z}\right)}=-zC[\tau^{n-1}\overline{L(\sigma_{n}, \pi_{n})}w](z) \end{equation} and \begin{equation} \overline{C[\tau^{-n}\mathcal{L}(\sigma_{n-1}, \pi_{n-1})w]\left(\frac{1}{z}\right)}=-zC[\tau^{n-1}\overline{\mathcal{L}(\sigma_{n-1}, \pi_{n-1})}w](z) \end{equation} for $z\in \mathbb{C}\setminus(\partial\mathbb{D}\cup\{0\})$, (4.37) and (4.38) follow from comparing the 12 and 22 entries with each other on both sides of (4.32). \end{proof} When $z$ in the above theorem is restricted to $\partial \mathbb{D}$, we obtain the following four-term recurrences for OTP.
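Before restricting to the circle, the reflection identities (4.39) and (4.40) used in the proof above can be sanity-checked numerically: they are instances of the general symmetry $\overline{C[g]\left(1/\overline{z}\right)}=-z\,C[\tau^{-1}\overline{g}](z)$ of the Cauchy transform on the unit circle, where $\overline{g}$ denotes pointwise conjugation of $g$ on $\partial\mathbb{D}$. The sketch below is ours: it takes the weight $w\equiv 1$ and a generic Laurent polynomial standing in for $L(\sigma_{n},\pi_{n})$, and uses the reflected point $1/\overline{z}$, matching the convention of (4.32); all names are illustrative.

```python
import numpy as np

def cauchy(g, z, N=2048):
    """Cauchy transform C[g](z) = (1/2*pi*i) * integral over the unit circle of
    g(tau)/(tau - z) dtau, by the trapezoidal rule (spectrally accurate off the circle)."""
    tau = np.exp(2j * np.pi * np.arange(N) / N)
    # dtau = i*tau*dtheta turns the contour integral into a plain average over theta
    return np.mean(g(tau) * tau / (tau - z))

n = 2
c = 2 + 1j
L = lambda t: t**2 + c * t + 3        # generic stand-in Laurent polynomial
z = 0.3 + 0.2j                        # a point inside the unit disk

# LHS: conjugated Cauchy transform at the reflected point 1/conj(z)
lhs = np.conj(cauchy(lambda t: t**(-n) * L(t), 1 / np.conj(z)))
# RHS: -z * C[tau^{n-1} * conj(L) * w](z) with w identically 1
rhs = -z * cauchy(lambda t: t**(n - 1) * np.conj(L(t)), z)
assert abs(lhs - rhs) < 1e-10
```

Both sides agree to machine precision; the same computation with $z$ taken outside $\overline{\mathbb{D}}$ checks the other component of (4.39).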
\begin{thm} Let $\sigma_{n}$, $\pi_{n}$, $a_{n}$, $b_{n}$, $\triangle_{kl}$, $\lambda_{m,n}$ be as above, then \begin{enumerate} \item [($\mathfrak{A}$)] The identities \begin{align} &(\lambda_{1,n}e^{-i\theta}-\overline{\lambda}_{1,n}\triangle_{22})a_{n}\sigma_{n}(\theta)+(\lambda_{2,n}e^{-i\theta}-\overline{\lambda}_{2,n}\triangle_{22})b_{n}\pi_{n}(\theta)\nonumber\\ =&-\triangle_{12}\Big(\overline{\lambda}_{3,n-1}a_{n-1}\sigma_{n-1}(\theta)+\overline{\lambda}_{4,n-1}b_{n-1}\pi_{n-1}(\theta)\Big) \end{align} and \begin{align} &(\lambda_{3,n-1}e^{-i\theta}+\overline{\lambda}_{3,n-1}\triangle_{11})a_{n-1}\sigma_{n-1}(\theta)+(\lambda_{4,n-1}e^{-i\theta}+\overline{\lambda}_{4,n-1}\triangle_{11})b_{n-1}\pi_{n-1}(\theta)\nonumber\\ =&\triangle_{21}\Big(\overline{\lambda}_{1,n}a_{n}\sigma_{n}(\theta)+\overline{\lambda}_{2,n}b_{n}\pi_{n}(\theta)\Big) \end{align} hold for $\theta\in [0, 2\pi)$; \item [($\mathfrak{B}$)] The identities \begin{align} &H[\tau^{-n}(\lambda_{1,n}a_{n}\sigma_{n}+\lambda_{2,n}b_{n}\pi_{n})w](e^{i\theta})\nonumber\\ =&e^{-i[2(n-1)\theta]}\Big[\triangle_{22}H[\tau^{n-1}(\overline{\lambda}_{1,n}a_{n}\sigma_{n}+\overline{\lambda}_{2,n}b_{n}\pi_{n})w](e^{i\theta})\nonumber\\ &-\triangle_{12}H[\tau^{n-1}(\overline{\lambda}_{3,n-1}a_{n-1}\sigma_{n-1}+\overline{\lambda}_{4,n-1}b_{n-1}\pi_{n-1})w](e^{i\theta})\Big] \end{align} and \begin{align} &H[\tau^{-n}(\lambda_{3,n-1}a_{n-1}\sigma_{n-1}+\lambda_{4,n-1}b_{n-1}\pi_{n-1})w](e^{i\theta})\nonumber\\ =&e^{-i[2(n-1)\theta]}\Big[\triangle_{21}H[\tau^{n-1}(\overline{\lambda}_{1,n}a_{n}\sigma_{n}+\overline{\lambda}_{2,n}b_{n}\pi_{n})w](e^{i\theta})\nonumber\\ &-\triangle_{11}H[\tau^{n-1}(\overline{\lambda}_{3,n-1}a_{n-1}\sigma_{n-1}+\overline{\lambda}_{4,n-1}b_{n-1}\pi_{n-1})w](e^{i\theta})\Big] \end{align} hold for $\theta\in [0, 2\pi)$, where $H$ is the Hilbert transform on the unit circle, i.e. \begin{equation} Hf(t)=P.V.
\frac{1}{\pi}\int_{\partial \mathbb{D}}\frac{f(\tau)}{t-\tau}d\tau,\,\,t\in\partial \mathbb{D} \end{equation} in which $f\in H(\partial \mathbb{D})$. \end{enumerate} \end{thm} \begin{proof} (4.41) and (4.42) are obvious by identifying $e^{i\theta}\in \partial \mathbb{D}$ with $\theta\in[0,2\pi)$ in (4.35) and (4.36). By the well-known Plemelj formula, viz. \begin{equation} C^{\pm}f(t)=\pm\frac{1}{2}f(t)+\frac{i}{2}Hf(t),\,\,t\in \partial \mathbb{D}, \end{equation} where $f\in H(\partial \mathbb{D})$, letting $z\in \mathbb{D}\rightarrow t=e^{i\theta}$ (or $z\in \mathbb{C}\setminus\overline{\mathbb{D}}\rightarrow t=e^{i\theta}$), (4.43) and (4.44) easily follow from (4.37), (4.38), (4.41) and (4.42). \end{proof} \begin{rem} The identities (4.35), (4.36), (4.41) and (4.42) are four-term recurrences for OTP (exactly, the former two are for OLP of the first class). They are equivalent to each other and also to the ones in \cite{dd08}. \end{rem} \begin{rem} By a similar strategy, we can also apply the RH characterization (1.2) for OPUC to derive the above identities for OTP (or OLP of the first class) in Theorems 4.9 and 4.10 with $2n-1$ in place of $n$, as stated in the Introduction. \end{rem} \begin{rem} By the mutual representation theorem (Theorem 2.1) and Theorem 4.2, we can directly get some identities for OTP in different forms. They are equivalent to (4.35)-(4.38) and (4.41)-(4.44). By this approach, the four-term recurrences (corresponding to (4.35), (4.36), (4.41) and (4.42)) were obtained in \cite{dd08}. \end{rem} \section{A Strong Favard theorem} Theorem 3.1 tells us that there exist many nontrivial probability measures $d\mu$ corresponding to any fixed system of three-tuples $\{(a_{n}^{(0)},b_{n}^{(0)},\beta_{n}^{(0)})\}$ satisfying (3.1). That is, the system of three-tuples $\{(a_{n}^{(0)},b_{n}^{(0)},\beta_{n}^{(0)})\}$ with (3.1) is not sufficient to uniquely determine the nontrivial probability measure $d\mu$.
As stated in Remark 3.2, we need to consider a system of seven-tuples $\{(a_{n}^{(0)}, b_{n}^{(0)},\beta_{n}^{(0)},$ $\imath_{n}^{(0)},\jmath_{n}^{(0)},\varsigma_{n}^{(0)},\zeta_{n}^{(0)})\}$ with some suitable properties in order to uniquely determine the nontrivial probability measure $d\mu$. In what follows, we discuss this in detail. First, we give some further relations between the coefficients $a_{n}, b_{n},\beta_{n},\imath_{n},\jmath_{n},\varsigma_{n},\zeta_{n}$ of OTP and $\alpha_{n}, \kappa_{n}$ of OPUC. To this end, the following basic facts are required. \begin{lem} Let $\Phi_{n}$ be the monic orthogonal polynomial on the unit circle of order $n$ with respect to $\mu$, and $\Phi^{*}_{n}$ be the reversed polynomial of $\Phi_{n}$, then \begin{equation} \langle 1,z\Phi_{n}^{*}\rangle_{\mathbb{R}}=\int_{\partial \mathbb{D}}z\Phi_{n}^{*}(\tau)d\mu(\tau)=-a_{n+1,n} \end{equation} and \begin{equation} \langle 1,z^{2}\Phi_{n}\rangle_{\mathbb{R}}=\int_{\partial \mathbb{D}}\tau^{2}\Phi_{n}(\tau)d\mu(\tau)=\alpha_{n}^{-1}\Big(a_{n+2,n+1}-a_{n+1,n}\Big), \end{equation} where the Verblunsky coefficient $\alpha_{n}$ is restricted in $\mathbb{D}\setminus\{0\}$, and $a_{n+1,n}$ is given by \begin{equation} \Phi_{n+1}(z)=z^{n+1}+a_{n+1,n}z^{n}+\mbox{lower order}. \end{equation} \end{lem} \begin{proof} By (2.6), we have \begin{align} \int_{\partial \mathbb{D}}z\Phi_{n}^{*}(\tau)d\mu(\tau)=\int_{\partial \mathbb{D}}\tau^{n+1}\overline{\Phi_{n}(\tau)}d\mu(\tau)=\langle z^{n+1},\Phi_{n}\rangle_{\mathbb{C}}. \end{align} Thus (5.1) immediately follows from (5.3) by the orthogonality. For $\alpha_{n}\in\mathbb{D}\setminus\{0\}$, by Szeg\"o recurrence, \begin{align} \int_{\partial \mathbb{D}}\tau^{2}\Phi_{n}(\tau)d\mu(\tau)=\alpha_{n}^{-1}\Big[\int_{\partial \mathbb{D}}\tau\Phi_{n}^{*}(\tau)d\mu(\tau)-\int_{\partial \mathbb{D}}\tau\Phi_{n+1}^{*}(\tau)d\mu(\tau)\Big]. \end{align} So (5.2) holds by applying (5.1).
\end{proof} \begin{thm} Let $a_{n}, b_{n},\beta_{n},\imath_{n},\jmath_{n},\varsigma_{n},\zeta_{n},\alpha_{n},\kappa_{n}$ be given in Section 2, then \begin{align} \imath_{n+1}=&\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)a_{2n,2n-1}i-\frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i\nonumber\\ &+a_{n}^{-2}(1-\beta_{n}i)\Big]a_{2n+1,2n} +\frac{1}{4}a_{n}^{-2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big), \end{align} \begin{align} \jmath_{n+1}=&\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big)a_{2n,2n-1}-\frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}\nonumber\\ &+b_{n}^{-2}i\Big]a_{2n+1,2n} +\frac{1}{4}b_{n}^{-2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big)i, \end{align} \begin{align} \varsigma_{n+1}=&\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big)a_{2n,2n-1}-\frac{1}{4i}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i\nonumber\\ &+a_{n}^{-2}(1-\beta_{n}i)\Big]a_{2n+1,2n} -\frac{1}{4i}a_{n}^{-2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \end{align} and \begin{align} \zeta_{n+1}=&\frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}-1\Big)a_{2n,2n-1}-\frac{1}{4i}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}\nonumber\\ &+b_{n}^{-2}i\Big]a_{2n+1,2n} -\frac{1}{4}b_{n}^{-2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \end{align} for $n\in \mathbb{N}\cup\{0\}$, where $\alpha_{2n-1},\alpha_{2n}\in \mathbb{D}\setminus\{0\}$, $a_{2n+1,2n}, a_{2n,2n-1}$ are given by (5.3). \end{thm} \begin{proof} It is enough to prove (5.6) and (5.7); the proofs of (5.8) and (5.9) are similar.
By Theorem 2.2, Lemmas 5.1 and 2.8 and Szeg\"o recurrence, when $\alpha_{2n-1},\alpha_{2n}\in \mathbb{D}\setminus\{0\}$, we have \begin{align} \langle z^{n+1},a_{n}\sigma_{n}\rangle_{\mathbb{R}}=&\langle z^{n+1},-\frac{1}{2}z^{-n}[\Lambda_{n}^{-1}b_{n}^{-2}iz\Phi_{2n-1}-(1-\beta_{n}i)\Phi_{2n}^{*}]\rangle_{\mathbb{R}}\nonumber\\ =&-\frac{1}{2}\Lambda_{n}^{-1}b_{n}^{-2}i\langle 1,z^{2}\Phi_{2n-1}\rangle_{\mathbb{R}}+\frac{1}{2}(1-\beta_{n}i)\langle 1,z\Phi_{2n}^{*}\rangle_{\mathbb{R}}\nonumber\\ =&\frac{1}{2}\Lambda_{n}^{-1}b_{n}^{-2}\alpha_{2n-1}^{-1}a_{2n,2n-1}i-\frac{1}{2}\Big[\Lambda_{n}^{-1}b_{n}^{-2}\alpha_{2n-1}^{-1}i\nonumber\\ &+(1-\beta_{n}i)\Big]a_{2n+1,2n}, \end{align} \begin{align} \langle z^{-(n+1)},a_{n}\sigma_{n}\rangle_{\mathbb{R}}=&\langle z^{-(n+1)},-\frac{1}{2}z^{-n}[\Lambda_{n}^{-1}b_{n}^{-2}iz\Phi_{2n-1}-(1-\beta_{n}i)\Phi_{2n}^{*}]\rangle_{\mathbb{R}}\nonumber\\ =&-\frac{1}{2}\Lambda_{n}^{-1}b_{n}^{-2}i\langle 1,z^{-2n}\Phi_{2n-1}\rangle_{\mathbb{R}}+\frac{1}{2}(1-\beta_{n}i)\langle 1,z^{-(2n+1)}\Phi_{2n}^{*}\rangle_{\mathbb{R}}\nonumber\\ =&-\frac{1}{2}\Lambda_{n}^{-1}b_{n}^{-2}i\langle z^{2n},\Phi_{2n-1}\rangle_{\mathbb{C}}+\frac{1}{2}(1-\beta_{n}i)\overline{\langle 1,z\Phi_{2n}\rangle}_{\mathbb{R}}\nonumber\\ =&\frac{1}{2}\Lambda_{n}^{-1}b_{n}^{-2}a_{2n,2n-1}i+\frac{1}{2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big), \end{align} \begin{align} \langle z^{n+1},b_{n}\pi_{n}\rangle_{\mathbb{R}}=&\langle z^{n+1},-\frac{1}{2}z^{-n}[\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)z\Phi_{2n-1}(z)-i\Phi_{2n}^{*}(z)]\rangle_{\mathbb{R}}\nonumber\\ =&-\frac{1}{2}\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)\langle 1,z^{2}\Phi_{2n-1}\rangle_{\mathbb{R}}+\frac{i}{2}\langle 1,z\Phi_{2n}^{*}\rangle_{\mathbb{R}}\nonumber\\ =&\frac{1}{2}\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}a_{2n,2n-1}-\frac{1}{2}\Big[\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}\nonumber\\&+i\Big]a_{2n+1,2n} \end{align} and 
\begin{align} \langle z^{-(n+1)},b_{n}\pi_{n}\rangle_{\mathbb{R}}=&\langle z^{-(n+1)},-\frac{1}{2}z^{-n}[\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)z\Phi_{2n-1}(z)-i\Phi_{2n}^{*}(z)]\rangle_{\mathbb{R}}\nonumber\\ =&-\frac{1}{2}\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)\langle 1,z^{-2n}\Phi_{2n-1}\rangle_{\mathbb{R}}+\frac{i}{2}\langle 1,z^{-(2n+1)}\Phi_{2n}^{*}\rangle_{\mathbb{R}}\nonumber\\ =&-\frac{1}{2}\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)\langle z^{2n},\Phi_{2n-1}\rangle_{\mathbb{C}}+\frac{i}{2}\overline{\langle 1,z\Phi_{2n}\rangle}_{\mathbb{R}}\nonumber\\ =&\frac{1}{2}\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)a_{2n,2n-1}+\frac{i}{2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big). \end{align} Thus \begin{align} \imath_{n+1}=&\langle\frac{z^{n+1}+z^{-(n+1)}}{2},a_{n}^{-1}\sigma_{n}\rangle_{\mathbb{R}}=a_{n}^{-2}\langle\frac{z^{n+1}+z^{-(n+1)}}{2},a_{n}\sigma_{n}\rangle_{\mathbb{R}}\nonumber\\ =&\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)a_{2n,2n-1}i-\frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i\nonumber\\ &+a_{n}^{-2}(1-\beta_{n}i)\Big]a_{2n+1,2n} +\frac{1}{4}a_{n}^{-2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \end{align} and \begin{align} \jmath_{n+1}=&\langle\frac{z^{n+1}+z^{-(n+1)}}{2},b_{n}^{-1}\pi_{n}\rangle_{\mathbb{R}}=b_{n}^{-2}\langle\frac{z^{n+1}+z^{-(n+1)}}{2},b_{n}\pi_{n}\rangle_{\mathbb{R}}\nonumber\\ =&\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big)a_{2n,2n-1}-\frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}\nonumber\\ &+b_{n}^{-2}i\Big]a_{2n+1,2n} +\frac{1}{4}b_{n}^{-2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big)i, \end{align} where $\alpha_{2n-1},\alpha_{2n}\in \mathbb{D}\setminus\{0\}$. 
\end{proof} Denote \begin{equation} A=\left( \begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)i & -\frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)\Big] \vspace{2mm}\\ \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big) & -\frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i\Big] \\ \end{array} \right), \end{equation} then by (2.21) and (2.37), \begin{align} |A|=&\left|\begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)i & -\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i\vspace{2mm}\\ \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big) & -\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1} \\ \end{array} \right|\nonumber\\ &+\left|\begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)i & -\frac{1}{4}a_{n}^{-2}(1-\beta_{n}i)\vspace{2mm}\\ \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big) & -\frac{1}{4}b_{n}^{-2}i \\ \end{array} \right|\nonumber\\ =&-\frac{1}{16}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)\left|\begin{array}{cc} i & a_{n}^{-2}(1-\beta_{n}i)\vspace{2mm}\\ (1+\beta_{n}i) & b_{n}^{-2}i \\ \end{array} \right|\nonumber\\ =&\frac{1}{16}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}]\nonumber\\ =&\frac{1}{8}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)i\neq 0. \end{align} So $A$ is invertible. By Theorem 5.2, we can represent the coefficient $a_{n+1,n}$ of OPUC by the coefficients $a_{n}, b_{n},\beta_{n},\imath_{n},\jmath_{n},\varsigma_{n},\zeta_{n}$ of OTP as follows. 
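The evaluation (5.17) of $|A|$ can be confirmed symbolically. In the sketch below, the single relation $\Lambda_{n}^{-1}[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}]=2i$ is taken as an assumption encoding how (2.21) and (2.37) enter the computation above; the intermediate identity checked first shows that the $\Lambda_{n}$-dependent terms of the determinant cancel, exactly as in the second-to-last step of (5.17). All variable names are illustrative.

```python
import sympy as sp

a, b, beta = sp.symbols('a b beta', positive=True)
alpha, Lam = sp.symbols('alpha Lambda', nonzero=True)
I = sp.I

# the coefficient matrix A of (5.16)
A = sp.Matrix([
    [sp.Rational(1, 4) * (1/alpha + 1) * I / (Lam * a**2 * b**2),
     -sp.Rational(1, 4) * (I / (Lam * a**2 * b**2 * alpha) + (1 - beta*I) / a**2)],
    [sp.Rational(1, 4) * (1 + beta*I) * (1/alpha + 1) / (Lam * a**2 * b**2),
     -sp.Rational(1, 4) * ((1 + beta*I) / (Lam * a**2 * b**2 * alpha) + I / b**2)],
])

# cross terms in Lambda cancel, leaving the factor a^{-2}(1+beta^2)+b^{-2}
detA = sp.simplify(A.det())
step = sp.Rational(1, 16) * (1/alpha + 1) * ((1 + beta**2)/a**2 + 1/b**2) / (Lam * a**2 * b**2)
assert sp.simplify(detA - step) == 0

# assumed relation from (2.21) and (2.37): Lambda^{-1}[a^{-2}(1+beta^2)+b^{-2}] = 2i
Lam_val = ((1 + beta**2)/a**2 + 1/b**2) / (2*I)
final = sp.Rational(1, 8) * (1/alpha + 1) * I / (a**2 * b**2)
assert sp.simplify(detA.subs(Lam, Lam_val) - final) == 0
```

Since the final expression is a nonzero multiple of $\alpha_{2n-1}^{-1}+1$ with $\alpha_{2n-1}\in\mathbb{D}\setminus\{0\}$, the invertibility of $A$ follows as claimed.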
\begin{thm} Let $a_{n}, b_{n},\beta_{n},\imath_{n},\jmath_{n},\varsigma_{n},\zeta_{n},\alpha_{n},\kappa_{n},a_{n+1,n}$ be as above, then \begin{equation} \left( \begin{array}{c} a_{2n,2n-1} \vspace{1mm}\\ a_{2n+1,2n} \\ \end{array} \right)=A^{-1}\left( \begin{array}{c} \imath_{n+1}-\frac{1}{4}a_{n}^{-2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \vspace{1mm}\\ \jmath_{n+1}-\frac{1}{4}b_{n}^{-2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big)i \\ \end{array} \right), \end{equation} where $A$ is given by (5.16). \end{thm} \begin{proof} This follows from (5.6), (5.7) and the invertibility of $A$. \end{proof} \begin{center} \begin{tikzpicture} \node (a) at (0,0) {$\varsigma_{n+1}$}; \node (b) at (0,3) {$\imath_{n+1}$}; \node (c) at (4,0) {$\zeta_{n+1}$}; \node (d) at (4,3) {$\jmath_{n+1}$}; \draw (a) -- node[left]{$C$} (b); \draw (a) -- node[below]{$B$} (c); \draw (b) -- node[above]{$A$} (d); \draw (c) -- node[right]{$D$} (d); \draw [dashed] (a) -- node[above,pos=0.7]{$F$} (d); \draw (b) -- node[above,pos=0.3]{$E$} (c); \end{tikzpicture} \end{center} \begin{center} Derivation of $a_{n+1,n}$ in different ways \end{center} \begin{rem} In Theorem 5.3, we derive $a_{2n+1,2n}$ and $a_{2n,2n-1}$ in terms of $\imath_{n+1}$ and $\jmath_{n+1}$. In fact, as shown in the above figure, there are several different ways to deduce them. For instance, just as with $A$, let $B$ be the coefficient matrix for $a_{2n+1,2n}$ and $a_{2n,2n-1}$ in (5.8) and (5.9).
Namely, \begin{equation} B=\left( \begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4i}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)\Big] \vspace{2mm}\\ \frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4i}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i\Big] \\ \end{array} \right). \end{equation} Then we can use $B$ to express $a_{2n+1,2n}$ and $a_{2n,2n-1}$ in terms of $\varsigma_{n+1}$ and $\zeta_{n+1}$. The same applies to $C$, $D$ and $E$ in the solid-line cases shown in the above figure. However, a further condition will be required for $F$ in the dashed-line case. Here $C, D, E, F$ have a similar meaning to $A$ and $B$. These observations are based on the following evaluations of the determinants of these coefficient matrices. \end{rem} \begin{thm} Let $B, C, D, E, F$ be the coefficient matrices stated in the above remark, then \begin{equation} |B|=-\frac{1}{8}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big)i, \end{equation} \begin{equation} |C|=\frac{1}{8}\Lambda_{n}^{-1}a_{n}^{-4}b_{n}^{-2}\alpha_{2n-1}^{-1}(1+\beta_{n}i), \end{equation} \begin{equation} |D|=-\frac{1}{8}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-4}\alpha_{2n-1}^{-1}, \end{equation} \begin{equation} |E|=\frac{1}{8i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}\Big[a_{n}^{-2}+b_{n}^{-2}+a_{n}^{-2}\beta_{n}i\Big] \end{equation} and \begin{equation} |F|=-\frac{1}{8i}\Lambda_{n}^{-1}a_{n}^{-4}b_{n}^{-2}\alpha_{2n-1}^{-1}\beta_{n}(\beta_{n}-i).
\end{equation} \end{thm} \begin{proof} By applying (2.21) and (2.37) as well as basic properties of determinants, we have \begin{align} |B|=&\left|\begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}\vspace{2mm}\\ \frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1} \\ \end{array} \right|\nonumber\\ &+\left|\begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4i}a_{n}^{-2}(1-\beta_{n}i)\vspace{2mm}\\ \frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4}b_{n}^{-2}\\ \end{array} \right|\nonumber\\ =&\frac{1}{16}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big)\left|\begin{array}{cc} i & a_{n}^{-2}(1-\beta_{n}i)\vspace{2mm}\\ (1+\beta_{n}i) & b_{n}^{-2}i \\ \end{array} \right|\nonumber\\ =&-\frac{1}{16}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big)[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}]\nonumber\\ =&-\frac{1}{8}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big)i,\nonumber \end{align} \begin{align} |C|=&\left|\begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)i & -\frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)\Big]\vspace{2mm}\\ \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4i}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)\Big] \\ \end{array} \right|\nonumber\\ =&-\frac{1}{16}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)\Big]\left|\begin{array}{cc} \alpha_{2n-1}^{-1}+1 & 1\vspace{2mm}\\ \alpha_{2n-1}^{-1}-1 & 1 \\ \end{array} \right|\nonumber\\ 
=&-\frac{1}{8}\Lambda_{n}^{-1}a_{n}^{-4}b_{n}^{-2}\alpha_{2n-1}^{-1}\Big[\Lambda_{n}^{-1}b_{n}^{-2}i+\alpha_{2n-1}(1-\beta_{n}i)\Big]\nonumber\\ =&\frac{1}{8}\Lambda_{n}^{-1}a_{n}^{-4}b_{n}^{-2}\alpha_{2n-1}^{-1}(1+\beta_{n}i),\nonumber \end{align} \begin{align} |D|=&\left|\begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big) & -\frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i\Big]\vspace{2mm}\\ \frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4i}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i\Big]\\ \end{array} \right|\nonumber\\ =&-\frac{1}{16i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i\Big]\left|\begin{array}{cc} \alpha_{2n-1}^{-1}+1 & 1\vspace{2mm}\\ \alpha_{2n-1}^{-1}-1 & 1 \\ \end{array} \right|\nonumber\\ =&-\frac{1}{8i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-4}\alpha_{2n-1}^{-1}\Big[\Lambda_{n}^{-1}a_{n}^{-2}(1+\beta_{n}i)+\alpha_{2n-1}i\Big]\nonumber\\ =&-\frac{1}{8}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-4}\alpha_{2n-1}^{-1},\nonumber \end{align} \begin{align} |E|=&\left|\begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)i & -\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i\vspace{2mm}\\ \frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1} \\ \end{array} \right|\nonumber\\ &+\left|\begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)i & -\frac{1}{4}a_{n}^{-2}(1-\beta_{n}i)\vspace{2mm}\\ \frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4}b_{n}^{-2} \\ \end{array} \right|\nonumber \end{align} \begin{align} 
=&-\frac{1}{16}(\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2})^{2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}\left|\begin{array}{cc} \alpha_{2n-1}^{-1}+1 & 1\vspace{2mm}\\ \alpha_{2n-1}^{-1}-1 & 1 \\ \end{array} \right|\nonumber\\ &-\frac{1}{16i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\left|\begin{array}{cc} (\alpha_{2n-1}^{-1}+1)i & a_{n}^{-2}(1-\beta_{n}i)\vspace{2mm}\\ (\alpha_{2n-1}^{-1}-1)(1+\beta_{n}i) & b_{n}^{-2}i \\ \end{array} \right|\nonumber\\ =&-\frac{1}{8}(\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2})^{2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+\frac{1}{16i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}]\nonumber\\ &+\frac{1}{16i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}[b_{n}^{-2}-a_{n}^{-2}(1+\beta_{n}^{2})]\nonumber\\ =&\frac{1}{16i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}\Big\{\kappa_{2n}^{-2}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)+[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}]\nonumber\\ &+\alpha_{2n-1}[b_{n}^{-2}-a_{n}^{-2}(1+\beta_{n}^{2})]\Big\}\nonumber\\ =&\frac{1}{8i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}[a_{n}^{-2}(1+\beta_{n}i)+b_{n}^{-2}]\nonumber \end{align} and \begin{align} |F|=&\left|\begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big) & -\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}\vspace{2mm}\\ \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1} \\ \end{array} \right|\nonumber\\ &+\left|\begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big) & -\frac{1}{4}b_{n}^{-2}i\vspace{2mm}\\ \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4i}a_{n}^{-2}(1-\beta_{n}i)\\ \end{array} \right|\nonumber\\ =&-\frac{1}{16}(\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2})^{2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}\left|\begin{array}{cc} \alpha_{2n-1}^{-1}+1 & 
1\vspace{2mm}\\ \alpha_{2n-1}^{-1}-1 & 1 \\ \end{array} \right|\nonumber\\ &-\frac{1}{16i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\left|\begin{array}{cc} (\alpha_{2n-1}^{-1}+1)(1+\beta_{n}i) & b_{n}^{-2}i\vspace{2mm}\\ (\alpha_{2n-1}^{-1}-1)i & a_{n}^{-2}(1-\beta_{n}i) \\ \end{array} \right|\nonumber\\ =&-\frac{1}{8}(\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2})^{2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}-\frac{1}{16i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}]\nonumber\\ &+\frac{1}{16i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}[b_{n}^{-2}-a_{n}^{-2}(1+\beta_{n}^{2})]\nonumber\\ =&\frac{1}{16i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}\Big\{\kappa_{2n}^{-2}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)-[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}]\nonumber\\ &+\alpha_{2n-1}[b_{n}^{-2}-a_{n}^{-2}(1+\beta_{n}^{2})]\Big\}\nonumber\\ =&-\frac{1}{8i}\Lambda_{n}^{-1}a_{n}^{-4}b_{n}^{-2}\alpha_{2n-1}^{-1}\beta_{n}(\beta_{n}-i).\nonumber \end{align} Thus the proof is complete. \end{proof} \begin{cor} Let $B, C, D, E, F$ be the coefficient matrices as above, then $B, C, D, E$ are invertible for $\beta_{n}\in \mathbb{R}$, whereas $F$ is invertible for $\beta_{n}\in \mathbb{R}\setminus\{0\}$. 
\end{cor} By Theorems 5.2 and 5.3 as well as Corollary 5.6, we have \begin{thm} Let $A, B, C, D, E, F$ be the coefficient matrices as above, then \begin{align} &A^{-1}\left( \begin{array}{c} \imath_{n+1}-\frac{1}{4}a_{n}^{-2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \vspace{1mm}\\ \jmath_{n+1}-\frac{1}{4}b_{n}^{-2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big)i \\ \end{array} \right)\nonumber\\ =&B^{-1}\left( \begin{array}{c} \varsigma_{n+1}+\frac{1}{4i}a_{n}^{-2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \vspace{1mm}\\ \zeta_{n+1}+\frac{1}{4}b_{n}^{-2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \\ \end{array} \right)\nonumber\\ =&C^{-1}\left( \begin{array}{c} \imath_{n+1}-\frac{1}{4}a_{n}^{-2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \vspace{1mm}\\ \varsigma_{n+1}+\frac{1}{4i}a_{n}^{-2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \\ \end{array} \right)\nonumber\\ =&D^{-1}\left( \begin{array}{c} \jmath_{n+1}-\frac{1}{4}b_{n}^{-2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big)i \vspace{1mm}\\ \zeta_{n+1}+\frac{1}{4}b_{n}^{-2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \\ \end{array} \right)\nonumber\\ =&E^{-1}\left( \begin{array}{c} \imath_{n+1}-\frac{1}{4}a_{n}^{-2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \vspace{1mm}\\ \zeta_{n+1}+\frac{1}{4}b_{n}^{-2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \\ \end{array} \right) \end{align} for $\beta_{n}\in \mathbb{R}$. 
Moreover, any term in the above identities is equal to \begin{equation} F^{-1}\left( \begin{array}{c} \jmath_{n+1}-\frac{1}{4}b_{n}^{-2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big)i \vspace{1mm}\\ \varsigma_{n+1}+\frac{1}{4i}a_{n}^{-2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \\ \end{array} \right) \end{equation} as $\beta_{n}\neq 0$. \end{thm} Let \begin{equation} \gamma=\left( \begin{array}{c} \frac{1}{4i}a_{n}^{-2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \vspace{1mm}\\ \frac{1}{4}b_{n}^{-2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \\ \end{array} \right) \end{equation} and \begin{equation} \eta=\left( \begin{array}{c} \frac{1}{4}a_{n}^{-2}(1-\beta_{n}i)\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big) \vspace{1mm}\\ \frac{1}{4}b_{n}^{-2}\overline{\alpha}_{2n}^{-1}\Big(\kappa_{2n}^{-2}-\kappa_{2n+1}^{-2}\Big)i \\ \end{array} \right), \end{equation} then by the first identity in (5.25), \begin{equation} B\left( \begin{array}{c} \imath_{n+1} \\ \jmath_{n+1} \\ \end{array} \right)-A\left( \begin{array}{c} \varsigma_{n+1} \\ \zeta_{n+1} \\ \end{array} \right)=A\gamma+B\eta, \end{equation} where $A$ and $B$ are given by (5.16) and (5.19) respectively. By (2.23), we have \begin{align} \left( \begin{array}{c} \alpha_{2n} \vspace{1mm}\\ \overline{\alpha}_{2n} \\ \end{array} \right)=P\left( \begin{array}{c} \imath_{n+1} \vspace{1mm}\\ \jmath_{n+1} \\ \end{array} \right)+Q \left( \begin{array}{c} \varsigma_{n+1} \vspace{1mm}\\ \zeta_{n+1} \\ \end{array} \right), \end{align} where \begin{equation} P=\left( \begin{array}{cc} \frac{1+\beta_{n}i}{2} & -\frac{i}{2} \vspace{1mm}\\ \frac{1-\beta_{n}i}{2} & \frac{i}{2} \\ \end{array} \right)\,\,\,\mbox{and}\,\,\,Q=\left( \begin{array}{cc} \frac{\beta_{n}-i}{2} & -\frac{1}{2} \vspace{1mm}\\ \frac{\beta_{n}+i}{2} & -\frac{1}{2} \\ \end{array} \right). 
\end{equation} Denote \begin{equation} D=\left( \begin{array}{cc} P & Q \\ B & -A \\ \end{array} \right)\,\,\,\mbox{and}\,\,\,\alpha=\left( \begin{array}{c} \alpha_{2n} \\ \overline{\alpha}_{2n} \\ \end{array} \right), \end{equation} then we have the following theorem. \begin{thm} With the above notation, \begin{equation} \left( \begin{array}{c} \imath_{n+1} \\ \jmath_{n+1} \\ \varsigma_{n+1} \\ \zeta_{n+1} \\ \end{array} \right)=D^{-1}\left( \begin{array}{c} \alpha \\ A\gamma+B\eta \\ \end{array} \right), \end{equation} where $D^{-1}$ is the inverse matrix of $D$. \end{thm} \begin{proof} By (5.29) and (5.30), \begin{equation} D\left( \begin{array}{c} \imath_{n+1} \\ \jmath_{n+1} \\ \varsigma_{n+1} \\ \zeta_{n+1} \\ \end{array} \right)=\left( \begin{array}{cc} P & Q \\ B & -A \\ \end{array} \right)\left( \begin{array}{c} \imath_{n+1} \\ \jmath_{n+1} \\ \varsigma_{n+1} \\ \zeta_{n+1} \\ \end{array} \right)=\left( \begin{array}{c} \alpha \\ A\gamma+B\eta \\ \end{array} \right). \end{equation} So it is enough to show that $|D|\neq0$ in order to get (5.33). This follows from the Laplace expansion theorem by simple calculations on some of its subdeterminants. More precisely, \begin{itemize} \item [Case 1.] \begin{equation} |P|=\left| \begin{array}{cc} \frac{1+\beta_{n}i}{2} & -\frac{i}{2} \vspace{1mm}\\ \frac{1-\beta_{n}i}{2} & \frac{i}{2} \\ \end{array} \right|=\frac{i}{2}\frac{1+\beta_{n}i}{2}+\frac{i}{2}\frac{1-\beta_{n}i}{2}=\frac{i}{2} \end{equation} and \begin{equation} (-1)^{1+2+1+2}|-A|=|A|. \end{equation} \item [Case 2.] \begin{equation} |Q|=\left| \begin{array}{cc} \frac{\beta_{n}-i}{2} & -\frac{1}{2} \vspace{1mm}\\ \frac{\beta_{n}+i}{2} & -\frac{1}{2} \\ \end{array} \right|=-\frac{1}{2}\frac{\beta_{n}-i}{2}+\frac{1}{2}\frac{\beta_{n}+i}{2}=\frac{i}{2} \end{equation} and \begin{equation} (-1)^{1+2+3+4}|B|=|B|. \end{equation} \item [Case 3.]
\begin{equation} \left| \begin{array}{cc} \frac{1+\beta_{n}i}{2} & \frac{\beta_{n}-i}{2} \vspace{1mm}\\ \frac{1-\beta_{n}i}{2} & \frac{\beta_{n}+i}{2} \\ \end{array} \right|=\frac{1+\beta_{n}i}{2}\frac{\beta_{n}+i}{2}-\frac{\beta_{n}-i}{2}\frac{1-\beta_{n}i}{2}=\frac{1+\beta_{n}^{2}}{2}i \end{equation} and \begin{align} &(-1)^{1+2+1+3}\left| \begin{array}{cc} -\frac{1}{4i}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)\Big] & \frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)\Big] \vspace{1mm}\\ -\frac{1}{4i}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i\Big] & \frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i\Big] \\ \end{array} \right|\nonumber\\ &=0. \end{align} \item [Case 4.] \begin{equation}\left| \begin{array}{cc} \frac{1+\beta_{n}i}{2} & -\frac{1}{2} \vspace{1mm}\\ \frac{1-\beta_{n}i}{2} & -\frac{1}{2} \\ \end{array} \right|=-\frac{1}{2}\frac{1+\beta_{n}i}{2}+\frac{1}{2}\frac{1-\beta_{n}i}{2}=-\frac{\beta_{n}}{2}i \end{equation} and \begin{align} &(-1)^{1+2+1+4}\left| \begin{array}{cc} -\frac{1}{4i}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)\Big] & -\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)i \vspace{1mm}\\ -\frac{1}{4i}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i\Big] & -\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big) \\ \end{array} \right|\nonumber\\ =&-|A|i. \end{align} \item [Case 5.] 
\begin{equation} \left| \begin{array}{cc} -\frac{i}{2} & \frac{\beta_{n}-i}{2} \vspace{1mm}\\ \frac{i}{2} & \frac{\beta_{n}+i}{2} \\ \end{array} \right|=-\frac{i}{2}\frac{\beta_{n}+i}{2}-\frac{\beta_{n}-i}{2}\frac{i}{2}=-\frac{\beta_{n}}{2}i \end{equation} and \begin{align} &(-1)^{1+2+2+3}\left| \begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big) & \frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)\Big] \vspace{1mm}\\ \frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}-1\Big) & \frac{1}{4}\Big[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i\Big] \\ \end{array} \right|\nonumber\\ =&-|B|i. \end{align} \item [Case 6.] \end{itemize} \begin{equation} \left| \begin{array}{cc} -\frac{i}{2} & -\frac{1}{2} \vspace{1mm}\\ \frac{i}{2} & -\frac{1}{2} \\ \end{array} \right|=\frac{i}{2}\frac{1}{2}+\frac{i}{2}\frac{1}{2}=\frac{i}{2} \end{equation} and \begin{align} &(-1)^{1+2+2+4}\left| \begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)i \vspace{1mm}\\ \frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big) \\ \end{array} \right|=0. \end{align} So \begin{align} |D|=&\frac{i}{2}(|A|+|B|)-\frac{\beta_{n}}{2}i(-|A|i-|B|i)=(|A|+|B|)\frac{i-\beta_{n}}{2}\nonumber\\ =&(|A|+|B|)i\frac{1+\beta_{n}i}{2}=-\frac{1}{8}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\neq 0. 
\end{align} Thus \begin{equation} \left( \begin{array}{c} \imath_{n+1} \\ \jmath_{n+1} \\ \varsigma_{n+1} \\ \zeta_{n+1} \\ \end{array} \right)=D^{-1}\left( \begin{array}{c} \alpha \\ A\gamma+B\eta \\ \end{array} \right).\nonumber\vspace{-5mm} \end{equation} \end{proof} \begin{rem} As in Remark 2.10, we can directly obtain (5.33) by using (2.21)-(2.24) and (2.37) together. \end{rem} With the above preliminaries, we get the following strong Favard theorem. \begin{thm} Let $\{(a_{n}^{(0)}, b_{n}^{(0)},\beta_{n}^{(0)},\imath_{n}^{(0)},\jmath_{n}^{(0)},\varsigma_{n}^{(0)},\zeta_{n}^{(0)})\}$ with $a_{0}^{(0)}=b_{0}^{(0)}=1$ and $\beta_{0}^{(0)}=0$ be a system of seven-tuples of real numbers satisfying \begin{itemize} \item [(1)] \begin{align} &0<(\imath_{n+1}^{(0)}+\beta_{n}^{(0)}\varsigma_{n+1}^{(0)}-\zeta_{n+1}^{(0)})^{2}+(\jmath_{n+1}^{(0)}-\imath_{n+1}^{(0)}\beta_{n}^{(0)} +\varsigma_{n+1}^{(0)})^{2}\nonumber\\ =&1-\frac{1}{4}[(a_{n}^{(0)})^{2}+(b_{n}^{(0)})^{2}(1+(\beta_{n}^{(0)})^{2})] [(a_{n+1}^{(0)})^{2}+(b_{n+1}^{(0)})^{2}(1+(\beta_{n+1}^{(0)})^{2})]<1; \end{align} \item [(2)] \begin{equation} \beta_{n}^{(0)}\neq0\,\,\,\,\mbox{or}\,\,\,\,\frac{(a_{n}^{(0)})^{2}}{(b_{n}^{(0)})^{2}}+(\beta_{n}^{(0)})^{2}\neq1 \end{equation} \end{itemize} for $n\in \mathbb{N}$ with $a_{n}^{(0)},b_{n}^{(0)}>0$ for $n\in \mathbb{N}\cup\{0\}$. Then there exists a unique nontrivial probability measure $d\mu$ on $\partial \mathbb{D}$ such that $a_{n}(d\mu)=a_{n}^{(0)}$, $b_{n}(d\mu)=b_{n}^{(0)}$, $\beta_{n}(d\mu)=\beta_{n}^{(0)}$, $\imath_{n}(d\mu)=\imath_{n}^{(0)}$, $\jmath_{n}(d\mu)=\jmath_{n}^{(0)}$, $\varsigma_{n}(d\mu)=\varsigma_{n}^{(0)}$ and $\zeta_{n}(d\mu)=\zeta_{n}^{(0)}$, where $a_{n}(d\mu),b_{n}(d\mu),\beta_{n}(d\mu),\imath_{n}(d\mu),\jmath_{n}(d\mu),\varsigma_{n}(d\mu),\zeta_{n}(d\mu)$ are associated coefficients of $d\mu$ defined by (2.11)-(2.15). 
\end{thm} \begin{proof} Let \begin{equation} \kappa_{2n}^{(0)}=\frac{1}{2}\Big[(a_{n}^{(0)})^{-2}\big(1+(\beta_{n}^{(0)})^{2}\big)+(b_{n}^{(0)})^{-2}\Big]^{\frac{1}{2}}, \end{equation} \begin{equation} \kappa_{2n+1}^{(0)}=\Big[(a_{n+1}^{(0)})^{2}+(b_{n+1}^{(0)})^{2}\big(1+(\beta_{n+1}^{(0)})^{2}\big)\Big]^{-\frac{1}{2}}, \end{equation} \begin{equation} \alpha_{2n-1}^{(0)}=\frac{1}{4}(\kappa_{2n}^{(0)})^{-2}\Big[(b_{n}^{(0)})^{-2}-(a_{n}^{(0)})^{-2}\big(1-(\beta_{n}^{(0)})^{2}\big)\Big] -\frac{1}{2}(\kappa_{2n}^{(0)})^{-2}(a_{n}^{(0)})^{-2} (\beta_{n}^{(0)})i \end{equation} and \begin{equation} \alpha_{2n}^{(0)}=\frac{1}{2}(\imath_{n+1}^{(0)}+\beta_{n}^{(0)}\varsigma_{n+1}^{(0)}-\zeta_{n+1}^{(0)})-\frac{i}{2}(\jmath_{n+1}^{(0)}-\imath_{n+1}^{(0)}\beta_{n}^{(0)} +\varsigma_{n+1}^{(0)}) \end{equation} for $n\in \mathbb{N}\cup\{0\}$, then by (5.48), Verblunsky theorem and Theorem 3.1, there exists a unique nontrivial probability measure $d\mu$ on $\partial \mathbb{D}$ such that \begin{equation}\alpha_{n}(d\mu)=\alpha_{n}^{(0)}\,\,\,\mbox{and}\,\,\,\kappa_{n}(d\mu)=\kappa_{n}^{(0)}\end{equation} as well as \begin{equation}a_{n}(d\mu)=a_{n}^{(0)},\,\,b_{n}(d\mu)=b_{n}^{(0)}\,\,\,\mbox{and}\,\,\,\beta_{n}(d\mu)=\beta_{n}^{(0)}\end{equation} for $n\in \mathbb{N}\cup\{0\}$. 
On one hand, noting (5.49), by Theorem 5.8, we have \begin{equation} \left( \begin{array}{c} \imath_{n+1}(d\mu) \vspace{1mm}\\ \jmath_{n+1}(d\mu) \vspace{1mm}\\ \varsigma_{n+1}(d\mu) \vspace{1mm}\\ \zeta_{n+1}(d\mu) \\ \end{array} \right)=D(d\mu)^{-1}\left( \begin{array}{c} \alpha(d\mu) \\ A(d\mu)\gamma(d\mu)+B(d\mu)\eta(d\mu) \\ \end{array} \right) \end{equation} for $n\in \mathbb{N}\cup\{0\}$, where $A(d\mu), B(d\mu), \gamma(d\mu), \eta(d\mu), D(d\mu), \alpha(d\mu)$ are respectively given by (5.16), (5.19), (5.27), (5.28) and (5.32) with $a_{n}(d\mu), b_{n}(d\mu),\beta_{n}(d\mu), \alpha_{n}(d\mu),$ $\kappa_{n}(d\mu), \Lambda_{n}(d\mu)$ replacing $a_{n}, b_{n},\beta_{n}, \alpha_{n}, \kappa_{n}, \Lambda_{n}$. On the other hand, as in Remark 5.9, by directly invoking (5.50)-(5.53) and \begin{equation}\Lambda_{n}^{(0)}=-\frac{1}{2}\Big[(a_{n}^{(0)})^{-2}\big(1+(\beta_{n}^{(0)})^{2}\big)+(b_{n}^{(0)})^{-2}\Big]i,\end{equation} we have \begin{equation} \left( \begin{array}{c} \imath_{n+1}^{(0)} \vspace{1mm}\\ \jmath_{n+1}^{(0)} \vspace{1mm}\\ \varsigma_{n+1}^{(0)} \vspace{1mm}\\ \zeta_{n+1}^{(0)} \vspace{1mm}\\ \end{array} \right)=(D^{(0)})^{-1}\left( \begin{array}{c} \alpha^{(0)} \\ A^{(0)}\gamma^{(0)}+B^{(0)}\eta^{(0)} \\ \end{array} \right) \end{equation} for $n\in \mathbb{N}\cup\{0\}$, where $A^{(0)}, B^{(0)}, \gamma^{(0)}, \eta^{(0)}, D^{(0)}, \alpha^{(0)}$ are respectively given by (5.16), (5.19), (5.27), (5.28) and (5.32) with $a_{n}^{(0)}, b_{n}^{(0)},\beta_{n}^{(0)}, \alpha_{n}^{(0)},$ $\kappa_{n}^{(0)}, \Lambda_{n}^{(0)}$ replacing $a_{n}, b_{n},\beta_{n}, \alpha_{n}, \kappa_{n}, \Lambda_{n}$. 
In terms of (2.37), (5.16), (5.19), (5.27), (5.28), (5.32), (5.54), (5.55) and (5.57), one can easily find that \begin{equation*}D(d\mu)=D^{(0)},A(d\mu)=A^{(0)}, B(d\mu)=B^{(0)}, \alpha(d\mu)=\alpha^{(0)},\gamma(d\mu)=\gamma^{(0)},\eta(d\mu)=\eta^{(0)}.\end{equation*} Thus, by (5.56) and (5.58), \begin{equation*}\imath_{n}(d\mu)=\imath_{n}^{(0)}, \jmath_{n}(d\mu)=\jmath_{n}^{(0)}, \varsigma_{n}(d\mu)=\varsigma_{n}^{(0)}\,\,\,\mbox{and}\,\,\, \zeta_{n}(d\mu)=\zeta_{n}^{(0)}\end{equation*} for $n\in \mathbb{N}$. \end{proof} In addition to the above strong Favard theorem, by Theorem 5.5 and (5.17), we also have the following result on the determinants of the coefficient matrices $A, B, C, D, E$ and $F$. \begin{thm} Let $A, B, C, D, E, F$ be the coefficient matrices as above. Then \begin{equation} |A||B|-|C||D|(1+\beta_{n}i)+|E||F|=0. \end{equation} \end{thm} \begin{proof} Note that \begin{align} FE^{-1}=&|E|^{-1}S, \end{align} where \begin{align} S=&\left(\begin{array}{cc} \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big) & -\frac{1}{4}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i]\vspace{2mm}\\ \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big) & -\frac{1}{4i}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)] \\ \end{array} \right)\nonumber\\ &\left(\begin{array}{cc} -\frac{1}{4i}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i] & \frac{1}{4}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)]\vspace{2mm}\\ -\frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}-1\Big) & \frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)i\\ \end{array} \right)\nonumber\\ \triangleq&\left( \begin{array}{cc} s_{11} & s_{12} \\ s_{21} & s_{22} \\ \end{array} \right) \end{align} with \begin{align} 
s_{11}=&\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big) \Big\{-\frac{1}{4i}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i]\Big\}\nonumber\\ &-\frac{1}{4}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i]\Big\{-\frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}-1\Big) \Big\}\nonumber\\ =&-\frac{1}{8i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i]\nonumber\\ =&-\frac{1}{8}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-4}\alpha_{2n-1}^{-1}(1+\beta_{n}i)=|D|(1+\beta_{n}i), \end{align} \begin{align} s_{12}=&\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}+1\Big) \Big\{\frac{1}{4}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)]\Big\}\nonumber\\ &-\frac{1}{4}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i]\Big\{\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)i\Big\}\nonumber\\ =&\frac{1}{16}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}]\Big(\alpha_{2n-1}^{-1}+1\Big)\nonumber\\ =&\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\kappa_{2n}^{2}\Big(\alpha_{2n-1}^{-1}+1\Big)=|A|, \end{align} \begin{align} s_{21}=&\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big) \Big\{-\frac{1}{4i}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\alpha_{2n-1}^{-1}+b_{n}^{-2}i]\Big\}\nonumber\\ &-\frac{1}{4i}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)]\Big\{-\frac{1}{4i}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}(1+\beta_{n}i)\Big(\alpha_{2n-1}^{-1}-1\Big)\Big\}\nonumber\\ =&-\frac{1}{16}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}[a_{n}^{-2}(1+\beta_{n}^{2})+b_{n}^{-2}]\Big(\alpha_{2n-1}^{-1}-1\Big)\nonumber\\ 
=&-\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\kappa_{2n}^{2}\Big(\alpha_{2n-1}^{-1}-1\Big)=|B|, \end{align} and \begin{align} s_{22}=&\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}-1\Big) \Big\{\frac{1}{4}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)]\Big\}\nonumber\\ &-\frac{1}{4i}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)]\Big\{\frac{1}{4}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\Big(\alpha_{2n-1}^{-1}+1\Big)i\Big\}\nonumber\\ =&-\frac{1}{8}\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}[\Lambda_{n}^{-1}a_{n}^{-2}b_{n}^{-2}\alpha_{2n-1}^{-1}i+a_{n}^{-2}(1-\beta_{n}i)]\nonumber\\ =&\frac{1}{8}\Lambda_{n}^{-1}a_{n}^{-4}b_{n}^{-2}\alpha_{2n-1}^{-1}(1+\beta_{n}i)=|C|. \end{align} The last steps in the above identities for all the entries of $S$ hold by (5.17) and Theorem 5.5. So \begin{align} FE^{-1}=|E|^{-1}\left( \begin{array}{cc} |D|(1+\beta_{n}i) & |A| \\ |B| & |C| \\ \end{array} \right). \end{align} Taking determinants on both sides of (5.66) gives $|F||E|^{-1}=|E|^{-2}\big(|C||D|(1+\beta_{n}i)-|A||B|\big)$, which is exactly (5.59). \end{proof} \begin{rem} In fact, one can get (5.59) by directly invoking (5.20)-(5.24) and (5.17). However, in that approach the identity (5.66) would not be seen. \end{rem} \bibliographystyle{amsplain}
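As a sanity check on the proof that $|D|\neq0$, the five scalar $2\times2$ subdeterminants computed in Cases 1, 2, 4, 5 and 6 above can be verified numerically for a sample value of $\beta_{n}$. This is only an illustrative spot-check (the helper `det2` and the sample value are ours), not part of the proof:

```python
# Numerical spot-check of the scalar 2x2 subdeterminants in Cases 1, 2, 4, 5, 6.
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

b = 0.7   # arbitrary sample value of beta_n
i = 1j

cases = {
    "P (Case 1)": ([[(1 + b*i)/2, -i/2], [(1 - b*i)/2,  i/2]],  i/2),
    "Q (Case 2)": ([[(b - i)/2,  -1/2], [(b + i)/2,   -1/2]],   i/2),
    "Case 4":     ([[(1 + b*i)/2, -1/2], [(1 - b*i)/2, -1/2]], -b*i/2),
    "Case 5":     ([[-i/2, (b - i)/2],  [ i/2, (b + i)/2]],    -b*i/2),
    "Case 6":     ([[-i/2, -1/2],       [ i/2, -1/2]],          i/2),
}
for name, (m, expected) in cases.items():
    assert abs(det2(m) - expected) < 1e-12, name

# the simplification used in the final step for |D|:
# (i - beta)/2 == i*(1 + beta*i)/2
assert abs((i - b)/2 - i*(1 + b*i)/2) < 1e-12
```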
\section{Reionization, Reheating, and Feedback} The Gunn-Peterson limit implies that the IGM was reionized and reheated by $z\sim5$ by energy released from objects which had previously condensed out of the background and formed stars, AGNs, or other sources. This exerted a negative feedback on the rate of collapse of gas out of the background IGM by raising the Jeans mass there and affected the appearance and evolution of the IGM and the structure which subsequently collapsed out of it. How early could starlight have reionized the IGM? Shapiro and Giroux attempted to answer this question by solving linear equations for the growth of density fluctuations in the IGM and using a Press-Schechter approach to determine the baryon fraction which had collapsed out at any epoch, releasing energy to reheat and reionize the IGM. This calculation was coupled to a detailed numerical evolution of the thermal and ionization balance of a coarse-grained, spatially-averaged IGM and of the equation of radiative transfer for the ionizing radiation background, including the opacity of the observed quasar absorption line gas, as described in \cite{X}, \cite{X2}, and \cite{X3}. The maximum-possible efficiency was assumed for energy release by massive stars, which form at a rate proportional to the rate of collapse of baryons out of the IGM and stop forming when they have enriched the collapsed fraction with a solar abundance of heavy elements, in a flat, COBE-normalized, standard CDM model ($\Omega_B=0.06$, $h=0.5$, $\Omega=1$). The IGM was found to reheat by $z_h\sim65$ to $T_{\rm IGM}\sim10^{3.5}{\rm K}$, when the collapsed baryon fraction was only $f_{\rm coll}\approx10^{-4}$, while H atom ionization breakthrough occurred at $z_b\sim50$, when $f_{\rm coll}\sim10^{-3}$, implying a net metallicity averaged over all baryons in the universe at these epochs equal to these values of $f_{\rm coll}$, in solar units \cite{X3}. 
Since the COBE-normalization is almost twice that typically assumed in recent simulations of the Lyman alpha forest and galaxy formation, our results are {\it conservative} in the sense of maximizing $z_b$; a lower initial amplitude would make reionization and reheating occur a little later than this. In my talk, I also presented ASPH simulations by Shapiro and Martel which demonstrated that global reheating can significantly affect the small-scale structure formation responsible for quasar absorption line gas (\cite{X3},\cite{X4},\cite{X5}). This contribution is too brief to present that material, but I refer the reader to \cite{X3} for a summary. Here we will focus, instead, on new work on the effects of photoionization. \section{The Photoevaporation of Intergalactic Clouds} The first sources of ionizing radiation which turned on in the neutral (i.e. postrecombination) IGM prior to $z\sim5$ resulted in isolated, expanding H~II regions. The expansion and eventual overlap of the weak, R-type cosmological ionization fronts bounding these H~II regions was previously described analytically by treating the IGM as a uniform, cosmologically expanding gas (\cite{X6},\cite{X7}). The density fluctuations required to explain galaxy formation and the Lyman alpha forest were accounted for approximately by treating the IGM as ``clumpy,'' with a universal clumping factor. That approximation is correct in the limit in which the clumps are either too small to ``self-shield'' or else constitute only a small fraction of the total mass inside the H~II region. It does not, however, address the possible dynamical consequences for the clumps themselves of the passage of I-fronts. In what follows, we present the first simulations of the gas dynamics and radiative transfer of an intergalactic cloud overtaken by a cosmological I-front. 
The fate of this cloud depends fundamentally on whether or not it can shield itself against the incident radiation from the external source responsible for the intergalactic I-front. If the cloud size exceeds the ``Str\"omgren length'' (the length of a column of gas within which the unshielded arrival rate of ionizing photons just balances the total recombination rate), it can trap the I-front. In that case, the weak R-type I-front which swept into the cloud initially, moving supersonically with respect to the gas both ahead of and behind it, decelerates to the sound speed of the ionized gas before it can exit the cloud, thereby becoming a weak, D-type front preceded by a shock. Typically, the side of the cloud which faces the radiation source expels a supersonic wind which causes the remaining cloud material to be accelerated away from the source by the so-called ``rocket effect'' as the cloud photoevaporates (cf.\ \cite{X8}). For a uniform gas of H density $n_{\rm H,c}$, located a distance $r_{\rm Mpc}$ (in Mpc) from a UV source emitting $N_{\rm ph,56}$ ionizing photons per second (in units of $10^{56}{\rm s}^{-1}$), the Str\"omgren length is only $\ell_{\rm S}\cong(50\,{\rm pc})(N_{\rm ph,56}/r_{\rm Mpc}^2) (n_{\rm H,c}/0.1\,{\rm cm}^{-3})^{-2}$. Gas bound to dark matter halos whose circular velocity is less than $10\,{\rm km\,s^{-1}}$ will photoevaporate unimpeded by gravity. For larger halos, gravity competes with the effects of photoevaporation. 
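The Str\"omgren-length scaling quoted above can be evaluated directly for the fiducial cloud and quasar parameters of the simulation described below. This is only an illustrative sketch (the helper name is ours):

```python
# Evaluate l_S ~ 50 pc * (N_ph,56 / r_Mpc^2) * (n_H,c / 0.1 cm^-3)^-2
# for the quasar/cloud parameters quoted in the text.
def stromgren_length_pc(N_ph_56=1.0, r_mpc=1.0, n_H=0.1):
    """Stromgren length in pc for the scaling relation in the text."""
    return 50.0 * (N_ph_56 / r_mpc**2) * (n_H / 0.1)**-2

l_S = stromgren_length_pc()  # N_ph = 1e56 /s, r = 1 Mpc, n_H,c = 0.1 cm^-3
R_c = 500.0                  # cloud radius in pc (0.5 kpc)
assert l_S < R_c             # l_S = 50 pc << R_c, so the cloud traps the I-front
```

Since $\ell_S\propto n_{\rm H,c}^{-2}$, doubling the cloud density shortens the shielding column by a factor of four.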
As a first study of these important effects, we have simulated the photoevaporation of a uniform, spherical, neutral, intergalactic cloud of gas mass $1.5\times10^6M_\odot$, radius $R_c=0.5\,\rm kpc$, density $n_{\rm H,c}=0.1\,{\rm cm^{-3}}$ and $T=100\,\rm K$, in which self-gravity is unimportant, located $1\,\rm Mpc$ from a quasar with emission spectrum $F_\nu\propto\nu^{-1.8}$ ($\nu>\nu_{\rm H}$) and $N_{\rm ph}=10^{56}{\rm s}^{-1}$, initially in pressure balance with an ambient IGM of density $0.001\,\rm cm^{-3}$ which at time $t=0$ has just been photoionized by the passage of the intergalactic R-type I-front generated when the quasar turned on. Apart from H and He, the cloud also contains heavy elements at $10^{-3}$ times the solar abundance. Our 2D, axisymmetric simulations use an Eulerian hydro code (called CORAL), with Adaptive Mesh Refinement and a Riemann solver based on the Van~Leer flux-splitting algorithm, which solves nonequilibrium ionization rate equations (for H, He, C, N, O, Ne, and~S) and includes an explicit treatment of radiative transfer which takes account of the bound-free opacity of H and He (\cite{X9},\cite{X10},\cite{X11}). Our grid size in $(r,z)$ was $128\times512$~cells (fully refined). Figure~1 shows the structure of the cloud $50\,\rm Myr$ after it was overtaken by the quasar's I-front as it swept past the cloud in the IGM. Since $\ell_S\ll R_c$ initially, the cloud traps the I-front, as described above, and drives a supersonic wind from the surface facing the quasar. It takes more than $100\,\rm Myr$ to evaporate the cloud, accelerating it to tens of $\rm km\,s^{-1}$ in the process. Figure~2 shows selected observable diagnostics, including the column densities of H~I, He~I and II seen along the symmetry axis at different times and the spatial variation of the relative abundances of selected metal ions at $50\,\rm Myr$. 
The cloud starts as a high-column-density Lyman Limit absorber, but ends with the H~I column density of a Lyman alpha forest cloud, with $\rm[He\,II]/[H\,I]\sim10^2$ and metal ions. \begin{figure} \centerline{\vbox{ \hbox{\hskip0.9cm\psfig{figure=shapiro1a.ps,height=7.cm}} \vskip-1cm \hbox{\psfig{figure=shapiro1b.ps,width=10.cm}} }} \vskip-3cm \caption[] { THE PHOTOEVAPORATION OF AN INTERGALACTIC GAS CLOUD BY IONIZING RADIATION FROM A NEARBY QUASAR. Results $50\,\rm Myr$ after the quasar, located $1\,\rm Mpc$ away along the x-axis to the left of the computational box, first turns on. (a) (upper box) Shaded isodensity contours with logarithmic spacing, of the total atomic (HI) density $n$ (highest = white, lowest = black). (b) (lower panels) Contour plots of atomic density (upper), logarithmically spaced, and (lower) velocity arrows, plotted for velocities larger than 5 km/s, with length proportional to velocity. The solid line shows the current extent of the original cloud matter and the dashed line is the I-front (50\% H ionization contour). } \end{figure} \begin{figure} \centerline{\vbox{ \psfig{figure=shapiro2.ps,height=16.cm} }} \vskip-1cm \caption[]{PHOTOEVAPORATING CLOUD: OBSERVATIONAL DIAGNOSTICS. (a) Column densities of H~I (top panels) and of He~I (solid) and He~II (dotted) (middle panels) versus velocity as measured along the $x$-axis at $r=0$. Each box is labelled with the time (in Myr) since the QSO turned on. (b) (bottom panels) Carbon and oxygen ionic fractions along this symmetry axis at $t=50\,\rm Myr$. } \end{figure} \acknowledgements{This work was supported by NASA Grant NAG5-2785 and NSF Grant ASC-9504046, and was made possible by a UT Dean's Fellowship and a National Chair of Excellence, UNAM, Mexico in 1997 for PRS. PRS is also grateful to Hugo Martel and Mark Giroux for their collaboration in the work referred to in \S1, which space did not permit us to include here.} \begin{iapbib}{99}{ \bibitem{X} Shapiro, P. R., Giroux, M. L., \& Babul, A. 
1994, ApJ, 427, 25. \bibitem{X2} Giroux, M. L., \& Shapiro, P. R. 1996, ApJ Suppl., 102, 191. \bibitem{X3} Shapiro, P. R. 1995, in {\it The Physics of the Interstellar Medium}, eds. A. Ferrara, C. F. McKee, C. Heiles, and P. R. Shapiro (ASP Conf. Vol. 80), 55--97. \bibitem{X4} Shapiro, P. R., \& Martel, H. 1995, in {\it Dark Matter}, eds. S. S. Holt and C. L. Bennett (AIP Conf. Proc. 336), pp.~446--449. \bibitem{X5} Shapiro, P. R., \& Martel, H. 1997, in preparation. \bibitem{X6} Shapiro, P. R. 1986, PASP, 98, 1014. \bibitem{X7} Shapiro, P. R., \& Giroux, M. L. 1987, ApJ, 321, L107. \bibitem{X8} Spitzer, L. 1978, {\it Physical Processes in the Interstellar Medium} (Wiley). \bibitem{X9} Mellema, G., Raga, A. C., Canto, J., Lundqvist, P., Balick, B., Steffen, W., \& Noriega-Crespo, A. 1997, A\&A, submitted. \bibitem{X10} Raga, A. C., Mellema, G., \& Lundqvist, P. 1997, ApJ Suppl., 109, 517. \bibitem{X11} Raga, A. C., Taylor, S. D., Cabrit, S., \& Biro, S. 1995, A\&A, 296, 833. } \end{iapbib} \vfill \end{document}
\section{Introduction} \label{sec:introduction} \input{introduction} \section{Tracker Description} \label{sec:tracker_description} \input{tracker_description} \section{Data Samples} \label{sec:data_samples} \input{data_samples} \section{Tracker Commissioning} \label{sec:commissioning} The following two subsections describe the operating characteristics and performance of the silicon pixel and silicon strip detectors, respectively. \subsection{Silicon Pixel Detector} \label{sec:pixels} \input{pixels} \subsection{Silicon Strip Detector} \label{sec:strips} \input{strips} \section{Track Reconstruction} \label{sec:reconstruction} \input{reconstruction} \section{Tracking Performance} \label{sec:tracking} \input{tracking} \section{Conclusion} \label{sec:conclusion} \input{conclusion} \section*{Acknowledgements} \label{sec:acknowledgments} \input{acknowledgments} \subsubsection{Operating Conditions} In order to make maximal use of experience gained from the operation of the pixel detector with cosmic rays during summer/autumn 2009, the operating conditions were not changed for the December 2009 data taking period. The coolant temperature was kept constant at 7$^\circ$C. The bias potential applied to the 285\micron thick p-spray barrel sensors \cite{Allkofer:2007ek} was a uniform 150~V. The bias potential applied to the 270\micron thick p-stop endcap sensors \cite{Arndt:2003ck} was a uniform 300~V. Small fractions of the barrel (1.0\%) and endcap (3.1\%) detectors were inactive, resulting in a net operational fraction of 98.4\% for the entire detector. The calibration procedures described in Ref.~\cite{craft_pixel_paper} were used to determine the ADC gains and pedestals for all channels. 
Iterative tuning reduced the mean (spread) of the readout threshold distributions for the pixel Readout Chips (ROCs) from the values measured during the 2008 cosmic ray commissioning \cite{craft_pixel_paper} to 2733\,$e$ (196\,$e$) in the barrel detector and 2483\,$e$ (163\,$e$) in the endcap detectors, where $e$ is the magnitude of the electron charge. These measured threshold values apply only to the calibration procedure. Because the bandwidth of the preamplifiers is limited by power considerations, small signals can take more than a bunch crossing time (25 ns) to fire the zero-crossing discriminator that triggers the storage of the signal. This causes some smaller signals to be associated with the wrong bunch crossing and to be ignored by the readout system. The net result is that the effective or ``in-time'' thresholds are larger than the set values. The effective thresholds are estimated by comparing the distribution of measured cluster $x$-sizes (azimuthal direction in the barrel detector and radial direction in the endcap detectors) with those predicted by the detailed pixel simulation, \textsc{pixelav}~\cite{Chiochia:2004qh, Swartz:2005vp}. The cluster sizes are sensitive to the effective thresholds. To avoid highly ionizing particles, the tracks used in this analysis were required to have momenta larger than 4\GeVc. This selection ensures that even protons and deuterons produce signals that are within a few percent of the ionization minimum. By varying the simulated thresholds until the measured and simulated distributions agree, the average effective thresholds are found to be approximately 3500\,$e$ in the barrel detector and 3000\,$e$ in the endcap detectors. A study of the pixel hit reconstruction efficiency using a technique similar to the strip detector technique described in Section~\ref{sec:strip_effs} suggests that the efficiency is larger than 99\% for the live regions of the detector and is consistent with earlier work \cite{dNdeta}. 
\subsubsection{Pixel Timing Scan} The pixel detector readout system uses the 40~MHz LHC clock as input. Signals from the CMS trigger system must arrive at the correct time within the 25~ns clock cycle to associate the correct bunch crossing time stamp with any signal above the readout threshold. An optimally phased clock signal will maximize the number of pixels observed in clusters. The overall trigger timing was adjusted by varying the clock phase until the average barrel and endcap cluster sizes as measured in minimum bias triggers were maximized. These quantities are plotted versus clock phase in Fig.~\ref{fig:pixel_timing_size}. The clock phase setting of 6~ns was found to optimize the smoothly varying detector averages. A finer module-by-module adjustment of the clock phase will be performed when higher trigger rates become available. \begin{figure}[hbtp] \begin{center} \mbox{ {\scalebox{0.50}{ \includegraphics[width=\linewidth]{fig/pixels/Cluster_clusMeanSizeVsDelay.pdf} }} } \caption{ The average cluster size distributions for the barrel and endcap pixel detectors in minimum bias events are plotted versus clock phase.} \label{fig:pixel_timing_size} \end{center} \end{figure} \subsubsection{Operating Characteristics with Minimum Bias Triggers} \label{sssec:pixel_operting_minbias} The distributions of the number of clusters observed in 0.9\TeV events selected by the minimum bias trigger are shown in Fig.~\ref{fig:clN_900GeV}. The observed data, shown as solid dots, are compared with fully simulated data, shown as histograms, that were generated with a recent tuning of the \PYTHIA event generator~\cite{Moraes:2007rq}. The left plot shows the distribution for all events, whereas the right plot shows the distribution after removing events that also satisfy the beam-gas trigger. There is an excess of large multiplicity events that are removed by the beam-gas trigger requirement. 
The source of these events could be beam-gas interactions or beam scraping in the beam transport system near the interaction point. After removal of the beam background events, the measured distributions are approximately consistent with preliminary expectations. The measured average cluster multiplicities per layer (barrel detector) and per disk (endcap detector) are listed in Table~\ref{tab:pixel_clN}. They are compared with the expectation from the simulation and are found to be in rough agreement. It should be noted that the event generator is based on an event model that has not yet been tuned in detail and is not expected to provide accurate predictions. \begin{figure}[hbtp] \begin{center} \mbox{ {\scalebox{0.90}{ \includegraphics[width=\linewidth]{fig/pixels/ClN_900GeV.pdf} }} } \caption{The cluster multiplicity of (a) all minimum bias triggered events and (b) those that do not trigger the beam-gas veto in the 0.9\TeV data sample. The histograms show the similar distribution for a sample of simulated data.} \label{fig:clN_900GeV} \end{center} \end{figure} During the extremely low luminosity run in December 2009 (the instantaneous luminosity was typically in the range 10$^{26}$--10$^{27}$~cm$^{-2}$s$^{-1}$), the beam background events occurred at a rate that was roughly comparable to the rate of minimum bias triggers. Because they are characterized by particle trajectories that are nearly parallel to one of the beams, most background events ($\sim$90\%) do not fire the minimum bias trigger but do have clusters in the endcap detectors and elongated clusters in the first two layers of the barrel detector. At the beam energies of the December 2009 run, the pixel detector occupancies associated with the background events were typically five times larger than those associated with minimum bias events. The beam-gas trigger veto effectively removes background events, as do cluster shape, track quality, and vertex requirements. 
\begin{table}[htbp] \caption{\label{tab:pixel_clN} The average cluster multiplicity per layer/disk in 0.9\TeV minimum bias triggers. The simulation errors are entirely statistical and do not represent the uncertainties in the event modelling. The asymmetry seen in the forward and backward endcaps is caused by an offset in the luminous region along the beam axis.} \begin{center} \begin{minipage}{0.45\textwidth} \begin{tabular}{ccc} \hline \multicolumn{3}{c}{Barrel Pixel: clusters/layer}\\ Layer & Measured & Simulation \\ \hline 1 & 35.2$\pm$0.9 & 31.6$\pm$1.2 \\ 2 & 30.6$\pm$0.8 & 27.8$\pm$1.1 \\ 3 & 27.4$\pm$0.8 & 24.8$\pm$1.0 \\ \hline \end{tabular} \end{minipage} \hfil \begin{minipage}{0.45\textwidth} \begin{center} \begin{tabular}{ccc} \hline \multicolumn{3}{c}{Endcap Pixel: clusters/disk}\\ Disk & Measured & Simulation \\ \hline $-$2 & 8.0$\pm$0.1 & 7.3$\pm$0.2 \\ $-$1 & 7.8$\pm$0.1 & 7.2$\pm$0.2 \\ 1 & 8.1$\pm$0.1 & 7.7$\pm$0.2 \\ 2 & 8.6$\pm$0.1 & 8.1$\pm$0.2 \\ \hline \end{tabular} \end{center} \newpage \end{minipage} \end{center} \end{table} The cluster charge distributions measured in the barrel and endcap detectors with the 0.9\TeV sample are shown as solid dots in Fig.~\ref{fig:cc_norm}. Each entry is scaled by the ratio of the pixel sensor thickness to the track path length in the sensor. The solid histograms represent the expectations from the \PYTHIA-based, full detector simulation. The measured and simulated barrel distributions have similar peaks but the measured distribution is somewhat broader than the simulated one. This may be due to residual pixel-to-pixel gain variation resulting from the use of a single gain for all 80 channels in each ROC column or residual module-to-module clock phase variation. The corresponding distributions for the endcap detectors have similar widths but indicate a 5\% charge-scale mismatch. 
\begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.40}{ \includegraphics[width=\linewidth]{fig/pixels/normalized_cluster_chargebarrel_paper.pdf} \label{fig:cc_norm_bpix} }} } \mbox{ \subfigure[] {\scalebox{0.40}{ \includegraphics[width=\linewidth]{fig/pixels/normalized_cluster_chargeendcap_paper.pdf} \label{fig:cc_norm_fpix} }} } \caption{ The normalized cluster charge measured in the (a) barrel and (b) endcap pixel detectors for the sample of 0.9\TeV minimum bias events. The insets show the same distributions on semi-log scales.} \label{fig:cc_norm} \end{center} \end{figure} \subsubsection{Lorentz Angle Calibration} The use of n-in-n pixel technology and the large magnetic field in CMS imply that pixel hit reconstruction involves large Lorentz drift corrections (the typical bias corrections are 53\micron in the barrel and 10\micron in the endcap). The estimation of track impact coordinates from pixel clusters is performed with two different algorithms. The simpler, faster ``Generic Algorithm'' \cite{CMS_NOTE_2002_049} uses the Lorentz width $W_L$ to estimate the projected cluster size and bias correction. The Lorentz width is the product of the effective thickness of the sensor $T_\mathrm{eff}$ and the tangent of the average Lorentz angle $\theta_L$: $W_L = T_\mathrm{eff}\tan{\theta_L}$. Due to the focusing of the electric field at the n+ implants, the charge sharing near the n+ side of the sensors is reduced. This is modelled by the effective thickness which is 5--10\% smaller than the physical thickness of the sensor substrate. The detailed \textsc{pixelav} simulation is used to extract the Lorentz width by applying the Generic Algorithm to a sample of simulated clusters and by adjusting $W_L$ to minimize the bias and maximize the resolution. The slower, more sophisticated ``Template Algorithm'' \cite{CMS_NOTE_2007_033} fits pre-computed cluster shapes to the measured clusters. 
The Lorentz-drift effects are encoded in the cluster shapes and the same \textsc{pixelav} simulation is used to compute them. Therefore, the actual Lorentz calibration procedure is to tune the detailed simulation to agree with data and then to generate a Lorentz width for the Generic Algorithm and cluster shapes for the Template Algorithm. Two different techniques have been used to perform the calibration. The 2008 cosmic ray data were calibrated by measuring the cluster $x$-sizes as functions of $\cot{\alpha}$ (see Fig.~\ref{fig:pixel_coords} for definitions) and by determining the locations of the cluster-size minimum $\cot{\alpha}_\mathrm{min}$\cite{craft_pixel_paper}. In the pixel barrel, $-\cot{\alpha}_\mathrm{min}$ is equal to $\tan{\theta_L} = r_H\bar\mu B$, where $r_H$ is the electron Hall factor, $\bar\mu$ is the average electron mobility, and $B$ is the magnetic field. The 2008 cosmic ray measurements suggested that the value of the electron Hall factor used in \textsc{pixelav} should be increased to 1.05 from the 1.02 value determined in test beam measurements \cite{Dorokhov:2003if}. In 2009, the temperature of the detector was lowered and the bias voltage of the pixel barrel was increased, which changed the average Lorentz angles in both barrel and endcap detectors. New cosmic ray based determinations are reported in Table~\ref{tab:pixel_la} and are compared with the tuned simulation. \begin{table}[htbp] \caption{\label{tab:pixel_la} The tangent of the Lorentz angle $\tan{\theta_L}$ as determined by 2009 calibrations. 
} \begin{center} \begin{tabular}{ccccc} \hline \multicolumn{5}{c}{2009 Lorentz Angle Measurements}\\ Sample & Detector & Technique & Measured $\tan{\theta_L}$ & Simulation \\ \hline Cosmic Ray & Barrel & Cluster Size & 0.409$\pm$0.002(stat) & 0.407$\pm$0.002(stat) \\ Cosmic Ray & Endcap & Cluster Size & 0.081$\pm$0.005(stat) & 0.080$\pm$0.004(stat) \\ Minimum Bias & Barrel & Grazing Angle & 0.3985$\pm$0.0005(stat) & 0.4006$\pm$0.0005(stat) \\ Minimum Bias & Barrel & Cluster Size & 0.409$\pm$0.002(stat) & 0.411$\pm$0.005(stat)\\ \hline \end{tabular} \end{center} \end{table} The barrel calibration was repeated with collision data in December 2009 using a new ``grazing angle'' technique \cite{CMS_NOTE_2008_012}. This technique makes use of the two-dimensional pixel segmentation to simultaneously measure the average transverse displacement of the charge carriers as a function of distance along clusters produced by a sample of highly inclined tracks. Since longitudinal position in the cluster is completely correlated with depth in the junction, this technique determines the average transverse carrier displacement as a function of depth as shown graphically in Fig.~\ref{fig:pixel_la_grazing_angle}. The average Lorentz angle, extracted from the linear fit shown in the figure, is compared with the detailed simulation in Table~\ref{tab:pixel_la}. The extremely large population of highly curved, low transverse momentum tracks observed in minimum bias triggers spans the $\cot{\alpha}$ region needed to determine the minimum projected cluster size in the pixel barrel. This enables the use of the cluster size technique as a cross check which is also reported in Table~\ref{tab:pixel_la}. Note that the two techniques are affected by different systematic effects and that a better than 1\% consistency is observed between the real and simulated measurements in all cases. 
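The grazing-angle extraction reduces to a straight-line fit of mean transverse charge displacement versus production depth. The toy below illustrates this with invented numbers (depth range, noise level, and true angle are not the paper's values):

```python
import numpy as np

# Toy grazing-angle method: for highly inclined tracks the mean transverse
# displacement of the collected charge grows linearly with depth in the
# sensor, and tan(theta_L) is the slope of a straight-line fit.
rng = np.random.default_rng(1)
depth_um = np.linspace(0.0, 285.0, 30)       # depth in the sensor (um)
tan_theta_true = 0.40                        # assumed "true" Lorentz angle
drift_um = tan_theta_true * depth_um + rng.normal(0.0, 1.0, depth_um.size)

# The tangent of the Lorentz angle is the slope of the linear fit.
slope, intercept = np.polyfit(depth_um, drift_um, 1)
```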
A variation of fitting procedures suggests that the total systematic uncertainty on the Lorentz angle calibration is less than 2\%. \begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.45}{ \includegraphics[width=\linewidth]{fig/pixels/la_angles.pdf} \label{fig:pixel_coords} }} } \mbox{ \subfigure[] {\scalebox{0.45}{ \includegraphics[width=\linewidth]{fig/pixels/grazing_angle_fit.pdf} \label{fig:pixel_la_grazing_angle} }} } \caption{ (a) The pixel local coordinate system and track angle definitions. The local $z$ axis coincides with the sensor electric field $\vec E$. The local $x$ axis is chosen to be parallel to $\vec E\times\vec B$ where $\vec B$ is the axial magnetic field. The local $y$ axis is defined to make a right-handed coordinate system. The angle $\alpha$ is the angle between the $x$ axis and the track projection on the local $xz$ plane. (b) The transverse cluster displacement of highly inclined barrel clusters as a function of depth for a sample of 0.9\TeV minimum bias events at a magnetic field of 3.8~T. The tangent of the Lorentz angle is given by the slope of a linear fit which is shown as the solid line.} \end{center} \end{figure} \subsubsection{Resolution Study} The intrinsic position resolution in a limited range of the angular acceptance was measured using tracks from minimum bias triggers that traverse overlapping sensors in the barrel layers. A similar analysis was performed in a very different angular region with 2008 cosmic ray data \cite{craft_pixel_paper} using the measurement technique given in Ref.~\cite{:2009mq}. Tracks passing through two overlapping modules in the same layer are used to compare the hit position with the expected position from the track trajectory. Because it is insensitive to alignment uncertainties, the difference of the local track impact points on a fitted trajectory is known about ten times more precisely than are the individual predicted hit positions. 
A double difference is formed by taking the difference between the measured hit position difference in the two modules and the predicted trajectory position difference. The width of this double difference distribution is insensitive to translational misalignment of the overlapping modules. To limit the effect of multiple scattering, a minimum track momentum of 2.5\GeVc is required. Clusters with measured charge below 10\,000\,$e$ or containing pixels on the sensor edges are excluded. The double difference widths are fitted with a Gaussian and the uncertainty from the trajectory prediction is subtracted quadratically to recover the hit resolution on the position difference. With the assumption of equal resolution for each of the modules in the overlap, the final fit values for the resolution for a single module are $12.8\pm0.9$\micron along $x$ and $32.4\pm1.4$\micron along $y$. The \textsc{pixelav} simulation is used to generate a sample of clusters that has the same distribution of impact angles as the measured sample. Since the simulation does not include the double-size pixels that span the gaps between the sixteen readout chips which tile each module, a subsample of the overlap data sample is used to determine single-size-pixel resolutions of $12.7\pm2.3$\micron along $x$ and $28.2\pm1.9$\micron along $y$. These numbers can be directly compared with those extracted from Gaussian fits to the simulated residual distributions. The simulated resolutions are $14.1\pm0.5$\micron and $24.1\pm0.5$\micron along $x$ and $y$, respectively, and agree reasonably well with the measured resolutions. Because overlaps occur only at the edges of the track $\alpha$-angle acceptance where the $x$ sizes of the clusters deviate from the optimal size of two, the measured and simulated $x$ resolutions are somewhat worse than the typical $x$ resolution (less than 10\micron) expected for most collision-related clusters. 
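The quadrature subtraction and equal-resolution assumption of the overlap method can be written out directly; the input widths below are illustrative, not the fitted values from the data.

```python
import math

def single_module_resolution(sigma_double_diff, sigma_prediction):
    """Hit resolution of one module from the overlap method: subtract the
    trajectory-prediction uncertainty in quadrature from the fitted
    double-difference width, then assume the two overlapping modules
    contribute equally (divide the variance by 2)."""
    var_hits = sigma_double_diff**2 - sigma_prediction**2
    return math.sqrt(var_hits / 2.0)

# Illustrative numbers (um): a 19 um double-difference width with a 5 um
# prediction uncertainty gives roughly a 13 um per-module resolution.
res_x = single_module_resolution(19.0, 5.0)
```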
The measured and simulated $y$ resolutions are expected to be typical of the detector performance. \subsubsection{Operating Conditions} All of the modules in the strip tracker were biased at 300~V in the early collision running. This is the same setting that was used in the CRAFT studies and is well above the full depletion voltage for the sensors. Similarly, the coolant temperature was set at 4--6\,$^{\circ}$C, the same as in the CRAFT study. This meant that the $p^+$ on $n$ sensors~\cite{Agram:2004} were approximately at room temperature. As described in the Technical Design Report for the CMS Tracker~\cite{Tracker_TDR,Tracker_TDR_add}, there are two main modes of operation for the strip tracker analogue pipeline integrated circuits (APV25~\cite{French:2001}): peak and deconvolution. In deconvolution mode, the output charge for each strip represents a weighted sum of three consecutive pipeline cells~\cite{Gadomski:1992}. Although deconvolution mode was designed to avoid signal pile-up in high (design) luminosity operations, it will be necessary to run in this mode whenever the expected separation between proton bunches will be less than a few hundred nanoseconds. The luminosity in the early collision running was very low and the bunches well separated; most of the strip data were collected in peak mode, which is based on the signal in a single pipeline cell. All of the data, whether in peak or deconvolution mode, were zero suppressed, meaning that only strips which were part of clusters were read out for each event. Many of the starting parameters for the strip tracker during the early collision running had been established in the preceding CRAFT period. For example, the timing of the tracker subsystems (in peak mode) with respect to CMS triggers was set during the cosmic-ray muon studies. Similarly, the alignment parameters for the strip detector modules were derived from the same studies. 
As part of the alignment process, offsets had been determined for cluster positions in sensors due to the Lorentz drift of holes and electrons under the influence of the solenoid field. For barrel layers, the Lorentz angle correction for cluster positions during track reconstruction is about 10\,$\mu$m, which is significantly larger than the 3--4\,$\mu$m alignment precision achieved in the cosmic ray studies~\cite{craft_alignment_paper}. \subsubsection{Strip Timing Scan} As the strip tracker was operated in peak mode at the start of the early collision running, the trigger timing established in the preceding CRAFT period could be used. In CRAFT the sampling time of the APV25s was set within each subsystem by means of a dedicated synchronization signal, adjusted according to the measured readout fibre lengths. The synchronization of the subsystems was obtained using the signal from cosmic-ray muon tracks. Details on how the scan was done can be found in Ref.~\cite{CMS_NOTE_2008_007}. Toward the end of the data collection period the APV25 mode was changed from peak to deconvolution, and since timing is more critical in the latter, a fine-delay scan was made following the mode change. For expediency only one layer (TOB L3) was used in the study. Figure~\ref{fig:strip_timing} shows the result of the fine-delay timing scan. The timing adjustment for the clock and trigger signals is set on the front-end hybrids and the smallest step size is 1.04\,ns. From the figure it can be seen that the timing prior to the scan had been off by about 10\,ns from ideal. This level of mistiming resulted in an estimated 2.5\% decrease in Signal-to-Noise (S/N) in the strip modules during the peak mode running, where the delay timing is less critical. The amplitude that is measured in the timing scan represents the signal of the highest pulse height strip in a cluster scaled by the ratio of the sensor thickness to the path length of the track in the sensor.
Following the scan, the timing offsets for all of the strip tracker subsystems were updated and some data were collected in deconvolution mode. No data samples were collected in peak mode with the new delays. \begin{figure}[hbtp] \begin{center} \mbox{ {\scalebox{0.40}{ \includegraphics[width=\linewidth]{fig/strips/FineDelay900GeV_cms.pdf} }} } \caption{Normal-incidence-scaled charge (arbitrary units) of the highest pulse height strip in a cluster as a function of the readout delay with respect to the CMS trigger, in deconvolution mode. The dashed vertical line corresponds to the setting prior to the timing scan.} \label{fig:strip_timing} \end{center} \end{figure} \subsubsection{Signal-to-Noise Measurements} Signal-to-Noise measurements were made in both peak and deconvolution modes. In peak mode, the S/N ratio was determined at both centre-of-mass energies, 0.9 and 2.36\,TeV, whereas deconvolution mode is restricted to 2.36\,TeV. The ratio is evaluated on the basis of charge clusters associated with reconstructed tracks, where the individual strip noise values are taken from calibration runs. For track angles that are not normal to the surface of modules the signal values are scaled by the cosine of the angle relative to the local normal. This is done to give the same expectation value per cluster for modules of the same type. Cluster noise, which takes into account the noise of each strip within a cluster, is used as the denominator in the S/N ratio. When all strips within a cluster have the same noise, cluster noise is equivalent to the noise of a single strip. Further details on the determination of the S/N ratio can be found in Ref.~\cite{Eklund:1999sn}. Figures~\ref{fig:TIBsn} and \ref{fig:TOBsn} show the S/N distributions for the TIB and TOB modules, respectively, in deconvolution mode. Included with each distribution is the result of the fit to a Landau distribution convolved with a Gaussian distribution. 
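Extracting a most probable value from a Landau shape smeared by Gaussian noise can be sketched numerically. Since the Landau density has no closed form, the Moyal function is used below as a common stand-in; the location, scale, and smearing values are illustrative, not the paper's fit results.

```python
import numpy as np

def moyal_pdf(x, loc, scale):
    """Moyal density, a standard closed-form approximation to the Landau."""
    y = (x - loc) / scale
    return np.exp(-0.5 * (y + np.exp(-y))) / (scale * np.sqrt(2.0 * np.pi))

x = np.linspace(0.0, 80.0, 4001)
dx = x[1] - x[0]
signal = moyal_pdf(x, loc=25.0, scale=2.5)        # Landau-like S/N shape
kernel = np.exp(-0.5 * ((x - 40.0) / 1.5) ** 2)   # Gaussian, centred on grid
kernel /= kernel.sum()

# Numerical convolution of signal and noise; the most probable value (MPV)
# is the location of the maximum of the smeared distribution.
smeared = np.convolve(signal, kernel, mode="same")
mpv = x[np.argmax(smeared)]
```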
The most probable value of the fitted curves is taken to be the S/N value and results for all of the strip tracker subdetectors are summarized in Table~\ref{tab:ston} for all three running conditions. Peak values shown in the table have not been corrected for the 2.5\% loss due to non-optimal timing. They are comparable with results obtained in the CRAFT studies and in earlier cosmic ray studies. The difference in peak and deconvolution mode S/N values stems largely from the higher noise in deconvolution. After calibration there is some variation in signal values (measured in electrons) for the two modes, but this has been shown to be within 10\%. The S/N ratio should not depend on the centre-of-mass energy and this is confirmed by the table entries. Although it is not possible to directly compare channel noise distributions in the early collision data with results from calibration runs given the zero suppression, the frequency and distribution of clusters in empty LHC buckets provide an indirect cross-check of the calibration results and assumptions about the Gaussian and uncorrelated nature of the noise. For example, with bad modules excluded from the readout the mean number of clusters in empty buckets, out of some 9 million channels, was 4.2. This is consistent with the clustering rules, which require a certain number of standard deviations (five for the total charge in a cluster), and Gaussian probabilities. By way of contrast, there were $\sim$1200 clusters per minimum bias trigger in the 0.9\,TeV data. \begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.40}{ \includegraphics[width=\linewidth]{fig/strips/TIBsn_deco_cms.pdf} \label{fig:TIBsn} }} } \mbox{ \subfigure[] {\scalebox{0.40}{ \includegraphics[width=\linewidth]{fig/strips/TOBsn_deco_cms.pdf} \label{fig:TOBsn} }} } \caption{ Signal-to-Noise distributions in deconvolution mode for (a) (thin sensor) TIB and (b) (thick sensor) TOB modules. 
The curves are results of the fits to a Landau distribution convolved with a Gaussian distribution.} \end{center} \end{figure} \begin{table}[htbp] \caption{\label{tab:ston}Summary of strip tracker Signal-to-Noise measurements. The peak mode ratios have not been corrected for the estimated 2.5\% decrease in signal from the trigger mistiming, as described in the text.} \begin{center} \begin{tabular}{lccccc}\hline Conditions & TIB & TID & TOB & TEC thin & TEC thick \\ \hline 0.9\TeV, peak mode & 27.4 & 26.7 & 34.1 & 28.8 & 35.7 \\ 2.36\TeV, peak mode & 27.4 & 26.8 & 34.1 & 28.8 & 35.7 \\ 2.36\TeV, deco mode & 20.3 & 19.2 & 23.9 & 20.3 & 26.1 \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Strip Layer Efficiencies} \label{sec:strip_effs} Efficiencies for strip tracker layers were determined using events that were collected in peak mode. Reconstructed tracks in these events were required to have a minimum of 8 hits in order to be used in the efficiency measurements. To avoid inactive regions and allow for alignment imprecision, trajectories passing near the edges of sensors were excluded. The presence of a hit anywhere within the non-excluded region of a traversed module was counted as a positive response; efficiency is determined by the ratio of positive responses to the total number of traversing tracks. Layers under study were not removed from the track reconstruction and could in fact count toward the minimum hit requirement. The total integrated hit efficiency during the early collision period was measured to be 97.8\%, which is essentially explained by the number of bad modules in the strip tracker. That is, about 2.2\% of the modules have been excluded from the readout because of problems with high voltage short circuits, control ring failures, or other issues. With known problem modules excluded, the overall hit efficiency is 99.8\%, consistent with the $\sim$0.2\% bad channel rate from the construction process.
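The efficiency ratio just described is a simple binomial estimate; a minimal sketch with invented track counts:

```python
import math

def layer_efficiency(n_hit, n_traversing):
    """Hit efficiency of a layer: tracks with a matched hit in the
    non-excluded module region over all traversing tracks, with a
    simple binomial uncertainty."""
    eff = n_hit / n_traversing
    err = math.sqrt(eff * (1.0 - eff) / n_traversing)
    return eff, err

# Hypothetical counts giving a 99.8% efficiency, as quoted in the text.
eff, err = layer_efficiency(49900, 50000)
```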
Detailed simulations, used to determine track reconstruction efficiency, take into account inactive regions in addition to the measured efficiencies. The efficiency measurements for the collision data include an estimated 0.04\% systematic error due to the use of the layers under study in the reconstruction process and the wide search windows within modules. \subsubsection{Energy Loss Measurement} \label{sec:strips_dedx} Although the primary function of the strip tracker is to provide hit position information for track reconstruction and precise momentum determination, the wide linear range of the strip channel output also provides a measure of energy loss. That is, the charge collected in a hit cluster is directly proportional to the energy lost by a particle, largely through ionization, while traversing the silicon. For reconstructed tracks the angle $\theta$ between the track direction and the axis normal to the module sensor is well defined for each hit on the track. The instantaneous energy loss per unit path length ($dE/dx$) in the silicon is then approximated by the quantity $\Delta E/(\Delta L \cdot \sec\theta)$, where $\Delta E$ is the cluster charge expressed in units of MeV and $\Delta L$ is the normal-angle thickness of the active volume of the silicon sensor. All of the TIB and TID modules and the modules on rings 1--4 of the TEC have silicon sensors that are 320\,$\mu$m thick, whereas the TOB and TEC ring 5--7 modules have 500\,$\mu$m thick sensors. Some 30\,$\mu$m of the nominal thicknesses for both thin and thick types is inactive material, i.e., does not contribute to the charge collection. In zero-suppressed readout, which was used exclusively in the early collision period, there are 8 ADC bits for the charge on each channel within a cluster.
Channel gains are set such that a single ADC count corresponds to about one-quarter of the average noise and full scale corresponds to approximately three times the average loss expected from normally incident minimum ionizing particles. The highest two ADC values have a special significance: 254 implies a value between 254 and 1024 counts, and 255 indicates that the actual value was in excess of 1024 counts. The $dE/dx$ algorithm includes the saturated values but without any special treatment. The main point in determining energy loss per unit path length is that, for a given medium, $dE/dx$ depends largely on the velocity ($\beta$) of the traversing particle. By combining $dE/dx$ information with the measured momentum $p$ of a track, one can determine the mass of the traversing particle. On the scale of charged particle momenta in CMS collisions, there is only a limited range near the low end where the difference in $\beta$ values is significant enough to distinguish among long-lived hadrons. The momentum range where pions would have relatively large energy loss is such that tracks tend to curl up in the 3.8~T solenoid field and thus fail to be reconstructed. The strip hits on reconstructed tracks represent independent measures of $dE/dx$, ignoring the negligible loss of energy in traversing the tracker. Although pixel hits are included in the track reconstruction, they are not used in the $dE/dx$ calculation due to their more limited linear range. Several methods have been used to determine an estimate for the most probable $dE/dx$ value based on the measurements in the strip tracker modules traversed by a track. Figure~\ref{fig:dedx_p}, for example, shows the relationship between the Harmonic-2 $dE/dx$ estimator~\cite{CMS_NOTE_2008_005} and momentum for 0.9\,TeV data taken in peak mode. In the figure, clear bands can be seen for kaons and protons and to a much lesser extent for deuterons. 
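The per-hit $dE/dx$ formula, a Harmonic-2-style combination (a power $-2$ generalized mean of the per-hit values), and the mass inversion used later can be sketched as follows. The calibration constants $K$ and $C$ and all hit values below are made up for illustration; they are not the fitted values of the paper.

```python
import numpy as np

def dedx_hit(delta_e_mev, delta_l_cm, cos_theta):
    """Per-hit dE/dx = Delta_E / (Delta_L * sec(theta)): cluster charge in
    MeV divided by the track path length in the active silicon."""
    return delta_e_mev * cos_theta / delta_l_cm

def harmonic2(dedx_values):
    """Harmonic-2-style estimator: a power -2 generalized mean of the
    per-hit dE/dx values, which de-weights the high Landau tail."""
    c = np.asarray(dedx_values, dtype=float)
    return np.mean(c ** -2) ** -0.5

def mass_from_dedx(dedx, p, K, C):
    """Invert dE/dx = K m^2 / p^2 + C (valid below the minimum-ionizing
    region) to estimate the particle mass from dE/dx and momentum."""
    return p * np.sqrt((dedx - C) / K)

# A hypothetical 95 keV cluster on 290 um active thickness, cos(theta)=0.8:
one_hit = dedx_hit(0.095, 0.029, 0.8)

# Hypothetical per-hit values in MeV/cm: one Landau-tail hit (8.5) pulls
# the arithmetic mean up, while the harmonic-2 value stays near the bulk.
hits = [3.1, 3.4, 2.9, 8.5, 3.2]
est = harmonic2(hits)

# With made-up constants K = 3.0 and C = 2.6, a track with dE/dx of
# 5.24 MeV/cm at p = 1 GeV/c comes out near the proton mass (0.938 GeV).
m = mass_from_dedx(5.24, 1.0, 3.0, 2.6)
```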
An estimate of the mass of each candidate can be obtained using the particle momentum and the measurement of the ionization energy loss provided by the $dE/dx$ estimators. To this end the following relation between $dE/dx$, $p$, and $m$ is assumed for the momenta below the minimum-ionizing region: \begin{equation} \frac{dE}{dx}= K\frac{m^2}{p^2}+C \;. \label{eq:bethebloch} \end{equation} The proton line in Fig.~\ref{fig:dedx_p} is used to extract the parameters $K$ and $C$ in Eq.~\ref{eq:bethebloch}. The 0.7--1.0\,GeV$/c$ range in the proton band is used for the reference data fit, while extrapolations based on the same $K$ and $C$ values yield a good agreement for protons with momenta above and below the reference range and for kaons. The mass spectrum that results from inverting Eq.~\ref{eq:bethebloch} for all tracks with $dE/dx > 4.15$\,MeV/cm and $p<2$\,GeV$/c$ is shown in Fig.~\ref{fig:dedx_m}. From the frequency plot one can observe clear kaon and proton peaks as well as good agreement for the peaks from a Monte Carlo simulation. There is also evidence for a deuteron peak in data, although saturation of the ADC scale is particularly pronounced for deuterons given their reduced $\beta$ values and relatively higher $|\eta|$ values. That the deuteron peak is poorly modelled by the simulation is partly understood as the underlying generator, \PYTHIA, does not produce deuterons by design, although they can be produced in the subsequent \GEANT~\cite{Geant} hadron showers. \begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.40}{ \includegraphics[width=\linewidth]{fig/strips/dedxCNPHarm2_TH2Fitted_final.pdf} \label{fig:dedx_p} }} } \mbox{ \subfigure[] {\scalebox{0.40}{ \includegraphics[width=\linewidth]{fig/strips/dedxCNPHarm2_Comp_MassDistrib_final_mod3.pdf} \label{fig:dedx_m} }} } \caption{ Energy loss versus the momentum of tracks (a) and frequency of tracks as a function of track mass as determined from the measured energy loss and momentum (b). 
The lightly shaded line in (a) indicates the fit in the reference range of the proton band while the darker lines correspond to extrapolations for kaons, protons, and deuterons based on the fit parameters.} \end{figure} \subsection{Basic Tracking Distributions} \label{sec:basic} The \textit{highPurity} tracks are selected, with additional requirements of $|d_z| < 10\,\sigma_z$ (where $d_z$ is the longitudinal impact parameter with respect to the primary vertex and $\sigma_z$ is the combined track and primary vertex uncertainty in $z$) and $\sigma_{\pt}/\pt < 10\%$, to compare the data and simulation. Figure~\ref{fig:basic_dist} shows the results of this comparison for several important track parameters. The distribution of the number of tracks per event, shown in Fig.~\ref{fig:trk_n}, has been normalized to the number of events. The data clearly have more tracks per event than are present in the simulated data. This is believed to be due to an as-yet unoptimized tune of the \PYTHIA generator. To be able to compare shapes, the other distributions have been normalized to the number of reconstructed tracks in the data. There is general agreement between the data and simulation distribution shapes for all other tracking variables. In particular, the features in the $\phi$ distribution, due to inactive modules, are well modelled by the simulation.
\begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[angle=90,width=\linewidth]{fig/tracking/n.pdf} \label{fig:trk_n} }} } \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[angle=90,width=\linewidth]{fig/tracking/nHit.pdf} \label{fig:trk_nHit} }} } \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[angle=90,width=\linewidth]{fig/tracking/pt.pdf} \label{fig:trk_pt} }} } \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[angle=90,width=\linewidth]{fig/tracking/eta.pdf} \label{fig:trk_eta} }} } \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[angle=90,width=\linewidth]{fig/tracking/phi.pdf} \label{fig:trk_phi} }} } \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[angle=90,width=\linewidth]{fig/tracking/dxyCorr_pvtx.pdf} \label{fig:trk_dxyCorr} }} } \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[angle=90,width=\linewidth]{fig/tracking/dzCorr_pvtx.pdf} \label{fig:trk_dzCorr} }} } \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[angle=90,width=\linewidth]{fig/tracking/chi2ndof.pdf} \label{fig:trk_chi2ndof} }} } \caption{Comparison of the data (points) and simulation (histogram) distributions of tracking parameters: (a) number of tracks per event, (b) number of hits used per track, (c) transverse momentum \pt, (d) track pseudorapidity $\eta$, (e) azimuthal angle $\phi$, (f) transverse impact parameter $d_{xy}$ with respect to the primary vertex, (g) longitudinal impact parameter $d_z$ with respect to the primary vertex, and (h) normalized $\chi^2$. The simulated distributions are normalized by area to the data distributions.} \label{fig:basic_dist} \end{center} \end{figure} \subsection{Primary Vertex Resolution} \label{sec:pvtx} The reconstruction of the primary interaction vertex in the event starts from the track collection. The tracks are clustered based on the $z$ coordinate of the track at the point of closest approach to the beamline.
The clusters are fit with an adaptive vertex fit~\cite{CMS_NOTE_2007_008}, where tracks in the vertex are assigned a weight between 0 and 1 based on their proximity to the common vertex. The primary vertex resolution strongly depends on the number of tracks used in fitting the vertex and on their \pt. To measure the resolution, the tracks in an event with only one vertex are randomly split into two different sets and used to independently fit the primary vertex. The distribution of the difference in the fitted vertex positions can then be used to extract the resolution by fitting a Gaussian to it and dividing $\sigma$ by $\sqrt{2}$. To examine the effect of the \pt\ of the tracks in the vertex, we study the resolution versus the number of tracks in the vertex for different average \pt\ of tracks in the vertex. Figure~\ref{fig:pvtx_respt} shows the $x$, $y$, and $z$ resolutions for different average \pt ranges. While the resolution differs considerably depending on \pt and multiplicity, the simulation accurately reproduces the data results. \begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[width=\linewidth]{fig/tracking/ResXpaper.pdf} \label{fig:pvtx_respt_x} }} } \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[width=\linewidth]{fig/tracking/ResYpaper.pdf} \label{fig:pvtx_respt_y} }} } \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[width=\linewidth]{fig/tracking/ResZpaper.pdf} \label{fig:pvtx_respt_z} }} } \caption{Primary vertex resolution distributions in (a) $x$, (b) $y$, and (c) $z$ versus number of tracks. 
The three sets of results in each plot show different average \pt ranges and within each \pt range, data and simulation are compared.} \label{fig:pvtx_respt} \end{center} \end{figure} \subsection{Reconstruction of Particle Decays} \label{sec:V0s} \subsubsection{V$^0$ Reconstruction} \label{sec:V0reco} V$^0$ particles are long-lived ($c\tau > 1$\cm) neutral particles reconstructed by their decay to two charged particles\footnote{Charge conjugate states are implied throughout the paper.}: K$_\mathrm{S}^0 \to \pi^+\pi^-$ and $\Lambda^0 \to p\pi^-$. Reconstruction of V$^0$ decays requires finding oppositely charged tracks that are detached from the primary vertex and form a good secondary vertex with an appropriate invariant mass. For the $\Lambda^0$, the lowest momentum track is assumed to be the pion. As no further particle identification is required, a V$^0$ candidate can appear in both K$_\mathrm{S}^0$ and $\Lambda^0$ samples. To be considered as a V$^0$ decay track, a track must have at least 6 hits, a normalized $\chi^2$ less than 5, and a transverse impact parameter with respect to the beamspot greater than $0.5\sigma_{IP}$, where $\sigma_{IP}$ is the calculated uncertainty (including beamspot and track uncertainties). The reconstructed V$^0$ decay vertex must have a normalized $\chi^2$ less than 7 and a transverse separation from the beamspot greater than 15$\sigma_T$, where $\sigma_T$ is the calculated uncertainty (including beamspot and vertex uncertainties). In addition, the V$^0$ candidate is discarded if either of the daughter tracks has hits that are more than 4$\sigma_{3D}$ from the V$^0$ vertex, towards the primary vertex, where $\sigma_{3D}$ is the uncertainty in the vertex position. The mass resolution of the V$^0$ depends on $\eta$ as well as on the decay vertex position and a single Gaussian is not a sufficiently accurate functional form for the signal. Therefore, a double Gaussian with the same mean was used to fit the signal. 
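For a double Gaussian with a shared mean, a single effective width can be quoted as the square root of the yield-weighted average of the squared widths, as done in Table~\ref{tab:v0sigmas}. A minimal numerical check against the K$_\mathrm{S}^0$ data entries of that table:

```python
import math

def effective_width(f1, sigma1, sigma2):
    """Effective resolution of a double Gaussian with shared mean:
    square root of the yield-weighted average of the squared widths."""
    return math.sqrt(f1 * sigma1**2 + (1.0 - f1) * sigma2**2)

# K_S^0 data values from the table: f1 = 0.58, sigma1 = 4.53 MeV/c^2,
# sigma2 = 11.09 MeV/c^2, giving about 7.97, consistent with the quoted
# 7.99 +- 0.14 (the table presumably used unrounded inputs).
sigma_eff = effective_width(0.58, 4.53, 11.09)
```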
For the background shapes, a linear background was used for $\pi^+\pi^-$ and the function $a(m-m_p-m_\pi)^b$ was used for the $p\pi^-$ spectrum where $m$ is the $p\pi^-$ invariant mass and $a$ and $b$ are free parameters. The $\pi^+\pi^-$ and $p\pi^-$ mass distributions, along with the overlaid fits, are shown in Figs.~\ref{fig:kshort_mass} and \ref{fig:lambda_mass}, respectively. Tables~\ref{tab:v0masses} and \ref{tab:v0sigmas} show the reconstructed V$^0$ masses and resolutions obtained from the data and simulation. While the various results are close to expectations, significant discrepancies are present. These features can be examined as a function of track kinematic variables to better understand the CMS tracker and magnetic field. This work is ongoing. \begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.42}{ \includegraphics[width=\linewidth]{fig/tracking/ksMass_336V8K_data_newfix_doubgauss_nostat.pdf} \label{fig:kshort_mass} }} } \mbox{ \subfigure[] {\scalebox{0.42}{ \includegraphics[width=\linewidth]{fig/tracking/lamMass_336V8K_data_newfix_doubgauss_nostat_atotheb.pdf} \label{fig:lambda_mass} }} } \caption{The invariant mass distributions of (a) $\pi^+\pi^-$ with a fit to the K$_\mathrm{S}^0$ and (b) $p\pi^-$ with a fit to the $\Lambda^0$.} \end{center} \end{figure} \begin{table}[htbH] \begin{center} \caption{\label{tab:v0masses}Masses obtained from data, world average~\cite{PDG08}, and simulation (reconstructed and generated). 
The uncertainties for data and simulation results are statistical only.} \vspace{3pt} \begin{tabular}{l|cc|cc}\hline & \multicolumn{4}{c}{Mass ($\!\MeVcc$)} \\ V$^0$ & Data & PDG & Simulation & Generated \\ \hline K$_\mathrm{S}^0$ & $497.68 \pm 0.06$ & $497.61 \pm 0.02$ & $498.11 \pm 0.01$ & $497.670$ \\ $\Lambda^0$ & $1115.97 \pm 0.06$ & $1115.683 \pm 0.006$ & $1115.93 \pm 0.02$ & $1115.680$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htbH] \begin{center} \caption{\label{tab:v0sigmas}V$^0$ mass resolutions obtained from data and simulation. The narrow and wide Gaussian resolutions are $\sigma_1$ and $\sigma_2$, respectively. The $\sigma_1$ fraction is the fraction of the yield from the narrow Gaussian. The final row gives the average resolution, obtained from the square root of the weighted average of the two resolutions squared. Uncertainties are statistical only.} \vspace{3pt} \begin{tabular}{lcccc}\hline Parameter & K$_\mathrm{S}^0$ Data & K$_\mathrm{S}^0$ Simulation & $\Lambda^0$ Data & $\Lambda^0$ Simulation \\ \hline $\sigma_1 (\!\MeVcc)$ & $4.53 \pm 0.12$ & $4.47 \pm 0.04$ & $1.00 \pm 0.26$ & $1.71 \pm 0.05$ \\ $\sigma_2 (\!\MeVcc)$ & $11.09 \pm 0.41$ & $10.49 \pm 0.11$ & $3.25 \pm 0.14$ & $3.71 \pm 0.09$ \\ $\sigma_1$ fraction & $0.58 \pm 0.03$ & $0.58 \pm 0.01$ & $0.15 \pm 0.05$ & $0.44 \pm 0.03$ \\ $\overline{\sigma} (\!\MeVcc)$ & $7.99 \pm 0.14$ & $7.63 \pm 0.03$ & $3.01 \pm 0.08$ & $2.99 \pm 0.03$ \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{V$^0$ Lifetime} For the $0.9\TeV$ centre-of-mass energy data and simulation, invariant mass distributions are made for different bins of proper decay length, $ct = mcL/p$, where $L$ is the measured decay length. These distributions are fitted to obtain the yield, leading to the uncorrected $ct$ distribution as seen in Fig.~\ref{fig:kshort_lifetime_a} for the K$_\mathrm{S}^0$ data. 
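The simulation-based efficiency correction and exponential fit applied to these binned yields can be sketched with a noise-free toy; the efficiency shape and yields below are invented, and only the shape of the procedure matches the text (for the K$_\mathrm{S}^0$, $c\tau \approx 2.68$\,cm).

```python
import numpy as np

ct = np.linspace(0.25, 9.75, 20)              # proper decay length bins (cm)
ctau_true = 2.68                              # K_S^0 c*tau, approx. (cm)
eff = 1.0 / (1.0 + 0.1 * ct)                  # made-up reconstruction efficiency

raw_data = 1.0e4 * np.exp(-ct / ctau_true) * eff   # "measured" yields
raw_sim = 5.0e4 * np.exp(-ct / ctau_true) * eff    # simulated raw yields
generated = 5.0e4 * np.exp(-ct / ctau_true)        # generated exponential

# Correction factor = simulated raw / generated shape; dividing the data
# by it undoes the efficiency, and the exponential slope gives c*tau.
correction = raw_sim / generated
corrected = raw_data / correction
slope = np.polyfit(ct, np.log(corrected), 1)[0]
ctau_meas = -1.0 / slope
```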
The uncorrected $ct$ distribution from the simulation is divided by the generated exponential shape given by $e^{-ct/c\tau_{Sim}}$ to obtain the correction factor versus $ct$. The uncorrected data $ct$ distribution is divided by the correction factor to obtain the corrected $ct$ distribution as seen in Fig.~\ref{fig:kshort_lifetime_b} for the K$_\mathrm{S}^0$. This distribution is fitted with an exponential, the slope of which gives the measured lifetime. The good fit to an exponential function ($\chi^2/\text{NDOF} = 8.1/8$) indicates that the simulation accurately reproduces the efficiency variation versus lifetime. The fitted results, $\tau_{\mathrm{K}_\mathrm{S}^0} = 90.0 \pm 2.1$\,ps and $\tau_{\Lambda^0} = 271 \pm 20$\,ps (with $\chi^2/\text{NDOF} = 11.3/6$), are both within 1\,$\sigma$ of the world average~\cite{PDG08}. \begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.42}{ \includegraphics[angle=90,width=\linewidth]{fig/tracking/ksCtauFromYields_noScaling3D.pdf} \label{fig:kshort_lifetime_a} }} } \mbox{ \subfigure[] {\scalebox{0.42}{ \includegraphics[angle=90,width=\linewidth]{fig/tracking/ksCtauFromYields_withFit3D_formatted.pdf} \label{fig:kshort_lifetime_b} }} } \caption{K$_\mathrm{S}^0$ $ct$ distributions for (a) uncorrected data and (b) corrected data with an exponential fit.} \end{center} \end{figure} \subsubsection{Reconstruction of K$^*(892)^-$ and $\Xi^-$} The reconstructed sample of V$^0$ particles was exploited to reconstruct decays of other particles. The K$_\mathrm{S}^0$ candidates are combined with charged tracks from the primary vertex to search for the strong decay K$^*(892)^- \to K_S^0\pi^-$. For this analysis, events were required to contain a reconstructed primary vertex consisting of more than two tracks and a fit probability greater than 0.5\%. The K$_\mathrm{S}^0$ candidate must pass the same criteria as described in Sec.~\ref{sec:V0reco}. 
In addition, the requirement on the impact parameter significance of the pions from the K$_\mathrm{S}^0$ is increased from 0.5 to 2. The K$_\mathrm{S}^0$ candidates must also have a mass within 20\MeVcc of the nominal mass and the K$_\mathrm{S}^0$ flight path must pass within 2\mm of the primary vertex. The charged track in the K$^*(892)^-$ decay must have a normalized $\chi^2$ less than 2, at least two hits in the pixel detector, at least seven total hits, $\pt > 0.5\GeVc$, $|\eta|<2$, and pass within 2 (3)~mm of the primary vertex in the direction transverse to (along) the beam line. The K$_\mathrm{S}^0\pi^-$ invariant mass is calculated using the world-average value of the K$_\mathrm{S}^0$ mass~\cite{PDG08} and is shown in Fig.~\ref{fig:kstar_mass}. The figure also shows an overlay of a fit to the K$_\mathrm{S}^0\pi^-$ mass distribution. The fit uses a Breit-Wigner for the signal plus a threshold function for the background \begin{equation*} \frac{S}{\left(m^2-M_{K^*}^2\right)^2+\Gamma_{K^*}^2 M_{K^*}^2} + B\left[1-\exp{\left(\frac{M_K+M_\pi-m}{p}\right)}\right], \end{equation*} where $m$ is the K$_\mathrm{S}^0\pi^-$ invariant mass, $M_{K^*}$ and $\Gamma_{K^*}$ are the mass and width of the K$^*(892)^-$, $M_K$ and $M_\pi$ are the world-average masses of $K^0$ and $\pi^-$, and $S$, $B$, and $p$ are free parameters. The K$^*$ width $(\Gamma_{K^*})$ is fixed at the world average value of 50.8\MeVcc~\cite{PDG08}, while the K$^*$ mass $(M_{K^*})$ is a free parameter. The mass returned by the fit, $888.3 \pm 3.2\MeVcc$, is consistent with the world average value of $891.66 \pm 0.26\MeVcc$~\cite{PDG08}. The $\Xi^-$ was reconstructed through its decay to $\Lambda^0\pi^-$. The $\Xi^-$ is a long-lived baryon, with a decay topology different from that of the K$^*(892)^-$: the $\pi^-$ from the $\Xi^-$ decay should be detached from the primary vertex rather than originating from it. 
The $\Lambda^0$ candidates were reconstructed as described in Sec.~\ref{sec:V0reco} except that a looser transverse significance cut of 10 (rather than 15) was applied. $\Lambda^0$ candidates with a mass within 8\MeVcc of the world-average value were combined with charged tracks with the same sign as the pion in the $\Lambda^0$ decay. The $\Lambda^0\pi^-$ fit used a $\Lambda^0$ mass constraint and the vertex was required to have a fit probability better than 1\%. All three tracks involved in the decay were required to have at least 6 valid hits and a 3D impact parameter with respect to the primary vertex greater than 3$\sigma$. The resulting mass plot, shown in Fig.~\ref{fig:xi_mass}, is fit with a single Gaussian for the signal and a background shape of $Aq^{(1/2)}+Bq^{(3/2)}$ where $q = m-M_\Lambda - M_\pi$, $m$ is the $\Lambda^0 \pi^-$ invariant mass, and $A$ and $B$ are free parameters. The measured mass of $1322.8 \pm 0.8\MeVcc$ is close to the world average value of $1321.71 \pm 0.07\MeVcc$~\cite{PDG08}. The resolution of $4.0 \pm 0.8\MeVcc$ is consistent with the simulation result of $3.6 \pm 0.4\MeVcc$. \begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.42}{ \includegraphics[width=\linewidth]{fig/tracking/KStar_data.pdf} \label{fig:kstar_mass} }} } \mbox{ \subfigure[] {\scalebox{0.425}{ \includegraphics[width=\linewidth]{fig/tracking/XiMass_data.pdf} \label{fig:xi_mass} }} } \caption{Invariant mass plots of (a) K$_\mathrm{S}^0 \pi^-$ with a fit to the K$^*(892)^-$ and (b) $\Lambda^0\pi^-$ with a fit to the $\Xi^-$.} \end{center} \end{figure} \subsection{Particle Identification Using Measured Energy Losses} \label{sec:dedx} Estimating the energy loss $(dE/dx)$ of a particle by means of charge collected by the CMS silicon strip tracker is described in Sec.~\ref{sec:strips_dedx}. In this section, applications of $dE/dx$ measurements are used to identify protons and kaons produced in $\Lambda^0$ and $\phi$ decays. 
\subsubsection{$dE/dx$ Verification with $\Lambda^0 \to p\pi^-$ Decays} The kinematics of the $\Lambda^0 \to p\pi^-$ decay requires $p_p > p_\pi$ for all $\Lambda^0$ particles reconstructed at CMS\@. This provides a clean source of protons and pions which can be used to check the $dE/dx$ results. We apply the same selection as in Section~\ref{sec:V0reco}, and plot the $dE/dx$ distribution as a function of the momentum for tracks associated with V$^0$ candidates in the mass range 1.08--1.16\GeVcc, separately for the highest momentum tracks (Fig.~\ref{fig:lambda_estimator_hard}) and the lowest momentum tracks (Fig.~\ref{fig:lambda_estimator_soft}). As expected, the highest momentum tracks are generally found near the proton curve while the lowest momentum tracks are generally inconsistent with the proton curve. The few exceptions are consistent with background under the $\Lambda^0$ peak. \begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.42}{ \includegraphics[width=\linewidth]{fig/tracking/Lambdas_dedxCNPHarm2_dEdxVsP_Hard.pdf} \label{fig:lambda_estimator_hard} }} } \mbox{ \subfigure[] {\scalebox{0.42}{ \includegraphics[width=\linewidth]{fig/tracking/Lambdas_dedxCNPHarm2_dEdxVsP_Soft.pdf} \label{fig:lambda_estimator_soft} }} } \caption{Estimated energy loss as a function of the momentum for (a) the highest momentum track and (b) the lowest momentum track for the $\Lambda^0$ candidate decay products. The superimposed curves come from the proton fit in the inclusive track sample shown in Fig.~\ref{fig:dedx_p}.} \end{center} \end{figure} \subsubsection{Reconstruction of $\phi(1020) \to \mathrm{K}^+\mathrm{K}^-$} The $\phi(1020) \to \mathrm{K}^+\mathrm{K}^-$ decay was reconstructed using data taken at 0.9\TeV centre-of-mass energy.
The candidate kaon tracks come from the collection of \textit{highPurity} tracks and are required to have $\pt > 0.5\GeVc$, normalized $\chi^2 < 2$, at least five hits, $|\eta|<2$, and a transverse impact parameter with respect to the reconstructed beamspot smaller than 3\mm. Finally, for tracks with $p<1\GeVc$, the track must have a measured $dE/dx$ consistent with the kaon hypothesis (see Eq.~\ref{eq:bethebloch}): $K (M^\textrm{min}/p)^2+C < dE/dx < K (M^\textrm{max}/p)^2+C$. The parameters of the $dE/dx$ cut for kaons are those extracted from a fit to the $dE/dx$ vs.\ $p$ distribution, as described in Sec.~\ref{sec:strips_dedx}. We use a compatibility window of $\pm200\MeVcc$ around the K mass, with $M^\textrm{min}$ and $M^\textrm{max}$ being lower and upper boundaries of this window. The fit of the mass spectra of pairs of tracks accepted by the $dE/dx$ selection used the sum of two normalized functions: a convolution of a relativistic Breit-Wigner shape with a Gaussian for the $\phi$ signal and an arctangent function for the background. The mass plot and overlaid fit are shown in Fig.~\ref{fig:phi_mass_signal}. The fitted $\phi$ mass of $1019.58 \pm 0.22\MeVcc$ is in agreement with the world-average value of $1019.455 \pm 0.020\MeVcc$. The resolution found in data is $1.29 \pm 0.32\MeVcc$, in agreement with the value found in simulation, $1.41\MeVcc$. Candidates in which at least one track fails the $dE/dx$ requirement are shown in Fig.~\ref{fig:phi_mass_background} where only background is observed, indicating that the $dE/dx$ requirement has a high efficiency to select $\phi(1020)$ candidates. 
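The kaon-compatibility window described above can be sketched as follows. This is a minimal illustration only: the calibration constants $K$ and $C$ come from the $dE/dx$ vs.\ $p$ fit of Sec.~\ref{sec:strips_dedx}, and any numerical values used with this sketch are hypothetical placeholders rather than the calibrated values.

```python
def dedx_kaon_compatible(dedx, p, K, C, m_kaon=0.493677, window=0.200):
    """Sketch of the dE/dx kaon-compatibility requirement: for
    p < 1 GeV/c the measured dE/dx must lie between the
    Bethe-Bloch-like expectations K*(m/p)^2 + C evaluated at the
    kaon mass -/+ a 200 MeV/c^2 window (masses in GeV/c^2,
    momentum in GeV/c)."""
    if p >= 1.0:
        return True  # the requirement is applied only below 1 GeV/c
    lo = K * ((m_kaon - window) / p) ** 2 + C
    hi = K * ((m_kaon + window) / p) ** 2 + C
    return lo < dedx < hi
```

For example, with placeholder constants the cut accepts a track whose measured $dE/dx$ sits near the kaon expectation at its momentum and rejects one far above it.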
\begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.47}{ \includegraphics[width=\linewidth]{fig/tracking/PhiFit_data_rereco_newdedx_fixwidth.pdf} \label{fig:phi_mass_signal} }} } \mbox{ \subfigure[] {\scalebox{0.47}{ \includegraphics[width=\linewidth]{fig/tracking/PhiFail_data_and_MC_rereco_newdedx.pdf} \label{fig:phi_mass_background} }} } \caption{$\mathrm{K}^+\mathrm{K}^-$ invariant mass distribution, with (a) both kaons satisfying the $dE/dx$ requirement and with (b) at least one particle failing that requirement. In (a) a fit to the $\phi(1020)$ hypothesis is shown.} \end{center} \end{figure} \subsection{Reconstruction of Photon Conversions and Nuclear Interactions} \label{sec:interactions} While the tracker is essential for finding charged particles and measuring their momenta, the tracker material is also a source of interactions. For photons, interactions with the tracker material can produce $e^+e^-$ conversion pairs, while for hadrons, nuclear interactions can produce multiple hadrons. Photon conversions in the tracker reduce the efficiency for low-energy-photon finding by the electromagnetic calorimeter, while nuclear interactions reduce track finding efficiency and can affect the resolution of many hadronic observables such as jets or missing transverse energy. Thus, identification of conversions and nuclear interactions can be used to improve many aspects of the event reconstruction. Furthermore, studies of conversions and interactions can be used to improve our understanding of the material in the tracker. The electrons and positrons from converted photons can be identified by the electromagnetic calorimeter and used as seeds for track reconstruction~\cite{CMS_NOTE_2006_005}. In the minimum bias events collected in December 2009, however, the photons have a soft spectrum, as seen in Fig.~\ref{fig:conversion_pt}, and therefore the conversion pairs are unlikely to reach the electromagnetic calorimeter.
These conversion pairs can still be reconstructed by using tracker-seeded conversion reconstruction techniques, made possible by the iterative tracking algorithm described in Section~\ref{sec:reconstruction}, which extends the capability of reconstructing low-\pt and detached tracks. The essential signature of a converted massless photon is two parallel tracks at the production vertex, in both the transverse and longitudinal planes. The reconstructed invariant mass, shown in Fig.~\ref{fig:conversion_mass}, shows the effect of the mass resolution, which is well modelled by the simulation. Two different conversion reconstruction approaches have been used. Both methods fit two oppositely charged tracks to a common 3D vertex with the constraint that the two tracks are parallel at the vertex. The methods differ mainly in the preselection of the track pairs. The first method, from which Figs.~\ref{fig:conversion_pt} and \ref{fig:conversion_mass} are derived, requires that both tracks have at least 3 hits and a normalized $\chi^2$ less than 10, and that at least one track has 5 or more hits. The tracks are required to have positive charge-signed transverse impact parameter, positive distance of minimum approach in 2D (i.e., the two full track circles have one or no intersection in the transverse plane), small $z$ separation at their innermost point ($|\Delta z|<5$\cm) if they are in the barrel, and a small opening angle in both the transverse ($\Delta \phi <0.2$) and longitudinal plane ($\Delta\cot\theta<0.1$, where $\theta$ is the polar angle relative to the $z$ axis). The vertex fit must have a $\chi^2$ probability better than $5\times 10^{-3}$ and be located inside the innermost hits on the tracks. To increase efficiency, the second method takes \textit{all} tracks with a $\chi^2$ probability above $10^{-6}$ and requires a vertex with fit probability greater than $10^{-6}$, radius greater than 2\cm, and at most one hit per track inside the vertex position.
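The pair preselection used by the first conversion-finding method can be summarized in a small sketch. The cut values are those quoted above; the track quantities are passed in as plain numbers, standing in for fields of a real track object, so this is an illustration of the logic rather than the actual implementation.

```python
def conversion_preselection(q1, q2, dphi, dcot_theta, dz, in_barrel,
                            signed_d0_1, signed_d0_2, dca_2d):
    """Loose pair preselection for tracker-seeded photon conversions:
    opposite charges, positive charge-signed transverse impact
    parameters, non-intersecting 2D track circles (positive 2D
    distance of minimum approach), small opening angles in the
    transverse (dphi < 0.2) and longitudinal (dcot_theta < 0.1)
    planes, and, in the barrel, |dz| < 5 cm at the innermost point."""
    if q1 * q2 >= 0:
        return False
    if signed_d0_1 <= 0 or signed_d0_2 <= 0:
        return False
    if dca_2d < 0:  # the two circles intersect twice in the transverse plane
        return False
    if abs(dphi) >= 0.2 or abs(dcot_theta) >= 0.1:
        return False
    if in_barrel and abs(dz) >= 5.0:  # cm
        return False
    return True
```

A pair passing all the geometric requirements would then proceed to the constrained vertex fit described above.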
The $\chi^2$ probability from the second method is shown in Fig.~\ref{fig:conversion_prob} with good agreement between data and simulation. \begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[angle=90,width=\linewidth]{fig/tracking/photon_pt_ext_data.pdf} \label{fig:conversion_pt} }} } \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[angle=90,width=\linewidth]{fig/tracking/photon_massMomAtVtx_linear_paper.pdf} \label{fig:conversion_mass} }} } \mbox{ \subfigure[] {\scalebox{0.313}{ \includegraphics[width=\linewidth]{fig/tracking/ConvProbStackPAS.pdf} \label{fig:conversion_prob} }} } \caption{Comparisons of data photon conversions (points) and real and fake photon conversion from simulation (filled histograms) showing: (a) distributions of the reconstructed \pt of the converted photons from the first method, (b) the invariant mass of the $e^+e^-$ pairs from the first method, and (c) the distribution of the vertex $\chi^2$ probability from the second method. The last bin of (b) is the overflow bin.} \end{center} \end{figure} The nuclear interaction finder starts from the full list of tracks described in Section~\ref{sec:reconstruction}. For each pair of tracks, the distance of closest approach is computed and if the two tracks are close enough they are considered linked together. A recursive finder produces blocks of tracks linked together from which a rough estimate of the displaced vertex position is computed. Finally, the tracks from a block are refitted together with a displaced vertex as a common constraint. $V^{0}$ decays and photon conversions are removed from the resulting sample of displaced vertices. A tight selection is applied to the remaining vertices to remove fake tracks and pairs from the primary vertex. The resulting sample of significantly displaced vertices in the radial direction ($r>2.5$\cm) is called the nuclear interactions sample. 
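The recursive block finding used by the nuclear interaction finder is, in essence, a connected-components search over track pairs whose distance of closest approach is below a threshold. A minimal sketch follows; the pair distances and threshold here are hypothetical inputs standing in for the actual distance-of-closest-approach computation.

```python
def link_blocks(n_tracks, pair_distances, max_dist):
    """Group tracks into blocks: tracks i and j are linked if their
    distance of closest approach is below max_dist; blocks are the
    connected components of the resulting link graph.
    pair_distances maps (i, j) with i < j to a distance."""
    # build the adjacency from the links
    adj = {i: set() for i in range(n_tracks)}
    for (i, j), d in pair_distances.items():
        if d < max_dist:
            adj[i].add(j)
            adj[j].add(i)
    blocks, seen = [], set()
    for start in range(n_tracks):
        if start in seen:
            continue
        # depth-first search collecting one block
        stack, block = [start], set()
        while stack:
            t = stack.pop()
            if t in block:
                continue
            block.add(t)
            stack.extend(adj[t] - block)
        seen |= block
        blocks.append(sorted(block))
    return blocks
```

Each resulting block provides the rough displaced-vertex estimate and the track list for the common refit described above.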
In the data, $80\%$ of nuclear interactions are reconstructed with 2 tracks and $20\%$ with 3 tracks. In the first case, a $30\%$ combinatorial fake rate is expected from the simulation, while in the second case the fake rate is negligible. The distribution of nuclear interaction positions provides a means of observing the material in the detector and validating the simulation of the material. The distribution of radial position $r$ of the nuclear vertices, compared to the simulation, is shown in Fig.~\ref{fig:interactions_r}. The beam pipe at a radius of 3\cm, as well as the three barrel pixel layers at average radii of 4.3, 7.3, and 10.2~cm, are clearly seen. The radius is measured relative to the centre of the pixel detector. In the version of the simulation used here, this is also the centre of the beam pipe. In reality, the beam pipe centre is offset from the pixel detector centre resulting in a smeared distribution versus radius. Nevertheless, there is good agreement between the data and the simulation for the relative rate of nuclear interactions in the different barrel pixel structures and the beam pipe. This indicates a consistent description of the material distribution in this region. The material distribution in the endcap pixel detector is studied by selecting nuclear interactions with $|z|>26$\cm and $r < 19$\cm. The longitudinal position $|z|$ of the nuclear vertices, compared to the simulation, is shown in Fig.~\ref{fig:interactions_z}. The pixel barrel flange ($|z|<30$\cm) and the two pixel disks can be clearly distinguished. The tail up to 1\,m is from pixel services. 
\begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.42}{ \includegraphics[width=\linewidth]{fig/tracking/VertexRho_interactions.pdf} \label{fig:interactions_r} }} } \mbox{ \subfigure[] {\scalebox{0.42}{ \includegraphics[width=\linewidth]{fig/tracking/VertexZ_interactions.pdf} \label{fig:interactions_z} }} } \caption{Distributions of nuclear interaction vertices versus (a) radial position $r$ for $|z|<26$\cm and (b) versus the magnitude of the longitudinal coordinate $|z|$ for $|z|>26$\cm and $r<19$\cm. The simulation histogram is normalized to the total number of nuclear interactions found in data in the full $z$ range.} \end{center} \end{figure} \subsection{Study of $b$-tag Related Observables} \label{sec:btag} The measurement of impact parameters and the reconstruction of secondary vertices have been tested with the limited event sample of December 2009. At higher collision energy these objects will provide the main observables used in b-tagging algorithms. The 2009 data contain only a few well-defined jets and mainly tracks at momenta below those typically used in b-tagging. To test the reconstruction on a sufficiently large sample, a few changes to the reconstruction chain have been applied with respect to what is described in Ref.~\cite{CMS_PAS_BTV_09_001}. As described in Ref.~\cite{CMS_PAS_JME_10_001}, jet reconstruction is performed using the anti-\kt jet clustering algorithm~\cite{Cacciari:2008gp} on objects obtained from the CMS particle flow reconstruction~\cite{CMS_PAS_PFT_09_001,CMS_PAS_PFT_10_001}. To recover low-momentum jets, the cone size is increased to 0.7 and the minimum $\pt$ is reduced to $3\GeVc$. The b-tagging algorithms are run on tracks associated with these jets. The track selection is also changed relative to Ref.~\cite{CMS_PAS_BTV_09_001}; the minimum $\pt$ requirement is removed and 7 rather than 8 hits are required. 
The impact parameter is computed with respect to the reconstructed primary vertex and the distributions are compared between data and a minimum bias simulation reconstructed with the same algorithm settings. Figure~\ref{fig:btag_ip} shows the three-dimensional impact parameter significance distribution for all tracks in a jet. The secondary-vertex reconstruction using the tracks associated to jets has also been slightly modified compared to the algorithm described in Ref.~\cite{CMS_PAS_BTV_09_001}. The differences are a looser track selection, a relaxed vertex-to-jet direction compatibility, the use of track refitting in the secondary-vertex fit, and the use of the primary-vertex constraint rather than the beamspot. In addition, to suppress K$_\mathrm{S}^0$ candidates, the transverse secondary-vertex separation must be less than 2.5\cm and the secondary-vertex invariant mass more than 15\MeVcc from the nominal K$_\mathrm{S}^0$ mass. The significance of the distance between primary and secondary vertices is compared to what is expected from a simulation of minimum bias events in Fig.~\ref{fig:btag_sv}. While many two- and three-track vertices are reconstructed, only one four-track vertex is found in the data. This event is shown in Fig.~\ref{fig:fourtrackvertex}. \begin{figure}[hbtp] \begin{center} \mbox{ \subfigure[] {\scalebox{0.47}{ \includegraphics[width=\linewidth]{fig/tracking/ip3dsig.pdf} \label{fig:btag_ip} }} } \mbox{ \subfigure[] {\scalebox{0.47}{ \includegraphics[width=\linewidth]{fig/tracking/sv3dsig.pdf} \label{fig:btag_sv} }} } \caption{Distribution of (a) the significance of the three-dimensional impact parameter for all tracks in a jet and (b) the significance of the three-dimensional displacement of the secondary vertex. The data are shown as full circles while the simulation contributions from light flavour, charm, and bottom are shown as different-shaded histograms. 
The outermost bins contain the respective histogram underflow/overflow.} \end{center} \end{figure} \begin{figure}[hbtp] \begin{center} \includegraphics[width=0.5\textwidth]{fig/tracking/fourtrackvertex_crop2.png} \caption{Display of an event with a four-track secondary vertex. The vertex is separated from the primary vertex by 7$\sigma$ and the invariant mass of the four particles is 1.64\GeVcc, assuming they are all pions.} \label{fig:fourtrackvertex} \end{center} \end{figure}
\section{Introduction} \onehalfspacing The ability to control the physical properties of a material by tuning its bandgap is at the heart of modern electronic devices \cite{Alattas2018}. Studies have found that an energy gap can be generated in two-dimensional materials, such as monolayer \cite{Elias2019} or bilayer graphene (BLG) \cite{doi:10.1021/nn202463g}. A BLG, consisting of two stacked monolayers of graphene, is a material in which the physical properties can be controlled by tuning the bandgap \cite{Novoselov666}. It is thus possible to improve the electrical, thermal and optical conductivities and to modulate the mechanical properties \cite{McCann_2013}. Although several of the characteristics of a BLG are similar to those of a monolayer, a BLG has interesting underlying physics that holds potential for electronics applications such as sensors \cite{SEEKAEW2017357} exhibiting high sensitivity \cite{QIN2017760}, stable specificity, and fast response \cite{PhysRevB.100.075421}. The electronic properties of a BLG have been studied using density functional theory (DFT), in which the bandgap at the $K$ point in the Brillouin zone depends linearly on the average applied electric field \cite{NEMNES20199, PhysRevB.79.165431}. One of the main approaches to alter the electrostatic potential of a BLG is substitutional doping with foreign atoms \cite{C0JM02922J} such as Boron (B), Nitrogen (N) \cite{NEMNES2018175}, and Silicon (Si) atoms. For instance, a B- or N-doped BLG exhibits p-type or n-type semiconducting behavior, respectively, with a corresponding shift of the Fermi energy. A B and N codoped BLG is a semiconducting material with a small bandgap, where the Fermi energy is located in the bandgap \cite{doi:10.1002/adma.200901285}. A substantial bandgap can be created by codoping BLG with Boron and Nitrogen atoms, and the size of the band gap is effectively tuned by the presence of B-N pairs \cite{Alattas2018}.
The band gap created in a boron nitride (BN) codoped BLG, varied by an external electric field, can be used for investigating photocatalysis \cite{doi:10.1063/1.4950993}. These electronic properties of B- or N-doped BLG are almost identical for the AA- and AB-stacking patterns with respect to the dopant atoms \cite{doi:10.1021/acs.nanolett.9b00986}. The doping of graphene with Si, N and B atoms leads to modified bonds in the BLG, which in turn affects its mechanical properties. The influence of Si, N and B doping on the mechanical properties of graphene has been examined, and it was shown to lead to an almost linear decrease of the Young's modulus. Such doping effects are found to be most significant for silicon, less pronounced for boron, and small or negligible for nitrogen \cite{HAN2015618}. In addition, molecular dynamics simulations of twisted bilayer graphene have shown that it possesses outstanding mechanical properties. In the linear elastic region, the mechanical strain rate and the presence of cracks have negligible effects, while fracture toughness emerges in the nonlinear mechanical region \cite{liu2018molecular}. The optical characteristics of BLG are interesting for optoelectronic devices and graphene-based photodetectors \cite{Echtermeyer2011}. Inter- and intraband transitions in graphene have been studied using infrared spectroscopy, revealing a strong intraband absorption in the terahertz frequency range \cite{5951298, Abdullah_2019, Abdullah2019}. The foreign atoms can be used to control the optical properties as a function of the doping, as has been demonstrated in an electrostatically gated BLG \cite{PhysRevB.79.115441}. The role of the doping atoms in the $\pi-\pi$ interactions was studied, and a systematic red shift and a broadening of the lowest excitations in the optical absorption were demonstrated \cite{doi:10.1021/jp504222m}.
Furthermore, Si doping opens the band gap of graphene and enhances its optical conductivity \cite{Houmad2015, en12061082}. Pure monolayer and bilayer graphene structures are generally not good candidates for thermoelectric-based devices because of the vanishing bandgap \cite{Dollfus_2015, ABDULLAH2018223}. In the pure material the Seebeck coefficient, $S$, and the thermoelectric figure of merit, $ZT$, are thus very limited. In order to enhance $S$ and $ZT$, one may introduce B and N doping atoms in the graphene structures. It has been shown that graphene/BN heterostructures provide the possibility to tune and strongly decrease the phonon thermal conductivity, which is a very favourable feature to enhance thermoelectric properties such as $S$ and $ZT$ \cite{PhysRevB.86.115410, PhysRevB.84.205444, ABDULLAH2020103282}. In this work, we consider AA- and AB-stacked BLG doped with Si, B, and N atoms, represented by the Si$_{2}$C$_{14}$ and BC$_{14}$N structures. The electronic, mechanical, optical and thermal characteristics are investigated using density functional theory. A comparison between the AA- and the AB-stacked BLG is presented, together with the detailed mechanisms by which Si and BN impurity atoms modify these two BLG structures. In \sec{Sec:Model} the structure of BLG is briefly reviewed. In \sec{Sec:Results} the main results are analyzed. In \sec{Sec:Conclusion} the conclusions are presented. \section{Computational details}~\label{Sec:Model} \onehalfspacing The present results have been obtained using the Quantum Espresso (QE) package \cite{Giannozzi_2009}, which encompasses tools for first-principles calculations and materials modeling for electronic structure simulations based on DFT. A full structure relaxation is obtained by including van der Waals interactions in the exchange-correlation (XC) functional of the DFT model, where the $k$-point grid is $12\times12\times1$ \cite{PhysRevB.82.081101}.
The Perdew-Burke-Ernzerhof (PBE) functional within the framework of the generalized gradient approximation is employed for the calculations of the geometry optimizations and the electronic properties \cite{PhysRevB.23.5048}. For the Brillouin zone sampling and the calculations of the density of states (DOS), $12\times12\times1$ and $77\times77\times1$ grids are used, respectively. The energy cutoff for the plane wave expansion is set to $1088.45$~eV for all the calculations. Pure and doped BLG structures are iteratively optimized, and the calculations are considered converged when the force on each atom is less than $10^{-6}$ eV/$\text{\normalfont\AA}$. The crystalline and molecular structure visualization program (XCrySDen) is employed to visualize all the structures \cite{KOKALJ1999176}. In addition, the Boltzmann transport properties software package (BoltzTraP) is used to investigate the thermal properties of the systems \cite{madsen2006boltztrap-2}. The BoltzTraP code uses a mesh of band energies and has an interface to the QE package \cite{ABDULLAH2020126578}. The optical characteristics of the systems are obtained by the QE code. \hfill \section{Results}~\label{Sec:Results} We model the AA- and AB-stacked BLG with a $2\times2$ supercell. Two types of doping are considered: Si-doped BLG and BN-codoped BLG. These two types of atom doping are expected to form BLG-like semiconductor materials. \subsection{AA- and AB-stacked structures} We consider an AA- and an AB-stacked BLG in our study, as shown in \fig{fig01}. The AA-stacked BLG is composed of two layers that are exactly aligned, while in the AB-stacked BLG (Bernal stacking), the carbon atoms belonging to different sublattices, A and B, form the AB stacking pattern between the layers (atoms belonging to the A sublattice in one layer are stacked directly above the atoms of the B sublattice from the other layer).
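The two stacking patterns described above can be illustrated by generating the four atomic sites of a bilayer unit cell in fractional in-plane coordinates. This is only a geometric sketch: the interlayer spacing is taken as input, and for AB stacking the top layer is shifted by one sublattice vector so that its A sites sit directly above the B sites of the bottom layer.

```python
def bilayer_positions(stacking, interlayer=3.4):
    """Fractional in-plane (u, v) coordinates of the A and B
    sublattice sites in each layer, plus the layer height in
    Angstrom. AA: layers exactly aligned; AB (Bernal): top layer
    shifted by (1/3, 2/3) so its A sites lie above the bottom
    B sites."""
    bottom = [(0.0, 0.0, 0.0), (1/3, 2/3, 0.0)]  # A, B sites
    if stacking == "AA":
        shift = (0.0, 0.0)
    elif stacking == "AB":
        shift = (1/3, 2/3)
    else:
        raise ValueError("stacking must be 'AA' or 'AB'")
    top = [((u + shift[0]) % 1.0, (v + shift[1]) % 1.0, interlayer)
           for (u, v, _) in bottom]
    return bottom + top
```

With the AB shift, the remaining top-layer sublattice falls over the hexagon centers of the bottom layer, which is the Bernal arrangement invoked above to explain the weaker interlayer repulsion.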
\begin{figure*}[htb] \centering \includegraphics[width=0.7\textwidth]{fig01.pdf} \caption{AA- and AB-stacked pristine BLG, Si$_2$C$_{14}$, and BC$_{14}$N. The C, Si, B, and N atoms are shown in brown, golden, blue, and red, respectively.} \label{fig01} \end{figure*} \begin{table*}[h] \centering \begin{center} \caption{\label{table_1} The lattice constant, a, the inter-layer distance, $l$, the B-N distance (d$_{\rm BN}$), the Si-Si distance (d$_{\rm Si-Si}$), and the C-C, C-B, C-N, and C-Si bond lengths for all pure and doped structures. The unit of all parameters is $\text{\normalfont\AA}$.} \begin{tabular}{|l|l|l|l|l|l|l|l|l|}\hline Str. & a & $l$ & d$_{\rm Si-Si}$& d$_{\rm BN}$& C-C& C-B& C-N & C-Si \\ \hline \multicolumn{9}{|c|}{AA-stacking} \\ \hline BLG & 2.46 & 3.6 & - & - &1.42 & - & - & - \\ Si$_2$C$_{14}$ & 2.73 & 4.5 & 5.25 & - &1.47 & - & - & 1.69 \\ BC$_{14}$N & 2.48 & 3.05 & - & 3.71 &1.419 & 1.48 & 1.43 & - \\ \hline \multicolumn{9}{|c|}{AB-stacking} \\ \hline BLG & 2.46 & 3.4 & - & - &1.42 & - & - & - \\ Si$_2$C$_{14}$ & 2.84 & 4.19 & 4.46 & - &1.44 & - & - & 1.69 \\ BC$_{14}$N & 2.51 & 2.9 & - & 2.94 &1.425 & 1.48 & 1.44 & - \\ \hline \end{tabular} \end{center} \end{table*} Based on the interaction effects between the atoms and the interlayer interaction, we consider two types of dopant-atom arrangements in the BLG, with Si dopant or BN-codopant atoms. The Si-doped BLG is identified as Si$_2$C$_{14}$ for both the AA- and the AB-stackings, shown in \fig{fig01} (middle panel). In addition, a BN-codoped BLG is labeled as BC$_{14}$N for the AA- and the AB-stackings. In a $2\times2\times1$ super-cell of Si$_2$C$_{14}$, one Si (golden) atom is doped in each layer, while in BC$_{14}$N we assume one B (blue) atom in the top layer and one N (red) atom in the bottom layer.
The Si atom in the top (bottom) layer is doped at the para (meta) position, while the B (N) atom is located at the ortho (para) position \cite{ABDULLAH2020126350,ABDULLAH2020126807,ABDULLAH2020103282}. The interlayer distances of the AA- and the AB-stacked BLG are found to be $3.6$ and $3.4$~$\text{\normalfont\AA}$, respectively, in good agreement with experimental \cite{doi:10.1063/1.2975333} and theoretical \cite{Alattas2018} results. The lattice constant, a, the interlayer distance, $l$, the B-N distance (d$_{\rm BN}$), the Si-Si distance (d$_{\rm Si-Si}$), and the C-C, C-B, C-N and C-Si bond lengths are all presented in \tab{table_1}. It is known that the repulsive force between the two layers in the case of the AA-stacking is much stronger than that in the AB-stacking. This is attributed to the fact that in the AB-stacking half of the atoms of the first layer are located at the center of a hexagon of the second layer, while the other half lie directly above atoms of the second layer. Therefore, the AA-stacking is energetically less favorable, and the AB-stacking is the more stable arrangement. The repulsive interaction between the layers of the AA-stacking leads to a larger interlayer distance. Another observation from the data in \tab{table_1} is that the average lattice constant of both Si$_2$C$_{14}$ and BC$_{14}$N is larger than that of BLG for both the AA- and the AB-stacking, indicating a super-cell expansion due to the dopant atoms. This is attributed to the larger atomic radii of the B and Si atoms compared to the C atom. In addition, the super-cell expansion for the doped AB-stacked structure is slightly larger than that of the AA-stacked one. This may be attributed to the interlayer interaction, which is stronger for the AB-stacked doped systems as the interlayer distance of the AB-stacked structures is smaller.
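As a quick consistency check of the pristine values in \tab{table_1}: for an ideal flat honeycomb layer the lattice constant and the C-C bond length are related by $a=\sqrt{3}\,d_{\rm C-C}$. The sketch below verifies this relation; doped layers deviate from the ideal geometry, so it applies only to the pristine rows.

```python
import math

def lattice_constant_from_bond(d_cc):
    """Ideal honeycomb relation a = sqrt(3) * d_CC (both in Angstrom)."""
    return math.sqrt(3.0) * d_cc
```

With the pristine bond length of $1.42$~$\text{\normalfont\AA}$ this gives $a \approx 2.46$~$\text{\normalfont\AA}$, matching the BLG rows of the table.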
Furthermore, the inter-layer distance in Si$_2$C$_{14}$ is larger, while in BC$_{14}$N it is smaller, compared to BLG for both the AA- and AB-stackings. This reveals a repulsive interaction between the layers in Si$_2$C$_{14}$ \cite{DENIS2010251}, but an attractive interaction between the layers in BC$_{14}$N. These repulsive and attractive interactions arise due to the presence of the dopant atoms. Several methods have been used to analyse the components or sources of the interlayer interactions in BLG. First, the interaction that emerges due to sp$^3$ bonding between the two layers of BLG. This type of interaction is studied by incrementally moving two atoms with the same planar coordinates (one in the upper layer and the other in the lower layer) to the sp$^3$ bond distance, which is about $1.54$~$\text{\normalfont\AA}$ \cite{doi:10.1063/1.4740259}. Second, non-bonding potentials are used to describe the van der Waals interactions between the layers of BLG \cite{PhysRevB.62.13104, PhysRevB.80.245424, PhysRevB.85.245430}. The van der Waals interaction is a dipole-dipole interaction that is effective if the distance between the dipoles is around 4-5~$\text{\normalfont\AA}$. Third, the non-bonding interaction energy between the layers of a BLG \cite{doi:10.1021/jp2095032}. In this case the interaction energy determines whether the interlayer interaction is repulsive or attractive. In our work, the non-bonding interlayer interaction energy between the layers of the BLG is considered, where the dopant atoms play an essential role \cite{Dappe_2012}. The interaction energy between the dopant atoms in graphene can be obtained from the total energies of the systems using DFT calculations.
The interaction energy between two dopant atoms in a structure can be defined as \begin{equation} \Delta E = E_2 + E_0 - 2 E_1, \end{equation} where E$_0$, E$_1$ and E$_2$ are the total energies of the systems with zero, one and two substitutional dopant atoms, respectively. The interaction energy between the Si atoms in the different layers of the AA- and the AB-stacked Si$_2$C$_{14}$ is found to be $2.11$~eV and $1.34$~eV, respectively, revealing a stronger repulsive interaction between the two substitutional Si atoms in the AA-stacking. The interaction is repulsive as the interaction energy has a positive value in both cases. Even though the Si atoms in the AA-stacking do not lie directly above each other, the repulsive interaction between the layers is still stronger, just as in the pure AA-stacked BLG. We can say that the AB-stacked Si$_2$C$_{14}$ is more stable than the AA-stacked one because the repulsive interaction in the AB-stacking is smaller, leading to a stronger interlayer binding. The interaction between Si atoms in monolayer~\cite{Wei2014} and bilayer~\cite{DENIS2010251} graphene has been studied before, and a repulsive interaction between the Si atoms was confirmed. In addition, an attractive interaction between the B and the N atoms is observed in both the AA- and the AB-stacked BC$_{14}$N, as the interaction energy has a negative value, $-3.2$ and $-2.5$~eV, respectively. The attractive interaction between the B and the N atoms leads to a decrease of the interlayer distance, which is $3.05$~$\text{\normalfont\AA}$ for the AA-stacking and $2.9$~$\text{\normalfont\AA}$ for the AB-stacking. Furthermore, the B atom in the top layer and the N atom in the bottom layer are slightly moved toward each other, indicating an attractive interaction between these two atoms. We conclude that the AB-stacking is more stable because of the attractive interlayer interactions.
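As a quick numerical sanity check of the sign convention, the pair-interaction energy can be evaluated from three total energies. The totals below are hypothetical placeholders, not the DFT values of this work; only the formula and the sign interpretation (positive for repulsion, negative for attraction) follow the text.

```python
# Dopant-pair interaction energy, standard convention:
# dE = E2 + E0 - 2*E1 (positive => repulsive, negative => attractive).
def interaction_energy(e0, e1, e2):
    """e0: pristine cell, e1: one dopant, e2: two dopants (all in eV)."""
    return e2 + e0 - 2.0 * e1

# Hypothetical totals chosen so that dE ~ +2.11 eV, mimicking the
# repulsive AA-stacked Si2C14 case reported in the text.
dE = interaction_energy(e0=-310.00, e1=-305.00, e2=-297.89)
print(f"dE = {dE:+.2f} eV -> {'repulsive' if dE > 0 else 'attractive'}")
```

A negative value of the same expression, as for the B-N pair, would signal an attractive interaction.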
The attractive interaction between the B and the N atoms in graphene has been studied in two-dimensional systems such as monolayer graphene and silicene nanosheets~\cite{doi:10.1063/1.4742063, abdullah2020properties}. \subsection{Electronic Band structure} The electronic band structure is plotted in \fig{fig02} for the AA-stacked BLG (a), Si$_2$C$_{14}$ (c), and BC$_{14}$N (e), and the AB-stacked BLG (b), Si$_2$C$_{14}$ (d), and BC$_{14}$N (f). In the AA-stacked BLG, the electron distribution pattern has a hexagonal symmetry which is almost the same as for monolayer graphene. It thus generates a linear dispersion of the valence band ($\pi$) and the conduction band ($\pi^*$) intersecting at the K-point. In the AB-stacked BLG, parabolic bands are found near the Fermi level due to the asymmetric interactions between the upper and the lower layers. The lowest valence band and the conduction band only have a weak overlap near the K-point, which means that the density of free carriers is very low. In both the AA- and the AB-stackings, the $\Gamma$-point determining the $\pi$-band width corresponds to the maximum and minimum $\pi$-band energy levels. The energy states are affected by interlayer atomic interactions at the M-point \cite{PhysRevB.74.085406}. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{fig02.pdf} \caption{Electronic band structure of the AA-stacked BLG (a), Si$_2$C$_{14}$ (c), and BC$_{14}$N (e), and the AB-stacked BLG (b), Si$_2$C$_{14}$ (d), and BC$_{14}$N (f). The Fermi energy is set at 0 eV.} \label{fig02} \end{figure} Furthermore, the energy spacing between the $\pi_2^*$ and the $\pi_1^*$ bands near the K-point is $0.11$ and $0.07$~eV for the AA- and the AB-stackings, respectively, which equals the corresponding $\pi_2\text{-}\pi_1$ spacing, where $\pi_2$ and $\pi_1$ refer to the double states of the $\pi$-bands. The linear dispersion of the AA-stacked BLG has been experimentally confirmed \cite{Kim2013}.
The dopant atoms in BLG induce a symmetry breaking, and the interlayer interactions result in a bandgap. Band gaps also appear due to the interactions of dopants arising from their lateral periodicity. We therefore see a bandgap at the K-point for the AA- and AB-stacked Si$_2$C$_{14}$ and the AA-stacked BC$_{14}$N \cite{RASHID2019102625}. The repulsive interaction in the AA- and the AB-stacked Si$_2$C$_{14}$ induces a direct bandgap of $0.66$ and $0.72$~eV, respectively. Thus, both systems exhibit semiconductor behavior. The attractive interaction in the AA-stacked BC$_{14}$N generates a very small indirect bandgap, $0.025$~eV, indicating a semiconducting character. The B/N atoms change the position of the Dirac cone along the $M$-$K$ direction. The Fermi energy crosses the valence band maxima and the conduction band minima in the AB-stacked BC$_{14}$N, revealing a degenerate semiconductor behavior. Furthermore, the energy spacing between the $\pi_2$ and the $\pi_1$ bands is increased in Si$_2$C$_{14}$, while the double states of the $\pi$-bands in BC$_{14}$N disappear entirely. This is in good agreement with recent results for BN-codoped BLG in which the BLG is doped with one N atom in one layer and one B atom in the other \cite{Alattas2018}. \subsection{Stress-Strain curves} The mechanical properties of a structure can be determined by the stress-strain curve. A uniaxial tensile strain is applied to the atoms of both layers in the zigzag and the armchair directions. In the DFT calculation, the system is extended by applying small displacement increments of $0.02$ to the atoms at both ends. After each elongation, the system is relaxed to reach a new equilibrium state with both ends fixed. The elongation and relaxation procedures are repeated until the desired tensile strain is reached. The same mechanism has been applied to BLG using MD simulations \cite{ZHANG20114511}.
\begin{table}[h] \centering \begin{center} \caption{\label{table_2} The Young modulus (YM), the tensile strength (TS), and the fracture stress (FS) of all structures, in GPa.} \begin{tabular}{|l|c|c|c|c|c|c|}\hline \multirow{2}{1.0cm}{AA} & \multicolumn{3}{c|}{Zigzag} & \multicolumn{3}{c|}{Armchair} \\ \cline{2-7} & \multicolumn{1}{c|}{YM} & \multicolumn{1}{c|}{TS} & \multicolumn{1}{c|}{FS} & \multicolumn{1}{c|}{YM} & \multicolumn{1}{c|}{TS} & \multicolumn{1}{c|}{FS} \\ \hline BLG & 974 & 99.64 & 99.64 & 974 & 96.22 & 96.22 \\ Si$_2$C$_{14}$ & 732 & 50.22 & 46.2 & 728 & 36.72 & 36.72 \\ BC$_{14}$N & 946 & 80.06 & 66.63 & 948 & 76.13 & 76.13 \\ \hline \multicolumn{1}{| c }{AB} \\ \hline BLG & 885 & 90.58 & 90.58 & 882 & 88.5 & 88.5 \\ Si$_2$C$_{14}$ & 665 & 45.63 & 38.1 & 670 & 42.55 & 39.34 \\ BC$_{14}$N & 898 & 72.71 & 41.9 & 855 & 71.72 & 71.72 \\ \hline \end{tabular} \end{center} \end{table} The stress-strain curves are shown in \fig{fig03} for both the AA- and the AB-stacked BLG (green), Si$_2$C$_{14}$ (red), and BC$_{14}$N (blue) in the zigzag (a and c) and the armchair (b and d) directions. In the pure AA-stacked BLG, the stress increases linearly in a small strain regime, in which the system stretches elastically up to $\le 5\%$. This linearity is measured by the Young modulus, which is calculated as the initial slope of the stress-strain curve. The linear part of the curves is also called the elastic region. Table \ref{table_2} presents the Young modulus (YM), the tensile strength (TS), and the fracture stress (FS) of the undoped and doped BLG. The first column in \tab{table_2} shows the structures, columns 2 to 4 give the values in the zigzag direction, and the last three columns give the values in the armchair direction.
The stress-strain relationships of the AA- and the AB-BLG reveal Young moduli of $974$ and $885$~GPa, respectively, which are very close to the Young moduli obtained in experimental \cite{Lee385} and theoretical \cite{doi:10.1063/1.4789594} reports. At large strain, the stress of the system responds non-linearly to the strain until the system's failure, which determines the fracture point. The ultimate tensile strength is defined as the maximum of the stress-strain curve. The corresponding strain is introduced as the ultimate tensile strain, indicating the flexibility of the system. The ultimate tensile stress of the AA- and the AB-BLG is $99.64$ and $90.58$~GPa, respectively, at a strain of $0.151$. The values of the ultimate stress are here also equal to the fracture stress, as there is no stretching of the structure after the fracture point~\cite{doi:10.1063/1.5091753}. The same behavior of the stress-strain curve is found for the armchair direction, where the Young modulus is nearly the same as in the zigzag direction for both the AA- and the AB-stackings. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{fig03.pdf} \caption{Stress-strain curves of AA- and AB-stacked BLG (green), Si$_2$C$_{14}$ (red), and BC$_{14}$N (blue) in the zigzag (a, c) and armchair (b, d) directions.} \label{fig03} \end{figure} In the AA- and the AB-stacked Si$_2$C$_{14}$ and BC$_{14}$N the stress-strain curves are modified due to the presence of the dopant atoms. The bond energies of C-Si, C-B and C-N are smaller than that of C-C \cite{C2NR11728B, JAVVAJI201725}. The bonds can be generically ordered from high to low bond energy as C-C, C-N, C-B, and Si-C. We therefore see a reduction in the stress-strain curves of Si$_2$C$_{14}$ and BC$_{14}$N, which is caused by the smaller bonding energies of the dopant atoms with the C atoms~\cite{C2NR11728B}.
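The three tabulated quantities can be extracted from any sampled stress-strain curve exactly as described: the Young modulus as the initial slope, the tensile strength as the maximum stress, and the fracture stress as the stress at the last point before failure. A minimal sketch on a synthetic curve (illustrative data, not the DFT curves of Fig. 3):

```python
import numpy as np

def mechanical_properties(strain, stress, elastic_limit=0.05):
    """Young modulus (initial slope), tensile strength (max stress),
    and fracture stress (stress at the last sampled point)."""
    mask = strain <= elastic_limit
    # linear fit through the origin over the elastic region
    ym = np.sum(stress[mask] * strain[mask]) / np.sum(strain[mask] ** 2)
    ts = stress.max()
    fs = stress[-1]
    return ym, ts, fs

# Synthetic curve: linear up to 5% strain, then softening up to failure.
strain = np.linspace(0.0, 0.15, 151)
stress = np.where(strain <= 0.05, 974.0 * strain,
                  974.0 * 0.05 + 300.0 * (strain - 0.05))
ym, ts, fs = mechanical_properties(strain, stress)
```

On this synthetic data the fitted Young modulus reproduces the 974 GPa slope by construction.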
In addition, the super-cell expansion due to the dopant atoms mentioned before is another reason why the Si$_2$C$_{14}$ and BC$_{14}$N bilayers need less tensile stress. Even though the repulsive interlayer interaction is increased in both the AA- and AB-stacked Si$_2$C$_{14}$, the Young modulus still decreases. This indicates that the binding energy between the C and Si atoms is dominant in controlling the elastic properties of the system. The same scenario applies to BC$_{14}$N, where an attractive interaction between the layers exists. Another observation is that the doped systems stretch linearly only up to $\le 4\%$ in the AA-stacking, demonstrating reduced elasticity compared to the pristine AA-stacked BLG in both the zigzag and the armchair directions (see \tab{table_2}). \subsection{Optical absorption spectra} In general, bilayer graphene has much richer spectral features than a monolayer. We therefore present here the optical response of the AA- and the AB-stacked systems. We find that the optical response of the dielectric function of a BLG can be manipulated through repulsive or attractive interlayer interactions. The imaginary part of the dielectric function, $\varepsilon_2$, is shown in \fig{fig04} for the AA- (a and b) and the AB-stackings (c and d) in the case of an in-plane, E$_{\rm in}$, (left panel) and an out-of-plane, E$_{\rm out}$, (right panel) electric field. \begin{figure}[htb] \centering \includegraphics[width=0.48\textwidth]{fig04.pdf} \caption{Imaginary part of the dielectric function for the AA- (a and b) and the AB-stackings (c and d) in the case of an in-plane, E$_{\rm in}$, (left panel) and an out-of-plane, E$_{\rm out}$, (right panel) electric field.} \label{fig04} \end{figure} It is well known that in the case of E$_{\rm in}$ the AA-stacked BLG has two main peaks in the dielectric function, at $3.95$ and $13.87$~eV, formed by the $\pi$ to $\pi^*$ and the $\sigma$ to $\sigma^*$ transitions, respectively.
Furthermore, in E$_{\rm out}$ the two main peaks are generated by transitions from $\sigma$ to $\pi^*$ at $11.22$~eV and from $\pi$ to $\sigma^*$ at $14.26$~eV. The anisotropic behaviour is clearly observed for the two different polarizations \cite{NATH2015691}. The AB-stacked BLG has its peaks at almost the same energy values with less intensity, since its interlayer distance is close to that of the AA-stacking. In the AA- and the AB-stacked Si$_2$C$_{14}$, two main features in $\varepsilon_2$ are observed in the case of E$_{\rm in}$. First, double peaks appear for both the $\pi \rightarrow \pi^*$ and the $\sigma \rightarrow \sigma^*$ transitions. This is attributed to the increased energy spacing between the $\pi_{1,2}$, the $\pi^*_{1,2}$, the $\sigma_{1,2}$, and the $\sigma^*_{1,2}$ bands, as shown in \fig{fig02}(c and d). Second, a red shift of both peaks towards lower energy is seen. The red shift of the peaks is caused by the decreased energy spacing between the $\pi$ and the $\pi^*$, and the $\sigma$ and the $\sigma^*$ bands along the $\Gamma$-$M$ and the $M$-$K$ directions. It is interesting to note that the peak intensity at lower energy is enhanced. The peak intensity for the AA-stacking is higher than that of the AB-stacking. The peaks of Si$_2$C$_{14}$ in the case of the out-of-plane electric field are not red shifted, and the intensity of the peaks is almost unchanged for the AA-stacking, while it is slightly decreased for the AB-stacking. The properties of the imaginary dielectric function of BC$_{14}$N are very different for both the in- and the out-of-plane electric fields. The intensity of the peaks is decreased due to the overlapping of the valence and the conduction bands. In the in-plane electric field, double peaks are no longer seen for either the AA- or the AB-stacking because of the absence of the double states in the band structure. It also seems that the peak intensity is almost the same for the AA- and the AB-stackings.
In the case of the out-of-plane electric field, the right peak is red shifted for the AA- and the AB-stackings, while the left peak is diminished. In addition, a strong peak at very low energy is observed. These features are attributed to the strong decrease of the energy spacing between the $\pi$ and the $\pi^*$, and the $\sigma$ and the $\sigma^*$ bands along the $\Gamma$-$M$ and the $M$-$K$ directions (see \fig{fig02}(e and f)). \subsection{Seebeck coefficient} We investigated the thermal properties of our model at low temperatures, ranging from $20$ to $160$~K, where the phonons are not active \cite{PhysRevB.87.241411,ABDULLAH20181432}. The electrons therefore deliver the main contribution to the thermal behavior. It is known that a good thermoelectric material should have a high electrical conductivity, a high Seebeck coefficient, $S$, and a low thermal conductivity. The thermoelectric performance of monolayer and bilayer graphene is poor because of the vanishing bandgap, leading to a small Seebeck coefficient \cite{DENG2019622, Abdullah_2018}. This can be clearly seen in \fig{fig05}, where $S$ versus temperature is plotted for the AA- (a) and the AB-stacking (b), for pure BLG (green), Si$_2$C$_{14}$ (blue), and BC$_{14}$N (red). The Seebeck coefficient is very small for pure BLG. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{fig06.pdf} \caption{Seebeck coefficient for the AA- (a) and the AB-stacking (b), for pure BLG (green), Si$_2$C$_{14}$ (blue), and BC$_{14}$N (red).} \label{fig05} \end{figure} The key point for enhancing $S$ is the opening up of a bandgap. Since the bandgap in Si$_2$C$_{14}$ is much larger than that of BC$_{14}$N, a much higher $S$ of Si$_2$C$_{14}$ is observed for both the AA- and the AB-stackings. Therefore, one can expect a higher thermoelectric performance for the Si$_2$C$_{14}$ structure.
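The trend that a larger gap permits a larger $S$ can be hedged with the classic Goldsmid-Sharp rule of thumb, $E_g \approx 2e|S_{\max}|T_{\max}$. This is only an order-of-magnitude estimate and not the transport calculation behind Fig. 5, but it reproduces the ordering between Si$_2$C$_{14}$ and BC$_{14}$N:

```python
def goldsmid_sharp_smax(gap_ev, t_kelvin):
    """Rough upper estimate of |S| (in microvolt/K) from the
    Goldsmid-Sharp relation E_g = 2 e |S_max| T_max."""
    return gap_ev / (2.0 * t_kelvin) * 1e6  # eV/(e*K) -> microV/K

# Gaps from the band-structure section: Si2C14 (AA) 0.66 eV,
# BC14N (AA) 0.025 eV; temperature at the top of the studied range.
s_si = goldsmid_sharp_smax(0.66, 160.0)
s_bn = goldsmid_sharp_smax(0.025, 160.0)
assert s_si > s_bn  # larger gap -> larger attainable Seebeck coefficient
```

The absolute numbers from this rule should not be compared directly with Fig. 5; only the ordering is meaningful here.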
The electronic and thermal properties of doped BLG may be of fundamental interest and can play an important role in the performance of nanoscale devices.\\ \section{Conclusion}~\label{Sec:Conclusion} We have investigated the interaction energy, the electronic band structure, and the mechanical, optical and thermal characteristics of the AA- and AB-stacked BLG, Si$_2$C$_{14}$, and BC$_{14}$N structures. We show that interlayer interaction effects have a crucial influence on the properties of both the Si$_2$C$_{14}$ and the BC$_{14}$N structures. The Si atoms in Si$_2$C$_{14}$ and the B and N atoms in BC$_{14}$N induce repulsive and attractive interactions between the layers, respectively. We find that the influence of the repulsive interactions on the stress-strain curves in Si$_2$C$_{14}$ is larger than that of the attractive interactions in BC$_{14}$N. Therefore, the stress-strain curves are more suppressed for Si$_2$C$_{14}$ in both the zigzag and the armchair directions. The imaginary dielectric function of Si$_2$C$_{14}$ shows a red shift at high energy, while only a reduction of the imaginary dielectric function is found for BC$_{14}$N, without any significant shift of the peaks. Furthermore, we find that Si$_2$C$_{14}$ has a more promising thermal response than BC$_{14}$N with regard to possible applications in devices. \section{Acknowledgment} This work was financially supported by the University of Sulaimani and the Research center of Komar University of Science and Technology. The computations were performed on resources provided by the Division of Computational Nanoscience at the University of Sulaimani.
\section{Introduction}\label{sec:intro} \subsection{Background} We consider the classical compressed sensing problem of minimizing the cardinality ${\text{card}}(x)$ of an approximate solution to an underdetermined equation system $Ax=b$, i.e. \begin{equation}\label{l0prob}\argmin_{x:~\|Ax-b\|_2<\varepsilon}{\text{card}}(x), \end{equation} where $\varepsilon>0$ is some allowed tolerance of the error and $x$ lies in ${\mathbb R}^n$ or ${\mathbb C}^n$. Problem \eqref{l0prob} is NP-hard \cite{natarajan1995sparse} and a popular approach, commonly referred to as ``compressed sensing'', is to replace ${\text{card}}(x)$ with the convex function $\|x\|_1$, i.e. \begin{equation}\label{l1prob}\argmin_{x:~\|Ax-b\|_2<\varepsilon}\|x\|_1.\end{equation} This method goes back (at least) to the 70's (see the introduction of \cite{candes2008enhancing} for a nice historical overview) but received increasing attention in the late 90's due to the work by Chen, Donoho and Saunders \cite{chen2001atomic} on what they called basis pursuit, which amounts to solving \begin{equation}\label{l1probdual}\argmin\{\lambda\|x\|_1+\frac{1}{2}\|Ax-b\|_2^2\}\end{equation} for a suitable choice of parameter $\lambda$ (playing the role of $\varepsilon$ in the $\ell^1$-version of \eqref{l0prob}). In fact, \eqref{l1probdual} is the dual problem of \eqref{l1prob} in the sense that for each $\varepsilon$ there is a $\lambda$ such that the solutions of \eqref{l1prob} and \eqref{l1probdual} coincide. The method received massive attention after the works of Cand\`{e}s and coworkers in the early 2000s, when the term compressed sensing was coined.
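Problem \eqref{l1probdual} is routinely solved by proximal gradient descent (ISTA), since the proximal operator of $\lambda\|\cdot\|_1$ is soft thresholding. The sketch below is a generic illustration of this, not the solver used in the cited works; the matrix size, sparsity level and $\lambda$ are arbitrary choices.

```python
import numpy as np

def ista(A, b, lam, step=None, iters=500):
    """Minimize lam*||x||_1 + 0.5*||Ax-b||^2 by proximal gradient
    descent; the prox of lam*|.|_1 is soft thresholding."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - b)        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)                  # normalized columns
x0 = np.zeros(100); x0[[3, 17, 42]] = [2.0, -1.5, 1.0]
x = ista(A, A @ x0, lam=0.05)                   # noiseless data b = A x0
```

Note that the recovered non-zero entries are slightly shrunk towards 0, illustrating the bias of the $\ell^1$ penalty discussed below.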
In \cite{candes2006stable}, Cand\`{e}s, Romberg and Tao proved the surprising result that, given a sparse vector $x_0$ and a measurement \begin{equation} \label{nise}b=Ax_0+{{ \epsilon}}, \end{equation} where ${{ \epsilon}}$ is Gaussian noise, solving \eqref{l1prob} yields (for a suitable choice of $\varepsilon$) a vector $\hat x$ that satisfies \begin{equation}\label{crt} \|\hat x-x_0\|_2<C_K\|{{ \epsilon}}\|_2, \end{equation} where $C_K$ is a constant. Arguing that it is impossible to beat a linear \begin{wrapfigure}{r}{0.4\textwidth} \begin{center} \includegraphics[width=.4\textwidth]{penalties.pdf}\\ \end{center} \caption{Illustration of penalties.} \label{S22card} \end{wrapfigure} dependence on the noise (even knowing the true support of $x_0$ a priori), the estimate \eqref{crt} led the authors to conclude that ``no other method can significantly outperform this''. The result holds given certain assumptions on the matrix $A$, related to the restricted isometry properties of $A$, which in a separate publication (Theorem 1.6, \cite{candes2005decoding}) were shown to hold with ``overwhelming probability''. The result is indeed surprising; in Figure \ref{S22card} we display the one-dimensional counterparts of ${\text{card}} (x)$ and $\|x\|_1$, demonstrating that ${\text{card}}(x)$ and $\|x\|_1$ are indeed quite different functionals. However, ``overwhelming probability'' is asymptotic in nature, and therefore it is not clear whether \eqref{crt} is valid in a moderately sized application (it often is not, see Section \ref{sec:size}). This continues to be the state of the art, see e.g.~\cite{adcock2016generalized,adcock2017breaking,candes-etal-acm-2011} which provide \textit{asymptotic} theorems about when compressed sensing works in concrete setups. Moreover, whereas very strong recovery results were reported e.g.~in \cite{candes2006robust,chartrand2007exact,donoho2006most} for the case of \textit{exact data} $b=Ax_0$, in the presence of noise (cf.
\eqref{nise}) the method exhibits a well-known bias (see e.g. \cite{fan2001variable,mazumder2011sparsenet}). The $\ell^1$ term not only has the (desired) effect of forcing many entries in $x$ to 0, but also the (undesired) effect of diminishing the size of the non-zero entries. This is clearly visible even in the one-dimensional situation; the function ${\mathbb R}\ni x\mapsto \lambda|x|+\frac{1}{2}|x-x_0|^2$ has its minimum shifted towards 0 from the sought point $x_0$. This has led to a large number of non-convex suggestions to replace the $\ell^1$-penalty, see e.g. \cite{bredies2015minimization,blumensath2009iterative,blumensath2008iterative,attouch2013convergence,chartrand2007exact,pan2015relaxed,zou2008one,fan2004nonconcave,wang2014optimal,selesnick2017sparse,loh2013regularized,fan2014strong,zhang2012general,zhang2010nearly,loh2017support,candes2008enhancing,breheny2011coordinate,fan2001variable,mazumder2011sparsenet}. These methods are tailor-made for the sparsity problem, and upon changing the non-convex penalty ${\text{card}}(x)$ it is not clear how to proceed. We consider now the general problem of minimizing \begin{equation}\label{genprob} f(x)+\|Ax-b\|_2^2\end{equation} where $f$ is some non-convex penalty and $x$ is a vector in some linear space, not necessarily ${\mathbb R}^n$. For example, if the desired cardinality $K$ is known a priori, we can take $f$ to be the indicator function $\iota_{P_K}$ of the set $P_K=\{x:{\text{card}} (x)\leq K\}$ in which case \eqref{genprob} reduces to \begin{equation}\label{agt1intro} \argmin_{{\text{card}} (x)\leq K}\|Ax-b\|. \end{equation} \begin{wrapfigure}{r}{0.4\textwidth} \includegraphics[width=0.4\textwidth]{quadraticenvelope.pdf} \caption{Illustration of a non-convex function $f$ (red) and its quadratic envelope $Q_2(f)$ (black).
The black graph lies slightly below for illustration only.}\label{f456} \end{wrapfigure} In \cite{carlsson2018convex} the ``quadratic envelope'' ${\mathcal Q}_2(f)$ was introduced, where ${\mathcal Q}_2$ is the ``quadratic biconjugate'' (apart from the name, this transform was introduced already in \cite{carlsson2016convexification}; see Figure \ref{f456} for an illustration). It has the property that ${\mathcal Q}_2 (f) (x)+\|x\|^2$ is the lower semi-continuous convex envelope of $f(x)+\|x\|^2$, and the relationship between \begin{equation}\label{genprobS} {\mathcal Q}_2(f)(x)+\|Ax-b\|_2^2\end{equation} and the original functional in \eqref{genprob} was investigated. The main result is that, given $\|A\|<1$, the set of local minimizers of \eqref{genprobS} is a subset of the local minimizers of \eqref{genprob}, and most importantly that the global minimizers coincide. In the particular case of $f(x)={\text{card}}(x)$ the functional \eqref{genprobS} has previously been introduced by Zhang \cite{zhang2010nearly} under the name MCP (Minimax Concave Penalty) and independently by Aubert, Blanc-Feraud and Soubies \cite{soubies-etal-siims-2015} under the name $CE\ell0$. It also shows up in earlier publications, for example (2.4) in \cite{fan2001variable}, but it seems that \cite{zhang2010nearly} is the first comprehensive performance study and \cite{soubies-etal-siims-2015} the first publication where the connection with convex envelopes appears. For this choice of $f$, the contributions of the present paper are mainly theoretical and go well beyond what was previously known. In particular we show that the global minimizer with the MCP-penalty (i.e.~${\mathcal Q}_2({\text{card}})$) is the oracle solution (for a certain choice of parameters). On the other hand, ${\mathcal Q}_2(\iota_{P_K})$ is a new object that has previously appeared only in earlier publications by the authors of the present article.
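In the scalar case the quadratic envelope of $\mu\,{\text{card}}$ has the closed form ${\mathcal Q}_2(\mu\,{\text{card}})(x)=\mu-\max(\sqrt{\mu}-|x|,0)^2$, applied entrywise for vectors; this follows directly from the convex-envelope definition above and is the MCP/$CE\ell0$ penalty. A short sketch verifying the sandwich property $0\le{\mathcal Q}_2(\mu\,{\text{card}})\le\mu\,{\text{card}}$:

```python
import numpy as np

def q2_mu_card(x, mu):
    """Scalar quadratic envelope of mu*card (the MCP/CEl0 penalty):
    Q2(mu*card)(x) = mu - max(sqrt(mu) - |x|, 0)**2, entrywise."""
    return mu - np.maximum(np.sqrt(mu) - np.abs(x), 0.0) ** 2

mu = 1.0
xs = np.linspace(-3, 3, 601)
q = q2_mu_card(xs, mu)
card = mu * (xs != 0)
# sandwich: 0 <= Q2(mu*card) <= mu*card, with equality at 0
# and for |x| >= sqrt(mu)
assert np.all(q <= card + 1e-12) and np.all(q >= -1e-12)
assert q2_mu_card(0.0, mu) == 0.0 and q2_mu_card(2.0, mu) == mu
```

The penalty is flat (equal to $\mu$) outside $[-\sqrt{\mu},\sqrt{\mu}]$, which is what removes the shrinkage bias of $\|x\|_1$ on large entries.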
In this article we provide theoretical results of the type \eqref{crt} for the two concrete functionals ${\mathcal Q}_2({\text{card}})$ and ${\mathcal Q}_2 (\iota_{P_K})$. A more extensive discussion of previous results concerning MCP/$CE\ell0$ is found in Section \ref{review}, as well as other related results on non-convex optimization. \subsection{Contributions} A clear drawback of non-convex optimization schemes is that algorithms are bound to get stuck in local minima, and in concrete situations it is hard to determine whether this is the case or not. In the present article we give conditions on $A$ which imply that any local minimum of \eqref{genprobS} for $f(x)={\text{card}}(x)$ necessarily has a high cardinality unless it is the global minimum. Hence, if a sparse local minimum is found, one can be sure that it is the global minimum. In the case of $f=\iota_{P_K}$ we take this one step further and give conditions under which \eqref{genprobS} has a unique local minimizer, which hence must be the global minimizer as well as the solution to the original problem \eqref{agt1intro}. In the case when $b=Ax_0+{{ \epsilon}}$ and $x_0$ is a sparse vector, we significantly improve the state of the art in compressed sensing in a number of ways. Firstly, the conditions on $A$ hold in greater generality. Secondly, we obtain an estimate corresponding to \eqref{crt} where the involved constants are significantly smaller than $C_K$. Thirdly (and most importantly), the method seems to work better in practice, at least in the setting when $A$ has normalized Gaussian random columns. In particular, for reasonable values of noise (e.g.~$SNR\approx 4$ for the case of a $100\times 200$ matrix $A$, see Section \ref{sec:num}) we can find the oracle solution using the Forward Backward Splitting algorithm. In Section~\ref{sec:mainResults} we present highlights from the theory, show some numerical results and compare with the traditional $\ell^1$-method \eqref{l1probdual}.
In Section \ref{review} we give a brief review of the field. The remainder of the paper, Sections \ref{sec:unique}-\ref{seccardfix}, is devoted to developing the theory. \section{Summary of Main Results and Innovations}\label{sec:mainResults} \subsection{Sparse recovery via ${\mathcal Q}_2({\text{card}})$} We return to the first problem of minimizing \eqref{genprobS} for $f={\text{card}}(x)$, i.e. \begin{equation}\label{q1}{\mathcal K}(x)=\mu {\text{card}}{(x)}+\|Ax-b\|^2\end{equation} (where we introduce the parameter $\mu$ to control the tradeoff between sparsity and data-fit) which we regularize with \begin{equation}\label{q1reg}{\mathcal K}_{reg}(x)={\mathcal Q}_2(\mu{\text{card}}){(x)}+\|Ax-b\|^2.\end{equation} The graph of ${\mathcal Q}_2({\text{card}})$ is depicted in Figure \ref{S22card}. Is it true that unique global minimizers exist (recall that they are the same given $\|A\|<1$)? It is easy to see that this is not the case in general; just consider the case of a $2\times 4$ matrix $A$ such that every pair of columns is linearly independent, and let $\mu$ be such that the global minimum is attained when $\|Ax-b\|=0$. In this case we have $\binom{4}{2}=6$ choices that all give the global minimum. However, in the above example there exists no ``sparse'' solution, since 2 equals the row dimension of the matrix, and by sparse we mean a number much smaller than this. For an $m \times n$ matrix $A$, with columns sampled from the unit sphere of $\mathbb{R}^m$, this is formalized in Lemma 2.1 of \cite{donoho2006most}, where it is shown that \eqref{l0prob} with $\varepsilon=0$ has a unique sparse solution (with probability 1) if $b=Ax_0$ and ${\text{card}}(x_0)<m/2$, which is an upper bound on how much sparsity one needs in order to have a well-posed sparse problem.
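As a baseline for the fixed-cardinality problem \eqref{agt1intro}, iterative hard thresholding \cite{blumensath2008iterative} alternates a gradient step on $\|Ax-b\|^2$ with projection onto $P_K$, i.e.~keeping the $K$ largest entries. The sketch below is a generic illustration of that setting, not the method analyzed in this paper; the instance size and step rule are arbitrary choices.

```python
import numpy as np

def iht(A, b, K, step=None, iters=500):
    """Iterative hard thresholding for min ||Ax-b|| s.t. card(x) <= K."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * A.T @ (b - A @ x)        # gradient step
        keep = np.argsort(-np.abs(g))[:K]       # K largest magnitudes
        x = np.zeros_like(x)
        x[keep] = g[keep]                       # projection onto P_K
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120))
A /= np.linalg.norm(A, axis=0)                  # normalized columns
x0 = np.zeros(120); x0[[5, 30, 77]] = [1.5, -2.0, 1.0]
xhat = iht(A, A @ x0, K=3)                      # noiseless data
```

On noiseless data with a well-separated support such as this one, the iteration typically locks onto the true support and converges to $x_0$.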
In this paper we will study uniqueness of sparse minimizers of \eqref{q1reg}, in the sense that we give concrete conditions such that if there exists one local minimizer $x'$ of \eqref{q1reg} with the property that ${\text{card}}(x')\ll m$ (in a manner to be made precise), then \begin{itemize} \item $x'$ is automatically a global minimizer \item any other stationary point $x''$ of \eqref{q1reg} satisfies ${\text{card}}(x'')\gg{\text{card}}(x')$. \end{itemize} We remind the reader that $A$ satisfies a Restricted Isometry Property for an integer $k$ if any $k$ columns of $A$ behave approximately as an isometry, in the sense that the resulting matrix can be bounded below by $\sqrt{1-\delta_k}Id$ and above by $\sqrt{1+\delta_k}Id$, where $\delta_k$ is the restricted isometry constant for $k$. Classical results from the compressed sensing literature usually require that the numbers $\delta_k$ are small, something which we have found is hard to fulfill in practice. For example, the famous estimate \eqref{crt} holds under the assumption that \begin{equation}\label{ki}\delta_{3K}+3\delta_{4K}<2.\end{equation} Our numerical evaluation (see Section \ref{sec:size}) shows that if $K=5$ this condition is usually not satisfied for a Gaussian random matrix $A$ (with normalized columns) of size $m\times n$, unless $m$ is (at least) around 500. The statement that RIP holds with overwhelming probability \cite{candes2005decoding} is therefore somewhat misleading, since it is based on an asymptotic estimate. For a small-size matrix of the type discussed above it typically does not apply (more on this in Section \ref{sec:size}). That the RIP-conditions are hard to satisfy in practice is well known in the community, and this has led to interesting new contributions on the efficiency of $\ell^1$ without RIP, given that the problem is sampled in a certain way, see e.g.~\cite{adcock2016generalized,adcock2017breaking}.
However, these results are asymptotic in nature and do not apply in as general a situation as the one we present here. We base the theory of this paper on the ``Restricted Linear Independence Property'' (RLIP), basically constituting the lower estimate of the RIP. More precisely, we define \begin{equation}\label{beta}\beta_k=\inf\{\frac{\|Ax\|}{\|x\|}:~ x\neq 0,~{\text{card}}(x)\leq k\}\end{equation} for $k=1,\ldots,n$. We say that $A$ satisfies RLIP with respect to the property $P_K=\{x:{\text{card}}(x)\leq K\}$ if $\beta_K\neq 0$. In other words, $A$ is RLIP with respect to this property if and only if any $K$ chosen columns of $A$ are linearly independent. The relationship with RIP is as follows: if $A$ satisfies RIP with constant $\delta_k$ then it satisfies RLIP with $\beta_k\geq \sqrt{1-\delta_k}$, whereas the converse often does not hold. To give an idea of the type of results proven in this paper, we first present two corollaries of theorems in Section \ref{seccard}. \begin{corollary}\label{cor:globalpoint2} Let $A$ have normalized columns and let $x'$ be a stationary point of ${\mathcal K}_{reg}$ with ${\text{card}}(x')=K$ and set $z'=(I-A^* A)x' + A^* b$. Assume that \begin{equation}\label{cond3c}|z_i'|\not\in\left[{\beta_{2K}^2}{\sqrt{\mu}},\frac{1}{\beta_{2K}^2}{\sqrt{\mu}}\right],\quad 1\leq i\leq n.\end{equation} If \begin{equation}\label{cond2c}\fro{Ax'-b}< \mu ,\end{equation} then $x'$ is the unique global minimum of both ${\mathcal K}$ and ${\mathcal K}_{reg}$. Moreover, any other stationary point $x''$ has a larger support. \end{corollary} The statement is a combination of Theorems \ref{thm:statpoint:vec} and \ref{thm:globalpoint2} (for the choice $N=2K$). Upon assuming a bit more in \eqref{cond3c} and \eqref{cond2c}, we may also conclude that $x''$ has a substantially larger support. As a curious remark, note that $\beta_{2K}>0$ forces $m\geq 2K$, which is precisely the upper bound given by Lemma 2.1 in \cite{donoho2006most} mentioned above.
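For small instances the constant \eqref{beta} can be computed by brute force: since adding a column can only decrease the smallest singular value, $\beta_k$ equals the minimum over all column subsets of size exactly $k$ of the smallest singular value of the corresponding submatrix. A sketch (exponential in $k$, for illustration only):

```python
import numpy as np
from itertools import combinations

def beta_k(A, k):
    """RLIP constant beta_k: minimum over supports |S| = k of the
    smallest singular value of the submatrix A[:, S] (brute force)."""
    n = A.shape[1]
    return min(np.linalg.svd(A[:, list(S)], compute_uv=False)[-1]
               for S in combinations(range(n), k))

# For a matrix with orthonormal columns every submatrix is an isometry,
# so beta_k = 1 for all k.
Q = np.linalg.qr(np.random.default_rng(2).standard_normal((6, 4)))[0]
assert abs(beta_k(Q, 2) - 1.0) < 1e-10
```

For Gaussian matrices with normalized columns and $2K\ll m$ one finds, consistent with the discussion below, values of $\beta_{2K}$ close to 1.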
In the case when $b=Ax_0+{{ \epsilon}}$, as discussed earlier, we can say more. Below we state Theorem \ref{thm:doctorgadget} for the particular case $N=2K$. \begin{corollary}\label{cor:doctorgadget} Suppose that $b=Ax_0+{{ \epsilon}}$ and set ${\text{card}} (x_0)=K.$ Assume that $\|{{ \epsilon}}\|< {\beta_{2K}^2\sqrt{\mu}}$ and $$|x_{0,j}|>\para{\frac{1}{\beta_{2K}^2}+1}\sqrt{\mu},\quad j\in {\text{supp }} x_0.$$ Then there exists a unique global minimum $x'$ to ${\mathcal K}_{reg}$ as well as ${\mathcal K}$, with the property that ${\text{supp }} x'={\text{supp }} x_0$, that $$\|x'-x_0\|\leq \frac{\|{{ \epsilon}}\|}{\beta_K},$$ and that ${\text{card}}(x'')> K$ for any other stationary point $x''$ of ${\mathcal K}_{reg}$. \end{corollary} Note that the conditions on ${{ \epsilon}}$ and $x_0$ are very natural; if the noise is too large or if the non-zero entries of $x_0$ are too small, there is no hope of correctly retrieving the support. \subsection{Known ``model order''.} We now discuss the situation when the model order, i.e. the amount $K$ of non-zero entries, is known. This problem is also known as the $K$-sparse problem and is studied e.g.~in \cite{blumensath2008iterative}. For simplicity we restrict attention to ${\mathbb R}^n$; corresponding results for ${\mathbb C}^n$ are similar but the assumptions on $A$ are slightly more technical. In this case we set \begin{equation}\label{q2}{\mathcal K}_K(x)=\iota_{P_K}(x)+\|Ax-b\|^2\end{equation} (where the subindex $K$ separates the notation from \eqref{q1}) which we regularize with \begin{equation}\label{q2reg}{\mathcal K}_{K,reg}(x)={\mathcal Q}_2(\iota_{P_K})(x)+\|Ax-b\|^2.\end{equation} The result corresponding to Corollary \ref{cor:globalpoint2} reads as follows. \begin{corollary}\label{cytwsgdf} Let $A$ have normalized columns such that no pair is orthogonal, and assume that $n\geq m+K+2$. Any local minimizer $x'$ of ${\mathcal{K}_{K,reg}}$ then lies in $P_K$.
Moreover, set $z'=(I-A^* A)x' + A^* b$, let $\tilde z'$ contain the elements of $z'$ sorted after decreasing magnitude, and assume that \begin{equation}\label{n}|\tilde z'_{K+1}|<(2\beta_{2K}^2-1)|\tilde z'_{K}|.\end{equation} Then $x'$ is the unique global minimum of ${\mathcal{K}_K}$ and ${\mathcal{K}_{K,reg}}$. \end{corollary} The result is a combination of Proposition \ref{mfisR}, Theorem \ref{celok6} and \ref{thm:globalpoint2f}. We remark that in the typical compressed sensing application, $A$ is a matrix with $m<<n$ and $K<<m$. If the columns of $A$ are normalized random, then the conditions on $A$ are satisfied with probability 1. Moreover, any subset of $2K$ columns will be close to an isometry as long as $2K<<m$, so it is not unreasonable to expect that $\beta_{2K}\approx 1$. In this case the assumption \eqref{n} is quite reasonable since $|\tilde z'_{K+1}|\leq |\tilde z'_{K}|$ by construction and $2\beta_{2K}^2-1\approx 1.$ The size of $\beta_{2K}$ in the above scenario is further discussed in subsection \ref{sec:size}. We now consider the case when $b=Ax_0+{{ \epsilon}}$ and we wish to retrieve $x_0$, where ${\text{card}}(x_0)=K$. By combining Proposition \ref{p1f} and Theorem \ref{cor:dogadgetf}, we have (for $A$ as in the previous corollary); \begin{corollary}\label{cor:doctorgadget2} If $\beta_{2K}>\frac{1}{\sqrt{2}}$ and $|x_{0,j}|>\para{\frac{1}{2\beta_{2K}^2-1}+\frac{1}{\beta_K}}\|{{ \epsilon}}\|$ for all $j\in S$ then there exists a unique local minimizer $x'$ to ${\mathcal{K}_{K,reg}}$ with ${\text{supp }} (x')={\text{supp }} (x_0)$. 
This is the global minimum of both ${\mathcal K}_K$ and ${\mathcal{K}_{K,reg}}$ and moreover it satisfies $\|Ax'-b\|\leq \|\epsilon\|$, ${\text{supp }} (x')={\text{supp }}(x_0)$ and $$\|x'-x_0\|\leq \frac{\|{{ \epsilon}}\|}{\beta_K}.$$ \end{corollary} If $\beta_{2K}\approx 1$ the condition is $|x_{0,j}|\gtrsim 2\|{{ \epsilon}}\|$ and the conclusion $\|x'-x_0\|\lesssim{\|{{ \epsilon}}\|}.$ We further remark that $x'$ in Corollaries \ref{cor:doctorgadget} and \ref{cor:doctorgadget2} is the so-called ``oracle solution'', i.e. the one you would get if an oracle told you the true support $S$ \begin{wrapfigure}{r}{0.4\textwidth} \begin{center} \includegraphics[width=.4\textwidth]{17x25.pdf}\\ \end{center} \caption{Plot of $\frac{1}{\beta_K}$ for a $17\times 25$ matrix $A$ with normalized random columns.}\label{f2} \end{wrapfigure} of $x_0$ and you were to solve the (overdetermined) system of equations $A_Sx=b$ where $A_S$ denotes the $m\times K$ matrix whose columns are those with indices in $S$ (and then expand $x$ to ${\mathbb R}^n$ by inserting zeroes off $S$). This is clearly the best possible solution one could hope for (as argued also in \cite{candes2006stable}). If we have a method that would find a vector $x'$ with the correct support $S$ (with a bias or not), we can always get this unbiased solution by simply discarding $x'$ and following the above procedure to get the oracle solution. Therefore the issue of finding the support is perhaps more central than having a good estimate of $\|x'-x_0\|$. Indeed, finding the correct support is often used as a measurement of success in numerical sections on the topic \cite{blumensath2009iterative,candes2008enhancing,loh2017support}. Apart from \cite{blumensath2009iterative}, which studies the minimization of \eqref{q1} itself (and performs poorly in practice, see Figure \ref{f4}), we have not been able to locate any results in the literature which claim to find $S$ in the way Corollaries \ref{cor:doctorgadget} and \ref{cor:doctorgadget2} do.
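The oracle solution described above is straightforward to compute once the support is known. The following Python sketch (our own illustration; `oracle_solution` is a hypothetical helper, and the dimensions, support and noise level are chosen only for the example) solves the overdetermined system $A_Sx=b$ by least squares and pads with zeros off $S$.

```python
import numpy as np

def oracle_solution(A, b, S):
    # Least-squares solution of A_S x = b on the known support S,
    # padded with zeros off S.
    xS, *_ = np.linalg.lstsq(A[:, S], b, rcond=None)
    x = np.zeros(A.shape[1])
    x[S] = xS
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 200))
A /= np.linalg.norm(A, axis=0)
S = [3, 17, 42]
x0 = np.zeros(200)
x0[S] = [3.0, -2.5, 4.0]
b = A @ x0 + 0.01 * rng.standard_normal(100)   # small noise epsilon
x = oracle_solution(A, b, S)
```

Since the $K$ selected columns are in general position, the error $\|x'-x_0\|$ is of the order $\|\epsilon\|/\beta_K$, in line with the corollaries above.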
In \cite{nikolova2013description} and \cite{nikolova2016relationship}, the minimizers of the un-regularized functionals ${\mathcal K}$ and ${\mathcal K}_K$ are studied. Corollaries \ref{cor:globalpoint2}-\ref{cor:doctorgadget2} provide new results and extensions of this line of research. \subsection{On the size of RIP/RLIP-constants}\label{sec:size} We are interested in estimating the size of the constants $\beta_K$, $\delta_K$ as well as compare the upper bound $C_K$ in \eqref{crt} with $1/\beta_{K}$ in the corresponding estimates of Corollaries \ref{cor:doctorgadget} and \ref{cor:doctorgadget2}. We focus on matrices $A$ of size $m\times n$ where $n>m$ and the columns are generated from a random Gaussian distribution and then normalized so that $\|A\|_{\infty,col}=1$, in accordance with \cite{candes2006stable,donoho2006most} as well as the assumptions in this paper. \begin{figure} \includegraphics[width=.45\textwidth]{512x25.pdf} \includegraphics[width=.45\textwidth]{512x25zoom.pdf}\\ \caption{Left: $C_K$ (red, $K=1\ldots 5$) versus $\frac{1}{\beta_K}$ (blue, $K=1\ldots 19$) for a $512\times 25$ matrix $A$ with normalized random columns. Right: zoom on $\frac{1}{\beta_K}$.}\label{f3} \end{figure} \iffalse We remind the reader that a ``good'' RIP constant $\delta_j$ is $\approx 0$ whereas a good RLIP-constant $\beta_j$ is $\approx 1$. As we will show below, the $\beta_j$'s seem more keen on being ``good''. One explanation for this is clearly that the $\delta_j$'s are designed to satisfy two inequalities as opposed to one, but moreover, in case the lower inequality defines $\delta_j$, we have $$\delta_j=\sqrt{1-\beta_j^2},$$ and both the square and the square root work against ``good'' values for $\delta_j$. To illustrate, if $\beta_K=0.9$ (which is a decent value) we see that $1-\beta_K^2\approx 0.2$ so $\delta_K\approx 0.45$, which is a bad value of $\delta_K$.
Worse yet, in order for \eqref{crt} to be satisfied, we need \eqref{ki} which requires $\delta_{4K}$ ($\delta_{3K}$ is enough for the results in \cite{blumensath2009iterative}) to be small, which is even harder. In contrast, the results of this paper apply whenever $\beta_{2K}$ is decent. For these reasons, it turns out that \eqref{crt} is almost never satisfied for a $17\times 25$-matrix $A$ and $K=2$(!!), which is why these are omitted from Figure \ref{f2} where we plot values of $1/\beta_K$ for $A$ of this size. Although this may seem too small for a realistic application, we remark that matrices of this size do appear e.g.~ in GPS-positioning \cite{lesouple}. Interestingly, the values of $1/\beta_{K}$ are decent until around $K=7$, which roughly coincides with the upper theoretical bound $m/2=8.5$ for when it is reasonable to expect that any method would work \cite{donoho2006most}. \fi Figure~\ref{f2} shows numerical computations of $1/\beta_K$ for random matrices of size $17 \times 25$. Here we do not plot $C_K$ since the requirement $\delta_{3K}+3\delta_{4K} < 2$ turned out to almost never be fulfilled when $K > 1$. In contrast, Corollary~\ref{cor:doctorgadget} applies whenever $\beta_{2K}>0$ and Corollary~\ref{cor:doctorgadget2} whenever $\beta_{2K}>\frac{1}{\sqrt{2}}$. In order to compare $1/\beta_K$ with $C_K$ for a moderately sized application, we would like to compute these for a $256\times 512$-matrix, say. However, due to the combinatorial nature of the constants $\delta_j$ and $\beta_k$, it is not possible to compute them for matrices with more columns than $\approx 30$ (on a standard laptop at least). Nevertheless a $256\times 25$-matrix can be seen as the first portion of a $256\times 512$-matrix, and from this perspective the values obtained in the $256\times 25$-case serve as lower bounds of the true values. It turns out that \eqref{ki} typically does not hold for $K > 2$, while $\beta_{2K} > 1/\sqrt{2}$ holds for all $K$ up to 25.
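The RIP constants can be evaluated by the same kind of exhaustive search over column subsets, which also makes the combinatorial obstruction mentioned above concrete. The following Python sketch (our own illustration; `rip_delta` is a hypothetical helper, and we use a smaller $12\times 16$ matrix so that the search finishes quickly) computes $\delta_k$ and checks condition \eqref{ki} for $K=2$.

```python
import itertools
import numpy as np

def rip_delta(A, k):
    # Smallest delta such that every m x k column submatrix B of A
    # satisfies (1-delta)||x||^2 <= ||Bx||^2 <= (1+delta)||x||^2.
    n = A.shape[1]
    delta = 0.0
    for S in itertools.combinations(range(n), k):
        s = np.linalg.svd(A[:, list(S)], compute_uv=False)
        delta = max(delta, s[0] ** 2 - 1.0, 1.0 - s[-1] ** 2)
    return delta

rng = np.random.default_rng(2)
A = rng.standard_normal((12, 16))
A /= np.linalg.norm(A, axis=0)

K = 2
d3, d4 = rip_delta(A, 3 * K), rip_delta(A, 4 * K)
print(d3 + 3 * d4, d3 + 3 * d4 < 2)   # condition (ki)
```

Note that $\delta_1=0$ for normalized columns and that $\delta_k$ is non-decreasing in $k$, since the subsets are nested.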
Finally, considering $512\times 25$-matrices, we do have that \eqref{ki} is satisfied in general, and Figure \ref{f3} plots a graph of $1/\beta_K$ versus $C_K$. From the right graph we also see that values of $\beta_K$ are very decent, around 0.8, for $K$ near 20. \subsection{Numerical Recovery Results}\label{sec:num} In \cite{candes2008enhancing} astonishing results are shown in the noise free case, for example in Figure 2 (of that paper) we see how $K=130$ non-zero entries are recovered using a matrix $A$ of size $m\times n= 256\times 512$ (which incidentally is close to the theoretical bound $2K<m$ in the present paper). However, in the presence of noise, performance seems to drop drastically. In Figure 7 (of the same paper) we see an example where $K=8$, $m=72$ and $n=256$. This is in line with the predictions of \cite{loh2017support}, which use $K=\sqrt{m}$ in their numerical section 4.3. \begin{figure}[htb] \includegraphics[width=.5\textwidth]{x0mu=1_StepLength09.pdf} \includegraphics[width=.5\textwidth]{xSmu=1_StepLength09.pdf}\\ \caption{$\|x'-x_0\|$ (left) and $\|x'-x_S\|$ (right) versus $\|\epsilon\|$ for the 5 methods \eqref{l1probdual}, \eqref{q1}-\eqref{q1reg} and \eqref{q2}-\eqref{q2reg}. The methods based on ${\mathcal Q}_2({\text{card}})$ and ${\mathcal Q}_2(\iota_{P_K})$ work perfectly down to $SNR\approx 4$.}\label{f4} \end{figure} Here we will present numerical results for the case of $K=10$, $m=100$ and $n=200$. We use a matrix $A$ with Gaussian randomly generated columns, which are subsequently normalized, and solve problems \eqref{l1probdual}, \eqref{q1reg} and \eqref{q2reg} for $b=Ax_0+\epsilon$ for different levels of noise $\|\epsilon\|$ between 0 and 5. The vector $x_0$ has random entries between 2 and 4 in magnitude, and a total magnitude $\|x_0\|=11$. 
To solve the optimization problems we use FBS which is known to converge to a stationary point (by \cite{attouch2013convergence} in combination with Section 2.4 of \cite{carlsson2016convexification} or Section 6 of \cite{carlsson2018convex}). In Section 5 of \cite{attouch2013convergence} the convergence of FBS for the unregularized problems \eqref{q1} and \eqref{q2} is considered, but with no analysis of performance. This has also been proposed earlier in \cite{blumensath2008iterative} where it is compared against matching pursuit. For this reason, we also included graphs for the result of minimizing \eqref{q1} and \eqref{q2}. Each point on the respective curves is an average over 50 trials, where we have used 1000 iterations and with a step-size parameter of \(0.9/\|A\|^2\), which is close to the upper theoretical bound given in \cite{attouch2013convergence} (which coincides with the bound for the convex case, see e.g. \cite{combettes2005signal}). For the $\ell^1$-problem \eqref{l1probdual} we used the formula $$\lambda= \frac{\|\epsilon\| }{\sqrt{n}}\sqrt{2 \log (n)} $$ corresponding to the recommendations in Section 5.2 of \cite{chen2001atomic}. For \eqref{q1}-\eqref{q1reg} we used $\mu=1$ and $K$ was set to 10 for \eqref{q2}-\eqref{q2reg}. If the values of $\beta_{2K}$ and $\beta_K$ are $\approx 1$, then the conditions in Corollary \ref{cor:doctorgadget} hold given that $2\sqrt{\mu}<\min \{|x_{0,j}|:|x_{0,j}|\neq 0\}$ which in our case is $2.05$ and $\|\epsilon\|\leq \sqrt{\mu}$, whereas the conditions in Corollary \ref{cor:doctorgadget2} hold as long as $2\|\epsilon\|<2.05$. In both cases, the estimate for $\|x'-x_0\|$ reads $\|x'-x_0\|\lesssim \|\epsilon\|$ which is supposed to hold for $\|\epsilon\|\lesssim 1$. 
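The FBS iteration for the unregularized $K$-sparse problem \eqref{q2} is particularly easy to state, since the proximal operator of $\iota_{P_K}$ simply keeps the $K$ entries largest in magnitude. The following Python sketch (our own illustration; the function name `k_sparse_fbs` and all dimensions are chosen for the example, and the step is taken below the theoretical bound $1/L$ where $L=2\|A\|^2$ is the Lipschitz constant of the gradient of $\|Ax-b\|^2$) shows one way to implement it; the regularized problems are solved analogously with the corresponding proximal operators.

```python
import numpy as np

def k_sparse_fbs(A, b, K, iters=500):
    # Forward-backward splitting for min ||Ax-b||^2 s.t. card(x) <= K.
    # The prox of iota_{P_K} keeps the K entries largest in magnitude.
    L = 2 * np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    t = 0.9 / L                            # step below the theoretical bound 1/L
    x = np.zeros(A.shape[1])               # starting point 0, as in the text
    for _ in range(iters):
        y = x - t * 2 * (A.T @ (A @ x - b))    # gradient step
        x = np.zeros_like(y)
        keep = np.argsort(np.abs(y))[-K:]
        x[keep] = y[keep]                       # hard thresholding onto P_K
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80))
A /= np.linalg.norm(A, axis=0)
x0 = np.zeros(80)
x0[[5, 20, 60]] = [3.0, -2.5, 4.0]
b = A @ x0                                  # noise-free example
x = k_sparse_fbs(A, b, 3)
```

With a step below $1/L$ the iteration is a descent method, so starting from the feasible point $0$ the residual never exceeds $\|b\|$.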
As can be seen from the graph in Figure \ref{f4} (left) the true bound (for this particular example) seems to be $\|x'-x_0\|\lesssim \frac{1}{3}\|\epsilon\|$ for both \eqref{q1reg} and \eqref{q2reg}, whereas the true constant for $\ell^1$ is around 1 (despite $C_{10}=\infty$ as argued earlier). The unregularized cardinality problem \eqref{q1}, a.k.a.~iterative hard thresholding, seems to perform poorly (with our parameters) whereas \eqref{q2} (a.k.a. the $K$-sparse algorithm \cite{blumensath2008iterative}) seems to do a decent job, similar to $\ell^1$ in performance for\begin{wrapfigure}{r}{0.5\textwidth} \begin{center} \includegraphics[width=.5\textwidth]{x0_50x200.pdf}\\ \end{center} \vspace{-0.5cm}\caption{Same as Figure \ref{f4} but with only 50 rows in $A$.}\label{f6}\vspace{-0.5cm} \end{wrapfigure} higher noise levels. In both cases the corresponding performance for the regularized versions is significantly better, indicating that the regularization by $S_2^2$ indeed has a crucial effect. Note that all three methods work for noise-levels much greater than stipulated by the theory. We also remark that, rather surprisingly, there is no major difference between \eqref{q1reg} and \eqref{q2reg} for moderate noise levels. However, both these methods are designed to find the oracle solution $x_S$, not $x_0$, so to evaluate this performance we include in Figure \ref{f4} (right) also the graph of $\|x'-x_S\|$ versus $\|\epsilon\|$. From this we deduce that both work perfectly until $\|\epsilon\|=3$, but that \eqref{q1reg} deteriorates substantially faster beyond this point. In other words, in this example both methods based on ${\mathcal Q}_2({\text{card}})$ and ${\mathcal Q}_2(\iota_{P_{10}})$ work as expected down to $SNR$ around 4.
Note that in the best-case scenario $\beta_K=\beta_{2K}=1$, a simple computation shows that Corollaries \ref{cor:doctorgadget} and \ref{cor:doctorgadget2} apply for $SNR$ down to $2\sqrt{10}\approx 6$, and hence there is almost perfect harmony between theory and numerical results. More precisely, we can allow 50\% more noise in practice than predicted by the theory. In Figure \ref{f6} we show the same graphs except that now $A$ has size $50\times 200$. Clearly this has a significant impact on performance. In particular, although \eqref{q1reg} and \eqref{q2reg} still do better than traditional $\ell^1$-minimization, there is no longer a significant difference. This could indicate that the convex $\ell^1$-method is more reliable in very difficult scenarios, as opposed to the non-convex methods suggested here, but this would have to be further investigated to be confirmed. A drawback of $\ell^1$-methods is that one often needs to find a suitable $\lambda$, which leads to slow evaluation in practice. \begin{wrapfigure}{r}{0.5\textwidth} \begin{center} \includegraphics[width=.5\textwidth]{reshapedhisto_100x200.pdf}\\ \end{center} \vspace{-0.5cm} \caption{Histogram of cardinality for 50 trials of \eqref{q1reg} with $\|\epsilon\|=2.5$.}\label{f5} \end{wrapfigure} Another issue that we have not discussed is the starting point. We have used $0$ for all examples above, and (a bit surprisingly) this seems to work better than using the least squares solution \(x_{LS}\) of $Ax=b$. In our final graph Figure \ref{f5} we plot a histogram of the cardinality of $x'$ over 50 trials with the noise level $\|\epsilon\|=2.5$, using ${\mathcal Q}_2({\text{card}})$ and \(x_{LS}\) as starting point. For this noise level and starting point, ${\mathcal Q}_2(\iota_{P_{10}})$ still works perfectly, which is why its performance is excluded; its histogram simply hits 50 at $K=10$.
It is interesting to note the following dichotomy: either the cardinality is around 10, or substantially larger, in harmony with the results presented in this paper (e.g.~Corollary \ref{cor:globalpoint2} and more generally Theorem \ref{thm:statpoint:vec}). \subsection{Brief review of related results}\label{review} As previously mentioned, ${\mathcal Q}_2({\text{card}})$ was introduced in \cite{zhang2010nearly} and \cite{soubies-etal-siims-2015}, and appeared earlier also e.g.~in \cite{fan2001variable}, although that paper moves on to introduce yet another penalty, called SCAD, which we discuss further below. Needless to say, we are not the first group to address the shortcomings of traditional $\ell^1$-minimization by use of non-convex penalties. In fact, even before the birth of compressed sensing, the shortcomings of $\ell^1$-techniques were debated and non-convex alternatives were suggested; we refer to \cite{fan2001variable} for an overview of early publications on this issue. Moreover, shortly after publishing the celebrated result \eqref{crt}, Cand\`{e}s, Wakin and Boyd suggested an improvement called ``Reweighted $\ell^1$-minimization'' \cite{candes2008enhancing} which also became a big success. They provide a theoretical understanding of this algorithm as minimizing the non-convex functional $$f(x)=\sum_j \log(\epsilon+|x_j|)$$ where $\epsilon$ is a parameter chosen by the user. Figure \ref{S22card} shows the functions ${\text{card}}(x),$ $|x|$ and $\log(0.1+|x|)-\log(0.1)$ as well as ${\mathcal Q}_2({\text{card}})$. As is plain to see, $\log(0.1+|x|)-\log(0.1)$ is closer to ${\text{card}}(x)$ than $|x|$, which may explain the better performance of reweighted $\ell^1$-minimization reported in \cite{candes2008enhancing}.
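For the scalar penalties just discussed it is easy to tabulate the values shown in Figure \ref{S22card}. The following Python sketch (our own illustration) evaluates $|x|$, $\log(0.1+|x|)-\log(0.1)$ and the coordinate-wise formula for ${\mathcal Q}_2({\text{card}})$ with $\mu=1$ on a grid.

```python
import numpy as np

xs = np.linspace(-2, 2, 401)
card = (xs != 0).astype(float)                 # card(x) for a scalar
l1 = np.abs(xs)                                # |x|
logpen = np.log(0.1 + np.abs(xs)) - np.log(0.1)
# Coordinate-wise Q_2(card) with mu = 1: 1 - max(1 - |x|, 0)^2.
q2 = 1.0 - np.maximum(1.0 - np.abs(xs), 0.0) ** 2
```

The envelope vanishes at the origin, lies below ${\text{card}}$ everywhere, and equals ${\text{card}}$ as soon as $|x|\geq\sqrt{\mu}=1$, which is the sense in which it is the closest of the three.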
The functional ${\mathcal Q}_2({\text{card}})$ is even closer to ${\text{card}}(x)$, and while this certainly is one reason behind the superior theoretical results reported in this paper, it is not clear that it is beneficial in practice since it may lead to an increased probability of getting stuck in local minima. Indeed, suppose one has a non-sparse solution to $Ax'=b$ where all non-zero elements of $x'$ are in the flat part of ${\mathcal Q}_2({\text{card}})$; then we clearly have an (undesired) local minimum of ${\mathcal{K}_{reg}}(x)={\mathcal Q}_2({\text{card}})(x)+\|Ax-b\|^2_2$, whose presence for high levels of noise is clearly visible in Figure \ref{f5}. This is further studied in \cite{soubies-etal-siims-2015} where it is shown that, for a particular choice of $7\times 15$-matrix $A$ and other parameters, the original functional \eqref{q1} has roughly 16000 (!) local minima, whereas around 5000 of these remain as stationary points for the regularization \eqref{q1reg}. Several of these stationary points turn out not to be local minima, and hence the authors provide a macro algorithm to avoid stationary points that are not local minima. In the same vein, Zhang \cite{zhang2010nearly} proposes to iteratively update relevant parameters to reach the desired global minimum with higher probability. While these results speak a bit in favor of using another method such as reweighted $\ell^1$ or ${\mathcal Q}_2(\iota_{P_K})$, which according to Corollary \ref{cor:doctorgadget2} does not suffer from the same drawbacks (under mild assumptions), favorable results for ${\mathcal Q}_2({\text{card}})$ were reported in \cite{loh2017support}, which compares the use of ${\mathcal Q}_2({\text{card}})$/MCP with $\ell^1$ and reweighted $\ell^1$ (called LSP in \cite{loh2017support}) as well as SCAD (which has similar performance as ${\mathcal Q}_2({\text{card}})$).
The numerical results in this paper seem also to confirm this, despite not employing any algorithm ensuring that we do not converge to an undesired stationary point. At first glance, this seems to contradict the findings of \cite{soubies-etal-siims-2015} reported above. However, in their experiments they do not use $b=Ax_0$ for some sparse $x_0$ and they also do not use a matrix $A$ with good RLIP-properties. The best theoretical justification to support the use of ${\mathcal Q}_2({\text{card}})$ seems to be Corollary 1 in \cite{loh2017support} which, under a number of assumptions, proves that ${\mathcal{K}_{reg}}$ does have a unique stationary point with high probability, and provides an estimate of the type \eqref{crt}. The setting of \cite{loh2017support} is rather different and we have not been able to verify reasonable values of the involved constants $c_l,c_u,c_\infty,R,\mu,\lambda, \eta, c_1,c_2,c_3$ and $\gamma$ in order to compare the strength of our Corollary \ref{cor:doctorgadget} with Corollary 1 in \cite{loh2017support}. We simply note that they point in the same direction and that Corollary \ref{cor:doctorgadget} holds under simpler conditions. The same remark goes for Theorem 3 in \cite{pan2015relaxed} and Theorem 1 in \cite{fan2001variable}, which provide conditions under which a class of non-convex optimization problems give the same estimate as the ``oracle estimator'', with high probability. The papers \cite{attouch2013convergence,blumensath2008iterative} consider \eqref{genprob} for the cases $f(x)={\text{card}}(x)$ as well as $f(x)=\iota_{P_K}(x)$, and \cite{attouch2013convergence} shows in particular that the FBS-algorithm applied to \eqref{genprob} converges to a stationary point, but a further analysis of this point is not present.
Incidentally, this article in combination with Section 2.4 of \cite{carlsson2016convexification} (Section 6 of \cite{carlsson2018convex}) shows that FBS also converges for \eqref{genprobS} under very soft assumptions. For the case $f(x)={\text{card}}(x)$, \cite{blumensath2009iterative} goes a bit further and actually provides an estimate of the type \eqref{crt} with the good value $C_K=5$ (independent of $K$), however under the assumption $\delta_{3K}<1/8$ which, in the light of Section \ref{sec:size}, is not easy to satisfy. Many other non-convex penalties have been proposed over the years \cite{selesnick2017sparse,bredies2015minimization,chartrand2007exact,pan2015relaxed,zou2008one,fan2004nonconcave,wang2014optimal,loh2013regularized,fan2014strong,zhang2012general,zhang2010nearly,loh2017support,candes2008enhancing,breheny2011coordinate,fan2001variable,mazumder2011sparsenet}, and we make no attempt to review them here. The introduction of \cite{loh2017support} contains a recent overview. A common denominator seems to be that the penalty function has the form $p(x)=\sum_j p_j(x_j)$ where $p_j$ are functions on ${\mathbb R}$ (except the recent contribution \cite{selesnick2017sparse}). In this sense, ${\mathcal Q}_2(\iota_{P_K})$ stands out as an interesting deviation. \iffalse In fact, ${\mathcal Q}_2({\text{card}})$ and ${\mathcal Q}_2(\iota_K)$ can be seen as extreme points of a more general framework proposed in \cite{larsson-olsson-ijcv-2016}. Although that paper is written in the setting of matrices and without using the ${\mathcal{S}}_2$-transform explicitly, it is shown that $S_2^2(f)$ is computable for any functional $f$ of the form $$x\mapsto g({\text{card}}(x))$$ where $g$ is a convex increasing function on $\mathbb{N}$. Thus ${\text{card}}(x)$ arise from the concrete choice $g(t)=t$ whereas $\iota_K(x)$ appears as $g(t)=\iota_{\{t:~t\geq K\}}$.
The former is reasonable to use when we have maximum uncertainty about the sought cardinality, the latter when we have no uncertainty. Clearly the more common situation is that of limited uncertainty of the sought support, and the framework clearly allow to tailor-make the function $g$ to fit prior knowledge. We will not pursue this generality in this article, but remark that related proximal operators for ${\mathcal Q}_2(f)$ are somewhat tricky to compute, and for this reason there is a free downloadable code available at .... \fi \iffalse \section{Relaxation and the ${\mathcal{S}}_\gamma$-transform}\label{regS} Let $f$ be any $[0,\infty]-$valued functional on ${\mathbb R}^n$, ${\mathbb C}^n$ or for that matter any separable Hilbert space ${\mathcal H}$. The ${\mathcal{S}}_{\gamma}$-transform, where $\gamma>0$ is a parameter, is designed such that ${\mathcal{S}}^2_{\gamma}(f)(x)+\frac{\gamma}{2}\|x\|^2$ is the l.s.c. convex envelope of $f(x)+\frac{\gamma}{2}\|x\|^2$. It can be shown that ${\mathcal{S}}^2_\gamma(f)$ is continuous wherever it is finite, and that $\gamma$ is the maximum negative curvature of ${\mathcal{S}}^2_\gamma(f)$. It also holds that ${\mathcal{S}}^2_\gamma(f)(x)$ increase with increasing $\gamma$, that $\lim_{\gamma\rightarrow\infty}{\mathcal{S}}^2_{\gamma}(f)=f$ and that $\lim_{\gamma\rightarrow 0^+}{\mathcal{S}}^2_{\gamma}(f)$ equals a convex minorant above the l.s.c. convex envelope of $f$, which in the cases considered here is identically 0. Its potential use as a regularizer for problems of the form \eqref{genprob}, i.e. replacing $f(x)+\frac{1}{2}\|A x-b\|^2$ with \begin{equation} {\mathcal{S}}^2_\gamma(f)(x)+\frac{1}{2}\|A x-b\|^2, \label{rep} \end{equation} was investigated in [...] After much discussion we decided to fix $\gamma=2$ and remove the traditional factor $\frac{1}{2}$ in this paper, since it substantially simplifies formulas. This is not a limitation since one can always obtain a such a problem by rescaling $f,~ A$ and $b$. 
To see that this can be done, note that a simple computation shows ${\mathcal{S}}_\gamma^2(f)=\frac{\gamma}{2}{\mathcal Q}_2(\frac{2}{\gamma}f)$ (see [..]) so that \eqref{rep} is equivalent to $${\mathcal Q}_2(\frac{2}{\gamma}f)(x)+\fro{\frac{A x}{\sqrt{\gamma}}-\frac{b}{\sqrt{\gamma}}}.$$ We henceforth assume that such a rescaling has been done so that we are interested in minimizing \begin{equation}\label{relax}{\mathcal K}_{reg}(x)={\mathcal Q}_2(f)(x)+\fro{{A x-b}}\end{equation} instead of ${\mathcal K}(x)=f(x)+\fro{A x-b}$. \fi \section{Uniqueness of minimizers and stationary points with the desired property}\label{sec:unique} We now turn to the heart of the matter, namely uniqueness of sparse minimizers of ${\mathcal K}_{reg}$, more precisely minimizers in a given $K$-sparse set $P_K$. As noted in the introduction, this is not possible without imposing additional conditions on $A$ as well as $b$. In this section we provide such a condition and in the coming ones we show what it entails in practice for the sparsity (and $K$-sparsity) problem. We first introduce the concept of a stationary point for a non-convex functional $g$; such points are in practice easier to find than local minimizers. We recall that the Fr\'{e}chet subdifferential $\hat \partial g(x)$ is the set of vectors $v$ with the property that $$\underset{y\neq x}{\liminf_{y\rightarrow x}}~ \frac{g(y)-g(x)-\scal{v,y-x}}{\|y-x\|}\geq 0.$$ We say that a point $x$ is a stationary point of $g$ if $0\in\hat \partial g(x)$. For the case when $g$ is a sum of a convex function $g_c$ and a differentiable function $g_d$, it is easy to see that $x$ is a stationary point if and only if $-\nabla g_d(x)\in \partial g_c(x)$ where $\partial g_c(x)$ denotes the usual subdifferential. Set \begin{equation}\label{gdef}{\mathcal G}(x)=\frac{1}{2}{\mathcal Q}_2(f)(x)+\frac{1}{2}\fro{x},\end{equation} i.e. $2{\mathcal G}$ is the l.s.c. convex envelope of $f(x)+\fro{x}$.
We have \begin{equation}\label{fr3}{\mathcal K}_{reg}(x)=2{\mathcal G}(x)-\fro{x}+\fro{Ax-b}\end{equation} which upon differentiation yields that $x'$ is a stationary point of ${\mathcal K}_{reg}$ if and only if \begin{equation}\label{sp}(I-A^* A)x' + A^* b\in\partial {\mathcal G}(x').\end{equation} Given any $x$, we therefore associate with it a new point $z$ via \begin{equation}\label{y} z=(I-A^* A)x + A^* b. \end{equation} The importance of $z$ is due to the following simple observation. \begin{proposition}\label{thm:statpoint} Let $x'$ and $x''$ be distinct stationary points of ${\mathcal K}_{reg}$ such that $x''-x'\in P_K$. Then \begin{equation}\label{ineq}{\mathsf {Re}}\scal{z''-z',x''-x'}\leq (1-\beta_K^2)\fro{x''-x'}.\end{equation} \end{proposition} The above proposition will mainly be used backwards, i.e.~we will show that \eqref{ineq} does not hold and thereby conclude that $x''-x'\not\in P_K$. \begin{proof} We have $$z''-z'=( I-A^* A)x''+A^*b-( I-A^* A)x'-A^*b=(I-A^* A)(x''-x'),$$ so taking the scalar product with $x''-x'$ gives $${\mathsf {Re}}\scal{z''-z',x''-x'}=\fro{x''-x'}-\fro{A(x''-x')}\leq (1-\beta_K^2)\fro{x''-x'},$$ as desired. Note that it is not necessary to take the real part, but we leave it since scalar products in general can be complex numbers. \end{proof} As we shall see, the point $z'$ has a decisive influence on the coming sections. To begin with, it has the following interesting property. \begin{proposition} A point $x'$ is a stationary point of ${\mathcal K}_{reg}$ if and only if it solves the convex problem $$x'\in\argmin_x {\mathcal Q}_2(f)(x)+\fro{x-z'}.$$ \end{proposition} Note the absence of $A$ in the above formula, which in particular implies that ${\mathcal Q}_2(f)(x)+\fro{x-z'}$ is the convex envelope of $f(x)+\fro{x-z'}$. \begin{proof} As noted in \eqref{sp}, $x'$ is a stationary point of ${\mathcal K}_{reg}$ if and only if $z'\in\partial{\mathcal G}(x')$. 
By the same token, $x'$ is a stationary point of $${\mathcal Q}_2(f)(x)+\fro{x-z'}=2{\mathcal G}(x)-2\scal{x,z'}+\fro{z'}$$ if and only if $z'\in\partial{\mathcal G}(x')$, and since the functional is convex (and clearly has a well defined minimum) the stationary points coincide with the set of minimizers. \end{proof} \section{The sparsity problem}\label{seccard} We return to the sparsity problem, and consider $f(x)=\mu{\text{card}}(x)$ where $\mu$ is a parameter and ${\text{card}} (x)$ is the number of non-zero entries in the vector $x$. In this case we have \begin{equation}\label{l0000}{\mathcal Q}_2(\mu{\text{card}})(x)=\sum_{j=1}^n \para{\mu-\para{\max\{\sqrt{\mu}-|x_j|,0\}}^2}, \quad x\in{\mathbb R}^n.\end{equation} To recapitulate, we want to minimize \eqref{q1}, i.e.~\begin{equation}\label{t57}{\mathcal{K}}(x)=\mu{\text{card}}(x)+\fro{Ax-b}\end{equation} which we replace by \eqref{q1reg}, i.e. \begin{equation}\label{fr2}{\mathcal{K}_{reg}}(x)={\mathcal Q}_2(\mu{\text{card}})(x)+\fro{Ax-b}.\end{equation} \subsection{Equality of minimizers for ${\mathcal{K}}$ and ${\mathcal{K}_{reg}}$} As noted by Aubert, Blanc-Feraud and Soubies (see Theorems 4.5 and 4.8 in \cite{soubies-etal-siims-2015}), ${\mathcal{K}_{reg}}$ has the same global minima and potentially fewer local minima than ${\mathcal{K}}$ if \begin{equation}\label{alpha}\|A\|_{\infty,col}=\sup_i \|a_i\|_2\leq 1,\end{equation} where $a_i$ denotes the columns of $A$. Below we (essentially) reproduce their statement in the terminology of this paper, and also include a proof for completeness. \begin{theorem}\label{celok5} If $\|A\|_{\infty,col}<1$, then any local minimizer of ${\mathcal{K}_{reg}}$ is a local minimizer of ${\mathcal{K}},$ and the (nonempty) sets of global minimizers coincide. If merely $\|A\|_{\infty,col}=1$, then any connected component of global minima of ${\mathcal{K}_{reg}}$ includes at least two global minima of ${\mathcal{K}}$.
\end{theorem} \begin{proof} We first establish that $\inf {\mathcal{K}}=\inf {\mathcal{K}_{reg}}$. Since ${\mathcal{K}}\geq {\mathcal{K}_{reg}}$ it suffices to show that $\inf {\mathcal{K}}\leq \inf {\mathcal{K}_{reg}}$. Suppose not and let $x_0$ be a point such that ${\mathcal{K}_{reg}}(x_0)<\inf {\mathcal{K}}$. Then there must be some index $j$ such that the corresponding term in ${\mathcal Q}_2(\mu{\text{card}})(x_0)$ is different from $\mu{\text{card}} (x_{0,j})$, which (see \eqref{l0000}) implies that \begin{equation}\label{py1}0<|x_{0,j}|<\sqrt\mu.\end{equation} But then \begin{equation}\label{py}\partial_{x_{0,j}}^2{\mathcal{K}_{reg}}(x_0)=-2+2\|a_j\|^2\leq 0.\end{equation} It follows that we can redefine $x_{0,j}$ to equal either $0$ or $\sqrt{\mu}$, so that the resulting point $x_1$ satisfies ${\mathcal{K}_{reg}}(x_1)\leq {\mathcal{K}_{reg}}(x_0)$. We can now continue like this for another index $j$ such that \eqref{py1} holds (if it exists), and this process must terminate after finitely many steps $N$. Denoting the resulting point by $x_N$, we see that it satisfies ${\mathcal{K}}(x_N)={\mathcal{K}_{reg}}(x_N)<\inf {\mathcal{K}}$, a contradiction. It is easy to see that ${\mathcal{K}}$ has global minimizers, and by the above argument these are also global minimizers for ${\mathcal{K}_{reg}}$. Now let $x_0$ be a global minimizer of ${\mathcal{K}_{reg}}$ but not for ${\mathcal{K}}$. As before we have that \eqref{py1} and \eqref{py} hold for some index $j$, and we clearly must have equality in the latter. If $\|A\|_{\infty,col}<1$ this is a contradiction, and hence we see that the global minimizers of ${\mathcal{K}}$ and ${\mathcal{K}_{reg}}$ coincide.
If merely $\|A\|_{\infty,col}=1$, then we can redefine $x_{0,j}$ to equal either $0$ or $\sqrt{\mu}$ without changing the value of ${\mathcal{K}_{reg}}$, and as before this process eventually leads to a point $x_N$ which is also a global minimizer for ${\mathcal{K}}.$ By the construction, there are at least two such points. It remains to prove the statement about local minimizers. If $x_0$ is a local minimizer of ${\mathcal{K}_{reg}}$ and $\|A\|_{\infty,col}<1$, we immediately get a contradiction from \eqref{py} unless ${\mathcal{K}}(x_0)={\mathcal{K}_{reg}}(x_0)$. In view of ${\mathcal{K}}\geq {\mathcal{K}_{reg}}$, this establishes the claim. \end{proof} \subsection{On the uniqueness of sparse stationary points}\label{erai} Next we take a closer look at the structure of the stationary points. Given $N$ such that $\beta_{N}> 0$, we will show that under certain assumptions the difference between two stationary points always has more than $N$ nonzero elements. Hence if we find a stationary point with fewer than $N/2$ nonzero elements, then we can be sure that this is the sparsest one, since any other stationary point must then have more than $N/2$ nonzero elements. The main theorem reads as follows: \begin{theorem}\label{thm:statpoint:vec} Let $x'$ be a stationary point of ${\mathcal{K}_{reg}}$, let $z'$ be given by \eqref{y}, and assume that \begin{equation}\label{e4}|z_i'|\not\in\left[{\beta_{N}^2}{\sqrt{\mu}},\frac{1}{\beta_{N}^2}{\sqrt{\mu}}\right]\end{equation} for all $i\in\{1,\ldots,n\}$ (the condition is automatically fulfilled if $\beta_N>1$). If $x''$ is another stationary point of ${\mathcal{K}_{reg}}$ then ${\text{card}}(x''-x') > N$. \end{theorem} The proof depends on a sequence of lemmas, and is given at the end of the section. Clearly, we will rely on Proposition \ref{thm:statpoint}, which requires an investigation of the functional ${\mathcal G}$ \eqref{gdef} and in particular its sub-differential.
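As an aside, the objects above are easy to experiment with numerically. The following Python sketch (illustrative only and not part of the formal development; it uses the convention $\fro{y}=\|y\|^2$, which is consistent with \eqref{py}, and ad hoc data) evaluates \eqref{l0000} together with the functionals \eqref{t57} and \eqref{fr2}:

```python
import numpy as np

def q2_card(x, mu):
    # Q2(mu*card)(x) = sum_j [ mu - max(sqrt(mu) - |x_j|, 0)^2 ], cf. (l0000)
    return np.sum(mu - np.maximum(np.sqrt(mu) - np.abs(x), 0.0) ** 2)

def K(x, A, b, mu):
    # K(x) = mu*card(x) + ||Ax - b||^2, cf. (t57)
    return mu * np.count_nonzero(x) + np.linalg.norm(A @ x - b) ** 2

def K_reg(x, A, b, mu):
    # K_reg(x) = Q2(mu*card)(x) + ||Ax - b||^2, cf. (fr2)
    return q2_card(x, mu) + np.linalg.norm(A @ x - b) ** 2
```

On points whose entries all have modulus outside $(0,\sqrt{\mu})$ the two functionals agree, which is the mechanism driving the proof of Theorem \ref{celok5}.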
Introducing the function $g$ as \begin{equation} g(x) = \begin{cases} \frac{\mu + {x^2}}{2} & |x| \geq {\sqrt{\mu}} \\ \sqrt{\mu}|x| & 0 \leq |x| \leq {\sqrt{\mu}} \end{cases}. \label{eq:gdef} \end{equation} we get \begin{equation}\label{Gg} {\mathcal G}(x)=\sum_{j=1}^n g(x_j).\end{equation} Its sub-differential is given by \begin{equation} \partial g(x) = \begin{cases} \{ x\} & |x| \geq {\sqrt{\mu}} \\ \{\sqrt{\mu}\frac{x}{|x|}\} & 0 < |x| \leq {\sqrt{\mu}} \\ \sqrt{\mu}~{\mathbb D} & x = 0 \end{cases} \label{eq:vectorsubgrad} \end{equation} where ${\mathbb D}$ is the closed unit disc in ${\mathbb C}$ or, if working over ${\mathbb R}$, ${\mathbb D}=[-1,1]$. In the remainder we suppose for concreteness that we work over ${\mathbb C}$ (but show the real case in pictures). Note that the sub-differential consists of a single point for each $x \neq 0$. Figure~\ref{fig:gfunk} illustrates $g$ and its sub-differential. \begin{figure}[htb] \begin{center} \includegraphics[width=40mm]{gfunk} \includegraphics[width=40mm]{gsubgrad} \end{center} \caption{The function $g(x)$ (left) and its sub-differential $\partial g(x)$ (right). Note that the sub-differential contains a unique element everywhere except at $x=0$.} \label{fig:gfunk} \end{figure} The following two results establish a bound on the sub-gradients of ${\mathcal G}$. We begin with some one-dimensional estimates of $g$. \begin{lemma}\label{lemma:bnd1} Assume that $z_0 \in \partial g(x_0)$ and $\beta_N<1$. If \begin{equation} \left| z_0 \right| > \frac{\sqrt{\mu}}{\beta_N^2} \label{eq:subgradbnd1} \end{equation} then for any $x_1,~z_1$ with $z_1\in \partial g(x_1)$ and $x_1\neq x_0$, we have \begin{equation}\label{fd} {\mathsf {Re}}(z_1 -z_0)\overline{(x_1-x_0)} > (1-\beta_N^2) |x_1-x_0|^2 . \end{equation} \end{lemma} \begin{proof} By rotational symmetry (i.e. $\partial g(e^{i\phi}x)= e^{i\phi} \partial g(x)$), it is no restriction to assume that $z_0>0$. 
By $\frac{1}{\beta_N^2}> 1$ and \eqref{eq:subgradbnd1}, we see that $z_0>\sqrt{\mu}$, and hence the identity $z_0\in\partial g(x_0)$ and \eqref{eq:vectorsubgrad} together imply that $z_0=x_0$ and in particular that $x_0\in{\mathbb R}$ and \begin{equation} \label{onion}x_0>\frac{\sqrt{\mu}}{\beta_N^2}. \end{equation} To prove the result we now minimize the quotient $\frac{{\mathsf {Re}}(z_1 -z_0)\overline{(x_1-x_0)}}{ |x_1-x_0|^2}$ and show that it is larger than $1-\beta_N^2$. There are three cases to consider; $|x_1|=0$, $0<|x_1|<\sqrt{\mu}$ and $|x_1|\geq \sqrt{\mu}$. The latter case is easy since then $z_1-z_0=x_1-x_0$ and $1-\beta_N^2 < 1$, which yields the desired conclusion. For the two other cases we first show that $z_1$ and $x_1$ can be assumed to be real. If $x_1=0$ then the optimization of the above quotient is equivalent to minimization of $-{\mathsf {Re}}(z_1)$ over $z_1 \in \sqrt{\mu}{\mathbb D}$, since $x_0=z_0$ is real and positive, which is clearly minimized at $z_1 = \sqrt{\mu}$. For the middle case, $z_1$ and $x_1$ have the same complex argument, and $|x_1|<|z_1|<z_0$. We first hold the radii fixed and only consider the angle as an argument. Recall $z_0=x_0$ and set $R=|z_1|/|x_1|$, so that $z_1=Rx_1$. Then \begin{equation}\label{gt}\frac{{\mathsf {Re}}(z_1 -z_0)\overline{(x_1-x_0)}}{ |x_1-x_0|^2}=R-\frac{(R-1)\,x_0\para{x_0-{\mathsf {Re}}\, x_1}}{ |x_1-x_0|^2}.\end{equation} Since $R>1$ and $x_0>|x_1|$, the subtracted term is maximized (for fixed $|x_1|$) by making ${\mathsf {Re}}\, x_1$ as large as possible, which shows that the quotient is minimized when $x_1$ is real and $0<x_1<\sqrt{\mu}$ (which then automatically applies to $z_1$ as well). Summarizing the above we may thus assume that $x_1$ and $z_1$ are real and non-negative and $0\leq x_1<\sqrt{\mu}$. This simplifies the quotient \eqref{gt} to $\frac{x_0-z_1}{ x_0-x_1}$. We now hold $x_1,~z_1$ fixed and consider $x_0$ as the variable. Since $z_1\geq x_1$, it is easy to see that this is minimized for $x_0$ as small as possible, i.e.
$x_0=\frac{\sqrt{\mu}}{\beta_N^2}$. With this at hand, the minimum of $\frac{|z_1-\frac{\sqrt{\mu}}{\beta_N^2}|}{ |x_1-\frac{\sqrt{\mu}}{\beta_N^2}|}$ is clearly attained at $x_1=0$ and $z_1=\sqrt{\mu}$. Summing up, we have that $$\frac{{\mathsf {Re}}(z_1 -z_0)\overline{(x_1-x_0)}}{ |x_1-x_0|^2}>\frac{|z_1-\frac{\sqrt{\mu}}{\beta_N^2}|}{ |x_1-\frac{\sqrt{\mu}}{\beta_N^2}|}\geq \frac{|\sqrt{\mu}-\frac{\sqrt{\mu}}{\beta_N^2}|}{ |\frac{\sqrt{\mu}}{\beta_N^2}|}=1-\beta_N^2.$$ \end{proof} \begin{lemma}\label{lemma:bnd2} Assume that $z_0 \in \partial g(x_0)$ and $\beta_N<1$. If \begin{equation} \left| z_0 \right| < {\beta_N^2}{\sqrt{\mu}} \label{eq:subgradbnd2} \end{equation} then for any $x_1,~z_1$ with $z_1\in \partial g(x_1)$, $x_1\neq x_0$, we have \begin{equation}\label{fd1} {\mathsf {Re}}(z_1 -z_0)\overline{(x_1-x_0)} > (1-\beta_N^2) |x_1-x_0|^2. \end{equation}\end{lemma} \begin{proof} The proof is similar to the previous lemma. We first note that $x_0=0$, $x_1\neq 0$ and that $z_0$ may be assumed to be in $(0,\sqrt{\mu}]$. For a fixed radius $R = \frac{|z_1|}{|x_1|}$ the quotient $\frac{{\mathsf {Re}}(z_1 -z_0)\overline{(x_1-x_0)}}{ |x_1-x_0|^2}=\frac{{\mathsf {Re}}(z_1 -z_0)\overline{x_1}}{ |x_1|^2} = R- z_0\frac{{\mathsf {Re}} x_1}{|x_1|^2}$ is smallest when both $x_1$ and $z_1$ are real valued and positive. Since $x_1\neq 0$ we have $z_1 = \max(x_1,\sqrt{\mu})$. It is also easy to see that $z_0={\beta_N^2}{\sqrt{\mu}}$ gives a minimum value for any positive choice of $x_1,z_1$. This reduces the problem to finding the minimum of $$x_1 \mapsto\frac{\max(x_1,\sqrt{\mu}) -{\beta_N^2}{\sqrt{\mu}}}{ x_1}$$ which by basic calculus equals $(1-\beta_N^2),$ as desired. \end{proof} We are now ready to prove Theorem \ref{thm:statpoint:vec}. \begin{proof}[Proof of Theorem \ref{thm:statpoint:vec}] By Proposition \ref{thm:statpoint} it suffices to verify \begin{equation}\label{g} {\mathsf {Re}}\skal{z''-z',x''-x'} > (1-\beta_N^2) \|x''-x'\|^2,\quad x''\neq x'. 
\end{equation} Suppose first that $\beta_N<1$. By \eqref{Gg} the sub-differential $\partial {\mathcal G}(x)$ acts coordinate-wise, and Lemmas \ref{lemma:bnd1} and \ref{lemma:bnd2} imply that $${\mathsf {Re}}(z''_i-z'_i) \overline{(x_i''-x_i')} > (1-\beta_N^2) |x_i''-x_i'|^2, $$ for all $i$ with $x_i''-x_i' \neq 0$. Since $x_i''-x_i'=0$ gives $(z''_i-z'_i)\overline{(x_i''-x'_i)}=0$, summing over $i$ gives the result. Suppose now that $\beta_N>1$. By \eqref{g} it suffices to prove that ${\mathsf {Re}}\skal{z''-z',x''-x'}\geq 0$ for all $x''\neq x'$. Fix $i$ in $\{1,\ldots,n\}$. By rotational symmetry it is easy to see that we can assume that $x'_i,z'_i\geq 0$. Moreover, for fixed values of $|z_i''|$ and $|x''_i|$ (but variable complex phase) it is easy to see that ${\mathsf {Re}} (z_i''-z_i')\overline{(x_i''-x_i')}$ achieves its minimum when these are also real, i.e. we can assume that $x''_i,z''_i\in{\mathbb R}$. Since the graph of $\partial g$ is non-decreasing it follows that $(z_i''-z_i')(x_i''-x_i') \geq 0$ for all $i$, as desired. It remains to consider the case when $\beta_N=1$, in which \eqref{g} amounts to showing that ${\mathsf {Re}}\skal{z''-z',x''-x'}> 0$. Again we can assume that $x'_i,z'_i\geq 0$ and that $x''_i,z''_i\in{\mathbb R}$. Then \eqref{e4} implies that $z_i'\neq \sqrt{\mu}$ for all $1\leq i \leq n$, which via $z'_i\in \partial g(x'_i)$ also implies that $x_i'\not\in (0,\sqrt{\mu}]$. If $x''\neq x'$ we must have $x''_i\neq x_i'$ for some $i$. Using that $z_i''\in \partial g(x_i'')$, examination of \eqref{eq:vectorsubgrad} yields that also $z_i''\neq z_i'$. With this at hand we see that the left hand side of \eqref{g} is strictly positive, whereas the right hand side equals 0, which establishes \eqref{g}. \end{proof} \iffalse For later use in [..], we record that we have shown the following result.
\begin{corollary}\label{corbor} Let $z'$ satisfy $z'\in\partial {\mathcal G}(x')$ and \begin{equation}\label{e4t}|z_i'|\not\in\left[{\lambda^2}{\sqrt{\mu}},\frac{1}{\lambda^2}{\sqrt{\mu}}\right].\end{equation} If $z''\in\partial {\mathcal G}(x'')$, then ${\mathsf {Re}}\scal{z''-z',x''-x'}>(1-\lambda^2)\|x''-x'\|^2$. \end{corollary} \fi \subsection{Conditions on global minimality}\label{erai1} \begin{theorem}\label{thm:globalpoint2} Let $A$ satisfy $\|A\|_{\infty,col}\leq 1$, let $x'$ be a stationary point of ${\mathcal{K}_{reg}}$ and let $z'$ be given by \eqref{y}. Assume that \begin{equation}\label{cond3}|z_i'|\not\in\left[{\beta_N^2}{\sqrt{\mu}},\frac{1}{\beta_N^2}{\sqrt{\mu}}\right],\quad 1\leq i\leq n.\end{equation} If \begin{equation}\label{cond2}2\mu{\text{card}}(x') +\fro{Ax'-b}< \mu N+\mu,\end{equation} then $x'$ is the unique global minimum of ${\mathcal{K}}$ and ${\mathcal{K}_{reg}}$. \end{theorem} Obviously, it is desirable to pick $N$ as large as possible, which is limited by \eqref{cond3} and the fact that $\beta_N$ decreases with $N$. Also note that $\beta_N\leq 1$ since $\beta_1\leq \sup_i\{\|a_i\|_2\}=\|A\|_{\infty,col}$. \begin{proof} Set $K={\text{card}}(x')$ and assume that $x'$ is not the unique global minimizer of ${\mathcal{K}_{reg}}$. By Theorem \ref{celok5}, there exists a global minimizer $x''\neq x'$ for both ${\mathcal{K}_{reg}}$ and ${\mathcal{K}}$, which hence is a stationary point of ${\mathcal{K}_{reg}}$. Theorem \ref{thm:statpoint:vec} then shows that ${\text{card}}(x'')\geq {\text{card}}(x''-x')-{\text{card}}(x')>N-K$, i.e.~${\text{card}} (x'')\geq N-K+1$. It follows that $${\mathcal{K}_{reg}}(x'')-{\mathcal{K}_{reg}}(x')\geq {\mathcal{K}}(x'')-{\mathcal{K}}(x')\geq \mu(N-K+1)-(\mu K +\fro{Ax'-b})>0$$ by \eqref{cond2}, ${\mathcal{K}}(x'')={\mathcal{K}_{reg}}(x'')$ and ${\mathcal{K}}(x')\geq{\mathcal{K}_{reg}}(x')$. This is a contradiction, and hence $x'$ must be the unique global minimizer of ${\mathcal{K}_{reg}}$. By Theorem \ref{celok5} it then follows that $x'$ is also the unique minimizer of ${\mathcal{K}}$.
\end{proof} \subsection{Noisy data.} In this final subsection we return to the compressed sensing problem of retrieving a sparse vector $x_0$ given corrupted measurements $b=Ax_0+{{ \epsilon}}$, where ${{ \epsilon}}$ is noise. More precisely we set $S={\text{supp }} x_0$ where we assume that $\#S=K$ is much smaller than $m$ -- the number of rows in $A$ (i.e.~the number of measurements). We let $x_{0,j}$ denote the elements of the vector $x_0$. Let $A_S$ denote the matrix obtained from $A$ by setting columns outside of $S$ to $0$, and let $x_S$ denote the least squares solution to $A_S x_S=b$. Note that this is the so-called ``oracle solution'' discussed in the introduction, which can also be written $x_S=(A_S^*A_S)^\dagger A_S^*b$ where $(A_S^*A_S)^\dagger$ denotes the Moore-Penrose inverse. The proposition below shows that, under mild assumptions, the oracle solution is a local minimizer of ${\mathcal{K}_{reg}}$; we denote it by $x'$ for notational consistency. \begin{proposition}\label{p1} Let $A$ satisfy $\|A\|_{\infty,col}\leq 1$. If $\|{{ \epsilon}}\|< {\sqrt{\mu}}$ and $$|x_{0,j}|>\sqrt{\mu}+\frac{\|{{ \epsilon}}\|}{\beta_K}$$ for all $j\in S$ then the oracle solution $x'=x_S$ is a strict local minimum to ${\mathcal{K}_{reg}}$ with ${\text{supp }} (x')={\text{supp }} (x_0)$. We also have $|x_{j}'|>\sqrt{\mu},~ j\in S,$ $\|Ax'-b\|\leq \|\epsilon\|$, and $$\|x'-x_0\|\leq \frac{\|{{ \epsilon}}\|}{\beta_K}.$$ \end{proposition} \begin{proof} Consider the equation $A_Sx=Ax_0+{{ \epsilon}}$ and note that $Ax_0=A_Sx_0$. The least squares solution is obtained by applying $(A_S^*A_S)^\dagger A_S^*$ which gives the solution $$x'=x_0+(A_S^*A_S)^\dagger A_S^*{{ \epsilon}}=x_0+{{\delta}},$$ where we set $(A_S^*A_S)^\dagger A_S^*{{ \epsilon}}={{\delta}}$.
By construction of the Moore-Penrose inverse, ${\text{supp }} {{\delta}}\subset S$, and hence $$A{{\delta}}=A_S{{\delta}}=P_{\text{Ran} A_S}{{ \epsilon}},$$ where $P_{\text{Ran} A_S}$ denotes the orthogonal projection onto the range of $A_S.$ In particular, $$\|{{\delta}}\|\leq\frac{\|A_S{{\delta}}\|}{\beta_K}=\frac{\|P_{\text{Ran} A_S}{{ \epsilon}}\|}{\beta_K}\leq \frac{\|{{ \epsilon}}\|}{\beta_K},$$ which establishes the final inequality in the proposition. Also $\|{{\delta}}\|_{\infty}\leq \|{{\delta}}\|_{2}$ which implies \begin{equation}\label{sofg}|x_{j}'|\geq |x_{0,j}|-|{{\delta}}_j|>\sqrt{\mu}+\frac{\|{{ \epsilon}}\|}{\beta_K}-\frac{\|{{ \epsilon}}\|}{\beta_K}=\sqrt{\mu},\quad j\in S.\end{equation} This also gives ${\text{supp }} x'={\text{supp }} x_0$ since we already have shown ${\text{supp }} x'\subset {\text{supp }} x_0\cup{\text{supp }} {{\delta}}\subset S$. We now consider $Ax'-b$, which equals \begin{equation}\label{sof}\begin{aligned}&Ax'-b=A_Sx'-b=\\&A_Sx_0+A_S(A_S^*A_S)^\dagger A_S^*{{ \epsilon}}-(A_Sx_0+{{ \epsilon}})=(P_{\text{Ran} A_S}-I){{ \epsilon}}=-P_{(\text{Ran} A_{S})^{\perp}}{{ \epsilon}}\end{aligned}\end{equation} and hence $\|Ax'-b\|\leq \|\epsilon\|$. It remains to prove that $x'$ is a local minimum of ${\mathcal{K}_{reg}}={\mathcal Q}_2(\mu{\text{card}})+\fro{Ax-b}$. To this end, consider ${\mathcal{K}_{reg}}(x'+v)$. Since $|x_{j}'|>\sqrt{\mu}$ for $j\in S$, the term ${\mathcal Q}_2(\mu{\text{card}})$ is flat for the corresponding indices of $v$. For all sufficiently small $v$ we get \begin{equation}\label{rd}{\mathcal{K}_{reg}}(x'+v)=\sum_{j\in S^c}\left(2\sqrt{\mu} |v_j|-v_j^2\right)+2\scal{v,A^*(Ax'-b)}+\|Av\|^2+{\mathcal{K}_{reg}}(x').\end{equation} Since $x'$ solves the least squares problem posed initially, the vector $A_S^*(Ax'-b)=A_S^*(A_Sx'-b)$ must be 0.
With this in mind \eqref{rd} then simplifies to \begin{equation}\label{rd1}2\left(\sum_{j\in S^c}\sqrt{\mu} |v_j|+v_j\scal{a_j,Ax'-b}\right)-\sum_{j\in S^c}v_j^2+\|Av\|^2+{\mathcal{K}_{reg}}(x').\end{equation} By the Cauchy--Schwarz inequality and \eqref{sof} we have $$|\scal{a_j,Ax'-b}|\leq \|a_j\|\|{{ \epsilon}}\|< \|A\|_{\infty,col} {\sqrt{\mu}}\leq\sqrt{\mu}.$$ It follows that the term $\sum_{j\in S^c}\sqrt{\mu} |v_j|+v_j\scal{a_j,Ax'-b}$ in \eqref{rd1} can be estimated from below by $\rho\sqrt{\sum_{j\in S^c}v_j^2}$ for some $\rho>0$, and hence that $$2\left(\sum_{j\in S^c}\sqrt{\mu} |v_j|+v_j\scal{a_j,Ax'-b}\right)-\sum_{j\in S^c}v_j^2>0$$ for $v$ in a neighborhood of 0, as long as $\sum_{j\in S^c}v_j^2\neq 0$. To have ${\mathcal{K}_{reg}}(x'+v)\leq{\mathcal{K}_{reg}}(x')$ we thus need ${\text{supp }} v\subset S$, as seen from \eqref{rd1}. But then \eqref{rd1} reduces to $\|Av\|^2+{\mathcal{K}_{reg}}(x')$, and since $\beta_K>0$ it follows that $\|Av\|^2>0$ unless $v=0$. In other words, $x'$ is a strict local minimizer. \end{proof} The above proposition says nothing about whether $x'$ is a global minimum or not. To get further, let $z'$ correspond to $x'$ via \eqref{y}. We need conditions such that \eqref{cond3} holds for $z'$, i.e. \begin{equation}\label{cond31}|z_{i}'|\not\in\left[{\beta_N^2}{\sqrt{\mu}},\frac{1}{\beta_N^2}{\sqrt{\mu}}\right].\end{equation} We remind the reader that $N$ is a number which preferably is a bit larger than $2K$, where $K$ is the cardinality of $x_0$. \begin{proposition}\label{p2} Let $A$ satisfy $\|A\|_{\infty,col}\leq 1$. If $\|{{ \epsilon}}\|< {\beta_N^2\sqrt{\mu}}$ and \begin{equation}\label{po7}|x_{0,j}|>\frac{\sqrt{\mu}}{\beta_N^2}+\frac{\beta_N^2\sqrt{\mu}}{\beta_K}, \quad j\in {\text{supp }} x_0,\end{equation} then \eqref{cond31} holds.
\end{proposition} \begin{proof} Using \eqref{sof} we get \begin{align}\label{tg}z'=( I-A^* A)x' + A^* b = x'-A^*(Ax'-b)= x'+A^*P_{(\text{Ran} A_{S})^{\perp}}{{ \epsilon}}.\end{align} Since $A^*P_{(\text{Ran} A_{S})^{\perp}}$ is 0 on rows with index $j\in S$ (being a scalar product of a vector in $\text{Ran} A_{S}$ and another in its orthogonal complement), we see that $z_{j}'= x_{j}'$ for such $j$. Combining this with the final estimate of Proposition \ref{p1}, we see that \begin{equation*}\label{po9}|z_{j}'|>\frac{1}{\beta_N^2}{\sqrt{\mu}}, \quad j\in S\end{equation*} holds whenever \begin{equation}\label{po6}|x_{0,j}|>\frac{\sqrt{\mu}}{\beta_N^2}+\frac{\|{{ \epsilon}}\|}{\beta_K}, \quad j\in S,\end{equation} which is true by the assumptions. For the remaining $z_{j}'$, (i.e. $j\in S^c$), we have $x_{j}'=0$ so \begin{equation}\label{o0}|z_j'|=|(A^*P_{(\text{Ran} A_{S})^{\perp}}{{ \epsilon}})_j|=|\scal{P_{(\text{Ran} A_{S})^{\perp}}{{ \epsilon}},a_j}|\leq \|A\|_{\infty,col}\|{{ \epsilon}}\|\leq \|{{ \epsilon}}\|<{\beta_N^2}{\sqrt{\mu}},\end{equation} which establishes \eqref{cond31}. \end{proof} Putting all the results together and combining with simple estimates, we finally get. \begin{theorem}\label{thm:doctorgadget} Suppose that $b=Ax_0+{{ \epsilon}}$ where $A$ is an $m\times n$-matrix with $\|A\|_{\infty,col}\leq 1$ and set ${\text{card}} (x_0)=K.$ Let $N\geq 2K$ and assume that $\|{{ \epsilon}}\|< {\beta_{N}^2\sqrt{\mu}}$ and $$|x_{0,j}|>\para{\frac{1}{\beta_{N}^2}+1}\sqrt{\mu},\quad j\in {\text{supp }} x_0.$$ Then the oracle solution $x'=x_S$ is a unique global minimum to ${\mathcal{K}_{reg}}$ as well as ${\mathcal{K}}$, with the property that ${\text{supp }} x'={\text{supp }} x_0$, that $$\|x'-x_0\|\leq \frac{\|{{ \epsilon}}\|}{\beta_K},$$ and that ${\text{card}}(x'')> N-K$ for any other stationary point $x''$ of ${\mathcal{K}_{reg}}$. 
\end{theorem} \iffalse A necessary condition for the above assumptions to hold, is that they hold with $\beta_{N}$ replaced by $\sigma_{N}$, in light of Proposition \ref{p0}. \fi \begin{proof} All the statements follow by Theorem \ref{thm:statpoint:vec}, Theorem \ref{thm:globalpoint2} and Proposition \ref{p1}, so we just need to check that these apply. Note that $\beta_{N}\leq \beta_K\leq \|A\|_{\infty,col}\leq 1$, which will be used repeatedly. We begin by verifying that Proposition \ref{p1} applies, which is easy by noting that $\|{{ \epsilon}}\|< \beta_N^2\sqrt{\mu}\leq\sqrt{\mu}$ and $${\sqrt{\mu}}+\frac{\|\epsilon\|}{\beta_K}\leq \frac{\sqrt{\mu}}{\beta_{N}^2}+\frac{\beta_{N}^2\sqrt{\mu}}{\beta_K}\leq \frac{\sqrt{\mu}}{\beta_{N}^2}+{\sqrt{\mu}}<|x_{0,j}|.$$ Now, to verify that the two theorems apply we need to check the conditions \eqref{cond31}, which follow if we show that Proposition \ref{p2} applies. The estimate on $\|{{ \epsilon}}\|$ is satisfied by assumption and the other follows by noting that $\frac{\beta_{N}^2}{\beta_K}\leq 1.$ By this we have already verified Theorem \ref{thm:statpoint:vec} and the first condition of Theorem \ref{thm:globalpoint2}. To check \eqref{cond2} note that $\|Ax'-b\|\leq \|{{ \epsilon}}\|< \beta_N^2\sqrt{\mu}$ by Proposition \ref{p1}, so \eqref{cond2} holds if $2\mu K+\beta_N^4{\mu}\leq \mu N+\mu$, which is clearly the case since $N\geq 2K$. \end{proof} As usual, a simpler statement is found by setting $N=2K$, which gives the loosest conditions to verify (see Corollary \ref{cor:doctorgadget}). The only difference in the conclusion concerns the cardinality of other stationary points, since the estimate on $\|x'-x_0\|$ only depends on $\beta_K$. \section{Known model order; the $K$-sparsity problem}\label{seccardfix} Let $P_K=\{x:{\text{card}} (x)\leq K\}$ where $x$ is a vector in ${\mathbb C}^n$ or ${\mathbb R}^n$.
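The set $P_K$ is simply the set of at most $K$-sparse vectors. As a standard point of reference (not needed for the proofs below), a nearest point of $P_K$ to a given $x$ is obtained by keeping the $K$ largest entries in modulus; a minimal Python sketch:

```python
import numpy as np

def project_PK(x, K):
    # A nearest point of P_K = {card(x) <= K}: keep the K largest
    # entries in modulus and zero the rest (non-unique in case of ties).
    x = np.asarray(x, dtype=complex).copy()
    if K < x.size:
        small = np.argsort(np.abs(x), kind="stable")[: x.size - K]
        x[small] = 0.0
    return x
```

The projection is set-valued at ties, reflecting the non-convexity of $P_K$.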
Set $f(x)=\iota_{P_K}(x)$ and note that the problem \begin{equation}\label{agt1}\argmin_{{\text{card}}(x)\leq K}\|Ax-b\|\end{equation} is equivalent to finding the minimum of \begin{equation}\label{agt2}{\mathcal{K}_K}(x)=\iota_{P_K}(x)+\|Ax-b\|^2,\end{equation} (where we put a subscript $K$ to distinguish from ${\mathcal K}$ in the previous section). Again, we will approach this problem by using $${\mathcal{K}_{K,reg}}(x)={\mathcal Q}_2(\iota_{P_K})(x)+\|Ax-b\|^2.$$ This is in some ways much simpler than the situation in the previous sections; for example, all local minimizers of ${\mathcal{K}_K}$ are clearly in $P_K$. On the other hand, ${\mathcal Q}_2(\iota_{P_K})$ turns out to be rather complicated. We recapitulate the essentials, which follow by adapting the computations in \cite{andersson-etal-ol-2017} (for matrices) to the vector setting. Define $\tilde x$ to be the vector $x$ resorted so that $(|\tilde x_j|)_{j=1}^n$ is a non-increasing sequence. Then \begin{equation}\label{lust1}{\mathcal Q}_2(\iota_{P_K})(x)= \frac{1}{k_*}\para{\sum_{j>K-k_*}|\tilde{x}_j|}^2-\sum_{j>K-k_*}|\tilde x_j|^2\end{equation} where $k_*$ is the largest value in $1,\ldots,K$ for which the non-increasing sequence \begin{equation}\label{lust}s(k)=\left(\sum_{j>K-k}|\tilde x_j|\right)-k|\tilde x_{K+1-k}|\end{equation} is non-negative (note that it clearly is non-negative for $k=1$). Although it is not very clear from the above expression, ${\mathcal Q}_2(\iota_{P_K})$ is known to be continuous (see e.g.~Proposition 3.2 in \cite{carlsson2018convex}), and this will be used without comment below. We first show that the global minima of ${\mathcal{K}_{K,reg}}$ and ${\mathcal{K}_K}$ coincide. \subsection{Equality of minimizers and $K$-feasibility} In order to provide a theorem similar to Theorem \ref{celok5}, we need a technical condition on the columns $a_1,\ldots,a_n$ of $A$.
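Before stating that condition, we remark that \eqref{lust1}-\eqref{lust} translate directly into code; the following Python sketch (for illustration only, not part of the formal development) evaluates ${\mathcal Q}_2(\iota_{P_K})$:

```python
import numpy as np

def q2_indicator_PK(x, K):
    # Evaluate Q2(iota_{P_K})(x) via (lust1): xt = |x| sorted non-increasingly,
    # k_star = largest k in 1..K with s(k) >= 0, where s(k) is as in (lust).
    xt = np.sort(np.abs(np.asarray(x)))[::-1]
    k_star = 1
    for k in range(1, K + 1):
        s_k = xt[K - k:].sum() - k * xt[K - k]
        if s_k >= 0:
            k_star = k  # records the largest k with s(k) >= 0
    tail = xt[K - k_star:]
    return tail.sum() ** 2 / k_star - (tail ** 2).sum()
```

In particular the function vanishes on $P_K$, and for $K=1$ it reduces to $\|x\|_1^2-\|x\|_2^2$.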
We say that $A$ is \textit{$K$-feasible} if $\|A\|_{\infty,col}\leq 1$ and for any subset of $n-K$ columns, we can pick two such that $\|a_i-a_j\|^2\leq 2$. We say that $A$ is \textit{strictly $K$-feasible} if the inequality is strict. This is very easy to satisfy; the following proposition lists conditions that imply $K$-feasibility in ${\mathbb R}$ and ${\mathbb C}$ respectively. \begin{proposition}\label{mfisR} If we work over ${\mathbb R}$, any $A$ with $\|A\|_{\infty,col}\leq 1$ and $n\geq m+K+2$ is $K$-feasible. If we add the condition that $\scal{a_i,a_j}\neq 0$ for all pairs, or that $\|A\|_{\infty,col}< 1$, then strict $K$-feasibility follows. The same follows over ${\mathbb C}$ if $n\geq 2m+K+2$. Another condition ensuring strict $K$-feasibility, which works in both ${\mathbb R}$ and ${\mathbb C}$, is that $\|A\|_{\infty,col}\leq 1$ and at least $nK$ of the values $\{{\mathsf {Re}}\scal{a_i,a_j}\}_{i> j}$ are positive (repetitions allowed). \end{proposition} We remark that it is possible to choose $2m+1$ vectors in ${\mathbb C}^m$ such that $\|a_i-a_j\|^2> 2$; just consider a simplex in ${\mathbb R}^{2m}$ with equal side lengths and all corners on the unit sphere. The condition that $n\geq 2m+K+2$ is a bit unfortunate, since it rules out the common situation $n=2m$. This is why we added the final part of the proposition. \begin{proof} If the chosen subset contains a zero vector, the conclusion is immediate, so we can assume that this is not the case. For any $m+2$ vectors $a_1,\ldots,a_{m+2}$ in ${\mathbb R}^m$, we can always pick two such that $\scal{a_i,a_j}\geq 0$, which follows from a simple induction argument \cite{mathstack}; since $\|a_i\|,\|a_j\|\leq 1$ this gives $\|a_i-a_j\|^2\leq 2$. The first two conclusions regarding ${\mathbb R}^m$ follow immediately by this observation. Since ${\mathbb C}^m$ is isomorphic with ${\mathbb R}^{2m}$, the corresponding result for ${\mathbb C}$ follows. Finally, the elements above the diagonal in $(\scal{a_i,a_j})_{i,j=1}^n$ are $n(n-1)/2$ in number.
When we consider a subset of columns of cardinality $n-K$, we remove a total of $(n-K)K+(K-1)K/2$ of these. Since $nK$ is a bigger number, we are certain that at least one positive value remains. \end{proof} To illustrate a concrete example, which sometimes appears in applications, we consider the concatenation of a discrete Fourier transform matrix and an identity matrix. To see what the above proposition entails in this case, suppose that $m=4k$ or $m=4k+1$. Each column of the Fourier matrix gives rise to at least $k$ positive values in its scalar products with the canonical basis coming from the identity matrix, so in total we have at least $mk$ positive values among $\{{\mathsf {Re}}\scal{a_i,a_j}\}_{i>j}$. This gives the condition $2mK\leq mk$, i.e.~$K\leq m/8$, which is acceptable for relevant applications. We now develop the theory for $K$-feasible matrices, starting with the analog of Theorem \ref{celok5}. \begin{theorem}\label{celok6} Let $A$ be $K$-feasible. Then the global minimizers of ${\mathcal{K}_{K,reg}}$ and ${\mathcal{K}_K}$ coincide, and all lie in $P_K$. If $A$ is strictly $K$-feasible, then any local minimizer of ${\mathcal{K}_{K,reg}}$ lies in $P_K$ and is a local minimizer of ${\mathcal{K}_K}$. \end{theorem} \begin{proof} We first treat global minimizers. Clearly all minimizers of ${\mathcal{K}_K}$ are in $P_K$. Since ${\mathcal{K}_K}\geq{\mathcal{K}_{K,reg}}$ and they coincide on $P_K$, it suffices to show that a given global minimizer of ${\mathcal{K}_{K,reg}}$ is in $P_K$. This is annoyingly difficult; it is even a problem to show that ${\mathcal{K}_{K,reg}}$ has global minima. We begin by showing that points $x$ of large norm are not candidates for global minima if they lie in the set $$U=\left\{x\neq 0:~\frac{\sum_{j=K+1}^n|\tilde x_j|}{\sum_{j=1}^K|\tilde x_j|}\geq\frac{1}{2K}\right\}.$$ We have ${\mathcal Q}_2(\iota_{P_K})(x)>0$ for all $x\in U$.
One way to see this is by Corollary 4.4 in \cite{carlsson2018convex} (or Theorem 2.20 in \cite{carlsson2016convexification}), which states that there exists a direction $v$ such that $t\mapsto {\mathcal Q}_2(\iota_{P_K})(x+tv)$ has negative second derivative at $t=0$ (applicable since ${\mathcal{K}_{K,reg}}(x)<\infty={\mathcal{K}_K}(x)$ for $x\in U$); if we had ${\mathcal Q}_2(\iota_{P_K})(x)=0$, this would imply that ${\mathcal Q}_2(\iota_{P_K})$ takes negative values, contradicting Proposition 3.2 in \cite{carlsson2018convex} (or Proposition 2.1 in \cite{carlsson2016convexification}). This can also be deduced by a more careful analysis of \eqref{lust1}, which we will perform below. Put \begin{equation}\label{olbia}\epsilon=\inf\left\{{\mathcal Q}_2(\iota_{P_K})(x): ~x\in U,~\|x\|_2=1\right\}.\end{equation} Since we are minimizing a continuous positive function over a compact set, $\epsilon>0$. We now show that large values of $x\in U$ yield large values of ${\mathcal Q}_2(\iota_{P_K})(x)$. Let us write $s=s_x$ for \eqref{lust}, when there is a need to make the dependence on $x$ clear. The function $s$ is positively homogeneous, i.e.~$s_{tx}=t s_x$ for $t>0$, and hence $k_*$ is invariant under positive scaling. Looking at the expression for ${\mathcal Q}_2(\iota_{P_K})$ we see that $${\mathcal Q}_2(\iota_{P_K})(tx)=t^2{\mathcal Q}_2(\iota_{P_K})(x),\quad t>0.$$ Note that ${\mathcal{K}_K}(0)={\mathcal{K}_{K,reg}}(0)=\|b\|^2$, so the global minimum is less than or equal to this. If $R$ is such that $\epsilon R^2>\|b\|^2$, it follows that every $x\in U$ with $\|x\|>R$ satisfies ${\mathcal{K}_{K,reg}}(x)\geq {\mathcal Q}_2(\iota_{P_K})(x)\geq \epsilon\|x\|^2>\|b\|^2$, so no such point can be a global minimizer. We remark in passing that $x=0$ cannot be a global minimizer of ${\mathcal{K}_{K,reg}}$ unless $b$ is perpendicular to the range of $A$, in which case the theorem holds trivially.
To see this, note that ${\mathcal Q}_2(\iota_{P_K})(0)=0$ and ${\mathcal Q}_2(\iota_{P_K})\geq 0$, so if $x=0$ is a global minimizer of ${\mathcal{K}_{K,reg}}$ then the gradient of $\|Ax-b\|^2$ must necessarily vanish at $x=0$ (consider perturbations along the coordinate directions, which stay in $P_K$ where ${\mathcal Q}_2(\iota_{P_K})=0$), i.e.~$A^*b=0$. Now let $G$ be the set of global minimizers for ${\mathcal{K}_{K,reg}}$ restricted to $[-R,R]^n$. This set is clearly closed, non-void and bounded. Next let $G_n\subset G$ be the subset where $|\tilde x_n|$ attains its minimum over $G$, let $G_{n-1}\subset G_n$ be the subset where $|\tilde x_{n-1}|$ is minimized, and so on until we reach $G_{K+1}$, which still is closed, non-void and bounded. Suppose that $G_{K+1}\not\subset P_K$ and pick $x\in G_{K+1}\setminus P_K$. We now show that this is impossible. First of all, note that any two columns of $A$ and corresponding two elements in $x$ may (simultaneously) switch positions and sign, without affecting the problem, so it is no restriction to assume that $|\tilde x|=x$, which we now do. If $x_{K+1}=R$ then this must also be the case for $x_1,\ldots,x_K$, and hence $\|x\|>R$ and $x\in U$, which is impossible by the earlier conclusions. Thus $x_{K+1}<R$. Use $K$-feasibility of $A$ to pick two indices $j>i> K$ such that $\|a_i-a_j\|^2\leq 2$. Consider the function \begin{equation}\label{function}x(t)=x+t e_i-te_j.\end{equation} We shall show that this function stays in $G$ for small values of $t>0$, which contradicts the construction of $G_{K+1}$. Note that it stays in $[-R,R]^n$ for sure, due to $x_{K+1}<R$. A complicating factor is the fact that we may fail to have $|\tilde x(t)|=x(t)$ for $t>0$, i.e.~this vector is not necessarily non-increasing as a function of its index. As long as $ x_{K}> x_{i}$, we can pick both $i,j>K$ such that $x(t)$ is non-increasing for small values of $t>0$. We consider this case first. All values of $s_{x(t)}(k)$ in \eqref{lust} are then unaltered by $t$, and hence small perturbations do not affect $k_*$.
With this at hand, it follows that the first term in the expression \eqref{lust1} for ${\mathcal Q}_2(\iota_{P_K})(x(t))$ is unaffected by small changes in $t$, whereas the latter term is a quadratic polynomial in $t$ with quadratic part $-2t^2$. The quadratic term in the expression $\|Ax(t)-b\|^2$ on the other hand is $\|a_i-a_j\|^2t^2$. Using $\|a_i-a_j\|^2\leq 2$, it follows that \begin{equation}\label{po0}\frac{d^2}{dt^2}{\mathcal{K}_{K,reg}}(x(t))=-4+2\|a_i-a_j\|^2\leq 0.\end{equation} Thus ${\mathcal{K}_{K,reg}}(x(t))$ is a concave function of $t$ in a neighborhood of $0$, and since it attains a global minimum at $t=0$ (recall $x(0)=x\in G$), it must actually be constant there. It follows that $x(t)\in G$ for small values of $t>0$, which is a contradiction as we noted before. It remains to consider the case when $x_{K}= x_{i}$. In this case we can make $x(t)$ non-increasing (as a function of its index) for small (fixed) values of $t>0$, upon changing $i$ to the first index for which $x_i=x_K$, but now $i\leq K$ and the independence of $k_*$ is less clear. Let $k_i$ be such that $i=K+1-k_i$. We will need the following observations about $s(k)$: if $x_{K+1-k}=x_{K+1-(k+1)}$ then $s(k)=s(k+1)$, so we always have \begin{equation}\label{inh}x_{K+1-k_*}<x_{K+1-(k_*+1)}=x_{K-k_*}.\end{equation} Since $x\not \in P_K$, we also have $s_x(1)>0$. Combined, these give $s_x(k_i)=s_x(1)>0$ and hence $k_*\geq k_i$. If we now consider $s_{x(t)}(k)$ as functions of $t$, the inequality $s_{x(t)}(k_i)>0$ is stable with respect to small perturbations. On the other hand, the values of $s_{x(t)}(k)$ for $k>k_i$ are unaffected by small values of $t$ (the two perturbed entries cancel out in the sum of \eqref{lust}), and so we conclude that $k_*$ is unaffected by small values of $t$. It follows that both entries of $x(t)$ depending on $t$ have indices beyond $K-k_*$, which precisely as before yields that \eqref{po0} holds, a contradiction. To sum up, we have so far shown that any element $x\in G_{K+1}\setminus P_K$ necessarily satisfies $|\tilde x_{K+1}|<R$ and $|\tilde x_n|=0$.
As a grand finale, we will now show that this is impossible. Let $j$ be such that $x_j=0$ and pick $i$ such that $|x_i|\leq |{x}_K|$ and $\scal{a_i,a_j}\neq 0$. That this can be done follows from the basic fact that $m+1$ non-zero vectors in ${\mathbb R}^m$ cannot be mutually orthogonal (suppose for the moment that no $a_i$ is identically 0). Since $x_j=0$, it is no restriction to assume that $\scal{a_i,a_j}> 0$, which we do for simplicity. Consider again $x(t)=x+t e_i- te_j$. The previous analysis can now be repeated without modification, to conclude that \eqref{po0} holds whether $i>K$ or not. But in this case \eqref{po0} has a strictly negative value, which contradicts that $x$ is in $G$ to begin with. In the case when one $a_i=0$, we have $\|a_i-a_j\|\leq 1$ for any other $a_j$, which also yields a contradiction in \eqref{po0}. We can finally conclude that any minimizer of ${\mathcal{K}_{K,reg}}$ in $[-R,R]^n$ is in $P_K$ and thus also a minimizer of ${\mathcal{K}_K}$. Since $R$ could be arbitrarily large, the conclusion holds also in ${\mathbb R}^n$. The set of global minimizers of ${\mathcal{K}_{K,reg}}$ is thus closed and non-empty. However, the remaining argument becomes easier if we keep $R$ fixed where it is for a while longer. Now consider a path-connected component $H$ of the set of global minimizers in $[-R,R]^n$. Repeating the entire above argument, we see that $P_K\cap H\neq\emptyset$. Assume now that $H$ contains points that are not in $P_K$. Since $\sum_{j=K+1}^n |\tilde x_j|$ is a continuous function on $H$ which is 0 at $P_K$, there are points in $H$ with arbitrarily small quotient $(\sum_{j=K+1}^n|\tilde x_j|)/{|\tilde x_1|}$. Recall that we have ruled out the case $0\in H$ early on in the proof, so the quotient is a continuous function on $H$. Let $I$ be a level set of this function. We repeat the construction of sets $I_n\supset I_{n-1}\supset \ldots\supset I_{K+1}$ precisely as we did with $G$.
Pick some $x\in I_{K+1}$, and as before it is no restriction to assume that $x=|\tilde x|$. As before we see that it is impossible to have $x_n=0$, and as before we can pick $i,j$ such that $\scal{a_i,a_j}\geq 0$ where $i,j>K$. We define $x(t)$ via \eqref{function} and establish as before that \eqref{po0} must hold, whereby we get that $x(t)\in I_{K+1}$ for small values of $t$, and this contradicts how the sets $I_n,\ldots, I_{K+1}$ were chosen. Hence all global minimizers of ${\mathcal{K}_{K,reg}}$ in $[-R,R]^n$ must lie in $P_K$, and since $R$ can be arbitrarily large, the proof about global minimizers is complete. Finally, if we assume that $A$ is strictly $K$-feasible and that $x$ is a local minimizer, then we can always find $i,j$ such that \eqref{po0} holds, which is a contradiction since $-4+2\|a_i-a_j\|^2<0$ in this case. \end{proof} \subsection{On the uniqueness of sparse stationary points} We now give a condition, similar to \eqref{e4} in Section \ref{erai}, to ensure that a sparse stationary point is unique, in the sense that any other stationary point must have higher cardinality. \begin{theorem}\label{thm:statpoint:vecfix} Let $x'$ be a stationary point of ${\mathcal{K}_{K,reg}}$ with cardinality $K$, let $z'$ be given by \eqref{y}, and assume that \begin{equation}\label{e4fix}|\tilde z'_{K+1}|<(2\beta_{2K}^2-1)|\tilde z'_{K}|.\end{equation} If $x''$ is another stationary point of ${\mathcal{K}_{K,reg}}$ then ${\text{card}}(x'') > K$. \end{theorem} Again, we allow $\beta_{2K}>1$ in the above theorem, in which case the condition on $z'$ is automatically satisfied. We begin with a lemma. Recall ${\mathcal G}$ given by \eqref{gdef}, i.e.~$\frac{1}{2}{\mathcal Q}_2(\iota_{P_K})(x)+\frac{1}{2}\fro{x}$ in the present case. We need an expression for $\partial{\mathcal G}(x)$ for $x\in P_K$.
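Before stating the lemma, we sketch the Fenchel-conjugate computation it relies on (our own filling-in of a step that is only asserted below; notation as in the surrounding text). Optimizing first over vectors with a fixed support $S$, $\#S\leq K$, and then over $S$ gives

```latex
\begin{aligned}
\Big(\tfrac{1}{2}\iota_{P_K}+\tfrac{1}{2}\|\cdot\|^2\Big)^*(y)
&=\sup_{x\in P_K}\,{\mathsf{Re}}\scal{x,y}-\tfrac{1}{2}\|x\|^2
=\max_{\#S\leq K}\ \sum_{j\in S}\sup_{x_j}\Big({\mathsf{Re}}\,x_j\overline{y_j}-\tfrac{1}{2}|x_j|^2\Big)\\
&=\max_{\#S\leq K}\ \tfrac{1}{2}\sum_{j\in S}|y_j|^2
=\tfrac{1}{2}\sum_{j=1}^{K}|\tilde y_j|^2,
\end{aligned}
```

since each one-dimensional supremum is attained at $x_j=y_j$, and the best support collects the $K$ largest magnitudes $|\tilde y_1|,\ldots,|\tilde y_K|$.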
\begin{lemma}\label{sun} If $x\in P_K$ then $z\in\partial{\mathcal G}(x)$ if and only if $\tilde z_j=\tilde x_j$ for $j=1,\ldots,K$ and $\tilde z_j\in |\tilde x_K|{\mathbb D} $ for $j>K$. \end{lemma} \begin{proof} Since ${\mathcal Q}_2(\iota_{P_K})+\fro{x}$ is the l.s.c. convex envelope of $\iota_{P_K}+\fro{x}$, we have that ${\mathcal G}(x)=\frac{1}{2}{\mathcal Q}_2(\iota_{P_K})+\frac{1}{2}\fro{x}$ is the double Fenchel conjugate of $\frac{1}{2}\iota_{P_K}+\frac{1}{2}\fro{x}$. The Fenchel conjugate of the latter is easily computed to be $${\mathcal G}^*(y)=\frac{1}{2}\sum_{j=1}^K |\tilde y_j|^2.$$ By the well-known identity $z\in\partial {\mathcal G}(x)\Leftrightarrow x\in\partial {\mathcal G}^*(z)$ (see e.g. Proposition 16.9 in \cite{bauschke2017convex}) we have $z\in\partial{\mathcal G}(x)$ if and only if \begin{equation}\label{t6}z\in\argmax_w {\mathsf {Re}}\scal{x,w}-\frac{1}{2}\sum_{j=1}^K |\tilde w_j|^2.\end{equation} By standard results on reordering of sequences (see e.g.~Ch.~1 in \cite{simon2005trace}), this implies that the reordering of $z$ to $\tilde z$ can be chosen identical to that of $x$, which implies that $\scal{x,z}=\scal{\tilde x,\tilde z}$. Combined with $\tilde{x}_j=0$ for $j>K$, we see that \eqref{t6} turns into \begin{equation}\label{t7}z\in\argmax_w\, -\frac{1}{2}\sum_{j=1}^K |\tilde x_j-\tilde w_j|^2.\end{equation} The lemma now easily follows. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:statpoint:vecfix}.] Suppose, seeking a contradiction, that ${\text{card}} (x'')\leq K$. Then clearly $x''-x'\in P_{2K}$ and both $z'$ and $z''$ have the structure stipulated in Lemma \ref{sun}. Let $I'={\text{supp }} x'$ and $I''={\text{supp }} x''$.
Then ${\mathsf {Re}}\scal{z''-z',x''-x'}$ can be written \begin{equation} {\mathsf {Re}}\left(\sum_{\footnotesize \begin{array}{c} i \in I' \\ i \in I''\end{array}} |x_i''-x_i'|^2 + \sum_{\footnotesize \begin{array}{c} i \in I' \\ i \notin I''\end{array}} (x_i'-z_i'')\overline{x_i'} + \sum_{\footnotesize \begin{array}{c} i \notin I' \\ i \in I''\end{array}} (x_i''-z_i')\overline{x_i''}\right). \label{eq:sum1} \end{equation} As before we want to reach a contradiction to Proposition \ref{thm:statpoint}, i.e.~we want to prove ${\mathsf {Re}}\scal{z''-z',x''-x'}>(1-\beta_{2K}^2)\|x''-x'\|_2^2$. Note that \begin{equation}\label{gr2} \|x''-x'\|^2=\sum_{\footnotesize \begin{array}{c} i \in I' \\ i \in I''\end{array}} |x_i''-x_i'|^2 + \sum_{\footnotesize \begin{array}{c} i \in I' \\ i \notin I''\end{array}} |x_i'|^2 + \sum_{\footnotesize \begin{array}{c} i \notin I' \\ i \in I''\end{array}} |x_{i}''|^{2}, \end{equation} that the first term in \eqref{eq:sum1} and \eqref{gr2} are the same, and that $\beta_{2K}>0$. Since the second and third sums have the same number of terms it suffices to show that \begin{equation} {\mathsf {Re}} (x_i'-z_i'')\overline{x_i'} + (x_j''-z_j')\overline{x_j''} > (1-\beta_{2K}^2)(|x_i'|^2 + |x_j''|^{ 2}), \label{eq:twoterms} \end{equation} for any pair $i\in I'$, $i\notin I''$ and $j \notin I'$, $j \in I''$. This in turn will follow upon showing that $$|z_i''| |x_i'| + |z_j'| |x_j''| < \beta_{2K}^2(|x_i'|^2 + |x_j''|^{ 2}).$$ By Lemma \ref{sun} it is easy to see that $|z_i''|\leq |z_j''|=|x_j''|$ and by assumption we also have $|z_j'|< (2\beta_{2K}^2-1)| z'_{i}|=(2\beta_{2K}^2-1)| x'_{i}|$. Thus $$|z_i''| |x_i'| + |z_j'| |x_j''| < |x_j''||x_i'| + (2\beta_{2K}^2-1)| x'_{i}| |x_j''|=2\beta_{2K}^2| x'_{i}| |x_j''|\leq \beta_{2K}^2(|x_i'|^2 + |x_j''|^{ 2}),$$ as desired. \end{proof} \iffalse For later use in [..], we record that we have shown the following result. 
\begin{corollary}\label{corbor1} Let $z'$ satisfy $2z'\in\partial {\mathcal G}(x')$ and \begin{equation*}|\tilde z'_{K+1}|<(2\lambda^2-1)|\tilde z'_{K}|.\end{equation*} If $2z''\in\partial {\mathcal G}(x'')$, then ${\mathsf {Re}}\scal{z''-z',x''-x'}>(1-\lambda^2)\|x''-x'\|^2$. \end{corollary} \fi \subsection{Conditions on global minimality.} The statements in this section are actually a bit stronger than the corresponding ones in Section \ref{erai1}. \begin{theorem}\label{thm:globalpoint2f} Let $A$ be $K$-feasible and let $x'\in P_K$ be a stationary point of ${\mathcal{K}_{K,reg}}$. Let $z'$ be given by \eqref{y} and assume that \eqref{e4fix} applies. Then $x'$ is the unique global minimizer of ${\mathcal{K}_K}$ and ${\mathcal{K}_{K,reg}}$. If $A$ is strictly $K$-feasible, then there are no other local minimizers either. \end{theorem} \begin{proof} By Theorem \ref{celok6} there exists $x''\in P_K$ which is a global minimizer for both ${\mathcal{K}_K}$ and ${\mathcal{K}_{K,reg}}$. If $x'\neq x''$ this would contradict Theorem \ref{thm:statpoint:vecfix}. If in addition $A$ is strictly $K$-feasible, then Theorem \ref{celok6} says that any local minimizer is a stationary point in $P_K$, so a local minimizer different from $x'$ would again contradict Theorem \ref{thm:statpoint:vecfix}. \end{proof} \subsection{Noisy data.} We now assume that $b$ is of the form $Ax_0+{{ \epsilon}}$ where ${{ \epsilon}}$ is noise and $x_0$ is sparse. More precisely we set $S={\text{supp }} x_0$, where we assume that $\#S=K$. Let $A_S$ denote the matrix obtained from $A$ by setting the columns outside of $S$ to $0$. By minor modifications of the proof of Proposition \ref{p1} we obtain the following. \begin{proposition}\label{p1f} Let $A$ satisfy $\|A\|_{\infty,col}\leq 1$. If $|x_{0,j}|>\para{1+\frac{1}{\beta_K}}\|{{ \epsilon}}\|$ for all $j\in S$ then the oracle solution $x'=x_S$ is a strict local minimizer of ${\mathcal{K}_{K,reg}}$ with ${\text{supp }} (x')={\text{supp }} (x_0)$.
This also satisfies $|x'_j|>\|\epsilon\|$, $j\in S$, $\|Ax'-b\|\leq \|\epsilon\|$ and $$\|x'-x_0\|\leq \frac{\|{{ \epsilon}}\|}{\beta_K}.$$ \end{proposition} \begin{proof} We assume for simplicity that we work over ${\mathbb R}$, and as in Proposition \ref{p1} we let $x'$ be the oracle solution. All estimates of Proposition \ref{p1} go through with minor modifications, for example \eqref{sofg} is replaced by \begin{equation}\label{hy}|x_{j}'|\geq |x_{0,j}|-|{{\delta}}_j|>|x_{0,j}|-\frac{\|{{ \epsilon}}\|}{\beta_K},\quad j\in S\end{equation} which shows that ${\text{supp }} (x')={\text{supp }} (x_0)$ as well as $|x'_j|>\|\epsilon\|$ for $j\in S$. The only real difference is the proof that $x'$ is a local minimizer of ${\mathcal{K}_{K,reg}}$, so we consider ${\mathcal{K}_{K,reg}}(x'+v)$ where as usual we can assume that $x'=|\tilde x'|$. Since $x'$ solves the least squares problem posed initially, the vector $A_S^*(Ax'-b)=A_S^*(A_Sx'-b)$ must be 0, and so \begin{equation*}{\mathcal{K}_{K,reg}}(x'+v)={\mathcal Q}_2(\iota_{P_K})(x'+v)+2\sum_{j=K+1}^n v_j\scal{a_j,Ax'-b} +\|Av\|^2+{\mathcal{K}_{K,reg}}(x').\end{equation*} Let $k_1$ be such that $x_{K+1-k_1}=x_K$ but $x_{K-k_1}>x_K$, and note that $k_*$ will depend on $v$ but will always satisfy $k_*\leq k_1$. With this in mind ${\mathcal Q}_2(\iota_{P_K})(x'+v)$ becomes $$\frac{1}{k_*}\left(k_*x_K+\sum_{j=K-k_*+1}^K v_j+\sum_{j=K+1}^n|v_j|\right)^2-\sum_{j=K-k_*+1}^K(x_K+v_j)^2-\sum_{j=K+1}^n v_j^2.$$ Upon inspection there is a lot of cancelation and the expression reduces to $2x_K\sum_{j=K+1}^n|v_j|$ plus quadratic terms in $v$. 
Returning to the expression for ${\mathcal{K}_{K,reg}}(x'+v)$ and collecting all quadratic contributions from $v$ in a term $q(v)$, we see that ${\mathcal{K}_{K,reg}}(x'+v)-{\mathcal{K}_{K,reg}}(x')$ equals \begin{equation*}2x_K\left(\sum_{j=K+1}^n|v_j|\right)+2\left(\sum_{j=K+1}^n v_j\scal{a_j,Ax'-b}\right) +q(v).\end{equation*} Since $|\scal{a_j,Ax'-b}|\leq\|\epsilon\|$ and $x_K>\|\epsilon\|$, it is easy to see that there exists a constant $\rho>0$ such that \begin{equation*}2x_K\left(\sum_{j=K+1}^n|v_j|\right)+2\left(\sum_{j=K+1}^n v_j\scal{a_j,Ax'-b}\right)\geq\rho\sqrt{\sum_{j=K+1}^nv_j^2}\end{equation*} near $0$. As in the proof of Proposition \ref{p1} we conclude that we must have $\sum_{j=K+1}^nv_j^2=0$ in order for ${\mathcal{K}_{K,reg}}(x'+v)\leq {\mathcal{K}_{K,reg}}(x')$ to be possible. However, for $v$ with ${\text{supp }} v\subset S,$ ${\mathcal Q}_2(\iota_{P_K})(x'+v)=0$ and so $${\mathcal{K}_{K,reg}}(x'+v)=\|A(x'+v)-b\|^2\geq \|Ax'-b\|^2={\mathcal{K}_{K,reg}}(x'),$$ and the proof is complete. \end{proof} Passing from $x'$ being a strict local minimizer to being the unique global minimizer is now a short step. \begin{theorem}\label{cor:dogadgetf} Let $A$ be $K$-feasible and $\beta_{2K}>1/\sqrt{2}$. If $$|x_{0,j}|>\para{\frac{1}{2\beta_{2K}^2-1}+\frac{1}{\beta_K}}\|{{ \epsilon}}\|,\quad j\in S$$ then the oracle solution $x'$ in the above proposition is a global minimizer of ${\mathcal{K}_K}$ and ${\mathcal{K}_{K,reg}}$. Moreover, if $A$ is strictly $K$-feasible then ${\mathcal{K}_{K,reg}}$ has no other local minimizers. \end{theorem} \begin{proof} Proposition \ref{p1f} clearly applies and ensures that $x'$ is a local minimizer. We now check that \eqref{e4fix} applies for $z'$ given by \eqref{y}, i.e. we want to check that $|\tilde z'_{K+1}|<(2\beta_{2K}^2-1)|\tilde z'_K|$. Note that $|\tilde z'_{K+1}|\leq\|{{ \epsilon}}\|$ by the same estimate as \eqref{o0}.
Moreover, since $z'\in\partial {\mathcal G}(x')$, Lemma \ref{sun} implies that it suffices to show that $\|{{ \epsilon}}\|<(2\beta_{2K}^2-1)|\tilde x'_K|$, which by \eqref{hy} follows if $$\|{{ \epsilon}}\|<(2\beta_{2K}^2-1)\para{|\tilde x_{0,K}|-\frac{\|{{ \epsilon}}\|}{\beta_K}},$$ which, upon dividing by $2\beta_{2K}^2-1$ and rearranging, is easily seen to be equivalent to the condition in the statement. The desired conclusions now follow by Theorem \ref{thm:globalpoint2f}. \end{proof} \bibliographystyle{plain}
\section{Introduction} The optimal design problem, devoted to find the minimal energy configurations of a mixture of two conductive materials, has been widely studied since the pioneering papers \cite{KS1,KS2, KS3}. It is well known that, given a container $\Omega$ and prescribing only the volume fraction of the material where it is expected to have a certain conductivity, an optimal configuration might not exist. To overcome this difficulty, Ambrosio and Buttazzo in \cite{AB} imposed a perimeter penalization and studied the following minimization problem $$ \min\left\{\int_E (\alpha |Du|^2 +g_1(x,u))dx + \int_{\Omega\setminus E}(\beta |Du|^2+ g_2(x,u)) dx + \sigma P(E,\Omega):E \subset \Omega, u \in H^1_0(\Omega)\right\}, $$ finding the solution $(u,E)$ and describing the regularity properties of the optimal set $E$. In this paper we are considering the minimization of a similar functional, where the energy density $|\cdot|^2$ has been replaced by more general $W_i$, $i=1,2$ without any convexity assumptions and with linear growth, and since the lower order terms $g_1(x,u)$ and $g_2(x,u)$ do not play any role in the asymptotics, we omit them in our subsequent analysis. The case of $W_i$, $i=1,2$, not convex with superlinear growth has been studied in the context of thin films in \cite{CZ}. Thus, given $\Omega$ a bounded open subset of $\mathbb{R}^{N}$, we assume that $W_{i}:\mathbb{R}^{d\times N}\rightarrow\mathbb{R}$ are continuous functions such that there exist positive constants $\alpha,\beta$ for which \begin{equation} \alpha|\xi|\leq W_{i}(\xi)\leq\beta(1+|\xi|)\hbox{ for every }\xi\in \mathbb{R}^{d\times N},\;\;\;\ i=1,2. 
\label{H1} \end{equation} We consider the following optimal design problem \begin{equation} \underset{\begin{array}[c]{c} {\small u\in W}^{1,1}{\small (\Omega;}\mathbb{R}^{d}{\small )}\\ {\small \chi_E\in BV(\Omega;\{0,1\})} \end{array}}{\inf}\left\{ {\int_{\Omega}\left( \chi_{E}W_{1}(\nabla u)+(1-\chi_{E})W_{2}(\nabla u)\right) dx+P(E;\Omega):u=u_{0}}\text{ on }\partial\Omega\right\} \label{originalpb} \end{equation} where $\chi_{E}$ is the characteristic function of $E\subset\Omega$, which has finite perimeter, see \eqref{perimeter} below. Note that by \eqref{perimeter} and the definition of total variation, $P\left( E;\Omega\right) =\left\vert D\chi_{E}\right\vert \left( \Omega\right) $ and we are led to the subsequent minimum problem \begin{equation}\nonumber \inf\limits_{\begin{array}[c]{l} {\small u\in W}^{1,1}{\small (\Omega;{\mathbb R}}^{d}{\small )}\\ {\small \chi}_{E}{\small \in BV(\Omega;\{0,1\})} \end{array}}\left\{ {\displaystyle\int_{\Omega}} \left( \chi_{E}W_{1}+\left( 1-\chi_{E}\right) W_{2}\right) \left( \nabla u\right) dx+\left\vert D\chi_{E}\right\vert \left( \Omega\right) :u=u_0\text{ on }\partial\Omega\right\} . \end{equation} The lack of convexity of the energy requires a relaxation procedure. To this end we start by localizing our energy: first we introduce the functional $F_{\cal OD}:L^{1}(\Omega;\{0,1\})\times L^{1}(\Omega;\mathbb{R}^{d})\times \mathcal{A}\left( \Omega\right) \rightarrow\lbrack0,+\infty]$ defined by \begin{equation} F_{\cal OD}(\chi,u;A):=\left\{ \begin{array}[c]{lll} { {\displaystyle\int_{A}} \left( \chi W_{1}\left( \nabla u\right) +(1-\chi)W_{2}\left( \nabla u\right) \right) dx+\left\vert D\chi\right\vert (A)} & & \text{in }BV(A;\{0,1\})\times W^{1,1}(A;\mathbb{R}^{d}),\text{\bigskip}\\ +\infty & & \text{otherwise.} \end{array} \right.
\label{Je} \end{equation} Then we consider the relaxed localized energy of $\left( \ref{Je}\right) $ given by \begin{equation}\nonumber \begin{array}[c]{c} \mathcal{F_{\cal OD}}\left( \chi,u;A\right) :=\inf\left\{ \underset{n\rightarrow \infty}{\liminf} {\displaystyle\int_{A}} {\left( \chi_{n}W_{1}\left( \nabla u_{n}\right) +(1-\chi_{n})W_{2}\left( \nabla u_{n}\right) \right) dx+}\left\vert D\chi_{n}\right\vert \left( A\right) {:}\left\{ u_{n}\right\} \subset W^{1,1}\left( A;\mathbb{R}^{d}\right) ,\right. \\ \qquad\qquad\qquad\qquad\qquad\qquad\left. \left\{ \chi_{n}\right\} \subset BV\left( A;\left\{ 0,1\right\} \right) ,~u_{n}\rightarrow u\text{ in }L^{1}\left( A;\mathbb{R}^{d}\right) \text{ and }\chi_{n}\overset{\ast }{\rightharpoonup}\chi\text{ in }BV\left( A;\left\{ 0,1\right\} \right) \right\}. \end{array} \end{equation} Let $V:\{0,1\}\times\mathbb{R}^{d\times N}\rightarrow(0,+\infty)$ be given by \begin{equation} V\left( q, z\right) :=q W_{1}(z)+\left( 1-q\right) W_{2}\left( z\right) \label{Vbar} \end{equation} and $\overline{F_{\cal OD}}:BV(\Omega;\{0,1\})\times BV(\Omega;\mathbb{R}^{d})\times\mathcal{A}\left( \Omega\right) \rightarrow\lbrack0,+\infty]$ be defined as \begin{equation}\label{representation} \overline{F_{\cal OD}}\left( \chi,u;A\right) :=\int_{A}QV\left( \chi,\nabla u\right) dx+\int_{A}QV^{\infty}\left( \chi,\frac{dD^{c}u}{d\left\vert D^{c}u\right\vert }\right) d\left\vert D^{c}u\right\vert +\int_{J_{\left( \chi,u\right) }\cap A}K_{2}\left( \chi^{+},\chi^{-},u^{+},u^{-},\nu\right) d\mathcal{H}^{N-1} \end{equation} where $QV$ is the quasiconvex envelope of $V$ given in $\left( \ref{Qfbar}\right) ,$ $QV^{\infty}$ is the recession function of $QV,$ namely \begin{equation}\label{QVinfty} QV^{\infty}\left( q, z\right) :=\lim_{t\rightarrow\infty}\frac{QV\left( q,t z\right) }{t}, \end{equation} and \begin{equation} {K_{2}(a,b,c,d,\nu):=\inf}\left\{ {\displaystyle\int_{Q_{\nu}}} QV^{\infty}(\chi(x),\nabla u(x))dx+|D\chi|(Q_{\nu}):\left( \chi,u\right)
\in\mathcal{A}_2(a,b,c,d,\nu)\right\} , \label{K2} \end{equation} where \begin{align} \mathcal{A}_2\left( a,b,c,d,\nu\right) & :=\left\{ \left( \chi,u\right) \in BV\left( Q_{\nu};\left\{ 0,1\right\} \right) \times W^{1,1}\left( Q_{\nu};\mathbb{R}^{d}\right) :\right. \label{AFR}\\ & \left. (\chi(y),u\left( y\right)) =(a,c)\text{ if }y\cdot\nu=\frac{1}{2},~(\chi(y),u\left( y\right)) =(b,d)\text{ if }y\cdot\nu=-\frac{1}{2},\right.\nonumber\\ & \left. (\chi, u)\text{ are }1-\text{periodic in }\nu_{1},\dots,\nu_{N-1} \hbox{ directions}\right\} ,\nonumber \end{align} for $\left( a,b,c,d,\nu\right) \in\left\{ 0,1\right\} \times\left\{ 0,1\right\} \times\mathbb{R}^{d}\times\mathbb{R}^{d}\times S^{N-1},$ with $\left\{ \nu_{1},\nu_{2},\dots,\nu_{N-1},\nu\right\} $ an orthonormal basis of $\mathbb{R}^{N}$ and $Q_\nu$ the unit cube, centered at the origin, with one direction parallel to $\nu$. In Section \ref{appl} we obtain the following integral representation. \begin{theorem} \label{mainthm}Let $\Omega\subset\mathbb{R}^{N}$ be a bounded open set and let $W_{i}:\mathbb{R}^{{d}\times{N}}\rightarrow\lbrack0,+\infty)$, $i=1,2$, be continuous functions satisfying \eqref{H1}. Let $\overline{F_{\cal OD}}$ be the functional defined in $\left( \ref{representation}\right) $. Then for every $(\chi,u)\in L^1(\Omega;\{0,1\})\times L^1(\Omega;\mathbb{R}^{d})$ and every $A\in\mathcal{A}(\Omega)$ \begin{equation}\nonumber \mathcal{F_{\cal OD}}(\chi,u;A)=\left\{ \begin{array}{ll} \overline{F_{\cal OD}}(\chi,u;A) &\hbox{ if } (\chi, u)\in BV(\Omega;\{0,1\}) \times BV(\Omega;\mathbb R^d) ,\\ \\ +\infty & \hbox{ otherwise.} \end{array} \right. \end{equation} \end{theorem} This result will be achieved as a particular case of a more general theorem dealing with special functions of bounded variation which are piecewise constant.
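To build intuition for the recession function \eqref{QVinfty}, the following sketch (an illustration of ours, using the model linear-growth density $f(z)=\sqrt{1+z^2}$, which is not one of the densities of the paper) approximates $f^{\infty}(z)=\lim_{t\to\infty}f(tz)/t=|z|$ numerically:

```python
# Illustration (not from the paper): the recession function
# f_inf(z) = lim_{t->inf} f(t z)/t for the model linear-growth
# density f(z) = sqrt(1 + z^2); here f_inf(z) = |z|.
import math

def f(z):
    return math.sqrt(1.0 + z * z)

def recession(z, t=1e8):
    # finite-t approximation of lim_{t->inf} f(t z)/t
    return f(t * z) / t

for z in (-2.0, -0.5, 0.0, 1.0, 3.0):
    print(z, recession(z), abs(z))  # the last two columns agree
```

The same limiting procedure, applied to the quasiconvex envelope instead of $f$, is what produces $QV^{\infty}$ and $Qf^{\infty}$ in the representation formulas.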
In fact we provide an integral representation for the relaxation of the functional $F:L^{1}(\Omega;\mathbb{R}^{m})\times L^{1}(\Omega;\mathbb{R}^{d})\times\mathcal{A}\left( \Omega\right) \rightarrow[ 0,+\infty]$ defined by \begin{equation} \label{FG}F(v,u;A):=\left\{ \begin{array}[c]{lll} \displaystyle{\int_Af(v, \nabla u)dx+\int_{A\cap J_v} g(v^{+}, v^{-}, \nu_{v})d\mathcal{H}^{N-1}} & \hbox{ in }SBV_{0}(A;\mathbb{R}^{m}) \times W^{1,1}(A;\mathbb{R}^{d}),\\ & & \\ +\infty & \hbox{ otherwise,} & \end{array} \right. \end{equation} where $SBV_0(A;\mathbb R^m)$ is defined in \eqref{SBV0} (see Section \ref{pre}) and $f: \mathbb{R}^{m} \times\mathbb{R}^{d \times N}\to[0,+\infty[$, $g :\mathbb{R}^{m} \times\mathbb{R}^{m} \times S^{N-1} \to[0, +\infty[$ satisfy the following hypotheses: \begin{itemize} \item[($F_{1}$)] $f$ is continuous; \item[$(F_{2})$] there exist $0<\beta'\leq\beta$ such that \[ \beta'|z| \leq f(q, z) \leq\beta(1+ |z|), \] for every $(q, z) \in\mathbb{R}^{m}\times\mathbb{R}^{d \times N};$ \item[($F_{3}$)] there exists $L>0$ such that \[ \left| f(q_{1},z)- f(q_{2},z)\right| \leq L|q_{1}-q_{2}|(1+ |z|) \] for every $q_{1},q_{2} \in\mathbb{R}^{m}$ and $z \in\mathbb{R}^{d \times N};$ \item[($F_{4}$)] there exist $\alpha\in(0,1)$ and $C,L>0$ such that \[ t|z|>L\ \Rightarrow\ \left| f^{\infty}(q, z)- \frac{f(q, t z)}{t}\right| \leq C \frac{|z|^{1-\alpha}}{t^{\alpha}},\ \hbox{ for every } (q, z) \in\mathbb{R}^{m} \times\mathbb{R}^{d \times N}, t \in\mathbb{R}, \] \end{itemize} with $f^{\infty}$ the recession function of $f$ with respect to the last variable, defined as \begin{equation} \label{finfty}\displaystyle{f^{\infty}(q, z):= \limsup_{t \to\infty} \frac{f(q,t z)}{t}}, \end{equation} for every $(q, z)\in\mathbb{R}^{m} \times\mathbb{R}^{d \times N};$ \begin{itemize} \item[($G_{1}$)] $g$ is continuous; \item[($G_{2}$)] there exists a constant $C>0$ such that \[ \frac{1}{C}(1+|\lambda-\theta|)\leq g(\lambda, \theta, \nu)\leq C (1+|\lambda- \theta|), \] for
every $(\lambda, \theta, \nu)\in\mathbb{R}^{m} \times\mathbb{R}^{m} \times S^{N-1}$, \item[($G_{3}$)] $g(\lambda, \theta, \nu)= g(\theta, \lambda, -\nu)$, for every $(\lambda, \theta, \nu)\in\mathbb{R}^{m} \times\mathbb{R}^{m} \times S^{N-1}$. \end{itemize} The relaxed localized energy of \eqref{FG} is given by \begin{equation} \begin{array}[c]{c} \mathcal{F}\left( v,u;A\right) :=\inf\left\{ \displaystyle{\liminf_{n \to\infty} \left( \int_{A} f(v_{n}, \nabla u_{n})dx +\int_{J_{v_{n}}\cap A} g({v_{n}}^{+}, {v_{n}}^{-}, \nu_{v_{n}}) d\mathcal{H}^{N-1}\right) :\left\{ u_{n}\right\} \subset W^{1,1}\left( A;\mathbb{R}^{d}\right) },\right. \\ \qquad\qquad\qquad\qquad\qquad\qquad\left. \{ v_{n}\} \subset SBV_{0}\left( A;\mathbb{R}^{m}\right) ,~u_{n}\rightarrow u\text{ in }L^{1}\left( A;\mathbb{R}^{d}\right) \text{ and }v_{n} \to v \hbox{ in }L^1(A;\mathbb R^m) \right\}. \end{array} \label{calFG} \end{equation} Let $\overline{F_{0}}:SBV_{0}(\Omega;\mathbb{R}^{m})\times BV(\Omega;\mathbb{R}^{d})\times\mathcal{A}\left( \Omega\right) \rightarrow[0,+\infty]$ be given by \begin{equation} \label{representationFG}\overline{F_{0}}\left( v,u;A\right) :=\int_{A}Qf\left( v,\nabla u\right) dx+\int_{A}Qf^{\infty}\left( v,\frac{dD^{c}u}{d\left\vert D^{c}u\right\vert }\right) d\left\vert D^{c}u\right\vert +\int_{J_{\left( v,u\right) }\cap A}K_{3}\left(v^{+},v^{-},u^{+},u^{-},\nu\right) d\mathcal{H}^{N-1}, \end{equation} where $Qf$ is the quasiconvex envelope of $f$ given in \eqref{Qfbar}, $Qf^{\infty}$ is the recession function of $Qf$, and $K_{3}: \mathbb{R}^{m} \times\mathbb{R}^{m} \times\mathbb{R}^{d} \times\mathbb{R}^{d} \times S^{N-1} \to[0, +\infty[$ is defined as \begin{equation} \label{K3} \begin{array}{ll} K_{3}(a,b,c,d,\nu):= \\ \displaystyle{\inf\left\{ \int_{Q_{\nu}}Qf^{\infty}(v(x), \nabla u(x))dx+ \int_{J_{v}\cap Q_{\nu}}g(v^{+}(x), v^{-}(x), \nu(x))d \mathcal{H}^{N-1}: (v,u)\in\mathcal{A}_3(a,b,c,d,\nu)\right\}} \end{array} \end{equation} where \begin{equation} \label{A3}
\begin{array}[c]{ll} \displaystyle{\mathcal{A}_3\left( a,b,c,d,\nu\right) :=} &\displaystyle{\left\{ (v,u)\in(SBV_{0}(Q_{\nu};\mathbb{R}^{m})\cap L^{\infty}(Q_{\nu};\mathbb{R}^{m}))\times W^{1,1}(Q_{\nu};\mathbb{R}^{d}): \right.} \\ \\ &\displaystyle{(v(y), u(y))= (a,c) \hbox{ if } y \cdot\nu=\frac{1}{2}, (v(y), u(y))= (b,d) \hbox{ if } y\cdot\nu=-\frac{1}{2},} \\ \\ &\displaystyle{\left. (v, u) \hbox{ are } 1-\hbox{periodic in }\nu_{1}, \dots, \nu_{N-1} \hbox{ directions} \right\},} \end{array} \end{equation} with $\left\{ \nu_{1},\nu_{2},\dots,\nu_{N-1},\nu\right\} $ an orthonormal basis of $\mathbb{R}^{N}.$ In the following we present the main result. \begin{theorem} \label{mainthmgen} Let $\Omega\subset\mathbb{R}^{N}$ be a bounded open set and let $f:\mathbb{R}^{m} \times\mathbb{R}^{d \times N}\to[0, +\infty[$ be a function satisfying $(F_{1})- (F_{4})$ and $g:\mathbb{R}^{m} \times \mathbb{R}^{m} \times S^{N-1}\to[0, +\infty[$ satisfying $(G_{1})-(G_{3})$. Let $F$ be the functional defined in \eqref{FG}. Then for every $(v,u) \in L^{1}(\Omega;\mathbb{R}^{m})\times L^{1}(\Omega;\mathbb{R}^{d}) $ \[ \mathcal{F}(v,u; \Omega)=\left\{ \begin{array}[c]{ll} \overline{F_{0}}(v,u;\Omega) & \hbox{ if }(v,u)\in SBV_{0}(\Omega;\mathbb{R}^{m}) \times BV(\Omega;\mathbb{R}^{d}),\\ \\ +\infty & \hbox{ otherwise.} \end{array} \right. \] \end{theorem} The paper is organized as follows. Section \ref{pre} is devoted to preliminary results dealing with functions of bounded variation, perimeters and special functions of bounded variation which are piecewise constant. The properties of the energy densities and several auxiliary results involved in the proofs of representation Theorems \ref{mainthm} and \ref{mainthmgen} are discussed in Section \ref{auxres}. The proof of the lower bound for ${\cal F}$ in \eqref{calFG} is presented in Section \ref{lb}, while Section \ref{ub} contains the upper bound and the proof of Theorem \ref{mainthmgen}.
The applications to optimal design problems as in \cite{AB} and the comparison with previous related relaxation results as in \cite{FM2}, such as Theorem \ref{mainthm}, are discussed in Section \ref{appl}. \section{Preliminaries}\label{pre} We give a brief survey of functions of bounded variation and sets of finite perimeter. In the following $\Omega\subset\mathbb{R}^{N}$ is an open bounded set and we denote by $\mathcal{A}\left( \Omega\right) $ the family of all open subsets of $\Omega$. The $N$-dimensional Lebesgue measure is designated as $\mathcal{L}^{N}$, while $\mathcal{H}^{N-1}$ denotes the $\left( N-1\right) $-dimensional Hausdorff measure. The unit cube in $\mathbb{R}^{N}$, $\left( -\frac{1}{2},\frac{1}{2}\right) ^{N}$, is denoted by $Q$ and we set $Q\left( x_{0},\varepsilon\right) :=x_{0}+\varepsilon Q$ for $\varepsilon>0$. For every $\nu \in S^{N-1}$ we define $Q_{\nu}:=R_{\nu}\left( Q\right) $, where $R_{\nu}$ is a rotation such that $R_{\nu}\left( e_{N}\right) =\nu$. The constant $C$ may vary from line to line. \label{perimeterBV} We denote by $\mathcal{M}(\Omega)$ the space of all signed Radon measures in $\Omega$ with bounded total variation. By the Riesz Representation Theorem, $\mathcal{M}(\Omega)$ can be identified with the dual of the separable space $\mathcal{C}_{0}(\Omega)$ of continuous functions on $\Omega$ vanishing on the boundary $\partial\Omega$. If $\lambda\in\mathcal{M}(\Omega)$ and $\mu \in\mathcal{M}(\Omega)$ is a nonnegative Radon measure, we denote by $\frac{d\lambda}{d \mu}$ the Radon-Nikod\'{y}m derivative of $\lambda$ with respect to $\mu$. The following version of the Besicovitch Differentiation Theorem was proven by Ambrosio and Dal Maso \cite[Proposition 2.2]{Ambrosio-Dal Maso}.
\begin{theorem}\label{thm2.6BBBF} If $\lambda$ and $\mu$ are Radon measures in $\Omega$, $\mu \geq 0$, then there exists a Borel set $E \subset \Omega$ such that $\mu(E)=0$, and for every $x \in {\rm supp}\,\mu\setminus E$ $$ \displaystyle{\frac{d \lambda}{d \mu}(x):= \lim_{\e \to 0^+}\frac{\lambda (x+ \e C)}{\mu (x+ \e C)}} $$ exists and is finite whenever $C$ is a bounded, convex, open set containing the origin. \end{theorem} We recall that the exceptional set $E$ above does not depend on $C$. An immediate corollary is the generalization of the Lebesgue-Besicovitch Differentiation Theorem given below. \begin{theorem}\label{thm2.8FM2} If $\mu$ is a nonnegative Radon measure and if $f \in L^1_{\rm loc}(\mathbb R^N,\mu)$ then $$ \lim_{\e \to 0^+} \frac{1}{\mu(x+ \e C)}\int_{x+ \e C} | f(y) - f ( x ) | d\mu(y) =0 $$ for $\mu$-a.e. $ x\in \mathbb R^N$ and for every bounded, convex, open set $C$ containing the origin. \end{theorem} \begin{definition} A function $w\in L^{1}(\Omega;{\mathbb{R}}^{d})$ is said to be of \emph{bounded variation}, and we write $w\in BV(\Omega;{\mathbb{R}}^{d})$, if all its first distributional derivatives $D_{j}w_{i}$ belong to $\mathcal{M}(\Omega)$ for $1\leq i\leq d$ and $1\leq j\leq N$. \end{definition} The matrix-valued measure whose entries are $D_{j}w_{i}$ is denoted by $Dw$ and $|Dw|$ stands for its total variation. We observe that the map $w\mapsto|Dw|(\Omega)$ is lower semicontinuous on $BV(\Omega;\mathbb{R}^{d})$ with respect to the $L_{\mathrm{loc}}^{1}(\Omega;\mathbb{R}^{d})$ topology. We briefly recall some facts about functions of bounded variation. For more details we refer the reader to \cite{AFP2}, \cite{EG}, \cite{G} and \cite{Z}. \begin{definition} \label{def3.14AFPstrict} Let $w, w_{n} \in BV(\Omega;\mathbb{R}^{d})$.
The sequence $\{w_{n}\}$ strictly converges in $BV(\Omega;\mathbb{R}^{d})$ to $w$ if $\{w_{n}\}$ converges to $w$ in $L^{1}(\Omega;\mathbb{R}^{d})$ and $\{|D w_{n}|(\Omega)\}$ converges to $|D w|(\Omega)$ as $n \to\infty$. \end{definition} \begin{definition} Given $w\in BV\left( \Omega;\mathbb{R}^{d}\right) $ the \emph{approximate upper}\textit{\ }\emph{limit }and the \emph{approximate lower limit} of each component $w^{i}$, $i=1,\dots,d$, are defined by \[ \left( w^{i}\right) ^{+}\left( x\right) :=\inf\left\{ t\in\mathbb{R}:\,\lim_{\varepsilon\rightarrow0^{+}}\frac{\mathcal{L}^{N}\left( \left\{ y\in\Omega\cap Q\left( x,\varepsilon\right) :\,w^{i}\left( y\right) >t\right\} \right) }{\varepsilon^{N}}=0\right\} \] and \[ \left( w^{i}\right) ^{-}\left( x\right) :=\sup\left\{ t\in\mathbb{R}:\,\lim_{\varepsilon\rightarrow0^{+}}\frac{\mathcal{L}^{N}\left( \left\{ y\in\Omega\cap Q\left( x,\varepsilon\right) :\,w^{i}\left( y\right) <t\right\} \right) }{\varepsilon^{N}}=0\right\} , \] respectively. The \emph{jump set}\textit{\ }of $w$ is given by \[ J_{w}:=\bigcup_{i=1}^{d}\left\{ x\in\Omega:\,\left( w^{i}\right) ^{-}\left( x\right) <\left( w^{i}\right) ^{+}\left( x\right) \right\} . \] \end{definition} It can be shown that $J_{w}$ and the complement of the set of Lebesgue points of $w$ differ, at most, by a set of $\mathcal{H}^{N-1}$ measure zero. Moreover, $J_{w}$ is $\left( N-1\right) $-rectifiable, i.e., there are $C^{1} $ hypersurfaces $\Gamma_{i}$ such that $\mathcal{H}^{N-1}\left( J_{w}\setminus\cup_{i=1}^{\infty}\Gamma_{i}\right)=0.$ \begin{proposition}\label{thm2.3BBBF} If $w\in BV\left( \Omega;\mathbb{R}^{d}\right) $ then \begin{enumerate} \item[i)] for $\mathcal{L}^{N}$-a.e.
$x\in\Omega$ \begin{equation} \lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon}\left\{ \frac{1}{\varepsilon^{N}}\int_{Q\left( x,\varepsilon\right) }\left\vert w(y) -w(x) -\nabla w\left( x\right) \cdot( y-x) \right\vert ^{\frac{N}{N-1}}dy\right\} ^{\frac{N-1}{N}}=0; \label{approximate differentiability} \end{equation} \item[ii)] for $\mathcal{H}^{N-1}$-a.e. $x\in J_{w}$ there exist $w^{+}\left( x\right) ,$ $w^{-}\left( x\right) \in\mathbb{R}^{d}$ and $\nu\left( x\right) \in S^{N-1}$ normal to $J_{w}$ at $x,$ such that \[ \lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon^{N}}\int_{Q_{\nu}^{+}\left( x,\varepsilon\right) }\left\vert w\left( y\right) -w^{+}\left( x\right) \right\vert dy=0,\qquad\lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon^{N}}\int_{Q_{\nu}^{-}\left( x,\varepsilon\right) }\left\vert w\left( y\right) -w^{-}\left( x\right) \right\vert dy=0, \] where $Q_{\nu}^{+}\left( x,\varepsilon\right) :=\left\{ y\in Q_{\nu}\left( x,\varepsilon\right) :\,\left\langle y-x,\nu\right\rangle >0\right\} $ and $Q_{\nu}^{-}\left( x,\varepsilon\right) :=\left\{ y\in Q_{\nu}\left( x,\varepsilon\right) :\,\left\langle y-x,\nu\right\rangle <0\right\} $; \item[iii)] for $\mathcal{H}^{N-1}$-a.e. $x\in\Omega\backslash J_{w}$ \[ \lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon^{N}}\int_{Q\left( x,\varepsilon\right) }\left\vert w(y) -w\left( x\right) \right\vert dy=0. \] \end{enumerate} \end{proposition} We observe that in the vector-valued case in general $\left( w^{i}\right) ^{\pm}\neq\left( w^{\pm}\right) ^{i}.$ In the sequel $w^{+}$ and $w^{-}$ denote the vectors introduced in $ii)$ above.
Choosing a normal $\nu_{w}\left( x\right) $ to $J_{w}$ at $x,$ we denote the \emph{jump} of $w$ across $J_{w}$ by $\left[ w\right] :=w^{+}-w^{-}.$ The distributional derivative of $w\in BV\left( \Omega;\mathbb{R}^{d}\right) $ admits the decomposition \[ Dw=\nabla w\mathcal{L}^{N}\lfloor\Omega+\left( \left[ w\right] \otimes \nu_{w}\right) \mathcal{H}^{N-1}\lfloor J_{w}+D^{c} w, \] where $\nabla w$ represents the density of the absolutely continuous part of the Radon measure $Dw$ with respect to the Lebesgue measure. The \emph{Hausdorff}, or \emph{jump}, \emph{part} of $Dw$ is represented by $\left( \left[ w\right] \otimes\nu_{w}\right) \mathcal{H}^{N-1}\lfloor J_{w}$ and $D^{c} w$ is the \emph{Cantor part} of $Dw$. The measure $D^{c} w$ is singular with respect to the Lebesgue measure and it is diffuse, i.e., every Borel set $B\subset\Omega$ with $\mathcal{H}^{N-1}\left( B\right) <\infty$ has Cantor measure zero. The following result, which will be exploited in the sequel, can be found in \cite[Lemma 2.6]{FM2}. \begin{lemma}\label{lemma2.5BBBF} Let $w \in BV(\Omega;\mathbb R^d)$. Then, for ${\cal H}^{N-1}$-a.e. $x \in J_w$, $$ \displaystyle{\lim_{\e \to 0^+} \frac{1}{\e^{N-1}} \int_{J_w \cap Q_{\nu(x)}(x, \e)} |w^+(y)- w^-(y)| d {\cal H}^{N-1} = |w^+(x)- w^-(x)|.} $$ \end{lemma} In the following we give some preliminary notions related to sets of finite perimeter. For a detailed treatment we refer to \cite{AFP2}. \begin{definition} \label{Setsoffiniteperimeter} Let $E$ be an $\mathcal{L}^{N}$-measurable subset of $\mathbb{R}^{N}$. For any open set $\Omega\subset\mathbb{R}^{N}$ the perimeter of $E$ in $\Omega$, denoted by $P(E;\Omega)$, is the variation of $\chi_{E}$ in $\Omega$, i.e. \begin{equation} \label{perimeter}P(E;\Omega):=\sup\left\{ \int_{E} \mathrm{div}\varphi dx: \varphi\in C^{1}_{c}(\Omega;\mathbb{R}^{N}), \|\varphi\|_{L^{\infty}}\leq1\right\} .
\end{equation} We say that $E$ is a set of finite perimeter in $\Omega$ if $P(E;\Omega) <+ \infty.$ \end{definition} Recalling that if $\mathcal{L}^{N}(E \cap\Omega)$ is finite, then $\chi_{E} \in L^{1}(\Omega)$, by \cite[Proposition 3.6]{AFP2} it follows that $E$ has finite perimeter in $\Omega$ if and only if $\chi_{E} \in BV(\Omega)$, and $P(E;\Omega)$ coincides with $|D\chi_{E}|(\Omega)$, the total variation in $\Omega$ of the distributional derivative of $\chi_{E}$. Moreover, a generalized Gauss-Green formula holds: \begin{equation}\nonumber {\int_{E}\mathrm{div}\varphi dx=\int_{\Omega}\langle\nu_{E},\varphi\rangle d|D\chi_{E}|\;\;\forall\,\varphi\in C_{c}^{1}(\Omega;\mathbb{R}^{N})}, \end{equation} where $D\chi_{E}=\nu_{E}|D\chi_{E}|$ is the polar decomposition of $D\chi_{E}$. We also recall that, when dealing with sets of finite measure, a sequence of sets $\{E_{n}\}$ converges to $E$ in measure in $\Omega$ if $\mathcal{L}^{N}(\Omega\cap(E_{n}\Delta E))$ converges to $0$ as $n\rightarrow\infty$, where $\Delta$ stands for the symmetric difference. Analogously, the local convergence in measure corresponds to the above convergence in measure for any open set $A\subset\subset\Omega$. These convergences are equivalent to the $L^{1}(\Omega)$ and $L^{1}_{\mathrm{loc}}(\Omega)$ convergences of the characteristic functions. We also recall that local convergence in measure in $\Omega$ is equivalent to convergence in measure whenever $\Omega$ has finite measure.
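The identification $P(E;\Omega)=|D\chi_{E}|(\Omega)$ can be illustrated numerically in the simplest one-dimensional situation, where the perimeter of an interval counts its two essential boundary points. The following sketch is an added aside, not part of the original text; all names in it are illustrative:

```python
import numpy as np

# Hedged numerical sketch: for E = (-1, 1) in R the characteristic function
# chi_E has one jump up and one jump down, so
#   P(E; R) = |D chi_E|(R) = H^0({-1, 1}) = 2.
# We sample chi_E on a uniform grid and sum absolute finite differences,
# which is the exact total variation of the piecewise-constant sample.
x = np.linspace(-2.0, 2.0, 4001)
chi_E = ((x > -1.0) & (x < 1.0)).astype(float)
perimeter = np.abs(np.diff(chi_E)).sum()
print(perimeter)  # 2.0
```

The discrete total variation is insensitive to the grid step here, since only the two transitions of the sampled characteristic function contribute.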
Denoting by ${\cal P}(\Omega)$ the family of all sets with finite perimeter in $\Omega$, we recall the Fleming-Rishel formula (see \cite[formula 4.59]{F}): for every $\Phi \in W^{1,1}(\Omega)$ the set $ \{t \in \mathbb R: \{\Phi >t\} \not \in {\cal P}(\Omega)\} $ is negligible in $\mathbb R$ and \begin{equation}\label{FR} \displaystyle{\int_\Omega h |\nabla \Phi|dx = \int_{-\infty}^{+ \infty} \int_{\partial^\ast \{\Phi >t\}} h d {\cal H}^{N-1}dt} \end{equation} for every bounded Borel function $h :\Omega \to \mathbb R$, where $\partial ^\ast \{\Phi >t\}$ denotes the essential boundary of $\{\Phi >t\}$ (cf. \cite[Definition 3.60]{AFP2}). At this point we deal with functions of bounded variation whose Cantor part is null. \begin{definition} \label{SBV} A function $v \in BV(\Omega;\mathbb{R}^{m})$ is said to be a special function of bounded variation, and we write $v \in SBV(\Omega;\mathbb{R}^{m})$, if $D^{c} v={\underline{0}}$, i.e. \[ Dv=\nabla v\mathcal{L}^{N}\lfloor\Omega+ ([v]\otimes\nu_{v})\mathcal{H}^{N-1}\lfloor J_{v}. \] \end{definition} The space $SBV_{0}(\Omega;\mathbb{R}^{m})$ is defined by \begin{equation}\label{SBV0} SBV_{0}(\Omega;\mathbb{R}^{m}):=\left\{ v \in SBV(\Omega;\mathbb{R}^{m}): \nabla v=0, \hbox{ and } \mathcal{H}^{N-1}(J_{v})< + \infty\right\} . \end{equation} \noindent Clearly, any characteristic function of a set of finite perimeter is in $SBV_{0}(\Omega)$. We recall that a sequence of sets $\{E_i\}$ is a Borel partition of a Borel set $B \in {\cal B}(\mathbb R^N)$ if and only if $$ E_i \in{\cal B}(\mathbb R^N) \hbox{ for every }i,\quad E_i \cap E_j = \emptyset \hbox{ for every } i \not= j, \hbox{ and } \cup_{i=1}^\infty E_i =B. $$ The above requirements can be weakened by requiring that $|E_i \cap E_j|=0$ for $i \not =j$ and $|B \Delta \cup_{i=1}^\infty E_i|=0$. Such a sequence $\{E_i\}$ is said to be a Caccioppoli partition if each $E_i$ is a set of finite perimeter.
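As a sanity check, the Fleming-Rishel formula \eqref{FR} with $h\equiv1$ can be tested numerically on the radial function $\Phi(x)=\max\{0,1-|x|\}$ in $\mathbb{R}^{2}$: there $|\nabla\Phi|=1$ on the unit disk, while $\{\Phi>t\}$ is the disk of radius $1-t$ with perimeter $2\pi(1-t)$, so both sides equal $\pi$. The following sketch is an illustrative aside, not part of the text:

```python
import numpy as np

# Coarea/Fleming-Rishel check (hedged sketch) for Phi(x) = max(0, 1 - |x|):
#   LHS = int |grad Phi| dx = area of the unit disk = pi,
#   RHS = int_0^1 Per({Phi > t}) dt = int_0^1 2*pi*(1 - t) dt = pi.
n, L = 1501, 1.5
xs = np.linspace(-L, L, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
Phi = np.maximum(0.0, 1.0 - np.sqrt(X**2 + Y**2))

gx, gy = np.gradient(Phi, h)                      # central differences
lhs = np.sqrt(gx**2 + gy**2).sum() * h**2         # int |grad Phi| dx

ts = np.linspace(0.0, 1.0, 2001)
per = 2.0 * np.pi * (1.0 - ts)                    # perimeter of {Phi > t}
rhs = 0.5 * ((per[:-1] + per[1:]) * np.diff(ts)).sum()  # trapezoid rule

print(lhs, rhs)  # both approximately pi
```

The left-hand side carries an $O(h)$ discretization error concentrated on the kink at $|x|=1$; the right-hand side is exact up to rounding, since the trapezoid rule integrates the linear function $2\pi(1-t)$ exactly.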
The following result, whose proof can be found in \cite{CT}, expresses the relation between Caccioppoli partitions and $SBV_0$ functions. \begin{lemma}\label{lemma31BCD} If $v \in SBV_0(\Omega;\mathbb R^m)$ then there exist a Borel partition $\{E_i\}$ of $\Omega$ and a sequence $\{v_i\}\subset \mathbb R^m$ such that $$ v=\sum_{i=1}^\infty v_i \chi_{E_i} \hbox{ for a.e. } x \in \Omega, $$ $${\cal H}^{N-1}(J_v \cap \Omega)=\frac{1}{2}\sum_{i=1}^\infty {\cal H}^{N-1}(\partial^\ast E_i \cap \Omega)=\frac{1}{2}\sum_{i\not = j=1}^\infty {\cal H}^{N-1}(\partial^\ast E_i \cap \partial^\ast E_j\cap \Omega),$$ $$ (v^+, v^-, \nu_{v}) \equiv (v_i, v_j, \nu_i) \hbox{ for } {\cal H}^{N-1}\hbox{-a.e. } x \in \partial^\ast E_i \cap \partial^\ast E_j \cap \Omega,$$ $\nu_i$ being the unit normal to $\partial^\ast E_i \cap \partial^\ast E_j$. \end{lemma} \medskip In the sequel we identify $(v,u)\in SBV_{0}(\Omega;\mathbb{R}^{m})\times BV(\Omega;\mathbb{R}^{d})$ with their precise representatives $({\tilde v}, {\tilde u})$. See \cite[Definition 3.63 and Corollary 3.80]{AFP2} for the definition. \begin{remark} \label{vmeas} Since $SBV_{0}(\Omega;\mathbb{R}^{m}) \subset BV(\Omega;\mathbb{R}^{m})$, then $(v,u)\in BV(\Omega;\mathbb{R}^{m+d})$ for every $(v,u) \in SBV_{0}(\Omega;\mathbb{R}^{m}) \times BV(\Omega;\mathbb{R}^{d})$. Thus $(v,u)$ is $|D^{c} (v,u)|$-measurable, and since $D^{c}(v,u)= ({\underline{0}}, D^{c} u)$, we may say that $v$ is $|D^{c} u|$-measurable. \end{remark} The following compactness result for bounded sequences in $SBV(\Omega; \mathbb{R}^m)$ is due to Ambrosio (see \cite{A1}, \cite{A2}). \begin{theorem} \label{Theorem 2.1} Let $\Phi:[0,+\infty) \to[0,+\infty)$, $\Theta:(0,+\infty] \to(0,+\infty]$ be two functions, respectively convex and concave, and such that \[ \lim_{t\to\infty} \frac{\Phi(t)}{t} = +\infty, \quad\Phi\text{ is nondecreasing}, \] \[ \Theta(+\infty) = \lim_{t\to\infty} \Theta(t), \quad\lim_{t\to0^{+}} \frac{\Theta(t)}{t} = +\infty, \quad\Theta\text{ is nondecreasing}.
\] Let $\{v_n\}$ be a sequence of functions in $SBV(\Omega;\mathbb{R}^m)$ such that \[ \sup_{n} \left\{ \int_{\Omega} \Phi(|\nabla v_{n}|) \, dx + \int_{J_{v_n}} \Theta(|[v_n]|)\, d\mathcal{H}^{N-1} + \int_{\Omega} |v_n|\, dx\right\} < +\infty. \] Then there exists a subsequence $\{v_{n_{k}}\}$ converging in $L^{1}(\Omega;\mathbb{R}^m)$ to a function $v \in SBV(\Omega;\mathbb{R}^m)$, and \[ \nabla v_{n_{k}} \rightharpoonup\nabla v \quad\text{in} \; L^{1}(\Omega;\mathbb{R}^{m\times N}),\quad[v_{n_{k}}]\otimes\nu_{v_{n_{k}}}\mathcal{H}^{N-1}\lfloor J_{v_{n_{k}}}\overset{\ast}{\rightharpoonup} [v]\otimes\nu_v\mathcal{H}^{N-1}\lfloor J_v, \] \[ \int_{J_v\cap\Omega} \Theta(|[v]|)\, d\mathcal{H}^{N-1} \leq\liminf_{n\to+\infty} \int_{J_{v_{n}}\cap\Omega} \Theta(|[v_{n}]|) \, d\mathcal{H}^{N-1}. \] \end{theorem} \section{Auxiliary results}\label{auxres} This section is mainly devoted to describing the properties of the energy densities involved in the integral representation of the relaxed functionals \eqref{representation} and \eqref{representationFG}. Recall that a Borel function $f:\mathbb{R}^{m}\times\mathbb{R}^{d\times N}\rightarrow\left[ -\infty,+\infty\right] $ is said to be quasiconvex if \begin{equation} f\left( q, z\right) \leq\frac{1}{\mathcal{L}^{N}\left( \Omega\right) }\int_{\Omega}f\left( q, z+\nabla\varphi\left( y\right) \right) dy \label{qcx} \end{equation} for every open bounded set $\Omega\subset\mathbb{R}^{N}$ with $\mathcal{L}^{N}\left( \partial\Omega\right) =0,$ for every $(q, z)\in\mathbb{R}^{m} \times\mathbb{R}^{d\times N}$ and every $\varphi\in W_{0}^{1,\infty}\left( \Omega;\mathbb{R}^{d}\right) $ whenever the right-hand side of $\left( \ref{qcx}\right) $ exists as a Lebesgue integral.
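As a standard illustration (not specific to the energy densities considered in this paper): if $f(q,\cdot)$ is convex, then quasiconvexity follows from Jensen's inequality, since $\int_{\Omega}\nabla\varphi\,dy=0$ for every $\varphi\in W_{0}^{1,\infty}(\Omega;\mathbb{R}^{d})$:

```latex
% Convexity in z implies quasiconvexity (classical fact, stated as an aside):
f\left( q,z\right) =f\left( q,\frac{1}{\mathcal{L}^{N}\left( \Omega\right) }
\int_{\Omega}\left( z+\nabla\varphi\left( y\right) \right) dy\right)
\leq\frac{1}{\mathcal{L}^{N}\left( \Omega\right) }\int_{\Omega}f\left(
q,z+\nabla\varphi\left( y\right) \right) dy.
```

The converse fails for $d,N\geq2$: in the vector-valued setting quasiconvexity is strictly weaker than convexity.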
The quasiconvex envelope of $f:\mathbb{R}^{m}\times\mathbb{R}^{d\times N}\rightarrow\left[ 0,+\infty\right] $ is the largest quasiconvex function below $f$ and it is denoted by $Qf.$ If $f$ is Borel and locally bounded from below then it can be shown that \begin{equation} Qf\left( q, z\right) =\inf\left\{ \int_{Q} f\left( q, z+\nabla \varphi\right) dx:\varphi\in W_{0}^{1,\infty}\left( Q;\mathbb{R}^{d}\right) \right\} , \label{Qfbar} \end{equation} for every $(q, z)\in\mathbb{R}^{m} \times\mathbb{R}^{d\times N}$. The following result guarantees that the properties of $f$ are inherited by $Qf.$ Since the proof develops along the same lines as \cite[Proposition 2.2]{RZ}, in turn inspired by \cite{D}, we omit it. \begin{proposition} \label{continuityQfbar} Let $f:\mathbb{R}^{m}\times\mathbb{R}^{d\times N}\rightarrow[0,+\infty)$ be a function satisfying $(F_{1})-(F_{3})$, and let $Q f:\mathbb{R}^{m}\times\mathbb{R}^{d \times N}\rightarrow[0,+\infty)$ be its quasiconvexification, as in \eqref{Qfbar}. Then $Q f$ satisfies $(F_{1})-(F_{3})$. \end{proposition} \begin{remark} \label{propfinfty} Let $f:\mathbb{R}^{m} \times\mathbb{R}^{d \times N} \to[0, +\infty)$ be a function satisfying $(F_{1})-(F_{4})$, with $f^{\infty}$ as in \eqref{finfty}. \noindent(i) Recall that the recession function $f^{\infty}(q, \cdot)$ is positively one-homogeneous for every $q \in\mathbb{R}^{m}$. \medskip\noindent(ii) We observe that, if $f$ satisfies the growth condition $(F_{2})$, then $\beta'|z|\leq f^{\infty}(q, z)\le\beta|z|$ holds. Moreover, if $f$ satisfies $(F_3)$, then $f^\infty$ satisfies $|f^\infty(q, z)- f^\infty(q', z)|\leq L|q-q'||z|$, where $L$ is the constant appearing in $(F_3)$.
\medskip\noindent(iii) As shown in \cite[Remark 2.2 (ii)]{FM2}, if a function $f:\mathbb{R}^{m} \times\mathbb{R}^{d\times N}\longrightarrow[0,+ \infty)$ is quasiconvex in the last variable and such that $f(q, z)\le c(1+|z|)$ for some $c>0$, then its recession function $f^{\infty}(q, \cdot)$ is also quasiconvex. \medskip\noindent(iv) A proof entirely similar to \cite[Proposition 3.4]{BZZ} (see also \cite[Proposition 2.6]{RZ}) ensures that for every $(q, z) \in\mathbb{R}^{m} \times\mathbb{R}^{d \times N},$ $Q(f^{\infty})(q, z)= (Qf)^{\infty}(q, z)$, hence we will adopt the notation $Qf^\infty$. In particular, if $f$ satisfies $(F_1)-(F_3)$, Proposition \ref{continuityQfbar} guarantees that $Qf^\infty$ is continuous in both variables. Furthermore, for every $q\in \mathbb R^m$, $Qf^\infty(q, \cdot)$ is Lipschitz continuous in the last variable. \medskip\noindent(v) $(Qf)^{\infty}$ satisfies the condition analogous to $(F_{4})$. We also observe, as emphasized in \cite{FM2}, that $(F_{4})$ is equivalent to saying that there exist $C >0$ and $\alpha\in(0,1)$ such that $$ \displaystyle{\left| f^{\infty}(q, z)-f(q, z)\right| \leq C (1+ |z|^{1-\alpha})} $$ for every $(q, z) \in\mathbb{R}^{m} \times\mathbb{R}^{d \times N}.$ An argument entirely similar to \cite[Proposition 2.7]{RZ} ensures that there exist $\alpha\in(0,1)$ and $C^{\prime}>0$ such that \[ \displaystyle{\left| (Qf)^{\infty}(q, z)- Qf(q, z)\right| \leq C^{\prime}(1+|z|^{1-\alpha})} \] for every $(q, z)\in\mathbb{R}^{m}\times\mathbb{R}^{d \times N}.$ \end{remark} The following proposition, whose proof can be obtained arguing exactly as in \cite[page 132]{BBBF}, establishes the properties of the density $K_3$. \begin{proposition}\label{propK3} Let $f:\mathbb R^m \times \mathbb R^{d \times N}\to [0, +\infty)$ and $g: \mathbb R^m\times \mathbb R^m \times S^{N-1}\to (0, +\infty)$. Let $K_3$ be the function defined in \eqref{K3}.
If $( F_{1}) -( F_{4}) $ and $\left( G_{1}\right) -\left( G_{3}\right) $ hold then \begin{enumerate} \item[a)] $\left\vert K_3\left( a,b,c,d,\nu\right) -K_3\left( a',b',c',d',\nu\right) \right\vert \leq C\left( \left\vert a-a'\right\vert +\left\vert b-b'\right\vert +\left\vert c-c'\right\vert +\left\vert d-d'\right\vert \right) $ for every $\left( a,b,c,d,\nu\right) ,$ $\left( a',b',c',d',\nu\right) \in\mathbb R^m \times \mathbb R^m \times \mathbb R^{d}\times \mathbb R^{d}\times S^{N-1};$ \item[b)] $\nu\longmapsto K_3\left( a,b,c,d,\nu\right) $ is upper semicontinuous for every $\left( a,b,c,d\right) \in \mathbb R^m\times \mathbb R^{m}\times \mathbb R^{d}\times \mathbb R^{d};$ \item[c)] $K_3$ is upper semicontinuous in $\mathbb R^{m}\times \mathbb R^{m}\times \mathbb R^{d}\times \mathbb R^{d}\times S^{N-1};$ \item[d)] $K_3\left( a,b,c,d,\nu\right) \leq C\left( \left\vert a-b\right\vert +\left\vert c-d\right\vert + 1\right) $ for every $\nu\in S^{N-1}.$ More precisely, from the growth conditions $(F_2)$, $(G_2)$ and the definition of $K_3$ we have $K_3(a,a,c,d,\nu) \leq C(|c-d|)$ and $K_3(a,b,c,c,\nu)\leq C(1+|a-b|)$. \end{enumerate} \end{proposition} \noindent A Borel measurable function $g :\mathbb R^m \times \mathbb R^m \times S^{N-1} \to \mathbb R $ is BV-elliptic (cf. \cite{A3}, \cite{AFP2} and \cite{BFLM}) if, for all $(a, b, \nu) \in \mathbb R^m \times \mathbb R^m \times S^{N-1}$ and for any finite subset $T$ of $\mathbb R^m$, \begin{equation}\label{Bvellipticity} \int_{J_{w}\cap Q_{\nu}} g(w^+, w^-, \nu_w) d{\cal H}^{N-1} \geq g(a, b,\nu) \end{equation} for all $w \in BV (Q_\nu; T)$ such that $w = v_0$ on $\partial Q_\nu$, where \begin{equation}\label{vab} v_0:= \left\{ \begin{array}{ll} a \hbox{ if } x \cdot \nu > 0,\\ b \hbox{ if } x\cdot\nu \leq 0. \end{array} \right.
\end{equation} We are now in a position to provide some approximation results which allow us to recover the relaxed functionals and the related energy densities in terms of suitable relaxation procedures. To this end we start by stating a result very similar to \cite[Proposition 3.5]{BBBF}, which allows one to recover $K_3$. \begin{proposition}\label{prop3.5BBBF} Let $f:\mathbb R^m \times \mathbb R^{d \times N}\to [0,+\infty)$ and $g:\mathbb R^m \times \mathbb R^m \times S^{N-1} \to (0,+\infty)$ be functions such that $(F_1)-(F_4)$ and $(G_1)-(G_3)$ hold, respectively. Let $K_3$ be the function defined in \eqref{K3} and let $(v_0, u_0)$ be given by \begin{equation}\label{v0u0} v_0(x):=\left\{ \begin{array}{ll} a \hbox{ if }x \cdot \nu >0,\\ b \hbox{ if } x \cdot \nu < 0 \end{array} \right., \;\;\; u_0(x):= \left\{ \begin{array}{ll} c \hbox{ if }x \cdot \nu >0,\\ d \hbox{ if }x \cdot \nu < 0. \end{array} \right. \end{equation} Then $$ \begin{array}{ll} K_3(a,b,c,d,\nu)&=\displaystyle{\inf_{(v_n, u_n)}\left\{ \liminf_{n\to \infty} \left(\int_{Q_\nu}Qf^\infty(v_n(x), \nabla u_n(x))dx\right.\right.}\displaystyle{+\left. \int_{Q_\nu \cap J_{v_n}}g(v_n^+(x), v_n^-(x), \nu_n(x)) d{\cal H}^{N-1}\right):} \\ \\ &\displaystyle{ (v_n,u_n) \in SBV_0(Q_\nu;\mathbb R^m)\times W^{1,1}(Q_\nu;\mathbb R^d), (v_n,u_n)\to (v_0,u_0) \hbox{ in }L^1(Q_\nu;\mathbb R^{m+d}) \Big\}}\\ \\ &\displaystyle{=: K_3^\ast(a,b,c,d,\nu).} \end{array} $$ \end{proposition} \begin{remark}\label{applicationofprop3.4} i) It is worth observing that the above result yields a sharper statement than the one given: the same type of arguments as in \cite[Proposition 3.5]{BBBF} allows us to obtain $K_3(a,b,c,d, \nu)$ as a relaxation procedure, but with test sequences in ${\cal A}_3(a,b,c,d,\nu)$ converging to $(v_0, u_0)$ in \eqref{v0u0}. \noindent ii) Notice that, by virtue of the growth conditions on $Qf^\infty$ (cf.
Remark \ref{propfinfty}), we can replace in \eqref{A3} the space $W^{1,1}(Q_\nu;\mathbb R^d)$ by $W^{1,\infty}(Q_\nu;\mathbb R^d)$. \noindent iii) Under assumptions $(G_1)-(G_3)$, the function $K_3$ in \eqref{K3} can be obtained either taking test functions $v$ in $BV(\Omega; T)$ for every $T\subset \mathbb R^m$ with ${\rm card}(T)$ finite, or in $SBV_0(\Omega;\mathbb R^m) \cap L^\infty(\Omega;\mathbb R^m)$. This is easy to verify by virtue of Lemma \ref{lemma31BCD}. Namely, one can approximate functions $v$ in $SBV_0(\Omega;\mathbb R^m) \cap L^\infty(\Omega;\mathbb R^m)$ by sequences $\{v_n\}$ in $BV(\Omega; T_n)$ with $T_n \subset \mathbb R^m$ and ${\rm card}(T_n)$ finite. Moreover, $(v_n^+, v_n^-, \nu_{v_n}) \to (v^+, v^-, \nu_v)$ pointwise, and we can apply the reverse Fatou lemma to obtain the equivalence between the two possible definitions of $K_3$. \noindent iv) Observe that the properties of $K_3$ and the assumptions on $f$ and $g$ allow us to replace in the definition of ${\cal A}_3$ (see formula \eqref{A3}) the set $SBV_0(Q;\mathbb R^m)\cap L^\infty(\Omega;\mathbb R^m)$ by $SBV_0(\Omega;\mathbb R^m)$. \end{remark} By the proposition below, one can replace $f$ in \eqref{calFG} by its quasiconvexification $Qf$. We omit the proof, which is quite standard and exploits the relaxation results in Sobolev spaces; cf. \cite[Theorem 9.8]{D}. \begin{proposition} \label{propqcx} Let $\Omega\subset\mathbb{R}^{N}$ be a bounded open set, let $f$ and $g$ be as in Theorem \ref{lsctheorem}, let $Qf$ be as in $\left( \ref{Qfbar}\right) $ and let $\mathcal{F}$ be given by \eqref{calFG}.
Then for every $A\in \mathcal{A}\left( \Omega\right) $ and for every $\left( v,u\right) \in SBV_0\left( A;\mathbb R^m \right) \times BV\left( A;\mathbb{R}^{d}\right),$ \[ \begin{array}[c]{c} \mathcal{F}\left( v,u;A\right) =\inf\left\{ \underset{n\rightarrow \infty}{\lim\inf} {\displaystyle\int_{A}} Qf\left( v_{n},\nabla u_{n}\right) dx+ \displaystyle{\int_{A \cap J_{v_n}}}g(v^+_n, v^-_n, \nu_n)d {\cal H}^{N-1} : \right.\\ \quad \quad \quad \quad \left\{ (v_n, u_{n})\right\} \subset SBV_0(A; \mathbb R^m) \times W^{1,1}\left( A;\mathbb{R}^{d}\right), \left. ~(v_n,u_{n})\rightarrow (v, u)\text{ in }L^1(A;\mathbb R^m) \times L^{1}\left( A;\mathbb{R}^{d}\right) \right\} . \end{array} \] \end{proposition} The following result is analogous to \cite[Proposition 2.4]{FM1} and is devoted to replacing the test functions in \eqref{calFG} by smooth ones. We omit the proof, and just observe that i) follows the arguments in \cite{AF} with the application of Morse's measure covering theorem (cf. \cite[Theorem 1.147]{FL}). \begin{proposition}\label{prop2.4FM1} Let $f:\mathbb R^m \times \mathbb{R}^{d\times N}\rightarrow [0, +\infty]$ be a function satisfying $(F_1)-(F_3)$ and let $Qf$ be given by \eqref{Qfbar}.
\begin{enumerate} \item[i)] Let $B$ be a ball in $\mathbb{R}^{N}.$ If \begin{equation} {\overline F_0}(v, u;B)\leq\underset{n\rightarrow \infty}{\lim\inf}\left(\int_{B}Qf\left( v_n,\nabla u_n\right) dx + \int_{J_{v_n}\cap B}g(v_n^+, v_n^-, \nu_{v_n})d {\cal H}^{N-1}\right) \label{lsc} \end{equation} holds for every $(v_n,u_{n}), (v,u) \in SBV_0(\Omega;\mathbb R^m)\times W^{1,1}\left( \Omega;\mathbb{R}^{d}\right) $ such that $(v_n,u_n)\rightarrow (v,u)$ in $L^1\left( \Omega;\mathbb R^m \right) \times L^{1}\left( \Omega;\mathbb{R}^{d}\right) $ then it holds for all open bounded sets $\Omega \subset\mathbb{R}^{N}.$ \item[ii)] For every $(v,u) \in L^1(\Omega;\mathbb R^m)\times L^1(\Omega;\mathbb R^d)$ and $\{(v_n,u_n)\} \subset SBV_0( \Omega;\mathbb R^m) \times W^{1,1}(\Omega;\mathbb{R}^d)$ such that $(v_n, u_{n})\rightarrow (v,u)$ in $L^1(\Omega;\mathbb R^m)\times L^{1}\left( \Omega;\mathbb{R}^{d}\right)$ there exists $\left\{ ({\widetilde v}_{n}, \widetilde{u}_{n})\right\} \subset C_{0}^{\infty}\left( \mathbb{R}^{N};\mathbb{R}^m\right) \times C_{0}^{\infty}\left( \mathbb{R}^{N};\mathbb R^d\right) $ such that $(\widetilde{v}_n, \widetilde{u}_{n})\rightarrow (v,u)$ strictly in $BV(\Omega;\mathbb R^m) \times BV\left( \Omega;\mathbb{R}^{d}\right) $ and \[ \displaystyle{\liminf_{n \to \infty} \int_{\Omega}Qf\left( \widetilde{v}_n,\nabla\widetilde{u}_n\right) dx =\liminf_{n\to \infty}\int_\Omega Qf(v_n,\nabla u_n) dx.} \] \end{enumerate} \end{proposition} In order to achieve the integral representation in \eqref{mainthmgen} for the jump part, we need to modify $\left\{ \left( v_{n},u_{n}\right) \right\} $ to match the boundary conditions in such a way that the new sequences belong to $\mathcal{A}_3\left( v^{+}(x_{0}),v^{-}(x_{0}),u^{+}\left( x_{0}\right) ,u^{-}\left( x_{0}\right) ,\nu\left( x_{0}\right) \right) $ given in \eqref{A3} and the energy does not increase.
This is achieved in the next lemma which, for the sake of simplicity, is stated in the unit cube $Q\subset\mathbb{R}^{N}$ and with normal $\nu=e_{N}$ to the jump set. The proof relies on the techniques of \cite[Lemma 3.5]{BDV}, \cite[Lemma 3.1]{FM2} and \cite[Lemma 4.4]{ABr1}. \begin{lemma} \label{Lemma4.1FM}Let $Q:=\left( -\frac{1}{2},\frac{1}{2}\right) ^{N}$ and \[ v_0\left( y\right) :=\left\{ \begin{array}[c]{lll} a & & \text{if }y_{N}>0,\\ b & & \text{if }y_{N}< 0, \end{array} \right.\qquad u_{0}\left( y\right) :=\left\{ \begin{array}[c]{lll} c & & \text{if }y_{N}> 0,\\ d & & \text{if }y_{N} <0. \end{array} \right. \] Let $\left\{ v_{n}\right\} \subset SBV_0\left( Q;\mathbb R^m \right) $ and $\{u_{n}\} \subset W^{1,1}\left( Q;\mathbb{R}^{d}\right) $ be such that $v_n \to v_0$ in $L^{1}\left( Q;\mathbb R^m \right) $ and $u_n\to u_0$ in $L^{1}\left( Q;\mathbb{R}^{d}\right) .$ If $\rho$ is a mollifier and $\rho_{n}:=n^{N}\rho\left( nx\right) ,$ then there exists $\left\{ \left( \zeta_{n},\xi_{n}\right) \right\} \in \mathcal{A}_3\left( a,b,c,d,e_{N}\right) $ such that \[ \zeta_{n}=v_0\text{ on }\partial Q,~\zeta_{n}\rightarrow v_0\text{ in }L^{1}\left( Q;\mathbb R^m\right), \] \[ \xi_{n}=\rho_{i\left( n\right) }\ast u_{0}\text{ on }\partial Q,~~\ \ \xi_{n}\rightarrow u_{0}\text{ in }L^{1}\left( Q;\mathbb{R}^{d}\right) \] and \begin{equation}\nonumber \begin{array}{ll} \displaystyle{\underset{n\rightarrow\infty}{\lim\sup}\left( \int_{Q}Qf\left( \zeta_{n},\nabla\xi_{n}\right) dx+\int_{J_{\zeta_n}\cap Q}g(\zeta_n^+, \zeta_n^-, \nu_{\zeta_n})d {\cal H}^{N-1}\right)} \\ \\ \displaystyle{\leq \underset{n\rightarrow\infty}{\lim\inf}\left( \int_{Q}Qf\left( v_{n},\nabla u_{n}\right) dx+\int_{J_{v_n}\cap Q}g(v_n^+, v_n^-, \nu_{v_n})d {\cal H}^{N-1} \right).} \end{array} \end{equation} \end{lemma} \begin{proof} Without loss of generality, we may assume that \[ \begin{array}{ll} \displaystyle{\underset{n\rightarrow\infty}{\lim\inf}\left(\int_{Q}Qf\left( v_{n},\nabla u_{n}\right) dx+\int_{J_{v_n}\cap
Q}g(v_n^+, v_n^-,\nu_{v_n})d {\cal H}^{N-1}\right)}\\ \\ \displaystyle{=\lim_{n\to \infty}\left(\int_{Q}Qf\left( v_{n},\nabla u_{n}\right) dx+\int_{J_{v_n}\cap Q}g(v_n^+, v_n^-,\nu_{v_n})d {\cal H}^{N-1}\right) <+\infty.} \end{array} \] The proof is divided into two steps. \noindent {\bf Step 1.} First we claim that for every $\varepsilon >0$, denoting $\|(v_0, u_0)\|_{\infty}$ by $M_0$, there exist a sequence $\{\overline u_n\} \subset W^{1,1}(Q;\mathbb R^d)\cap L^\infty(Q;\mathbb R^d)$, a sequence $\{\overline v_n\} \subset SBV_0(Q;\mathbb R^m)\cap L^\infty(Q;\mathbb R^m)$ and a constant $C>0$ such that $\|{\overline u_n}\|_{\infty}, \|\overline v_n\|_{\infty} \leq C$ for every $n$ and \begin{equation}\label{as3.7BBBF} \begin{array}{ll} \displaystyle{\underset{n\rightarrow\infty}{\lim\inf}\left(\int_{Q}Qf\left( {\overline v_{n}},\nabla \overline u_{n}\right) dx+\int_{J_{{\overline v_n}}\cap Q}g({\overline v_n}^+, {\overline v_n}^-,\nu_{\overline v_n})d {\cal H}^{N-1}\right)} \\ \\ \displaystyle{\leq \underset{n\rightarrow\infty}{\lim}\left(\int_{Q}Qf\left( v_{n},\nabla u_{n}\right) dx+\int_{J_{v_n}\cap Q}g(v_n^+, v_n^-,\nu_{v_n})d {\cal H}^{N-1}\right)+ \varepsilon.} \end{array} \end{equation} To achieve the claim we can apply a truncation argument as in \cite[Lemma 3.5]{BDV} (cf. also \cite[Lemma 3.7]{BBBF}). For $a_i \in \mathbb R$ to be determined later, depending on $\e$ and $M_0$, we define $\phi_i \in W^{1, \infty}_0(\mathbb R^{m+d};\mathbb R^{m+d})$ such that \begin{equation}\label{Lipschitztruncature} \phi_i(x)=\left\{ \begin{array}{ll} x, & |x| < a_i,\\ 0, &|x|\geq a_{i+1}, \end{array} \right. \end{equation} $\|\nabla \phi_i\|_{\infty}\leq 1$, where $x \in \mathbb R^{m+d}$ and $x\equiv(x_1,x_2), x_1 \in \mathbb R^m, x_2 \in \mathbb R^d$. For any $n \in \mathbb N$ and for any $i$ as above, let $(v^i_n,u^i_n)\in SBV_0(Q;\mathbb R^m) \times W^{1,1}(Q;\mathbb R^d)\cap L^\infty(Q;\mathbb R^{m+d})$ be given by $$ (v^i_n, u^i_n):=\phi_i(v_n,u_n).
$$ Considering the bulk part of the energy $F$ in \eqref{FG}, and exploiting Proposition \ref{propqcx} and the growth conditions on $f$ and $Qf$, we have $$ \begin{array}{ll} \displaystyle{\int_Q Qf(v^i_n,\nabla u^i_n)dx = \int_{Q \cap \{|(v_n, u_n)|\leq a_i\}}Qf(v_n,\nabla u_n)dx + \int_{Q\cap \{|(v_n,u_n)|> a_{i+1}\}}Qf(0,0)dx } \\ \\ \displaystyle{+\int_{Q \cap \{a_i< |(v_n, u_n)|\leq a_{i+1}\}} Qf(v_n^i, \nabla u_n^i)dx }\\ \\ \displaystyle{\leq \int_Q Qf(v_n, \nabla u_n)dx + C |Q \cap \{|(v_n, u_n)|> a_{i+1}\}|+ C_1 \int_{Q \cap \{a_i < |(v_n, u_n)|\leq a_{i+1}\}}(1+|\nabla u_n|)dx}. \end{array} $$ Concerning the surface term of the energy in \eqref{FG}, since $((v_n^i)^\pm, (u_n^i)^\pm)=\phi_i(v_n^\pm, u_n^\pm)$, and since without loss of generality one can assume that $|(v_n^-, u_n^-)|\leq |(v_n^+,u_n^+)|$ ${\cal H}^{N-1}$-a.e. on $J_{(v_n,u_n)}$, we have that $$ \begin{array}{ll} \displaystyle{\int_{Q\cap J_{v^i_n}}g((v_n^i)^+, (v_n^i)^-, \nu_{v_n^i})d {\cal H}^{N-1}}\\ \\ \displaystyle{\leq \int_{J_{v_n} \setminus \{ a_{i+1} \leq |(v_n^-,u_n^-)|\}\cap Q} g(\phi_i((v_n^i)^+, (u_n^i)^+), \phi_i((v_n^i)^-, (u_n^i)^-), \nu_{(v_n^i, u_n^i)})d {\cal H}^{N-1}.} \end{array} $$ Arguing as in \cite[Lemma 3.5]{BDV} (cf. also \cite[Remark 3.6]{BDV}), and exploiting the growth conditions on $g$, we can estimate $\displaystyle{\frac{1}{k}\sum_{i=1}^k F(v_n^i, u_n^i; Q)}$ for any fixed $k \in \mathbb N$ and for every $n \in \mathbb N$, with $k$ independent of $n$. Then $$ \begin{array}{ll} \displaystyle{\frac{1}{k}\sum_{i=1}^k F(v_n^i, u_n^i; Q) \leq F(v_n, u_n; Q) + \frac{1}{k} \sum_{i=2}^k\left(C |Q \cap \{|(v_n, u_n)|> a_{i+1}\}|+ C_4 \int_{J_2^i \cap Q} (1+ |v_n^-|)d {\cal H}^{N-1}\right)}\\ \\ \displaystyle{+\frac{1}{k}\left(c_2 \int_Q (1+|\nabla u_n|)dx + 3 C_4 \int_{J_{v_n}\cap Q} (1+|v_n^+-v_n^-|)d {\cal H}^{N-1}\right),} \end{array} $$ where $J^i_2:=\{|v_n^-|\leq a_i, |v_n^+|\geq a_{i+1}\}$.
By the growth conditions, there exists a constant $C$ such that $$ \displaystyle{\left(c_2 \int_Q (1+|\nabla u_n|)dx + 3 c_4 \int_{J_{v_n}\cap Q} (1+|v_n^+-v_n^-|)d {\cal H}^{N-1}\right)\leq C,} $$ for every $n \in \mathbb N$. Choose $\displaystyle{k \in \mathbb N}$ such that $\displaystyle{\frac{C}{k}\leq \frac{\varepsilon}{3}}$. Moreover $$ \displaystyle{C \geq\int_{J_2^i \cap Q} |v_n^+- v_n^-|d {\cal H}^{N-1} \geq \int_{J_2^i \cap Q} (|v_n^+ |- |v_n^-|)d {\cal H}^{N-1} \geq (a_{i+1}-a_i){\cal H}^{N-1}(J_2^i \cap Q)}, $$ whence $$ \displaystyle{\int_{J_2^i\cap Q}(1+ |v_n^-|)d{\cal H}^{N-1}\leq C \frac{1+ a_i}{a_{i+1}-a_i}.} $$ The sequence $\{a_i\}$ can be chosen recursively as follows $$ \begin{array}{ll} C_2 |Q \cap \{|(v_n, u_n)| > a_{i}\}|\leq \frac{\varepsilon}{3}, \hbox{ for every }n \in \mathbb N, a_{i+1} \geq M_0,\\ \\ c_4C\frac{1+a_i}{a_{i +1}- a_i} \leq \frac{\varepsilon}{3} \hbox{ for every }i \in \mathbb N, \end{array} $$ which is possible since $\{(v_n, u_n)\}$ is bounded in $L^1$. Thus we obtain $$ \frac{1}{k}\sum_{i=1}^k F(v_n^{i}, u_n^{i}; Q) \leq F(v_n, u_n; Q)+ \varepsilon. $$ Therefore for every $n \in \mathbb N$ there exists $i(n)\in \{1,\dots, k\}$ such that $$ F(v_n^{i(n)}, u_n^{i(n)}; Q) \leq F(v_n, u_n; Q)+ \varepsilon. $$ It suffices to define $\overline v_n:= v_n^{i(n)}$ and ${\overline u_n}:= u_n^{i(n)}$ to achieve \eqref{as3.7BBBF}, observing that $\{\overline u_n\}$ and $\{\overline v_n\}$ are bounded in $L^\infty$ by construction. \noindent {\bf Step 2.} This step is devoted to the construction of the sequences $\{\xi_n\}$ and $\{\zeta_n\}$ in the statement. Let ${\overline v_n}$ and ${\overline u_n}$ be as in Step 1. Define \[ w_{n}\left( x\right) :=\left( \rho_{n}\ast u_{0}\right) \left( x\right) =\int_{B\left( x,\frac{1}{n}\right) }\rho_{n}\left( x-y\right) u_{0}\left( y\right) dy.
\] As $\rho$ is a mollifier, we have, for each tangential direction $i=1,\dots,N-1,$ $w_{n}\left( x+e_{i}\right) =w_{n}\left( x\right) $ and so \[ w_{n}\left( y\right) =\left\{ \begin{array}[c]{lll} c & & \text{if }y_{N}>\frac{1}{n},\\ d & & \text{if }y_{N}<-\frac{1}{n}, \end{array} \right. ~\ \ \ \left\Vert \nabla w_{n}\right\Vert _{\infty}=O\left( n\right) ,~~\ w_{n}\in\mathcal{A}_1\left( c,d,e_{N}\right), \] where $$ \begin{array}{ll} {\cal A}_1(c,d,e_N):=\left\{ u\in W^{1,1}(Q_{\nu};\mathbb{R}^{d}): u(y)= c \hbox{ if } y \cdot\nu=\frac{1}{2}, u(y)= d \hbox{ if } y\cdot\nu=- \frac{1}{2},\right.\\ \\ \left.\hbox{ with }u \;1-\hbox{periodic in }\nu_{1}, \dots, \nu_{N-1} \hbox{ directions} \right\}. \end{array} $$ Let $\alpha_{n}:=\sqrt{\left\Vert {\overline u_{n}}-w_{n}\right\Vert _{L^{1}\left( Q;\mathbb{R}^{d}\right) }+\left\Vert {\overline v_{n}}-v_{0}\right\Vert _{L^{1}\left( Q\right) }},~$\newline$k_{n}:=n\left[ 1+\left\Vert {\overline u_{n}}\right\Vert _{W^{1,1}\left( Q;\mathbb{R}^{d}\right) }+\left\Vert w_{n}\right\Vert _{W^{1,1}\left( Q;\mathbb{R}^{d}\right) }+\left\Vert {\overline v_{n}}\right\Vert _{BV\left( Q\right) }+\left\Vert v_{0}\right\Vert _{BV\left( Q\right) }+ {\cal H}^{N-1}(J_{{\overline v_n}})\right] ,~s_{n}:=\frac{\alpha_{n}}{k_{n}},$ where $\left[ k\right] $ denotes the largest integer less than or equal to $k.$ Since $\alpha_{n}\rightarrow0^{+},$ we may assume that $0\leq\alpha_{n}<1,$ and set $Q_{0}:=\left( 1-\alpha_{n}\right) Q,~Q_{i}:=\left( 1-\alpha_{n}+is_{n}\right) Q,~i=1,\dots,k_{n}.$ Consider a family of cut-off functions $\varphi_{i}\in C_{0}^{\infty}\left( Q_{i}\right) ,$ $0\leq\varphi_{i}\leq1,~\varphi_{i}=1$ in $Q_{i-1},~\left\Vert \nabla\varphi_{i}\right\Vert _{\infty}=O\left( \frac{1}{s_{n}}\right) $ for $i=1,\dots,k_{n},$ and define \[ u_{n}^{\left( i\right) }\left( x\right) :=\left( 1-\varphi_{i}\left( x\right) \right) w_{n}\left( x\right) +\varphi_{i}\left( x\right) {\overline u_n}\left( x\right) .
\] Since $u_{n}^{\left( i\right) }=w_{n}$ on $\partial Q$ we have that $u_{n}^{\left( i\right) }\in\mathcal{A}_1\left( c,d,e_{N}\right). $ Clearly \[ \nabla u_{n}^{\left( i\right) }=\nabla {\overline u_n}\text{ in }Q_{i-1},\qquad\nabla u_{n}^{\left( i\right) }=\nabla w_{n}\text{ in }Q\backslash Q_{i}, \] and in $Q_{i}\backslash Q_{i-1}$ \[ \nabla u_{n}^{\left( i\right) }=\nabla w_{n}+\varphi_{i}\left( \nabla {\overline u_n}-\nabla w_{n}\right) +\left( {\overline u_n}-w_{n}\right) \otimes\nabla \varphi_{i}. \] For $0<t<1$ define \[ v_{n,i}^{t}\left( x\right) :=\left\{ \begin{array}[c]{lll} v_0\left( x\right) & & \text{if }\varphi_{i}\left( x\right) <t,\\ {\overline v_n}\left( x\right) & & \text{if }\varphi_{i}\left( x\right) \geq t. \end{array} \right. \] Clearly, $\left\Vert v_{n,i}^{t}-v_{0}\right\Vert _{L^{1}\left( Q\right) }\to 0$ as $n\rightarrow\infty,$ independently of $i$ and $t$. For every $n$ and $i,$ by the Fleming-Rishel formula \eqref{FR} it is possible to find $t_{n,i}\in\left] 0,1\right[ $ such that \begin{align*} \left\{ x\in Q:\varphi_{i}\left( x\right) <t_{n,i}\right\} & \in\mathcal{P}\left( Q\right) ,\\ \mathcal{H}^{N-1}\left( J_{v_{0}}\cap\left\{ x\in Q:\varphi_{i}\left( x\right) =t_{n,i}\right\} \right) & =\mathcal{H}^{N-1}\left( J_{\overline v_n}\cap\left\{ x\in Q:\varphi_{i}\left( x\right) =t_{n,i}\right\} \right) =0, \end{align*} where ${\cal P}(Q)$ denotes the family of sets with finite perimeter in $Q$. Let \[ v_{n,i}^{t_{n,i}}:=\left\{ \begin{array}[c]{lll} v_0\left( x\right) & & \text{in }Q\cap\left\{ x\in Q:\varphi_{i}\left( x\right) <t_{n,i}\right\} ,\\ {\overline v_n}\left( x\right) & & \text{in }Q\cap\left\{ x\in Q:\varphi_{i}\left( x\right) \geq t_{n,i}\right\} . \end{array} \right.
\] Clearly, $\lim_{n\rightarrow\infty}\left\Vert v_{n,i}^{t_{n,i}}-v_{0}\right\Vert _{L^{1}\left( Q\right) }=0$, $\left\{v^{t_{n,i}}_{n,i}\right\}\subset SBV_0(Q;\mathbb R^m)\cap L^\infty(Q;\mathbb R^m)$ and, from Step 1, this sequence is uniformly bounded in $n$, $i$ and $t$. We have \begin{align*} & \int_{Q}Qf\left( v_{n,i}^{t_{n,i}},\nabla u_{n}^{\left( i\right) }\right) dx+\int_{J_{v_{n,i}^{t_{n,i}}}\cap Q}g( (v_{n,i}^{t_{n,i}})^+,(v_{n,i}^{t_{n,i}})^-, \nu_{v_{n,i}^{t_{n,i}}}) d {\cal H}^{N-1} \\ & \leq\int_{Q}Qf\left( {\overline v_n},\nabla {\overline u_n}\right) dx+C\int_{Q_{i}\backslash Q_{i-1}}\left( 1+\left\vert {\overline u_n}\left( x\right) -w_{n}\left( x\right) \right\vert \frac{1}{s_{n}}+\left\vert \nabla {\overline u_n}\left( x\right) \right\vert +\left\vert \nabla w_{n}\left( x\right) \right\vert \right) dx\\ & +C\int_{Q\backslash Q_{i}}\left( 1+\left\vert \nabla w_{n}\left( x\right) \right\vert \right) dx+\int_{ Q\cap\left\{ \varphi_{i}>t_{n,i}\right\} _{1}} g({\overline v_n}^+, {\overline v_n}^-, \nu_{\overline v_n})d {\cal H}^{N-1} \\ & +\left\vert Dv_{n,i}^{t_{n,i}}\right\vert \left( Q\cap\left\{ \varphi_{i}>t_{n,i}\right\} _{0}\right) + {\cal H}^{N-1} \left( Q\cap\left\{ \varphi_{i}>t_{n,i}\right\} _{0}\right) +\left\vert Dv_{n,i}^{t_{n,i}}\right\vert \left( \partial^{\ast}\left\{ \varphi _{i}<t_{n,i}\right\} \right) \\ & + {\cal H}^{N-1}\left( \partial^{\ast}\left\{ \varphi _{i}<t_{n,i}\right\} \right)\\ & \leq\int_{Q}Qf\left( {\overline v_n},\nabla {\overline u_n}\right) dx+I_{1}+ \int_{Q \cap J_{\overline v_n}} g({\overline v_n}^+, {\overline v_n}^-, \nu_{\overline v_n})d {\cal H}^{N-1} +C \left\vert Dv_0\right\vert \left( \left( Q\backslash Q_{i}\right) \cap\left\{ \varphi_{i}>t_{n,i}\right\} _{0}\right) \\ & +\frac{C}{s_{n}}\int_{Q_{i}\backslash Q_{i-1}} |{\overline v}_n-v_0|dx+ \frac{1}{s_n}O(s_n), \end{align*} where \[ \left\{ \varphi_{i}>t_{n,i}\right\} _{1}:=\left\{ x\in Q:\lim_{\rho\rightarrow 0^{+}}\frac{\left\vert \left\{ y\in Q:\varphi_{i}\left( y\right) >t_{n,i}\right\} \cap B_{\rho}\left( x\right) \right\vert }{\left\vert B_{\rho}\left( x\right) \right\vert }=1\right\}, \] \[ \left\{ \varphi_{i}>t_{n,i}\right\} _{0}:=\left\{ x\in Q:\lim_{\rho\rightarrow 0^{+}}\frac{\left\vert \left\{ y\in Q:\varphi_{i}\left( y\right) >t_{n,i}\right\} \cap B_{\rho}\left( x\right) \right\vert }{\left\vert B_{\rho}\left( x\right) \right\vert }=0\right\}, \] $I_1:=\displaystyle{C\int_{Q_{i}\backslash Q_{i-1}}\left( 1+\left\vert {\overline u_n}\left( x\right) -w_{n}\left( x\right) \right\vert \frac{1}{s_{n}}+\left\vert \nabla {\overline u_n}\left( x\right) \right\vert +\left\vert \nabla w_{n}\left( x\right) \right\vert \right) dx +C\int_{Q\backslash Q_{i}}\left( 1+\left\vert \nabla w_{n}\left( x\right) \right\vert \right) dx}$, and we have used \eqref{FR} in the last two terms of the above estimate. Averaging over all layers $Q_{i}\backslash Q_{i-1}$ one obtains \begin{align*} & \frac{1}{k_{n}}\sum_{i=1}^{k_{n}}\left( \int_{Q}Qf\left( v_{n,i}^{t_{n,i}},\nabla u_{n}^{\left( i\right) }\right) dx+ \int_{Q \cap J_{v_{n,i}^{t_{n,i}}}} g((v_{n,i}^{t_{n,i}})^+, (v_{n,i}^{t_{n,i}})^-, \nu_{v_{n,i}^{t_{n,i}}})d {\cal H}^{N-1}\right) \\ & \leq\int_{Q}Qf\left( {\overline v_n},\nabla {\overline u_n}\right) dx + \int_{Q \cap J_{\overline v_n}}g({\overline v_n}^+, {\overline v_n}^-, \nu_{\overline v_n})d {\cal H}^{N-1} +\frac{C}{k_{n}}\int_{Q}\left( 1+\left\vert \nabla {\overline u_n}\right\vert +\left\vert \nabla {\overline v_n}\right\vert \right) dx\\ & +\frac{C}{k_{n}}\int_{Q}\left\vert {\overline u_n}-w_n\right\vert \frac{1}{s_{n}}dx+C\int_{Q\backslash Q_{0}}\left( 1+\left\vert \nabla w_n\right\vert \right) dx+C\left\vert Dv_{0}\right\vert \left( Q\backslash Q_{0}\right)+\frac{C}{s_{n}k_{n}}\int_{ Q\backslash Q_{0}} |{\overline v_n}-v_0|dx + \frac{C}{k_n} \\ & \leq\int_{Q}Qf\left( {\overline v_n},\nabla {\overline u_n}\right) dx+ \int_{Q \cap J_{\overline v_n}} g({\overline v_n}^+, {\overline v_n}^-, \nu_{\overline v_n})d{\cal H}^{N-1} +\frac{C}{k_{n}}\int_{Q}\left( 1+\left\vert \nabla {\overline u_n}\right\vert +\left\vert
\nabla {\overline v_n}\right\vert \right) dx\\ & +\frac{C}{\alpha_{n}}\left\Vert {\overline u_n}-w_n\right\Vert _{L^{1}}+C\int_{Q\backslash Q_{0}}\left( 1+\left\vert \nabla w_n\right\vert \right) dx+C\left\vert Dv_{0}\right\vert \left( Q\backslash Q_{0}\right) +\frac{C}{\alpha_{n}}\left\Vert {\overline v_n}-v_{0}\right\Vert _{L^{1}\left( Q\right)} + \frac{C}{k_n} . \end{align*} Since $\left\vert Q\backslash Q_{0}\right\vert =O\left( \alpha_{n}\right) $ and $\nabla w_n\left( x\right) =0$ if $\left\vert x_{N}\right\vert >\frac{1}{n},$ we estimate \[ \int_{Q\backslash Q_{0}}\left( 1+\left\vert \nabla w_n\right\vert \right) dx\leq O\left( \alpha_{n}\right) +\mathcal{H}^{N-1}\left( \left( Q\backslash Q_{0}\right) \cap\left\{ x_{N}=0\right\} \right) \int_{-\frac{1}{n}}^{\frac{1}{n}}O\left( n\right) dx_{N}=O\left( \alpha_{n}\right) . \] The argument exploited above to estimate $\int_{Q \setminus Q_0}dx$ also applies to $|Dv_0|(Q\setminus Q_0)$, since $v_0$ is a jump function across $\{x_N=0\}$; namely, $|Dv_0|(Q\setminus Q_0)= C \mathcal{H}^{N-1}\left( \left( Q\backslash Q_{0}\right) \cap\left\{ x_{N}=0\right\} \right)$, recalling also that $Q_{0}=\left( 1-\alpha_{n}\right) Q$.
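For the reader's convenience, we make this last estimate explicit: since $Q_{0}=\left( 1-\alpha_{n}\right) Q$ and $v_{0}$ jumps only across the hyperplane $\left\{ x_{N}=0\right\} ,$ one may check that \[ \left\vert Dv_{0}\right\vert \left( Q\backslash Q_{0}\right) =\left\vert v_{0}^{+}-v_{0}^{-}\right\vert \,\mathcal{H}^{N-1}\left( \left( Q\backslash Q_{0}\right) \cap\left\{ x_{N}=0\right\} \right) =\left\vert v_{0}^{+}-v_{0}^{-}\right\vert \left( 1-\left( 1-\alpha_{n}\right) ^{N-1}\right) =O\left( \alpha_{n}\right) . \]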
Setting $\varepsilon_{n}:=O\left( \frac{1}{n}\right) +C\sqrt{\left\Vert {\overline u_n}-w_n\right\Vert _{L^{1}\left( Q;\mathbb{R}^{d}\right) }+\left\Vert {\overline v_n}-v_{0}\right\Vert _{L^{1}\left( Q\right) }}+O\left( \alpha _{n}\right)$, we have that $\varepsilon_{n}\rightarrow 0^{+}$ and \begin{align*} & \frac{1}{k_{n}}\sum_{i=1}^{k_{n}}\left( \int_Q Qf\left( v_{n,i}^{t_{n,i}},\nabla u_n^{(i)}\right) dx+ \int_{Q \cap J_{v_{n,i}^{t_{n,i}}}} g((v_{n,i}^{t_{n,i}})^+, (v_{n,i}^{t_{n,i}})^-, \nu_{v_{n,i}^{t_{n,i}}})d {\cal H}^{N-1}\right) \\ & \leq\int_{Q}Qf \left( {\overline v_n},\nabla {\overline u_n}\right) dx+\int_{Q \cap J_{\overline v_n}}g({\overline v_n}^+, {\overline v_n}^-, \nu_{\overline v_n})d {\cal H}^{N-1}+\varepsilon_{n}, \end{align*} and so there exists an index $i\left( n\right) \in\left\{ 1,\dots ,k_{n}\right\} $ for which \begin{equation}\nonumber \begin{array}{ll} \displaystyle{\int_{Q}Qf\left( v_{n,i\left( n\right) }^{t_{n,i\left( n\right) }},\nabla u_n^{\left( i\left( n\right) \right) }\right) dx+\int_{Q \cap J_{v_{n,i(n)}^{t_{n,i(n)}}}} g((v_{n,i(n)}^{t_{n,i(n)}})^+, (v_{n,i(n)}^{t_{n,i(n)}})^-, \nu_{v_{n,i(n)}^{t_{n,i(n)}}})d {\cal H}^{N-1}} \\ \\ \displaystyle{\leq \int_{Q}Qf \left( {\overline v_n},\nabla {\overline u_n}\right) dx+\int_{Q \cap J_{\overline v_n}}g({\overline v_n}^+,
{\overline v_n}^-, \nu_{\overline v_n})d {\cal H}^{N-1}+\varepsilon_{n}.} \end{array} \end{equation} It suffices to define $\xi_{n}:=u_{n}^{\left( i\left( n\right) \right)} , \zeta_{n}:=v_{n,i\left(n\right)}^{t_{n,i\left( n\right) }}$ to get \begin{equation}\nonumber \begin{array}{ll} \displaystyle{\underset{n\rightarrow\infty}{\lim\sup}\left( \int_{Q}Qf\left( \zeta_{n},\nabla\xi_{n}\right) dx+\int_{J_{\zeta_n}\cap Q}g(\zeta_n^+, \zeta_n^-, \nu_{\zeta_n})d {\cal H}^{N-1}\right)} \\ \\ \displaystyle{\leq \underset{n\rightarrow\infty}{\lim\inf}\left( \int_{Q}Qf\left( {\overline v_n},\nabla {\overline u_n}\right) dx+\int_{J_{\overline v_n}\cap Q}g({\overline v_n}^+, {\overline v_n}^-, \nu_{\overline v_n})d {\cal H}^{N-1} \right),} \end{array} \end{equation} which concludes the proof. \end{proof} \begin{remark}\label{asinGlobalMethodBFLM} \begin{itemize} \item[i)] Observe that, arguing as in the first step of Lemma \ref{Lemma4.1FM}, one has that for every $u \in BV(\Omega;\mathbb R^d)$ and $v \in SBV_0(\Omega;\mathbb R^m)\cap L^\infty(\Omega;\mathbb R^m)$ $$ \begin{array}{ll} {\cal F}(v, u;A) =&\inf\left\{ \displaystyle{\liminf_{n \to\infty} \left( \int_{A} f(v_{n}, \nabla u_{n})dx +\int_{J_{v_{n}}\cap A} g({v_{n}}^{+}, {v_{n}}^{-}, \nu_{v_{n}}) d\mathcal{H}^{N-1}\right) :}\right. \\ \\ &\{ v_{n}\} \subset SBV_{0}\left(A;\mathbb R^m \right) \cap L^\infty(A;\mathbb R^m), \left\{ u_{n}\right\} \subset W^{1,1}\left( A;\mathbb{R}^{d}\right) ,\\ \\ & (v_n,u_n) \to (v, u)\text{ in }L^1\left( A;\mathbb{R}^{m+d}\right), \, \sup_n \|v_n\|_{\infty}< +\infty \Big\}. \end{array} $$ \item[ii)] Similarly, if also $u\in BV(\Omega;\mathbb R^d)\cap L^\infty(\Omega;\mathbb R^d)$, then $$ \begin{array}{ll} {\cal F}(v,u;A) =&\inf\left\{ \displaystyle{\liminf_{n \to\infty} \left( \int_{A} f(v_{n}, \nabla u_{n})dx +\int_{J_{v_{n}}\cap A} g({v_{n}}^{+}, {v_{n}}^{-}, \nu_{v_{n}}) d\mathcal{H}^{N-1}\right) :}\right.
\\ \\ & \{ v_{n}\} \subset SBV_{0}\left(A;\mathbb R^m \right) \cap L^\infty(A;\mathbb R^m), \left\{ u_{n}\right\} \subset W^{1,1}\left( A;\mathbb{R}^{d}\right) \cap L^\infty(A;\mathbb R^d), \\ \\ &(v_n,u_n) \to (v, u)\text{ in }L^1\left( A;\mathbb{R}^{m+d}\right), \sup_n \|(v_n, u_n)\|_{\infty}< +\infty \Big\}. \end{array} $$ \item[iii)] Notice that an argument entirely similar to \cite[Lemmas 13 and 14]{BFLM} allows us to say that for every $(v,u)\in SBV_0(\Omega;\mathbb R^m)\times BV(\Omega;\mathbb R^d)$ one has $$ \displaystyle{{\cal F}(v,u; A)=\lim_{j \to\infty}{\cal F}(\phi_j(v,u);A)}, $$ where $\phi_j$ are the functions defined in \eqref{Lipschitztruncature}. \end{itemize} \end{remark} We conclude this section with a result that will be exploited in the sequel. \begin{lemma} \label{Lemma0} Let $X$ be a function space. For any $F:\mathbb R\times X \rightarrow\left[ 0,\infty\right]$, \[ \underset{\varepsilon\rightarrow 0^+}{\lim\sup}\inf_{u\in X}F\left( \varepsilon,u\right) \leq\inf_{u\in X }\underset{\varepsilon\rightarrow 0^+ }{\lim\sup}F\left( \varepsilon,u\right) . \] \end{lemma} \begin{proof} For any $\widetilde{u}\in X$, \[ \inf_{u\in X }F\left( \varepsilon,u\right) \leq F\left( \varepsilon,\widetilde{u}\right) . \] Thus \[ \underset{\varepsilon\rightarrow 0^+}{\lim\sup}\inf_{u\in X }F\left( \varepsilon,u\right) \leq\underset{\varepsilon \rightarrow 0^+ }{\lim\sup}F\left( \varepsilon,\widetilde{u}\right) \] for every $\widetilde{u}\in X .$ Taking the infimum over $\widetilde{u}$ in the previous inequality one obtains \[ \inf_{\widetilde{u}\in X}\underset{\varepsilon \rightarrow0^+}{\lim\sup}\inf_{u\in X}F\left( \varepsilon,u\right) \leq\inf_{\widetilde{u}\in X }\underset{\varepsilon\rightarrow 0^+ }{\lim\sup}F\left( \varepsilon ,\widetilde{u}\right) . \] Hence, since the left-hand side does not depend on $\widetilde{u}$, \[ \underset{\varepsilon\rightarrow 0^+ }{\lim\sup}\inf_{u\in X }F\left( \varepsilon,u\right) \leq\inf_{u\in X}\underset{\varepsilon\rightarrow 0^+ }{\lim\sup}F\left( \varepsilon,u\right) .
\] \end{proof} \section{Lower bound}\label{lb} This section is devoted to the proof of the lower bound inequality for Theorem \ref{mainthmgen}. Recall that ${\cal F}$ and $\overline{F_0}$ are the functionals introduced in \eqref{calFG} and \eqref{representationFG}. \begin{theorem} \label{lsctheorem} Let $\Omega\subset\mathbb{R}^{N}$ be a bounded open set, let $f:\mathbb R^m \times \mathbb R^d\rightarrow\lbrack0,+\infty)$ satisfy $(F_1)-(F_4)$ and let $g:\mathbb R^m \times \mathbb R^m \times S^{N-1}\to [0, +\infty)$ satisfy $(G_1)-(G_3)$. Then for every $\left( v,u\right) \in SBV_0 ( \Omega;\mathbb R^m) \times BV\left( \Omega;\mathbb{R}^{d}\right)$ and for every sequence $\left\{ (v_{n}, u_n)\right\} \subset SBV_0( \Omega; \mathbb R^m) \times W^{1,1}\left(\Omega;\mathbb{R}^{d}\right) $ such that $(v_n, u_n) \to (v,u)$ in $L^1(\Omega;\mathbb R^m)\times L^1(\Omega; \mathbb R^d)$, \begin{equation} \overline{F_{0}}\left( v,u;\Omega\right) \leq\underset{n\rightarrow\infty}{\lim\inf }F\left( v_{n},u_{n};\Omega\right) , \label{lsc0} \end{equation} where ${\overline F_0}$ is given by \eqref{representationFG}. \end{theorem} \begin{proof} Let $(v,u) \in SBV_0(\Omega;\mathbb R^m)\times BV(\Omega;\mathbb R^d)$.
Without loss of generality, we may assume that for every sequence $\{(v_n, u_n)\} \subset SBV_0(\Omega;\mathbb R^m)\times W^{1,1}(\Omega;\mathbb R^d)$ converging to $(v,u)$ in $L^1(\Omega; \mathbb R^m)\times L^1(\Omega;\mathbb R^d)$, \begin{equation}\nonumber \begin{array}{ll} \displaystyle{\underset{n\rightarrow\infty}{\lim\inf}\left(\int_{\Omega} f\left( v_n,\nabla u_n\right)dx +\int_{J_{v_n}\cap \Omega}g(v^+_n, v^-_n, \nu_{v_n})d {\cal H}^{N-1}\right)}\\ \\ \displaystyle{=\lim_{n\rightarrow\infty}\left(\int_{\Omega} f\left( v_n,\nabla u_{n}\right)dx + \int_{J_{v_n}\cap \Omega}g(v^+_n, v^-_n, \nu_{v_n})d {\cal H}^{N-1} \right) <+\infty.} \end{array} \end{equation} For every Borel set $B \subset \Omega$ define $$ \displaystyle{\mu_n(B):=\int_B f\left( v_n,\nabla u_{n}\right) dx + \int_{J_{v_n}\cap B}g(v^+_n, v^-_n, \nu_{v_n})d {\cal H}^{N-1} .} $$ Since $\{\mu_n\}$ is a sequence of nonnegative Radon measures, uniformly bounded in the space of measures, we can extract a subsequence, still denoted by $\{\mu_n\}$, weakly-$\ast$ converging in the sense of measures to some Radon measure $\mu.$ Using the Radon--Nikod\'ym theorem we can decompose $\mu$ as a sum of four mutually singular nonnegative measures, namely \begin{equation}\label{mudecomposition} \mu=\mu_{a}\mathcal{L}^{N}+\mu_{c}\left\vert D^{c}u\right\vert +\mu_{j}\mathcal{H}^{N-1}\lfloor J_{(v,u)}+\mu_{s}, \end{equation} where we consider $(v,u)$ as a single field in $BV(\Omega; \mathbb R^{m + d})$ and exploit the fact that $D^c(v,u)= (\underline{0}, D^c u)$ (cf. Remark \ref{vmeas}).
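Note that the identification $D^{c}(v,u)=(\underline{0},D^{c}u)$ is immediate: since $v\in SBV_0(\Omega;\mathbb R^m)\subset SBV(\Omega;\mathbb R^m)$, the Cantor part of $Dv$ vanishes by the very definition of $SBV$, hence \[ \left\vert D^{c}(v,u)\right\vert =\left\vert \left( D^{c}v,D^{c}u\right) \right\vert =\left\vert D^{c}u\right\vert . \]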
By the Besicovitch derivation theorem, \begin{align} \mu_{a}\left( x_{0}\right) & =\lim_{\varepsilon\rightarrow0^{+}}\frac{\mu\left( B\left( x_{0},\varepsilon\right) \right) }{\mathcal{L}^{N}\left( B\left( x_{0},\varepsilon\right) \right) }<+\infty ,~\text{for~\ }\mathcal{L}^{N}-\text{a.e.}~x_{0}\in\Omega,\nonumber\\ \mu_{j}\left( x_{0}\right) & =\lim_{\varepsilon\rightarrow0^{+}}\frac{\mu\left( Q_{\nu}\left( x_{0},\varepsilon\right) \right) }{\mathcal{H}^{N-1}\left( Q_{\nu}\left( x_{0},\varepsilon\right) \cap J_{(v, u)}\right) }<+\infty,~\text{for }\mathcal{H}^{N-1}-\text{a.e. }x_{0}\in J_{(v, u)}\cap \Omega,\label{BDT}\\ \mu_{c}\left( x_{0}\right) & =\lim_{\varepsilon\rightarrow0^{+}}\frac{\mu\left( Q\left( x_{0},\varepsilon\right) \right) }{\left\vert Du\right\vert \left( Q\left( x_{0},\varepsilon\right) \right)}<+\infty,~\text{for }\left\vert D^{c}u\right\vert -\text{a.e. }x_{0}\in \Omega.\nonumber \end{align} We claim that \begin{equation} \mu_{a}\left( x_{0}\right) \geq Qf\left( v\left( x_{0}\right) ,\nabla u\left( x_{0}\right) \right) ,~\text{for \ }\mathcal{L}^{N}-\text{a.e.}~x_{0}\in\Omega,\label{lboundbulk} \end{equation} \begin{equation} \mu_{j}\left( x_{0}\right) \geq K_3\left( v^+(x_0),v^-(x_0),u^{+}\left( x_{0}\right) ,u^{-}\left( x_{0}\right) ,\nu_{(v,u)}\left( x_{0}\right) \right) ,~\text{for }\mathcal{H}^{N-1}-\text{a.e.}~x_{0}\in J_{(v,u)}\cap\Omega,\label{lboundjump} \end{equation} \begin{equation} \mu_{c}\left( x_{0}\right) \geq\left( Qf\right) ^{\infty}\left( v\left( x_{0}\right) ,\frac{dD^{c}u}{d\left\vert D^{c}u\right\vert }\left( x_{0}\right) \right) \text{ for }\left\vert D^{c}u\right\vert -\text{a.e. }x_{0}\in\Omega, \label{lboundcantor} \end{equation} \noindent where $Qf$ is the density introduced in \eqref{Qfbar}, $\left( Qf\right)^{\infty}$ is its recession function as in \eqref{finfty} and $K_3$ is given by \eqref{K3}. If \eqref{lboundbulk}--\eqref{lboundcantor} hold then \eqref{lsc0} follows immediately.
Indeed, since $\mu_{n}\overset{\ast}{\rightharpoonup}\mu$ in the sense of measures, \begin{align*} & \underset{n\rightarrow\infty}{\lim\inf}\left(\int_{\Omega} f\left( v_n,\nabla u_{n}\right)dx + \int_{J_{v_n}\cap \Omega}g(v^+_n, v^-_n, \nu_{v_n})d {\cal H}^{N-1} \right) \\ & \geq\underset{n\rightarrow\infty}{\lim\inf}\mu_{n}\left( \Omega\right) \geq\mu\left( \Omega\right) \geq\int_{\Omega}\mu_{a}~dx+\int_{J_{(v,u)}}\mu _{j}~d\mathcal{H}^{N-1}+\int_{\Omega}\mu_{c}d|D^{c}u|\\ & \geq\int_{\Omega}Qf\left( v\left( x\right) ,\nabla u\left( x\right) \right) dx+\int_{J_{(v,u)}\cap\Omega}K_3\left( v^+(x),v^-(x),u^{+}(x) ,u^{-}( x) ,\nu_{(v,u)}\left( x\right) \right) d\mathcal{H}^{N-1}\\ & +\int_{\Omega}\left( Qf\right) ^{\infty}\left( v\left( x\right) ,\frac{dD^{c}u}{d\left\vert D^{c}u\right\vert }\left( x\right) \right) d\left\vert D^{c}u\right\vert, \end{align*} where we have used the fact that $\mu_s$ is nonnegative. We prove \eqref{lboundbulk}--\eqref{lboundcantor} using the blow-up method introduced in \cite{FM1}. \noindent\textbf{Step 1.} Let $x_{0}\in\Omega$ be a Lebesgue point for $\nabla u$ and $v$ such that $x_{0}\notin J_{(v,u)}$ and such that \eqref{approximate differentiability}, applied to $u$, and $\left( \ref{BDT}\right) _{1}$ hold. We observe that $$ \begin{array}{ll} \displaystyle{ \underset{n\rightarrow\infty}{\lim\inf}\left(\int_{\Omega} f\left( v_n,\nabla u_{n}\right)dx + \int_{J_{v_n}\cap \Omega}g(v^+_n, v^-_n, \nu_{v_n})d {\cal H}^{N-1} \right) }\\ \displaystyle{\geq \underset{n\rightarrow\infty}{\lim\inf}\int_{\Omega} f\left( v_n,\nabla u_{n}\right)dx \geq \underset{n\rightarrow\infty}{\lim\inf}\int_{\Omega} Qf\left( v_n,\nabla u_{n}\right)dx.} \end{array} $$ Note that, by Proposition \ref{continuityQfbar}, $Qf$ satisfies $(F_1)-(F_3)$.
By Proposition \ref{prop2.4FM1} we may assume that $\left\{ (v_n, u_{n})\right\} \subset C_{0}^{\infty}\left( \mathbb{R}^{N};\mathbb R^m \right) \times C_{0}^{\infty}\left( \mathbb{R}^{N};\mathbb{R}^{d}\right) $ and, applying \cite[formula (2.10) in Theorem 2.19]{FM2} to the functional $G: (v,u)\in W^{1,1}(\Omega;\mathbb R^{m+d}) \mapsto \int_{\Omega}Q f(v,\nabla u)dx$, we obtain \eqref{lboundbulk}. \noindent{\bf Step 2. } Now we prove \eqref{lboundjump}. Recall that $J_{( v,u) }=J_v\cup J_{u}$ and $\nu_{(v,u)}= \nu_v$ for every $(v,u) \in SBV_0(\Omega;\mathbb R^m) \times W^{1,1}(\Omega;\mathbb R^d)$. By Lemma \ref{lemma2.5BBBF}, Proposition \ref{thm2.3BBBF} ii) and Theorem \ref{thm2.6BBBF} we may fix $x_{0}\in J_{( v,u)}\cap\Omega$ such that \begin{equation}\label{4.13} \begin{array}{ll} \displaystyle{\lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon^{N-1}}\int_{J_{(v,u)} \cap Q_{\nu}\left( x_{0},\varepsilon\right) }\left( \left\vert v^{+}\left( x\right) -v^{-}\left( x\right) \right\vert + \left\vert u^{+}\left( x\right) -u^{-}\left( x\right) \right\vert \right) d\mathcal{H}^{N-1}}\\ \\ \displaystyle{ =\left\vert v^{+}\left( x_{0}\right) -v^{-}\left( x_{0}\right) \right\vert +\left\vert u^{+}\left( x_{0}\right) -u^{-}\left( x_{0}\right) \right\vert,} \end{array} \end{equation} \noindent \begin{equation}\label{4.14} \begin{array}{ll} \displaystyle{\lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon^{N}}\int_{\left\{ x\in Q_{\nu}\left( x_{0},\varepsilon\right) :\left( x-x_{0}\right) \cdot\nu\left( x_{0}\right) >0\right\} }\left\vert v\left( x\right) -v^{+}\left( x_0\right) \right\vert ^{\frac{N}{N-1}}dx }\\ \\ \displaystyle{+\lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon^{N}}\int_{\left\{ x\in Q_{\nu}\left( x_{0},\varepsilon\right) :\left( x-x_{0}\right) \cdot\nu\left( x_{0}\right) >0\right\} }\left\vert u\left( x\right) -u^{+}\left( x_0\right) \right\vert ^{\frac{N}{N-1}}dx= 0}, \end{array} \end{equation} \noindent
\begin{equation}\label{4.15} \begin{array}{ll} \displaystyle{\lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon^N}\int_{\left\{ x\in Q_{\nu}\left( x_{0},\varepsilon\right) :\left( x-x_{0}\right) \cdot\nu\left( x_{0}\right) <0\right\} }\left\vert v\left( x\right) -v^{-}\left( x_0\right) \right\vert ^{\frac{N}{N-1}}dx}\\ \\ \displaystyle{+\lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon^N}\int_{\left\{ x\in Q_{\nu}\left( x_{0},\varepsilon\right) :\left( x-x_{0}\right) \cdot\nu\left( x_{0}\right) <0\right\} }\left\vert u\left( x\right) -u^{-}\left( x_0\right) \right\vert ^{\frac{N}{N-1}}dx=0,} \end{array} \end{equation} \noindent \begin{equation} \displaystyle{\mu_{j}\left( x_{0}\right) =\lim_{\varepsilon\rightarrow0^{+}}\frac {\mu( x_0+\varepsilon Q_{\nu( x_0)})}{\mathcal{H}^{N-1}\lfloor J_{( v,u) }( x_0+\varepsilon Q_{\nu( x_0)}) }\text{ \ exists and is finite.}} \label{4.16} \end{equation} For simplicity of notation we write $Q:=Q_{\nu\left( x_{0}\right) }.$ Then by \eqref{4.16}, \begin{equation}\label{4.17} \mu_{j}( x_0) =\lim_{\varepsilon \rightarrow 0^+}\frac{1}{\varepsilon^{N-1}}\int_{x_0+\varepsilon Q}d\mu\left( x\right) .
\end{equation} Without loss of generality, we may choose $\varepsilon>0$ such that $\mu\left( \partial\left( x_{0}+\varepsilon Q\right) \right) =0.$ Since $Qf \leq f$, we have \begin{align*} \mu_{j}\left( x_{0}\right) & \geq\lim_{\varepsilon\rightarrow0^{+}}\lim_{n\rightarrow\infty}\frac{1}{\varepsilon^{N-1}}\left( \int_{x_{0}+\varepsilon Q}Qf\left( v_n\left( x\right) ,\nabla u_{n}\left( x\right) \right) dx+\int_{J_{v_n}\cap\left( x_{0}+\varepsilon Q\right)}g(v_n^+, v_n^-, \nu_{v_n}) d {\cal H}^{N-1} \right) \\ & =\lim_{\varepsilon\rightarrow0^{+}}\lim_{n\rightarrow\infty}\varepsilon \int_{Q}Qf\left( v_n\left( x_{0}+\varepsilon y\right) ,\nabla u_{n}\left( x_{0}+\varepsilon y\right) \right) dy\\ & +\int_{Q\cap \frac{J_{\left( v_n, u_n\right)}-x_0}{\varepsilon}} g\left( v_n^+ ( x_0+\varepsilon y), v_n^-(x_0+ \varepsilon y) , \nu_{(v_n, u_n)} (x_0+\varepsilon y)\right) d\mathcal{H}^{N-1}\left( y\right) . \end{align*} Define \begin{equation}\label{vne} \begin{array}{cc} v_{n,\varepsilon}\left( y\right) :=v_{n}\left( x_{0}+\varepsilon y\right) , \; u_{n,\varepsilon}\left( y\right) :=u_{n}\left( x_{0}+\varepsilon y\right),\; \nu_{n,\varepsilon}\left( y\right) :=\nu_{\left( v_n,u_n\right) }\left( x_{0}+\varepsilon y\right) , \end{array} \end{equation} and \begin{equation}\label{u0v0} \begin{array}{cc} v_{0}\left( y\right) :=\left\{ \begin{array}[c]{ccc} v^{+}(x_0) & & \text{if }y\cdot\nu\left( x_{0}\right) >0,\\ v^{-}(x_0) & & \text{if }y\cdot\nu\left( x_{0}\right) <0, \;\; \end{array} \right. u_{0}\left( y\right) :=\left\{ \begin{array}[c]{ccc} u^{+}\left( x_{0}\right) & & \text{if }y\cdot\nu\left( x_{0}\right) >0,\\ u^{-}\left( x_{0}\right) & & \text{if }y\cdot\nu\left( x_{0}\right) <0. \end{array} \right.
\end{array} \end{equation} Since $(v_n,u_n)\rightarrow (v,u)$ in $L^{1}\left( \Omega;\mathbb{R}^{m+d}\right)$, by \eqref{4.14} and \eqref{4.15} one obtains \begin{equation}\label{blabla} \begin{array}{ll} & \displaystyle{\lim_{\varepsilon\rightarrow0^{+}}\lim_{n\rightarrow\infty}\int_{Q}\left\vert v_{n,\varepsilon}\left( y\right) -v_{0}\left( y\right) \right\vert dy=\lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon^{N}}\left( \int_{\left\{ x\in x_{0}+\varepsilon Q:\left( x-x_{0}\right) \cdot\nu\left( x_{0}\right) >0\right\} }\left\vert v\left( x\right) -v^{+}\left( x_{0}\right) \right\vert dx\right.} \\ & \displaystyle{ \left. +\int_{\left\{ x\in x_{0}+\varepsilon Q:\left( x-x_{0}\right) \cdot\nu\left( x_{0}\right) <0\right\} }\left\vert v\left( x\right) -v^{-}\left( x_{0}\right) \right\vert dx\right) =0} \end{array} \end{equation} and \begin{equation}\label{blabla2} \begin{array}{ll} & \displaystyle{\lim_{\varepsilon\rightarrow0^{+}}\lim_{n\rightarrow\infty}\int_{Q}\left\vert u_{n,\varepsilon}\left( y\right) -u_0\left( y\right) \right\vert dy=\lim_{\varepsilon\rightarrow0^{+}}\frac {1}{\varepsilon^{N}}\left( \int_{\left\{ x\in x_{0}+\varepsilon Q:\left( x-x_{0}\right) \cdot\nu\left( x_{0}\right) >0\right\} }\left\vert u\left( x\right) -u^{+}\left( x_{0}\right) \right\vert dx\right.} \\ & \displaystyle{+\left.
\int_{\left\{ x\in x_{0}+\varepsilon Q:\left( x-x_{0}\right) \cdot\nu\left( x_{0}\right) <0\right\} }\left\vert u\left( x\right) -u^{-}\left( x_{0}\right) \right\vert dx\right) =0.} \end{array} \end{equation} Thus \begin{align*} \mu_{j}\left( x_{0}\right) & \geq\lim_{\varepsilon\rightarrow0^{+}}\lim_{n\rightarrow\infty}\left(\int_{Q}\left( Qf\right)^{\infty}\left( v_{n,\varepsilon }\left( y\right) ,\nabla u_{n,\varepsilon}\left( y\right) \right) dy+\int_{Q\cap J_{\left( v_{n,\e} ,u_{n,\e}\right)} } g(v_{n,\e}^+, v_{n,\e}^-, \nu_{v_{n,\e}}) d {\cal H}^{N-1}(y)\right.\\ & \left.+\int_{Q}\left(\varepsilon Qf\left( v_{n,\varepsilon}\left( y\right) ,\frac{1}{\varepsilon}\nabla u_{n,\varepsilon}\left( y\right) \right) -\left( Qf\right)^{\infty}\left( v_{n,\varepsilon},\nabla u_{n,\varepsilon}\right) \right) dy \right). \end{align*} Exploiting $(v)$ in Remark \ref{propfinfty} we can argue as in the estimates \cite[(3.3)-(3.5)]{FM2}, thus obtaining \[ \mu_{j}\left( x_{0}\right) \geq\underset{\varepsilon\rightarrow0^{+}}{\lim\inf}~\underset{n\rightarrow\infty}{\lim\inf}\left( \int_{Q}\left( Qf\right)^{\infty}\left( v_{n,\varepsilon}\left( y\right) ,\nabla u_{n,\varepsilon }\left( y\right) \right) dy+\int_{Q\cap J_{\left( v_{n,\varepsilon},u_{n,\varepsilon}\right)} }g(v_{n,\e}^+, v_{n,\e}^-, \nu_{v_{n,\e}}) d {\cal H}^{N-1}\left( y\right) \right) . \] Since $(v_{n,\varepsilon}, u_{n,\e})\rightarrow (v_0, u_0)$ in $L^{1}\left( Q;\mathbb{R}^{m+d}\right) $ as $n\rightarrow\infty$ and $\varepsilon\rightarrow 0^{+},$ by a standard diagonalization argument, as in \cite[Theorem 4.1, Steps 2 and 3]{BBBF}, we obtain a sequence $(\bar{v}_k,\bar{u}_k)$ converging to $(v_0, u_0)$ in $L^1(Q;\mathbb R^{m+d})$ as $k \to \infty$ such that \[ \mu_{j}\left( x_{0}\right) \geq\lim_{k\rightarrow\infty}\left( \int_{Q}\left( Qf\right)^{\infty}\left( \bar{v}_{k}\left( y\right) ,\nabla \bar{u}_{k}\left( y\right) \right) dy+\int_{Q\cap J_{\left( \bar{v}_k,\bar{u}_k \right)} } g(\bar{v}_k^+, \bar{v}_k^-, \nu_{\bar{v}_k}) d {\cal H}^{N-1}\left( y\right) \right).
\] Applying Lemma \ref{Lemma4.1FM} with $Qf$ replaced by $\left( Qf\right)^\infty$ and using $(v)$ in Remark \ref{propfinfty}, we may find $\left\{ \left( \zeta_{k},\xi_{k}\right) \right\} \subset\mathcal{A}_3\left( v^+(x_0),v^-(x_0),u^{+}\left( x_{0}\right) ,u^{-}\left( x_{0}\right) ,\nu\left( x_{0}\right) \right) $ such that \begin{equation}\nonumber \begin{array}{ll} \displaystyle{\mu_{j}(x_0) \geq\lim_{k\rightarrow\infty}\left( \int_Q \left( Qf\right)^{\infty}\left( \zeta_{k},\nabla\xi_{k}\right) dx+ \int_{Q \cap J_{(\zeta_k, \xi_k)} }g(\zeta_k^+, \zeta_k^-, \nu_{\zeta_k}) d {\cal H}^{N-1} \right)} \\ \\ \displaystyle{\geq K_3\left( v^+(x_0), v^-(x_0),u^{+}\left( x_{0}\right) ,u^{-}\left( x_{0}\right) ,\nu\left( x_{0}\right) \right) }. \end{array} \end{equation} \noindent\textbf{Step 3.} Here we show \eqref{lboundcantor}. \noindent Let $(v,u) \in SBV_0(\Omega;\mathbb R^m)\times BV\left( \Omega;\mathbb{R}^{d}\right) $ and note, as already emphasized in Remark \ref{vmeas}, that $|D^c(v,u)|= |D^c u|$. For $\left\vert D^{c}u\right\vert -$a.e. $x_{0}\in\Omega$ we have \[ \lim_{\varepsilon\rightarrow0^{+}}\frac{\left\vert D(v,u)\right\vert \left( Q\left( x_{0},\varepsilon\right) \right) }{\left\vert D^{c}(v,u)\right\vert \left( Q\left( x_{0},\varepsilon\right) \right) }= \lim_{\varepsilon\rightarrow0^{+}}\frac{\left\vert D(v,u)\right\vert \left( Q\left( x_{0},\varepsilon\right) \right) }{\left\vert D^{c}u\right\vert \left( Q\left( x_{0},\varepsilon\right) \right) }=1. \] Thus, by Theorems 2.4 $iii)$ and 2.11 in \cite{FM2} and by Theorem \ref{thm2.6BBBF}, for $\left\vert D^{c}u\right\vert -$a.e.
$x_{0}\in\Omega$ the following hold: \[ \mu_c\left( x_{0}\right) =\lim_{\varepsilon\rightarrow0^{+}}\frac{\mu\left( Q\left( x_{0},\varepsilon\right) \right) }{\left\vert Du\right\vert \left( Q\left( x_{0},\varepsilon\right) \right) }, \] \[ \lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon^{N}}\int_{Q\left( x_{0},\varepsilon\right) }\left(\left\vert u\left( x\right) -u\left( x_{0}\right) \right\vert +\left\vert v\left( x\right) -v\left( x_{0}\right) \right\vert\right) dx=0, \] for $\mathcal{H}^{N-1}$-a.e. $x_{0}\in\Omega\backslash J_{(v,u)}$, \[ A\left( x_{0}\right) =\lim_{\varepsilon\rightarrow0^{+}}\frac{\left( D(v,u)\right) \left( Q\left( x_{0},\varepsilon\right) \right) }{\left\vert D(v,u)\right\vert \left( Q\left( x_{0},\varepsilon\right) \right) },~~\left\Vert A\left( x_{0}\right) \right\Vert =1,~~A\left( x_{0}\right) =a\otimes\nu, \] with $a \in \mathbb R^d$ and $\nu \in S^{N-1}$, \[ \lim_{\varepsilon\rightarrow0^{+}}\frac{\left\vert D(v,u)\right\vert \left( Q\left( x_{0},\varepsilon\right) \right) }{\varepsilon^{N-1}}=\lim_{\varepsilon\rightarrow0^{+}}\frac{\left\vert Du\right\vert \left( Q\left( x_{0},\varepsilon\right) \right) }{\varepsilon^{N-1}}=0, \] \[ \lim_{\varepsilon\rightarrow0^{+}}\frac{\left\vert D(v,u)\right\vert \left( Q\left( x_{0},\varepsilon\right) \right) }{\varepsilon^{N}}=\lim _{\varepsilon\rightarrow0^{+}}\frac{\left\vert Du\right\vert \left( Q\left( x_{0},\varepsilon\right) \right) }{\varepsilon^{N}}=\infty. \] Arguing as at the end of Step 1, by Proposition \ref{prop2.4FM1} (ii), we may assume that $\left\{ (\widetilde{v}_n, \widetilde{u}_n)\right\} \subset C_{0}^{\infty}(\mathbb R^N;\mathbb R^{m+d})$. Applying \cite[formula (2.12) in Theorem 2.19]{FM2} to the functional $G: (v,u)\in W^{1,1}(\Omega;\mathbb R^{m+d}) \mapsto \int_{\Omega}Qf(v,\nabla u)dx$, we obtain for $\left\vert D^{c}(v,u)\right\vert -$a.e.
$x_{0}\in\Omega$ \begin{equation}\nonumber \mu_c( x_{0}) \geq( Qf) ^{\infty}\left( v( x_{0}) ,\frac{dD^{c} u }{d\left\vert D^{c}u \right\vert } (x_0) \right), \end{equation} which concludes the proof. \end{proof} \section{Upper bound}\label{ub} This section is devoted to proving that ${\cal F}\leq {\overline F_0}$. \begin{theorem} \label{thupperbound} Let $\Omega\subset\mathbb{R}^{N}$ be a bounded open set, let $f:\mathbb R^m \times \mathbb R^d\rightarrow\lbrack0,+\infty)$ be a function satisfying $(F_1)$ - $(F_4)$, and let $g: \mathbb R^m \times \mathbb R^m \times S^{N-1}\to [0,+\infty)$ be a function satisfying $(G_1)$ - $(G_3)$. Then for every $\left( v,u\right) \in SBV_0\left( \Omega;\mathbb R^m \right) \times BV\left( \Omega;\mathbb{R}^{d}\right) $ and for every $A\in\mathcal{A}\left( \Omega\right)$, there exist sequences $\left\{ v_n\right\} \subset SBV_0\left( \Omega;\mathbb R^m\right) $ and $\left\{ u_{n}\right\} \subset W^{1,1}\left( \Omega;\mathbb{R}^{d}\right) $ such that $v_n \to v$ in $L^1\left( \Omega;\mathbb R^m\right)$, $u_{n}\rightarrow u$ in $L^{1}\left( \Omega;\mathbb{R}^{d}\right) $, and \begin{equation}\nonumber \underset{n\rightarrow\infty}{\lim\inf}F\left( v_n,u_n;A\right) \leq {\overline F_0}\left( v,u;A\right) . \end{equation} \end{theorem} \noindent Before proving the upper bound we recall our strategy, which was first proposed in \cite{AMT} and further developed in \cite{FM2}. Namely, first we will show that ${\cal F}(v, u;\cdot )$ is a variational functional with respect to the $L^1$ topology and that \begin{equation}\nonumber {\cal F}(v,u;\cdot) \leq C\left(\mathcal{L}^{N}+ |Dv|+|Du| + {\cal H}^{N-1}\lfloor{J_v}\right). \end{equation} Next, by the Besicovitch differentiation theorem, a blow-up argument will provide an upper bound estimate in terms of ${\overline F}_0$, first for the bulk and Cantor parts, and then also for the jump part, when the target functions $(v,u)$ are bounded.
Finally, the same approximation as in \cite[Theorem 4.9]{AMT} will give the estimate for every $(v,u)\in SBV_0(\Omega;\mathbb R^m)\times BV(\Omega;\mathbb R^d)$. We recall that ${\cal F}(v,u ; \cdot)$ is said to be a variational functional with respect to the $L^1$ topology if \begin{itemize} \item[(i)] ${\cal F}(\cdot, \cdot ;A)$ is local, i.e., ${\cal F}(v,u;A) = {\cal F}(v', u';A)$ for every $v,v' \in SBV_0(A;\mathbb R^m)$, $ u, u' \in BV(A; \mathbb R^d)$ satisfying $u = u'$, $v= v'$ a.e. in $A$. \item[(ii)] ${\cal F}(\cdot, \cdot;A)$ is sequentially lower semicontinuous, i.e., if $v_n, v \in BV(A; \mathbb R^m)$, $u_n, u \in BV(A;\mathbb R^d)$ and $v_n \to v$ in $L^1(A; \mathbb R^m)$, $u_n \to u$ in $L^1(A;\mathbb R^d)$, then ${\cal F}(v, u; A) \leq \liminf_{n \to \infty} {\cal F}(v_n,u_n ; A)$. \item[(iii)] ${\cal F}(\cdot, \cdot ;A)$ is the trace on $\{A \subset \Omega: A \hbox{ is open}\}$ of a Borel measure on ${\cal B}(\Omega)$, the family of all Borel subsets of $\Omega$. \end{itemize} Since the lower semicontinuity and the locality of ${\cal F}(\cdot, \cdot; A)$ follow from its definition, it remains to prove $(iii)$. This is the goal of the following lemma, where $(iii)$ will be obtained via a refinement of the De Giorgi--Letta criterion, cf. \cite[Corollary 5.2]{DMFL}. \begin{lemma} \label{measure} Let $\Omega\subset\mathbb{R}^{N}$ be an open bounded set with Lipschitz boundary and let $f$ and $g$ be as in Theorem \ref{thupperbound}.
For every $\left( v,u\right) \in SBV_0\left( \Omega;\mathbb R^m \right) \times BV\left( \Omega;\mathbb{R}^{d}\right) $, the set function $\mathcal{F}\left( v,u;\cdot\right) $ in \eqref{calFG} is the trace of a Radon measure absolutely continuous with respect to $\mathcal{L}^{N}+ |Dv|+\left\vert Du\right\vert + {\cal H}^{N-1}\lfloor{J_v}.$ \end{lemma} \begin{proof} An argument very similar to \cite[Lemma 2.6 and Remark 2.7]{BFMGlobal} and \cite[Lemma 4.7]{BZZ} entails \[ \mathcal{F}(v,u; A)\leq C\left( \mathcal{L}^{N}(A)+|D v |(A)+|Du|(A)+ {\cal H}^{N-1}\lfloor{J_v}(A)\right) . \] \noindent By \cite[Corollary 5.2]{DMFL}, to obtain $(iii)$ it suffices to prove that \begin{equation}\nonumber \displaystyle\mathcal{F}(v,u;A)\leq\mathcal{F}(v,u;B)+\mathcal{F}(v,u;A\setminus\overline{U}) \end{equation} \noindent for all $A,U,B\in\mathcal{A}(\Omega)$ with $U\subset\subset B\subset\subset A$, $u\in BV(\Omega;\mathbb{R}^{d})$ and $v\in SBV_0(\Omega;\mathbb R^m)$. We start by assuming that $v \in SBV_0(\Omega;\mathbb R^m)\cap L^\infty(\Omega;\mathbb R^m)$. Fix $\eta>0$ and find $\{w_{n}\}\subset W^{1,1}\left( A\setminus\overline {U};\mathbb{R}^{d}\right) $ and $\{v _{n}\}\subset SBV_0(A\setminus\overline{U};\mathbb R^m)\cap L^\infty(A\setminus\overline{U};\mathbb R^m)$ (cf. Remark \ref{asinGlobalMethodBFLM}) such that $w_n\rightarrow u$ in $L^{1}(A\setminus\overline{U};\mathbb{R}^{d})$, $v _{n}\rightarrow v$ in $L^{1}(A\setminus\overline{U};\mathbb R^m)$ and \begin{equation} {\limsup_{n\rightarrow\infty}\left(\int_{A\setminus\overline{U}}f\left( v_n, \nabla w_n \right) dx+\int_{\left( A \setminus {\overline U}\right) \cap J_{v_n}} g(v_n^+, v_n^-, \nu_{v_n})d {\cal H}^{N-1}\right) \leq\mathcal{F}(v,u;A\setminus{\overline{U}})+\eta.} \label{E1} \end{equation} Extract a subsequence, still denoted by $n$, such that the above upper limit is a limit. Let $B_0$ be an open subset of $\Omega$ with Lipschitz boundary such that $U\subset\subset B_{0}\subset\subset B$.
Then there exist $\{u_{n}\}\subset W^{1,1}(B_{0};\mathbb{R}^{d})$ and $\{\overline{v}_{n}\}\subset SBV_0\left( B_{0};\mathbb R^m\right) \cap L^\infty(B_0;\mathbb R^m)$ (cf. (i) in Remark \ref{asinGlobalMethodBFLM}) such that $u_{n}\rightarrow u$ in $L^{1}(B_{0};\mathbb{R}^{d})$ and $\overline{v}_{n}\rightarrow v$ in $L^{1}(B_{0};\mathbb R^m)$ and \begin{equation} \mathcal{F}(v,u;B_{0})=\lim_{n\rightarrow\infty}\left( \int_{B_0}f({\overline v_n},\nabla u_n)dx + \int_{J_{\overline v_n}\cap B_0}g({\overline v_n}^+, {\overline v_n}^-, \nu_{\overline v_n})d {\cal H}^{N-1} \right). \label{E2} \end{equation} \noindent For every $(\overline{v}, w) \in SBV_0(A;\mathbb R^m)\cap L^\infty(A;\mathbb R^m) \times W^{1,1}(A;\mathbb R^d)$, consider $\displaystyle{\mathcal{G}_{n}({\overline v}, w;A):=\int_{A}\left( 1+|\nabla w|\right) dx + (1+[{\overline v}]){\cal H}^{N-1}\lfloor{(J_{\overline v} \cap A)}}$. \noindent Due to the coercivity \eqref{H1}, we may extract a bounded subsequence, not relabelled, from the sequence of measures $\nu_{n}:=\mathcal{G}_{n}(v_n, w_n;\cdot)+\mathcal{G}_{n}({\overline v}_n, u_n;\cdot)$ restricted to $B_{0}\setminus \overline{U}$, converging in the sense of distributions to some Radon measure $\nu$ defined on $B_{0}\setminus\overline{U}$. Analogously, for every $w \in SBV_0(A;\mathbb R^m)\cap L^\infty(A;\mathbb R^m)$ we may define a sequence of measures ${\cal H}_n(w;E):=\int_{J_w \cap E}d {\cal H}^{N-1}$. \noindent For every $t >0$, let $B_{t}:= \left\{ x\in B_{0} : \mathrm{dist}(x, \partial B_{0}) > t\right\}$. Define, for $0 < \delta< \eta$, the subsets $L_{\delta}:= B_{\eta- 2 \delta} \setminus\overline{B_{\eta+ \delta}}.$ Consider a smooth cut-off function $\varphi_{\delta}\in C^{\infty}_{0}(B_{\eta-\delta};[0,1])$ such that $\varphi_\delta(x)= 1$ on $B_{\eta}$.
As the thickness of the strip is of order $\delta$, we have an upper bound of the form $\|\nabla\varphi_{\delta}\|_{L^{\infty}(B_{\eta-\delta})} \leq\frac{C}{\delta}.$ \noindent Define ${\overline w_n}(x):=\varphi_{\delta}(x)u_{n}(x)+(1-\varphi_{\delta}(x))w_{n}(x)$. Clearly, $\left\{ {\overline w_n}\right\}$ converges to $u$ in $L^{1}(A)$ as $n\rightarrow\infty$, and \[ \nabla {\overline w_n}=\varphi_{\delta}\nabla u_{n}+(1-\varphi_{\delta})\nabla w_{n}+\nabla\varphi_{\delta}\otimes(u_{n}-w_{n}). \] \noindent Arguing as in \cite[Lemma 4.4]{ABr1}, we may consider a sharp transition for the $SBV_0$ functions: let $\{v_{n}\}$ and $\{{\overline v_n}\}$ be as above; then for every $0<t <1$ we may define $\tilde{v}_{n}^{t}$ such that $\tilde{v}_{n}^{t}\rightarrow v$ in $L^{1}(A)$ as $n\rightarrow\infty$, with \[ \tilde{v}_{n}^{t}(x):=\left\{ \begin{array}[c]{ll} v_n(x) & \hbox{ in }\{x:\varphi_\delta(x)<t\},\\ {\overline v_n}(x) & \hbox{ in }\{x:\varphi_\delta(x)\geq t\}. \end{array} \right. \] \noindent Clearly $\tilde{v}_{n}^{t}(x)\in\{v_{n}(x),\overline{v}_{n}(x)\}$ almost everywhere in $A$, and, since $\mathcal{H}^{N-1}(J_{v_n}), {\mathcal H}^{N-1}(J_{\overline v_n}) < +\infty$, for all but at most countably many $t \in \left] 0,1\right[$ it results that \[ \mathcal{H}^{N-1}\left( J_{v_{n}}\cap \left\{ x\in A:\varphi_{\delta}\left( x\right) =t\right\} \right) =\mathcal{H}^{N-1}\left( J_{\overline{v}_{n}}\cap \left\{ x\in A:\varphi_{\delta}\left( x\right) =t\right\} \right) =0. \] Moreover, using the coarea formula \eqref{FR} and the mean value theorem, it is possible to find a $t$ for which the integral over the level set is comparable with the double integral with $t$ varying between $0$ and $1$. Thus we have $$ \int_{\partial^{\ast}\{\varphi_\delta < t\}} d {\cal H}^{N-1}\leq \frac{C}{\delta} {\cal L}^N(B_{\eta - \delta} \setminus B_\eta) \leq C.
$$ An analogous reasoning provides for the same $t$ that \begin{equation}\label{secondmeanvalue} \int_{\partial^{\ast}\{\varphi_\delta<t\}}|[{\tilde v_n}^t]|d {\cal H}^{N-1} \leq\frac{C}{\delta}\int_{B_{\eta-\delta}\setminus B_{\eta}}|v_{n}(x)-\overline{v}_{n}(x)|dx. \end{equation} Thus, as for the $\{{\cal G}_n\}$ above, we may extract a bounded subsequence, not relabelled, from the sequence of measures ${\cal H}_n({\tilde v}_n^t;\cdot)$, restricted to $(B_0\setminus \overline U) \cap \partial^\ast\{\varphi_\delta <t\}$, converging in the sense of distributions to some Radon measure $\nu_1$ defined on $B_{0}\setminus \overline U$. By \eqref{H1} we have the estimate \[ \begin{array}[c]{l} \displaystyle{\int_A f\left( {\tilde v_n}^t, \nabla {\overline w_n}\right) dx+ \int_{A \cap J_{\tilde v_n^t}} g(({\tilde v}_n^t)^+, ({\tilde v}_n^t)^-, \nu_{{\tilde v}_n^t})d {\cal H}^{N-1}}\\ \\ \leq{\displaystyle\int_{B_{\eta}} f({\overline v_n}, \nabla u_n)dx+ \int_{J_{\overline v_n}\cap B_\eta}g({\overline v_n}^+, {\overline v_n}^-, \nu_{\overline v_n})d {\cal H}^{N-1}}\\ \\ +{\displaystyle\int_{(A\setminus\overline{B_{\eta-\delta}})} f\left( v_n, \nabla w_n\right)dx + \int_{J_{v_n}\cap(A\setminus\overline{B_{\eta-\delta}})}g( v_n^+, v_n^-, \nu_{v_n})d {\cal H}^{N-1}} \\ \\ +C\left( \mathcal{G}_n(v_n, w_n; L_{\delta})+{\mathcal G}_n({\overline v_n}, u_n;L_{\delta})\right) + \frac{1}{\delta} \displaystyle{\int_{L_{\delta}} |w_n-u_n|dx+\int_{\partial^{\ast}\{\varphi_\delta<t\}}|[{\tilde v_n}^t]|d {\cal H}^{N-1}+ {\cal H}_n ({\tilde v}_n^t; L_\delta \cap \partial^{\ast}\{\varphi_\delta<t\})} \\ \\ \leq {\displaystyle\int_{B_0} f({\overline v_n}, \nabla u_n)dx+ \int_{J_{\overline v_n}\cap B_0}g({\overline v_n}^+, {\overline v_n}^-, \nu_{\overline v_n})d {\cal H}^{N-1}}\\ \\ +{\displaystyle\int_{(A\setminus\overline{U})} f\left( v_n, \nabla w_n\right)dx + \int_{J_{v_n}\cap(A\setminus\overline{U})}g( v_n^+, v_n^-, \nu_{v_n})d {\cal H}^{N-1}} \\ \\ +C\left(
\mathcal{G}_n(v_n, w_n; L_{\delta})+{\mathcal G}_n({\overline v_n}, u_n;L_{\delta})\right) + \frac{1}{\delta} \displaystyle{\int_{L_{\delta}} |w_n-u_n|dx+\int_{\partial^{\ast}\{\varphi_\delta<t\}}|[{\tilde v_n}^t]|d {\cal H}^{N-1}+ {\cal H}_n ({\tilde v}_n^t; L_\delta \cap \partial^{\ast}\{\varphi_\delta<t\})} \end{array} \] Passing to the limit as $n\rightarrow\infty$ and applying \eqref{E1}, \eqref{E2}, \eqref{secondmeanvalue} and the $L^1$ convergence of $\{v_n\}$ and $\{\overline v_n\}$ to $v$, it results that \begin{align*} \mathcal{F}(v,u;A) & \leq\mathcal{F}(v,u;B_{0})+\mathcal{F}(v,u;A\setminus{\overline{U}})+\eta+C\nu(\overline{L_{\delta}}) + C \nu_1(\overline{L_\delta}) +\limsup_{n\rightarrow\infty}\int_{\partial^{\ast}\{\varphi_\delta<t\}}|[{\tilde v_n}^t]|d {\cal H}^{N-1}\\ & \leq\mathcal{F}(v,u;B)+\mathcal{F}(v,u;A\setminus{\overline{U}})+\eta+C\nu(\overline{L_{\delta}})+ C\nu_1(\overline{L_{\delta}}). \end{align*} \noindent Letting $\delta$ go to $0$ we obtain \[ \mathcal{F}(v,u;A)\leq\mathcal{F}(v,u;B)+\mathcal{F}(v,u;A\setminus{\overline{U}})+\eta+C\nu(\partial B_{\eta})+ C\nu_1(\partial B_{\eta}). \] \noindent It suffices to choose a subsequence $\{\eta_{i}\}$ such that $\eta_i \to 0^{+}$ and $\nu(\partial B_{\eta_i})=\nu_1(\partial B_{\eta_i})=0$ to conclude the proof of subadditivity in the case $v \in SBV_0\cap L^\infty$. \noindent In the general case, by virtue of Remark \ref{asinGlobalMethodBFLM}, we can argue as in the last part of Theorem 10 in \cite{BFLM}.\end{proof} \medskip \begin{proof}[Proof of Theorem \ref{thupperbound}] We assume first that $(v,u)\in (SBV_0(\Omega;\mathbb R^m)\times BV(\Omega;\mathbb R^d)) \cap L^\infty(\Omega;\mathbb R^{m+d})$. \noindent\textbf{Step 1.} In order to prove the upper bound, we start by recalling that by Proposition \ref{propqcx} we can replace $f$ by $Qf$ in \eqref{calFG}. First we deal with the bulk part.
Since $\mathcal{F}\left( v,u;\cdot\right)$ is a measure absolutely continuous with respect to $\mathcal{L}^{N}+\left\vert Du\right\vert +(1+[v]){\cal H}^{N-1}\lfloor{J_v}$, we claim that \[ \frac{d\mathcal{F}\left( v,u;\cdot\right) }{d\mathcal{L}^{N}}\left( x_{0}\right) \leq Qf\left( v\left( x_{0}\right) ,\nabla u\left( x_{0}\right) \right) \] for $\mathcal{L}^{N}$-a.e. $x_{0}\in\Omega$, where $x_{0}$ is a Lebesgue point of $v$ and $u$ such that \begin{equation} \begin{array}[c]{l} \lim\limits_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon}\left\{ \frac{1}{\varepsilon^{N}}\int_{B\left( x_{0},\varepsilon\right) }\left\vert u\left( x\right) -u\left( x_{0}\right) -\nabla u\left( x_{0}\right) \left( x-x_{0}\right) \right\vert ^{\frac{N}{N-1}}dx\right\} ^{\frac{N-1}{N}}=0,\medskip\\ \lim\limits_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon}\left\{ \frac{1}{\varepsilon^{N}}\int_{B\left( x_{0},\varepsilon\right) }\left\vert v\left( x\right) -v\left( x_{0}\right) \right\vert ^{\frac{N}{N-1}}dx\right\} ^{\frac{N-1}{N}}=0,\medskip\\ \mu_{a}\left( x_{0}\right) =\lim\limits_{\varepsilon\rightarrow0^{+}}\frac{\mu\left( B\left( x_{0},\varepsilon\right) \right) }{\mathcal{L}^{N}\left( B\left( x_{0},\varepsilon\right) \right) }<\infty. \end{array} \label{upper1} \end{equation} Let $U:=\left( v,u\right).$ By $\left( \ref{upper1}\right)$ and Theorems \ref{thm2.6BBBF} and \ref{thm2.8FM2}, for $\mathcal{L}^{N}$-a.e.
$x_{0}\in\Omega$ we have \begin{align} & \lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\mathcal{L}^{N}\left( B\left( x_{0},\varepsilon\right) \right) }\int_{B\left( x_{0},\varepsilon\right) }\left\vert U\left( x\right) -U\left( x_{0}\right) \right\vert \left( 1+\left\vert \nabla U\left( x\right) \right\vert \right) dx=0,\nonumber\\ & \lim_{\varepsilon\rightarrow0^{+}}\frac{\left\vert D_{s}U\right\vert \left( B\left( x_{0},\varepsilon\right) \right) }{\mathcal{L}^{N}\left( B\left( x_{0},\varepsilon\right) \right) }=0,\nonumber\\ & \lim_{\varepsilon\rightarrow0^{+}}\frac{\left\vert DU\right\vert \left( B\left( x_{0},\varepsilon\right) \right) }{\mathcal{L}^{N}\left( B\left( x_{0},\varepsilon\right) \right) }\text{ exists and it is finite,}\label{upper3}\\ & \lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\mathcal{L}^{N}\left( B\left( x_{0},\varepsilon\right) \right) }\int_{B\left( x_{0},\varepsilon\right) }Qf\left( v\left( x_{0}\right) ,\nabla u\left( x\right) \right) dx=Qf\left( v\left( x_{0}\right) ,\nabla u\left( x_{0}\right) \right),\nonumber\\ & \frac{d\mathcal{F}\left( v,u;\cdot\right) }{d\mathcal{L}^{N}}\left( x_{0}\right) \text{ exists and it is finite.}\nonumber \end{align} We observe that the assumptions imposed on $f$ and Proposition \ref{continuityQfbar} allow us to apply, for every $v \in SBV_0(\Omega;\mathbb R^m)$, the Global Method (cf. \cite[Theorem 4.1.4]{BFMGlobal}) to the functional $(u,A) \in W^{1,1}(\Omega;\mathbb R^d) \times {\cal A}(\Omega)\mapsto G(u;A):= \int_A Qf(v(x),\nabla u(x))dx$, thus obtaining an integral representation for the relaxed functional \begin{equation}\label{auxrelax} \displaystyle{{\cal G}(u;A) = \inf\left\{\liminf_{n \to \infty} G(u_n; A): u_n \to u \hbox{ in } L^1(A;\mathbb R^d)\right\}} \end{equation} for every $(u,A)\in BV(\Omega;\mathbb R^d)\times {\cal A}(\Omega)$.
Recall that the growth condition $(G_2)$ and the lower semicontinuity, with respect to the $L^1$-topology, of the functional $v \in SBV_0(\Omega;\mathbb R^m)\mapsto (1+[v]){\cal H}^{N-1}\lfloor{(J_v \cap A)}$ entail \begin{equation}\label{FestG} {\cal F}(v, u;A)\leq {\cal G}(u;A)+ (1+[v]){\cal H}^{N-1}\lfloor{(J_v \cap A)}. \end{equation} Differentiating with respect to ${\cal L}^N$ at $x_0$ and exploiting \eqref{upper1} and \eqref{upper3} we obtain that $$ \displaystyle{\frac{d{\cal F}(v,u;\cdot)}{d {\cal L}^N}(x_0) \leq f_0(x_0, \nabla u(x_0))}, $$ where for every $x_0 \in \Omega$ and $\xi \in \mathbb R^d$, $f_0(x_0,\xi)$ is given as in \cite[formula (4.1.5)]{BFMGlobal}, namely \begin{equation}\label{f00} f_0(x_0,\xi):=\limsup_{\varepsilon \to 0^+} \inf_{\begin{array}{ll} z\in W^{1,1}(Q;\mathbb R^d)\\ z(y)= \xi y \hbox{ on }\partial Q \end{array}}\left\{\int_Q Qf(v(x_0+\varepsilon y), \nabla z(y))dy\right\}. \end{equation} To conclude the proof we claim that $f_0(x_0,\xi) \leq Qf(v(x_0),\xi)$ for every $x_0 \in \Omega$ satisfying \eqref{upper1} and \eqref{upper3} and every $\xi \in \mathbb R^d$.
By virtue of Lemma \ref{Lemma0} we have that $$ \begin{array}{ll} \displaystyle{\limsup_{\varepsilon \to 0^+} \inf_{\begin{array}{ll} z\in W^{1,1}(Q;\mathbb R^d)\\ z(y)= \xi y \hbox{ on }\partial Q \end{array}}\left\{\int_Q Qf(v(x_0+\varepsilon y), \nabla z(y))dy\right\}} \\ \leq \displaystyle{\inf_{\begin{array}{ll} z\in W^{1,1}(Q;\mathbb R^d)\\ z(y)= \xi y \hbox{ on }\partial Q \end{array}}\left\{\limsup_{\varepsilon \to 0^+}\int_Q Qf(v(x_0+\varepsilon y), \nabla z(y))dy\right\}.} \end{array} $$ Computing the $\limsup$ on the right-hand side, we have $$ \begin{array}{ll} \displaystyle{\limsup_{\e \to 0^+} \int_Q Qf(v(x_0+ \varepsilon y), \nabla z(y))dy}\\ \displaystyle{=\limsup_{\e \to 0^+} \left(\int_Q Qf(v(x_0+ \varepsilon y), \nabla z(y))dy - \int_Q Qf(v(x_0), \nabla z(y))dy\right) + \int_Q Qf(v(x_0), \nabla z(y))dy.} \end{array} $$ Since $x_0$ is a Lebesgue point for $v$, and recalling that $v \in SBV_0(Q;\mathbb R^m)\cap L^\infty(Q;\mathbb R^m)$, by the Lebesgue dominated convergence theorem and $(F_3)$ applied to $Qf$ (see Proposition \ref{continuityQfbar}), we have that $$ \begin{array}{ll} \displaystyle{\limsup_{\e \to 0^+} \left(\int_Q Qf(v(x_0+ \varepsilon y), \nabla z(y))dy - \int_Q Qf(v(x_0), \nabla z(y))dy\right)}\\ \displaystyle{ \leq \limsup_{\e \to 0^+} \int_Q L|v(x_0+ \varepsilon y)- v(x_0)|(1+ |\nabla z(y)|)dy =0.} \end{array} $$ Hence $$ \displaystyle{\limsup_{\e \to 0^+} \int_Q Qf(v(x_0+ \varepsilon y), \nabla z(y))dy=\int_Q Qf(v(x_0), \nabla z(y))dy.} $$ By the quasiconvexity of $Qf(v(x_0),\cdot)$ and \eqref{f00}, one obtains $$ f_0(x_0,\xi) \leq Qf(v(x_0), \xi), $$ which concludes the proof, replacing $\xi$ by $\nabla u(x_0)$. \noindent\textbf{Step 2.} We prove the upper bound for the Cantor part.
By the Radon-Nikod\'ym theorem we can write \begin{equation}\label{CantorU} \left\vert DU\right\vert =\left\vert D^{c}u\right\vert +\sigma \end{equation} where $U:=(v,u) \in (SBV_0(\Omega;\mathbb R^m) \times BV(\Omega;\mathbb R^d))\cap L^\infty(\Omega;\mathbb R^{m+d})$, and $\sigma$ and $\left\vert D^{c}u\right\vert$ are mutually singular Radon measures. Observe that $U \equiv \left(v,u\right)$ is $\left\vert D^{c}u\right\vert$-measurable, $Dv$ is singular with respect to $\left\vert D^{c}u\right\vert$ and, by Theorems \ref{thm2.6BBBF}, \ref{thm2.8FM2}, and \cite[Theorem 2.11]{FM2}, for $\left\vert D^{c}u\right\vert$-a.e. $x_{0}\in\Omega$ \begin{equation}\label{cantor} \begin{array}{ll} \displaystyle{\lim_{\varepsilon\rightarrow0^{+}}\frac{\mu\left( B\left( x_{0},\varepsilon\right) \right) }{\left\vert D^{c}u\right\vert \left( B\left( x_{0},\varepsilon\right) \right) }=0,} \\ \\ \displaystyle{\lim_{\varepsilon\rightarrow0^{+}}\frac{\left\vert Du\right\vert \left( B\left( x_{0},\varepsilon\right) \right) }{\left\vert D^{c}u\right\vert \left( B\left( x_{0},\varepsilon\right) \right) }\text{ exists and is finite,}} \\ \\ \displaystyle{\lim_{\varepsilon\rightarrow0^{+}}\frac{\varepsilon^{N}}{\left\vert D^{c}u\right\vert\left( B\left( x_{0},\varepsilon\right) \right) }=0,}\\ \\ \displaystyle{\lim_{\varepsilon\rightarrow0^{+}}\frac{1}{{\cal L}^N \left( B\left( x_{0},\varepsilon\right) \right) }\int_{B\left( x_{0},\varepsilon\right) }\left(\left\vert u\left( x\right) -u\left( x_{0}\right) \right\vert +\left\vert v\left( x\right) -v\left( x_{0}\right) \right\vert\right) d x =0.} \end{array} \end{equation} Moreover, \begin{equation}\label{ABrankone} \displaystyle{A\left( x\right) :=\lim_{\varepsilon\rightarrow0^{+}}\frac{D^{c}u\left( B\left( x,\varepsilon\right) \right) }{\left\vert D^{c}u\right\vert \left( B\left( x,\varepsilon\right) \right) },\text{~\ \ }\lim_{\varepsilon \rightarrow0^{+}}\frac{D^{c}U\left( B\left( x,\varepsilon\right) \right)
}{\left\vert D^{c}U\right\vert \left( B\left( x,\varepsilon\right) \right) }=:D\left( x\right)} \end{equation} exist and are rank-one matrices of norm 1; in particular \begin{equation}\label{as3.27BFMglobal} \displaystyle{A(x)= a_u(x) \otimes \nu_u(x)}, \end{equation} where $(a_u(x),\nu_u(x)) \in \mathbb R^d \times S^{N-1}$. By Theorem \ref{thm2.8FM2} we have \[ \lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\left\vert D^{c}u\right\vert \left( B\left( x_{0},\varepsilon\right) \right) }\int_{B\left( x_{0},\varepsilon\right) }f^{\infty}\left( v\left( x_{0}\right) ,A\left( x\right) \right) d\left\vert D^{c}u\right\vert =f^{\infty}\left( v\left( x_{0}\right) ,A\left( x_{0}\right) \right). \] We recall, as in Step 1, that via the Global Method (cf. \cite[Theorem 4.1.4]{BFMGlobal}) we can obtain an integral representation for the functional ${\cal G}(u;A)$ in \eqref{auxrelax} for every $(v,u)\in BV(\Omega;\mathbb R^{m+d})$. Moreover, by Proposition \ref{propqcx}, we can replace $f$ by $Qf$ in \eqref{calFG} and \eqref{FestG} holds. Differentiating with respect to $|D^c u|$ at $x_0$ and exploiting \eqref{CantorU} and \eqref{cantor} we deduce $$ \displaystyle{\frac{d{\cal F}(v,u;\cdot)}{d |D^c u|}(x_0) \leq h(x_0, a_u, \nu_u)}, $$ where $\nu_u(x)$ agrees with the unit vector that, together with $a_u$, satisfies \eqref{as3.27BFMglobal} for $|D^c u|$-a.e.
$x \in \Omega \setminus J_u$, and where $h(x_0,a,\nu)$ is given as in \cite[formula (4.1.7)]{BFMGlobal}, namely \begin{equation}\label{f0} h(x_0,a,\nu):=\limsup_{k \to \infty}\limsup_{\varepsilon \to 0^+} \inf_{\begin{array}{ll} z\in W^{1,1}(Q^{(k)}_\nu;\mathbb R^d)\\ z(y)= a(\nu \cdot y) \hbox{ on }\partial Q^{(k)}_\nu \end{array}}\left\{\frac{1}{k^{N-1}}\int_{Q^{(k)}_\nu} Qf^\infty(v(x_0+\varepsilon y), \nabla z(y))dy\right\}, \end{equation} where $a \in \mathbb R^d$, $\nu \in S^{N-1}$, $Q_\nu^{(k)}:= R_\nu \left(\left(-\frac{k}{2},\frac{k}{2}\right)^{N-1}\times \left(-\frac{1}{2},\frac{1}{2}\right)\right),$ and $R_\nu$ is a rotation such that $R_\nu(e_N)=\nu$. We also recall that, by (iv) in Remark \ref{propfinfty}, $Q(f^\infty)= (Qf)^\infty= Qf^\infty$. To conclude the proof it is enough to show that $$ h(x_0, a,\nu) \leq Qf^\infty (v(x_0), a \otimes \nu). $$ By Lemma \ref{Lemma0}, \begin{equation}\label{hCantor} \begin{array}{ll} \displaystyle{h(x_0, a, \nu) \leq \limsup_{k \to \infty}\inf_{\begin{array}{ll} z\in W^{1,1}(Q^{(k)}_\nu;\mathbb R^d)\\ z(y)= a(\nu \cdot y) \hbox{ on }\partial Q^{(k)}_\nu \end{array}} \left\{\limsup_{\varepsilon \to 0^+} \frac{1}{k^{N-1}}\int_{Q_\nu^{(k)}} Qf^\infty(v(x_0+\varepsilon y), \nabla z(y))dy\right\}.} \end{array} \end{equation} In order to compute $\displaystyle{\limsup_{\varepsilon \to 0^+} \frac{1}{k^{N-1}}\int_{Q_\nu^{(k)}}Qf^\infty(v(x_0+\varepsilon y), \nabla z(y))dy}$, we add and subtract inside the integral $Qf^\infty(v(x_0), \nabla z(y))$.
Then, as in Step 1, exploiting the fact that $x_0$ is a Lebesgue point for $v\in SBV_0(\Omega;\mathbb R^m)\cap L^\infty(\Omega;\mathbb R^m)$, and that $Qf^\infty$ satisfies $(F_3)$ (see Remark \ref{propfinfty}, where $(F_3)$ has been deduced for $f^\infty$, and Proposition \ref{continuityQfbar}), via the Lebesgue dominated convergence theorem, we can conclude that $$ \displaystyle{\limsup_{\varepsilon \to 0^+}\frac{1}{k^{N-1}}\int_{Q_\nu^{(k)}} Qf^\infty(v(x_0+\varepsilon y), \nabla z(y)) dy= \frac{1}{k^{N-1}} \int_{Q_\nu^{(k)}}Qf^\infty(v(x_0), \nabla z(y))dy.} $$ Finally, the quasiconvexity of $Qf^\infty$ (deduced via Remark \ref{propfinfty} and Proposition \ref{continuityQfbar}) provides $$ \displaystyle{ Qf^\infty (v(x_0), a \otimes \nu) = \inf_{\begin{array}{ll} z\in W^{1,1}(Q^{(k)}_\nu;\mathbb R^d)\\ z(y)= a(\nu \cdot y) \hbox{ on }\partial Q^{(k)}_\nu \end{array}} \left\{\frac{1}{k^{N-1}}\int_{Q_\nu^{(k)}} Qf^\infty(v(x_0), \nabla z(y))dy\right\}}, $$ which, together with \eqref{hCantor}, concludes the proof of the upper bound for the Cantor part when $(v,u) \in (SBV_0(\Omega;\mathbb R^m)\times BV(\Omega;\mathbb R^d))\cap L^\infty(\Omega;\mathbb R^{m+d})$. \noindent\textbf{Step 3.} We prove the upper bound for the jump part. Namely, we claim that \begin{equation}\label{claimjump} \mathcal{F}\left( U;J_U \right)\equiv{\cal F}(v,u;J_{(v,u)}) \leq\int_{J_U}K_3\left( v^+,v^-,u^+,u^-,\nu\right) d\mathcal{H}^{N-1} \end{equation} for every $U\equiv \left( v,u\right) \in \left( SBV_0\left( \Omega;\mathbb R^m\right)\times BV\left( \Omega;\mathbb{R}^{d}\right) \right) \cap L^{\infty}\left(\Omega;\mathbb{R}^{m+d}\right)$.
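Before entering the three cases below, we record, as a convenient summary of what will be established in Case 1 a), how \eqref{claimjump} reads in the model situation of a flat interface: if $v=a$, $u=c$ on $\{x\cdot\nu>0\}$ and $v=b$, $u=d$ on $\{x\cdot\nu<0\}$, then on the unit cube $Q_\nu$ the bound takes the form
\[
\mathcal{F}(v,u;Q_\nu)\leq \int_{Q_\nu}Qf(v(x),0)\,dx + K_3(a,b,c,d,\nu)=\frac{Qf(a,0)+Qf(b,0)}{2}+K_3(a,b,c,d,\nu),
\]
since $v$ takes each of the values $a$ and $b$ on a half-cube of measure $\frac{1}{2}$, and $J_U\cap Q_\nu$ has $\mathcal{H}^{N-1}$-measure $1$.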
The proof is divided into three parts according to the assumptions on the limit function $U$. \emph{Case 1- }$U\left( x\right) :=\left( a,c\right) \chi_{E}\left( x\right) +\left( b,d\right) \left( 1-\chi_{E}\left( x\right) \right)$ with $P( E, \Omega) <\infty.$ \emph{Case 2- }$U\left( x\right) :=\sum_{i=1}^{\infty}(a_{i},c_i)\chi_{E_{i}}\left( x\right)$ where $\left\{ E_{i}\right\} _{i=1}^{\infty}$ forms a partition of $\Omega$ into sets of finite perimeter and $(a_i,c_i) \in \mathbb R^m \times \mathbb R^d$. \emph{Case 3- }$U\in (SBV_0( \Omega;\mathbb R^m)\times BV( \Omega;\mathbb{R}^{d})) \cap L^{\infty}\left( \Omega;\mathbb{R}^{m+d}\right).$ {\it Case 1-} We start by proving that for every open set $A\subset\Omega$ \[ \mathcal{F}\left( U;A\right)\equiv{\cal F}(v,u;A) \leq\int_{A}Qf\left( v\left( x\right) ,0\right) dx+\int_{J_U \cap A}K_3\left( a,b,c,d,\nu\right) d\mathcal{H}^{N-1}. \] \begin{enumerate} \item[a)] Assume first that \[ v(x) :=\left\{ \begin{array}[c]{cc} a & \text{if }x\cdot\nu>0,\\ b & \text{if }x\cdot\nu<0, \end{array} \right. \text{ and }u\left( x\right) :=\left\{ \begin{array}[c]{cc} c & \text{if }x\cdot\nu>0,\\ d & \text{if }x\cdot\nu<0. \end{array} \right. \] We start with the case when $A=a+\lambda Q$ is an open cube with two faces orthogonal to $\nu$; for simplicity we also assume that $\nu =e_N$, and $Q_\nu$ will be denoted simply by $Q$. Our proof develops as in \cite[Proposition 4.1 and Lemma 4.2]{FR}, cf. also \cite[Proposition 5.1]{BBBF}, thus we will present just the main steps. Suppose first that $a=0$ and $\lambda =1$. By Proposition \ref{prop3.5BBBF} (cf.
also Remark \ref{applicationofprop3.4}), there exists $(v_n,u_n) \in {\cal A}_3(a,b,c,d,\nu)$ such that $(v_n,u_n)\to (v,u)$ in $L^1(Q;\mathbb R^{m+d})$ and \begin{equation}\label{5.9BBBF} \displaystyle{K_3(a,b,c,d,\nu) =\lim_{n\to \infty}\left(\int_Q Qf^\infty(v_n(x), \nabla u_n(x))dx + \int_{J_{v_n}\cap Q} g(v_n^+(x), v_n^-(x), \nu_n(x))d {\cal H}^{N-1}\right).} \end{equation} We denote by $Q'$ the set $\left\{ x\in Q:x_{N}=0\right\}.$ For $k\in\mathbb{N}$ we label the elements of $\left( \mathbb{Z}\cap\left[ -k,k\right] \right) ^{N-1}\times\left\{ 0\right\}$ by $\left\{ a_{i}\right\} _{i=1}^{\left( 2k+1\right) ^{N-1}}$ and we observe that \[ \left( 2k+1\right) \overline{Q'}= {\displaystyle\bigcup\limits_{i=1}^{\left( 2k+1\right) ^{N-1}}} \left( a_{i}+\overline{Q'}\right) \] with \[ \left( a_{i}+Q'\right) \cap\left( a_{j}+Q'\right) =\emptyset\text{ for }i\neq j. \] We define \[ z_{n,k}\left( x\right) :=\left\{ \begin{array}[c]{lll} a & & \text{if }x_{N}>\frac{1}{2\left( 2k+1\right) },\\ v_n\left( \left( 2k+1\right) x\right) & & \text{if }\left\vert x_{N}\right\vert <\frac{1}{2\left( 2k+1\right) },\\ b & & \text{if }x_{N}<-\frac{1}{2\left( 2k+1\right) } \end{array} \right. \] and \[ w_{n,k}\left( x\right) :=\left\{ \begin{array}[c]{lll} c & & \text{if }x_{N}>\frac{1}{2\left( 2k+1\right) },\\ u_n\left( \left( 2k+1\right) x\right) & & \text{if }\left\vert x_{N}\right\vert <\frac{1}{2\left( 2k+1\right) },\\ d & & \text{if }x_{N}<-\frac{1}{2\left( 2k+1\right) }. \end{array} \right.
\] By the periodicity of the functions $v_n$ and $u_n$, it is easily seen that $$ \displaystyle{\lim_{n \to \infty}\lim_{k \to \infty}\|z_{n,k}-v\|_{L^{1}\left( Q;\mathbb R^m\right) }=0}, \;\;\;\;\;\;\;\;\;\;\displaystyle{\lim_{n \to \infty}\lim_{k \to \infty}\|w_{n,k}- u\|_{L^{1}\left( Q;\mathbb{R}^{d}\right) }=0.} $$ \noindent Thus, by a standard diagonalization argument, we have $$ \begin{array}{ll} \displaystyle{{\cal F}(v,u; Q)\leq \limsup_{n\to \infty}\limsup_{k\to \infty}\left(\int_Q Qf(z_{n,k}(x),\nabla w_{n,k}(x))dx +\int_{Q\cap J_{z_{n,k}}} g(z_{n,k}^+(x), z_{n,k}^-(x), \nu_{n,k}(x))d {\cal H}^{N-1}\right).} \end{array} $$ Arguing as in \cite[Proposition 5.1]{BBBF}, for the bulk part we have $$ \begin{array}{ll} \displaystyle{\limsup_{k \to \infty}\int_Q Qf(z_{n,k}(x), \nabla w_{n,k}(x))dx = \int_Q Qf(v(y), 0)dy+\int_Q Qf^\infty (v_n(y),\nabla u_n(y))dy,} \end{array} $$ and for the surface term $$ \begin{array}{ll} \displaystyle{\int_{Q \cap J_{z_{n,k}}} g(z_{n,k}^+(x), z_{n,k}^-(x), \nu_{n,k}(x))d {\cal H}^{N-1}} \displaystyle{\leq \int_{Q \cap J_{v_n}} g(v_n^+(y), v_n^-(y), \nu_n(y))d {\cal H}^{N-1}(y).} \end{array} $$ Putting together the estimates for bulk and surface terms and exploiting \eqref{5.9BBBF}, we obtain that $$ \begin{array}{ll} \displaystyle{{\cal F}(v,u; Q) \leq \limsup_{n\to \infty} \left(\int_Q Qf(v, 0)dx + \int_Q Qf^\infty(v_n(y), \nabla u_n(y)) dy \right.}\\ \\ \displaystyle{\left.+\int_{Q \cap J_{v_n}} g(v_n^+(y), v_n^-(y), \nu_n(y))d {\cal H}^{N-1} \right) = \int_Q Qf(v(x), 0)dx +K_3(a,b,c,d, e_N)}\\ \\ \displaystyle{=\frac{Qf(a,0)+ Qf(b,0)}{2}+ K_3(a,b,c,d,e_N).} \end{array} $$ In order to consider sets $A= a + \lambda Q$ with $a \in \mathbb R^N$ and $\lambda>0$, we define $$ \displaystyle{(Qf)_\lambda(b,B):= Qf\left(b, \frac{B}{\lambda}\right), \;\;\; g_\lambda(\xi, \zeta, \nu):=\frac{1}{\lambda}g(\xi, \zeta,\nu)} $$ and for every $E \subset \Omega$, $$ \begin{array}{ll} \displaystyle{{\cal F}_\lambda(v,u; E):=
\inf_{\{(v_n, u_n)\}} \left\{ \liminf_{n \to \infty} \left(\int_E (Qf)_\lambda(v_n(x), \nabla u_n(x))dx + \int_{E \cap J_{v_n}} g_\lambda(v_n^+(x), v_n^-(x), \nu_n(x)) d {\cal H}^{N-1} \right):\right.} \\ \\ \displaystyle{ (v_n, u_n) \in SBV_0(E;\mathbb R^m) \times W^{1,1}(E;\mathbb R^d), (v_n, u_n)\to (v,u) \hbox{ in }L^1(E;\mathbb R^{m+d})\Big\}.}\end{array} $$ It is easily seen that for every $(v,u) \in L^1(\Omega;\mathbb R^{m+d})$ we have $$ \displaystyle{{\cal F}(v,u; A)= \lambda^N {\cal F}_\lambda (v_\lambda, u_\lambda; Q)}, $$ where $$ \displaystyle{v_\lambda(x):=v\left(\frac{x-a}{\lambda}\right)}, \;\; \displaystyle{u_\lambda(x):=u\left(\frac{x-a}{\lambda}\right)}. $$ Since $(Qf)^\infty_\lambda = \frac{1}{\lambda}Qf^\infty$, by the definition of $K_3$ for $f_\lambda$ and $g_\lambda$ we have that $(K_3)_\lambda(a,b,c,d,\nu)= \frac{1}{\lambda} K_3(a,b,c,d,\nu).$ By the definition of $u_\lambda$ and $v_\lambda$ we have that $$ v_\lambda= \left\{ \begin{array}{ll} a \hbox{ if }x_N >0,\\ b \hbox{ if }x_N <0, \end{array} \right.\;\;\;\;\;\; u_\lambda= \left\{ \begin{array}{ll} c \hbox{ if }x_N >0,\\ d \hbox{ if }x_N <0. \end{array} \right. $$ So by the previous case it results that $$ \displaystyle{{\cal F}(v,u; A)= \lambda^N {\cal F}_\lambda (v_\lambda, u_\lambda; Q) \leq \lambda^N \left( \displaystyle{\frac{Qf_\lambda(a,0)+ Qf_\lambda(b,0)}{2}+ (K_3)_\lambda(a,b,c,d,e_N)}\right).} $$ \item[b)] Now let $U:=(v,u)$ be as in $a)$ and let $A$ be any open set. The proof of this step is identical to \cite[Section 5, Step 3, case 1, b)]{FM2}. Indeed it is enough to apply the same strategy, replacing $u$ and $K$ in \cite{FM2} by $U$ and $K_3$ respectively herein, obtaining \begin{equation}\label{formulasurface} \displaystyle{\mathcal{F}\left( v,u;A\right)\leq \int_{A}Qf\left( v\left( x\right) ,0\right) dx+\int_{J_U \cap A}K_3\left( a,b,c,d,\nu\right) d\mathcal{H}^{N-1}.} \end{equation} \item[c)] Now suppose that $U$ has a polygonal interface, i.e.
$U=\left( a,c\right) \chi_{E}+\left( b,d\right) \left( 1-\chi_{E}\right)$, where $E$ is a polyhedral set, i.e., $E$ is a bounded strongly Lipschitz domain and $\partial E=H_{1}\cup H_{2}\cup\dots\cup H_{M}$, with the $H_{i}$ closed subsets of hyperplanes of the type $\left\{ x\in\mathbb{R}^{N}:x\cdot\nu_{i}=\alpha_{i}\right\}.$ The details of the proof are omitted since they are very similar to \cite[Section 5, Step 3, case 1, c)]{FM2}. We just observe that, given an open set $A$ contained in $\Omega$, the argument relies on an inductive procedure on $I:=\left\{ i\in\left\{ 1,\dots,M\right\} :\mathcal{H}^{N-1}\left( H_{i}\cap A\right) >0\right\}$, starting from the case $\operatorname*{card}I=0$, when $u\in W^{1,1}\left( A;\mathbb{R}^{d}\right)$ and $v\in SBV_0(A;\mathbb R^m)\cap L^\infty(A;\mathbb R^m)$, for which it suffices to consider $u_{n}=u$ and $v_{n}=v$, with \eqref{formulasurface} reducing to \[ \mathcal{F}\left( v,u;A\right) \leq\int_{A}Qf\left(v\left( x\right) ,0\right) dx. \] The case $\operatorname*{card}I=1$ was studied in part $b)$, where $E$ is a large cube so that $J_U \cap\Omega$ reduces to the flat interface $\left\{ x\in\Omega:x\cdot\nu=0\right\}.$ Then the induction step, which first assumes that \eqref{formulasurface} is true if $\operatorname*{card}I=k,~k\leq M-1$, and then proves that it is still true if $\operatorname*{card}I=k+1$, develops exactly as in \cite[Proposition 5.1, Step 2, c)]{BBBF}, the only difference being that the slicing method used to connect the sequences across the interfaces relies on the same techniques as Lemma \ref{Lemma4.1FM}, referred to more general open sets than cubes (cf. also \cite[Section 5, Step 3, case 1, c]{FM2}). Thus one can conclude that $$ \begin{array}{ll} {\cal F}(v,u; A)\leq \displaystyle{ \int_A Qf(v(x), 0)dx + \int_{J_U \cap A} K_3(a,b,c,d,\nu)d {\cal H}^{N-1}.} \end{array} $$ \item[d)] If $E$ is an arbitrary set of finite perimeter, the step develops in strong analogy with \cite[Section 5, Step 3, case 1, f)]{FM2}.
Essentially, exploiting Proposition \ref{propK3} (b) and the approximation via polyhedral sets with finite perimeter as in \cite[Lemma 3.1]{B}, an application of the Lebesgue monotone convergence theorem gives $$ \displaystyle{{\cal F}(v,u; A) \leq \int_A Qf(v(x), 0)dx + \int_{A \cap J_U}K_3(a,b,c,d,\nu)d{\cal H}^{N-1}}. $$ This last inequality, together with Lemma \ref{measure}, yields $$ \displaystyle{{\cal F}(v,u; J_{(v,u)}) \leq \int_{J_{(v,u)}} K_3(a,b,c,d, \nu)d {\cal H}^{N-1},} $$ \noindent which gives \eqref{claimjump} when $U\equiv (v,u)=(a,c)\chi_E + (b,d)(1-\chi_E)$ with $E$ a set of finite perimeter. \end{enumerate} {\it Case 2-} Arguing as in \cite[Section 5, Step 3, case 2]{FM2}, we refer to \cite[Proposition 4.8, Step 1]{AMT}, and clearly we obtain, for every $(v,u)\in BV(\Omega;T) \times BV(\Omega;T')$ with $T$ and $T'$ finite subsets of $\mathbb R^m$ and $\mathbb R^d$ respectively, $$ \displaystyle{{\cal F}(v,u; A)= {\cal F}(v,u; A \cap J_{(v,u)}) \leq \int_{J_{(v,u)}}K_3(v^+, v^-, u^+, u^-, \nu_{v,u}(x))d {\cal H}^{N-1}(x).} $$ {\it Case 3-} For $U\equiv(v,u)\in (SBV_0(\Omega;\mathbb R^m) \times BV(\Omega;\mathbb R^d))\cap L^\infty(\Omega;\mathbb R^{m+d})$, the proof develops analogously to \cite[Proposition 4.8, Step 2]{AMT} and we add some details for the reader's convenience. First we observe that the jump set $J_U \equiv J_{(v,u)}$ can be decomposed as $(J_{u} \setminus J_v) \cup (J_v \setminus J_u) \cup (J_u\cap J_v)$, recalling that these sets are mutually disjoint and the tangent hyperplanes to $J_u$ and $J_v$ coincide up to a set of ${\cal H}^{N-1}$-measure $0$. Let $A \in {\cal A}(\Omega)$ be such that $A \supset J_U$; we assume $U(x) \in [0,1]^{m+d}$ for a.e. $x \in A$. For every $h \in \mathbb N$, $h \geq 2$, it is possible to define the set $B_h:= (A \setminus J_U) \cup \{ x \in J_U : |U^+(x)-U^-(x)| \leq \frac{1}{4(m+d)h}\}$, and to define the sequence $\{U_h\}\equiv \{(v_h, u_h)\}$ according to \cite[Proposition 4.8, Step 2]{AMT}.
Observe that $J_{v_h}\subset J_v$. Then, by Step 2, we have that \begin{equation}\label{Cantor0} \begin{array}{ll} \displaystyle{{\cal F}(v,u;A)\leq \liminf_{h \to \infty} {\cal F}(v_h, u_h;A)} \displaystyle{=\liminf_{h \to \infty} \left(\int_A Qf(v_h, 0)dx + \int_A Qf^\infty \left(v_h, \frac{d D^c u_h}{d |D^c u_h|}\right) d |D^c u_h| \right.}\\ \\ \qquad\qquad\qquad\qquad\displaystyle{\left. +\int_{A\cap (J_{u_h}\cup J_{v_h})} K_3(v_h^+, v_h^-, u_h^+, u_h^-, \nu_{v_h, u_h})d {\cal H}^{N-1} \right).} \end{array} \end{equation} We restrict our attention to the surface integral. Clearly, $$ \begin{array}{ll} \displaystyle{\int_{A\cap (J_{u_h}\cup J_{v_h})} K_3(v_h^+, v_h^-, u_h^+, u_h^-, \nu_{v_h, u_h})d {\cal H}^{N-1}= \int_{A\cap (J_{u_h}\cup J_{v_h})\cap B_h} K_3(v_h^+, v_h^-, u_h^+, u_h^-, \nu_{v_h, u_h})d {\cal H}^{N-1}}\\ \\ \displaystyle{+ \int_{A\cap (J_{u_h}\cup J_{v_h})\cap (A \setminus B_h)} K_3(v_h^+, v_h^-, u_h^+, u_h^-, \nu_{v_h, u_h})d {\cal H}^{N-1}.} \end{array} $$ By the decomposition of the jump set $J_{(v_h, u_h)}$, Proposition \ref{propK3} d), and the fact that $J_{v_h}\subset J_v$, the same type of estimates as in \cite[page 300]{AMT} entail (with the constant $C$ varying from place to place) \begin{equation}\label{stimeuhvh} \begin{array}{ll} \displaystyle{\int_{A\cap (J_{u_h}\cup J_{v_h})\cap B_h} K_3(v_h^+, v_h^-, u_h^+, u_h^-, \nu_{v_h, u_h})d {\cal H}^{N-1}= \int_{A\cap (J_{u_h}\setminus J_{v_h})\cap B_h} K_3(v_h^+, v_h^-, u_h^+, u_h^-, \nu_{v_h, u_h})d {\cal H}^{N-1}}\\ \\ \displaystyle{ +\int_{A\cap (J_{v_h}\setminus J_{u_h})\cap B_h} K_3(v_h^+, v_h^-, u_h^+, u_h^-, \nu_{v_h, u_h})d {\cal H}^{N-1}+ \int_{A\cap J_{u_h}\cap J_{v_h} \cap B_h} K_3(v_h^+, v_h^-, u_h^+, u_h^-, \nu_{v_h, u_h})d {\cal H}^{N-1}}\\ \\ \displaystyle{\leq C \int_{A\cap (J_{u_h}\setminus J_{v_h})\cap B_h} |u_h^+-u_h^-|d {\cal H}^{N-1}+ C \int_{A\cap (J_{v_h}\setminus J_{u_h})\cap
B_h} (|v_h^+- v_h^-| +1) d {\cal H}^{N-1}}\\ \\ \displaystyle{+ C \int_ {A\cap J_{u_h}\cap J_{v_h}\cap B_h} \left( |v_h^+- v_h^-| + | u_h^+- u_h^-| +1 \right)d {\cal H}^{N-1}}\\ \\ \displaystyle{\leq 2C (m+d) |Du| (A \cap B_h)+ C (m+d)|Dv|(A \cap B_h) + C{\cal H}^{N-1}(J_v \cap B_h\cap A).} \end{array} \end{equation} Moreover, by Proposition \ref{propK3} c), d) and the reverse Fatou lemma we have $$ \displaystyle{ \int_ {(J_{v_h}\cup J_{u_h})\cap (A \setminus B_h)} K_3(v_h^+, v_h^-, u_h^+, u_h^-, \nu_{(v_h, u_h)})d {\cal H}^{N-1}\leq \int_ {A\cap (J_v\cup J_u)} K_3(v^+, v^-, u^+, u^-, \nu_{(v, u)})d {\cal H}^{N-1}.} $$ Clearly, taking the limit as $h \to \infty$, from the above inequality and \eqref{stimeuhvh} we may conclude that $$ \begin{array}{ll} \displaystyle{{\cal F}(v,u;A) \leq \int_{A \cap (J_v \cup J_u)}K_3(v^+, v^-, u^+, u^-, \nu_{(v,u)})d{\cal H}^{N-1} } \\ \\\displaystyle{+C \left(|Du| (A \setminus (J_v \cup J_u))+ |Dv|(A \setminus (J_u \cup J_v))\right) +\int_{A} Qf(v, 0)dx, } \end{array} $$ where we have exploited the fact that the Cantor term in \eqref{Cantor0} is $0$, from the construction of the $u_h$, and $\displaystyle{\liminf_{h \to \infty}{\cal H}^{N-1}(J_v \cap B_h\cap A)= {\cal H}^{N-1}(J_v \cap (A \setminus (J_u \cup J_v))) =0}$. Now, since ${\cal F}(v,u;\cdot )$ is a Radon measure, the above inequality holds for every Borel set $B$, in particular for the set $B= A \cap (J_v \cup J_u)$, and this gives $$ \displaystyle{{\cal F}(v,u;J_v\cap J_u) \leq \int_{J_v\cap J_u} K_3(v^+, v^-, u^+, u^-, \nu_{(v,u)})d {\cal H}^{N-1}.} $$ This concludes the proof of Step 2 when $(v,u) \in SBV_0(\Omega;\mathbb R^m)\times BV(\Omega;\mathbb R^d)\cap L^\infty(\Omega;\mathbb R^{m+d})$. The general case $(v,u) \in SBV_0(\Omega;\mathbb R^m)\times BV(\Omega;\mathbb R^d)$ follows from (iii) in Remark \ref{asinGlobalMethodBFLM} (cf. \cite[Section 5, Step 4]{FM2} and \cite[Theorem 4.9]{AMT}).
\end{proof} \begin{proof}[Proof of Theorem \ref{mainthmgen}] It follows from Theorems \ref{lsctheorem} and \ref{thupperbound}. \end{proof} \begin{remark}\label{specializeK3} We observe that, as can easily be conjectured from the proof of Theorems \ref{lsctheorem}, Step 2, and \ref{thupperbound}, Step 3, Case 3, i) and ii), $K_3$ admits the following equivalent representation: \begin{itemize} \item[on $J_u\setminus J_v$] $K_3(a,a,c,d,\nu)=Qf^\infty(a,(c-d)\otimes \nu)$, where $Q f^\infty$ represents the recession function of the quasiconvexification of $f$ as in Remark \ref{propfinfty}. In fact, one inequality is trivial by Definition \ref{K3}, while the other can be obtained through Proposition \ref{prop3.5BBBF}, invoking the quasiconvexity and the growth properties of $Qf^\infty (a, \cdot)$ (cf. Remark \ref{propfinfty}) and arguments analogous to the ones leading to \cite[formula (5.84)]{AFP2}. \item[on $J_v \setminus J_u$] $K_3(a,b,c,c,\nu)= {\cal R}g(a,b,\nu)$, where ${\cal R}g$ represents the $BV$-elliptic envelope of $g$, namely the greatest $BV$-elliptic function less than or equal to $g$, which under the assumptions $(G_1)-(G_3)$ admits the representation \begin{equation}\label{Rg} {\cal R}g(a, b,\nu) = \inf\left\{\int_{J_w\cap Q_\nu} g(w^+, w^-, \nu) d{\cal H}^{N-1} : w \in SBV_0(Q_\nu;\mathbb R^m)\cap L^\infty(Q_\nu; \mathbb R^m), w = v_0 \hbox{ on }\partial Q_\nu\right\}, \end{equation} as in \cite{BDV}, \cite{CF}, \cite{BFLM}, where $v_0$ is defined as in \eqref{vab}. This is a consequence of \eqref{K3} and \eqref{Rg}. \end{itemize} We observe that the above characterizations of $K_3$ could be deduced directly by reproducing the proofs of the lower bound and the upper bound for Theorem \ref{mainthmgen}, for the jump part on the sets $J_u\setminus J_v$ and $J_v\setminus J_u$, respectively. \end{remark} \section{Applications}\label{appl} This section is devoted to the proof of Theorem \ref{mainthm}, which is very similar to that of Theorem \ref{mainthmgen}.
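Before entering the proof, we recall for the reader's convenience the notion of recession function used throughout; up to the precise conventions adopted in Remark \ref{propfinfty} and in \eqref{QVinfty}, for a function with linear growth it is given by the standard formula
\[
QV^{\infty}\left(q,\xi\right):=\limsup_{t\rightarrow+\infty}\frac{QV\left(q,t\xi\right)}{t},\qquad q\in T,\ \xi\in\mathbb{R}^{d\times N},
\]
a function which is positively $1$-homogeneous and quasiconvex in $\xi$; the same formula, with $QV$ replaced by $Qf$, defines the density $Qf^{\infty}$ appearing in Remark \ref{specializeK3}.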
In the proof below we replace Lemma \ref{Lemma4.1FM} and Proposition \ref{propK3} by Lemma \ref{Lemma4.1FMchi} and Proposition \ref{propK2}, respectively. Having in mind the application that we will describe in more detail in Remark \ref{remmainthm}, we state the following lemma in more generality, but in order to prove Theorem \ref{mainthm} we will consider $m=1$ and $T=\{0,1\}$. Let $T\subset \mathbb R^m$ be a finite set and let \begin{equation} \label{Vgfr} V:T\times\mathbb{R}^{d\times N}\rightarrow(0,+\infty) \hbox{ and } g: T \times T \times S^{N-1} \to[0, +\infty[ \end{equation} satisfy $(F_1)$ - $(F_4)$ and $(G_1)$ - $(G_3)$, respectively, and denote by ${\cal A}_{fr}$ the set defined in \eqref{AFR}, where the range $\{0,1\}$ is replaced by $T$. For simplicity we will consider $\nu=e_N$ and consequently $Q_\nu=Q= [0,1]^N$. \begin{lemma}\label{Lemma4.1FMchi} Let $T \subset \mathbb R^m$ be a finite set, and let \[ v_0\left( y\right) :=\left\{ \begin{array}[c]{lll} a & & \text{if }y_{N}>0,\\ b & & \text{if }y_{N} < 0, \end{array} \right.\qquad u_{0}\left( y\right) :=\left\{ \begin{array}[c]{lll} c & & \text{if }y_{N}>0,\\ d & & \text{if }y_{N}< 0. \end{array} \right.
\] Let $\left\{ v_{n}\right\} \subset BV\left( Q;T \right) $ and $\{u_{n}\} \subset W^{1,1}\left( Q;\mathbb{R}^{d}\right) $ be such that $v_n \to v_0$ in $L^{1}\left( Q;\mathbb R^m \right) $ and $u_n\to u_0$ in $L^{1}\left( Q;\mathbb{R}^{d}\right) .$ If $\rho$ is a mollifier and $\rho_{n}:=n^{N}\rho\left( nx\right) ,$ then there exists a sequence of functions $\left\{ \left( \zeta_{n},\xi_{n}\right) \right\} \in \mathcal{A}_{fr}\left( a,b,c,d,e_{N}\right) $ such that \[ \zeta_{n}=v_0\text{ on }\partial Q,~\zeta_{n}\rightarrow v_0\text{ in }L^{1}\left( Q;\mathbb R^m\right), \] \[ \xi_{n}=\rho_{i\left( n\right) }\ast u_{0}\text{ on }\partial Q,~~\ \ \xi _{n}\rightarrow u_{0}\text{ in }L^{1}\left( Q;\mathbb{R}^{d}\right) \] and \begin{equation}\label{eqlemma4.7} \begin{array}{ll} \displaystyle{\underset{n\rightarrow\infty}{\lim\sup}\left( \int_{Q}QV\left( \zeta_{n},\nabla\xi_{n}\right) dx+\int_{J_{\zeta_n}\cap Q}g(\zeta_n^+, \zeta_n^-, \nu_{\zeta_n})d {\cal H}^{N-1}\right)} \\ \\ \displaystyle{\leq \underset{n\rightarrow\infty}{\lim\inf}\left( \int_{Q}QV\left( v_{n},\nabla u_{n}\right) dx+\int_{J_{v_n}\cap Q}g(v_n^+, v_n^-, \nu_{v_n})d {\cal H}^{N-1} \right),} \end{array} \end{equation} where $QV$ represents the quasiconvex envelope of $V$ as in \eqref{Qfbar}. \end{lemma} We omit the proof since it is entirely similar to the one of Lemma \ref{Lemma4.1FM}. We just observe that there is no need for the first step, where a truncation argument for $v$ was built, since in the present context we deal with functions with finite range. \medskip The following result, which contains the properties satisfied by $K_2$ in \eqref{K2}, is analogous to Proposition \ref{propK3} and is stated for the reader's convenience. \begin{proposition}\label{propK2} Let $V$ be as in \eqref{Vbar}. Let $K_2$ be the function introduced in \eqref{K2}. The following properties hold.
\begin{enumerate} \item[a)] $\left\vert K_2\left( a,b,c,d,\nu\right) -K_2\left( a',b',c',d',\nu\right) \right\vert \leq C\left( \left\vert a-a'\right\vert +\left\vert b-b'\right\vert +\left\vert c-c'\right\vert +\left\vert d-d'\right\vert \right) $ for every $\left( a,b,c,d,\nu\right) ,$ $\left( a',b',c',d',\nu\right) \in\{0,1\} \times \{0,1\} \times \mathbb R^{d}\times \mathbb R^{d}\times S^{N-1};$ \item[b)] $\nu\longmapsto K_2( a,b,c,d,\nu) $ is upper semicontinuous for every $( a,b,c,d) \in \{0,1\}\times \{0,1\}\times \mathbb R^{d}\times \mathbb R^{d};$ \item[c)] $K_2$ is upper semicontinuous in $\{0,1\}\times \{0,1\}\times \mathbb R^{d}\times \mathbb R^{d}\times S^{N-1};$ \item[d)] $K_2\left( a,b,c,d,\nu\right) \leq C\left( \left\vert a-b\right\vert +\left\vert c-d\right\vert\right) $ for every $\nu\in S^{N-1}.$ \end{enumerate} \end{proposition} \begin{proof}[Proof of Theorem \ref{mainthm}] The arguments develop as in the proof of Theorem \ref{mainthmgen}, essentially replacing $f$ by $V$ in \eqref{Vbar}, $v$ by $\chi$, the surface integral by $|D \chi|$, and using the blow-up argument introduced in \cite{FM1}; thus we will present just the main differences. \noindent {\bf Lower bound.} Let $(\chi,u) \in BV(\Omega;\{0,1\})\times BV(\Omega;\mathbb R^d)$. Without loss of generality we may assume that for every $\{(\chi_n, u_n)\} \subset BV(\Omega;\{0,1\})\times BV(\Omega;\mathbb R^d)$ converging to $(\chi,u)$ in $L^1(\Omega; \{0,1\})\times L^1(\Omega;\mathbb R^d)$, $\displaystyle{\underset{n\rightarrow\infty}{\lim\inf}\left(\int_{\Omega} V\left( \chi_n,\nabla u_n\right)dx +|D \chi_n|(\Omega)\right)}$ is indeed a limit.
For every Borel set $B \subset \Omega$ define $$ \displaystyle{\mu_n(B):=\int_B V\left( \chi_n,\nabla u_{n}\right) dx + |D \chi_n|(B).} $$ The sequence $\{\mu_n\}$ behaves as in Theorem \ref{mainthmgen}, and its weak $\ast$ limit (up to a not relabelled subsequence) $\mu$ can be decomposed as in \eqref{mudecomposition} where, as in the remainder of the proof, $J_{(v,u)}$ has been replaced by $J_{(\chi,u)}$. Moreover we emphasize that we have been considering $(\chi,u)$ as a single field in $BV(\Omega; \mathbb R^{1+ d})$ and we have been exploiting the fact that $D^c(\chi,u)= (0, D^c u)$ (cf. Remark \ref{vmeas}). By the Besicovitch derivation theorem we deduce \eqref{BDT}. We claim that \begin{equation} \mu_{a}\left( x_{0}\right) \geq QV\left( \chi\left( x_{0}\right) ,\nabla u\left( x_{0}\right) \right) ,~\text{for }\mathcal{L}^{N}\text{-a.e.}~x_{0}\in\Omega, \label{lboundbulkchi} \end{equation} \begin{equation} \mu_{j}\left( x_{0}\right) \geq K_2\left( \chi^+(x_0),\chi^-(x_0),u^{+}\left( x_{0}\right) ,u^{-}\left( x_{0}\right) ,\nu_{(\chi,u)}\right) ,~\text{for }\mathcal{H}^{N-1}\text{-a.e.}~x_{0}\in J_{(\chi,u)}\cap\Omega, \label{lboundjumpchi} \end{equation} \begin{equation} \mu_{c}\left( x_{0}\right) \geq\left( QV\right) ^{\infty}\left( \chi\left( x_{0}\right) ,\frac{dD^{c}u}{d\left\vert D^{c}u\right\vert }\left( x_{0}\right) \right) \text{ for }\left\vert D^{c}u\right\vert \text{-a.e. }x_{0}\in\Omega. \label{lboundcantorchi} \end{equation} If \eqref{lboundbulkchi}-\eqref{lboundcantorchi} hold, then the lower bound inequality for Theorem \ref{mainthm} follows. \noindent\textbf{Step 1.} Observing that by Proposition \ref{continuityQfbar} $QV$ satisfies $(F_1)-(F_3)$, the proof of \eqref{lboundbulkchi} develops as in Step 1 of Theorem \ref{mainthmgen}, just applying \cite[formula (2.10) in Theorem 2.19]{FM2} to the functional $G: (\chi,u)\in W^{1,1}(\Omega;\mathbb R^{1+d}) \to \int_{\Omega}QV(\chi,\nabla u)dx$. \noindent\textbf{Step 2.
}The proof of \eqref{lboundjumpchi} is very similar to the one of \eqref{lboundjump}. Recall that $J_{( \chi,u) }=J_\chi\cup J_{u}$ and $\nu_{(\chi,u)}= \nu_\chi$ for every $(\chi,u) \in BV(\Omega;\{0,1\}) \times W^{1,1}(\Omega;\mathbb R^d)$. The same arguments of Step 2 in Theorem \ref{mainthmgen} allow us to fix $x_{0}\in J_{( \chi,u)}\cap\Omega$ such that \eqref{4.13}, \eqref{4.14}, \eqref{4.15}, \eqref{4.16} and \eqref{4.17} hold. Recall that we denote $Q_{\nu(x_0)}$ by $Q$ and we may choose $\varepsilon>0$ such that $\mu\left( \partial\left( x_{0}+\varepsilon Q\right) \right) =0.$ It follows that \begin{align*} \mu_{j}\left( x_{0}\right) & \geq\lim_{\varepsilon\rightarrow0^{+}}\lim_{n\rightarrow\infty}\frac{1}{\varepsilon^{N-1}}\left( \int_{x_{0}+\varepsilon Q}QV\left( \chi_n\left( x\right) ,\nabla u_{n}\left( x\right) \right) dx+|D \chi_n|(x_0+ \e Q) \right) \\ & =\lim_{\varepsilon\rightarrow0^{+}}\lim_{n\rightarrow\infty}\left(\varepsilon \int_{Q}QV\left( \chi_n\left( x_{0}+\varepsilon y\right) ,\nabla u_{n}\left( x_{0}+\varepsilon y\right) \right) dy +|D \chi_{n,\varepsilon}|\left(Q\right) \right) . \end{align*} Here and below, $\chi_{n,\varepsilon}, u_{n, \e}, \nu_{n,\e}$ and $ \chi_0, u_0$ are defined according to \eqref{vne} and \eqref{u0v0}. Since $(\chi_n,u_n)\rightarrow (\chi,u)$ in $L^{1}\left( \Omega;\mathbb{R}^{1+d}\right)$ we obtain \eqref{blabla} and \eqref{blabla2}, with $v_{n,\e}$ and $v_0$ replaced by $\chi_{n,\e}$ and $\chi_0$, respectively.
Thus \begin{align*} \mu_{j}\left( x_{0}\right) & \geq\lim_{\varepsilon\rightarrow0^{+}}\lim_{n\rightarrow\infty}\left(\int_{Q}QV^{\infty}\left( \chi_{n,\varepsilon }\left( y\right) ,\nabla u_{n,\varepsilon}\left( y\right) \right) dy+ |D\chi_{n, \e}|(Q)\right.\\ & \left.+\int_{Q}\left[\varepsilon QV\left( \chi_{n,\varepsilon}\left( y\right) ,\frac{1}{\varepsilon}\nabla u_{n,\varepsilon}\left( y\right) \right) -QV^{\infty}\left( \chi_{n,\varepsilon},\nabla u_{n,\varepsilon}\right)\right] dy \right). \end{align*} By Remark \ref{propfinfty} $(v)$ we can argue as in the estimates \cite[(3.3)-(3.5)]{FM2}, obtaining \[ \mu_{j}\left( x_{0}\right) \geq\underset{\varepsilon\rightarrow0^{+}}{\lim\inf}~\underset{n\rightarrow\infty}{\lim\inf}\left( \int_{Q}QV^{\infty }\left( \chi_{n,\varepsilon}\left( y\right) ,\nabla u_{n,\varepsilon }\left( y\right) \right) dy+ |D \chi_{n,\e}|(Q) \right) . \] \noindent Applying Lemma \ref{Lemma4.1FMchi} with $QV$ replaced by $QV^\infty$, $T\subset \mathbb R^m$ replaced by $\{0,1\}$, the surface integral replaced by the total variation, and $K_{fr}$ and ${\mathcal A}_{fr}$ replaced by $K_2$ and ${\mathcal A}_2$ respectively, and using Remark \ref{propfinfty}, we may find $\left\{ \left( \zeta_{k},\xi_{k}\right) \right\} \in\mathcal{A}_2\left( \chi^+(x_0),\chi^-(x_0),u^{+}\left( x_{0}\right) ,u^{-}\left( x_{0}\right) ,\nu\left( x_{0}\right) \right) $ such that \begin{equation}\nonumber \displaystyle{\mu_{j}(x_0) \geq\lim_{k\rightarrow\infty}\left( \int_Q QV^{\infty}\left( \zeta_{k},\nabla\xi_{k}\right) dx+ |D \zeta_k|(Q) \right) \geq K_2\left( \chi^+(x_0), \chi^-(x_0),u^{+}\left( x_{0}\right) ,u^{-}\left( x_{0}\right) ,\nu\left( x_{0}\right) \right) }. \end{equation} \noindent\textbf{Step 3.} The proof of \eqref{lboundcantorchi} follows exactly as in Step 3 of Theorem \ref{lsctheorem}, namely applying \cite[formula (2.12) in Theorem 2.19]{FM2} to the functional $G$ introduced in Step 1 above, and this concludes the proof of the lower bound.
\medskip {\bf Upper Bound.} The proof of the upper bound develops in three steps as the one of Theorem \ref{thupperbound}. Furthermore, Proposition \ref{propqcx} can be readapted replacing $Qf$ by $QV$ and the surface integral by $|D\chi|$. \noindent {\bf Step 1.} For ${\cal L}^N$-a.e. $x_0 \in \Omega$, $x_0$ is a Lebesgue point for $U\equiv (\chi, u)$ such that \eqref{upper1} and \eqref{upper3} also hold for $QV$. In analogy with Theorem \ref{thupperbound}, Step 1, we apply, for every $\chi \in BV(\Omega;\{0,1\})$, the Global Method \cite[Theorem 4.1.4]{BFMGlobal} to the functional $G: (u,A) \in W^{1,1}(\Omega;\mathbb R^d)\times {\cal A}(\Omega) \to \int_A QV(\chi, \nabla u)dx$, to obtain an integral representation for the functional \eqref{auxrelax} for every $(u, A) \in BV(\Omega;\mathbb R^d)\times {\cal A}(\Omega)$. Moreover we can write \begin{equation}\nonumber \displaystyle{{\cal F}_{\cal OD}(\chi,u; A) \leq {\cal G}(u; A)+ |D \chi|(A).} \end{equation} Differentiating with respect to ${\cal L}^N$ we obtain $\displaystyle{\frac{d {\cal F}_{\cal OD}(\chi, u;\cdot)}{d {\cal L}^N} \leq V_0(x_0, \nabla u(x_0)), }$ where $V_0$ is the counterpart of $f_0$ in \eqref{f00}, with $Qf$ replaced by $QV$. Arguing as in the last part of Theorem \ref{thupperbound}, Step 1, applying Lemma \ref{Lemma0}, we deduce that $V_0(x_0,\xi_0) \leq QV(\chi(x_0), \xi_0)$ and this leads to the conclusion when $u \in BV(\Omega;\mathbb R^d)\cap L^\infty(\Omega;\mathbb R^d)$. \noindent {\bf Step 2.} The same type of argument as in Step 1 applies to the proof of the upper bound for the Cantor part. The Radon-Nikod\'ym theorem implies \eqref{CantorU} for every $U\equiv(\chi,u)\in BV(\Omega;\{0,1\})\times (BV(\Omega;\mathbb R^d)\cap L^\infty(\Omega;\mathbb R^d))$, with $|D^c u|$ and $\sigma$ mutually singular.
Moreover \eqref{cantor}, \eqref{ABrankone}, \eqref{as3.27BFMglobal} hold, the Global Method \cite[Theorem 4.1.4]{BFMGlobal} applies to \eqref{auxrelax} and a differentiation with respect to $|D^c u|$ at $x_0$ provides $\displaystyle{\frac{d {\cal F}_{\cal OD}(\chi, u; \cdot)}{d |D^c u|}(x_0) \leq h(x_0, a_u, \nu_u),}$ where $h(x,a,\nu)$ is given by \eqref{f0}. Remark \ref{propfinfty} applied to $QV^\infty$, Lemma \ref{Lemma0} and the same techniques employed in the last part of Theorem \ref{thupperbound}, Step 2, entail $$ h(x_0, a,\nu)\leq QV^\infty(\chi(x_0), a \otimes \nu), $$ and that concludes the proof of the Cantor part for $(\chi,u)\in BV(\Omega;\{0,1\})\times (BV(\Omega;\mathbb R^d)\cap L^\infty(\Omega;\mathbb R^d))$. \noindent{\bf Step 3.} We claim that \begin{equation}\label{claimjumpod} \displaystyle{ {\cal F}_{\cal OD}(U;J_U)\leq \int_{J_U}K_2(\chi^+,\chi^-,u^+, u^-, \nu_{(\chi, u)})d {\cal H}^{N-1},} \end{equation} for every $(\chi, u)\in BV(\Omega;\{0,1\})\times (BV(\Omega;\mathbb R^d)\cap L^\infty(\Omega;\mathbb R^d))$. The proof of \eqref{claimjumpod} is divided into three parts, according to the assumptions on the limit function $u$. Namely, \noindent {\it Case 1.} $U(x):=(1,c)\chi_E(x)+ (0,d)(1- \chi_E(x))$, with $P(E,\Omega)< +\infty$, \noindent {\it Case 2.} $u(x)= \sum_{i=1}^\infty c_i \chi_{E_i}(x)$, where $\{E_i\}_{i=1}^\infty$ forms a partition of $\Omega$ into sets of finite perimeter and $c_i \in \mathbb R^d$, \noindent {\it Case 3.} $u\in BV(\Omega;\mathbb R^d)\cap L^\infty(\Omega;\mathbb R^d)$. Concerning Case 1, we first consider the open unit cube $Q \subset \mathbb R^N$, and make the same assumptions on the target function $U$ as in Theorem \ref{thupperbound}, Step 3, Case 1. Then we can invoke an argument analogous to Proposition \ref{prop3.5BBBF}, without the need for truncation arguments such as those in Remark \ref{applicationofprop3.4}.
This guarantees that there exists $(\chi_n, u_n) \in {\cal A}_2(1,0,c,d,e_N)$ such that $(\chi_n, u_n)\to (\chi, u)$ in $L^1(Q;\mathbb R^{1+d})$ and \begin{equation}\nonumber \displaystyle{K_2(1,0, c,d,e_N) =\lim_{n \to \infty}\left(\int_Q QV^\infty(\chi_n(x), \nabla u_n(x)) dx + |D \chi_n|(Q)\right).} \end{equation} Then the proof develops exactly as in Theorem \ref{thupperbound}, just taking into account that the sequence $z_{n,k}$ therein is built by replacing $a$, $b$ and $v_n$ by $1$, $0$ and $\chi_n$ respectively, thus leading to $$ \displaystyle{{\cal F}_{OD}(\chi, u; Q)\leq \frac{QV(1,0)+ QV(0,0)}{2} + K_2(1,0, c,d ,e_N).} $$ Concerning a more general set $A$ than $Q$, as in Theorem \ref{thupperbound}, Step 3, Case 1, we achieve the following representation: $$ \displaystyle{{\cal F}_{\cal OD}(\chi, u; A)\leq \int_A QV(\chi(x), 0) dx + \int_ {J_U} K_2(1,0,c,d,\nu)d {\cal H}^{N-1}.} $$ Then the strategy follows b), c), d) in Theorem \ref{thupperbound}, Step 3, Case 1, hence we obtain $$ \displaystyle{{\cal F}_{\cal OD}(\chi, u; J_{(\chi, u)}) \leq \int_{J_{(\chi, u)}} K_2(1,0,c,d,\nu)d {\cal H}^{N-1}.} $$ \noindent {\it Case 2.} and {\it Case 3.} By the properties of $K_2$ in Proposition \ref{propK2}, the proof develops as in \cite[Proposition 4.8, Case 2 and Case 3]{AMT}. This concludes the proof of the upper bound when $(\chi, u)\in BV(\Omega;\{0,1\}) \times (BV(\Omega;\mathbb R^d)\cap L^\infty(\Omega;\mathbb R^d))$. The general case, since $\chi \in BV(\Omega;\{0,1\})$ can be kept fixed, is identical to \cite[Section 5, Step 4]{FM2}, where the truncation procedure involves just $u$. Putting together the {\bf Lower bound} and the {\bf Upper bound} we achieve the desired result. \end{proof} \begin{remark}\label{specializeK2} We observe that, as in Remark \ref{specializeK3}, $K_2$ admits the following equivalent representation: \begin{itemize} \item[i)] on $J_u\setminus J_\chi$, $K_2(a,a,c,d,\nu)=QV^\infty(a,(c-d)\otimes \nu)$, with $Q V^\infty$ as in \eqref{QVinfty}.
\item[ii)] on $J_\chi \setminus J_u$, $K_2(a,b,c,c,\nu)= |(a-b)\otimes \nu|$, i.e. $\displaystyle{\int_{J_\chi}K_2(\chi^+, \chi^-, u^+, u^+, \nu)d {\cal H}^{N-1}= |D \chi|(\Omega)}$. \item[iii)] Note that $ \displaystyle{K_2(a,b,c,d,\nu)\geq \inf\left\{\int_{Q_\nu}\left(QV^\infty(w(x), \nabla u(x)) + |\nabla w(x)|\right)dx: (w,u) \in {\cal A}(a,b,c,d,\nu)\right\},} $ where the latter infimum is the density $K(a,b,c,d,\nu)$ first introduced in \cite{FM2} (cf. also \cite[formula (5.83)]{AFP2}) and \begin{equation}\nonumber \begin{array}{ll} \mathcal{A}\left( a,b,c,d,\nu\right) :=\left\{ \left(w,u\right) \in W^{1,1}\left( Q_{\nu};\mathbb{R}^{1+ d}\right) :\right. (w(y),u\left( y\right)) =(a,c)\text{ if }y\cdot\nu=\frac{1}{2}, \\ \\ (w(y),u\left( y\right)) =(b,d)\text{ if }y\cdot\nu=-\frac{1}{2}, \left. (w, u)\text{ are }1\text{-periodic in the }\nu_{1},\dots,\nu_{N-1} \hbox{ directions}\right\}. \end{array} \end{equation} \noindent On the other hand, if the $W_i$, $i=1,2$, in \eqref{H1} are proportional (as in the model presented in \cite{AB}), i.e. $W_2 = \alpha W_1$ with $\alpha >1$, and $V$ is taken as in \eqref{Vbar}, then, since $QV^\infty(q,z)=q QW_1^\infty(z) + \alpha(1-q) QW_1^\infty(z)$ for every $q \in [0,1]$, we claim that $K_2$ is equal to the density $K$ of \cite{FM2}.
Indeed, assuming without loss of generality that $W_1$ is quasiconvex and positively $1$-homogeneous, it is enough to observe that for every $(w,u) \in {\cal A}(a,b,c,d,\nu)$, $$ \begin{array}{ll} \displaystyle{K(1,0,c,d,\nu)\geq \int_{Q_\nu}\left(w(x) W_1(\nabla u(x))+ \alpha (1-w(x)) W_1(\nabla u(x)) + |\nabla w(x)| \right)dx \geq \int_{Q_\nu} (W_1(\nabla u(x)) + 1)dx, } \end{array} $$ where we have used the fact that $\alpha + (1-\alpha)w = 1+(\alpha-1)(1-w)\geq 1$ (up to a truncation, we may assume $0\leq w\leq 1$) and $$\displaystyle{\int_{Q_\nu}|\nabla w|\,dx \geq \left|\int_{Q_\nu}\nabla w \, dx\right| = \left|\int_{\partial Q_\nu} w \otimes \nu(x)\, d {\cal H}^{N-1}\right| = 1.} $$ Taking a sequence of characteristic functions $\{\chi_\e\}$, admissible for $ {\cal A}_2(1,0,c,d,\nu)$ in \eqref{AFR}, whose value is $1$ on a strip of the cube orthogonal to $\nu$ and of thickness $1-\e$, it results that $$ \begin{array}{ll} \displaystyle{\int_{Q_\nu}W_1(\nabla u(x))dx + 1 = \lim_{\e \to 0^+} \left[\int_{Q_\nu}\left(\chi_\e W_1 (\nabla u(x))+ \alpha (1-\chi_\e) W_1(\nabla u(x))\right) d x + |D \chi_\e|(Q_\nu)\right] }\\ \\ \displaystyle{\geq K_2(1,0,c,d,\nu),} \end{array} $$ and this proves our claim. Observe also that if $\alpha \in (0,1)$, then the result remains true: it is enough to express $W_1$ in terms of $W_2$. \end{itemize} \end{remark} \medskip As emphasized in \cite[Remark 2.4]{AB}, one can consider mixtures of more than two conductive materials; hence we observe that Theorem \ref{mainthm} can be extended with minor changes to these models, leading to formula \eqref{fr} in the remark below.
\begin{remark}\label{remmainthm} Let $T$ be a finite subset of $\mathbb R^m$. Theorem \ref{mainthm} also applies to energies of the type $F_{fr}:L^{1}(\Omega;T)\times L^{1}(\Omega;\mathbb{R}^{d})\times \mathcal{A}\left( \Omega\right) \rightarrow\lbrack0,+\infty]$ defined by \begin{equation} F_{fr}(v,u;A):=\left\{ \begin{array}[c]{lll} {\displaystyle\int_{A}} V\left(v, \nabla u\right) dx+ \displaystyle{\int_{{J_v}\cap A}}g(v^+, v^-,\nu_v)d {\cal H}^{N-1} & & \text{in }BV(A;T)\times W^{1,1}(A;\mathbb{R}^{d}),\\ +\infty & & \text{otherwise.} \end{array} \right. \label{FFR} \end{equation} Indeed, consider the relaxed localized energy of \eqref{FFR} given by \begin{equation}\nonumber \begin{array}[c]{c} \mathcal{F}_{fr}\left( v,u;A\right) :=\inf\left\{ \underset{n\rightarrow \infty}{\lim\inf} {\displaystyle\int_{A}}V\left( v_n,\nabla u_{n}\right) dx+ \displaystyle{\int_{J_{v_n} \cap A}}g(v_n^+, v_n^-, \nu_{v_n})d {\cal H}^{N-1}:\right. \\ \left. \left\{ (v_n, u_n)\right\} \subset BV(A;T) \times W^{1,1}\left( A;\mathbb{R}^{d}\right), (v_n, u_n) \to (v,u) \text{ in }L^1(A;T)\times L^1(A;\mathbb R^d) \right\}, \end{array} \end{equation} with $V$ and $g$ as in \eqref{Vgfr} satisfying $(F_1)- (F_4)$ and $(G_1)- (G_3)$, respectively.
Moreover define ${\overline F}_{fr}:BV(A;T)\times BV(A;\mathbb{R}^{d})\times\mathcal{A}\left( \Omega\right) \rightarrow\lbrack0,+\infty]$ as \begin{equation}\nonumber {\overline F}_{fr}\left( v,u;A\right) :=\int_{A}QV\left( v,\nabla u\right) dx+\int_{A}QV^{\infty}\left( v,\frac{dD^{c}u}{d\left\vert D^{c}u\right\vert }\right) d\left\vert D^{c}u\right\vert +\int_{J_{\left( v,u\right) }\cap A}K_{fr}\left( v^{+},v^{-},u^{+},u^{-},\nu\right) d\mathcal{H}^{N-1}, \end{equation} where $QV$ is the quasiconvex envelope of $V$ given in \eqref{Qfbar}, $QV^{\infty}$ is the recession function of $QV$, introduced in \eqref{QVinfty}, and \begin{equation} {K_{fr}(a,b,c,d,\nu):=\inf}\left\{ {\displaystyle\int_{Q_{\nu}}} QV^{\infty}(v,\nabla u(x))dx+\int_{J_v\cap Q_\nu}g(v^+, v^-, \nu_v)d {\cal H}^{N-1} :\left( v,u\right) {\in\mathcal{A}_{fr}(a,b,c,d,\nu)}\right\}, \label{K2FR} \end{equation} where ${\mathcal A}_{fr}$ is the set defined in \eqref{AFR}, with $\{0,1\}$ replaced by the finite set $T\subset \mathbb R^m$. Thus, we are led to the following representation: for every $(v,u)\in L^1(\Omega;T)\times L^1(\Omega;\mathbb{R}^{d})$, \begin{equation}\label{fr} \mathcal{F}_{fr}(v,u;A)=\left\{ \begin{array}{ll} {\overline F}_{fr}(v,u;A) &\hbox{ if } (v, u)\in BV(A;T) \times BV(A;\mathbb R^d), \\ \\ +\infty & \hbox{ otherwise.} \end{array} \right. \end{equation} \end{remark} \begin{remark}\label{K2K3} In general we cannot expect $K_3= K_{fr}$ since in \eqref{K2FR} the function $g$ is defined on $T \times T \times S^{N-1}$, with $T \subset \mathbb R^m$ and ${\rm card}(T)$ finite, while in \eqref{K3}, $g$ is defined on $\mathbb R^m \times \mathbb R^m \times S^{N-1}$. In particular, we recall that on $J_v\setminus J_u$, $K_3$ coincides with ${\cal R}g$, the $SBV$-elliptic envelope of $g$ as in \cite{BFLM}, while $K_{fr}$ in \eqref{K2FR} is given by the $BV$-elliptic envelope introduced by Ambrosio and Braides, cf. \cite[Definition 5.13]{AFP2}.
Analogously, it is easily seen that $K_2$ coincides with $|D \chi|$ on $J_\chi \setminus J_u$. \end{remark} \section*{Acknowledgements} This paper was written during various visits of the authors to the Departamento de Matem\'atica da Universidade de \'Evora and the Dipartimento di Ingegneria Industriale dell'Universit\`a di Salerno, whose kind hospitality and support are gratefully acknowledged. The authors are indebted to Irene Fonseca for having suggested this problem and for the many discussions on the subject. The work of both authors was partially supported by Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (Portuguese Foundation for Science and Technology) through CIMA-UE, UTA-CMU/MAT/0005/2009 and through the GNAMPA project 2013 `Funzionali supremali: esistenza di minimi e condizioni di semicontinuit\`a nel caso vettoriale'.
\section{Introduction} \label{sec:intro} \subsection{Social humans, social robots} \label{subsec:motivation} Humans are inherently social beings, spending a great deal of their time establishing a diverse range of social connections. Their social nature is not only demonstrated by their social behavior~\cite{homans1974social}, but also has a biological basis~\cite{frith2010social}. This social dimension prompts human beings to involuntarily ascribe social qualities even to non-human media, such as technological artifacts, often treating them similarly to how they would treat humans or other living beings~\cite{nass1994computers}. This disposition stems from the general human tendency to ascribe human-like qualities to non-human entities, called \textit{anthropomorphism}, which has been observed and demonstrated in several contexts~\cite{epley2007seeing}. These phenomena therefore make technologies capable of social interaction with humans unique technological innovations. In particular, \textit{social robots}, i.e., robots deliberately designed to interact with humans in a social way, open up a new paradigm for humans to communicate, interact, and relate to robotic technologies. The integration of a social dimension in the design of robots has generally followed two approaches. First, existing robotic technologies are being enhanced with social capabilities for more fluid interactions with humans. Second, social robots are being developed for new application areas where the social dimension is central, and beyond a mere interface. As a result of these approaches, social robots have been deployed in a wide variety of contexts, such as healthcare~\cite{broadbent2009acceptance}, education~\cite{belpaeme2018social}, companionship~\cite{dautenhahn2005robot}, and others (refer to Section~\ref{subsec:purpose} for a discussion of application areas).
They offer a spectrum of interactions that is being continuously enriched by researchers from a variety of disciplines. The expanding research field of \ac{HRI} reflects this observation. \ac{HRI} is a multidisciplinary field bringing together researchers from an eclectic set of disciplines, including robotics, computer science, engineering, \ac{AI}, machine learning, \ac{HCI}, design, art, animation, cognitive science, psychology, sociology, ethology, and anthropology~\cite{fong2002survey,murphy2010human, baxter2016characterising, alves2016psychological, eyssel2017experimental}. The multidisciplinarity inherent to this field of research provides contributions and advancements nurtured by scholars from different backgrounds in the conception, design, and implementation of social robots. In addition to development, \ac{HRI} aims to evaluate how well such robots perform or serve the purpose they were designed for, being concerned with proper evaluation, testing, and refinement of these technologies. The result is a rich multidisciplinary effort to create engaging robots that can sustain personalized interactions with humans, adapt to the task at hand and to the interaction flow, and also understand and model aspects pertaining to humans, such as affect and cognition~\cite{ho2010modelling, leite2013social}. In this chapter, we provide a framework for characterizing social robots that encompasses major aspects to consider when designing them and their interactions with humans. Our framework is focused on interactive robots that possess a social component in their design.
Specifically, we use the term ``social robots'' to denote ``socially interactive robots'' as defined by Fong et al.~\cite{fong2002survey}, namely robots that have one or more of the following abilities: (1) communicating using natural language or non-verbal modalities (such as lights, movements, or sound), (2) expressing affective behaviors and/or perceiving human emotions, (3) possessing a distinctive personality or character, (4) modeling social aspects of humans, (5) learning and/or developing social skills, and (6) establishing and maintaining social relationships~\cite{fong2002survey}. Our framework builds upon existing work within the field of \ac{HRI}, providing a holistic understanding of the state of the art, while aiming at unifying, clarifying, and extending key concepts to be considered in the design of social robots. Specifically, our framework comprises several dimensions we identified to be of major relevance to the design of social robots. We summarize the $7$ dimensions considered in Figure~\ref{fig:intro}. Some of these dimensions relate to the robot itself -- namely \textit{appearance}, \textit{social capabilities}, and \textit{autonomy/intelligence} --, others relate to the interaction -- namely \textit{proximity} and \textit{temporal profile} --, and the remaining ones relate to the context -- namely robot \textit{relational role} and \textit{purpose / application area}. We envision this framework to be used broadly in order to gain a better understanding of existing social robots, as well as to inform the design and development of future ones. \begin{figure*}[t] \centering \includegraphics[width=1.00\textwidth]{Figures/intro.pdf} \caption{Visual summary of the $7$ dimensions of our framework, positioned in relation to the robot, the interaction, and the context.
Each dimension will be further broken down and discussed separately in Section~\ref{sec:framework}.} \label{fig:intro} \end{figure*} \subsection{Brief summary of frameworks for characterizing social robots} \label{subsec:existing} Before outlining the content of our framework, it is useful to first look at existing frameworks for classifying social robots. In particular, existing taxonomies, such as those from Fong et al.~\cite{fong2002survey}, Yanco et al.~\cite{yanco2004classifying}, Shibata~\cite{shibata2004overview}, and Dautenhahn~\cite{dautenhahn2007socially}, are useful to get an understanding of different aspects that may be included in the design space of social robots in \ac{HRI} research. While this list of frameworks is not exhaustive, we chose these particular ones to base our framework on, as they provide a broad range of classifications and definitions that relate to the scope of this chapter. Among these, Fong et al.~\cite{fong2002survey} contributed a taxonomy of design methods and system components used to build socially interactive robots. These components include robot social capabilities, several design characteristics, and application domains. Additionally, Yanco et al.~\cite{yanco2004classifying} provided a framework that included elements of social robots' design, such as the role that a robot can have when interacting with humans, the types of tasks that robots can perform, different types of robot morphology, and the level of autonomy at which robots can operate. Similarly, Shibata~\cite{shibata2004overview} provided a taxonomy for the function and purpose of social robots by considering different ways of using them for psychological enrichment. In particular, Shibata classified human-robot interactions in terms of the duration of these interactions and in terms of design characteristics (e.g., robot's appearance, hardware, and software functionalities), accounting for culture-sensitive aspects.
Moreover, Dautenhahn~\cite{dautenhahn2007socially} focused on different evaluation criteria to identify requirements on social skills for robots in different application domains. The author identified four criteria, namely contact between the robot and the human (which can vary from no contact or remote contact to repeated long-term contact), the extent of the robot's functionalities (which can vary from limited to a robot that learns and adapts), the role of the robot (which can vary from machine or tool to assistant, companion, or partner), and the requirement of social skills that a robot needs to have in a given application domain (which can vary from not required/desirable to essential). The author further explains that each evaluation criterion should be considered on a continuous scale. Taken together, these classifications and taxonomies have gathered essential aspects for the characterization and design of social robots. Despite each of them being unique in its contribution, we can see the existence of some overlapping terms and ideas between them. We now discuss our extended framework in the next section. \subsection{Overview of our extended framework} \label{subsec:overview} Our framework leverages the existing ones discussed previously as a starting point and goes beyond each of them individually. In particular, it focuses on the following features: \begin{itemize} \item \textbf{Unification ---} The existence of multiple available perspectives in \ac{HRI} often results in scattered concepts and classifications. In this chapter, we aim at merging aspects of the literature on social robots and related fields in a self-contained and consistent resource. \item \textbf{Breadth ---} Existing individual taxonomies often focus on specific aspects relevant to the main line of research of their authors, and may not provide satisfactory coverage.
Our framework includes dimensions related to the design of the robot itself, but also of the interaction and context. \item \textbf{Recency ---} In recent years, we have observed some important developments in robotic technologies, which have taken robots outside of research laboratory settings and enabled them to be deployed ``in the wild''. We incorporate those recent developments in our work. \item \textbf{Clarity ---} Concepts associated with \ac{HRI} are often difficult to define, and as a result clear definitions may not always be available. This lack of clarity may impede communication within the field, or result in inconsistent concepts. In this chapter, we attempt to clarify some important key concepts, such as the distinction between embodiment and purpose, or the concepts of autonomy and intelligence for social robots. \end{itemize} With these points in mind, we list below our focuses within each of the $7$ dimensions considered. \begin{enumerate} \item \textbf{Appearance ---} We present a broad classification of robot appearances, synthesizing and going beyond existing ones (Section~\ref{subsec:appearance}). \item \textbf{Social capabilities ---} We contribute a repositioning of existing classifications aiming to clarify how existing categories relate to each other (Section~\ref{subsec:social}). \item \textbf{Purpose and application area ---} We discuss a cross-section of purposes for social robots, and benefiting application areas, with selected examples that include recent developments in the field (Section~\ref{subsec:purpose}). \item \textbf{Relational role ---} We provide a straightforward and broad classification of the robot's role in relation to the human(s) (Section~\ref{subsec:role}). \item \textbf{Autonomy and intelligence ---} We clarify the related but distinct concepts of autonomy and intelligence, and discuss their quantification (Section~\ref{subsec:autonomy}).
\item \textbf{Proximity ---} We classify interactions according to their spatial features (Section~\ref{subsec:proximity}). \item \textbf{Temporal profile ---} We look at several time-related aspects of the interaction, namely timespan, duration, and frequency (Section~\ref{subsec:temporal}). \end{enumerate} It is to be noted that our framework is not meant to be exhaustive, but rather to provide the reader with major aspects that shape social robots and their interactions with humans. While our focus in illustrating the presented concepts will be on single-human, single-robot interactions, the concepts may also apply to group interactions involving more than one robot and/or more than one human. Additionally, even though this framework was developed with social robots in mind, some dimensions may also be of relevance to robots without a social component in their design, as is the case for the ``appearance'' dimension. In the following section, we delve into each of the $7$ dimensions of our framework. We then end this chapter with a brief discussion on designing social robots within the resulting design space. \section{Framework description} \label{sec:framework} We now provide a description of each of the $7$ dimensions of our framework. The dimensions purposefully operate at different levels, according to the aspects that are most relevant to the design of social robots. In some dimensions, we provide a classification into different categories and possibly subcategories (namely Sections \ref{subsec:appearance}, \ref{subsec:purpose}, \ref{subsec:role}, \ref{subsec:proximity}, and \ref{subsec:temporal}). In others, we focus on clarifying or reinterpreting existing distinctions in categories or scales (namely Sections \ref{subsec:social} and \ref{subsec:autonomy}). Due to different levels of research and relevant content in each, some dimensions are addressed in more depth than others.
Also, since the discussions of dimensions are not dependent on each other, we invite the reader to jump to their subsections of interest. \subsection{Appearance} \label{subsec:appearance} The mere physical presence of robots in a shared time and space with humans sparks crucial aspects of a social interaction. Indeed, \textit{embodiment}, a term used to refer to the idea that ``intelligence cannot merely exist in the form of an abstract algorithm but requires a physical instantiation, a body''~\cite{pfeifer2001understanding}, plays an important role in the perception and experience of interacting with intelligent technology. Accordingly, the literature supports that physical embodiment influences the interaction between humans and robots~\cite{lee2006physically, wainer2007embodiment, powers2007comparing, mumm2011designing, fasola2011comparing, li2015benefit, kennedy2015comparing}. In particular, the physical appearance of a robot \textit{per se} was shown to have a strong influence on people regarding aspects like perception, expectations, trust, engagement, motivation, and usability~\cite{jordan1998human, disalvo2003seduction, breazeal2004designing}. Several taxonomies were developed in order to create representative classifications for a robot's appearance. To cite a few, Shibata~\cite{shibata2004overview} classified robots as being human type, familiar animal type, unfamiliar animal type, or imaginary animals / new character type. Additionally, Fong et al.~\cite{fong2002survey} considered anthropomorphic, zoomorphic, caricatured, and functional categories. The number of classifications present in the literature calls for a unified and broad classification of social robot appearances. Building upon the existing classifications, we introduce a broad classification that encompasses main categories described by other authors, as well as new categories and subcategories.
Our classification targets exclusively a robot's \textit{physical appearance}, as distinct from any type of robot behavior, i.e., the ``robot at rest''. \begin{figure*} \centering \includegraphics[width=.80\textwidth]{Figures/taxonomy.pdf} \caption{Summary of our robot appearance classification. This classification was based on prior work from Fong et al.~\cite{fong2002survey} and Shibata~\cite{shibata2004overview}, and was unified, extended, elaborated, and clarified in the present chapter. Although the focus is on social robots, its scope is general enough to encompass appearances of robots without a social component in their design. \textbf{List of robots shown (left-to-right, top-to-bottom)} \textit{Bio-inspired robots:} HI-4, ERICA, Kodomoroid, NAO, LOLA, Robotic Eyes, Elumotion, EMYS, AIBO, PARO, DragonBot, Keepon, GoQBot, Meshworm, Robotic Flower, Lollipop Mushroom. \textit{Artifact-shaped robots:} Travelmate, AUR, Google self-driving car, Greeting Machine, YOLO. \textit{Functional robots:} CoBot, Quadcopter, Beam, TurtleBot.} \label{fig:appearance} \end{figure*} We contribute to the study of social robots' appearance in the following ways: (1) we integrate similar terms already present in the robot appearance classification literature, (2) we add new terms that were not represented in the literature but call for classification, and (3) we attempt to clarify concepts related to different categories. Our unified classification is visually represented in Figure~\ref{fig:appearance}. We considered the following categories of robot appearances: \textit{bio-inspired}, including \textit{human-inspired} and \textit{animal-inspired}, \textit{artifact-shaped}, and \textit{functional}, each with several further subcategories (see Figure~\ref{fig:appearance}). We generated this classification with a holistic mindset, meaning it can serve to classify existing robots, but also to inform the design of future ones.
Although devised with social robots in mind, it is general enough to be applied to any robot, independent of its social capabilities. We now provide a description of each category in our classification. \begin{enumerate} \item \textbf{Bio-inspired ---} Robots in this category are designed after biological organisms or systems. This includes human-inspired and animal-inspired robots (described next), as well as other bio-inspired robots, such as robotic plants (e.g., the robotic flower\footnote{\href{http://www.roboticgizmos.com/android-things-robotic-flower/}{http://www.roboticgizmos.com/android-things-robotic-flower/}}) and fungi (e.g., the Lollipop Mushroom robot\footnote{\href{https://www.amazon.com/Lollipop-Cleaner-Mushroom-Portable-Sweeper/dp/B01LXCBM3E}{https://www.amazon.com/Lollipop-Cleaner-Mushroom-Portable-Sweeper/dp/B01LXCBM3E}}). \begin{enumerate} \item \textbf{Human-inspired ---} Robots in this category are inspired by features of the human body, including structure, shape, skin, and facial attributes. Human-inspired robots not only include full-body designs, but also robots designed after human body parts. When designed after the full human body, they are called \textit{humanoids}. The level of fidelity can vary from a highly mechanical appearance, such as the LOLA robot~\cite{buschmann2009humanoid}, to a highly human-like appearance that includes skin and clothes, such as the ERICA robot~\cite{glas2016erica}, or an intermediate between these two, as in the case of the NAO robot\footnote{\href{https://www.softbankrobotics.com/emea/en/nao}{https://www.softbankrobotics.com/emea/en/nao}}. For humanoids, it is worth mentioning the case in which they strongly resemble the human outer appearance and are covered with flesh- or skin-like materials, in which case they are often referred to as \textit{androids} (if they possess male physical features) or \textit{gynoids} (if they possess female physical features).
An example of a gynoid is the Kodomoroid robot\footnote{\href{http://www.geminoid.jp/en/robots.html}{http://www.geminoid.jp/en/robots.html}}. Additionally, a special case of androids/gynoids are \textit{geminoids}, which are designed after an existing human individual -- i.e., a ``robotic twin'' -- such as Geminoid HI-4\footnote{\href{http://www.geminoid.jp/projects/kibans/resources.html}{http://www.geminoid.jp/projects/kibans/resources.html}}, the tele-operated robotic twin of Hiroshi Ishiguro. On the other hand, some robots are inspired by individual \textit{parts of the human body}. These include robotic arms, e.g., the Elumotion Humanoid Robotic Arm\footnote{\href{http://elumotion.com/index.php/portfolio/project-title-1}{http://elumotion.com/index.php/portfolio/project-title-1}}, robotic hands~\cite{liu2007modular}, robotic heads such as the EMYS robot~\cite{kkedzierski2013emys}, robotic torsos~\cite{shidujaman2018roboquin}, and robotic facial features such as robotic eyes~\cite{cannata2006design}. It is worth mentioning that high-fidelity human-inspired robots are often subject to uncanny valley effects~\cite{mori1970uncanny}. Being highly but not totally human-like, they may elicit feelings of eeriness, and hence should be designed bearing these possible effects in mind.\\ \item \textbf{Animal-inspired ---} Robots in this category are inspired by animals or by creatures possessing animal traits of appearance. On the one hand, they may be inspired by \textit{real animals}, for which we consider inspiration from \textit{familiar} animals, like the AIBO\footnote{\href{https://us.aibo.com/}{https://us.aibo.com/}} dog-inspired robot, and inspiration from \textit{unfamiliar} animals, such as the PARO\footnote{\href{http://www.parorobots.com/}{http://www.parorobots.com/}} baby seal robot. The distinction between familiar and unfamiliar animals is emphasized by Shibata~\cite{shibata2004overview}.
According to the author, familiar animals are those whose behavior can be easily recognized, such as pets; while unfamiliar animals are those that most people know something about but are not totally familiar with, and have rarely interacted with before, such as savanna animals. The same author mentioned that robots designed to resemble an unfamiliar animal can be more easily accepted due to the lack of exposure to their typical behavior. It is documented in the literature that people hold strong expectations when faced with the possibility of interacting with a social robot~\cite{spence2014welcoming}, wherein robots whose embodiment matches their abilities are perceived more positively~\cite{goetz2003matching, li2010cross, komatsu2012does}. However, it is to be noted that familiarity is a subjective concept depending on culture and individual experiences, making this distinction flexible. On the other hand, animal-inspired robots can also be \textit{imaginary}, meaning they possess animal-like features but are not designed after a real animal. They can either be \textit{familiar}, i.e., designed after familiar imaginary animals ``existing'' in fantasy worlds, like cartoon characters or legendary creatures (e.g., DragonBot~\cite{short2014train}), or \textit{unfamiliar}, i.e., robots that are purely created from imagination, such as Miro\footnote{\href{http://consequentialrobotics.com/miro/}{http://consequentialrobotics.com/miro/}} and Keepon\footnote{\href{https://beatbots.net/my-keepon}{https://beatbots.net/my-keepon}}. In addition, this category includes robots designed after \textit{animal body parts}, such as the GoQBot designed as a caterpillar part~\cite{lin2011goqbot}, the Meshworm designed after the oligochaeta~\cite{seok2010peristaltic}, and robotic soft tentacles~\cite{jorgensen2018interaction}. \end{enumerate} \item \textbf{Artifact-shaped ---} Robots in this category bear the appearance of physical human creations or inventions.
They may be inspired by \textit{objects}, such as furniture and everyday objects, e.g., the AUR robotic desk lamp~\cite{hoffman2010effects}, the Mechanical Ottoman robotic footstool~\cite{sirkin2015mechanical}, and the Travelmate robotic suitcase\footnote{\href{https://travelmaterobotics.com/}{https://travelmaterobotics.com/}}. They may also be inspired by an existing \textit{apparatus}, demonstrating how existing apparatuses can become robotic systems while maintaining the same appearance, such as self-driving cars (e.g., the Google self-driving car\footnote{\href{https://waymo.com/}{https://waymo.com/}}), but also everyday apparatuses like toasters, washing machines, etc. Additionally, artifact-shaped robots may be \textit{imaginary}, i.e., reflecting the invention of the designer, such as the Greeting Machine robot~\cite{anderson2018greeting} or YOLO~\cite{alves2017yolo, alves2018yolo}. \item \textbf{Functional ---} The appearance of robots included in this category is merely the sum of the appearances of the technological pieces needed to achieve a given task or function. This means that their appearance leans more towards mechanical aspects. Examples are quadcopters, or mobile robots such as the CoBots~\cite{veloso2015cobots}, the TurtleBot\footnote{\href{https://www.turtlebot.com/}{https://www.turtlebot.com/}}, and the Beam\footnote{\href{https://suitabletech.com/}{https://suitabletech.com/}}. \end{enumerate} As a side note, shape-shifting robots, modular robots, or polymorphic robots~\cite{balch2002robot, yim2002modular, yim2007modular, li2009amoeba} are all examples of hybrid robots that can fit into more than one category depending on their configuration.
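For readers wishing to annotate or query collections of robots using this classification, the category hierarchy can be made machine-readable. The following Python sketch (ours, purely illustrative and not part of the classification itself; the structure and helper function are hypothetical) encodes the categories and subcategories as a nested data structure:

```python
# Hypothetical sketch: the appearance classification encoded as a nested
# dictionary (dicts for categories with named subcategories, lists for
# leaf-level subcategories). Names mirror the categories in the text.
APPEARANCE_TAXONOMY = {
    "bio-inspired": {
        "human-inspired": ["full body (humanoid)", "human body parts"],
        "animal-inspired": [
            "real (familiar)",
            "real (unfamiliar)",
            "imaginary (familiar)",
            "imaginary (unfamiliar)",
            "animal body parts",
        ],
        "other bio-inspired": ["plants", "fungi"],
    },
    "artifact-shaped": ["objects", "apparatus", "imaginary"],
    "functional": [],  # appearance is the sum of functional components
}


def subcategories(category: str) -> list:
    """Return the immediate subcategories of a top-level category."""
    node = APPEARANCE_TAXONOMY[category]
    return list(node) if isinstance(node, dict) else node
```

Such a representation would also accommodate the hybrid cases discussed in the text, for instance by tagging a single robot with more than one path through the hierarchy.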
Also, robotic swarms are examples of multi-robot systems that may be perceived as a single entity, i.e., more than the sum of individual robots (homogeneous or heterogeneous)~\cite{kolling2016human}; however, they are not part of our classification, because they are too dependent on the configuration and behavior of the swarm. Moreover, the actual process of assigning categories to existing robots always carries a certain degree of subjectivity, which relates to different possible perceptions of the same robot appearance, possibly depending on the context, the behavior of the robot, and so on. The clearest example in our classification would be the distinction between familiar and unfamiliar, which strongly depends on people's cultural background and personal experiences. Those differences in perception should be accounted for when designing robot appearances. Our presented classification is not intended to offer a clear-cut or rigid boundary between categories of robots. Rather, it represents a useful guideline for categorizing robots based on major distinguishing features. It encourages the view of robot design as a spectrum, providing fluidity to their design and allowing for the combination of different elements of our classification. A robot's appearance is its most obvious and unique visual attribute, which contributes greatly to the interaction~\cite{fink2012anthropomorphism}. Nonetheless, in addition to appearance, there are several factors related to embodiment, such as size, weight, noise, and material texture, among others~\cite{disalvo2002all}, that may contribute to the perception of the robot during an interaction. More research is needed in order to develop classifications that account for these other factors. \subsection{Social capabilities} \label{subsec:social} Social robots vary greatly in their social capabilities, i.e., how they can engage in and maintain social interactions of varying complexities.
As such, researchers have classified and defined them according to those social capabilities. Based on the work of Fong et al.~\cite{fong2002survey}, we list the different components of a social robot's capabilities as follows: \begin{itemize} \item \textbf{Communicating using natural language or non-verbal modalities ---} Examples of these ways of communication are natural speech \cite{williams2018thank}, motion~\cite{knight2011eight,dragan2013legibility} -- possibly including gaze~\cite{admoni2017social}, gestures or facial expressions --, lights~\cite{baraka2018mobile,szafir2015communicating}, sounds~\cite{bethel2006auditory}, or a combination of them \cite{loffler2018multimodal}. Mavridis~\cite{mavridis2015review} provided a review on verbal and non-verbal interactive communication between humans and robots, defining different types of existing communications such as interaction grounding, affective communications, speech for purpose and planning, among others. \item \textbf{Expressing affect and/or perceiving human emotions ---} Beyond Ekman's six basic emotions~\cite{ekman1992argument} -- anger, disgust, fear, happiness, sadness, and surprise --, this may include more complex affective responses such as empathy. For example, Paiva et al.~\cite{paiva2017empathy} analyzed different ways by which robots and other artificial agents can simulate and trigger empathy in their interactions with humans. \item \textbf{Exhibiting distinctive personality and character traits ---} The major components to be considered, according to Robert~\cite{robert2018personality}, are human personality when interacting with a robot, robot personality when interacting with humans, dissimilarity or complementarity in human-robot personalities, and aspects that facilitate robot personality.
Some companies such as Misty Robotics\footnote{\href{https://www.mistyrobotics.com/}{https://www.mistyrobotics.com/}} are prioritizing the user personalization of a robot's personality as an important feature for future commercial social robots. \item \textbf{Modeling and recognizing social aspects of humans ---} Modeling human agents allows robots to interpret aspects of human behavior or communication and appropriately respond to them. Rossi et al.~\cite{rossi2017user} provide a survey of sample works aimed at profiling users according to different types of features. More advanced models may have to consider theory of mind approaches~\cite{scassellati2002theory}. \item \textbf{Learning and developing new social skills and competencies ---} In addition to being programmed to have social skills, social robots may have the ability to refine those skills with time through adaptation, or even to develop new skills altogether. An active area of research that looks at such paradigms is the area of developmental robotics~\cite{lungarella2003beyond}. \item \textbf{Establishing and maintaining social relationships ---} Relationships operate over a timespan that goes beyond a few interactions. A number of questions arise when one considers long-term interactions of robots with humans and what it means for a robot to proactively establish and maintain a relationship that is two-sided. Leite et al.~\cite{leite2013social} established some initial guidelines for the design of social robots for long-term interaction. These include continuity and incremental robot behaviors (e.g., recalling previous activities and self-disclosure), affective interactions and empathy (e.g., displaying contextualized affective reactions), and memory and adaptation (e.g., identifying new and repeated users).
\end{itemize} Complementary to these components, Breazeal~\cite{breazeal2003toward} distinguished $4$ categories of robot social capabilities: (1) \textit{socially evocative}, denoting robots that were designed mainly to evoke social and emotional responses in humans, leveraging the human tendency to anthropomorphize~\cite{epley2007seeing}; thus, despite the social responsiveness they evoke, their behavior does not necessarily reciprocate; (2) \textit{social interface}, denoting robots that provide a ``natural'' interface by using human-like social cues and communication modalities. In this sense, the social behavior of humans is only modeled at the interface level, which normally results in shallow models of social cognition in the robot; (3) \textit{socially receptive}, denoting robots that are socially passive but that can benefit from interaction. This category of robots is more aware of human behavior, allowing humans to shape the behavior of the robot using different modalities, such as learning by demonstration. These robots respond to humans' efforts without being socially pro-active; and (4) \textit{sociable}, denoting robots that pro-actively engage with humans, having their own internal goals and needs in order to satisfy internal social aims (drives, emotions, etc.). These robots require deep models of social cognition not only in terms of perception but also of human modeling. In addition to this list, Fong et al.~\cite{fong2002survey} added the following $3$ categories: (5) \textit{socially situated}, denoting robots that are surrounded by a social environment that they can perceive and react to. These robots must be able to distinguish between other social agents and different objects that exist in the environment; (6) \textit{socially embedded}, denoting robots that are situated in a social environment and interact with other artificial agents and humans.
Additionally, these robots can be structurally coupled with their social environment, and have partial awareness of human interactional structures, such as the ability to perform turn-taking; and (7) \textit{socially intelligent}, including robots that present aspects of human-style social intelligence, which is based on deep models of human cognition and social competence. Although robots have been classified according to their different social capabilities, it is yet unclear how these categories relate to each other. Are they part of a spectrum? Are they separate categories altogether? We argue that evaluating social capabilities of robots can be understood according to two main dimensions: \begin{enumerate} \item\textbf{The depth of the robot's actual social cognition mechanisms.} \item \textbf{The human perception of the robot's social aptitude.} \end{enumerate} Given these dimensions, and in light of the existing categories presented above, we propose a two-dimensional space map, providing a clearer understanding of the social capabilities of robots. This map is presented in Figure~\ref{fig:social_capability} for illustrative purposes. As can be seen in the figure, socially evocative robots have the least depth of social cognition but are perceived as rather socially apt. A social interface typically possesses some additional cognition mechanisms to allow for easy communication within the range of the robot's functionality; it also possibly results in a slightly higher perceived social aptitude thanks to its more versatile nature. Socially receptive, socially situated, and socially embedded robots possess increasing depth in their social cognition, and as a result increasing perceived social aptitude. For socially embedded robots, the perceived aptitude may vary according to the degree of awareness of interactional structures the robot has.
On the outskirts of our map we find sociable and socially intelligent robots, with much deeper models of social cognition. \begin{figure} \sidecaption[t] \includegraphics[width=7.5cm]{Figures/social_space.pdf} \caption{Positioning of the classifications of Breazeal \cite{breazeal2003toward} and Fong et al. \cite{fong2002survey} according to our proposed two-dimensional space formed by (1) the depth of the robot's social cognition mechanisms, and (2) the expected human-perceived level of robot social aptitude. This figure is merely illustrative and the color patches are deliberately fuzzy, as we do not claim to have the tools to actually quantify these dimensions according to any scale.} \label{fig:social_capability} \end{figure} \subsection{Purpose and application area} \label{subsec:purpose} In this section, we discuss social robots according to their purpose, i.e., what types of goals they are designed to achieve, as well as benefiting application areas. Figure~\ref{fig:applications} summarizes the main purposes and application areas included in this section, with illustrative examples. \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/applications.pdf} \caption[A cross-section of main application areas for social robots with selected examples, and emphasis on the possibility of more than one purpose for the same physical robot.]{A cross-section of main application areas for social robots with selected examples, and emphasis on the possibility of more than one purpose for the same physical robot, e.g., Baxter appears in healthcare, industry, and education. Education and entertainment/art were merged for conciseness. All images were reproduced with permission of the authors, companies or copyright owners.
Additional credits, when applicable, are included in a footnote\footnotemark.} \label{fig:applications} \end{figure*} \nocite{shibata2011robot} \nocite{baraka2019optimization} \nocite{pollack2002pearl} \nocite{bethel2009robots} \nocite{srinivasan2012social} \subsubsection*{A note on purpose as being distinct from embodiment} In traditional engineering practice, the physical characteristics of a technological device (e.g., toaster, microwave, typewriter, manufacturing machine) tend to be strongly coupled with its purpose, i.e., the task it was designed to achieve. With the advent of personal computers and smartphones, we moved away from defining those devices solely by their purpose. For instance, it would be inappropriate to call a modern computer an ``electronic typewriter'' or even a smartphone an ``electronic phone'', because those devices can serve an immense variety of uses, thanks to software applications that constantly create new purposes for them. Similarly, even though some robots may currently be designed with a specific purpose in mind, they may possess a set of skills that can prove useful in a variety of scenarios, sometimes across completely different application areas. As a result, (1) many different robots can be programmed to be used for the same purpose, but also (2) a single robot can be used for many different purposes. For example, a robot such as NAO has been used across a large variety of purposes, both in research and industry, from playing soccer~\cite{graf2009robust} to assisting individuals with cognitive impairments~\cite{shamsuddin2012initial,esteban2017build} or teaching children~\cite{yadollahi2018deictic,alves2019empathic}.
\footnotetext{\scriptsize{\textbf{Additional credits (left-to-right, top-to-bottom)} \textit{Paro}: Credits AIST, Japan; \textit{Baxter (industry)}: Courtesy of Rodney Brooks; \textit{SeRoDi}: Source Fraunhofer IPA, Photographer Rainer Bez (2015); \textit{Robear}: Credits RIKEN; \textit{Bee-bot}: Credits Ben Newsome, Fizzics Education; \textit{Care-O-bot}: Source Phoenix Design (2015); \textit{Furby}: Credits Robert Perry; \textit{HERB}: Courtesy of Siddhartha Srinivasa; \textit{Robovie}: Courtesy of Masahiro Shiomi; \textit{Pepper}: Retrieved from Wikimedia Commons under the \href{https://en.wikipedia.org/wiki/GNU_Free_Documentation_License}{GNU Free Documentation License}, Author Nesnad; \textit{Robotinho}: Credits University of Freiburg; \textit{Robota (social sciences)}: retrieved from~\cite{billard2007building}.}} There remains, however, a general tendency to define robots by characteristics of their programmed behavior, which can be limiting or inappropriate. As an example, we see locutions of the form ``educational robots'', ``therapeutic robots'', ``pet robots'', and so on. The Baxter robot\footnote{\href{https://www.rethinkrobotics.com/baxter/}{https://www.rethinkrobotics.com/baxter/}}, for instance, is often referred to as a ``collaborative industrial robot'' (or co-bot), because it has been used quite often in such a setting. However, it has also been used in very different applications, such as assistance for the blind~\cite{bonani2018my}, or education~\cite{fernandez2018may}, and hence the naming is reductive. Similarly, a ``pet robot'' such as the AIBO dog-inspired robot has been used in contexts where it is far from being considered a pet, such as playing soccer with other robots~\cite{stone2007intelligent}. Of course, the embodiment of the robot may restrict its capabilities and hence the type of tasks it may be able to physically achieve. 
Also, the robot's hardware may be optimized for a specific interactive application (e.g., Baxter has compliant joints for safer collaboration). Moreover, a robot's appearance, which goes beyond its hardware specifications, may be optimized for human perceptions such as acceptability, likeability, trust, and so on, for a specific intended purpose. However, given the considerations above, we believe that robots should not be defined solely by their purpose, the same way humans are (hopefully) not defined by their profession. As a result, we personally prefer a slightly different language to characterize robots according to their purpose(s): ``robots \textit{for} education'' instead of ``educational robots'', ``robots \textit{for} therapy'' instead of ``therapeutic robots'', and so on. Using this slightly modified language, we now discuss the main purposes and application areas that are benefiting from the use of social robots. In light of our discussion, the presented list is not meant to be exclusive, as the same robot may be used for more than one purpose. \subsubsection{Robots for healthcare and therapy} \label{subsubsec:healthcare} Robots are being introduced in the health sector to assist patients and providers in hospitals, at home, or in therapy settings. The type of assistance the robot provides can be generally categorized into physical and/or social. Physically assistive applications include helping patients with reduced mobility or dexterity, such as the elderly~\cite{forlizzi2004assistive} or people with physical impairments~\cite{burgar2000development}. These robots can help patients carry out daily tasks, like getting out of bed, manipulating objects, eating, and so on, which can give them a higher sense of autonomy and dignity~\cite{sharkey2012granny}. They may also help in therapy to assist patients in regaining lost physical skills or building new ones~\cite{burgar2000development}.
On the other hand, \ac{SAR} focus on providing assistance primarily through social interactions. Feil-Seifer et al.~\cite{feil2005defining} identified a number of applications where \ac{SAR} may have a strong impact, namely in therapy for individuals with cognitive disorders~\cite{scassellati2012robots,cabibihan2013robots}, companionship to the elderly and individuals with neurological disorders or in convalescent care~\cite{burton2013dolphins}, and students in special education. We also believe that robots in the healthcare domain may be used to benefit healthcare providers directly, for example by training therapists through robotic simulation of interactions with patients~\cite{baraka2019interactive}. \subsubsection{Robots for education} Robots in education are mainly used with children~\cite{kanda2007two,tanaka2007socialization,Kozima08aplayful} because they can increase engagement in learning while favoring an interactive and playful component, which may be lacking in a traditional classroom setting. When designing such educational robots, it is crucial to design for and evaluate long-term interactions, to avoid successes merely due to strong novelty effects~\cite{leite2013social}. There are a number of formats that educational scenarios can take, in each of which the robot plays a different role. Beyond being a teacher delivering material, the robot can also act as a social mediator between children, encouraging dyadic, triadic, and group interactions~\cite{kozima2009keepon}. Moreover, the robot may play the role of a learner in learning-by-teaching scenarios, in which the child teaches the robot and in this process develops their own skills~\cite{jacq2016building}. \subsubsection{Robots for entertainment and the arts} The entertainment industry has benefited from the use of robots for their engaging and interactive capabilities.
Personal entertainment creations emerged with robotic toys, such as Furby\footnote{\href{https://furby.hasbro.com/en-us}{https://furby.hasbro.com/en-us}} or Bee-Bot\footnote{\href{https://www.bee-bot.us/}{https://www.bee-bot.us/}}, and robotic dolls, such as Hasbro's My Real Baby\footnote{\href{https://babyalive.hasbro.com/}{https://babyalive.hasbro.com/}}. Public entertainment robots have appeared in theme parks and other public entertainment spaces~\cite{madhani2009bringing}. More complex robots with both verbal and non-verbal communication capabilities have been used for more prolonged interaction scenarios such as storytelling~\cite{chen2011survey} or comedy~\cite{bruce2000robot}. Other entertainment applications include interactive shows~\cite{alonso2014human}, acrobatic robots for movie stunts~\cite{pope2018stickman}, and sex robots~\cite{levy2009love}, among others. More artistic-oriented applications include robots in the visual arts\footnote{An annual robot art competition is held to encourage the use of robots in the visual arts \href{http://robotart.org/}{http://robotart.org/}}~\cite{pagliarini2009development} and installation art~\cite{augugliaro2014flight}. Social robots have also been deployed in fields of performative arts such as drama~\cite{zeglin2014herb} or dance~\cite{sum2017robot,cappo2018online}, where their embodied intelligence in real-time contexts and their interactivity remain a rich and challenging research area. Generally, the inclusion of intelligent robots in the arts, and the broader field of computational creativity~\cite{colton2012computational}, are raising questions about the definitions and criteria of art, authorship, and creativity. \subsubsection{Robots for industry} As industrial robots are becoming more intelligent, they are being equipped with interactional capabilities that allow them to collaborate with humans, mainly in tasks involving manipulation skills.
Schou et al.~\cite{schou2018skill} identified several tasks that can benefit from a human-robot collaborative setting, possibly including multi-robot/multi-human teams. These are: logistic tasks (namely transportation and part feeding), assistive tasks (namely machine tending, (pre)assembly, inspection, and process execution), and service tasks (namely maintenance and cleaning). Research has shown that robots exhibiting social communication cues in industrial settings are perceived as social entities~\cite{sauppe2015social}. Moreover, Fong et al.~\cite{fong2002survey} emphasized that, in order to achieve true collaboration between humans and robots, the robot must have sufficient introspection to detect its own limitations, must enable bidirectional communication and information exchange, and must be able to adapt to a variety of humans, from the novice to the experienced. \subsubsection{Robots for search and rescue} Search and rescue is one of the applications in which robots are being investigated as replacements for humans in dangerous environments, such as in natural or human disasters. Even though typical robots in this domain have not been designed with social capabilities, research has shown the importance of ``social intelligence'' in this domain~\cite{fincannon2004evidence}. Bethel et al.~\cite{bethel2008survey} identified the importance of different modalities of social communication in the context of victim approach, across the scale of proxemic zones (i.e., the distance between the robot and the human), ranging from the public to the personal space. Such modalities include body movement, posture, orientation, color, and sound. \subsubsection{Robots for assistance in home and workplace} With the advent of personal robots~\cite{gates2007robot}, the vision is that anyone will have the ability to own and operate a robot, regardless of their skills or experience, thanks to natural and intuitive interfaces \cite{liang2018simultaneous}.
Such robots can be deployed in home or workplace environments to assist individuals, reduce their mental and physical load, and increase their comfort and productivity. In the home, personal robots are already cleaning floor surfaces autonomously\footnote{\href{https://www.irobot.com/for-the-home/vacuuming/roomba}{https://www.irobot.com/for-the-home/vacuuming/roomba}}, cooking full meals\footnote{\href{http://www.moley.com/}{http://www.moley.com/}}, and doing laundry\footnote{\href{http://www.laundry-robotics.com/}{http://www.laundry-robotics.com/}}, just to name a few. More ambitious research projects have aimed at designing versatile ``robotic butlers''~\cite{srinivasa2010herb} that can operate in a variety of tasks across the home. In the workplace, robots are being used on a daily basis to transport objects, catalogue inventory, escort people, and deliver messages, among other tasks, in settings such as offices, hospitals\footnote{\href{https://aethon.com/}{https://aethon.com/}}, supermarkets\footnote{\href{http://www.bossanova.com}{http://www.bossanova.com}}, and hotels. The majority of these robots are called service robots and have the capability of navigating in structured indoor environments, mainly corridors as opposed to open public spaces. An example of such service robots is the CoBots~\cite{veloso2015cobots}, developed and deployed at Carnegie Mellon University, servicing multiple floors and having navigated more than $1,000$~km autonomously~\cite{biswas20161}. Other types of robots used in the workplace include tele-presence robots for teleconferencing and virtual visits of remote places~\cite{tsui2011exploring}. \subsubsection{Robots for public service} Robots have been deployed in public spaces including malls~\cite{shiomi2009field}, museums~\cite{faber2009humanoid}, exhibition spaces~\cite{jensen2005robots}, and receptions~\cite{gockley2005designing}.
Some (but not all) of those robots are mobile, and can navigate in open spaces or in crowds, which makes the design of their behavior challenging and subject to a variety of social constraints~\cite{luber2012socially}. Interactions with such robots have to account for the fact that the robot will interact with a very large number of people, each inevitably different, and for only a short duration. Hence, personalizing the interaction and making it as intuitive as possible (as there is very little adaptation time on the human side) are important design considerations. \subsubsection{Robots for the social sciences} Due to the possibility of programming robots to exhibit mechanisms of cognition similar to those of humans, a less publicized use of robots lies in the social sciences, for the study of social development, social interaction, emotion, attachment, and personality~\cite{fong2002survey}. The idea is to use robots as test subjects in controlled laboratory experiments, leveraging the fact that such robots can reproduce consistent behaviors repeatedly and can be controlled to test predictions of human models of cognition. For example, the Cog robot~\cite{scassellati2003investigating} was used to investigate models of human social cognition. Similarly, a doll-like robot, Robota~\cite{billard2007building}, was used in comparative studies for social development theories~\cite{dautenhahn1999studying}. Additionally, robots (human-inspired or other types) can be used as stimuli to elicit behaviors from humans for the development and refinement of theories about human behavior and cognition. For a more detailed discussion on cognitive robotics and its applications outside of technology-related fields, consult Lungarella et al.~\cite{lungarella2003developmental}. \subsubsection{Other application areas} The list of application areas and purposes listed above is not comprehensive, but reflects major developments and deployments.
To this list we can add: \begin{itemize} \item \textbf{Robots for companionship ---} Dautenhahn~\cite{dautenhahn2004robots} presented a perspective on different possible relationships with personalized (possibly life-long) robotic companions, drawing on literature from human-animal relationships. Situated somewhere between animal pets and lifeless stuffed animals, robotic companions may provide support for socially isolated populations. The technical and design challenges associated with robotic companions are numerous due to the long-term nature of companionship, and the deployment of robotic pets has raised ethical concerns~\cite{sparrow2002march}. Examples of robotic companions include the Huggable$^{TM}$ robot~\cite{stiehl2009huggable}, the AIBO dog-inspired robot~\cite{friedman2003hardware}, and the Lovot robot\footnote{\href{https://groove-x.com/en/}{https://groove-x.com/en/}}. \item \textbf{Robots for personal empowerment ---} The ultimate ethically minded use of robots is to expand human abilities instead of replacing them, and to empower people at an individual level. Examples of personal empowerment that robots may facilitate are physically assistive robots that help people with impairments gain autonomy and dignity, such as prosthetics, exoskeletons, brain-controlled robotic arms~\cite{hochberg2012reach}, and other assistive robots (see Section~\ref{subsubsec:healthcare}). Other examples include robots that are designed to enhance creativity in individuals, such as the YOLO robot~\cite{alves2018yolo}, or tele-presence robots for workers who cannot physically perform the required tasks, as in the ``Dawn ver. $\mathrm{\beta}$'' cafe in Japan, which hired paralyzed people to serve customers through a mobile robot controlled by their eye movements\footnote{\href{https://www.bbc.com/news/technology-46466531}{https://www.bbc.com/news/technology-46466531}}.
\item \textbf{Robots for transportation ---} The rise of autonomous driving will revolutionize transportation and the urban environment. Autonomous vehicles (cars, trucks, public transportation, etc.) are expected to operate in environments populated by humans (drivers, pedestrians, bicyclists, etc.), and research is looking at adding social dimensions to their behavior~\cite{nass2005improving,wei2013autonomous, mavrogiannis2019effects}. Additionally, drones will be used in the near future for package delivery\footnote{\href{https://www.amazon.com/Amazon-Prime-Air/b?ie=UTF8\&node=8037720011}{https://www.amazon.com/Amazon-Prime-Air/b?ie=UTF8\&node=8037720011}} and will have to (socially) interact with customers. \item \textbf{Robots for space ---} Robots for space exploration are historically known for their low level of interactions with humans. However, as humans are getting more involved in space exploration, social robots are being introduced to assist astronauts in their tasks and daily routines, e.g., NASA's Robonaut and Valkyrie~\cite{yamokoski2019robonaut}. \item \textbf{Robots for technology research ---} Robots can also be used to test theories in fields related to technology, such as testing algorithms and architectures on physical platforms. More generally, robots can provide a platform for developing and testing new ideas, theories, solutions, and prototypes for effective embodied technological solutions and their adoption in society. \end{itemize} The application areas mentioned above provide a cross-section of purposes that social robots hold in existing developments and deployments.
If we view robots as embodied agents that can intelligently carry out complex tasks in the physical and social world, we expect, in the future, to have robots introduced in virtually any application where they can complement, assist, and collaborate with humans in existing roles and expand their capabilities, as well as potentially assume new roles that humans cannot or should not assume. \subsection{Relational role} \label{subsec:role} One of the relevant dimensions that shapes human-robot interaction is the \textit{role} that the robot is designed to fulfill. The concept of role is an abstract one, for which various different perspectives can be presented. In this section, we specifically look at the \textit{relational role} of the robot towards the human. This is the role that a robot is designed to fulfill within an interaction, and is not necessarily tied to an application area. The relational role the robot has been designed to have is critical to the perception, or even the relationship, that arises between robot and human. Towards clarifying the concept of relational role, it is important to immediately distinguish relational role from role in an activity or application. In a specific activity or application, we may expect to find activity-specific roles (as in role-playing), such as teacher, driver, game companion, cook, or therapist. These types of roles are defined by the type of activity performed between the robot and humans, therefore making them an open-ended list that is likely to remain in constant evolution, as robots become applied to new fields and tasks. Given the fuzziness of this concept, there have not been many attempts at generalizing the concept of the role of robots within a relation with humans.
For the rest of this section, we will present and analyze some broader definitions from the existing literature, to conclude by contributing a broad classification that attempts to agglomerate the main concepts of the pre-existing ones while containing and extending them. Scholtz presented a list of interaction models found in \ac{HRI}~\cite{scholtz2003theory}. It includes roles that humans may have towards a robot in any \ac{HRI} application. The list defines the roles of the Supervisor, who monitors and controls the overall system (single or multiple robots), while acting upon the system's goals/intentions; the Operator, who controls the task indirectly, by triggering actions (from a set of pre-approved ones), while determining if these actions are being carried out correctly by the robot(s); the Mechanic, who is called upon to control the task, robot and environment directly, by performing changes to the actual hardware or physical set-up; the Peer, who takes part in the task or interaction, while suggesting goals/intentions for the supervisor to perform; and the Bystander, who may take part in the task or interaction through a subset of the available actions, while most likely not having been previously informed about which those are. These five roles were initially adapted from \ac{HCI} research, namely from Norman's \ac{HCI} Model~\cite{norman1986cognitive}. As such, they refer mostly to the role of the human within a technological system, whereas in this section we look for a classification to support the roles of robots in relation to humans within their interaction with each other. Later, Goodrich et al.~\cite{goodrich2008human} built upon this list to propose a classification of roles that robots can assume in \ac{HRI}. In their list, it is not specified whether the role refers to a human or to a robot.
Their proposed classification can be vague, as they take Scholtz's roles (for humans) and directly apply them to both robots and humans with no discussion provided. They also extended the list by adding two more roles, but these are defined only for robots. In the Mentor role, the robot is in a teaching or leadership role for the human; in the Informer role, the robot is not controlled by the human, but the human uses information coming from the robot, for example in a reconnaissance task. The concept of robot roles was also addressed by Breazeal~\cite{breazeal2004social}, who proposed four interaction paradigms of \ac{HRI}. In these paradigms, the robot can either take the role of a Tool, directed at performing specific tasks, with various levels of autonomy; a Cyborg extension, in which it is physically merged with the human in a way that the person accepts it as an integral part of their body; an Avatar, through which the person can project themselves in order to communicate with another from far away; or a Sociable partner, as in classic science-fiction fantasy. Based on the many different proposed classifications, and on all the various interaction scenarios and applications found throughout the literature and presented throughout this chapter, we have outlined our own classification for the role of robots within a relation with humans. Our classification attempts to merge the various dimensions of interaction while stepping away from explicit types of scenarios or applications. It does not necessarily add or propose new roles, but instead redefines them from a relational perspective, placing emphasis on how the robot relates from a human's perspective, as depicted in Figure~\ref{fig:roles}.
\begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Figures/robot_roles.png} \caption{Our classification of relational roles of robots towards humans (represented as the ``you'').} \label{fig:roles} \end{figure*} In our classification for relational roles of robots, we view \ac{HRI} as including both \textbf{robot} and \textbf{you} (the human). As such, we consider the following roles that a \textit{robot} may have towards \textit{you}: \begin{itemize} \item A robot \textbf{``for you''} serves some utility on a given task. This is the most traditional role of a tool or a servant, and is inspired by most previous classifications. Although closely related to the concept of a tool, as proposed by other authors, we frame this role as a broader type of robotic tool, which can even include robots like autonomous cars. \item A robot \textbf{``as you''} plays the role of a proxy, namely, but not limited to, tele-presence. However, it does not necessarily imply interaction from far away as in Breazeal's classification~\cite{breazeal2004social}. This type of role can exist even when inter-actors are co-located, as long as the robot is acting in place of another person who operates it (e.g., shared autonomy scenarios). \item A robot \textbf{``with you''} is typically collaborative, with various levels of autonomy, including being part of a group with you. It is used in applications in which both the human and the robot act together, as a team, or towards common goals, and also includes robots for companionship. The robot and human are not necessarily co-located, as for example in human-robot teams that have to communicate remotely. \item A robot \textbf{``as if you''} emulates particular social or psychological traits found in humans. These robots are mainly used as social sciences research tools (see Section 2.3.8).
To date, robots have been used to examine, validate and refine theories of social and biological development, psychology, neurobiology, emotional and non-verbal communication, and social interaction. \item A robot \textbf{``around you''} shares a physical space and common resources with the human. It differs from a \textit{robot with you} by the fact that it is necessarily co-located with the human, but not necessarily collaborating with them. These are typically called co-operating, co-present, or bystander robots, as previously proposed in Scholtz's classification~\cite{scholtz2003theory}. \item A robot \textbf{``as part of you''} extends the human body's capabilities. These robots typically have nonexistent or very limited autonomy, but provide humans with physical capabilities that they could not otherwise achieve using their own biological body. Such robots can be used for pure embodiment extension (e.g., strength-enhancing exoskeletons), or for close-range \ac{HRI} collaboration, such as the robotic wearable forearm \cite{vatsal2018design}, whose function is to serve as a supernumerary third arm for shared workspace activities. \end{itemize} The list of relational roles that we present defines non-exclusive roles, meaning that for some particular applications, we may design and develop robots that take more than one of these roles, or take a different role when more than one human is involved in the interaction. An example would be a robot used in an office, which can be used \textit{for the users} to deliver mail and packages to different locations, while at the same time acting \textit{around the users} when navigating the office space. Another example would be an autonomous vehicle operating \textit{for} the passenger(s), but \textit{around} pedestrians and other human drivers. \subsection{Autonomy and intelligence} \label{subsec:autonomy} Necessary aspects to consider when characterizing the behavior of social robots are those of autonomy and intelligence.
Although related, these are two distinct concepts that are often inconsistently and confusingly used in the existing literature~\cite{gunderson2004intelligence,gardner1996intelligence}. In particular, it is often assumed that a high level of robot autonomy implies both a high level of intelligence and of complexity. In reality, some fully autonomous systems can possess very low intelligence (e.g., a traditional manufacturing machine) or complexity (e.g., a simple self-operated mechanism). A better clarification of the concepts of autonomy and intelligence, and their relation, is needed, especially in the context of social robotics. \subsubsection{Definitions (or lack thereof)} The concepts of autonomy and intelligence are hard to define, and there do not seem to be uniquely accepted definitions~\cite{beer2014toward}. In particular, existing definitions in the literature seem to differ depending on the context of application, and the main field of focus of the author(s). Based on existing literature, we propose below extended working definitions of those two concepts in the context of social robotics. \subsubsection{Autonomy} It may seem somewhat paradoxical to talk about autonomy in the context of interactive robots, because traditionally fully autonomous robots are involved in minimal interactions with humans; in other words, reduced interaction with humans is a by-product of increased robot autonomy. For social robots, however, this relation between the amount of human interaction and robot autonomy is questioned. Highly autonomous social robots are expected to carry out more fluid, natural, and complex interactions, which does not make them any less autonomous. There exists a very large number of definitions of autonomy for general agents; however, central to most existing definitions is the amount of \textit{control} the robot has over performing the task(s) it was designed to fulfill (or that it sets to itself), as emphasized by Beer et al.~\cite{beer2014toward}.
For social robots, tasks may include well-defined goal states (e.g., assembling furniture) or more elusive ones (e.g., engaging in conversation). We claim that in addition to control, the concept of autonomy should also account for learning. Indeed, many learning paradigms include human-in-the-loop approaches, and we believe these should be taken into account. These include active learning~\cite{chao2010transparent}, learning by demonstration~\cite{rybski2007interactive}, and corrective human feedback learning~\cite{mericcli2011task}, used within the context of interactions in applications involving human teachers, such as learning-by-teaching educational scenarios~\cite{jacq2016building} or general collaborative scenarios~\cite{breazeal2004teaching}. As a result, we extend the definition from Beer et al.~\cite{beer2014toward} to make it applicable to social robots, and define autonomy of a social robot as follows:\\ \noindent\textit{\textbf{Autonomy ---} ``The extent to which a robot can operate in the tasks it was designed for (or that it creates for itself) without external intervention.''} \\ \noindent Note the use of the term \textit{intervention} as opposed to \textit{interaction}. \subsubsection{Intelligence} There is no real consensus on the definition of general intelligence~\cite{gardner1996intelligence}. In the context of robotics and \ac{AI}, intelligence is generally emphasized as related to problem solving~\cite{newell1972human}. For social robots, we propose the following extension of the definition of Gunderson et al.~\cite{gunderson2004intelligence}:\\ \noindent\textit{\textbf{Intelligence ---} ``The ability to determine behavior that will maximize the likelihood of goal satisfaction under dynamic and uncertain conditions, linked to the environment and the interaction with other (possibly human) agents.}''\\ Note that intelligence is also dependent on the difficulty of the goals to be achieved.
Based on this definition, it can be seen that intelligence and autonomy are distinct concepts, but that, for a given task, intelligence creates a bound on achievable autonomy. In other words, the level of intelligence of a robot may prevent its ability to reach a given level of autonomy for fixed robot capabilities~\cite{gunderson2004intelligence}. A final important note concerning the design of social robots is that a robot's perceived intelligence~\cite{bartneck2009measurement} can be drastically different from its actual intelligence. As a result, minimizing the gap between the two is crucial for maintaining adequate expectations and appropriate levels of trust on the human side. Now that we have defined the concepts of autonomy and intelligence, we discuss approaches to quantify them. \subsubsection{Quantifying autonomy and intelligence} Unlike scales from the automation~\cite{endsley1999level} or tele-operation~\cite{sheridan1978human,huang2005autonomy,yanco2004classifying,goodrich2003seven} fields, and more recently from autonomous vehicles~\cite{sae2014automated}, all of which are based on the idea that more autonomy requires less \ac{HRI}, some researchers have developed scales of autonomy that apply to social robots~\cite{beer2014toward,thrun2004toward,feil2007benchmarks,goodrich2008human}. These emphasize the fact that autonomy has to be understood as a dynamic entity~\cite{goodrich2008human}. On the other hand, measuring robot intelligence has been the subject of some investigation, from both practical~\cite{adams2016athlon} and theoretical perspectives~\cite{bien2002machine}. Both autonomy and intelligence can be seen as belonging to a continuum, taking into account aspects of robot perception, cognition, execution, and learning~\cite{gunderson2004intelligence,yanco2004classifying}. As a result, autonomy is a dimension that one designs for, constrained by possible achievable levels of intelligence.
As a general rule, the higher the autonomy and intelligence, the higher the complexity of the system. \subsubsection*{The importance of dimensional thinking} For a highly heterogeneous technology such as a social robot that involves a combination of hardware, software architecture, cognition mechanisms, intelligent hardware control, just to name a few, it is important to define dimensions for aspects such as autonomy and intelligence. The overall assessment of these aspects would then depend on a combination of assessments over individual dimensions. Researchers at IBM have proposed to define ``dimensions of (general artificial) intelligence'', as a way to define an updated version of the Turing test~\cite{turing2009computing}. Their list is more task-oriented, but can serve as a basis to think about general dimensions for both intelligence and autonomy. We propose the following dimensions of intelligence and autonomy, accounting for the socially interactive factor: \begin{enumerate} \item \textbf{Perception of environment-related and human-related factors ---} In order to engage in successful interactions, social robots need to be able to assess the dynamic state of the physical environment and of humans, to inform their decision making. On the human side, this includes estimating the human's physical parameters (pose, speed, motion, etc.), speech, and non-verbal social cues (gestures, gaze, prosody, facial expressions, etc.). \item \textbf{Modeling of environment and human(s) ---} In order to interpret robot perceptions, models of the environment and of humans are needed. For example, models of the humans can allow the robot to infer their intents, personality, emotional or affective states, and predict future human states or behavior. If models are parametrized to capture individual differences, then they can be a powerful tool to inform personalization and adaptation mechanisms in HRI~\cite{rossi2017user}.
\item \textbf{Planning actions to interact with environment and human(s) ---} Decision-making on a robot can be reduced to creating plans for robot actions that take into account the shape of the task, the goal, and the current state of the world, including the robot, the environment, and the human(s). A social robot needs to plan its motion, speech, and any other modality of social behavior it may be able to exhibit. \item \textbf{Executing plans under physical and social constraints ---} Just as the environment poses physical constraints on how the robot interacts with it, culture and society impose social constraints on how interactions with a robot should take place \cite{lee2014culturally}. Robot decision-making should take human social norms into account while planning and executing generated plans~\cite{carlucci2015explicit}. Note that the execution of the plan may not be successful, hence the robot needs to account for all possible outcomes. \item \textbf{Learning through interaction with the environment or humans ---} On top of the $4$ basic dimensions mentioned above, some robots may be endowed with learning capabilities, which allow them to improve with time, throughout their interactions with the environment or humans (including human-in-the-loop learning). Note that this dimension does not necessarily encompass machine learning as a general technique, as many offline machine learning methods would fall under the dimensions of perception and modeling. \end{enumerate} The dimensions above span most existing building blocks for the intelligence of a social robot. However, depending on their implementation and complexity, some robots may not include one or more of the above dimensions. Those dimensions are generally separated in the design and implementation of most robots, and as a result, intelligence and autonomy on each dimension may be completely different.
For example, some semi-autonomous robots include completely human-controlled perception~\cite{steinfeld2009oz}, or rely on human input for learning~\cite{chao2010transparent,rybski2007interactive,mericcli2011task} or verifying the suitability of robot plans~\cite{esteban2017build}. As technology advances, higher amounts of robot intelligence will be achievable, unlocking new possible levels of autonomy for more complex tasks; however, the amount of autonomy of a system (within possible technological limits) will remain a design choice. As a design principle for future social robots, we advocate for the notion of symbiotic autonomy~\cite{veloso2015cobots, coradeschi2006symbiotic}, where both humans and robots can overcome their limitations and potentially learn from each other. \subsection{Proximity} \label{subsec:proximity} Spatial features of the interaction may have a strong influence on the type of possible interactions and their perception by humans. In this section, we focus on the proximity of the interaction, i.e., the physical distance between the robot and the human. In particular, we consider $3$ general categories of interactions according to the proximity dimension: \textit{remote}, \textit{co-located}, and \textit{physical}. \subsubsection{Remote \ac{HRI}} Several applications in \ac{HRI} require the human and the robot to be in physically remote places. Tele-operation applications generally involve tasks or environments that are dangerous or inaccessible for humans, and historically represent one of the first involvements of humans with robots. In traditional tele-operation contexts, the human is treated as an operator, intervening to shape the behavior of one or more robots. Such types of \ac{HRI} scenarios have been extensively studied and a number of metrics have been developed for them~\cite{steinfeld2006common}. However, they are often excluded from the literature in social robotics~\cite{fong2002survey}.
More recent developments in the field of tele-operation gave rise to \textit{tele-presence} applications, which treat the robot as a physical proxy for the human~\cite{tsui2011exploring, kristoffersson2013review}, allowing the latter for example to be virtually present in tele-conferencing settings, or to visit remote places. As a result, as the robot is used to interact with humans in the remote environment, its design may include a strong focus on socially embodied aspects of the interaction beyond mere audio and video, such as distancing and gaze behavior~\cite{adalgeirsson2010mebot}. In all the previously cited literature, several notes are made regarding issues that are commonly faced, and should be addressed when developing social robots for tele-presence applications, such as concerns regarding privacy, a proper control interface for the pilot (including a map of the environment and the robot's surroundings), adaptability to people's height and stance (e.g., sitting, standing, behind a desk), robustness towards communication failures (e.g., loss of WiFi connection), and dynamic volume control. Finally, an important aspect of remote interaction is the translation of the operator's input into robot behaviors. Many interfaces have been developed for controlling tele-presence robots, including graphical and tangible interfaces~\cite{lazewatsky2011panorama}, but also virtual reality tools~\cite{nguyen2001virtual}, or brain-machine interfaces~\cite{tonin2011brain}. \subsubsection{Co-located \ac{HRI}} This category includes all interactions in which the robot and the human are located in a shared space and interact directly without explicit physical contact. This is the case for most existing social robotics scenarios. 
Within this category, we are most interested in the cases in which the robot has some form of locomotion ability (e.g., legged robots, aerial robots, wheeled robots), and also the ability to perceive and measure the distance to the human, in order to be able to actively control the distance between them. The social meaning of proximity in this context is referred to as proxemics, and constitutes an important part of non-verbal robot behavior~\cite{mumm2011human}. Mead et al.~\cite{mead2016perceptual} have explored this topic by taking into account not only the psycho-physical and social aspects of proximity from the human's perspective, but also regarding the robot's needs. In terms of needs related to proximity, social robots may require or prefer certain distances to people in order for their sensors to work properly (e.g., vision, speech interaction). Depending on the actual distance of the co-located robot, different modalities of communication may be more suitable. For example, robots in the private space may interact using speech or sound, and use touch screen for human input. However, robots at a greater distance but within line of sight, such as mobile robots, autonomous cars, or drones may use visual signals instead, such as expressive lights~\cite{baraka2018mobile,szafir2015communicating}. \subsubsection{Physical \ac{HRI}} Interactions happening in a shared space may involve an additional modality, namely physical contact between the human and the robot. Such interactions pertain to a blossoming subfield of HRI, commonly designated as Physical Human-Robot Interaction, or pHRI for short~\cite{Haddadin2016, billard2013roboskin, youssefi2015skinware}. From a hardware perspective, robots involved in pHRI are being designed with compliant joints (e.g., Baxter robot) for safety. Also, the design of robot outer shells is taking texture and feel into account~\cite{yohanan2009tool}.
Moreover, novel paradigms for robot hardware are emerging with soft robotics~\cite{majidi2014soft}. Examples of pHRI include physically assistive applications, where a robot has to be in physical contact with the person to execute its tasks, such as getting patients out of a chair~\cite{shomin2015sit}, or helping them feed~\cite{song2012novel} or dress themselves~\cite{kapusta2016data}. In industrial settings, physical proximity has also been shown, for some tasks, to improve the interaction and its perception by the workers~\cite{huber2017developing}. On the other hand, physical contact may be used as a communication modality in itself, using a combination of touch, motion, pressure and/or vibration, known as haptic communication~\cite{miyashita2007haptic}. Such a communication modality is especially useful when others (e.g., visual) are not feasible. In particular, research has looked at how robots can communicate or guide people with visual impairments using physical contact. For example, Bonani et al.~\cite{bonani2018my} investigated the use of movement of a Baxter robot's arm that blind people held to complement verbal instructions in a playful assembly task. Additionally, mobile robots have been used to guide people in indoor environments using physical contact~\cite{kulyukin2004rfid,shomin2016navigation}. Moreover, physical contact may possess a social component. This is the case when a robot behavior utilizing physical contact with a human is meant to induce or influence their behavior. For example, a mobile robot may use physical contact when navigating through a human crowded environment, inducing people to move away~\cite{shrestha2015using}. Also, affective robot behaviors involving contact, such as a hug or a handshake, have been shown to have an influence on the social behavior of the humans in their interaction with the robot (e.g., self-disclosure or general perception of the robot)~\cite{shiomi2017robot,avelino2018power}.
Human-robot haptics have also been investigated by studying the role of physical contact in human-animal interactions~\cite{yohanan2012role}.\\ While the spatial features discussed in this section pertain to different fields of research, one would expect in future robotic technologies a range of interactions that would incorporate a combination of the three, according to the task and situation at hand. \subsection{Temporal profile} \label{subsec:temporal} In this section, we look at time-related aspects of interactions with a social robot. Knowing the intended temporal profile of these interactions may have a strong impact on the design of such robots. We specifically discuss the \textit{timespan}, the \textit{duration}, and the \textit{frequency} of interactions. \subsubsection{Timespan} Interactions with robots can be classified according to \textit{timespan}, meaning the period of time in which the human is exposed to the robot. We consider four timespan categories, namely \textit{short-term}, \textit{medium-term}, \textit{long-term}, and \textit{life-long}. There does not exist, in the \ac{HRI} literature, a quantitative way to establish the boundaries between these four categories, as they may be context-dependent. Our aim is hence to provide a useful guideline for thinking about implications of such categories in the design of social robots, as well as their evaluation. \begin{itemize} \item \textbf{Short-term} interactions typically consist of a single or only a few consecutive interactions, e.g., a robot giving directions in a mall. Of special importance for these types of interactions are design factors that influence the first impression of the human towards the robot (e.g., appearance, size, motion ``at rest'', proxemics/approach behavior, initiation of the interaction).
Usually very present in short-term interactions is the novelty effect, a fundamental characteristic of any innovation characterized by the newness or freshness of the innovation in the eyes of the adopter~\cite{wells2010effect}. It is a salient effect that plays a role in the adoption and use of novel media, characterized by higher initial achievements not because actual improvements occur, but due to the increased interest in technology~\cite{clark1983reconsidering}. This effect may help or harm the interaction depending on its content and outcome, but it should be kept in mind in the design of robots for short-term use, also accounting for different expectations based on the users' demographics. \item \textbf{Medium-term} interactions go beyond a single or a few interaction(s) but do not extend over a timespan long enough to be considered part of the long-term category. They typically span several days or weeks. An example is a robot used to teach children a module in their curriculum over a few weeks. During repeated interactions, the novelty effect may wear off after the first few interactions, resulting in potential loss of interest or changes in attitudes towards robots over time~\cite{gockley2005designing, kanda2004interactive}. When considering repeated interactions with the same robot, it is hence essential to take this dynamic aspect into account by incrementally incorporating novelty or change in the behavior of the robot as well as maintaining a sense of continuity across interactions~\cite{leite2013social,alves2019empathic}. This will help sustain engagement and satisfaction both within and across individual interactions. \item \textbf{Long-term} interactions include prolonged interactions that go beyond the period needed for the novelty effect to fade~\cite{leite2013social}. An example is a personal robot operating in a home.
Long-term interactions typically create a sense of predictability in the human to know they will encounter a subsequent interaction. Additionally, humans may start feeling a sense of attachment to the robot, and even develop relationships with it. In addition to the points mentioned for the medium-term category, it is crucial to consider how the robot can both personalize and adapt its interactions with the human. Personalization means that the robot will accommodate for inter-individual differences, usually focusing on static or semi-static features of the human such as personality, preferences, or abilities. Adaptation means that the robot accommodates for intra-individual changes, focusing on dynamic features of the human such as physical, psychological and emotional state, performance, or behavior. For surveys about personalization and adaptation in \ac{HRI}, please consult Rossi et al.~\cite{rossi2017user} and Ahmad et al.~\cite{ahmad2017systematic}. Personalization can also include a dynamic component; for example, an algorithm has been developed for an office robot to learn not only preferences of robot behaviors but also how to switch between them across interactions, according to personality traits of the human~\cite{baraka2015adaptive}. \item \textbf{Life-long} interactions differ from long-term interactions by the fact that the human may go through large changes, for example, transitioning from childhood to adulthood, or progressively losing some capabilities during old age. These types of interactions are much rarer with existing robots, but we do have examples that include robotic pets adopted in life-long timespans such as the AIBO or PARO robots. Another example is robots meant to accompany people until the end of their lives, such as robots assisting the elderly while gaining skills over time hence compensating for the decrease in their users' capabilities~\cite{georgiadis2016robotic}.
In the future, the vision of robotic companions~\cite{dautenhahn2004robots} may include richer interactions including mutual learning and evolution, emotional support, and building deeper bidirectional relationships. \end{itemize} \subsubsection{Duration and frequency} In addition to timespan, an important temporal aspect of the interaction is the average \textit{duration} of individual interactions. For example, a human can interact with a robot in short-term but prolonged interactions (e.g., in an educational context), or on the contrary in short interactions over a long timespan (e.g., office robot), or in other combinations and levels of the above. An important question to consider for longer durations is how to maintain engagement, especially with populations with a short attention span, such as children. For short durations, it is important to design for intuitiveness and efficiency of the interaction, in order to reduce the cognitive load or adaptation time of the human. It is worth mentioning that duration is often imposed by the task itself, but may also be imposed by the human's willingness to end it. For example, the Roboceptionist~\cite{gockley2005designing} interacts with people in a building over large timespans. It was designed as a conversational chatbot, hence every person that interacts with it can initiate and end the interaction at any moment. The authors reported short interactions generally under $30$ seconds, and aimed at increasing this number by designing for long-term interactions with engagement in mind, using techniques from the field of drama. In addition to timespan and duration, the \textit{frequency} of interactions plays a role in their perception by humans, and in the resulting design considerations. The frequency of interactions with the same robot can vary from very occasional (e.g., robots in stores visited sporadically) to multiple times per day (e.g., workplace robots).
For high frequencies, a lack of incorporation of novelty, or at least variation in the robot's behavior, may result in fatigue and lack of engagement. Also, achieving continuity through memory is a particularly relevant factor~\cite{leite2013social}. Currently, research on the effect of frequency on the perception and effectiveness of interactions seems to be largely lacking in the \ac{HRI} literature. \\ This concludes our discussion of time-related aspects of the interaction, as well as the discussion of our framework as a whole. Before concluding this chapter, we provide a brief discussion of design approaches for social robots. \section{Working within the social robot design space} \label{sec:discussion} The framework presented in this chapter outlined major dimensions of relevance to the understanding of existing social robots and the design of future ones. Moving forward, it effectively defines a \textit{design space} for social robots, where each of the aspects discussed will involve a set of design decisions. For example: What role should my robot play in relation to humans? What should it look like? What kind of social capabilities should it have? What level of autonomy is best fitted for the task(s) and should it be fixed? etc. Higher-level decisions in the design process also arise such as: Are the requirements feasible with current technology, or will it require developing new technology? What are the practical considerations associated with the ``theoretically best'' design, as well as the costs, and are they outweighed by the benefits? The actual design process of social robots and their interactions with humans has benefited from a number of design approaches inspired by design practices from a variety of fields such as engineering, computer science, \ac{HCI}, and human factors. For example, some researchers in \ac{HRI} have looked at developing design patterns that can be reused without having to start from scratch every time~\cite{kahn2008design}.
There generally exist three broad design approaches, each of which may be valid depending on the intended context and objectives: human-centered design, robot-centered design, and symbiotic design. We briefly discuss these approaches next. \subsection{Robots as technology adapted to humans (human-centered design)} Human-centered design (HCD) is the central paradigm of \ac{HCI}, and much of \ac{HRI} design as a result. It aims to involve the intended user population as part of most development stages, including identifying needs and requirements, brainstorming, conceptualizing, creating solutions, testing, and refining prototypes through an iterative design process~\cite{abras2004user}. In the \ac{HRI} context, the main assumption is that humans have their own communication mechanisms and unconsciously expect robots to follow human social communication modalities, rules, conventions and protocols. Important aspects of the robot behavior and embodiment design that play a strong role in terms of the human's perception of the interaction include physical presence \cite{bainbridge2008effect}, size \cite{powers2007comparing}, embodiment~\cite{lee2006physically, wainer2007embodiment}, affective behaviors~\cite{leite2008emotional}, role expectations~\cite{dautenhahn2005robot}, just to cite a few. From an evaluation point of view, HCD relies heavily on subjective self-reports of users to measure their perceptions, and complement more objective measures such as task performance. While many HCD approaches exist for social robots, one of particular interest is treating robots as expressive characters, i.e., robots with \textit{the ability of expressing identity, emotion and intention during autonomous interaction with human users}~\cite{ribeiro2017}. Designing for expressivity can be achieved for example by bringing professional animators to work side by side with robotic and \ac{AI} programmers.
The idea is to utilize concepts of animation developed over several decades~\cite{ThomasJohnston1995} and apply them to robotic platforms~\cite{breemen2004, takayama2011expressing, ribeiro2012, hoffman2012, gielniak2012, ribeiro2013}. \subsection{Robots as goal-oriented technology (robot-centered design)} Historically, robots were developed solely by engineers who carried little concern about the human beyond the interface. While the focus in \ac{HRI} has now shifted to a more human-centered approach as was discussed in the previous section, HCD as a general design paradigm has been criticized by many researchers who consider it to be harmful in some aspects~\cite{greenberg2008usability,norman2005human}. For example, it has been criticized for its focus on usability (how easy it is to use) as opposed to usefulness (what benefits it provides) and its focus on incremental contributions based on human input conditioned by current technologies, which prevents researchers from pushing technological boundaries. Additionally, adapting the technology to the user may sometimes be more costly than having the user adapt to the technology. As a result, there are cases where a more robot-centered approach may work best. Excessively adapting robots to humans may result in suboptimal performance, high cost of development, or unmatched expectations. It is important to recognize that in some cases, it may be better to ask the human to adapt to the robot (maybe through training) in order to achieve better performance in the long run. Humans have a much better ability to adapt than robots, and it is crucial to identify when robots should not adapt because it would be more efficient to ask or expect humans to do it~\cite{norman2005human}. In many cases, the robot may have needs that may incur an immediate cost on humans, but result in a better future performance.
Examples include robots asking for help from humans when they face limitations~\cite{veloso2015cobots}, or teaching the robot to perform a certain task so that it can perform better in subsequent tasks. A robot-centered approach may also include the adaptation of our environments to make them suitable for robots. Examples include avoiding construction materials that are not compatible with the robot's sensors, interfacing the robot with building facilities (such as elevators), and so on. \subsection{Robots as symbiotic embodied agents (symbiotic design)} Both approaches discussed above, whether human-centered or robot-centered, are valid approaches that one can use when designing social robots and their associated tasks. As a general design process for such robots, we advocate for the careful identification of strengths and weaknesses of each part and design for an increased symbiosis between the human(s) and the robot(s). One way to achieve this symbiosis is to adopt a holistic view that focuses on the overall system behavior, as a function of robot(s), human(s), and the environment~\cite{steinfeld2009oz}. For example, the CoBot robots are autonomous mobile robots~\cite{veloso2015cobots} servicing human users in a building, designed with the ability to utilize the presence of other humans in the environment (i.e., bypassers) to overcome their limitations. For instance, they ask for assistance in pressing the elevator button or putting objects in their basket since they do not have arms. This is an example of symbiotic autonomy where humans and robots service each other mutually in the same shared environment, and where both parties have to adapt to the other party's needs. \section{Conclusion} \label{sec:conclusion} In this chapter, we have introduced a framework for characterizing social robots and their interactions with humans along principal dimensions reflecting important design considerations. 
In particular, we (1) presented a broad classification of robot appearances, (2) repositioned existing classifications of robot social capabilities, (3) discussed a cross-section of purposes and application areas, (4) provided a straightforward and broad classification of the robot's relational role, (5) clarified the related but distinct concepts of autonomy and intelligence, and discussed their quantification, (6) analyzed interactions according to their spatial features, and (7) looked at time-related aspects of the interactions. While this framework is aimed primarily at characterizing social robots by drawing from a large body of literature to illustrate the concepts discussed, it also serves as a useful guide to inform the design of future social robots. Towards this end, we briefly touched upon different design approaches, namely human-centered, robot-centered, and symbiotic. Social robotics is a growing multidisciplinary field that brings aspects of human nature closer to aspects of robotic technology. The scope of what a social robot means, does, or serves, will be shaped by future developments in the field. In this journey towards creating interactive intelligent machines, we are hopeful that, as they become more socially apt, they will contribute to expanding, not reducing, the foundational aspects of our humanity. \begin{acknowledgement} We would first like to thank C\'{e}line Jost for inviting us to be part of this book project and for contributing to the initial stages of the manuscript. Additionally, this book chapter would have not been possible without the valuable comments and suggestions of Prof. Ana Paiva. We would also like to thank the participants and co-organizers of the \href{https://gaips.inesc-id.pt/hri-reading-group/}{HRI Reading Group at Instituto Superior T\'{e}cnico} for sparking many discussions that influenced the content of this chapter.
We would finally like to acknowledge the Global Communication Center at CMU for their feedback on one of our drafts. K. Baraka acknowledges the CMU-Portugal INSIDE project grant CMUP-ERI/HCI/0051/2013 and Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT) grants with ref. SFRH/BD/128359/2017 and UID/CEC/50021/2019. P. Alves-Oliveira acknowledges a grant from FCT with ref. SFRH/BD/110223/2015. The views and conclusions in this document are those of the authors only. \end{acknowledgement} \input{referenc} \end{document}
\section{Introduction} \label{intro-sec} The spiral features visible in many galaxies have long been the subject of debate. Although it has been almost a century since the resolution of the ``great debate'' of \cite{SC21}, when it was argued over whether these beautiful spiral structures were nebulae within our galaxy or galaxies in their own right, the mechanisms which generate them are still uncertain. One of the problems with developing a comprehensive theory of spiral arms is the so-called ``winding dilemma''. It is known from observations of disc galaxies that the stars in the inner region have a higher angular velocity than those in the outer region. Therefore the spiral structure should ``wind up'' relatively quickly if the spiral arms rotate at the mean rotation velocity of the stars \citep[e.g.][]{W96}, contrary to observations of many ``grand design'' spiral galaxies. A proposed solution to the winding dilemma is given by spiral density wave theory \citep{LS64} which treats the spiral structure as a density wave which can rotate rigidly as a feature with a constant pattern speed and thus be long lived. However, no $N$-body simulations have yet been able to reproduce these long lived stable spiral arms, despite the increase in computational power and resolution which has occurred in recent years \citep[e.g.][]{S11,DB14}. Recent work has shown spiral modes and waves which survive over multiple rotations \citep{QDBMC11,RDL13,SC14} while the spiral arm features in the stellar mass are short-lived but recurrent \citep[e.g.][]{SC84,CF85,B03,FBSMKW11,GKC12,GKC12-2,GKC13,BSW13,RFetal13,DVH13} including in galaxies with a central bar \citep[e.g.][]{GKC12-2}, implying that the large spiral arms visible in external galaxies may only $appear$ to be rigid structures extending over the disc, while in fact being made of transient reforming features. The interpretation of the transient and recurrent spiral arm features observed in $N$-body simulations is still in debate.
For example, \cite{MFQDCVEB12} showed for the first time (by studying the time evolution of the disc power spectrum) that spiral wave modes in $N$-body simulations can last for as long as 1 Gyr, which can justify treating the wave modes as quasi-stationary structure, and the transient and recurrent spiral arm features can be explained by the superposition of different modes with different pattern speeds \citep[see also][]{RDQW12,SC14}. On the other hand, \cite{GKC12,DVH13,BSW13} demonstrated non-linear growth of the spiral arm features due to similar but different (in terms of evolution) mechanisms from swing-amplification \citep{T81}, which could be difficult to explain with the linear superposition of the wave modes. Our position within the Milky Way gives us a unique view of these spiral structures seen in external galaxies, but it comes with its own set of problems which we must overcome when studying them. The location and kinematics of the gaseous component of the arms may be determined from HI and CO observations \citep[e.g.][]{DHT01,NS03,KK09}. However to observe the kinematics of the stellar component in and around the spiral arms we must look through the disc plane, which carries the heaviest levels of dust and gas, and thus high levels of extinction. Dust extinction has long been a problem for Milky Way model construction. Although there are reasonably reliable extinction maps for extragalactic sources whose extinction by the interstellar medium of the Milky Way can be corrected as a function $A_\lambda(l,b)$ \citep[e.g.][]{SFD98}, three dimensional extinction mapping for sources within the Milky Way, i.e. $A_\lambda(l,b,d)$, is more challenging. There are three dimensional extinction maps for individual sections of the sky \citep[e.g.][]{DS01,MRRSP06,HBJ14,SM14} and two dimensional maps have been extended to three dimensions \citep[e.g.][]{DCL03}. However a truly Galactic 3D extinction map does not yet exist \citep{RB13}.
The European Space Agency (ESA)'s $Gaia$ mission will help us map the stellar structure and kinematics of the Milky Way, and help constrain extinction at the same time \citep{BJetal13}. $Gaia$, which was launched on 19 December 2013, will provide detailed astrometric \citep[e.g.][]{LLHOBH12}, spectroscopic \citep[e.g.][]{Ketal11} and photometric \citep[e.g.][]{Jetal10} information for around one billion stars in the Milky Way. Detailed information on $Gaia$ scientific accuracies is available in, for example, \cite{dB12}. Synthetic $Gaia$ mock data have already been used to demonstrate different applications of the real $Gaia$ data set. For example, \cite{AMAFR14} use three tracer populations (OB, A and Red Clump stars) with the $Gaia$ selection function, errors and dust extinction, and demonstrate that the $Gaia$ mock data can recover the parameters of the Galactic warp. \cite{RGFAAA14} examine the Galactic bar in the $Gaia$ observable space using Red Clump tracers with the $Gaia$ selection function, errors and dust extinction, combined with selected Red Clump stars from the Apache Point Observatory Galactic Evolution Experiment \citep[APOGEE DR10, e.g.][]{Aetal14}, showing the value of combining data from complementary surveys. In \cite{HK13} we show that we can recover the large scale structure of the Galactic disc with our Made-to-Measure Galaxy modelling code, \sc{primal }\rm \citep{HKM13,HK12,HK13}, and obtain a good estimate of the pattern speed of the bar, using tracer populations of M0III and Red Clump stars with the $Gaia$ selection function, errors and dust extinction. There exist full mock catalogues of $Gaia$ stars, e.g. the $Gaia$ Universe Model Snapshot (\sc{gums}\rm), which provides a view of the Besan\c{c}on Galaxy model as seen from $Gaia$ \citep{Rea12}, taking into account dust extinction while assuming there are no observational errors.
This detailed prediction of $Gaia$ observations gives an excellent indication of the volume and quality of data which will become available from $Gaia$, predicting 1.1 billion observable stars, almost 10,000 times more than from its predecessor, $Hipparcos$. \sc{gums }\rm can be extended through the $Gaia$ Object Generator (\sc{gog}\rm) \citep{Xetal14} to simulate intermediate and final catalogue data, including the introduction of realistic astrometric, photometric and spectroscopic observational errors based upon $Gaia$ science performance estimates. While these mock data provide an excellent example of the capabilities of $Gaia$, the Besan\c{c}on Galaxy model is axisymmetric and kinematic, not dynamical. Although $Gaia$ will not provide accelerations, the kinematics it will provide are drawn from a dynamical system, the Milky Way. Thus it is important for our purpose to generate catalogues from fully dynamical models with non-axisymmetric structures, such as spiral arms and a bar, which for example $N$-body disc galaxy models can provide. Therefore we propose here to create mock $Gaia$ observations from an $N$-body model using a population synthesis code such as \sc{galaxia }\rm \citep{SBJB11}, or the methodology presented in \cite{PCK12} or \cite{LWCKHFC14}. \sc{galaxia }\rm is a flexible population synthesis code for generating a synthetic stellar catalogue from an $N$-body or an analytical galaxy model over wide sections of the sky, with a sampling scheme which generates a smoothly distributed sample of stars. Synthetic catalogues generated from dynamical Galaxy models are essential for preparing to exploit the real $Gaia$ catalogue, and can be used to determine whether certain features within the Milky Way will be visible to $Gaia$.
In our previous work \citep{KHGPC14} we examined the kinematics of both the stellar and gas components around a transient, co-rotating spiral arm in a simulated barred spiral galaxy similar in size to the Milky Way. Although this arm is transient, similar arms recur during the evolution of the galaxy. We made predictions of observable kinematic signatures that may be visible in the Milky Way's Perseus arm if it is also a transient, recurrent and co-rotating spiral arm. We then compared our simulation with data from APOGEE and the maser sources from \cite{Retal14} measured by the Bar and Spiral Structure Legacy (BeSSeL) survey and the Japanese VLBI Exploration of Radio Astrometry (VERA), finding tentative agreement between our simulation and the observations. Owing to the low number of maser sources and the lack of distance information for the APOGEE stars, no firm conclusions could be drawn; however, it is encouraging to see similar features in both, including the possible signatures of a co-rotating spiral arm. In this paper we build upon the previous work by generating a stellar sample with different populations from the simulation data in \citet{KHGPC14} and making mock observations of these stars, taking into account the expected $Gaia$ science performance estimates. The aim is not to make further predictions about the kinematics of transient, recurrent and co-rotating spiral arms, but rather to examine whether these signatures remain visible in the $Gaia$ data, if they exist in the Milky Way. \section{Simulation} \label{sim} We use the simulated galaxy which is presented in \citet{KHGPC14} and \cite{GKC14b}. The details of the numerical simulation code and the galaxy model are described in \cite{KHGPC14}. We briefly describe the galaxy model in this section. The galaxy is set up in isolation, and consists of a gas and stellar disc but no bulge component. The discs are embedded in a static dark matter halo potential \citep{RK12,KHGPC14}.
The dark matter halo mass is $M_{\rm dm}=2.5 \times 10^{12}$ $\rm M_{\odot}$, and the dark matter density follows the profile of \cite{NFW97}, with a concentration parameter of $c=10$. The stellar disc is assumed to follow an exponential surface density profile with an initial mass of $M_{\rm d,*} = 4.0 \times 10^{10}$ $\rm M_{\odot}$, a radial scale length of $R_{\rm d,*} = 2.5$ kpc and a scale height of $z_{\rm d,*} = 350$ pc. The gas disc is set up following the method of \citet{SDMH05}, and has an exponential surface density profile with a scale length of $R_{\rm d,g} = 8.0$ kpc. The total mass of the gas is $10^{10}$ $\rm M_{\odot}$. The simulation comprises $10^6$ gas particles and $4 \times 10^6$ star particles; therefore each particle has a mass of $10^4$ $\rm M_{\odot}$. The resolution is sufficient to minimise numerical heating from Poisson noise \citep{FBSMKW11,S13}. We employ a minimum softening length of $158$ pc (equivalent to a Plummer softening length of $53$ pc), with spline softening and a variable softening length for the gas particles, as suggested by \citet{PM07}. The radial profile of the mean metallicity of stars and gas is initially set by $\mathrm{[Fe/H]} (R) = 0.2 - 0.05(R/1 \text{ kpc})$, and the metallicity distribution function at each radius is a Gaussian centred on the mean metallicity, with a dispersion of $0.05$ dex for the gas and $0.2$ dex for the stars. The stellar ages are set randomly between 0 and 10 Gyr for stars present at the beginning of the simulation. The simulation was run for 1 Gyr from these initial conditions with the $N$-body smoothed particle hydrodynamics code \sc{gcd+ }\rm \citep[e.g.][]{KG03,RK12,BKW12,KOGBC13,KGBGR14}, without the inclusion of any continuous external inflow of gas, for simplicity.
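As an illustrative sketch (our own NumPy version, not the actual \sc{gcd+ }\rm implementation; the function name is hypothetical), the initial metallicity assignment described above could be written as:

```python
import numpy as np

def assign_initial_metallicity(R_kpc, is_gas, rng):
    """Draw [Fe/H] for particles at galactocentric radius R (in kpc).

    The mean follows [Fe/H](R) = 0.2 - 0.05 (R / 1 kpc), with Gaussian
    scatter of 0.05 dex for gas particles and 0.2 dex for star particles.
    """
    mean = 0.2 - 0.05 * np.asarray(R_kpc, dtype=float)
    sigma = 0.05 if is_gas else 0.2
    return rng.normal(mean, sigma)

rng = np.random.default_rng(42)
# Star particles near R = 8 kpc scatter around [Fe/H] = -0.2 with 0.2 dex
feh = assign_initial_metallicity(np.full(20000, 8.0), is_gas=False, rng=rng)
```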
In this paper we use the same snapshot of the galaxy as used in \cite{KHGPC14}, taken at $t=0.925$ Gyr, as this snapshot shows a spiral arm at a similar location to the Perseus arm of the Milky Way (see Fig. \ref{galaxy}). \begin{figure} \centering \includegraphics[width=\hsize]{snap_c.ps} \caption{Snapshot of the simulated galaxy from \citet{KHGPC14} which is also used in this paper. The left (right) panel shows the face-on view of the star (gas) particle distribution. The solid line indicates the position of the identified spiral arm. The observer is assumed to be located at $(x,y)=(-8, 0)$ kpc. Three line-of-sight directions ($l_{\rm LOS}=90, 120$ and 150 deg) are highlighted with the dotted lines. The galaxy is rotating clockwise.} \label{galaxy} \end{figure} \section{$Gaia$ mock catalogue} \label{Model} \label{Kawata14} In \cite{KHGPC14} the kinematics of the spiral arm shown in Fig. \ref{galaxy} are examined along three lines of sight, $l_{\rm LOS}=90, 120$ and 150 deg, with $b_{\rm LOS}=0$, because of the lower extinction relative to other lines of sight in the plane. Predictions are made of the observational signatures of co-rotating spiral arms, notably the difference in kinematic structure between the trailing near side and leading far side of the spiral arm. In general, in \cite{KHGPC14} (as also shown in \cite{GKC14}) the stars in the trailing near side rotate more slowly, because they tend to be at apo-centre and migrating outward, and the stars in the leading far side rotate faster, as they tend to be at peri-centre and migrating inward. There are, however, some stars which follow the opposite trend, leading to multiple populations seen in the rotational velocity in the leading far side: one faster, and one slower, than the single population in the trailing near side.
These features, which will be discussed later, may be caused by the co-rotation resonance of the spiral arm, and are visible at different galactic longitudes because the arm in the simulation co-rotates over the whole radial range examined. However, in \cite{KHGPC14} the spiral arm kinematics are examined using the full, error- and extinction-free $N$-body data, and thus such trends, when present, are easy to identify. In this Section we describe how we generate a sample of stars from the $N$-body model of \cite{KHGPC14} to produce a mock $Gaia$ catalogue. It is worth noting that the population synthesis code \sc{galaxia }\rm \citep{SBJB11} provides a tool to generate stellar populations from $N$-body simulation data. However, because we plan to combine such a tool with our Made-to-Measure Galaxy modelling code, \sc{primal}\rm, we have developed our own simplified version of \sc{galaxia}\rm, a population synthesis code called \sc{snapdragons }\rm (Stellar Numbers And Parameters Determined Routinely And Generated Observing $N$-body Systems). \sc{snapdragons }\rm uses the same isochrones and extinction map as \sc{galaxia}\rm, but uses a different, more simplistic process to generate the stellar catalogue, which is described in Section \ref{Synth}. \sc{snapdragons }\rm allows us to add the expected $Gaia$ errors more easily, and enables us to track the link between sampled stars and their parent $N$-body particle for our future studies, e.g. \sc{primal }\rm modelling of the Galactic disc by fitting tracers from multiple stellar populations, and identifying radially migrating stars and non-migrating stars trapped by the spiral arm \citep{GKC14}. \subsection{Extinction} \label{Ex} We use the extinction map of the Milky Way taken from \sc{galaxia }\rm \citep{SBJB11}, which is a 3D polar logarithmic grid of the dust extinction, constructed using the method presented in \citet{BKF10} and the dust maps from \citet{SFD98}.
The same extinction map is applied in \cite{HK13}, where more detail is given. In an update from \citet{HK13}, we follow the correction to the Schlegel $E_{B-V}$ presented in \citet{Setal14}, such that \begin{equation} E_{B-V}=E_{B-V}\biggl(0.6+0.2\biggl(1-\text{tanh}\biggl(\frac{E_{B-V}-0.15}{0.1}\biggr)\biggr)\biggr). \end{equation} This correction is made because it has been suggested \citep[e.g.][]{AG99,YFS07} that the reddening is overestimated by the maps of \cite{SFD98} by a factor of $\sim$1.3--1.5 in regions of high extinction, with $A_V>0.5$ ($E_{B-V}>0.15$). This correction reduces the extinction by $\sim40\%$ in low latitude, high extinction regions, but has minimal effect on high latitude, low extinction regions. \subsection{Population Synthesis: \sc{snapdragons}\rm} \label{Synth} The goal of this population synthesis code is to split each $N$-body particle from the galaxy simulation into an appropriate number of stellar particles, creating a mock catalogue of observable stars from our $N$-body model. We must choose an IMF and a set of isochrones with which to work. We choose a Salpeter IMF \citep{S55}, where the IMF, $\Phi(m)$, is defined in each mass interval d$m$ as \begin{equation} \Phi(m)\text{d}m=Am^{-(x+1)}\text{d}m, \end{equation} where $x=1.35$ is the Salpeter index, and $A$ is a constant for normalisation in the desired mass range. We set this constant as \begin{equation} A_i=m_i\biggl(\int_{m_{\star,\text{min}}}^{m_{\star,i,\text{max}}}m^{-x}\text{d}m\biggr)^{-1}, \end{equation} where $m_i$ is the $N$-body particle mass, $m_{\star,i,\text{max}}$ is the maximum initial mass of any surviving star and $m_{\star,\text{min}}$ is the minimum stellar mass to be considered. We make use of the Padova isochrones \citep[e.g.][]{BBCFN94,MGBGSG08}, although the choice of isochrones (and IMF) may be substituted with others with no change to the methodology. It is worth noting that the Padova isochrones are available only for stellar masses above 0.15 $M_{\odot}$.
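Returning briefly to the reddening rescaling of Section \ref{Ex}, that correction is simple enough to sketch directly (a NumPy sketch of our own, not code from \sc{galaxia}\rm):

```python
import numpy as np

def correct_ebv(ebv):
    """Rescale Schlegel et al. (1998) E(B-V) following Sharma et al. (2014).

    The tanh switch leaves low-extinction regions almost unchanged while
    reducing high-extinction regions (E(B-V) > 0.15) by up to ~40 per cent.
    """
    ebv = np.asarray(ebv, dtype=float)
    return ebv * (0.6 + 0.2 * (1.0 - np.tanh((ebv - 0.15) / 0.1)))

# High-extinction sightline: reduced by ~40 per cent
ebv_high = correct_ebv(1.0)    # ~0.6
# Low-extinction sightline: reduced by only a few per cent
ebv_low = correct_ebv(0.02)
```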
\sc{galaxia}\rm, for example, uses the isochrones from \citet{CBAH00} to extend the mass limit down to 0.07 $M_{\odot}$, the hydrogen burning limit. We set our lower limit on stellar mass as $m_{\star,\text{min}}=0.1$ $M_{\odot}$, to correspond with the simulation from \cite{KHGPC14}, and extrapolate from the Padova isochrones for $0.1\leq m/M_{\odot}\leq0.15$. It is relatively safe to do this because all such stars lie on the main sequence. Additionally, these exceedingly faint stars will not be visible at the distance of the spiral arms which are the focus of this work. As discussed in Section \ref{sim}, each $N$-body star particle in the simulated galaxy is assigned an age and metallicity within the chemodynamical code \sc{gcd+}\rm, and these evolve through the simulation. When we examine the snapshot, each particle is matched to its nearest isochrone in both metallicity and age from the grid of isochrones extracted from \sc{galaxia}\rm. Once an isochrone is selected, we identify $m_{\star,i,\text{max}}$ from the isochrone. We then determine how many stars to sample from the $N$-body particle by integrating the IMF over the desired mass range; \begin{equation} N_s=A\int_{m_{\star,i,<V_{\text{lim}}}}^{m_{\star,i,\text{max}}}m^{-(x+1)}\text{d}m, \end{equation} where $m_{\star,i,<V_{\text{lim}}}$ is the minimum mass required for a star to be brighter than our apparent magnitude selection limit, $V_{\text{lim}}$, taking into account the extinction value at the position of the parent particle. Stars less massive than $m_{\star,i,<V_{\text{lim}}}$ are not used in the subsequent analysis, to save computational time. We then randomly sample $N_s$ stellar masses from this section of the isochrone, weighting the random selection by the IMF using the equation \begin{equation} m_{\star}=(R m_{\star,i,\text{max}}^{-x}+(1-R)m_{\star,i,<V_{\text{lim}}}^{-x})^{\frac{1}{-x}}, \end{equation} where $R$ is a random number between 0 and 1.
The isochrones are comprised of discrete stellar data, and we therefore interpolate between the nearest isochrone values of $M_V$ and $V-I_c$ to determine $M_{V_{\star}}$ and $V-I_{c\star}$ for the generated $m_{\star}$. At this stage we assume the generated stars have the same position and velocity as their parent particles. \subsection{Observational Errors} \label{Error} Having generated the visible stellar catalogue, we then add observational errors based upon the $Gaia$ Science Performance estimates\footnote{http://www.cosmos.esa.int/web/Gaia/science-performance}. We use the post-launch error estimates approximated from the pre-launch performance estimates by Merc\`{e} Romero-G\'{o}mez \citep[e.g.][]{RGFAAA14}, provided through the Gaia Challenge collaboration\footnote{http://astrowiki.ph.surrey.ac.uk/dokuwiki/doku.php}. We assume the position and velocity of the Sun are known. We locate the observer at ($-8$,0,0) kpc, as shown in Fig. \ref{galaxy}, and the rotation velocity of the Sun is assumed to be 228 km s$^{-1}$. For this work, while generating the stellar catalogue we produced only stars brighter than $V_{\text{lim}}=16$ mag, which is well within $Gaia$'s $m_G\leq20$ mag magnitude limit for the astrometry. However, because we are interested in the Galactic radial and rotation velocities of the stars, which require the full 6D phase space information, we chose this brighter magnitude limit, within which the $Gaia$ RVS can provide reasonably accurate line-of-sight velocities. Note that the errors are added to the parallax, proper motion and line-of-sight velocities. A full description of the method used to add the pre-launch $Gaia$ errors is available in \cite{HK13}. However, the $Gaia$ science performance estimates have been revised after launch, and as such a correction must be made.
The error in parallax has increased, and although this has little effect for stars with $m_V\leq16$ mag, which we work with in this paper, the coefficients within the equation describing the pre-launch parallax performance (provided by Kazi, Antoja \& DeBruijne (Oct. 2014) by fitting to the new estimates on the $Gaia$ science performance web page) are revised to \begin{eqnarray} \sigma_{\pi}&=&(-11.5+706.1z+32.6z^2)^{1/2} \nonumber \\ & &\times(0.986+(1-0.986)(V-I_c)), \label{sigpi} \end{eqnarray} where \begin{equation} z=\text{max}(10^{0.4(12-15)},10^{0.4(G-15)}), \label{zmax} \end{equation} correcting also the typo in equations (\ref{sigpi}) and (\ref{zmax}) of \cite{HK13}. Additionally, because of the loss of spectroscopic accuracy of $\sim1.5$ mag in the post-launch RVS performance, we also apply a correction to the error function for the end-of-mission radial velocity. We change the table\footnote{http://www.cosmos.esa.int/web/Gaia/table-5} of values $a$ and $b$, again determined by fitting the revised performance estimates on the $Gaia$ science performance web page, for the equation \begin{equation} \sigma_{v_r} = 1 + b\text{e}^{a(V-14)}, \end{equation} where $a$ and $b$ are constants dependent on the spectral type of the star. The new table, along with the code to add the $Gaia$ errors, is available online\footnote{https://github.com/mromerog/Gaia-errors}.
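The revised error curves above can be sketched directly; this is our own NumPy sketch, and the values of $a$ and $b$ passed in the example below are placeholders, since the real values must come from the spectral-type-dependent table:

```python
import numpy as np

def parallax_error_uas(G, V_minus_I):
    """Post-launch end-of-mission parallax error, in micro-arcseconds.

    sigma_pi = sqrt(-11.5 + 706.1 z + 32.6 z^2)
               * (0.986 + (1 - 0.986) (V - I_c)),
    with z floored at its G = 12 value for brighter stars.
    """
    z = np.maximum(10.0**(0.4 * (12.0 - 15.0)), 10.0**(0.4 * (G - 15.0)))
    colour_term = 0.986 + (1.0 - 0.986) * np.asarray(V_minus_I)
    return np.sqrt(-11.5 + 706.1 * z + 32.6 * z**2) * colour_term

def vlos_error_kms(V, a, b):
    """End-of-mission RVS line-of-sight velocity error, in km/s.

    a and b depend on spectral type; pass values from the revised table.
    """
    return 1.0 + b * np.exp(a * (V - 14.0))

# Bright stars sit on the calibration floor set by z(G = 12)
sigma_pi_bright = parallax_error_uas(10.0, 0.75)   # a few micro-arcsec
```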
It is important to determine whether such features will still be visible in the $Gaia$ catalogue, and not just in the error- and extinction-free $N$-body model. In this Section we show the result of sampling these $N$-body data into stellar data, first looking at the properties of the resulting mock stellar catalogue, and then examining the spiral arm kinematics with the stellar data, taking into account dust extinction and the $Gaia$ science performance estimates. \subsection{Population synthesis} In this section we describe the stellar catalogue produced by \sc{snapdragons}\rm, and show the resulting colour magnitude diagram (CMD), varying the area of sky coverage. Fig. \ref{CMD} shows the CMD for stars generated by \sc{snapdragons }\rm from particles within a square region of $\pm2$ deg (upper) and $\pm5$ deg (lower) around $(l,b) = (90,0)$ deg. The upper panel of Fig. \ref{CMD} clearly shows the individual stellar isochrones, because there are only a small number of $N$-body particles in the selected region, and each particle has only one age and metallicity. This discreteness is resolved when smoothing is applied to the phase space and age--metallicity distributions \citep[e.g.][]{SBJB11}. However, as discussed in Section \ref{Synth}, we deliberately avoid this smoothing to maintain the clear particle--star relation. The lower panel of Fig. \ref{CMD} shows no such discrete structure, as there are sufficiently many particles to cover a broad range of stellar ages and metallicities in the CMD. Therefore, care is required with the resolution of the $N$-body simulation and the selection function if we discuss in detail the stellar population distribution in the CMD. However, this is unlikely to affect the study in this paper.
\begin{figure} \centering \includegraphics[width=\hsize]{HR2_c.ps} \caption{Colour magnitude diagram for stars generated by \sc{snapdragons }\rm from particles within a square region of $\pm2$ deg (upper) and $\pm5$ deg (lower) around $(l,b) = (90,0)$ deg. Only stars with an apparent magnitude of $m_V\leq16$ are included.} \label{CMD} \end{figure} \subsection{Observable Spiral Arm Kinematics} In this section we examine whether the possible kinematic signatures of co-rotating transient and recurrent spiral arms identified in \cite{KHGPC14} will be visible in the $Gaia$ data, even given the dust extinction in the disc and $Gaia$'s science performance accuracy. A detailed analysis of the kinematics themselves is the focus of \cite{KHGPC14}, while this work is concerned with the visibility of this kinematic structure in the $Gaia$ data. We examine the rotational velocities of the stars in the catalogue at different distances, because in \cite{KHGPC14} we found the rotation velocity to be most affected by the transient co-rotating spiral arm. We then calculate the Probability Density Function (PDF) of the rotation velocity of stars behind and in front of the spiral arm using Kernel Density Estimation (KDE), which we use as a desirable alternative to histograms \citep[e.g.][]{W06}. Fig. \ref{Crot} shows a smoothed contour plot of the galactocentric rotational velocity against distance for particles and stars within a square region of $\pm5$ degrees around $(l,b)=(90,0)$ (left), $(l,b)=(120,0)$ (middle) and $(l,b)=(150,0)$ (right). This compares the kinematics of the underlying $N$-body model (upper) with the stellar catalogue generated with \sc{snapdragons}\rm, before (middle) and after (lower) the addition of the errors from the $Gaia$ science performance estimates.
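The KDE-based PDFs used below can be sketched with a simple Gaussian kernel; this is our own minimal version, and we assume here that the bandwidth is expressed in km s$^{-1}$, matching the velocity units:

```python
import numpy as np

def kde_pdf(v_samples, v_grid, bandwidth=4.0):
    """Gaussian-kernel density estimate of a velocity distribution.

    v_samples, v_grid and bandwidth share the same units (km/s here).
    """
    v = np.asarray(v_samples, dtype=float)[:, None]   # shape (N, 1)
    grid = np.asarray(v_grid, dtype=float)[None, :]   # shape (1, M)
    kernels = np.exp(-0.5 * ((grid - v) / bandwidth)**2)
    return kernels.sum(axis=0) / (v.shape[0] * bandwidth * np.sqrt(2.0 * np.pi))

# A toy rotation-velocity sample peaked near 220 km/s
rng = np.random.default_rng(1)
v_rot = rng.normal(220.0, 15.0, 5000)
grid = np.linspace(150.0, 300.0, 301)
pdf = kde_pdf(v_rot, grid)
```

Unlike a histogram, this estimate is independent of bin edges, at the cost of a fixed smoothing scale.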
Owing to the high proportion of low mass, low luminosity stellar types, which would dominate the selected region and saturate the plot at small distances, we have made cuts to our sample to visualise the underlying kinematic structure from the stellar catalogue. We have first cut the sample of stars in all three lines of sight by absolute magnitude, $M_V\leq-1$, calculated from the apparent magnitude $m_V$ and observed distance $d_{\text{obs}}$, assuming the dust extinction is known. We then cut with $\sigma_{v_{\text{los}}}/(v_{\text{los}}\times d_{\text{obs}})\leq0.015$ kpc$^{-1}$ to select the stars with lower errors in the line-of-sight velocities at smaller distances, generating similar quantities of data at different distance scales. This is purely for illustration purposes, and we are not suggesting that this is the best possible selection function. The upper panels of Fig. \ref{Crot} show the different kinematic structure in the $N$-body model along the different lines of sight. These are the same data as those shown in the top panels of Fig. 4 of \cite{KHGPC14}. Note that the density colour scale for the $N$-body data is different from that for the stellar data in the middle and lower panels. The middle row of panels of Fig. \ref{Crot} shows the velocities of the selected stars, which appear slightly different from those of the $N$-body data owing to the selection function. While the general shape of the distribution has been recovered, at $(l,b)=(90,0)$ deg (middle left) the fast rotating stars within the arm dominate the density scale and wash out the rest of the plot slightly. At $(l,b)=(120,0)$ deg (middle), although there is some saturation around $220$ km s$^{-1}$, the kinematic structure is clearly visible and is a good match to the particle data. Similarly at $(l,b)=(150,0)$ deg (middle right), despite the lower number of counts, the kinematic structure is clearly shown. The lower panels of Fig.
\ref{Crot} show the error-affected rotation velocity and distance for the selected stars, taking the $Gaia$ science performance estimates into account. The rotation velocity is calculated from the observed parallax, proper motion and line-of-sight velocity. At $(l,b)=(90,0)$ (lower left) the shape of the distribution remains relatively unchanged, with the main loss of accuracy occurring around $d_{\text{obs}}\approx7-10$ kpc. The recovery of the kinematic structure around the spiral arm at $d_{\text{obs}}\approx4$ kpc remains almost identical to the case without observational errors. At $(l,b)=(120,0)$ (lower middle) the visible loss of accuracy is again in the outer region of $d_{\text{obs}}\approx7-10$ kpc, with the region containing the spiral arm remaining very similar to that of the error-free case. At $(l,b)=(150,0)$ (lower right), the entire distribution remains very similar to the middle right panel, the case without $Gaia$-like observational errors. \begin{figure*} \centering \includegraphics[width=\hsize]{9DEc_c.ps} \caption{Smoothed linear scale contour plot of the galactocentric rotation velocity of simulation particles (upper), selected \sc{snapdragons }\rm stars (middle) and selected \sc{snapdragons }\rm stars observed with $Gaia$ errors (lower) for $(l,b)=(90,0)$ (left), $(l,b)=(120,0)$ (middle) and $(l,b)=(150,0)$ (right). For the \sc{snapdragons }\rm stars (middle and lower panels), a limited selection of $M_V\leq-1$, calculated using $m_V$ and $d_{\text{obs}}$ and assuming a known extinction, along with $\sigma_{v_{\text{los}}}/(v_{\text{los}}\times d_{\text{obs}})\leq0.015$ kpc$^{-1}$, is shown to avoid overly dense populations of fainter stars at smaller distances. This selection is only for visualising the data set; these fainter stars do contribute to the subsequent analysis.
Note that, while consistent for the \sc{snapdragons }\rm stars across the different lines of sight, the density scale is different for the simulation particles; however, the choice of scale is arbitrary.} \label{Crot} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize]{PDFversion-many.ps} \caption{Comparison of the distribution of galactocentric rotational velocities for the stars generated by \sc{snapdragons }\rm within a square region of $\pm5$ degrees around $(l,b)=(90,0)$ (left), $(l,b)=(120,0)$ (middle) and $(l,b)=(150,0)$ (right), in the trailing near side (upper) and leading far side (lower) of the spiral arm, which meet the $m_V\leq16$ selection limit. The black solid curve shows the true velocities, and the red dashed curve shows the distribution once the $Gaia$ errors have been applied. The vertical lines show the circular velocity (dotted) and the mean rotation velocity (dash-dotted) at the radius of the spiral arm.} \label{Hrot} \end{figure*} Fig. \ref{Hrot} shows the PDFs, with a KDE bandwidth of 4, of the rotational velocity of the stars in the catalogue within a square region of $\pm5$ degrees around $(l,b)=(90,0)$ (left), $(l,b)=(120,0)$ (middle) and $(l,b)=(150,0)$ (right), in the trailing near side, between 1 and 2 kpc closer than the centre of the arm (upper), and the leading far side, between 1 and 2 kpc further than the centre of the arm (lower). Note that these distance bins were chosen as they show the discussed structure most clearly; the same features are present closer to the arm, but are less clear. The centre of the arm was determined to be at $d=4.0$ kpc at $(l,b)=(90,0)$, $d=3.4$ kpc at $(l,b)=(120,0)$ and $d=3.3$ kpc at $(l,b)=(150,0)$. Note that Fig. \ref{Hrot} uses all the stars with $m_V\leq16$, not applying the selection function used for illustration purposes in Fig. \ref{Crot}. At all three lines of sight Fig.
\ref{Hrot} shows a clear difference in the distribution of velocities for the `true' data (black solid) when comparing the different observed distances, as shown in \cite{KHGPC14}. This is a positive outcome considering the loss of data from the dust extinction. When comparing the `true' (black solid) stellar catalogue data with the stellar data taking into account dust extinction and $Gaia$'s expected errors (red dashed), a general smoothing out of the structure is evident in the `observed' data. The upper panels of Fig. \ref{Hrot}, showing the trailing near side of the arm, show very similar PDFs when comparing the true and observed stellar data, whereas the lower panels, showing the leading far side, show an information loss, especially at $(l,b)=(90,0)$, where the three peaks are no longer resolved. This is to be expected because of the greater distances and therefore additional extinction; however, at $(l,b)=(120,0)$ and $(150,0)$, even on the far side of the spiral arm, the structure within the distribution is still clearly visible. When comparing the `observed' data in Fig. \ref{Hrot} in front of and behind the spiral arms, we see a clear difference in the PDF at all three lines of sight. In each case, the PDF in the trailing near side of the spiral arm forms a single central peak similar to the mean rotation velocity, with a small tail towards faster rotation velocities, whereas the leading far side of the spiral arm shows a broader distribution of velocities, with a peak velocity faster than the peak for the trailing near side. The difference is particularly apparent at $(l,b)=(120,0)$ deg, where the leading far side shows two clear peaks, one faster and one slower than the single peak in the trailing near side. This bimodal distribution can also be seen in the lower middle panel of Fig. \ref{Crot} between 4.39 and 5.39 kpc (although note that Fig. \ref{Crot} uses a different selection function).
Also at $(l,b)=(150,0)$ deg, the single broad peak in the trailing near side is easily distinguishable from the leading far side, which shows three peaks. These three peaks are also partially visible in the lower right panel of Fig. \ref{Crot} between 4.29 and 5.29 kpc. These features all match those observed in \cite{KHGPC14}, despite the addition of dust extinction and observational errors to the data. In general, as shown in \cite{GKC14}, the stars in the leading side rotate faster, as they tend to be in the peri-centre phase and migrating inward, and stars in the trailing side rotate more slowly, as they tend to be in the apo-centre phase and migrating outward. This explains the single large peak in the trailing side, and the largest peak on the leading side, which has a higher rotational velocity than the single peak on the trailing side, as shown in Fig. \ref{Hrot}. However, when the transient spiral arm starts forming, stars which are close to the arm on the trailing side and close to the peri-centre phase are accelerated towards the arm, passing through it and then slowing down as they reach apo-centre on the leading side, as discussed in \cite{KHGPC14}. These stars correspond to the `slower' peaks visible in the lower panels of Fig. \ref{Hrot}. Similarly, the stars which are close to the arm and close to the apo-centre phase on the leading side are decelerated by the arm, and are overtaken by it. They are then accelerated again by the arm once they are on the trailing side in the peri-centre phase, which corresponds to the small tail present at high velocities in the upper panels of Fig. \ref{Hrot}. A further difference between the leading and trailing sides of the spiral arm seen in Figs. \ref{Crot} and \ref{Hrot} is that this latter population (the high velocity tail on the trailing side) is smaller than the former (the decelerated stars on the leading side). It appears that it is easier for stars to escape from the arm on the leading side than on the trailing side.
From our analysis of $N$-body simulations this appears to be a common feature of transient and co-rotating spiral arms. \cite{CQ12} propose that the radial overlap of multiple longer-lived patterns moving at different pattern speeds can reproduce the transient spiral features, which, when strong enough, can lead to radial migration away from the co-rotation radius associated with co-rotating spiral arms, as seen, for example, in \cite{GKC12,GKC12-2}. In such a scenario, the spiral arm features are co-rotating, which may give rise to the co-existence of many inner and outer Lindblad resonances over a range of radii and lead to the features visible in Figs. \ref{Crot} and \ref{Hrot}. However, further analysis of the spiral arms in $N$-body simulations is required before drawing firm conclusions on the mechanism that generates such kinematic signatures, which we will tackle in future studies. From Figs. \ref{Crot} and \ref{Hrot} we find that $Gaia$'s scientific accuracy ought to be sufficient to examine the kinematic structure of the nearby spiral arms in the Milky Way, even on the far side of the arm. Fig. \ref{Hrot} shows clear differences in the kinematics on the leading and trailing sides of the spiral arm, notably the difference in the number and locations of the peaks, and the small high velocity tail present on the trailing near side. The comparison between the middle and lower panels of Fig. \ref{Crot} shows little difference, implying that the observational errors from $Gaia$ will have limited effect on our ability to study the kinematics of the spiral arms. Further examination of galaxy models constructed using the different theories of spiral arm formation will be essential to determine the distinct kinematic signatures of each theory.
\section{Summary} \label{SF} We observed our $N$-body/SPH simulation of a Milky Way-like barred spiral galaxy to create a mock $Gaia$ stellar catalogue, with particular interest in the stellar kinematics in and around the spiral arms. We focused on the same three lines of sight in the disc plane as \cite{KHGPC14}, $(l,b)=(90,0), (120,0)$ and $(150,0)$ deg, and analysed the galactocentric rotational and line of sight velocities of the selected stars as a function of the distance from the observer. In agreement with existing literature on $N$-body spiral galaxy simulations, the spiral arm features seen in the stellar mass in our model are transient, recurrent and co-rotating, i.e. the spiral arm is rotating at the circular velocity of the stars at the selected lines of sight. We show that the structure in the kinematics identified in \cite{KHGPC14} remains visible after the inclusion of dust extinction and observational errors based upon $Gaia$ science performance estimates. Although the inclusion of these observational effects makes the trends less clear, they are still observable in the mock $Gaia$ data in front of, inside and behind the spiral arm. The structure on the trailing near side is relatively unchanged, whereas the structure on the leading far side is, unsurprisingly, more affected, although the bi-modal (or multi-modal) and broader distribution of the rotation velocities is still clearly visible. Because we believe that these kinematic signatures are indications of transient and co-rotating spiral arms owing to the co-rotation resonance at all radii, we predict they should be visible in the $Gaia$ data at different longitudes if the Milky Way's Perseus arm is also a transient and co-rotating spiral arm.
Encouraged by the success of this study, we intend to repeat the analysis with simulated galaxies which use different theories of spiral structure formation, for example test particle simulations \citep[e.g.][]{MQ08,MBSB10,MF10,FSF14,Aetal14-2} and $N$-body simulations with a fixed spiral arm potential \citep[e.g.][]{WBS11}. From these analyses we expect to make predictions of the kinematic signatures of different spiral arm theories, which can be tested by the $Gaia$ stellar catalogue. \section*{Acknowledgements} We gratefully acknowledge the support of the UK's Science \& Technology Facilities Council (STFC Grant ST/H00260X/1 and ST/J500914/1). The calculations for this paper were performed on Cray XT4 at Center for Computational Astrophysics, CfCA, of the National Astronomical Observatory of Japan and the DiRAC facilities (through the COSMOS consortium) including the COSMOS Shared Memory system at DAMTP, University of Cambridge operated on behalf of the STFC DiRAC HPC Facility. This equipment is funded by BIS National E-infrastructure capital grant ST/J005673/1 and STFC grants ST/H008586/1, ST/K00333X/1 \& ST/J001341/1. The authors acknowledge the use of the IRIDIS High Performance Computing Facility, and associated support services at the University of Southampton. We would also like to thank PRACE for the use of the Cartesius facility. This work was carried out, in part, through the $Gaia$ Research for European Astronomy Training (GREAT-ITN) network. The research leading to these results has received funding from the European Union Seventh Framework Programme ([FP7/2007-2013] under grant agreement number 264895). 
We would also like to thank Merc\`{e} Romero-G\'{o}mez and Francesca Figueras for providing the subroutine to calculate the $Gaia$ performance errors, including the update to post-launch estimates, Sanjib Sharma for providing the \textsc{galaxia} extinction maps and isochrones and Sami Niemi for the suggestion of using KDEs to visualise the velocity distributions. \bibliographystyle{mn2e}
\section{Introduction}\label{sec1} Extreme value theory deals with the stochastic behavior of extreme events, found in the tails of probability distributions, and it finds wide application in the environmental sciences. Events such as extreme precipitation and storm wind speed are driven by complex spatio-temporal processes and are usually characterized by limited predictability. Understanding the frequency and intensity of these phenomena is important for public safety and long-term planning. Estimating the probability of extreme meteorological events is difficult because of limited temporal records, and this issue is exacerbated in spatial settings, since forecasting entails extrapolation in a high-dimensional space. A variety of statistical tools have been used for modeling extreme values, and the book by \citet{Coles2001} provides an introduction to the topic. Traditional methods are based on the block maxima approach exploiting the generalized extreme value distribution (GEV) \citep{Fisher.Tippett1928, Gnedenko1943} or on the peaks over threshold (POT) approach exploiting the generalized Pareto distribution (GPD) \citep{Balk, Pickands1975}. The GEV is a three-parameter family of distributions that describes the asymptotic behavior of suitably renormalised block maxima of a sequence of independent and identically distributed random variables. The shape parameter of this distribution plays the crucial role of determining the weight of the upper tail of the density. The GPD, used under a POT approach, is a two-parameter family of distributions that is used to model excesses over a suitably chosen high threshold \citep{Pickands1975}. The GEV and GPD models are deeply connected to each other \citep{Davison.Smith1990}, and in most practical applications they are routinely employed regardless of the suitability of their asymptotic arguments.
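As a concrete illustration of the block maxima approach, the following Python sketch fits a GEV to synthetic yearly maxima and extracts a return level. The Weibull surrogate for daily rainfall and all numerical values are illustrative assumptions, not results from this paper; note also that SciPy parameterizes the GEV with a shape \texttt{c} equal to minus the usual extreme-value shape parameter.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "daily rainfall": 50 years of 366 iid Weibull draws
# (an illustrative surrogate, not the paper's data).
daily = 10.0 * rng.weibull(0.8, size=(50, 366))
maxima = daily.max(axis=1)          # block (yearly) maxima

# Maximum-likelihood GEV fit. SciPy's shape c equals minus the usual
# extreme-value shape parameter xi.
c, loc, scale = stats.genextreme.fit(maxima)
xi = -c

# 100-year return level: the (1 - 1/100) quantile of the fitted GEV.
r100 = stats.genextreme.ppf(1.0 - 1.0 / 100.0, c, loc=loc, scale=scale)
print(xi, r100)
```

The fitted shape governs the weight of the upper tail, exactly the quantity whose estimation from short records motivates the hierarchical approach developed below.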
The wide popularity of these methods led much of the extreme value literature to focus only on a small portion of the data: the block maxima or the few values above a threshold. The ``ordinary'' values from which the maxima are extracted are discarded, wasting much of the available information. These issues have received increasing attention in recent times. Indeed, in many environmental applications the number of yearly events is not sufficiently large for the asymptotic argument to hold, as discussed, for example, by \citet{Kou} and \citet{MARANI2015121} in hydrology. Another limitation of traditional extreme value theory is the assumption of a constant distribution for the ordinary events over time, since many phenomena display well-established changes in the event magnitude generation process \citep{MARANI2015121}. Although the above limitations may seem marginal in the study of atmospheric phenomena, they can have wide implications for the study of extreme values. Based on these considerations, \citet{hmev} introduced a Bayesian hierarchical model for extreme values building upon the so-called metastatistical extreme value (MEV) approach \citep{MARANI2015121, marraMEVD}. Bayesian hierarchical models for extreme value modelling represent an active area of research \citep{cooley2007, padoan_davison, sgelf, ghoshmall, bracken18}. One of the main advantages of employing a Bayesian approach in this context is the possibility of incorporating, in the form of prior distributions, information additional to the data. This is particularly relevant in a field like extreme value analysis, where observations are by nature scarce, and especially in environmental studies, where reliable expert prior information about the geophysical processes at hand is often available. Consistently with this, we introduce a spatial hierarchical Bayesian model to analyze extreme values of rainfall intensity.
In Section \ref{sec2} we introduce the general structure of the spatial hierarchical model and subsequently specialize it to the analysis of rainfall data. In Section \ref{sec3} the proposed formulation is tested through an extensive simulation study. An application to maximum rainfall data is described in Section \ref{sec4}. The paper ends with a final discussion. \section{A spatial hierarchical Bayesian extreme value model}\label{sec2} \subsection{Notation and general formulation} \label{sec:shmev_general} Let $x_{ij}(s)$ denote the magnitude of the $i$-th event within the $j$-th block for site $s$, where $j = 1, \dots, J$ with $J$ the number of blocks in the observed sample, $i = 1, \dots, n_j(s)$ with $n_j(s)$ the number of events observed within the $j$-th block, and $s \in \mathcal{S}$ with $\mathcal{S}$ a spatial domain. Let $S$ denote the total number of spatial points at which the data are observed within the spatial domain of interest $\mathcal{S}$. We assume that the response variables are the joint result of temporal and spatial latent processes. Specifically, we assume that the $x_{ij}(s)$, conditionally on unobserved latent processes, are realizations of conditionally independent random variables $X_{ij}(s)$ with common parametric distribution with cumulative distribution function (cdf) $F(\cdot;\theta_j(s))$, with $\theta_j(s) \in \Theta$ an unknown parameter vector. Under this framework, the block maxima $Y_j(s) = \max_i \{X_{ij}(s)\}$ for each site $s$ have cdf: \begin{equation} \label{eq:cdf_max} \mbox{Pr}(Y_j(s) \le y) = F(y;\theta_j(s))^{n_j(s)}. \end{equation} With the estimation of (\ref{eq:cdf_max}) as the main goal, instead of relying on asymptotic arguments, we exploit a fully generative hierarchical model. Specifically, the base layer models the event magnitudes at each spatial location for each block. The nested layer, then, models the latent processes that drive the parameters of the base layer, in time and space.
Following a Bayesian approach, the last layer is associated with the prior distributions of the unknown parameters that control the inner latent processes. A graphical representation of the structure of the model is depicted in Figure \ref{fig:shmver}. \begin{figure} [ht] \centering \begin{tikzpicture} \node[latent](b){$\beta_{\lambda}$}; \node[latent, right=1.1 of b](l){$\lambda$(s)}; \node[obs, right=1.1 of l](n){$n_j$(s)}; \node[obs, right=1.1 of n](xij){$x_{ij}(s)$}; \node [latent, above=1.4 of b] (a) {$\beta_{\eta}$}; \node [latent, above=1.35 of l] (eta) {$\eta(s)$}; \node [latent, above=1.2 of n] (th) {$\theta_j(s)$}; \edge{b}{l} \edge{l}{n} \edge{a}{eta} \edge{eta}{th} \edge{th,n}{xij} \tikzset{rounded_box/.style={draw, inner sep=2mm, rectangle, rounded corners}}; \draw[thick, rounded corners] (4.4,-0.9) rectangle (6.7,1.1); \draw[thick, rounded corners] (2.7,-0.9) rectangle (6.7,2.8); \draw[thick, rounded corners] (0.7,-0.9) rectangle (6.7,2.8); \node at (5.6,3.2) { $j \in \{1,\dots,J\}$}; \node at (5.6,1.4) { $i \in \{1,\dots,n_j\}$}; \node at (1.5,3.2) { $s \in \cal{S}$}; \end{tikzpicture} \caption{Graphical representation of the spatial hierarchical model described in (\ref{eq:shmev_general}) and (\ref{eq:shmev_general_2}). } \label{fig:shmver} \end{figure} Let $n_j(s)$ be a realization of a stochastic process with conditional probability function $p\{\cdot;\lambda(s)\}$, where $\lambda(s)$ is an unknown parameter vector depending on the spatial location $s\in \cal S$. We further assume that $\theta_j(s)$ are realizations of a stochastic process with conditional probability density function $g\{\cdot ;\eta(s)\}$, where $\eta(s)$ is an unknown vector of parameters. 
The model can be written in hierarchical form, for each $s\in \cal S$, as \begin{equation} \label{eq:shmev_general} \begin{aligned} & x_{ij}(s) |n_j(s),\theta_j(s) \sim f\{x_{ij}(s);\theta_j(s)\},\,\, \mbox{for $i=1, \dots, n_j(s)$},\\ & \theta_j(s) | \eta(s) \sim g\{\theta_j(s);\eta(s)\}, \quad n_j(s)|\lambda(s) \sim p\{n_j(s); \lambda(s)\}. \\ \end{aligned} \end{equation} The latent spatial processes, $\eta(s)$ and $\lambda(s)$, can be driven by unknown parameters $\beta_\eta$ and $\beta_\lambda$ in a stochastic manner as \begin{equation} \label{eq:shmev_general_2} \begin{aligned} & \lambda(s) |\beta_\lambda \sim k\{\lambda(s);\beta_\lambda\}, \quad \eta(s)|\beta_\eta \sim m\{\eta(s);\beta_\eta\}, \end{aligned} \end{equation} where $k(\cdot; \beta_\lambda)$ and $m(\cdot;\beta_\eta)$ are suitable probability density functions. A simplified model assumes that $\eta(s)$ and $\lambda(s)$ are deterministic functions of the parameters $\beta_\eta$ and $\beta_\lambda$ and of spatial covariates $Z(s)$. The Bayesian representation of the model is completed by eliciting suitable prior distributions for the unknown parameters $\beta_\eta$ and $\beta_\lambda$. The main goal of extreme value analysis can be summarized as estimating the cdf in (\ref{eq:cdf_max}) (or one of its functionals), and this can be done, in our setting, by marginalizing out the variables $n_j(s)$ and $\theta_j(s)$, i.e. \begin{equation} \label{eq:h} \zeta\{y;\lambda(s),\eta(s)\} = \sum_{n \ge 0} \int_{\Theta} F\{y;\theta(s)\}^{n} \,p\{n;\lambda(s)\} \, g\{\theta(s);\eta(s)\} \,\text{d}\theta(s). \end{equation} \subsection{A specific formulation for modelling daily rainfall} \label{sHMEV_rain} Hereafter we specialize the approach described in the previous section to the case of annual maxima of daily rainfall accumulations over an area of interest.
Several parametric families have been employed to model rainfall accumulations and, while most of the approaches are based on goodness-of-fit considerations, some of them exploit physical knowledge of the phenomena. For example, \citet{stechmann} suggests that the distribution of daily rainfall should follow a gamma distribution, while \citet{wilsonw} suggests that its right tail should decay as a stretched exponential (i.e., Weibull) distribution. Consistently with these arguments, we assume that the magnitudes of daily rainfall accumulations at site $s$ follow a Weibull distribution with parameter vector $\theta_j(s) = (\gamma_j(s), \delta_j(s))$, where $\delta_j(s)>0$ denotes the scale parameter and $\gamma_j(s) > 0$ the shape parameter. To allow for inter-block variability, we assume that the latent variables $\gamma_j(s) \sim g_{\gamma}\{\gamma_j(s);\mu_{\gamma}(s), \sigma_{\gamma}\}$ and $\delta_j(s) \sim g_{\delta}\{\delta_j(s);\mu_{\delta}(s), \sigma_{\delta}\}$ are independent and follow a Gumbel distribution, a flexible yet parsimonious model allowing for possible asymmetry. We further assume that only the location parameters of the two Gumbel distributions are characterized by a spatial dependence and not their scale parameters, but more flexible alternatives are straightforward. For the location processes $\mu_{\gamma}(s)$ and $\mu_{\delta}(s)$ we assume a linear dependence on fixed spatial covariates $Z(s)$, i.e. \begin{equation} \mu_{\gamma}(s) =Z(s) \bm{\beta_{\gamma}}, \quad \mu_{\delta}(s) =Z(s) \bm{\beta_{\delta}}, \label{eq:regress} \end{equation} where $\bm{\beta_{\gamma}} = \big[\begin{matrix} \beta_{\gamma,0} & \beta_{\gamma,1} & \cdots & \beta_{\gamma,p} \end{matrix}\big]^T $, $\bm{\beta_{\delta}} = \big[\begin{matrix} \beta_{\delta,0} & \beta_{\delta,1} & \cdots & \beta_{\delta,p} \end{matrix}\big]^T$ and $p$ is the number of available spatial covariates.
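As a minimal generative sketch of these layers, the code below simulates the Gumbel-distributed per-block Weibull parameters with covariate-driven locations, together with binomially distributed wet-day counts through a logit link as specified below in (\ref{eq:logitlambda}); all coefficients, Gumbel scales and dimensions are illustrative assumptions, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)

S, J, N_t = 5, 20, 366                      # sites, blocks (years), block size
# Design matrix: intercept plus two standardized-like covariates.
Z = np.column_stack([np.ones(S), rng.uniform(size=S), rng.uniform(size=S)])

beta_gamma = np.array([0.67, 0.05, -0.03])  # illustrative coefficients
beta_delta = np.array([9.0, 1.5, -0.8])
beta_lam = np.array([-0.9, 0.3, 0.2])

mu_gamma = Z @ beta_gamma                   # location of the shape process
mu_delta = Z @ beta_delta                   # location of the scale process
lam = 1.0 / (1.0 + np.exp(-(Z @ beta_lam))) # inverse-logit occurrence prob.

x = {}                      # x[(s, j)]: daily magnitudes in block j at site s
for s in range(S):
    for j in range(J):
        gamma_sj = rng.gumbel(mu_gamma[s], 0.03)   # latent Weibull shape
        delta_sj = rng.gumbel(mu_delta[s], 1.0)    # latent Weibull scale
        n_sj = rng.binomial(N_t, lam[s])           # number of wet days
        x[(s, j)] = delta_sj * rng.weibull(gamma_sj, size=n_sj)

y = {k: v.max() for k, v in x.items()}      # block (annual) maxima
```

Simulating from the generative layers in this way also mirrors how posterior predictive draws of block maxima are obtained once the parameters are estimated.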
The vectors $\bm{\beta_{\gamma}}$ and $\bm{\beta_{\delta}}$ are independent and, in principle, we could also assume that they are of different sizes and related to different subsets of the variables in $Z(s)$. Although we focus on simple linear relations between the location processes and the spatial covariates, extensions to more flexible modelling structures are also straightforward, e.g. adopting basis expansions. The idea underlying this specification is that observations are characterized by a latent process defined by geographic covariates, while the residual spatial variability, not captured by covariates, is described by the random Gumbel variability. For $n_j(s)$ we assume a binomial distribution, with success probability $\lambda(s)$ and number of trials $N_t$ equal to the block size (e.g., $N_t = 366$ in our application to annual maximum daily rainfall). The decision to employ a simple parametric model is supported by the consideration that the distribution of $n_j$ mainly affects the distribution of extreme events through its average value \citep{hmev}. We assume that the rainfall occurrence is also affected by the geographical characteristics of the sites, as previously done for the location parameters of the Gumbel distributions. Consistently with (\ref{eq:regress}) we let \begin{equation} \text{logit}( \lambda(s)) = Z(s) \bm{\beta_{\lambda}}, \label{eq:logitlambda} \end{equation} where the function $\text{logit}(x) = \log\{x/(1-x)\}$ is employed to ensure $\lambda(s) \in (0,1)$. Specific examples of (\ref{eq:regress}) and (\ref{eq:logitlambda}) are discussed in Section \ref{sec4}. \subsection{Prior elicitation and posterior computation} \label{sec:prior} The introduction of a hierarchical model that describes the entire distribution of daily rainfall accumulations allows us to specify priors directly on the underlying distribution of the observed ordinary events, rather than on the distribution of block maxima.
This is a great advantage, because it avoids the difficulty of prescribing a prior directly on the shape parameter of the GEV distribution, to which it is difficult to attribute physical meaning. In defining the prior distributions for the parameters $\sigma_{\gamma}$, $\sigma_{\delta}$, $\bm{\beta_{\delta}}$, $\bm{\beta_{\gamma}}$, and $\bm{\beta_{\lambda}}$, we seek to harness information on the physical processes generating the data, avoiding, where possible, strongly uninformative priors. For example, \citet{sornette, Frisch_1997} provide geophysical motivations to expect the Weibull shape parameters $\gamma(s)$ to be centered around 2/3 for rain accumulation. Consistently with this, we fix the prior expectation of $\beta_{\gamma,0}$ to $2/3$, while the remaining terms in the vector $\bm{\beta_{\gamma}}$ are assigned zero prior expectation, the covariates having been standardized. As parametric distributions for the vectors $\bm{\beta_{\delta}}$, $\bm{\beta_{\gamma}}$ and $\bm{\beta_{\lambda}}$ we choose independent Gaussian distributions. For the latent Gumbel scale parameters $\sigma_{\gamma}$ and $\sigma_{\delta}$, quantifying the between-block variability of the Weibull parameters, we opt for independent inverse gamma distributions, with expectations equal to 25\% and 5\% of the respective location parameters ($\mu_{\delta}$ and $\mu_{\gamma}$). This choice reflects the expectation that the scale parameter varies across years more than the shape parameter \citep{hmev}. For the choice of the hyperparameters of the distributions we adopt an empirical Bayes approach \citep{Casella1985AnIT}, and a practical example is illustrated in Section \ref{sec4}.\\ Given the complex structure of the introduced model, the posterior distribution of the parameters is not available in closed form. Posterior approximation is then obtained using Markov chain Monte Carlo (MCMC) methods.
Specifically, we use the \textit{Hamiltonian Monte Carlo} (HMC) approach \citep{betancourt2018conceptual}, exploiting the flexibility of the Stan software \citep{carpenter2017}. In all the following examples, we run $n_c$ = 4 parallel chains, with $n_g$ = 2000 iterations in each chain, starting from different initial points. We discard the first half of each chain to account for the burn-in effect. The final sample on which we perform inference is therefore based on $B = n_c n_g/2 = 4000$ draws, obtained by merging the draws from the different chains, following standard practice \citep{gelman_bayes}. We can approximate (\ref{eq:h}), the cumulative distribution of the block maxima, at each location $s \in \cal S$ with a two-step procedure. We first compute the value of $\hat{\zeta}_s^{(b)}(y)$ at the generic $b$-th iteration for site $s$ as \begin{equation} \label{quant_sHMEV2} \hat{\zeta}^{(b)}_{s}(y) = \frac{1}{M_g} \sum_{j=1}^{M_g} F\{y;\theta_{j}^{(b)}(s)\}^{n_{j}^{(b)}(s)}, \end{equation} where $\theta_{j}^{(b)}(s) = \big(\gamma_{j}^{(b)}(s), \delta_{j}^{(b)}(s)\big)$ and $n_{j}^{(b)}(s)$, for $j = 1, \dots, M_g$, are drawn from the related posterior predictive distribution for site $s$ in each of the $M_g$ future blocks. Then, averaging over the $B$ draws from the posterior distribution, for each site $s$ we have \begin{equation} \label{quant_sHMEV1} \hat{\zeta}_{s}(y) = \frac{1}{B} \sum_{b=1}^{B} \hat{\zeta}^{(b)}_{s}(y). \end{equation} \section{Simulation study}\label{sec3} \subsection{Description} To assess the empirical performance of the proposed model we perform a simulation study. A data set composed of $S = 27$ sites has been constructed, with site locations simulated uniformly inside a square of unit side. Taking the bottom-left corner of the square as the reference point, the values on the x-axis and the y-axis represent the spatial coordinates, $z_1$ and $z_2$.
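The two-step estimator (\ref{quant_sHMEV2})--(\ref{quant_sHMEV1}) described above can be sketched as follows. The fake ``posterior'' draws stand in for the Stan output; their dictionary structure and numerical values are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def wei_cdf(y, gamma, delta):
    return 1.0 - np.exp(-((y / delta) ** gamma))

def zeta_hat(y, post_draws, M_g=100, N_t=366):
    """Two-step estimate of the block-maximum cdf at y for one site:
    an inner average over M_g simulated future blocks per posterior draw,
    then an outer average over the posterior draws."""
    vals = []
    for d in post_draws:
        gamma = rng.gumbel(d["mu_gamma"], d["sigma_gamma"], size=M_g)
        delta = rng.gumbel(d["mu_delta"], d["sigma_delta"], size=M_g)
        n = rng.binomial(N_t, d["lam"], size=M_g)
        vals.append(np.mean(wei_cdf(y, gamma, delta) ** n))
    return float(np.mean(vals))

# A stand-in for B posterior draws of the site-level parameters.
post = [{"mu_gamma": 0.70 + 0.01 * rng.standard_normal(),
         "sigma_gamma": 0.03,
         "mu_delta": 9.0 + 0.2 * rng.standard_normal(),
         "sigma_delta": 1.0,
         "lam": 0.3}
        for _ in range(200)]

z80 = zeta_hat(80.0, post)
print(z80)
```

Inverting this estimated cdf on a grid of $y$ values yields the quantile (return level) curves used in the comparisons below.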
Different synthetic data sets have been generated under three scenarios characterized by specific event magnitude distributions. In the first scenario (WEI) we assume a Weibull model in which the scale and shape parameters in each block follow Gumbel distributions. The location parameters of the Gumbel distributions are determined through a spatial trend, defined as: \begin{equation} \label{trend_sim} t(s) = \beta_0 + \beta_1 \ z_1(s) + \beta_2 \ z_2(s), \end{equation} with $\beta_0, \beta_1, \beta_2 \in \mathbb{R}$ and $s \in \cal{S}$. In the second scenario (WEI$_{gp}$) the model builds upon the previous one: in order to give some extra variability to the data, we add a Gaussian process with exponential correlation function to the regression trend defined in (\ref{trend_sim}). In the third scenario (GAM) we assume a Gamma distribution for the event magnitude, with the two parameters generated by a spatial trend defined according to Equation (\ref{trend_sim}). The number of events in each block is drawn from a binomial distribution with number of trials $N_t = 366$ and success probability determined through a logit transformation of the spatial trend defined in (\ref{trend_sim}), with the values of $\beta_0, \beta_1,$ and $\beta_2$ kept constant across the three scenarios. Notably, only the first scenario reflects the structure of the proposed spatial hierarchical model; the other two will be used to assess the robustness of the proposed formulation under model misspecification. Common to all scenarios, two independent data sets have been generated under the same specifications: the first, containing time series of length 20 years, is used as the training set, while the second, containing time series of length 100 years, is used as the test set. Using limited temporal records for the estimation is representative of many real geophysical data sets.
The sHMEV model assumes, under correct specification of (\ref{trend_sim}), \begin{equation*} \mu_{\gamma}(s) = \beta_{\gamma,0} + \beta_{\gamma,1} z_1(s) + \beta_{\gamma,2} z_2(s), \quad \mu_{\delta}(s) = \beta_{\delta,0} + \beta_{\delta,1} z_1(s) + \beta_{\delta,2} z_2(s),\quad \mbox{logit}(\lambda(s)) = \beta_{\lambda,0} + \beta_{\lambda,1} z_1(s) + \beta_{\lambda,2} z_2(s), \end{equation*} where $z_1(s)$ and $z_2(s)$ are the spatial coordinates for site $s$, with $s \in \cal{S}$. \subsection{Performance measures} In order to evaluate the performance of the proposed spatial hierarchical model we compare it with standard alternative methods. In particular, the methods used to benchmark sHMEV are Bayesian implementations of the classical generalized extreme value distribution (GEV) and the Bayesian hierarchical model (HMEV) described in \citet{hmev}. For both competing models we take informative priors. More precisely, for the HMEV model we follow the specification of \citet{hmev}, while for the GEV model the prior distribution for the shape parameter is centered around the value 0.114 and has a standard deviation of 0.125; these values have been determined from investigations of rainfall records at the global scale \citep{papax}. To compare the different methods, we evaluate the predictive accuracy in estimating the distribution of block maxima on the test set. We introduce different criteria to measure the predictive performance of the methods for quantiles above a given non-exceedance probability, computed marginally for each station.
Specifically, we consider the fractional squared error (FSE) defined by \begin{equation} \frac{1}{m_T} \sum_{j=1}^{M_x} \mathds{1}_{(\Tilde{T},\infty)} \{T_{js}\} \sqrt{\frac{1}{B} \sum_{b=1}^B \bigg(\frac{\hat{\zeta}_s^{(b)^{-1}}(p_{js})-y_{js}}{y_{js}}\bigg)^2}, \end{equation} where $y_{js}$ is the $j$-th maximum for site $s$, $\hat{\zeta}_s^{(b)^{-1}}(\cdot)$ is the quantile function of the specific model at the $b$-th MCMC iteration for site $s$, $T_{js}$ is the empirical return time of $y_{js}$, defined as $ T_{js} = (1 - p_{js})^{-1}$, with $p_{js} = \text{rank}(y_{js})/(M_x + 1)$, and $M_x$ is the number of blocks used to compute the FSE. In this section $M_x$ corresponds to the number of blocks in the test set, i.e. $M_x = 100$. The value $m_T$ represents the number of observations in the test set with empirical return time larger than $\Tilde{T}$, i.e. $m_T = \sum_{j=1}^{M_x} \mathds{1}_{(\Tilde{T},\infty)} \{T_{js}\}$. The FSE index represents an average measure of a standardized distance between model-estimated quantiles and empirical quantiles for return times larger than $\Tilde{T}$. In the following analysis we set $\Tilde{T} = 2$, consistent with the range of exceedance probability of interest in many practical applications.
To separately assess the accuracy and the variability of the extreme value quantile estimates, we employ two additional measures, namely the average bias and the average width of the $ 90\% $ posterior predictive credible intervals, defined respectively as \begin{equation} \label{eq:mbias} b_q = \frac{1}{m_T} \sum_{j=1}^{M_x} \mathds{1}_{(\Tilde{T},\infty)} \{ T_{js}\} \frac{1}{B} \sum_{b=1}^B \bigg(\frac{\hat{\zeta}_s^{(b)^{-1}}(p_{js})-y_{js}}{y_{js}}\bigg), \end{equation} \begin{equation} \label{eq:mwidth} \Delta_{q90} = \frac{1}{m_T} \sum_{j=1}^{M_x} \mathds{1}_{(\Tilde{T},\infty)} \{T_{js}\} \{\hat{q}_{95}(p_{js}) - \hat{q}_5(p_{js})\}, \end{equation} where the quantities $\hat{q}_{95}(p_{js})$ and $\hat{q}_{5}(p_{js})$ are the upper and lower bounds of the posterior credibility interval for the quantile $\hat{\zeta}_s^{(b)^{-1}}(p_{js})$. \subsection{Results} \begin{figure}[h] \centering \includegraphics[height = 0.25\textheight]{./figures/fse_sim.png} \caption{Fractional squared error computed for the three simulation scenarios.} \label{fig:fse_sim} \end{figure} \begin{figure}[!h] \centering \includegraphics[height = 0.25\textheight]{./figures/mbias_sim.png} \includegraphics[height = 0.25\textheight]{./figures/mwidth_sim.png} \caption{Mean bias (top row) and mean credibility interval width (bottom row) computed for the three simulation scenarios.} \label{fig:mbmw_sim} \end{figure} \begin{figure}[h] \centering \includegraphics[width=12cm]{./figures/quant_sim.png} \caption{Quantiles predicted for two sites by the GEV (green), HMEV (blue), and sHMEV (red) models based on data simulated in the first scenario (WEI). Solid lines show the expected value of the quantile for a given return time, while dashed lines represent the bounds of 90\% credibility intervals.
Circles represent the observed block maxima on the training set, while the black lines report the quantiles computed from the true sHMEV model.} \label{fig:quant_sim} \end{figure} Figure \ref{fig:fse_sim} shows the empirical distribution of the FSE over the sites, computed on the test set. The proposed sHMEV outperforms the competitors in all scenarios, although in the GAM scenario the interquartile variability of the FSE index across sites is higher than in the other two scenarios. In the WEI$_{gp}$ scenario the GEV model has some rather high FSE values, probably a consequence of substantial inter-block variability in the distribution of $x_{ij}(s)$. To gain a deeper understanding of this general behavior, Figure~\ref{fig:mbmw_sim} reports the results of the two criteria introduced in (\ref{eq:mbias}) and (\ref{eq:mwidth}). Generally, the best performance in terms of bias appears to be scenario dependent. In the WEI and GAM scenarios sHMEV tends to slightly underestimate the posterior predictive quantiles and HMEV obtains the best results. In the WEI$_{gp}$ scenario, instead, sHMEV shows the lowest bias. As for the width of the credibility intervals, sHMEV is consistently the most efficient procedure, producing narrower credibility intervals. These considerations suggest that the variability of the estimates affects the calculation of the FSE index more than the bias, since the latter is small for all three competitors. To visualize this global behavior, Figure \ref{fig:quant_sim} shows a representative example of the performance of the methods. Specifically, it reports the quantile versus return time plots obtained for the different methods applied to the data generated under the hypothesis of correct model specification (WEI) for two sites. The quantiles computed from the true model are overall satisfactorily captured by the proposed model.
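As a computational aside, the FSE, the mean bias (\ref{eq:mbias}) and the mean 90\% interval width (\ref{eq:mwidth}) can be computed for one site as in the sketch below; the matrix of posterior quantile draws and the synthetic inputs are illustrative assumptions (column $j$ is assumed to hold the model quantiles evaluated at the plotting position of $y_{js}$).

```python
import numpy as np

def fse_bias_width(y, q_draws, T_min=2.0):
    """FSE, mean bias and mean 90% interval width for one site.

    y: (M_x,) observed test-set maxima.
    q_draws: (B, M_x) posterior draws of the model quantile, column j
    evaluated at the plotting position of y[j] (assumed precomputed)."""
    M_x = y.size
    ranks = np.argsort(np.argsort(y)) + 1           # rank of each maximum
    p = ranks / (M_x + 1.0)                         # plotting positions
    T = 1.0 / (1.0 - p)                             # empirical return times
    keep = T > T_min                                # indicator 1{T > T_tilde}
    rel = (q_draws[:, keep] - y[keep]) / y[keep]    # relative errors
    fse = np.mean(np.sqrt(np.mean(rel ** 2, axis=0)))
    bias = np.mean(np.mean(rel, axis=0))
    width = np.mean(np.quantile(q_draws[:, keep], 0.95, axis=0)
                    - np.quantile(q_draws[:, keep], 0.05, axis=0))
    return fse, bias, width

# Synthetic check: quantile draws scattered around the observed maxima.
rng = np.random.default_rng(4)
y = np.sort(rng.gumbel(50.0, 10.0, size=100))
q_draws = y[None, :] * (1.0 + 0.05 * rng.standard_normal((500, 100)))
fse, bias, width = fse_bias_width(y, q_draws)
print(fse, bias, width)
```

In this synthetic check the draws are unbiased multiplicative perturbations of the observations, so the bias is near zero while the FSE reflects the injected 5\% relative spread.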
The sHMEV model yields quantile estimates with narrower credibility intervals, especially compared to the GEV model, and it is characterized by lighter tails than the other two methods. Note that the GEV model, despite the informative prior used, appears to be more sensitive to the largest observations in the training set and tends to overestimate the true quantile function, as noted for the site in the right panel of Figure \ref{fig:quant_sim}. This behavior is expected, given the limited length of the training samples used here, and is consistent with previous studies \citep{hmev}. For the site considered in the left panel of Figure \ref{fig:quant_sim} the maxima observed on the training set are quite different from the expected values on the test set. In this case sHMEV, exploiting also the information from the other sites (\textit{borrowing strength}), manages to obtain more accurate and less variable estimates than the competitors. \section{Application: rainfall in North Carolina}\label{sec4} \subsection{United States Historical Climatological Network data} The data analyzed in this section are extracted from the United States Historical Climatological Network (USHCN) data, which are freely available from the National Centers for Environmental Information (NCEI) of the National Oceanic and Atmospheric Administration (NOAA) \citep{nooa}. The data consist of daily precipitation records for all the available weather stations in North Carolina, for the time period 1870 through 2021, with a significant fraction of the available records being longer than 100 years. The region is characterized by heterogeneity in morphological and climatic features: it varies from the plain areas near the coastline to the hilly and mountainous zones in the west of the region. This allows us to test the proposed model under different climates and precipitation regimes.
Records with a non-blank quality flag were removed, as well as years with more than 30 missing daily observations. For the subsequent analysis we then select only stations with more than 73 years of data, for a total of 27 stations. We randomly chose 25 stations to fit our model and for each one we take the first 20 years. Figure \ref{fig:stazioni_nc} shows the station locations, with black points and triangles indicating stations that are included in the training set and blue points indicating stations that are included in the test set. Black triangles indicate three stations that will be used for illustrative purposes in the following. One of these three stations is on the coastline (Edenton), another one is in the middle of the region (Fayetteville) and the last one is in the mountainous area (Hendersonville). \begin{figure}[t] \centering \includegraphics[height = 0.25\textheight, width=0.86\textwidth]{./figures/map_nc.png} \caption{Map of North Carolina showing the sites and altitude in meters above sea level of the weather stations. The sites marked by black symbols were used to fit the model, the remaining to validate the model.} \label{fig:stazioni_nc} \end{figure} To assess the temporal dependence we plot the autocorrelations of the daily rainfall accumulation time series observed at each station. As the locations fail to show any significant temporal dependence, operations to render the events pseudo-independent (e.g., declustering) are not necessary. We tested for spatial dependence in the positive daily precipitation residuals using an exponential variogram with linear trend, and we found a low level of dependence between stations within 20 km. Since the two closest stations are 24 km apart, the choice to model the spatial dependence with a latent process driven by spatial covariates seems appropriate.
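The autocorrelation screening described above can be reproduced in a few lines; the synthetic wet/dry series below stands in for a station record (an assumption), and for an uncorrelated series the sample autocorrelations should stay within the approximate white-noise band $\pm 1.96/\sqrt{n}$.

```python
import numpy as np

def acf(x, max_lag=10):
    """Sample autocorrelation at lags 1..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.sum(x * x)
    return np.array([np.sum(x[:-k] * x[k:]) / denom
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(5)
# Synthetic daily series: wet/dry indicator times a Weibull magnitude.
n_days = 5000
series = rng.binomial(1, 0.3, size=n_days) * 10.0 * rng.weibull(0.7, size=n_days)

r = acf(series)
band = 1.96 / np.sqrt(n_days)      # approximate 95% white-noise band
print(r, band)
```

When the estimated autocorrelations exceed the band at short lags, a declustering step would be warranted before treating the events as independent.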
Figure \ref{fig:max}, in the Appendix, depicts the annual maxima of precipitation for all stations, where the color and the size of the circles are respectively proportional to the mean and the standard error of the annual maxima. The greatest intensity and variability of the maxima occur for stations on the coast and for some in the mountainous area. Therefore, in addition to latitude and longitude, we also include the altitude and the distance from the coast (in km) as geographical covariates in the model. These covariates, in fact, appear to affect both the scale and shape parameters of the Weibull distribution, estimated via the method of moments on the positive accumulations, as well as the number of annual rainy days. \subsection{Fit of the sHMEV model} We apply to the data the sHMEV model specified in Section \ref{sHMEV_rain}, using for estimation the first 20 years of the 25 training stations, i.e., $J = 20$ and $S=25$. According to the considerations made in the previous section, we define $\mu_{\gamma}(s)$ as \begin{equation} \mu_{\gamma}(s) = \beta_{\gamma,0} + \beta_{\gamma,1} \text{lat}(s) + \beta_{\gamma,2} \text{lon}(s) + \beta_{\gamma,3} \text{alt}(s) + \beta_{\gamma,4} \text{dist}(s), \end{equation} where lat($s$), lon($s$), alt($s$) and dist($s$) are respectively the latitude, longitude, altitude and distance from the coast of site $s$. All the covariates have been standardized. The functions for the parameters $\delta$ and $\lambda$ are defined similarly. For prior elicitation we follow what is reported in Section \ref{sec:prior}, avoiding the use of distributions that are particularly uninformative. We describe below the procedure followed for choosing the hyperparameters of the normal distributions for $\bm{\beta_{\delta}}$; similar reasoning was used for the selection of the other hyperparameters.
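The linear specification for $\mu_{\gamma}(s)$ amounts to a standardized design matrix times a coefficient vector. A minimal sketch, assuming the covariate columns are ordered as (lat, lon, alt, dist):

```python
import numpy as np

def linear_predictor(beta, covariates):
    """mu(s) = beta_0 + sum_k beta_k z_k(s) with standardized covariates,
    mirroring the linear specification above; the column order
    (lat, lon, alt, dist) is an assumption for illustration."""
    cov = np.asarray(covariates, float)
    z = (cov - cov.mean(axis=0)) / cov.std(axis=0)   # standardize per column
    X = np.column_stack([np.ones(len(z)), z])        # prepend intercept
    return X @ np.asarray(beta, float)
```

The same function serves for $\mu_{\delta}$ and $\mu_{\lambda}$ with their respective coefficient vectors.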
Recalling that the covariates were standardized, the value of the intercept $\beta_{\delta,0}$ refers to the average case, that is, the case in which all predictors take their average value. For this parameter we define a prior distribution centered on the mean, over the 25 stations, of the parameter $\delta$ estimated from the data by the method of moments. The variance of the distribution is chosen such that the probability that $\beta_{\delta,0}$ lies between two values considered reasonable is greater than 0.95. For the parameter $\beta_{\delta,1}$ we take a prior distribution centered on the least squares estimate from the simple regression of $\delta$ (again estimated for the 25 stations by the method of moments) on latitude. The variance was chosen as for $\beta_{\delta,0}$. We follow a similar procedure for $\beta_{\delta,2}$, $\beta_{\delta,3}$ and $\beta_{\delta,4}$.\\ After checking the convergence of the chains with several diagnostic techniques \citep{brooks_gelman}, we assess whether the parametric assumptions of the proposed model provide a good fit to the observed data. We perform posterior predictive checks, comparing relevant quantities, such as $y_i$ or $x_{ij}$, with their corresponding posterior predictive densities. The posterior predictive distributions are not analytically available, but it is straightforward to simulate new data from them. The posterior predictive distributions for the annual maxima, number of events, and daily rainfall magnitudes for two of the example stations are shown in Figure \ref{fig:ppd}, in the Appendix. Overall these posterior predictive distributions are satisfactorily captured by sHMEV, even if there are some discrepancies for the distribution of $n_j(s)$. However, as discussed in Section \ref{sHMEV_rain}, the distribution of $n_j(s)$ mainly affects the estimation of extreme events through its average value, which appears to be adequately captured by the model.
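The variance-elicitation rule described above (choose the prior variance so that an interval of "reasonable" values receives at least 0.95 probability mass) can be sketched as follows for a normal prior; the bisection for the normal quantile is used only to keep the sketch self-contained.

```python
import math

def prior_sd(center, lo, hi, prob=0.95):
    """Pick the standard deviation of a normal prior centered at `center`
    so that the interval (lo, hi) receives at least `prob` probability
    mass -- a sketch of the elicitation rule described above."""
    def ncdf(x):  # standard normal cdf via the error function
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    target = 0.5 + prob / 2.0            # e.g. 0.975 for prob = 0.95
    a, b = 0.0, 20.0                     # bisection for the z-quantile
    for _ in range(80):
        m = 0.5 * (a + b)
        if ncdf(m) < target:
            a = m
        else:
            b = m
    z = 0.5 * (a + b)                    # ~1.96 for prob = 0.95
    half_width = min(center - lo, hi - center)
    return half_width / z
```

Using the smaller half-width guarantees that both bounds lie at least $z$ standard deviations from the center, so the interval carries at least the requested mass.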
Note that a discrepancy appears for small values of daily rainfall magnitudes, due to the sensitivity of the measurement instruments. However, the right tail of the daily precipitation distribution, which plays an important role in determining extreme values, is well captured by sHMEV. \begin{table} [h] \caption{ Summary statistics for the posterior distributions of the latent process parameters for $\gamma$ and $\delta$, respectively the shape and scale parameters of the Weibull distribution, and $\lambda$, the success probability of the binomial distribution for the number of events. The posterior means and the associated 95\% credible intervals (parentheses) are displayed. } \label{tab:post_beta} \begin{center} \begin{tabular}{lccc} \toprule & $\gamma$ & $\delta$ & $\lambda$ \\ \midrule $\beta_0$ & $ 0.86 \ (0.85, \ 0.87)$ & $10.5 \ (10.3, \ 10.7)$ & $-0.93 \ (-0.94, \ -0.92)$ \\ $\beta_1$(lat) & $ 0.02 \ (0.01, \ 0.03)$ & $0.01 \ (-0.25,\ 0.26)$ & $0.14 \ (0.11,\ 0.17)$ \\ $\beta_2$(lon) & $ 0.02 \ (-0.04, \ 0.07)$ & $0.49 \ (-0.21, \ 1.17)$ & $-0.7 \ (-0.85, \ -0.55)$ \\ $\beta_3$(alt) &$-0.03 \ (-0.05, \ -0.01)$ & $0.01 \ (-0.46, \ 0.45)$ & $0.13 \ (0.11,\ 0.15)$ \\ $\beta_4$(coast) & $ 0.01 \ (-0.06, \ 0.07)$ & $-0.28 \ (-1.02,\ 0.43)$ & $-0.7 \ (-0.85, \ -0.55)$ \\ $\sigma$ & $ 0.09 \ (0.08, \ 0.10)$ & $2.19 \ (2.01, \ 2.38)$ \\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Results} \begin{figure}[b!] \centering \includegraphics[width=0.9\textwidth]{./figures/qmap_un.png} \caption{Maps of the predictive pointwise 25 and 50 year return level estimates for rainfall (mm) obtained from the sHMEV model. The top and bottom rows show the lower and upper bounds of the 90\% pointwise credible intervals, while the middle row shows the predictive pointwise posterior mean.
} \label{fig:mappe_return} \end{figure} A summary of the posterior distributions for $\bm{\beta_{\delta}}$, $\bm{\beta_{\gamma}}$, $\bm{\beta_{\lambda}}$, $\sigma_{\delta}$ and $\sigma_{\gamma}$ is given in Table \ref{tab:post_beta}. Overall, the distributions show low dispersion and the effect of the covariates on precipitation intensity and occurrence is in agreement with what was observed in the exploratory analysis. Our goal is to estimate the posterior distribution of the return level for every location in North Carolina. Since the cdf of the annual maxima is a function of the parameters $\gamma$, $\delta$ and $\lambda$, it is sufficient to estimate the posteriors of these processes. We divide the study region into a grid of points and consider the values of latitude, longitude, altitude and distance from the coast at each point. With the posterior distributions of $\bm{\beta_{\delta}}$, $\bm{\beta_{\gamma}}$ and $\bm{\beta_{\lambda}}$ and the values of the covariates it is immediate to compute $\mu_{\gamma}$, $\mu_{\delta}$ and $\mu_{\lambda}$ for each point of the map. Doing this for each iteration $b$, $b = 1, \dots, B=4000$, provides draws from the posterior distribution of $\lambda$, and by simulating from a Gumbel distribution we also obtain draws from the posterior distributions of $\gamma$ and $\delta$. Figure \ref{fig:mappe_post} in the Appendix shows the pointwise mean and pointwise interquartile range of the posterior draws over the $B$ iterations. The Weibull shape parameter, $\gamma$, is lower in the mountainous area and tends to increase with latitude. For the scale parameter $\delta$ a positive effect of longitude and distance from the coast is observed. Finally, the map for $\lambda$ shows that the number of annual rainy days, as might be expected, is higher in mountainous areas and decreases along the coast. The level of uncertainty for all three parameters is greatest at some mountain locations.
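Propagating the posterior draws of the regression coefficients to pointwise maps reduces to a matrix product followed by pointwise summaries. A minimal sketch, under assumed array layouts:

```python
import numpy as np

def posterior_mu_maps(beta_draws, X_grid):
    """Pointwise posterior summaries of mu over a grid: for each draw b and
    grid point g, mu[b, g] = X_grid[g] @ beta_draws[b].  beta_draws is a
    (B, p) array of MCMC draws, X_grid a (G, p) design matrix with a
    leading column of ones (layout is an assumption for illustration)."""
    mu = np.asarray(beta_draws) @ np.asarray(X_grid).T   # (B, G) draws of mu
    q25, q75 = np.quantile(mu, [0.25, 0.75], axis=0)
    return mu.mean(axis=0), q75 - q25                    # mean and IQR maps
```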
Figure \ref{fig:mappe_return} shows maps of the predictive pointwise posterior mean for the 25 and 50 year return levels, with pointwise 90\% credible intervals. Adopting a Bayesian methodology, indeed, allows us to obtain natural uncertainty estimates for the return levels, taking the pointwise 0.05 and 0.95 empirical quantiles of the return level draws. The return levels were calculated as described in Section \ref{sec:prior} using (\ref{quant_sHMEV2}) and (\ref{quant_sHMEV1}). We observe higher values for the mountainous area and for the southwest area, where there are few stations and the model is forced to extrapolate. In general, there also appears to be a slight decreasing trend with latitude. We now evaluate the performance of the proposed model in predicting extreme values for sites not in the training data, i.e., by extrapolating in space. For this purpose we calculate, for several return times, the quantiles for the two stations in the spatial test set, which were not used to estimate the model. Figure \ref{fig:valid_spat} reports the quantile versus return time plots for the stations of Statesville and Wilson. For Statesville the quantiles predicted by the sHMEV model match the observed annual maxima very well; for Wilson, instead, the model predictions are less accurate. However, for this site there are three rather high observed values that are quite difficult to predict. We also recall that the model was estimated using only the first 20 years of the time series, and exploiting all the available temporal information could lead to a more accurate spatial extrapolation.\\ \begin{figure} [b!] \centering \includegraphics[width=13cm]{./figures/qvalidation.png} \caption{Quantiles predicted for the two stations (Statesville and Wilson) taken as test set in space. Solid lines show the expected value of the quantile for a given return time, while dashed lines represent the bounds of 90\% credibility intervals.
Circles represent the observed maxima.} \label{fig:valid_spat} \end{figure} In the following we compare the extreme value quantiles obtained from the HMEV, GEV, and sHMEV models, evaluating the predictive accuracy in the estimation of the true distribution of annual maxima in the test set over time. As before, all three models were trained on just the first 20 years of record. Figure \ref{fig:fse_nc} in the Appendix shows the empirical distribution of the indexes FSE, $b_q$ and $\Delta_{q90}$ over the 27 stations. In terms of FSE, the sHMEV model provides the best results for the largest number of stations; however, it has rather high values for some sites. Looking specifically at these sites, they are mainly located on the coast. Probably, for these stations, which exhibit a peculiar behavior, the sharing of information among different sites results in excessive shrinkage toward the mean, leading to higher errors in the prediction of extreme values. In terms of estimation bias, sHMEV and, even more so, HMEV tend to underestimate quantiles. As regards variability, the sHMEV model is the most efficient, with a large difference with respect to the GEV model. These results are consistent with those obtained in the simulation study. Figure \ref{fig:quant_compare} in the Appendix reports the quantile versus return time plots of the different competing methods for two of the stations taken as examples. Model estimates differ, with sHMEV exhibiting, as previously observed in our simulation study, narrower credibility intervals with respect to the HMEV and GEV models. For the Fayetteville station the sHMEV model presents an overall good agreement with the empirical frequencies associated with the annual maxima extracted from the entire record. The GEV model, instead, is more influenced by the specific training set used and tends to overestimate the values. For the Edenton station none of the three models adequately captures the distribution of the annual maxima.
It should be noted that this station is on the coast, and is likely to experience very unusual and difficult-to-predict precipitation. \section{Discussion}\label{sec5} We introduced a spatial hierarchical Bayesian model for the analysis of environmental extreme values. The proposed approach, which extends and generalizes the hierarchical model of \citet{hmev}, avoids the asymptotic arguments of classical extreme value models and exploits most of the information contained in the data through the ordinary events. The spatial dependence has been induced in the parameters determining the ordinary events and, specifically, modeled through a linear combination of spatial covariates with unknown regression parameters. The Bayesian approach allowed the inclusion of valuable prior information that is often available in environmental modeling. The performance of the method is competitive with state-of-the-art methods and with the original proposal of \citet{hmev}, which does not exploit any spatial information. While we focused on simple formulations for the parameters related to the ordinary events, more complex formulations are also possible, including semiparametric specifications, for example exploiting spline regressions, or including random noise through a suitable stochastic process modeling the residual spatial dependence, e.g., Gaussian processes. The inclusion of spatial covariates sheds light on possible extensions. For example, one could also introduce time dependence through suitable functions of covariates or by including trend and seasonality. \vspace{0.8cm} \section*{Appendix} \setcounter{figure}{0} \renewcommand{\thefigure}{A\arabic{figure}} \begin{figure} [h!] \centering \includegraphics[height = 0.22\textheight]{./figures/max.jpg} \caption{Annual maxima of daily rainfall accumulations for each station.
The color and the size of the circles are respectively proportional to the mean and the standard error of the annual maxima.} \label{fig:max} \end{figure} \begin{figure} [h!] \centering \includegraphics[width=10cm]{./figures/ppdc7f.png} \includegraphics[width=10cm]{./figures/ppdc5f.png} \caption{ Posterior predictive distributions for the logarithm of the annual maximum daily rainfall accumulations (a), yearly number of events (b) and logarithm of non-zero daily rainfall events (c). The top rows show the results for the station of Hendersonville, the bottom ones for the station of Fayetteville. Dark blue lines show the density of the observed values (obtained by kernel density estimation), while the light blue lines show the kernel density estimates for 100 draws from the posterior predictive distributions. } \label{fig:ppd} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.99\textwidth]{./figures/gamma_map.png} \includegraphics[width=0.99\textwidth]{./figures/delta_map.png} \includegraphics[width=0.99\textwidth]{./figures/lambda_map.png} \caption{Maps of the posterior distributions of the Weibull parameters, $\gamma$ and $\delta$, and of the success probability of the binomial distribution, $\lambda$. The figures show the posterior mean (on the left) and the interquartile range (on the right) of the distributions.} \label{fig:mappe_post} \end{figure} \begin{figure}[h!] \centering \includegraphics[height = 0.29\textheight]{./figures/fse_nc.png} \caption{Empirical distributions over the 27 stations of the FSE, mean bias and mean credibility interval width.} \label{fig:fse_nc} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=13cm]{./figures/compare_nc.png} \caption{Quantiles predicted for the stations of Fayetteville and Edenton by the GEV (green), HMEV (blue), and sHMEV (red) models. Solid lines show the expected value of the quantile for a given return time, while dashed lines represent the bounds of 90\% credibility intervals.
Triangles represent the maxima on the training set, while the circles represent the maxima on the test set.} \label{fig:quant_compare} \end{figure} \nocite{*} \printbibliography[heading=bibintoc] \end{document}
\section{Introduction} \IEEEPARstart{P}{ower} system oscillations have been traditionally damped by employing the power system stabilizer (PSS) \cite{PSS1} and Flexible AC Transmission System (FACTS) \color{black} with Power Oscillation Damping (POD) controller \color{black} \cite{STATCOM1}. However, PSS might be ineffective in damping multiple inter-area oscillations \cite{ref:LQG}, while {\color{black} the damping performance of both PSS and FACTS-POD controllers may vary with their locations \cite{STATCOM1}}. More recently, {\color{black} growing renewable energy sources (RES) are integrated into the power grids. The inverter-based RES generators may interact with the damping torque, resulting in either improved or deteriorated system stability depending on different working conditions \cite{ref:quintero2014impact,ref:zhou2017damping}. Meanwhile, considering the flexible controls of inverter-based generations, many efforts have been made to develop effective POD controllers by exploiting the inverter-based generations \cite{ref:tang2015sliding, ref:offshore-wind, ref:multi-terminal-wind}.} Particularly, voltage source converters (VSCs), serving as the interface between RES and the AC power system, are regarded as new and promising candidates to improve AC system dynamic performance. Compared to the {\color{black} conventional } FACTS devices that can control only reactive power, VSCs connected with RES can control both active and reactive power independently, which provides more flexibility in control design \cite{ref:yazdani2010voltage}. {\color{black} Note that several FACTS devices equipped with energy storage systems can also control active power, but they share similar features in topology and functions with VSCs integrating RES \cite{ref:beza2014adaptive}. Therefore, we mainly focus on VSCs for POD in this paper, whereas it is believed that other FACTS devices similar to VSCs can be exploited in the same fashion.
} \color{black} {\color{black}Many control methods} have been proposed to enhance the system \color{black}POD \color{black} performance using VSCs, such as modal linear quadratic Gaussian (LQG) control \cite{ref:LQG}, the traditional PSS-based control \cite{ref:trinh2014methods}, mixed $H_2/H_{\infty}$ output feedback control \cite{ref:mixed-H}, Lyapunov control theory \cite{ref:multi-terminal-wind} and sliding mode robust control \cite{ref:tang2015sliding}. \color{black}Although the robust control methods (e.g., mixed $H_2/H_{\infty}$ feedback control) provide the designed controllers with some robustness to the uncertainties of parameters and noise, \color{black} all the aforementioned methods require accurate knowledge of network topology and system parameters when the controllers are designed offline. However, the network may suffer from undetected topology changes and the system parameters may vary in different operation conditions. Therefore, the effectiveness of controllers designed offline may deteriorate. Such problems are aggravated after the integration of RES. To address these issues, wide-area measurement system (WAMS)-based wide-area damping control (WADC) methods have been proposed in \cite{ref30}, \cite{ref31}. However, the method proposed in \cite{ref30} is not purely model-free, requiring the damping coefficients. Also, the method may not be directly implemented by VSCs.
{\color{black}Besides, the method proposed in \cite{ref31} may not optimize the damping performance of multiple modes simultaneously.} Several online WADC methods have been discussed in \cite{ref:zhang2016review}, which nevertheless also points out the gap between online identification and control, as the reduced-order model obtained from system identification methods may not be the physical model and cannot be related to the real system state variables, making the control design challenging. In this paper, we propose a novel model-free WADC method using VSCs, {\color{black}which can achieve full decoupling of modes such that the damping performance of multiple inter-area modes can be optimized} simultaneously as the system \color{black} steady-state \color{black} operating condition varies. Specifically, we first integrate the model of VSCs into the dynamic AC power system model in the form of a state-space representation. Next, a perturbation approach is designed to estimate the state matrix $A$ and the input matrix $B$ only from PMU data by exploiting the regression theorem of the multivariate Ornstein-Uhlenbeck process \cite{regression-theorem}. It should be noted that, unlike other measurement-based mode identification methods (e.g., system identification methods), the estimated $A$ and $B$ {\color{black} correspond to the linearization of the \textit{physical model}, which possesses clear physical interpretations} and can be directly utilized in control design. Therefore, the {\color{black} modal linear quadratic regulator (MLQR)}-based control method is applied to the estimated $A$ and $B$ to design a wide-area damping controller using VSCs. To the best of our knowledge, the proposed method seems to be \textit{the first} model-free WADC strategy that can achieve full decoupling of modes and can target all critical modes simultaneously in various operation conditions and network topologies.
The contributions of the paper are summarized below: \begin{itemize} \item An entirely model-free method is designed to estimate the system state matrix $A$ and the input matrix $B$ in the state-space representation of the \textit{true physical} model of a power system with VSCs. Compared to \cite{ref:sheng2019online}, additional model formulation and mathematical manipulation are conducted for the integration of VSCs. A perturbation approach is also designed to separate $A$ and $B$ from the closed-loop system state matrix. \item Based on the estimated matrices, an MLQR-based WADC method, requiring no knowledge of the network model, is proposed to update the control signals of VSCs so that multiple critical inter-area modes can be damped simultaneously without affecting others as the system operating condition varies, as opposed to the model-based WADC methods \cite{ref:trinh2014methods, ref:mixed-H, ref:LQG,ref:multi-terminal-wind,ref:tang2015sliding}. \end{itemize} The rest of the paper is organized as follows. Section II presents the state-space representation of the AC system with grid-connected VSCs. Section III describes the WAMS-based estimation strategy for the system state matrix $A$ and the input matrix $B$ in the state-space representation. Section IV presents the MLQR-based WADC based on the estimated matrices using VSCs. Section V validates the proposed estimation and control strategy through comprehensive numerical studies. Section VI summarizes the conclusions. \section{System modeling} \subsection{The Stochastic Model of AC power system} In this paper, we consider the small-signal electromechanical stability around the steady state, in which the rotor angle dynamics dominate.
Therefore, the classical generator dynamic model is considered: \color{black} \begin{equation} \begin{aligned} \mathop {\bm{\dot \delta }} &= {\omega _0}\left( {{\bm{\omega }} - \bm{1}} \right)\\ {{M}}\mathop {\bm{\dot \omega }} &= \bm{P_M} - \bm{P_E} - {{D}}\left( {{\bm{\omega }} - \bm{1}} \right) \end{aligned} \label{eq1} \end{equation} \color{black} with \begin{equation} {P_{ei}} = \sum\limits_{j = 1}^{{N_g}} {{E_i}{E_j}\left( {{G_{ij}}\cos \left( {{\delta _i} - {\delta _j}} \right) + {B_{ij}}\sin \left( {{\delta _i} - {\delta _j}} \right)} \right)} \nonumber \end{equation} where ${\bm{{\delta }}} = {\left[ {{\delta _1},{\delta _2},...,{\delta _{{N_g}}}} \right]^T}$ is the vector of generator rotor angles, ${\bm{\omega }} = {\left[ {{\omega _1},{\omega _2},...,{\omega _{{N_g}}}} \right]^T}$ is the vector of generator rotor speeds and ${{{\omega }}_{{0}}}$ is the base value, ${{M}} = diag\left( {\left[ {{M_1},{M_2},...,{M_{{N_g}}}} \right]} \right)$ is the inertia coefficient matrix, ${{D}} = diag\left( {\left[ {{D_1},{D_2},...,{D_{{N_g}}}} \right]} \right)$ is the damping coefficient matrix, $\bm{P_M} = {\left[ {{P_m}_1,{P_m}_2,...,{P_m}_{{N_g}}} \right]^T}$ is the vector of generators' mechanical power input, $\bm{P_E} = {\left[ {{P_e}_1,{P_e}_2,...,{P_e}_{{N_g}}} \right]^T}$ is the vector of generators' electromagnetic power output, ${{E_i}}$ is the constant voltage behind the transient reactance of the ${i}$th generator, ${G_{ij}}+{j}{B_{ij}}$ is the $\left( {i},{j} \right)$th entry of the reduced system admittance matrix with only generator buses, and $N_g$ is the number of generators. Similar to \cite{load-gaussian-model,load-stochastics}, we make a common assumption that load active powers are perturbed by independent Gaussian noise from their base loadings. 
As previously shown in \cite{load-stochastics}, the load variations can be described by random perturbations at the diagonal elements of the reduced admittance matrix $Y\left( {i,i} \right) = {Y_{ii}}\left( {1 + {\sigma _i}{\xi _i}} \right)\angle {\phi _{ii}}$, where $i$ is the generator number, ${\xi _i}$ is a standard Gaussian variable and ${\sigma _i}^2$ describes the intensity of the fluctuations. Hence, the power system dynamic model considering the stochasticity of loads can be described by\cite{ref:wang2017pmu}: \color{black} \begin{equation} \begin{aligned} \mathop {\bm{\dot \delta }} &= {\omega _0}\left( {{\bm{\omega }} - \bm{1}} \right)\\ {{M}}\mathop {\bm{\dot \omega }} &= \bm{P_M} - \bm{P_E} - {{D}}\left( {{\bm{\omega }} - \bm{1}} \right) - {{{E}}^2}{G\Sigma}\bm{\xi} \end{aligned} \label{eq2} \end{equation} \color{black} where ${{E}} = diag\left( {\left[ {{E_1},{E_2},...,{E_{{N_g}}}} \right]} \right)$ , ${{G}} = diag\left( {\left[ {{G_{11}},{G_{22}},...,{G_{{N_g}{N_g}}}} \right]} \right)$ , ${{\Sigma }} = diag\left( {\left[ {{\sigma _1},{\sigma _2},...,{\sigma _{{N_g}}}} \right]} \right)$ and ${\bm{\xi }} =\left[ {{\xi _1},{\xi _2},...,{\xi _{{N_g}}}} \right]^T$. 
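As an illustration, the stochastic model (\ref{eq2}) can be integrated with a simple Euler-Maruyama scheme; the sketch below uses illustrative (not paper-specific) parameter values, with all quantities as per-generator vectors except the reduced conductance/susceptance matrices.

```python
import numpy as np

def simulate_swing(delta0, omega0, Pm, M, D, E, G, B, sigma,
                   ws=2*np.pi*60, dt=0.005, steps=2000, seed=0):
    """Euler-Maruyama sketch of the stochastic swing model (2): load
    fluctuations enter as -E_i^2 G_ii sigma_i dW_i with unit-variance
    white noise, as in the reduced-admittance load perturbation above."""
    rng = np.random.default_rng(seed)
    d = np.array(delta0, float)
    w = np.array(omega0, float)
    Gd = np.diag(G)                       # self-conductances G_ii
    out = np.empty((steps, len(d)))
    for k in range(steps):
        dd = d[:, None] - d[None, :]
        Pe = (np.outer(E, E) * (G * np.cos(dd) + B * np.sin(dd))).sum(axis=1)
        dW = np.sqrt(dt) * rng.standard_normal(len(d))
        d = d + dt * ws * (w - 1.0)
        w = w + (dt * (Pm - Pe - D * (w - 1.0)) - E**2 * Gd * sigma * dW) / M
        out[k] = w
    return out
```

With $\sigma_i = 0$ and the system initialized at an equilibrium, the trajectory stays at the steady state, which provides a quick sanity check of the implementation.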
Linearizing (\ref {eq2}) around the steady-state operating point gives \begin{equation} \begin{array}{l} {{\Delta }}\mathop {\bm{\dot \delta }} = {\omega _0}{{\Delta \bm{\omega} }}\\ {{{M}}{\Delta }}\mathop {\bm{\dot \omega }} = -\Delta{\bm{P_E}}- {{{D}}{\Delta \bm{\omega} }} - {{{E}}^2}{G\Sigma}\bm{\xi} \end{array} \label{eq3} \end{equation} which can be further represented in the compact form by substituting $\Delta{\bm{P_E}}=\frac{\partial{\bm{P_E}}}{\partial{\bm{\delta}}}{\Delta{\bm{\delta}}}$: \begin{equation} \mathop { \bm{\dot x}} = {{{A}_{o}}\bm{x}} + {{{S}}\bm{\xi }} \label{eq4} \end{equation} where ${\bm{x}} = {\left[ {\Delta}\bm{{ \delta }},{\Delta}\bm{{ \omega }} \right]^T}$, ${{S}} = {\left[ {{{0}}, - {{{M}}^{ - 1}}{{{E}}^2}{{G\Sigma }}} \right]^T}$, ${{{A}}_{{o}}}=\left[\begin{array}{cc}{0}&\omega_0{I_{N_g}}\\ -M^{-1}{\frac{{\partial {{\bm{P}}_{\bm{E}}}}}{{\partial {\bm{\delta }}}}}&-M^{-1}D\end{array}\right]$ is the open-loop system state matrix, and ${{{I}}_{{N_g}}}$ is an identity matrix of size ${N_g}$. Particularly, $\bm{x}$ is a vector Ornstein-Uhlenbeck process, which is Gaussian and Markovian. $A_{o}$ carries significant information about the system dynamics and stability. For instance, the small-signal stability analysis of a power system is based on analyzing the eigenvalues and eigenvectors of $A_{o}$. Conventionally, $A_{o}$ can be easily calculated if it is assumed that the system dynamic model and the network topology are known, and the system states from the state estimator are accurate. Nevertheless, such assumption may not be true in practice due to network topology errors, bad data, etc. Therefore, an online data-driven method was proposed in \cite{ref:sheng2019online} to estimate the matrix ${{A}_{o}}$, which has been shown to be accurate, robust to measurement noise, and adaptive to topology change. 
However, the formulation in \cite{ref:sheng2019online} does not consider the presence of VSCs, which is nonetheless essential given the increasing number of VSCs due to the growing penetration of RES. The integration of VSCs, on the other hand, makes the matrix estimation challenging, as the impacts of VSCs have to be carefully expressed in the formulation of (\ref{eq4}). An approach to separate the open-loop system state matrix $A$ from the input matrix $B$ that quantifies the impacts of VSCs is also needed for the sake of control design. Details will be discussed in Section \ref{sectionmatrixestimation}. Lastly, although the 2nd-order generator model is assumed in the proposed methodology, higher-order generator models with detailed control are used in the simulation study to show the feasibility of the proposed method in practice. \subsection{VSC model} Since we are interested in the electromechanical stability around the steady state, the VSCs are modelled as power sinks and sources, similar to the approach adopted in \cite{ref:multi-terminal-wind}, \color{black}which is based on the approximation that the internal dynamics of VSCs are much faster than the electromechanical dynamics, such that VSCs respond almost instantaneously to power reference changes \cite{ref:preece2012probabilistic}. \color{black} In order to provide damping support from VSCs, supplementary active and reactive power signals should be added to the steady-state references \cite{ref:trinh2014methods}.
Hence, {\color{black}the power injections from the buses where VSCs are connected} are \begin{eqnarray} {{\bm{P}}_{\bm{v}}}& = &{{\bm{P}}_{{\bm{vs}}}} + {{\bm{P}}_{{\bm{vd}}}}\nonumber\\ {{\bm{Q}}_{\bm{v}}}& =& {{\bm{Q}}_{{\bm{vs}}}} + {{\bm{Q}}_{{\bm{vd}}}}\label{eq5} \end{eqnarray} where ${{\bm{P}}_{\bm{v}}} = {\left[ {{P_{v1}},{P_{v2}}, \cdots ,{P_{v{N_v}}}} \right]^T}$ represents the real-time active power references of the VSC; ${{\bm{P}}_{\bm{vs}}} = {\left[ {{P_{vs_1}},{P_{vs_2}} \cdots {P_{vs_{N_v}}}} \right]^T}$ denotes the steady-state active power references; ${{\bm{P}}_{\bm{vd}}} = {\left[ {{P_{vd_1}},{P_{vd_2}} \cdots {P_{vd_{N_v}}}} \right]^T}$ are the references for the supplementary active power generated by the damping controller. The reactive powers ${{\bm{Q}}_{\bm{v}}}$, ${{\bm{Q}}_{\bm{vs}}}$ and ${{\bm{Q}}_{\bm{vd}}}$ are defined in a similar way. \color{black}It should be noted that the intermittency of RES is not considered in this paper similar to \cite{ ref:mokhtari2014toward, ref:singh2014interarea}, because the time scale of RES intermittency (a few minutes \cite{ref:brouwer2014impacts}) is much larger than that of electromechanical dynamics (0.5s-5s \cite{ref:machowski2020power}). In addition, \cite{ref:multi-terminal-wind} shows that the fluctuation of the output power of RES may be quickly smoothed out through proper power control (e.g., pitch angle control for wind turbines). \color{black} In order to provide damping, the frequency variations of generators are typically used as the feedback signals \cite{ref:multi-terminal-wind} and we have \color{black} \begin{equation} \begin{array}{l} {{\bm{P}}_{{\bm{vd}}}} = {{{K}}_{{1}}}\left( {{\bm{\omega }} - \bm{1}} \right)\\ {{\bm{Q}}_{{\bm{vd}}}} = {{K}_2}\left( {{\bm{\omega }} - \bm{1}} \right) \end{array} \label{eq6} \end{equation} \color{black} where ${{K}_{{1}}}$ and ${{K}_{{2}}}$ are damping coefficients of the active and reactive power control of VSCs, respectively. 
Linearizing (\ref {eq5}) and (\ref {eq6}) around the steady state gives \begin{equation} \begin{array}{l} {{\Delta }}{{\bm{P}}_{\bm{v}}} = {{\Delta }}{{\bm{P}}_{{\bm{vd}}}} = {{K}_{{1}}}{{\Delta \bm{\omega} }}\\ {{\Delta }}{{\bm{Q}}_{\bm{v}}} = {{\Delta }}{{\bm{Q}}_{{\bm{vd}}}} = {{K}_{{2}}}{{\Delta \bm{\omega} }} \end{array} \label{eq7} \end{equation} \subsection{Integration of VSCs into AC system} By applying Kron reduction \cite{kron-reduction}, we can eliminate all buses except the ${N_g}$ generator buses and the ${N_v}$ VSC buses as shown in Fig.~\ref{fig1}. The {\color{black}active power injection from the bus where the $i$th generator is connected} can be calculated by \begin{equation} \begin{array}{l} {P_{ei}} = \sum\limits_{j = 1}^{{N_g}} {{E_i}{E_j}\left( {{G_{GGij}}\cos \left( {{\delta _i} - {\delta _j}} \right) + {B_{GGij}}\sin \left( {{\delta _i} - {\delta _j}} \right)} \right)} \\ + \sum\limits_{j = 1}^{{N_v}} {{E_i}{V_j}\left( {{G_{GVij}}\cos \left( {{\delta _i} - {\theta _j}} \right) + {B_{GVij}}\sin \left( {{\delta _i} - {\theta _j}} \right)} \right)} \end{array}\label{eq8} \end{equation} where ${V_j}$ and ${\theta _j}$ are the voltage magnitude and voltage angle of VSC bus ${j}$. ${G_{GGij}}$ and ${B_{GGij}}$ are the equivalent conductance and susceptance between generator buses ${i}$ and ${j}$. ${G_{GVij}}$ and ${B_{GVij}}$ are the equivalent conductance and susceptance between generator bus ${i}$ and VSC bus ${j}$. 
Similarly, the {\color{black}active and reactive power injections from the bus where the $i$th VSC is connected} can be expressed by \begin{equation} \begin{array}{l} {P_{vi}} = \sum\limits_{j = 1}^{{N_g}} {{V_i}{E_j}\left( {{G_{VGij}}\cos \left( {{\theta _i} - {\delta _j}} \right) + {B_{VGij}}\sin \left( {{\theta _i} - {\delta _j}} \right)} \right)} \\ + \sum\limits_{j = 1}^{{N_v}} {{V_i}{V_j}\left( {{G_{VVij}}\cos \left( {{\theta _i} - {\theta _j}} \right) + {B_{VVij}}\sin \left( {{\theta _i} - {\theta _j}} \right)} \right)} \\ {Q_{vi}} = \sum\limits_{j = 1}^{{N_g}} {{V_i}{E_j}\left( {{G_{VGij}}\sin \left( {{\theta _i} - {\delta _j}} \right) - {B_{VGij}}\cos \left( {{\theta _i} - {\delta _j}} \right)} \right)} \\ + \sum\limits_{j = 1}^{{N_v}} {{V_i}{V_j}\left( {{G_{VVij}}\sin \left( {{\theta _i} - {\theta _j}} \right) - {B_{VVij}}\cos \left( {{\theta _i} - {\theta _j}} \right)} \right)} \end{array}\label{eq9} \end{equation} where ${G_{VGij}}$ and ${B_{VGij}}$ are the equivalent conductance and susceptance between VSC bus ${i}$ and generator bus ${j}$. ${G_{VVij}}$ and ${B_{VVij}}$ are the equivalent conductance and susceptance between VSC bus ${i}$ and VSC bus ${j}$. 
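As an illustrative aside (not part of the paper's implementation), the Kron reduction invoked above amounts to a Schur complement of the bus admittance matrix; the following Python sketch assumes a real-valued admittance matrix for simplicity, whereas a real network would use complex admittances:

```python
import numpy as np

def kron_reduce(Y, retained):
    """Eliminate all buses not in `retained` from the admittance matrix Y
    via the Schur complement (Kron reduction)."""
    n = Y.shape[0]
    eliminated = [i for i in range(n) if i not in retained]
    Y_rr = Y[np.ix_(retained, retained)]
    Y_re = Y[np.ix_(retained, eliminated)]
    Y_er = Y[np.ix_(eliminated, retained)]
    Y_ee = Y[np.ix_(eliminated, eliminated)]
    # The reduced matrix preserves the terminal behaviour at retained buses.
    return Y_rr - Y_re @ np.linalg.solve(Y_ee, Y_er)

# Two unit-admittance lines in series (buses 0-1-2); eliminating the middle
# bus should yield the series equivalent of 0.5 between buses 0 and 2.
Y = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
Y_red = kron_reduce(Y, [0, 2])
```

The same reduction, applied to the full network, retains only the $N_g$ generator buses and $N_v$ VSC buses used in the equations above.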
Linearizing the power injections of generators and VSCs (\ref{eq8})-(\ref{eq9}) around the steady state yields \begin{equation} \left[ \begin{array}{c} {{\Delta }}{\bm{P_E}}\\ {{\Delta }}{{\bm{P}}_{\bm{v}}}\\ {{\Delta }}{{\bm{Q}}_{\bm{v}}} \end{array} \right] = \left[ \begin{array}{ccc} \frac{{\partial {\bm{P_E}}}}{{\partial {\bm{\delta }}}} & \frac{{\partial {\bm{P_E}}}}{{\partial {\bm{\theta }}}} & \frac{{\partial {\bm{P_E}}}}{{\partial {\bm{V}}}}\\ \frac{{\partial {{\bm{P}}_{\bm{v}}}}}{{\partial {\bm{\delta }}}} & \frac{{\partial {{\bm{P}}_{\bm{v}}}}}{{\partial {\bm{\theta }}}} & \frac{{\partial {{\bm{P}}_{\bm{v}}}}}{{\partial {\bm{V}}}}\\ \frac{{\partial {{\bm{Q}}_{\bm{v}}}}}{{\partial {\bm{\delta }}}} & \frac{{\partial {{\bm{Q}}_{\bm{v}}}}}{{\partial {\bm{\theta }}}} & \frac{{\partial {{\bm{Q}}_{\bm{v}}}}}{{\partial {\bm{V}}}} \end{array} \right]\left[ \begin{array}{c} {{\Delta \bm{\delta} }}\\ {{\Delta \bm{\theta} }}\\ {{\Delta \bm{V}}} \end{array} \right]\label{eq10} \end{equation} Since the system is assumed to be in a normal operating state such that the Jacobian matrix is well-conditioned \cite{well-conditioned}, we can express ${{\Delta }}{\bm{P_E}}$ in terms of ${{\Delta \bm{\delta} }}$, $\Delta \bm{P_v}$ and $\Delta \bm{Q_v}$: \begin{equation} {{\Delta }}{\bm{P_E}} = {{{A}}_{{1}}}{{\Delta \bm{\delta} }} + {{{A}}_{{2}}}{{\Delta }}{{\bm{P}}_{\bm{v}}} + {{{A}}_{{3}}}{{\Delta }}{{\bm{Q}}_{\bm{v}}} \label{eq11} \end{equation} The detailed
expression of ${A_1}$, ${A_2}$ and ${A_3}$ can be found in Appendix \ref{Aderivation}. Substituting the expression of ${{\Delta }}{\bm{P_E}}$ from (\ref{eq11}) into (\ref{eq3}) leads to \begin{equation} \begin{aligned} \left[ \begin{array}{c} {{\Delta }}{\bm{\dot \delta }} \\ {{\Delta }}{\bm{\dot \omega }} \end{array} \right] &= \left[ \begin{array}{cc} {0}&{\omega _0}{{{I}}_{{N_g}}}\\ - {{{M}}^{ - 1}}{{{A}}_{{1}}}&- {{{M}}^{ - 1}}{{D}} \end{array} \right]\left[ \begin{array}{c} {{\Delta \bm{\delta} }}\\ {{\Delta \bm{\omega} }} \end{array} \right]\\ &+ \left[ \begin{array}{cc} {0}&{0}\\ - {{{M}}^{ - 1}}{{{A}}_{{2}}}&- {{{M}}^{ - 1}}{{{A}}_{{3}}} \end{array} \right]\left[ \begin{array}{c} {{\Delta }}{{\bm{P}}_{\bm{v}}}\\ {{\Delta }}{{\bm{Q}}_{\bm{v}}} \end{array} \right] \\ &+ \left[ \begin{array}{c} {0}\\ - {{{M}}^{ - 1}}{{{E}}^2}{{G\Sigma }} \end{array} \right]{\bm{\xi }} \end{aligned} \label{eq12} \end{equation} It can be further written in the compact form \begin{equation} \bm{\dot x} = {{{A}}\bm{x}} + {{{B}}\bm{u}} + {{{S}}\bm{\xi }} \label{eq13} \end{equation} where ${\bm{u}} = {\left[ {{{\Delta }}{{\bm{P}}_{\bm{v}}},{{\Delta }}{{\bm{Q}}_{\bm{v}}}} \right]^T}$, ${{A}} =\left[\begin{array}{cc} {0}&{\omega _0}{{{I}}_{{N_g}}}\\ \bar{A}_1&- {{{M}}^{ - 1}}{{D}} \end{array} \right]$ and ${{B}} =\left[\begin{array}{cc} {0}&{0}\\ \bar{A}_2&\bar{A}_3 \end{array} \right]$, with $\bar{A}_1 = - {{{M}}^{ - 1}}{{{A}}_{{1}}}$, $\bar{A}_2 = - {{{M}}^{ - 1}}{{{A}}_{{2}}}$ and $\bar{A}_3 = - {{{M}}^{ - 1}}{{{A}}_{{3}}}$. In order to design the damping controller, accurate knowledge of ${A}$ and ${B}$ in (\ref{eq13}) is needed. Substituting (\ref{eq7}) into (\ref{eq13}), $B$ can be embedded in the system state matrix as follows: \begin{equation} \bm{\dot x} = {{{A}}_{{c}}}{\bm{x}} + {{S\bm{\xi} }} \label{eq14} \end{equation} where ${{{A}}_{{c}}}=\left[\begin{array}{cc}{0}&\omega_0{I_{N_g}}\\ \bar{A}_1&\bar{A}_2{K_1}+\bar{A}_3{K_2}-M^{-1}D\end{array} \right]$. Compared with the open-loop matrix ${{A}_{o}}$ in (\ref{eq4}) without VSCs, the closed-loop system state matrix ${{A}_{c}}$ includes the impact of VSCs, as reflected by the new terms $\bar{A}_2{K_1}$ and $\bar{A}_3{K_2}$. Since the work in \cite{ref:trinh2014methods, ref:LQG} has shown that the impact of reactive power on damping control is much smaller than that of active power, reactive power damping control is not considered in the rest of the paper, i.e., ${{\Delta }}{{\bm{Q}}_{\bm{v}}}$ is set to zero by taking ${K_2}$ to be the zero matrix.
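The block structure of the closed-loop matrix can be checked numerically. The following Python sketch uses small random matrices (purely illustrative, with $K_2=0$ as adopted here) to verify that substituting the feedback $\Delta\bm{P_v}=K_1\Delta\bm{\omega}$ into $\bm{\dot x}=A\bm{x}+B\bm{u}$ reproduces the lower-right block $\bar{A}_2K_1-M^{-1}D$ of $A_c$ in (\ref{eq14}):

```python
import numpy as np

rng = np.random.default_rng(0)
Ng, Nv, w0 = 3, 2, 2 * np.pi * 60
M = np.diag(rng.uniform(5, 10, Ng))          # toy inertia matrix
D = np.diag(rng.uniform(1, 2, Ng))           # toy damping matrix
A1 = rng.normal(size=(Ng, Ng))               # sensitivity dP_E/d(delta)
A2 = rng.normal(size=(Ng, Nv))               # sensitivity to VSC active power
K1 = rng.normal(size=(Nv, Ng))               # VSC damping coefficients
Minv = np.linalg.inv(M)
A1b, A2b = -Minv @ A1, -Minv @ A2            # \bar{A}_1, \bar{A}_2

A = np.block([[np.zeros((Ng, Ng)), w0 * np.eye(Ng)],
              [A1b, -Minv @ D]])
B = np.vstack([np.zeros((Ng, Nv)), A2b])     # only the active-power channel
K_fb = np.hstack([np.zeros((Nv, Ng)), K1])   # u = K1 * d(omega) = K_fb @ x
Ac = A + B @ K_fb                            # closed-loop state matrix
```

Here the lower-right $N_g\times N_g$ block of `Ac` equals $\bar{A}_2K_1-M^{-1}D$, matching the closed-loop expression in the text.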
As a result, $A$ remains the same as in (\ref{eq13}): \begin{equation} {{A}} =\left[\begin{array}{cc} {0}&{\omega _0}{{{I}}_{{N_g}}}\\ \bar{{{A}}_{{1}}}&- {{{M}}^{ - 1}}{{D}} \end{array} \right]\label{eq:A} \end{equation} ${B}$ becomes: \begin{equation} {{B}} =\left[\begin{array}{c} {{0}}\\ \bar{{{A}}_{{2}}}\end{array} \right]\label{eq:B} \end{equation} $A_c$ becomes: \begin{equation} {{{A}}_{{c}}}=\left[\begin{array}{cc}{0}&\omega_0{I_{N_g}}\\ \bar{A_1}&\bar{A_2}{K_1}-M^{-1}D\end{array} \right]\label{eq:Acl} \end{equation} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{Fig1.pdf} \caption{An AC system integrated with VSCs.} \label{fig1} \end{figure} \section{The proposed WAMS-based method for estimating matrices} \label{sectionmatrixestimation} The WAMS-based damping control design consists of two steps: estimating the matrices $A$ and $B$ in the current operating condition, and then designing the damping coefficients ${K_1}$ of VSCs based on the estimated $A$ and $B$ to achieve the desired damping performance. \subsection{The Theoretical Basis of the WAMS-Based Method for Matrix Estimation} In the compact form of the power system dynamic model incorporating VSCs (\ref{eq14}), $\bm{x}$ is a multivariate Ornstein-Uhlenbeck process.
According to the regression theorem of a multivariate Ornstein-Uhlenbeck process \cite{regression-theorem}, if the dynamic system described by (\ref{eq14}) is stable, which typically holds when the system is in a normal operating condition, the $\tau$-lag time correlation matrix $R(\tau)$ satisfies the following differential equation: \begin{equation} \frac{d}{{d\tau }}\left[ {{{R}}\left( \tau \right)} \right] = {{{A}}_{{c}}}{{R}}\left( \tau \right) \label{eq15} \end{equation} where ${{R}}\left( \tau \right) \triangleq \left\langle {\left[ {{\bm{x}}\left( {t + \tau } \right) - \bar {\bm{x}} } \right], {{\left[ {{\bm{x}}\left( t \right) - \bar {\bm{x}} } \right]}^T}} \right\rangle $, and $\bm{\bar{x}}$ denotes the mean of $\bm{x}$. \color{black} The $\tau$-lag time correlation matrix describes the correlation of a random vector with itself at time lag $\tau$. \color{black} Therefore, the system state matrix can be obtained by solving (\ref{eq15}) according to \cite{ref:sheng2019online} \begin{equation} {{{A}}_{{c}}} = \frac{1}{\tau }\log \left[ {{{R}}\left( \tau \right){{{C}}^{ - 1}}} \right] \label{eq:estA_c} \end{equation} where the stationary covariance matrix ${{C}} \triangleq \left\langle {\left[ {{\bm{x}}\left( t \right) - \bar {\bm{x}} } \right], {{\left[ {{\bm{x}}\left( t \right) - \bar {\bm{x}} } \right]}^T}} \right\rangle=R(0)$. Note that the statistics $R(\tau)$ and $C$ of {{$\bm{x}=[\Delta\bm{\delta}, \Delta\bm{\omega}]^T$}} can be estimated from PMU measurements (see Appendix \ref{Approximation}). Equation (\ref{eq:estA_c}) provides an elegant way of estimating the physical model information $A_{c}$ from the statistical properties of PMU measurements, which serves as the theoretical basis of the proposed matrix-estimation method.
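To illustrate (\ref{eq:estA_c}) numerically, the following Python sketch (toy matrices, not the paper's code) constructs $R(\tau)=e^{A_c\tau}C$ from the regression theorem and verifies that the matrix logarithm recovers $A_c$; with sampled data, $R(\tau)$ and $C$ would instead be replaced by their sample estimates:

```python
import numpy as np
from scipy.linalg import expm, logm, solve_continuous_lyapunov

rng = np.random.default_rng(1)
n, tau = 4, 0.1
A_c = rng.normal(size=(n, n)) - 3.0 * np.eye(n)   # toy stable state matrix
S = np.eye(n)                                     # toy noise input matrix
# The stationary covariance C solves  A_c C + C A_c^T + S S^T = 0
C = solve_continuous_lyapunov(A_c, -S @ S.T)
R_tau = expm(A_c * tau) @ C                       # regression theorem
A_c_est = (1.0 / tau) * logm(R_tau @ np.linalg.inv(C)).real
```

The recovery is exact here because $R(\tau)C^{-1}=e^{A_c\tau}$ and $\tau$ is small enough for the principal matrix logarithm to apply; with finite-window sample statistics the estimate carries a statistical error instead.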
\subsection{The Proposed Algorithm for Estimating Matrices} In order to estimate $A$ and $B$ in (\ref{eq:A})-(\ref{eq:B}), we propose an algorithm that estimates the dynamic components $-M^{-1}D$, $\bar{A_1}$ and $\bar{A_2}$. We assume that all generator buses are equipped with PMUs so that generators' rotor angles and speeds in ambient conditions can be calculated using the approaches in \cite{zhou2014PMU, yan2011PMU}. Equipping all generators with PMUs might seem optimistic for now, yet it is reasonable in the near future given the wide adoption of PMUs worldwide {\cite{ref:chen2015measurement}}. \color{black} In addition, we assume that VSCs are within their capability limits to provide damping and have enough controllability over the critical modes. \color{black} Also, the current damping coefficients ${K_1}$ of VSCs in (\ref{eq:Acl}) are assumed to be known when the algorithm is initiated (i.e., the default setting or no damping support). The specific steps of the proposed algorithm are as follows: \textbf{Step 1}: Given PMU measurements with a sufficient window length, estimate $[\Delta\bm{\delta}, \Delta\bm{\omega}]^T$ and compute their sample covariance matrix $\hat{C}$ and sample $\tau$-lag correlation matrix $\hat{R}$ {\color{black}for a selected $\tau$ according to (\ref{Approximation of matrix 1}) and (\ref{Approximation of matrix 2}) in Appendix \ref{Approximation}}. Estimate the closed-loop system state matrix by (\ref{eq:estA_c}):
\begin{equation} {{\hat{A}}_{{c1}}} = \left[ \begin{array}{cc} {{0}}& {\omega _0}{{{I}}_{{N_g}}}\\ {{{A}}_{{{c}}1{{LL}}}}& {{{A}}_{{{c}}1{{LR}}}} \end{array} \right] =\left[\begin{array}{cc}{0}&\omega_0{I_{N_g}}\\ \bar{A_1}&\bar{A_2}{K_1}-M^{-1}D\end{array} \right] \label{eq18} \end{equation} \textbf{Step 2}: Add small perturbations (e.g., $\alpha\%$) to the damping coefficients of all VSCs simultaneously by ${{K}_{{1}}}( {i,j})\leftarrow{{K}_{{1}}}( {i,j})+{\Delta{K}_{{1}}}( {i,j})$, where $K_1\in \mathbb{R}^{N_v\times N_g}$, \begin{equation} {\Delta{K}_{{1}}}\left( {i,j} \right) = \left\{ \begin{array}{ll} \alpha\% \,{{K}_{{1}}}\left( {i,j} \right), & j = {g_i}\\ 0, & j \ne {g_i} \end{array} \right. \label{eq20} \end{equation} where ${g_i}$ denotes the column index of the entry with the largest absolute value among the columns $\{1,2,...,N_g\}\setminus\{g_1,...,g_{i-1}\}$ in the ${i}$th row {\color{black} of $K_1$}. {\color{black} More specifically, $g_i$ is determined by the following steps.} Starting from the first VSC (i.e., the $1$st row of $K_1$), let $g_1$ be the column index of the entry with the largest absolute value in the $1$st row; go to the $2$nd VSC (i.e., the $2$nd row of $K_1$) and let $g_2$ be the column index of the entry with the largest absolute value among the columns $\{1,2,...,N_g\}\setminus\{g_1\}$ in the $2$nd row. We continue this procedure until the last row of $K_1$. As such, the perturbation matrix $\Delta K_1$ will be a generalized permutation matrix such that each column of $\bar{A}_2$ will be detectable in the estimation (see (\ref{eq:A2est})). \textbf{Step 3}: Collect new PMU measurements and estimate $[\Delta\bm{\delta}, \Delta\bm{\omega}]^T$ after the perturbation.
Estimate the new system state matrix after the perturbation by (\ref{eq:estA_c}): \begin{equation} \begin{aligned} {{\hat{A}}_{{c2}}} &= \left[ \begin{array}{cc} {{0}}& {\omega _0}{{{I}}_{{N_g}}}\\ {{{A}}_{{{c}}2{{LL}}}}& {{{A}}_{{{c}}2{{LR}}}} \end{array} \right] \\ &=\left[\begin{array}{cc}{0}&\omega_0{I_{N_g}}\\ \bar{A_1}&\bar{A_2}({K_1}+{\Delta{K}_{{1}}})-M^{-1}D\end{array} \right] \label{eq22} \end{aligned} \end{equation} \textbf{Step 4:} Estimate the dynamic components ${\bar{A_1}}$, $\bar{A}_2$ and $-{M^{-1}D}$ as follows: \begin{eqnarray} \bar{A_1}^{est} &=& {A}_{{c}2{LL}} \label{eq19}\\ {\bar {{{{A}}_2}}}^{est} &=& \left( {{{{A}}_{{{c}}2{{LR}}}} - {{{A}}_{{{c}}1{{LR}}}}} \right)\Delta{K}_{{1}}^{+}\label{eq:A2est}\\ ({{-{M^{-1}D}}})^{est}&=&A_{c1LR}-\bar{A_2}^{est}{K_1} \label{eq:MDest} \end{eqnarray} where ${\Delta{K}_{{1}}}^{+}$ is the pseudo-inverse of ${\Delta{K}_{{1}}}$. In particular, (\ref{eq:A2est}) is obtained by subtracting (\ref{eq18}) from (\ref{eq22}), followed by a simple manipulation of the lower-right block. Equation (\ref{eq:MDest}) is obtained by substituting $\bar{A_2}^{est}$ from (\ref{eq:A2est}) back into the lower-right block of (\ref{eq18}). \noindent\textit{Remarks:}\\ \begin{itemize} \item In this paper, a window size of 300 s is used in \textbf{Step 1} and \textbf{Step 3}, for which good accuracy is achieved. It should be noted that, despite the relatively large window size, the above algorithm does not assume any model information (e.g., network topology, generator dynamic parameters $D$ and $M$), but estimates the dynamic components as well as the matrices $A$ and $B$ purely from PMU data. Therefore, in the case of incomplete or inaccurate network information, the proposed method provides a way to acquire the dynamical system model for monitoring or control if sufficient data is available. \item {\color{black}The way to choose $g_i$ in \textbf{Step 2} is not unique.
Different $g_i$ can be selected as long as the resulting $\Delta K_1$ is a generalized permutation matrix such that each column of $\bar{A}_2$ is detectable in the estimation. Regarding the feasibility of adding perturbations to VSCs simultaneously, this can be realized by using the Global Positioning System (GPS) and the high-speed communication network between the control center and distributed VSCs. More detailed implementations in real systems can be found in \cite{ref:taylor2005wacs, ref:peng2009implementation}.} \item The second-order generator model is assumed in the proposed algorithm. However, third-order generator models with automatic voltage regulators (AVRs) are used in the simulation study to show the effectiveness of the algorithm in practical applications. \end{itemize} \section{The WAMS-based method for wide-area damping control} Once the dynamic components $ - {{{M}}^{ - 1}}{{D}}$, $\bar{A}_1$ and $\bar{A}_2$ are estimated, ${{A}}$ and ${{B}}$ in (\ref{eq13}) can be obtained from (\ref{eq:A})-(\ref{eq:B}). Therefore, the control input ${\bm{u}}$ can be designed to provide effective damping. To this end, the MLQR \cite{ref:almutairi2010enhancement} is applied to design the system input $\bm{u}$ in (\ref{eq13}) such that the following quadratic cost function is minimized: \color{black} \begin{equation} J_C=\mathop {\lim }\limits_{t \to \infty } E\left\{ {\int\limits_0^t {\left( {{{\bm{x}}^T}({{{L}}^T}{{{W}}_{{Q}}}{{L})\bm{x}} + {{\bm{u}}^T}{{{W}}_{{R}}}{\bm{u}}} \right)dt} } \right\} \label{eq:costfunc} \end{equation} \color{black} where ${{{W}}_{{Q}}}\geq0$ and ${{{W}}_{{R}}}>0$ are weighting matrices, which in most cases are set as diagonal matrices. Specifically, higher diagonal values in ${{{W}}_{{Q}}}$ correspond to a greater desire to stabilize the corresponding oscillation modes, while higher diagonal values in ${{{W}}_{{R}}}$ represent a stricter requirement to limit the corresponding control inputs.
Note that only the relative sizes of the components in the weighting matrices matter, rather than their absolute values. Besides, ${{L}}$ is the mapping matrix obtained from the real Schur decomposition \cite{mapping-matrix} {\color{black}of the system state matrix $A$}, which transforms the state variables ${\bm{x}}\left( t \right)$ to the modal variables ${\bm{z}}\left( t \right)$: \begin{equation} {\bm{z}}\left( t \right) = {{L}\bm{x}}\left( t \right) \label{eq26} \end{equation} where $\bm{z}$ are directly associated with the system modes $e^{\lambda_i t}$, $i=1,2,...,2\times N_g$. As a result, to damp the critical modes, we can add weights only to the corresponding diagonal values of $W_Q$ while setting all the other values to zero, leaving the well-damped modes unaffected by the control. The MLQR controller gain $\Gamma$ is obtained by solving the algebraic Riccati equation (ARE) associated with the cost function (\ref{eq:costfunc}). The final MLQR feedback control law is: \begin{equation} \bm{u}=-\Gamma\bm{x} \end{equation} The damping coefficients ${K_1}$ of VSCs, therefore, are set according to $\Gamma$. The main advantage of the MLQR control method lies in the fact that a full decoupling between modes can be achieved, such that multiple critical modes can be damped simultaneously while well-damped modes remain unaffected, making the local controllers effective \cite{ref:almutairi2010enhancement}. In addition, the VSCs' modulation capacities can be incorporated by tuning the weighting matrix $W_R$. To sum up, the overall procedure of the online WAMS-based WADC method using VSCs is described in Fig.~\ref{Flow chart}. It consists of two stages: the WAMS-based matrix estimation and the MLQR-based WADC design. The damping coefficients can be readjusted when changes in the working condition or network topology are detected.
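This design step can be sketched in Python as follows; a toy stable system and a plain LQR with a Schur-based modal weight stand in for the full MLQR of \cite{ref:almutairi2010enhancement}, and all matrices are illustrative rather than taken from the test system:

```python
import numpy as np
from scipy.linalg import schur, solve_continuous_are

rng = np.random.default_rng(2)
n, m = 4, 2
A = rng.normal(size=(n, n)) - 3.0 * np.eye(n)   # toy stable plant matrix
B = rng.normal(size=(n, m))                     # toy input matrix
T, Z = schur(A, output='real')                  # real Schur: A = Z @ T @ Z.T
L = Z.T                                         # modal mapping, z = L @ x
W_Q = np.diag([10.0, 10.0, 0.0, 0.0])           # weight only the targeted modes
W_R = np.eye(m)                                 # equal modulation capacities
Q = L.T @ W_Q @ L                               # cost term x^T (L^T W_Q L) x
P = solve_continuous_are(A, B, Q, W_R)          # algebraic Riccati equation
Gamma = np.linalg.solve(W_R, B.T @ P)           # feedback law u = -Gamma @ x
A_cl = A - B @ Gamma                            # closed-loop state matrix
```

Zeroing the entries of `W_Q` for well-damped modes mirrors the selective weighting described above, so the control effort concentrates on the targeted modes.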
\color{black} It should be noted that even if enough damping is provided by VSCs when the system works in the normal working condition, the system may still have suboptimal damping performance for the critical modes within the short time span after an event, before the controller is updated. In the extreme case where very poor damping occurs, some emergency control strategy (e.g., PSS-based) using ring-down PMU data \cite{ref:pradhan2018model} can be incorporated. \color{black} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{Fig2.pdf} \caption{Flow chart of the WAMS-based WADC strategy.} \label{Flow chart} \end{figure} \section{Case Studies} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{Fig3.pdf} \caption{The network topology of the IEEE 68-bus system with integrated VSCs.} \label{network topology} \end{figure} The IEEE 68-bus system is modified to test the proposed WAMS-based WADC method. In particular, {three VSCs (denoted by VSC1, VSC2 and VSC3) are placed at buses 54, 20 and 42, respectively, as marked in red in Fig.~\ref{network topology}.} \color{black} The implementation for different VSC locations will be discussed in Section \ref{section_diffVSCs}. \color{black} In steady state, all VSCs inject the same amount of active power of 0.5 p.u. into the AC system and there are no reactive power exchanges. In order to validate the effectiveness of the proposed method in practical applications, third-order generator models are used throughout the simulations. In addition, G1-G12 are controlled by automatic voltage regulators (AVRs). The fluctuation intensities ${\sigma _1},...,{\sigma _n}$ describing load variations are all set to 0.05. \color{black} $\tau=100$ ms is used to calculate the correlation matrix (see (\ref{Approximation of matrix 2})). \color{black} All the simulations are done in the Power System Analysis Toolbox (PSAT) \cite{ref:milano2005open}.
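For readers without access to PSAT, ambient data of the form (\ref{eq14}) can be emulated by Euler-Maruyama integration of the stochastic model; the Python sketch below uses a toy stable matrix (illustrative only), with $\sigma=0.05$ mirroring the fluctuation intensity above and the sampling choices matching the 50 Hz, 300 s window of the case study:

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt, steps, sigma = 4, 0.02, 15000, 0.05      # 50 Hz sampling, 300 s window
A_c = rng.normal(size=(n, n)) - 3.0 * np.eye(n) # toy stable closed-loop matrix
S = sigma * np.eye(n)                           # noise input, intensity 0.05
x = np.zeros(n)
traj = np.empty((steps, n))
for k in range(steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n)  # Brownian increment
    x = x + A_c @ x * dt + S @ dW               # Euler-Maruyama step
    traj[k] = x
C_hat = np.cov(traj.T)                          # sample covariance of ambient data
```

The resulting trajectory plays the role of the "emulated PMU data", and its sample covariance and lagged correlation feed the estimator of Section \ref{sectionmatrixestimation}.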
\begin{figure} \centering \includegraphics[width=0.7\linewidth]{Fig4m.pdf} \caption{Trajectories of G1's rotor speed before and after adding the small perturbation.} \label{stochastics} \end{figure} \subsection{Validation For the Algorithm of Estimating Matrices} State variables ${\bm{\delta }}$ and ${\bm{\omega }}$ are obtained from the emulated PMU data with a sampling rate of 50 Hz and a window size of 300 s. For example, Fig.~\ref{stochastics} presents the time-domain trajectory of G1's rotor speed before and after adding the small perturbation introduced in \textbf{Step 2} by (\ref{eq20}). It can be seen that the perturbation needed for estimating matrices is small and will not have an obvious impact on system performance. Following the procedure of the WAMS-based method for estimating matrices, the dynamic components are estimated and compared with their true values, as shown in Fig.~\ref{dynamic components}. All matrices can be estimated with fairly good accuracy. In particular, the entries with large values can always be well estimated, though some discrepancies may exist in the entries with smaller values, which, nevertheless, have little impact on the performance of the designed controllers, as shown in the subsequent section. {\color{black}Besides, the impacts of PMU noise and missing PMUs on the matrix estimation using (\ref{eq:estA_c}) have been discussed in \cite{ref:sheng2019online}, which shows that the closed-loop system state matrix $A_c$ or its sub-matrix can still be well estimated. Readers are referred to \cite{ref:sheng2019online} for more details. The subsequent discussions will focus on the effectiveness and the adaptiveness of the proposed WAMS-based WADC.
} \begin{figure} [htbp] \centering \subfigure[Values of $M^{-1}D$]{ \label{fig6a} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig5a.pdf} \end{minipage}% }% \subfigure[Values of $-\bar{A}_1$]{ \label{fig6b} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig5b.pdf} \end{minipage}% }% \subfigure[Values of $\bar{A}_2$]{ \label{fig6c} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig5c.pdf} \end{minipage}% }% \centering \caption{The actual and estimated values of the dynamic components.} \label{dynamic components} \end{figure} \subsection{Validation For the WAMS-Based WADC} The eigenvalues of the open-loop system state matrix are denoted in blue in Fig.~\ref{Actual eigenvalues}, where all inter-area modes (those in the yellow circle) are poorly damped. In order to increase the damping ratios of the inter-area oscillations to 10\%, above which the modes are considered to be well-damped \cite{ref:rogers2012power}, the WADC method introduced in Section IV is applied. Specifically, the entries of the weighting matrix ${{{W}}_{{Q}}}$ corresponding to the three inter-area modes are adjusted until the design objective is achieved, and the other entries are set to 0. In this case, the entries corresponding to the three inter-area modes are set to 2, 2.7 and 65, respectively. Besides, an identity matrix is used for the weighting matrix ${{{W}}_{{R}}}$, which assumes that all VSCs have the same modulation capacities.
\begin{figure} [htbp] \centering \subfigure[Eigenvalues of the estimated system state matrix]{ \label{Estimated eigenvalues} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig6a1.pdf} \end{minipage}% }% \subfigure[Eigenvalues of the actual system state matrix]{ \label{Actual eigenvalues} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig6b1.pdf} \end{minipage}% }% \centering \caption{Comparison of eigenvalues with and without WADC.} \label{eigenvalues} \end{figure} Fig.~\ref{eigenvalues} presents the comparison of eigenvalues with and without applying the WADC method. The results based on the estimated matrices are shown in Fig.~\ref{Estimated eigenvalues}. It can be seen that the eigenvalues corresponding to the three critical inter-area modes all move to the left, making the damping ratios larger than 10\%. Moreover, the WADC designed based on the estimation is still effective in the true system. As shown in Fig.~\ref{Actual eigenvalues}, the damping ratios of all the critical inter-area modes of the true system have been increased above 10\% by the designed WADC based on estimated matrices, {indicating that multiple inter-area modes are damped simultaneously. In the meantime,} the rest of the modes are unaffected, demonstrating the decoupling between different modes by the proposed method. {\color{black} In addition, the proposed WADC method took only 0.25 s on a regular laptop (Core i7-7500U @ 2.70GHz, 16 GB RAM) to estimate the matrices and to design MLQR, indicating negligible computational time and good feasibility in an online environment.} \subsection{Dynamic response to system disturbances} To validate the effectiveness of the WAMS-based WADC method under different system disturbances, we test the performance of the proposed method in two situations---under the variation of load and generation and under a line fault.
\subsubsection{Load and generation variation} In the first case, there is a sudden 10\% generation decrease from G8-G10 and a 10\% load decrease at bus 17 at 50.0 s. The dynamic response of the rotor angle difference between G5 and G13 with the WAMS-based WADC method is compared to that with PSS control, as shown in Fig.~\ref{Rotor angle load variation}. It can be seen that the WAMS-based WADC method seems to be more effective than PSS in damping the inter-area oscillation modes. Fig.~\ref{Pvsc load variation} presents the output active power of VSCs, showing the support of VSCs to damp the oscillations. \begin{figure} [htbp] \centering \subfigure[Rotor angle of G5 relative to G13]{ \label{Rotor angle load variation} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig7a.pdf} \end{minipage}% }% \subfigure[Active power of VSCs]{ \label{Pvsc load variation} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig7b1.pdf} \end{minipage}% }% \centering \caption{The system performance under a generation and load variation.} \label{load variation} \end{figure} \subsubsection{Line fault} In the second case, a three-phase line-to-ground fault is applied to bus 54 at 50.0 s and is cleared after 0.0833 s. Similarly, the dynamic response of the relative angle between G5 and G13 is presented in Fig.~\ref{Rotor angle fault}. A noticeably better damping performance is achieved using the proposed WAMS-based WADC method compared to the conventional PSS control. In addition, comparing the results in Fig.~\ref{Pvsc fault} with those in Fig.~\ref{Pvsc load variation}, we note that the maximum modulated power of VSC2 is increased from 0.35 p.u. to 0.51 p.u. when providing the damping support, since the line fault is a more severe disturbance than the load and generation variation.
{\color{black}It should be noted that the communication delay in practical applications may deteriorate the damping performance, which, however, can be alleviated by appropriate compensation approaches (e.g., \cite{ref:chaudhuri2009new}).} \begin{figure} [htbp] \centering \subfigure[The rotor angle of G5 relative to G13]{ \label{Rotor angle fault} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig8a.pdf} \end{minipage}% }% \subfigure[The active power of VSCs]{ \label{Pvsc fault} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig8b1.pdf} \end{minipage}% }% \centering \caption{The system performance when there is a line-to-ground fault at bus 54.} \label{fault} \end{figure} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{table*} \captionsetup{justification=centering, labelsep=newline} \centering \caption{An electromechanical mode (Mode 2) under different control strategies and working conditions \color{black}when VSCs are located at buses 54, 20, 42\color{black}} \label{tab1} \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c c c c} \hline \multirow{2}{*}{Working condition} & \multicolumn{2}{c}{Open-loop (without damping control) } & \multicolumn{2}{c}{Closed-loop (the model-based WADC) } & \multicolumn{2}{c}{Closed-loop (the WAMS-based WADC) }\\ \cline{2-7} & Frequency (Hz) & Damping ratio (\%) & Frequency (Hz) & Damping ratio (\%) & Frequency (Hz) & Damping ratio (\%) \\ \hline Base case & 0.952 & 5.121 & 0.943 & 11.143 & 0.943 & 11.470\\ \tabincell{c} {Line outage at bus 60-61} & 0.735 & 4.231 & 0.724 & 8.872 & 0.720 & 11.900\\ \tabincell{c} {Line outage at bus 53-54} & 0.757 & 4.107 & 0.746 & 8.800 & 0.743 & 11.643\\ \textcolor{black}{\tabincell{c} {Generation and load variation\\ followed by a line outage 53-54}} & \textcolor{black}{0.727} & \textcolor{black}{3.780} & \textcolor{black}{0.717} & \textcolor{black}{7.989} & \textcolor{black}{0.715} & \textcolor{black}{11.209}\\ 
\hline \end{tabular}} \end{table*} \subsection{Adaptiveness to different working conditions} In contrast to the model-based WADC, which {\color{black}may not} consider the change of working conditions, a significant advantage of the proposed WAMS-based WADC method is that the damping coefficients can be adjusted as the working condition varies. {\color{black}To show this, the adaptiveness of the proposed method to individual line outages and to a combined case where both a line outage and load variations occur will be demonstrated. Specifically, three working conditions besides the base case are considered: the condition after a line outage at bus 60-61; the condition after a line outage at bus 53-54; and the condition after a sudden 10\% load increase at bus 17 and 10\% generation increase from G8-G10, followed by a line outage at bus 53-54 1 s afterward.} The system performance without any control is compared to that with the model-based WADC using MLQR and that with the WAMS-based WADC using MLQR. We assume that \color{black}the topology changes are undetected such that \color{black}the model-based WADC designed for the base case remains unchanged. Table~\ref{tab1} presents the frequency and the damping ratio of an inter-area oscillation (Mode 2) under different control strategies and in various working conditions. It can be seen that both the model-based WADC and the WAMS-based WADC can increase the damping ratio to above 10\% in the base case. {\color{black}However, Mode 2 changes from well-damped to poorly-damped when any of the aforementioned working conditions occurs, and the model-based WADC fails to increase the damping ratio of Mode 2 to the requested 10\%.
The model-based WADC becomes ineffective when the working condition changes, as the controller damping gain is not updated due to the undetected topology changes or load variations.} In contrast, the WAMS-based WADC can always ensure that the minimum requested damping ratio is achieved {\color{black} when the working condition and/or the characteristics of electromechanical modes change.} In addition, Fig.~\ref{Adapativeness} shows comparisons of the time-domain response {\color{black}when Mode 2 (0.952 Hz) is excited in different working conditions with different control strategies}. It can be seen that the WAMS-based WADC always achieves better damping performance than the model-based WADC. Note that the difference is not huge because the targeted damping ratio in the WAMS-based WADC is 10\%. A higher damping ratio can be achieved by setting a higher threshold. \begin{figure} [htbp] \centering \subfigure[The condition after the line outage at bus 60-61]{ \label{mode2-bus6061} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig9a.pdf} \end{minipage}% }% \subfigure[The condition after the line outage at bus 53-54]{ \label{mode2-bus5354} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig9b.pdf} \end{minipage}% }% \subfigure[{\color{black}The condition after generation and load variations followed by a line outage at bus 53-54}]{ \label{mode2-generation-load-bus5354} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{load_variation_topology_change.pdf} \end{minipage}% }% \centering \caption{The system response to Mode 2 in different working conditions with different control methods.} \label{Adapativeness} \end{figure} \color{black} \subsection{Validation for different VSCs' locations} \label{section_diffVSCs} \begin{figure} [htbp] \captionsetup{labelfont={color=black},font={color=black}} \centering \subfigure[Eigenvalues of the estimated system state matrix]{ \label{New
Estimated eigenvalues} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig10a1.pdf} \end{minipage}% }% \subfigure[Eigenvalues of the actual system state matrix]{ \label{New Actual eigenvalues} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.8in]{Fig10b1.pdf} \end{minipage}% }% \centering \caption{Comparison of eigenvalues with and without WADC when VSCs' locations change (at buses 56, 20, 42).} \label{new eigenvalues} \end{figure} \begin{table*} \captionsetup{labelfont={color=black},font={color=black}, justification=centering, labelsep=newline} \centering \caption{An electromechanical mode (Mode 2) under different control strategies and working conditions when VSCs are located at buses 56, 20, 42.} \label{tab2} \textcolor{black}{ \resizebox{\textwidth}{!}{ \begin{tabular}{c c c c c c c} \hline \multirow{2}{*}{Working condition} & \multicolumn{2}{c}{Open-loop (without damping control) } & \multicolumn{2}{c}{Closed-loop (the model-based WADC) } & \multicolumn{2}{c}{Closed-loop (the WAMS-based WADC) }\\ \cline{2-7} & Frequency (Hz) & Damping ratio (\%) & Frequency (Hz) & Damping ratio (\%) & Frequency (Hz) & Damping ratio (\%) \\ \hline Base case & 0.952 & 5.119 & 0.943 & 11.216 & 0.942 & 11.539\\ \tabincell{c} {Line outage at bus 53-54} & 0.759 & 4.113 & 0.749 & 8.400 & 0.745 & 11.876\\ {\tabincell{c} {Generation and load variation\\ followed by a line outage at bus 53-54}} & 0.729 & 3.796 & 0.720 & 7.656 & 0.714 & 12.412\\ \hline \end{tabular}}} \end{table*} In practice, VSCs may be located electrically far from the generators. Accordingly, the effectiveness of the proposed strategy with VSCs at different locations is tested in this section. Generally speaking, the locations of VSCs can be selected according to controllability indexes. Interested readers can find more details in \cite{ref:latorre2011improvement, ref:trinh2016analytical}.
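As a rough, self-contained illustration of ranking candidate input locations by modal controllability (a simplified stand-in for the indexes discussed in the cited references; the matrices below are invented for the example, not taken from the test system), one can compare the magnitudes of the left-eigenvector/input-column products:

```python
import numpy as np

def controllability_indexes(A, B):
    """Rank candidate input locations by modal controllability.

    For each mode i (left eigenvector psi_i of the state matrix A) and
    each input j (column b_j of B), the magnitude |psi_i @ b_j| measures
    how strongly input j can excite or damp mode i.
    """
    _, V = np.linalg.eig(A)
    Psi = np.linalg.inv(V)      # rows of V^{-1} are the left eigenvectors
    return np.abs(Psi @ B)      # shape: (number of modes, number of inputs)

# Toy 2-state oscillator with two hypothetical input locations; the
# second input couples 10x more weakly into the dynamics.
A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])
B = np.array([[0.0, 0.0],
              [1.0, 0.1]])
idx = controllability_indexes(A, B)
# Column 0 dominates column 1 for every mode, so location 0 is preferred.
```

The larger the index for a mode, the more damping an input placed at that location can contribute to it.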
The controllability analysis shows that VSCs added to buses 56 and 20 have relatively large controllability indexes associated with Mode 1 (0.594Hz) and Mode 2 (0.952Hz). In comparison, the VSC added to bus 42 is more effective for Mode 1 and Mode 3 (1.108Hz). Therefore, a system with three VSCs placed at buses 56, 20 and 42 is considered, which has VSCs both far from and close to generators. More rigorous analysis on how to select the feedback signals and optimize the locations of VSCs for effective control of multiple modes is left for future work. Modal analysis shows that there are still three critical oscillation modes when damping control is not provided by the VSCs. After applying the proposed WADC method, the damping ratios of the three modes can be successfully increased to above 10\%, as presented in Fig.~\ref{new eigenvalues}. Additionally, the adaptiveness of the WAMS-based WADC strategy is tested in three working conditions: the base working condition, the condition after a line outage at bus 53-54, and the condition after a generation and load variation followed by the line outage at bus 53-54, which is the same as the one applied in Section IV.D. The results summarized in Table~\ref{tab2} validate that the proposed WAMS-based WADC can always maintain the required damping ratio, which {\color{black}may not} be achieved by the model-based WADC in different working conditions. \color{black} \section{Conclusion and Future Work} This paper has proposed a novel model-free WADC method to damp multiple inter-area oscillations using VSCs in various operating conditions. The method consists of two steps: the WAMS-based model-free estimation of the actual system state matrix $A$ and the input matrix $B$, followed by the MLQR-based WADC design using the estimated matrices.
The proposed method, being independent of model parameters and network topology, can adjust the control signals of VSCs to provide sufficient damping as the system evolves. Numerical studies in the modified IEEE 68-bus system show that the inter-area oscillations can always be well damped by the proposed method regardless of changes of working conditions and network topology. In the near future, robust control methods with respect to time delay will be studied. {\color{black} More detailed VSC dynamics and the optimal placement of VSCs to provide damping will also be considered.} \appendices \small \section{} \label{Aderivation} Firstly, the matrix of derivatives in (\ref{eq10}) is expressed by
\begin{equation}
\left[ \begin{array}{ccc}
\frac{\partial {\bm{P}}_{E}}{\partial {\bm{\delta}}} & \frac{\partial {\bm{P}}_{E}}{\partial {\bm{\theta}}} & \frac{\partial {\bm{P}}_{E}}{\partial {\bm{V}}}\\[2pt]
\frac{\partial {\bm{P}}_{\bm{v}}}{\partial {\bm{\delta}}} & \frac{\partial {\bm{P}}_{\bm{v}}}{\partial {\bm{\theta}}} & \frac{\partial {\bm{P}}_{\bm{v}}}{\partial {\bm{V}}}\\[2pt]
\frac{\partial {\bm{Q}}_{\bm{v}}}{\partial {\bm{\delta}}} & \frac{\partial {\bm{Q}}_{\bm{v}}}{\partial {\bm{\theta}}} & \frac{\partial {\bm{Q}}_{\bm{v}}}{\partial {\bm{V}}}
\end{array} \right] = \left[ \begin{array}{ccc}
A_{11} & A_{12} & A_{13}\\
A_{21} & A_{22} & A_{23}\\
A_{31} & A_{32} & A_{33}
\end{array} \right]
\label{eq32}
\end{equation}
Based on the second and third rows of (\ref{eq10}), ${\Delta \bm{\theta}}$ and ${\Delta \bm{V}}$ can be calculated by
\begin{equation}
\begin{aligned}
{\Delta \bm{\theta}} &= F_1 A_{23}^{-1}\left( \Delta{\bm{P}}_{\bm{v}} - A_{21}\Delta\bm{\delta} \right) - F_1 A_{33}^{-1}\left( \Delta{\bm{Q}}_{\bm{v}} - A_{31}\Delta\bm{\delta} \right)\\
{\Delta \bm{V}} &= F_2 A_{22}^{-1}\left( \Delta{\bm{P}}_{\bm{v}} - A_{21}\Delta\bm{\delta} \right) - F_2 A_{32}^{-1}\left( \Delta{\bm{Q}}_{\bm{v}} - A_{31}\Delta\bm{\delta} \right)
\end{aligned}
\label{eq33}
\end{equation}
where
\begin{equation} \nonumber
\begin{aligned}
F_1 &= \left( A_{23}^{-1}A_{22} - A_{33}^{-1}A_{32} \right)^{-1}\\
F_2 &= \left( A_{22}^{-1}A_{23} - A_{32}^{-1}A_{33} \right)^{-1}
\end{aligned}
\end{equation}
According to the first row of (\ref{eq10}) and substituting ${\Delta \bm{\theta}}$ and ${\Delta \bm{V}}$ by (\ref{eq33}), we have \begin{equation} \begin{aligned} {{\Delta }}{{\bm{P}}_{{E}}} = {{{A}}_{{{11}}}}{{\Delta \bm{\delta} }} +
{{{A}}_{{{12}}}}{{\Delta \bm{\theta} }} + {{{A}}_{{{13}}}}{{\Delta \bm{V}}}\\ = {{{A}}_{{1}}}{{\Delta \bm{\delta} }} + {{{A}}_{{2}}}{{\Delta }}{{\bm{P}}_{\bm{v}}} + {{{A}}_{{3}}}{{\Delta }}{{\bm{Q}}_{\bm{v}}} \end{aligned} \label{eq34} \end{equation} where \begin{equation} \nonumber \begin{aligned} {{{A}}_{{1}}} &= {{{A}}_{{{11}}}} + {{{A}}_{{{12}}}}{F_1}\left( { - {{{A}}_{{{23}}}}^{ - 1}{{{A}}_{{{21}}}} + {{{A}}_{{{33}}}}^{ - 1}{{{A}}_{{{31}}}}} \right) \\ &+ {{{A}}_{{{13}}}}{F_2}\left( { - {{{A}}_{{{22}}}}^{ - 1}{{{A}}_{{{21}}}} + {{{A}}_{{{32}}}}^{ - 1}{{{A}}_{{{31}}}}} \right)\\ {{{A}}_{{2}}} &= {{{A}}_{{{12}}}}{F_1}{{{A}}_{{{23}}}}^{ - 1} + {{{A}}_{{{13}}}}{F_2}{{{A}}_{{{22}}}}^{ - 1} \\ {{{A}}_{{3}}} &= - {{{A}}_{{{12}}}}{F_1}{{{A}}_{{{33}}}}^{ - 1} - {{{A}}_{{{13}}}}{F_2}{{{A}}_{{{32}}}}^{ - 1} \end{aligned} \end{equation} \section{} \label{Approximation} The estimated correlation matrix and the stationary covariance matrix are given by: \begin{equation} \begin{aligned} {\hat{R}_{\bm{xx}}\left(\tau\right)}&=\left[ \begin{array}{cc} \hat{R}_{\bm{\delta \delta}}\left(\tau\right)& \hat{R}_{\bm{\delta \omega}}\left(\tau\right)\\ \hat{R}_{\bm{\omega \delta}}\left(\tau\right)& \hat{R}_{\bm{\omega \omega}}\left(\tau\right) \end{array} \right]\\ {\hat{C}_{\bm{xx}}}&=\left[ \begin{array}{cc} \hat{C}_{\bm{\delta \delta}}&\hat{C}_{\bm{\delta \omega}}\\\hat{C}_{\bm{\omega \delta}}&\hat{C}_{\bm{\omega \omega}} \end{array} \right] \end{aligned} \label{Approximation of matrix 1} \end{equation} Each entry of $\hat{R}_{\bm{\delta \delta}}\left(\tau\right)$ and $\hat{C}_{\bm{\delta \delta}}$ can be computed by \begin{equation} \begin{aligned} {\hat{R}_{{\delta_i}{\delta_j}}\left(\tau\right)}&=\frac{1}{N}\sum_{k=1}^{N-\tau} {\left({\delta_{\left(k+\tau \right)i}}-\bar{\delta_i}\right)}{\left({\delta_{kj} }-\bar{\delta_j}\right)}\\ {\hat{C}_{{\delta_i}{\delta_j}}}&=\frac{1}{N}\sum_{k=1}^{N} {\left({\delta_{ki} }-\bar{\delta_i}\right)}{\left({\delta_{kj} }-\bar{\delta_j}\right)} 
\end{aligned} \label{Approximation of matrix 2} \end{equation} where $\tau$ is a given lagging time, $\bar{\delta_i}$ is the mean value of $\delta_i$, and $N$ is the sample size. Likewise, $\hat{R}_{\bm{\delta \omega}}\left(\tau\right)$, $\hat{R}_{\bm{\omega\delta}}\left(\tau\right)$, $\hat{R}_{\bm{\omega \omega}}\left(\tau\right)$, $\hat{C}_{\bm{\delta \omega}}$, $\hat{C}_{\bm{\omega \delta}}$ and $\hat{C}_{\bm{\omega \omega}}$ can also be calculated. \input{Main.bbl} \bibliographystyle{IEEEtran} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{a1.pdf}}]{Jinpeng Guo} (S'19) received the B.S. degree in electrical engineering and automation from Chongqing University, Chongqing, China, in 2014 and M.S. degree in electrical engineering from Southeast University, Nanjing, China, in 2017. He is currently pursuing the Ph.D. degree in the Department of Electrical and Computer Engineering, McGill University, Montreal, QC, Canada. His research interests include power system monitoring, analysis and control. \end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{a3.pdf}}]{Xiaozhe Wang} (S’13-M’16) is currently an Assistant Professor in the Department of Electrical and Computer Engineering at McGill University, Montreal, QC, Canada. She received the Ph.D. degree in the School of Electrical and Computer Engineering from Cornell University, Ithaca, NY, USA, in 2015, and the B.S. degree in Information Science and Electronic Engineering from Zhejiang University, Zhejiang, China, in 2010. Her research interests are in the general areas of power system stability and control, uncertainty quantification in power system security and stability, and wide-area measurement system (WAMS)-based detection, estimation, and control. She is serving on the editorial boards of IEEE Transactions on Power Systems, Power Engineering Letters, and IET Generation, Transmission and Distribution. 
\end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{a2.pdf}}]{Boon-Teck Ooi} (M’71-SM’85-F’02-LF’05) was born in Malaysia. He received the B. Eng. (Honors) degree in electrical engineering from the University of Adelaide, Australia, the M.S. degree in electrical engineering from the Massachusetts Institute of Technology and the Ph.D. degree in electrical engineering from McGill University, Montreal, QC, Canada. He is Emeritus Professor with the Department of Electrical and Computer Engineering, McGill University. He is IEEE Life Fellow. His research interests are in linear and conventional electric motors and generators (steady-state, transient, stability); power electronics (voltage-source converters, current-source converters, multi-level converters, power quality, thyristor HVDC, PWM-HVDC, multi-terminal HVDC FACTS), wind and other renewable energy sources. \end{IEEEbiography} \end{document}
\section{INTRODUCTION} For dynamical systems with numerical models, data assimilation is the process of combining observational data with a numerical model to obtain an estimate of the system's state. Data assimilation is essential to numerical weather prediction (NWP). The state estimate can be considered an interpolation of the sparse observational data, and it is used as the initial condition for the numerical forecast process. If the dimension is relatively low and the data set is small, various linear and nonlinear estimators with optimal or suboptimal performance can be found in the literature. However, to assimilate big data sets with high-dimensional models, such as those in operational NWP systems with tens of millions of variables, achieving reliable state estimates and error probability distributions is a challenging problem that has been studied for decades, with a vast literature. Two categories of methods are widely used in NWP, namely variational methods and the ensemble Kalman filter (EnKF) \cite{xu,houtekamer}. The former is based on a weighted least-squares optimization, such as four-dimensional variational data assimilation (4D-Var) over a fixed time window or the three-dimensional version (3D-Var), which excludes the time variable. The EnKF algorithm is based on the Kalman filter, except that the error covariance is approximated using a set of state ensembles. 4D-Var methods are used in operational NWP systems by many meteorological centers. While 4D-Var is an effective method of data assimilation, it has difficulty explicitly tracking the evolution of the error covariance within the estimation process due to the prohibitive computational costs and input/output (I/O) loads required by matrices of extremely high dimension. The EnKF, on the other hand, updates information about the error covariance in the form of ensembles.
However, the number of ensemble states is significantly smaller than the number of state variables. As a result, the rank deficiency of the error covariance tends to deteriorate the integrity of the estimation process unless remedies, such as localization and covariance inflation, are applied to the algorithm. Different types of Kalman filters have been developed and widely used in science and engineering applications, such as the EnKF, the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). In this paper, we introduce two algorithms of sparsity-based Kalman filters, namely the sparse UKF and the progressive EKF. The goal of this work is to explore innovative ideas that take advantage of the (near) sparsity structure of matrices so that the analysis and error covariance can be updated effectively and efficiently without the drawback of rank deficiency. The granularity of subproblems for the purpose of algorithm parallelization is also emphasized in the method. The filters are developed specifically for problems with very high dimensions. Different from EnKFs, in which the error covariance is represented using a set of dense ensemble vectors, the new algorithms in this paper are based on a sparse but full-rank matrix as an approximation of the error covariance. This is made possible because of two assumptions: (a) the error covariance is approximately a sparse matrix; (b) the system model is component-based, i.e., the state vector is divided into components that can be computed independently in parallel. \section{Sparse UKF} Consider a dynamical system model in which the state variable is $x(t)$, where $t=1, 2, 3,\cdots$ represents time steps. The value of the observation at $t=k$ is denoted by $y(k)$.
The system model is defined as follows,
\begin{equation}
\label{eq_mdl}
\begin{aligned}
x (k)&={\cal M}(x(k-1))+\eta_{k-1}, && x(k), \eta_{k-1} \in \mathbb{R}^n, \\
y(k)&={\cal H}(x(k))+\delta_k, && y(k), \delta_k \in \mathbb{R}^m,
\end{aligned}
\end{equation}
where $\eta_{k-1}$ is a random variable representing the model error. Its covariance is $Q$. The observational error, $\delta_k$, has covariance $R$. In data assimilation, the goal is to estimate the value of $x(k)$ given the observations $y(1), y(2), \cdots, y(k)$ and the model (\ref{eq_mdl}). If (\ref{eq_mdl}) is linear and all random variables are Gaussian, then the Kalman filter is an optimal state estimator. For nonlinear systems with non-Gaussian random errors, various types of Kalman filters exist in the literature, with successful applications in science and engineering. If a system has a very high dimension, the conventional form of the Kalman filter based on a dense error covariance is not applicable. In this section, we introduce an algorithm that is a variation of the UKF for problems with approximately sparse error covariances. In a sparse matrix/vector, most entries are zero. In this paper, we use an underbar to emphasize that a vector or matrix is sparse, for instance $\underline{P}$ and $\underline{x}$. A sparse vector is associated with an index set, denoted by ${\cal I}$, consisting of the indices of its nonzero entries. For a sparse matrix, columns may have different numbers of nonzero entries. The largest such number is denoted by $N_{sp}$. In sparsity-based algorithms, a full model evaluation is not always necessary. Using a {\it component-based} model can significantly reduce the computational load. In the notation, a component-based model has three inputs: the sparse state variable, its index set, and the index set of the output state.
More specifically,
\begin{equation}\begin{array}{lllllllll}
\label{ad_model}
\underline{x}(k)={\cal M}(\underline{x}(k-1); {\cal I}_1; {\cal I}_2),\\
\end{array}\end{equation}
where ${\cal I}_1$ is the index set of the sparse vector $\underline{x}(k-1)$ and ${\cal I}_2$ is the index set of $\underline{x}(k)$. The model evaluates only the entries with indices in ${\cal I}_2$, setting all other entries to zero. Note that $\underline{x}(k)$ is different from the full state variable $x(k)$ because the latter is, in general, a dense vector with mostly nonzero entries. Therefore, it is important to specify the index set ${\cal I}_2$ of the sparse vector $\underline{x}(k)$ to be evaluated using a component-based model. For simplicity, we often omit ${\cal I}_1$ in the notation, i.e.,
\begin{equation}\begin{array}{lllllllll}
\underline{x}(k)={\cal M}(\underline{x}(k-1); {\cal I}),
\end{array}\end{equation}
where ${\cal I}$ is the same as ${\cal I}_2$ in (\ref{ad_model}). Algebraic operations between sparse vectors, such as addition and dot product, are defined in the same way as for dense vectors. Thus, we may conduct operations between sparse vectors and dense vectors, such as adding a sparse vector to a dense vector, $\underline{x}+y$, as long as both vectors have the same dimension. A new operation, called merging, between a sparse vector and a dense vector is defined as follows,
\begin{equation}\begin{array}{lllllllll}
z= \underline{x}\triangleright y, & \left\{ \begin{array}{lll} i\mbox{th component of }z= i\mbox{th component of } \underline{x}, & \mbox{if } i\in {\cal I},\\ i\mbox{th component of }z= i\mbox{th component of } y , & \mbox{if } i\not\in {\cal I}. \end{array}\right. \end{array}\end{equation}
If an operation has an underbar, it means that the evaluation is carried out only on a given index set. For instance, given two sparse matrices $\underline{A}$ and $\underline{B}$, $\underline{A}*\underline{B}$ is a different matrix from $\underline{\underline{A}*\underline{B}}$.
The former is the conventional matrix multiplication between two sparse matrices; the latter is a matrix multiplication in which only the entries in a given index set are evaluated and all other entries are set to zero. Other operations, such as $\underline{\sqrt{\underline{P}}}$, are defined similarly. The notation is summarized in the following table. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|} \hline {\small Notation}& {\small Definition}& {\small Notation}&{\small Definition} \\ \hline $x$&{\small state variable}&$y$&{\small observation variable} \\ \hline ${\cal M}$&{\small model function}&${\cal H}$&{\small observation operator}\\ \hline $n$& {\small state space dimension }& $t=1,2...$&{\small (discrete) time variable}\\ \hline $x^\sigma_i$&{\small $\sigma$-point at $t=k-1$}&&\\ \hline $x^b_i$&{\small background - state vector }&$y^b_i$&{\small output of the observation operator ${\cal H}(x^b_i)$}\\ \hline $\bar x^b$&{\small average of $x^b_i$}&$\bar y^b$&{\small average of $y^b_i$}\\ \hline $P^b$&{\small background - error covariance}&&\\ \hline $x^a$&{\small analysis - state vector}&$P^a$&{\small analysis - error covariance}\\ \hline \end{tabular} \caption{Notations} \label{table2} \end{center} \end{table} \subsection{The UKF} The unscented Kalman filter has become increasingly popular in engineering applications since its introduction about twenty years ago \cite{UKF:julier,julier2}. In a UKF algorithm, the error covariance is propagated with the dynamics using a set of vectors, or $\sigma$-points, denoted by $x^\sigma$. Their definition is given in (\ref{eq_km1})-(\ref{eq_km2}). The $\sigma$-points are computed at each time step using a square root of the error covariance. In most UKF applications, $\sigma$-points are computed using either Cholesky factorization or matrix diagonalization. In the notation, a variable with a superscript '$a$', such as $x^a$, represents the {\it analysis} value of the variable, i.e., the updated value based on observations.
A variable with a superscript '$b$', such as $y^b$, represents the background, i.e., the propagated value of the analysis using the system model. The algorithm is summarized as follows. At $t=k-1$, suppose we have the analysis and error covariance as well as its square root
\begin{equation}\begin{array}{lllllllll}
\label{eq_km1}
x^a (k-1), \; P^a(k-1),\\
X^a(k-1)=\sqrt{(n+\kappa)P^a(k-1)},
\end{array}\end{equation}
where $\kappa$ is a scaling factor for the fine-tuning of the higher-order moments of the approximation error \cite{UKF:julier}. How to tune the value of $\kappa$ for a sparsity-based UKF is an open problem that needs further study. In this paper, $\kappa =0$ is used in all examples. A set of $\sigma$-points is generated as follows,
\begin{equation}\begin{array}{lllllllll}
\label{eq_km2}
x^\sigma_0(k-1)=x^a (k-1),\\
x^\sigma_i(k-1)=x^a(k-1)+X^a_i(k-1), & 1\leq i\leq n, \\
x^\sigma_i(k-1)=x^a(k-1)-X^a_i(k-1), & n+1\leq i\leq 2n. \\
\end{array}\end{equation}
The next step is to propagate the $\sigma$-points, whose forecasts represent the background at $t=k$. For simplicity, the index '$k$' of all variables in the $k$th time step is omitted.
\begin{equation}
\begin{aligned}
x^b_i&={\cal M}(x^\sigma_i(k-1)), & y^b_i&={\cal H}(x^b_i), & 0\leq i\leq 2n,\\
\bar x^b&= \displaystyle\sum_{i=0}^{2n} w_ix^b_i, & \bar y^b&=\displaystyle\sum_{i=0}^{2n} w_iy^b_i,
\end{aligned}
\end{equation}
where the weights are defined as follows
\begin{equation}\begin{array}{lllllllll}
w_0=\displaystyle\frac{\kappa}{n+\kappa}, \; w_i=\frac{1}{2(n+\kappa)},
\end{array}\end{equation}
for $i=1,2,\cdots, 2n$. Define the variations
\begin{equation}\begin{array}{lllllllll}
X^b_i=x^b_i-\bar x^b, \; Y^b_i=y^b_i-\bar y^b.
\end{array}\end{equation}
The background covariances are
\begin{equation}\begin{array}{lllllllll}
P^b=\displaystyle \sum_{i=0}^{2n} w_i X^b_i (X^b_i)^T+Q,\\
P_{xy}=\displaystyle \sum_{i=0}^{2n} w_i X^b_i (Y^b_i)^T,\\
P_{yy}=\displaystyle \sum_{i=0}^{2n} w_i Y^b_i (Y^b_i)^T+R.
\end{array}\end{equation} The Kalman gain, $K$, satisfies the following equation,
\begin{equation}\begin{array}{lllllllll}
KP_{yy}=P_{xy}.
\end{array}\end{equation}
The analysis is updated as follows,
\begin{equation}\begin{array}{lllllllll}
\begin{aligned}
x^a&=\bar x^b+K(y_o-\bar y^b),\\
P^a &= P^b -K(P_{xy})^T,
\end{aligned}
\end{array}\end{equation}
where $y_o$ is the observation at $t=k$. This completes one iteration of the filter. For the next step, $t=k+1$, go back to (\ref{eq_km1}), replacing the analysis by the updated values of $x^a$ and $P^a$. \subsection{Sparse UKF} The square root factorization of a matrix is not unique. For large and sparse matrices, various algorithms and their implementations on different computing platforms have been studied for many years. The literature can be traced back to the early days of electronic computers \cite{davis2}. In the case of Cholesky factorization, the square root of a sparse matrix is still sparse, although the computation may require more memory than the original matrix \cite{davis,rozin}. A dense error covariance is computationally intractable for global models used in NWP. In the proposed approach, we assume that $P$ and $\sqrt{P}$ are approximately sparse. In the algorithm, they are replaced by their sparse approximations, $\underline{P}$ and $\sqrt{\underline{P}}$. The indices of the nonzero entries in the $i$th column are denoted by ${\cal I}_i$ and ${\cal I}^{\sigma}_i$, respectively. The sparsity index set of the forecast of the $\sigma$-points, i.e., the background, is denoted by ${\cal I}^{b}$. Because $\underline{P}$ and $\sqrt{\underline{P}}$ are approximations of $P$ and $\sqrt{P}$, the sparsity patterns do not have to be exact.
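The sparsity-preservation property of the Cholesky factorization can be checked on a small banded example: the Cholesky factor of a symmetric positive-definite matrix of bandwidth $p$ has bandwidth at most $p$. A minimal sketch with an arbitrary test matrix (chosen for illustration only, not one of the covariances used later):

```python
import numpy as np

# A symmetric positive-definite (diagonally dominant) matrix with
# bandwidth 2; the values are arbitrary, chosen only for illustration.
n = 12
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 4.0
    for off in (1, 2):
        if i + off < n:
            P[i, i + off] = P[i + off, i] = 1.0 / off

# Lower-triangular Cholesky square root: P = L @ L.T
L = np.linalg.cholesky(P)

# The factor inherits the band: everything below the 2nd subdiagonal
# remains zero, so the square root is as sparse as P itself.
below_band = np.tril(L, k=-3)
```

For matrices whose nonzero pattern is not a simple band, some fill-in can occur, which is the extra memory mentioned above.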
In fact, in the example of the Lorenz-96 model presented in the next section, we assume that ${\cal I}_i$, ${\cal I}^{\sigma}_i$ and ${\cal I}^{b}$ are equal to each other, although $\sqrt{\underline{P}}$ may have a different sparsity pattern from that of $\underline{P}$.\\ \noindent \textbf{Algorithm I} (Sparse UKF)\\ Given the initial analysis,
\begin{equation}\begin{array}{lllllllll}
x^a(k-1), \; \underline{P}^a(k-1).
\end{array}\end{equation}
\noindent \textbf{Step 1}. $\sigma$-points and forecast
\begin{equation}
\label{eq_sukf0}
\underline{X}^a(k-1)=\underline{\sqrt{(n+\kappa)\underline{P}^a(k-1)}}, \mbox{ sparsity index set }{\cal I}^{\sigma}
\end{equation}
and
\begin{equation}
\label{eq_sukf1}
\begin{aligned}
x^b_0&={\cal M}(x^a(k-1)), & y^b_0&={\cal H}(x^b_0), \\
\underline{x}^b_{i}&={\cal M}(x^a(k-1)+\underline{X}^a_{i}(k-1); {\cal I}^{b}_i), & y^b_i&={\cal H}(\underline{x}^b_{i}\triangleright x^b_0), && 1\leq i\leq n,\\
\underline{x}^b_i&={\cal M}(x^a(k-1)-\underline{X}^a_{i}(k-1); {\cal I}^{b}_i), & y^b_i&={\cal H}(\underline{x}^b_{i}\triangleright x^b_0), && n+1\leq i\leq 2n,\\
\bar x^b&=\displaystyle\sum_{i=0}^{2n} w_i(\underline{x}^b_i\triangleright x^b_0), & \bar y^b&=\displaystyle\sum_{i=0}^{2n} w_iy^b_i\\
\end{aligned}
\end{equation}
\noindent \textbf{Step 2}. Background covariances
\begin{equation}\begin{array}{lllllllll}
\label{eq_sukf2}
\begin{aligned}
\underline{P}^b&=\displaystyle \sum_{i=0}^{2n} w_i\underline{(\underline{x}^b_i\triangleright x^b_0-\bar x^b)(\underline{x}^b_i\triangleright x^b_0-\bar x^b)^T}+\underline{Q}, &\mbox{ sparsity index set }{\cal I},\\
P_{xy}&=\displaystyle \sum_{i=0}^{2n} w_i(\underline{x}^b_i\triangleright x^b_0-\bar x^b)(y^b_i-\bar y^b)^T,\\
P_{yy}&=\displaystyle \sum_{i=0}^{2n} w_i(y^b_i-\bar y^b)(y^b_i-\bar y^b)^T+R.
\end{aligned}
\end{array}\end{equation}
\noindent \textbf{Step 3}.
Kalman gain and analysis
\begin{equation}\begin{array}{lllllllll}
\label{eq_update}
\begin{aligned}
KP_{yy}&=P_{xy},\\
x^a&=\bar x^b+K(y_o-\bar y^b),\\
\underline{P}^a &= \underline{P}^b -\underline{K(P_{xy})}^T+\gamma I,& \mbox{ sparsity index set }{\cal I}.
\end{aligned}
\end{array}\end{equation}
In (\ref{eq_sukf2}), we assume $\underline{x}^b_0 = x^b_0$. The term $\gamma I$ in (\ref{eq_update}) is a diagonal matrix. The value of $\gamma$ is selected so that $\underline{P}^a$ is positive definite. Positive definiteness is guaranteed if $\gamma$ is larger than $|\lambda_{\min}|$, where $\lambda_{\min}$ is the smallest negative eigenvalue of
\begin{equation}\begin{array}{lllllllll}
\label{eq_update2}
\underline{P}^b -\underline{K(P_{xy})}^T.
\end{array}\end{equation}
If the updated covariance matrix is already positive definite, then $\gamma =0$. In all examples, the value of $\gamma$ is adaptively changed in every cycle depending on the smallest negative eigenvalue of (\ref{eq_update2}). The sparse UKF algorithm is based on the assumption that $P^a$ can be approximated by a sparse matrix $\underline{P}^a$. Although the $\sigma$-points in the algorithm play a role similar to that of the ensembles in the EnKF, the sparse UKF avoids the problem of rank deficiency. For systems with very high dimensions, the number of ensemble members used in an EnKF is much smaller than the dimension. As shown in Figure \ref{fig_rank}, the narrow and tall matrix of ensemble vectors makes the EnKF fundamentally a rank-deficient approach. In contrast, the block-diagonal matrix $\underline{P}^a$ shown in Figure \ref{fig_rank} as a sparse approximation of $P^a$ has full rank. If $N_{sp}$ of $\underline{P}^a$ is an integer close to the ensemble size of an EnKF, then the memory required by $\underline{P}^a$ is smaller than that required by the ensemble matrix because of the symmetry of the error covariance. The required computational load in Step 1 is extremely high if full state vectors are computed.
Thanks to the sparsity, we only need to compute the entries with indices in ${\cal I}^{b}$. For a sparse UKF to be successful for problems with very high dimensions, it is critical to have component-based numerical models so that the subsets of entries defined by ${\cal I}^{b}$ can be computed in parallel, and most entries of the state vectors are not evaluated at all. In addition, each component-based computation requires only part of the state vector, which reduces the computer I/O load. \begin{figure}[!ht] \begin{center} \includegraphics[width=4.5in,height=2.5in]{fig_sparsitypattern.pdf} \caption{Patterns of ensemble vectors and sparse error covariances} \label{fig_rank} \end{center} \end{figure} Memory and I/O requirements are major factors influencing the efficiency of computational algorithms. Because an error covariance and its diagonal blocks are symmetric, the memory and I/O usage can be reduced by almost half for subproblems with symmetry. The granularity of a computational algorithm has considerable impact on its efficiency in parallel computing. By granularity control we mean that one can divide a high-dimensional problem into subproblems of desired dimensions. As shown in Figure \ref{fig_rank}, the error covariance and its square root consist of sparse blocks or sparse columns. This is different from the EnKF, in which the state vectors in an ensemble are dense. In a sparse approximation of $P^a$, the number of nonzero entries, a parameter similar to that used in distance-based localization methods, can be easily changed in $\underline{P}^a$ so that the error covariance and its square root can be grouped into smaller blocks of different sizes for parallel computation. \subsection{Lorenz-96 model} In this section, we use the Lorenz-96 model, first introduced in \cite{lorenz96}, to test the performance of the sparse UKF.
Consider
\begin{equation}\begin{array}{lllllllll}
\label{lorenz_1}
\begin{aligned}
\displaystyle \frac{dx_i}{dt}&=(x_{i+1}-x_{i-2})x_{i-1}-x_i+F, & i=1, 2, \cdots, n,\\
x_{n+1}&=x_1,\\
n&=40,\\
\Delta t&=0.025,\\
F&= 8.
\end{aligned}
\end{array}\end{equation}
The system has chaotic trajectories, as shown in Figure \ref{fig_chaos}, a plot of $x_1(t), x_2(t), x_3(t)$. \begin{figure}[!ht] \begin{center} \includegraphics[width=3.0in,height=2.5in]{fig_lorenz_chaos1-eps-converted-to.pdf} \caption{A chaotic trajectory of the Lorenz-96 model, $x_1$(solid), $x_2$(dash), $x_3$(dot).} \label{fig_chaos} \end{center} \end{figure} The simulations are conducted based on a 4th-order Runge-Kutta discretization. The trajectories are used as the ground truth. The sparsity patterns of $\underline{P}^a$ and $\sqrt{\underline{P}^a}$ are assumed to be bands of fixed width centered along the diagonal. The total number of nonzero entries in each column is $N_{sp}$. We would like to point out that the sparse matrices are approximations of the true error covariance and its square root. The sparsity pattern of $\sqrt{\underline{P}^a}$ is, in fact, different from that of $\underline{P}^a$. In the approximation, however, we ignore the difference and use the same sparsity pattern for both. As a result, the memory required by $\sqrt{\underline{P}^a}$ is reduced by almost half. This idea works well for the Lorenz-96 model. However, a systematic way of choosing the sparsity pattern of $\sqrt{\underline{P}^a}$ based on a given $\underline{P}^a$ is an open problem that needs further study. The numerical experimentation is based on $N=1000$ uniformly distributed random initial states in $[-1, 1]$. The time step size is $\Delta t=0.025$. The total number of time steps in each simulation is $N_t=4000$. The number of observations at any given time is $m=20$, i.e., every other state variable is measured. The observational error has a Gaussian distribution with covariance $R=I$, the identity matrix.
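For concreteness, the truth-trajectory generation described above, i.e., 4th-order Runge-Kutta integration of (\ref{lorenz_1}), can be sketched in a few lines (an illustrative sketch only, not the experiment code; the initial perturbation is arbitrary):

```python
import numpy as np

F, n, dt = 8.0, 40, 0.025

def lorenz96(x):
    """Right-hand side of the cyclic Lorenz-96 model:
    (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt):
    """One 4th-order Runge-Kutta step."""
    k1 = lorenz96(x)
    k2 = lorenz96(x + 0.5 * dt * k1)
    k3 = lorenz96(x + 0.5 * dt * k2)
    k4 = lorenz96(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Start near the unstable equilibrium x = F and let chaos develop.
x = F + 0.01 * np.random.default_rng(0).standard_normal(n)
for _ in range(1000):
    x = rk4_step(x, dt)
```

The `np.roll` calls implement the cyclic boundary condition $x_{n+1}=x_1$ without any explicit index bookkeeping.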
The initial background error covariance is $P^b(0)=0.2I$. The following RMSE is used to measure the accuracy of estimation \begin{equation}\begin{array}{lllllllll} RMSE=\sqrt{\displaystyle \frac{1}{n(N_t+1)}\displaystyle\sum_{k=0}^{N_t}||x^a(k)-x^{truth}(k)||^2_2}. \end{array}\end{equation} For comparison, an EnKF is applied to the same data set. The localization radius is $\rho=4$ and the inflation factor is $\sqrt{1.08}$. A full scale UKF based on a dense error covariance is also applied as a reference for the best achievable estimate. As an indication of computational load, the number of entries to be computed in each algorithm is shown in Table \ref{table_1}. The boxplot of simulation results is shown in Figure \ref{fig_box_UKF}. To summarize, the sparse UKF has considerably smaller error variation than the EnKF. This is expected because the new approach avoids the problem of rank deficiency. The medians of estimation errors are also smaller than that of the EnKF. In the cases of $N_{sp}=7$ and $11$, the memory size required by the sparse error covariance is smaller than the memory size needed to store the ensemble vectors in the EnKF if $N_{ens}=10$. The Cholesky factorization, while maintaining the sparsity property, may require additional memory. In this example of Lorenz-96, we use the same sparsity pattern for both $\sqrt{\underline{P}^a}$ and $\underline{P}^a$. This assumption simplifies the algorithm and reduces the memory and I/O requirements. The computational load, in the number of entry evaluations, is increased for the sparse UKF because the number of $\sigma$-points is $2n$. Reducing the number of entries being evaluated and testing the impact of the Cholesky factorization on the efficiency of the UKF are topics of ongoing research that are not addressed in this paper.
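In code, the error measure above reads as follows (a minimal sketch; the array layout, with one column per time step, is our own convention).

```python
import numpy as np

def rmse(x_a, x_truth):
    """RMSE over all n states and N_t + 1 time steps; the columns of the
    (n, N_t + 1) arrays hold the states at k = 0, ..., N_t."""
    n, nt1 = x_truth.shape
    return np.sqrt(np.sum((x_a - x_truth) ** 2) / (n * nt1))

# Sanity check: a constant offset of 1 in every entry gives RMSE = 1.
x_truth = np.zeros((40, 4001))
err = rmse(x_truth + 1.0, x_truth)
```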
\begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline {\small Filter}& {\small Size}& {\small Entries}&{\small Error} &{\small Error}&{\small Error}\\ &&{\small EVAL}&{\small Median}&{\small Mean}&{\small STD}\\ \hline {\small EnKF}&{\small $N_{ens}=10$}& {\small 400}&0.3462&1.0741&1.0652\\ \hline {\small S-UKF}&{\small $N_{sp}=7$}&{\small 600}&0.3061&0.3067&0.0071\\ \hline {\small S-UKF}&{\small $N_{sp}=11$}&{\small 920}&0.2691&0.2691&0.0048\\ \hline \end{tabular} \caption{Summary of simulation results} \label{table_1} \end{center} \end{table} \begin{figure}[!ht] \begin{center} \includegraphics[width=3.0in,height=2.0in]{fig_sparseUKF1-eps-converted-to.pdf} \caption{Boxplot of RMSE} \label{fig_box_UKF} \end{center} \end{figure} \section{Progressive EKF} In a sparse UKF, the $\sigma$-points are computed by taking a square root of the error covariance, such as the Cholesky factorization. The resulting $\sigma$-points are sparse. However, this process may require additional memory and computation. In this section, we propose a progressive algorithm that approximates the error covariance without taking square roots. \subsection{Basic ideas} The main assumption for this algorithm is the following progressive relationship \begin{equation}\begin{array}{lllllllll} \label{eq_prog} M_{k-1}P^a(k-1) M_{k-1}^T=P^a (k-1) + \Delta P^b, \end{array}\end{equation} where $\Delta P^b$ is assumed to be small. In (\ref{eq_prog}), $M_{k-1}$ is the Jacobian of ${\cal M}$ at $x^a (k-1)$. Similarly, the Jacobian of ${\cal H}$ is $H_k$. To estimate $\Delta P^b$, assume \begin{equation}\begin{array}{lllllllll} \label{eq_DM} M_{k-1}&=I+\Delta M_{k-1}, \end{array}\end{equation} where $\Delta M_{k-1}$ is assumed to be small. If the system model is based on the discretization of a differential equation with a small time step size, then \begin{equation}\begin{array}{lllllllll} \label{eq_prog3} {\cal M}(x(k-1))=x(k-1) + O(\Delta t^\alpha), \;\; \alpha \geq 1.
\end{array}\end{equation} The Jacobian of the $O(\Delta t^\alpha)$ term with respect to the state variables is expected to be small if $\Delta t$ is small, which makes (\ref{eq_DM}) a reasonable assumption. Then we have \begin{equation}\begin{array}{lllllllll} \label{eq_prog2a} \begin{aligned} &M_{k-1}P^a(k-1) M_{k-1}^T\\ &= (I+\Delta M_{k-1})P^a(k-1)(I+\Delta M_{k-1}^T)\\ &= P^a(k-1)+\Delta M_{k-1} P^a(k-1)+\left( \Delta M_{k-1} P^a(k-1)\right)^T+ \Delta M_{k-1} P^a(k-1)\Delta M_{k-1}^T\\ &\approx P^a(k-1)+\Delta M_{k-1} P^a(k-1)+\left( \Delta M_{k-1} P^a(k-1)\right)^T. \end{aligned} \end{array}\end{equation} This is consistent with (\ref{eq_prog}). It can be computed using a tangent linear model, or it can be approximated using the dynamical model \begin{equation}\begin{array}{lllllllll} \label{eq_prog2b} \begin{aligned} &M_{k-1}P^a(k-1) M_{k-1}^T\\ &= (I+\Delta M_{k-1})P^a(k-1)(I+\Delta M_{k-1}^T)\\ &\approx \left({\cal M}(x(k-1)+\delta P^a(k-1))-{\cal M}(x(k-1))\right)/\delta\\ &+\left({\cal M}(x(k-1)+\delta P^a(k-1))-{\cal M}(x(k-1))\right)^T/\delta-P^a. \end{aligned} \end{array}\end{equation} where $\delta>0$ is the step size of a finite difference approximation of $\Delta M_{k-1}P^a$. Its value should be determined depending on the numerical model and its linearization. In (\ref{eq_prog2b}), the sum of a vector and a matrix is a new matrix resulting from adding the vector to every column of the matrix. Applying an operator to a matrix means applying the operator to every column of the matrix. \subsection{Progressive EKF} The column vectors of the matrices in (\ref{eq_prog2a}) and (\ref{eq_prog2b}) are sparse. However, the number of column vectors equals $n$, which can be as high as $10^6-10^7$ for some atmospheric models. Applying a full model to all the vectors is impractical because of the high computational and I/O loads.
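To make (\ref{eq_prog2b}) concrete, the following dense sketch applies the model column-wise, which is exactly the cost that the sparse, component-based version avoids. This is our own illustration; the helper name is ours, and the check against a linear model $\mathcal{M}(x)=(I+\Delta M)x$ verifies that only the second-order term $\Delta M\,P^a\,\Delta M^T$ is dropped.

```python
import numpy as np

def progressive_mpmt(model, x, Pa, delta=1e-4):
    """Approximate M P^a M^T as in (eq_prog2b): finite differences of the
    full model along the (scaled) columns of P^a, symmetrized, minus P^a."""
    fx = model(x)
    cols = np.column_stack([model(x + delta * Pa[:, j])
                            for j in range(Pa.shape[0])])
    MP = (cols - fx[:, None]) / delta    # approximates M_{k-1} P^a
    return MP + MP.T - Pa

# Check against a linear model M(x) = (I + dM) x with small dM.
rng = np.random.default_rng(1)
n = 5
dM = 0.01 * rng.standard_normal((n, n))
A = np.eye(n) + dM
B = rng.standard_normal((n, n))
Pa = B @ B.T / n                         # a symmetric covariance
approx = progressive_mpmt(lambda v: A @ v, np.zeros(n), Pa)
exact = A @ Pa @ A.T
# The difference is the dropped second-order term dM @ Pa @ dM.T.
```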
On the other hand, if we approximate the error covariance using a given sparsity, only a small portion of the entries in each column vector needs to be evaluated. Evaluating the entire state vector is unnecessary. This is the reason we need a component-based model. The progressive EKF algorithm is then summarized as follows. \\ \noindent \textbf{Algorithm II} (Progressive EKF)\\ Given the initial analysis at $t=k-1$, \begin{equation}\begin{array}{lllllllll} x^a(k-1) \mbox{ and }\underline{P}^a(k-1). \end{array}\end{equation} \noindent \textbf{Step 1}. Forecast \begin{equation}\begin{array}{lllllllll} x^b={\cal M}(x^a(k-1)),\\ y^b={\cal H}(x^b). \end{array}\end{equation} \noindent \textbf{Step 2}. Background error covariance \begin{equation}\begin{array}{lllllllll} \label{eq_prop4} \begin{aligned} \underline{P}^b&=\underline{\left({\cal M}\left(x^a(k-1)+\delta \underline{P}^a(k-1),{\cal I}\right)-x^b \right)}/\delta\\ &+\underline{\left({\cal M}\left(x^a(k-1)+\delta \underline{P}^a(k-1),{\cal I}\right)-x^b \right)}^T/\delta-\underline{P}^a+Q. \end{aligned} \end{array}\end{equation} \noindent \textbf{Step 3}. Kalman gain and analysis \begin{equation}\begin{array}{lllllllll} \begin{aligned} K&=\underline{P}^b H_{k}^T(H_{k}\underline{P}^b H^T_{k} +R)^{-1},\\ x^a&= x^b+K(y_o- y^b),\\ \underline{P}^a&= (I-KH_{k})\underline{P}^b.\\ \end{aligned} \end{array}\end{equation} Unlike the sparse UKF, this algorithm avoids the computation of matrix square roots. However, the algorithm requires that $\Delta P^b$ in (\ref{eq_prog}) can be approximated effectively. From (\ref{eq_prog3}), the method is expected to work better for a small time step size. If $\Delta t$ is large, $\Delta M_{k-1}$ in (\ref{eq_DM}) may not be small enough. A remedy is to use a refined step size in an inner-loop computation. More specifically, the discrete model is a discretization of a continuous-time model. The discrete time moment $k-1$ corresponds to the continuous time moment $(k-1)\Delta t$.
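Before turning to the step-size refinement, one cycle of Algorithm II can be sketched as follows. This is a dense illustration with our own variable names; the actual algorithm stores $\underline{P}^a$ sparsely and evaluates only the entries in its pattern, and the observation map is taken linear here so that $H_k=H$.

```python
import numpy as np

def progressive_ekf_step(model, H, x_a, P_a, y_o, R, Q, delta=1e-4):
    """One analysis cycle of Algorithm II (dense sketch)."""
    n = x_a.size
    # Step 1: forecast
    x_b = model(x_a)
    y_b = H @ x_b
    # Step 2: background covariance via the progressive approximation
    cols = np.column_stack([model(x_a + delta * P_a[:, j]) for j in range(n)])
    MP = (cols - x_b[:, None]) / delta
    P_b = MP + MP.T - P_a + Q
    # Step 3: Kalman gain and analysis
    S = H @ P_b @ H.T + R
    K = np.linalg.solve(S, H @ P_b).T    # K = P_b H^T S^{-1}; P_b, S symmetric
    x_new = x_b + K @ (y_o - y_b)
    P_new = (np.eye(n) - K @ H) @ P_b
    return x_new, P_new

# With the identity model, Step 2 yields P_b = P_a + Q and the cycle
# reduces to a plain linear Kalman update.
x_demo, P_demo = progressive_ekf_step(lambda v: v, np.eye(2), np.zeros(2),
                                      np.eye(2), np.ones(2), np.eye(2),
                                      np.zeros((2, 2)))
```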
We refine the step size by dividing the time interval into $n_p$ subintervals. In our examples, we choose $n_p=2$. The refined time steps are \begin{equation}\begin{array}{lllllllll} (k-1)\Delta t, (k-1)\Delta t+\displaystyle \frac{\Delta t}{n_p}, \cdots, (k-1)\Delta t+s\displaystyle \frac{\Delta t}{n_p},\cdots, k\Delta t, \;\;\;\;\; 0\leq s \leq n_p. \end{array}\end{equation} For the inner loop, one can compute a sequence of backgrounds, $\tilde x^b (s)$: \begin{equation}\begin{array}{lllllllll} \label{refinestep} t_s=(k-1)\Delta t+s\displaystyle \frac{\Delta t}{n_p},\\ \tilde x^b(s)=\tilde{\cal M}_{t_s} (x^a(k-1)), & s=1,2, \cdots, n_p. \end{array}\end{equation} where $\tilde {\cal M}_{t_s}$ represents the refined model function in the time interval from $t=(k-1)\Delta t$ to $t=t_s$. In Step 2, repeat (\ref{eq_prop4}) $n_p$ times along the sequence of background states, $\tilde x^b(s)$, without adding $Q$ until the last round. This refined Step 2 increases the computational load, while improving the accuracy of the progressive estimation. \subsection{Examples} In the following, we apply the progressive EKF to the Lorenz-96 model using the same parameters as in (\ref{lorenz_1}). The error covariance is approximated using sparse matrices with $N_{sp}=7, 11, 17$. For $N_{sp}=11$, we tested the idea of refining the step size using $n_p=1$ and $n_p=2$. The results are shown in Figure \ref{fig_progEKF} and summarized in Table \ref{table2}. Compared to the EnKF, the error variations of the progressive EKF are uniformly and significantly smaller. If $N_{sp}=7$, which is smaller than the ensemble size $N_{ens}=10$, the median value of estimation error is larger than that of the EnKF. The median error for $N_{sp}=11$ is comparable to that of the EnKF. If a refined step size in (\ref{refinestep}) is applied, for instance $n_p=2$, the median estimation error is further reduced. Compared to the sparse UKF, the error variations are similar.
However, the estimation error of the sparse UKF has a smaller median in all cases. For example, to achieve performance similar to that of the sparse UKF with $N_{sp}=11$, one has to use a larger sparsity index $N_{sp}=17$ for the progressive EKF. \begin{figure}[!ht] \begin{center} \includegraphics[width=3.0in,height=2.0in]{fig_progressiveKF1-eps-converted-to} \caption{Boxplot of RMSE. For Progressive-KF, $N_{sp}$ = 7, 11, and 17} \label{fig_progEKF} \end{center} \end{figure} The computational load of the progressive EKF, in terms of component evaluations, is comparable to that of the EnKF, as shown in Table \ref{table2}. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline {\small Filter}& {\small Size}& {\small Entries}&{\small Error} &{\small Error}&{\small Error}\\ &&{\small EVAL}&{\small Median}&{\small Mean}&{\small STD}\\ \hline {\small EnKF}&{\small $N_{ens}=10$}& {\small 400}&0.3462&1.0741&1.0652\\ \hline {\small P-KF}&{\small $N_{sp}=7$}&&&&\\ &{\small $N_p=1$}&{\small 320}&0.3845&0.3846&0.0055\\ \hline {\small P-KF}&{\small $N_{sp}=11$}&&&&\\ &{\small $N_p=1$}&{\small 480}&0.3455&0.3458&0.0050\\ \hline {\small P-KF}&{\small $N_{sp}=11$}&&&&\\ &{\small $N_p=2$}&{\small 480x2}&0.3041&0.3041&0.0044\\ \hline {\small P-KF}&{\small $N_{sp}=17$}&&&&\\ &{\small $N_p=3$}&{\small 720x2}&0.2872&0.2873&0.0046\\ \hline \end{tabular} \caption{Summary of simulation results} \label{table2} \end{center} \end{table} \section{Conclusions} Two Kalman filter algorithms based on sparse error covariances are introduced. They are tested using the Lorenz-96 model with $40$ state variables and chaotic trajectories. Both algorithms share the same basic idea: the error covariance is approximated using a sparse matrix. Thanks to the sparsity, the required memory size is significantly reduced. The symmetry of the error covariance can potentially reduce the I/O load.
The analysis error covariance can be updated as a sparse matrix in each cycle using a deterministic process, either a square root matrix or a progressive algorithm. The updated sparse matrix is then used as the background error covariance for the next cycle. Relative to the EnKF, the main advantage of the proposed methods is that the estimation process does not need an ensemble, and the error covariance has full rank. The algorithms do not suffer from the rank-deficiency issues of EnKFs. As a result, the variation of the analysis error is consistently small in all examples. Techniques of localization and covariance inflation are unnecessary. Relative to 4D-Var methods, the proposed algorithms are mostly parallel. They provide not only the state estimate but also the analysis error covariance. For the purpose of scalability, we suggest that the proposed methods be applied with component-based numerical models. From the examples, the sparse UKF has better accuracy than the progressive EKF. If the computational load of taking square roots of sparse matrices is affordable, then the sparse UKF is the approach of our choice. On the other hand, the progressive EKF is a simple algorithm that avoids taking square roots of large matrices, provided that the progressive approximation of the error covariance is adequately accurate. Although most conclusions drawn in this paper are based on simulations using the Lorenz-96 model, the algorithms are developed for general applications. Testing the methods using different types of system models is a main topic of our future work.
\section{Introduction} Given some points in general position, one can ask for the number of varieties of a fixed dimension and fixed number of nodes passing through the points. We study tropical counts of binodal cubic surfaces over $\mathbb{C}$ and $\mathbb{R}$. The space $\mathbb{P}^{19}$ parameterizes cubic surfaces by the coefficients of their defining polynomial. The singular cubic surfaces form a variety of degree 32 called the \emph{discriminant} in $\mathbb{P}^{19}$. The surfaces passing through a particular point in $\mathbb{P}^3$ form a hyperplane in $\mathbb{P}^{19}$. Thus, through 18 generic points there are 32 nodal surfaces. The reducible singular locus of the discriminant is the union of the cuspidal cubic surfaces and the binodal cubic surfaces. Each is a codimension 2 variety in $\mathbb{P}^{19}$. In \cite[Section 7.1]{Va03}, Vainsencher gives formulas for the number of $k$-nodal degree $m$ surfaces in a $k$-dimensional family in $\mathbb{P}^3$. That is, for $k=2$ and $m=3$, he determines the degree of the variety parameterizing 2-nodal cubics. For $k = 2$ nodes, there are $2 (m - 2) (4 m^3 - 8 m^2 + 8 m - 25) (m - 1)^2$ such surfaces. Setting $m = 3$, we have the following count. \begin{thm}{\cite{Va03}} There are 280 binodal complex cubic surfaces passing through 17 general points. \end{thm} Mikhalkin pioneered the use of tropical geometry to answer questions in enumerative geometry \cite{Mik}. Tropical methods have successfully counted nodal plane curves over $\mathbb{C}$ and $\mathbb{R}$ \cite{Mik,BM09}. In \cite{BrMi2007,BM09} this technique is enriched by the concept of floor diagrams. In our setting, we ask: \begin{que}[Question 10\footnote{\textit{27 Questions on the Cubic Surface}, \url{http://cubics.wikidot.com/question:all}}] Can the number 280 of binodal cubic surfaces through 17 general points be derived tropically?
\end{que} For a specific choice of points, given in Section \ref{sec:background}, tropical methods are useful because the dual subdivisions of the Newton polytope are very structured. This allows us to study only 39 subdivisions of the Newton polytope of a cubic surface. This is minuscule compared to the 344,843,867 unimodular triangulations of this polytope \cite{JJK18,JPS19}. Singular tropical surfaces and hypersurfaces are studied in \cite{MaMaSh12,DT12}. A surface with $\delta$ nodes as its only singularities is called \textit{$\delta$-nodal}. The tropicalization of a $\delta$-nodal surface is called a \textit{$\delta$-nodal} tropical surface. We say a $\delta$-nodal surface is \textit{real} if the polynomial defining the surface is real and the surface has real singularities. If we count all tropical binodal cubic surfaces through our points with multiplicities, we will recover the true count. We study tropical surfaces with \emph{separated} nodes in the tropical surface, in the sense that the topological closures of the cells in the surface containing the nodes have empty intersection. To count them, we list the dual subdivisions of candidate binodal tropical cubic surfaces and count their multiplicities. \begin{thm} \label{thm:212} There are $39$ tropical binodal cubic surfaces through $17$ points in Mikhalkin position (described in Section \ref{sec:background}) containing separated singularities. They give rise to $214$ complex binodal cubic surfaces through $17$ points. \end{thm} \begin{proof} We distinguish five cases based on which \emph{floors} (Definition \ref{def:floorplan}) of the tropical cubic surface contain the nodes and count with complex multiplicities (Definition \ref{def:multiplicities}). 
$$ 214 = \underbrace{20}_{\text{Proposition \ref{prop:31}}} + \underbrace{24}_{\text{Proposition \ref{prop:21}}} +\underbrace{90}_{\text{Proposition \ref{prop:32}}} +\underbrace{72}_{\text{Proposition \ref{prop:22}}} +\underbrace{8}_{\text{Proposition \ref{prop:33}}}. $$ \end{proof} \begin{thm} \label{thm:56} There exists a point configuration $\omega$ of $17$ real points in $\mathbb{P}^3$ all with positive coordinates, such that there are at least $58$ real binodal cubic surfaces passing through $\omega$. \end{thm} \begin{proof} We count the floor plans in Theorem \ref{thm:212} with real multiplicities. $$ 58 \leq \underbrace{\geq 16}_{\text{Proposition \ref{prop:31}}} + \underbrace{\geq 4}_{\text{Proposition \ref{prop:21}}} +\underbrace{\geq 34}_{\text{Proposition \ref{prop:32}}} +\underbrace{\geq 4}_{\text{Proposition \ref{prop:22}}} +\underbrace{\geq 0}_{\text{Proposition \ref{prop:33}}}. $$ \end{proof} As we conduct the counts in Theorems \ref{thm:212} and \ref{thm:56}, we encounter cases with \emph{unseparated} nodes. Here, the two node germs are close together, and so the cells that would normally contain the nodes interact and their topological closures intersect. Thus, the node germs interfere with the conditions on producing nodes \cite{MaMaSh18}. These cases account for the 66 surfaces missing from our count. Their dual Newton subdivisions contain unclassified polytope complexes, which we list in Section \ref{sec:polytopes}. \\ \ \\ \noindent\textbf{Acknowledgements.} The authors thank Hannah Markwig for her explanations, the insightful discussions and her feedback, and Bernd Sturmfels for helpful remarks and recommendations for the improvement of this article. \section{Tropical Floor Plans} \label{sec:background} We now give an overview of counting surfaces using tropical geometry. Let $\mathbb{K} = \cup_{m\geq 1} \mathbb{C}\{t^{1/m}\}$ and $\mathbb{K}_\mathbb{R} = \cup_{m\geq 1} \mathbb{R}\{t^{1/m}\}$. 
We assume the reader is familiar with tropical hypersurfaces and the corresponding dual subdivision of the Newton polytope as in \cite[Chapter 3.1]{tropicalbook}. For any 17 generic points in $\mathbb{K}$, the tropicalizations of the 280 binodal cubic surfaces pass through the tropicalizations of the 17 points. However, a bad choice of points might lead to the tropicalizations of the points not being distinct, or not being tropically generic. Furthermore, these surfaces would be difficult to characterize in general. Luckily, we can choose points in \textit{Mikhalkin position} (Definition \ref{def:pointconfig}). This is a special configuration of points in generic position such that their tropicalizations are tropically generic. Tropical surfaces passing through such points have a very nice form, and the combinatorics of the dual Newton subdivision is well understood. We can count tropical surfaces through points in Mikhalkin position. Since the algebraic points are generic, we have $280 = \sum_{S} \text{mult}_{\mathbb{C}}(S)$, where we sum over all tropical surfaces $S$ passing through the tropicalized points and $\text{mult}_{\mathbb{C}}(S)$ is the lifting multiplicity of $S$ over $\mathbb{K}$. At this time, the ways in which two nodes can appear in a tropical surface are not fully understood, so our count is incomplete. Cases we do not understand yet are listed in Section \ref{sec:polytopes}. We now give the point configuration for counting tropical surfaces. \begin{dfn}[{\cite[Section 3.1]{MaMaSh18}}]\label{def:pointconfig} Let $\omega=(p_1,...,p_{17})$ be a configuration of 17 points in $\mathbb{K}^3$ or $\mathbb{K}_{\mathbb{R}}^3$. Let $q_i\in \mathbb{R}^3$ be the tropicalization of $p_i$ for $i=1,...,17$. 
We say $\omega$ is in \emph{Mikhalkin position} if the $q_i$ are distributed with growing distances along a line $\{\lambda\cdot (1,\eta,\eta^2)|\lambda \in \mathbb{R} \}\subset\mathbb{R}^3$, where $0<\eta\ll1,$ and the $p_i$ are in algebraically generic position. This is possible by \cite{Mik}. From now on all cubic surfaces are assumed to satisfy point conditions from points in Mikhalkin position. \end{dfn} We now summarize the recipe for constructing binodal tropical cubic surfaces through our choice of 17 points. Given a singular tropical surface $S$ passing through $\omega=(p_1,...,p_{17})$ in Mikhalkin position, each point $p_i$ is contained in the interior of its own 2-cell of $S$ \cite[Remark 3.1]{MaMaSh18}. Therefore, we can encode the positions of these points by their dual edges in the Newton subdivision. Marking these edges in the subdivision leads to a path through $18$ of the lattice points in the Newton polytope $\Delta$. Due to our special configuration, this path is always connected for cubics and we only have one step of the path in between each slice of $\Delta$ in the $x$-direction. Therefore we can look at each slice independently, obtaining subdivisions of polytopes dual to curves of degrees 3, 2, and 1. These are the \emph{floors} of our floor plans (see Definition \ref{def:floorplan}). By the smooth extension algorithm a floor plan uniquely defines a subdivision of $\Delta$ and therefore gives a unique tropical surface. This assignment is injective \cite[Proposition 5.6]{MaMaShSh19}. Since every tropical surface passing through points in Mikhalkin position is floor decomposed \cite{BeBrLo17}, its dual subdivision can be sliced in the $x$-direction. The resulting floors then give rise to the original surface. Tropicalizations of singularities leave a mark in the dual subdivision \cite{MaMaSh12}. 
For our point configuration and chosen degree there are only three circuits in the dual subdivision that give rise to separated nodes, see Figure \ref{fig:circuits}. Circuit A is a pentatope, its dual vertex is the node. To encode a singularity, circuit D must be part of a bipyramid, Figure \ref{fig:bipyramid}. The node is the midpoint of the edge dual to the parallelogram. Circuit E must have at least three neighboring points in special positions, forming at least two tetrahedra with the edge, Figure \ref{fig:weight2config}. The weighted barycenter of the 2-cell dual to the edge of length two is the node, where the weight is given by the choice of the three neighbors. A node germ (Definition \ref{def:nodegerms}) is a feature of a tropical curve appearing in a floor plan which gives one of these circuits in the subdivision dual to the tropical surface. \begin{figure}[h] \centering \begin{subfigure}{.19\textwidth} \centering \includegraphics[height=0.5\textwidth]{circuitA.png} \caption{circuit A}\label{fig:pentatope} \end{subfigure}{} \begin{subfigure}{.19\textwidth} \centering \includegraphics[height=0.5\textwidth]{circuitD.png} \caption{circuit D}\label{fig:circuitD} \end{subfigure}{} \begin{subfigure}{.19\textwidth} \centering \includegraphics[height=0.5\textwidth]{bipyramid.png} \caption{Bipyramid}\label{fig:bipyramid} \end{subfigure}{} \begin{subfigure}{.19\textwidth} \centering \includegraphics[height=0.5\textwidth]{circuitE.png} \caption{circuit E}\label{fig:circuitE} \end{subfigure}{} \begin{subfigure}{.19\textwidth} \centering \includegraphics[height=0.5\textwidth]{weight2config.png} \caption{Weight two configuration}\label{fig:weight2config} \end{subfigure} \caption{Circuits in the dual subdivision. } \label{fig:circuits} \end{figure}{} \begin{dfn}[\cite{MaMaShSh19}, Definition 5.1]\label{def:nodegerms} Let $C$ be a plane tropical curve of degree $d$ passing through $\binom{d+2}{2}-2$ points in general position. 
A \textit{node germ} of $C$ of a floor plan of degree 3 is one of the following: \begin{enumerate} \item a vertex dual to a parallelogram \item a horizontal or diagonal end of weight two, \item a right or left \emph{string} (see below). \end{enumerate} If the lower right (resp. left) vertex of the Newton polytope has no point conditions on the two adjacent ends, we can prolong the adjacent bounded edge in direction $(1,0)$ (resp. $(-1,-1)$) and still pass through the points. The union of the two ends is called a \emph{right} (resp. \emph{left}) \emph{string}. \end{dfn} \begin{rem} Now we can explain what our notion of separated nodes means in terms of the dual subdivision: two nodes are separated if they arise from polytope complexes of the form \ref{fig:pentatope}, \ref{fig:bipyramid} or \ref{fig:weight2config}. Any such two complexes might intersect in a unimodular face. \end{rem} In \cite{MaMaShSh19} tropical floor plans are introduced to count surfaces satisfying point conditions, similar to the concept of floor diagrams used to count tropical curves. Their definition of tropical floor plans requires node germs to be separated by a floor, thus neglecting surfaces where the nodes are still separated but closer together, because that is enough to count multinodal surfaces asymptotically \cite[Theorem 6.1]{MaMaShSh19}. \begin{dfn}[\cite{MaMaShSh19}, Definition 5.2]\label{def:floorplan} Let $Q_i$ be the projection of $q_i$ along the $x$-axis. A \textit{$\delta$-nodal floor plan $F$ of degree $d$} is a tuple $(C_d,...,C_1)$ of plane tropical curves $C_i$ of degree $i$ together with a choice of indices $d\geq i_{\delta}\geq ... 
\geq i_1\geq1$, such that $i_{j+1}> i_j +1$ for all $j$, satisfying: \begin{enumerate} \item The curve $C_i$ passes through the following points:\begin{align*} &\text{if } i_{\nu}>i>i_{\nu-1}: Q_{\sum_{k=i+1}^d \binom{k+2}{2}-\delta+\nu},...,Q_{\sum_{k=i}^d \binom{k+2}{2}-\delta+\nu} \\ &\text{if } i=i_{\nu}: Q_{1-\delta+\nu+\sum_{k=i+1}^d \binom{k+2}{2}},...,Q_{-2-\delta+\nu+\sum_{k=i}^d \binom{k+2}{2}-1}. \end{align*} \item The plane curves $C_{i_j}$ have a node germ for each $j=1,...,\delta.$ \item If the node germ of $C_{i_j}$ is a left string, then its horizontal end aligns with a horizontal bounded edge of $C_{i_j+1}$. \item If the node germ of $C_{i_j}$ is a right string, then its diagonal end aligns either with a diagonal bounded edge of $C_{i_j-1}$ or with a vertex of $C_{i_j-1}$ which is not adjacent to a diagonal edge. \item If $i_{\delta}=d$, then the node germ of $C_d$ is either a right string or a diagonal end of weight two. \item If $i_1=1$, then the node germ of $C_1$ is a left string. \end{enumerate} \end{dfn} \begin{rem}As soon as we allow node germs in adjacent or the same floors, we need to consider an additional alignment condition not occurring if the node germs are separated by floors: A left string in $C_i$ can also align with a vertex of $C_{i+1}$ not adjacent to a horizontal edge. 
\end{rem} \begin{figure} \centering \begin{subfigure}{.32\textwidth} \includegraphics[height=0.55\textwidth]{leftstring.png}\label{fig:lefstring} \caption{Left string} \end{subfigure}{} \begin{subfigure}{.32\textwidth} \includegraphics[height=0.55\textwidth]{rightstring.png}\label{fig:rightstring} \caption{Right string} \end{subfigure}{} \begin{subfigure}{.32\textwidth} \includegraphics[height=0.55\textwidth]{parallelogram.png} \caption{Parallelogram in subdivision dual to floor}\label{fig:parallelogram} \end{subfigure}{} \caption{Node germs giving a circuit of type D.} \label{fig:paralelo_nodes} \end{figure}{} Figure \ref{fig:paralelo_nodes} shows all node germs which lead to a parallelogram in the subdivision of the Newton polytope. If the node germ in a curve is dual to a parallelogram we have a picture as in Figure \ref{fig:parallelogram}. The right vertex of the floor of higher degree and the left vertex of the floor of lower degree form a bipyramid over the parallelogram as in Figure \ref{fig:bipyramid}. The alignment of the horizontal (resp. diagonal) end of the left (resp. right) string with a bounded horizontal (resp. diagonal) edge of a curve of higher (resp. lower) degree in the floor plan translates to the dual vertical (resp. diagonal) edges in the subdivisions forming a parallelogram as in Figure \ref{fig:circuitD}. Since the string passes through the two vertices bounding the horizontal (resp. diagonal) edge it aligns with, the dual polytope complex is a bipyramid over the parallelogram. The two top vertices of the pyramids are neighbors of the vertical bounded edge contained in the dual subdivision to the curve of higher (resp. lower) degree dual to the horizontal (resp. diagonal) bounded edge the string aligns with. Figure \ref{fig:pentatopeintersection} shows the alignment of a left string with a vertex not adjacent to a horizontal edge. The 5-valent vertex in this Figure is dual to a pentatope, circuit A, Figure \ref{fig:pentatope}. 
The analogous alignment of a right string with a vertex not adjacent to a diagonal edge is very rare in our setting, since we consider surfaces of degree 3 and a smooth conic contains no such vertex. The occurring cases in our count are due to node germs in the conic and lead not to a pentatope as in Figure \ref{fig:pentatope}, but to different complexes considered in Section \ref{sec:polytopes}. Figures \ref{fig:horizontal2}-\ref{fig:diagonal2} show the node germs coming from an undivided edge of length two in the subdivision, as shown in Figure \ref{fig:circuitE}. The node is contained in the dual 2-cell of the length two edge. Every intersection point of the weight two diagonal (resp. horizontal) end with the lower (resp. higher) degree curve of the floor plan can be selected to lift the node \cite{MaMaSh18}. This corresponds in the dual subdivision to choosing those three neighboring vertices which allow forming the polytope complex shown in Figure \ref{fig:weight2config}. With our chosen point condition the neighboring vertex in the dual subdivision of the floor containing the undivided edge is always one of the three neighboring vertices. If the length two edge is diagonal (resp. vertical) the other two vertices have to form a length one edge on the vertical (resp. diagonal) facet of the subdivision dual to the lower (resp. higher) degree curve of the floor plan. 
\begin{figure} \centering \begin{subfigure}{.3\textwidth} \includegraphics[width=0.9\textwidth]{pentatopeintersetion.png} \caption{intersection dual to a pentatope}\label{fig:pentatopeintersection} \end{subfigure}{}\quad \begin{subfigure}{.3\textwidth} \includegraphics[width=0.9\textwidth]{horizontalweight2.png} \caption{horizontal end of weight two}\label{fig:horizontal2} \end{subfigure}{}\quad \begin{subfigure}{.3\textwidth} \includegraphics[width=0.9\textwidth]{diagonalweight2.png} \caption{diagonal end of weight two}\label{fig:diagonal2} \end{subfigure}{} \caption{Node germs leading to circuits of type A and type E.} \label{fig:weight2ends} \end{figure}{} The complex lifting multiplicity of the node germs in the floors can be determined combinatorially using \cite{MaMaSh18}. \begin{dfn}[Definition 5.4, \cite{MaMaShSh19}]\label{def:multiplicities} Let $F$ be a $\delta$-nodal floor plan of degree $d$. For each node germ $C^{*}_{i_j}$ in $C_{i_j}$, we define the following local complex multiplicity $\text{mult}_{\mathbb{C}}(C^{*}_{i_j})$: \begin{enumerate} \item If $C^{*}_{i_j}$ is dual to a parallelogram, then $\text{mult}_{\mathbb{C}}(C^{*}_{i_j}) =2$. \item If $C^{*}_{i_j}$ is a horizontal end of weight two, then $\text{mult}_{\mathbb{C}}(C^{*}_{i_j}) = 2(i_j + 1)$. \item If $C^{*}_{i_j}$ is a diagonal end of weight two, then $\text{mult}_{\mathbb{C}}(C^{*}_{i_j}) = 2(i_j - 1)$. \item If $C^{*}_{i_j}$ is a left string, then $\text{mult}_{\mathbb{C}}(C^{*}_{i_j}) = 2$. \item If $C^{*}_{i_j}$ is a right string whose diagonal end aligns with a diagonal bounded edge, then $\text{mult}_{\mathbb{C}}(C^{*}_{i_j}) = 2$. \item If $C^{*}_{i_j}$ is a right string whose diagonal end aligns with a vertex not adjacent to a diagonal edge, then $\text{mult}_{\mathbb{C}}(C^{*}_{i_j})= 1$. 
\end{enumerate} The multiplicity of a $\delta$-nodal floor plan is $\text{mult}_{\mathbb{C}}(F) = \prod_{j=1}^{\delta}\text{mult}_{\mathbb{C}}(C^{*}_{i_j}).$ \end{dfn} To determine the real multiplicity, we have to fix the signs of the coordinates of the points in $\omega$, as they determine the existence of real solutions of the initial equations in \cite{MaMaSh18}. We fix the sign vector $s = ((+)^3)^n$. \begin{dfn}[\cite{MaMaShSh19}, Definition 5.7]\label{def:multiplicities_real} For a node germ $C^{*}_{i_j}$ in $C_{i_j}$, we define the local real multiplicity $\text{mult}_{\mathbb{R},s}(C^*_{i_j})$: \begin{enumerate} \item If $C^{*}_{i_j}$ is dual to a parallelogram, it depends on the position of the parallelogram in the Newton subdivision: \begin{itemize} \item if the vertices are $(k, 0)$, $(k, 1)$, $(k - 1, l)$ and $(k - 1, l + 1)$, then $$\text{mult}_{\mathbb{R},s}(C^*_{i_j})=\begin{cases} 2 \\ 0\end{cases} \hspace{-7pt}\text{if } (\frac{3}{2}i_j + 1+k+l)(i_j-1)\equiv \begin{cases}1 \\ 0 \end{cases} \hspace{-7pt}\text{modulo } 2$$ \item if the vertices are $(k, d-i_j -k)$, $(k, d-i_j -k -1)$, $(k +1, l)$ and $(k +1, l +1)$, then $$\text{mult}_{\mathbb{R},s}(C^*_{i_j})=\begin{cases} 2 \\ 0\end{cases} \hspace{-7pt}\text{if } \frac{1}{2}\cdot (i_j + 2+2l)(i_j-1)\equiv \begin{cases}1 \\ 0 \end{cases} \hspace{-7pt}\text{modulo } 2$$ \end{itemize} \item If $C^{*}_{i_j}$ is a diagonal end of weight two, then $\text{mult}_{\mathbb{R},s}(C^{*}_{i_j})= 2(i_j-1).$ \item If $C^{*}_{i_j}$ is a left string, then it depends on the position of the dual of the horizontal bounded edge of $C_{i_j+1}$ with which it aligns. Assume it has the vertices $(k, l)$ and $(k, l + 1)$.
Then $$\text{mult}_{\mathbb{R},s}(C^{*}_{i_j})=\begin{cases} 2 \\ 0\end{cases} \text{ if } i_j-k \equiv \begin{cases}0 \\ 1 \end{cases} \text{modulo } 2$$ \item If $C^{*}_{i_j}$ is a right string whose diagonal end aligns with a vertex not adjacent to a diagonal edge, then $\text{mult}_{\mathbb{R},s}(C^{*}_{i_j})= 1.$ \end{enumerate} \end{dfn} A tropical $\delta$-nodal surface $S$ of degree $d$ given by a $\delta$-nodal floor plan $F$ has at least $\text{mult}_{\mathbb{R},s}(F) = \prod_{j=1}^{\delta} \text{mult}_{\mathbb{R},s}(C^{*}_{i_j})$ real lifts satisfying the point conditions with sign vector $s=((+)^3)^n$ \cite[Proposition 5.8]{MaMaShSh19}. Several cases are left out of the above definition because the number of real solutions is hard to control. We address this in Section \ref{subsec:real_mult}. This is why we can only give a lower bound on the number of real binodal cubic surfaces whose tropicalization contains separated nodes. We now count surfaces from the floor plans defined in \cite[Proposition 5.8]{MaMaShSh19}, which have node germs in the linear and cubic floors. Since we adhere exactly to Definition \ref{def:floorplan}, the nodes will always be separated. \begin{prop} \label{prop:31} There are $20$ cubic surfaces containing two nodes such that there is one node germ in the cubic floor and one in the linear floor. Of these binodal surfaces, at least $16$ are real. \end{prop} \begin{proof} By Definition \ref{def:floorplan} a floor plan consists of a cubic curve $C_3$, a conic $C_2$, and a line $C_1$, where the tropical curves $C_3$ and $C_1$ contain node germs. Recall that the notation $C_i^*$ stands for the node germ in $C_i$. By Definition \ref{def:floorplan} (6) the node germ of $C_1$ is a left string as in Figure \ref{fig:31_line}, which always aligns with the horizontal bounded edge in $C_2$, so $\text{mult}_{\mathbb{C}}(C_1^*) = 2$.
The node germs in $C_3$ possible by Definition \ref{def:floorplan} (5) are depicted in Figures \ref{fig:31_1}-\ref{fig:31_3} and each one gives a different floor plan. \begin{enumerate} \item[(\ref{fig:31_1})] There is a right string in the cubic floor. In the smooth conic, there is no vertex which is not adjacent to a diagonal edge. So, the right string of the cubic must align with the diagonal bounded edge. This gives $\text{mult}_{\mathbb{C}}(F) = \text{mult}_{\mathbb{C}}(C_3^*) \cdot \text{mult}_{\mathbb{C}}(C_1^*) = 2 \cdot 2 = 4$. In this case, $\text{mult}_{\mathbb{R},s}(F)$ is undetermined, see Section \ref{subsec:real_mult}. \item[(\ref{fig:31_2}, \ref{fig:31_3})] The cubic has a weight two diagonal end. We have $2 \cdot \text{mult}_{\mathbb{C}}(F) = 2 \cdot \text{mult}_{\mathbb{C}}(C_3^*) \cdot \text{mult}_{\mathbb{C}}(C_1^*) = 2 \cdot(2(3-1) \cdot 2) = 16$. By Definition \ref{def:multiplicities_real} (3) the real multiplicity of the left string depends on the coordinates of the dual of the edge it aligns with: $(1,0)$ and $(1,1)$. This gives $2 \cdot \text{mult}_{\mathbb{R},s}(F) = 2 \cdot \text{mult}_{\mathbb{R},s}(C_3^*) \cdot \text{mult}_{\mathbb{R},s}(C_1^*) = 2 \cdot(2(3-1) \cdot 2) = 16$. \end{enumerate} \end{proof} Notice that having node germs separated by a floor only accounts for 20 of the 280 tropical cubic surfaces through our 17 points. As we will show, our extension of Definition \ref{def:floorplan} captures many more surfaces.
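For bookkeeping, the totals in Proposition \ref{prop:31} assemble as follows, where we write $N_{\mathbb{C}}$ and $N_{\mathbb{R}}$ for the complex and real counts; the real count is only a lower bound, since the real multiplicity of case (\ref{fig:31_1}) is undetermined:

```latex
\begin{align*}
N_{\mathbb{C}} &= \underbrace{2 \cdot 2}_{(\ref{fig:31_1})}
  + \underbrace{2 \cdot \bigl(2(3-1) \cdot 2\bigr)}_{(\ref{fig:31_2}),\,(\ref{fig:31_3})}
  = 4 + 16 = 20,\\
N_{\mathbb{R}} &\geq 0 + 16 = 16.
\end{align*}
```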
\begin{figure}[h] \centering \begin{subfigure}{.245\textwidth} \centering \includegraphics[height = 0.9 in]{_3,1__cubic_1.pdf} \caption{$\ $} \label{fig:31_1} \end{subfigure}% \begin{subfigure}{.245\textwidth} \centering \includegraphics[height = 0.9 in]{_3,1__cubic_2.pdf} \caption{$\ $} \label{fig:31_2} \end{subfigure}% \begin{subfigure}{.245\textwidth} \centering \includegraphics[height = 0.9 in]{_3,1__cubic_3.pdf} \caption{$\ $} \label{fig:31_3} \end{subfigure}% \begin{subfigure}{0.245\textwidth} \centering \includegraphics[height = 0.9 in]{smooth_cubic.pdf} \caption{$\ $}\label{fig:21_cubic} \end{subfigure}% \caption{The triangulation dual to a smooth cubic floor and the three possible subdivisions dual to a cubic tropical curve with one node germ.}\label{fig:31_triangulations} \end{figure} \begin{figure}[h] \centering \begin{subfigure}{.35\textwidth} \centering \includegraphics[width=.4\linewidth]{smooth_conic.pdf} \caption{A triangulation dual to a smooth conic} \label{fig:31_conic} \end{subfigure} \begin{subfigure}{.35\textwidth} \centering \includegraphics[width=.35\linewidth]{line.pdf} \caption{Left string line} \label{fig:31_line} \end{subfigure}% \caption{The triangulations dual to linear and conic curves appearing as part of a floor plan of a nodal cubic surface.} \label{fig:line_conic} \end{figure} \section{Nodes in adjacent floors} We now study cases where node germs are in adjacent floors of the floor plan, extending Definition \ref{def:floorplan}, and check that the nodes are separated. \begin{lemma}\label{lemma:weight2andbipyramid} If a floor plan contains a diagonal or horizontal weight two end and a second node germ leading to a bipyramid in the subdivision, such that the bipyramid does not contain the weight two end, the nodes are separated. \end{lemma} \begin{proof} The bipyramid and the weight two end share at most one vertex. The neighboring points of the weight two end can be part of the bipyramid.
This causes no obstructions to the conditions in \cite{MaMaSh18} for the existence of a binodal surface tropicalizing to this. \end{proof} \begin{lemma}\label{lemma:elimination} If a floor plan has separated nodes, $C_2$ cannot have a right string. \end{lemma} \begin{proof} By Definition \ref{def:floorplan} (4) a right string in $C_2$ would have to align with a diagonal bounded edge of $C_1$ or with a vertex of $C_1$ not adjacent to a diagonal edge. Since $C_1$ is a tropical line, neither case can occur. \end{proof} We now give the lemma used to eliminate cases with complexes in the Newton subdivision that cannot accommodate two nodes. \begin{lemma} \label{lem:elim_polytopes} Let $\Delta \subset \mathbb{Z}^3$ be finite, and let $B_\Delta$ be the variety of binodal hypersurfaces with defining polynomial having support $\Delta$. If the dimension of $B_\Delta$ is less than $|\Delta| - 3$, then any tropical surface whose dual subdivision consists of unimodular tetrahedra away from $\Delta$ is not the tropicalization of a complex binodal cubic surface. \end{lemma} \begin{proof} If a binodal cubic surface had such a triangulation and satisfied our point conditions, then we could obtain from it a binodal surface with support $\Delta$ satisfying $|\Delta|-3$ point conditions. Therefore, if the dimension of $B_\Delta$ is less than $|\Delta|-3$ we do not expect any such surfaces to satisfy $|\Delta|-3$ generic point conditions. \end{proof} \begin{prop} \label{prop:21} There are $24$ cubic surfaces containing two nodes such that the tropical cubic has two separated nodes and the corresponding node germs are contained in the conic and linear floors. Of these, at least $4$ are real. \end{prop} \begin{proof} Here a floor plan consists of a smooth cubic curve $C_3$ (see Figure~\ref{fig:21_cubic}), a conic $C_2$ and a line $C_1$, each with a node germ. The node germ of $C_1$ is by Definition \ref{def:floorplan} (6) a left string, see Figure \ref{fig:31_line}.
For $C_2$ all possibilities from Definition~\ref{def:nodegerms} are depicted in Figure \ref{fig:21_triangulations}. We examine all choices for the floor plan $F$ and check whether the nodes are separated. \begin{enumerate}[(A)] \item[(\ref{fig:21_1})-(\ref{fig:21_3})] By Definition \ref{def:floorplan} (3) the left string in $C_1$ must align with the horizontal bounded edge of $C_2$, which is dual to a face of the parallelogram in the subdivision. We obtain a prism polytope between the two floors, and by completion of the subdivision, we get two pyramids sitting over those two rectangle facets of the prism, that are not on the boundary of the Newton polytope. This complex may hold two nodes, see Section \ref{sec:polytopes}. \item[(\ref{fig:21_4})] By Definition \ref{def:floorplan} (3), the left string of $C_1$ aligns with the horizontal bounded edge of $C_2$, giving a bipyramid in the subdivision, with top vertices the neighbors to the dual of the bounded diagonal edge in $C_2$. The length two edge dual to the horizontal weight two end is surrounded by tetrahedra that only intersect the bipyramid in a face. So, the nodes are separated and we count their multiplicities: $\text{mult}_\mathbb{C}(F) = \text{mult}_\mathbb{C}(C_1^*) \cdot \text{mult}_\mathbb{C}(C_2^*) = 2 \cdot 2(2+1) = 12$. In this case, $\text{mult}_{\mathbb{R},s}(F)$ is undetermined, see Section \ref{subsec:real_mult}. \item[(\ref{fig:21_5})] The left string in $C_1$ must align with the vertex in $C_2$ not adjacent to a horizontal edge, but this vertex is dual to the area two triangle in the subdivision. The resulting volume two pentatope contains the neighbors of the length two edge and is by Lemma \ref{lem:elim_polytopes} not big enough to hold two nodes. \item[(\ref{fig:21_7})] The left strings in $C_1$ and $C_2$ lead to two bipyramids in the subdivision. For each of the 3 alignment possibilities of the left string in $C_2$, the resulting bipyramids are disjoint and the nodes separate. 
We get $3 \cdot \text{mult}_\mathbb{C}(F) = 3 \cdot \text{mult}_\mathbb{C}(C_1^*) \cdot \text{mult}_\mathbb{C}(C_2^*) = 3 \cdot (2\cdot 2) = 12$. By Definition \ref{def:multiplicities_real} (3) we need to consider the positions of the dual edges the left strings align with in order to compute the real multiplicities. The left string in $C_1$ aligns with the edge given by the vertices $(1,0),(1,1)$ in the conic floor, it has $\text{mult}_{\mathbb{R},s}(C_1^*) = 2$. For the conic, two of the 3 choices have $x$-coordinate $k=1$ in the cubic floor, so $\text{mult}_{\mathbb{R},s}(C_2^*) = 0$. The last alignment is dual to $x$-coordinate $k=2$, so we have $\text{mult}_{\mathbb{R},s}(C_2^*) = 2$. We obtain $\text{mult}_{\mathbb{R},s}(F) = 4$. \end{enumerate} \end{proof} \begin{figure}[h] \centering \begin{subfigure}{.14\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,1__conic_1.pdf} \caption{$ $} \label{fig:21_1} \end{subfigure}% \begin{subfigure}{.14\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,1__conic_3.pdf} \caption{$ $} \label{fig:21_2} \end{subfigure}% \begin{subfigure}{.14\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,1__conic_4.pdf} \caption{$ $} \label{fig:21_3} \end{subfigure}% \begin{subfigure}{.14\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,1__conic_5.pdf} \caption{$ $} \label{fig:21_4} \end{subfigure}% \begin{subfigure}{.14\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,1__conic_2.pdf} \caption{$ $} \label{fig:21_5} \end{subfigure} \begin{subfigure}{.14\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,1__conic_6.pdf} \caption{$ $} \label{fig:21_7} \end{subfigure}% \caption{The possible subdivisions dual to a tropical conic curve with one node germ appearing as part of a floor plan of a nodal cubic surface.} \label{fig:21_triangulations} \end{figure} \begin{prop} \label{prop:32} There are $90$ cubic surfaces containing two nodes such that the tropical binodal cubic has 
separated nodes and the node germs are contained in the cubic and conic floors. Of these, at least $34$ are real. \end{prop} \begin{proof} A floor plan consists of a cubic $C_3$ with a node germ (Figures \ref{fig:31_1}-\ref{fig:31_3}), a conic $C_2$ with a node germ (Figure \ref{fig:21_triangulations}), and a smooth line $C_1$. There are 18 combinations. \begin{itemize} \item[(\ref{fig:31_1}, \ref{fig:21_1}-\ref{fig:21_2})] The cubic contains a right string, which must align with a diagonal bounded edge by Definition \ref{def:floorplan} (4). The resulting subdivision contains a triangular prism with two pyramids. This complex may contain two nodes, see Section \ref{sec:polytopes}. \item[(\ref{fig:31_1}, \ref{fig:21_3})] The right string in the cubic must align with the vertex of the conic dual to the square in the subdivision, giving rise to the polytope complex shown in Section \ref{sec:polytopes}. \item[(\ref{fig:31_1}, \ref{fig:21_4})] The right string in the cubic must align with the vertex dual to the left triangle in the conic containing the weight two edge. The resulting complex may hold 2 nodes, see Section \ref{sec:polytopes}. \item[(\ref{fig:31_1}, \ref{fig:21_5})] The resulting subdivision contains a bipyramid and a weight two configuration only overlapping in vertices, so the nodes are separated. We have $\text{mult}_{\mathbb{C}}(F) = \text{mult}_{\mathbb{C}}(C_3^*)\cdot \text{mult}_{\mathbb{C}}(C_2^*) = 2 \cdot 2(2-1) = 4$. In this case, $\text{mult}_{\mathbb{R},s}(F)$ is undetermined, see Section \ref{subsec:real_mult}. \item[(\ref{fig:31_1}, \ref{fig:21_7})] The left string in $C_2$ has to align with a horizontal bounded edge of $C_3$ by Definition \ref{def:floorplan} (4). There are 3 possibilities. If it aligns with the bounded edge adjacent to the right string in the cubic, we obtain a prism with two pyramids as in (\ref{fig:31_1}, \ref{fig:21_1}). See Section \ref{sec:polytopes}. 
If it aligns with either of the other two horizontal bounded edges, we obtain two bipyramids in the dual subdivision. Because the diagonal bounded edge of $C_2$ is part of the left string aligning with a horizontal bounded edge not adjacent to the right string of $C_3$, we cannot align the right string with the diagonal edge in such a way that the end of the right string contains the whole horizontal bounded edge of $C_2$. Instead the end meets the bounded edge somewhere in the middle and passes only through one vertex. Therefore, in the subdivision the second pyramid over the alignment parallelogram must have its vertex in $C_3$ instead of in $C_2$, see Figure \ref{fig:fig1a3g}. In total, we get two bipyramids that only share an edge, so the node germs are separated. We have $2 \cdot \text{mult}_{\mathbb{C}}(F) = 2 \cdot \text{mult}_{\mathbb{C}}(C_3^*)\cdot \text{mult}_{\mathbb{C}}(C_2^*) = 2 \cdot (2 \cdot 2) = 8$. In these two cases, the edge the string aligns with has $x$-coordinate $k=1$ in the cubic floor and thus by Definition \ref{def:multiplicities_real} they both give $\text{mult}_{\mathbb{R},s}(F) = 0$. \begin{figure} \centering \includegraphics[height = 2 in]{case__1a3g_.pdf} \caption{The two bipyramids for one alignment of (\ref{fig:31_1}, \ref{fig:21_7}). The gray (resp. black) dots are the lattice points of the dual polytope to $C_3$ (resp. $C_2$). The shared edge of the bipyramids is marked blue and red.} \label{fig:fig1a3g} \end{figure} \item[(\ref{fig:31_2}, \ref{fig:21_1}-\ref{fig:21_2})] We obtain a bipyramid only overlapping with the configuration of the weight two end in vertices or edges. So the nodes are separated and $2\cdot\text{mult}_{\mathbb{C}}(F) = 2\cdot\text{mult}_{\mathbb{C}}(C_3^*)\cdot \text{mult}_{\mathbb{C}}(C_2^*) =2( 2(3-1) \cdot 2 )= 2\cdot 8$. The parallelogram has vertices as in the first case of Definition \ref{def:multiplicities_real} (1) with $k=1,l=1$ and $i_j=2$, so $\text{mult}_{\mathbb{R},s}(F) = 0$.
\item[(\ref{fig:31_2}, \ref{fig:21_3})] As in (\ref{fig:31_2}, \ref{fig:21_1}) we have $\text{mult}_{\mathbb{C}}(F) = 8$. For the real multiplicity we need the vertices of the parallelogram. They are as in the first case of Definition \ref{def:multiplicities_real} (1) with $k=1,l=0$ and $i_j=2$, so $\text{mult}_{\mathbb{R},s}(C_2^*) = 2$. The weight $2$ end in $C_3$ has $\text{mult}_{\mathbb{R},s}(C_3^*) = 4$, so $\text{mult}_{\mathbb{R},s}(F) = 8$. \item[(\ref{fig:31_2}-\ref{fig:31_3}, \ref{fig:21_4})] This subdivision contains a tetrahedron which is the convex hull of both weight two ends. We also need a choice of the neighboring points of the two weight two edges. Because of their special position relative to each other, it only remains to add the two vertices neighboring the edges in the respective subdivisions dual to their floors. Whether it can contain two nodes is so far undetermined, see Section \ref{sec:polytopes}. \item[(\ref{fig:31_2}-\ref{fig:31_3}, \ref{fig:21_5})] The nodes are separated, since the weight two ends with any choice of their neighboring points intersect in one vertex. So $2\cdot\text{mult}_{\mathbb{C}}(F) = 2\cdot\text{mult}_{\mathbb{C}}(C_3^*)\cdot \text{mult}_{\mathbb{C}}(C_2^*) = 2\cdot(2(3-1) \cdot 2(2-1) )=2\cdot 8$ and $2\cdot\text{mult}_{\mathbb{R},s}(F) = 2\cdot\text{mult}_{\mathbb{R},s}(C_3^*)\cdot \text{mult}_{\mathbb{R},s}(C_2^*) =2\cdot( 2(3-1) \cdot 2(2-1)) = 2\cdot8$. \item[(\ref{fig:31_2}, \ref{fig:21_7})] There are two possibilities to align the left string in $C_2$ with a horizontal bounded edge in $C_3$. If we select the left edge, we have a bipyramid, which does not contain the weight two end. By Lemma \ref{lemma:weight2andbipyramid} the nodes are separated. However, we need to adjust the multiplicity formula from Definition \ref{def:multiplicities} (3) to this case, because due to the alignment of the left string we obtain one fewer intersection point of the diagonal end of weight two with $C_2$.
So instead of $3-1=2$ intersection points to choose from when lifting the node, we have $3-2=1$. Thus, we obtain $\text{mult}_{\mathbb{C}}(F) = \text{mult}_{\mathbb{C}}(C_3^*)\cdot \text{mult}_{\mathbb{C}}(C_2^*) = 2(3-2) \cdot 2 = 4$. Since the left edge has $x$-coordinate $k=1$, we obtain $\text{mult}_{\mathbb{R},s}(F)=0.$ If we select the right edge, then the bipyramid contains the weight two end. See Section \ref{sec:polytopes}. As the cubic floor contains a vertex of $C_3$ not adjacent to a horizontal edge, it is also possible to align the left string with this vertex. In the dual subdivision this gives rise to a pentatope spanned by the triangle dual to the vertex in $C_3$ and the vertical edge in the conic floor dual to the horizontal end of the left string, see Figure \ref{fig:pentatope}. The nodes dual to the length two edge and the pentatope are separated. By \cite{MaMaSh18} we have $\text{mult}_{\mathbb{C}}(C_2^*)=\text{mult}_{\mathbb{R},s}(C_2^*)=1.$ We count: $\text{mult}_{\mathbb{C}}(F) = \text{mult}_{\mathbb{C}}(C_3^*)\cdot \text{mult}_{\mathbb{C}}(C_2^*) = 2(3-2) \cdot 1 = 2$ and $\text{mult}_{\mathbb{R},s}(F)=\text{mult}_{\mathbb{R},s}(C_3^*)\cdot \text{mult}_{\mathbb{R},s}(C_2^*) =2(3-2)\cdot 1=2.$ \item[(\ref{fig:31_3}, \ref{fig:21_1}-\ref{fig:21_2})] We obtain a bipyramid overlapping with the weight two configuration in one or two vertices, so the nodes are separated and $2\cdot\text{mult}_{\mathbb{C}}(F) = 2\cdot\text{mult}_{\mathbb{C}}(C_3^*)\cdot \text{mult}_{\mathbb{C}}(C_2^*) = 2(2(3-1) \cdot 2(2-1)) = 2\cdot8$. With the same parallelogram as in $(\ref{fig:31_2},\ref{fig:21_1})$: $\text{mult}_{\mathbb{R},s}(F) = 0$. \item[(\ref{fig:31_3}, \ref{fig:21_3})] This follows (\ref{fig:31_3}, \ref{fig:21_1}), and we have $\text{mult}_{\mathbb{C}}(F)=8$. The real multiplicity follows (\ref{fig:31_2}, \ref{fig:21_3}), and we have $\text{mult}_{\mathbb{R},s}(F) = 8$.
\item[(\ref{fig:31_3}, \ref{fig:21_7})] For each of the two choices for the alignment of the left string of the conic with a horizontal bounded edge of the cubic, we obtain a bipyramid which may share two vertices with the neighbors of the edge of weight two. As in (\ref{fig:31_2}, \ref{fig:21_7}) we need to adjust the multiplicity formula for the weight two end to $\text{mult}_{\mathbb{C}}(C_3^*)=2(3-2)=2$. We have $2 \cdot \text{mult}_{\mathbb{C}}(F)=2 \cdot 4$. For both alignments the dual edges have $x$-coordinate $k=1$ in the cubic floor, giving $\text{mult}_{\mathbb{R},s}(F) = 0$. \\ As $C_3$ also contains a vertex not adjacent to a horizontal edge, this opens a third alignment possibility. However, this vertex is adjacent to the weight two end, so the nodes are not separated. The polytope complex can be seen in Figure \ref{fig:complexes}. \end{itemize} \end{proof} \section{Nodes in the same floor} We now examine cases where both node germs are in the same floor of the floor plan. By Lemma \ref{lemma:elimination} we cannot have a right string in the conic part of the floor plan, if the nodes are separated. A few more cases, depicted in Figure \ref{fig:elim_22}, can be eliminated with the following Lemma \ref{lem:elim_22}. \begin{lemma}\label{lem:elim_22} The ways of omitting $2$ points in the floor path in the conic floor shown in Figure \ref{fig:elim_22} do not give separated nodes.
\end{lemma} \begin{proof} \begin{figure}[h] \centering \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__3.pdf} \caption{}\label{fig:elim_22_a} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__8.pdf} \caption{}\label{fig:elim_22_b} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__10.pdf} \caption{}\label{fig:elim_22_c} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__12.pdf} \caption{}\label{fig:elim_22_d} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__7.pdf} \caption{}\label{fig:elim_22_e} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__11.pdf} \caption{}\label{fig:elim_22_f} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.9\linewidth]{_22__a1.pdf} \caption{}\label{fig:elim_22_a'} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.9\linewidth]{_22__c1.pdf} \caption{}\label{fig:elim_22_c'} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.9\linewidth]{_22__e1.pdf} \caption{}\label{fig:elim_22_e'} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.9\linewidth]{_22__e2.pdf} \caption{}\label{fig:elim_22_e''} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.9\linewidth]{_22__f1.pdf} \caption{}\label{fig:elim_22_f'} \end{subfigure} \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.9\linewidth]{_22__f2.pdf} \caption{}\label{fig:elim_22_f''} \end{subfigure} \caption{Conics through 3 points eliminated by Lemma \ref{lem:elim_22}.} \label{fig:elim_22} \end{figure} If the conic in a floor plan has two node germs, it passes only through $3$ points of the point configuration. 
In order to fix our cubic surface, every point we omit in the lattice path of the conic floor needs to compensate for the omitted point condition on our cubic surface. A vertical weight two end does allow our conic to be fixed by fewer points. But our point configuration ensures the end has no interaction with the other floors and thus cannot give rise to a node-encoding circuit as in Figure \ref{fig:circuits}. So, combined with a classical node germ this does not encode two separated nodes, which deals with \ref{fig:elim_22_a}, \ref{fig:elim_22_b}, \ref{fig:elim_22_c} and \ref{fig:elim_22_d}. If the top vertex of the Newton polytope of $C_2$ is omitted in the floor path, we always obtain an upwards string. If the upwards string is pulled vertically upwards, it can never align with any part of the other floors and thus does not fix the curve, eliminating \ref{fig:elim_22_c'}, \ref{fig:elim_22_e''} and \ref{fig:elim_22_f''}. If the direction to pull the upwards string has some slope, as in \ref{fig:elim_22_e} and \ref{fig:elim_22_f}, or in the 2-dimensional strings in \ref{fig:elim_22_e'} and \ref{fig:elim_22_f'}, we still cannot align with any bounded edges of the cubic, since we are above the line through the points. In \ref{fig:elim_22_a'}, on the other hand, we can align the vertical end of the string, but since we have two degrees of freedom this does not fix the curve, as we can still move the first vertical end.\end{proof} \begin{rem}\label{rem:endalignments} The last issue in the proof of Lemma \ref{lem:elim_22} can be fixed, if we allow alignments with ends. These, however, do not give rise to separated nodes \cite{MaMaSh18}. Therefore the cases \ref{fig:elim_22_a}, \ref{fig:elim_22_e}, \ref{fig:elim_22_f}, \ref{fig:elim_22_a'}, \ref{fig:elim_22_e'} and \ref{fig:elim_22_f'} require further investigation, see Section \ref{sec:polytopes}. In this light, the exclusion of right strings in the conic floor (Lemma \ref{lemma:elimination}) also needs to be revisited.
\end{rem} \begin{prop} \label{prop:22} There are $72$ cubic surfaces containing two nodes, such that the tropical binodal cubic has separated nodes and the corresponding node germs are both contained in the conic floor. Of these, at least $4$ are real. \end{prop} \begin{proof} See Figure \ref{fig:22_cases}. \begin{enumerate} \item[(\ref{fig:22_1})] Since the end of the left string, which aligns with a bounded horizontal edge of the cubic, is of weight two, we obtain a bipyramid over a trapezoid. We get two different complexes depending on the alignment, see Section \ref{sec:polytopes}. \item[(\ref{fig:22_1b})] We have a string with two degrees of freedom, because we can pull on both horizontal ends and vary their distance. Hence, we can align them both with the horizontal bounded edges of the cubic. There are three ways to do this. In the dual subdivisions this gives rise to two bipyramids. In all three cases they intersect maximally in two facets, and thus are separated. Since the bipyramids do not arise from classical node germs, we check their multiplicities via the underlying circuit. By \cite[Lemma 4.8]{MaMaSh18} we obtain multiplicity 2 for each, and thus $3\cdot\text{mult}_{\mathbb{C}}(F)=3\cdot \text{mult}_{\mathbb{C}}(C_2^*)=3\cdot 2\cdot2 =12.$ We get $\text{mult}_{\mathbb{R},s}(F)=0,$ since one end has to align with a bounded edge in $C_3$ with dual edge of $x$-coordinate $k=1$. \item[(\ref{fig:22_2}-\ref{fig:22_4})] The conic floor has a left string and a parallelogram. This gives two bipyramids in the subdivision which, depending on the choice of alignment for the left string, intersect at most in an edge. We obtain $2\cdot(3 \cdot \text{mult}_{\mathbb{C}}(F)) =2\cdot 12$. The vertex positions of the parallelogram give $\text{mult}_{\mathbb{R},s}(F)=0$ as in Proposition \ref{prop:21} (\ref{fig:21_1}). \item[(\ref{fig:22_5})] As in (\ref{fig:22_2}), we obtain $3 \cdot \text{mult}_{\mathbb{C}}(F) = 12$.
The formulas for real multiplicities in Definition \ref{def:multiplicities_real} do not match this case, see Section \ref{subsec:real_mult}. \item[(\ref{fig:22_6})] The bipyramids arising from the different alignment options only intersect with the neighboring points of the weight two end in one vertex, so $3 \cdot \text{mult}_{\mathbb{C}}(F) = 12$. Only the alignment with the horizontal bounded edge of $C_3$ dual to the vertical edge of $x$-coordinate $k=2$ has non-zero real multiplicity, giving $\text{mult}_{\mathbb{R},s}(F)=4$. \item[(\ref{fig:22_9})] The two sets of neighboring points to the two weight two ends intersect in one vertex. So the nodes are separated and $\text{mult}_{\mathbb{C}}(F) =6\cdot 2 = 12,$ while $\text{mult}_{\mathbb{R},s}(F)$ is undetermined, see Section \ref{subsec:real_mult}. \end{enumerate} \end{proof} \begin{figure}[h] \centering \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__1.pdf} \caption{$\ $} \label{fig:22_1} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__1b.pdf} \caption{$\ $} \label{fig:22_1b} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__2.pdf} \caption{$\ $} \label{fig:22_2} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__4.pdf} \caption{$\ $} \label{fig:22_4} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__5.pdf} \caption{$\ $} \label{fig:22_5} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__6.pdf} \caption{$\ $} \label{fig:22_6} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=.9\linewidth]{_2,2__9.pdf} \caption{$\ $} \label{fig:22_9} \end{subfigure} \caption{Dual subdivisions of conics with two node germs.} \label{fig:22_cases} \end{figure} \begin{prop} \label{prop:33} There 
are $8$ cubic surfaces containing two nodes, such that the tropical binodal cubic has separated nodes and the corresponding node germs are both contained in the cubic floor. \end{prop} So far the number of real surfaces is undetermined. \begin{proof} Only two types of node germs may occur in $C_3$, see Figure \ref{fig:33}. \begin{enumerate} \item[(\ref{fig:33_1})] Since the weight two end is not contained in the bipyramid, the two nodes are separated by Lemma \ref{lemma:weight2andbipyramid}, giving $\text{mult}_{\mathbb{C}}(F) = 2\cdot 4=8$. In this case, $\text{mult}_{\mathbb{R},s}(F)$ is undetermined, see Section \ref{subsec:real_mult}. \item[(\ref{fig:33_4})] The classical alignment condition of the right string with diagonal end of weight two cannot be satisfied, since the direction vector of the variable edge has too steep a slope. Due to the point conditions the diagonal end of weight two and the diagonal bounded edge of the conic curve never meet. \item[(\ref{fig:33_2})] Here we have a two-dimensional string. By the same argument as in (\ref{fig:33_4}) we cannot align the middle diagonal end with the diagonal bounded edge of the conic. Aligning the right string with the diagonal bounded edge of the conic does not fix our floor plan, since we can still move the middle diagonal end of the cubic. \item[(\ref{fig:33_3})] We have three tetrahedra in the subdivision containing the weight three edge. This complex could contain two nodes, see Section \ref{sec:polytopes}. \end{enumerate} In (\ref{fig:33_4}), (\ref{fig:33_2}) alignments with ends are an option, see Section \ref{sec:polytopes}.
\end{proof} \begin{figure}[h] \centering \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=.9\linewidth]{_3,3__1.pdf} \caption{$\ $} \label{fig:33_1} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=.9\linewidth]{_3,3__4.pdf} \caption{$\ $} \label{fig:33_4} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=.9\linewidth]{_3,3__2.pdf} \caption{$\ $} \label{fig:33_2} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[width=.9\linewidth]{_3,3__3.pdf} \caption{$\ $} \label{fig:33_3} \end{subfigure} \caption{Cubics with two node germs.} \label{fig:33} \end{figure} \section{Next steps} \subsection{Dual complexes of unseparated nodes} \label{sec:polytopes} In previous sections, we encountered cases where two distinct node germs did not give rise to separated nodes. The dual complexes arising from these cases are shown in Figure \ref{fig:complexes}. We also encountered floor plans that do not give separated nodes in Figure \ref{fig:elim_22} and in the proof of Proposition \ref{prop:33}. Under new alignment conditions, they might encode unseparated nodes, see Remark \ref{rem:endalignments}. Alignment with ends is not allowed for separated nodes because circuit D in Figure \ref{fig:circuitD} is then contained in the boundary of the Newton polytope and cannot encode a single node \cite{MaMaSh18}. However, our cases have one more degree of freedom and thus might allow not only the alignment of two ends, but also the alignment of the vertices the ends are adjacent to. This leads to a triangular prism shape in the subdivision, which has at least one parallelogram shaped facet in the interior of the Newton polytope. At this time, we do not yet know whether any of these cases can contain two nodes or with what multiplicity they should be counted, but in total they ought to give the 66 missing surfaces from our count.
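For reference, the separated-node counts established in Propositions \ref{prop:31}, \ref{prop:21}, \ref{prop:32}, \ref{prop:22} and \ref{prop:33} combine to

```latex
\[
  20 + 24 + 90 + 72 + 8 = 214 = 280 - 66,
\]
```

so the unseparated cases above must account for exactly the $66$ missing surfaces.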
\begin{figure}[h] \centering \begin{subfigure}{.42\textwidth} \centering \includegraphics[height = 1.2 in]{complex_4a.pdf} \caption*{(\ref{fig:21_1}-\ref{fig:21_3}), (\ref{fig:31_1}, \ref{fig:21_1}), (\ref{fig:31_1}, \ref{fig:21_2}), (\ref{fig:31_1}, \ref{fig:21_7})} \end{subfigure} \begin{subfigure}{.26\textwidth} \centering \includegraphics[height = 1.2 in]{4a6d.pdf} \caption*{(\ref{fig:31_1}, \ref{fig:21_4})} \end{subfigure} \begin{subfigure}{.22\textwidth} \centering \includegraphics[height = 1.2 in]{2b4d.pdf} \caption*{(\ref{fig:31_2}, \ref{fig:21_4}), (\ref{fig:31_3}, \ref{fig:21_4})} \end{subfigure} \begin{subfigure}{.35\textwidth} \centering \includegraphics[height = 1.2 in]{complex_2a4b.pdf} \caption*{(\ref{fig:31_1}, \ref{fig:21_3})} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[height = 1.2 in]{complex_9b4f.pdf} \caption*{(\ref{fig:31_2}, \ref{fig:21_7})} \end{subfigure} \begin{subfigure}{.26\textwidth} \centering \includegraphics[height = 1.2 in]{complex_4c,6f.png} \caption*{(\ref{fig:31_3},\ref{fig:21_7})} \end{subfigure} \begin{subfigure}{.26\textwidth} \centering \includegraphics[height = 1.2 in]{complex_9a1.pdf} \caption*{(\ref{fig:22_1})} \end{subfigure} \begin{subfigure}{.24\textwidth} \centering \includegraphics[height = 1.2 in]{complex_9a2alignment.png} \caption*{(\ref{fig:22_1})} \end{subfigure} \begin{subfigure}{.42\textwidth} \centering \includegraphics[height = 1.2 in]{complex_9c.pdf} \caption*{(\ref{fig:33_3})} \end{subfigure} \caption{Complexes whose duals could have two nodes.} \label{fig:complexes} \end{figure} \subsection{Undetermined real multiplicities} \label{subsec:real_mult} In the previous sections, we encountered cases in which the real multiplicity was undefined. 
This happens when $C^{*}_{i_j}$ is the midpoint of an edge of weight two, when $C^{*}_{i_j}$ is a horizontal edge of weight two ((\ref{fig:21_4}) and (\ref{fig:22_9})), or when $C^{*}_{i_j}$ is a right string whose diagonal end aligns with a diagonal bounded edge ((\ref{fig:31_1}), (\ref{fig:31_1}, \ref{fig:21_5}), (\ref{fig:22_5}), and (\ref{fig:33_1})). There might be real lifts satisfying the point conditions coming from floor plans containing these node germs, but the number of real solutions is hard to control. An investigation of these cases is beyond the scope of this paper, so we leave Theorem \ref{thm:56} as a lower bound under these assumptions. We may compute the real multiplicity of (\ref{fig:22_5}), as well as of right strings aligning with diagonal bounded edges, as follows. Shift the parallelogram to a special position used to prove \cite[Lemma 4.8]{MaMaSh18}. The equations of the proof of \cite[Lemma 4.8]{MaMaSh18}, applied to our exact example, then need to be checked for the existence of real solutions.
\section{Introduction} Quandle homology \cite{CJKLS} was defined from rack homology \cite{FRS1} as the quotient by a subcomplex corresponding to the idempotency, for invariance under the type I Reidemeister move. Similar subcomplexes have been considered for various identities of racks and moves on diagrams. Typically, a certain change of knot diagrams requires a condition that quandle $2$-cocycles must satisfy in order to obtain the desired cocycle invariant, and the condition leads to a subcomplex. For example, for defining quandle cocycle invariants for unoriented knots, a good involution was defined in \cite{KO}, and a corresponding condition for cocycles and a subcomplex were defined. For a telephone cord move for racks, a condition for rack $2$-cocycles and a subcomplex were defined in \cite{EN}. Certain moves on handlebody-links were considered in \cite{CIST,IIJO}, and corresponding subcomplexes were defined. In this paper, we give a construction of a $2$-cycle $L$ from a given identity of quandles in a certain form, show that the abelian extension with a $2$-cocycle $\phi$ such that $\phi(L)=0$ inherits the identity, and construct a subcomplex from the identity. The homology and cohomology of these subcomplexes remain to be investigated. Preliminary material and definitions are provided in Section~\ref{sec:prelim}, and an outline of the method is explained using type 3 racks in Section~\ref{sec:type3}. The main results are presented in Section~\ref{sec:main} with proofs, and applied to type $n$ quandles and identities similar to the Burnside relations. In Section~\ref{sec:id}, specific identities are examined among small connected quandles, and a large class of Alexander quandles is given that satisfies many identities. \section{Preliminary} \label{sec:prelim} In this section, we provide preliminary material, definitions and notation. More details can be found, for example, in \cite{CKS,FR}. 
A {\it rack} $X$ is a non-empty set with a binary operation $(a, b) \mapsto a * b$ satisfying the following conditions. \medskip {\rm (1)} \ For any $b \in X$, the map $R_b: X \rightarrow X$ defined by $R_b(a)=a*b$ for $a \in X$ is a bijection. {\rm (2)} \ For any $a,b,c \in X$, we have $ (a*b)*c=(a*c)*(b*c). $ \medskip The map $R_b$ in the first axiom is called the {\it right translation by $b$}. By the axioms, $R_b$ is a {\it rack isomorphism}. A {\it quandle } $X$ is a rack with idempotency: $a*a=a$ for any $a \in X$. A {\it quandle homomorphism} between two quandles $X, Y$ is a map $f: X \rightarrow Y$ such that $f(x*_X y)=f(x) *_Y f(y) $, where $*_X$ and $*_Y$ denote the quandle operations of $X$ and $Y$, respectively. A {\it generalized Alexander quandle} is defined by a pair $(G, f)$ where $G$ is a group, $f \in {\rm Aut}(G)$, and the quandle operation is defined by $x*y=f(xy^{-1}) y $. If $G$ is abelian, this is called an {\it Alexander quandle}. Let $X$ be a rack. For brevity we sometimes omit $*$ and parentheses, so that for $x_i \in X$, $x_1x_2=x_1 * x_2$, $x_1 x_2 x_3 = (x_1 x_2)x_3$, and inductively, $x_1 \cdots x_{k-1} x_k=(x_1 \cdots x_{k-1} ) x_k$. We also use the notation $x*^n y = x*y *\cdots * y $ where $y$ is repeated $n$ times. A rack $X$ is said to be of {\it type $n$} (cf.~\cite{Joyce2}) if $n$ is the least positive integer such that $x *^n y=x$ holds for all $x, y \in X$, and we write ${\rm type}(X)=n$. A type 1 quandle is called {\it trivial}, and a type 2 quandle is called a {\it kei} or an {\it involutory } quandle. The subgroup of ${\rm Sym}(X)$ generated by the permutations ${ R}_a$, $a \in X$, is called the {\it inner automorphism group} of $X$, and is denoted by ${\rm Inn}(X)$. A rack $X$ is {\it connected} if ${\rm Inn}(X)$ acts transitively on $X$. The rack chain group $C_n(X)=C^R_n(X)$ for a rack $X$ is defined to be the free abelian group generated by $n$-tuples $(x_1, \ldots, x_n)$, $x_i \in X$ for $i=1, \ldots, n$. 
Let $d_h^{(n)}, \delta _h^{(n)} : C_n(X) \rightarrow C_{n-1}(X)$ be defined by \begin{eqnarray*} d_h ^{(n)} (x_1, \ldots, x_h, \ldots, x_n) &=& (x_1, \ldots, \widehat{x_h}, \ldots, x_n), \\ \delta _h^{(n)} (x_1, \ldots, x_h, \ldots, x_n) &=& (x_1 * x_h , \ldots, x_{h-1} * x_h , \widehat{x_h}, \ldots, x_n), \end{eqnarray*} respectively, where $\hat{ \ } $ denotes deleting the entry. Then the boundary map is defined by $\partial_n=\sum_{h=2}^{n} (-1)^h [ d_h^{(n)} - \delta _h^{(n)} ] $. The degeneracy subcomplex $C^D(X)$ was defined \cite{CJKLS} for a quandle $X$ as the subcomplex generated by the terms $(x_i)_{i=1}^n \in C_n(X) $ with $x_j=x_{j+1}$ for some $j=1, \ldots, n-1$, and the quotient complex $\{ C^Q_n(X)=C^R_n(X) / C^D_n(X), \partial_n \}$ was defined \cite{CJKLS} as the quandle homology. The corresponding $2$-cocycle is formulated as follows. A quandle $2$-cocycle is regarded as a function $\phi: X \times X \rightarrow A$ for an abelian group $A$ that satisfies $$ \phi (x, y)- \phi(x,z)+ \phi(x*y, z) - \phi(x*z, y*z)=0$$ for any $x,y,z \in X$ and $\phi(x,x)=0$ for any $x\in X$. For a quandle $2$-cocycle $\phi$, $E=X \times A$ becomes a quandle by \[ (x, a) * (y, b)=(x*y, a+\phi(x,y)) \] for $x, y \in X$, $a,b \in A$, denoted by $E(X, A, \phi)$ or simply $E(X, A)$, and it is called an \emph{abelian extension} of $X$ by $A$. The second factor, in this case, is written in the additive notation of $A$. See \cite{CENS,CKS} for more details. Computations using \textsf{GAP}~\cite{Leandro} significantly expanded the list of small connected quandles. These quandles, called {\it Rig} quandles, may be found in the \textsf{GAP}~package Rig \cite{rig}. Rig includes all connected quandles of order less than 48, at this time. Properties of some of the Rig quandles, such as homology groups and cocycle invariants, are also found in \cite{rig}. We use the notation $Q(n,i)$ for the $i$-th quandle of order $n$ in the list of Rig quandles, denoted in \cite{rig} by {\sf SmallQuandle}$(n,i)$. 
Note, however, that in \cite{rig} quandles are left distributive, so that as a matrix, $Q(n,i)$ is the transpose of the quandle matrix {\sf SmallQuandle}$(n,i)$ in \cite{rig}. \section{Type $3$ quandles}\label{sec:type3} Before presenting the main theorem and proof, we describe the properties of rack identities through the example of type $3$ quandles in this section. We note that a subcomplex for type 2 quandles, or keis, is defined in \cite{KO} as a special case of their subcomplex. Recall that a rack $X$ is of type 3 if it satisfies the identity $S$: $x*y*y*y=x$ for any $x, y \in X$. We observe the following three properties. (i) From this identity $S$ we form a $2$-chain $$L=L_S=(x, y)+(x * y, y) + (x*y*y, y).$$ It is checked that $L$ is a $2$-cycle: $$\partial (L)=[\ (x) - (x * y) \ ] + [ \ (x * y) - (x * y * y ) \ ] + [ \ (x * y*y ) - (x * y * y *y) \ ] = 0, $$ using the identity $S$. (ii) Let $\phi \in Z^2_R(X, A)$ be a rack $2$-cocycle with the coefficient abelian group $A$ such that $\phi(L)=0$. Then for $E(X,A,\phi)=X \times A$, one computes \begin{eqnarray*} \lefteqn{(x, a)*(y,b)*(y,b)*(y,b)}\\ &=& (x*y, a+\phi(x,y) )*(y,b)*(y,b) \\ &=& (x*y*y*y, a+\phi(x,y) +\phi(x*y,y)+ \phi(x*y*y,y) ) \\ &=& (x,a). \end{eqnarray*} Hence $E(X,A,\phi)$ is of type 3. (iii) Define, for each $n$, a subgroup $C^S_n(X) \subset C_n(X)$ generated by \begin{eqnarray*} \bigcup_{j=1}^{n-1} \{ \ & & (x_1, \ldots, x_j, y, x_{j+2}, \ldots, x_n) \\ &+& (x_1 * y , \ldots, x_j*y, y, x_{j+2} , \ldots, x_n) \\ &+& (x_1 * y *y , \ldots, x_j*y * y , y, x_{j+2} , \ldots, x_n) \ \\ & & | \ x_i, y \in X, \, i=1, \ldots, \widehat{j+1}, \ldots, n \ \} . \end{eqnarray*} For a fixed $j$, $y$ is positioned at the $(j+1)$-th entry. Then $\{ C_n^S, \partial_n \}$ is a subcomplex, which will be proved in the general case in Section~\ref{sec:main}. In this section, to illustrate the idea of the proof, we compute the image under the boundary map for a specific generator of $C^S_4(X)$. 
We simplify the notation and use $(1,2,3,4)$ for $(x_1, x_2, x_3, x_4)$, $12$ for $x_1*x_2$, etc. Let $c=(1,2,3,4) + (13, 23, 3, 4) + (133, 233, 3, 4)$, which is a generator for the case $n=4$ and $j=2$. We compute $\partial_4 (c)$ and show that the image is in $C_3^S(X)$. First we compute $$d_2^{(4)} (c) = (1,3,4)+(13,3,4)+(133,3,4) $$ which is a generator of $C_3^S(X)$. Then we have $$ \delta_2^{(4)}(c) = (12,3,4) + ((13)(23), 3,4) + ((133)(233), 3,4) . $$ One computes that $(13)(23)=(12)3$, and $$(133)(233)=[ ( 13) 3 ] [ ( 23 ) 3 ] = [ ( 13) (23) ] 3 = [ (12) 3 ] 3, $$ so that $\delta_2^{(4)}(c)= (12, 3, 4) + (123,3,4) + (1233,3,4), $ which is a generator of $C_3^S(X)$. We also compute \begin{eqnarray*} [ d_3^{(4)} - \delta_3^{(4)}] (c) &=& [ (1,2,4) - (13,23,4)] + [(13,23,4)-(133,233,4) ] \\ & & \quad +\ [ (133,233,4)- (1,2,4)]\ =\ 0, \end{eqnarray*} in this case. Next we have $d_4^{(4)} (c)=(1,2,3)+(13,23,3) + (133, 233, 3)$ which is a generator. Finally we have \begin{eqnarray*} \delta_4^{(4)} (c) &=& (14, 24, 34) + (134, 234, 34) + (1334, 2334, 34) \\ &=& (14,24,34) + ((14)(34), (24)(34), 34) + ((14)(34)(34), (24)(34)(34), 34) \end{eqnarray*} which is a generator. This concludes the computation that $\partial_4 (c) \in C_3^S(X)$. \section{From identities to extensions and subcomplexes}\label{sec:main} Let $X$ be a rack. For brevity we omit $*$ and associate products to the left as before. Fix a surjection $\tau: \{ 1, \ldots, k \} \rightarrow \{ 1, \ldots, m \}$, where $k \geq m$ are positive integers. We consider identities of the form $x y_{\tau(1)} \cdots y_{\tau(k)} =x$ for $x, y_j \in X$, $j=1, \ldots, m$. The expression $y_{\tau(1)} \cdots y_{\tau(k)}$ is a word of length $k$ from the alphabet $\{ y_1, \ldots, y_m \}$. We assume that $k>1$, since otherwise the quandle is trivial. For example, for a type $n$ rack $X$, there is an identity of the form $ x \, \underbrace{y_1 \cdots y_1}_{n} = x $ for any $x, y_1 \in X$. 
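Identities of this kind are easy to check exhaustively on small finite quandles. As an illustrative sketch (not part of the formal development; the example quandle is an assumption chosen here), the following verifies that the Alexander quandle on $\mathbb{Z}_7$ with $t=2$ is of type $3$, since $2$ has multiplicative order $3$ modulo $7$:

```python
# Sketch: Alexander quandle on Z_7 with t = 2, i.e. x * y = t*x + (1-t)*y mod 7.
# Since 2 has multiplicative order 3 mod 7, this quandle should be of type 3.
P, T = 7, 2

def op(x, y):
    return (T * x + (1 - T) * y) % P

def power(x, y, k):
    """Compute x *^k y, i.e. apply the right translation R_y k times."""
    for _ in range(k):
        x = op(x, y)
    return x

# Quandle axioms: idempotency and right self-distributivity.
assert all(op(x, x) == x for x in range(P))
assert all(op(op(x, y), z) == op(op(x, z), op(y, z))
           for x in range(P) for y in range(P) for z in range(P))

# Type 3: x *^3 y = x for all x, y, and no smaller positive power works.
assert all(power(x, y, 3) == x for x in range(P) for y in range(P))
assert any(power(x, y, 1) != x for x in range(P) for y in range(P))
assert any(power(x, y, 2) != x for x in range(P) for y in range(P))
```

The modulus $7$ and $t=2$ are chosen only for illustration; any $t$ of multiplicative order $n$ in $\mathbb{Z}_p^{\times}$ gives a type $n$ Alexander quandle in the same way.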
Another example is $ x \underbrace{y_1 y_2\cdots y_1 y_2}_{2k} = x $. \begin{definition} {\rm We call an identity $S$ of the form $x y_{\tau(1)} \cdots y_{\tau(k)} =x$ as described above a $(\tau, k, m)$ {\it inner identity}. If an inner identity $S$ above holds for any $x, y_j \in X$, $j=1, \ldots, m$, then we say that $X$ satisfies the $(\tau, k, m)$ inner identity $S$. } \end{definition} A rack $X$ satisfies a $(\tau, k,m)$ inner identity $S$ of the form $x y_{\tau(1)} \cdots y_{\tau (k)} =x$ if and only if $ R_{y_{\tau(k)}} \cdots R_{y_{\tau(2)}} R_{y_{\tau(1)}} = {\rm id} \in {\rm Inn}(X)$ for all $y_j \in X$, $j=1, \ldots, m$. \begin{definition} {\rm Let $S$ be an inner identity $x y_{\tau(1)} \cdots y_{\tau (k)} =x$. Set $ \omega_i = y_{\tau(1)} \cdots y_{\tau(i)} $ for $i > 0$. Define a $2$-chain $L_S$ by \begin{eqnarray*} L_S & = & (x, y_{\tau(1)}) + \sum_{i=1}^{k-1} ( x y_{\tau(1)} \cdots y_{\tau(i)}, y_{\tau(i+1)} ) \\ &=& (x, \omega_1) + \sum_{i=1}^{k-1} ( x \omega_i , y_{\tau(i+1)} ) \\ &=& (x, \omega_1) + (x \omega_1, y_{\tau(2)}) + \cdots + (x \omega_{k-1}, y_{\tau (k)} ) . \end{eqnarray*} } \end{definition} \begin{definition} {\rm Let $S$: $x y_{\tau(1)}\cdots y_{\tau(k)}=x$ be an inner identity. Set $ \omega_i = y_{\tau(1)} \cdots y_{\tau(i)} $ for $i > 0$. Let $C^S_n (X) \subset C_n(X)$, $n \in \mathbb{Z}$, be subgroups generated by \begin{eqnarray*} \lefteqn{ \bigcup_{j=1}^{n-1} \ \{ \ (x_1, \ldots, x_j, y_{\tau(1)}, x_{j+2}, \ldots, x_n) }\\ & & + \sum_{i=1}^{k-1} (x_1 \omega_i , \ldots, x_j \omega_i, y_{\tau(i+1)}, x_{j+2}, \ldots, x_n ) \\ & & \ | \ x_h, y_{\tau(i)} \in X,\, h=1, \ldots, \widehat{j+1}, \ldots, n, \, i=1, \ldots, k-1 \ \} . \end{eqnarray*} } \end{definition} We use the following lemma in the proof of Theorem~\ref{thm:main}. \begin{lemma} \label{lem:reduce} Let $X$ be a quandle. \begin{itemize} \item[{\rm (i)}] For any $a, b, c_i \in X$, it holds that $(a c_1 \cdots c_i )( b c_1 \cdots c_i ) = (ab) c_1 \cdots c_i $. 
\item[{\rm (ii)}] For any $a_i, b \in X$, it holds that $a_1 \cdots a_i b = (a_1 b) \cdots (a_i b)$. \end{itemize} \end{lemma} \begin{proof} (i) One computes inductively, using self-distributivity, \begin{eqnarray*} (a c_1 \cdots c_i )( b c_1 \cdots c_i ) &=& [ ( a c_1 \cdots c_{i-1} ) c_i ] [ ( b c_1 \cdots c_{i-1} ) c_i ] \\ & = & [ ( a c_1 \cdots c_{i-1} ) ( b c_1 \cdots c_{i-1} ) ] c_i \\ & = & [ [ ( a c_1 \cdots c_{i-2} ) ( b c_1 \cdots c_{i-2} ) ] c_{i-1} ] c_i \\ & =& \cdots \\ &=& (ab) c_1 \cdots c_i . \end{eqnarray*} (ii) One computes inductively \begin{eqnarray*} a_1 \cdots a_i b &=& (a_1 \cdots a_{i-1} b ) ( a_i b ) \\ & = & [ (a_1 \cdots a_{i-2} b ) (a_{i-1} b) ] ( a_i b ) \\ & =& \cdots \\ &=& (a_1 b) \cdots (a_i b) \end{eqnarray*} as desired. \end{proof} \begin{figure}[htb] \begin{center} \includegraphics[width=4.5in]{lemma.eps}\\ \caption{ Diagrams for Lemma~\ref{lem:reduce} }\label{lemma} \end{center} \end{figure} \begin{remark} {\rm The proof of Lemma~\ref{lem:reduce} has a diagrammatic interpretation as depicted in Figure~\ref{lemma}. Colorings of knot diagrams by quandles are well known and extensively used to construct knot invariants. At a crossing, for positive crossings in the figure with all arcs oriented downwards, the coloring condition is as depicted at the top left crossing of Figure~\ref{lemma} (B), where $a$ and $b$ are assigned on the left top under-arc and the over-arc, respectively, and $a*b$ (simply denoted by $ab$) is required to be assigned on the other under-arc. Then the bottom right arc of (A) receives $x=(a c_1 \cdots c_i )( b c_1 \cdots c_i ) $. In (B) the corresponding arc receives $x'=(ab) c_1 \cdots c_i $. Thus the fact that the colorings are in bijection under Reidemeister moves shows the equality (i). Similarly, (C) and (D) represent the equality (ii), with $y=a_1 \cdots a_i b $ and $y'=(a_1 b) \cdots (a_i b) $. One also sees that the inductive calculations in the proof can be represented by step-by-step moves. 
} \end{remark} \begin{theorem}\label{thm:main} \begin{sloppypar} Let $X$ be a rack. Let $S$ be a $(\tau, k, m)$ inner identity $x y_{\tau(1)} \cdots y_{\tau(k)}=x $ that $X$ satisfies. Then the following holds. \end{sloppypar} \begin{itemize} \item[{\rm (i)}] The $2$-chain $L_S$ is a $2$-cycle, $L_S \in Z_2(X)$. \item[{\rm (ii)}] For an abelian group $A$ and a $2$-cocycle $\phi$, $E(X,A, \phi)$ satisfies $S$ if and only if $\phi(L_S)=0$. \begin{sloppypar} \item[{\rm (iii)}] The sequence of subgroups $C^S_n (X) \subset C_n(X)$ forms a subcomplex $\{ C^S_n (X), \partial_n \}$, $n \in \mathbb{Z}$. \end{sloppypar} \end{itemize} \end{theorem} \begin{proof} (i) Set $ \omega_i = y_{\tau(1)} \cdots y_{\tau(i)} $ for $i > 0$ as before. One computes \begin{eqnarray*} \partial \left( (x, \omega_1) + \sum_{i=1}^{k-1} ( x \omega_i , y_{\tau(i+1)} ) \right) &=& [ \ (x) - (x \omega_1) \ ] + \sum_{i=1}^{k-1}\ [ \ ( x \omega_i ) - ( x \omega_{i+1}) \ ] \\ &=& (x) - (x \omega_k ) \ = \ 0 \end{eqnarray*} as desired. \noindent (ii) For $(x, a_0)$ and $(y_{\tau(i)}, a_i) \in E=X \times A$, $i=1, \ldots, k$, one computes, inductively, \begin{eqnarray*} \lefteqn{ (x, a_0) \cdots (y_{\tau(k)} , a_k) } \\ &=& (x y_{\tau(1)}, a_0 + \phi (x, y_{\tau(1)}) ) \cdots (y_{\tau(k)} , a_k) \\ &=& (x y_{\tau(1)} y_{\tau(2)}, a_0 + \phi (x, y_{\tau(1)}) + \phi (x y_{\tau(1)}, y_{\tau(2)}) ) \cdots (y_{\tau(k)} , a_k)\\ &=& \cdots \\ &=& (\ x \omega_k , a_0 + ( \ \phi (x, \omega_1) + \sum_{i=1}^{k-1} \phi (x \omega_i , y_{\tau(i+1)}) \ ) \ ) , \end{eqnarray*} which is equal to $(x, a_0)$ if and only if $\phi(L_S)=0$, as desired. \noindent (iii) We check the following three cases. Case (1): $h \leq j $. 
In this case, each term of \begin{eqnarray*} \lefteqn{ d_h ^{(n)} ( \ (x_1, \ldots, x_j, y_{\tau(1)}, x_{j+2}, \ldots, x_n) } \\ & & + \sum_{i=1}^{k-1} ( x_1 \omega_i , \, \ldots, x_{j-1} \omega_{i}, \, x_j \omega_i, \ y_{\tau(i+1)}, \, x_{j+2}, \ldots, x_n) \ ) \end{eqnarray*} is obtained from the original term by deleting the $h$-th entry, hence the image is an element of $C^S_{n-1} (X)$. Each term of \begin{eqnarray*} \lefteqn{ \delta_h ^{(n)} ( \ (x_1, \ldots, x_j, y_{\tau(1)}, x_{j+2}, \ldots, x_n) } \\ & & + \ \sum_{i=1}^{k-1} ( x_1 \omega_i ,\, \ldots, x_{j-1} \omega_i , \, x_j \omega_i , \, y_{\tau(i+1)}, \, x_{j+2}, \ldots, x_n) \ ) \end{eqnarray*} is obtained from the original by replacing the first $h-1$ entries of the form $ x_\ell \omega_i$ by $$ ( x_\ell \omega_i ) (x_h \omega_i) = ( x_\ell y_{\tau(1)} \cdots y_{\tau(i)} ) ( x_h y_{\tau(1)} \cdots y_{\tau(i)} ) . $$ By Lemma~\ref{lem:reduce} (i), we obtain $$( x_\ell y_{\tau(1)} \cdots y_{\tau(i)} ) ( x_h y_{\tau(1)} \cdots y_{\tau(i)} ) = (x_\ell x_h ) y_{\tau(1)} \cdots y_{\tau(i)} .$$ Hence the image is in $C^S_{n-1} (X)$. Case (2): $h=j+1$. One computes \begin{eqnarray*} \lefteqn{ d_{j+1} ^{(n)} ( x_1 \omega_i, \, \ldots, x_{j} \omega_i, \, y_{\tau(i+1)}, \, x_{j+2}, \ldots, x_n) }\\ &=& ( x_1 \omega_i, \, \ldots, x_{j} \omega_i, \, \widehat{y_{\tau(i+1)}}, \, x_{j+2}, \ldots, x_n) \end{eqnarray*} for the $i$-th term, and \begin{eqnarray*} \lefteqn{ \delta_{j+1} ^{(n)} ( x_1 \omega_{i-1}, \, \ldots, x_{j-1} \omega_{i-1}, \, x_j \omega_{i-1}, \, y_{\tau(i)}, \, x_{j+2}, \ldots, x_n) }\\ &=& ( x_1 \omega_{i}, \, \ldots, x_{j-1} \omega_{i}, \, x_j \omega_{i}, \, \widehat{y_{\tau(i)}}, \, x_{j+2}, \ldots, x_n) \end{eqnarray*} for the $(i-1)$-th term, so that these terms cancel in pairs by opposite signs. 
The first term before the sum over $i$ and the last term of the sum over $i$, $$d_{j+1} ^{(n)} ( x_1 , \ldots, x_{j-1} , x_j , y_{\tau(1)}, x_{j+2}, \ldots, x_n) =( x_1 , \ldots, x_{j-1} , x_j , x_{j+2}, \ldots, x_n) $$ and \begin{eqnarray*} \lefteqn{ \delta_{j+1} ^{(n)} ( x_1 \omega_{k-1}, \, \ldots, x_{j-1} \omega_{k-1}, \, x_j \omega_{k-1} , \, y_{\tau(k)}, \, x_{j+2}, \ldots, x_n) } \\ &=& ( x_1 \omega_{k}, \, \ldots, x_{j-1} \omega_{k}, \, x_j \omega_{k} , \, \widehat{y_{\tau(k)}}, x_{j+2}, \ldots, x_n) \end{eqnarray*} are equal, since $x_\ell \omega_k = x_\ell$ by the identity $S$, and cancel by opposite signs. Hence the image of this case is zero. Case (3): $h>j+1$. Each term of \begin{eqnarray*} \lefteqn{ d_h ^{(n)} (\ (x_1, \ldots, x_j, y_{\tau(1)}, x_{j+2}, \ldots, x_n) } \\ & & + \sum_{i=1}^{k-1} ( x_1 \omega_i, \, \ldots, x_{j-1} \omega_i, \, x_j \omega_i, \, y_{\tau(i+1)},\, x_{j+2}, \ldots, x_n) \ ) \end{eqnarray*} is obtained from the original term by deleting the $h$-th entry, hence the image is an element of $C^S_{n-1} (X)$. Each term of \begin{eqnarray*} \lefteqn{ \delta_h ^{(n)} (\ (x_1, \ldots, x_j, y_{\tau(1)}, x_{j+2}, \ldots, x_n) } \\ & & + \sum_{i=1}^{k-1} ( x_1 \omega_i, \, \ldots, x_{j-1} \omega_i, \, x_j \omega_i, \, y_{\tau(i+1)}, x_{j+2}, \ldots, x_n) \ ) \quad (*) \end{eqnarray*} is computed as $$ ( x_1 \omega_i x_h , \, \ldots, x_j \omega_i x_h , \, y_{\tau(i+1)} x_h, \, x_{j+2} x_h, \ldots , x_{h-1} x_h, \widehat{x_h}, x_{h+1}, \ldots, x_n) . $$ By Lemma~\ref{lem:reduce} (ii), we obtain $$ x_\ell \omega_i x_h = x_\ell y_{\tau(1)} \cdots y_{\tau(i)} x_h = (x_\ell x_h) (y_{\tau(1)} x_h) \cdots (y_{\tau(i)} x_h) . $$ We note that if $y_{\tau(u)} = y_{\tau(v)}$ then $y_{\tau(u)} x_h = y_{\tau(v)} x_h$. 
Hence we can set $y'_{\tau(j)}=y_{\tau(j)} x_h \in X$ for $j=1, \ldots, k$, and $x'_\ell=x_\ell x_h$, then $$(x_\ell x_h) (y_{\tau(1)} x_h) \cdots (y_{\tau(i)} x_h) =x'_\ell \, y'_{\tau(1)} \cdots y'_{\tau(i)} , $$ so that the above sum $(*)$ is an element of $C^S_{n-1} (X)$ as desired. \end{proof} The construction of 2-cycles in (i) is a generalization of \cite{Zab}. The following are immediate corollaries of Theorem~\ref{thm:main} for specific identities. \begin{corollary} Let $X$ be a type $k$ quandle, with the identity $S$: $x *^k y=x$ for all $x, y \in X$ for a fixed $k >1$. Then the following properties hold. \begin{itemize} \item[{\rm (i)}] The $2$-chain $L_S = \sum_{i=0}^{k-1} ( x *^i y, y )$ is a $2$-cycle, $L_S \in Z_2^R(X)$. \item[{\rm (ii)}] For an abelian group $A$ and a $2$-cocycle $\phi$, $E(X,A, \phi)$ satisfies $S$ if and only if $\phi( L_S)=0$ for any $x, y \in X$. \item[{\rm (iii)}] Let $C^S_n(X)$ be the subgroup of $C_n(X)$ generated by $$ \bigcup_{j=0}^{n-1} \{ \sum_{i=0}^{k-1} ( x_1*^i y, \ldots, x_j *^i y , y, x_{j+2}, \ldots, x_n) \} . $$ Then the sequence of subgroups $\{ C^S_n (X), \partial_n \}$ is a subcomplex. \end{itemize} \end{corollary} The following is motivated by the Burnside relations discussed in \cite{NP}. We consider the identity $xw=x$ for $$w= \underbrace{y_1 *y_2 * y_1 * y_2 * \cdots * y_1 * y_2}_{k\ {\rm repetitions}} . $$ For simplicity denote $w$ by $Y^k$ where $Y$ denotes $y_1 y_2$ and the exponent $k$ represents the number of repetitions of $y_1 y_2$ (but each $y_1 y_2$ is not parenthesized). \begin{corollary} Let $X$ be a quandle that satisfies the identity $S$: $xw=x$ for $w=Y^k$ as above, for all $x, y_1, y_2 \in X$ for a fixed $k >1$. Then the following properties hold. \begin{itemize} \item[{\rm (i)}] The $2$-chain $L_S = \sum_{i=0}^{k-1} \ [ \ ( x Y^i, y_1) + (xY^i y_1, y_2) \ ] $ is a $2$-cycle, $L_S \in Z_2^R(X)$. 
\item[{\rm (ii)}] For an abelian group $A$ and a $2$-cocycle $\phi$, $E(X,A, \phi)$ satisfies $S$ if and only if $\phi( L_S)=0$ for any $x, y_1, y_2 \in X$. \item[{\rm (iii)}] Let $C^S_n(X)$ be the subgroup of $C_n(X)$ generated by \begin{eqnarray*} \lefteqn{ \bigcup_{j=0}^{n-1} \ \{ \sum_{i=0}^{k-1} \ \ [ \ ( x_1 Y^i, \cdots, x_j Y^i, y_1, x_{j+2}, \ldots, x_n ) }\\ & & \qquad + ( x_1 Y^i y_1, \cdots, x_j Y^i y_1, y_2, x_{j+2}, \ldots, x_n ) \ ] \ \} . \end{eqnarray*} Then the sequence of subgroups $\{ C^S_n (X), \partial_n \}$ is a subcomplex. \end{itemize} \end{corollary} \begin{remark} {\rm A quandle $X$ is {\it medial}, or {\it abelian}, if the identity $S$: $$(x*y)*(u*v)=(x*u)*(y*v)$$ holds for any $x,y,u,v \in X$. This property is well known, see \cite{JPSZ} for some discussions. We note that this is not in the form of an inner identity, but point out that procedures analogous to those in the proof of Theorem~\ref{thm:main} (i) and (ii) still apply. Let $L_S= [\, (x,y) + (x*y, u*v)\, ] - [\, (x, u) + (x*u, y*v) \, ] $. It holds that if $X$ is medial, then (i) $L_S \in Z_2^R(X)$ for any $x,y,u,v \in X$, and (ii) for a $2$-cocycle $\phi$ with a coefficient abelian group $A$, $E(X,A,\phi)$ is medial if $\phi(L_S)=0$. The proof is by direct computation. For (i), we compute \begin{eqnarray*} \partial (L_S) &=& [\, (x) - (x*y) + (x*y) - ((x*y)*(u*v) )\, ] \\ & & - [\, (x) - (x*u) + (x*u) - ((x*u)*(y*v)) \, ] \quad = \quad 0 , \end{eqnarray*} and for (ii), we compute \begin{eqnarray*} \lefteqn{ (\ (x,a) * (y,b) \ ) * (\ (u, c) * (v, d) \ ) } \\ &=& (x*y, a+ \phi ( x, y) ) * (u*v, c + \phi( u, v) ) \\ &=& ( ( x*y ) * (u*v) , a+ \phi ( x, y) + \phi ( x*y, u*v) ), \\ \lefteqn{ (\ (x,a) * (u,c) \ ) * (\ (y,b) * (v, d) \ ) } \\ &=& (x*u, a+ \phi ( x, u) ) * (y*v, b + \phi( y, v) ) \\ &=& ( ( x*u ) * (y*v) , a+ \phi ( x, u) + \phi ( x*u, y*v) ). 
\end{eqnarray*} } \end{remark} \section{Inner identities}\label{sec:id} In this section we examine specific inner identities, as well as quandles that satisfy these identities. First we present the number of type $n$ Rig quandles for possible values of $n$. In the following list, a vector $[k, m]$ indicates that there are $m$ Rig quandles (all connected quandles of order $<48$) of type $k$. $$ \begin{array}{llllllll} [ 2, 117 ] & [ 3, 38 ] & [ 4, 90 ] & [ 5, 16 ] & [ 6, 117 ] & [ 7, 15 ] & [ 8, 38 ] & [ 9, 13 ] \\ { } [ 10, 31 ] & [ 11, 10 ] & [ 12, 52 ] & [ 13, 4 ] & [ 14 , 19 ] & [ 15, 14 ] & [ 16, 9 ] & [ 18, 27 ] \\ { } [ 20, 19 ] & [ 21, 14 ] & [ 22 , 11 ] & [ 23, 22 ] & [ 24, 9 ] & [ 26, 5 ] & [ 28, 17 ] & [ 30, 15 ]\\ { } [ 31 , 6 ] & [ 36, 12 ] & [ 40, 16 ] & [ 42, 12 ] & [ 46, 22 ] & & & \end{array} $$ \begin{remark} {\rm Let $X$ be a quandle. The subgroup of ${\rm Sym}(X)$ generated by the right translations $\{ R_x \ | \ x \in X\}$ is called the {\it inner automorphism group} and denoted by ${\rm Inn}(X)$. For a quandle $X$, the map ${\rm inn}: X \rightarrow {\rm Inn}(X)$ defined by ${\rm inn}(a)=R_a$ for $a \in X$ is called the {\it inner representation}. Computer calculations show that for over 3000 non-faithful quandles $X$ (mostly generalized Alexander quandles) it holds that ${\rm type}(X)={\rm type}({\rm inn}(X))$. For the great majority of these quandles, the map ${\rm inn}: X \rightarrow {\rm inn}(X)$ is an abelian extension. In \cite{CSV}, it was conjectured that if a quandle $X$ is a kei, then any abelian extension of $X$ is a kei. Thus we make the following conjectures. } \end{remark} \begin{conjecture} {\rm (1) If $\alpha: E \rightarrow X$ is a connected abelian extension, then ${\rm type}(E)={\rm type}(X)$. \noindent (2) If $Q$ is connected, then ${\rm type}(Q) = {\rm type}({\rm inn}(Q))$. } \end{conjecture} Let $X$ be a quandle. Let $xw=x$ be an inner identity, where $w$ is a word in the alphabet $\Lambda$. 
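As noted above, such an identity amounts to a relation among right translations in ${\rm Inn}(X) \subset {\rm Sym}(X)$, so it can be tested concretely by realizing each $R_y$ as a permutation and checking that the corresponding composite is the identity permutation. A minimal sketch (the dihedral quandle on $\mathbb{Z}_5$, $x*y = 2y - x \bmod 5$, is used purely as an assumed illustration):

```python
# Sketch: test inner identities via right translations viewed as permutations.
# Example quandle (an assumption for illustration): dihedral quandle on Z_5,
# with x * y = 2y - x (mod 5).
n = 5

def op(x, y):
    return (2 * y - x) % n

# Right translation R_y encoded as a tuple: R[y][x] = x * y.
R = {y: tuple(op(x, y) for x in range(n)) for y in range(n)}

identity = tuple(range(n))

def compose(p, q):
    """Permutation composition (p after q): x -> p[q[x]]."""
    return tuple(p[q[x]] for x in range(n))

def word_perm(ys):
    """Permutation sending x to x * y_1 * ... * y_k, i.e. R_{y_k} ... R_{y_1}."""
    p = identity
    for y in ys:
        p = compose(R[y], p)
    return p

# Dihedral quandles are keis: the inner identity x y y = x holds, i.e.
# R_y^2 = id for every y, while a single R_y is not the identity.
assert all(word_perm((y, y)) == identity for y in range(n))
assert any(word_perm((y,)) != identity for y in range(n))
```

The same scheme applies to any finite quandle given by its multiplication table; only the map `op` (a hypothetical placeholder here) needs to change.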
Let $|w|$ denote the length of $w$, that is, the number of letters in $w$. We note that if the length of $w$ is $k$, then the quandle is of type at most $k$, since the identity must hold for any values of variables. We observe the following. \begin{lemma}\label{lem:triv} Let $w$ be a word in the alphabet $\Lambda$ such that some letter of the alphabet, say, $a$, appears only once in $w$. Then a quandle satisfying $xw = x$ is trivial. \end{lemma} \begin{proof} Convert this identity to a product of right translations $R_c$, $c \in X$. The identity is equivalent to this product being equal to $1$, so we can solve it for $R_a$. Thus $R_a$ is a product of $R_c^{\pm 1}$, $c \neq a$. This identity must hold for any $a, c \in X$. By fixing the values of $c$ and choosing different values of $a$, we obtain $R_a = R_{a'}$ for all $a, a' \in X$. Since $R_a(a)=a*a=a$, we obtain $R_{a'}(a)=a*a'=a$ for all $a, a' \in X$, so that $X$ is trivial. \end{proof} \begin{corollary} If $Q$ is a non-trivial quandle satisfying an identity $xw =x$ where $w$ is a word in the alphabet $\Lambda$, then each letter appearing in $w$ must appear at least twice. \end{corollary} \begin{lemma}\label{lem:type} If a quandle $X$ satisfies $xw=x$ where $w$ has two letters, one of which appears consecutively $k$ times, then ${\rm type}(X) \leq {\rm gcd}\{ k, |w| - k \}$. \end{lemma} \begin{proof} \begin{sloppypar} Under the assumption, $w$ is written as $w=a^h b^k a^{|w|-h-k}$ for $h \geq 0$, where the exponents represent the number of repetitions. Then the identity $xw=x$ is converted to the identity $(R_a)^{|w| - h - k} (R_b)^k (R_a)^h=1$ in ${\rm Inn}(X)$. \end{sloppypar} Hence $(R_b)^k = (R_a^{-1})^{|w|-k}$, so that $a*^kb=(R_b)^k (a)=(R_a^{-1})^{|w|-k}(a)=a$, and $$b*^{|w|-k} a=(R_a)^{|w|-k}(b)=(R_b^{-1})^k (b)= b$$ for all $a, b \in X$. Hence ${\rm type}(X)$ divides both $k$ and $|w|-k$, so that ${\rm type}(X) \leq {\rm gcd}\{ k, |w|-k \}$, as desired. \end{proof} We examine identities of small lengths. \bigskip \noindent {\it Length 1}. If the identity $xa=x$ holds in a quandle $X$, then $X$ is trivial by Lemma~\ref{lem:triv}. 
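The length-by-length case analysis that follows relies on brute-force checks of identities $xw=x$ over finite quandles. A minimal sketch of such a checker (illustrated on the Alexander quandle $\mathbb{Z}_5$ with $t=2$, an assumed example rather than a specific Rig quandle):

```python
from itertools import product

def satisfies(op, size, word):
    """Check whether x * y_{w(1)} * ... * y_{w(k)} = x for all choices of x
    and of the letters, where `word` lists 0-based letter indices."""
    letters = max(word) + 1
    for x in range(size):
        for ys in product(range(size), repeat=letters):
            z = x
            for i in word:
                z = op(z, ys[i])
            if z != x:
                return False
    return True

# Example (an assumption): Alexander quandle on Z_5 with t = 2,
# so x * y = 2x - y (mod 5); the order of 2 mod 5 is 4.
op5 = lambda x, y: (2 * x - y) % 5

assert satisfies(op5, 5, (0, 1, 0, 1))    # x a b a b = x holds
assert not satisfies(op5, 5, (0, 0))      # x a a = x fails: not a kei
assert satisfies(op5, 5, (0, 0, 0, 0))    # the type divides 4
```

A word such as $aabab$ is encoded as the index tuple `(0, 0, 1, 0, 1)`; the check is exponential in the number of distinct letters, which is harmless for the two-letter words considered here.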
\bigskip \noindent {\it Length 2}. The identity $xaa=x$ holds in a quandle $X$ if and only if $X$ is a kei by Lemma~\ref{lem:type}. If the identity $xab=x$ holds, then the quandle is trivial by Lemma~\ref{lem:triv}. \bigskip \noindent {\it Length 3}. The identity $xaaa=x$ holds in a quandle $X$ if and only if $X$ is of type 3. Other cases are trivial quandles by Lemma~\ref{lem:triv}. \bigskip \noindent {\it Length 4}. Excluding trivial quandles from Lemma~\ref{lem:triv}, we have the following cases. (1) $xaabb=x$, (2) $xabba=x$, (3) $xabab=x$. Lemma~\ref{lem:type} implies that (1) or (2) holds if and only if the quandle is a kei. Computer calculation shows that among the 790 Rig quandles of order less than 48, the following quandles satisfy the identity $xabab=x$, none of which is a kei. $$ \begin{array}{lllllll} Q(5,2) & Q(5,3) & Q(9,3) & Q(13,4) & Q(13,7) & Q(17,3) & Q(17,12) \\ Q(25,4) & Q(25,5) & Q(25,6) & Q(25,7)& Q(25, 8 ) & Q(29, 11) & Q(29, 16) \\ Q(37, 45) & Q(37, 5) & Q(41,2) & Q(41, 3) & Q(45, 36) & Q(45, 37) & \end{array} $$ \noindent {\it Length 5}. Excluding trivial quandles from Lemma~\ref{lem:triv}, there are 10 identities $xw=x$ (all must have two distinct letters in $w$), $$w=aaabb, \ aabab, \ aabba, \ abaab, \ ababa, \ abbaa, \ aabbb, \ ababb, \ abbab, \ abbba. $$ From Lemma~\ref{lem:type}, the identities $aaabb$, $aabba$, $abbaa$, $aabbb$, $abbba$ imply a trivial quandle. Computer calculation shows that none of the remaining identities is satisfied by any of the 790 Rig quandles. We conjecture that no connected quandle satisfies any of the remaining 5 identities of length $5$. \bigskip \noindent {\it Length 6}. Words $w$ of length 6 with 2 letters, excluding those implying trivial and type 2, 3 quandles from Lemmas~\ref{lem:triv}, \ref{lem:type}, consist of the following list. 
The words $w$ such that the identities $xw=x$ are not satisfied by any of the Rig quandles are: \begin{eqnarray*} w &=& aaabab, \ aababa, \ aabbab, \ aababb, \ abaaab, \ abaabb, \\ & & ababbb, \ abbaba, \ abbaab, \ ababaa, \ ababba, \ abbbab. \end{eqnarray*} Thus we conjecture that no connected quandle satisfies these. The following words are satisfied by the same 202 Rig quandles, which contain all 117 Rig keis: $aabaab$, $abaaba$, $abbabb$. The word $ababab$ is satisfied by 55 Rig quandles, 4 of which are keis. \bigskip \noindent {\it Length 7}. Except for those words $w$ that give trivial quandles by Lemmas~\ref{lem:triv} and \ref{lem:type}, and those giving type 7 quandles (15 of them among the 790 Rig quandles), there are only two Rig quandles that satisfy the identity $xw=x$ with two-letter words $w$ of length 7, and they are the following. $Q(8,2)=\mathbb{Z}_2[t]/(t^3 + t^2 +1)$ satisfies identities with $$w=aababba, \ abbbaba, \ ababbaa, \ aabbbab, \ aaababb, \ abaabbb, \ abbaaab.$$ $Q(8,3)=\mathbb{Z}_2[t]/(t^3 + t +1)$ satisfies identities with $$w=aabbaba, \ abbabaa, \ ababbba, \ aababbb, \ aaabbab, \ abaaabb, \ abbbaab.$$ Finally we observe the following. \begin{proposition}\label{prop:many} For any $m, n \in \mathbb{Z}$ such that $m>0$ and $n>1$, there exist infinitely many connected quandles that satisfy the identity $xw=x$ for $$w= \underbrace{y_1 * \cdots * y_m * y_1 \cdots * y_m *\cdots y_1 \cdots * y_m}_{n\ {\rm repetitions}}. $$ \end{proposition} \begin{proof} We consider Alexander quandles $(X, t)$, where $t$ is an automorphism of an abelian group $X$, with $x*y=tx + (1-t)y$. Inductively one computes $$x*y_1 * \cdots * y_k = t^k x + (1-t) ( t^{k-1} y_1 + t^{k-2} y_2 + \cdots + t y _{k-1} + y_k ) . 
$$ Setting $k=mn$ and $y_{hm+j}=y_j$ for $h=0, 1, \ldots, n-1$, $j=1, \ldots, m$, we obtain \begin{eqnarray*} \lefteqn{ x* \underbrace{y_1 * \cdots * y_m * y_1 * \cdots * y_m * \cdots * y_1 * \cdots * y_m}_{n\ {\rm repetitions}} } \\ &=& t^{mn} x + (1-t)( t^{mn-m} + t^{mn-2m} + \cdots + t^m + 1 ) \\ & & \times ( t^{m-1} y_1 + t^{m-2} y_2 + \cdots + t y_{m-1} + y_m ) . \end{eqnarray*} Let $g_{m,n}(t)=t^{mn-m} + t^{mn-2m} + \cdots + t^m + 1 =(t^{mn} -1)/(t^m -1)$ and $X=\mathbb{Z}_p [t]/( g_{m,n}(t) )$. For all primes $p>n$, $g_{m,n} (1) = n \neq 0$ in $\mathbb{Z}_p$, so that $1-t$ is invertible in $X$ and hence $X$ is connected, and $t^{mn}=1$ in $X$. Hence there is an infinite family of connected quandles that satisfy $xw=x$. \end{proof} Note that in Proposition~\ref{prop:many}, $n=1$ is not possible for any choice of a word $ y_1 \cdots y_m$ by Lemma~\ref{lem:triv}. \begin{remark} {\rm For a group $G$, the least possible integer $n$ such that $x^n = 1$ for all $x \in G$ is called the {\it exponent} of $G$. We note that if the exponent of ${\rm Inn}(X)$ is $n$, then $X$ satisfies the inner identity $xw^n = x$ for any word $w$ of length $n$, where $w^n$ denotes $w$ repeated $n$ times with no parentheses. We computed the exponent of ${\rm Inn}(X)$ by \textsf{GAP} for all 790 Rig quandles, and obtained the following data. The notation $[e, n]$ below indicates that there are $n$ Rig quandles $X$ such that the exponent of ${\rm Inn}(X)$ is $e$. The pairs are listed in order of increasing exponent. 
$$ \begin{array}{llllllll} [ 6, 11 ] & [ 10, 4 ] & [ 12, 59 ] & [ 14, 3 ] & [ 15, 1 ] & [ 18, 47 ] & [ 20, 15 ] \\ { } [ 21, 2 ] & { } [ 22, 1 ] & [ 24, 38 ] & [ 26, 1 ] & [ 30, 22 ] & [ 34, 1 ] & [ 36, 31 ] \\ { } [ 38, 1 ] & [ 39, 6 ] & { } [ 40, 6 ] & [ 42, 22 ] & [ 46, 1 ] & [ 48, 4 ] & [ 50, 5 ] \\ { } [ 52, 2 ] & [ 54, 9 ] & [ 55, 4 ] & { } [ 57, 2 ] & [ 58, 1 ] & [ 60, 44 ] & [ 62, 7 ] \\ { } [ 66, 4 ] & [ 68, 2 ] & [ 70, 3 ] & [ 72, 13 ] & { } [ 74, 1 ] & [ 78, 13 ] & [ 82, 1 ] \\ { } [ 84, 24 ] & [ 86, 1 ] & [ 90, 9 ] & [ 93, 2 ] & [ 94, 1 ] & { } [ 100, 10 ] & [ 110, 4 ] \\ { } [ 111, 2 ] & [ 114, 2 ] & [ 116, 2 ] & [ 120, 27 ] & [ 129, 2 ] & [ 136, 4 ] & { } [ 140, 6 ] \\ { } [ 148, 2 ] & [ 155, 4 ] & [ 156, 10 ] & [ 164, 2 ] & [ 168, 4 ] & [ 171, 6 ] & [ 180, 12 ] \\ { } [ 186, 2 ] & [ 203, 6 ] & [ 205, 4 ] & [ 210, 4 ] & [ 222, 2 ] & [ 240, 3 ] & [ 253, 10 ] \\ { } [ 258, 2 ] & { } [ 272, 8 ] & [ 301, 6 ] & [ 310, 4 ] & [ 328, 4 ] & [ 330, 16 ] & [ 333, 6 ] \\ { } [ 342, 6 ] & [ 360, 1 ] & { } [ 406, 6 ] & [ 410, 4 ] & [ 420, 10 ] & [ 444, 4 ] & [ 465, 8 ] \\ { } [ 506, 10 ] & [ 602, 6 ] & [ 666, 6 ] & { } [ 812, 12 ] & [ 820, 8 ] & [ 840, 3 ] & [ 903, 12 ] \\ { } [ 930, 8 ] & [ 1081, 22 ] & [ 1332, 12 ] & [ 1640, 16 ] & { } [ 1806, 12 ] & [ 2162, 22 ] & [ 2520, 2 ] \end{array} $$ } \end{remark} \subsection*{Acknowledgements} We are grateful to Jozef Przytycki for valuable comments. M.S. was partially supported by the NIH 1R01GM109459. The content of this paper is solely the responsibility of the authors and does not necessarily represent the official views of NIH.
\section*{Abstract} The spatial lag model (SLM) has been widely studied in the literature for modeling spatial data in various disciplines such as geography, economics, demography, regional sciences, etc. It is an extension of the classical linear model that takes into account the proximity of spatial units in the modeling. The extension of the SLM to the functional framework (the FSLM model), as well as its estimation by the truncated maximum likelihood technique, was proposed by \cite{Ahmed}. In this paper, we propose a Bayesian estimation of the FSLM model. A Bayesian MCMC technique is used to estimate the parameters of the model. A simulation study is conducted in order to compare the results of the Bayesian estimation method with those of the truncated maximum likelihood method. As an illustration, the proposed Bayesian method is used to establish a relationship between the unemployment rate and the curves of the illiteracy rate observed in the 45 departments of Senegal. \\ \\ \noindent \textit{Keywords}. Spatial lag model, Functional data analysis, Bayesian estimation, MCMC algorithms.\\ \\ \noindent {\bfseries Mathematics Subject Classification :} 62F15, 62H11, 62P20, 60J22, 62-07.\\ \section{Introduction} In recent decades, functional data have become increasingly common in many scientific fields, facilitated mainly by recent advances in data storage and processing. We can cite, among others, biology, climatology, econometrics, chemistry and meteorology, all of which are likely to produce data that can be viewed as random curves. These data can be longitudinal, multivariate or spatial. Longitudinal data include, for example, temperature curves, precipitation curves, growth curves of a plant, the electrocardiogram of a patient, etc. Multivariate data may concern, for example, the modeling of images as random functions depending on two variables (abscissa and ordinate). 
However powerful the equipment for collecting and storing functional data may be, it never makes it possible to observe an entire random curve. Functional data therefore come in vector form: they consist of a certain number of discrete values measured on a sufficiently fine grid and recorded. Functional data analysis (FDA) has emerged as a new discipline in the field of statistics that proposes to model this type of data (\cite{Ramsay2005}).\\ The first category of models in functional statistics is based on the hypothesis of independence of the functions $X_{1}(t), X_{2}(t), \dots, X_{n}(t)$ defined on a compact interval $I=[0, T]$ of the real line $\mathbb{R}$. For instance, functional linear models (\cite{Cardot1999}, \cite{Cardot2003}, \cite{Hall}, \cite{Hilgert}), functional analysis of variance (\cite{Zoglat}, \cite{Zhang2013}) and multivariate functional data (\cite{James}, \cite{Jacques}) are models that rely on the assumption of independence of the functions. However, in many disciplines of applied science, there is a growing need to model correlated functional data. This is the case when samples of functions are observed over a discrete set of time points (temporally correlated functional data) or when these functions are observed over different sites or areas of a region (spatially correlated functional data). In such cases, the independence assumption on $X_{1}(t), X_{2}(t), \dots, X_{n}(t)$ is violated and the models mentioned above are no longer appropriate. To overcome these limitations, appropriate models have been developed to take into account the dependency between functions. For more developments, one can refer to the works of \cite{Delicado} and \cite{Ramsay2011}.\\ In the domain of spatially correlated data, functional geostatistics models random functions observed at a given set of points $s_{1}, s_{2}, \dots, s_{n}$ in a region $\mathcal{D}\subset\mathbb{R}^{n}$ (\cite{Dabo}, \cite{Giraldo}, \cite{Caballero}). 
Functional point processes correspond to the case where the functions are observed at points $s_{1}, s_{2}, \dots, s_{n}$ in $\mathcal{D}$ generated by a standard point process. When $\mathcal{D}$ is a fixed and countable set, one speaks of functional areal data (functional data on lattices). Some works on functional areal data exist in the literature and can be found in \cite{Zhang2016}, \cite{Ahmed}, \cite{Huang}, \cite{Pineda}.\\ The motivation of this paper is to provide a Bayesian estimation method for functional linear models on lattices. Specifically, we are interested in the Bayesian estimation of the spatial lag model, which belongs to the family of spatial autoregressive models. This paper is organized as follows: we present the functional spatial lag model (FSLM) in the Bayesian context in section 2. In section 3, we propose the Bayesian MCMC estimation procedure for the functional spatial lag model. Section 4 gives a simulation study of the proposed model. In section 5, we apply the proposed methodology to real data. We end the article with a conclusion in section 6. \section{The model} We consider $n$ spatial units located in a fixed and countable region $\mathcal{D}\subset\mathbb{R}^{n}$. In each spatial unit, we observe a real response variable $y_{i}$ and a functional explanatory variable $X_{i}(t), t\in\mathcal{T}$, which takes its values in the Hilbert space $L^{2}(\mathcal{T})$. The set $\mathcal{T}$ is a compact interval of the real line $\mathbb{R}$. 
\\ We start from the model (see \cite{Ahmed}) which assumes an endogenous relationship between $y_{i}$ and $X_{i}(t)$ according to the spatial lag model defined by \begin{equation} \label{prop1} y_{i}=\rho\sum_{j=1}^{n}\omega_{ij}y_{j}+\int_{\mathcal{T}}X_{i}(t)\gamma(t)dt+\epsilon_{i},\quad \quad \epsilon_{i}\sim N(0, \sigma^2), \quad i=1,2,\dots, n, \end{equation} where $\rho$ is an unknown real spatial autoregressive parameter, $\gamma(t)$ is an unknown functional parameter, the error terms $\epsilon_{i}$ are assumed to be independent and identically distributed according to the normal distribution with mean 0 and variance $\sigma^2$, and $\omega_{ij}$ is the coefficient of the spatial weight matrix $W=(\omega_{ij})_{1\leq i,j\leq n}$, where $\omega_{ij}=1$ if the areas $i$ and $j$ are contiguous and $\omega_{ij}=0$ otherwise. By convention, $\omega_{ii}=0$ for all $i=1,2,\dots, n$. \\ The introduction of this matrix is very important because it defines the relationships among locations. The weight matrix $W$ is the spatial analogue of the lag operator in time series (\cite{Anselin}). In practice, the weight matrix $W$ is row-standardized so that comparisons between models are easy to make.\\ Consider an orthonormal basis $\{\phi_{j}, j\in \mathbb{N}\}$ of $L^{2}(\mathcal{T})$. We can decompose $X_{i}(t)$ and $\gamma(t)$ in this basis as follows \begin{equation} X_{i}(t)=\sum_{j=1}^{\infty}z_{ij}\phi_{j}(t) \quad\mbox{and}\quad \gamma(t)=\sum_{j=1}^{\infty}\beta_{j}\phi_{j}(t). \end{equation} The real random variables $z_{ij}$ and the functional coefficients $\beta_{j}$ are given by \begin{equation} z_{ij}=\int_{\mathcal{T}}X_{i}(t)\phi_{j}(t)dt\quad\mbox{and}\quad\beta_{j}=\int_{\mathcal{T}}\gamma(t)\phi_{j}(t)dt. \end{equation} From this decomposition, it follows that \begin{equation} \int_{\mathcal{T}}X_{i}(t)\gamma(t)dt=\sum_{j=1}^{\infty}z_{ij}\beta_{j}. 
\end{equation} Equation (4) can be proved using dominated convergence and the fact that $\{\phi_{j}, j\in \mathbb{N}\}$ is an orthonormal basis of $L^{2}(\mathcal{T})$ by writing \begin{eqnarray} \int_{\mathcal{T}}X_{i}(t)\gamma(t)dt &=&\int_{\mathcal{T}}\left(\sum_{k=1}^{\infty}z_{ik}\phi_{k}(t)\right)\left(\sum_{j=1}^{\infty}\beta_{j}\phi_{j}(t)\right)dt\nonumber\\ &=&\sum_{k=1}^{\infty}\sum_{j=1}^{\infty}z_{ik}\beta_{j}\left(\int_{\mathcal{T}}\phi_{k}(t)\phi_{j}(t)dt\right)\nonumber\\ &=&\sum_{k=1}^{\infty}\sum_{j=1}^{\infty}z_{ik}\beta_{j}\langle\phi_{k}, \phi_{j} \rangle\nonumber\\ &=&\sum_{k=1}^{\infty}\sum_{j=1}^{\infty}z_{ik}\beta_{j}\delta_{kj}\nonumber\\ &=&\sum_{j=1}^{\infty}z_{ij}\beta_{j}\nonumber, \end{eqnarray} where $\langle\phi_{k}, \phi_{j} \rangle=\int_{\mathcal{T}}\phi_{k}(t)\phi_{j}(t)dt$ is the inner product in $L^{2}(\mathcal{T})$ of the functions $\phi_{k}, \phi_{j}$ and $\delta_{kj}$ is the Kronecker symbol. \\ Following \cite{Ahmed}, we decompose the right-hand side of equation (4) as \begin{equation} \sum_{j=1}^{\infty}z_{ij}\beta_{j}=\sum_{j=1}^{k_{n}}z_{ij}\beta_{j}+\sum_{j=k_{n}+1}^{\infty}z_{ij}\beta_{j}, \end{equation} where $k_{n}$ is a sequence of positive integers which increases with the sample size $n$. The goal here is to approximate the infinite sum $\sum_{j=1}^{\infty}z_{ij}\beta_{j}$ by the finite sum $\sum_{j=1}^{k_{n}}z_{ij}\beta_{j}$. This is only possible when the remainder $\sum_{j=k_{n}+1}^{\infty}z_{ij}\beta_{j}$ becomes negligible. For more details, one can refer to \cite{Muller} and \cite{Ahmed}.\\ The truncation of equation (5) at order $k_{n}$ can be accomplished by using the eigenfunction basis, the Fourier basis, the spline basis or the wavelet basis. When the variable to be truncated is periodic, one can use the Fourier basis. 
In the other cases, one can use the other forms of basis depending on the situation.\\ After truncation, model (1) becomes: \begin{equation} \label{trunc1} y_{i}=\rho\sum_{j=1}^{n}\omega_{ij}y_{j}+\sum_{j=1}^{k_{n}}z_{ij}\beta_{j}+\epsilon_{i},\quad \epsilon_{i}\sim N(0, \sigma^2), \quad i=1,2,\dots, n. \end{equation} The infinite dimension of model (1) is thus reduced to a finite dimension. The problem now comes down to estimating $k_{n}+2$ parameters: the $k_{n}$ coefficients $\beta_{j}$, the spatial autoregressive parameter $\rho$ and the variance $\sigma^2$ of the error term.\\ \\ If we set $y=\begin{bmatrix} y_1\\ y_2\\ \vdots\\ y_n \end{bmatrix}$, $\textbf{Z}_{k_{n}}=\begin{bmatrix} z_{11} & z_{12} & \dots & z_{1k_{n}} \\ z_{21} & z_{22} & \dots & z_{2k_{n}} \\ \vdots & \vdots & \ddots & \vdots \\ z_{n1} & z_{n2} & \dots & z_{nk_{n}} \end{bmatrix}$, $\mathbf{\beta_{k_{n}}}=\begin{bmatrix} \beta_{1}\\ \beta_{2}\\ \vdots\\ \beta_{k_{n}}\end{bmatrix}$, $\epsilon=\begin{bmatrix} \epsilon_{1}\\ \epsilon_{2}\\ \vdots\\ \epsilon_{n} \end{bmatrix}$, $W=\begin{bmatrix} \omega_{11} & \omega_{12} & \dots & \omega_{1n} \\ \omega_{21} & \omega_{22} & \dots & \omega_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \omega_{n1} & \omega_{n2} & \dots & \omega_{nn} \end{bmatrix}$, \\ \\ then the matrix form of equation (6) can be written as \begin{equation} y=\rho Wy+\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}}+\epsilon, \quad \epsilon\sim N(0, \sigma^2I_{n}), \end{equation} where $I_{n}$ is the identity matrix of order $n$. \\ As in \cite{LeSage1997} and \cite{LeSage2009}, we assume a normal prior for $\mathbf{\beta_{k_{n}}}$ with mean $m_{k_{n}}$ and variance-covariance matrix $\Sigma_{k_{n}}$, an inverse gamma prior for $\sigma^2$ with shape parameter $a$ and scale parameter $b$, and a uniform prior for $\rho$. 
These prior distributions are given by \begin{itemize} \item $p(\mathbf{\beta_{k_{n}}})\sim N(m_{k_{n}}, \Sigma_{k_{n}})=\frac{1}{(2\pi)^{\frac{k_{n}}{2}}|\Sigma_{k_{n}}|^{\frac{1}{2}}}\exp\left(-\frac{1}{2}(\mathbf{\beta_{k_{n}}}-m_{k_{n}})^{T}\Sigma_{k_{n}}^{-1}(\mathbf{\beta_{k_{n}}}-m_{k_{n}})\right)$, \\ \item $p(\sigma^2)\sim IG(a, b)=\frac{b^a}{\Gamma(a)}(\sigma^2)^{-(a+1)}\exp\left(-\frac{b}{\sigma^2}\right)$,\\ \item $p(\rho)\sim U[0, 1]$. \end{itemize} In order to facilitate the statistical treatment, we assume that all prior distributions are independent. \section{Bayesian MCMC estimation of the FSLM} \subsection{Full conditional distributions} From equation (7), we can deduce \begin{eqnarray*} \epsilon &=& y-\rho Wy-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}}\\ &=& (I_{n}-\rho W)y-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}}\\ &=& Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}}, \end{eqnarray*} where $A=I_{n}-\rho W$. Denoting $\theta=(\mathbf{\beta_{k_{n}}}^T, \sigma^2,\rho)^T$ and using the transformation theorem, the likelihood function of the model is given by \begin{equation*} L(\theta)=f(y|\textbf{Z}_{k_{n}};\theta)=f(\epsilon|\textbf{Z}_{k_{n}};\theta)\left|\frac{\partial \epsilon}{\partial y}\right|, \end{equation*} where $\frac{\partial \epsilon}{\partial y}$ is the Jacobian matrix. Since $\epsilon\sim N(0, \sigma^2 I_{n})$, its multidimensional probability density is given by \begin{eqnarray} f(\epsilon|\textbf{Z}_{k_{n}};\theta) &=& \frac{1}{(2\pi)^{\frac{n}{2}}|\sigma^2I_{n}|^{\frac{1}{2}}}\exp\left[-\frac{1}{2}\epsilon^T(\sigma^2 I_{n})^{-1}\epsilon\right]\nonumber\\ &=& \frac{1}{(2\pi)^{\frac{n}{2}}(\sigma^2)^{\frac{n}{2}}}\exp\left[-\frac{1}{2\sigma^2}\epsilon^T\epsilon\right]\nonumber\\ &=& (2\pi)^{-\frac{n}{2}}(\sigma^2)^{-\frac{n}{2}}\exp\left[-\frac{1}{2\sigma^2}(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})^T (Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})\right]. 
\end{eqnarray} The determinant of the Jacobian matrix is computed as follows \begin{eqnarray} \left|\frac{\partial \epsilon}{\partial y}\right| &=& \left|\frac{\partial (Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})}{\partial y}\right|\nonumber\\ &=& \left|\frac{\partial Ay}{\partial y}-\frac{\partial \textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}}}{\partial y}\right|\nonumber\\ &=& \left|\frac{\partial Ay}{\partial y}\right|\nonumber\\ &=& \left|A \right|\nonumber \\ &=& \left|I_{n}-\rho W\right|. \end{eqnarray} Combining equation (8) and equation (9), the likelihood function of the model is then given by \begin{equation} L(\theta)=f(y|\textbf{Z}_{k_{n}};\theta)=(2\pi)^{-\frac{n}{2}}(\sigma^2)^{-\frac{n}{2}}\exp\left[-\frac{1}{2\sigma^2}(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})^T (Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})\right]\left|A \right|. \end{equation} The posterior distribution of the model, under the assumption of independence of the prior distributions, satisfies \begin{equation} p(\mathbf{\beta_{k_{n}}}, \sigma^2, \rho|y, W)\propto L(\theta)\times p(\mathbf{\beta_{k_{n}}})\times p(\sigma^2)\times p(\rho). \end{equation} Ignoring the constant terms involved in the prior distributions of the parameters, the posterior distribution can be put in the form of a proportionality relation as follows\\ \\ $p(\mathbf{\beta_{k_{n}}}, \sigma^2, \rho|y, W)\propto (\sigma^2)^{-\frac{n}{2}}\exp\left(-\frac{1}{2\sigma^2}(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})^T (Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})\right)\left|A \right|\times (\sigma^2)^{-(a+1)}\exp\left(-\frac{b}{\sigma^2}\right)$ \\ $\times\exp\left(-\frac{1}{2}(\mathbf{\beta_{k_{n}}}-m_{k_{n}})^{T}\Sigma_{k_{n}}^{-1}(\mathbf{\beta_{k_{n}}}-m_{k_{n}})\right)$.\\ \\ From this posterior distribution, we will deduce the full conditional distributions of the parameters. We start with the conditional distribution of $\sigma^2$. 
By omitting the elements that do not involve the parameter $\sigma^2$ in the posterior distribution, one gets the full conditional distribution of this parameter as a proportionality relation, as described below \begin{eqnarray} p(\sigma^2|\mathbf{\beta_{k_{n}}}, \rho) &\propto& (\sigma^2)^{-\frac{n}{2}}\exp\left(-\frac{1}{2\sigma^2}(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})^T (Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})\right)\times(\sigma^2)^{-(a+1)}\exp\left(-\frac{b}{\sigma^2}\right)\nonumber\\ &\propto& (\sigma^2)^{-\left(\frac{n}{2}+a+1\right)}\exp\left(-\frac{(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})^T (Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})+2b}{2\sigma^2}\right). \end{eqnarray} We recognize in equation (12) an inverse gamma distribution for $\sigma^2|\mathbf{\beta_{k_{n}}}, \rho$ with shape parameter $\frac{n}{2}+a$ and scale parameter $\frac{(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})^T (Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})+2b}{2}$, which we express by writing \begin{equation} \sigma^2|\mathbf{\beta_{k_{n}}}, \rho\sim IG\left(\frac{n}{2}+a, \frac{(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})^T (Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})+2b}{2}\right). 
\end{equation} Similarly, starting from the posterior distribution and keeping only the terms that involve the parameter $\mathbf{\beta_{k_{n}}}$, we get the full conditional distribution given below \begin{eqnarray} p(\mathbf{\beta_{k_{n}}}|\sigma^2, \rho) &\propto& \exp\left(-\frac{1}{2\sigma^2}(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})^T (Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})\right)\times\exp\left(-\frac{1}{2}(\mathbf{\beta_{k_{n}}}-m_{k_{n}})^{T}\Sigma_{k_{n}}^{-1}(\mathbf{\beta_{k_{n}}}-m_{k_{n}})\right)\nonumber\\ &\propto& \exp\left(-\frac{1}{2}\left((Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})^T(\sigma^2 I_{n})^{-1}(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})+(\mathbf{\beta_{k_{n}}}-m_{k_{n}})^{T}\Sigma_{k_{n}}^{-1}(\mathbf{\beta_{k_{n}}}-m_{k_{n}})\right)\right)\nonumber. \end{eqnarray} In order to recognize the distributional form of $\mathbf{\beta_{k_{n}}}|\sigma^2, \rho$, we complete the square by expanding out the terms $(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})^T(\sigma^2 I_{n})^{-1}(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})$ and $(\mathbf{\beta_{k_{n}}}-m_{k_{n}})^{T}\Sigma_{k_{n}}^{-1}(\mathbf{\beta_{k_{n}}}-m_{k_{n}})$ (one can refer to \cite{Gelman} for more details on the technique of completing the square). We begin by writing \begin{equation*} \left(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}}\right)^T (\sigma^2 I_{n})^{-1}(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})=\mathbf{\beta_{k_{n}}}^T\left(\frac{1}{\sigma^2}\textbf{Z}_{k_{n}}^T\textbf{Z}_{k_{n}}\right)\mathbf{\beta_{k_{n}}}-\mathbf{\beta_{k_{n}}}^T\left(\frac{1}{\sigma^2}\textbf{Z}_{k_{n}}^T Ay\right)-\left(\frac{1}{\sigma^2}y^TA^T\textbf{Z}_{k_{n}}\right)\mathbf{\beta_{k_{n}}}+C_{1}, \end{equation*} where $C_{1}=y^TA^T\left(\sigma^2I_{n}\right)^{-1}Ay$. 
Similarly, we write \begin{equation*} (\mathbf{\beta_{k_{n}}}-m_{k_{n}})^{T}\Sigma_{k_{n}}^{-1}(\mathbf{\beta_{k_{n}}}-m_{k_{n}})=\mathbf{\beta_{k_{n}}}^T\Sigma_{k_{n}}^{-1}\mathbf{\beta_{k_{n}}}-\mathbf{\beta_{k_{n}}}^T\Sigma_{k_{n}}^{-1}m_{k_{n}}-m_{k_{n}}^T\Sigma_{k_{n}}^{-1}\mathbf{\beta_{k_{n}}}+m_{k_{n}}^T\Sigma_{k_{n}}^{-1}m_{k_{n}}. \end{equation*} After completing the square, the distribution of $\mathbf{\beta_{k_{n}}}|\sigma^2, \rho$ is a multivariate normal distribution with mean $\left(\textbf{Z}_{k_{n}}^T\textbf{Z}_{k_{n}}+\sigma^2\Sigma_{k_{n}}^{-1}\right)^{-1}\left(\textbf{Z}_{k_{n}}^TAy+\sigma^2\Sigma_{k_{n}}^{-1}m_{k_{n}}\right)$ and variance-covariance matrix equal to $\sigma^2\left(\textbf{Z}_{k_{n}}^T\textbf{Z}_{k_{n}}+\sigma^2\Sigma_{k_{n}}^{-1}\right)^{-1}$. We summarize the conditional distribution of $\mathbf{\beta_{k_{n}}}|\sigma^2, \rho$ as follows \begin{equation} \mathbf{\beta_{k_{n}}}|\sigma^2, \rho\sim N\left(\left(\textbf{Z}_{k_{n}}^T\textbf{Z}_{k_{n}}+\sigma^2\Sigma_{k_{n}}^{-1}\right)^{-1}\left(\textbf{Z}_{k_{n}}^TAy+\sigma^2\Sigma_{k_{n}}^{-1}m_{k_{n}}\right), \sigma^2\left(\textbf{Z}_{k_{n}}^T\textbf{Z}_{k_{n}}+\sigma^2\Sigma_{k_{n}}^{-1}\right)^{-1}\right). \end{equation} The proportionality relation giving the full conditional distribution of the parameter $\rho$ is \begin{eqnarray} p(\rho|\mathbf{\beta_{k_{n}}}, \sigma^2) &\propto& \left|A \right|\exp\left(-\frac{1}{2\sigma^2}(Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})^T (Ay-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}})\right)\nonumber\\ &\propto& \left|I_{n}-\rho W \right|\exp\left(-\frac{1}{2\sigma^2}\left((I_{n}-\rho W)y-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}}\right)^T \left((I_{n}-\rho W)y-\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}}\right)\right)\nonumber. \end{eqnarray} Unfortunately, there is no recognizable form for the full conditional distribution of the parameter $\rho$ due to the presence of the determinant $\left|I_{n}-\rho W \right|$. 
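As an illustrative sketch (not the authors' code), the two closed-form full conditionals and the log of the unnormalized conditional of $\rho$ can be implemented as follows; all variable names (\texttt{y}, \texttt{Z}, \texttt{W}, prior mean \texttt{m0}, prior precision \texttt{Sig0\_inv}, hyperparameters \texttt{a}, \texttt{b}) are assumptions made for the example.

```python
import numpy as np

def draw_sigma2(beta, rho, y, Z, W, a, b, rng):
    """Draw sigma^2 from its inverse-gamma full conditional."""
    resid = y - rho * (W @ y) - Z @ beta        # A y - Z beta with A = I - rho W
    shape = len(y) / 2 + a
    scale = resid @ resid / 2 + b
    return 1.0 / rng.gamma(shape, 1.0 / scale)  # inverse-gamma via a gamma draw

def draw_beta(sigma2, rho, y, Z, m0, Sig0_inv, W, rng):
    """Draw beta from its Gaussian full conditional."""
    Ay = y - rho * (W @ y)
    prec = Z.T @ Z / sigma2 + Sig0_inv          # posterior precision matrix
    cov = np.linalg.inv(prec)
    mean = cov @ (Z.T @ Ay / sigma2 + Sig0_inv @ m0)
    return rng.multivariate_normal(mean, cov)

def log_cond_rho(rho, beta, sigma2, y, Z, W):
    """Log of the unnormalized full conditional of rho."""
    n = len(y)
    A = np.eye(n) - rho * W
    sign, logdet = np.linalg.slogdet(A)
    if sign <= 0:
        return -np.inf                          # |I - rho W| must be positive
    resid = A @ y - Z @ beta
    return logdet - resid @ resid / (2 * sigma2)
```

The covariance of the draw of $\mathbf{\beta_{k_{n}}}$ is obtained here from the posterior precision $\textbf{Z}_{k_{n}}^T\textbf{Z}_{k_{n}}/\sigma^2+\Sigma_{k_{n}}^{-1}$, and the determinant term for $\rho$ is evaluated through its log for numerical stability.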
\subsection{Bayesian MCMC computation} Recall that $\theta=(\mathbf{\beta_{k_{n}}}^T, \sigma^2,\rho)^T$ is the parameter vector. We consider here the loss function $L(\hat{\theta}, \theta)=(\hat{\theta}-\theta)^TR(\hat{\theta}-\theta)$, where $R$ is a positive definite matrix. This function is used as a criterion to determine an optimal point estimate of the parameter $\theta$. The Bayesian point estimate of the parameter $\theta$ is then given by \begin{equation} \hat{\theta}=\arg\min\ \mathbb{E}_{\theta}\left(L(\hat{\theta}, \theta)\right)=\arg\min\int_{\Theta}(\hat{\theta}-\theta)^TR(\hat{\theta}-\theta)p(\theta|y)d\theta. \end{equation} The solution of equation (15) is $\hat{\theta}=\mathbb{E}(\theta|y)=\int_{\Theta}\theta p(\theta|y)d\theta$, which corresponds to the mean of the posterior distribution of $\theta$ (\cite{Koop}). In general, the analytical solution of equation (15) cannot be obtained in closed form.\\ To circumvent this problem, we use MCMC algorithms, which consist in constructing a Markov chain from the conditional posterior distributions. If the Markov chain is simulated long enough, then the sample mean is taken as an estimate of $\mathbb{E}(\theta|y)$ (\cite{Casella}, \cite{Roberts}).\\ Since we know the full conditional distributions of $\mathbf{\beta_{k_{n}}}$ and $\sigma^2$, we can use the Gibbs sampling method to easily obtain random draws of these parameters. Unfortunately, the full conditional distribution of the autoregressive parameter $\rho$ has no known form. Thus, we use the Metropolis-Hastings algorithm to obtain samples of this parameter. In practice, we combine the Gibbs sampling method and the Metropolis-Hastings algorithm in the sampling scheme of the functional spatial lag model. This technique is known in the literature as the Metropolis-within-Gibbs algorithm (\cite{Gilks}).\\ The Metropolis-Hastings algorithm uses a proposal distribution, from which we sample the autoregressive parameter $\rho$. 
Following \cite{LeSage2009}, we can use a normal proposal distribution with a tuned random-walk procedure defined as \begin{equation} \rho^{new}=\rho^{old}+c\psi, \quad \psi\sim N(0, 1). \end{equation} The constant $c$ represents the tuning parameter. Since the proposal is symmetric, the probability of accepting the candidate $\rho^{new}$ is given by \begin{equation} \alpha(\rho^{new}, \rho^{old})=\min\left(\frac{p(\rho^{new}|\mathbf{\beta_{k_{n}}}, \sigma^2, y)}{p(\rho^{old}|\mathbf{\beta_{k_{n}}}, \sigma^2, y)}, 1\right). \end{equation} \cite{Dogan} show that the tuning parameter affects the behavior of the chain in at least two ways: (i) it affects the acceptance rate of new candidate values through the acceptance probability, (ii) it also affects the region where the new candidate values are sampled. \\ In practice, the tuning parameter is chosen so that the acceptance rate falls between 40$\%$ and 60$\%$. The Metropolis-within-Gibbs algorithm for estimating the parameters of the functional spatial lag model is described through the following steps.\\ 1. Set up an initial value $\theta^{(0)}=(\mathbf{\beta_{k_{n}}}^{(0)}, \sigma^{2(0)}, \rho^{(0)})$.\\ 2. For iteration $j$ from 1 to N, do the following steps.\\ 2.1 Simulate $\mathbf{\beta_{k_{n}}}^{(j)}$ from $p(\mathbf{\beta_{k_{n}}}|\sigma^{2(j-1)}, \rho^{(j-1)})$,\\ 2.2 Simulate $\sigma^{2(j)}$ from $p(\sigma^2|\mathbf{\beta_{k_{n}}}^{(j)}, \rho^{(j-1)})$,\\ 2.3 Compute $\rho^{new}=\rho^{(j-1)}+c\psi$,\\ 2.3.1 Calculate $\alpha(\rho^{new}, \rho^{(j-1)})=\min\left(\frac{p(\rho^{new}|\mathbf{\beta_{k_{n}}}^{(j)}, \sigma^{2(j)}, y)}{p(\rho^{(j-1)}|\mathbf{\beta_{k_{n}}}^{(j)}, \sigma^{2(j)}, y)}, 1\right)$,\\ 2.3.2 Simulate $u$ from the uniform distribution on (0, 1),\\ 2.3.3 Set $\rho^{(j)}=\rho^{new}$ if $\alpha(\rho^{new}, \rho^{(j-1)})>u$, else $\rho^{(j)}=\rho^{(j-1)}$.\\ 2.4 Return $\theta^{(j)}=(\mathbf{\beta_{k_{n}}}^{(j)}, \sigma^{2(j)}, \rho^{(j)})$. 
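The steps above can be sketched in a compact, self-contained form as follows. This is an illustrative stand-in, not the implementation used in the paper: the prior choices (zero prior mean with identity prior precision for $\mathbf{\beta_{k_{n}}}$) and all variable names are assumptions made for the example.

```python
import numpy as np

def mwg_sampler(y, Z, W, n_iter=1000, c=0.1, a=2.0, b=1.0, seed=0):
    """Metropolis-within-Gibbs for y = rho W y + Z beta + eps."""
    rng = np.random.default_rng(seed)
    n, k = Z.shape
    m0, Sig0_inv = np.zeros(k), np.eye(k)        # prior N(0, I) on beta (assumption)
    beta, sigma2, rho = np.zeros(k), 1.0, 0.0    # step 1: initial values
    draws, accepted = [], 0

    def log_cond_rho(r):
        # log of the unnormalized full conditional of rho
        A = np.eye(n) - r * W
        sign, logdet = np.linalg.slogdet(A)
        if sign <= 0:
            return -np.inf
        resid = A @ y - Z @ beta
        return logdet - resid @ resid / (2 * sigma2)

    for _ in range(n_iter):
        Ay = y - rho * (W @ y)
        # step 2.1: Gibbs draw of beta from its Gaussian conditional
        cov = np.linalg.inv(Z.T @ Z / sigma2 + Sig0_inv)
        beta = rng.multivariate_normal(cov @ (Z.T @ Ay / sigma2 + Sig0_inv @ m0), cov)
        # step 2.2: Gibbs draw of sigma^2 from its inverse-gamma conditional
        resid = Ay - Z @ beta
        sigma2 = 1.0 / rng.gamma(n / 2 + a, 1.0 / (resid @ resid / 2 + b))
        # steps 2.3-2.3.3: tuned random-walk Metropolis step for rho
        rho_new = rho + c * rng.standard_normal()
        if 0.0 <= rho_new < 1.0:                 # support of the uniform prior
            if np.log(rng.uniform()) < log_cond_rho(rho_new) - log_cond_rho(rho):
                rho = rho_new
                accepted += 1
        draws.append((beta.copy(), sigma2, rho)) # step 2.4
    return draws, accepted / n_iter
```

The acceptance rate returned by the sampler can be monitored to adjust the tuning constant \texttt{c} toward the 40--60$\%$ range mentioned above.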
\\ The sequence $(\theta^{(0)}, \theta^{(1)}, \dots, \theta^{(N)})$ generated by the Metropolis-within-Gibbs algorithm is a Markov chain whose stationary distribution is the joint posterior density $p(\theta|y)$ (\cite{Thierney}).\\ Alternatively, we can use this algorithm with a uniform proposal distribution instead of a normal proposal distribution. A slight modification of the algorithm then takes place at the level of updating the autoregressive parameter $\rho$ (see, for instance, \cite{LeSage1997}, \cite{LeSage2009}). \section{Simulation study} In this section, we propose simulations of the FSLM model from the point of view of the Bayesian approach. The simulations cover the 121 communes of Senegal represented by the spatial layout in Figure 1 below.\\ \begin{figure}[H] \caption{Spatial layout of the communes of Senegal (left panel) and centroids of these communes (right panel).} \centering \includegraphics[width=16cm,height=8cm]{fig1} \end{figure} \noindent We consider several simulation scenarios according to the value of the autoregressive parameter $\rho$. In each scenario, we compare the results obtained for the Bayesian FSLM model with uniform kernel and with normal kernel, and for the FSLM estimated by the maximum likelihood procedure. The process is summarized by the following steps. \begin{itemize} \item The $121\times 121$ weighting matrix $W$, whose elements $w_{ij}$ are equal to 1 if the communes $i$ and $j$ share a common border and 0 otherwise, is calculated from the spatial layout in Figure 1; \item In each commune, we simulate the functional covariate $X_{i}(t_{j})=\cos(t_{j})+\sin(t_{j})+\epsilon_{i}(t_{j}), \quad i = 1,2,\dots, 121, \quad j = 0,1, \dots, 100$, where $\epsilon_{i}(t_{j})$ is a realization of white noise. We then smooth these values using a basis of 7 B-splines in order to obtain the curves $X_{i}(t), \quad i=1,2,\dots, 121$. 
The number of basis functions is chosen using a cross-validation criterion (one can refer to \cite{Ramsay2005} for more details); \item The functional parameter is defined by $\gamma(t)=e^{-\frac{t}{10}}\left(\left(\frac{t}{10}\right)^2+3\left(\frac{t}{10}\right)-4\right)$ as in \cite{Pineda}; \item We generate a $121\times 1$ Gaussian vector $\epsilon\sim N(0, \sigma^2 I_{n})$; \item We calculate $y=(I_{n}-\rho W)^{-1}\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}}+(I_{n}-\rho W)^{-1}\epsilon$ by considering the values $\rho=0.3, 0.5, 0.7$; \item The parameters $\gamma(t)$, $\sigma^2$ and $\rho$ are estimated using the Metropolis-within-Gibbs algorithm described above. We consider both the uniform kernel and the normal kernel for the proposal distribution. \end{itemize} The curves $X_{i}(t)$ are represented in Figure 2 below.\\ \begin{figure}[H] \caption{121 curves $X_{i}(t)$ obtained after smoothing the values $X_{i}(t_{j})$ with a basis of 7 B-splines.} \centering \includegraphics[width=16cm,height=8cm]{fig2} \end{figure} \noindent We give in Table 1 below the results of the simulations of the Metropolis-within-Gibbs algorithm for the FSLM model. For $\rho=0.3$, we find that all the methods considered give substantially the same results for the values of the $\beta$ parameter. However, it is generally noted that the normal kernel method and the maximum likelihood method are closer in terms of the results obtained. The value of the simulated $\rho$ parameter ($\rho=0.28$) for the normal kernel method is closer to the true value $\rho=0.3$ than for the other methods. However, the uniform kernel method is preferable to the other methods according to the BIC values. The simulated values of the parameters are of the same order of magnitude for the different methods considered when $\rho=0.5$. 
\begin{table}[!htbp] \centering \caption{Comparison of simulation results of the Bayesian FSLM model} \vspace{0.5cm} \begin{tabular}{p{1.0cm} l l c c c c c c c c c} \hline $\rho$& Method &$\beta_{1}$& $\beta_{2}$ & $\beta_{3}$ &$\beta_{4}$ &$\beta_{5}$ & $\beta_{6}$&$\beta_{7}$ & $\sigma^2$& $\rho$& BIC\\ \hline &Uniform kernel&-3.62 &-3.04 &-2.55 &-1.96 &-1.47& -1.64&-0.79&1.09&0.48&182 \\ 0.3 & Normal kernel&-3.67& -3.06 &-2.53 & -1.98& -1.46& -1.15&-0.82&1.07&0.28&185 \\ & Maximum likelihood& -3.68& -3.05&-2.52 & -1.90& -1.43& -1.16&-0.80&1.00&0.25&393 \\ \hline &Uniform kernel &-3.49 &-2.82 &-2.26 &-1.41 &-1.27& -0.86&-0.35&0.99&0.54&188 \\ 0.5 &Normal kernel& -3.41& -2.76&-2.20 & -1.33& -1.23& -0.86&-0.40&0.97&0.46&203 \\ & Maximum likelihood& -3.45& -2.80 &-2.26 & -1.41& -1.30& -0.88&-0.39&0.89&0.44&383 \\ \hline &Uniform kernel&-3.32 &-2.76 &-2.18 &-1.32 &-1.20& -0.91&-0.51&0.81&0.57&427 \\ 0.7 &Normal kernel& -3.28& -2.73&-2.20 & -1.37& -1.23& -0.91&-0.54&0.81&0.72&289 \\ & Maximum likelihood & -3.33& -2.75&-2.18 & -1.33& -1.22& -0.89&-0.50&0.74&0.74&371 \\ \hline \end{tabular} \end{table} \noindent However, the uniform kernel and maximum likelihood methods produce simulated values that are substantially the same. It is again noted that the uniform kernel method is preferable with respect to the value of the BIC.\\ When $\rho=0.7$, it was again found that the uniform kernel and maximum likelihood methods yield results that are about the same. The normal kernel method gives a value $\rho=0.72$, which is closer to the true value $\rho=0.7$. \\This method, with a value of $BIC=289$, is preferable. In summary, the general finding is that the Bayesian approach performs better than the classical method. When the $\rho$ value is low, the uniform kernel method outperforms the normal kernel method. However, for large $\rho$ values, the normal kernel method is preferable to the uniform kernel method. 
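The data-generating step of the simulation study above amounts to solving $(I_{n}-\rho W)y=\textbf{Z}_{k_{n}}\mathbf{\beta_{k_{n}}}+\epsilon$. A minimal sketch of this step follows; the ring-shaped, row-standardized matrix stands in for the actual contiguity matrix of the 121 communes, and the coefficient values are illustrative only.

```python
import numpy as np

def simulate_fslm(Z, beta, W, rho, sigma, seed=0):
    """Generate y = (I - rho W)^{-1}(Z beta + eps), eps ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    n = Z.shape[0]
    eps = rng.normal(scale=sigma, size=n)
    # solve the linear system instead of forming the inverse explicitly
    return np.linalg.solve(np.eye(n) - rho * W, Z @ beta + eps)

# illustrative row-standardized contiguity matrix on a ring of 121 units
n, k = 121, 7
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

rng = np.random.default_rng(1)
Z = rng.normal(size=(n, k))          # stand-in for the basis scores z_ij
beta = np.linspace(-3.5, -0.5, k)    # illustrative coefficient values
y = simulate_fslm(Z, beta, W, rho=0.5, sigma=1.0)
```

Since $W$ is row-standardized here, $I_{n}-\rho W$ is invertible for $|\rho|<1$, so the solve is well defined for the values $\rho=0.3, 0.5, 0.7$ used above.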
\section{Application to unemployment data} In this section, we apply the proposed methodology to real data. The aim is to explain the variations in the unemployment rate by those of the illiteracy rate. In each department of Senegal, we observe the unemployment rate in the first quarter of 2019 and the illiteracy rate from the second quarter of 2016 to the first quarter of 2019. The data come from the National Agency of Statistics and Demography of Senegal.\\ We give in Figure 3 below the spatial distribution of the unemployment rate in each of the 45 departments of Senegal. We observe in this figure that the distribution of the unemployment rate is not spatially random: most geographically close departments have similar unemployment values. \\ \begin{figure}[H] \caption{Spatial distribution of the unemployment rate in Senegal.} \centering \includegraphics[width=16cm,height=8cm]{fig3} \end{figure} \noindent This means that the unemployment rate is a spatially autocorrelated variable, a visual observation confirmed by the Moran test.\\ Figure 4 below shows the curves of the illiteracy rate over the observation period. As a first step, the discrete illiteracy rates collected per quarter are smoothed using a 7 B-spline basis. Overall, the curves of the illiteracy rate are decreasing over the observation period, reflecting the efforts made by the Government of Senegal to reduce the illiteracy rate through targeted literacy programs.\\ \begin{figure}[H] \caption{Curves of the illiteracy rate of Senegal, obtained after expanding the data in a 7 B-spline basis.} \centering \includegraphics[width=16cm,height=8cm]{fig4} \end{figure} \noindent In Table 2 below, we group the results of the parameter estimation of the Bayesian FSLM model according to the different methods. 
The estimates of the vector parameter $\beta$ for the normal kernel and uniform kernel methods are very close to each other and do not deviate much from those obtained by the maximum likelihood method. The estimated standard deviations of the normal kernel and the uniform kernel methods are almost the same. The values of the $\rho$ parameter for the normal kernel and the uniform kernel methods show that the autocorrelation of the unemployment rate is more pronounced in these models than in the maximum likelihood fit.\\ \begin{table}[!htbp] \centering \caption{Comparison of parameter estimation of the Bayesian FSLM model} \vspace{0.5cm} \begin{tabular}{p{3cm} p{1cm} l c c c c c c c c c c} \hline Method& &$\beta_{1}$& $\beta_{2}$ & $\beta_{3}$ &$\beta_{4}$ &$\beta_{5}$ & $\beta_{6}$&$\beta_{7}$ & $\sigma^2$& $\rho$& BIC & AR \\ \hline &Value&-0.31 &-0.22 &1.08 &0.89 &-1.45& 0.26&-0.09& 57&0.53&299&69$\%$ \\ Normal kernel \\ & Std& 0.38& 0.47&0.67 & 0.84& 0.75& 0.69&0.77& 14& 0.14& \\ \hline &Value&-0.32 &-0.24 &1.21 &1.03 &-1.58& 0.17&-0.07&58&0.53&279&12$\%$ \\ Uniform kernel \\ & Std& 0.38& 0.48&0.68 & 0.86& 0.77& 0.70&0.79& 14&0.27& \\ \hline &Value&-0.27 &-0.22 &1.23 &0.99 &-1.59& 0.20&-0.08&45&0.45&340 \\ ML method \\ & Std& 0.34& 0.42&0.60 & 0.74& 0.67& 0.61&0.69& 6.73&0.14& \\ \hline \end{tabular} \end{table} \noindent Looking at the values of the BIC, we conclude that the uniform kernel method gives the best model. However, the normal kernel method is preferable if we look at the acceptance rate (AR=69$\%$) of the Metropolis-within-Gibbs algorithm. In all cases, the results of the Bayesian approach are better than those obtained by the frequentist approach.\\ \section{Conclusion} In this paper, we have proposed a Bayesian estimation method for the functional spatial lag model. 
Using the Metropolis-within-Gibbs algorithm, we have shown on the basis of numerical simulations that Bayesian estimation gives better results than frequentist estimation. We have also illustrated the Bayesian methodology with an application to unemployment data of Senegal. The results obtained confirm that the Bayesian methodology is more satisfactory than maximum likelihood estimation. This modeling approach is innovative in the fields of spatial statistics and spatial econometrics. The application of this Bayesian methodology to other types of data and other areas of activity may be considered.
\section{Introduction} \label{sec:intro} The expansion of the universe is currently accelerating, and yet we have no compelling explanation of why this is happening unless we are prepared to accept the extraordinary degree of fine tuning associated with the introduction of a cosmological constant. Attempts to further our understanding typically introduce new scalar fields either explicitly as quintessence, or implicitly through a modification of the gravitational sector~\cite{Copeland:2006wr,Clifton:2011jh,Joyce:2014kja}. It is therefore crucial for cosmology to understand what theoretical properties these scalar fields could have, and to constrain them experimentally, whilst remaining agnostic about the complete solution of the cosmological constant problem and the source of the acceleration of the expansion of the universe. Much recent attention has focused on the Horndeski theories, the most general theories describing one scalar field coupled to gravity that have second-order equations of motion~\cite{Horndeski:1974wa}. These theories were first written down by Horndeski, and later independently rediscovered by Deffayet, Gao, Steer and Zahariade~\cite{Deffayet:2011gz}. Insisting on second-order equations of motion guarantees the absence of ghost degrees of freedom, although it has also been realised that if additional constraints are present this condition can be relaxed and the theories extended to the so-called `beyond-Horndeski' theories~\cite{Gleyzes:2014dya}. The Horndeski theories provide a complete description of the possible effects of a new scalar degree of freedom uniformly coupled to matter, and constraining these theories is an important target for upcoming large scale cosmological surveys including Euclid~\cite{Amendola:2012ys}. 
Such a dark energy scalar field may arise as part of a solution to the cosmological constant problem: the question of why the vacuum fluctuations of standard model fields do not generate a large effective cosmological constant. Any solution to this problem must therefore couple to both the gravitational and matter fields. Barring a compelling reason otherwise, we thus expect that the dark energy scalar will couple to matter~\cite{Joyce:2014kja}. This is potentially problematic, because light scalar fields coupled to matter mediate fifth forces. The stringent experimental constraints on the existence of such forces can be avoided either by imposing a shift symmetry which forbids Yukawa type interactions with the scalar, or by making the theory non-linear and thereby allowing the properties of the fifth force to vary depending on the environment, an effect known as screening \cite{Joyce:2014kja}. The energy scales relevant to dark energy are the (reduced) Planck mass $M_P = 2.4 \times 10^{18}~\text{GeV}$ controlling the strength of gravitational effects, and the Hubble scale today $H_0= 1.5 \times 10^{-42}~\text{GeV}$ which sets the coherence scale for dark energy effects. The vast hierarchy between these two scales is the source of the cosmological constant problem, which we do not address here, but it also allows us to build a vast array of intermediate scales by taking different combinations of the Planck mass and the Hubble scale. For example Dvali-Gabadadze-Porrati (DGP) and Galileon models have higher mass dimension scalar operators suppressed by the scale $ (M_P H_0^2)^{1/3} \sim 10^{-22}~\text{GeV} $~\cite{Dvali:2000hr,Nicolis:2004qq,Nicolis:2008in}. 
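The quoted intermediate scale follows directly from the two input scales above; a quick numerical sanity check:

```python
# Reduced Planck mass and Hubble scale today, in GeV (values from the text).
M_P = 2.4e18
H_0 = 1.5e-42

# Intermediate scale suppressing DGP/Galileon higher-dimension operators.
scale = (M_P * H_0**2) ** (1 / 3)
print(f"(M_P H_0^2)^(1/3) = {scale:.2e} GeV")  # ~1.75e-22 GeV, i.e. of order 1e-22
```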
The invention and widespread adoption of screening mechanisms to dynamically suppress fifth forces in intra-solar-system searches~\cite{Joyce:2014kja} also mean that experimental bounds can be met without the energy scale controlling the strength of the coupling of the scalar to matter being forced to lie above the Planck scale. As a result we should ask whether it is possible to detect the Horndeski model of dark energy on terrestrial scales. Constraints from laboratory experiments will provide important information, complementary to that obtained from cosmological surveys, and allow us to test theories of dark energy over the widest possible range of distance and energy scales. The LHC probes our understanding of physics at unprecedented energies and under controlled and reproducible conditions. A large variety of particles, including heavy ones, is being produced beyond threshold, resulting in potentially sizeable interactions of new scalars that couple to standard model (SM) particles via the energy-momentum tensor. In doing so, the LHC creates a controlled and non-static environment in the sense that large momentum transfers of physical systems are probed with sufficient accuracy and statistics. Since interactions of a scalar dark energy candidate with the SM sector and itself often involve derivative couplings, we can expect the high momentum transfer events at the LHC to provide an excellent strategy to constrain such realisations of dark energy. In this work we will survey the modified phenomenology of LHC processes that are particularly motivated as probes of dark energy interactions. Before we discuss these processes in detail in Sec.~\ref{sec:colla}, to make this work self-contained, we survey effective dark energy models in Sec.~\ref{sec:eff} to introduce the relevant dark energy effective field theory (EFT) interactions. 
Although different in fundamental respects, dark energy phenomenology at the LHC shares certain features with searches for dark matter at colliders. The potential to pin down the dark energy character of a potential new physics signal, due to a priori different phenomenology and the expected non-linear self-interactions, will be discussed in Secs.~\ref{sec:darkmatter} and~\ref{sec:collb}. We give our conclusions and an outlook in Sec.~\ref{sec:conclusions}. \section{Effective Models for Dark Energy} \label{sec:eff} We consider the effective role that dark energy could play in collider experiments. Our starting point is a dark energy scalar field $\phi$ with a small mass compared to typical particle physics scales. We will differentiate in what follows between theories which respect the shift symmetry $\phi \rightarrow \phi + c$, and those that break it. We assume that $\phi$ couples to matter universally, in such a way that matter fields move on geodesics of the Jordan frame metric \begin{equation} g_{\mu\nu}= A^2(\phi,X)\tilde{g}_{\mu\nu} + B(\phi,X) \partial_\mu\phi\partial_\nu\phi \end{equation} where $X=\frac{1}{2} \eta^{\mu\nu} \partial_\mu \phi\,\partial_\nu \phi$. We assume that in the collider environment the Einstein frame metric is simply the Minkowski metric $\tilde{g}_{\mu\nu}=\eta_{\mu\nu}$, which is certainly a reasonable assumption on earth where Newton's potential is very small. Expanding the coupling functions $A$ and $B$ in powers of $\partial_\mu\phi\partial_\nu \phi$ gives a tower of characteristic interactions. In particular we can write \begin{equation} A(\phi,X)=\sum_n \frac{a_n(\phi/{M})}{M^{4n}} X^n \end{equation} and \begin{equation} B(\phi,X)=\sum_n \frac{b_n({\phi}/{M})}{M^{4n}} X^n \end{equation} where $a_n$ and $b_n$ are dimensionless, and become constant and independent of $\phi$ when the shift symmetry is imposed. 
\subsection{Shift symmetric theories} \subsubsection{Coupling to matter} Assuming that the model is shift symmetric under $\phi \to \phi +c$, the lowest order interactions between the scalar and the Standard Model are through the Lagrangian terms \begin{equation} \label{eq:c1} \mathcal{L}_1 = \frac{\partial_{\mu}\phi\partial^{\mu} \phi}{M^4}T^{\nu}_{\;\nu} \end{equation} corresponding to a direct conformal coupling with constant $a_1$, and the disformal coupling \begin{equation} \label{eq:c2} \mathcal{L}_2 = \frac{\partial_{\mu}\phi\partial_{\nu} \phi}{M^4}T^{\mu\nu} \end{equation} associated with a constant $b_1$. Here $T_{\mu\nu}$ is the energy momentum tensor of all of the standard model fields. Note that no coupling between the scalar and photons arises from $\mathcal{L}_1$. Higher order operators can have the following forms: \begin{equation} \mathcal{L}_{3,n} = \left(\frac{\partial_{\mu}\phi\partial^{\mu} \phi}{M^4}\right)^nT^{\nu}_{\;\nu} \end{equation} coming from a constant $a_n$ and \begin{equation} \mathcal{L}_{4,n} = \left(\frac{\partial_{\alpha}\phi\partial^{\alpha} \phi}{M^4}\right)^n\frac{\partial_{\mu}\phi\partial_{\nu} \phi}{M^4}T^{\mu\nu} \end{equation} from the cross term between a constant $b_1$ and a constant $a_n$. Finally we can have higher order terms of the form \begin{multline} \mathcal{L}_{5,n-1} = \frac{1}{M^{4n}}\partial_{\alpha_1}\phi\partial_{\beta_1} \phi \ldots \partial_{\alpha_n}\phi\partial_{\beta_n} \phi\\ \frac{2^{n-1}}{\sqrt{-g}}\frac{\partial^{n-1}(\sqrt{-g}T^{\alpha_1\beta_1})}{\partial g_{\alpha_2 \beta_2}\ldots \partial g_{\alpha_n \beta_n}} \end{multline} where $n$ is a positive integer. The form of $\mathcal{L}_5$ is derived in~\cite{Brax:2014vva}. \subsubsection{Kinetic terms} Possible kinetic terms for the scalar fall into two classes. The first, known as $P(X)$, has the form \begin{equation} \mathcal{L}_{6,n} = \frac{(\partial_\mu\phi \partial^{\mu} \phi)^n}{M^{4(n-1)}} \end{equation} for positive integer $n$. 
A particular series of such operators, $\mathcal{L} = M^4 \sqrt{1+\partial_{\mu}\phi\partial^{\mu}\phi /M^4}$, arises in DBI theories~\cite{Silverstein:2003hf}, where the theory possesses an additional (non-linearly realised) symmetry which encodes 5d Lorentz invariance when $\phi$ is viewed as determining the position of a D3 brane in 5d Minkowski space. This extra symmetry allows the field to acquire large gradients while remaining in the regime of validity of the EFT. The second class of kinetic terms are known as the Galileons, and contain terms with more than one derivative per field. Around flat space they are invariant (up to total derivatives) under the symmetry $\phi \rightarrow \phi + c + b_{\mu}x^{\mu}$ for constant $c$ and $b_{\mu}$. There are five Galileon operators, but one is the tadpole and one is the canonical kinetic term, so there are only three more terms we need to consider: \begin{equation} \mathcal{L}_7 = \frac{1}{M^3} \partial_{\mu}\phi\partial^{\mu}\phi \Box \phi \end{equation} \begin{equation} \mathcal{L}_8 = \frac{1}{M^6} \partial_{\mu} \phi\partial^{\mu}\phi\left[2(\Box \phi)^2 -2 D_\alpha D_\beta \phi D^\beta D^\alpha \phi\right] \end{equation} \begin{multline} \mathcal{L}_9 = \frac{1}{M^9} \partial_{\mu} \phi\partial^{\mu}\phi\left[(\Box\phi)^3 -3(\Box\phi)D_\alpha D_\beta \phi D^\beta D^\alpha \phi \right. \\ \left. + 2 D_\alpha D^\beta \phi D_\beta D^\gamma\phi D_\gamma D^\alpha\phi\right] \end{multline} It has been shown for both $P(X)$ and Galileon theories that, while the scale $M$ in these operators is the strong coupling scale controlling self interactions of the scalar, the effective field theory description remains valid up to a higher cut-off scale \cite{deRham:2014wfa}. \subsection{Breaking the shift symmetry} This set of operators can be extended further if the shift symmetry is broken and terms depending on the undifferentiated scalar field are allowed. 
A scalar theory with a softly broken shift symmetry can still be cosmologically relevant; however, it suffers from issues of fine tuning, because it is necessary to keep the mass of the field light enough that it has a cosmologically relevant Compton wavelength. If we take $n$ to be a positive integer and $N$ to be the energy scale that enters with $\phi$, then each of the operators $\mathcal{L}_1$ - $\mathcal{L}_9$ can be pre-multiplied by a factor of $(\phi/N)^n$. There are two other possibilities which depend only on $\phi$. Firstly the coupling to matter can take the form \begin{equation} \mathcal{L}_{10,n} = \left(\frac{\phi}{N}\right)^n T^{\mu}_{\;\mu}. \end{equation} For a canonical scalar with an $m^2\phi^2$ potential this form of the coupling is extremely well constrained by fifth force searches~\cite{Adelberger:2003zx}. But in more complex and non-linear models collider bounds can still provide new information~\cite{Brax:2009aw,Brax:2009ey}. Secondly we can include potential terms for the scalar \begin{equation} \mathcal{L}_{11,n} = \frac{\phi^n}{N^{n-4}}, \end{equation} where $n$ can be either positive or negative. When $n=1$ this is a tadpole that, as mentioned above, we ignore. When $n=2$ this is a mass term for the scalar, which it will be helpful to consider separately in what follows. \subsection{Ghosts} The above list clearly does not include all possible operators that depend on $\phi$ and its derivatives. However the remaining terms will introduce ghost degrees of freedom, that is, fields with negative norms or wrong sign kinetic terms, leading to instabilities and a violation of unitarity. 
These terms have the schematic form \begin{equation} \mathcal{L}_{12,m,n}= \frac{\partial^m\phi^n}{M^{m+n-4}} \end{equation} with $m>n>1$, where the derivatives are contracted in a Lorentz invariant way. These operators can be included in our effective field theory as long as they are handled with care, as the instabilities introduced by the ghost only appear at the scale $M$, assumed to lie close to the cut-off of the theory, at which our effective treatment breaks down. The exceptions to this are the so-called beyond-Horndeski theories, which contain non-trivial constraints that remove the ghost degrees of freedom introduced by these operators. The nature of these constraints means that they are difficult to study on an operator by operator basis. \section{Collider Phenomenology} \label{sec:coll} We now consider the collider phenomenology of the operators introduced above by writing \begin{equation} {\cal{L}}_{\text{BSM}}={\cal{L}}_{\text{SM}}+\sum_{i}C_i {\cal{L}}_i + {1\over 2}m_\phi^2\phi^2 \,, \end{equation} with Wilson coefficients $C_i$, and we limit ourselves to the lowest non-trivial orders in each operator series. The production cross section of a given multiplicity of $\phi$ scalars depends on the ratio $\sim C_i^2/M^{2r}$, where $r$ is the characteristic scaling of the operators listed above. We will choose $C_i=1$ to report constraints solely expressed by the scale $M$, but it should be understood that $C_i\neq 1$ are possible choices, too. As already mentioned, we focus on light values of $m_\phi$ in comparison to typical collider scales; we adopt $m_\phi=0.1~\text{GeV}$ as our benchmark in the following. Out of the operators of the previous section, $\mathcal{L}_{10}$ is special as it enables the prompt decay of $\phi$ into SM fields if sufficient phase space is available. This changes the LHC phenomenology dramatically, also because single $\phi$ production becomes available, only suppressed by $\sim N^{-1}$, thus giving rise to a possibly dominant contribution. 
The mass $m_\phi$ becomes a crucial parameter in this case, and the LHC analysis strategies will then be fundamentally different from the situation when $\phi$ is stable on collider length scales. We will not discuss this possibility in detail at this stage but provide a qualitative discussion in Sec.~\ref{sec:shift} and leave a detailed analysis of the shift-symmetry breaking phenomenology to future work. Not considering $\mathcal{L}_{10,11}$ for the moment, the dominant phenomenological signature is missing energy, as the pair-produced scalar particles escape detection on collider scales. In a phenomenological bottom-up approach, such a signature can be attributed to a plethora of models, ranging from Supersymmetry through general dark matter signatures to extra dimensions. The operators listed in the previous section, however, have a significantly modified phenomenology due to their particular derivative structure and characteristic mass suppression, in addition to their relation to the energy momentum tensor. This also provides an opportunity to address the inverse problem by directly investigating the non-linear structure of the $\phi$ interactions and their impact on LHC phenomenology. In the following, we will identify suitable search channels for the scenarios discussed in the previous section, extending beyond available investigations~\cite{Brax:2015hma}, specifically with the aim of distinguishing the leading EFT operators ${\cal{L}}_1$ and ${\cal{L}}_2$. We will also investigate the characteristic behaviour of non-linearities and discuss the prospects to pin down the dark energy character of a missing energy signature if such an observation is made at the LHC in the future. We will then come back to broken shift symmetry operators to discuss their phenomenological impact. Throughout we use the combination of {\sc{FeynRules}}~\cite{Alloul:2013bka}, {\sc{Ufo}}~\cite{Degrande:2011ua}, and {\sc{MadGraph5}}~\cite{Alwall:2014hca} to simulate the final states. 
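Since the production cross sections scale as $\sim C_i^2/M^{2r}$, a bound on $M$ quoted for $C_i=1$ can be translated to other coefficient choices by matching the excluded cross section. A minimal sketch (the effective power $r$ is operator dependent; $r=4$ for a single insertion of ${\cal{L}}_1$ or ${\cal{L}}_2$ and the 700 GeV input value are illustrative assumptions here):

```python
def rescaled_bound(M0, C, r):
    """Translate a lower bound M0 on M, quoted for C = 1, to a Wilson
    coefficient C, by matching the excluded cross section:
    C^2 / M^(2r) = 1 / M0^(2r)  =>  M = M0 * C**(1/r)."""
    return M0 * C ** (1.0 / r)

# Hypothetical illustration: a 700 GeV bound obtained at C = 1 with r = 4
print(rescaled_bound(700.0, 16.0, 4))  # ~1400 GeV for C = 16
```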
\subsection{Dark energy signatures at the LHC} \label{sec:colla} \begin{figure}[!t] \centering \includegraphics[height=6.0cm]{plot_emiss_c1c2-eps-converted-to.pdf} \caption{\label{jet:c1c2comp} Shape comparison of the jet+missing transverse momentum distribution for conformal and disformal couplings, Eqs.~\eqref{eq:c1} and \eqref{eq:c2}.} \end{figure} Under the assumption that $\phi$ is stable on collider scales, the dominant signature is missing energy as the visible particles recoil against the invisible and pair-produced $\phi$ bosons. There is a comprehensive catalogue of missing energy searches, mostly interpreted in a Supersymmetry or dark matter-related context. Channels that have been scrutinised recently are mono-boson production in association with missing energy (e.g.~\cite{cmsphoton,atlasphoton,cmslepton,atlaslepton,atlasdilepton}) and mono-jet searches~\cite{cmsjets,atlasjet,Chatrchyan:2013mys,Aad:2014wea}. The latter have been identified as excellent candidates to constrain disformal couplings $\sim T^{\mu\nu}\partial_\mu\phi \partial_\nu\phi$ in~\cite{Brax:2015hma} motivated by the large momentum transfers that are probed with sufficient statistics in the mono-jet signal, especially for the high missing energy selections of~\cite{cmsjets}. Turning to the operator of Eq.~\eqref{eq:c1}, the scaling arguments of the $2\phi $+jet signature still hold, see Fig.~\ref{jet:c1c2comp}. The crucial difference between the ${\cal{L}}_1$ and ${\cal{L}}_2$ couplings lies in the fact that the coupling to the trace of the energy momentum tensor is tantamount to coupling the $\phi$ pairs to all explicit conformal invariance-violating terms in the Standard Model, in particular to all mass terms. This results in an extremely small cross section of the mono-jet final states as the quark masses are small and the hadronic cross section receives a large contribution from massless gluons. 
Using {\sc{CheckMate}}~\cite{Drees:2013wra} to survey ATLAS and CMS mono-jet analyses we can only set a constraint at 95\% confidence level of\footnote{We only quote the most sensitive search region in the respective analyses.} \begin{equation} \label{eq:liml1j} \parbox{0.37\textwidth}{ \begin{tabular}{c >{\hspace{0.3cm}} l >{\hspace{0.3cm}} r } ${\cal{L}}_1 $ & $M\gtrsim 75.4~\text{GeV}$ & $\text{(ATLAS~\cite{atlasjet})}$ \\[0.1cm] $2\phi+\text{jet}$ & $M\gtrsim 66.5~\text{GeV}$ & $\text{(CMS~\cite{Chatrchyan:2013mys})}$ \\ \end{tabular} }\,, \end{equation} \begin{figure}[!t] \centering \includegraphics[height=6.0cm]{plot_emiss_c1c2_t-eps-converted-to.pdf} \caption{\label{top:c1c2compt} Shape comparison of the $t\bar t$+transverse momentum distribution in the presence of conformal and disformal couplings, Eqs.~\eqref{eq:c1} and \eqref{eq:c2}.} \end{figure} \begin{figure*}[!t] \centering \subfigure[\label{jet:dmcomp}]{\includegraphics[height=6.0cm]{plot_comp_j-eps-converted-to.pdf}} \hfill \subfigure[\label{top:dmcomp}]{\includegraphics[height=6.0cm]{plot_comp_t-eps-converted-to.pdf}} \caption{\label{fig:dmcomp} Shape comparison of the dark energy scalar $p_{T,\text{mis}}$ distribution for (a) $2\phi$+jet and (b) $2\phi+t\bar t$ with a scalar mediator of mass $1~$TeV.} \end{figure*} The observation that ${\cal{L}}_1$ is directly related to explicit mass scales, however, directly motivates top quark production in association with missing energy. The reason for this is twofold. Firstly, the top quark is the heaviest particle in the SM, and as a consequence will have a large ${\cal{L}}_1$-mediated coupling to the dark energy scalars. Secondly, top quark pair production with a total strong interaction-dominated production cross section of around $900$~pb at 13 TeV is far more accessible than the Higgs boson, which would be motivated as a potential probe of ${\cal{L}}_1$ along the same line of arguments. 
Indeed, we find that $2\phi+t\bar t$ production has a significant cross section for $C_1\neq 0$ and setting more stringent limits becomes possible. We find \begin{equation} \label{eq:liml1t} \parbox{0.37\textwidth}{ \begin{tabular}{c >{\hspace{0.3cm}} l >{\hspace{0.3cm}} r } ${\cal{L}}_1 $ & $M\gtrsim 237.4~\text{GeV}$ & $\text{(ATLAS~\cite{ATLAS-CONF-2013-024})}$ \\[0.1cm] $2\phi+t\bar t$ & $M\gtrsim 192.8~\text{GeV}$ & $\text{(CMS~\cite{Chatrchyan:2013mys})}$\\ \end{tabular} } \end{equation} This not only motivates $t\bar t + p_{T,\text{mis}}$ searches as probes for dark energy scalars, but in particular the combination of mono-jet and top pair$+ p_{T,\text{mis}}$ searches can provide a fine-grained picture of the phenomenology of ${\cal{L}}_{1,2}$ as we will see in the following when we study the effects of ${\cal{L}}_2$. For the mono-jet signatures, the most constraining 8 TeV analyses yield \begin{equation} \label{eq:liml2j} \parbox{0.37\textwidth}{ \begin{tabular}{c >{\hspace{0.3cm}} l >{\hspace{0.3cm}} r } ${\cal{L}}_2 $ & $M\gtrsim 693.9~\text{GeV}$ & $\text{(ATLAS~\cite{Aad:2014wea})}$ \\[0.1cm] $2\phi+\text{jet}$ & $M\gtrsim 822.8~\text{GeV}$ & $\text{(CMS~\cite{Chatrchyan:2013mys})}$~.\\ \end{tabular} } \end{equation} While these findings are in agreement with the dark matter searches~\cite{cmsjets} recast in~\cite{Brax:2015hma}, we note that the cut scenarios devised in searches for Supersymmetry~\cite{Aad:2014wea,Chatrchyan:2013mys} are slightly better tailored towards dark energy scalar searches. This already sheds some light on the possible discrimination of the nature of a dark energy signature from dark matter signatures. We will discuss this further below. 
The limits on ${\cal{L}}_2$ from $t\bar t+p_{T,\text{mis}}$ searches are \begin{equation} \label{eq:liml2t} \parbox{0.37\textwidth}{ \begin{tabular}{c >{\hspace{0.3cm}} l >{\hspace{0.3cm}} r } ${\cal{L}}_2 $ & $M\gtrsim 461.2~\text{GeV}$ & $\text{(ATLAS~\cite{Aad:2014wea})}$ \\[0.1cm] $2\phi+t\bar t$ & $M\gtrsim 399.8~\text{GeV}$ & $\text{(CMS~\cite{Chatrchyan:2013mys})}$~.\\ \end{tabular} } \end{equation} As expected these limits are not as strong as the ones that are obtained from mono-jet signatures, as large momentum transfer configurations in $t\bar t+p_{T,\text{mis}}$ have a smaller differential cross section, leading to a decreased sensitivity of top pair and missing energy searches compared to mono-jet analyses. Together, the results of Eqs.~\eqref{eq:liml1j}-\eqref{eq:liml2t} allow us to draw the conclusion that the leading dark energy interactions can be constrained by combining $t\bar t$ and mono-jet searches, with current constraints ranging in the few hundred GeV regime, based on the LHC run I analyses provided in {\sc{CheckMate}}. These constraints can be expected to be pushed during run II (100/fb) with further improvements possible during the LHC high luminosity phase. They provide important complementary information to other existing searches for dark energy and we encourage the experimental community to perform missing energy searches as outlined above also in the dark energy context. \subsection{Comparison with LHC dark matter phenomenology} \label{sec:darkmatter} A question that becomes important in case of a missing energy-related new physics discovery at the LHC is pinning down, or excluding its relation to dark energy. 
In the case of Supersymmetry, we can expect new exotic states to accompany a missing energy signature in complementary searches, while in dark matter scenarios, similar to dark energy, additional degrees of freedom can lie beyond the kinematic coverage of LHC searches~\cite{Goodman:2010yf,Goodman:2010ku,Fox:2011pm,Buchmueller:2013dya,Abdallah:2014hon,Abdallah:2015ter}. This prompts the question: can we tell the difference between the leading dark energy interactions and a similar scalar dark matter scenario? To this end, we show in Fig.~\ref{fig:dmcomp} the normalised expected $p_{T,\text{mis}}$ distributions of the mono-jet and $t\bar t +p_{T,\text{mis}}$ channels for ${\cal{L}}_1$ and ${\cal{L}}_2$ alongside the $p_{T,\text{mis}}$ spectrum of a simplified dark matter model characterised by \begin{equation} {\cal{L}}_{\text{BSM}} \supset {\cal{L}}_{\text{SM}} + {1\over 2} g_\phi\phi^2 Y + {1\over \sqrt{2}} \sum g_i \bar f_i f_i Y, \end{equation} where we consider a scalar mediator $Y$ coupling to SM fermions $f_i$. We set the mediator mass $m_Y = 1~$TeV, such that our comparison is not affected by $Y$ going on-shell. As can be seen from Fig.~\ref{fig:dmcomp}, the energy dependence of a typical dark matter scenario (motivated through a Higgs portal interaction for instance) differs from the dark energy scalar production. While dark energy signatures can be constrained by adapting dark matter searches, their phenomenology is intrinsically different. This provides a new avenue to look for physics beyond the Standard Model through analyses that are specifically tailored to dark energy signals, which will likely result in a better sensitivity than quoted in Eqs.~\eqref{eq:liml1j}-\eqref{eq:liml2t}. 
\begin{figure}[!t] \centering \includegraphics[height=6.0cm]{plot_emiss_c2_nonlin-eps-converted-to.pdf} \caption{\label{jet:c2nonlin} Missing transverse momentum distribution for the mono-jet channel with $\phi$ multiplicities up to three.} \end{figure} \begin{figure}[!t] \centering \includegraphics[height=6.0cm]{plot_emiss_c2_nonlin_c7-eps-converted-to.pdf} \caption{\label{jet:c2nonlinc7} Missing transverse momentum distribution for the mono-jet channel with $\phi$ multiplicities up to four, based on combining $C_2$ with $C_7$. The distributions of $C_{6,8,9}$ are shown separately for comparison.} \end{figure} \begin{figure}[!t] \centering \includegraphics[height=6.0cm]{plot_emiss_c2_c3ac4ac5a-eps-converted-to.pdf} \caption{\label{jet:c2nonlinc3a} Same as Fig.~\ref{jet:c2nonlinc7}, but considering the interactions arising from the higher order terms $C_{4,1}$ and $C_{5,1}$. The impact of $C_{3,1}$ is even more suppressed than that of $C_{4,1}$, so we do not include it in the histogram.} \end{figure} \subsection{Phenomenological tests of higher order operators} \label{sec:collb} So far we have limited ourselves to the operators ${\cal{L}}_{1,2}$, i.e. the leading interactions of scalar dark energy with the SM sector discussed in Sec.~\ref{sec:eff}. We now turn to the higher order operators ${\cal{L}}_{i,1}$, $3\leq i\leq 5$, and ${\cal{L}}_{12,4,3}$ (focussing on standard propagators). Given the intrinsic non-linear structure of scalar dark energy, it is worthwhile to address the question of whether these interactions impact the limit setting. Alternatively, if they turn out to have a significant impact (i.e. for a comparably low $M$) we might be able to use collider measurements to formulate a refined picture of the dark energy nature. The phenomenology of the higher order operators introduced in Sec.~\ref{sec:eff} can be classified according to the dark energy scalar multiplicity in the final states. Since they all lead to the same signature, i.e. 
they contribute to missing energy, we may add the respective $\phi$ multiplicities incoherently to the full hadronic final state to include the effects of the higher order operators. The number of $\phi$ fields in a particular operator dictates the number of effective operator insertions, which again determines the effective scaling of a cross section with the scale $M$. For instance, ${\cal{L}}_7$ describes a scalar self-interaction and will not contribute to $2\phi$ production for our case $C_{10}=0$. However it can be combined with ${\cal{L}}_{1,2}$ to obtain a $3\phi$ final state with a scaling $\sim M^{-7}$ at the amplitude level. Note that in this way the interactions ${\cal{L}}_{1,2}$ are probed with one additional off-shell leg, i.e. in a different kinematic regime. Again we set the Wilson coefficients $C_i=1$ in the following. \begin{figure*}[!t] \centering \centering \subfigure[\label{top:c1nonlin}]{\includegraphics[height=6.0cm]{plot_emiss_c1_nonlin_t-eps-converted-to.pdf}} \hfill \subfigure[\label{top:c2nonlin}]{\includegraphics[height=6.0cm]{plot_emiss_c2_nonlin_t-eps-converted-to.pdf}} \caption{\label{top:nonlin} Same as Fig.~\ref{jet:c2nonlin} except that we consider the interactions parameterised by $C_1$ (a) and $C_2$ (b) for the $t\bar t+p_{T,\text{mis}}$ final state.} \end{figure*} \begin{figure*}[!t] \centering \centering \subfigure[\label{top:c1nonlin2}]{\includegraphics[height=6.0cm]{plot_emiss_c1_c3ac4ac5a_t-eps-converted-to.pdf}} \hfill \subfigure[\label{top:c2nonlin2}]{\includegraphics[height=6.0cm]{plot_emiss_c2_c3ac4ac5a_t-eps-converted-to.pdf}} \caption{\label{top:nonlin2} Same as Fig.~\ref{jet:c2nonlinc3a} except that we consider the interactions parameterised by $C_1$ (a) and $C_2$ (b) for the $t\bar t+p_{T,\text{mis}}$ final state for the operators $C_{4,1}$ and $C_{5,1}$.} \end{figure*} Starting from Fig.~\ref{jet:c2nonlin}, we show the effects of combining different operators and $\phi$ multiplicities up to 
four in Fig.~\ref{jet:c2nonlinc3a} for the operator $C_2$, for which we can formulate constraints in the first place. We choose $M=700~\text{GeV}$, inspired by our results of the previous section. From Fig.~\ref{jet:c2nonlin}, it becomes apparent that the only operator that significantly adds $3\phi$ in comparison to $2\phi$ production is $C_7$, while $C_{12}$ has a negligible effect. In general, for $4\phi$ production, while the energy dependence amongst the different operators $C_{3,1},C_{4,1},C_{5,1},C_{6},C_{8},C_{9}$ is different (see in particular Fig.~\ref{jet:c2nonlinc7}), their overall contribution in light of the constraints obtained in Sec.~\ref{sec:colla} is negligible. We repeat the same analysis for the $t\bar t+p_{T,\text{mis}}$ channel in Figs.~\ref{top:nonlin} and \ref{top:nonlin2}, with scale choice $M=500~\text{GeV}$ following our discussion in Sec.~\ref{sec:colla}. For comparability, we also choose the same $M$ for the limits from $C_1$, although the current constraints on $M$ are considerably lower. The qualitative impact of the higher-order interactions is analogous to the mono-jet channel, and the comparison of $t\bar t+p_{T,\text{mis}}$ with jet$+p_{T,\text{mis}}$ shows the higher order operators' qualitative behavior as a function of $M$. As we have adopted a lower scale than in the jet+$p_{T,\text{mis}}$ channel, we see that operators like ${\cal{L}}_{4,1},{\cal{L}}_{5,1}$, which share similarities with ${\cal{L}}_{2}$ in terms of their structure of $\phi$-derivatives, start to compete with the $2\phi$ final state, Fig.~\ref{top:c1nonlin2}. When limits on $M$ are weak, this can mean that the higher order operators dominate the phenomenology of a particular missing energy search. Such a result needs to be interpreted with care, as it might correspond to a breakdown of perturbation theory.
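The competition between multiplicities can be made quantitative by noting that an amplitude suppressed by $M^{-n}$ yields a rate scaling as $M^{-2n}$, so lowering the probed scale enhances higher-multiplicity contributions disproportionately. A minimal numerical sketch of this bookkeeping (an illustration, not part of the original analysis; the comparison value $n=4$ is a hypothetical lower insertion count used purely for contrast):

```python
def xsec_enhancement(M_old, M_new, n):
    """Relative growth of a cross section mediated by an amplitude ~ M^-n
    when the suppression scale is lowered from M_old to M_new."""
    return (M_old / M_new) ** (2 * n)

# The 3-phi amplitude obtained by combining L_{1,2} with L_7 scales as M^-7;
# lowering M from 700 GeV (mono-jet fit) to 500 GeV (t tbar fit) enhances
# its rate much faster than a contribution with fewer insertions
enh_n7 = xsec_enhancement(700.0, 500.0, 7)
enh_n4 = xsec_enhancement(700.0, 500.0, 4)  # hypothetical lower insertion count
```

The much steeper growth of the $n=7$ rate illustrates why higher-multiplicity final states start to compete once the probed scale is lowered.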
In the particular case of ${\cal{L}}_1$, however, the tree level effects can be suppressed by requiring a relatively small explicit violation of conformal invariance, while the effects of e.g. ${\cal{L}}_5$ are not restricted by an approximate chiral invariance of ${\cal{L}}_{\text{SM}}$ (this led to stronger constraints on ${\cal{L}}_2$ in Sec.~\ref{sec:colla}) and mediate prompt $4\phi$ production. If operators fall into the same category, however, such as ${\cal{L}}_{4,n}$ and ${\cal{L}}_{5,n}$, competing multiplicities signal a poor convergence of the effective theory. For example, by choosing different $\phi$ multiplicities to obtain different loop orders contributing to, say, the top 2-point function, we can see that different loop orders start to become equally important, influencing the top lifetime, which is related to the imaginary part of the 2-point function. While the relative size of the operators depends on the particular scalar dark energy scenario, better adapted search strategies as well as increased statistics will push the scale also for these interactions to $\sim 700~\text{GeV}$, which effectively restores a good behavior in the multiplicity scaling pattern that we already observe for ${\cal{L}}_{2}$ in Fig.~\ref{top:c2nonlin2}. In this case ${\cal{L}}_{7}$ is the only interaction that still leaves a sizable impact, and it can then be constrained if a new physics discovery exhibits a dark energy character. \subsection{On the phenomenology of shift-symmetry breaking theories} \label{sec:shift} Our analysis so far is valid for coefficient choices $C_{10}$ that leave the scalar stable on collider distances. If there is a significant $C_{10}\neq 0$, the phenomenology changes dramatically, as the scalar can be singly produced and can decay to lighter hadrons, leptons, or photons.
For example, the operator ${\cal{L}}_{10}$ introduces interaction vertices with fermions $f_{i}$ (where the index describes the fermion generation) of the form \begin{equation} \label{eq:decay} \parbox{2.2cm}{\includegraphics[scale=0.8]{diag.pdf}}\quad = {4iC_{10}\over N} m_{f_i} \delta_{ji}\,. \end{equation} Depending on the scenario, this can lead to spectacular signatures that range from (highly) displaced vertices, similar to scenarios of hidden valley models or Supersymmetry (see~\cite{Tarem:2005gxa,Ciapetti:2008zz,Nisati:330302,Ambrosanio:468069,Ellis:1006573}), to emerging signatures in the different layers of the detector~\cite{Englert:2011us,Schwaller:2015gea}. These signatures are fundamentally different from the ones that we have discussed so far, and a comprehensive investigation is beyond the scope of this work.\footnote{It is however worthwhile to remark that since ${\cal{L}}_{10,1}$ effectively describes an interaction of a Higgs boson (i.e. it couples like a pseudo-dilaton), the dark energy phenomenology shares the signatures of light Higgs portal scalars discussed in~\cite{Englert:2011us,Curtin:2013fra} and in ongoing efforts within the Higgs Cross Section Working Group~\cite{YR4}. This includes in particular the phenomenology of heavy $\phi$ bosons in the TeV range.} Eq.~\eqref{eq:decay} leads to a partial $\phi$ decay width into fermions \begin{equation} \label{eq:decwidth} \Gamma(\phi\to f\bar f) = {2\over \pi} {C_{10}^2} {m_f^2\over N^2} {(m_\phi^2-4m_f^2)^{3/2}\over m_\phi^2}\, , \end{equation} which translates into a distance traveled through the detector \begin{equation} D={\beta\gamma \over \Gamma_\phi} \end{equation} where $\Gamma_\phi$ is the total decay width, which is dominated by the size of the effective Yukawa interaction $\sim C_{10} m_f/N$ if sufficient phase space is available, i.e. for $\phi$ masses not too close to the respective decay threshold.
The width is typically very small, and for a mass of $0.1~\text{GeV}$ we obtain a total decay width of $\sim 2\times 10^{-10}$~GeV. The probability of decaying between distances $L_1 < L_2$ is then given by \begin{equation} P(L_1\leq L \leq L_2 ) =\int_{L_1}^{L_2} \hbox{d}L'\, {1\over D} \exp\left( - {{L' \over D}}\right) = e^{-L_1/D} - e^{-L_2/D}\,. \end{equation} To get an idea of the resulting phenomenology, we consider a dark energy scalar with mass $m_\phi=20~\text{GeV}$ and its decay $\phi \to b\bar b$, produced at $p_T\simeq 100~\text{GeV}$ (the typical scale of a mono-jet configuration). For this mass choice the decay is open and enhanced over the other channels. For a choice $C_{10}m_b/N \simeq 10^{-8}$ we can expect that around 99\% of the produced $\phi$ bosons will decay inside the detector $\lesssim 7~\text{m}$ (using the transverse CMS dimensions in this particular case): 54\% of decays in the tracker ($L\lesssim 1~\text{m}$), 41\% inside the electromagnetic and hadronic calorimeters ($1~\text{m}\lesssim L\lesssim 4~\text{m}$) and 4\% inside the muon detectors ($4~\text{m}\lesssim L\lesssim 7~\text{m}$).\footnote{Due to a different geometry, we can expect a slightly better coverage by the ATLAS experiment.} The search strategies in each part of the detector depend on trigger and selection criteria as well as on the calibrated performance of each part of the detector. For instance, fermions are typically stripped off in the first layers of the muon system, hence a decay $\phi\to b\bar b$ in that region of the detector would be considered as noise. On the other hand, decays inside the tracker, whose high resolution enables the search for displaced vertices, make this part of the parameter space accessible. The phenomenology strongly depends on the effective and dominant Yukawa interaction $C_{10} m_b/N$.
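The chain of estimates above (partial width, boosted decay length, detector-region fractions) is straightforward to reproduce numerically. The following sketch is an illustration rather than the analysis code of this work; the $b$-quark mass value and the detector-region boundaries are assumed inputs, so the resulting fractions reproduce the quoted percentages only qualitatively:

```python
import math

HBARC = 1.973269804e-16  # hbar*c in GeV*m, converts a width 1/Gamma to metres

def width_phi_ff(m_phi, m_f, y_eff):
    """Gamma(phi -> f fbar) from the partial-width formula above,
    with y_eff = C_10 * m_f / N the effective Yukawa coupling."""
    if m_phi <= 2.0 * m_f:
        return 0.0  # decay kinematically closed
    return (2.0 / math.pi) * y_eff**2 * (m_phi**2 - 4.0 * m_f**2) ** 1.5 / m_phi**2

def decay_length(m_phi, p, gamma_total):
    """Mean lab-frame path D = beta*gamma / Gamma, in metres (beta*gamma = p/m)."""
    return (p / m_phi) * HBARC / gamma_total

def frac_decays(D, L1, L2):
    """P(L1 <= L <= L2) for an exponential decay law."""
    return math.exp(-L1 / D) - math.exp(-L2 / D)

# Benchmark of the text: m_phi = 20 GeV, phi -> b bbar, p_T ~ 100 GeV,
# C_10*m_b/N ~ 1e-8; m_b = 4.18 GeV is an assumed input
gamma = width_phi_ff(20.0, 4.18, 1e-8)
D = decay_length(20.0, 100.0, gamma)      # mean decay length in metres
tracker = frac_decays(D, 0.0, 1.0)
calo = frac_decays(D, 1.0, 4.0)
muon = frac_decays(D, 4.0, 7.0)
```

With these inputs one finds a decay length of order a metre, with the bulk of the decays inside the tracker and calorimeters, in qualitative agreement with the split quoted above.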
Increasing the coupling to $C_{10} m_b/N\simeq 5\times 10^{-8}$, all particles decay inside the tracker, with $99\%$ of $p_T\simeq 100~\text{GeV}$ events leaving displaced vertices, whereas for $C_{10} m_b/N\simeq 10^{-6}$ the $\phi$ bosons will decay before leaving a displaced vertex signal. In such a case, additional reconstruction techniques are available, but they are subject to detector systematics as well as large QCD backgrounds. The considerably larger scales that can be probed with displaced vertex searches (note that this also applies to quark flavors and leptons other than the bottom considered in this example) should allow us to probe the region $N\sim 10^8~\text{GeV}$, which will provide comparably stronger constraints on $N$ than on $M$. For the latter we lose sensitivity as soon as the scalar is allowed to decay inside the detector.\footnote{Note that even for $\phi$ masses below the lightest fermion thresholds, the loop-induced decay to photons and gluons can still dominate.} This provides an interesting and complementary avenue to look for dark energy scalars on the basis of existing searches. We leave a more detailed investigation to future work~\cite{future}. \section{Summary and Conclusions} \label{sec:conclusions} The mystery of dark energy provides motivation to consider new physics that is relevant on cosmological scales, in particular the possibility that light dark energy scalar fields might exist and interact with the Standard Model. In this paper we have surveyed a large class of effective dark energy interactions and motivated the combination of mono-jet and $t\bar t+p_{T,\text{mis}}$ analyses to constrain the leading aspects of dark energy interactions with the SM sector at the LHC. In passing, we have used the phenomenological signatures in these channels to obtain the latest LHC constraints on the dominant dark energy interactions by recasting existing 8 TeV Supersymmetry and dark matter analyses.
While dark energy signatures share some aspects of dark matter phenomenology, they are in general different, and provide a new phenomenological avenue to look for well-motivated signs of physics beyond the SM. In case a new physics discovery is made that falls into the category of a scalar dark energy signal, some aspects of the dark energy scalar's self-interactions can be probed by investigating the missing energy-dependence of the new physics signal, depending on the particular dark energy model. In particular, the discrimination from the competing scalar dark matter interpretation will become possible. Allowing for the presence of shift symmetry-breaking operators, the sensitivity to shift symmetry-conserving operators is decreased when scalar decays on collider scales become possible. In such a case, searches for displaced vertices provide an avenue to constrain the presence of such scalars for relatively large scales. \medskip \noindent{\it{Acknowledgements}} --- PB acknowledges partial support from the European Union FP7 ITN INVISIBLES (Marie Curie Actions, PITN-GA-2011-289442). MS is supported in part by the European Commission through the ``HiggsTools'' Initial Training Network PITN-GA-2012-316704. CB is supported by a Royal Society University Research Fellowship and in part by the IPPP Associateship programme. We would like to thank Eugene Lim, Andrew Pilkington and David Seery for very helpful discussions in the preparation of this work.
\section{Introduction} In the last couple of decades, the growing world of nanotechnology has put at our disposal several classes of low-dimensional materials. Particularly interesting examples are two-dimensional (2D) quantum dots~\cite{qd1,qd2} (QDs), formed at the interface between two semiconductors. These systems are not only important from a technological point of view, but are also remarkable from a purely theoretical perspective. In fact, as they can be built with different shapes and sizes, and with a varying number of electrons, they are the ideal system to study electronic correlation. The problem of electronic correlation is perhaps the most challenging in the field of condensed matter physics. Numerous approaches to handle this problem, with varying degrees of sophistication and complexity, have been put forward since the very birth of quantum mechanics. Few-electron QDs can be studied accurately by, e.g., configuration interaction~\cite{rontani} (CI), or by quantum Monte Carlo techniques~\cite{harju,pederiva,guclu} (QMC). To describe the electronic properties of larger dots one has to resort to alternative approaches such as extended Hartree-Fock~\cite{cavaliere} (HF) or density-functional theory~\cite{qd2,dft_large,spin_droplet} (DFT). In DFT, the complexity of the many-body problem is embodied in the so-called exchange and correlation functional. Several approximations exist for this quantity, allowing for very accurate calculations of electronic properties in atoms, molecules, and solids. Clearly, most exchange-correlation functionals are derived for three-dimensional electronic systems. However, these approximations are known to break down when applied in the 2D limit.~\cite{kim_pollack} This calls for new formulas specialized for the 2D case.
Particularly challenging in these applications is the fact that, compared with atomic systems, correlation effects in 2D typically have a more prominent role due to the large size of the systems (from $10^{-8}$ to $10^{-6}$\,m), and to their low electronic densities. Within DFT, 2D systems such as QDs are commonly studied using the 2D version of the local-density approximation (LDA). It is a combination of the exchange functional derived for the uniform 2D electron gas by Rajagopal and Kimball,~\cite{rajagopal} and the corresponding correlation functional fitted to accurate QMC calculations. The first of these LDA correlation functionals was put forward by Tanatar and Ceperley~\cite{tanatar} in 1989. Later on, it was generalized for the complete range of collinear spin polarizations by Attaccalite {\em et al.}~\cite{attaccalite} Applications of the 2D-LDA to QDs have been generally successful, even up to high magnetic fields.~\cite{qd2,spin_droplet} The LDA, however, suffers from several shortcomings, already well known from the three-dimensional world, especially for strongly inhomogeneous systems, or in the low-density (strong correlation) regime. Several alternative paths exist to go beyond the simple LDA. A particularly successful approach starts with the seminal work of Colle and Salvetti~\cite{CS1,CS2} (CS) who, starting with a physically motivated ansatz for the many-body wavefunction, developed a closed formula for the correlation energy. This formula has received considerable interest, especially because it was used to derive the popular Lee-Yang-Parr (LYP) generalized gradient functional.~\cite{LeeYangParr:88} Together with Becke's exchange functional~\cite{Becke:88} it forms the BLYP functional, and in hybrid schemes it is a part of B3LYP,~\cite{Becke:93} X3LYP,~\cite{x3lyp} etc.
Interestingly, the same CS formula can also be interpreted as an orbital-dependent correlation functional, especially suited for DFT calculations beyond the exact-exchange.~\cite{GraboGross:95_Pittalis2} It should, however, be emphasized that the CS correlation-energy functional has several known limitations.~\cite{SinghMassaSahni:99,TaoGoriPerdewMcWeeny:01,ImamuraScuseria:02} First, while short-range correlations are well described,~\cite{TaoGoriPerdewMcWeeny:01} important long-range correlations are missing. Even if these latter effects often cannot be ignored in large molecules and solids, they can be energetically negligible in small systems such as atoms. However, it has been shown recently that the long-range correlation problem may be cured to some extent. \cite{MFA,M} Secondly, in the CS functional the kinetic-energy contribution to the correlation energy (named below as the kinetic-energy correlation) is taken into account only in an empirical fashion through the fitting parameter. In this context, interesting modifications of the original CS approach have recently been proposed.~\cite{ragot} In this work, we generalize the CS scheme~\cite{CS1,CS2} to 2D. Then we use a Gaussian approximation for the pair probability function. Finally, we introduce an {\em ad-hoc} modification which, {\it post factum}, seems to recover both the long-range and the kinetic-energy correlation to a good extent. \section{Theory} Our starting point is the following ansatz~\cite{CS1,CS2} for the many-body wavefunction $\Psi$ \begin{multline}\label{cs} \Psi(\vr_1 \sigma_1,..., \vr_N \sigma_N) = \Psi_{\rm SD}(\vr_1 \sigma_1,..., \vr_N \sigma_N) \\ \times \prod_{i < j} \left[1-\varphi(\vr_i,\vr_j) \right] \,.
\end{multline} Here, $\vr$ and $\sigma$ denote respectively the space and spin coordinates of the electrons, and $\Psi_{\rm SD}$ indicates the {\em single} Slater determinant of HF theory, which in the DFT context should be replaced by the Slater determinant generated from the occupied Kohn-Sham orbitals. The function $\varphi$ describes the correlated part of the wavefunction. In the center-of-mass, $\vr=(\vr_1+\vr_2)/2$, and relative, $\vs=\vr_1-\vr_2$, coordinate system, it can be written as \begin{equation}\label{cf} \varphi(\vr,s) = \left[ 1 - \Phi(\vr)(1 + \alpha s) \right]e^{-\beta^2(\vr) s^2} \,, \end{equation} where the quantities $\Phi$, $\alpha$, and $\beta$ act as correlation factors. We point out that we introduce $\beta(\vr)$ as a {\em local} $\vr$-dependent quantity for reasons which become obvious below. To find a reasonable value for $\beta$, which determines the local correlation length, we estimate the area where the electron is correlated as \begin{equation} A(\vr) = \int\!\!d^2s\; e^{-\beta^2(\vr) s^2} = \frac{\pi}{\beta^2(\vr)} \,. \end{equation} Then, we assume that this area is proportional to the area of the Wigner circle $\pi r_s^2$, where the density parameter $r_s$ is given through the total electron density as $r_s(\vr)=1/\sqrt{\pi\rho(\vr)}$. Thus, we find the relation \begin{equation}\label{beta} \beta(\vr)=q\sqrt{\rho(\vr)} \,, \end{equation} where $q$ is a fitting parameter. The SD wavefunction in Eq.~\eqref{cs} is recovered when all pairs of electrons are far apart from each other. In contrast, when two electrons are brought to the same point, the parameter $\alpha$ is chosen to satisfy the {\em cusp condition} (for the singlet case) of the wavefunction. It can be shown~\cite{RJK} that, for the 2D case, $\alpha=1$. 
The function controlling the exponential decay is given by \begin{equation}\label{Phi} \Phi(\vr) = \frac{\beta(\vr)}{\beta(\vr) + \sqrt{\pi}/2}, \end{equation} which can be deduced by imposing the condition~\cite{CS2,MS} \begin{equation} \int\!\!d^2s\; \varphi(\vr,\vs) = 0 \,. \end{equation} By using the wavefunction~\eqref{cs} and the definition of the correlation factor $\varphi$ given by~\eqref{cf}, we can obtain a formula for the correlation energy~\cite{CS1,CS2} \begin{equation}\label{ce2} E_c = \int\!\!d^2r\!\!\int\!\!d^2s\; \rho_{2,{\rm SD}}(\vr,\vs) \frac{\varphi^2(\vr,\vs)-2\varphi(\vr,\vs)}{s} \,, \end{equation} where $\rho_{2,{\rm SD}}(\vr,\vs)$ refers to the SD pair density. To simplify this expression, we write a Gaussian approximation for this function, \begin{equation} \rho_{2,{\rm SD}}(\vr,\vs) = \rho_{2,{\rm SD}}(\vr) e^{-s^2/\gamma^2(\vr)} \,. \end{equation} The use of this Gaussian approximation was proposed in the context of the CS scheme by Moscard\'o and San-Fab\'ian,~\cite{MS} but it has been used in the field of DFT even further back.~\cite{Parr88} To obtain the function $\gamma(\vr)$, that defines the width of the Gaussian, we apply the exact sum-rule \begin{align} \rho_{{\rm SD}}(\vr) & = \frac{2}{N-1}\int\!\!d^2s\;\rho_{2,{\rm SD}}(\vr,\vs)\nonumber\\ & = \frac{2\pi}{N-1}\,\rho_{2,{\rm SD}}(\vr)\gamma^2(\vr) \,, \end{align} from which follows \begin{equation} \gamma(\vr) = \sqrt{\frac{(N-1)\,\rho_{{\rm SD}}(\vr)}{2\pi\,\rho_{2,{\rm SD}}(\vr)}} \,. \end{equation} To simplify this expression, we apply the relation \begin{equation}\label{hfrel} \rho_{2,{\rm SD}}(\vr)=\frac{1}{4}\rho_{{\rm SD}}^2(\vr) \,, \end{equation} as well as Eq.~\eqref{beta} for the SD density $\rho_{{\rm SD}}(\vr)$, and find \begin{equation} \frac{1}{\gamma^{2}(\vr)}=c\,\beta^2(\vr) \,, \end{equation} where \begin{equation}\label{qc} c = \frac{\pi}{2(N-1)q^2} \,. 
\end{equation} Using these results in Eq.~\eqref{ce2}, and performing the integration over $s$, leads to the final result \begin{equation}\label{Ec} E_{\rm c}^{\rm local} = \int\!\!d^2r\; \rho_{{\rm SD}}(\vr)\,\epsilon^{\rm local}_{\rm c}(\vr) \,, \end{equation} where we have defined $\epsilon_{\rm c}(\vr)$ as the local correlation energy per electron having the lengthy expression \begin{multline}\label{epsilon} \epsilon_{\rm c}(\vr) = \frac{\pi}{2q^2}\Bigg\{ \frac{\sqrt{\pi}\:\beta(\vr)}{2\sqrt{2+c}}[\Phi(\vr) - 1]^2 +\frac{\Phi(\vr)[\Phi(\vr)-1]}{2+c} \\ + \frac{\sqrt{\pi}\:\Phi^2(\vr)}{4\beta(\vr)(2+c)^{3/2}} + \frac{\sqrt{\pi}\:\beta(\vr)}{\sqrt{1+c}}[\Phi(\vr)-1] + \frac{\Phi(\vr)}{1+c} \Bigg\}\,. \end{multline} Up to this point, the only inputs for the correlation energy are the fitting parameter $q$ (we will come back to the choice of this parameter later on), the total number of electrons $N$, and the electron density $\rho(\vr)$. We recall that the parameter $c$ is defined through $q$ and $N$ in Eq.~\eqref{qc}, and $\beta(\vr)$ is given in terms of $\rho(\vr)$ in Eq.~\eqref{beta}. This particular dependency on $N$ conflicts with the size-extensivity requirement on the correlation functional. For example, situations where two systems are very far apart from each other are expected to be problematic. In conclusion, equation~\eqref{Ec} is an {\em explicit density functional} for the correlation energy with a single fitting parameter $q$. This functional is self-interaction free, in the sense that it is identically zero for one-electron systems. Note that to recover this important property within the standard ladder of exchange-correlation functionals, one has to resort to highly sophisticated orbital functionals. \section{Application and refinement of the approximation} Here we complete and apply the approximation for the correlation energy in 2D.
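Since all inputs of Eq.~\eqref{Ec} are elementary functions of the density, the functional can be evaluated pointwise with a few lines of code. The following sketch is an illustration only, not the implementation used for the results below; the sample density value is arbitrary:

```python
import math

SQRT_PI = math.sqrt(math.pi)

def beta(rho, q):
    """Local correlation-length parameter, beta = q * sqrt(rho)."""
    return q * math.sqrt(rho)

def Phi(b):
    """Decay-controlling function, Phi = beta / (beta + sqrt(pi)/2)."""
    return b / (b + SQRT_PI / 2.0)

def eps_c_local(rho, N, q):
    """Local correlation energy per electron, term by term as in the
    final expression; c = pi / (2 (N-1) q^2)."""
    b = beta(rho, q)
    P = Phi(b)
    c = math.pi / (2.0 * (N - 1) * q**2)
    braces = (SQRT_PI * b / (2.0 * math.sqrt(2.0 + c)) * (P - 1.0) ** 2
              + P * (P - 1.0) / (2.0 + c)
              + SQRT_PI * P**2 / (4.0 * b * (2.0 + c) ** 1.5)
              + SQRT_PI * b / math.sqrt(1.0 + c) * (P - 1.0)
              + P / (1.0 + c))
    return math.pi / (2.0 * q**2) * braces
```

A basic sanity check is that the returned correlation energy per electron is negative; the closed form of $\Phi$ can likewise be verified by checking numerically that the integral of $\varphi(\vr,\vs)$ over $\vs$ vanishes.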
In particular, along the applications, we shall present an {\em ad-hoc} modification which better accounts for both the long-range and the kinetic-energy correlation. As the first step, we need to choose a value for the fitting parameter $q$. To this end, we use Taut's analytic result~\cite{taut} for the singlet state of a two-electron parabolic QD with confining strength $\omega=1$. In terms of energy components, the correlation energy can be written as $E_{\rm c} = E_{\rm tot}-E^{\rm EXX}_{\rm tot}$, where EXX refers to the exact-exchange result. Applying this formula yields $E_c\approx -0.1619$ for the $N=2$ singlet when $\omega=1$. To obtain the same value from Eq.~\eqref{Ec}, we need to set $q=2.258$. Of course the choice may be refined, if needed. But aiming at providing, ideally, a {\em predictive} approximation, the fitting should not be carried out for each new system (to obtain every time the correct answer) but rather once and for all. This is a quite general and well-known way of defining, or refining, new approximations for the central quantities of DFT. In the following, we will show that our fitting procedure, outlined just above, guarantees a very good performance for a large class of systems. \begin{table}[t] \caption{\label{table_qd} Comparison of the correlation energies (in atomic units) for parabolic quantum dots. The reference value $E_c^{\rm ref}$ is obtained by subtracting the exact-exchange energy from accurate data for the total energy. The last row contains the mean percentage error, $\Delta$, for the parabolic dots (excluding the one used in the fitting procedure).
} \begin{tabular}{c c c c c c c c} \hline \hline $N$ & $\omega$ & $E_{\rm tot}^{\rm ref}$ & $E^{\rm EXX}_{\rm tot}$ & $-E_{\rm c}^{\rm ref}$ & $-E_{\rm c}^{\rm local}$ & $-E_{\rm c,mod}^{\rm local}$ & $-E_{\rm c}^{\rm LDA}$ \\ \hline 2 & 1 & $3^\dagger$ & 3.1619 & 0.1619 & 0.1619$^*$ & 0.1619$^*$ & 0.1988 \\ 2 & 1/4 & $0.9324^\ddagger$ & 1.0463 & 0.1137 & 0.0957 & 0.1212 & 0.1391 \\ 2 & 1/16 & $0.3031^\ddagger$ & 0.3732 & 0.0701 & 0.0477 & 0.0757 & 0.0852 \\ 2 & 1/36 & $0.1607^\ddagger$ & 0.2094 & 0.0487 & 0.0299 & 0.0527 & 0.0607 \\ 6 & 0.42168 & $10.37^{\mathchar "278}$ & 10.8204 & 0.4504 & 0.3805 & 0.4453 & 0.5305 \\ 6 & $1/1.89^2$ & $7.6001^{\mathchar "27B}$ & 8.0211 & 0.4210 & 0.3205 & 0.4060 & 0.4732 \\ 6 & 1/4 & $6.995^\ddagger$ & 7.3911 & 0.3961 & 0.3047 & 0.3946 & 0.4574 \\ 12 & $1/1.89^2$ & $25.636^{\mathchar "27B}$ & 26.5528 & 0.9168 & 0.6837 & 0.8504 & 1.0000 \\ \hline \multicolumn{5}{l}{$\Delta$} & 26.1\% & 5.9\% & 18.4\% \\ \hline \hline \end{tabular} \begin{flushleft} $^*$ Fitted result (see text). $^\dagger$ Analytic solution by Taut from Ref.~\onlinecite{taut}. $^\ddagger$ CI data from Ref.~\onlinecite{rontani}. $^{\mathchar "278}$ Variational QMC data from Ref.~\onlinecite{lda}. $^{\mathchar "27B}$ Diffusion QMC data from Ref.~\onlinecite{pederiva}. \end{flushleft} \end{table} \begin{table} \caption{\label{table_qd_recta} Comparison of the correlation energies (in atomic units) for square ($\pi\times\pi$) quantum dots. The reference value $E_c^{\rm ref}$ is obtained by subtracting the exact-exchange energy from the quantum Monte Carlo result for the total energy (Ref.~\onlinecite{rectapaper}). 
} \begin{tabular}{c c c c c c c } \hline \hline $N$ & $E_{\rm tot}^{\rm QMC}$ & $E^{\rm EXX}_{\rm tot}$ & -$E_{\rm c}^{\rm ref}$ & -$E_{\rm c}^{\rm local}$ & -$E_{\rm c,mod}^{\rm local}$ & -$E_{\rm c}^{\rm LDA}$ \\ \hline 2 & 3.2731 & 3.4643 & 0.1908 & 0.1905 & 0.1763 & 0.2226 \\ 6 & 26.9679 & 27.5928 & 0.6249 & 0.6578 & 0.5763 & 0.7624 \\ 8 & 46.7940 & 47.5962 & 0.8022 & 0.9168 & 0.7836 & 1.0514 \\ 12 & 103.3378 & 104.5620 & 1.2242 & 1.4494 & 1.2026 & 1.6419 \\ 16 & 178.5034 & 179.9804 & 1.4770 & 2.0096 & 1.6282 & 2.2534 \\ \hline \multicolumn{4}{l}{$\Delta$} & 19.3\% & 6.8\% & 33.6\% \\ \hline \hline \end{tabular} \end{table} Tables~\ref{table_qd} and \ref{table_qd_recta} show results for parabolically confined, and for square ($\pi\times\pi$) quantum dots. The results obtained with our local formula for the correlation energy (denoted by $E_{\rm c}^{\rm local}$) are compared to reference results $E^{\rm ref}_{\rm c}$, as well as with LDA correlation energies $E^{\rm LDA}_{\rm c}$. We computed the EXX and LDA values using the real-space code {\tt octopus}.~\cite{octopus} The EXX result was calculated in the Krieger-Li-Iafrate (KLI) approach~\cite{KLI}, which is an accurate approximation in the static case.~\cite{kli_oep} For $E_{\rm c}^{\rm LDA}$ we applied the parametrization of Attaccalite {\em et al}.~\cite{attaccalite} Note that we used a perturbative approach to calculate $E_c^{\rm local}$ from Eq.~\eqref{Ec}: The self-consistent EXX density was the input for our local functional. We also found that using the LDA density as input did not make a considerable difference. The QDs studied here span a wide range of density parameters $r_s$ determined in a parabolic QD as $r_s=N^{-1/6}\omega^{-2/3}$ and in our square QD as $r_s=\sqrt{\pi/N}$. This parameter corresponds to the average radius of an electron in a QD with an average number density $n_0=1/(\pi r_s^2)$. Thus, the cases shown in the tables are between $0.44<r_s<9.71$. 
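The quoted range of density parameters follows directly from the two formulas above; a short numerical sketch (the $(N,\omega)$ pairs are transcribed from Tables~\ref{table_qd} and \ref{table_qd_recta}):

```python
import math

def rs_parabolic(N, omega):
    """r_s = N^(-1/6) * omega^(-2/3) for a parabolic quantum dot."""
    return N ** (-1.0 / 6.0) * omega ** (-2.0 / 3.0)

def rs_square(N):
    """r_s = sqrt(pi / N) for the pi x pi square quantum dot."""
    return math.sqrt(math.pi / N)

# (N, omega) pairs of Table I and N values of Table II
parabolic_dots = [(2, 1.0), (2, 1 / 4), (2, 1 / 16), (2, 1 / 36),
                  (6, 0.42168), (6, 1 / 1.89**2), (6, 1 / 4), (12, 1 / 1.89**2)]
square_dots = [2, 6, 8, 12, 16]

rs_values = ([rs_parabolic(N, w) for N, w in parabolic_dots]
             + [rs_square(N) for N in square_dots])
```

The minimum, $r_s\approx 0.44$, corresponds to the $N=16$ square dot, and the maximum, $r_s\approx 9.71$, to the $N=2$, $\omega=1/36$ parabolic dot.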
In fact, the upper limit exceeds the threshold of $r_s\sim 7.5$ for Wigner crystallization in the impurity-containing 2D electron gas.~\cite{chui} One should, however, bear in mind that in QDs the concept of Wigner localization is ambiguous, and no general formula exists for the density parameter at the onset of localization. It can also be seen that in our examples the ratio of the correlation to the total energy varies from less than one percent up to around $30\%$. Results with our local formula are roughly of the same quality as the LDA, slightly worse for the parabolic dots, but slightly better for the square dots. Furthermore, a calculation for the homogeneous electron gas (Fig.~\ref{fig:egas}) \begin{figure} \setlength{\unitlength}{0.95\columnwidth} \begin{center} \includegraphics[width=\unitlength]{fig1.eps} \vspace{-1cm} \end{center} \caption{\label{fig:egas} Correlation energy per particle for the uniform 2D electron gas in various approximations. } \end{figure} reveals that this functional agrees with the LDA (which is exact for this system) in the limit of vanishing $r_s$, but underestimates the correlation energy otherwise. The derived functional not only already gives very reasonable results, but is also a very good starting point for further developments. In fact, we found an alternative functional (that we will denote by $E_{\rm c,mod}^{\rm local}$) obtained by modifying the first term inside the braces of Eq.~(\ref{epsilon}) according to \begin{equation} [\Phi(\vr) - 1]^2 \rightarrow \Phi(\vr) - 1\,. \label{mod_formula} \end{equation} To finish the derivation of this new functional we need to refit the parameter $q$, which now reads $q^{\rm mod}=3.9274$ (in such a way, we again obtain the exact value of the correlation energy for the two-electron quantum dot as described above). {\it Sensu stricto}, Eq.~(\ref{mod_formula}) is an empirical approximation.
However, our results suggest, {\it post factum}, that the proposed modification better accounts for the long-range~\cite{TaoGoriPerdewMcWeeny:01,M} and kinetic-energy correlation.~\cite{ragot,umrigar} According to Tables~\ref{table_qd} and \ref{table_qd_recta}, our corrected functional agrees very well with the reference results. We find that, in all the cases studied, our approximation is vastly superior to the LDA correlation. Note that our results exhibit the correct scaling with respect to both confinement strength and number of electrons, even if the adjustable parameter $q$ has only been fitted to the case $N=2$ and $\omega=1$. Also for the homogeneous electron gas (Fig.~\ref{fig:egas}), our modified functional yields results that are remarkably close to the reference (exact) LDA curve, departing significantly only for very small $r_s$ (the weak-correlation limit). Finally, we wish to make a few remarks on the usage of the present correlation functional. First, we point out that for practical purposes within, e.g., the Kohn-Sham scheme of DFT, the functional should be combined with an adequate recipe for the exchange energy, such as the exact-exchange or the functionals suggested in Ref.~\onlinecite{own_Ex}. Second, for many systems ---like, e.g., QDs in magnetic fields--- one requires a spin-polarized version of the exchange-correlation functional. This has already been taken into account in the LDA functional by Attaccalite {\em et al.}~\cite{attaccalite}, but a spin-polarized extension of the present functional is still missing. Work to solve these two issues is already under way. \section{Conclusions} We developed a correlation energy functional for the two-dimensional electron gas, starting from the Colle and Salvetti ansatz for the many-body wavefunction and a Gaussian approximation to the pair density. To better account for the long-range and kinetic-energy correlation, we have then introduced an additional ad-hoc modification.
The resulting functional has a very simple form, depending parametrically on the total number of electrons $N$ and locally on the electronic density $\rho(\vr)$. It only contains a single parameter, $q$, that was adjusted to the exact calculation of a two-electron quantum dot. Calculations performed for several systems, with a wide range of density parameters $r_s$, show that our functional gives results in very good agreement with reference values. This agreement is maintained even for very dilute electron gases, where the correlation energy amounts to 30\% of the total energy. Furthermore, our functional performs significantly better than the standard LDA correlation functional, while maintaining much of its simplicity. \acknowledgments We thank Ari Harju for the original quantum Monte Carlo data shown in Table~\ref{table_qd_recta}. This work was supported by the EU's Sixth Framework Programme through the Nanoquanta Network of Excellence (NMP4-CT-2004-500198), Deutsche Forschungsgemeinschaft, and the Academy of Finland. M.A.L. Marques acknowledges partial support by the Portuguese FCT through the project PTDC/FIS/73578/2006.
\section{Introduction} The equation-of-motion coupled-cluster (EOM-CC) methods \cite{Emrich1981,Stanton1993,Piecuch2000,Kucharski2001,Kowalski2004,Hirata2004a,Levchenko2004,Kallay2004,Krylov2008} and the closely related CC linear response (CC-LR) theory \cite{Monkhorst1977,Nakatsuji1978_JCP,Mukherjee1979,Monkhorst1983,Koch1990_CCRF,Koch1990_CCSDLR,Christiansen1995,Christiansen1998,Kallay2004} have been established as useful tools for treating electronically excited states of small and medium-sized molecules. Recent efforts have also been devoted to extending the applicability of EOM-CC and similarity-transformed EOM-CC methods\cite{Nooijen1997,Nooijen1997a} to large molecules\cite{Helmich2013,Dutta2016,Dutta2018a,Frank2018,Izsak2020} and solids.\cite{Hirata2008,Katagiri2005,Wang2020,Pulkin2020,Gallo2021} Despite this tremendous success, the non-hermitian nature of the CC theory poses difficult unsolved problems. CC calculations in combination with complex Hamiltonians, e.g., Hamiltonians in magnetic fields and/or including spin-orbit coupling, have been shown to produce complex ground-state energies.\cite{Thomas2021complex} This is a non-trivial formal problem of the standard CC theory, although the real part of the complex CC energy is expected to serve as an accurate approximation to the full configuration interaction energy. Further, EOM-CC calculations have been demonstrated to exhibit incorrect crossing conditions for intersections between electronic states of the same symmetry (known as ``same-symmetry conical intersections''). \cite{Hattig2005,Kohn2007,Kjonstad2017-a,Thomas2021complex} To enable CC calculations of same-symmetry conical intersections, K\"{o}hn and Tajti \cite{Kohn2007} have proposed a simple correction to obtain physically meaningful potential energy surfaces around conical intersections.
Koch and collaborators have recently developed a similarity constrained coupled-cluster singles and doubles (SCCSD) method that introduces an additional parameter associated with a triple excitation and determines this parameter by requiring the eigenvectors of two target states to be orthogonal to each other. \cite{Kjonstad2017-b,Kjostad2019} These correction schemes have to introduce a substantial modification to the \red{wavefunctions} in order to enforce orthogonality between two otherwise parallel eigenvectors. For example, the resulting SCC wavefunction often involves a significant contribution from a triply excited determinant.\cite{Kjonstad2017-b,Kjostad2019} \red{On the other hand, SCCSD produces excitation energies similar to those of CCSD.} Same-symmetry conical intersections play essential roles in photochemistry. \cite{Domcke2004,Levine2007,Domcke2012} Available calculations of same-symmetry conical intersections have used hermitian excited-state formulations such as the algebraic diagrammatic construction (ADC) methods, \cite{Hattig2005,Lefrancois2017} time-dependent density functional theory (TDDFT)-based techniques,\cite{Levine2006,Ou2013} the constrained density functional theory-configuration interaction (CDFT-CI) method,\cite{Kaduk2010} and multireference techniques including the complete active space self-consistent-field (CASSCF) method,\cite{Ben-Nun2000} CAS second-order perturbation theory (CASPT2),\cite{Levine2008} multi-reference perturbation theory (MRPT),\cite{Nangia2006,Xu2014} and multi-reference configuration interaction (MRCI) methods.\cite{Lengsfield1984,Woywod1994,Yarkony2001,Matsika2004,Zhu2014} The MRCI method, as a non-perturbative wavefunction-based approach, has exhibited robust performance. However, the lack of size-extensivity in MRCI often poses difficulties in obtaining accurate electronic energies.
For example, while MRCI calculations provided high-quality potential energy surfaces to gain insights into nonadiabatic tunneling dynamics of phenol dissociation, \cite{Zhu2016-a,Zhu2016-b,Xie2016} an energetic shift had to be applied to the computed potential energy surfaces to obtain a good agreement with the experimental energetics.\cite{Zhu2016-b} The size-inextensivity problem of MRCI is expected to be more serious for calculations of larger molecules. Therefore, the development of new non-perturbative size-extensive/size-intensive hermitian excited-state theories to enhance the capability to treat same-symmetry conical intersections is of significant interest to photochemistry applications. The unitary version of coupled-cluster (UCC) theory appears to be a natural approach to solve the formal problems of the CC theory arising from nonhermiticity and to enable CC studies of same-symmetry conical intersections. Analyses of the formal properties for the UCC theory and the relation with the standard CC methods have been reported. \cite{Kutzelnigg1983,Kutzelnigg1991,Bartlett1989,Harsha2018,Evangelista2019} Numerical studies of the UCC methods have been carried out. \cite{Cooper2010,Evangelista2011} The UCC methods truncated up to a given rank of excitation operators have been shown to recover a similar amount of dynamic correlation energies compared with the standard CC methods involving the same ranks of excitation operators. \cite{Evangelista2011} However, a formidable challenge in the UCC theory is to develop a practically tractable truncation scheme for the non-terminating expansion of the transformed Hamiltonian while maintaining the computational accuracy. Several truncation schemes for the ground-state UCC theory have been reported. 
The UCC(4) and UCC(5) methods have been developed using a perturbative analysis of the UCC energy expression.\cite{Bartlett1989,Watts1989} Taube and Bartlett have reported a truncation scheme exact for two-electron systems.\cite{Taube2006} Commutator truncation schemes have been explored for the multireference version of UCC theory \cite{Hoffmann1988, Chen2012} and the canonical transformation methods.\cite{Neuscamman2009,Neuscamman2010} A stochastic approach to select excitation operators in UCC calculations has recently been developed. \cite{Filip2020} The recent development of density-cumulant functional theory has also provided information about the accuracy of the truncation schemes for hermitian formulations.\cite{Sokolov2013,Misiewicz2020} We also mention the rapidly growing interest in using UCC in quantum computations and refer the readers to recent publications and the references therein for this rapidly evolving field. \cite{Peruzzo2014,Wecker2015,Barkoutsos2018,Romero2019,Grimsley2019,Lee2019, Evangelista2019,Sokolov2020,Lang2021,Chen2021,Pavosevic2021,Bauman2021} Here, accurate and efficient UCC calculations on classical computers have the potential to aid the initial-state preparation for quantum computations.
Concerning UCC-based excited-state theories, the second-order UCC linear response theory has been shown to be identical to the second-order ADC scheme [ADC(2)].\cite{Walz2012} We have recently developed a third-order formulation for calculations of both ground-state energies and excitation energies within the UCC-based polarization propagator (PP) framework (the UCC3 scheme).\cite{Liu2018} Interestingly, the strict version of UCC3 (UCC3-s) has been shown to be equivalent to the strict version of the third-order ADC scheme [ADC(3)], \cite{Schirmer2002,Dreuw2015,Banerjee2021} establishing the relation between the UCC-based polarization propagator theory and ADC.\cite{Liu2018} Hodecker {\it{et al.}} have reported an implementation of UCC3 \cite{Hodecker2020} and a combination with a second-order density matrix for calculations of properties. \cite{Hodecker2020a} Although schemes based on perturbation theory perform well for simple molecules around their equilibrium structures, the performance decays quickly for more complex molecules in the absence of smooth convergence of the M{\o}ller-Plesset (MP) perturbation series. Therefore, we base our present work on an alternative strategy of truncating the expansion of the UCC transformed Hamiltonian up to a certain power of the cluster amplitudes. In Section II, we report the formulation and implementation of a quadratic UCCSD scheme (qUCCSD) for calculations of ground-state energies and excitation energies. The details of the benchmark calculations are given in Section III. The benchmark results for ground-state properties and excitation energies are presented and discussed in Section IV. Finally, a summary and a perspective on future work are presented in Section V.
\section{Theory} \subsection{Unitary coupled-cluster based polarization propagator theory} In this subsection, we present a succinct summary of unitary coupled-cluster based polarization propagator (UCC-PP) theory in the language of wavefunction theory. We refer the readers to Refs.~\citenum{Liu2018} and \citenum{Prasad1985} for a detailed account of the UCC-PP theory and to the literature \cite{Nooijen1992,Nooijen1993,Nooijen1995,Meissner1993,Kowalski2014,Peng2016, Mcclain2016,Lange2018,Mertins1996,Banerjee2019} for Green's function methods based on the biorthogonal CC representation. The self-consistent polarization propagator methods represent the polarization propagator in an approximate many-electron basis by applying the inner-projection technique \cite{Lowdin1971} with a self-consistent operator manifold to decouple the forward and backward polarization propagator.\cite{Prasad1985} In the UCC-based self-consistent polarization propagator method, the ground-state wavefunction adopts the UCC parameterization \begin{equation} |\Psi_\mathrm{gr} \rangle = e^\sigma | \Phi_0 \rangle, \end{equation} in which the cluster operator $\sigma$ comprises both excitation and de-excitation operators, e.g., in the UCC singles and doubles (UCCSD) method $\sigma$ can be written as \begin{eqnarray} \sigma &=& \sigma_1 + \sigma_2, \\ \sigma_1 &=& \sum_{ai} \sigma_i^a \{ a_a^\dagger a_i\} - \sum_{ai}(\sigma_i^a)^\ast \{a_i^\dagger a_a \}, \\ \sigma_2 &=& \frac{1}{4}\left[\sum_{abij} \sigma_{ij}^{ab} \{ a_a^\dagger a_b^\dagger a_j a_i\} - \sum_{abij} (\sigma_{ij}^{ab})^\ast \{ a_a a_b a_j^\dagger a_i^\dagger \} \right]. \end{eqnarray} $\{i,j,\dots\}$ and $\{a,b,\dots\}$ denote occupied orbitals and virtual orbitals, respectively. $\sigma_i^a$ and $\sigma_{ij}^{ab}$ represent the cluster amplitudes. This anti-hermitian form of the cluster operator $\sigma$ ensures that the wave operator $e^\sigma$ is unitary.
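The unitarity of the wave operator follows directly from the anti-hermiticity of $\sigma$ and is easy to check numerically. The following minimal Python sketch (purely illustrative and not part of the formalism; the random matrix `X` merely stands in for the excitation part of $\sigma$) exponentiates an anti-hermitian matrix and verifies that the result is unitary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Anti-hermitian "cluster operator": sigma = X - X^dagger, mimicking the
# excitation-minus-de-excitation structure of sigma_1 and sigma_2.
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
sigma = X - X.conj().T                    # sigma^dagger = -sigma

# e^sigma via the spectral decomposition of the hermitian matrix i*sigma.
w, V = np.linalg.eigh(1j * sigma)         # i*sigma is hermitian
U = V @ np.diag(np.exp(-1j * w)) @ V.conj().T

# The wave operator is unitary: U^dagger U = 1.
print(np.allclose(U.conj().T @ U, np.eye(n)))   # True
```

Any anti-hermitian `sigma` gives a unitary `U`; the eigendecomposition of `1j * sigma` is used here simply to avoid an external matrix-exponential routine.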
The UCCSD ground-state energy and amplitude equations are given by \begin{eqnarray} \langle \Phi_0 | \bar{H} | \Phi_0 \rangle &=& E_\mathrm{gr} , \label{en_eq} \\ \langle \Phi_l | \bar{H} | \Phi_0 \rangle &=& 0 . \label{ap_eq} \end{eqnarray} Here the transformed Hamiltonian $\bar{H}=e^{-\sigma} H e^\sigma$ is hermitian. $\Phi_0$ represents the ground-state Hartree-Fock wavefunction, while $\Phi_l$'s denote singly and doubly excited determinants. The UCC-based polarization propagator theory employs a self-consistent operator manifold consisting of the transformed excitation and de-excitation operators, $\{e^\sigma b_I^\dagger e^{-\sigma}\} \cup \{e^\sigma b_I e^{-\sigma}\}$, in which the $b_I^\dagger$ are the original excitation operators, i.e., $\{b_I^\dagger\}=\{a_a^\dagger a_i\}\cup\{a_a^\dagger a_b^\dagger a_j a_i\}$ in the UCCSD method. This leads to the following eigenvalue equations \begin{equation} \sum_I \bar{H}_{JI} C_{IL} = E_L C_{JL}~,~ \bar{H}_{JI}=\langle \Phi_0|b_J \bar{H} b_I^\dag|\Phi_0\rangle, \label{secular_eq} \end{equation} to determine excitation energies $E_L$ and the excited-state wavefunctions \begin{equation} |\Phi_L^{ext} \rangle = \sum_I C_{IL} e^\sigma b_I^\dagger |\Phi_0\rangle. \end{equation} \red{In other words, the UCCSD excited-state equations solve for the eigenvalues and eigenstates of $\bar{H}$ within the space of singly and doubly excited determinants.} The excited-state secular equations can be rewritten in a block form as \begin{eqnarray} \begin{bmatrix} \bar{H}_\mathrm{SS} &\bar{H}_\mathrm{SD} \\ \bar{H}_\mathrm{DS} &\bar{H}_\mathrm{DD} \end{bmatrix} \begin{bmatrix} C_\mathrm{S} \\ C_\mathrm{D} \end{bmatrix} = E \begin{bmatrix} C_\mathrm{S} \\ C_\mathrm{D} \end{bmatrix}.
\label{eigen_eq} \end{eqnarray} Here $\bar{H}_\mathrm{SS}$ refers to the singles-singles block involving $\bar{H}_{ij}$, $\bar{H}_{ab}$, and $\bar{H}_{ia,bj}$; $\bar{H}_\mathrm{SD}$ and $\bar{H}_\mathrm{DS}$ represent the singles-doubles and doubles-singles blocks involving the contributions from $\bar{H}_{ci,ab}$, $\bar{H}_{jk,ia}$, $\bar{H}_{ajk,ibc}$ and $\bar{H}_{ab,ci}$, $\bar{H}_{ia,jk}$, $\bar{H}_{ibc,ajk}$; and $\bar{H}_\mathrm{DD}$ is the doubles-doubles block involving $\bar{H}_{ij}$, $\bar{H}_{ab}$, $\bar{H}_{ia,bj}$, $\bar{H}_{ij,kl}$, $\bar{H}_{ab,cd}$, $\bar{H}_{iab,jcd}$, and $\bar{H}_{ija,klb}$. The $\bar{H}$ components pertinent to the UCCSD ground-state energy and amplitude equations as well as the excited-state secular equations can thus be summarized as \begin{eqnarray} \bar{H} &=& E_\mathrm{gr}+ \left((\bar{H}_{ai}\{a_a^\dag a_i\} +\frac{1}{4} \bar{H}_{ab,ij}\{a_a^\dag a_b^\dag a_j a_i\})+h.c.\right) \nonumber \\ &+& \left(\bar{H}_{ij}\{a_i^\dag a_j\} + \bar{H}_{ab}\{a_a^\dag a_b\} +\frac{1}{4} \bar{H}_{ij,kl}\{a_i^\dag a_j^\dag a_l a_k\} +\frac{1}{4} \bar{H}_{ab,cd}\{a_a^\dag a_b^\dag a_d a_c\} + \bar{H}_{ia,bj}\{a_i^\dag a_a^\dag a_j a_b\} \right) \nonumber \\ &+& \left( (\frac{1}{2} \bar{H}_{ij,ka}\{a_i^\dag a_j^\dag a_a a_k\} +\frac{1}{2} \bar{H}_{ab,ci}\{a_a^\dag a_b^\dag a_i a_c\} )+h.c.\right) \nonumber \\ &+& \left( \frac{1}{4} \bar{H}_{ibc,ajk}\{a_i^\dag a_b^\dag a_c^\dag a_k a_j a_a \}+h.c.\right) \nonumber \\ &+&\left( \frac{1}{4} \bar{H}_{iab,jcd}\{a_i^\dag a_a^\dag a_b^\dag a_d a_c a_j \} +\frac{1}{4} \bar{H}_{ija,klb}\{a_i^\dag a_j^\dag a_a^\dag a_b a_l a_k \} \right), \label{hbarexp} \end{eqnarray} in which $E_\mathrm{gr}$ is the UCCSD ground-state energy. In contrast to the CC theory, in which $\bar{H}$ terminates at the quadruple commutators, the commutator expansion of $\bar{H}$ in the UCC theory is non-terminating.
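Since $\bar{H}$ is hermitian, the secular problem is guaranteed to yield real excitation energies and mutually orthogonal eigenvectors, in contrast to the non-hermitian effective Hamiltonian of standard EOM-CC, which can produce complex eigenvalues. A small numerical illustration (a random hermitian stand-in for $\bar{H}$, not the actual qUCCSD matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Hermitian model "H-bar" in a small excitation manifold.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Hbar = 0.5 * (A + A.conj().T)

E, C = np.linalg.eigh(Hbar)                     # eigh exploits hermiticity
print(np.all(np.isreal(E)))                     # True: real excitation energies
print(np.allclose(C.conj().T @ C, np.eye(n)))   # True: orthonormal eigenvectors
```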
We adopt an expansion using Bernoulli numbers for $\bar{H}$ \cite{Liu2018} \begin{eqnarray} \bar{H} &=& \bar{H}^0 + \bar{H}^1 + \bar{H}^2 + \bar{H}^3 + \bar{H}^4 + \cdots, \\ \bar{H}^0 &=& F + V, \\ \bar{H}^1 &=& [F, \sigma] + \frac{1}{2}[V, \sigma] + \frac{1}{2}[V_R, \sigma], \\ \bar{H}^2 &=& \frac{1}{12}[[V_N, \sigma], \sigma] + \frac{1}{4}[[V, \sigma]_R, \sigma] + \frac{1}{4}[[V_R, \sigma]_R, \sigma], \\ \bar{H}^3 &=& \frac{1}{24}[[[V_N, \sigma], \sigma]_R,\sigma] + \frac{1}{8}[[[V_R, \sigma]_R, \sigma]_R, \sigma] + \frac{1}{8}[[[V, \sigma]_R, \sigma]_R, \sigma] \nonumber \\ & & - \frac{1}{24}[[[V, \sigma]_R, \sigma],\sigma] - \frac{1}{24}[[[V_R, \sigma]_R, \sigma],\sigma], \\ \bar{H}^4 &=& \frac{1}{16}[[[[V_R, \sigma]_R, \sigma]_R,\sigma]_R,\sigma] + \frac{1}{16}[[[[V, \sigma]_R, \sigma]_R,\sigma]_R,\sigma] + \frac{1}{48}[[[[V_N, \sigma], \sigma]_R,\sigma]_R,\sigma] \nonumber \\ & & -\frac{1}{48}[[[[V, \sigma]_R, \sigma], \sigma]_R, \sigma] -\frac{1}{48}[[[[V_R,\sigma]_R, \sigma], \sigma]_R, \sigma] -\frac{1}{144}[[[[V_N, \sigma], \sigma]_R,\sigma], \sigma] \nonumber \\ & & -\frac{1}{48}[[[[V, \sigma]_R, \sigma]_R,\sigma], \sigma] -\frac{1}{48}[[[[V_R,\sigma]_R, \sigma]_R,\sigma], \sigma] -\frac{1}{720}[[[[V_N, \sigma], \sigma],\sigma], \sigma]. \end{eqnarray} Here ``N'' refers to the joint set of excitation and de-excitation portions of the target operator, while ``R'' refers to the rest of the operator excluding the ``N'' part. \cite{Liu2018} This expansion using Bernoulli numbers eliminates higher than linear commutators with respect to the Fock operator and offers a compact framework for formulating practical UCCSD methods. \subsection{A general strategy for truncating the commutator expansion and the qUCCSD scheme} The magnitude of the cluster amplitudes serves as a faithful measure for the strength of dynamic correlation.
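The rational prefactors in this expansion derive from the Bernoulli numbers: the coefficients $B_n/n!$ reproduce the factors $1/2$, $1/12$, $0$, and $-1/720$ attached to the linear term and the fully ``N'' nested commutators in $\bar{H}^1$ through $\bar{H}^4$ (up to the sign convention chosen for $B_1$; $B_3=0$, so no such term appears at third order). A short Python sketch, illustrative only, generating these coefficients from the standard recurrence:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(n):
    """Bernoulli number B_n via the standard recurrence (B_1 = -1/2 convention)."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -Fraction(1, m + 1) * sum(comb(m + 1, k) * B[k] for k in range(m))
    return B[n]

# Coefficients B_n/n! of the nested-commutator series: 1, -1/2, 1/12, 0, -1/720.
coeffs = [bernoulli(n) / factorial(n) for n in range(5)]
print([str(c) for c in coeffs])   # ['1', '-1/2', '1/12', '0', '-1/720']
```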
We thus explore UCC truncation schemes based on the powers of the cluster amplitudes, or equivalently, on the order of commutators in the commutator expansion of $\bar{H}$ using Bernoulli numbers. Note that, although $\sigma_1$ emerges at the second order in M{\o}ller-Plesset perturbation theory, single-reference systems with strong orbital-relaxation effects exhibit large ground-state CC amplitudes for single excitations. The standard CC methods can provide accurate treatments of orbital relaxation through the exponential of single excitations. \cite{Thouless1960} However, methods based on MP perturbation theory or on truncation of single excitations to the linear terms cannot treat such systems accurately; see, e.g., Refs.~\cite{Bohme1994,Hrusak1997} Therefore, we truncate single and double excitations up to the same power in the present work. We use a general notation UCCSD[$k$$\mid$$l$,$m$,$n$] to denote a scheme that includes up to the $k$'th order commutators for the ground-state amplitude equations [($k$+1)'th order commutators for the ground-state energy expression], $l$'th order commutators for the singles-singles block of the excited-state secular equations, $m$'th order for the singles-doubles and doubles-singles blocks, and $n$'th order commutators for the doubles-doubles block. Applying the partitioning technique\cite{Lowdin1971} to Eq.~\eqref{eigen_eq} to fold the contributions from double excitations into singly excited states, the eigenvalue equations can be rewritten as \begin{equation} \left(\bar{H}_\mathrm{SS} + \bar{H}_\mathrm{SD}(E-\bar{H}_\mathrm{DD})^{-1} \bar{H}_\mathrm{DS}\right)C_\mathrm{S} = EC_\mathrm{S}. \end{equation} \red{A balanced truncation scheme for the excited-state eigenvalue equations would thus involve expansions of $\bar{H}_\mathrm{SS}$ and $\bar{H}_\mathrm{SD}(E-\bar{H}_\mathrm{DD})^{-1}\bar{H}_\mathrm{DS}$ to the same accuracy.
Since $V$ serves as a measure of electron correlation similar to $\sigma$, we count the powers of $V$ and $\sigma$ together. $\bar{H}_\mathrm{SS}$ and $\bar{H}_\mathrm{DD}$ involve $F$, $V$, and commutators of $V$ and $\sigma$. The expansions of $\bar{H}_\mathrm{SS}$ and $(E-\bar{H}_\mathrm{DD})^{-1}$ thus start with a contribution of $F$, which is of the zeroth power of $V$ and $\sigma$. In contrast, $\bar{H}_\mathrm{SD}$ and $\bar{H}_\mathrm{DS}$ involve $V$ and commutators of $V$ and $\sigma$, and thus are of at least linear power.} \red{Therefore}, the truncation of $\bar{H}_\mathrm{SS}$ to the $l$'th order commutators \red{of $V$ and $\sigma$}, $\bar{H}_\mathrm{SD}$ and $\bar{H}_\mathrm{DS}$ to the ($l$-1)'th order commutators, and $\bar{H}_\mathrm{DD}$ to the ($l$-2)'th order commutators ensures that $\bar{H}_\mathrm{SS}$ and $\bar{H}_\mathrm{SD}(E-\bar{H}_\mathrm{DD})^{-1}\bar{H}_\mathrm{DS}$ are correct up to the \red{$l+1$}'th power of \red{$V$} and $\sigma$ and provides a balanced description for the singly excited states. Further, \red{we choose to include in $\bar{H}_\mathrm{SS}$ the same ranks of commutators} as in the ground-state amplitude equations. The UCCSD[$l$$\mid$$l$,$l$-1,$l$-2] schemes thus emerge as \red{promising} options for treating the ground state and singly excited states. Since linearized methods are usually not numerically accurate, in the present work we explore the quadratic version, UCCSD[2$\mid$2,1,0], which we will refer to as the qUCCSD scheme. We should mention that the present general strategy is also applicable to the Baker-Campbell-Hausdorff (BCH) expansion. Since $[F,\sigma]$ is of a magnitude similar to $V$, the commutators between $F$ and $\sigma$ in the BCH expansion should be truncated to one rank higher than the commutators between $V$ and $\sigma$.
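The folding of the doubles block used in this power-counting argument is the standard partitioning construction, and one can verify numerically that an exact eigenpair of the full blocked matrix satisfies the energy-dependent folded equation. A minimal sketch with a random symmetric stand-in for $\bar{H}$ (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
nS, nD = 3, 5

# Random symmetric stand-in for the blocked H-bar matrix.
A = rng.standard_normal((nS + nD, nS + nD))
H = 0.5 * (A + A.T)
HSS, HSD = H[:nS, :nS], H[:nS, nS:]
HDS, HDD = H[nS:, :nS], H[nS:, nS:]

E, C = np.linalg.eigh(H)
E0, CS = E[0], C[:nS, 0]          # exact eigenvalue and its "singles" component

# Folded effective Hamiltonian evaluated at E = E0:
#   H_eff(E0) = H_SS + H_SD (E0 - H_DD)^(-1) H_DS
Heff = HSS + HSD @ np.linalg.solve(E0 * np.eye(nD) - HDD, HDS)
print(np.allclose(Heff @ CS, E0 * CS))   # True: folded equation holds at E = E0
```

Because the folded equation is nonlinear in $E$, it must be solved self-consistently in practice; the sketch only checks consistency at a known eigenvalue.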
For example, the qUCCSD scheme within the BCH expansion consists of up to quadruple commutators for $F$ and $\sigma$ for the energy expression, triple commutators of $F$ and $\sigma$ for the ground-state amplitude equations, triple commutators of $F$ and $\sigma$ for $\bar{H}_\mathrm{SS}$, double commutators of $F$ and $\sigma$ for $\bar{H}_\mathrm{SD}$ and $\bar{H}_\mathrm{DS}$, and single commutators of $F$ and $\sigma$ for $\bar{H}_\mathrm{DD}$\red{.} The expansion using Bernoulli numbers is more compact than the BCH expansion. On the other hand, the BCH expansion is applicable to non-Hartree-Fock reference functions. The present work is focused on the qUCCSD scheme with the expansion using Bernoulli numbers. \subsection{The working equations for the qUCCSD scheme} \red{The qUCCSD working equations have been derived using the recipe for $\bar{H}$ discussed in the previous subsection and the standard diagrammatic techniques as in the CC methods.\cite{Crawford2000,Shavitt2009}} The expression for the qUCCSD ground-state energy $E_{\text{gr}}^{\text{qUCCSD}}$ consists of up to the third commutators of the fully contracted part of $\bar{H}$ and can be written as \begin{eqnarray} E_{\text{gr}}^{\text{qUCCSD}} &=& E_\mathrm{HF} + \langle \Phi_0|\bar{H}^{1}|\Phi_0\rangle + \langle \Phi_0|\bar{H}^{2}|\Phi_0\rangle + \langle \Phi_0|\bar{H}^{3}|\Phi_0\rangle, \\ \langle \Phi_0|\bar{H}^{1}|\Phi_0\rangle &=&\sum_{ijab} \frac{1}{8} \langle ij||ab \rangle \sigma_{ij}^{ab} + h.c., \label{en_1} \\ \langle \Phi_0|\bar{H}^{2}|\Phi_0\rangle&=&\sum_{ijab} \frac{1}{12} \langle ij||ab \rangle \sigma_{i}^{a} \sigma_{j}^{b} + h.c., \label{en_2}\\ \langle \Phi_0|\bar{H}^{3}|\Phi_0\rangle&=&\Bigg( \Big( - \sum_{ijklabcd} \frac{1}{12} (\sigma_{kl}^{cd})^* \langle ij||ab \rangle \sigma_{ik}^{ac} \sigma_{jl}^{bd} {+}\sum_{ijklabcd} \frac{1}{24} (\sigma_{kl}^{cd})^* \langle ij||ab \rangle \sigma_{ij}^{ac} \sigma_{kl}^{bd} \nonumber \\ &&\phantom{+\Bigg(} {+}\sum_{ijklabcd} \frac{1}{24} 
(\sigma_{kl}^{cd})^* \langle ij||ab \rangle \sigma_{ik}^{ab} \sigma_{jl}^{cd} - \sum_{ijklabcd} \frac{1}{96} (\sigma_{kl}^{cd})^* \langle ij||ab \rangle \sigma_{ij}^{cd} \sigma_{kl}^{ab} \Big) + h.c.\Bigg) \nonumber \\ &+&\Bigg(\Big( \phantom{-} \sum_{ijklabc} \frac{1}{4} (\sigma_{jl}^{bc})^* \langle ij||ak \rangle \sigma_i^a\sigma_{kl}^{bc} - \sum_{ijkabcd} \frac{1}{4} (\sigma_{jk}^{cd})^* \langle ic||ab \rangle \sigma_i^a\sigma_{jk}^{bd} \nonumber \\ &&\phantom{+\Bigg(} + \sum_{ijklabc} \frac{1}{2} (\sigma_{il}^{bc})^* \langle kj||ai \rangle \sigma_{j}^{b}\sigma_{lk}^{ca} - \sum_{ijkabcd} \frac{1}{2} (\sigma_{jk}^{cd})^* \langle ic||ab \rangle \sigma_{j}^{b}\sigma_{ik}^{ad} \nonumber \\ &&\phantom{+\Bigg(} + \sum_{ijklabc} \frac{1}{12} (\sigma_{il}^{bc})^* \langle jk||ia \rangle \sigma_{l}^{c}\sigma_{kj}^{ab} - \sum_{ijkabcd} \frac{1}{12} (\sigma_{jk}^{cd})^* \langle ic||ab \rangle \sigma_{k}^{d}\sigma_{ij}^{ab} \nonumber \\ &&\phantom{+\Bigg(} - \sum_{ijklabc} \frac{1}{8} (\sigma_{il}^{cb})^* \langle kj||ia \rangle \sigma_{l}^{a}\sigma_{kj}^{cb} + \sum_{ijkabcd} \frac{1}{8} (\sigma_{jk}^{dc})^* \langle ci||ab \rangle \sigma_{i}^{d}\sigma_{kj}^{ab} \Big) + h.c.\Bigg) \nonumber \\ &+&\Bigg(\Big( - \sum_{ijkabc} \frac{1}{12} (\sigma_{k}^{c})^* \langle ij||ab \rangle \sigma_i^a\sigma_{jk}^{bc} + \sum_{ijkabc} \frac{1}{12} (\sigma_{k}^{c})^* \langle ij||ab \rangle \sigma_i^c\sigma_{jk}^{ba} \nonumber \\ &&\phantom{+\Bigg(} + \sum_{ijkabc} \frac{1}{12} (\sigma_{k}^{c})^* \langle ij||ab \rangle \sigma_{k}^{a}\sigma_{ij}^{cb} + \sum_{ijkabc} \frac{1}{3} (\sigma_{ik}^{bc})^* \langle jb||ai \rangle \sigma_{j}^{c}\sigma_{k}^{a} \nonumber \\ &&\phantom{+\Bigg(} - \sum_{ijklab} \frac{1}{12} (\sigma_{ij}^{ab})^* \langle kl||ij \rangle \sigma_{k}^{a}\sigma_{l}^{b} - \sum_{ijabcd} \frac{1}{12} (\sigma_{ij}^{cd})^* \langle cd||ab \rangle \sigma_{j}^{b}\sigma_{i}^{a} \Big) + h.c.\Bigg) \nonumber \\ &+&\Bigg(\Big( + \sum_{ijkab} \frac{1}{3} (\sigma_{k}^{b})^* \langle ij||ak 
\rangle \sigma_{i}^{a}\sigma_{j}^{b} - \sum_{ijabc} \frac{1}{3} (\sigma_{j}^{a})^* \langle ai||bc \rangle \sigma_{j}^{b}\sigma_{i}^{c} \Big) + h.c.\Bigg). \label{en_3_7th} \end{eqnarray} The qUCCSD amplitude equations comprise up to double commutators for $\bar{H}_{ai}$ \begin{eqnarray} &&\bar{H}^{\text{qUCCSD}}_{ai}=\bar{H}_{ai}^{1}+\bar{H}_{ai}^{2}=0, \\ \bar{H}_{ai}^{1} &=& \red{\sum_b f_{ab} \sigma_i^b - \sum_j \sigma_j^a f_{ji}}+ \frac{1}{2}\sum_{jbc}\langle aj||cb\rangle \sigma_{ij}^{cb}- \frac{1}{2}\sum_{jkb}\langle kj||ib\rangle \sigma_{jk}^{ba}+ \sum_{jb}\langle aj||ib\rangle \sigma_j^{b}+ \frac{1}{2}\sum_{jb}(\sigma_j^b)^*\langle ab||ij\rangle, \label{hbarai1} \\ % \bar{H}_{ai}^{2} &=& - \sum_{jklbc} \frac{1}{2} (\sigma_{jk}^{bc})^* \langle al||ik\rangle \sigma_{jl}^{bc} + \sum_{jkbcd} \frac{1}{2} (\sigma_{jk}^{bd})^* \langle ad||ic\rangle \sigma_{jk}^{bc} - \sum_{jklbc} (\sigma_{jk}^{bc})^* \langle bl||ji\rangle \sigma_{kl}^{ca} + \sum_{jkbcd} (\sigma_{jk}^{bc})^* \langle ab||dj\rangle \sigma_{ki}^{cd} \nonumber \\ &-& \sum_{jklbc} \frac{1}{4} (\sigma_{jk}^{bc})^* \langle bl||jk\rangle \sigma_{il}^{ac} + \sum_{jkbcd} \frac{1}{4} (\sigma_{jk}^{bd})^* \langle bd||jc\rangle \sigma_{ik}^{ac} + \sum_{jklbc} \frac{1}{4} (\sigma_{jk}^{bd})^* \langle bd||ic\rangle \sigma_{jk}^{ca} - \sum_{jkbcd} \frac{1}{4} (\sigma_{jk}^{bc})^* \langle al||jk\rangle \sigma_{il}^{cb} \nonumber \\ &+& \sum_{jkbc} \frac{5}{12} \langle jk||bc\rangle \sigma_j^b \sigma_{ik}^{ac} - \sum_{jkbc} \frac{1}{3} \langle jk||bc\rangle \sigma_k^a \sigma_{ij}^{cb} - \sum_{jkbc} \frac{1}{3} \langle jk||bc\rangle \sigma_i^c \sigma_{jk}^{ba} - \sum_{jkbc} \frac{1}{2} (\sigma_k^c)^* \langle cj||ib\rangle \sigma_{jk}^{ba} \nonumber \\ &-& \sum_{jkbc} \frac{1}{2} (\sigma_k^c)^* \langle aj||kb\rangle \sigma_{ij}^{cb} - \sum_{jkbc} \frac{1}{3} (\sigma_{jk}^{cb})^* \langle ab||ij\rangle \sigma_k^c - \sum_{jkbc} \frac{1}{6} (\sigma_{jk}^{bc})^* \langle bc||ji\rangle \sigma_k^a - \sum_{jkbc} \frac{1}{6}
(\sigma_{jk}^{bc})^* \langle ab||kj\rangle \sigma_i^c \nonumber \\ &+& \sum_{jbcd} \frac{1}{4} (\sigma_j^c)^* \langle ac || bd \rangle \sigma_{ij}^{bd} + \sum_{jklb} \frac{1}{4} (\sigma_k^b)^* \langle jl || ik \rangle \sigma_{jl}^{ab} \nonumber \\ &+& \sum_{jbc} \langle aj||cb\rangle \sigma_j^{b} \sigma_i^c - \sum_{jkb} \langle kj||ib\rangle \sigma_j^{b} \sigma_k^a + \sum_{jbc}\frac{1}{2} (\sigma_j^b)^* \langle ab||cj\rangle \sigma_i^c - \sum_{jkb}\frac{1}{2} (\sigma_j^b)^* \langle kb||ij\rangle \sigma_k^a \nonumber \\ &+& \sum_{jbc}\frac{1}{2} (\sigma_j^c)^* \langle ac||ib\rangle \sigma_j^b - \sum_{jkb}\frac{1}{2} (\sigma_j^b)^* \langle ak||ij\rangle \sigma_k^b, \label{hbarai2} \end{eqnarray} and for $\bar{H}_{ab,ij}$ \begin{eqnarray} \bar{H}^{\text{qUCCSD}}_{ab,ij}&=&\bar{H}_{ab,ij}^0+\bar{H}_{ab,ij}^1+\bar{H}_{ab,ij}^2=0, \\ \bar{H}_{ab,ij}^{0}&=&\langle ab || ij \rangle, \label{hbarabij0} \\ \bar{H}_{ab,ij}^{1} &=& \sum_c f_{ac} \sigma_{ij}^{cb} - \sum_k f_{ki} \sigma_{kj}^{ab}+ \frac{1}{2} \sum_{kl} \langle kl||ij \rangle \sigma_{kl}^{ab}+ \frac{1}{2} \sum_{cd} \langle ab||cd \rangle \sigma_{ij}^{cd}+ P(ij) P(ab) \sum_{kc} \langle ak||ic \rangle \sigma_{jk}^{bc} \nonumber \\ &-& P(ab) \sum_{k} \langle ka||ji\rangle \sigma_k^{b} + P(ij) \sum_{c} \langle ab||ic\rangle \sigma_j^{c}, \label{hbarabij1} \\ % \bar{H}_{ab,ij}^{2} &=& P(ij) P(ab) \sum_{klcd} \frac{1}{3} \langle kl||cd\rangle \sigma_{ik}^{ac} \sigma_{jl}^{bd} + \sum_{klcd} \frac{1}{6} \langle kl||cd \rangle \sigma_{ij}^{cd} \sigma_{kl}^{ab} - P(ab) \sum_{klcd} \frac{1}{3} \langle kl||cd \rangle \sigma_{ij}^{ad} \sigma_{kl}^{cb} \nonumber \\ &-& P(ij) \sum_{klcd} \frac{1}{3} \langle kl||cd \rangle \sigma_{il}^{ab} \sigma_{jk}^{dc} + P(ij) P(ab) \sum_{klcd} \frac{1}{3} (\sigma_{kl}^{cd})^* \langle ad||il\rangle \sigma_{jk}^{bc} + \sum_{klcd} \frac{1}{12} (\sigma_{kl}^{cd})^* \langle cd||ij\rangle \sigma_{kl}^{ab} \nonumber \\ &+& \sum_{klcd} \frac{1}{12} (\sigma_{kl}^{cd})^* \langle ab||kl\rangle 
\sigma_{ij}^{cd} - P(ab) \sum_{klcd} \frac{1}{6} (\sigma_{kl}^{cd})^* \langle ad||ij\rangle \sigma_{kl}^{cb} - P(ij) \sum_{klcd} \frac{1}{6} (\sigma_{kl}^{cd})^* \langle ab||il\rangle \sigma_{jk}^{dc} \nonumber \\ &-& P(ab) \sum_{klcd} \frac{1}{6} (\sigma_{kl}^{cd})^* \langle cb||kl\rangle \sigma_{ij}^{ad} - P(ij) \sum_{klcd} \frac{1}{6} (\sigma_{kl}^{cd})^* \langle cd||kj\rangle \sigma_{il}^{ab} \nonumber \\ &-& P(ij) \sum_{klc} (\sigma_l^{c})^* \langle ck||lj\rangle \sigma_{ik}^{ab} + P(ab) \sum_{lcd} (\sigma_l^{c})^* \langle bc||dl\rangle \sigma_{ij}^{ad} + P(ij) \sum_{lcd} \frac{1}{2}(\sigma_l^{c})^* \langle ab||id\rangle \sigma_{jl}^{dc}\nonumber \\ &-& P(ab) \sum_{klc} \frac{1}{2}(\sigma_l^{c})^* \langle ak||ij\rangle \sigma_{kl}^{bc} + \sum_{klc} (\sigma_l^{c})^* \langle ck||ji\rangle \sigma_{kl}^{ab} + P(ij) P(ab) \sum_{klc} (\sigma_l^{c})^* \langle bk||li\rangle \sigma_{jk}^{ca} \nonumber \\ &-& P(ij) P(ab) \sum_{lcd} (\sigma_l^{c})^* \langle ac||dj\rangle \sigma_{il}^{db} - \sum_{lcd} (\sigma_l^{c})^* \langle ab||dl\rangle \sigma_{ij}^{dc} - P(ij) \sum_{klc} \langle kl||cj\rangle \sigma_k^{c} \sigma_{il}^{ab} \nonumber \\ &+& P(ab) \sum_{kcd} \langle kb||cd \rangle \sigma_k^{c} \sigma_{ij}^{ad} - P(ij) P(ab) \sum_{klc} \langle kl||cj \rangle \sigma_l^{b} \sigma_{ik}^{ac} + P(ij) P(ab) \sum_{kcd} \langle kb||cd \rangle \sigma_j^{d} \sigma_{ik}^{ac} \nonumber \\ &+& P(ij) \frac{1}{2} \sum_{klc} \langle kl||ci\rangle \sigma_j^{c} \sigma_{kl}^{ba} - P(ab) \frac{1}{2} \sum_{kcd} \langle ka||cd\rangle \sigma_k^{b} \sigma_{ij}^{dc} \nonumber \\ &+& P(ab) \sum_{kl} \frac{1}{2} \langle kl||ij \rangle \sigma_k^a \sigma_l^b - P(ij) P(ab) \sum_{kc} \langle ak||cj \rangle \sigma_i^c \sigma_k^b + P(ij) \sum_{cd} \frac{1}{2} \langle ab||cd \rangle \sigma_i^c \sigma_j^d \nonumber \\ &-& P(ab) \sum_{kc} \frac{1}{3} (\sigma_k^{c})^* \langle ac||ij\rangle \sigma_k^b - P(ij) \sum_{kc} \frac{1}{3} (\sigma_k^{c})^* \langle ab||ik\rangle \sigma_j^c. 
\label{hbarabij2} \end{eqnarray} The qUCCSD scheme truncates $\bar{H}_{ij}$, $\bar{H}_{ab}$, and $\bar{H}_{ia,bj}$ in the singles-singles block of the excited-state eigenvalue equations up to double commutators. The expression for $\bar{H}^{\text{qUCCSD}}_{ij}$ is thus given by \begin{eqnarray} \bar{H}^{\text{qUCCSD}}_{ij}&=&\bar{H}_{ij}^{0}+\bar{H}_{ij}^{1}+\bar{H}_{ij}^{2}, \\ \bar{H}_{ij}^{0}&=& f_{ij}, \label{hbarij0}\\ \bar{H}_{ij}^{1}&=& \frac{1}{4}\sum_{kab}\langle ik||ab\rangle \sigma_{jk}^{ab} + \sum_{ka}\langle ik||ja\rangle \sigma_k^{a}+ h.c., \label{hbarij1} \\ % \bar{H}_{ij}^{2} &=& \left(\sum_{klabc}\frac{1}{2}(\sigma_{kl}^{bc})^* \langle ic||al \rangle \sigma_{jk}^{ab} +\sum_{klmab}\frac{1}{8} (\sigma_{kl}^{ab})^*\langle im||kl \rangle \sigma_{jm}^{ab} + h.c. \right) -\sum_{klmab}\frac{1}{2}(\sigma_{kl}^{ab})^* \langle im||jl \rangle \sigma_{km}^{ab} \nonumber \\ &+& \sum_{klabc}\frac{1}{2}(\sigma_{kl}^{ac})^* \langle ic||jb \rangle \sigma_{kl}^{ab} \nonumber \\ &+& \left(\sum_{kabc}\frac{1}{4} (\sigma_k^b)^* \langle ib||ac \rangle \sigma_{jk}^{ac} -\sum_{klab}\frac{1}{2} (\sigma_k^b)^* \langle il||ak \rangle \sigma_{jl}^{ab} +\sum_{klab}\frac{1}{2} (\sigma_l^b)^* \langle ik||ja\rangle \sigma_{kl}^{ab} + h.c. \right) \nonumber \\ &+& \left(\sum_{kab}\frac{5}{12}\langle ik||ab \rangle \sigma_{j}^{a} \sigma_{k}^{b} +\sum_{kab}\frac{1}{2}(\sigma_{k}^{b})^* \langle ib||ak\rangle \sigma_{j}^{a} + h.c. \right) -\sum_{kla}(\sigma_{l}^{a})^* \langle ik||jl \rangle \sigma_{k}^{a} \nonumber \\ &+& \sum_{kab}(\sigma_{k}^{a})^* \langle ia||jb \rangle \sigma_{k}^{b}.
\label{hbarij2} \end{eqnarray} Similarly, $\bar{H}^{\text{qUCCSD}}_{ab}$ and $\bar{H}^{\text{qUCCSD}}_{ia,bj}$ can be written as \begin{eqnarray} \bar{H}^{\text{qUCCSD}}_{ab}&=&\bar{H}_{ab}^{0}+\bar{H}_{ab}^{1}+\bar{H}_{ab}^{2}, \\ \bar{H}_{ab}^{0}&=& f_{ab}, \label{hbarab0}\\ \bar{H}_{ab}^{1}&=& \left( -\sum_{ijc}\frac{1}{4}\langle ij||bc \rangle \sigma_{ij}^{ac} +\sum_{ic}\langle ai||bc \rangle \sigma_i^{c} + h.c.\right), \label{hbarab1} \\ % \bar{H}_{ab}^{2} &=& \left( -\sum_{ijkcd}\frac{1}{2} (\sigma_{ij}^{cd})^* \langle kd||bj \rangle \sigma_{ik}^{ca} -\sum_{ijcdf}\frac{1}{8} (\sigma_{ij}^{fd})^* \langle df||cb \rangle \sigma_{ij}^{ac} + h.c. \right) +\sum_{ijcdf}\frac{1}{2} (\sigma_{ij}^{fd})^* \langle ad||bc \rangle \sigma_{ij}^{fc} \nonumber \\ &-& \frac{1}{2}\sum_{ijkcd}(\sigma_{ij}^{cd})^*\langle ka||jb \rangle \sigma_{ik}^{cd} \nonumber \\ &+& \left(\sum_{ijkc}\frac{1}{4}(\sigma_j^c)^* \langle ik||bj \rangle \sigma_{ik}^{ac} - \sum_{ijcd}\frac{1}{2}(\sigma_j^c)^* \langle ic||bd\rangle \sigma_{ij}^{ad} + \sum_{ijcd}\frac{1}{2}(\sigma_j^d)^* \langle ia||cb\rangle \sigma_{ij}^{cd} + h.c. \right) \nonumber \\ &+& \left(-\frac{5}{12}\sum_{ijc}\langle ij||bc \rangle \sigma_i^a \sigma_j^c -\sum_{ijc}\frac{1}{2}(\sigma_j^c)^*\langle ic||bj \rangle \sigma_i^a + h.c. 
\right) -\sum_{ijc} (\sigma_i^c)^*\langle ja||ib \rangle \sigma_j^c \nonumber \\ &+& \sum_{icd} (\sigma_i^d)^* \langle ad||bc \rangle \sigma_i^c, \label{hbarab2} \end{eqnarray} and \begin{eqnarray} \bar{H}^{\text{qUCCSD}}_{ia,bj}&=&\bar{H}_{ia,bj}^{0}+\bar{H}_{ia,bj}^{1}+\bar{H}_{ia,bj}^{2}, \\ \bar{H}_{ia,bj}^{0}&=& \langle ia || bj \rangle, \label{hbariabj0} \\ \bar{H}_{ia,bj}^{1}&=& \frac{1}{2}\sum_{kc}(\sigma_{ik}^{bc})^* \langle ac||jk \rangle + \sum_{c}\langle ai||cb \rangle \sigma_j^{c} -\sum_{k}\langle ki||jb \rangle \sigma_k^{a} + h.c., \label{hbariajb1} \\ \bar{H}_{ia,bj}^{2}&=& \left( \frac{1}{4}\sum_{klmc}(\sigma_{kl}^{bc})^* \langle im||kl \rangle \sigma_{jm}^{ac} +\frac{1}{4}\sum_{kcde}(\sigma_{ik}^{ce})^* \langle ce||bd \rangle \sigma_{jk}^{ad} -\frac{1}{2}\sum_{klcd}(\sigma_{ik}^{cd})^* \langle lc||kb \rangle \sigma_{jl}^{ad} \right.\nonumber \\ &-& \frac{1}{2}\sum_{klcd}(\sigma_{kl}^{bd})^* \langle id||kc \rangle \sigma_{jl}^{ac} -\frac{1}{4}\sum_{klcd}(\sigma_{kl}^{cd})^* \langle id||bj \rangle \sigma_{kl}^{ca} -\frac{1}{4}\sum_{klcd}(\sigma_{kl}^{cd})^* \langle ia||bl \rangle \sigma_{jk}^{dc} \nonumber \\ &-&\left. \sum_{klcd}(\sigma_{kl}^{bd})^* \langle ia||kc \rangle \sigma_{jl}^{cd} + h.c. \right) \nonumber \\ &+& \sum_{klmc}(\sigma_{lm}^{bc})^* \langle ik||lj \rangle \sigma_{km}^{ac} + \sum_{kcde}(\sigma_{ik}^{de})^* \langle ad||cb \rangle \sigma_{jk}^{ce} +\frac{1}{2}\sum_{klcd}(\sigma_{il}^{cd})^* \langle ka||bl \rangle \sigma_{kj}^{cd} \nonumber \\ &+& \frac{1}{2}\sum_{klcd}(\sigma_{kl}^{cb})^* \langle ic||dj \rangle \sigma_{kl}^{ad} \nonumber \\ &+& \left( -\frac{1}{2}\sum_{klc}(\sigma_{l}^{c})^* \langle ik||bj \rangle \sigma_{kl}^{ac} +\frac{1}{2}\sum_{kcd}(\sigma_{k}^{d})^* \langle ia||bc \rangle \sigma_{jk}^{cd} -\frac{1}{2}\sum_{klc}(\sigma_{k}^{c})^* \langle il||bk \rangle \sigma_{jl}^{ac} \right.
\nonumber \\ &+& \frac{1}{2}\sum_{kcd}(\sigma_{k}^{d})^* \langle id||bc \rangle \sigma_{jk}^{ac} +\frac{1}{2}\sum_{kcd}(\sigma_{i}^{d})^* \langle kd||cb \rangle \sigma_{jk}^{ac} -\frac{1}{2}\sum_{klc}(\sigma_{k}^{b})^* \langle il||kc \rangle \sigma_{jl}^{ac} \nonumber \\ &-& \frac{1}{4}\sum_{klc}(\sigma_{i}^{c})^* \langle kl||bj \rangle \sigma_{kl}^{ac} +\frac{1}{2}\sum_{klc}(\sigma_{l}^{b})^* \langle ik||cj \rangle \sigma_{kl}^{ac} -\frac{1}{4}\sum_{kcd}(\sigma_{k}^{b})^* \langle ai||cd \rangle \sigma_{jk}^{cd} \nonumber \\ &-&\left. \frac{1}{2}\sum_{kcd}(\sigma_{i}^{d})^* \langle ka||bc \rangle \sigma_{jk}^{cd} + h.c. \right) \nonumber \\ &+& \left(-\frac{2}{3}\sum_{kc}\langle ik||bc \rangle \sigma_{k}^{a}\sigma_{j}^{c} -\frac{1}{2}\sum_{kc}(\sigma_{k}^{c})^* \langle ic||bj \rangle \sigma_{k}^{a} -\frac{1}{2}\sum_{kc}(\sigma_{k}^{c})^* \langle ia||bk \rangle \sigma_{j}^{c} \right. \nonumber \\ &\red{-}&\left. \red{\sum_{kc}(\sigma_{k}^{b})^* \langle ia || kc \rangle \sigma_{j}^{c}}+h.c. \right) +\sum_{kl}(\sigma_{l}^{b})^* \langle ik||lj \rangle \sigma_{k}^{a} +\sum_{cd}(\sigma_{i}^{d})^* \langle ad||cb \rangle \sigma_{j}^{c}, \label{hbariabj2} \end{eqnarray} respectively. $\bar{H}_{ab,ci}$, $\bar{H}_{ia,kj}$, $\bar{H}_{ibc,ajk}$ are involved in the singles-doubles and doubles-singles block. They are truncated up to single commutators, i.e., $\bar{H}^{\text{qUCCSD}}_{ab,ci}$ is given by \begin{eqnarray} \bar{H}^{\text{qUCCSD}}_{ab,ci}&=&\bar{H}_{ab,ci}^{0}+\bar{H}_{ab,ci}^{1}, \\ \bar{H}_{ab,ci}^{0}&=& \langle ab||ci \rangle, \\ \bar{H}_{ab,ci}^{1}&=& P(ab) \sum_{jd}\langle aj||cd\rangle \sigma_{ij}^{bd} +\frac{1}{2} \sum_{kj} \langle jk||ci \rangle \sigma_{jk}^{ab} -\frac{1}{2}\sum_{j} (\sigma_j^c)^* \langle ab || ji \rangle \nonumber\\ &+&\sum_{d}\langle ab||cd \rangle \sigma_i^{d} -P(ab) \sum_{j} \langle aj||ci\rangle \sigma_j^{b}. 
\label{Habci1} \end{eqnarray} $\bar{H}^{\text{qUCCSD}}_{ia,jk}$ can be written as \begin{eqnarray} \bar{H}^{\text{qUCCSD}}_{ia,jk}&=&\bar{H}_{ia,jk}^{0}+\bar{H}_{ia,jk}^{1}, \\ \bar{H}_{ia,jk}^{0} &=& \langle ia || jk \rangle, \\ \bar{H}_{ia,jk}^{1} &=& P(jk) \sum_{lb}\langle il||jb \rangle \sigma_{kl}^{ab} +\frac{1}{2}\sum_{bc}\langle ia||bc \rangle \sigma_{jk}^{bc} +{\frac{1}{2} \sum_b (\sigma_i^b)^* \langle ba || jk \rangle }\nonumber \\ &-& \sum_{l}\langle il||jk \rangle \sigma_l^{a} +P(jk) \sum_{b} \langle ai||bj \rangle \sigma_k^{b}, \label{Hiajk1} \end{eqnarray} and the three-body term takes the form \begin{equation} \bar{H}^{\text{qUCCSD}}_{ibc,ajk} = - P(jk) \sum_{l} \langle il||aj \rangle \red{\sigma}_{kl}^{cb} + P(bc) \sum_{d} \langle ib||ad \rangle \red{\sigma}_{jk}^{dc}. \label{hibcajk} \end{equation} \red{The contributions from this three-body term to the excited-state eigenvalue equations are evaluated using efficient algorithms similar to those of the EOM-CCSD method,\cite{Stanton1993} i.e., for the evaluation of the contribution $\sum_{jkbc}\bar{H}_{ibc,ajk}^\mathrm{qUCCSD} C_{jk}^{bc}$ to the singles residue one first contracts $C_{jk}^{bc}$ with $\sigma_{kl}^{bc}$ or $\sigma_{jk}^{dc}$ to form one-body intermediates, while for the evaluation of the contribution $\sum_{ia}\bar{H}_{ibc,ajk}^\mathrm{qUCCSD} C_{i}^{a}$ to the doubles residue one first contracts $C_i^a$ with $\langle il || aj \rangle$ or $\langle ib || ad \rangle$ to form one-body intermediates.} $\bar{H}_{ij,kl}$ and $\bar{H}_{ab,cd}$ contribute to the doubles-doubles block and in the qUCCSD scheme comprise only the bare Hamiltonian integrals \begin{eqnarray} \bar{H}^{\text{qUCCSD}}_{ij,kl} = \langle ij || kl \rangle,~ \bar{H}^{\text{qUCCSD}}_{ab,cd}= \langle ab || cd \rangle. \end{eqnarray} Note that the qUCCSD scheme also uses the bare Hamiltonian integrals for $\bar{H}_{ij}$, $\bar{H}_{ab}$, and $\bar{H}_{ia,bj}$ in the calculations of the contributions from
$\bar{H}_{\mathrm{DD}}$ to the excited-state equations. $\bar{H}_{iab,jcd}$ and $\bar{H}_{ija,klb}$ do not contribute to the qUCCSD working equations. \red{The qUCCSD ground-state amplitude equations are solved using the same iterative procedure as in CCSD, while the excited-state eigenvalue equations are solved using the Davidson algorithm.\cite{Stanton1993,Davidson1975} As in CCSD and EOM-CCSD, the most time-consuming steps in qUCCSD are ``particle-particle ladder'' contractions of the type $\sum_{cd} \langle ab || cd \rangle \sigma_{ij}^{cd}$, with an $N_o^2 N_v^4$ scaling, and ``ring'' contractions of the type $\sum_{kc} \langle ak || ic \rangle \sigma_{jk}^{bc}$ or $\sum_{kc} \langle ik || ac \rangle \sigma_{jk}^{bc}$, with an $N_o^3N_v^3$ scaling, in which $N_o$ and $N_v$ represent the numbers of occupied and virtual orbitals. The qUCCSD ground-state amplitude equations involve one particle-particle ladder contraction and four ring contractions per iteration, to be compared with one particle-particle ladder contraction and two ring contractions in CCSD. Overall, the computing time of a qUCCSD ground-state calculation is expected to be around twice that of a CCSD calculation.
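As an illustration of the scalings just quoted, the two contraction types can be sketched with NumPy's \texttt{einsum}; the tensors below are arbitrary random stand-ins for the antisymmetrized integrals and doubles amplitudes, and the dimensions are hypothetical:

```python
import numpy as np

No, Nv = 4, 8  # hypothetical numbers of occupied/virtual spin orbitals
rng = np.random.default_rng(0)

# Random stand-ins for antisymmetrized integrals and doubles amplitudes.
V_abcd = rng.standard_normal((Nv, Nv, Nv, Nv))  # <ab||cd>
V_akic = rng.standard_normal((Nv, No, No, Nv))  # <ak||ic>
sigma2 = rng.standard_normal((No, No, Nv, Nv))  # sigma_{ij}^{ab}

# Particle-particle ladder, sum_cd <ab||cd> sigma_{ij}^{cd}: O(No^2 Nv^4) work.
ladder = np.einsum('abcd,ijcd->ijab', V_abcd, sigma2)

# Ring contraction, sum_kc <ak||ic> sigma_{jk}^{bc}: O(No^3 Nv^3) work.
ring = np.einsum('akic,jkbc->ijab', V_akic, sigma2)

print(ladder.shape, ring.shape)  # (4, 4, 8, 8) (4, 4, 8, 8)
```

Counting one ladder and two (CCSD) versus four (qUCCSD) ring contractions per iteration gives the roughly twofold cost ratio stated above.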
The qUCCSD excited-state eigenvalue equations share the same particle-particle ladder and ring contractions as EOM-CCSD and thus have an essentially identical computational cost per iteration.} \section{Computational Details} The qUCCSD method for the calculation of ground-state energies and excitation energies as detailed in Section II.C has been implemented in the X2CSOCC module \cite{Liu2018-SOCC} of the CFOUR program\cite{CFOUR,cfour2020} on top of the previous implementation of the UCC3 method.\cite{Liu2018} In order to demonstrate the accuracy of the qUCCSD method for challenging ground-state problems, qUCCSD calculations of the equilibrium structures and harmonic frequencies of \ce{CuH}, \ce{CuF}, and \ce{O3} using cc-pVTZ basis sets \cite{Dunning1989,Balabanov2005} have been carried out and compared with the corresponding results obtained from CCSD, UCC(4), and UCC3 calculations. The copper-containing molecules have been chosen as examples with strong orbital-relaxation effects that have been shown to be difficult to treat using approximate variants of CC methods.\cite{Bohme1994,Hrusak1997} The calculations of structural parameters for the ozone molecule, especially the vibrational frequency for the asymmetric stretching mode and the ordering of the asymmetric and symmetric stretching frequencies, played an important role in establishing the CCSD and CCSD(T) methods. \cite{Scuseria1989,Stanton1989,Magers1989,Pople1989,Lee1990,Watts1991,Watts1998,Kucharski1999} In spite of a certain degree of diradical character in ozone, CCSD and CCSD(T) can provide qualitatively correct results. It is important for a UCC method with a truncation of the commutator expansion to have this robustness. The classic benchmark set compiled by Trofimov {\it{et al.}}, consisting of excitation energies in \ce{H2O}, \ce{HF}, \ce{N2}, \ce{Ne}, \ce{CH2}, \ce{BH}, and \ce{C2}, \cite{Schirmer2002} has been used to demonstrate the accuracy of qUCCSD excitation energies.
We have used the same structures and basis sets as in the previous calculations \cite{Koch1995,Christiansen1996,Larsen2001,Hald2001,Schirmer2002} summarized in footnotes 81 and 82 of Ref. \cite{Liu2018}. \red{The full configuration interaction (FCI) excitation energies have been taken as reference values. The results obtained using the EOM-CCSD, ADC(3), and UCC3 methods, which have the same computational scaling as qUCCSD, have also been presented for comparison. We mention that the CC3 method includes an iterative treatment of triple excitations and thus is in general more accurate, but at the same time more time-consuming, than the qUCCSD method.} Here \ce{H2O}, \ce{HF}, \ce{N2}, and \ce{Ne} serve as example molecules for which the perturbation series converge smoothly, and \ce{CH2}, \ce{BH}, and \ce{C2} as examples in the absence of a smooth convergence of the ADC series. We focus our discussion on the improvement in the performance of qUCCSD over the previous UCC3 method. \red{Although the general characterization of same-symmetry conical intersections using qUCCSD will have to wait for the implementation of analytic gradients and derivative couplings, it is worth mentioning that the Hermitian nature of qUCCSD enables the description of degeneracies between electronic states. As an example, we have enclosed in the Supporting Information a qUCCSD calculation of the potential energy surfaces in the immediate vicinity of one of the conical intersection points between the $2^1A_1$ and $3^1A_1$ states of the \ce{HOF} molecule. The qUCCSD calculations show the correct degeneracy at the intersection point and the correct linear behavior of the electronic energies with respect to the displacements. In contrast, EOM-CCSD calculations produce complex eigenvalues when the energies of these two states are within 0.03 eV of each other.
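The role of Hermiticity here can be illustrated with a toy two-level model (hypothetical numbers, not qUCCSD data): a Hermitian effective Hamiltonian has a real spectrum by construction, whereas a non-Hermitian one can yield a complex-conjugate eigenvalue pair once two nearly degenerate diagonal energies come within the coupling strength of each other:

```python
import numpy as np

# Toy two-level model: E1, E2 are nearly degenerate diagonal energies
# and c is a small off-diagonal coupling (all values hypothetical).
E1, E2, c = 1.000, 1.010, 0.020

H_herm = np.array([[E1, c], [c, E2]])   # symmetric: real eigenvalues guaranteed
H_nonh = np.array([[E1, c], [-c, E2]])  # non-symmetric: eigenvalues may turn complex

print(np.any(np.iscomplex(np.linalg.eigvals(H_herm))))  # False
print(np.any(np.iscomplex(np.linalg.eigvals(H_nonh))))  # True
```

In the non-Hermitian case the discriminant $(E_1-E_2)^2-4c^2$ is negative for $|E_1-E_2|<2|c|$, which mirrors the complex EOM-CCSD eigenvalues observed near the intersection.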
} \section{Results and Discussion} \subsection{Equilibrium structures and harmonic frequencies for \texorpdfstring{\ce{CuH}}{}, \texorpdfstring{\ce{CuF}}{}, and \texorpdfstring{\ce{O3}}{}} Copper-containing molecules serve as excellent test cases for the robustness of approximate many-body methods. They exhibit significant orbital-relaxation effects, e.g., the largest CCSD singles amplitudes in CuH and CuF amount to around 0.06. On the other hand, the wavefunctions are dominated by a single determinant, and the CCSD and CCSD(T) methods can provide accurate results for properties of CuH and CuF, e.g., as shown in Refs. \cite{Bohme1994,Hrusak1997,Cheng2012} Here we focus our discussion on the assessment of the qUCCSD, UCC3, and UCC(4) results using the CCSD results as the reference values. As shown in Table \ref{gs}, the UCC3 and UCC(4) results exhibit large discrepancies compared with the CCSD ones, e.g., the UCC3 harmonic frequency of 739 cm$^{-1}$ for CuF is more than 100 cm$^{-1}$ greater than the CCSD value of 609 cm$^{-1}$. In contrast, the qUCCSD results agree closely with the CCSD values, with the deviations in frequencies amounting to 8 cm$^{-1}$ for CuH and 2 cm$^{-1}$ for CuF. Interestingly, the UCC3 and UCC(4) calculations of CuH and CuF produced singles amplitudes larger than 0.2. This might be attributed to the fact that UCC3 and UCC(4) retain only linear terms involving single excitations in the amplitude equations, which results in larger $t_1$ amplitudes when attempting to account for the large orbital-relaxation effects. It is thus essential to include the quadratic terms involving single excitations in the amplitude equations to obtain robust performance. Ozone is a classic molecule for testing the accuracy of electronic-structure methods. In particular, the asymmetric stretching frequency, $\omega_3$, of \ce{O3} is very sensitive to the treatment of electron correlation.
For example, calculations of $\omega_3$ demonstrated the importance of the fifth-order contribution in the noniterative triples correction of the CCSD(T) method. \cite{Pople1989} Although the ground state of ozone possesses a certain degree of biradical character, i.e., the largest $t_2$ amplitude amounts to around 0.2, the CCSD and CCSD(T) methods can provide quite accurate equilibrium structures and vibrational frequencies. \cite{Scuseria1989,Stanton1989,Magers1989,Pople1989,Lee1990,Watts1991,Watts1998,Kucharski1999} As shown in Table \ref{gs}, the UCC3 and UCC(4) calculations provide inaccurate results for the structures and harmonic frequencies of ozone. UCC3 grossly overestimated $\omega_3$, and UCC(4) produced an imaginary harmonic frequency for this mode. The qUCCSD method obtained structures and vibrational frequencies in close agreement with the CCSD results, demonstrating the robustness of the commutator truncation scheme. As expected, the qUCCSD results are slightly worse than the CCSD ones, with the latter obtaining the correct ordering of $\omega_2$ and $\omega_3$. \cite{Scuseria1989} The inclusion of higher commutators is expected to further improve the performance over qUCCSD. \subsection{Excitation energies of \texorpdfstring{\ce{H2O}}{}, \texorpdfstring{\ce{HF}}{}, \texorpdfstring{\ce{N2}}{}, and \texorpdfstring{\ce{Ne}}{}} We use \ce{H2O}, \ce{HF}, \ce{N2}, and \ce{Ne} as examples for which the M{\o}ller-Plesset perturbation series converge smoothly. The excitation energies for these molecules computed using the qUCCSD method are summarized in Tables \ref{t1}-\ref{t4} together with the corresponding FCI, ADC(3), UCC3, and EOM-CCSD values. Here we use the FCI values as the reference and give the other results as deviations from the FCI values. The balanced inclusion of high-order terms in the qUCCSD scheme provides uniformly better excitation energies than UCC3.
The mean absolute deviations of the qUCCSD results amount to \red{0.12} eV for \ce{H2O}, \red{0.13} eV for \ce{HF}, \red{0.19} eV for \ce{N2}, and \red{0.18} eV for Ne, which represents a consistent improvement over the UCC3 values of 0.16 eV for \ce{H2O}, 0.19 eV for \ce{HF}, 0.21 eV for \ce{N2}, and 0.22 eV for Ne. The performance of qUCCSD for these molecules is similar to that of EOM-CCSD. The mean absolute deviations of the qUCCSD results with respect to the FCI values are slightly larger than those of EOM-CCSD for \ce{H2O} (by \red{0.04} eV) and \ce{N2} (by \red{0.06} eV) and slightly smaller for \ce{HF} (by \red{0.03} eV) and \ce{Ne} (by \red{0.03} eV). \subsection{Excitation energies of \texorpdfstring{\ce{CH2}}{}, \texorpdfstring{\ce{BH}}{}, and \texorpdfstring{\ce{C2}}{}} The computed vertical excitation energies for \ce{CH2} and \ce{BH} are summarized in Tables \ref{t5} and \ref{t6} as examples of simple molecules for which the ADC series do not converge smoothly. \cite{Schirmer2002} Here the mean and maximum absolute deviations of the ADC(3) method with respect to the FCI values are much larger than for the molecules in the previous subsection. UCC3 provides better results, perhaps because of the iterative solution of the ground-state amplitude equations. \cite{Liu2018} \red{The performance of qUCCSD is similar to that of UCC3 for \ce{BH} and \ce{CH2}.} The mean absolute deviations of the qUCCSD excitation energies with respect to the FCI results amount to \red{0.07} eV for \ce{CH2} and \red{0.11} eV for \ce{BH}, to be compared with 0.07 eV and 0.12 eV in the case of UCC3. The mean absolute deviations of qUCCSD are still greater than those of EOM-CCSD, by \red{0.05} eV for \ce{CH2} and by \red{0.06} eV for \ce{BH}. The ground state of the \ce{C2} molecule has a certain degree of biradical character, with the largest $t_2$ amplitude amounting to more than 0.2.
The calculations of excitation energies for \ce{C2} thus serve as a challenging test for the present truncated UCC-based polarization propagator methods. As shown in Table \ref{t7}, the absolute deviations of the qUCCSD vertical excitation energies with respect to the FCI values amount to \red{0.38} eV for the $^1\Pi_u$ state, \red{0.53} eV for the $^1\Sigma_u^+$ state, \red{0.54} eV for the a$^3\Pi_u$ state, and \red{0.65} eV for the c$^3\Sigma_u^+$ state. These are significantly more accurate than the UCC3 values, with errors as large as 0.64 eV, 1.02 eV, 0.74 eV, and 0.88 eV for $^1\Pi_u$, $^1\Sigma_u^+$, a$^3\Pi_u$, and c$^3\Sigma_u^+$, respectively. As expected, the qUCCSD method is still not as accurate as the EOM-CCSD method for the excitation energies of \ce{C2}. On the other hand, the significant improvement of qUCCSD over UCC3 indicates that the commutator truncation scheme offers a promising pathway to robust practical UCC-based methods; the inclusion of triple and higher commutators is expected to further improve the accuracy of the method. \section{Summary and Outlook} We have developed a self-consistent polarization propagator method using a quadratic unitary coupled-cluster singles and doubles (qUCCSD) parameterization for the ground-state wavefunction and the excitation manifold. Benchmark calculations of ground-state properties and excitation energies for representative small molecules show that the qUCCSD scheme, based on a truncated commutator expansion, exhibits a uniform improvement in accuracy and robustness over the previous UCC3 method derived using M{\o}ller-Plesset perturbation theory.
Future work will focus on an implementation of \red{the qUCCSD scheme and its analytic gradients and derivative couplings within tensor contraction engines well developed for the non-relativistic CC machinery} to enable extensive molecular applications, as well as on the development of a cubic UCCSD (cUCCSD) scheme, i.e., the UCCSD[3$\mid$3,2,1] scheme, to further improve the accuracy and robustness. \section{Acknowledgement} This work has been supported by the Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0020317. All computations in this work were carried out using the Maryland Advanced Research Computing Center (MARCC). L. C. is greatly indebted to Debashis Mukherjee (Kolkata), J{\"u}rgen Gauss (Mainz), and John Stanton (Gainesville) for stimulating discussions and support. L. C. is grateful to Gustavo Scuseria (Houston) for helpful discussions about the basis-set effects in calculations for vibrational frequencies of ozone. \section{Data Availability Statement} The data that support the findings of this study are available within the article. \clearpage \begin{center} \begin{threeparttable} \caption{Computed equilibrium bond lengths (in \AA), bond angle (in degrees), and harmonic frequencies (in cm$^{-1}$) of \ce{CuH}, \ce{CuF}, and \ce{O3}. The cc-pVTZ basis sets were used for all the calculations presented here.
The $1s$ electrons of O and $1s$, $2s$, $2p$, $3s$, $3p$ electrons of Cu have been kept frozen in the electron-correlation calculations.} \label{gs} \tabcolsep=6pt \begin{tabular}[t]{@{}ccccccccccccc@{}} \hline\hline \multicolumn{2}{c}{\multirow{2}{*}{method}} &\multicolumn{2}{c}{\ce{CuH}} & &\multicolumn{2}{c}{\ce{CuF}} & &\multicolumn{5}{c}{\ce{O3}} \\ \cline{3-4} \cline{6-7} \cline{9-13} & &$R_{\ce{Cu-H}}$ &$\omega_e$ & &$R_{\ce{Cu-F}}$ &$\omega_e$ & &$R_{\ce{O-O}}$ &$\theta$ &$\omega_{1,e} (a_1)$ &$\omega_{2,e} (a_1)$ &$\omega_{3,e} (b_2)$\\ \hline UCC(4) & &1.4616 &2052 & &1.6998 &646 & &1.3142 &117.1 &560 &876 &1922$i$ \\ UCC3 & &1.4877 &1948 & &1.7367 &739 & &1.2659 &117.9 &674 &1033 &4698 \\ qUCCSD & &1.4891 &1829 & &1.7686 &607 & &1.2488 &117.5 &767 &1279 &1314 \\ CCSD & &1.4888 &1837 & &1.7669 &609 & &1.2499 &117.6 &763 &1278 &1266 \\ \hline\hline \end{tabular} \end{threeparttable} \end{center} \begin{center} \begin{threeparttable} \caption{Computed vertical excitation energies (in eV) of the \ce{H2O} molecule. The UCC3, qUCCSD, ADC(3), and CCSD values are presented as the differences relative to the corresponding FCI values. $\bar{\Delta}_\mathrm{abs}$ and $\Delta_\mathrm{max}$ denote the mean absolute error and maximum absolute error relative to the FCI results, respectively. 
\red{The $1s$ electrons of O have been kept frozen in the electron-correlation calculations.}} \label{t1} \tabcolsep=20pt \begin{tabular}[t]{@{}lrrrrr@{}} \hline\hline State & FCI\tnote{a} &ADC(3)\tnote{b} &UCC3 &qUCCSD &CCSD\tnote{a} \\ \hline 2\ $^1A_1$ & 9.87 &0.14 &0.20 &\red{0.13} &-0.07\\ 1\ $^1B_1$ & 7.45 &0.13 &0.18 &\red{0.14} &-0.07\\ 1\ $^1B_2$ &11.61 &0.18 &0.23 &\red{0.15} &-0.09\\ 1\ $^1A_2$ & 9.21 &0.17 &0.20 &\red{0.15} &-0.09\\ 1\ $^3B_1$ & 7.06 &0.09 &0.14 &\red{0.11} &-0.08\\ 1\ $^3A_2$ & 9.04 &0.14 &0.19 &\red{0.14} &-0.08\\ 1\ $^3A_1$ & 9.44 &0.10 &0.15 &\red{0.10} &-0.08\\ 2\ $^3A_1$ &10.83 &0.01 &0.04 &\red{0.06} &-0.11\\ 2\ $^3B_1$ &11.05 &0.11 &0.14 &\red{0.11} &-0.09\\ 1\ $^3B_2$ &11.32 &0.13 &0.17 &\red{0.11} &-0.08\\ $\bar{\Delta}_{\mathrm{abs}}$ & -- &0.12 &0.16 &\red{0.12} &0.08 \\ $\Delta_{\mathrm{max}}$ & -- &0.18 &0.23 &\red{0.15} &0.11 \\ \hline\hline \end{tabular} \begin{tablenotes} \item[a] Results for the singlet states are from Ref.\citenum{Christiansen1996} and those for the triplet states are from Ref. \citenum{Larsen2001}. \item[b] Ref.\citenum{Schirmer2002} \end{tablenotes} \end{threeparttable} \end{center} \begin{center} \begin{threeparttable} \caption{Computed vertical excitation energies (in eV) of the \ce{HF} molecule. The difference of ADC(3), UCC(3), qUCCSD, and CCSD results relative to the corresponding FCI values are presented. $\bar{\Delta}_\mathrm{abs}$ and $\Delta_\mathrm{max}$ denote the mean absolute error and maximum absolute error relative to the FCI results, respectively. 
\red{The $1s$ electrons of F have been kept frozen in the electron-correlation calculations.}} \label{t2} \tabcolsep=20pt \begin{tabular}[t]{@{}lrrrrr@{}} \hline\hline State &FCI\tnote{a} &ADC(3)\tnote{b} &UCC3 &qUCCSD &CCSD\tnote{a} \\ \hline 1\ $^1\Pi$ &10.44 &0.18 &0.23 &\red{0.15} &-0.14\\ 2\ $^1\Pi$ &14.21 &0.19 &0.23 &\red{0.16} &-0.15\\ 2\ $^1\Sigma^+$ &14.58 &0.10 &0.17 &\red{0.07} &-0.11\\ 1\ $^1\Delta $ &15.20 &0.12 &0.16 &\red{0.12} &-0.17\\ 1\ $^1\Sigma^-$ &15.28 &0.12 &0.15 &\red{0.12} &-0.18\\ 3\ $^1\Pi$ &15.77 &0.23 &0.25 &\red{0.17} &-0.18\\ 3\ $^1\Sigma^+$ &16.43 &0.37 &0.36 &\red{0.24} &-0.14\\ 1\ $^3\Pi$ &10.04 &0.14 &0.20 &\red{0.13} &-0.15\\ 1\ $^3\Sigma^+$ &13.54 &0.05 &0.09 &\red{0.02} &-0.13\\ 2\ $^3\Pi$ &14.01 &0.19 &0.23 &\red{0.16} &-0.16\\ 2\ $^3\Sigma^+$ &14.46 &0.07 &0.11 &\red{0.09} &-0.21\\ 1\ $^3\Delta$ &14.93 &0.10 &0.13 &\red{0.11} &-0.19\\ 1\ $^3\Sigma^-$ &15.25 &0.12 &0.16 &\red{0.12} &-0.18\\ 3\ $^3\Pi$ &15.57 &0.22 &0.25 &\red{0.17} &-0.19\\ $\bar{\Delta}_{\mathrm{abs}}$ & -- &0.16 &0.19 &\red{0.13} &0.16 \\ $\Delta_{\mathrm{max}}$ & -- &0.37 &0.36 &\red{0.24} &0.21 \\ \hline\hline \end{tabular} \begin{tablenotes} \item[a] Ref.\citenum{Larsen2001} \item[b] Ref.\citenum{Schirmer2002} \end{tablenotes} \end{threeparttable} \end{center} \begin{center} \begin{threeparttable} \caption{Computed vertical excitation energies (in eV) of the \ce{N$_2$} molecule. The ADC(3), UCC3, qUCCSD, and CCSD values are presented as the differences relative to the corresponding FCI values. $\bar{\Delta}_\mathrm{abs}$ and $\Delta_\mathrm{max}$ denote the mean absolute error and maximum absolute error relative to the FCI results, respectively. 
\red{The $1s$ electrons of N have been kept frozen in the electron-correlation calculations.}} \label{t3} \tabcolsep=20pt \begin{tabular}[t]{@{}lrrrrr@{}} \hline\hline State &FCI\tnote{a} &ADC(3)\tnote{b} &UCC3 &qUCCSD &CCSD\tnote{a} \\ \hline 1\ $^1\Pi_g$ & 9.58 &-0.17 &-0.08 &\red{-0.06} & 0.08\\ 1\ $^1\Sigma_u^-$ &10.33 &-0.33 &-0.27 &\red{-0.23} & 0.14\\ 1\ $^1\Delta_u$ &10.72 &-0.37 &-0.25 &\red{-0.21} & 0.18\\ 1\ $^1\Pi_u$ &13.61 &-0.23 &-0.25 &\red{-0.29} & 0.40\\ 1\ $^3\Sigma_u^+$ & 7.90 &-0.19 &-0.26 &\red{-0.24} &-0.02\\ 1\ $^3\Pi_g$ & 8.16 &-0.29 &-0.13 &\red{-0.11} & 0.06\\ 1\ $^3\Delta_u$ & 9.19 &-0.27 &-0.27 &\red{-0.24} & 0.07\\ 1\ $^3\Sigma_u^-$ &10.00 &-0.29 &-0.25 &\red{-0.22} & 0.19\\ 1\ $^3\Pi_u$ &11.44 &-0.19 &-0.11 &\red{-0.12} & 0.10\\ $\bar{\Delta}_{\mathrm{abs}}$ & -- &0.26 &0.21 &\red{0.19} & 0.13\\ $\Delta_{\mathrm{max}}$ & -- &0.37 &0.27 &\red{0.29} & 0.40\\ \hline\hline \end{tabular} \begin{tablenotes} \item[a] Results for the singlet states are from Ref.\citenum{Christiansen1996} and those for the triplet states are from Ref. \citenum{Larsen2001}. \item[b] Ref.\citenum{Schirmer2002} \end{tablenotes} \end{threeparttable} \end{center} \begin{center} \begin{threeparttable} \caption{Computed vertical excitation energies (in eV) of the \ce{Ne} atom. The ADC(3), UCC3, qUCCSD, and CCSD values are presented as the differences relative to the corresponding FCI values. $\bar{\Delta}_\mathrm{abs}$ and $\Delta_\mathrm{max}$ denote the mean absolute error and maximum absolute error relative to the FCI results, respectively. 
\red{The $1s$ electrons of Ne have been kept frozen in the electron-correlation calculations.}} \label{t4} \tabcolsep=20pt \begin{tabular}[t]{@{}lrrrrr@{}} \hline\hline State &FCI\tnote{a} &ADC(3)\tnote{b} &UCC3 &qUCCSD &CCSD\tnote{a} \\ \hline 1\ $^1P$ &16.40 &0.17 &0.16 &\red{0.14} &-0.24 \\ 1\ $^1D$ &18.21 &0.18 &0.18 &\red{0.15} &-0.25 \\ 2\ $^1P$ &18.26 &0.18 &0.17 &\red{0.15} &-0.25 \\ 2\ $^1S$ &18.48 &0.27 &0.27 &\red{0.20} &-0.24 \\ 3\ $^1S$ &44.05 &0.35 &0.35 &\red{0.32} &-0.17 \\ 1\ $^3P$ &18.70 &0.13 &0.16 &\red{0.11} &-0.24 \\ 1\ $^3S$ &19.96 &0.10 &0.13 &\red{0.10} &-0.26 \\ 1\ $^3D$ &20.62 &0.13 &0.17 &\red{0.12} &-0.23 \\ 2\ $^3P$ &20.97 &0.13 &0.17 &\red{0.12} &-0.24 \\ 2\ $^3S$ &45.43 &0.40 &0.44 &\red{0.36} &-0.10 \\ $\bar{\Delta}_{\mathrm{abs}}$ & -- &0.20 &0.22 &\red{0.18} &0.22 \\ $\Delta_{\mathrm{max}}$ & -- &0.40 &0.44 &\red{0.36} &0.25 \\ \hline\hline \end{tabular} \begin{tablenotes} \item[a] Results for the singlet states are from Ref.\citenum{Koch1995} and those for the triplet states are from Ref. \citenum{Larsen2001}. \item[b] Ref.\citenum{Schirmer2002} \end{tablenotes} \end{threeparttable} \end{center} \begin{center} \begin{threeparttable} \caption{Computed vertical excitation energies (in eV) of the \ce{CH2} molecule. The ADC(3), UCC3, qUCCSD, and CCSD values are presented as the differences relative to the corresponding FCI values. 
$\bar{\Delta}_\mathrm{abs}$ and $\Delta_\mathrm{max}$ denote the mean absolute error and maximum absolute error relative to the FCI results, respectively.} \label{t5} \tabcolsep=20pt \begin{tabular}[t]{@{}lrrrrrr@{}} \hline\hline State &FCI\tnote{a} &ADC(3)\tnote{b} &UCC3 &qUCCSD &CCSD\tnote{a} \\ \hline 3\ $^1A_1$ & 6.51 &-0.31 &-0.05 &\red{-0.06} &-0.01\\ 4\ $^1A_1$ & 8.48 &-0.29 &-0.04 &\red{-0.04} &-0.02\\ 1\ $^1B_2$ & 7.70 &-0.24 & 0.01 &\red{ 0.01} & 0.01\\ 1\ $^1B_1$ & 1.79 &-0.55 &-0.10 &\red{-0.11} &-0.01\\ 1\ $^1A_2$ & 5.85 &-0.42 &-0.09 &\red{-0.08} & 0.01\\ 1\ $^3A_1$ & 6.39 &-0.31 &-0.06 &\red{-0.06} &-0.01\\ 2\ $^3A_1$ & 8.23 &-0.38 &-0.09 &\red{-0.08} &-0.03\\ 3\ $^3A_1$ & 9.84 &-0.31 &-0.07 &\red{-0.08} & 0.01\\ 2\ $^3B_2$ & 7.70 &-0.31 &-0.06 &\red{-0.06} &-0.06\\ 1\ $^3B_1$ &-0.01 &-0.61 &-0.14 &\red{-0.13} &-0.03\\ 2\ $^3B_1$ & 8.38 &-0.41 &-0.02 &\red{-0.02} & 0.01\\ 1\ $^3A_2$ & 4.79 &-0.44 &-0.10 &\red{-0.10} & 0.00\\ $\bar{\Delta}_{\mathrm{abs}}$ & -- &0.38 &0.07 &\red{0.07} &0.02 \\ $\Delta_{\mathrm{max}}$ & -- &0.61 &0.14 &\red{0.13} &0.06 \\ \hline\hline \end{tabular} \begin{tablenotes} \item[a] Results for the singlet states are from Ref.\citenum{Koch1995} and those for the triplet states are from Ref. \citenum{Hald2001}. \item[b] Ref.\citenum{Schirmer2002} \end{tablenotes} \end{threeparttable} \end{center} \begin{center} \begin{threeparttable} \caption{Computed vertical excitation energies (in eV) of the \ce{BH} molecule. The ADC(3), UCC3, qUCCSD, and CCSD values are presented as the differences relative to the corresponding FCI values. 
$\bar{\Delta}_\mathrm{abs}$ and $\Delta_\mathrm{max}$ denote the mean absolute error and maximum absolute error relative to the FCI results, respectively.} \label{t6} \tabcolsep=20pt \begin{tabular}[t]{@{}lrrrrr@{}} \hline\hline State &FCI\tnote{a} &ADC(3)\tnote{b} &UCC3 &qUCCSD &CCSD\tnote{a} \\ \hline 1\ $^1\Pi$ &2.94 &-0.61 &-0.10 &\red{-0.10} & 0.02\\ 2\ $^1\Sigma^+$ &6.38 &-0.43 &-0.07 &\red{-0.07} & 0.04\\ 2\ $^1\Pi$ &7.47 &-0.51 &-0.14 &\red{-0.11} & 0.04\\ 4\ $^1\Sigma^+$ &7.56 &-0.54 &-0.16 &\red{-0.13} & 0.19\\ 3\ $^1\Pi$ &8.24 &-0.50 &-0.15 &\red{-0.12} & 0.04\\ 1\ $^3\Pi$ &1.31 &-0.62 &-0.11 &\red{-0.10} &-0.01\\ 1\ $^3\Sigma^+$ &6.26 &-0.47 &-0.10 &\red{-0.10} & 0.03\\ 2\ $^3\Sigma^+$ &7.20 &-0.49 &-0.10 &\red{-0.09} & 0.02\\ 2\ $^3\Pi$ &7.43 &-0.51 &-0.14 &\red{-0.13} & 0.00\\ 3\ $^3\Sigma^+$ &7.62 &-0.52 &-0.14 &\red{-0.13} & 0.05\\ 3\ $^3\Pi$ &7.92 &-0.45 &-0.12 &\red{-0.15} & 0.08\\ $\bar{\Delta}_{\mathrm{abs}}$ & -- &0.51 &0.12 &\red{0.11} &0.05\\ $\Delta_{\mathrm{max}}$ & -- &0.62 &0.16 &\red{0.15} &0.19\\ \hline\hline \end{tabular} \begin{tablenotes} \item[a] Results for the singlet states are from Ref.\citenum{Koch1995} and those for the triplet states are from Ref. \citenum{Larsen2001}. \item[b] Ref.\citenum{Schirmer2002} \end{tablenotes} \end{threeparttable} \end{center} \begin{center} \begin{threeparttable} \caption{Computed vertical excitation energies (in eV) of the \ce{C2} molecule. The UCC3, qUCCSD, and CCSD results are given as the difference relative to the corresponding FCI values. 
\red{The $1s$ electrons of C have been kept frozen in the electron-correlation calculation.}} \label{t7} \tabcolsep=20pt \begin{tabular}[t]{@{}lrrrr@{}} \hline\hline State &FCI\tnote{a} &UCC3 &qUCCSD &CCSD\tnote{a}\\ \hline $^1\Pi_u$ &1.39 &-0.64 &\red{-0.38} & 0.09\\ $^1\Sigma_u^+$ &5.60 &-1.02 &\red{-0.53} & 0.20\\ a$^3\Pi_u$ &0.31 &-0.74 &\red{-0.54} &-0.03\\ c$^3\Sigma_u^+$ &1.21 &-0.88 &\red{-0.65} &-0.44\\ \hline\hline \end{tabular} \begin{tablenotes} \item[a] Results for the singlet states are from Ref.\citenum{Christiansen1996} and those for the triplet states are from Ref. \citenum{Larsen2001}. \end{tablenotes} \end{threeparttable} \end{center} \clearpage
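The $\bar{\Delta}_{\mathrm{abs}}$ and $\Delta_{\mathrm{max}}$ rows in the tables above are simple statistics of the tabulated deviations. As a quick sanity check, the qUCCSD column of the \ce{H2O} table can be reproduced as follows (values transcribed from Table \ref{t1}):

```python
# qUCCSD deviations from FCI (in eV) for the ten H2O states in Table t1.
dev = [0.13, 0.14, 0.15, 0.15, 0.11, 0.14, 0.10, 0.06, 0.11, 0.11]

mean_abs = sum(abs(d) for d in dev) / len(dev)  # mean absolute deviation
max_abs = max(abs(d) for d in dev)              # maximum absolute deviation

print(f"{mean_abs:.2f} {max_abs:.2f}")  # 0.12 0.15
```

These match the $\bar{\Delta}_{\mathrm{abs}} = 0.12$ eV and $\Delta_{\mathrm{max}} = 0.15$ eV entries reported for qUCCSD in that table.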
\section{Introduction} \label{1} White dwarfs, also known as degenerate dwarfs, have been of great interest since the early 1900s; the first one was discovered in the triple star system 40 Eridani \citep{sutherland1972white}. They are typically highly dense stars with masses comparable to that of the Sun and diameters 40 to 50 times smaller \citep{burnham2013burnham}. The hot DO white dwarf RE 0503-289, which lies within a degree of the hot DA white dwarf RE 0457-281, was detected in the ROSAT sky survey in 1994 \citep{Barstow}. The emission spectra of these white dwarfs showed an abundance of photospheric lines of trans-iron group elements \citep{werner2012first}. Observations of spectral lines in the atmospheres of these high-gravity stars, e.g. the spectral lines of the RE 0503-289 and G 191-B2B stars \citep{rauch2016stellar, werner2012first}, showed a reasonable abundance of As, Se and Br emitting ultraviolet (UV) spectra. Besides that, the UV spectra obtained from the Far Ultraviolet Spectroscopic Explorer (FUSE), the Goddard High Resolution Spectrograph (GHRS) on the Hubble Space Telescope and the International Ultraviolet Explorer (IUE) for the HD 149499 B and HZ 21 stars, which are cool DO white dwarfs, also showed the presence of As $(Z=33)$, Se $(Z=34)$ and Br $(Z=35)$ spectral lines \citep{chayer2005abundance}. Atomic data are a prerequisite for stellar-atmosphere modeling, and the earlier available spectroscopic data for the above ions are less precise \citep{rauch2016stellar}. The determination of chemical abundances and of energy transport through a star depends upon reliable values of the oscillator strengths and transition probabilities of the spectral lines of the emission stars \citep{martin1992fine}. Accurate knowledge of these quantities is important for analyzing line intensities \citep{ruffoni2014fe} and inferring fundamental parameters such as the mass \textit{M}, radius \textit{R} and luminosity \textit{L} \citep{wittkowski2005fundamental} of stellar objects.
This very fact demands more precise estimates of the spectroscopic properties of the emission lines of the elements present in stellar objects. Such data are also useful in the analysis of interstellar and quasar absorption lines \citep{morton2000atomic}. In addition, the radiative data of the spectral lines of highly charged ions are essential for constructing kinetic models of plasma processes and for investigating processes in thermonuclear reactor plasmas \citep{glushkov1996calculation}. The presence of spectral lines of Cu-like ions motivates a more accurate determination of the radiative properties of these ions for analyzing the chemical abundances and inferring the stellar parameters that are essential for investigating the environmental conditions of the white dwarfs. A considerable amount of relativistic data on oscillator strengths for some selected transitions of As V is already available \citep{cheng1978energy,migdalek1979influence,curtis1989comprehensive,martin1992fine,lavin1994relativistic,engo1997comparison,owono2005core}, whereas only limited data for the transitions in Se VI \citep{cheng1978energy,migdalek1979influence,curtis1989comprehensive,martin1992fine, lavin1994relativistic,engo1997comparison,owono2005core} and Br VII \citep{cheng1978energy,migdalek1979influence,curtis1989comprehensive, lavin1994relativistic,knystautas1977oscillator} are known in the literature. In 1978, Cheng et~al. \citep{cheng1978energy} considered the As V, Se VI and Br VII ions for theoretical studies and reported their electric dipole (E1) transition properties among the low-lying states ($n = 4,5,6$) using the Hartree-Fock method. 
Migdalek and Baylis \citep{migdalek1979influence} studied the role of electron correlations in the oscillator strengths of the $4S_{1/2} \rightarrow 4P_{1/2,3/2}$ and $4P_{1/2,3/2} \rightarrow 4D_{3/2,5/2}$ transitions in the Cu-isoelectronic As, Se and Br ions by considering core-polarization effects in the relativistic theory framework. Victor and Taylor used semi-empirical model-potential methods to calculate absorption oscillator strengths for the $nS \rightarrow n'P$ transitions with $n,n'=4$ to $7$ in the Cu-like systems up to $Z=42$ \citep{tables1983oscillator}. Curtis and Theodosiou performed comprehensive calculations of {excitation energies, ionization potentials, E1 polarizabilities and lifetimes of the $4P$ and $4D$ states, as well as the oscillator strengths for the $4S \rightarrow 4P_{1/2,3/2}, 4P_{1/2,3/2} \rightarrow 4D_{3/2}$ and $4P_{3/2} \rightarrow 4D_{5/2}$ transitions} of the Cu-isoelectronic sequence by combining experimental data with matrix elements obtained using the Hartree-Slater potential to represent the ionic core \citep{curtis1989comprehensive}. Martin et~al. calculated oscillator strengths for the D1 and D2 lines of the As V and Se VI ions by using the Quantum Defect Orbital (QDO) and Relativistic Quantum Defect Orbital (RQDO) techniques \citep{martin1992fine}. Lavin et~al. computed oscillator strengths for the fine-structure transitions, i.e. $4S \rightarrow 4P_{1/2,3/2}, 4P_{1/2,3/2} \rightarrow 4D_{3/2}$ and $4P_{3/2} \rightarrow 5S_{1/2}$, by employing the QDO and RQDO methods for Cu-like systems up to $Z=92$ \citep{lavin1994relativistic}. Sen and Puri evaluated the E1 oscillator strengths corresponding to the Rydberg transitions ($nS \rightarrow nP$) for As V and Se VI using a local density functional including correlation effects \citep{sen1995slater}. Engo et~al. 
calculated oscillator strengths for the Cu-isoelectronic sequences using supersymmetry-inspired quantum-defect methods in their relativistic and quasi-relativistic formulations and compared their data with the literature available at the time \citep{engo1997comparison}. In 2005, Owono et~al. evaluated oscillator strengths of the principal $4S \rightarrow 4P_{1/2,3/2}$ transitions in the As V and Se VI ions by explicitly including core-polarization effects using the Dirac and quasi-relativistic quantum-defect radial wave functions \citep{owono2005core}. However, interest in theoretical investigations of the atomic properties of these ions dates back to 1973, when Sorensen measured atomic lifetimes for the Cu-isoelectronic As and Se ions using the foil-excitation method to study the systematic trends in the f-values for the $4S \rightarrow 4P$ and $4P \rightarrow 4D$ transitions in As V and the $4P \rightarrow 4D$ transitions in Se VI \citep{sorensen1973atomic}. Fischer calculated the wavelength and oscillator strength of the $4S$--$4P$ transition of As V using non-relativistic theory \citep{fischer1977oscillator}. Beam-foil spectroscopy was employed for the determination of radiative lifetimes and multiplet f-values for the $4P$, $4D$ and $5S$ terms of As V \citep{pinnington1981beam} and Se VI \citep{pinnington1981beam,bahr1982beam}. In the case of Br VII, Andersen et~al. derived systematic trends of f-values from lifetime measurements using the foil-excitation technique for the $nS$--$nP$ transitions \citep{andersen1973oscillator}. Knystautas and Drouin obtained oscillator strengths for the $4S_{1/2} \rightarrow 4P_{1/2,3/2}$ and $4P_{3/2} \rightarrow 4D_{5/2}$ transitions using the beam-foil technique \citep{knystautas1977oscillator}. On the basis of the evidence of spectral lines of the Cu-isoelectronic sequence in white dwarfs, it is important to determine the atomic properties of these ions precisely. 
It can be observed from the aforementioned discussion that the atomic properties of As V, Se VI and Br VII were mostly calculated using non-relativistic approaches or lower-order many-body methods such as quantum defect theory. These calculations show large differences among the different results for the investigated spectroscopic properties. Thus, it is necessary to employ more advanced, recently developed methods in the relativistic theory framework to determine the spectroscopic properties of the above highly stripped ions, which are immensely interesting for astrophysical studies. Among the calculations carried out earlier, the most recent data for these ions were obtained from radial wave functions determined using quantum defect orbitals, which included only the core-polarization effects \citep{owono2005core}. These screening effects are considered through an effective Hamiltonian used in the RQDO approach and are more valid in the core region of the ions \citep{charro1999oscillator}, but a precise evaluation of radiative properties requires accurate behavior of the wave functions in the asymptotic region as well. To improve over these calculations, we have implemented here a relativistic all-order (AO) formalism for the computation of the wave functions of the considered ions. Our AO approach accounts for electron correlations among the core and valence electrons on an equal footing; hence it can describe the wave functions properly in both the nuclear and asymptotic regions of an atomic system. It is also based on the fully four-component relativistic formalism, and is thereby adequate to incorporate the relativistic effects and to describe the fine-structure splittings of atomic levels in a natural manner. 
In the present study, we consider all the allowed transitions among the $nS_{1/2}$, $nP_{1/2,3/2}$, $n'D_{3/2,5/2}$ and $n'F_{5/2,7/2}$ states with $n=4$ to $6$ and $n'=4,5$ for the above ions, and present the wavelengths, line strengths, transition probabilities and absorption oscillator strengths of these transitions, as well as the estimated lifetimes of the excited states associated with them. To validate the results, we have also calculated the energies of a few low-lying states of these ions and compared them with the data listed in the National Institute of Standards and Technology Atomic Database (NIST AD) \citep{ralchenko2008nist}. The paper is organized as follows: In Sec. \ref{2}, we provide the theoretical formulae for the different spectroscopic quantities such as the transition probabilities, oscillator strengths and lifetimes of the atomic states, whereas in Sec. \ref{3} we discuss the method of evaluating the atomic wave functions. Sec. \ref{4} discusses all the data acquired in the present work and compares them with the previously reported values, while the findings are summarized in Sec. \ref{5}. All the results are given in atomic units (a.u.) unless stated otherwise. \section{Theoretical Aspects} \label{2} Transitions among different atomic states, when allowed, are generally driven through the E1 channel. 
The transition probability due to this channel ($A^{E1}_{vk}$) for the transition between the lower state $|\psi_{k}\rangle$ and the upper state $|\psi_{v}\rangle$, with corresponding angular momenta $J_{k}$ and $J_{v}$, is given by \citep{kelleher2008atomic} \begin{equation} A_{vk}^{E1}=\frac{2}{3} \alpha c \pi \sigma \times \left(\frac{\alpha \sigma}{R_{\infty}}\right)^2 \frac{S^{E1}}{g_{v}}, \label{eq2} \end{equation} where $\alpha= \frac{e^{2}}{2\epsilon_{0}hc}$ is the fine structure constant, $c$ is the speed of light, $g_v=2J_v+1$ is the degeneracy factor of the upper state, $R_{\infty}$ is the Rydberg constant and $\sigma=E_{v}-E_{k}$ is the transition energy. Here $S^{E1}$ denotes the line strength, calculated from $S^{E1}=|\langle J_{v}||\textbf{D}|| J_{k} \rangle|^{2}$ \citep{nahar1995atomic} with the E1 operator $\textbf{D}=\Sigma_{j} \textbf{d}_{j} =-e\Sigma_{j}\textbf{r}_{j}$, where $\textbf{r}_{j}$ is the position of the $j^{th}$ electron. Substituting the values of the fundamental constants, the transition probability (in $s^{-1}$) is given by \citep{AYMAR1978537,kelleher2008atomic} \begin{equation} A^{E1}_{vk}=\frac{2.02613 \times 10^{18}}{g_{v}\lambda^{3}}S^{E1}, \label{eq3} \end{equation} where $\lambda$ is in $\text{\normalfont\AA}$ and $S^{E1}$ is in a.u. Consequently, the absorption oscillator strength for the corresponding transition is evaluated by using the following formula \citep{AYMAR1978537,kelleher2008atomic} \begin{equation} f_{kv}^{E1}=\frac{1}{3\alpha}\left(\frac{\alpha\sigma}{R_{\infty}} \right) \times \frac{S^{E1}}{g_{k}}=\frac{303.756}{g_{k}\lambda}\times S^{E1}. 
\label{eq10} \end{equation} The radiative lifetime ($\tau$) of a level $v$ can be estimated by taking the reciprocal of the total transition probability, obtained by summing the individual transition probabilities of the transitions from the considered upper electronic state ($v$) to every possible lower electronic state ($k$) \citep{qin2019energy}; i.e. \begin{equation} \tau_{v}=\frac{1}{\Sigma_{k} A^{E1}_{vk}}. \label{eq14} \end{equation} Substituting the $A^{E1}_{vk}$ values from Eq. (\ref{eq3}), $\tau$ is obtained in $s$. \section{Method of Evaluation} \label{3} \subsection{Relativistic AO Method} For the precise evaluation of the transition matrix elements, the electron correlation effects are included to all orders using our AO method, which is based on the relativistic coupled-cluster (RCC) theory framework. The general formulation and potential applications of RCC theory, also referred to as the gold standard of many-body methods, can be found in many earlier studies, including Refs. \citep{blundell1991relativistic, safronova2008all,sahoo2015correlation,sahoo2015theoretical}. We give a brief outline of our employed AO method in the RCC theory framework below. In the (R)CC theory ansatz, the wave function of a many-electron system can be expressed as \citep{vcivzek1969use} \begin{equation} |\psi_0\rangle=e^S|\phi_0\rangle, \label{eqa} \end{equation} where $|\phi_0 \rangle$ is the mean-field wave function of an atomic state and $S$ represents the electron excitation operator acting on the mean-field wave function. We have obtained the mean-field wave function using the Dirac-Fock (DF) method. First, we solve the DF equation for the Ni-like closed-shell configurations of the undertaken ions and then add a valence orbital to obtain the DF wave function of the Cu-like ions, defined as \citep{safronova1999relativistic} \begin{equation} |\phi_{v}\rangle=a_{v}^{\dagger}| \phi_0 \rangle . \label{eqdag} \end{equation} Expanding $e^S$ in Eq. 
(\ref{eqa}), we get \begin{equation} |\psi_v\rangle=(1+S+\frac{S^2}{2}+... )|\phi_v\rangle . \label{eq17} \end{equation} For computational simplicity, we have dropped the non-linear terms and considered only the singly and doubly excited-state configurations (SD method) in our AO approach by expressing \citep{blundell1989relativistic,safronova1998relativistic} \begin{equation} |\psi_v\rangle=(1+S_1+S_2)|\phi_v\rangle . \label{eq18} \end{equation} The excitation operators take into account excitations from both the core and valence orbitals of the DF wave functions of the Cu-like ions, and they are defined using the second-quantized operators as \citep{blundell1989relativistic,safronova1998relativistic,safronova1999relativistic, iskrenova2007high} \begin{equation} S_1=\sum_{ma} \rho_{ma} a^{\dagger}_m a_a + \sum_{m\neq v}\rho_{mv} a^{\dagger}_m a_v, \label{eq19} \end{equation} and \begin{equation} S_2=\frac{1}{2}\sum_{mnab} \rho_{mnab} a^{\dagger}_m a^{\dagger}_n a_b a_a + \sum_{mna}\rho_{mnva} a^{\dagger}_m a^{\dagger}_n a_v a_a , \label{eq20} \end{equation} where the indices $m$ and $n$ range over all possible virtual states, the indices $a$ and $b$ range over all occupied core states and the indices $v$ and $w$ represent valence states of the system. The quantities $\rho_{ma}$ and $\rho_{mv}$ denote the single-excitation coefficients for the core and valence electrons, whereas $\rho_{mnab}$ and $\rho_{mnva}$ denote the corresponding double-excitation coefficients, respectively. 
In addition, we have attempted to improve the results by considering contributions from the dominant triple excitations through a triple-excitation operator constructed in the perturbative approach (SDpT method) as \begin{eqnarray}\label{eq21} S_3^{\text{pert}} &=& \frac{1}{6}\sum_{mnrab}\rho_{mnrvab}a^{\dagger}_{m}a^{\dagger}_{n}a^{\dagger}_{r}a_{b}a_{a}a_{v} \nonumber\\ && +\frac{1}{18}\sum_{mnrabc}\rho_{mnrabc}a^{\dagger}_{m}a^{\dagger}_{n}a^{\dagger}_{r}a_{c}a_{b}a_{a} , \end{eqnarray} where $\rho_{mnrvab}$ and $\rho_{mnrabc}$ are the triple-excitation coefficients. This operator is included in $S$ along with $S_1$ and $S_2$ to obtain the amplitudes of the SD excitation operators, as discussed in Refs. \citep{blundell1991relativistic,safronova1999relativistic,safronova2008all}. After obtaining the wave functions in both the SD and SDpT methods, the matrix element of the E1 operator $D$ between the states $v$ and $w$, with the corresponding wave functions $|\psi_{v}\rangle$ and $|\psi_{w}\rangle$, is evaluated as \citep{iskrenova2007high} \begin{eqnarray} D_{wv}=\frac{\langle\psi_{w}|D|\psi_{v}\rangle}{\sqrt{\langle\psi_{w}|\psi_{w}\rangle \langle\psi_{v}|\psi_{v}\rangle}}. \label{eq16} \end{eqnarray} In the resulting expression of the SD method, the numerator contains a sum of 20 terms incorporating the electron correlation effects \citep{safronova2011blackbody}, apart from the dominantly contributing DF term. \subsection{Gauge Invariance and Evaluation of Uncertainty} \label{gi} To ascertain numerical stability and convergence of the results in the determination of the E1 matrix elements, we have calculated these quantities using both the length (L) and velocity (V) gauge expressions of the E1 operator. Though the results from the exact RCC theory are gauge independent, the results from the two gauge expressions obtained using the approximated SD and SDpT methods can differ. 
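As a concrete numerical check of the conversion formulae in Eqs. (\ref{eq3}), (\ref{eq10}) and (\ref{eq14}), the short sketch below (an illustration written for this discussion, not the code used to generate the tables) recovers the tabulated transition probability, oscillator strength and lifetime of the As V $4P_{1/2}$ level from its line strength and wavelength listed in Table \ref{tab2}, and uses the L--V spread of the line strength as a rough fractional uncertainty.

```python
# Illustrative sketch; inputs are the Table 2 entries for the
# As V 4P_1/2 -> 4S_1/2 transition: length- and velocity-gauge
# line strengths S_L, S_V (a.u.) and wavelength lam (Angstrom).
S_L, S_V = 2.068, 2.019
lam = 1028.835
g_v = 2          # upper level 4P_1/2, g = 2J + 1 = 2
g_k = 2          # lower level 4S_1/2, g = 2J + 1 = 2

# Transition probability (s^-1): A = 2.02613e18 * S / (g_v * lam^3)
A = 2.02613e18 * S_L / (g_v * lam**3)

# Absorption oscillator strength: f = 303.756 * S / (g_k * lam)
f = 303.756 * S_L / (g_k * lam)

# Lifetime: 4S_1/2 is the only lower level of 4P_1/2, so tau = 1/A
tau = 1.0 / A

# Fractional L-V gauge spread, propagated to A as a rough error bar
dA = A * abs(S_L - S_V) / S_L

print(f"A = {A:.4e} s^-1, f = {f:.4f}, tau = {tau:.3e} s, dA = {dA:.1e} s^-1")
```

The computed values reproduce the first row of Table \ref{tab2} ($A \simeq 1.924\times10^{9}$ s$^{-1}$, $f \simeq 0.3053$); note that the gauge spread alone need not equal the uncertainties quoted in the tables.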
Also, it is important to use the four-component Dirac orbitals for accurate calculations; for an orbital $v$ with angular quantum numbers $j$ and $m_j$, it is given as \citep{sakurai2006advanced} \begin{equation} |j_{v}m_{j_{v}} \rangle=\frac{1}{r} \left(\begin{matrix} iG_v (r)| \chi_{\kappa_{v},m_{j_v}}(\theta,\phi) \rangle \\ F_v (r)|\chi_{-\kappa_{v},m_{j_v}}(\theta,\phi) \rangle \\ \end{matrix}\right), \end{equation} where $G_v (r)$ and $F_v (r)$ are the large and small components of the Dirac wave function, respectively. Here, $|\chi_{\kappa_{v},m_{j_v}} (\theta,\phi)\rangle $ is the angular function given by \citep{sakurai2006advanced} \begin{equation} \chi_{\kappa,m}(\theta,\phi)=\sum_{\sigma=\pm \frac{1}{2}} \langle lm-\sigma \frac{1}{2} \sigma|l \frac{1}{2}jm \rangle Y_{l}^{m-\sigma} (\theta,\phi) \phi_{\sigma}, \end{equation} where $\langle lm-\sigma \frac{1}{2} \sigma|l \frac{1}{2}jm \rangle$ are Clebsch-Gordan coefficients, $Y_{l}^{m-\sigma}$ are normalized spherical harmonics and $\phi_\sigma$ are two-component spinors with the values $\phi_{\frac{1}{2}}=\left(\begin{matrix} 1\\ 0\\ \end{matrix} \right)$ and $\phi_{\frac{-1}{2}}=\left( \begin{matrix} 0\\ 1\\ \end{matrix} \right).$ On the basis of this single-particle orbital wave function, the reduced E1 matrix element between $|j_{v} m_{j_{v}} \rangle$ and $|j_{w} m_{j_{w}} \rangle$ is given by \citep{johnson2006lectures} \begin{eqnarray} &&\langle j_{v}||d_l||j_{w} \rangle=\frac{3}{k} \langle \kappa_{v} ||C_{1}||\kappa_{w} \rangle \nonumber \\ &&\times \int_{0}^{\infty} dr \left\{j_1(kr)[G_{v}(r)G_{w}(r)+F_{v}(r)F_{w}(r)]\right. \nonumber\\ & & \left. + j_{2}(kr) \left[ \frac{\kappa_{v}-\kappa_{w}}{2}[G_{v}(r)F_{w}(r)+F_{v}(r)G_{w}(r)]\right. \right. \nonumber\\ && \left. \left. 
+[G_{v}(r)F_{w}(r)-F_{v}(r)G_{w}(r)]\right]\right\}, \nonumber\\ & & \end{eqnarray} in the L-gauge, and \begin{eqnarray} && \langle j_v||d_v||j_w \rangle=\frac{3}{k} \langle \kappa_{v} ||C_{1}||\kappa_{w} \rangle \nonumber \\ &&\times \int_{0}^{\infty} dr \left\{-\frac{\kappa_v - \kappa_w}{2}\left[\frac{dj_1(kr)}{dkr}+\frac{j_1(kr)}{kr}\right] \right. \nonumber\\ & & \left. \times [G_{v}(r)F_{w}(r)+F_{v}(r)G_{w}(r)]\right. \nonumber\\ && \left. +\frac{j_1(kr)}{kr}[G_{v}(r)F_{w}(r)-F_{v}(r)G_{w}(r)]\right\}, \nonumber\\ & & \end{eqnarray} in the V-gauge. Here, $k=\omega\alpha$, where $\omega$ is the energy of the photon emitted in the transition, $C_n$ is the normalized spherical harmonic of rank $n$ and $j_l(x)$ is the spherical Bessel function of order $l$, which in the long-wavelength limit ($x \ll 1$) reduces to \begin{equation} j_l(x)=\frac{x^l}{1\cdot{3}\cdot{5}\cdots(2l+1)}. \end{equation} The differences between the results from the two gauge forms can be used as the maximum uncertainties of our calculated E1 matrix elements. Since the calculations using the L-gauge expression converge faster with respect to the number of configurations, we believe that these results are more reliable. Therefore, we consider them as the central values for our further computations. We also analyze the differences between the results from the SD and SDpT methods to verify the contributions from the neglected higher-level excitations. \onecolumn \begin{table} \caption{\label{tab1} Our calculated energy values from the SD method ($E^{SD}$), in cm$^{-1}$, of a few low-lying states of the As V, Se VI and Br VII ions are listed. They are compared with the literature values, and the percentage (\%) deviations ($\delta$) of our calculated values from the experimental values of the NIST AD \citep{ralchenko2008nist} are also given. 
} \begin{center} \begin{tabular}{@{\extracolsep\fill}cccccccccc@{}} \hline State & \multicolumn{3}{c}{As V} & \multicolumn{3}{c}{Se VI} & \multicolumn{3}{c}{Br VII}\\ \cline{2-4} \cline{5-7} \cline{8-10} & $E^{SD}$ & Experiment & $\delta$ & $E^{SD}$ & Experiment & $\delta$ & $E^{SD}$ & Experiment & $\delta$ \\ \hline $4S_{1/2}$ & $506608$ & $506250$ & $0.07$ & $660552$ & $659980$ & $0.10$ & $830186$ & $831000$ & $0.10$\\ $4P_{1/2}$ & $409410$ & $409115$ & $0.07$ & $547699$ & $547215$ & $0.09$ & $701807$ & $702726$ & $0.13$ \\ $4P_{3/2}$ & $405224$ & $405005$ & $0.05$ & $541898$ & $541518$ & $0.07$ & $694093$ & $695146$ & $0.15$\\ $4D_{3/2}$ & $269505$ & $269353$ & $0.06$ & $377425$ & $377150$ & $0.07$ & $501283$ & $503795$ & $0.50$\\ $4D_{5/2}$ & $269034$ & $268908$ & $0.05$ & $376708$ & $376471$ & $0.06$ & $500261$ & $502934$ & $0.53$\\ $5S_{1/2}$ & $242909$ & $242654$ & $0.10$ & $326788$ & $326386$ & $0.12$ & $420835$ & &\\ $4F_{5/2}$ & $174422$ & $174263$ & $0.09$ & $252169$ & & & $344617$ & &\\ $4F_{7/2}$ & $174433$ & $174175$ & $0.15$ & $252187$ & & & $344642$ & &\\ \hline \end{tabular} \end{center} \end{table} \begin{longtable}{@{\extracolsep\fill}ccrrrrrr@{}} \caption{\label{tab2} The line strengths ($S_{vk}$) (in a.u.) calculated using both the L- and V-gauge expressions, wavelengths ($\lambda$) (in $\text{\normalfont\AA}$), transition probabilities ($A_{L_{vk}}$) (in $s^{-1}$) and absorption oscillator strengths ($f_{L_{kv}}$) for the As V ion through the E1 decay channel are presented in this table. Values in square brackets denote powers of 10. 
Uncertainties are given in parentheses.}\\ \hline Upper State(v) & Lower State(k) & $\lambda$ (in $\text{\normalfont\AA}$) & \multicolumn{2}{c}{$S_{vk}$ (in a.u.)} & $A_{L_{vk}}$(in $s^{-1}$) & $f_{L_{kv}}$\\ [0.5ex] \cline{4-5} \\ & & & L & V & & \\ \hline $4P_{1/2}$ & $4S_{1/2}$ & $1028.835$ & $2.068[0]$ & $2.019[0]$ & $1.924(73)[9]$ & $3.053(78)[-1]$\\ $4P_{3/2}$ & $4S_{1/2}$ & $986.356$ & $4.153[0]$ & $4.040[0]$ & $2.192(89)[9]$ & $6.40(19)[-1]$\\ $4D_{3/2}$ & $4P_{1/2}$ & $714.768$ & $5.208[0]$ & $5.031[0]$ & $7.22(33)[9]$ & $1.107(39)[0]$\\ $4D_{3/2}$ & $4P_{3/2}$ & $736.813$ & $1.069[0]$ & $1.034[0]$ & $1.354(60)[9]$ & $1.102(38)[-1]$\\ {$4D_{5/2}$} & $4P_{3/2}$ & $734.264$ & $9.610[0]$ & $9.296[0]$ & $8.20(37)[9]$ & $9.94(34)[-1]$\\ $4F_{5/2}$ & $4D_{3/2}$ & $1051.720$ & $1.663[1]$ & $1.687[1]$ & $4.83(16)[9]$ & $1.201(21)[0]$\\ {$4F_{5/2}$} & $4D_{5/2}$ & $1056.957$ & $1.195[0]$ & $1.210[0]$ & {$3.42(11)[8]$} & {$5.722(93)[-2]$}\\ {$4F_{7/2}$} & $4D_{5/2}$ & $1057.074$ & $2.387[1]$ & $2.492[1]$ & $5.12(27)[9]$ & $1.143(51)[0]$\\ $5S_{1/2}$ & $4P_{1/2}$ & $600.595$ & $5.285[-1]$ & $5.098[-1]$ & $2.47(12)[9]$ & $1.337(50)[-1]$\\ $5S_{1/2}$ & $4P_{3/2}$ & $616.084$ & $1.153[0]$ & $1.109[0]$ & $5.00(25)[9]$ & $1.422(57)[-1]$\\ $5P_{1/2}$ & $4S_{1/2}$ & $335.378$ & $1.613[-2]$ & $1.588[-2]$ & $4.33(15)[8]$ & $7.30(14)[-3]$\\ $5P_{1/2}$ & $4D_{3/2}$ & $1637.506$ & $4.215[0]$ & $3.980[0]$ & $9.72(62)[8]$ & $1.96(11)[-1]$\\ $5P_{1/2}$ & $5S_{1/2}$ & $2900.867$ & $8.197[0]$ & $8.077[0]$ & $3.40(11)[8]$ & $4.292(76)[-1]$\\ $5P_{3/2}$ & $4S_{1/2}$ & $333.726$ & $2.280[-2]$ & $2.280[-2]$ & $3.107(93)[8]$ & $1.038(10)[-2]$\\ $5P_{3/2}$ & $4D_{3/2}$ & $1598.867$ & $8.136[-1]$ & $7.691[-1]$ & $1.008(64)[8]$ & $3.86(22)[-2]$\\ $5P_{3/2}$ & $4D_{5/2}$ & $1611.003$ & $7.420[0]$ & $7.012[0]$ & $8.99(57)[8]$ & $2.33(13)[-1]$\\ $5P_{3/2}$ & $5S_{1/2}$ & $2781.775$ & $1.634[1]$ & $1.608[1]$ & $3.84(13)[8]$ & $8.92(17)[-1]$\\ $5D_{3/2}$ & $4P_{1/2}$ & $442.136$ & $5.242[-1]$ & 
$1.225[-1]$ & $3.1(3.2)[9]$ & $1.80(1.9)[-1]$\\ $5D_{3/2}$ & $4P_{3/2}$ & $450.474$ & $1.156[-1]$ & $2.856[-2]$ & $6.4(6.4)[8]$ & $1.9(2.0)[-2]$\\ $5D_{3/2}$ & $4F_{5/2}$ & $11346.415$ & $1.990[1]$ & $1.061[1]$ & {$7(19)[6]$} & $9(24)[-2]$\\ $5D_{3/2}$ & $5P_{1/2}$ & $3968.171$ & $1.473[1]$ & $2.304[1]$ & $1.19(60)[8]$ & $5.6(2.8)[-1]$\\ $5D_{3/2}$ & $5P_{3/2}$ & $4215.014$ & $2.972[0]$ & $4.818[0]$ & $2.0(1.1)[7]$ & $5.4(2.9)[-2]$\\ $5D_{5/2}$ & $4P_{3/2}$ & $463.532$ & $1.311[0]$ & $3.832[-1]$ & $4.4(4.1)[9]$ & $2.1(2.0)[-1]$\\ $5D_{5/2}$ & $4F_{5/2}$ & $6636.920$ & $1.173[0]$ & $4.162[0]$ & $1.4(2.4)[6]$ & {$8.95(16)[-3]$}\\ $5D_{5/2}$ & $4F_{7/2}$ & $6641.526$ & $2.345[1]$ & $8.352[1]$ & $2.7(4.8)[7]$ & {$1.3(2.4)[-1]$}\\ $5D_{5/2}$ & $5P_{3/2}$ & $5723.824$ & $2.027[1]$ & $4.136[1]$ & $3.7(3.1)[7]$ & $2.7(2.3)[-1]$\\ $5F_{5/2}$ & $4D_{3/2}$ & $633.899$ & $3.956[-1]$ & $4.199[-1]$ & $5.25(35)[8]$ & $4.74(29)[-2]$\\ $5F_{5/2}$ & $4D_{5/2}$ & $635.797$ & $2.756[-2]$ & $2.936[-2]$ & $3.62(24)[7]$ & $2.19(13)[-3]$\\ $5F_{5/2}$ & $5D_{3/2}$ & $1398.897$ & $9.193[0]$ & $6.145[0]$ & $1.13(42)[9]$ & $5.0(1.8)[-1]$\\ $5F_{5/2}$ & $5D_{5/2}$ & $1286.359$ & $3.636[-1]$ & $2.190[-1]$ & $5.8(2.6)[7]$ & $1.43(64)[-2]$\\ {$5F_{7/2}$} & $4D_{5/2}$ & $635.780$ & $5.535[-1]$ & $5.868[-1]$ & $5.46(36)[8]$ & $4.41(26)[-2]$\\ $5F_{7/2}$ & $5D_{5/2}$ & $1286.285$ & $7.285[0]$ & $4.381[0]$ & $8.7(3.9)[8]$ & $2.9(1.3)[-1]$\\ $6S_{1/2}$ & $4P_{1/2}$ & $376.150$ & $5.244[-2]$ & $4.973[-2]$ & $9.98(60)[8]$ & $2.12(11)[-2]$\\ $6S_{1/2}$ & $4P_{3/2}$ & $382.168$ & $1.102[-1]$ & $1.043[-1]$ & $2.00(12)[9]$ & $2.19(12)[-2]$\\ $6S_{1/2}$ & $5P_{1/2}$ & $1541.372$ & $2.316[0]$ & $2.283[0]$ & $6.41(21)[8]$ & $2.283(40)[-1]$\\ $6S_{1/2}$ & $5P_{3/2}$ & $1577.251$ & $4.990[0]$ & $4.915[0]$ & $1.289(43)[9]$ & $2.403(44)[-1]$\\ $6P_{1/2}$ & $4S_{1/2}$ & $263.658$ & $6.241[-3]$ & $6.084[-3]$ & {$3.45(14)[8]$} & $3.595(98)[-3]$\\ $6P_{1/2}$ & $4D_{3/2}$ & $703.352$ & $1.069[-1]$ & $9.610[-2]$ & 
$3.11(34)[8]$ & $1.15(12)[-2]$\\ $6P_{1/2}$ & $5S_{1/2}$ & $865.200$ & $2.890[-2]$ & $2.993[-2]$ & $4.52(21)[7]$ & $5.07(19)[-3]$\\ $6P_{1/2}$ & $5D_{3/2}$ & $1788.677$ & $2.274[0]$ & $2.993[0]$ & $4.0(1.2)[8]$ & $9.7(2.8)[-2]$\\ $6P_{1/2}$ & $6S_{1/2}$ & $6161.241$ & $2.231[1]$ & $2.209[1]$ & $9.66(30)[7]$ & $5.499(77)[-1]$\\ $6P_{3/2}$ & $4S_{1/2}$ & $263.170$ & $9.801[-3]$ & $9.409[-3]$ & $2.72(14)[8]$ & $5.66(24)[-3]$\\ $6P_{3/2}$ & $4D_{3/2}$ & $699.886$ & $2.220[-2]$ & $2.016[-2]$ & $3.28(32)[7]$ & $2.41(23)[-3]$\\ {$6P_{3/2}$} & $4D_{5/2}$ & $702.202$ & $1.998[-1]$ & $1.798[-1]$ & $2.92(31)[8]$ & {$1.44(15)[-2]$}\\ $6P_{3/2}$ & $5S_{1/2}$ & $859.961$ & $3.648[-2]$ & $3.803[-2]$ & $2.91(15)[7]$ & $6.44(28)[-3]$\\ $6P_{3/2}$ & $5D_{3/2}$ & $1766.430$ & $4.225[-1]$ & $4.316[-1]$ & $3.9(1.2)[7]$ & $1.82(57)[-2]$\\ $6P_{3/2}$ & $5D_{5/2}$ & $1590.705$ & $1.858[0]$ & $2.699[0]$ & $2.34(96)[8]$ & {$5.9(2.4)[-2]$}\\ $6P_{3/2}$ & $6S_{1/2}$ & $5905.068$ & $4.433[1]$ & $4.436[1]$ & $1.090(33)[8]$ & $1.140(11)[0]$\\ \hline \end{longtable} \begin{longtable}{@{\extracolsep\fill}ccrrrrrr@{}} \caption{\label{tab3} The line strengths ($S_{vk}$) (in a.u.) calculated using both the L- and V-gauge expressions, wavelengths ($\lambda$) (in $\text{\normalfont\AA}$), transition probabilities ($A_{L_{vk}}$) (in $s^{-1}$) and absorption oscillator strengths ($f_{L_{kv}}$) for the Se VI ion through the E1 decay channel are presented in this table. Values in square brackets denote powers of 10. Uncertainties are given in parentheses. 
}\\ \hline Upper State(v) & Lower State(k) & $\lambda$ (in $\text{\normalfont\AA}$) & \multicolumn{2}{c}{$S_{vk}$ (in a.u.)} & $A_{L_{vk}}$(in $s^{-1}$) & $f_{L_{kv}}$\\ [0.5ex] & & & L & V & & \\ \hline $4P_{1/2}$ & $4S_{1/2}$ & $886.111$ & $1.484[0]$ & $1.486[0]$ & $2.160(65)[9]$ & $2.543(26)[-1]$\\ $4P_{3/2}$ & $4S_{1/2}$ & $842.783$ & $2.983[0]$ & $2.989[0]$ & $2.524(76)[9]$ & $5.375(55)[-1]$\\ $4D_{3/2}$ & $4P_{1/2}$ & $587.287$ & $3.806[0]$ & $3.810[0]$ & $9.52(29)[9]$ & $9.844(99)[-1]$\\ $4D_{3/2}$ & $4P_{3/2}$ & $608.003$ & $7.815[-1]$ & $7.832[-1]$ & $1.761(53)[9]$ & $9.76(10)[-2]$\\ $4D_{5/2}$ & $4P_{3/2}$ & $605.366$ & $7.033[0]$ & $7.038[0]$ & $1.071(32)[10]$ & $8.823(88)[-1]$\\ $4F_{5/2}$ & $4D_{3/2}$ & $798.368$ & $1.178[1]$ & $1.179[1]$ & {$7.82(23)[9]$} & $1.120(11)[0]$\\ $4F_{5/2}$ & $4D_{5/2}$ & $802.961$ & $8.446[-1]$ & $8.464[-1]$ & $5.51(17)[8]$ & $5.235(54)[-2]$\\ $4F_{7/2}$ & $4D_{5/2}$ & $803.074$ & $1.691[1]$ & $1.692[1]$ & $8.27(25)[9]$ & $1.066(11)[0]$\\ $5S_{1/2}$ & $4P_{1/2}$ & $452.671$ & $4.007[-1]$ & $3.994[-1]$ & $4.38(13)[9]$ & $1.344(14)[-1]$\\ $5S_{1/2}$ & $4P_{3/2}$ & $464.880$ & $8.780[-1]$ & $8.761[-1]$ & $8.85(27)[9]$ & $1.434(15)[-1]$\\ $5P_{1/2}$ & $4S_{1/2}$ & $266.267$ & $2.924[-2]$ & $2.890[-2]$ & $1.569(51)[9]$ & $1.668(26)[-2]$\\ $5P_{1/2}$ & $4D_{3/2}$ & $1081.842$ & $2.746[0]$ & $2.739[0]$ & $2.197(66)[9]$ & $1.927(20)[-1]$\\ $5P_{1/2}$ & $5S_{1/2}$ & $2392.438$ & $6.295[0]$ & $6.295[0]$ & $4.66(14)[8]$ & $3.996(40)[-1]$\\ $5P_{3/2}$ & $4S_{1/2}$ & $264.773$ & $4.622[-2]$ & $4.580[-2]$ & $1.261(40)[9]$ & $2.652(36)[-2]$\\ $5P_{3/2}$ & $4D_{3/2}$ & $1057.594$ & $5.271[-1]$ & $5.256[-1]$ & $2.257(68)[8]$ & $3.785(39)[-2]$\\ $5P_{3/2}$ & $4D_{5/2}$ & $1065.669$ & $4.814[0]$ & $4.800[0]$ & $2.015(61)[9]$ & $2.287(24)[-1]$\\ $5P_{3/2}$ & $5S_{1/2}$ & $2276.988$ & $1.257[1]$ & $1.256[1]$ & $5.39(16)[8]$ & $8.382(84)[-1]$\\ $5D_{3/2}$ & $4P_{1/2}$ & $314.540$ & $1.369[-3]$ & $1.296[-3]$ & $2.23(14)[7]$ & $6.61(36)[-4]$\\ 
$5D_{3/2}$ & $4P_{3/2}$ & $320.387$ & $2.250[-4]$ & $2.250[-4]$ & $3.47(10)[6]$ & $5.333(53)[-5]$\\ $5D_{3/2}$ & $4F_{5/2}$ & $4465.359$ & $7.344[-2]$ & $9.120[-2]$ & $4.18(96)[5]$ & {$8.3(1.9)[-4]$}\\ $5D_{3/2}$ & $5P_{1/2}$ & $1811.094$ & $1.232[-1]$ & $1.225[-1]$ & $1.051(32)[7]$ & $1.033(12)[-2]$\\ $5D_{3/2}$ & $5P_{3/2}$ & $1883.383$ & $2.528[-2]$ & $2.528[-2]$ & $1.917(58)[6]$ & $1.019(10)[-3]$\\ $5D_{5/2}$ & $4P_{3/2}$ & $328.381$ & $1.521[-3]$ & $1.369[-3]$ & $1.45(16)[7]$ & $3.52(36)[-4]$\\ $5D_{5/2}$ & $4F_{5/2}$ & $6758.432$ & $2.209[-3]$ & $3.481[-3]$ & $2.4(1.2)[3]$ & {$1.66(85)[-5]$}\\ $5D_{5/2}$ & $4F_{7/2}$ & $6750.409$ & $4.410[-2]$ & $7.182[-2]$ & $4.8(2.7)[4]$ & {$2.5(1.4)[-4]$}\\ $5D_{5/2}$ & $5P_{3/2}$ & $2197.916$ & $1.043[-1]$ & $1.030[-1]$ & $3.32(11)[6]$ & $3.605(57)[-3]$\\ $5F_{5/2}$ & $4D_{3/2}$ & $463.324$ & $1.076[-1]$ & $1.082[-1]$ & $3.65(11)[8]$ & $1.763(21)[-2]$\\ $5F_{5/2}$ & $4D_{5/2}$ & $464.867$ & $7.396[-3]$ & $7.396[-3]$ & $2.486(75)[7]$ & $8.055(81)[-4]$\\ $5F_{5/2}$ & $5D_{3/2}$ & $1466.674$ & $2.352[-1]$ & $2.480[-1]$ & $2.52(15)[7]$ & $1.218(66)[-2]$\\ $5F_{5/2}$ & $5D_{5/2}$ & $1319.613$ & $7.225[-3]$ & $7.744[-3]$ & $1.062(81)[6]$ & {$2.77(20)[-4]$}\\ $5F_{7/2}$ & $4D_{5/2}$ & $464.886$ & $1.459[-1]$ & $1.467[-1]$ & $3.68(11)[8]$ & $1.589(18)[-2]$\\ $5F_{7/2}$ & $5D_{5/2}$ & $1319.765$ & $1.391[-1]$ & $1.498[-1]$ & $1.53(12)[7]$ & {$5.34(40)[-3]$}\\ $6S_{1/2}$ & $4P_{1/2}$ & $284.481$ & $4.244[-2]$ & $4.203[-2]$ & $1.867(59)[9]$ & $2.266(32)[-2]$\\ $6S_{1/2}$ & $4P_{3/2}$ & $289.255$ & $8.940[-2]$ & $8.880[-2]$ & $3.74(12)[9]$ & $2.347(28)[-2]$\\ $6S_{1/2}$ & $5P_{1/2}$ & $1126.022$ & $1.638[0]$ & $1.636[0]$ & $1.163(35)[9]$ & $2.210(22)[-1]$\\ $6S_{1/2}$ & $5P_{3/2}$ & $1153.550$ & $3.557[0]$ & $3.553[0]$ & $2.348(70)[9]$ & $2.342(24)[-1]$\\ $6P_{1/2}$ & $4S_{1/2}$ & $206.419$ & $1.082[-2]$ & $1.061[-2]$ & $1.246(44)[9]$ & $7.96(17)[-3]$\\ $6P_{1/2}$ & $4D_{3/2}$ & $496.711$ & $9.303[-2]$ & $9.303[-2]$ & $7.69(23)[8]$ & 
$1.422(14)[-2]$\\ $6P_{1/2}$ & $5S_{1/2}$ & $663.624$ & $4.666[-2]$ & $4.623[-2]$ & $1.617(51)[8]$ & $1.068(15)[-2]$\\ $6P_{1/2}$ & $5D_{3/2}$ & $1863.092$ & $4.080[-2]$ & $4.121[-2]$ & $6.39(20)[6]$ & $1.663(23)[-3]$\\ $6P_{1/2}$ & $6S_{1/2}$ & $4979.779$ & $1.711[1]$ & $1.709[1]$ & $1.403(42)[8]$ & $5.217(52)[-1]$\\ $6P_{3/2}$ & $4S_{1/2}$ & $205.979$ & $1.823[-2]$ & $1.823[-2]$ & $1.056(32)[9]$ & $1.344(13)[-2]$\\ $6P_{3/2}$ & $4D_{3/2}$ & $494.170$ & $1.932[-2]$ & $1.904[-2]$ & $8.11(27)[7]$ & $2.969(52)[-3]$\\ $6P_{3/2}$ & $4D_{5/2}$ & $495.926$ & $1.722[-1]$ & $1.722[-1]$ & $7.15(21)[8]$ & {$1.758(18)[-2]$}\\ $6P_{3/2}$ & $5S_{1/2}$ & $659.096$ & $6.554[-2]$ & $6.554[-2]$ & $1.159(35)[8]$ & $1.510(15)[-2]$\\ $6P_{3/2}$ & $5D_{3/2}$ & $1827.838$ & $7.569[-3]$ & $8.100[-3]$ & $6.28(19)[5]$ & $3.145(31)[-4]$\\ $6P_{3/2}$ & $5D_{5/2}$ & $1604.937$ & $1.664[-2]$ & $1.690[-2]$ & $2.039(69)[6]$ & {$5.249(97)[-4]$}\\ $6P_{3/2}$ & $6S_{1/2}$ & $4735.642$ & $3.402[1]$ & $3.394[1]$ & $1.623(49)[8]$ & $1.091(11)[0]$\\ \hline\\ \end{longtable} \begin{longtable}{@{\extracolsep\fill}ccrrrrrr@{}} \caption{\label{tab4} The line strengths ($S_{vk}$) (in a.u.) calculated using both the L- and V-gauge expressions, wavelengths ($\lambda$) (in $\text{\normalfont\AA}$), transition probabilities ($A_{L_{vk}}$) (in $s^{-1}$) and absorption oscillator strengths ($f_{L_{kv}}$) for the Br VII ion through the E1 decay channel are presented in this table. Values in square brackets denote powers of 10. 
Uncertainties are given in parentheses.}\\ \hline Upper State(v) & Lower State(k) & $\lambda$ (in $\text{\normalfont\AA}$) & \multicolumn{2}{c}{$S_{vk}$ (in a.u.)} & $A_{L_{vk}}$(in $s^{-1}$) & $f_{L_{kv}}$\\ [0.5ex] & & & L & V & & \\ \hline $4P_{1/2}$ & $4S_{1/2}$ & $778.944$ & $1.272[0]$ & $1.275[0]$ & {$2.727(82)[9]$} & $2.481(25)[-1]$\\ $4P_{3/2}$ & $4S_{1/2}$ & $734.789$ & $2.560[0]$ & $2.563[0]$ & {$3.269(98)[9]$} & $5.291(53)[-1]$\\ $4D_{3/2}$ & $4P_{1/2}$ & $498.692$ & $3.183[0]$ & $3.183[0]$ & $1.300(39)[10]$ & $9.693(97)[-1]$\\ $4D_{3/2}$ & $4P_{3/2}$ & $518.646$ & $6.529[-1]$ & $6.545[-1]$ & {$2.370(71)[9]$} & $9.559(98)[-2]$\\ $4D_{5/2}$ & $4P_{3/2}$ & $515.912$ & $5.876[0]$ & $5.881[0]$ & $1.445(43)[10]$ & $8.649(87)[-1]$\\ $4F_{5/2}$ & $4D_{3/2}$ & $638.301$ & $9.145[0]$ & $9.151[0]$ & $1.187(36)[10]$ & $1.088(11)[0]$\\ $4F_{5/2}$ & $4D_{5/2}$ & $642.491$ & $6.561[-1]$ & $6.651[-1]$ & $8.35(25)[8]$ & $5.170(52)[-2]$\\ $4F_{7/2}$ & $4D_{5/2}$ & $642.593$ & $1.313[1]$ & $1.314[1]$ & $1.254(38)[10]$ & {$1.035(10)[0]$}\\ $5S_{1/2}$ & $4P_{1/2}$ & $355.906$ & $3.080[-1]$ & $3.069[-1]$ & $6.92(21)[9]$ & $1.314(14)[-1]$\\ $5S_{1/2}$ & $4P_{3/2}$ & $365.954$ & $6.806[-1]$ & $6.773[-1]$ & $1.407(43)[10]$ & $1.412(16 )[-1]$\\ $5P_{1/2}$ & $4S_{1/2}$ & $218.080$ & $3.028[-2]$ & $2.993[-2]$ & $2.957(95)[9]$ & $2.109(32)[-2]$\\ $5P_{1/2}$ & $4D_{3/2}$ & $771.345$ & $1.904[0]$ & $1.899[0]$ & $4.20(13)[9]$ & $1.875(20)[-1]$\\ $5P_{1/2}$ & $5S_{1/2}$ & $2032.719$ & $5.157[0]$ & $5.157[0]$ & $6.22(19)[8]$ & $3.853(39)[-1]$\\ $5P_{3/2}$ & $4S_{1/2}$ & $216.709$ & $4.796[-2]$ & $4.752[-2]$ & $2.387(75)[9]$ & $3.361(46)[-2]$\\ $5P_{3/2}$ & $4D_{3/2}$ & $754.465$ & $3.624[-1]$ & $3.612[-1]$ & $4.27(13)[8]$ & $3.648(38)[-2]$\\ $5P_{3/2}$ & $4D_{5/2}$ & $760.327$ & $3.316[0]$ & $3.309[0]$ & $3.82(12)[9]$ & {$2.208(23)[-1]$}\\ $5P_{3/2}$ & $5S_{1/2}$ & $1919.541$ & $1.030[1]$ & $1.030[1]$ & $7.38(22)[8]$ & $8.153(82)[-1]$\\ $5D_{3/2}$ & $4P_{1/2}$ & $243.347$ & $8.410[-4]$ 
& $8.410[-4]$ & $2.956(9)[7]$ & $5.249(52)[-4]$\\ $5D_{3/2}$ & $4P_{3/2}$ & $248.002$ & $4.000[-6]$ & $4.000[-6]$ & $1.328(40)[5]$ & $1.225(12)[-6]$\\ $5D_{3/2}$ & $4F_{5/2}$ & $1860.601$ & $6.111[0]$ & $6.101[0]$ & $4.81(14)[8]$ & $1.663(69)[-1]$\\ $5D_{3/2}$ & $5P_{1/2}$ & $1238.108$ & $1.116[1]$ & $1.116[1]$ & $2.979(89)[9]$ & $1.369(14)[0]$\\ $5D_{3/2}$ & $5P_{3/2}$ & $1284.228$ & $2.295[0]$ & $2.295[0]$ & $5.49(17)[8]$ & $1.357(14)[-1]$\\ $5D_{5/2}$ & $4P_{3/2}$ & $248.154$ & $7.290[-4]$ & $7.290[-4]$ & {$1.61(48)[7]$} & $2.231(22)[-4]$\\ $5D_{5/2}$ & $4F_{5/2}$ & $1869.183$ & $2.852[-1]$ & $2.862[-1]$ & $1.474(45)[7]$ & {$7.723(83)[-3]$}\\ $5D_{5/2}$ & $4F_{7/2}$ & $1868.321$ & $5.698[0]$ & $5.731[0]$ & $2.950(90)[8]$ & {$1.158(13)[-1]$}\\ $5D_{5/2}$ & $5P_{3/2}$ & $1288.311$ & $1.370[1]$ & $1.368[1]$ & $2.164(65)[9]$ & $8.078(82)[-1]$\\ $5F_{5/2}$ & $4D_{3/2}$ & $356.635$ & $2.102[-2]$ & $2.132[-2]$ & $1.565(52)[8]$ & $4.477(76)[-3]$\\ $5F_{5/2}$ & $4D_{5/2}$ & $357.939$ & $1.296[-3]$ & $1.369[-3]$ & $9.54(60)[6]$ & {$1.83(10)[-4]$}\\ $5F_{5/2}$ & $5D_{3/2}$ & $1428.839$ & $2.723[1]$ & $2.724[1]$ & $3.152(95)[9]$ & $1.447(14)[0]$\\ $5F_{5/2}$ & $5D_{5/2}$ & $1423.818$ & $1.290[0]$ & $1.295[0]$ & $1.510(46)[8]$ & {$4.588(49)[-2]$}\\ $5F_{7/2}$ & $4D_{5/2}$ & $357.958$ & $2.624[-2]$ & $2.657[-2]$ & $1.449(47)[8]$ & $3.71(41)[-3]$\\ $5F_{7/2}$ & $5D_{5/2}$ & $1424.119$ & $2.582[1]$ & $2.591[1]$ & $2.264(68)[9]$ & {$9.178(97)[-1]$}\\ $6S_{1/2}$ & $4P_{1/2}$ & $224.204$ & $3.349[-2]$ & $3.349[-2]$ & $3.010(90)[9]$ & $2.269(23)[-2]$\\ $6S_{1/2}$ & $4P_{3/2}$ & $228.150$ & $7.129[-2]$ & $7.129[-2]$ & $6.08(18)[9]$ & $2.373(24)[-2]$\\ $6S_{1/2}$ & $5P_{1/2}$ & $863.238$ & $1.210[0]$ & $1.208[0]$ & $1.906(57)[9]$ & {$2.129(22)[-1]$}\\ $6S_{1/2}$ & $5P_{3/2}$ & $885.407$ & $2.650[0]$ & $2.647[0]$ & $3.87(12)[9]$ & $2.273(23)[-1]$\\ $6P_{1/2}$ & $4S_{1/2}$ & $167.098$ & $1.020[-2]$ & $1.020[-2]$ & $2.215(66)[9]$ & $9.272(93)[-3]$\\ $6P_{1/2}$ & $4D_{3/2}$ & $370.990$ & 
$7.840[-2]$ & $7.784[-2]$ & $1.555(48)[9]$ & $1.605(20)[-2]$\\ $6P_{1/2}$ & $5S_{1/2}$ & $528.819$ & $5.244[-2]$ & $5.244[-2]$ & $3.59(11)[8]$ & $1.506(15)[-2]$\\ $6P_{1/2}$ & $5D_{3/2}$ & $1690.989$ & $7.209[0]$ & $7.193[0]$ & $1.510(45)[9]$ & $3.238(33)[-1]$\\ $6P_{1/2}$ & $6S_{1/2}$ & $4157.961$ & $1.376[1]$ & $1.373[1]$ & $1.940(58)[8]$ & $5.030(51)[-1]$\\ $6P_{3/2}$ & $4S_{1/2}$ & $166.701$ & $1.716[-2]$ & $1.716[-2]$ & $1.876(56)[9]$ & $1.564(16)[-2]$\\ $6P_{3/2}$ & $4D_{3/2}$ & $369.040$ & $1.588[-2]$ & $1.588[-2]$ & $1.600(48)[8]$ & $3.267(33)[-3]$\\ $6P_{3/2}$ & $4D_{5/2}$ & $370.436$ & $1.436[-1]$ & $1.436[-1]$ & $1.431(44)[9]$ & {$1.963(22)[-2]$}\\ $6P_{3/2}$ & $5S_{1/2}$ & $524.865$ & $7.563[-2]$ & $7.563[-2]$ & $2.649(79)[8]$ & $2.188(22)[-2]$\\ $6P_{3/2}$ & $5D_{3/2}$ & $1651.209$ & $1.376[0]$ & $1.374[0]$ & $1.548(47)[8]$ & $6.328(64)[-2]$\\ $6P_{3/2}$ & $5D_{5/2}$ & $1644.508$ & $8.037[0]$ & $8.020[0]$ & $9.15(28)[8]$ & {$2.474(25)[-1]$}\\ $6P_{3/2}$ & $6S_{1/2}$ & $3925.430$ & $2.739[1]$ & $2.731[1]$ & $2.294(69)[8]$ & $1.060(11)[0]$\\ \hline \end{longtable} \begin{longtable}{@{\extracolsep\fill}ccrrrrrr@{}} \caption{\label{tab6} Comparison of oscillator strengths for the As V, Se VI and Br VII ions from our calculations with available theoretical data and experimental values obtained using the beam-foil measurement.}\\ \hline Ion & Transition & \multicolumn{5}{c}{Theoretical studies} & Experiment \\ \cline{3-7}\\ & & Present & RMP-CP$^{a}$ & MCDF$^{b}$ & RQDO$^{c,d}$ & DSQDT$^{e,f}$ & Beam-foil$^g$\\ \hline As V & $4S_{1/2} \rightarrow 4P_{1/2}$ & $0.305(78)$ & $0.254$ & $0.26159$ & $0.2793^{c}$ & $0.2571^e$,$0.2571^f$ &\\ & $4S_{1/2} \rightarrow 4P_{3/2}$ & $0.6395(19)$ & $0.531$ & $0.54797$ & $0.5779^{c}$ & $0.5453^{e}$,$0.5455^f$ &\\ & $4S_{1/2} \rightarrow 5P_{1/2}$ & $0.00730(14)$ & & & & $0.0054^{e}$ &\\ & $4S_{1/2} \rightarrow 5P_{3/2}$ & $0.01038(10)$ & & & & $0.0073^{e}$ &\\ & $4P_{1/2} \rightarrow 4D_{3/2}$ & $1.107(39)$ & $0.973$ &
$0.98927$ & $0.8418^{d}$ & &\\ & $4P_{3/2} \rightarrow 4D_{3/2}$ & $0.1102(38)$ & $0.0970$ & $0.09869$ & $0.08479^{d}$ & &\\ & $4P_{3/2} \rightarrow 4D_{5/2}$ & $0.994(34)$ & $0.864$ & $0.89009$ & $0.7628^{d}$ & &\\ & $4P_{1/2} \rightarrow 5S_{1/2}$ & $0.1337(50)$ & & & $0.1266^{d}$ & &\\ & $4P_{3/2} \rightarrow 5S_{1/2}$ & $0.1422(57)$ & & & $0.1308^{d}$ & &\\ & $4P_{1/2} \rightarrow 6S_{1/2}$ & $0.0212(11)$ & & & $0.01843^{c}$ & &\\ & $4P_{3/2} \rightarrow 6S_{1/2}$ & $0.0219(12)$ & & & $0.01843^{c}$ & &\\ & $5P_{1/2} \rightarrow 6S_{1/2}$ & $0.228(40)$ & & & $0.2407^{c}$ & &\\ & $5P_{3/2} \rightarrow 6S_{1/2}$ & $0.240(44)$ & & & $0.2484^{c}$ & &\\ Se VI & $4S_{1/2} \rightarrow 4P_{1/2}$ & $0.2543(26)$ & $0.247$ & $0.2560$ & $0.2748^d$ & $0.2562^e$,$0.2562^f$ &\\ & $4S_{1/2} \rightarrow 4P_{3/2}$ & $0.5375(55)$ & $0.521$ & $0.5409$ & $0.5736^d$ & $0.5465^e$,$0.5465^f$ &\\ & $4S_{1/2} \rightarrow 5P_{1/2}$ & $0.01668(26)$ & & & & $0.0086^{e}$ &\\ & $4S_{1/2} \rightarrow 5P_{3/2}$ & $0.02652(36)$ & & & & $0.0126^{e}$ &\\ & $4P_{1/2} \rightarrow 4D_{3/2}$ & $0.9844(99)$ & $0.963$ & $1.0087$ & $0.8713^d$ & &\\ & $4P_{3/2} \rightarrow 4D_{3/2}$ & $0.09760(10)$ & $0.0959$ & $0.1001$ & $0.08725^d$ & &\\ & $4P_{3/2} \rightarrow 4D_{5/2}$ & $0.8823(88)$ & $0.864$ & $0.9035$ & $0.7859^d$ & &\\ & $4P_{1/2} \rightarrow 5S_{1/2}$ & $0.1344(14)$ & & & $0.1221^d$ & &\\ & $4P_{3/2} \rightarrow 5S_{1/2}$ & $0.1434(15)$ & & & $0.1266^d$ & &\\ & $4P_{1/2} \rightarrow 6S_{1/2}$ & $0.02266(32)$ & & & $0.01837^c$ && \\ & $4P_{3/2} \rightarrow 6S_{1/2}$ & $0.02347(28)$ & & & $0.01848^c$ & &\\ & $5P_{1/2} \rightarrow 6S_{1/2}$ & $0.2210(22)$ & & & $0.2298^c$ & &\\ & $5P_{3/2} \rightarrow 6S_{1/2}$ & $0.2342(24)$ & & & $0.2385^c$ & &\\ Br VII & $4S_{1/2} \rightarrow 4P_{1/2}$ & $0.2481(25)$ & $0.244$ & $0.2497$ & $0.2683^d$ & & $0.16$\\ & $4S_{1/2} \rightarrow 4P_{3/2}$ & $0.5291(53)$ & $0.521$ & $0.5321$ & $0.5652^d$ & & $0.32$\\ & $4P_{1/2} \rightarrow 4D_{3/2}$ & $0.9693(97)$ &
$0.951$ & $0.9685$ & $0.8797^d$ & &\\ & $4P_{3/2} \rightarrow 4D_{3/2}$ & $0.09559(98)$ & $0.0938$ & $0.0956$ & $0.08761^d$ & &\\ & $4P_{3/2} \rightarrow 4D_{5/2}$ & $0.8649(87)$ & $0.848$ & $0.8643$ & $0.7901^d$ & & $0.34$\\ & $4P_{1/2} \rightarrow 5S_{1/2}$ & $0.1314(14)$ & & & $0.1208^d$ & &\\ & $4P_{3/2} \rightarrow 5S_{1/2}$ & $0.1412(16)$ & & & $0.1257^d$ & & \\ \hline \end{longtable} $^{a}$\citep{migdalek1979influence}\\ $^{b}$\citep{curtis1989comprehensive}\\ $^{c}$\citep{martin1992fine}\\ $^{d}$\citep{lavin1994relativistic}\\ $^{e}$\citep{engo1997comparison}\\ $^{f}$\citep{owono2005core}\\ $^g$\citep{knystautas1977oscillator}\\ \clearpage \begin{longtable}{@{\extracolsep\fill}crrr@{}} \caption{\label{tab5} The estimated lifetimes $\tau$ (in ns) for a few low-lying excited states of the As V, Se VI and Br VII ions and their comparisons with the available literature data. Uncertainties are given in the parentheses.}\\ \hline State & As V & Se VI & Br VII\\ \hline $4P_{1/2}$ & $0.520(20)$ & $0.4630(14)$ & $0.3667(11)$\\ & $0.68 \pm{0.09}^a$ & $0.45 \pm{0.05}^{a,c}$ & \\ & $0.607^b$ & $0.460^b$ & $0.365^b$\\ $4P_{3/2}$ & $0.456(19)$ & $0.3962(12)$ & $0.3059(9)$\\ & $0.54 \pm{0.03}^a$ & $0.39 \pm{0.04}^{a,c}$ & \\ & $0.534^b$ & $0.395^b$ & $0.305^b$\\ $4D_{3/2}$ & $0.117(5)$ & $0.0887(23)$ & $0.0651(17)$\\ & $0.166 \pm{0.012}^a$ & $0.11 \pm{0.02}^{a,c}$ & \\ & $0.131^b$ & ${0.087}^b$ & {$0.065^b$}\\ $4D_{5/2}$ & $0.122(6)$ & $0.0934(28)$ & $0.0692(21)$\\ & $0.18\pm{0.02}^a$ & $0.14 \pm{0.03}^{a,c}$ & \\ & {$0.136^b$} & {$0.091^b$} & {$0.069^b$}\\ $5S_{1/2}$ & $0.134(5)$ & $0.0756(17)$ & $0.0476(11)$\\ & $0.14 \pm{0.01}^a$ & $0.06 \pm{0.02}^{a,c}$ & \\ $4F_{5/2}$ & $0.194(6)$ & $0.1195(33)$ & $0.0787(22)$\\ $4F_{7/2}$ & $0.195(10)$ & $0.1209(36)$ & $0.0797(24)$\\ $5P_{1/2}$ & $0.573(21)$ & $0.2363(47)$ & $0.1285(27)$\\ $5P_{3/2}$ & $0.589(21)$ & $0.2475(46)$ & $0.1356(26)$\\ $5D_{3/2}$ & $0.26(22)$ & $25.90(96)$ & $0.2476(56)$\\ $5D_{5/2}$ & $0.22(20)$ & 
$55.96(5.02)$ & {$0.4016(11)$}\\ $5F_{5/2}$ & $0.57(14)$ & $2.401(64)$ & $0.2883(8)$\\ $5F_{7/2}$ & $0.71(20)$ & $2.610(75)$ & $0.415(12)$\\ $6S_{1/2}$ & $0.2029(59)$ & $0.1096(19)$ & $0.0673(11)$\\ $6P_{1/2}$ & $0.833(86)$ & $0.4304(93)$ & $0.1714(28)$\\ $6P_{3/2}$ & $0.99(32)$ & $0.4688(85)$ & $0.1987(31)$\\ \hline \end{longtable} $^a$\citep{pinnington1982beam} $^b$\citep{curtis1989comprehensive} $^c$\citep{bahr1982beam} \twocolumn \section{Results and Discussion} \label{4} This section provides a detailed analysis of our results, along with a comparison against the available theoretical and experimental values from the literature. We explicitly present the calculated energies, line strengths, transition probabilities and oscillator strengths of the considered ions, as well as the lifetimes of all the considered excited states from $4S$ through $6P_{3/2}$. The E1 matrix elements are exclusively calculated with the relativistic AO method, as discussed in the previous section. Our E1 matrix elements obtained using the L- and V-gauge expressions are in good agreement for most transitions in all the considered ions, which supports the reliability of our calculated results. It is, however, observed that the energy values obtained using the SD method agree better with the experimental values than those from the SDpT method. This could be due to large cancellations between the contributions arising from the non-linear terms of the SD approximation in the RCC theory and the triple excitations, as indicated in Ref. \citep{safronova2008all}. Thus, we consider the E1 matrix elements from the SD method to be more accurate and use them for estimating the other properties. The differences between the results from the SD and SDpT methods are treated as possible uncertainties.
The differences in the results obtained using the L- and V-gauge expressions are also included as uncertainties. In addition, a maximum of $1\%$ uncertainty from the calculated energies is taken into account in the results. Our calculated energies for a few low-lying states, and their comparison with the available experimental values from the NIST AD \citep{ralchenko2008nist}, are given in Table \ref{tab1}. As can be seen, the deviation in the energy values of all the considered states is less than $1\%$. We observe maximum energy deviations of $0.15\%$, $0.12\%$ and $0.53\%$ for the $4F_{7/2}$, $5S_{1/2}$ and $4D_{5/2}$ states of the As V, Se VI and Br VII ions, respectively. These small differences between our calculated energies and the experimental values indicate that the wave functions determined using the SD method are accurate and can be applied to estimate the radiative properties reliably. Hence, we expect a maximum of $1\%$ uncertainty in the energies of the high-lying states, which is, therefore, included in our uncertainty calculations. We have listed our results for the wavelengths $\lambda$, line strengths $S_{vk}$, transition probabilities $A_{vk}$ and absorption oscillator strengths $f_{kv}$ for As V in Table \ref{tab2}. All these quantities are {\it ab initio}, estimated by combining the calculated energies and E1 matrix elements from the SD method. We have also presented the line strengths (the squares of the E1 matrix elements) from both the L- and V-gauge expressions using the SD wave functions, and observe a reasonable agreement between them. The uncertainties (quoted in parentheses) of the $S_{vk}$, $A_{vk}$ and $f_{kv}$ values are estimated from the uncertainties of the E1 matrix elements as well as of the energies.
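As an internal consistency check, the tabulated $A_{vk}$ and $f_{kv}$ values can be recovered from the line strengths through the standard atomic-structure conversion relations $A_{vk} = 2.02613\times 10^{18}\, S_{vk}/(g_v \lambda^3)$ and $f_{kv} = 303.756\, S_{vk}/(g_k \lambda)$, with $\lambda$ in \AA{}, $S_{vk}$ in a.u. and $g = 2J+1$. These relations are not quoted in the text, so the sketch below is only a cross-check; it reproduces the $4P_{1/2} \rightarrow 4S_{1/2}$ row of the table above.

```python
# Standard E1 conversions: line strength S (a.u.) and wavelength lam
# (Angstrom) to the emission rate A (s^-1) and the absorption oscillator
# strength f; g = 2J + 1 is the statistical weight of the relevant level.

def transition_rate(S, lam, g_upper):
    """Emission transition probability A_vk in s^-1."""
    return 2.02613e18 * S / (g_upper * lam**3)

def oscillator_strength(S, lam, g_lower):
    """Absorption oscillator strength f_kv (dimensionless)."""
    return 303.756 * S / (g_lower * lam)

# 4P_{1/2} -> 4S_{1/2} row of the table above: lam = 778.944 A,
# S = 1.272 a.u., and both levels have J = 1/2, so g = 2.
A = transition_rate(1.272, 778.944, 2)      # ~2.727e9 s^-1, as tabulated
f = oscillator_strength(1.272, 778.944, 2)  # ~0.2481, as tabulated
```

Both values agree with the tabulated $2.727(82)\times 10^{9}$~s$^{-1}$ and $2.481(25)\times 10^{-1}$ to well within the quoted uncertainties.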
An inspection of the data in Table \ref{tab2} shows that the spectroscopic properties of As V follow the systematic trend of high transition probabilities for transitions between the low-lying states, with the $4P_{3/2}$--$4D_{5/2}$ transition at the maximum. Small uncertainties are seen for the investigated properties in most of the allowed transitions, except for the $4P_{1/2,3/2}$--$5D_{3/2,5/2}$, $4F_{5/2,7/2}$--$5D_{3/2,5/2}$, $5P_{1/2,3/2}$--$5D_{3/2,5/2}$, $5D_{5/2}$--$5F_{5/2,7/2}$ and $5D_{3/2,5/2}$--$6P_{1/2,3/2}$ transitions. This may be attributed to the large electron correlation contributions (the differences between the DF and SD values) observed in the calculated E1 matrix elements of these transitions. Nonetheless, the estimated uncertainties for these transitions are still reasonable for astrophysical applications. The results for the wavelengths $\lambda$, line strengths $S_{vk}$, transition probabilities $A_{vk}$ and absorption oscillator strengths $f_{kv}$ for Se VI are presented in Table \ref{tab3}. The $4F_{5/2}$--$5D_{5/2}$ transition exhibits the lowest transition probability, of the order of $10^{3}$ s$^{-1}$, whereas the $4P_{3/2}$--$4D_{5/2}$ transition shows the maximum transition probability, of the order of $10^{10}$ s$^{-1}$. We therefore consider these to be the least and most dominant transitions, respectively, among all $48$ considered transitions in Se VI. The uncertainties are remarkably low throughout the data, except for the $4F_{5/2}$--$5D_{3/2,5/2}$ and $4F_{7/2}$--$5D_{5/2}$ transitions. Unusually large uncertainties of $23\%$ and $56\%$ are noticed in the absorption oscillator strengths of the $4F_{5/2}$--$5D_{3/2}$ and $4F_{7/2}$--$5D_{5/2}$ transitions, respectively. This may be because the L- and V-gauge matrix elements of these transitions differ by about $11\%$ and $27\%$, respectively, relative to the L-gauge results.
Such large differences could be a consequence of strong correlations among the electrons of the $D$ and $F$ states. We have tabulated our spectroscopic data for Br VII in Table \ref{tab4}. Once again, the data follow the same trend as for the As V and Se VI ions, with $4P_{3/2}$--$4D_{5/2}$ being the most dominant transition. Relatively low uncertainties are noticed for this highly stripped ion in comparison with the preceding Cu-isoelectronic ions. The maximum uncertainty in the transition probability is about 6\%, for the $4D_{5/2}$--$5F_{5/2}$ transition. Table \ref{tab6} compares the absorption oscillator strengths of a few transitions of the three ions with the available theoretical and experimental data. As seen in this table, the present results for the transition probabilities and absorption oscillator strengths agree well, within the quoted error limits, with the reference data for As V, Se VI and Br VII, except for the results given by Engo et~al. \citep{engo1997comparison}. This is expected because of the entirely different approaches, as well as the parameters, involved in their estimations. Our results deviate by less than $17\%$ from those of Ref. \citep{engo1997comparison}, except for the $4S$--$5P_{1/2,3/2}$, $4P_{1/2}$--$4D_{3/2}$ and $4P_{3/2}$--$4D_{3/2,5/2}$ transitions in As V. However, significant discrepancies are noticed for the $4S$--$5P_{3/2}$ and $4P_{1/2}$--$4D_{3/2}$ transitions of Se VI; the corresponding deviations are about $48\%$ and $52\%$. Such disagreement between the $f_{kv}$ values is not seen for the Br VII ion when we compare our results with the theoretical data obtained by Migdalek and Baylis \citep{migdalek1979influence}, Curtis and Theodosiou \citep{curtis1989comprehensive} and Lavin et~al. \citep{lavin1994relativistic}.
Migdalek and Baylis used a relativistic model potential with core-polarization (RMP-CP) effects for these calculations, while the multi-configuration Dirac-Fock (MCDF) and RQDO methods were employed in Refs. \citep{curtis1989comprehensive} and \citep{lavin1994relativistic}, respectively. However, large discrepancies are seen with respect to the experimental values obtained using the beam-foil measurement technique by Knystautas and Drouin \citep{knystautas1977oscillator}{, which are expected since the beam-foil technique tends to overestimate lifetimes, and thus underestimate oscillator strengths, owing to cascade effects from high-lying levels}. This strongly suggests that an independent measurement should be carried out to confirm the above data. We have tabulated the radiative lifetime values of the considered states and their comparison with the available literature data in Table \ref{tab5}. In general, the radiative lifetime increases as we move from the $S$ state to higher excited states. However, for As V, the $6P_{3/2}$ state exhibits the maximum lifetime among all the considered states, whereas for Se VI the $5D_{3/2,5/2}$ states show large lifetimes of about $25.90$ ns and $55.96$ ns, respectively. No comparably large lifetimes are found for any state of As V or Br VII. For Br VII, the $5F_{7/2}$ state displays the largest lifetime, $0.415$ ns, with an uncertainty of about $3\%$. Comparing the lifetimes of a few low-lying states with the available literature, our results are in reasonable agreement with those computed by Curtis and Theodosiou \citep{curtis1989comprehensive} using their MCDF method. The lifetime values also remain within the quoted error limits of the values given by Pinnington et~al. \citep{pinnington1982beam} as well as Bahr et~al. \citep{bahr1982beam}.
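The lifetimes in Table \ref{tab5} follow directly from the transition probabilities via $\tau_v = 1/\sum_k A_{vk}$, where the sum runs over all E1 decay channels of the state $v$. A minimal sketch, using the $4P_{1/2}$ state of Br VII (whose only E1 decay channel among the considered states is to $4S_{1/2}$) with the rate tabulated above:

```python
def lifetime_ns(decay_rates_per_s):
    """Radiative lifetime tau_v = 1 / sum_k A_vk, converted from s to ns."""
    return 1e9 / sum(decay_rates_per_s)

# 4P_{1/2} decays only to the 4S_{1/2} ground state, with A = 2.727e9 s^-1
# from the table above; this reproduces the tabulated lifetime of 0.3667 ns.
tau = lifetime_ns([2.727e9])
```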
Owing to the unavailability of data for the states above $5S$ in As V and Se VI, and above $4D_{5/2}$ in Br VII, we could not verify our results thoroughly; this calls for further theoretical and experimental studies. In the case of Br VII, we believe that our estimated values for the various radiative properties are more reliable, since the employed AO method accounts for electron correlation effects more rigorously than the previous estimations. Since only a handful of data were previously available, and their uncertainties were not quoted, our reported values will be useful for the analyses of various astrophysical processes involving the Cu-isoelectronic As, Se and Br ions. Moreover, our precisely calculated values may be useful to guide prospective experiments for their observations. \section{Conclusion} \label{5} We have reported a large number of radiative properties of the Cu-isoelectronic As, Se and Br ions by employing an all-order many-body method in the relativistic coupled-cluster theory framework. This consists of precise estimations of the line strengths, transition probabilities and absorption oscillator strengths of various transitions, as well as the radiative lifetimes of many low-lying states. A total of $48$ transitions were considered for each of the three ions, and the lifetimes were calculated for all the associated excited states from $4S$ to $6P_{3/2}$. We have also compared our results with the previously reported values for a few selected transitions and find reasonably good agreement among them, except for a couple of transitions in each of the three ions. Further theoretical and experimental investigations are required to confirm these data. Our data, with the quoted uncertainties, can provide a benchmark for many applications in astrophysical processes and for their observations in the future. \section*{Acknowledgement} The work of B. A. is supported by SERB-TARE research grant no.
TAR/2020/000189, New Delhi, India. The employed all-order method was developed in the group of Professor M. S. Safronova of the University of Delaware, USA. \section*{Data availability} The data underlying this article are available in the article. \bibliographystyle{mnras}
\section*{\sc Exercises} \addcontentsline{toc}{section}{Exercises}} \newcommand{\Proof}{\noindent {\sc Proof.\ }} \newcommand{\Proofof}[2]{\noindent {\sc Proof of #1~\ref{#2}.\ }} \newcommand{\Notes}{\section*{\sc Notes} \addcontentsline{toc}{section}{Notes}} \newcommand{\Remarknumb}{\medskip \addtocounter{thm}{1} \noindent {\bf Remark \arabic{section}.\arabic{thm}.} } \newcommand{\Examplenumb}{\medskip \addtocounter{thm}{1}\noindent {\sc Example \arabic{section}.\arabic{thm}.} } \newcommand{\tens}[1]{\mbox{$\otimes_{#1}$}} \newcommand{\Sum}[2]{\sum_{#1}^{#2}} \newcommand{\Map}[3]{\mbox{$\mbox{Map}_{#1}(#2,#3)$}} \newcommand{\Hom}[3]{\mbox{$\mbox{Hom}_{#1}(#2,#3)$}} \newcommand{\Spec}{\mbox{$\mbox{Spec}$} } \newcommand{\G}{\mbox{$\Gamma$}} \newcommand{\D}{\mbox{$\Delta$}} \newcommand{\into}{\mbox{$\rightarrow$}} \newcommand{\C}{\mbox{${\bf C}$}} \newcommand{\Z}{\mbox{${\bf Z}$}} \newcommand{\R}{\mbox{${\bf R}$}} \newcommand{\Q}{\mbox{${\bf Q}$}} \newcommand{\X}{\mbox{${\cal X}$}} \newcommand{\Y}{\mbox{${\cal Y}$}} \newcommand{\T}{\mbox{${\cal Z}$}} \newcommand{\iso}{\cong} \newcommand{\Oin}[1]{\mbox{$O_{#1}$}} \newcommand{\p}{\mbox{$p$}\ } \newcommand{\q}{\mbox{$q$}\ } \newcommand{\Rof}[1]{\mbox{$\cal R (#1)$}} \newcommand{\Clof}[1]{\mbox{$Cl(#1)$}} \renewcommand{\thepart}{\Roman{part}} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\thesubsection}{\thesection.\alph{subsection}} \renewcommand{\theequation}{\thethm} \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\thethmletter}{\Roman{thmletter}} \newenvironment{eq}{\addtocounter{thm}{1}\begin{equation} }{\end{equation}} \newenvironment{exple}{\medskip \addtocounter{thm}{1} \noindent {\sc Example \arabic{section}.\arabic{thm}.}}{\medskip} \newenvironment{exos}{\begin{list}{\arabic{section}.\arabic{ex}} {\usecounter{ex}\small}}{\end{list}} \newenvironment{notes}{\begin{list}{\arabic{not})}{\usecounter{not}}}{\end{list}}
\section{Introduction}\label{Sec:Intro} Free-space optical (FSO) communication systems have recently attracted great attention within the research community as well as for commercial use. FSO systems can provide ultra-high data rates (on the order of multiple gigabits per second), immunity to electromagnetic interference, excellent security and a large unlicensed bandwidth, i.e. hundreds to thousands of times larger than that of radio-frequency (RF) systems, along with low installation and operational cost \cite{J:Zhu1}. The challenge in employing such systems is that FSO links are highly vulnerable to the detrimental effects of attenuation under adverse weather conditions (e.g. fog), pointing errors and atmospheric turbulence. One method to improve the reliability of the FSO link is to employ spatial diversity, i.e. multiple lasers and multiple apertures creating a multiple-input multiple-output (MIMO) optical channel. Because of its low complexity, spatial diversity is a particularly attractive fading mitigation technique, and the resulting performance enhancements have been extensively studied in many past research works in the field of FSO communications \cite{J:Garcia-ZambranaA, J:Navidpour, J:Peppas2, J:Peppas2012}. In order to evaluate the impact of atmospheric turbulence on the performance of OSM, accurate models for the fading distribution are necessary. For example, the lognormal distribution is often used to model weak turbulence conditions, whereas the negative exponential and the K-distribution are used to model strong turbulence conditions \cite{J:Andrews}. Other, more general statistical models have also been proposed to model scintillation over all turbulence conditions, including the Gamma-Gamma \cite{J:Al-Habash}, the lognormal-Rice (or Beckmann) \cite{J:Churnside}, the homodyned K (H-K) \cite{J:Jakeman2} and the I-K \cite{J:AndrewsIK2,J:AndrewsIK3, J:PeppasIK} distributions.
All these models are based on the argument that scintillation is a doubly stochastic random process modelling both small- and large-scale turbulence effects. Moreover, they agree well with measurement data and simulations for a wide range of turbulence conditions. In this paper, the H-K distribution is adopted to model turbulence-induced fading. The main reason for this choice is that this distribution is based on a very general scattering model which is valid for a wide range of atmospheric conditions. It is also noted that the H-K distribution generalizes existing models such as the K-distribution. The H-K distribution models the field of the optical wave as the sum of a deterministic component and a random component, the intensity of which follows the Rice (Nakagami-$n$) distribution. The average intensity of the random portion of the field is treated as a fluctuating quantity \cite{J:AndrewsIK2}. It is important to underline that, to the best of our knowledge, no papers have been published in the open technical literature analyzing and evaluating the performance of FSO systems over such channels, because of the complicated mathematical form of the respective probability density functions (PDFs). Depending on their detection scheme, FSO systems can be classified into two main categories, namely coherent (heterodyne detection) and non-coherent (direct detection) systems. Coherent FSO systems have the information bits encoded directly onto the electric field of the optical beam. At the receiver, a local oscillator (LO) is employed to extract the information encoded on the optical carrier electric field. On the one hand, coherent FSO systems can provide significant performance enhancements due to spatial-temporal selectivity and heterodyne gain in comparison with direct detection systems. Moreover, they are more versatile, as any kind of amplitude, frequency or phase modulation can be employed.
On the other hand, coherent receivers are more difficult to implement, as the LO field should be spatially and temporally coherent with the received field. Recently, the so-called optical spatial modulation (OSM) has emerged as a power- and bandwidth-efficient single-carrier transmission technique for optical wireless communication systems \cite{J:OSM2011, J:OSMCOML2011, J:FathHaas}. This spatial diversity scheme, initially proposed in \cite{C:ChauandYuSpatial} and further investigated in \cite{J:Mesleh, J:TrellisCodedSM}, employs a simple modulation mechanism that activates just one out of several MIMO transmitters at any time instant and uses the index of the activated transmitter as an additional dimension for conveying implicit information. It has been shown that OSM can increase the data rate by the base-two logarithm of the number of transmit units \cite{J:OSM2011}. Also, OSM can increase the data rate by factors of 2 and 4, respectively, as compared to on-off keying (OOK) and pulse position modulation (PPM) \cite{J:OSM2011, J:OSMCOML2011}. It is underlined that such performance gains are obtained with a significant reduction in receiver complexity and a simplified system design. Because of the above-mentioned advantages of OSM over other, more conventional transmission schemes, and given the wide applicability of FSO, it is of interest to investigate the potential performance enhancements obtained by incorporating OSM in FSO systems. In general, however, this research topic has not yet been dealt with by our research community. Only recently have papers been published in the open technical literature dealing with performance analysis studies of FSO systems employing spatial modulation and operating in the presence of atmospheric turbulence, e.g. see \cite{J:Hwang} and \cite{J:Ozbilgin}.
Specifically, in \cite{J:Hwang}, the combination of subcarrier intensity modulation and spatial modulation with receiver diversity was proposed to enhance the performance of intensity-modulated direct-detection (IM/DD) FSO systems. In \cite{J:Ozbilgin}, another IM/DD-based FSO system, which combines antenna shift keying with joint pulse position and amplitude modulations, was considered. For this system, denoted as spatial pulse position and amplitude modulation (SPPAM), the atmospheric turbulence channel was modeled by the log-normal or Gamma-Gamma distributions, and the performance was evaluated, in terms of bounds, for uncoded and coded signals. Average bit error probability (ABEP) evaluation results have shown that SPPAM offers a compromise between spectral and power efficiencies, as well as a certain degree of robustness against atmospheric turbulence. Apart from these two papers, which deal with non-coherent detection schemes, the potential enhancement by OSM of the performance of FSO systems with coherent detection still remains an open research topic which, to the best of our knowledge, has not been addressed so far in the open technical literature. Motivated by the above, in this paper we present for the first time a generic analytical framework which can be used to accurately obtain the performance of outdoor OSM with coherent detection in the presence of turbulence-induced fading. More specifically, within this novel analytical framework, the main novel research contributions of the paper are as follows: \begin{itemize} \item New analytical expressions for the ABEP of coherent OSM under turbulence conditions modeled by the H-K distribution are derived. When the transmitter is equipped with two apertures, the resulting analytical expressions are exact, whereas for an arbitrary number of transmit apertures tight upper bounds can be obtained.
\item Error probability performance bounds for coded OSM systems are derived, and the performance enhancements obtained when channel coding is employed are presented and analyzed. \end{itemize} The error probability performance of OSM is also compared to that of conventional FSO schemes with transmit or receive diversity only, i.e. when Maximal Ratio Combining (MRC), Selection Combining (SC) or Alamouti-type Space-Time Block Codes (STBC) are employed. It is noted that the theoretical analysis is substantiated by comparing the theoretical and the equivalent simulated performance evaluation results obtained by means of Monte Carlo techniques. The paper is organized as follows. After this introduction, Section~\ref{Sec:Model} outlines the system and channel models. In Section~\ref{Sec:ABEP_Analysis}, analytical expressions for the ABEP of uncoded OSM systems are presented. Asymptotic ABEP expressions are also derived, wherefrom the diversity gain of coherent OSM can be readily deduced. The performance of coded OSM systems is discussed in Section~\ref{Sec:Coded}. In Section~\ref{Sec:Results}, the various performance evaluation results, together with their interpretations and comparisons, are presented. Finally, concluding remarks can be found in Section~\ref{Sec:Conclusions}. {\textit{Notations:} A comprehensive list of all mathematical notations used in this paper can be found in Table~\ref{Tab:Notations}.} \section{System and Channel Model}\label{Sec:Model} In this section, a detailed description of the OSM FSO system model, i.e. transmitter, channel and receiver, is provided. Moreover, the H-K distribution is introduced, and analytical expressions for its parameters are derived in terms of equivalent physical parameters of the turbulence phenomenon, such as the refractive-index structure parameter, the optical wave number and the propagation path length. \subsection{Preliminaries} Let us consider an $M \times N$ MIMO FSO system with $M$ transmit units (lasers) and $N$ coherent receivers.
It is assumed that the receiving apertures are separated by more than a coherence wavelength to ensure the independence of the fading channels. The basic principle of OSM is as follows \cite{J:OSM2011, J:FathHaas}: i) The transmitter encodes blocks of $\log_2(M)$ data bits into the index of a single transmit unit. Such a block of bits is hereafter referred to as a ``message'' and is denoted by $b_m$, $\forall m =1,2,\ldots,M$. It is assumed that the $M$ messages are transmitted with equal probability by the encoder and that the related transmitted signal is denoted by $\tilde{E}_{m} = E_{m}\exp(\jmath \phi_{b_m})$. During each time slot, only one transmitter $\ell$, where $\ell = 1,2, \ldots, M$, is active for data transmission. The information bits are modulated on the electric field of an optical signal beam through an external modulator. During this particular time slot, the remaining transmit lasers are kept silent, i.e. they do not transmit. ii) At the receiver, the incoming optical field is mixed with a local oscillator (LO) field and the combined wave is first converted by the photodetector to an electrical one. A bandpass filter is then employed to extract the intermediate frequency (IF) component of the total output current. Finally, an $M$-hypothesis detection problem is solved to retrieve the active transmit unit index, which results in the estimation of the unique sequence of bits emitted by the transmitter. \subsection{Receiver Structure} The received electric field at the aperture plane of the $n$-th receiver, after mixing with a LO beam, can be expressed as \cite{J:Niu3, J:Bayaki2} \begin{equation} \begin{split} e_n(t) & = \sqrt{2P_tZ_0}E_{m} h_{m,n} \cos(\omega_0 t + \phi_{m,n} + \phi_{b_m}) \\ & + \sqrt{2P_{LO}Z_0} \cos(\omega_{LO}t).
\end{split} \end{equation} In the above equation, $P_t$ is the transmit laser power, $Z_0$ is the free-space impedance, $h_{m,n}$ and $\phi_{m,n}$ denote the magnitude and the phase of the complex channel coefficient between the $m$-th transmit and the $n$-th receive aperture, respectively. Furthermore, $P_{LO}$ denotes the power of the local oscillator and $\omega_{LO} = \omega_0 + \omega_{IF}$, where $\omega_0$ and $\omega_{IF}$ are the carrier and the intermediate radian frequencies, respectively. The output current of the $n$-th photodetector can be mathematically expressed as \cite{J:Niu3, J:Bayaki2} \begin{equation}\label{Eq:Photocurrent} i_n(t) = \frac{R}{Z_0}[e_n(t)]^2 \end{equation} where $R = \eta q_e /(h \nu_0)$ is the responsivity of the photodetector, $q_e = 1.6\times 10^{-19}\,\mathrm{C}$ is the electron charge, $h = 6.6\times 10^{-34}\,\rm{J\cdot s}$ is the Planck constant, $\eta$ is the photodetector efficiency, and $\nu_0 = \omega_0/(2\pi)$ is the optical center frequency. Expanding \eqref{Eq:Photocurrent} and ignoring the double-frequency terms that are filtered out by the bandpass filter, the resulting photocurrent can be expressed as \begin{equation}\label{Eq:Photocurrent2} \begin{split} i_n(t) & = R P_t E_{m}^2 h_{m,n}^2+RP_{LO} \\ & +2R\sqrt{P_tP_{LO}}E_{m} h_{m,n}\cos(\omega_{IF} t - \phi_{m,n} - \phi_{b_m}) \\ & \triangleq i_{\rm{DC}}(t)+ i_{\rm{AC}}(t). \end{split} \end{equation} In \eqref{Eq:Photocurrent2}, $i_{\rm{DC}}(t) \triangleq R P_t E_{m}^2 h_{m,n}^2+RP_{LO}$ is the DC component generated by the signal and local oscillator fields, respectively, and $i_{\rm{AC}}(t) \triangleq 2R\sqrt{P_tP_{LO}}E_{m} h_{m,n}\cos(\omega_{IF} t - \phi_{m,n} - \phi_{b_m})$ is the AC component of the received photocurrent which, unlike for direct detection, contains information about the frequency and phase of the received signal.
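The expansion in \eqref{Eq:Photocurrent2} can be verified numerically: squaring the combined signal-plus-LO field and reading off the spectral line at the IF should reproduce the AC amplitude $2R\sqrt{P_tP_{LO}}E_m h_{m,n}$. The sketch below uses illustrative toy frequencies and powers (an optical carrier cannot, of course, be sampled directly); none of the parameter values are taken from the system under study.

```python
import numpy as np

# Toy numerical check of the heterodyne photocurrent expansion: squaring the
# combined signal + LO field leaves a DC level plus an IF tone of amplitude
# 2*R*sqrt(Pt*PLO)*E*h.  All values (frequencies, powers, R, Z0) are
# illustrative assumptions only.
R, Z0 = 1.0, 377.0            # responsivity [A/W], free-space impedance [Ohm]
Pt, PLO = 1e-3, 1e-1          # transmit and LO powers [W]
E, h_mn, phi = 1.0, 0.7, 0.3  # field amplitude, channel gain, total phase
f0, fIF = 100.0, 5.0          # toy carrier and IF frequencies [Hz]
fLO = f0 + fIF

fs, T = 4096, 1.0             # sampling rate above twice the highest mixing tone
t = np.arange(0, T, 1.0 / fs)
e_n = (np.sqrt(2 * Pt * Z0) * E * h_mn * np.cos(2 * np.pi * f0 * t + phi)
       + np.sqrt(2 * PLO * Z0) * np.cos(2 * np.pi * fLO * t))
i_n = (R / Z0) * e_n ** 2     # photodetector output current

# Amplitude of the spectral line at the IF (rectangular window, integer cycles)
X = np.fft.rfft(i_n) / len(i_n)
amp_IF = 2 * np.abs(X[int(fIF * T)])
amp_theory = 2 * R * np.sqrt(Pt * PLO) * E * h_mn
print(amp_IF, amp_theory)
```

With all tones placed on integer numbers of cycles, the DFT recovers the IF amplitude essentially exactly.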
It is assumed that for coherent detection the intermediate frequency $\omega_{IF}$ is nonzero, so that the signal power can be expressed as $P_s = 2R^2P_tP_{LO}E_{m}^2 h_{m,n}^2$. As in \cite{J:Bayaki2, J:Niu3, J:Aghajanzadeh, J:Kiasaleh}, we also consider that $P_{LO} \gg P_s$ and thus, the DC photocurrent can be approximated as $i_{\rm{DC}}(t) \approx RP_{LO}$. The photodetection process is impaired by shot noise with variance $\sigma_{\rm{shot,L}}^2 = 2q_eRP_{LO}B_e$, where $B_e$ is the electrical bandwidth of the photodetector. It is also noted that, because of the large value of $RP_{LO}$, the photocurrents due to thermal noise and the dark current can be ignored \cite{J:Niu3}. Following \cite{J:Bayaki2} and \cite{J:Aghajanzadeh}, the sufficient statistics at the $n$-th coherent receiver can be expressed as \begin{equation}\label{Eq:signal_model} y_n = \sqrt{\mu}h_{m,n}E_m\exp[\jmath(\phi_{m,n}+ \phi_{b_m})] + z_{n} \end{equation} where $\mu = {RP_t}/({q_eB_e})$ is the average signal-to-noise ratio (SNR) and $z_n$ is the noise at the $n$-th receiver. Assuming that the LO power is large and the receiver noise is dominated by LO-related noise terms, the Additive White Gaussian Noise (AWGN) model can be employed as an accurate approximation of the Poisson photon-counting detection model \cite{J:Bayaki2, J:Niu3}. Thus, $z_{n}$ can be modeled as a zero-mean unit-variance complex Gaussian random variable \cite{J:Bayaki2}. Similar to \cite{J:RenzoRice}, it is assumed that the receiver has knowledge of the actual fading gains and that the total fading remains constant over one bit interval and changes from one interval to another in an independent manner.
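For a rough feel of the magnitudes involved, the following back-of-the-envelope sketch evaluates the responsivity $R = \eta q_e/(h\nu_0)$ and the average SNR $\mu = RP_t/(q_eB_e)$; the parameter values ($\eta = 0.8$, $\lambda = 1550$ nm, $P_t = 1$ mW, $B_e = 1$ GHz) are illustrative assumptions, not values used later in the paper.

```python
import math

# Illustrative evaluation of the responsivity R = eta*qe/(h*nu0) and of the
# average SNR mu = R*Pt/(qe*Be).  All parameter values are assumptions.
qe = 1.6e-19          # electron charge [C]
h_planck = 6.626e-34  # Planck constant [J*s]
c = 3e8               # speed of light [m/s]
lam = 1550e-9         # wavelength [m]
eta = 0.8             # photodetector efficiency
Pt = 1e-3             # transmit laser power [W]
Be = 1e9              # electrical bandwidth [Hz]

nu0 = c / lam                       # optical center frequency [Hz]
R = eta * qe / (h_planck * nu0)     # responsivity [A/W], close to 1 A/W here
mu = R * Pt / (qe * Be)             # average SNR (linear)
print(R, 10 * math.log10(mu))       # SNR comes out near 68 dB
```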
At the receiver, the optimal spatial modulation detector estimates the active transmitter index, $\ell$, at a given time slot according to \cite{J:Jeganathan} \begin{equation}\label{Eq:MLdetector} \begin{split} \hat{\ell}& = {\operatornamewithlimits{argmax}_{\ell}} p_{\mathbf{y}}\left(\mathbf{y} | {\mathbf{x}}, {\mathbf{H}} \right) \\ & = {\operatornamewithlimits{argmin}_{\ell}}\left\{\sqrt{\mu} \parallel \mathbf{h}_{\ell} x_{\ell}\parallel_F^2 -2\Re\left\{\mathbf{y}^{H}\mathbf{h}_{\ell}x_{\ell}\right\}\right\} \end{split} \end{equation} where \begin{itemize} \item[-] $\mathbf{x}$ is an $M$-dimensional vector with elements corresponding to the electrical field ${E}_m\exp(\jmath\phi_{b_m})$ that is transmitted over the optical MIMO channel; \item[-] $\mathbf{H}(t)$ is the $N\times M$ optical MIMO channel matrix defined as \begin{equation} \begin{split} &\mathbf{H}(t) = [\mathbf{h}_{1}, \mathbf{h}_{2}, \ldots, \mathbf{h}_{M}] \\ & \triangleq \left[ \begin{array}{ccc} {{h}}_{11}(t)\exp(\jmath\phi_{11}) & \ldots & {{h}}_{1M}(t)\exp(\jmath\phi_{1M})\\ {{h}}_{21}(t)\exp(\jmath\phi_{21}) & \ldots & {{h}}_{2M}(t)\exp(\jmath\phi_{2M})\\ \vdots & \ddots & \vdots \\ {{h}}_{N1}(t)\exp(\jmath\phi_{N1}) & \ldots & {{h}}_{NM}(t)\exp(\jmath\phi_{NM}) \end{array} \right]\; \end{split} \end{equation} \item[-] $\mathbf{z}$ is the $N$-dimensional noise vector; \item[-] $p_{\mathbf{y}}\left(\mathbf{y} | {\mathbf{x}}, {\mathbf{H}} \right)$ is the PDF of $\mathbf{y}$ conditioned on the transmitted vector $\mathbf{x}$ and the channel ${\mathbf{H}}$. \end{itemize} \subsection{Channel Model} A discrete scattering model is considered, where the radiation field of an optical wave at a particular point is assumed to be composed of a number of scattered components that have traveled different paths.
Under the Ricean assumption \cite{J:AndrewsIK2}, the complex channel path gains ${\tilde{h}}_{ij}(t)$ between the $i$-th transmitter and the $j$-th photodetector can be expressed as ${\tilde{h}}_{ij}(t) = h_{ij}(t)\exp(\jmath \omega t)$ where $\omega$ is the radian frequency of the optical wave and \begin{equation}\label{Eq:field} \begin{split} h_{ij}(t) & = \Re\{h_{ij}(t)\}+\jmath \Im\{h_{ij}(t)\} \\ &= A_{ij}\exp[\jmath \theta_{ij}(t)]+R_{ij}(t)\exp[\jmath \Phi_{ij}(t)] \end{split} \end{equation} where the term $A_{ij}\exp[\jmath \theta_{ij}(t)]$ is a deterministic component and $R_{ij}(t)\exp[\jmath \Phi_{ij}(t)]$ is a circular complex Gaussian random variable. Hence, the amplitude $R_{ij}$ is Rayleigh distributed with parameter $\sigma_{ij}^2 = b_{ij}/2$ \cite[Eq. (13)]{J:AndrewsIK2} and the phase $\Phi_{ij}$ is uniformly distributed over $[0, 2\pi)$. Under the assumption of a doubly stochastic scintillation model \cite{J:AndrewsIK2}, the effect of random fluctuations in the turbulence parameters is modeled by allowing random variations in the parameter $b_{ij}$ of the Rayleigh component. Following \cite{J:AndrewsIK2}, it is further assumed that $b_{ij}$ follows a gamma distribution with PDF given by \begin{equation}\label{Eq:pdfb} f_{b_{ij}}(b) = \left(\frac{\alpha_{ij}}{b_{0_{ij}}}\right)^{\alpha_{ij}}\frac{b^{\alpha_{ij}-1}}{\Gamma(\alpha_{ij})}\exp\left(-\frac{\alpha_{ij} b}{b_{0_{ij}}}\right) \end{equation} where $\alpha_{ij}$ is the shaping parameter, representing the effective number of scatterers, and $b_{0_{ij}} = \mathbb{E}\{b_{ij}\}$. Then, the PDF of the irradiance $\mathrm{I}_{ij} = |{h}_{ij}(t)|^2$, $f_{\mathrm{I}_{ij}}(\mathrm{I})$, can be expressed as \cite[Eq.
(8)]{J:AndrewsIK2} \begin{equation}\label{Eq:PDFIK} \begin{split} & f_{\mathrm{I}_{ij}}(\mathrm{I}) = \frac{\left({\alpha_{ij}}/{b_{0_{ij}}}\right)^{\alpha_{ij}}}{\Gamma(\alpha_{ij})} \\ & \times \int_0^{\infty} b^{\alpha_{ij}-2}\exp\left(-\frac{\alpha_{ij} b}{b_{0_{ij}}}-\frac{\mathrm{I}+{A_{ij}}^2}{b}\right)I_0\left(\frac{2{A_{ij}}\sqrt{\mathrm{I}}}{b}\right)\mathrm{d}b \end{split} \end{equation} which is actually the integral representation of the H-K distribution \cite{J:Jakeman2}. It is noted that $f_{\mathrm{I}_{ij}}(\mathrm{I})$ cannot, in general, be expressed in closed form, with the exception of the special cases $A_{ij} = 0$ and $\alpha_{ij} = 1$. Specifically, for $A_{ij} = 0$, \eqref{Eq:PDFIK} reduces to the K-distribution, whereas for $\alpha_{ij} = 1$, \eqref{Eq:PDFIK} reduces to a special case of the I-K distribution \cite[Eq. (10)]{J:Jakeman2}. The $\nu$-th normalized moment of $\mathrm{I}_{ij}$ is given by \cite[Eq. (22)]{J:Jakeman2} as \begin{equation}\label{Eq:moment} \frac{\mathbb{E}\{\mathrm{I}_{ij}^\nu\}}{\mathbb{E}\{\mathrm{I}_{ij}\}^\nu} = \frac{\nu!}{\alpha_{ij}^\nu(1+\rho_{ij})^\nu} \sum_{k=0}^{\nu}\binom{\nu}{k}\frac{\Gamma(\alpha_{ij}+\nu-k)}{\Gamma(\alpha_{ij})}\frac{(\alpha_{ij}\rho_{ij})^k}{k!} \end{equation} where $\rho_{ij} = A_{ij}^2/b_{0_{ij}}$ is the coherence parameter, defined as the ratio of the mean intensities of the constant-amplitude and the random components of the field in \eqref{Eq:field} \cite{J:AndrewsIK2,J:AndrewsIK3}. Using \eqref{Eq:moment}, the \emph{scintillation index} can be readily calculated as \begin{equation}\label{Eq:SIHK} \sigma_\mathrm{I_{ij}}^2 \triangleq \frac{\mathbb{E}\{\mathrm{I}_{ij}^2\}}{\mathbb{E}\{\mathrm{I}_{ij}\}^2}-1 = \frac{\alpha_{ij}+2\alpha_{ij}\rho_{ij}+2}{\alpha_{ij}(1+\rho_{ij})^2}. \end{equation} Under the assumption of spherical wave propagation, $\sigma_\mathrm{I_{ij}}^2$ can be directly related to atmospheric conditions as \cite[Eq. (7), Eq.
(9)]{J:AndrewsIK3} \begin{equation}\label{Eq:SI2} \sigma_\mathrm{I_{ij}}^2 \approx \begin{cases} 0.41\sigma_1^2\left(1 + 0.2\sigma_1^2\right),\, \sigma_1 \ll 1 \\ 1+{2.8}/{\sigma_1^{{4}/{5}}},\, \sigma_1 \gg 1 \end{cases} \end{equation} where $\sigma_1^2=1.23 C_{n_{ij}}^2k^{7/6}L_{ij}^{11/6}$ is the Rytov variance, $k=2\pi/\lambda$ is the optical wave number with $\lambda$ being the wavelength, $L_{ij}$ is the link distance and $C_{n_{ij}}^2$ denotes the refractive-index structure parameter. For FSO links near the ground, $C_{n_{ij}}^2 \approx 1.7 \times 10^{-14} \mathrm{m}^{-2/3}$ and $8.4 \times 10^{-15} \mathrm{m}^{-2/3}$ during daytime and nighttime, respectively \cite{B:Goodman1985}. Moreover, $\sigma_1 \ll 1$ and $\sigma_1 \gg 1$ correspond to weak and strong turbulence conditions, respectively. Using \eqref{Eq:SI2}, the parameters of the H-K distribution, $\alpha_{ij}$ and $\rho_{ij}$, can be directly related to physical parameters of the turbulence by following a similar line of arguments as in \cite{J:AndrewsIK3}, where similar results were derived for the I-K distribution. In particular, on the one hand, weak turbulence conditions are characterized in the H-K distribution by large values of $\rho_{ij}$. In this case the scintillation index given by \eqref{Eq:SIHK} can be approximated as \begin{equation}\label{Eq:sigmaweak} \sigma_\mathrm{I_{ij}}^2 \approx \frac{2}{\rho_{ij}}, \,\text{with } \rho_{ij} \gg 1. \end{equation} On the other hand, assuming strong turbulence conditions where $\rho_{ij}$ tends to zero, \eqref{Eq:SIHK} can be approximated as \begin{equation}\label{Eq:sigmastrong} \sigma_\mathrm{I_{ij}}^2 \approx 1+\frac{2}{\alpha_{ij}}, \,\text{with } \rho_{ij} \ll 1.
\end{equation} By comparing \eqref{Eq:sigmaweak} and \eqref{Eq:sigmastrong} with the first and second branches of \eqref{Eq:SI2}, respectively, $\alpha_{ij}$ and $\rho_{ij}$ can be obtained as \begin{equation}\label{Eq:param-a} \alpha_{ij} = 0.71\sigma_{1_{ij}}^{{4}/{5}} \end{equation} \begin{equation}\label{Eq:param-rho} \rho_{ij} = \frac{4.88}{\sigma_{1_{ij}}^2(1+0.2\sigma_{1_{ij}}^2)}. \end{equation} To the best of our knowledge, the relationship of $\alpha_{ij}$ and $\rho_{ij}$ with $\sigma_{1_{ij}}$ given by \eqref{Eq:param-a} and \eqref{Eq:param-rho} is a novel result. \section{Performance Analysis of Uncoded OSM}\label{Sec:ABEP_Analysis} In this section, by employing the well-known MGF-based approach for the performance analysis of digital communications over fading channels \cite{B:Alouini}, analytical expressions for the ABEP of uncoded OSM systems will be derived. Expressions for the diversity and coding gains of OSM systems are also presented, thus providing useful insight as to how these parameters affect the overall system performance. \subsection{Preliminaries} For $M = 2$, the conditional bit error probability (BEP) of OSM systems, conditioned on the fading channel realizations, can be obtained in closed form as \cite{J:RenzoRice} \begin{equation}\label{Eq:ABEP2xN} P_E(\mathbf{h}_1, \mathbf{h}_2) = Q\left(\sqrt{\frac{\mu}{4}{\parallel \mathbf{h}_1- \mathbf{h}_2\parallel}_F^2 }\right). \end{equation} The squared Frobenius norm in \eqref{Eq:ABEP2xN} can be expressed as \begin{equation}\label{Eq:Frob} {\parallel \mathbf{h}_1- \mathbf{h}_2\parallel}_F^2 = \sum_{n=1}^{N}|h_{1,n}- h_{2,n}|^2 \end{equation} where $h_{i,n}$ is the $n$-th element of $\mathbf{h}_i$, $\forall i \in \{1,2\}$. When $M > 2$ transmitters are considered, a tight upper bound for the conditional BEP of the above system can be obtained as \cite[Eq.
(7)]{J:OSM2011} \begin{equation}\label{Eq:ABEPMxN} \begin{split} &P_E({\mathbf{H}}) \leq \frac{M^{-1}}{\log_2(M)}\\ &\times \sum_{m_1=1}^{M}\sum_{m_2 \neq m_1 =1}^{M}N_b(m_1, m_2){\rm PEP}(m_1\rightarrow m_2) \end{split} \end{equation} where ${\rm PEP}(m_1\rightarrow m_2)$ denotes the pairwise error probability (PEP) related to the pair of transmitters $m_1$ and $m_2$, with $m_1, m_2 \in \{1, 2, \ldots, M\}$, and $N_b(m_1, m_2)$ is the number of bit errors occurring when the receiver erroneously decides that $m_2$, instead of $m_1$, has been active. The ${\rm PEP}(m_1\rightarrow m_2)$ can be evaluated as \cite[Eq. (8)]{J:OSM2011} \begin{equation}\label{Eq:ABEP2xNrMGF} {\rm PEP}(m_1\rightarrow m_2) = Q\left(\sqrt{\frac{\mu}{4}{\parallel \mathbf{h}_{m_1}- \mathbf{h}_{m_2}\parallel}_F^2 }\right). \end{equation} \subsection{MGF-Based Approach} When atmospheric turbulence is taken into account, the conditional error probabilities in \eqref{Eq:ABEP2xN} and \eqref{Eq:ABEPMxN} need to be averaged over the elements of the channel matrix $\mathbf{H}$ in order to evaluate the ABEP. Without loss of generality, let us consider the case of a $2\times N$ MIMO system. Since $h_{i,n}$ are complex Gaussian random variables, the difference $\Delta_n \triangleq h_{1,n}- h_{2,n}$ is a complex Gaussian random variable having mean equal to the difference of the means of $h_{i,n}$ and variance equal to the sum of the variances of $h_{i,n}$. In order to deduce a closed-form expression for the ABEP, it is further assumed that $h_{i,n}$ have uncorrelated real and imaginary components with the same variance $\sigma_n^2 = b_n/2$. It is noted that such an assumption is justified for link distances of the order of km and for aperture separation distances of the order of cm \cite{J:LeeChan, J:LetzepisHolland}.
For example, in \cite{J:LetzepisHolland} it was reported that, for a link distance of 1.5 km, a wavelength of 1550 nm and an aperture diameter of 1 mm, independent fading is observed for photodetectors separated by as little as 35 mm, which validates the independence assumption. Consequently, $\Delta_n$ has uncorrelated components too and its squared envelope, $|\Delta_n|^2$, is characterized by a non-central chi-square PDF as follows \begin{align}\label{Eq:PDF1} f_{|\Delta_n|^2 }(x|b_n) = \frac{1}{2b_n}\exp\left(-\frac{x+\tilde{A}_n^2}{2b_n}\right)I_0\left(\frac{\tilde{A}_n\sqrt{x}}{b_n}\right) \end{align} where $\tilde{A}_n = |A_{2,n}e^{\jmath\theta_{2,n}}-A_{1,n}e^{\jmath\theta_{1,n}}|$. Assuming that $b_n$ follows a gamma distribution with parameters $\alpha_n$ and $b_{0,n}$, the unconditional PDF of $|\Delta_n|^2$ is obtained by averaging \eqref{Eq:PDF1} with respect to $b_n$, i.e. \begin{equation}\label{Eq:PDF2} \begin{split} & f_{|\Delta_n|^2}(x) = \frac{\left({\alpha_n}/{b_{0,n}}\right)^{\alpha_n}}{2\Gamma(\alpha_n)} \\ & \times \int_0^{\infty} b_n^{\alpha_n-2}\exp\left(-\frac{\alpha_n b_n}{b_{0,n}}-\frac{x+\tilde{A}_n^2}{2b_n}\right)I_0\left(\frac{\tilde{A}_n\sqrt{x}}{b_n}\right)\mathrm{d}b_n. \end{split} \end{equation} As was pointed out in \cite{J:AndrewsIK2}, the integral in \eqref{Eq:PDF2} cannot be solved in closed form. Nevertheless, for the special case of $\alpha_n = 1$, i.e. when one scatterer per branch is considered, and by employing \cite[Eq. (10)]{J:AndrewsIK2}, this integral can be evaluated in closed form as \begin{equation}\label{Eq:CIK2} f_{|\Delta_n|^2}(x) = \begin{cases} \frac{1}{b_{0,n}}K_0\left(\tilde{A}_n\sqrt{{2}/{b_{0,n}}}\right)I_0\left(\sqrt{{2x}/{b_{0,n}}}\right),\,x < \tilde{A}_n^2 \\ \frac{1}{b_{0,n}}I_0\left(\tilde{A}_n\sqrt{{2}/{b_{0,n}}}\right)K_0\left(\sqrt{{2x}/{b_{0,n}}}\right),\,x > \tilde{A}_n^2. \end{cases} \end{equation} Moreover, for the special case where $h_{1,n}$ and $h_{2,n}$ have identical mean value, i.e.
when $\tilde{A}_n = 0$, \eqref{Eq:PDF2} yields the well-known K-distribution with PDF given by \begin{equation}\label{Eq:PDFKdistr} \begin{split} f_{|\Delta_n|^2}(x) & = \frac{2^{{(1-\alpha_n)}/{2}}}{\Gamma(\alpha_n)}\frac{\alpha_n}{b_{0,n}}\left(\frac{\alpha_n x}{b_{0,n}}\right)^{{(\alpha_n-1)}/{2}} \\ & \times K_{\alpha_n-1}\left(\sqrt{\frac{2\alpha_n x}{b_{0,n}}}\right). \end{split} \end{equation} By employing the MGF-based approach for the performance analysis of digital communications over fading channels, the average PEP (APEP) can be obtained as \begin{equation}\label{Eq:APEPexact} \mathrm{APEP} = \frac{1}{\pi}\int_{0}^{\pi/2}\prod_{n=1}^{N}\left[\mathcal{M}_{|\Delta_n|^2}\left(\frac{\mu}{8\sin^2\theta}\right)\right]\mathrm{d}\theta. \end{equation} Moreover, using the tight approximation of the Gaussian Q-function presented in \cite[Eq. (14)]{J:Chiani} (i.e., $Q(x) \approx {1}/{12}\exp(-x^2/2)+{1}/{4}\exp(-2x^2/3)$), an expression accurately approximating the APEP can be deduced as \begin{equation}\label{Eq:APEPapprox} \mathrm{APEP} \approx \frac{1}{12}\prod_{n=1}^{N}\left[\mathcal{M}_{|\Delta_n|^2}\left(\frac{\mu}{8}\right)\right]+\frac{1}{4} \prod_{n=1}^{N}\left[\mathcal{M}_{|\Delta_n|^2}\left(\frac{\mu}{6}\right)\right]. \end{equation} In the following analysis, analytical expressions for the MGF of $|\Delta_n|^2$ will be deduced. Specifically, the following result holds: \newtheorem{proposition}{Proposition} \begin{proposition} An integral representation for the MGF of $|\Delta_n|^2$ can be deduced as \begin{equation}\label{Eq:MGF} \begin{split} & \mathcal{M}_{|\Delta_n|^2}(s) = \frac{\left({\alpha_n}/{b_{0,n}}\right)^{\alpha_n}}{\Gamma(\alpha_n)} \\ & \times \int_0^{\infty}\frac{b^{\alpha_n-1}}{2bs+1}\exp\left(-\frac{\tilde{A}_n^2s}{2bs+1}-\frac{\alpha_n b}{b_{0,n}}\right)\mathrm{d}b.
\end{split} \end{equation} \end{proposition} \begin{IEEEproof} By employing the definition of the MGF, $\mathcal{M}_{|\Delta_n|^2}(s)$ can be obtained as \begin{equation} \begin{split} & \mathcal{M}_{|\Delta_n|^2}(s) = \int_0^{\infty}\exp(-sx)f_{|\Delta_n|^2}(x)\mathrm{d}x \\ & = \frac{\left({\alpha_n}/{b_{0,n}}\right)^{\alpha_n}}{2\Gamma(\alpha_n)}\int_0^{\infty}\int_0^{\infty} \exp\left(-sx-\frac{\alpha_n b}{b_{0,n}}-\frac{x+\tilde{A}_n^2}{2b}\right) \\ & \times I_0\left(\frac{\tilde{A}_n\sqrt{x}}{b}\right)b^{\alpha_n-2}\mathrm{d}b\mathrm{d}x. \end{split} \end{equation} By changing the order of integration, the above equation can be expressed as \begin{equation}\label{Eq:MGFinterm1} \begin{split} & \mathcal{M}_{|\Delta_n|^2}(s) = \frac{\left({\alpha_n}/{b_{0,n}}\right)^{\alpha_n}}{2\Gamma(\alpha_n)}\int_0^{\infty}b^{\alpha_n-2} \exp\left(-\frac{\alpha_n b}{b_{0,n}}\right)\\ &\left[\int_0^{\infty} \exp\left(-sx-\frac{x+\tilde{A}_n^2}{2b}\right)I_0\left(\frac{\tilde{A}_n\sqrt{x}}{b}\right)\mathrm{d}x\right] \mathrm{d}b. \end{split} \end{equation} The inner integral, i.e. the one with respect to $x$, can be evaluated by employing \cite[Eq. (3.15.2.2)]{B:Prudnikov4} as \begin{equation}\label{Eq:MGFinterm2} \begin{split} & \int_0^{\infty} \exp\left(-sx-\frac{x+\tilde{A}_n^2}{2b}\right)I_0\left(\frac{\tilde{A}_n\sqrt{x}}{b}\right)\mathrm{d}x = \\ & \frac{2b}{2sb+1}\exp\left(-\frac{\tilde{A}_n^2s}{2sb+1}\right). \end{split} \end{equation} Substituting \eqref{Eq:MGFinterm2} into \eqref{Eq:MGFinterm1} and after some straightforward manipulations, \eqref{Eq:MGF} is readily deduced, thus completing the mathematical proof.
\end{IEEEproof} The integral in \eqref{Eq:MGF} can be accurately approximated by employing a Gauss-Chebyshev Quadrature (GCQ) technique as \cite{B:Abramowitz} \begin{equation} \begin{split} & \mathcal{M}_{|\Delta_n|^2}(s) \approx \frac{\left({\alpha_n}/{b_{0,n}}\right)^{\alpha_n}}{\Gamma(\alpha_n)} \\ & \times \sum_{j=1}^{J}w_j\frac{{t_j}^{\alpha_n-1}}{2{t_j}s+1}\exp\left(-\frac{\tilde{A}_n^2s}{2{t_j}s+1}-\frac{\alpha_n {t_j}}{b_{0,n}}\right) \end{split} \end{equation} where $J$ is the number of integration points, $t_j$ are the abscissas and $w_j$ the corresponding weights. In \cite[eqs. (22) and (23)]{C:YilmazMGF}, $t_j$ and $w_j$ are defined as \begin{subequations}\label{Eq:GCQ} \begin{equation}\label{Eq:xk} t_j = \tan\left[\frac{\pi}{4}\cos\left(\frac{2j-1}{2J}\pi\right)+\frac{\pi}{4}\right] \end{equation} \begin{equation}\label{Eq:wk} w_j = \frac{\pi^2\sin\left(\frac{2j-1}{2J}\pi\right)}{4J\cos^2\left[\frac{\pi}{4}\cos\left(\frac{2j-1}{2J}\pi\right)+\frac{\pi}{4}\right]}. \end{equation} \end{subequations} For the special case of $\tilde{A}_n = 0$, it can be shown that \eqref{Eq:MGF} can be evaluated in closed form. Specifically, the following result holds: \newtheorem{corollary}{Corollary} \begin{corollary} For the special case of $\tilde{A}_n = 0$, the MGF of $|\Delta_n|^2$ can be deduced in closed form as \begin{equation}\label{Eq:MGFK} \begin{split} \mathcal{M}_{|\Delta_n|^2}(s) & = \left(\frac{\alpha_n}{2sb_{0,n}}\right)^{\frac{\alpha_n}{2}}\exp\left(\frac{\alpha_n}{4sb_{0,n}}\right)\\ & \times W_{-\frac{\alpha_n}{2}, \frac{\alpha_n-1}{2}}\left(\frac{\alpha_n}{2sb_{0,n}}\right). \end{split} \end{equation} \end{corollary} This result can be readily deduced by employing the integral representation of the Whittaker $W$-function given in \cite[Eq. (9.222)]{B:Gradshteyn_00}. Moreover, it is worth pointing out that \eqref{Eq:MGFK} is in agreement with a previously known result, namely the analytical expression for the MGF of the K-distribution \cite[Eq.
(4)]{J:Theofilakos_GSC}. \subsection{Analysis of the Diversity Gain} The diversity gain of the considered OSM MIMO system can be obtained by using the approach presented in \cite{J:WangGiannakis}. In particular, a generic analytical expression, which becomes asymptotically tight at high SNR values, will be derived for the ${\rm APEP}$ appearing in \eqref{Eq:APEPexact}, as follows: \begin{proposition} For high SNR values, \eqref{Eq:APEPexact} can be approximated by \begin{equation}\label{Eq:PEPasym} {\rm APEP} \overset{\mu \gg 1}{\approx} \frac{2^{N - 1}\Gamma\left(N + \frac{1}{2}\right)}{\sqrt{\pi}\Gamma\left(N + 1\right)} \left[\prod_{n=1}^{N} c_n\right] \left(\frac{\mu}{4}\right)^{-N} \end{equation} where \begin{equation}\label{Eq:codinggain} c_n = \left(\frac{\tilde{A}_n^2}{2}\right)^{\frac{\alpha_n-1}{2}} \frac{\left(\alpha_n/b_{0,n}\right)^{\frac{\alpha_n+1}{2}}}{\Gamma(\alpha_n)}K_{\alpha_n-1}\left(\tilde{A}_n\sqrt{\frac{2\alpha_n}{b_{0,n}}}\right). \end{equation} \end{proposition} \begin{IEEEproof} According to \cite[Proposition 3]{J:WangGiannakis}, the asymptotic error performance of the OSM system depends on the behavior of $\mathcal{M}_{|\Delta_n|^2}(s)$ as $s\rightarrow\infty$. To determine an analytical asymptotic expression for the APEP, a Taylor series expansion is employed to approximate $\mathcal{M}_{|\Delta_n|^2}(s)$ as \begin{equation}\label{Eq:MGFTaylor} |\mathcal{M}_{|\Delta_n|^2}(s)| = c_n|s|^{-d_n}+o(|s|^{-d_n}), \, s\rightarrow\infty \end{equation} where $d_n$ and $c_n$ are parameters that determine the diversity and coding gains of the $n$-th diversity branch, respectively.
Observe that since ${\tilde{A}_n^2s}/({2sb+1})\overset{s\rightarrow\infty}{\approx}{\tilde{A}_n^2}/({2b})$ and ${1}/({2sb+1})\overset{s\rightarrow\infty}{\approx}1/({2bs})$, \eqref{Eq:MGF} yields \begin{equation}\label{Eq:MGF2} \begin{split} \mathcal{M}_{|\Delta_n|^2}(s) & \approx \frac{\left({\alpha_n}/{b_{0,n}}\right)^{\alpha_n}}{2s\Gamma(\alpha_n)} \\ & \times \int_0^{\infty}b^{\alpha_n-2}\exp\left(-\frac{\tilde{A}_n^2}{2b}-\frac{\alpha_n b}{b_{0,n}}\right)\mathrm{d}b. \end{split} \end{equation} By employing \cite[Eq. (2.2.2.1)]{B:Prudnikov4}, \eqref{Eq:MGF2} can be solved in closed form yielding \begin{equation}\label{Eq:MGF3} \begin{split} \mathcal{M}_{|\Delta_n|^2}(s) & \approx \left(\frac{\tilde{A}_n^2}{2}\right)^{\frac{\alpha_n-1}{2}} \frac{\left(\alpha_n/b_{0,n}\right)^{\frac{\alpha_n+1}{2}}}{s\Gamma(\alpha_n)} \\ & \times K_{\alpha_n-1}\left(\tilde{A}_n\sqrt{\frac{2\alpha_n}{b_{0,n}}}\right). \end{split} \end{equation} By comparing \eqref{Eq:MGF3} and \eqref{Eq:MGFTaylor}, it is readily deduced that $d_n = 1$ and that $c_n$ is given by \eqref{Eq:codinggain}. Thus, by substituting \eqref{Eq:MGFTaylor} into \eqref{Eq:APEPexact}, the asymptotic APEP expression can be obtained as in \eqref{Eq:PEPasym}, which concludes the proof. \end{IEEEproof} From \eqref{Eq:PEPasym} it is clear that the diversity gain achieved by the considered system is equal to $N$. It is also evident that the diversity gain depends only on the number of receive apertures and is independent of the fading severity. This finding is in agreement with relevant findings reported in \cite{J:RenzoRice} and \cite{J:RenzoAsym} for the case of radio-frequency MIMO wireless systems. It is noted that for the special case $\tilde{A}_n = 0$, i.e.
when $|\Delta_n|^2$ follows the K-distribution, by employing the asymptotic result $K_t(x) \overset{x\rightarrow 0}{\approx} ({\Gamma(t)}/{2})\left({2}/{x}\right)^t$ \cite{B:Abramowitz}, valid for $t > 0$, $c_n$ can be further simplified (for $\alpha_n > 1$) as \begin{equation} c_n = \frac{\alpha_n}{2b_{0,n}(\alpha_n-1)}. \end{equation} \section{Performance Analysis of Coded OSM over Turbulence Channels}\label{Sec:Coded} When coded OSM is employed, the input signal $\mathbf{s}(t)$ is first encoded by a convolutional encoder. The encoded data are interleaved by a random block interleaver and transmitted through the optical wireless channels using spatial modulation. It is also assumed that perfect interleaving at the transmitter and de-interleaving at the receiver are used. Assuming maximum-likelihood soft-decision decoding, the log-likelihood ratios (LLRs) for the $i$-th constellation bit when the $\ell$-th transmit unit is active are computed as \cite[Eq. (6)]{J:OSM2011} \begin{equation} \begin{split} \mathrm{LLR} & = \log\frac{\mathrm{Pr}\{\ell^i= 1| \mathbf{y}\}}{\mathrm{Pr}\{\ell^i= 0| \mathbf{y}\}} \\ & = \log\frac{\sum_{\hat{\ell} \in \mathcal{L}_1^i}\exp\left(-{\parallel \mathbf{y}-\mathbf{h}_{\hat{\ell}}s_\ell \parallel^2}/{N_0}\right)}{\sum_{\hat{\ell} \in \mathcal{L}_0^i}\exp\left(-{\parallel \mathbf{y}-\mathbf{h}_{\hat{\ell}}s_\ell \parallel^2}/{N_0}\right)} \end{split} \end{equation} where $\mathcal{L} = \{1, \ldots, M\}$ is the set of spatial constellation points, and $\mathcal{L}_1^i$ and $\mathcal{L}_0^i$ are the subsets of $\mathcal{L}$ containing the transmitter indices having ``1'' and ``0'' at the $i$-th bit, respectively. The resulting data are finally decoded by a Viterbi decoder.
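A minimal, noise-free sketch of the spatial-bit LLR computation above, assuming $M = 4$ transmit units, a toy real-valued channel matrix, a constant field symbol and a natural binary labelling of the transmitter indices; all of these choices are illustrative assumptions, not part of the system model.

```python
import numpy as np

# Noise-free sketch of the spatial-bit LLR computation.  The channel matrix,
# N0 and the natural binary labelling of the indices are illustrative.
N0 = 0.5
H = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0]])  # columns = M = 4 transmit signatures, N = 2 apertures
s = 1.0                                 # transmitted field symbol (constant here)

ell = 2                                 # active transmitter index (bits "10")
y = H[:, ell] * s                       # received vector, noise omitted

def spatial_llr(y, H, s, N0, i):
    """LLR of the i-th spatial bit (i = 0 is the MSB of the index)."""
    nbits = int(np.log2(H.shape[1]))
    metric = np.exp(-np.sum(np.abs(y[:, None] - H * s) ** 2, axis=0) / N0)
    mask = np.array([(l >> (nbits - 1 - i)) & 1 for l in range(H.shape[1])], bool)
    return np.log(metric[mask].sum() / metric[~mask].sum())

llrs = [spatial_llr(y, H, s, N0, i) for i in range(2)]
print(llrs)   # positive for a transmitted "1", negative for a "0"
```

With the active index $\ell = 2$ (bits ``10''), the first LLR comes out positive and the second negative, as expected.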
A union bound on the ABEP of a coded communication system can be evaluated as \cite{B:Alouini} \begin{equation}\label{Eq:coded1} \bar{P}_{\mathrm{ub}} \leq \frac{1}{n}\sum_{\mathbf{X}}P(\mathbf{X})\sum_{\mathbf{X}\neq \mathbf{X'}}q(\mathbf{X},\mathbf{X'})\mathrm{PEP}(\mathbf{X},\mathbf{X'}) \end{equation} where $P(\mathbf{X})$ is the probability that the coded sequence $\mathbf{X}$ is transmitted, $q(\mathbf{X},\mathbf{X'})$ is the number of information bit errors in choosing another coded sequence $\mathbf{X'}$ instead of $\mathbf{X}$, $n$ is the number of information bits per transmission and $\mathrm{PEP}(\mathbf{X},\mathbf{X'})$ is the pairwise error probability, i.e. the probability of selecting $\mathbf{X'}$ when $\mathbf{X}$ was actually transmitted. By employing \cite[p. 510]{B:Alouini}, \eqref{Eq:coded1} can be efficiently evaluated as \begin{equation}\label{Eq:bound1} \bar{P}_{\mathrm{ub}} \leq \frac{1}{n}\sum_{\mathbf{X}}P(\mathbf{X})\frac{1}{\pi}\int_0^{\pi/2}\left[\left.\frac{\partial }{\partial N}T[D(\theta), N]\right|_{N=1}\right]\mathrm{d}\theta \end{equation} where $T[D(\theta), N]$ is the transfer function of the employed convolutional code, $N$ is an indicator variable taking into account the number of erroneous bits and $D(\theta)$ depends on the underlying PEP expression. Furthermore, assuming that uniform error probability (UEP) codes are considered and taking into account the symmetry property this code family exhibits, which makes the distance structure of a UEP code independent of the transmitted sequence, \eqref{Eq:bound1} can be further simplified as \cite{B:Alouini} \begin{equation}\label{Eq:bound2} \bar{P}_{\mathrm{ub}} \leq \frac{1}{\pi}\int_0^{\pi/2}\left[\frac{1}{n}\left.\frac{\partial }{\partial N}T[D(\theta), N]\right|_{N=1}\right]\mathrm{d}\theta. \end{equation} For $M=2$, using \eqref{Eq:ABEP2xN}, \eqref{Eq:Frob} and Craig's formula for the Gaussian Q-function, i.e.
$Q(x) = \frac{1}{\pi}\int_0^{\pi/2}\exp\left(-\frac{x^2}{2\sin^2\theta}\right)\mathrm{d}\theta$, $D(\theta)$ can be expressed as \begin{equation} D(\theta) = \prod_{n=1}^N\mathcal{M}_{|\Delta_n|^2}\left(\frac{\mu}{8\sin^2\theta}\right) \end{equation} where $\mathcal{M}_{|\Delta_n|^2}$ can be obtained from \eqref{Eq:MGF}. When $M>2$, by employing \cite[Eq. (13)]{J:OSM2011} and using a similar line of arguments as in the case of $M = 2$, $D(\theta)$ can be written as \begin{equation} D(\theta) = \prod_{m_1=1}^{M}\prod_{m_2 \neq m_1 =1}^{M}\mathcal{M}_{|\Delta_{m_1, m_2}|^2}\left(\frac{\mu}{8\sin^2\theta}\right) \end{equation} where $|\Delta_{m_1, m_2}|^2 = \parallel \mathbf{h}_{m_1} -\mathbf{h}_{m_2} \parallel^2$. The last MGF can be easily computed analytically with the help of \eqref{Eq:MGF}. \section{Performance Evaluation Results and Discussion}\label{Sec:Results} \begin{figure}[!t] \centering \includegraphics[keepaspectratio,width=3.0in]{ABEP1} \caption{ABEP of uncoded OSM for $2\times N$ MIMO H-K turbulent channels as a function of the average SNR, $\mu$, for various numbers of receiving apertures, $N$. Simulation Parameters: $A_{1,n} = 2$, $A_{2,n} = 1$, $\theta_{1,n} = \pi/3$, $\theta_{2,n} = \pi/4$, $\alpha_{n} = 2$, $b_{0,n} = 2$. } \label{Fig:ABEP1} \end{figure} \begin{figure}[!t] \centering \includegraphics[keepaspectratio,width=3.0in]{ABEP2} \caption{ABEP of uncoded OSM for $2\times2$ and $2\times4$ MIMO H-K turbulent channels as a function of the average SNR, $\mu$, for various values of the link distance, $L$. Simulation Parameters: $\lambda = 1550{\rm nm}$, $C_n^2 = 1.7\times 10^{-14} {\rm m}^{-2/3}, \theta_{1,n} = \pi/3, \theta_{2,n} = \pi/4$. } \label{Fig:ABEP2} \end{figure} \begin{figure}[!t] \centering \includegraphics[keepaspectratio,width=3.0in]{OSMvsMRC} \caption{ABEP comparison of $2\times2$ OSM with $1\times2$ coherent MRC systems employing DPSK, as a function of the average SNR, $\mu$, for various values of $A_{1,n}$.
Simulation Parameters: $A_{2,n} = 0$, $\theta_{1,n} = 0$, $\theta_{2,n} = 0$, $\alpha_{n} = 1.5$, $b_{0,n} = 1.5$. } \label{Fig:MRCComparison} \end{figure} \begin{figure}[!t] \centering \includegraphics[keepaspectratio,width=3.0in]{ABEPcoded} \caption{ABEP upper bounds of convolutionally coded OSM for $2\times2$ and $2\times1$ H-K turbulent channels as a function of the average SNR, $\mu$, for various values of the link distance, $L$. Simulation Parameters: $\lambda = 1550{\rm nm}$, $C_n^2 = 1.7\times 10^{-14} {\rm m}^{-2/3}$, $\theta_{1,n} = \pi/3$, $\theta_{2,n} = \pi/4$. } \label{Fig:Coded} \end{figure} In this section, performance evaluation results obtained by numerically evaluating the mathematical expressions presented in Sections~\ref{Sec:ABEP_Analysis} and \ref{Sec:Coded}, for uncoded and coded OSM systems operating over H-K turbulent channels, are presented. In particular, for uncoded OSM systems the following performance evaluation results have been obtained: \textit{i}) ABEP vs. SNR for $2\times N$ OSM systems (obtained using \eqref{Eq:APEPapprox} with \eqref{Eq:MGF}, and \eqref{Eq:PEPasym}; see Figs.~\ref{Fig:ABEP1}, \ref{Fig:ABEP2} and \ref{Fig:MRCComparison}); \textit{ii}) ABEP vs. SNR for $2\times N$ MIMO OSM systems as a function of the link distance (obtained using \eqref{Eq:APEPapprox} with \eqref{Eq:MGF}). For the uncoded schemes, in order to validate the accuracy of the previously mentioned expressions, comparisons with complementary Monte Carlo simulated performance results are also included in these figures. As far as the performance of coded OSM systems is concerned, ABEP upper bounds vs. SNR have been obtained using \eqref{Eq:bound2} with \eqref{Eq:MGF} (see Fig.~\ref{Fig:Coded}). Fig.~\ref{Fig:ABEP1} presents the ABEP performance as a function of the average SNR, $\mu$, of $2\times N$ MIMO OSM systems with $N \in \{1,2,3,4\}$.
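As an illustration of how such ABEP curves can be generated, the sketch below chains the steps used in this section: the Rytov variance fixes the H-K parameters via \eqref{Eq:param-a} and \eqref{Eq:param-rho}, the per-branch MGF is evaluated with the GCQ rule of \eqref{Eq:GCQ}, and the ABEP follows from \eqref{Eq:APEPapprox}. The link geometry and the branch parameters $b_{0,n}$ and $\tilde{A}_n$ are assumed values for illustration only.

```python
import math

# Sketch of the numerical evaluation behind the ABEP curves.  The link
# geometry (lam, L, Cn2) and the branch parameters (b0, A_tilde) are
# illustrative assumptions only.
lam, L, Cn2 = 1550e-9, 5000.0, 1.7e-14
k = 2 * math.pi / lam
sigma1_sq = 1.23 * Cn2 * k ** (7 / 6) * L ** (11 / 6)   # Rytov variance
alpha = 0.71 * sigma1_sq ** (2 / 5)                     # 0.71*sigma1^(4/5)
rho = 4.88 / (sigma1_sq * (1 + 0.2 * sigma1_sq))        # coherence parameter
b0, A_tilde = 1.0, 1.0

def mgf(s, alpha, b0, A_tilde, J=200):
    """GCQ evaluation of the MGF of |Delta_n|^2."""
    total = 0.0
    for j in range(1, J + 1):
        c = math.cos((2 * j - 1) * math.pi / (2 * J))
        t = math.tan(math.pi / 4 * c + math.pi / 4)
        w = (math.pi ** 2 * math.sin((2 * j - 1) * math.pi / (2 * J))
             / (4 * J * math.cos(math.pi / 4 * c + math.pi / 4) ** 2))
        total += (w * t ** (alpha - 1) / (2 * t * s + 1)
                  * math.exp(-A_tilde ** 2 * s / (2 * t * s + 1)
                             - alpha * t / b0))
    return (alpha / b0) ** alpha / math.gamma(alpha) * total

def abep(mu_dB, N):
    """Two-term Q-function approximation of the APEP for i.i.d. branches."""
    mu = 10 ** (mu_dB / 10)
    return (mgf(mu / 8, alpha, b0, A_tilde) ** N / 12
            + mgf(mu / 6, alpha, b0, A_tilde) ** N / 4)

print(alpha, rho)                 # H-K parameters implied by the geometry
print(abep(20, 2), abep(40, 2))   # ABEP decreases with increasing SNR
```

A quick sanity check is that the quadrature returns a value close to one at $s = 0$, since any MGF satisfies $\mathcal{M}(0) = 1$.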
Independent and identically distributed branches are considered with $A_{1,n} = 2$, $A_{2,n} = 1$, $\theta_{1,n} = \pi/3$, $\theta_{2,n} = \pi/4$, $\alpha_{n} = 2$, $b_{0,n} = 2$. The obtained results clearly indicate that the ABEP curves, obtained using \eqref{Eq:APEPapprox}, are in close agreement with those obtained via simulations, verifying the correctness of the proposed analysis. Moreover, it is evident that the asymptotic ABEP curves correctly predict the diversity gain of the considered system for all tested cases. In Fig.~\ref{Fig:ABEP2}, the dependence of the ABEP of a $2\times N$ MIMO OSM system on the link distance $L$ is illustrated. The considered system is equipped with either $N = 2$ or $N = 4$ receiving apertures and identically distributed branches are assumed. The parameters of the H-K distribution are calculated from \eqref{Eq:param-a} and \eqref{Eq:param-rho} assuming spherical wave propagation. Following \cite{J:Uysal}, it is further assumed that the operating wavelength is $\lambda = 1550$ nm and $C_{n}^2 = 1.7 \times 10^{-14} \mathrm{m}^{-2/3}$. As expected, the error performance deteriorates as $L$ increases from $L=500$m to $L=1500$m. Moreover, it is evident that an increase in $L$ from 500m to 1000m results in a more severe performance deterioration than in the case where $L$ increases from 1000m to 1500m. In all cases considered, the analytical results obtained using \eqref{Eq:APEPapprox} are compared with the equivalent results obtained by means of Monte-Carlo computer simulations and again match very well. Next we compare the proposed OSM system with two alternative coherent FSO systems that can provide performance enhancements by means of transmit (MISO) or receive (SIMO) diversity. It is noted that for similar aperture configurations, a fair comparison between coherent and IM/DD systems seems difficult as the same received laser power leads to different SNRs for each of these schemes \cite{J:Bayaki2}.
On the other hand, in order to perform a fair comparison between OSM and the alternative MISO or SIMO systems under the same propagation channel conditions, the aperture configuration of the FSO systems under comparison should be selected carefully. Specifically, since the diversity gain of OSM equals the number of receive apertures only, i.e., no transmit diversity gain is provided, the number of transmit or receive apertures of the alternative systems must be selected to be equal to the number of receive apertures of the OSM system. To this end, for a fair comparison, a $2\times2$ OSM system is compared with the following two alternative FSO communication systems which also employ coherent detection: i) A $1\times2$ heterodyne FSO communication system which employs Differential Phase Shift Keying (DPSK) \cite{J:Kiasaleh} and MRC or SC. ii) A $2\times1$ coherent FSO system employing the Alamouti scheme \cite{J:Niu3} and Binary Phase Shift Keying (BPSK). The instantaneous SNR at the output of the coherent MRC receiver, assuming equal average SNR $\mu$ per receiving aperture, can be expressed as \cite{B:Alouini} \begin{equation}\label{Eq:MRC} \gamma_{\rm{MRC}} = \mu\sum_{n=1}^NI_n \end{equation} whereas for SC it is \begin{equation} \gamma_{\rm{SC}} = \max\{ \mu I_1, \mu I_2\}. \end{equation} For the MRC case, the ABEP can be deduced as \cite{B:Alouini} \begin{equation} P_E =\frac{1}{2}\prod_{n=1}^N\mathcal{M}_{I_n}(\mu). \end{equation} For the SC case, an analytical expression for the ABEP is more difficult to deduce and, therefore, the ABEP will be evaluated by means of Monte Carlo simulation only. As far as the Alamouti scheme is concerned, the instantaneous SNR at the input of the demodulator of the optical receiver has a similar form as \eqref{Eq:MRC} \cite{J:Niu3}.
For this scheme, the ABEP of BPSK can be evaluated as \begin{equation} P_E =\frac{1}{\pi}\int_0^{\pi/2}\prod_{n=1}^N\mathcal{M}_{I_n}\left(\frac{\mu}{\sin^2\theta}\right)\mathrm{d}\theta. \end{equation} In order to simplify the underlying mathematical analysis, it is assumed that the PDF of ${I_n}$ is given by \eqref{Eq:PDFIK} with the parameters $A_n$ all being zero, i.e., the PDF reduces to the K-distribution. Thus, $\mathcal{M}_{I_n}(\mu)$ can be readily obtained in closed form from \eqref{Eq:MGFK} by replacing $b_{0,n}$ with $b_{0,n}/2$. In Fig.~\ref{Fig:MRCComparison}, the ABEP of $2\times 2$ MIMO OSM links is compared with the ABEP of $1 \times 2$ coherent FSO systems with DPSK. Independent and identically distributed links are considered. In order to compare these systems under the same propagation conditions, it is assumed that $\alpha_n = 1.5$, $b_{0,n} = 1.5$, $A_{2,n} = 0$ and $A_{1,n} \in \{0,1,2,3\}$. As can be observed, when either MRC or SC is employed, coherent DPSK performs worse than OSM for values of $A_{1,n}$ greater than approximately 1, but outperforms OSM at lower values of $A_{1,n}$. Moreover, although OSM outperforms the Alamouti scheme for $A_{1,n} = 2$ and 3, it performs similarly at high SNR values when $A_{1,n} = 1$. It is noted that for $A_{1,n} = 1$ and lower values of $A_{1,n}$ the Alamouti scheme yields the best performance among the considered schemes. When more transmit apertures are employed, however, this advantage is offset by the superior spectral efficiency of OSM and its lower hardware complexity as compared to coherent MRC. Specifically, as pointed out in \cite{J:OSM2011}, OSM offers increased spectral efficiency by a factor $\log_2(M)$. Moreover, as only one transmitting aperture is activated at any symbol duration, OSM has a lower decoding complexity as compared to conventional MRC and Alamouti schemes.
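The ABEP expressions above in MGF form are single integrals over $[0,\pi/2]$ and can be evaluated with any standard quadrature routine. The following minimal sketch is illustrative only and not part of the original analysis: the function `mgf` is a placeholder for the per-branch MGF of \eqref{Eq:MGFK}, here instantiated with the no-fading case $I_n \equiv 1$, for which the integral collapses to the Gaussian $Q$-function via Craig's formula, giving a convenient sanity check.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc


def abep_bpsk_mgf(mgf, mu, N):
    """Numerically evaluate the MGF-form ABEP
    P_E = (1/pi) * int_0^{pi/2} prod_n M_{I_n}(mu / sin^2(theta)) dtheta
    for N i.i.d. branches; `mgf` is a placeholder per-branch MGF
    M_I(s) = E[exp(-s * I)]."""
    integrand = lambda theta: mgf(mu / np.sin(theta) ** 2) ** N
    value, _ = quad(integrand, 0.0, np.pi / 2)
    return value / np.pi


def qfunc(x):
    """Gaussian Q-function."""
    return 0.5 * erfc(x / np.sqrt(2.0))


# Sanity check: with no fading (I_n = 1 deterministically) the MGF is
# M(s) = exp(-s), so the product equals exp(-N*mu/sin^2(theta)) and
# Craig's formula gives P_E = Q(sqrt(2*N*mu)).
mu, N = 3.0, 2
pe = abep_bpsk_mgf(lambda s: np.exp(-s), mu, N)
print(pe, qfunc(np.sqrt(2.0 * N * mu)))
```

Replacing the placeholder MGF by the closed-form K-distribution MGF of \eqref{Eq:MGFK} reproduces the curves of Fig.~\ref{Fig:MRCComparison}.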
In Fig.~\ref{Fig:Coded}, upper bounds on the ABEP of convolutional coded $2\times 2$ and $2\times 1$ OSM systems are depicted, assuming propagation conditions similar to those considered in Fig.~\ref{Fig:ABEP2}. Considering a convolutional code with rate $1/3$ and constraint length 3, its transfer function is given as \cite[Eq. (8.2.6)]{B:Proakis} \begin{equation}\label{Eq:Tfunction} T[D(\theta), N] = \frac{D(\theta)^6 N}{1-2ND(\theta)^2}. \end{equation} Substituting \eqref{Eq:Tfunction} into \eqref{Eq:bound2}, a union bound on the ABEP can be obtained as \begin{equation}\label{Eq:bound3} \bar{P}_{\mathrm{ub}} \leq \frac{1}{\pi\log_2(M)}\int_0^{\pi/2}\frac{D(\theta)^6}{(1-2D(\theta)^2)^2}\mathrm{d}\theta. \end{equation} The performance results of Fig.~\ref{Fig:Coded} clearly show that, as expected, the incorporation of convolutional coding significantly enhances the performance of OSM systems, even when a small number of receive apertures is employed ($N = 1$). \section{Conclusion}\label{Sec:Conclusions} In this paper, the use of the spatial modulation technique for coherent FSO communication systems has been proposed. We have provided a comprehensive analytical framework for error performance analysis in the presence of atmospheric turbulence, using scattering channel models which include the H-K distribution. The proposed framework reveals important information about the performance of OSM over such turbulent channels, including the effect of fading severity and the achievable diversity gain. It also provides valuable insight into the impact of the channel parameters on the performance of OSM. Upper bounds for the ABEP performance of coded OSM systems have also been derived, demonstrating that coding techniques can greatly enhance the performance of OSM. Extensive computer simulation results have also been obtained, verifying the accuracy of the analytical approach.
Important trends in the performance of OSM for a variety of atmospheric turbulence scenarios and MIMO setups have also been identified. For example, it was shown that OSM can provide significant performance enhancements in the presence of atmospheric turbulence. The improvements are comparable to the ones offered by conventional coherent systems with spatial diversity, while outperforming the latter in terms of spectral efficiency and hardware complexity. Besides, under specific propagation conditions, OSM can yield better performance than conventional SIMO systems employing MRC or SC. We believe that the proposed framework is a useful tool for understanding the performance trends, important properties and tradeoffs of outdoor OSM operating in the presence of atmospheric turbulence. \bibliographystyle{IEEEtran}
\section*{Acknowledgements} The authors thank Gabriel Stoltz and Tony Leli\`evre for useful suggestions. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 766972 and from the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 863473). S.~W.~C.\ acknowledges the support of the Faraday Institution through the CATMAT project (grant number FIRG016) and the Balena High Performance Computing Service at the University of Bath. \section*{Data availability} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section*{Introduction} \label{sec:introduction} The \emph{Farrell-Jones Conjecture for algebraic $L$-theory} predicts for a group $G$ and a ring $R$ with involution $r \mapsto \overline{r}$ that the so-called \emph{assembly map} \begin{eqnarray} & \asmb^{G,R}_n \colon H_n^G\bigl(\EGF{G}{{\mathcal{VC}\text{yc}}};{\mathbf L}_R^{\langle - \infty\rangle}\bigr) \to L_n^{\langle - \infty\rangle}(RG) & \label{ass_for_RG} \end{eqnarray} is bijective for all $n \in \mathbb{Z}$. Here the target is the \emph{$L$-theory} of the group ring $RG$ with the standard involution sending $\sum_{g \in G} r_g \cdot g$ to $\sum_{g \in G} \overline{r_g} \cdot g^{-1}$. This is the group one wants to understand. It is a crucial ingredient in the surgery program for the classification of closed manifolds. The source is a much easier to handle term, namely, a $G$-homology theory applied to the \emph{classifying space $\EGF{G}{{\mathcal{VC}\text{yc}}}$ of the family ${\mathcal{VC}\text{yc}}$ of virtually cyclic subgroups of $G$}. There is also a $K$-theory version of the Farrell-Jones Conjecture. The original source for the (Fibered) Farrell-Jones Conjecture is the paper by Farrell-Jones~\cite[1.6 on page~257 and~1.7 on page~262]{Farrell-Jones(1993a)}. More information can be found for instance in the survey article~\cite{Lueck-Reich(2005)}. In this paper we study the Farrell-Jones Conjecture with coefficients in an additive $G$-category with involution. We show that this more general formulation of the conjecture allows one to consider instead of the group ring $RG$ the crossed product ring with involution $R\ast _{c,\tau,w} G$ (see Section~\ref{sec:Crossed_product_rings_and_involutions}), which is a generalization of the twisted group ring, and to use more general involutions, for instance the one given by twisting the standard involution with a group homomorphism $w_1 \colon G \to \{\pm 1\}$. The data describing $R\ast _{c,\tau} G$ and more general involutions are pretty complicated.
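To illustrate the simplest instance of such a twisted involution (stated here only as an illustration; the general crossed product data are introduced in Section~\ref{sec:Crossed_product_rings_and_involutions}): on the group ring $RG$ itself the involution twisted by $w_1$ is given by
\begin{eqnarray*}
\sum_{g \in G} r_g \cdot g & \mapsto & \sum_{g \in G} w_1(g) \cdot \overline{r_g} \cdot g^{-1},
\end{eqnarray*}
which recovers the standard involution when $w_1$ is the trivial homomorphism.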
It turns out that it is convenient to put these into a more general but easier to handle context, where the coefficients are given by an additive $G$-category $\mathcal{A}$ with involution (see Definition~\ref{def:additive_G-category_with_involution}). \begin{definition}[$L$-theoretic Farrell-Jones Conjecture] \label{def:L-theoretic_Farrell-Jones_Conjecture} A group $G$ together with an additive $G$-category with involution $\mathcal{A}$ satisfies the \emph{$L$-theoretic Farrell-Jones Conjecture with coefficients in $\mathcal{A}$} if the assembly map \begin{eqnarray} & \asmb^{G,\mathcal{A}}_n \colon H_n^G\bigl(\EGF{G}{{\mathcal{VC}\text{yc}}};{\mathbf L}_{\mathcal{A}}^{\langle - \infty\rangle}\bigr) \to H_n^G\bigl(\pt;{\mathbf L}_{\mathcal{A}}^{\langle - \infty\rangle}\bigr) = L_n^{\langle - \infty\rangle}\left(\intgf{G}{\mathcal{A}}\right). & \label{ass_for_cala} \end{eqnarray} induced by the projection $\EGF{G}{{\mathcal{VC}\text{yc}}} \to \pt$ is bijective for all $n \in \mathbb{Z}$. A group $G$ satisfies the \emph{$L$-theoretic Farrell-Jones Conjecture} if for any additive $G$-category with involution $\mathcal{A}$ the \emph{$L$-theoretic Farrell-Jones Conjecture with coefficients in $\mathcal{A}$} is true. \end{definition} Here $\intgf{G}{\mathcal{A}}$ is a certain homotopy colimit which yields an additive category with involution and we use the $L$-theory associated to an additive category with involution due to Ranicki (see~\cite{Ranicki(1988)}, \cite{Ranicki(1992)} and~\cite{Ranicki(1992a)}). The $G$-homology theory $H_n^G\bigl(-;{\mathbf L}_{\mathcal{A}}^{\langle - \infty\rangle}\bigr)$ is briefly recalled in Section~\ref{sec:G-homology_theories}.
If $R$ is a ring with involution, $\mathcal{A}$ is the additive category with involution given by finitely generated free $R$-modules and we equip $\mathcal{A}$ with the trivial $G$-action, then the assembly map~\eqref{ass_for_cala} agrees with the one for $RG$ in~\eqref{ass_for_RG} (see Theorem~\ref{the:FJC_for_crossed_products} below). This general setup is also a very useful framework when one is dealing with categories appearing in controlled topology, which is an important tool for proving the Farrell-Jones Conjecture for certain groups. Next we state the main results of this paper. \begin{theorem} \label{the:FJC_for_crossed_products} Suppose that $G$ satisfies the $L$-theoretic Farrell-Jones Conjecture in the sense of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture}. Let $R$ be a ring with the data $(c,\tau,w)$ and $R \ast_{c,\tau,w} G$ be the associated crossed product ring with involution as explained in Section~\ref{sec:Crossed_product_rings_and_involutions}. Then the assembly map \begin{eqnarray} & \asmb^{G,R_{c,\tau,w}}_n \colon H_n^G\bigl(\EGF{G}{{\mathcal{VC}\text{yc}}};{\mathbf L}_{R,c,\tau,w}^{\langle - \infty\rangle}\bigr) \to L_n^{\langle - \infty\rangle}(R \ast_{c,\tau,w} G) & \end{eqnarray} is bijective. Here ${\mathbf L}_{R,c,\tau,w}^{\langle - \infty\rangle}$ is a functor from the orbit category $\matheurm{Or}(G)$ to the category of spectra such that $\pi_n\bigl({\mathbf L}_{R,c,\tau,w}^{\langle - \infty\rangle}(G/H)\bigr)$ for $H \subseteq G$ agrees with $L_n^{\langle -\infty \rangle}(R \ast_{c|_H,\tau|_H,w|_H} H)$. \end{theorem} Another important feature is that in this setting the (unfibered) Farrell-Jones Conjecture does already imply the fibered version.
\begin{definition}[Fibered $L$-theoretic Farrell-Jones Conjecture] \label{def:fibered_L-theoretic_Farrell-Jones_Conjecture} A group $G$ satisfies the \emph{fibered $L$-theoretic Farrell-Jones Conjecture} if for any group homomorphism $\phi \colon K \to G$ and additive $G$-category with involution $\mathcal{A}$ the assembly map \begin{eqnarray*} & \asmb^{\phi,\mathcal{A}}_n \colon H_n^K\bigl(\EGF{K}{\phi^*{\mathcal{VC}\text{yc}}};{\mathbf L}_{\phi^*\mathcal{A}}^{\langle - \infty\rangle}\bigr) \to L_n^{\langle - \infty\rangle}\left(\intgf{K}{\phi^*\mathcal{A}}\right). & \end{eqnarray*} is bijective for all $n \in \mathbb{Z}$, where the family $\phi^*{\mathcal{VC}\text{yc}}$ of subgroups of $K$ consists of subgroups $L \subseteq K$ with $\phi(L)$ virtually cyclic and $\phi^* \mathcal{A}$ is the additive $K$-category with involution obtained from $\mathcal{A}$ by restriction with $\phi$. \end{definition} Obviously the fibered version for the group $G$ of Definition~\ref{def:fibered_L-theoretic_Farrell-Jones_Conjecture} implies the version for the group $G$ of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture}; simply take $\phi = \id$ in Definition~\ref{def:fibered_L-theoretic_Farrell-Jones_Conjecture}. The converse is also true. \begin{theorem} \label{the:fibered_versus_unfibered} Let $G$ be a group. Then $G$ satisfies the fibered $L$-theoretic Farrell-Jones Conjecture if and only if $G$ satisfies the $L$-theoretic Farrell-Jones Conjecture of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture}. \end{theorem} A general statement of a \emph{Fibered Isomorphism Conjecture} and the discussion of its inheritance properties under subgroups and colimits of groups can be found in~\cite[Section~4]{Bartels-Echterhoff-Lueck(2007colim)} (see also~\cite[Appendix]{Farrell-Jones(1993a)}, \cite[Theorem~7.1]{Farrell-Linnell(2003a)}). These very useful inheritance properties do not hold for the unfibered version of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture}.
The next three corollaries are immediate consequences of Theorem~\ref{the:fibered_versus_unfibered} and~\cite[Theorem~3.3, Lemma~4.4, Lemma~4.5 and Lemma~4.6]{Bartels-Echterhoff-Lueck(2007colim)}. \begin{corollary}\label{cor:directed_colimits} Let $\{G_i \mid i \in I\}$ be a directed system of groups (with not necessarily injective structure maps) and let $G$ be its colimit $\colim_{i \in I} G_i$. Suppose that $G_i$ satisfies the Farrell-Jones Conjecture of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture} for every $i \in I$. Then $G$ satisfies the Farrell-Jones Conjecture of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture}. \end{corollary} \begin{corollary} \label{cor:extensions} Let $1 \to K \to G \xrightarrow{p} Q \to 1$ be an extension of groups. Suppose that the group $Q$ and for any virtually cyclic subgroup $V \subseteq Q$ the group $p^{-1}(V)$ satisfy the Farrell-Jones Conjecture of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture}. Then the group $G$ satisfies the Farrell-Jones Conjecture of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture}. \end{corollary} \begin{corollary}\label{cor:subgroups} If $G$ satisfies the Farrell-Jones Conjecture of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture}, then any subgroup $H \subseteq G$ satisfies the Farrell-Jones Conjecture of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture}. \end{corollary} Corollary~\ref{cor:extensions} and Corollary~\ref{cor:subgroups} have also been proved in~\cite{Hambleton-Pedersen-Rosenthal(2007)}. \begin{remark} \label{rem:products} Suppose that the Farrell-Jones Conjecture of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture} has been proved for the product of two virtually cyclic groups.
Then Corollary~\ref{cor:extensions} and Corollary~\ref{cor:subgroups} imply that $G \times H$ satisfies the Farrell-Jones Conjecture of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture} if and only if both $G$ and $H$ satisfy the Farrell-Jones Conjecture of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture}. \end{remark} It is sometimes useful to have strict structures on $\mathcal{A}$, e.g., the involution is desired to be strict and there should be a (strictly associative) functorial direct sum. The functorial direct sum is actually needed in some proofs in order to guarantee good functoriality properties of certain categories arising from controlled topology. We will show \begin{theorem}\label{the:strict_coefficients} The group $G$ satisfies the $L$-theoretic Farrell-Jones Conjecture of Definition~\ref{def:L-theoretic_Farrell-Jones_Conjecture} if it satisfies the obvious version of it, where one only considers additive $G$-categories with (strictly associative) functorial direct sum and strict involution (see Definition~\ref{def:additive_G-category_with_oplus_and_inv}). \end{theorem} The Farrell-Jones Conjecture with coefficients (in $K$- and $L$-theory) has been introduced in~\cite{Bartels-Reich(2005)}. Our treatment here is more general in that we allow involutions that are not necessarily strict and also deal with twisted involutions on the crossed product ring. All results mentioned here have obvious analogues for $K$-theory, whose proofs are actually easier since one does not have to deal with the involutions. The work was financially supported by the Sonderforschungsbereich 478 \--- Geometrische Strukturen in der Mathematik \--- and the Max-Planck-Forschungspreis of the second author. The paper is organized as follows:\\[1mm] \begin{tabular}{ll}% \ref{sec:Additive_categories_with_involution}.
& Additive categories with involution \\% \ref{sec:Additive_categories_with_weak_(G,v)-action}. & Additive categories with weak $(G,v)$-action \\% \ref{sec:Making_an_additive_categories_with_weak_(G,v)-action_strict}. & Making an additive category with weak $(G,v)$-action strict \\% \ref{sec:Crossed_product_rings_and_involutions}. & Crossed product rings and involutions \\% \ref{sec:Connected_groupoids_and_additive_categories}. & Connected groupoids and additive categories \\% \ref{sec:From_crossed_product_rings_to_additive_categories}. & From crossed product rings to additive categories \\% \ref{sec:Connected_groupoids_and_additive_categories_with_involutions}. & Connected groupoids and additive categories with involutions \\% \ref{sec:From_crossed_product_rings_with_involution_to_additive_categories_with_involution}. & From crossed product rings with involution to additive categories with involution \\% \ref{sec:G-homology_theories}. & $G$-homology theories \\% \ref{sec:Z-categories_and_additive_categories_with_involutionsG-homology_theories_and_restriction}. & $\mathbb{Z}$-categories and additive categories with involutions \\% \ref{sec:G-homology_theories_and_restriction}. & $G$-homology theories and restriction \\% \ref{sec:Proof_of_the_main_theorems}. & Proof of the main theorems \\ & References \end{tabular} \typeout{-------------- Section 1: Additive categories with involution -----------------} \section{Additive categories with involution} \label{sec:Additive_categories_with_involution} In this section we will review the notion of an additive category with involution as it appears and is used in the literature. This will be one of our main examples.
Let $\mathcal{A}$ be an \emph{additive category}, i.e., a small category $\mathcal{A}$ such that for two objects $A$ and $B$ the morphism set $\mor_{\mathcal{A}}(A,B)$ has the structure of an abelian group and the direct sum $A \oplus B$ of two objects $A$ and $B$ exists and the obvious compatibility conditions hold. A covariant \emph{functor of additive categories} $F \colon \mathcal{A}_0 \to\mathcal{A}_1$ is a covariant functor such that for two objects $A$ and $B$ in $\mathcal{A}_0$ the map $\mor_{\mathcal{A}_0}(A,B) \to \mor_{\mathcal{A}_1}(F(A), F(B))$ sending $f$ to $F(f)$ respects the abelian group structures and $F(A \oplus B)$ is a model for $F(A) \oplus F(B)$. The notion of a contravariant functor of additive categories is defined analogously. An \emph{involution $(I,E)$ on an additive category} $\mathcal{A}$ is a contravariant functor \begin{eqnarray} &I \colon \mathcal{A} \to \mathcal{A} & \label{involution_I_on_an_additive_category} \end{eqnarray} of additive categories together with a natural equivalence of such functors \begin{eqnarray} &E \colon \id_{\mathcal{A}} \to I^2 := I \circ I& \label{E_belonging_to_I} \end{eqnarray} such that for every object $A$ we have the equality of morphisms \begin{eqnarray} E(I(A)) & = & I(E(A)^{-1}) \colon I(A) \to I^3(A). \label{E(I(A))_is_I(E(A)-1)} \end{eqnarray} In the sequel we often write $I(A) = A^*$ and $I(f) = f^*$ for a morphism $f \colon A \to B$ in $\mathcal{A}$. If $I^2 = \id_\mathcal{A}$ and $E(A) = \id_A$ for all objects $A$, then we call $I = (I,\id)$ a \emph{strict involution}. \begin{definition}[Additive category with involution] \label{def:additive_category_with_involution} An \emph{additive category with involution} is an additive category together with an involution $(I,E)$. An \emph{additive category with strict involution} is an additive category together with a strict involution $I$.
\end{definition} The following example is a key example and illustrates why one cannot expect in concrete situations that the involution is strict. \begin{example} \label{exa:R-mod_as_add_cat} Let $R$ be a ring. Let $\FGP{R}$ be the category of finitely generated projective $R$-modules. This becomes an additive category by the direct sum of $R$-modules and the elementwise addition of $R$-homomorphisms. A \emph{ring with involution} is a ring $R$ together with a map $R \to R, \;r \mapsto \overline{r}$ satisfying $\overline{1} = 1$, $\overline{r+s} = \overline{r} + \overline{s}$ and $\overline{r\cdot s} = \overline{s} \cdot \overline{r}$ for $r,s \in R$. Given a ring with involution $R$, define an involution $I$ on the additive category $\FGP{R}$ as follows. Given a finitely generated projective $R$-module $P$, let $I(P) = P^*$ be the finitely generated projective $R$-module $\hom_R(P,R)$, where for $r \in R$ and $f \in \hom_R(P,R)$ the element $rf \in \hom_R(P,R)$ is defined by $(rf)(x) = f(x) \cdot \overline{r}$ for $x \in P$. The desired natural transformation $$E \colon \id_{\FGP{R}} \to I^2$$ assigns to a finitely generated projective $R$-module $P$ the $R$-isomorphism $P \xrightarrow{\cong} (P^*)^*$ sending $x \in P$ to $\hom_R(P,R) \to R, \; f \mapsto \overline{f(x)}$.
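As a quick verification (immediate from the definitions, and the point where the involution on $R$ enters): $E(P)$ is indeed $R$-linear, since for $r \in R$, $x \in P$ and $f \in P^* = \hom_R(P,R)$ one computes
\begin{eqnarray*}
E(P)(rx)(f) & = & \overline{f(rx)} \; = \; \overline{r \cdot f(x)} \; = \; \overline{f(x)} \cdot \overline{r} \; = \; \bigl(r \cdot E(P)(x)\bigr)(f),
\end{eqnarray*}
where the last step uses the $R$-module structure on $(P^*)^*$ given by the same formula as the one on $P^*$ above.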
\end{example} A \emph{functor of additive categories with involution} $(F,T) \colon \mathcal{A} \to \mathcal{B}$ consists of a covariant functor $F$ of the underlying additive categories together with a natural equivalence $T\colon F \circ I_{\mathcal{A}}\to I_{\mathcal{B}} \circ F$ such that for every object $A$ in $\mathcal{A}$ the following diagram commutes \begin{eqnarray} & \xymatrix@!C=8em{F(A) \ar[d]^-{E_{\mathcal{B}}(F(A))} \ar[r]^-{F(E_{\mathcal{A}}(A))} & F\left(A^{**}\right) \ar[d]^-{T(A^*)} \\ F(A)^{**} \ar[r]^-{T(A)^*} & F(A^*)^* } & \label{F(A)_F(A)aastast_F(Aastast)_F(Aast)ast} \end{eqnarray} If $T(A) = \id_A$ for all objects $A$, then we call $F$ a \emph{strict} functor of additive categories with involution. The \emph{composite} of functors of additive categories with involution $(F_1,T_1) \colon \mathcal{A}_1 \to \mathcal{A}_2$ and $(F_2,T_2) \colon \mathcal{A}_2 \to \mathcal{A}_3$ is defined to be $(F_2 \circ F_1, T_2 \circ T_1)$, where $F_2 \circ F_1$ is the composite of functors of additive categories and the natural equivalence $T_2 \circ T_1$ assigns to an object $A \in \mathcal{A}_1$ the isomorphism in $\mathcal{A}_3$ $$ F_2 \circ F_1 \circ I_{\mathcal{A}_1}(A) \xrightarrow{F_2(T_1(A))} F_2 \circ I_{\mathcal{A}_2} \circ F_1(A) \xrightarrow{T_2(F_1(A))} I_{\mathcal{A}_3} \circ F_2 \circ F_1(A).$$ A \emph{natural transformation $S \colon (F_1,T_1) \to (F_2,T_2)$ of functors $\mathcal{A}_1 \to \mathcal{A}_2$ of additive categories with involutions} is a natural transformation $S \colon F_1 \to F_2$ of functors of additive categories such that for every object $A$ in $\mathcal{A}$ the following diagram commutes \begin{eqnarray} & \xymatrix@!C=8em{F_1(I_{\mathcal{A}_1}(A)) \ar[r]^-{T_1(A)} \ar[d]^-{S(I_{\mathcal{A}_1}(A))} & I_{\mathcal{A}_2}(F_1(A)) \\ F_2(I_{\mathcal{A}_1}(A)) \ar[r]^-{T_2(A)} & I_{\mathcal{A}_2}F_2(A)) \ar[u]_-{I_{\mathcal{A}_2}(S(A))} } & \label{F_I_T} \end{eqnarray} \typeout{-------------- Section 2: Additive categories with 
weak $(G,v)$-action -----------------} \section{Additive categories with weak $(G,v)$-action} \label{sec:Additive_categories_with_weak_(G,v)-action} In the sequel $G$ is a group and $v \colon G \to \{\pm 1\}$ is a group homomorphism to the multiplicative group $\{\pm 1\}$. In this section we want to introduce the notion of an additive category with weak $(G,v)$-action such that the notion of an additive category with involution is the special case of an additive category with weak $(\mathbb{Z}/2,v)$-action for $v \colon \mathbb{Z}/2 \to \{\pm 1\}$ the unique group isomorphism and we can also treat $G$-actions up to natural equivalence. Notice that this will force us to deal with covariant and contravariant functors simultaneously. The homomorphism $v$ will take care of that. We call a functor $+1$-variant if it is covariant and $-1$-variant if it is contravariant. If $F_1 \colon \mathcal{C}_0 \to \mathcal{C}_1$ is an $\epsilon_1$-variant functor and $F_2 \colon \mathcal{C}_1 \to \mathcal{C}_2$ is an $\epsilon_2$-variant functor, then the composite $F_2 \circ F_1 \colon \mathcal{C}_0 \to \mathcal{C}_2$ is an $\epsilon_1\epsilon_2$-variant functor. If $f \colon x_0 \to x_1$ is an isomorphism and $\epsilon \in \{\pm 1\}$, then define $f^{\epsilon} \colon x_0 \to x_1 $ to be $f$ if $\epsilon = 1$ and $f^{\epsilon} \colon x_1 \to x_0 $ to be the inverse of $f$ if $\epsilon = -1$. If $F \colon \mathcal{C}_0 \to \mathcal{C}_1$ is $\epsilon$-variant and $f \colon x_0 \xrightarrow{\cong} y_0$ is an isomorphism in $\mathcal{C}_0$, then $F(f)^{\epsilon} \colon F(x_0) \to F(y_0)$ is an isomorphism in $\mathcal{C}_1$. \begin{definition}[Additive category with weak $(G,v)$-action] \label{def:additive_category_with_weak_(G,v)-action} Let $G$ be a group together with a group homomorphism $v \colon G \to \{\pm 1\}$.
An \emph{additive category with weak $(G,v)$-action} $\mathcal{A}$ is an additive category together with the following data: \begin{itemize} \item For every $g \in G$ we have a $v(g)$-variant functor $R_g \colon \mathcal{A} \to \mathcal{A}$ of additive categories; \item For every two elements $g,h \in G$ there is a natural equivalence of $v(gh)$-variant functors of additive categories $$L_{g,h} \colon R_{gh} \to R_h \circ R_g.$$ \end{itemize} We require: \begin{enumerate} \item $R_e = \id$ for $e \in G$ the unit element; \item $L_{g,e} = L_{e,g} = \id$ for all $g \in G$; \item \label{def:additive_category_with_weak_(G,v)-action:condition_for_(g,h,k)} The following diagram commutes for all $g,h,k \in G$ and objects $A$ in $\mathcal{A}$ $$\xymatrix@!C=10em{R_{ghk}(A) \ar[r]^-{L_{gh,k}(A)} \ar[d]^-{L_{g,hk}(A)} & R_k(R_{gh}(A)) \ar[d]^-{R_k(L_{g,h}(A))^{v(k)}} \\ R_{hk}(R_g(A)) \ar[r]^-{L_{h,k}(R_g(A))} & R_k(R_h(R_g(A)))} $$ \end{enumerate} If for every two elements $g,h \in G$ we have $L_{g,h} = \id$ and in particular $R_{gh} = R_{h}R_{g}$, we call $\mathcal{A}$ with these data an \emph{additive category with strict $(G,v)$-action} or briefly an \emph{additive $(G,v)$-category}. If $v$ is trivial, we will omit it from the notation. \end{definition} Let $\mathcal{A}$ and $\mathcal{B}$ be two additive categories with weak $(G,v)$-action and let $\epsilon \in \{\pm 1\}$. An \emph{$\epsilon$-variant functor $(F,T) \colon \mathcal{A} \to \mathcal{B}$ of additive categories with weak $(G,v)$-action} is an $\epsilon$-variant functor $F \colon \mathcal{A} \to \mathcal{B}$ of additive categories together with a collection $\{T_g \mid g \in G\}$ of natural transformations of $\epsilon v(g)$-variant functors of additive categories $T_g \colon F \circ R_g^{\mathcal{A}} \to R_g^{\mathcal{B}} \circ F$.
We require that for all $g,h \in G$ and all objects $A$ in $\mathcal{A}$ the following diagram commutes \begin{eqnarray} && \xymatrix@!C=10em{ F(R_{hg}^{\mathcal{A}}(A))\ar[r]^-{F(L_{h,g}^{\mathcal{A}}(A))^{\epsilon}} \ar[d]^-{T_{hg}(A)} & F(R_g^{\mathcal{A}}(R_h^{\mathcal{A}}(A))) \ar[r]^-{T_g(R_h(A))} & R_g^{\mathcal{B}}(F(R_h^{\mathcal{A}}(A))) \ar[d]^-{R_g^{\mathcal{B}}(T_{h}(A))^{v(g)}} \\ R_{hg}^{\mathcal{B}}(F(A)) \ar[rr]^-{L_{h,g}^{\mathcal{B}}(F(A))} & & R_g^{\mathcal{B}}(R_h^{\mathcal{B}}(F(A))) } \label{F_T_L_compatible} \end{eqnarray} The composite $(F_2,T_2) \circ (F_1,T_1) \colon \mathcal{A}_1 \to \mathcal{A}_3$ of an $\epsilon_1$-variant functor of additive categories with weak $(G,v)$-action $(F_1,T_1) \colon \mathcal{A}_1 \to \mathcal{A}_2$ and an $\epsilon_2$-variant functor of additive categories with weak $(G,v)$-action $(F_2,T_2) \colon \mathcal{A}_2 \to \mathcal{A}_3$ is the $\epsilon_1\epsilon_2$-variant functor of additive categories with weak $(G,v)$-action whose underlying $\epsilon_1\epsilon_2$-variant functor of additive categories is $F_2 \circ F_1 \colon \mathcal{A}_1 \to \mathcal{A}_3$ and whose required natural transformations for $g \in G$ are given for an object $A$ in $\mathcal{A}_1$ by $$F_2 \circ F_1 \circ R^{\mathcal{A}_1}_g(A) \xrightarrow{F_2((T_1)_g(A))^{\epsilon_2}} F_2 \circ R^{\mathcal{A}_2}_g \circ F_1(A) \xrightarrow{(T_2)_g(F_1(A))} R^{\mathcal{A}_3}_g \circ F_2 \circ F_1(A).$$ A \emph{natural transformation $S \colon (F_1,T_1) \to (F_2,T_2)$ of functors $\mathcal{A}_1 \to \mathcal{A}_2$ of additive categories with weak $(G,v)$-action} is a natural transformation $S \colon F_1 \to F_2$ of functors of additive categories such that for all $g \in G$ and objects $A$ in $\mathcal{A}_1$ the following diagram commutes \begin{eqnarray} & \xymatrix@!C=8em{F_1(R_g^{\mathcal{A}_1}(A)) \ar[r]^-{(T_1)_g(A)} \ar[d]^-{S(R_g^{\mathcal{A}_1}(A))} & R_g^{\mathcal{A}_2}(F_1(A)) \ar[d]^-{\left(R_g^{\mathcal{A}_2}(S(A))\right)^{v(g)}} \\
F_2(R_g^{\mathcal{A}_1}(A)) \ar[r]^-{(T_2)_g(A)} & R_g^{\mathcal{A}_2}(F_2(A)) } & \label{F_I_T_add_G-cat} \end{eqnarray} An \emph{$\epsilon$-variant functor $F \colon \mathcal{A} \to \mathcal{B}$ of additive categories with strict $(G,v)$-action} is an $\epsilon$-variant functor $F \colon \mathcal{A} \to \mathcal{B}$ of additive categories satisfying $F \circ R^{\mathcal{A}}_g = R^{\mathcal{B}}_g \circ F$ for all $g \in G$. A \emph{natural transformation $S \colon F_1 \to F_2$ of $\epsilon$-variant functors $\mathcal{A}_1 \to \mathcal{A}_2$ of additive categories with strict $(G,v)$-action} is a natural transformation $S \colon F_1 \to F_2$ of functors of additive categories satisfying $S(R^{\mathcal{A}_1}_g(A)) = R^{\mathcal{A}_2}_g(S(A))^{v(g)}$ for all $g \in G$ and objects $A$ in $\mathcal{A}_1$. \begin{example}[Additive categories with involution] \label{exa:additive_categories_with_involution} Given an additive category $\mathcal{A}$, the structure of an additive category with weak $(\mathbb{Z}/2,v)$-action for $v \colon \mathbb{Z}/2 \to \{\pm 1\}$ the unique group isomorphism is the same as an additive category with involution. Namely, let $t \in \mathbb{Z}/2$ be the generator. Given an involution $(I,E)$ in the sense of Definition~\ref{def:additive_category_with_involution}, define the structure of an additive category with weak $(\mathbb{Z}/2,v)$-action in the sense of Definition~\ref{def:additive_category_with_weak_(G,v)-action} by putting $R_e = \id$, $R_t = I$, $L_{e,e} = L_{e,t} = L_{t,e} = \id$ and $L_{t,t} = E$. Condition~\ref{def:additive_category_with_weak_(G,v)-action:condition_for_(g,h,k)} in Definition~\ref{def:additive_category_with_weak_(G,v)-action} follows from condition~\eqref{E(I(A))_is_I(E(A)-1)}. Given the structure of an additive category with weak $(\mathbb{Z}/2,v)$-action, define the involution $(I,E)$ by $I = R_t$ and $E = L_{t,t}$.
The corresponding statement is true for functors of additive categories with weak $(\mathbb{Z}/2,v)$-action and natural transformations between them, where diagram~\eqref{F(A)_F(A)aastast_F(Aastast)_F(Aast)ast} corresponds to diagram~\eqref{F_T_L_compatible}. Analogously we get that the structure of an additive category with strict $(\mathbb{Z}/2,v)$-action is the same as an additive category with strict involution. \end{example} \typeout{-------------- Section 3: Making an additive category with weak $(G,v)$-action strict -----------------} \section{Making an additive category with weak $(G,v)$-action strict} \label{sec:Making_an_additive_categories_with_weak_(G,v)-action_strict} Many interesting examples occur as additive categories with weak $(G,v)$-action which are not necessarily strict. On the other hand additive categories with strict $(G,v)$-action are easier to handle. We explain how we can turn an additive category with weak $(G,v)$-action $\mathcal{A}$ into an additive category with strict $(G,v)$-action which we will denote by $\mathcal{S}(\mathcal{A})$. \begin{definition}[$\mathcal{S}(\mathcal{A})$] \label{def:cals(cala)} An object in $\mathcal{S}(\mathcal{A})$ is a pair $(A,g)$ consisting of an object $A \in \mathcal{A}$ and an element $g \in G$. A morphism from $(A,g)$ to $(B,h)$ is a morphism $\phi \colon R_g(A) \to R_h(B)$ in $\mathcal{A}$. The composition of morphisms is given by the one in $\mathcal{A}$. The category $\mathcal{S}(\mathcal{A})$ inherits the structure of an additive category from $\mathcal{A}$ in the obvious way. \end{definition} Next we define the structure of an additive category with strict $(G,v)$-action on $\mathcal{S}(\mathcal{A})$. Define for $g\in G$ a functor $R^\mathcal{S}_{g} \colon \mathcal{S}(\mathcal{A}) \to \mathcal{S}(\mathcal{A})$ of additive categories as follows.
Given an object $(A,h)$, define $$R^{\mathcal{S}}_{g}(A,h) = (A,hg).$$ Given a morphism $\phi \colon (A,h) \to (B,k)$ define $$R^{\mathcal{S}}_{g}(\phi) \colon R^{\mathcal{S}}_{g}(A,h) = (A,hg) \to R^{\mathcal{S}}_{g}(B,k) = (B,kg)$$ by the composite of morphisms in $\mathcal{A}$ $$ R_{hg}(A) \xrightarrow{L_{h,g}(A)} R_{g}\left(R_{h}(A)\right) \xrightarrow{R_{g}(\phi)} R_{g}\left(R_{k}(B)\right) \xrightarrow{L_{k,g}(B)^{-1}} R_{kg}(B) $$ if $v(g) = 1$ and $$R^{\mathcal{S}}_{g}(\phi) \colon R^{\mathcal{S}}_{g}(B,k) = (B,kg) \to R^{\mathcal{S}}_{g}(A,h) = (A,hg)$$ by the composite of morphisms in $\mathcal{A}$ $$R_{kg}(B)\xrightarrow{L_{k,g}(B)} R_{g}\left(R_{k}(B)\right) \xrightarrow{R_{g}(\phi)} R_{g}\left(R_{h}(A)\right) \xrightarrow{L_{h,g}(A)^{-1}} R_{hg}(A) $$ if $v(g) = -1$. A direct computation shows that $R^{\mathcal{S}}_g$ is indeed a functor of additive categories. We conclude $R^{\mathcal{S}}_e = \id_{\mathcal{S}(\mathcal{A})}$ from the conditions $R_e = \id$ and $L_{g,e} = L_{e,g} = \id$. We have to check $R^{\mathcal{S}}_{g_2} \circ R^{\mathcal{S}}_{g_1} = R^{\mathcal{S}}_{g_1g_2}$. For simplicity we will do this only in the case $v(g_1)=v(g_2) = 1$; the other cases are analogous. Given a morphism $\phi \colon (A,h) \to (B,k)$, the morphism $R^{\mathcal{S}}_{g_1g_2}(\phi)$ is given by the composite in $\mathcal{A}$ \begin{multline*}R_{hg_1g_2}(A) \xrightarrow{L_{h,g_1g_2}(A)} R_{g_1g_2}\left(R_h(A)\right) \xrightarrow{R_{g_1g_2}(\phi)} R_{g_1g_2}\left(R_k(B)\right) \\ \xrightarrow{L_{k,g_1g_2}(B)^{-1}} R_{kg_1g_2}(B).
\end{multline*} The morphism $R^{\mathcal{S}}_{g_2} \circ R^{\mathcal{S}}_{g_1}(\phi)$ is given by the composite in $\mathcal{A}$ \begin{multline*} R_{hg_1g_2}(A) \xrightarrow{L_{hg_1,g_2}(A)} R_{g_2}\left(R_{hg_1}(A)\right) \\ \xrightarrow{R_{g_2}(L_{h,g_1}(A))} R_{g_2}\left(R_{g_1}(R_h(A))\right) \xrightarrow{R_{g_2}\left(R_{g_1}(\phi)\right)} R_{g_2}\left(R_{g_1}(R_k(B))\right) \\ \xrightarrow{R_{g_2}\left(L_{k,g_1}(B)^{-1}\right)} R_{g_2}\left(R_{kg_1}(B)\right) \xrightarrow{L_{kg_1,g_2}(B)^{-1}} R_{kg_1g_2}(B). \end{multline*} Next we compute that these two morphisms agree. Because of condition~\ref{def:additive_category_with_weak_(G,v)-action:condition_for_(g,h,k)} in Definition~\ref{def:additive_category_with_weak_(G,v)-action} we have \begin{eqnarray*} R_{g_2}(L_{h,g_1}(A)) \circ L_{hg_1,g_2}(A) & = & L_{g_1,g_2}(R_h(A)) \circ L_{h,g_1g_2}(A); \\ R_{g_2}(L_{k,g_1}(B)) \circ L_{kg_1,g_2}(B) & = & L_{g_1,g_2}(R_k(B)) \circ L_{k,g_1g_2}(B). \end{eqnarray*} Hence it suffices to show that the composite \begin{multline*} R_{g_1g_2}\left(R_h(A)\right) \xrightarrow{L_{g_1,g_2}(R_h(A))} R_{g_2}\left(R_{g_1}(R_h(A))\right) \\ \xrightarrow{R_{g_2}\left(R_{g_1}(\phi)\right)} R_{g_2}\left(R_{g_1}(R_k(B))\right) \xrightarrow{L_{g_1,g_2}(R_k(B))^{-1}} R_{g_1g_2}\left(R_k(B)\right) \end{multline*} agrees with $$R_{g_1g_2}\left(R_h(A)\right) \xrightarrow{R_{g_1g_2}(\phi)} R_{g_1g_2}\left(R_k(B)\right).$$ This follows from the fact that $L_{g_1,g_2} \colon R_{g_1g_2} \to R_{g_2} \circ R_{g_1}$ is a natural equivalence. Let $(F,T) \colon \mathcal{A} \to \mathcal{B}$ be an $\epsilon$-variant functor of additive categories with weak $(G,v)$-action. It induces an $\epsilon$-variant functor $\mathcal{S}(F,T) \colon \mathcal{S}(\mathcal{A}) \to \mathcal{S}(\mathcal{B})$ of additive categories with strict $(G,v)$-action as follows. For simplicity we will only treat the case $\epsilon = 1$; the other case $\epsilon = -1$ is analogous.
The functor $\mathcal{S}(F,T)$ sends an object $(A,h)$ in $\mathcal{S}(\mathcal{A})$ to the object $(F(A),h)$ in $\mathcal{S}(\mathcal{B})$. It sends a morphism $\phi \colon (A,h) \to (B,k)$ in $\mathcal{S}(\mathcal{A})$ which is given by a morphism $\phi \colon R_h^{\mathcal{A}}(A) \to R_k^{\mathcal{A}}(B)$ in $\mathcal{A}$ to the morphism $\mathcal{S}(F,T)(\phi) \colon (F(A),h) \to (F(B),k)$ in $\mathcal{S}(\mathcal{B})$ which is given by the following composite of morphisms in $\mathcal{B}$ $$ R^{\mathcal{B}}_h(F(A)) \xrightarrow{T_h(A)^{-1}} F(R^{\mathcal{A}}_h(A)) \xrightarrow{F(\phi)} F(R^{\mathcal{A}}_k(B)) \xrightarrow{T_k(B)} R^{\mathcal{B}}_k(F(B)). $$ We have to show $R^{\mathcal{S}(\mathcal{B})}_g \circ \mathcal{S}(F,T) = \mathcal{S}(F,T) \circ R^{\mathcal{S}(\mathcal{A})}_g$ for every $g \in G$. We only treat the case $v(g) = 1$. This is obvious on objects since both composites send an object $(A,h)$ to $(F(A),hg)$. Let $\phi \colon (A,h) \to (B,k)$ be a morphism in $\mathcal{S}(\mathcal{A})$ which is given by a morphism $\phi \colon R_h^{\mathcal{A}}(A) \to R_k^{\mathcal{A}}(B)$ in $\mathcal{A}$.
Then $R^{\mathcal{S}(\mathcal{B})}_g \circ \mathcal{S}(F,T)(\phi)$ is the morphism $(F(A),hg) \to (F(B),kg)$ in $\mathcal{S}(\mathcal{B})$ which is given by the composite in $\mathcal{B}$ \begin{multline*} R_{hg}^{\mathcal{B}}(F(A)) \xrightarrow{L_{h,g}^{\mathcal{B}}(F(A))} R_g^{\mathcal{B}}(R_h^{\mathcal{B}}(F(A))) \xrightarrow{R_g^{\mathcal{B}}(T_h(A)^{-1})} R_g^{\mathcal{B}}(F(R_h^{\mathcal{A}}(A))) \\ \xrightarrow{R_g^{\mathcal{B}}(F(\phi))} R_g^{\mathcal{B}}(F(R_k^{\mathcal{A}}(B))) \xrightarrow{R_g^{\mathcal{B}}(T_k(B))} R_g^{\mathcal{B}}(R_k^{\mathcal{B}}(F(B))) \xrightarrow{L_{k,g}^{\mathcal{B}}(F(B))^{-1}} R_{kg}^{\mathcal{B}}(F(B)) \end{multline*} and $\mathcal{S}(F,T) \circ R^{\mathcal{S}(\mathcal{A})}_g(\phi)$ is the morphism $(F(A),hg) \to (F(B),kg)$ in $\mathcal{S}(\mathcal{B})$ which is given by the composite in $\mathcal{B}$ \begin{multline*} R_{hg}^{\mathcal{B}}(F(A)) \xrightarrow{T_{hg}(A)^{-1}} F(R_{hg}^{\mathcal{A}}(A)) \xrightarrow{F(L_{h,g}^{\mathcal{A}}(A))} F(R_g^{\mathcal{A}}(R_h^{\mathcal{A}}(A))) \\ \xrightarrow{F(R_g^{\mathcal{A}}(\phi))} F(R_g^{\mathcal{A}}(R_k^{\mathcal{A}}(B))) \xrightarrow{F(L_{k,g}^{\mathcal{A}}(B)^{-1})} F(R_{kg}^{\mathcal{A}}(B)) \xrightarrow{T_{kg}(B)} R_{kg}^{\mathcal{B}}(F(B)).
\end{multline*} Since $T_g\colon F\circ R_g^{\mathcal{A}} \to R_g^{\mathcal{B}} \circ F$ is a natural transformation, the following diagram commutes $$\xymatrix@!C=10em{ F(R_g^{\mathcal{A}}(R_h^{\mathcal{A}}(A))) \ar[r]^-{F(R_g^{\mathcal{A}}(\phi))} \ar[d]^-{T_g(R_h^{\mathcal{A}}(A))} & F(R_g^{\mathcal{A}}(R_k^{\mathcal{A}}(B))) \ar[d]^-{T_g(R_k^{\mathcal{A}}(B))} \\ R_g^{\mathcal{B}}(F(R_h^{\mathcal{A}}(A))) \ar[r]^-{R_g^{\mathcal{B}}(F(\phi))} & R_g^{\mathcal{B}}(F(R_k^{\mathcal{A}}(B))) } $$ Hence it suffices to show that the composite \begin{multline*}R_{hg}^{\mathcal{B}}(F(A)) \xrightarrow{L_{h,g}^{\mathcal{B}}(F(A))} R_g^{\mathcal{B}}(R_h^{\mathcal{B}}(F(A))) \xrightarrow{R_g^{\mathcal{B}}(T_h(A)^{-1})} R_g^{\mathcal{B}}(F(R_h^{\mathcal{A}}(A))) \\ \xrightarrow{T_g(R_h^{\mathcal{A}}(A))^{-1}} F(R_g^{\mathcal{A}}(R_h^{\mathcal{A}}(A))) \end{multline*} agrees with the composite $$ R_{hg}^{\mathcal{B}}(F(A)) \xrightarrow{T_{hg}(A)^{-1}} F(R_{hg}^{\mathcal{A}}(A)) \xrightarrow{F(L_{h,g}^{\mathcal{A}}(A))} F(R_g^{\mathcal{A}}(R_h^{\mathcal{A}}(A))) $$ and that the composite \begin{multline*} F(R_g^{\mathcal{A}}(R_k^{\mathcal{A}}(B))) \xrightarrow{T_g(R_k^{\mathcal{A}}(B))} R_g^{\mathcal{B}}(F(R_k^{\mathcal{A}}(B))) \xrightarrow{R_g^{\mathcal{B}}(T_k(B))} R_g^{\mathcal{B}}(R_k^{\mathcal{B}}(F(B))) \\ \xrightarrow{L_{k,g}^{\mathcal{B}}(F(B))^{-1}} R_{kg}^{\mathcal{B}}(F(B)) \end{multline*} agrees with the composite $$ F(R_g^{\mathcal{A}}(R_k^{\mathcal{A}}(B))) \xrightarrow{F(L_{k,g}^{\mathcal{A}}(B)^{-1})} F(R_{kg}^{\mathcal{A}}(B)) \xrightarrow{T_{kg}(B)} R_{kg}^{\mathcal{B}}(F(B)). $$ This follows in both cases from the commutativity of the diagram~\eqref{F_T_L_compatible}. This finishes the proof that $\mathcal{S}(F,T)$ is a functor of additive categories with strict $(G,v)$-action.
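To illustrate the construction, consider the setting of Example~\ref{exa:additive_categories_with_involution}, where $\mathcal{A}$ is an additive category with involution $(I,E)$ regarded as an additive category with weak $(\mathbb{Z}/2,v)$-action. Unravelling the definitions above yields the following description of the strict involution on $\mathcal{S}(\mathcal{A})$. On objects we get $$R^{\mathcal{S}}_t(A,e) = (A,t), \qquad R^{\mathcal{S}}_t(A,t) = (A,e).$$ Since $v(t) = -1$ and $L_{e,t} = L_{t,e} = \id$, a morphism $\phi \colon (A,e) \to (B,e)$, i.e., a morphism $\phi \colon A \to B$ in $\mathcal{A}$, is sent to the morphism $(B,t) \to (A,t)$ given by $I(\phi) \colon I(B) \to I(A)$, whereas a morphism $\psi \colon (A,t) \to (B,t)$, i.e., a morphism $\psi \colon I(A) \to I(B)$ in $\mathcal{A}$, is sent to the morphism $(B,e) \to (A,e)$ given by the composite $$B \xrightarrow{E(B)} I(I(B)) \xrightarrow{I(\psi)} I(I(A)) \xrightarrow{E(A)^{-1}} A,$$ where the natural equivalence $L_{t,t} = E$ enters. In particular $R^{\mathcal{S}}_t \circ R^{\mathcal{S}}_t = \id$ holds on the nose, although $I \circ I$ is only naturally equivalent to the identity.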
Let $S \colon (F_1,T_1) \to (F_2,T_2)$ be a natural transformation of $\epsilon$-variant functors of additive categories with weak $(G,v)$-action $(F_1,T_1) \colon \mathcal{A}_1 \to \mathcal{A}_2$ and $(F_2,T_2) \colon \mathcal{A}_1 \to \mathcal{A}_2$. It induces a natural transformation $\mathcal{S}(S) \colon \mathcal{S}(F_1,T_1) \to \mathcal{S}(F_2,T_2)$ of functors of additive categories with strict $(G,v)$-action as follows. Given an object $(A,g)$ in $\mathcal{S}(\mathcal{A}_1)$, we have to specify a morphism in $\mathcal{S}(\mathcal{A}_2)$ $$\mathcal{S}(S)(A,g) \colon \mathcal{S}(F_1,T_1)(A,g) = (F_1(A),g) \to \mathcal{S}(F_2,T_2)(A,g) = (F_2(A),g),$$ i.e., a morphism $R^{\mathcal{A}_2}_g(F_1(A)) \to R^{\mathcal{A}_2}_g(F_2(A))$ in $\mathcal{A}_2$. We take $R^{\mathcal{A}_2}_g(S(A))^{v(g)}$. We leave it to the reader to check that this is indeed a natural transformation of $\epsilon$-variant functors of additive categories with strict $(G,v)$-action using the commutativity of the diagram~\eqref{F_I_T_add_G-cat}. Let ${(G,v)\text{-}\matheurm{Add}\text{-}\matheurm{Cat}}^{\epsilon}$ be the category of additive categories with weak $(G,v)$-action with $\epsilon$-variant functors as morphisms and let ${\sGaddcat}^{\epsilon}$ be the category of additive categories with strict $(G,v)$-action with $\epsilon$-variant functors as morphisms. There is the forgetful functor $$\forget \colon {\sGaddcat}^{\epsilon} \to {(G,v)\text{-}\matheurm{Add}\text{-}\matheurm{Cat}}^{\epsilon}$$ and the functor constructed above $$\mathcal{S} \colon {(G,v)\text{-}\matheurm{Add}\text{-}\matheurm{Cat}}^{\epsilon} \to {\sGaddcat}^{\epsilon}.$$ \begin{lemma} \label{lem:adjoint_pair_(cals,forget)} \begin{enumerate} \item \label{lem:adjoint_pair_(cals,forget):adjoint} We obtain an adjoint pair of functors $(\mathcal{S},\forget)$.
\item \label{lem:adjoint_pair_(cals,forget):equivalence} We get for every additive category $\mathcal{A}$ with weak $(G,v)$-action a functor of additive categories with weak $(G,v)$-action $$P_{\mathcal{A}} \colon \mathcal{A} \to \forget(\mathcal{S}(\mathcal{A}))$$ which is natural in $\mathcal{A}$ and whose underlying functor of additive categories is an equivalence of additive categories. \end{enumerate} \end{lemma} \begin{proof} We will only treat the case where $v$ is trivial and $\epsilon = 1$; the other cases are analogous. \\[2mm]\ref{lem:adjoint_pair_(cals,forget):adjoint} We have to construct for any additive category $\mathcal{A}$ with weak $G$-action and any additive category $\mathcal{B}$ with strict $G$-action mutually inverse maps $$\alpha \colon \func_{\sGaddcat}(\mathcal{S}(\mathcal{A}),\mathcal{B}) \to \func_{(G,v)\text{-}\matheurm{Add}\text{-}\matheurm{Cat}}(\mathcal{A},\forget(\mathcal{B}))$$ and $$\beta \colon \func_{(G,v)\text{-}\matheurm{Add}\text{-}\matheurm{Cat}}(\mathcal{A},\forget(\mathcal{B})) \to \func_{\sGaddcat}(\mathcal{S}(\mathcal{A}),\mathcal{B}). $$ For a functor of additive categories with strict $G$-action $F \colon \mathcal{S}(\mathcal{A}) \to \mathcal{B}$, the functor of additive categories with weak $G$-action $\alpha(F) \colon \mathcal{A} \to \forget(\mathcal{B})$ consists of a functor $\alpha(F) \colon \mathcal{A} \to \forget(\mathcal{B})$ of additive categories and a collection of natural transformations $T(F)_g \colon \alpha(F) \circ R_g^{\mathcal{A}} \to R_g^{\mathcal{B}} \circ \alpha(F)$ satisfying certain compatibility conditions. We first explain the functor $\alpha(F) \colon \mathcal{A} \to \forget(\mathcal{B})$. It sends an object $A$ in $\mathcal{A}$ to $F(A,e)$ and a morphism $f \colon A \to B$ in $\mathcal{A}$ to the morphism in $\mathcal{B}$ which is given by the value of $F$ on the morphism $(A,e) \to (B,e)$ in $\mathcal{S}(\mathcal{A})$ defined by $f$.
For $g \in G$ the transformation $T(F)_g$ evaluated at an object $A$ in $\mathcal{A}$ is the morphism $$\alpha(F)(R_g^{\mathcal{A}}(A)) = F(R_g^{\mathcal{A}}(A),e) \to R_g^{\mathcal{B}}(\alpha(F)(A)) = R_g^{\mathcal{B}}(F(A,e))$$ defined as follows. It is given by the composite of the image under $F$ of the morphism $(R^{\mathcal{A}}_g(A),e) \to R_g^{\mathcal{S}(\mathcal{A})}(A,e) = (A,g)$ in $\mathcal{S}(\mathcal{A})$ which is defined by the identity morphism $\id \colon R_g^{\mathcal{A}}(A) \to R_g^{\mathcal{A}}(A)$ in $\mathcal{A}$ and the identity $F(R_g^{\mathcal{S}(\mathcal{A})}(A,e)) = R_g^{\mathcal{B}}(F(A,e))$ which comes from the assumption that $F$ is a functor of additive categories with strict $G$-action. One easily checks that $\alpha(F)$ satisfies condition~\eqref{F_T_L_compatible} since it is satisfied for $F$. Given a functor of additive categories with weak $G$-action $(F,T) \colon \mathcal{A} \to \forget(\mathcal{B})$, the functor of additive categories with strict $G$-action $\beta(F,T) \colon \mathcal{S}(\mathcal{A}) \to \mathcal{B}$ is defined as follows. It sends an object $(A,h)$ to $R_h^{\mathcal{B}}(F(A))$. A morphism $\phi \colon (A,h) \to (B,k)$ in $\mathcal{S}(\mathcal{A})$ which is given by a morphism $\phi \colon R^{\mathcal{A}}_h(A) \to R^{\mathcal{A}}_k(B)$ in $\mathcal{A}$ is sent to the morphism in $\mathcal{B}$ given by the composite $$ R_h^{\mathcal{B}}(F(A)) \xrightarrow{T_h(A)^{-1}} F(R^{\mathcal{A}}_h(A)) \xrightarrow{F(\phi)} F(R^{\mathcal{A}}_k(B)) \xrightarrow{T_k(B)} R^{\mathcal{B}}_k(F(B)). $$ The following calculation shows that $\beta(F,T)$ is indeed a functor of additive categories with strict $G$-action.
Given an element $g \in G$ the morphism $R_g^{\mathcal{S}(\mathcal{A})}(\phi) \colon (A,hg) \to (B,kg)$ in $\mathcal{S}(\mathcal{A})$ is given by the morphism in $\mathcal{A}$ $$R_{hg}^{\mathcal{A}}(A) \xrightarrow{L_{h,g}^{\mathcal{A}}(A)} R_g^{\mathcal{A}}(R_h^{\mathcal{A}}(A)) \xrightarrow{R_g^{\mathcal{A}}(\phi)} R_g^{\mathcal{A}}(R_k^{\mathcal{A}}(B)) \xrightarrow{L_{k,g}^{\mathcal{A}}(B)^{-1}} R_{kg}^{\mathcal{A}}(B). $$ Hence $\beta(F,T) \circ R_g^{\mathcal{S}(\mathcal{A})}(\phi)$ is the morphism in $\mathcal{B}$ given by the composite \begin{multline*} R_{hg}^{\mathcal{B}}(F(A)) \xrightarrow{T_{hg}(A)^{-1}} F(R_{hg}^{\mathcal{A}}(A)) \xrightarrow{F(L_{h,g}^{\mathcal{A}}(A))} F(R_g^{\mathcal{A}}(R_h^{\mathcal{A}}(A))) \\ \xrightarrow{F(R_g^{\mathcal{A}}(\phi))} F( R_g^{\mathcal{A}}(R_k^{\mathcal{A}}(B))) \xrightarrow{F(L_{k,g}^{\mathcal{A}}(B)^{-1})} F(R_{kg}^{\mathcal{A}}(B)) \\ \xrightarrow{T_{kg}(B)} R_{kg}^{\mathcal{B}}(F(B)). \end{multline*} The morphism $R_g^{\mathcal{B}}(\beta(F,T)(\phi))$ in $\mathcal{B}$ is given by the composite \begin{multline*} R_g^{\mathcal{B}}(R_h^{\mathcal{B}}(F(A))) \xrightarrow{R_g^{\mathcal{B}}(T_h(A)^{-1})} R_g^{\mathcal{B}}(F(R^{\mathcal{A}}_h(A))) \xrightarrow{R_g^{\mathcal{B}}(F(\phi))} R_g^{\mathcal{B}}(F(R^{\mathcal{A}}_k(B))) \\ \xrightarrow{R_g^{\mathcal{B}}(T_k(B))} R_g^{\mathcal{B}}(R^{\mathcal{B}}_k(F(B))). \end{multline*} Since $\mathcal{B}$ is an additive category with strict $G$-action by assumption, we have the equalities $R_g^{\mathcal{B}}(R_h^{\mathcal{B}}(F(A))) = R_{hg}^{\mathcal{B}}(F(A))$ and $R_g^{\mathcal{B}}(R^{\mathcal{B}}_k(F(B))) = R_{kg}^{\mathcal{B}}(F(B))$. We must show that under these identifications the two morphisms in $\mathcal{B}$ above agree.
Since $T_g$ is a natural transformation $F \circ R^{\mathcal{A}}_g \to R^{\mathcal{B}}_g \circ F$, the following diagram commutes $$\xymatrix@!C=10em{ F(R_g^{\mathcal{A}}(R_h^{\mathcal{A}}(A))) \ar[r]^-{F(R_g^{\mathcal{A}}(\phi))} \ar[d]^-{T_g(R_h^{\mathcal{A}}(A))} & F(R_g^{\mathcal{A}}(R_k^{\mathcal{A}}(B))) \ar[d]^-{T_g(R_k^{\mathcal{A}}(B))} \\ R_g^{\mathcal{B}}(F(R_h^{\mathcal{A}}(A))) \ar[r]^-{R_g^{\mathcal{B}}(F(\phi))} & R_g^{\mathcal{B}}(F(R_k^{\mathcal{A}}(B))) } $$ Hence it suffices to show that the composites $$R_g^{\mathcal{B}}(R_h^{\mathcal{B}}(F(A))) = R_{hg}^{\mathcal{B}}(F(A)) \xrightarrow{T_{hg}(A)^{-1}} F(R_{hg}^{\mathcal{A}}(A)) \xrightarrow{F(L_{h,g}^{\mathcal{A}}(A))} F(R_g^{\mathcal{A}}(R_h^{\mathcal{A}}(A))) $$ and $$R_g^{\mathcal{B}}(R_h^{\mathcal{B}}(F(A))) \xrightarrow{R_g^{\mathcal{B}}(T_h(A)^{-1})} R_g^{\mathcal{B}}(F(R^{\mathcal{A}}_h(A))) \xrightarrow{T_g(R_h^{\mathcal{A}}(A))^{-1}} F(R_g^{\mathcal{A}}(R_h^{\mathcal{A}}(A))) $$ agree and that the composites $$ F(R_g^{\mathcal{A}}(R_k^{\mathcal{A}}(B))) \xrightarrow{F(L_{k,g}^{\mathcal{A}}(B)^{-1})} F(R_{kg}^{\mathcal{A}}(B)) \xrightarrow{T_{kg}(B)} R_{kg}^{\mathcal{B}}(F(B)) = R_g^{\mathcal{B}}(R^{\mathcal{B}}_k(F(B)))$$ and $$F(R_g^{\mathcal{A}}(R_k^{\mathcal{A}}(B))) \xrightarrow{T_g(R_k^{\mathcal{A}}(B))} R_g^{\mathcal{B}}(F(R_k^{\mathcal{A}}(B))) \xrightarrow{R_g^{\mathcal{B}}(T_k(B))} R_g^{\mathcal{B}}(R^{\mathcal{B}}_k(F(B)))$$ agree. This follows in both cases from the commutativity of the diagram~\eqref{F_T_L_compatible}. This finishes the proof that $\beta(F,T)$ is a functor of additive categories with strict $G$-action. We leave it to the reader to check that both composites $\beta \circ \alpha$ and $\alpha \circ \beta$ are the identity.
\\[1mm]\ref{lem:adjoint_pair_(cals,forget):equivalence} The functor of additive categories with weak $(G,v)$-action $$P_{\mathcal{A}} \colon \mathcal{A} \to \forget(\mathcal{S}(\mathcal{A})),$$ which is natural in $\mathcal{A}$, is defined to be the adjoint of the identity functor $\id \colon \mathcal{S}(\mathcal{A}) \to \mathcal{S}(\mathcal{A})$. Explicitly, it sends an object $A$ to the object $(A,e)$ and a morphism $\phi \colon A \to B$ to the morphism $(A,e) \to (B,e)$ given by $\phi$. Obviously $P_{\mathcal{A}}$ induces a bijection $\mor_{\mathcal{A}}(A,B) \to \mor_{\mathcal{S}(\mathcal{A})}(P_{\mathcal{A}}(A),P_{\mathcal{A}}(B))$ and for every object $(A,g)$ in $\mathcal{S}(\mathcal{A})$ there is an object in the image of $P_{\mathcal{A}}$ which is isomorphic to $(A,g)$, namely, $P_{\mathcal{A}}(R^{\mathcal{A}}_{g}(A)) = (R_{g}^{\mathcal{A}}(A),e)$. Hence the underlying functor of $P_{\mathcal{A}}$ is an equivalence of additive categories. \end{proof} \typeout{------------------------- Section 4: Crossed product rings and involutions ----------------------} \section{Crossed product rings and involutions} \label{sec:Crossed_product_rings_and_involutions} In this section we will introduce the concept of a crossed product ring. Let $R$ be a ring and let $G$ be a group. Let $e\in G$ be the unit in $G$ and denote by $1$ the multiplicative unit in $R$. Suppose that we are given maps of sets \begin{eqnarray} c\colon G & \to & \aut(R), \quad g \mapsto c_g \label{c:G_toaut(G)}; \\ \tau\colon G \times G & \to & R^{\times}.
\label{tau:G_times_G_to_R_times} \end{eqnarray} We require \begin{eqnarray} c_{\tau(g,g')} \circ c_{gg'} & = & c_{g} \circ c_{g'}; \label{tau_and_c_group_homo} \\ \tau (g,g') \cdot \tau (gg',g'') & = & c_{g}(\tau (g',g''))\cdot \tau (g,g'g''); \label{tau_and_c_cocykel} \\ c_e & = & \id_R; \label{c_e-Is_id} \\ \tau(g,e) & = & 1; \label{tau(g,e)_is_1} \\ \tau(e,g) & = & 1, \label{tau(e,g)_is_1} \end{eqnarray} for $g,g',g'' \in G$, where $c_{\tau(g,g')}\colon R \to R$ is conjugation with $\tau(g,g')$, i.e., it sends $r$ to $\tau(g,g')r\tau(g,g')^{-1}$. Let $R \ast G = R \ast_{c,\tau} G$ be the free $R$-module with the set $G$ as basis. It becomes a ring with the following multiplication $$\left(\sum_{g \in G} \lambda_g g\right) \cdot \left(\sum_{h \in G} \mu_h h\right) = \sum_{g \in G} \left(\sum_{\substack{g',g'' \in G,\\g'g'' = g}} \lambda_{g'} c_{g'}(\mu_{g''}) \tau(g',g'') \right) g.$$ This multiplication is uniquely determined by the properties $g\cdot r = c_g(r)\cdot g$ and $g \cdot g' = \tau(g,g') \cdot (gg')$. The conditions~\eqref{tau_and_c_group_homo} and~\eqref{tau_and_c_cocykel} relating $c$ and $\tau$ are equivalent to the condition that this multiplication is associative. The other conditions~\eqref{c_e-Is_id}, \eqref{tau(g,e)_is_1} and~\eqref{tau(e,g)_is_1} are equivalent to the condition that the element $1 \cdot e$ is a multiplicative unit in $R\ast G$. We call \begin{eqnarray} R \ast G & = & R \ast_{c,\tau} G \label{R_ast_G} \end{eqnarray} the \emph{crossed product} of $R$ and $G$ with respect to $c$ and $\tau$. \begin{example} \label{exa:RG_as_cros._prod.} Let $1 \to H \xrightarrow{i} G \xrightarrow{p} Q \to 1$ be an extension of groups. Let $s\colon Q \to G$ be a map satisfying $p \circ s = \id$ and $s(e) = e$. We do not require $s$ to be a group homomorphism. Define $c\colon Q \to \aut(RH)$ by $c_q(\sum_{h \in H} \lambda_h h) = \sum_{h \in H} \lambda_h s(q)hs(q)^{-1}$.
Define $\tau\colon Q \times Q \to (RH)^{\times}$ by $\tau (q,q') = s(q)s(q')s(qq')^{-1}$. Then we obtain a ring isomorphism $RH \ast Q \to RG$ by sending $\sum_{q \in Q} \lambda_q q$ to $\sum_{q \in Q} i(\lambda_q) s(q)$, where $i\colon RH \to RG$ is the ring homomorphism induced by $i\colon H \to G$. Notice that $s$ is a group homomorphism if and only if $\tau$ is constant with value $1$. \end{example} Next we consider the additive category with involution $\FGP{R}$ of finitely generated projective $R$-modules. For $g \in G$ we obtain a functor $\res_{c_g} \colon \FGP{R} \to \FGP{R}$ by restriction with the ring automorphism $c_g\colon R \to R$. Define a natural transformation of functors $\FGP{R} \to \FGP{R}$ $$L_{\tau(g,h)} \colon \res_{c_{gh}} \to \res_{c_h} \circ \res_{c_g}$$ by assigning to a finitely generated projective $R$-module $P$ the $R$-homomorphism $$\res_{c_{gh}}P \to \res_{c_h}\res_{c_g}P,\quad p \mapsto \tau(g,h) p.$$ This is indeed an $R$-linear map because of the following computation for $r \in R$ and $p \in P$ \begin{multline*} \tau(g,h)c_{gh}(r) = \tau(g,h)c_{gh}(r)\tau(g,h)^{-1}\tau(g,h) = c_{\tau(g,h)} \circ c_{gh}(r)\tau(g,h) \\ = c_g \circ c_h(r)\tau(g,h). \end{multline*} \begin{lemma} \label{lem:weak_G-structure_on_FGP(R)} We get from the collections $\{\res_{c_g} \mid g \in G\}$ and $\{L_{\tau(g,h)} \mid g,h \in G\}$ the structure of an additive category with weak $G$-action on $\FGP{R}$. \end{lemma} \begin{proof} Condition~\eqref{tau_and_c_cocykel} implies that for every finitely generated projective $R$-module $P$ the composites $$\res_{c_{gg'g''}} P \xrightarrow{L_{\tau(g,g'g'')}} \res_{c_{g'g''}}\res_{c_g} P \xrightarrow{L_{c_g(\tau(g',g''))}} \res_{c_{g''}} \res_{c_{g'}} \res_{c_g} P$$ and $$\res_{c_{gg'g''}} P \xrightarrow{L_{\tau(gg',g'')}} \res_{c_{g''}}\res_{c_{gg'}} P \xrightarrow{L_{\tau(g,g')}} \res_{c_{g''}} \res_{c_{g'}} \res_{c_g} P$$ agree.
This takes care of condition~\ref{def:additive_category_with_weak_(G,v)-action:condition_for_(g,h,k)} in Definition~\ref{def:additive_category_with_weak_(G,v)-action}. We conclude $\res_{c_e} = \id$, $L_{\tau(g,e)} = \id$ and $L_{\tau(e,g)} = \id$ for all $g \in G$ from~\eqref{c_e-Is_id}, \eqref{tau(g,e)_is_1} and~\eqref{tau(e,g)_is_1}. \end{proof} Because of Lemma~\ref{lem:weak_G-structure_on_FGP(R)} we obtain an additive category with strict $G$-action from the constructions of Section~\ref{sec:Making_an_additive_categories_with_weak_(G,v)-action_strict} \begin{eqnarray} \FGP{R}_{c,\tau} & := & \mathcal{S}(\FGP{R}). \label{FGP(R)_(c,tau)} \end{eqnarray} From now on assume that $R$ comes with an involution of rings $r \mapsto \overline{r}$. We want to consider extensions of it to an involution on $R \ast G$. Suppose that additionally we are given a map \begin{eqnarray} &w \colon G \to R. & \label{map_w} \end{eqnarray} We require the following conditions for $g,h \in G$ and $r \in R$ \begin{eqnarray} w(e) & = & 1; \label{w(e)_is_1} \\ w(gh) & = & w(h)c_{h^{-1}}(w(g))\tau (h^{-1},g^{-1}) c_{(gh)^{-1}}\left(\overline{\tau(g,h)}\right)^{-1}; \label{w(gh)} \\ \overline{w(g)} & = & w(g)c_{g}^{-1}\left(\tau (g,g^{-1})\overline{\tau (g,g^{-1})}^{-1}\right); \label{overlinew(g)} \\ \overline{c_{g}(r)} & = & c_g\left(\left(w(g)\tau(g^{-1},g)\right)^{-1}\overline{r}\left(w(g)\tau(g^{-1},g)\right)\right) \label{overlinec_g(r)}. \end{eqnarray} We claim that there is precisely one involution on $R \ast G$ with the properties that it extends the involution on $R$ and sends $g$ to $w(g) \cdot g^{-1}$. The candidate for the involution is \begin{eqnarray} \overline{\sum_{g \in G} r_g \cdot g} & := & \sum_{g \in G} w(g)c_{g^{-1}}(\overline{r_g}) \cdot g^{-1}. \label{inv_onR_ast_G} \end{eqnarray} One easily concludes from the requirements and the axioms of an involution that this is the only possible formula for such an involution.
Namely, \begin{multline*} \overline{\sum_{g \in G} r_g \cdot g} = \sum_{g \in G} \overline{r_g \cdot g} = \sum_{g \in G} \overline{g} \cdot\overline{r_g} = \sum_{g \in G} w(g) \cdot g^{-1} \cdot \overline{r_g} \\ = \sum_{g \in G} w(g) \cdot \left(g^{-1} \cdot \overline{r_g} \cdot g\right) \cdot g^{-1} = \sum_{g \in G} w(g)c_{g^{-1}}(\overline{r_g}) \cdot g^{-1}. \end{multline*} Before we explain that this definition indeed satisfies the axioms for an involution, we show that the conditions about $w$ above are necessary for this map to be an involution on $R \ast G$. So assume that we have an involution on $R \ast G$ that extends the involution on $R$ and sends $g$ to $w(g) \cdot g^{-1}$ for a given map $w \colon G \to R$. Denote by $1$ the multiplicative unit in both $R$ and $R \ast G$. From $$1 \cdot e = 1 = \overline{1} = \overline{1 \cdot e}= w(e) \cdot e$$ we conclude~\eqref{w(e)_is_1}. The equality \begin{multline*} w(gh) c_{(gh)^{-1}}\left(\overline{\tau(g,h)}\right) \cdot (gh)^{-1} = \overline{\tau(g,h) \cdot gh} = \overline{g\cdot h} \\ = \overline{h} \cdot \overline{g} = w(h) \cdot h^{-1} \cdot w(g) \cdot g^{-1} = w(h)\left(h^{-1} \cdot w(g) \cdot h\right) \cdot h^{-1} \cdot g^{-1} \\ = w(h)c_{h^{-1}}(w(g))\tau (h^{-1},g^{-1}) \cdot (gh)^{-1} \end{multline*} implies~\eqref{w(gh)}.
If we take $h = g^{-1}$ in~\eqref{w(gh)} and use~\eqref{w(e)_is_1}, we get \begin{eqnarray} & 1 = w(e) = w(gg^{-1}) = w(g^{-1})c_{g}(w(g))\tau (g,g^{-1}) \overline{\tau(g,g^{-1})}^{-1}.& \label{eqn_about_1_is_w(e)_is} \end{eqnarray} This implies that for all $g \in G$ the element $w(g)$ is a unit in $R$ with inverse $$w(g)^{-1} = c_{g^{-1}}(w(g^{-1}))\tau (g^{-1},g) \overline{\tau(g^{-1},g)}^{-1}.$$ The equality \begin{multline*} g = \overline{\overline{g}} = \overline{w(g) \cdot g^{-1}} = \overline{g^{-1}} \cdot \overline{w(g)} = w(g^{-1}) \cdot g \cdot \overline{w(g)} \\ = w(g^{-1})\cdot \left(g\cdot \overline{w(g)}\cdot g^{-1}\right) \cdot g = w(g^{-1})c_{g}\left(\overline{w(g)}\right) \cdot g \end{multline*} together with~\eqref{eqn_about_1_is_w(e)_is} implies $$w(g^{-1})c_{g}\left(\overline{w(g)}\right) = 1 = w(g^{-1})c_{g}(w(g))\tau (g,g^{-1}) \overline{\tau(g,g^{-1})}^{-1}.$$ If we multiply this equation by $w(g^{-1})^{-1}$ and apply the inverse $c_g^{-1}$ of $c_g$, we derive condition~\eqref{overlinew(g)}. The equality \begin{multline*} \overline{r} \cdot w(g) \cdot g^{-1} = \overline{r} \cdot \overline{g} = \overline{g\cdot r} = \overline{(g\cdot r \cdot g^{-1}) \cdot g} = \overline{c_{g}(r)\cdot g} = \overline{g} \cdot \overline{c_{g}(r)} \\ = w(g) \cdot g^{-1} \cdot \overline{c_{g}(r)} = w(g) \cdot \left(g^{-1} \cdot \overline{c_{g}(r)} \cdot g\right) \cdot g^{-1} = w(g) \cdot c_{g^{-1}}\left(\overline{c_{g}(r)}\right) \cdot g^{-1} \end{multline*} implies that for all $g \in G$ and $r \in R$ we have $\overline{r} \cdot w(g) = w(g) \cdot c_{g^{-1}}\left(\overline{c_{g}(r)}\right)$ and hence $$\overline{c_{g}(r)} = c_{g^{-1}}^{-1}\left(w(g)^{-1}\overline{r}w(g)\right).$$ From the relation~\eqref{tau_and_c_group_homo} we conclude $c_{\tau(g^{-1},g)} = c_{g^{-1}} \circ c_{g}$ and hence $ c_{g^{-1}}^{-1} = c_g \circ c_{\tau(g^{-1},g)}^{-1}$. Now condition~\eqref{overlinec_g(r)} follows.
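Before turning to the converse, it may help to see these conditions in a small example. Take $R = \mathbb{C}$ with complex conjugation as involution and $G = \mathbb{Z}/2$ with generator $t$, let $c_t$ be complex conjugation and put $\tau(t,t) = -1$ and $\tau(e,t) = \tau(t,e) = 1$. The conditions~\eqref{tau_and_c_group_homo} and~\eqref{tau_and_c_cocykel} are easily checked, and $R \ast_{c,\tau} G$ is isomorphic to the quaternions $\mathbb{H}$, with $t$ playing the role of $j$, since $t \cdot z = \overline{z} \cdot t$ for $z \in \mathbb{C}$ and $t \cdot t = \tau(t,t) = -1$. The map $w$ given by $w(e) = 1$ and $w(t) = -1$ satisfies the conditions~\eqref{w(e)_is_1}, \eqref{w(gh)}, \eqref{overlinew(g)} and~\eqref{overlinec_g(r)}; for instance, since $w(t)\tau(t,t) = 1$, condition~\eqref{overlinec_g(r)} reduces to $\overline{c_t(z)} = c_t(\overline{z}) = z$. The formula~\eqref{inv_onR_ast_G} becomes $$\overline{z_0 + z_1 \cdot t} = \overline{z_0} + w(t)c_{t}(\overline{z_1}) \cdot t = \overline{z_0} - z_1 \cdot t,$$ which is quaternionic conjugation.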
Finally we show that the conditions~\eqref{w(e)_is_1}, \eqref{w(gh)}, \eqref{overlinew(g)} and~\eqref{overlinec_g(r)} on $w$ do imply that we get an involution of rings on $R \ast G$ by the formula~\eqref{inv_onR_ast_G}. Obviously this formula is compatible with the additive structure on $R \ast G$ and sends $1$ to $1$. In order to show that it is an involution and compatible with the multiplicative structure, we have to show $\overline{g\cdot h} = \overline{h}\cdot \overline{g}$, $\overline{rs} = \overline{s} \cdot \overline{r}$, $\overline{r \cdot g} = \overline{g} \cdot \overline{r}$, $\overline{g \cdot r} = \overline{r} \cdot \overline{g}$, $\overline{\overline{r}} = r$ and $\overline{\overline{g}} = g$ for $r,s \in R$ and $g,h \in G$. We get $\overline{rs} = \overline{s} \cdot \overline{r}$ and $\overline{\overline{r}} = r$ from the fact that we start with an involution on $R$. The other equations follow from the proofs above that~\eqref{inv_onR_ast_G} is the only possible candidate and that the conditions about $w$ are necessary for the existence of the desired involution on $R \ast G$; one just has to read the various equations and implications backwards. We will denote the resulting ring with involution by \begin{eqnarray} &R \ast_{c,\tau,w} G. & \label{r_ast_c,tau,w_G} \end{eqnarray} \begin{example} \label{exa:RG_as_cros._prod.with_inv} Suppose that we are in the situation of Example~\ref{exa:RG_as_cros._prod.}. Suppose that we are additionally given a group homomorphism $w_1 \colon G \to \cent(R)^{\times}$ to the abelian group of invertible central elements in $R$ satisfying $\overline{w_1(g)} = w_1(g)$ for all $g \in G$. The $w_1$-twisted involution on $RG$ is defined by $\overline{\sum_{g \in G} r_g \cdot g} = \sum_{g \in G} \overline{r_g}w_1(g) \cdot g^{-1}$. It extends the $w_1|_H$-involution on $RH$.
We obtain an involution on $RH \ast Q$ if we conjugate the $w_1$-twisted involution with the isomorphism $RH \ast Q \xrightarrow{\cong} RG$ which we have introduced in Example~\ref{exa:RG_as_cros._prod.}. This involution on $RH \ast Q$ sends $q \in Q$ to the element $w_1(s(q))\tau(q^{-1},q)^{-1} \cdot q^{-1}$ because of the following calculation in $RG$ for $q \in Q$ \begin{multline*} \overline{s(q)} = w_1(s(q)) \cdot s(q)^{-1} = w_1(s(q)) \cdot s(q)^{-1}\cdot s(q^{-1})^{-1} \cdot s(q^{-1}) \\ = w_1(s(q)) \cdot \left(s(q^{-1}) \cdot s(q)\right)^{-1} \cdot s(q^{-1}) = w_1(s(q)) \cdot \left(\tau(q^{-1},q)s(q^{-1}q)\right)^{-1} \cdot s(q^{-1}) \\ = w_1(s(q))\tau(q^{-1},q)^{-1} \cdot s(q^{-1}). \end{multline*} Define $$w \colon Q \to RH, \quad q \mapsto w_1(s(q))\tau(q^{-1},q)^{-1}.$$ Then $w$ satisfies the conditions \eqref{w(e)_is_1}, \eqref{w(gh)}, \eqref{overlinew(g)} and~\eqref{overlinec_g(r)} and the involution on $RH \ast Q$ determined by $w$ corresponds under the isomorphism $RH \ast Q \xrightarrow{\cong} RG$ to the $w_1$-twisted involution on $RG$. 
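Both the general construction $R \ast_{c,\tau,w} G$ and the $w_1$-twisted involution of this example can be exercised on small concrete data. The following Python sketch is our own illustration; none of the specific choices below occur in the text. Part (a) realizes the quaternions as $\mathbb{C} \ast_{c,\tau,w} \mathbb{Z}/2$ with $c_1$ complex conjugation, $\tau(1,1) = -1$, $w(0) = 1$ and $w(1) = -1$; part (b) implements the $w_1$-twisted involution on $\mathbb{Z}[\mathbb{Z}/4]$ with $w_1(g) = (-1)^g$. In both cases the script checks numerically that the resulting map is an anti-multiplicative involution.

```python
import random
random.seed(0)

# (a) Quaternions as a crossed product C *_{c,tau,w} Z/2 (our illustrative
# data): elements are dicts {0: z0, 1: z1} standing for z0*e + z1*g, with
# c_0 = id, c_1 = complex conjugation, tau(1,1) = -1 and w = {0: 1, 1: -1}.

def c(g, z):
    return z.conjugate() if g == 1 else z

def tau(g, h):
    return -1 if (g == 1 and h == 1) else 1

w = {0: 1, 1: -1}

def cp_mul(x, y):
    # (r.g1)(s.g2) = r c_{g1}(s) tau(g1,g2) . (g1 g2)
    z = {0: 0j, 1: 0j}
    for g1 in (0, 1):
        for g2 in (0, 1):
            z[(g1 + g2) % 2] += x[g1] * c(g1, y[g2]) * tau(g1, g2)
    return z

def cp_inv(x):
    # overline(r.g) = w(g) c_{g^{-1}}(overline(r)) . g^{-1}; g^{-1} = g in Z/2
    return {g: w[g] * c(g, x[g].conjugate()) for g in (0, 1)}

def cp_rand():
    return {g: complex(random.randint(-5, 5), random.randint(-5, 5))
            for g in (0, 1)}

for _ in range(100):
    x, y, z = cp_rand(), cp_rand(), cp_rand()
    assert cp_mul(cp_mul(x, y), z) == cp_mul(x, cp_mul(y, z))  # associative
    assert cp_inv(cp_inv(x)) == x                              # an involution
    assert cp_inv(cp_mul(x, y)) == cp_mul(cp_inv(y), cp_inv(x))

# (b) The w1-twisted involution on the group ring Z[Z/4] with w1(g) = (-1)^g:
# overline(sum r_g g) = sum r_g w1(g) g^{-1}  (R = Z, trivial involution).

N = 4

def gr_mul(x, y):
    z = [0] * N
    for g in range(N):
        for h in range(N):
            z[(g + h) % N] += x[g] * y[h]
    return z

def gr_inv(x):
    z = [0] * N
    for g in range(N):
        z[(-g) % N] = x[g] * (-1) ** g
    return z

for _ in range(200):
    x = [random.randint(-9, 9) for _ in range(N)]
    y = [random.randint(-9, 9) for _ in range(N)]
    assert gr_inv(gr_inv(x)) == x
    assert gr_inv(gr_mul(x, y)) == gr_mul(gr_inv(y), gr_inv(x))

print("involution checks passed")
```

In part (a) the check of $\overline{\overline{x}} = x$ silently uses $w(g)w(g^{-1}) = 1$, and the anti-multiplicativity check uses all four conditions on $w$ at once.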
\end{example} Let \begin{eqnarray} & t_g \colon \res_{c_g} \circ I_{\FGP{R}} \to I_{\FGP{R}} \circ \res_{c_g} & \label{t_g} \end{eqnarray} be the natural transformation which assigns to a finitely generated projective $R$-module $P$ the $R$-isomorphism $t_g(P) \colon \res_{c_g} P^* \to (\res_{c_g} P)^*$ which sends the $R$-linear map $f \colon P \to R$ to the $R$-linear map $$t_g(P)(f) \colon \res_{c_g} P \to R, \quad p \mapsto c_g^{-1}(f(p))\left(w(g)\tau(g^{-1},g)\right)^{-1}.$$ We first check that $t_g(P)(f) \colon \res_{c_g} P \to R$ is $R$-linear by the following computation \begin{eqnarray*} t_g(P)(f)(c_g(r)p) & = & c_g^{-1}\left(f(c_g(r)p)\right)\left(w(g)\tau(g^{-1},g)\right)^{-1} \\ & = & c_g^{-1}\left(c_g(r)f(p)\right)\left(w(g)\tau(g^{-1},g)\right)^{-1} \\ & = & c_g^{-1}\left(c_g(r)\right)c_g^{-1}(f(p))\left(w(g)\tau(g^{-1},g)\right)^{-1} \\ & = & rc_g^{-1}(f(p))\left(w(g)\tau(g^{-1},g)\right)^{-1} \\ & = & rt_g(P)(f)(p). \end{eqnarray*} Finally we check that $t_g(P) \colon \res_{c_g} P^* \to (\res_{c_g} P)^*$ is $R$-linear by the following calculation for $f \in P^*$ and $p \in P$ \begin{eqnarray*} \lefteqn{t_g(P)\left((c_g(r) f)\right)(p)} & & \\ & = & c_g^{-1}\left((c_g(r) f)(p)\right)\left(w(g)\tau(g^{-1},g)\right)^{-1} \\ & = & c_g^{-1}\left(f(p)\overline{c_g(r)}\right)\left(w(g)\tau(g^{-1},g)\right)^{-1} \\ & = & c_g^{-1}(f(p))c_g^{-1}(\overline{c_g(r)})\left(w(g)\tau(g^{-1},g)\right)^{-1} \\ & = & c_g^{-1}(f(p))c_g^{-1}\left(c_g\left(\left(w(g)\tau(g^{-1},g)\right)^{-1} \overline{r}\left(w(g)\tau(g^{-1},g)\right)\right)\right) \left(w(g)\tau(g^{-1},g)\right)^{-1} \\ & = & c_g^{-1}(f(p))\left(w(g)\tau(g^{-1},g)\right)^{-1}\overline{r}\left(w(g)\tau(g^{-1},g)\right) \left(w(g)\tau(g^{-1},g)\right)^{-1} \\ & = & c_g^{-1}(f(p))\left(w(g)\tau(g^{-1},g)\right)^{-1}\overline{r} \\ & = & t_g(P)(f)(p)\overline{r} \\ & = & \left(rt_g(P)(f)\right)(p). 
\end{eqnarray*} \begin{definition}\label{def:additive_G-category_with_involution} An \emph{additive $G$-category with involution} $\mathcal{A}$ is an additive $G$-category, which is the same as an additive category with strict $G$-action (see Definition~\ref{def:additive_category_with_weak_(G,v)-action}), together with an involution $(I,E)$ of additive categories (see~\eqref{involution_I_on_an_additive_category} and \eqref{E_belonging_to_I}) with the following properties: $I \colon \mathcal{A} \to \mathcal{A}$ is a contravariant functor of additive $G$-categories, i.e., $R_g \circ I = I \circ R_g$ for all $g \in G$, and $E \colon \id_{\mathcal{A}} \to I \circ I$ is a natural transformation of functors of additive $G$-categories, i.e., for every $g \in G$ and every object $A$ in $\mathcal{A}$ the morphisms $E(R_g(A))$ and $R_g(E(A))$ from $R_g(A)$ to $I^2 \circ R_g(A) = R_g \circ I^2(A)$ agree. \end{definition} \begin{lemma} \label{lem:involution_on_FGP(R_c,w)} The additive category with strict $G$-action $\FGP{R}_{c,\tau}$ of~\eqref{FGP(R)_(c,tau)} inherits the structure of an additive $G$-category with involution in the sense of Definition~\ref{def:additive_G-category_with_involution}. \end{lemma} \begin{proof} We first show that $$I_{\FGP{R}} \colon \FGP{R} \to \FGP{R}$$ together with the collection of the $\{t_g^{-1} \colon I_{\FGP{R}} \circ \res_{c_g} \to \res_{c_g} \circ I_{\FGP{R}} \mid g \in G\}$ (see~\eqref{t_g}) is a contravariant functor of additive categories with weak $G$-action. We have to verify that the diagram~\eqref{F_T_L_compatible} commutes. 
This is equivalent to showing for every finitely generated projective $R$-module $P$ and $g,h \in G$ that the following diagram commutes $$ \xymatrix@!C=10em{\res_{c_{gh}} P^* \ar[rr]^-{t_{gh}(P)} \ar[d]^-{L_{\tau(g,h)}(P^*)} & & (\res_{c_{gh}} P)^* \\ \res_{c_{h}} \res_{c_{g}} P^* \ar[r]^-{\res_{c_h} t_g(P)} & \res_{c_{h}} (\res_{c_{g}} P)^* \ar[r]^-{t_h(\res_{c_g}P)} & (\res_{c_{h}} \res_{c_{g}} P)^* \ar[u]^-{L_{\tau(g,h)}(P)^*} } $$ We start with an element $f \colon P \to R$ in the left upper corner. Its image under the upper horizontal arrow is $p \mapsto c_{gh}^{-1}(f(p))\left(w(gh)\tau((gh)^{-1},gh)\right)^{-1}$. Next we list successively what its image looks like if we go in the anticlockwise direction from the left upper corner to the right upper corner. We first get $p \mapsto f(p)\overline{\tau (g,h)}$. After the second map we get $p \mapsto c_g^{-1}\left(f(p)\overline{\tau(g,h)}\right)\left(w(g)\tau(g^{-1},g)\right)^{-1}$. After applying the third map we obtain $p \mapsto c_h^{-1}\left(c_g^{-1}\left(f(p)\overline{\tau(g,h)}\right) \left(w(g)\tau(g^{-1},g)\right)^{-1}\right) \left(w(h)\tau(h^{-1},h)\right)^{-1}$. Finally we get $p \mapsto c_h^{-1}\left(c_g^{-1}\left(f(\tau(g,h)p)\overline{\tau(g,h)}\right) \left(w(g)\tau(g^{-1},g)\right)^{-1}\right) \left(w(h)\tau(h^{-1},h)\right)^{-1}$. Since $f$ lies in $P^*$, we have $f(\tau(g,h)p) = \tau(g,h)f(p)$. Hence it suffices to show for all $r \in R$ \begin{multline*} c_h^{-1}\left(c_g^{-1}\left(\tau(g,h)r\overline{\tau(g,h)}\right) \left(w(g)\tau(g^{-1},g)\right)^{-1}\right) \left(w(h)\tau(h^{-1},h)\right)^{-1} \\ = c_{gh}^{-1}(r)\left(w(gh)\tau((gh)^{-1},gh)\right)^{-1}. \end{multline*} (Notice that now $f$ has been eliminated.) 
By applying $c_{gh}$ we see that this is equivalent to showing \begin{multline*} c_{gh}\left(c_h^{-1}\left(c_g^{-1}\left(\tau(g,h)r\overline{\tau(g,h)}\right)\right)\right) \\ = rc_{gh}\left(\left(w(gh)\tau((gh)^{-1},gh)\right)^{-1}\left(w(h)\tau(h^{-1},h)\right) c_h^{-1}\left(w(g)\tau(g^{-1},g)\right)\right). \end{multline*} From the relation~\eqref{tau_and_c_group_homo} we conclude that $c_{gh} \circ c_{h^{-1}} \circ c_{g^{-1}}(s) = \tau(g,h)^{-1}s\tau(g,h)$ holds for all $s \in R$. Hence it remains to show \begin{multline*} \tau(g,h)^{-1}\left(\tau(g,h)r\overline{\tau(g,h)}\right) \tau(g,h) \\ = rc_{gh}\left(\left(w(gh)\tau((gh)^{-1},gh)\right)^{-1}w(h)\tau(h^{-1},h) c_h^{-1}\left(w(g)\tau(g^{-1},g)\right)\right). \end{multline*} This reduces to proving for $g,h \in G$ \begin{multline*} \overline{\tau(g,h)} \tau(g,h) \\ = c_{gh}\left(\tau((gh)^{-1},gh)^{-1}w(gh)^{-1} w(h)\tau(h^{-1},h) c_h^{-1}\left(w(g)\tau(g^{-1},g)\right)\right). \end{multline*} (Notice that now $r$ has been eliminated.) 
By inserting condition~\eqref{w(gh)} and the conclusions $c_{\tau(h^{-1},h)} \circ c_h^{-1} = c_{h^{-1}}$ and $c_{\tau((gh)^{-1},gh)} \circ c_{gh}^{-1} = c_{(gh)^{-1}}$ from conditions~\eqref{tau_and_c_group_homo} and~\eqref{c_e-Is_id} we get \begin{eqnarray*} \lefteqn{w(gh)^{-1} w(h)\tau(h^{-1},h) c_h^{-1}\left(w(g)\tau(g^{-1},g)\right)} & & \\ & = & \left(w(h)c_{h^{-1}}(w(g))\tau (h^{-1},g^{-1}) c_{(gh)^{-1}}\left(\overline{\tau(g,h)}\right)^{-1}\right)^{-1}w(h) \\ & & \hspace{30mm} \tau(h^{-1},h) c_h^{-1}\left(w(g)\tau(g^{-1},g)\right) \tau(h^{-1},h)^{-1} \tau(h^{-1},h) \\ & = & c_{(gh)^{-1}}\left(\overline{\tau(g,h)}\right)\tau (h^{-1},g^{-1})^{-1} c_{h^{-1}}(w(g))^{-1}w(h)^{-1}w(h) \\ & & \hspace{30mm} c_{h^{-1}}\left(w(g)\tau(g^{-1},g)\right)\tau(h^{-1},h) \\ & = & c_{(gh)^{-1}}\left(\overline{\tau(g,h)}\right)\tau (h^{-1},g^{-1})^{-1} c_{h^{-1}}(w(g))^{-1}c_{h^{-1}}(w(g)) \\ & & \hspace{30mm} c_{h^{-1}}\left(\tau(g^{-1},g)\right) \tau(h^{-1},h) \\ & = & c_{(gh)^{-1}}\left(\overline{\tau(g,h)}\right)\tau (h^{-1},g^{-1})^{-1} c_{h^{-1}}\left(\tau(g^{-1},g)\right) \tau(h^{-1},h) \\ & = & \tau((gh)^{-1},gh) c_{gh}^{-1}\left(\overline{\tau(g,h)}\right) \tau((gh)^{-1},gh)^{-1}\tau (h^{-1},g^{-1})^{-1} \\ & & \hspace{30mm} c_{h^{-1}}\left(\tau(g^{-1},g)\right) \tau(h^{-1},h). \end{eqnarray*} This implies \begin{eqnarray*} \lefteqn{c_{gh}\left(\tau((gh)^{-1},gh)^{-1}w(gh)^{-1} w(h)\tau(h^{-1},h) c_h^{-1}\left(w(g)\tau(g^{-1},g)\right)\right)} & & \\ & = & c_{gh}\left(\tau((gh)^{-1},gh)^{-1}\tau((gh)^{-1},gh) c_{gh}^{-1} \left(\overline{\tau(g,h)}\right)\tau((gh)^{-1},gh)^{-1}\right. \\ & & \hspace{30mm} \left.\tau (h^{-1},g^{-1})^{-1}c_{h^{-1}}\left(\tau(g^{-1},g)\right) \tau(h^{-1},h) \right) \\ & = & \overline{\tau(g,h)}c_{gh}\left(\tau((gh)^{-1},gh)^{-1}\tau (h^{-1},g^{-1})^{-1} c_{h^{-1}}\left(\tau(g^{-1},g)\right) \tau(h^{-1},h) \right). 
\end{eqnarray*} Hence it remains to show $$\tau(g,h) = c_{gh}\left(\tau((gh)^{-1},gh)^{-1}\tau (h^{-1},g^{-1})^{-1} c_{h^{-1}}\left(\tau(g^{-1},g)\right) \tau(h^{-1},h) \right).$$ (Notice that we have eliminated any expression involving the involution.) From conditions~\eqref{tau_and_c_group_homo}, \eqref{tau_and_c_cocykel} and~\eqref{c_e-Is_id} we conclude \begin{eqnarray*} \tau(h^{-1},g^{-1}) \tau((gh)^{-1},g) & = & c_{h^{-1}}(\tau(g^{-1},g)); \\ \tau((gh)^{-1},g)\tau(h^{-1},h) & = & c_{(gh)^{-1}}(\tau(g,h))\tau((gh)^{-1},gh); \\ c_{gh}^{-1} & = & c_{\tau((gh)^{-1},gh)^{-1}} \circ c_{(gh)^{-1}}. \end{eqnarray*} Hence \begin{eqnarray*} \lefteqn{\tau((gh)^{-1},gh)^{-1}\tau (h^{-1},g^{-1})^{-1} c_{h^{-1}}\left(\tau(g^{-1},g)\right) \tau(h^{-1},h)} & & \\ & = & \tau((gh)^{-1},gh)^{-1}\tau (h^{-1},g^{-1})^{-1} \tau(h^{-1},g^{-1}) \tau((gh)^{-1},g) \tau(h^{-1},h) \\ & = & \tau((gh)^{-1},gh)^{-1}\tau((gh)^{-1},g) \tau(h^{-1},h) \\ & = & \tau((gh)^{-1},gh)^{-1}c_{(gh)^{-1}}(\tau(g,h))\tau((gh)^{-1},gh) \\ & = & c_{gh}^{-1}(\tau(g,h)). \end{eqnarray*} This finishes the proof of the commutativity of the diagram~\eqref{F_T_L_compatible}. Next we show that $E_{\FGP{R}} \colon \id_{\FGP{R}} \to I_{\FGP{R}} \circ I_{\FGP{R}}$ is a natural transformation of contravariant functors of additive categories with weak $G$-action. We have to show that the diagram~\eqref{F_I_T_add_G-cat} commutes. This is equivalent to showing for every finitely generated projective $R$-module $P$ that the following diagram commutes $$ \xymatrix@!C=10em{\res_{c_g} P \ar[r]^-{E_{\FGP{R}}(\res_{c_g} P)} \ar[d]^-{\res_{c_g} E_{\FGP{R}}(P)} & (\res_{c_g} P)^{**} \ar[d]^-{t_g(P)^*} \\ \res_{c_g} \left(P^{**}\right) \ar[r]^-{t_g(P^*)} & (\res_{c_g} P^*)^* } $$ We start with an element $p \in P$ in the left upper corner. It is sent under the left vertical arrow to the element given by $f \mapsto f(p)$. 
The image of this element under the lower horizontal arrow is given by $f \mapsto c_g^{-1}(f(p))\left(w(g)\tau(g^{-1},g)\right)^{-1}$. The image of $p \in P$ under the upper horizontal arrow is $f \mapsto f(p)$. The image of this element under the right vertical arrow sends $f$ to $t_g(P)(f)(p) = c_g^{-1}(f(p))\left(w(g)\tau(g^{-1},g)\right)^{-1}$. From the naturality of the construction of the additive category with strict $G$-action $\FGP{R}_{c,\tau} := \mathcal{S}(\FGP{R})$ (see Section~\ref{sec:Making_an_additive_categories_with_weak_(G,v)-action_strict}) we conclude that $(I_{\FGP{R}},\{t_g \mid g \in G\})$ induces a functor of additive categories with strict $G$-action $$I_{\FGP{R}_{c,\tau}} \colon \FGP{R}_{c,\tau} \to \FGP{R}_{c,\tau}$$ and $E_{\FGP{R}}$ induces a natural transformation of functors of additive categories with strict $G$-action $$E_{\FGP{R}_{c,\tau}} \colon \id_{\FGP{R}_{c,\tau}} \to I_{\FGP{R}_{c,\tau}} \circ I_{\FGP{R}_{c,\tau}}.$$ It remains to prove that condition~\eqref{E(I(A))_is_I(E(A)-1)} holds for $(I_{\FGP{R}_{c,\tau}},E_{\FGP{R}_{c,\tau}})$. But this follows easily from the fact that condition~\eqref{E(I(A))_is_I(E(A)-1)} holds for $(I_{\FGP{R}},E_{\FGP{R}})$. \end{proof} The additive $G$-category with involution constructed in Lemma~\ref{lem:involution_on_FGP(R_c,w)} will be denoted in the sequel by \begin{eqnarray} & \FGP{R}_{c,\tau,w}.& \label{FGP(R)_(c,tau,w)} \end{eqnarray} \typeout{------------------------- Section 5: Connected groupoids and additive categories ----------------------} \section{Connected groupoids and additive categories} \label{sec:Connected_groupoids_and_additive_categories} Groupoids are always to be understood to be small. A groupoid is called \emph{connected} if for two objects $x$ and $y$ there exists a morphism $f \colon x \to y$. Let $\mathcal{G}$ be a connected groupoid. Let $\matheurm{Add\text{-}Cat}$ be the category of small additive categories. 
Given a contravariant functor $F \colon \mathcal{G} \to \matheurm{Add\text{-}Cat}$, we define a new small additive category, which we call its \emph{homotopy colimit} (see for instance~\cite{Thomason(1979)}) \begin{eqnarray} & \intgf{\mathcal{G}}{F} & \label{int_calg_F} \end{eqnarray} as follows. An object is a pair $(x,A)$ consisting of an object $x$ in $\mathcal{G}$ and an object $A$ in $F(x)$. A morphism in $\intgf{\mathcal{G}}{F}$ from $(x,A)$ to $(y,B)$ is a formal sum $$\sum_{f \in \mor_{\mathcal{G}}(x,y)} f \cdot \phi_f$$ where $\phi_f \colon A \to F(f)(B)$ is a morphism in $F(x)$ and only finitely many coefficients $\phi_f$ are different from zero. The composition of a morphism $\sum_{f \in \mor_{\mathcal{G}}(x,y)} f \cdot \phi_f \colon (x,A) \to (y,B)$ and a morphism $\sum_{g \in \mor_{\mathcal{G}}(y,z)} g \cdot \psi_g \colon (y,B) \to (z,C)$ is given by the formula $$\sum_{h \in \mor_{\mathcal{G}}(x,z)} h \cdot \Biggl(\sum_{\substack{f \in \mor_{\mathcal{G}}(x,y)\\g \in \mor_{\mathcal{G}}(y,z)\\ h = g\circ f}} F(f)(\psi_g) \circ \phi_f\Biggr).$$ The decisive special case is $$(g \cdot \psi ) \circ (f \cdot \phi) = (g \circ f) \cdot (F(f)(\psi) \circ \phi).$$ The $\mathbb{Z}$-module structure on $\mor_{\intgf{\mathcal{G}}{F}}((x,A),(y,B))$ is given by \begin{eqnarray*} \left(\sum_{f \in \mor_{\mathcal{G}}(x,y)} f \cdot \phi_f\right) + \left( \sum_{f \in \mor_{\mathcal{G}}(x,y)} f \cdot \psi_f\right) & = & \sum_{f \in \mor_{\mathcal{G}}(x,y)} f \cdot (\phi_f + \psi_f). \end{eqnarray*} A model for the sum of two objects $(x,A)$ and $(x,B)$ is $(x,A \oplus B)$ if $A \oplus B$ is a model for the sum of $A$ and $B$ in $F(x)$. Since $\mathcal{G}$ is by assumption connected, we can choose for any object $(y,B)$ in $\intgf{\mathcal{G}}{F}$ and any object $x$ in $\mathcal{G}$ an isomorphism $f \colon x \to y$ and the objects $(x,F(f)(B))$ and $(y,B)$ in $\intgf{\mathcal{G}}{F}$ are isomorphic. 
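The composition formula admits a computable special case. Take $\mathcal{G}$ to be a group $G$ regarded as a groupoid with one object and take $F$ to be the constant functor whose value is the category of finitely generated free $\mathbb{Z}$-modules, with every $F(f)$ the identity functor. A morphism is then a formal sum of integer matrices indexed by $G$, and the composition above becomes a matrix-valued convolution. The Python sketch below is our own illustration (the trivialization of $F$ and the choice $G = \mathbb{Z}/3$ are ours); it checks that this composition is associative, and also that the pushforward $W_*$ along a group homomorphism, which is introduced further below, is compatible with composition.

```python
import random
random.seed(0)

def matmul(a, b):
    # 2x2 integer matrices as nested lists
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

ZERO = [[0, 0], [0, 0]]

def compose(psi, phi, n):
    # (sum_g g.psi_g) o (sum_f f.phi_f) = sum_h h.(sum_{gf=h} F(f)(psi_g) o phi_f);
    # every F(f) is the identity here, so the inner term is psi_g o phi_f.
    out = {h: ZERO for h in range(n)}
    for g in range(n):
        for f in range(n):
            out[(g + f) % n] = matadd(out[(g + f) % n], matmul(psi[g], phi[f]))
    return out

def rand_mor(n):
    return {g: [[random.randint(-3, 3) for _ in range(2)] for _ in range(2)]
            for g in range(n)}

# associativity of composition for G = Z/3
for _ in range(50):
    a, b, c = rand_mor(3), rand_mor(3), rand_mor(3)
    assert compose(compose(c, b, 3), a, 3) == compose(c, compose(b, a, 3), 3)

# W_* for W: Z/4 -> Z/2, g -> g mod 2: sum the coefficients over the fibers
def push(phi):
    return {0: matadd(phi[0], phi[2]), 1: matadd(phi[1], phi[3])}

for _ in range(50):
    phi, psi = rand_mor(4), rand_mor(4)
    assert push(compose(psi, phi, 4)) == compose(push(psi), push(phi), 2)

print("composition and pushforward checks passed")
```

With a nontrivial $F$ the inner term $F(f)(\psi_g)$ would twist each summand before the matrix product is taken; the shape of the convolution stays the same.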
Namely, $f \cdot \id_{F(f)(B)}$ is an isomorphism $(x,F(f)(B)) \xrightarrow{\cong} (y,B)$ whose inverse is $f^{-1} \cdot \id_{B}$. Hence the direct sum of two arbitrary objects $(x,A)$ and $(y,B)$ exists in $\intgf{\mathcal{G}}{F}$. Notice that we need the connectedness of $\mathcal{G}$ only to show the existence of a direct sum. This will become important later when we deal with non-connected groupoids. This construction is functorial in $F$. Namely, if $S \colon F_0 \to F_1$ is a natural transformation of contravariant functors $\mathcal{G} \to \matheurm{Add\text{-}Cat}$, then it induces a functor \begin{eqnarray} & \intgf{\mathcal{G}}{S} \colon \intgf{\mathcal{G}}{F_0} \to \intgf{\mathcal{G}}{F_1} & \label{intgf(calg)(S)} \end{eqnarray} of additive categories as follows. It sends an object $(x,A)$ in $\intgf{\mathcal{G}}{F_0}$ to the object $(x,S(x)(A))$ in $\intgf{\mathcal{G}}{F_1}$. A morphism $\sum_{f \in \mor_{\mathcal{G}}(x,y)} f \cdot \phi_f \colon (x,A) \to (y,B)$ is sent to the morphism $$\sum_{f \in \mor_{\mathcal{G}}(x,y)} f \cdot S(x)(\phi_f) \colon (x,S(x)(A)) \to (y,S(y)(B)).$$ This makes sense since $S(x)(\phi_f)$ is a morphism in $F_1(x)$ from $S(x)(A)$ to $S(x)(F_0(f)(B)) = F_1(f)(S(y)(B))$. The decisive special case is that $\intgf{\mathcal{G}}{S}$ sends $(f \colon x \to y) \cdot \phi$ to $(f \colon x \to y) \cdot S(x)(\phi)$. One easily checks that $\intgf{\mathcal{G}}{S}$ is compatible with the structures of additive categories and we have \begin{eqnarray} \left(\intgf{\mathcal{G}}{S_2}\right) \circ \left(\intgf{\mathcal{G}}{S_1}\right) & = & \intgf{\mathcal{G}}{(S_2 \circ S_1)}; \label{int_S_2_circint_S_1_is_int_S_2_circS_1} \\ \intgf{\mathcal{G}}{\id_F} & = & \id_{\intgf{\mathcal{G}}{F}}. \label{int_id_is_id} \end{eqnarray} The construction is also functorial in $\mathcal{G}$. Namely, let $W \colon \mathcal{G}_1 \to \mathcal{G}_2$ be a covariant functor of groupoids. 
Then we obtain a covariant functor \begin{eqnarray} & W_* \colon \intgf{\mathcal{G}_1}{F \circ W} \to \intgf{\mathcal{G}_2}{F} & \label{W_ast} \end{eqnarray} of additive categories as follows. An object $(x_1,A)$ in $\intgf{\mathcal{G}_1}{F \circ W}$ is sent to the object $(W(x_1),A)$ in $\intgf{\mathcal{G}_2}{F}$. A morphism $\sum_{f \in \mor_{\mathcal{G}_1}(x_1,y_1)} f \cdot \phi_f \colon (x_1,A) \to (y_1,B)$ in $\intgf{\mathcal{G}_1}{F \circ W}$ is sent to the morphism $$\sum_{f \in \mor_{\mathcal{G}_2}(W(x_1),W(y_1))} f \cdot \Biggl(\sum_{\substack{f_1 \in \mor_{\mathcal{G}_1}(x_1,y_1)\\ W(f_1) = f}} \phi_{f_1}\Biggr) \colon(W(x_1),A) \to (W(y_1),B)$$ in $\intgf{\mathcal{G}_2}{F}$. Here the decisive special case is that $W_*$ sends the morphism $f \cdot \phi$ to $W(f) \cdot \phi$. One easily checks that $W_*$ is compatible with the structures of additive categories and we have for covariant functors $W_1 \colon \mathcal{G}_1 \to \mathcal{G}_2$, $W_2 \colon \mathcal{G}_2 \to \mathcal{G}_3$ and a contravariant functor $F \colon \mathcal{G}_3 \to \matheurm{Add\text{-}Cat}$ \begin{eqnarray} (W_2)_* \circ (W_1)_* & = & (W_2 \circ W_1)_*; \label{(W_2)_ast_circ_(W_1)_ast_is_(W_2_circ_W_1)_ast} \\ (\id_{\mathcal{G}})_* & = & \id_{\intgf{\mathcal{G}}{F}}. \label{(id_calg)_ast_is_id_int_calg_F} \end{eqnarray} These two constructions are compatible. Namely, given a natural transformation $S \colon F_1 \to F_2$ of contravariant functors $\mathcal{G} \to \matheurm{Add\text{-}Cat}$ and a covariant functor $W \colon \mathcal{G}_1 \to \mathcal{G}$, we get \begin{eqnarray} \left(\intgf{\mathcal{G}}{S} \right) \circ W_* & = & W_* \circ \left(\intgf{\mathcal{G}_1}{(S \circ W)} \right). 
\label{compatibility_of_W_ast_and_int_S} \end{eqnarray} A functor $F \colon \mathcal{C}_0 \to \mathcal{C}_1$ of categories is called an \emph{equivalence} if there exists a functor $F' \colon \mathcal{C}_1 \to \mathcal{C}_0$ with the property that $F' \circ F$ is naturally equivalent to the identity functor $\id_{\mathcal{C}_0}$ and $F \circ F'$ is naturally equivalent to the identity functor $\id_{\mathcal{C}_1}$. A functor $F$ is an equivalence if and only if it induces a bijection on the isomorphism classes of objects and is \emph{full} and \emph{faithful}, i.e., for any two objects $c,d$ in $\mathcal{C}_0$ the induced map $\mor_{\mathcal{C}_0}(c,d) \to \mor_{\mathcal{C}_1}(F(c),F(d))$ is bijective. If $\mathcal{C}_0$ and $\mathcal{C}_1$ come with an additional structure such as of an additive category (with involution) and $F$ is compatible with this structure, we require that $F'$ and the two natural equivalences $F' \circ F \simeq \id_{\mathcal{C}_0}$ and $F \circ F' \simeq \id_{\mathcal{C}_1}$ are compatible with these. In this case it is still true that $F$ is an equivalence of categories with this additional structure if and only if $F$ induces a bijection on the isomorphism classes of objects and is full and faithful. One easily checks \begin{lemma} \label{lem:(F_1)_ast_and_int_calg_S_and_equivalences} \begin{enumerate} \item \label{lem:(F_1)_ast_and_int_calg_S_and_equivalences:(F_1)_ast} Let $W \colon \mathcal{G}_1 \to \mathcal{G}$ be an equivalence of connected groupoids. Let $F \colon \mathcal{G} \to \matheurm{Add\text{-}Cat}$ be a contravariant functor. Then $$W_* \colon \intgf{\mathcal{G}_1}{F \circ W} \to \intgf{\mathcal{G}}{F}$$ is an equivalence of additive categories. \item \label{lem:(F_1)_ast_and_int_calg_S_and_equivalences:int_calg_S} Let $\mathcal{G}$ be a connected groupoid. 
Let $S \colon F_1 \to F_2$ be a transformation of contravariant functors $\mathcal{G} \to \matheurm{Add\text{-}Cat}$ such that for every object $x$ in $\mathcal{G}$ the functor $S(x) \colon F_1(x) \to F_2(x)$ is an equivalence of additive categories. Then $$\intgf{\mathcal{G}}{S} \colon \intgf{\mathcal{G}}{F_1} \to \intgf{\mathcal{G}}{F_2}$$ is an equivalence of additive categories. \end{enumerate} \end{lemma} \typeout{------------------------- Section 6: From crossed product rings to additive categories ----------------------} \section{From crossed product rings to additive categories} \label{sec:From_crossed_product_rings_to_additive_categories} \begin{example}\label{exa:R_ast_c,tau_as_add_cat_with_G-action} Here is our main example of a contravariant functor $\mathcal{G} \to \matheurm{Add\text{-}Cat}$. Notice that a group $G$ is the same as a groupoid with one object and hence a contravariant functor from a group $G$ to $\matheurm{Add\text{-}Cat}$ is the same as an additive $G$-category, which is the same as an additive category with strict $G$-action (see Definition~\ref{def:additive_category_with_weak_(G,v)-action}). Let $R$ be a ring together with maps of sets \begin{eqnarray*} c\colon G & \to & \aut(R), \quad g \mapsto c_g; \\ \tau\colon G \times G & \to & R^{\times} \end{eqnarray*} satisfying~\eqref{tau_and_c_group_homo}, \eqref{tau_and_c_cocykel}, \eqref{c_e-Is_id}, \eqref{tau(g,e)_is_1} and \eqref{tau(e,g)_is_1}. We have introduced the additive $G$-category $\FGP{R}_{c,\tau}$ in~\eqref{FGP(R)_(c,tau)}. 
All the constructions restrict to the subcategory $\FGF{R} \subseteq \FGP{R}$ of finitely generated free $R$-modules and lead to the additive $G$-category \begin{eqnarray} \FGF{R}_{c,\tau} & := & \mathcal{S}(\FGF{R}). \label{FGF(R)_(c,tau)} \end{eqnarray} \end{example} \begin{lemma} \label{lem:int_G_R_FGF(R)_c,tau_is_R_ast_c,tauG-FGF} Consider the data $(R,c,\tau)$ and the additive category $\FGF{R}_{c,\tau}$ appearing in Example~\ref{exa:R_ast_c,tau_as_add_cat_with_G-action}. Let $\intgf{G}{\FGF{R}_{c,\tau}}$ be the additive category defined in~\eqref{int_calg_F}. Since $G$ regarded as a groupoid has precisely one object, we can (and will) identify the set of objects in $\intgf{G}{\FGF{R}_{c,\tau}}$ with the set of objects in $\FGF{R}_{c,\tau}$ which consists of pairs $(M,g)$ for $M$ a finitely generated free $R$-module and $g \in G$. Denote by $\left(\intgf{G}{\FGF{R}_{c,\tau}}\right)_e$ the full subcategory of $\intgf{G}{\FGF{R}_{c,\tau}}$ consisting of objects of the shape $(M,e)$ for $e \in G$ the unit element. Denote by $R \ast G = R \ast_{c,\tau} G$ the crossed product ring (see~\eqref{R_ast_G}). Then \begin{enumerate} \item \label{lem:int_G_R_FGF(R)_c,tau_is_R_ast_c,tauG-FGF:R_ast_G} There is an equivalence of additive categories $$ \alpha \colon \left(\intgf{G}{\FGF{R}_{c,\tau}}\right)_e \to \FGF{R \ast_{c,\tau}G}; $$ \item \label{lem:int_G_R_FGF(R)_c,tau_is_R_ast_c,tauG-FGF:e} The inclusion $$\left(\intgf{G}{\FGF{R}_{c,\tau}}\right)_e \to \intgf{G}{\FGF{R}_{c,\tau}}$$ is an equivalence of additive categories. \end{enumerate} \end{lemma} \begin{proof}\ref{lem:int_G_R_FGF(R)_c,tau_is_R_ast_c,tauG-FGF:R_ast_G} An object $(M,e)$ in $\left(\intgf{G}{\FGF{R}_{c,\tau}}\right)_e$ is sent under $\alpha$ to the finitely generated free $R \ast_{c,\tau}G$-module $R \ast_{c,\tau}G \otimes_R M$. 
A morphism $\phi = \sum_{g \in G}g \cdot \left(\phi_g \colon M \to \res_{c_g}(N)\right)$ from $(M,e)$ to $(N,e)$ is sent to the $R \ast_{c,\tau}G$-homomorphism $$\alpha(\phi) \colon R \ast_{c,\tau}G \otimes_R M \to R \ast_{c,\tau}G \otimes_R N, \quad u \otimes x \mapsto \sum_{g \in G} u\cdot \tau(g^{-1},g)^{-1} \cdot g^{-1} \otimes \phi_g(x)$$ for $u \in R \ast_{c,\tau}G$ and $x \in M$. This is well-defined, i.e., compatible with the tensor relation, by the following calculation for $r \in R$ using~\eqref{tau_and_c_group_homo} and \eqref{c_e-Is_id}. \begin{eqnarray*} \lefteqn{u\cdot \tau(g^{-1},g)^{-1} \cdot g^{-1} \otimes \phi_g(rx)} & & \\ & = & u\cdot \tau(g^{-1},g)^{-1} \cdot g^{-1} \otimes c_g(r) \phi_g(x) \\ & = & u\cdot \tau(g^{-1},g)^{-1} \cdot g^{-1} \cdot c_g(r) \otimes \phi_g(x) \\ & = & u\cdot \tau(g^{-1},g)^{-1} c_{g^{-1}}(c_g(r)) \cdot g^{-1}\otimes \phi_g(x) \\ & = & u\cdot \tau(g^{-1},g)^{-1} c_{g^{-1}}(c_g(r)) \tau(g^{-1},g)\tau(g^{-1},g)^{-1} \cdot g^{-1}\otimes \phi_g(x) \\ & = & u \cdot c_{g^{-1}g}(r)\tau(g^{-1},g)^{-1} \cdot g^{-1}\otimes \phi_g(x) \\ & = & u \cdot c_{e}(r)\tau(g^{-1},g)^{-1} \cdot g^{-1} \otimes \phi_g(x) \\ & = & (u \cdot r) \tau(g^{-1},g)^{-1} \cdot g^{-1}\otimes \phi_g(x). \end{eqnarray*} Next we show that $\alpha$ is a covariant functor. Obviously $\alpha(\id_{(M,e)}) = \id_{\alpha(M,e)}$. Consider morphisms $\phi = \sum_{g \in G} g \cdot \phi_g \colon (M,e) \to (N,e)$ and $\psi = \sum_{g \in G} g \cdot \psi_g\colon (N,e) \to (P,e)$ in $\left(\intgf{G}{\FGF{R}_{c,\tau}}\right)_e$. 
A direct computation shows for $u \in R \ast_{c,\tau}G$ and $x \in M$ \begin{eqnarray*} \lefteqn{\alpha(\psi)\left(\alpha(\phi)(u \otimes x)\right)} & & \\ & = & \alpha(\psi)\left(\sum_{k \in G} u\cdot \tau(k^{-1},k)^{-1} \cdot k^{-1} \otimes \phi_k(x)\right) \\ & = & \sum_{h \in G}\sum_{k \in G} u\cdot \tau(k^{-1},k)^{-1} \cdot k^{-1}\cdot \tau(h^{-1},h)^{-1} \cdot h^{-1} \otimes \psi_h\circ \phi_k(x) \\ & = & \sum_{h,k \in G} u\cdot \tau(k^{-1},k)^{-1} c_{k^{-1}}(\tau(h^{-1},h)^{-1}) \cdot k^{-1}\cdot h^{-1} \otimes \psi_h\circ \phi_k(x) \\ & = & \sum_{h,k \in G} u\cdot \tau(k^{-1},k)^{-1} c_{k^{-1}}(\tau(h^{-1},h)^{-1}) \tau(k^{-1},h^{-1}) \cdot (hk)^{-1} \otimes \psi_h\circ \phi_k(x) \end{eqnarray*} and \begin{eqnarray*} \lefteqn{\alpha(\psi \circ \phi)(u \otimes x)} & & \\ & = & \sum_{g \in G} u\cdot \tau(g^{-1},g)^{-1} \cdot g^{-1} \otimes (\psi \circ \phi)_g(x) \\ & = & \sum_{g \in G} u\cdot \tau(g^{-1},g)^{-1} \cdot g^{-1} \otimes \left(\sum_{\substack{h,k \in G,\\hk = g}} r_k(\psi_{h}) \circ \phi_{k}(x)\right) \\ & = & \sum_{g \in G} u\cdot \tau(g^{-1},g)^{-1} \cdot g^{-1} \otimes \left(\sum_{\substack{h,k \in G,\\hk = g}} \tau(h,k)^{-1} \psi_{h} \circ \phi_{k}(x)\right) \\ & = & \sum_{g \in G} \sum_{\substack{h,k \in G,\\hk = g}} u\cdot \tau(g^{-1},g)^{-1} \cdot g^{-1} \cdot \tau(h,k)^{-1} \otimes \psi_{h} \circ \phi_{k}(x) \\ & = & \sum_{g \in G} \sum_{\substack{h,k \in G,\\hk = g}} u\cdot \tau(g^{-1},g)^{-1}c_{g^{-1}}(\tau(h,k)^{-1}) \cdot g^{-1} \otimes \psi_{h} \circ \phi_{k}(x) \\ & = & \sum_{h,k \in G} u\cdot \tau((hk)^{-1},hk)^{-1}c_{(hk)^{-1}}(\tau(h,k)^{-1}) \cdot (hk)^{-1} \otimes \psi_{h} \circ \phi_{k}(x). 
\end{eqnarray*} Hence it remains to show for $h,k \in G$ \begin{eqnarray*} \tau(k^{-1},k)^{-1} c_{k^{-1}}(\tau(h^{-1},h)^{-1}) \tau(k^{-1},h^{-1}) & = & \tau((hk)^{-1},hk)^{-1}c_{(hk)^{-1}}(\tau(h,k)^{-1}), \end{eqnarray*} or, equivalently, \begin{eqnarray*} \tau(k^{-1},h^{-1})c_{(hk)^{-1}}(\tau(h,k))\tau((hk)^{-1},hk) & = & c_{k^{-1}}(\tau(h^{-1},h)) \tau(k^{-1},k). \end{eqnarray*} Since~\eqref{tau_and_c_cocykel} yields $$\tau((hk)^{-1},h)\tau(k^{-1},k) = c_{(hk)^{-1}}(\tau(h,k))\tau((hk)^{-1},hk),$$ it suffices to show \begin{eqnarray*} \tau(k^{-1},h^{-1})\tau((hk)^{-1},h) & = & c_{k^{-1}}(\tau(h^{-1},h)). \end{eqnarray*} But this follows from~\eqref{tau_and_c_cocykel} and~\eqref{tau(g,e)_is_1}. This finishes the proof that $\alpha$ is a covariant functor. Obviously it is compatible with the structures of an additive category. One easily checks that $\alpha$ induces a bijection between the isomorphism classes of objects. In order to show that $\alpha$ is an equivalence, we have to show for two objects $(M,e)$ and $(N,e)$ that $\alpha$ induces a bijection $$\mor_{\left(\intgf{G}{\FGF{R}_{c,\tau}}\right)_e}((M,e),(N,e)) \xrightarrow{\cong} \hom_{R \ast_{c,\tau}G}(R \ast_{c,\tau}G \otimes_R M, R \ast_{c,\tau}G \otimes_R N).$$ Since $\alpha$ is compatible with the structures of an additive category, it suffices to check this in the special case $M = N = R$, where it is obvious. \\[1mm]\ref{lem:int_G_R_FGF(R)_c,tau_is_R_ast_c,tauG-FGF:e} An object of the shape $(M,g)$ in $\intgf{G}{\FGF{R}_{c,\tau}}$ is isomorphic to the object $(M,e)$, namely an isomorphism $(M,g) \xrightarrow{\cong} (M,e)$ in $\intgf{G}{\FGF{R}_{c,\tau}}$ is given by $g \cdot \id_{\res_{c_g}(M)}$. 
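The cocycle manipulations in this proof are easy to get wrong by hand, and in small cases they can be verified mechanically. The Python sketch below is our own illustration: it takes $G = \mathbb{Z}/2$, $c_1$ complex conjugation and $\tau(1,1) = -1$ (the quaternion crossed product $\mathbb{C} \ast_{c,\tau} \mathbb{Z}/2$, a choice we make here for illustration) and checks the cocycle condition~\eqref{tau_and_c_cocykel} together with the identity $\tau(k^{-1},h^{-1})\tau((hk)^{-1},h) = c_{k^{-1}}(\tau(h^{-1},h))$ used above.

```python
# G = Z/2 (written additively); c_g is complex conjugation for g = 1 and the
# identity for g = 0; tau(1,1) = -1, all other values 1.  These choices are
# ours and give the quaternions as C *_{c,tau} Z/2.

def c(g, z):
    return z.conjugate() if g % 2 == 1 else z

def tau(g, h):
    return -1 if (g % 2 == 1 and h % 2 == 1) else 1

G = (0, 1)
for g in G:
    for h in G:
        for k in G:
            # cocycle condition: tau(g,h) tau(gh,k) = c_g(tau(h,k)) tau(g,hk)
            assert tau(g, h) * tau((g + h) % 2, k) == \
                   c(g, tau(h, k)) * tau(g, (h + k) % 2)

for h in G:
    for k in G:
        # the identity used in the proof; inverses in Z/2 satisfy g^{-1} = g
        assert tau(k, h) * tau((h + k) % 2, h) == c(k, tau(h, h))

print("cocycle identities hold for the quaternion example")
```

Because $c_g$ fixes the real units $\pm 1$, the conjugation in the cocycle condition is invisible in this example; a noncommutative coefficient ring would exercise it nontrivially.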
\end{proof} \typeout{- Section 7: Connected groupoids and additive categories with involutions ---} \section{Connected groupoids and additive categories with involutions} \label{sec:Connected_groupoids_and_additive_categories_with_involutions} Next we want to enrich the constructions of Section~\ref{sec:Connected_groupoids_and_additive_categories} to additive categories with involutions. Let $\matheurm{Add\text{-}Cat_{\text{inv}}}$ be the category of additive categories with involution. Given a contravariant functor $(F,T) \colon \mathcal{G} \to \matheurm{Add\text{-}Cat_{\text{inv}}}$, we want to define on the additive category $\intgf{\mathcal{G}}{F}$ the structure of an additive category with involution. Here the pair $(F,T)$ means that we assign to every object $x$ in $\mathcal{G}$ an additive category with involution $F(x)$ and for every morphism $f \colon x \to y$ in $\mathcal{G}$ we have a functor of additive categories with involution $(F(f),T(f)) \colon F(y) \to F(x)$. Next we construct for a contravariant functor $(F,T) \colon \mathcal{G} \to \matheurm{Add\text{-}Cat_{\text{inv}}}$ an involution of additive categories \begin{eqnarray} & (I_{\intgf{\mathcal{G}}{F}},E_{\intgf{\mathcal{G}}{F}}) & \label{int_calg_F,E_int_calg_F} \end{eqnarray} on the additive category $\intgf{\mathcal{G}}{F}$ which we have introduced in~\eqref{int_calg_F}. On objects we put $$I_{\intgf{\mathcal{G}}{F}}(x,A) := (x,I_{F(x)}(A)) = (x,A^*).$$ Let $\phi = \sum_{f \in \mor_{\mathcal{G}}(x,y)}f \cdot \phi_f \colon (x,A) \to (y,B)$ be a morphism in $\intgf{\mathcal{G}}{F}$. 
Define $I_{\intgf{\mathcal{G}}{F}}(\phi)$ to be the morphism $\phi^* = \sum_{f \in \mor_{\mathcal{G}}(y,x)} f \cdot (\phi^*)_f \colon (y,B^*) \to (x,A^*)$ in $\intgf{\mathcal{G}}{F}$ whose component for $f \in \mor_{\mathcal{G}}(y,x)$ is given by the composite \begin{multline*}(\phi^*)_f \colon B^* = F(f)\left(F(f^{-1})(B^*)\right) \xrightarrow{F(f)(T(f^{-1})(B))} F(f)\left(F(f^{-1})(B)^*\right) \\ \xrightarrow{F(f)\left((\phi_{f^{-1}})^*\right)} F(f)(A^*). \end{multline*} Next we show that $I_{\intgf{\mathcal{G}}{F}}$ is a contravariant functor. Obviously $I_{\intgf{\mathcal{G}}{F}}$ sends the identity $\id_{A}$ to $\id_{I_{\intgf{\mathcal{G}}{F}}(A)}$. We have to show $$I_{\intgf{\mathcal{G}}{F}}(\psi \circ \phi) = I_{\intgf{\mathcal{G}}{F}}(\phi) \circ I_{\intgf{\mathcal{G}}{F}}(\psi)$$ for morphisms $\phi = \sum_{h \in \mor_{\mathcal{G}}(x,y)} h \cdot \phi_h \colon (x,A) \to (y,B)$ and $\psi = \sum_{k \in \mor_{\mathcal{G}}(y,z)} k \cdot \psi_k \colon (y,B) \to (z,C)$, or in short notation $(\psi \circ \phi)^* = \phi^* \circ \psi^*$. By definition $(\phi^* \circ \psi^*) = \sum_{g \in \mor_{\mathcal{G}}(z,x)} g \cdot (\phi^* \circ \psi^*)_g$ for $$(\phi^* \circ \psi^*)_{g} := \sum_{\substack{k \in \mor_{\mathcal{G}}(z,y),\\h \in \mor_{\mathcal{G}}(y,x),\\hk = g}} F(k)((\phi^*)_{h}) \circ (\psi^*)_{k}.$$ By definition \begin{multline*} (\psi^*)_{k} \colon C^* = F(k)(F(k^{-1})(C^*)) \xrightarrow{F(k)(T(k^{-1})(C))} F(k)(F(k^{-1})(C)^*) \\ \xrightarrow{F(k)((\psi_{k^{-1}})^*)} F(k)(B^*) \end{multline*} and \begin{multline*} (\phi^*)_{h} \colon B^* = F(h)(F(h^{-1})(B^*)) \xrightarrow{F(h)(T(h^{-1})(B))} F(h)(F(h^{-1})(B)^*) \\ \xrightarrow{F(h)((\phi_{h^{-1}})^*)} F(h)(A^*). 
\end{multline*} Hence the component $(\phi^* \circ \psi^*)_{g}$ of $(\phi^* \circ \psi^*)$ at $g \colon z \to x$ is given by the sum of morphisms from $C^*$ to $F(g)(A^*)$ \begin{multline*} \sum_{\substack{k \in \mor_{\mathcal{G}}(z,y),\\h \in \mor_{\mathcal{G}}(y,x),\\hk = g}} F(k)\left(F(h)((\phi_{h^{-1}})^*)\right) \circ F(k)\left(F(h)(T(h^{-1})(B))\right) \\ \circ F(k)((\psi_{k^{-1}})^*) \circ F(k)(T(k^{-1})(C)). \end{multline*} The component $((\psi \circ \phi)^*)_{g}$ of $(\psi \circ \phi)^*$ at $g\colon z \to x$ is given by \begin{multline*} C^* = F(g)\left(F(g^{-1})(C^*)\right) \xrightarrow{F(g)(T(g^{-1})(C))} F(g)\left(F(g^{-1})(C)^*\right) \\ \xrightarrow{F(g)\left(((\psi \circ \phi)_{g^{-1}})^*\right)} F(g)(A^*). \end{multline*} Since for $g \colon z \to x$ we have $$(\psi \circ \phi)_{g^{-1}} = \sum_{\substack{h \in \mor_{\mathcal{G}}(y,z),\\k \in \mor_{\mathcal{G}}(x,y),\\hk = g^{-1}}} F(k)(\psi_{h}) \circ \phi_{k},$$ the component $((\psi \circ \phi)^*)_{g}$ of $(\psi \circ \phi)^*$ at $g\colon z \to x$ is given by the sum of morphisms $C^*$ to $F(g)(A^*)$ $$\sum_{\substack{h \in \mor_{\mathcal{G}}(y,z),\\k \in \mor_{\mathcal{G}}(x,y),\\hk = g^{-1}}} F(g)\left((\phi_k)^*\right) \circ F(g)\left(F(k)(\psi_h)^*\right) \circ F(g)(T(g^{-1})(C)).$$ By changing the indexing by replacing $h$ with $k^{-1}$ and $k$ by $h^{-1}$, this transforms to $$\sum_{\substack{k \in \mor_{\mathcal{G}}(z,y),\\h \in \mor_{\mathcal{G}}(y,x),\\hk = g}} F(g)\left((\phi_{h^{-1}})^*\right) \circ F(g)\left(F(h^{-1})(\psi_{k^{-1}})^*\right) \circ F(g)(T(g^{-1})(C)).$$ Hence we have to show for every $k \colon z \to y$ and $h \colon y \to x$ with $hk = g$ that the two composites $$ F(k)\left(F(h)((\phi_{h^{-1}})^*)\right) \circ F(k)\left(F(h)(T(h^{-1})(B))\right) \circ F(k)((\psi_{k^{-1}})^*) \circ F(k)(T(k^{-1})(C))$$ and $$F(g)((\phi_{h^{-1}})^*) \circ F(g)\left(F(h^{-1})(\psi_{k^{-1}})^*\right) \circ F(g)(T(g^{-1})(C))$$ agree. 
We compute for the first one \begin{eqnarray*} \lefteqn{F(k)\left(F(h)((\phi_{h^{-1}})^*)\right) \circ F(k)\left(F(h)(T(h^{-1})(B))\right) \circ F(k)((\psi_{k^{-1}})^*) \circ F(k)(T(k^{-1})(C))} & & \\ & = & F(g)((\phi_{h^{-1}})^*) \circ F(g)(T(h^{-1})(B)) \circ F(g)\left(F(h^{-1})((\psi_{k^{-1}})^*)\right) \\ & & \hspace{70mm} \circ F(g)\left(F(h^{-1})(T(k^{-1})(C))\right). \end{eqnarray*} Hence it remains to show that the composites $$F(g^{-1})(C^*) \xrightarrow{T(g^{-1})(C)} F(g^{-1})(C)^* \xrightarrow{F(h^{-1})(\psi_{k^{-1}})^*} F(h^{-1})(B)^* $$ and \begin{multline*} F(g^{-1})(C^*) = F(h^{-1})\left(F(k^{-1})(C^*)\right) \xrightarrow{F(h^{-1})(T(k^{-1})(C))} F(h^{-1})(F(k^{-1})(C)^*) \\ \xrightarrow{F(h^{-1})((\psi_{k^{-1}})^*)} F(h^{-1})(B^*) \xrightarrow{T(h^{-1})(B)} F(h^{-1})(B)^* \end{multline*} agree. The second one agrees with the composite \begin{multline*} F(g^{-1})(C^*)= F(h^{-1})\left(F(k^{-1})(C^*)\right) \xrightarrow{F(h^{-1})(T(k^{-1})(C))} F(h^{-1})(F(k^{-1})(C)^*) \\ \xrightarrow{T(h^{-1})(F(k^{-1})(C))} F(h^{-1})(F(k^{-1})(C))^* \xrightarrow{F(h^{-1})(\psi_{k^{-1}})^*} F(h^{-1})(B)^* \end{multline*} since $T(h^{-1})$ is a natural transformation $F(h^{-1}) \circ I_{F(y)} \to I_{F(x)} \circ F(h^{-1})$. Since $$(F(h^{-1}),T(h^{-1})) \circ (F(k^{-1}),T(k^{-1})) = (F(k^{-1}h^{-1}),T(k^{-1}h^{-1})) = (F(g^{-1}),T(g^{-1}))$$ the map $T(g^{-1})(C)$ can be written as the composite \begin{multline*} T(g^{-1})(C) \colon F(g^{-1})(C^*) = F(h^{-1})\left(F(k^{-1})(C^*)\right) \\ \xrightarrow{F(h^{-1})\left(T(k^{-1})(C)\right)} F(h^{-1})\left(F(k^{-1})(C)^*\right) \\ \xrightarrow{T(h^{-1})\left(F(k^{-1})(C)\right)} F(h^{-1})\left(F(k^{-1})(C)\right)^* = F(g^{-1})(C)^*. \end{multline*} This finishes the proof that $I_{\intgf{\mathcal{G}}{F}}$ is a contravariant functor.
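It may help to record what this construction amounts to in the simplest case, assuming, as the composition rule above requires, that $(F(\id_x),T(\id_x)) = \id$. If $\mathcal{G}$ is the trivial groupoid with a single object $x$ and only the identity morphism, then $\intgf{\mathcal{G}}{F}$ is just the additive category $F(x)$, every morphism has only the component at $\id_x$, and the composite defining $(\phi^*)_{\id_x}$ reduces to $$(\phi^*)_{\id_x} \colon B^* \xrightarrow{\id_{B^*}} B^* \xrightarrow{(\phi_{\id_x})^*} A^*,$$ so that $I_{\intgf{\mathcal{G}}{F}}$ is the given involution $I_{F(x)}$.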
The natural equivalence $$E_{\intgf{\mathcal{G}}{F}} \colon \id_{\intgf{\mathcal{G}}{F}} \to I_{\intgf{\mathcal{G}}{F}} \circ I_{\intgf{\mathcal{G}}{F}}$$ assigns to an object $(x,A)$ in $\intgf{\mathcal{G}}{F}$ the isomorphism $$\id_x \cdot \left(E_{\mathcal{A}}(A) \colon A \xrightarrow{\cong} A^{**}\right) \colon (x,A) \to (x,A^{**}).$$ We have to check that $E_{\intgf{\mathcal{G}}{F}}$ is a natural equivalence. Consider a morphism $\phi = \sum_{f \in \mor_{\mathcal{G}}(x,y)} f \cdot \phi_f \colon (x,A) \to (y,B)$ in $\intgf{\mathcal{G}}{F}$. Then $\left(I_{\intgf{\mathcal{G}}{F}} \circ I_{\intgf{\mathcal{G}}{F}}\right)(\phi)$ has as the component for $f \colon x \to y$ the composite \begin{multline*} A^{**} = F(f)\left(F(f^{-1})(A^{**})\right) \xrightarrow{F(f)\left(T(f^{-1})(A^*)\right)} F(f)\left(F(f^{-1})(A^*)^*\right) \\ \xrightarrow{F(f)\left(F(f^{-1})((\phi_{f})^*)^*\right)} F(f)\left(F(f^{-1})(F(f)(B)^*)^*\right) \\ \xrightarrow{F(f)\left(F(f^{-1})(T(f)(B))^*\right)} F(f)\left(F(f^{-1})(F(f)(B^*))^*\right) = F(f)(B^{**}). \end{multline*} Hence $\left(I_{\intgf{\mathcal{G}}{F}} \circ I_{\intgf{\mathcal{G}}{F}}\right)(\phi) \circ E_{\intgf{\mathcal{G}}{F}}(x,A)$ has as component for $f\colon x \to y$ the composite \begin{multline*} A \xrightarrow{E_{\mathcal{A}}(A)} A^{**} = F(f)\left(F(f^{-1})(A^{**})\right) \xrightarrow{F(f)\left(T(f^{-1})(A^*)\right)} F(f)\left(F(f^{-1})(A^*)^*\right) \\ \xrightarrow{F(f)\left(F(f^{-1})(\phi_{f}^*)^*\right)} F(f)\left(F(f^{-1})(F(f)(B)^*)^*\right) \\ \xrightarrow{F(f)\left(F(f^{-1})(T(f)(B))^*\right)} F(f)\left(F(f^{-1})(F(f)(B^*))^*\right) = F(f)(B^{**}). \end{multline*} The component of $E_{\intgf{\mathcal{G}}{F}}(y,B) \circ \phi$ at $f \colon x \to y $ is the composite $$A \xrightarrow{\phi_{f}} F(f)(B) \xrightarrow{F(f)(E_{\mathcal{A}}(B))} F(f)(B^{**}).$$ It remains to show that these two morphisms $A \to F(f)(B^{**})$ agree.
The following two diagrams commute since $E_{\mathcal{A}}$ and $T(f^{-1})$ are natural transformations $$ \xymatrix@!C=14em{A \ar[r]^-{E_{\mathcal{A}}(A)} \ar[d]^-{\phi_{f}} & A^{**} = F(f)\left(F(f^{-1})(A^{**})\right) \ar[d]^-{\phi_{f}^{**} = F(f)\left(F(f^{-1})(\phi_{f}^{**})\right)} \\ F(f)(B) \ar[r]^-{E_{\mathcal{A}}(F(f)(B))} & F(f)(B)^{**} = F(f)\left(F(f^{-1})(F(f)(B)^{**})\right) } $$ and $$ \xymatrix@!C=20em{ F(f)\left(F(f^{-1})(A^{**})\right) \ar[r]^-{F(f)(T(f^{-1})(A^*))} \ar[d]^-{F(f)\left(F(f^{-1})(\phi_{f}^{**})\right)} & F(f)\left(F(f^{-1})(A^*)^*\right) \ar[d]^-{F(f)\left(F(f^{-1})(\phi_{f}^*)^*\right)} \\ F(f)\left(F(f^{-1})(F(f)(B)^{**})\right) \ar[r]^-{F(f)\left(T(f^{-1})(F(f)(B)^*)\right)} & F(f)\left(F(f^{-1})(F(f)(B)^*)^*\right). } $$ Hence we have to show that $$F(f)(B) \xrightarrow{F(f)(E_{\mathcal{A}}(B))} F(f)(B^{**})$$ agrees with the composite \begin{multline*} F(f)(B) \xrightarrow{E_{\mathcal{A}}(F(f)(B))} F(f)(B)^{**} = F(f)\left(F(f^{-1})(F(f)(B)^{**})\right) \\ \xrightarrow{{F(f)\left(T(f^{-1})(F(f)(B)^*)\right)}} F(f)\left(F(f^{-1})(F(f)(B)^*)^*\right) \\ \xrightarrow{F(f)\left(F(f^{-1})(T(f)(B))^*\right)} F(f)\left(F(f^{-1})(F(f)(B^*))^*\right) = F(f)(B^{**}). \end{multline*} (Notice that $\phi$ is not involved anymore.) The following diagram commutes by the axioms (see \eqref{F(A)_F(A)aastast_F(Aastast)_F(Aast)ast}) $$ \xymatrix@!C=17em{ F(f)(B) \ar[r]^-{E_{\mathcal{A}}(F(f)(B))} \ar[d]^-{F(f)(E_{\mathcal{A}}(B))} & F(f)(B)^{**} \ar[d]^-{T(f)(B)^*} \\ F(f)(B^{**}) \ar[r]^-{T(f)(B^*)} & F(f)(B^*)^* } $$ Hence it remains to show the commutativity of the following diagram (which does not involve $\phi$ and $E_{\mathcal{A}}$ anymore).
$$ \xymatrix@!C=22em{ F(f)(B)^{**} = F(f)\left(F(f^{-1})(F(f)(B)^{*})\right)^* \ar[r]^-{F(f)\left(T(f^{-1})(F(f)(B)^*)\right)} \ar[d]^-{T(f)(B)^*} & F(f)\left(F(f^{-1})(F(f)(B)^*)^*\right) \ar[d]_-{F(f)\left(F(f^{-1})(T(f)(B))^*\right)} \\ F(f)(B^*)^* & F(f)(B^{**}) = F(f)\left(F(f^{-1})(F(f)(B^*))^*\right) \ar[l]^-{T(f)(B^*)} } $$ Since $(F(f),T(f)) \circ (F(f^{-1}),T(f^{-1})) = \id$, we have $$T(f)\left(F(f^{-1})(F(f)(B)^*)\right)~\circ~F(f)\left(T(f^{-1})(F(f)(B)^*)\right) = \id.$$ Hence it suffices to prove the commutativity of the following diagram $$\xymatrix@!C=22em{ F(f)(B)^{**} = F(f)\left(F(f^{-1})(F(f)(B)^*)\right)^* \ar[d]^-{T(f)(B)^*} & F(f)\left(F(f^{-1})(F(f)(B)^*)^*\right) \ar[d]_-{F(f)\left(F(f^{-1})(T(f)(B))^*\right)} \ar[l]_-{T(f)\left(F(f^{-1})(F(f)(B)^*)\right)} \\ F(f)(B^*)^* & F(f)(B^{**}) = F(f)\left(F(f^{-1})(F(f)(B^*))^*\right). \ar[l]^-{T(f)(B^*)} } $$ This follows because this diagram is obtained by applying the natural transformation $T(f)$ to the morphism $$F(f^{-1})(F(f)(B^*)) \xrightarrow{F(f^{-1})(T(f)(B))} F(f^{-1})(F(f)(B)^*).$$ The condition~\eqref{E(I(A))_is_I(E(A)-1)} is satisfied for $(I_{\intgf{\mathcal{G}}{F}},E_{\intgf{\mathcal{G}}{F}})$ since it holds for $(I_{\mathcal{A}},E_{\mathcal{A}})$. We will denote the resulting additive category $\intgf{\mathcal{G}}{F}$ with involution $(I_{\intgf{\mathcal{G}}{F}},E_{\intgf{\mathcal{G}}{F}})$ by \begin{eqnarray} & \intgf{\mathcal{G}}{(F,T)}. & \label{int_calg_F,T} \end{eqnarray} Let $(F_0,T_0)$ and $(F_1,T_1)$ be two contravariant functors $\mathcal{G} \to \matheurm{Add\text{-}Cat_{\text{inv}}}$. Let $(S,U) \colon (F_0,T_0) \to (F_1,T_1)$ be a natural transformation of such functors.
This means that for each object $x$ in $\mathcal{G}$ we have an equivalence $(S(x),U(x)) \colon F_0(x) \to F_1(x)$ of additive categories with involution such that for all $f \colon x \to y$ in $\mathcal{G}$ the following diagram of functors of additive categories with involution commutes \begin{eqnarray} & \xymatrix@!C=8em{ F_0(y) \ar[r]^-{(S(y),U(y))} \ar[d]^-{(F_0(f),T_0(f))} & F_1(y) \ar[d]^-{(F_1(f),T_1(f))} \\ F_0(x) \ar[r]^-{(S(x),U(x))} & F_1(x) } & \label{(F_1(f),T_1(f))_circ_(S(y),U(y))_is_(S(x),U(x))_circ_(F_0(f),T_0(f))} \end{eqnarray} Then both $\intgf{\mathcal{G}}{(F_0,T_0)}$ and $\intgf{\mathcal{G}}{(F_1,T_1)}$ are additive categories with involution. The functor of additive categories $\intgf{\mathcal{G}}{S} \colon \intgf{\mathcal{G}}{F_0} \to \intgf{\mathcal{G}}{F_1}$ defined in~\eqref{intgf(calg)(S)} extends to a functor of additive categories with involution \begin{eqnarray} & \intgf{\mathcal{G}}{(S,U)} \colon \intgf{\mathcal{G}}{(F_0,T_0)} \to \intgf{\mathcal{G}}{(F_1,T_1)} & \label{intgf(calg)(S,U)} \end{eqnarray} as follows. We have to specify a natural equivalence $$ \widehat{U} \colon \left(\intgf{\mathcal{G}}{S}\right) \circ I_{\intgf{\mathcal{G}}{(F_0,T_0)}} \to I_{\intgf{\mathcal{G}}{(F_1,T_1)}}\circ \intgf{\mathcal{G}}{S}. $$ For an object $(x,A)$ in $\intgf{\mathcal{G}}{F_0}$ the isomorphism $$\widehat{U}(x,A) \colon \left(\intgf{\mathcal{G}}{S}\right) \circ I_{\intgf{\mathcal{G}}{(F_0,T_0)}}(x,A) \to I_{\intgf{\mathcal{G}}{(F_1,T_1)}}\circ \intgf{\mathcal{G}}{S}(x,A)$$ is given by the isomorphism $$\id_x \cdot U(x)(A) \colon (x,S(x)(A^*)) \to (x,S(x)(A)^*)$$ in $\intgf{\mathcal{G}}{F_1}$. Next we check that $\widehat{U}$ is a natural equivalence. Let $\sum_{f \in \mor_{\mathcal{G}}(x,y)} f \cdot \phi_f \colon (x,A) \to (y,B)$ be a morphism in $\intgf{\mathcal{G}}{F_0}$, where by definition $\phi_f \colon A \to F_0(f)(B)$ is a morphism in the additive category $F_0(x)$.
We have to show the commutativity of the following diagram in the additive category $\intgf{\mathcal{G}}{F_1}$ $$\xymatrix@!C=18em{ \left(\intgf{\mathcal{G}}{S}\right) \circ I_{\intgf{\mathcal{G}}{(F_0,T_0)}}(y,B) \ar[r]^-{\left(\intgf{\mathcal{G}}{S}\right) \circ I_{\intgf{\mathcal{G}}{(F_0,T_0)}}(\phi)} \ar[d]^-{\widehat{U}(y,B)} & \left(\intgf{\mathcal{G}}{S}\right) \circ I_{\intgf{\mathcal{G}}{(F_0,T_0)}}(x,A) \ar[d]^-{\widehat{U}(x,A)} \\ I_{\intgf{\mathcal{G}}{(F_1,T_1)}}\circ \intgf{\mathcal{G}}{S}(y,B) \ar[r]^-{I_{\intgf{\mathcal{G}}{(F_1,T_1)}}\circ \intgf{\mathcal{G}}{S}(\phi)} & I_{\intgf{\mathcal{G}}{(F_1,T_1)}}\circ \intgf{\mathcal{G}}{S}(x,A) } $$ The morphism $I_{\intgf{\mathcal{G}}{(F_0,T_0)}}(\phi)$ in $\intgf{\mathcal{G}}{F_0}$ is given by $$\phi^* = \sum_{f \in \mor_{\mathcal{G}}(y,x)} f \cdot (\phi^*)_f \colon (y,B^*) \to (x,A^*),$$ where the component $(\phi^*)_f$ is the composite \begin{multline*} (\phi^*)_f \colon B^* = F_0(f)\left(F_0(f^{-1})(B^*)\right) \xrightarrow{F_0(f)\left(T_0(f^{-1})(B)\right)} F_0(f)\left(F_0(f^{-1})(B)^*\right) \\ \xrightarrow{F_0(f)\left((\phi_{f^{-1}})^*\right)} F_0(f)\left(A^*\right). \end{multline*} The morphism $\left(\intgf{\mathcal{G}}{S}\right) \circ I_{\intgf{\mathcal{G}}{(F_0,T_0)}}(\phi) \colon (y,B^*) \to (x,A^*)$ in $\intgf{\mathcal{G}}{F_1}$ is given by $\sum_{f \in \mor_{\mathcal{G}}(y,x)} f \cdot \psi_f \colon (y,B^*) \to (x,A^*)$, where $\psi_f$ is the composite \begin{multline*} \psi_f \colon S(y)(B^*) = S(y)\left(F_0(f)\left(F_0(f^{-1})(B^*)\right)\right) \\ \xrightarrow{S(y)\left(F_0(f)\left(T_0(f^{-1})(B)\right)\right)} S(y)\left(F_0(f)\left(F_0(f^{-1})(B)^*\right)\right) \\ \xrightarrow{S(y)\left(F_0(f)\left((\phi_{f^{-1}})^*\right)\right)} S(y)\left(F_0(f)\left(A^*\right)\right) = F_1(f)\left(S(x)\left(A^*\right)\right).
\end{multline*} Hence the morphism $\widehat{U}(x,A) \circ \left(\intgf{\mathcal{G}}{S}\right) \circ I_{\intgf{\mathcal{G}}{(F_0,T_0)}}(\phi) \colon (y,B^*) \to (x,A^*)$ in $\intgf{\mathcal{G}}{F_1}$ is given by $\sum_{f \in \mor_{\mathcal{G}}(y,x)} f \cdot \mu_f \colon (y,B^*) \to (x,A^*)$, where $\mu_f$ is the composite in $F_1(y)$ \begin{multline*} \mu_f \colon S(y)(B^*) = S(y)\left(F_0(f)\left(F_0(f^{-1})(B^*)\right)\right) \\ \xrightarrow{S(y)\left(F_0(f)\left(T_0(f^{-1})(B)\right)\right)} S(y)\left(F_0(f)\left(F_0(f^{-1})(B)^*\right)\right) \\ \xrightarrow{S(y)\left(F_0(f)\left((\phi_{f^{-1}})^*\right)\right)} S(y)\left(F_0(f)\left(A^*\right)\right) = F_1(f)\left(S(x)\left(A^*\right)\right) \\ \xrightarrow{F_1(f)\left(U(x)(A)\right)} F_1(f)\left(S(x)(A)^*\right). \end{multline*} The morphism $\intgf{\mathcal{G}}{S}(\phi) \colon (x,S(x)(A)) \to (y,S(y)(B))$ in $\intgf{\mathcal{G}}{F_1}$ is given by $$\sum_{f \in \mor_{\mathcal{G}}(x,y)} f \cdot \left(S(x)(\phi_f) \colon S(x)(A) \to S(x)(F_0(f)(B)) = F_1(f)(S(y)(B))\right).$$ The morphism $I_{\intgf{\mathcal{G}}{(F_1,T_1)}}\circ \intgf{\mathcal{G}}{S}(\phi) \colon (y,S(y)(B)^*) \to (x,S(x)(A)^*)$ in $\intgf{\mathcal{G}}{F_1}$ is given by $\sum_{f \in \mor_{\mathcal{G}}(y,x)} f \cdot \nu_f$, where $\nu_f$ is the composite in $F_1(y)$. \begin{multline*} \nu_f \colon S(y)(B)^* = F_1(f)\left(F_1(f^{-1})\left(S(y)(B)^*\right)\right) \\ \xrightarrow{F_1(f)\left(T_1(f^{-1})\left(S(y)(B)\right)\right)} F_1(f)\left(F_1(f^{-1})\left(S(y)(B)\right)^*\right) = F_1(f)\left(S(x)\left(F_0(f^{-1})(B)\right)^*\right) \\ \xrightarrow{F_1(f)\left((S(x)(\phi_{f^{-1}}))^*\right)} F_1(f)\left(S(x)(A)^*\right). \end{multline*} The morphism $I_{\intgf{\mathcal{G}}{(F_1,T_1)}}\circ \intgf{\mathcal{G}}{S}(\phi) \circ \widehat{U}(y,B) \colon (y,S(y)(B^*)) \to (x,S(x)(A)^*)$ in $\intgf{\mathcal{G}}{F_1}$ is given by $\sum_{f \in \mor_{\mathcal{G}}(y,x)} f \cdot \omega_f$, where $\omega_f$ is the composite in $F_1(y)$.
\begin{multline*} \omega _f \colon S(y)(B^*) \xrightarrow{U(y)(B)} S(y)(B)^* = F_1(f)\left(F_1(f^{-1})\left(S(y)(B)^*\right)\right) \\ \xrightarrow{F_1(f)\left(T_1(f^{-1})\left(S(y)(B)\right)\right)} F_1(f)\left(F_1(f^{-1})\left(S(y)(B)\right)^*\right) = F_1(f)\left(S(x)\left(F_0(f^{-1})(B)\right)^*\right) \\ \xrightarrow{F_1(f)\left((S(x)(\phi_{f^{-1}}))^*\right)} F_1(f)\left(S(x)(A)^*\right). \end{multline*} Hence we have to show for all $f \colon y \to x$ in $\mor_{\mathcal{G}}(y,x)$ that the two composites in $F_1(y)$ \begin{multline*} S(y)(B^*) = S(y)\left(F_0(f)\left(F_0(f^{-1})(B^*)\right)\right) \\ \xrightarrow{S(y)\left(F_0(f)\left(T_0(f^{-1})(B)\right)\right)} S(y)\left(F_0(f)\left(F_0(f^{-1})(B)^*\right)\right) \\ \xrightarrow{S(y)\left(F_0(f)\left((\phi_{f^{-1}})^*\right)\right)} S(y)\left(F_0(f)\left(A^*\right)\right) = F_1(f)\left(S(x)\left(A^*\right)\right) \\ \xrightarrow{F_1(f)\left(U(x)(A)\right)} F_1(f)\left(S(x)(A)^*\right) \end{multline*} and \begin{multline*} S(y)(B^*) \xrightarrow{U(y)(B)} S(y)(B)^* = F_1(f)\left(F_1(f^{-1})\left(S(y)(B)^*\right)\right) \\ \xrightarrow{F_1(f)\left(T_1(f^{-1})\left(S(y)(B)\right)\right)} F_1(f)\left(F_1(f^{-1})\left(S(y)(B)\right)^*\right) = F_1(f)\left(S(x)\left(F_0(f^{-1})(B)\right)^*\right) \\ \xrightarrow{F_1(f)\left((S(x)(\phi_{f^{-1}}))^*\right)} F_1(f)\left(S(x)(A)^*\right) \end{multline*} agree. Since $S$ is a natural transformation from $F_0$ to $F_1$, the first composite can be rewritten as the composite \begin{multline*} S(y)(B^*) = F_1(f)\left(S(x)\left(F_0(f^{-1})(B^*)\right)\right) \\ \xrightarrow{F_1(f)\left(S(x)\left(T_0(f^{-1})(B)\right)\right)} F_1(f)\left(S(x)\left(F_0(f^{-1})(B)^*\right)\right) \\ \xrightarrow{F_1(f)\left(S(x)\left((\phi_{f^{-1}})^*\right)\right)} F_1(f)\left(S(x)\left(A^*\right)\right) \\ \xrightarrow{F_1(f)\left(U(x)(A)\right)} F_1(f)\left(S(x)(A)^*\right).
\end{multline*} Since $U(x)$ is a natural transformation from $S(x) \circ I_{F_0(x)}$ to $I_{F_1(x)} \circ S(x)$, this agrees with the composite \begin{multline*} S(y)(B^*) = F_1(f)\left(S(x)\left(F_0(f^{-1})(B^*)\right)\right) \\ \xrightarrow{F_1(f)\left(S(x)\left(T_0(f^{-1})(B)\right)\right)} F_1(f)\left(S(x)\left(F_0(f^{-1})(B)^*\right)\right) \\ \xrightarrow{F_1(f)\left(U(x)\left(F_0(f^{-1})(B)\right)\right)} F_1(f)\left(S(x)\left(F_0(f^{-1})(B)\right)^*\right) \\ \xrightarrow{F_1(f)\left(\left(S(x)(\phi_{f^{-1}})\right)^*\right)} F_1(f)\left(S(x)(A)^*\right). \end{multline*} Hence it suffices to show that the following two composites agree \begin{multline*} S(y)(B^*) = F_1(f)\left(S(x)\left(F_0(f^{-1})(B^*)\right)\right) \\ \xrightarrow{F_1(f)\left(S(x)\left(T_0(f^{-1})(B)\right)\right)} F_1(f)\left(S(x)\left(F_0(f^{-1})(B)^*\right)\right) \\ \xrightarrow{F_1(f)\left(U(x)\left(F_0(f^{-1})(B)\right)\right)} F_1(f)\left(S(x)\left(F_0(f^{-1})(B)\right)^*\right) \end{multline*} and \begin{multline*} S(y)(B^*) \xrightarrow{U(y)(B)} S(y)(B)^* = F_1(f)\left(F_1(f^{-1})\left(S(y)(B)^*\right)\right) \\ \xrightarrow{F_1(f)\left(T_1(f^{-1})\left(S(y)(B)\right)\right)} F_1(f)\left(F_1(f^{-1})\left(S(y)(B)\right)^*\right) = F_1(f)\left(S(x)\left(F_0(f^{-1})(B)\right)^*\right). \end{multline*} (Notice that $\phi_{f^{-1}}$ has been eliminated.) This will follow by applying $F_1(f)$ to the following diagram, provided we can show that it does commute.
$$\xymatrix@!C=19em{ S(x)\left(F_0(f^{-1})(B^*)\right)= F_1(f^{-1})\left(S(y)(B^*)\right) \ar[r]^-{S(x)\left(T_0(f^{-1})(B)\right)} \ar[d]_-{F_1(f^{-1})\left(U(y)(B)\right)} & S(x)\left(F_0(f^{-1})(B)^*\right) \ar[d]_-{U(x)\left(F_0(f^{-1})(B)\right)} \\ F_1(f^{-1})\left(S(y)(B)^*\right) \ar[r]^-{T_1(f^{-1})\left(S(y)(B)\right)} & S(x)\left(F_0(f^{-1})(B)\right)^* } $$ But the latter diagram commutes because we require the following equality of functors of additive categories with involution for $f^{-1} \colon x \to y$ (see~\eqref{(F_1(f),T_1(f))_circ_(S(y),U(y))_is_(S(x),U(x))_circ_(F_0(f),T_0(f))}) $$(S(x),U(x)) \circ (F_0(f^{-1}),T_0(f^{-1})) = (F_1(f^{-1}),T_1(f^{-1})) \circ (S(y),U(y)).$$ This finishes the proof that $\widehat{U}$ is a natural equivalence. One easily checks that condition~\eqref{F(A)_F(A)aastast_F(Aastast)_F(Aast)ast} is satisfied by $\widehat{U}$ since it holds for $U(x)$ for all objects $x$ in $\mathcal{G}$. This finishes the construction of the functor of additive categories with involution $\intgf{\mathcal{G}}{(S,U)}$ (see~\eqref{intgf(calg)(S,U)}). One easily checks \begin{eqnarray} \left(\intgf{\mathcal{G}}{(S_2,U_2)}\right) \circ \left(\intgf{\mathcal{G}}{(S_1,U_1)}\right) & = & \intgf{\mathcal{G}}{(S_2,U_2) \circ (S_1,U_1)} \label{int_(S_2_U_2)_circ_int_(S_1,U_1)_is_int_(S_2,U_2)_circ_(S_1,U_1)} \\ \intgf{\mathcal{G}}{\id_F} & = & \id_{\intgf{\mathcal{G}}{F}}. \label{int_id_is_id_inv} \end{eqnarray} Given a functor of groupoids $W \colon \mathcal{G}_1 \to \mathcal{G}$ and a functor $(F,T) \colon \mathcal{G} \to \matheurm{Add\text{-}Cat_{\text{inv}}}$, composition with $W$ yields a functor $(F \circ W, T \circ W)$. Hence both $\intgf{\mathcal{G}_1}{(F,T) \circ W}$ and $\intgf{\mathcal{G}}{(F,T)}$ are additive categories with involution. One easily checks that $I_{\intgf{\mathcal{G}}{F}} \circ W_* = W_* \circ I_{\intgf{\mathcal{G}_1}{F \circ W}}$ holds for the functor $W_*$ defined in~\eqref{W_ast}.
Hence \begin{eqnarray} &(W_*,\id) \colon \intgf{\mathcal{G}_1}{(F,T) \circ W} \to \intgf{\mathcal{G}}{(F,T)} & \label{(W_ast,id)} \end{eqnarray} is a functor of additive categories with involution. One easily checks \begin{eqnarray} ((W_2)_*,\id) \circ ((W_1)_*,\id) & = & ((W_2 \circ W_1)_*,\id); \label{((W_2)_ast,id)_circ_((W_1)_ast,id)_is_((W_2_circ_W_1)_ast,id)} \\ (\id_{\mathcal{G}})_* & = & \id_{\intgf{\mathcal{G}}{F}}. \label{(id_calg)_ast_is_id_int_calg_F_inv} \end{eqnarray} These two constructions are compatible. Namely, we get \begin{eqnarray} \left(\intgf{\mathcal{G}}{(S,U)} \right) \circ (W_*,\id) & = & (W_*,\id) \circ \left(\intgf{\mathcal{G}_1}{(S\circ W,U\circ W)} \right). \label{compatibility_of_(W_ast,id)_and_int_(S,U)} \end{eqnarray} One easily checks: \begin{lemma} \label{lem:(F_1,T_1)_ast_and_int_calg_S_and_equivalences} \begin{enumerate} \item \label{lem:(F_1,T_1)_ast_and_int_calg_S_and_equivalences:(F_1)_ast} Let $W \colon \mathcal{G}_1 \to \mathcal{G}$ be an equivalence of connected groupoids. Let $(F,T) \colon \mathcal{G} \to \matheurm{Add\text{-}Cat_{\text{inv}}}$ be a contravariant functor. Then $$W_* \colon \intgf{\mathcal{G}_1}{(F,T) \circ W} \to \intgf{\mathcal{G}}{(F,T)}$$ is an equivalence of additive categories with involution. \item \label{lem:(F_1,T_1)_ast_and_int_calg_S_and_equivalences:int_calg_S} Let $\mathcal{G}$ be a connected groupoid. Let $S \colon (F_1,T_1) \to (F_2,T_2)$ be a transformation of contravariant functors $\mathcal{G} \to \matheurm{Add\text{-}Cat_{\text{inv}}}$ such that for every object $x$ in $\mathcal{G}$ the functor $S(x) \colon F_1(x) \to F_2(x)$ is an equivalence of additive categories. Then $$\intgf{\mathcal{G}}{S} \colon \intgf{\mathcal{G}}{(F_1,T_1)} \to \intgf{\mathcal{G}}{(F_2,T_2)}$$ is an equivalence of additive categories with involution.
\end{enumerate} \end{lemma} \typeout{--- Section 8: From crossed product rings with involution to additive categories with involution ----} \section{From crossed product rings with involution to additive categories with involution} \label{sec:From_crossed_product_rings_with_involution_to_additive_categories_with_involution} Next we want to extend Example~\ref{exa:R_ast_c,tau_as_add_cat_with_G-action} and Lemma~\ref{lem:int_G_R_FGF(R)_c,tau_is_R_ast_c,tauG-FGF} to rings and additive categories with involution. Let $R$ be a ring and let $G$ be a group. Suppose that we are given maps of sets \begin{eqnarray*} c\colon G & \to & \aut(R), \quad g \mapsto c_g; \\ \tau\colon G \times G & \to & R^{\times}; \\ w \colon G & \to & R, \end{eqnarray*} satisfying conditions~\eqref{tau_and_c_group_homo}, \eqref{tau_and_c_cocykel}, \eqref{c_e-Is_id}, \eqref{tau(g,e)_is_1}, \eqref{tau(e,g)_is_1}, \eqref{w(e)_is_1}, \eqref{w(gh)}, \eqref{overlinew(g)}, and~\eqref{overlinec_g(r)}. We have constructed in Section~\ref{sec:Crossed_product_rings_and_involutions} an involution on the crossed product $R\ast G = R \ast_{c,\tau} G$. We have denoted this ring with involution by $R\ast G = R \ast_{c,\tau,w} G$ (see~\eqref{r_ast_c,tau,w_G}). The additive category $\FGF{R\ast G}$ inherits the structure of an additive category with involution (see Example~\ref{exa:R-mod_as_add_cat}). We have introduced the notion of an additive $G$-category with involution in Definition~\ref{def:additive_G-category_with_involution} and constructed an explicit example $\FGP{R}_{c,\tau,w}$ in~\eqref{FGP(R)_(c,tau,w)}. All these constructions restrict to the subcategory $\FGF{R} \subseteq \FGP{R}$ of finitely generated free $R$-modules.
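For orientation we mention the untwisted special case, where $c_g = \id_R$ for all $g \in G$, $\tau \equiv 1$, $w \equiv 1$, and $R$ carries an involution $r \mapsto \overline{r}$. Then the crossed product $R \ast_{c,\tau,w} G$ is the group ring $RG$, and the involution constructed above reduces to the standard involution $$\overline{\sum_{g \in G} r_g \cdot g} = \sum_{g \in G} \overline{r_g} \cdot g^{-1}.$$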
Thus we obtain the additive $G$-category with involution \begin{eqnarray} & \FGF{R}_{c,\tau,w} & \label{FGF(R)_(c,tau,w)} \end{eqnarray} \begin{lemma} \label{lem:int_G_R_FGF(R)_c,tau_w_is_R_ast_c,tau,w_G-FGF} Consider the data $(R,c,\tau,w)$ and the additive $G$-category with involution $\FGF{R}_{c,\tau,w}$ of~\eqref{FGF(R)_(c,tau,w)}. Let $\intgf{G}{\FGF{R}_{c,\tau,w}}$ be the additive category with involution defined in~\eqref{int_calg_F,T}. Since $G$ regarded as a groupoid has precisely one object, we can (and will) identify the set of objects in $\intgf{G}{\FGF{R}_{c,\tau,w}}$ with the set of objects in $\FGF{R}_{c,\tau,w}$ which consists of pairs $(M,g)$ for $M$ a finitely generated free $R$-module and $g \in G$. Denote by $\left(\intgf{G}{\FGF{R}_{c,\tau,w}}\right)_e$ the full subcategory of $\intgf{G}{\FGF{R}_{c,\tau,w}}$ consisting of objects of the shape $(M,e)$ for $e \in G$ the unit element. Denote by $R \ast G = R \ast_{c,\tau,w} G$ the ring with involution given by the crossed product ring (see~\eqref{r_ast_c,tau,w_G}). Then \begin{enumerate} \item \label{lem:int_G_R_FGF(R)_c,tau_w_is_R_ast_c,tau_w_G-FGF:R_ast_G} There is an equivalence of additive categories with involution $$ (\alpha,\beta) \colon \left(\intgf{G}{\FGF{R}_{c,\tau,w}}\right)_e \to \FGF{R \ast_{c,\tau,w}G}; $$ \item \label{lem:int_G_R_FGF(R)_c,tau_w_is_R_ast_c,tau_wG-FGF:e} The inclusion $$\left(\intgf{G}{\FGF{R}_{c,\tau,w}}\right)_e \to \intgf{G}{\FGF{R}_{c,\tau,w}}$$ is an equivalence of additive categories with involution. \end{enumerate} \end{lemma} \begin{proof}\ref{lem:int_G_R_FGF(R)_c,tau_w_is_R_ast_c,tau_w_G-FGF:R_ast_G} We have already constructed an equivalence of categories $$ \alpha \colon \left(\intgf{G}{\FGF{R}_{c,\tau}}\right)_e \to \FGF{R \ast_{c,\tau}G} $$ in Lemma~\ref{lem:int_G_R_FGF(R)_c,tau_is_R_ast_c,tauG-FGF}~\ref{lem:int_G_R_FGF(R)_c,tau_is_R_ast_c,tauG-FGF:R_ast_G}.
We want to show that $\alpha$ is compatible with the involution, i.e., there is a functor of additive categories with involution $$ (\alpha,\beta) \colon \left(\intgf{G}{\FGF{R}_{c,\tau,w}}\right)_e \to \FGF{R \ast_{c,\tau,w}G}. $$ The natural equivalence $\beta \colon \alpha \circ I_{\left(\intgf{G}{\FGF{R}_{c,\tau,w}}\right)_e} \to I_{\FGF{R \ast_{c,\tau,w}G}} \circ \alpha$ assigns to an object $(M,e)$ in $\left(\intgf{G}{\FGF{R}_{c,\tau,w}}\right)_e$ the $R \ast_{c,\tau}G$-isomorphism $$\beta(M,e) \colon R \ast_{c,\tau,w}G \otimes_R M^* \xrightarrow{\cong} \left(R \ast_{c,\tau,w}G \otimes_R M\right)^*$$ given by $\beta(M,e)(u \otimes f)(v \otimes m) = vf(m)\overline{u}$ for $f \in M^*$, $u,v \in R \ast_{c,\tau}G$ and $m \in M$. Obviously $\beta$ is compatible with the structures of additive categories. Next we check that $\beta$ is a natural transformation. We have to show for a morphism $\phi \colon (M,e) \to (N,e)$ in $\left(\intgf{G}{\FGF{R}_{c,\tau,w}}\right)_e$ that the following diagram commutes $$\xymatrix{ R \ast G \otimes_R N^* \ar[d]^-{\beta(N,e)} \ar[r]^-{\alpha(\phi^*)} & R \ast G \otimes_R M^* \ar[d]^-{\beta(M,e)} \\ \left(R \ast G \otimes_R N\right)^* \ar[r]^-{\alpha(\phi)^*} & \left(R \ast G \otimes_R M\right)^* } $$ Recall that a morphism $$\phi = \sum_{g \in G} g \cdot \phi_g \colon (M,e) \to (N,e)$$ in $\left(\intgf{G}{\FGF{R}_{c,\tau,w}}\right)_e$ is given by a collection of morphisms $\phi_g \colon (M,e) \to R_g(N,e) = (N,g)$ in $\FGF{R}_{c,\tau,w}$ for $g \in G$, where $\phi_g$ is an $R$-homomorphism $M \to \res_{c_g} N$. We want to unravel what the dual morphism $$\phi^* = \sum_{g \in G} g \cdot (\phi^*)_g \colon (N,e)^* = (N^*,e)\to (M,e)^* = (M^*,e)$$ in $\left(\intgf{G}{\FGF{R}_{c,\tau,w}}\right)_e$ is. It is given by a collection of morphisms $\{(\phi^*)_g \colon (N^*,e) \to R_g(M^*,e) = (M^*,g)\mid g \in G\}$ in $\FGF{R}_{c,\tau,w}$, where $(\phi^*)_g$ is an $R$-homomorphism $N^* \to \res_{c_g} M^*$.
In $\FGF{R}_{c,\tau,w}$ the morphism $(\phi^*)_g$ is given by the composite $$(N^*,e) = (N^*,g^{-1})\cdot g = R_g(N^*,g^{-1})\xrightarrow{R_g((\phi_{g^{-1}})^*)} R_g(M^*,e) = (M^*,g).$$ The morphism $(\phi_{g^{-1}})^*$ is given by the composite $$\res_{c_{g^{-1}}} N^* \xrightarrow{t_{g^{-1}}(N)} \left(\res_{c_{g^{-1}}} N\right)^* \xrightarrow{\left(\phi_{g^{-1}}\right)^*} M^*.$$ Explicitly this is the map $$N^* \to M^*, \quad f \mapsto \left(x \mapsto c_{g^{-1}}^{-1}\left(f\circ \phi_{g^{-1}}(x)\right)\left(w(g^{-1})\tau(g,g^{-1})\right)^{-1}\right).$$ The morphism $R_g((\phi_{g^{-1}})^*)$ is the composite $$ N^* = \res_{c_{g^{-1}g}} N^* \xrightarrow{L_{\tau(g^{-1},g)}} \res_{c_g} \res_{c_{g^{-1}}} N^* \xrightarrow{\res_{c_g} (\phi_{g^{-1}})^*} \res_{c_g} M^*. $$ Hence the $R$-linear map $(\phi^*)_g \colon N^* \to \res_{c_g} M^*$ sends $f \in N^*$ to the element in $M^*$ given by $$x~\mapsto~c_{g^{-1}}^{-1}\left(f\circ \phi_{g^{-1}}(x)\overline{\tau(g^{-1},g)}\right) \left(w(g^{-1})\tau(g,g^{-1})\right)^{-1}.$$ This implies that the $R \ast G$-homomorphism $$\alpha(\phi^*) \colon R \ast G \otimes_R N^* \to R \ast G \otimes_R M^*$$ sends $u \otimes f$ for $u \in R\ast G$ and $f \in N^*$ to the element $$\sum_{g \in G} u \cdot \tau(g^{-1},g)^{-1} \cdot g^{-1} \otimes \left(c_{g^{-1}}^{-1}\circ f\circ \phi_{g^{-1}}\right) c_{g^{-1}}^{-1}\left(\overline{\tau(g^{-1},g)}\right) \left(w(g^{-1})\tau(g,g^{-1})\right)^{-1}.$$ We conclude that the composite $\beta(M,e) \circ \alpha(\phi^*)$ sends $u \otimes f$ for $u \in R\ast G$ and $f \in N^*$ to the $R$-linear map $R\ast G\otimes_R M \to R \ast G$ which maps $v \otimes x$ for $v \in R \ast G$ and $x \in M$ to the element in $R\ast G$ \begin{multline*} \sum_{g \in G} v \cdot \left(c_{g^{-1}}^{-1}\circ f\circ \phi_{g^{-1}}\right)(x) c_{g^{-1}}^{-1} \left(\overline{\tau(g^{-1},g)}\right) \left(w(g^{-1})\tau(g,g^{-1})\right)^{-1} \\ \cdot \overline{u \cdot\tau(g^{-1},g)^{-1} \cdot g^{-1}}.
\end{multline*} We compute that the composite $\alpha(\phi)^*\circ \beta(N,e)$ sends $u \otimes f$ for $u \in R\ast G$ and $f \in N^*$ to the $R$-linear map $R\ast G\otimes_R M \to R \ast G$ which maps $v \otimes x$ for $v \in R \ast G$ and $x \in M$ to the element in $R\ast G$ \begin{eqnarray*} \lefteqn{\beta(N,e)(u \otimes f)\left(\alpha(\phi)(v \otimes x)\right)} & & \\ & = & \beta(N,e)(u \otimes f)\left(\sum_{g \in G} v \cdot \tau(g^{-1},g)^{-1} \cdot g^{-1} \otimes \phi_g(x)\right) \\ & = & \sum_{g \in G} \beta(N,e)(u \otimes f) \left(v \cdot \tau(g^{-1},g)^{-1} \cdot g^{-1} \otimes \phi_g(x)\right) \\ & = & \sum_{g \in G} v \cdot \tau(g^{-1},g)^{-1} \cdot g^{-1} \cdot f\left(\phi_g(x)\right) \cdot\overline{u}. \end{eqnarray*} Hence it suffices to show for each $g \in G$, $u,v \in R \ast G$, $f \in N^*$ and $x \in M$ \begin{multline*} v \left(c_{g^{-1}}^{-1}\circ f\circ \phi_{g^{-1}}\right)(x) c_{g^{-1}}^{-1} \left(\overline{\tau(g^{-1},g)}\right) \left(w(g^{-1})\tau(g,g^{-1})\right)^{-1} \\ \cdot \overline{u \cdot\tau(g^{-1},g)^{-1} \cdot g^{-1}} \\ = v \cdot \tau(g,g^{-1})^{-1} \cdot g \cdot f\left(\phi_{g^{-1}}(x)\right) \cdot\overline{u}. \end{multline*} Since \begin{eqnarray*} \overline{u \cdot\tau(g^{-1},g)^{-1} \cdot g^{-1}} & = & w(g^{-1}) c_{g}(\overline{\tau(g^{-1},g)^{-1}}) \cdot g \cdot \overline{u}; \\ g \cdot f\left(\phi_{g^{-1}}(x)\right) \cdot\overline{u} & = & c_g\left(f\left(\phi_{g^{-1}}(x)\right)\right) \cdot g \cdot\overline{u}, \end{eqnarray*} it remains to show for all $g \in G$, $f \in N^*$ and $x \in M$ \begin{multline*} \left(c_{g^{-1}}^{-1}\circ f\circ \phi_{g^{-1}}\right)(x) c_{g^{-1}}^{-1} \left(\overline{\tau(g^{-1},g)}\right) \left(w(g^{-1})\tau(g,g^{-1})\right)^{-1} \\ \cdot w(g^{-1}) c_{g}(\overline{\tau(g^{-1},g)^{-1}}) \\ = \tau(g,g^{-1})^{-1} \cdot c_g\left(f\left(\phi_{g^{-1}}(x)\right)\right).
\end{multline*} If we put $r = f \circ \phi_{g^{-1}}(x)$, this becomes equivalent to showing for all $g \in G$ and $r \in R$ \begin{multline*} c_{g^{-1}}^{-1}(r) c_{g^{-1}}^{-1} \left(\overline{\tau(g^{-1},g)}\right) \left(w(g^{-1})\tau(g,g^{-1})\right)^{-1} \cdot w(g^{-1}) c_{g}(\overline{\tau(g^{-1},g)^{-1}}) \\ = \tau(g,g^{-1})^{-1} \cdot c_g(r). \end{multline*} This is equivalent to showing $$\tau(g,g^{-1}) c_{g^{-1}}^{-1}\left(r\overline{\tau(g^{-1},g)}\right) \tau(g,g^{-1})^{-1} = c_{g}(r\overline{\tau(g^{-1},g)}).$$ From~\eqref{tau_and_c_group_homo} and~\eqref{c_e-Is_id} we conclude for every $s \in R$ $$\tau(g,g^{-1}) c_{g^{-1}}^{-1}(s) \tau(g,g^{-1})^{-1} = c_g(s),$$ and the claim follows, i.e., $\beta$ is a natural equivalence. It remains to check that the following diagram (see~\eqref{F(A)_F(A)aastast_F(Aastast)_F(Aast)ast}) commutes for every object $(M,e)$ in $\left(\intgf{G}{\FGF{R}_{c,\tau,w}}\right)_e$. $$ \xymatrix@!C=18em{R\ast G \otimes _R M \ar[r]^-{E_{\FGF{R \ast_{c,\tau,w} G}}(R\ast G \otimes _R M)} \ar[d]^-{\alpha(E_{\FGF{R}_{c,\tau,w}}(M,e))} & \left(R\ast G \otimes _R M\right)^{**} \ar[d]^-{\left(\beta(M,e)\right)^*} \\ R\ast G \otimes _R M^{**} \ar[r]^-{\beta((M,e)^*)} &\left(R\ast G \otimes _R M^{*}\right)^* } $$ We consider an element $u \otimes x$ in the left upper corner for $u \in R \ast G$ and $x \in M$. It is sent by the upper horizontal arrow to the element in $\left(R\ast G \otimes _R M\right)^{**}$ which maps $h \in \left(R\ast G \otimes_R M\right)^{*}$ to $\overline{h(u \otimes x)}$. This element is mapped by the right vertical arrow to the element in $\left(R\ast G \otimes _R M^{*}\right)^*$ which sends $v \otimes f$ for $v \in R \ast G$ and $f \in M^*$ to $$ \overline{\beta(M,e)(v \otimes f)(u \otimes x)} = \overline{uf(x)\overline{v}} = v\overline{f(x)}\overline{u}. $$ The left vertical arrow sends $u \otimes x$ to $u \otimes I_{\FGF{R}}(x)$, where $I_{\FGF{R}}(x)$ sends $f \in M^*$ to $\overline{f(x)}$.
This element is mapped by the lower horizontal arrow to the element in $\left(R\ast G \otimes _R M^{*}\right)^*$ which sends $v \otimes f$ for $v \in R \ast G$ and $f \in M^*$ to \begin{eqnarray*} vI_{\FGF{R}}(x)(f)\overline{u} & = & v\overline{f(x)}\overline{u}. \end{eqnarray*} This finishes the proof that $$(\alpha,\beta) \colon \left(\intgf{G}{\FGF{R}_{c,\tau,w}}\right)_e \to \FGF{R \ast_{c,\tau,w}G}$$ is an equivalence of additive categories with involution. \\[1mm]\ref{lem:int_G_R_FGF(R)_c,tau_w_is_R_ast_c,tau_wG-FGF:e} This has already been proved in Lemma~\ref{lem:int_G_R_FGF(R)_c,tau_is_R_ast_c,tauG-FGF}~\ref{lem:int_G_R_FGF(R)_c,tau_is_R_ast_c,tauG-FGF:e}. \end{proof} \typeout{-------------------------- Section 9: $G$-homology theories ----------} \section{$G$-homology theories} \label{sec:G-homology_theories} In this section we construct $G$-homology theories and discuss induction. \begin{definition}[Transport groupoid] \label{def:transport_groupoid} Let $G$ be a group and let $\xi$ be a $G$-set. Define the \emph{transport groupoid} $\mathcal{G}^G(\xi)$ to be the following groupoid. The set of objects is $\xi$ itself. For $x_1, x_2 \in \xi$ the set of morphisms from $x_1$ to $x_2$ consists of those elements $g$ in $G$ for which $gx_1 = x_2$ holds. Composition of morphisms comes from the group multiplication in $G$. A $G$-map $\alpha\colon \xi \to \eta$ of $G$-sets induces a covariant functor $\mathcal{G}^G(\alpha) \colon \mathcal{G}^G(\xi) \to \mathcal{G}^G(\eta)$ by sending an object $x \in \xi$ to the object $\alpha(x) \in \eta$. A morphism $g\colon x_1 \to x_2$ is sent to the morphism $g \colon \alpha(x_1) \to \alpha(x_2)$. \end{definition} Fix a functor \begin{eqnarray*} & {\mathbf E} \colon \matheurm{Add\text{-}Cat_{\text{inv}}} \to \matheurm{Spectra} & \end{eqnarray*} which sends weak equivalences of additive categories with involution to weak homotopy equivalences of spectra. Let $G$ be a group.
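To illustrate Definition~\ref{def:transport_groupoid}, take $\xi = G/H$ for a subgroup $H \subseteq G$. Since $G$ acts transitively on $G/H$, the transport groupoid $\mathcal{G}^G(G/H)$ is connected, and the automorphism group of the object $eH$ consists of the elements $g \in G$ with $geH = eH$, i.e., $$\aut_{\mathcal{G}^G(G/H)}(eH) = H.$$ Hence $\mathcal{G}^G(G/H)$ is equivalent to $H$ regarded as a groupoid with one object.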
Let $\Groupoidsover{G}$ be the category of connected groupoids over $G$ considered as a groupoid with one object, i.e., objects are covariant functors $F \colon \mathcal{G} \to G$ with a connected groupoid as source and $G$ as target, and a morphism from $F_0 \colon \mathcal{G}_0 \to G$ to $F_1 \colon \mathcal{G}_1 \to G$ is a covariant functor $W \colon \mathcal{G}_0 \to \mathcal{G}_1$ satisfying $F_1 \circ W = F_0$. For a $G$-set $S$ let $$\pr_G \colon \mathcal{G}^G(S) \to \mathcal{G}^G(G/G) = G$$ be the functor induced by the projection $S \to G/G$. The transport category yields a functor $$\mathcal{G}^G \colon \matheurm{Or}{G} \to \Groupoidsover{G}$$ by sending $G/H$ to $\pr_G \colon \mathcal{G}^G(G/H) \to \mathcal{G}^G(G/G) = G$. Let $\mathcal{A}$ be an additive $G$-category with involution in the sense of Definition~\ref{def:additive_G-category_with_involution}. We obtain a functor \begin{eqnarray} & {\mathbf E}_{\mathcal{A}} \colon \matheurm{Or}{G} \to \matheurm{Spectra}, \quad G/H \mapsto {\mathbf E}\left(\intgf{\mathcal{G}^G(G/H)}{\mathcal{A} \circ \pr_G}\right). & \label{bfE_calaG} \end{eqnarray} Associated to it there is a $G$-homology theory in the sense of~\cite[Section~1]{Lueck(2002b)} \begin{eqnarray} &H^G_*(-;{\mathbf E}_{\mathcal{A}})& \label{HG_ast(-;bfEG_cala)} \end{eqnarray} such that $H^G_n(G/H;{\mathbf E}_{\mathcal{A}}) \cong \pi_n\left({\mathbf E}_{\mathcal{A}}(G/H)\right)$ holds for every $n \in \mathbb{Z}$ and every subgroup $H \subseteq G$. Namely, define for a $G$-$CW$-complex $X$ $$H_n^G(X;{\mathbf E}_{\mathcal{A}}) = \pi_n\left(\map_G(G/?,X)_+ \wedge_{\OrG{G}} {\mathbf E}_{\mathcal{A}}(G/?)\right).$$ For more details about spectra and spaces over a category and associated homology theories we refer to~\cite{Davis-Lueck(1998)}. (Notice that there $\wedge_{\OrG{G}}$ is denoted by $\otimes_{\OrG{G}}$.) \begin{lemma}\label{lem:cala_to_calb} Let $f \colon \mathcal{A} \to \mathcal{B}$ be a weak equivalence of additive $G$-categories with involution.
Then the induced map $$H_n^G(X;{\mathbf E}_{f}) \colon H_n^G(X;{\mathbf E}_{\mathcal{A}}) \xrightarrow{\cong} H_n^G(X;{\mathbf E}_{\mathcal{B}})$$ is a bijection for all $n \in \mathbb{Z}$. \end{lemma} \begin{proof} This follows from Lemma~\ref{lem:(F_1,T_1)_ast_and_int_calg_S_and_equivalences} and~\cite[Lemma~4.6]{Davis-Lueck(1998)}. \end{proof} Let $\phi\colon K \to G$ be a group homomorphism. Given a $K$-$CW$-complex $X$, let $G \times_{\phi} X$ be the $G$-$CW$-complex obtained from $X$ by induction with $\phi$. If $\mathcal{H}^G_*(-)$ is a $G$-homology theory, then $\mathcal{H}^G_*(\phi_*(-))$ is a $K$-homology theory. The proof of the next result is essentially the same as the proof of the existence of an induction structure in~\cite[Lemma~6.1]{Bartels-Echterhoff-Lueck(2007colim)}. \begin{lemma} \label{lem:homology_and_induction} Let $\phi\colon K \to G$ be a group homomorphism. Let $\mathcal{A}$ be an additive $G$-category with involution in the sense of Definition~\ref{def:additive_G-category_with_involution}. Let $\res_{\phi} \mathcal{A}$ be the additive $K$-category with involution obtained from $\mathcal{A}$ by restriction with $\phi$. Then there is a transformation of $K$-homology theories $$\sigma_* \colon H^K_*(-;{\mathbf E}_{\res_{\phi} \mathcal{A}}) \to H^G_*(\phi_* (-);{\mathbf E}_{\mathcal{A}}).$$ If $X$ is a $K$-$CW$-complex on which $\ker(\phi)$ acts freely, then $$\sigma_n \colon H^K_n(X;{\mathbf E}_{\res_{\phi} \mathcal{A}}) \xrightarrow{\cong} H^G_n(\phi_*X;{\mathbf E}_{\mathcal{A}})$$ is bijective for all $n \in \mathbb{Z}$. \end{lemma} \begin{proof} We have to construct for every $K$-$CW$-complex $X$ a natural transformation \begin{multline} \map_K(K/?,X)_+ \wedge_{\OrG{K}} {\mathbf E}\left(\intgf{\mathcal{G}^K(K/?)}{\res_{\phi} \mathcal{A} \circ \pr_K}\right) \\ \to \map_G(G/?,\phi_*X)_+ \wedge_{\OrG{G}} {\mathbf E}\left(\intgf{\mathcal{G}^G(G/?)}{\mathcal{A} \circ \pr_G}\right).
\label{desired_map_of_spectra)} \end{multline} The group homomorphism $\phi$ induces for every transitive $K$-set $\xi$ a functor, natural in $\xi$, $$\mathcal{G}^{\phi}(\xi) \colon \mathcal{G}^K(\xi) \to \mathcal{G}^G(\phi_*\xi)$$ which sends an object $x \in \xi$ to the object $(e,x)$ in $G \times_{\phi} \xi$ and sends a morphism given by $k \in K$ to the morphism given by $\phi(k)$. We obtain for every transitive $K$-set $\xi$ a functor of additive categories with involution, natural in $\xi$ (see~\eqref{(W_ast,id)}) $$ \mathcal{G}^{\phi}(\xi)_* \colon \intgf{\mathcal{G}^K(\xi)}{\res_{\phi} \mathcal{A} \circ \pr_K} = \intgf{\mathcal{G}^K(\xi)}{\mathcal{A} \circ \pr_G \circ \mathcal{G}^{\phi}(\xi)} \to \intgf{\mathcal{G}^G(\phi_* \xi)}{\mathcal{A} \circ \pr_G}. $$ Thus we obtain a map of spectra \begin{multline*} \map_K(K/?,X)_+ \wedge_{\OrG{K}} {\mathbf E}\left(\intgf{\mathcal{G}^K(K/?)}{\res_{\phi} \mathcal{A} \circ \pr_K}\right) \\ \to \map_K(K/?,X)_+ \wedge_{\OrG{K}} {\mathbf E}\left(\intgf{\mathcal{G}^G(\phi_*(K/?))}{\mathcal{A} \circ \pr_G}\right). \end{multline*} From the adjunction of induction and restriction with the functor $$\OrG{\phi} \colon \OrG{K} \to \OrG{G}, \quad K/H \mapsto \phi_* K/H,$$ and the canonical map of contravariant $\matheurm{Or}(G)$-spaces $$\OrG{\phi}_*\left(\map_K(K/?,X)\right) \to \map_G(G/?,\phi_*X),$$ which is an isomorphism for a $K$-$CW$-complex $X$, we obtain maps of spectra \begin{multline*} \map_K(K/?,X)_+ \wedge_{\OrG{K}} {\mathbf E}\left(\intgf{\mathcal{G}^G(\phi_*(K/?))}{\mathcal{A} \circ \pr_G}\right) \\ \cong \map_K(K/?,X)_+ \wedge_{\OrG{K}} \OrG{\phi}^*\left({\mathbf E}\left(\intgf{\mathcal{G}^G(G/?)}{\mathcal{A} \circ \pr_G}\right)\right) \\ \cong \OrG{\phi}_*\left(\map_K(K/?,X)\right)_+ \wedge_{\OrG{G}} {\mathbf E}\left(\intgf{\mathcal{G}^G(G/?)}{\mathcal{A} \circ \pr_G}\right) \\ \cong \map_G(G/?,\phi_*X)_+ \wedge_{\OrG{G}} {\mathbf E}\left(\intgf{\mathcal{G}^G(G/?)}{\mathcal{A} \circ \pr_G}\right).
\end{multline*} Now the desired map of spectra~\eqref{desired_map_of_spectra)} is the composite of the two maps above. The proof that $\sigma_n$ is bijective if $\ker(\phi)$ acts freely on $X$ is the same as the one of~\cite[Lemma~1.5]{Bartels-Echterhoff-Lueck(2007colim)}. \end{proof} \typeout{------ Section 10: Z-categories and additive categories with involutions ------} \section{$\mathbb{Z}$-categories and additive categories with involutions} \label{sec:Z-categories_and_additive_categories_with_involutionsG-homology_theories_and_restriction} For technical reasons it will be useful that $\mathcal{A}$ comes with a (strictly associative) functorial direct sum. It will be used in the definition of the category $\ind_{\phi}\mathcal{A}$ in~\eqref{ind_phi_cala} and in functorial constructions about categories arising in controlled topology. (See for instance~\cite[Section~2.2]{Bartels-Farrell-Jones-Reich(2004)}, \cite[Section~3]{Bartels-Lueck-Reich(2007hyper)}.) \begin{definition}[$\mathbb{Z}$-category (with involution)] \label{def:Z-category_with_inv} A \emph{$\mathbb{Z}$-category} $\mathcal{A}$ is an additive category except that we drop the condition that finite direct sums exist. More precisely, a $\mathbb{Z}$-category $\mathcal{A}$ is a small category such that for two objects $A$ and $B$ the morphism set $\mor_{\mathcal{A}}(A,B)$ has the structure of an abelian group and composition yields bilinear maps $\mor_{\mathcal{A}}(A,B) \times \mor_{\mathcal{A}}(B,C) \to \mor_{\mathcal{A}}(A,C)$. The notion of a \emph{$\mathbb{Z}$-category with involution} $\mathcal{A}$ is defined analogously. Namely, we require the existence of the pair $(I_{\mathcal{A}},E_{\mathcal{A}})$ with the same axioms as in Section~\ref{sec:Additive_categories_with_involution} except that we forget everything about finite direct sums.
\end{definition} Of course an additive category (with involution) is a $\mathbb{Z}$-category (with involution), just forget the existence of the direct sum of two objects. Given a $\mathbb{Z}$-category $\mathcal{A}$, we can enlarge it to an additive category $\mathcal{A}_{\oplus}$ with a functorial direct sum as follows. The objects in $\mathcal{A}_{\oplus}$ are $n$-tuples $\underline{A} = (A_1,A_2, \ldots, A_n)$ consisting of objects $A_i$ in $\mathcal{A}$ for $i = 1, 2, \ldots, n$ and $n = 0,1,2, \ldots$, where we think of the empty set as $0$-tuple which we denote by $0$. The $\mathbb{Z}$-module of morphisms from $\underline{A} = (A_1, \ldots , A_m)$ to $\underline{B} = (B_1, \ldots , B_n)$ is given by $$\mor_{\mathcal{A}_{\oplus}}(\underline{A},\underline{B}) := \bigoplus_{1 \le i \le m, 1 \le j \le n} \; \mor_{\mathcal{A}}(A_i,B_j).$$ Given a morphism $f\colon \underline{A} \to \underline{B}$, we denote by $f_{i,j} \colon A_i \to B_j$ the component which belongs to $i \in \{1, \ldots, m\}$ and $j \in \{1, \ldots, n\}$. If $\underline{A}$ or $\underline{B}$ is the empty tuple, then $\mor_{\mathcal{A}_{\oplus}}(\underline{A},\underline{B})$ is defined to be the trivial $\mathbb{Z}$-module. The composition of $f \colon \underline{A} \to \underline{B}$ and $g \colon \underline{B} \to \underline{C}$ for objects $\underline{A} = (A_1, \ldots, A_m)$, $\underline{B} = (B_1, \ldots, B_n)$ and $\underline{C} = (C_1, \ldots, C_p)$ is defined by $$(g \circ f)_{i,k} := \sum_{j=1}^n g_{j,k} \circ f_{i,j}.$$ The sum on $\mathcal{A}_{\oplus}$ is defined on objects by sticking the tuples together, i.e., for $\underline{A} = (A_1, \ldots, A_m)$ and $\underline{B} = (B_1, \ldots, B_n)$ define $$\underline{A} \oplus \underline{B} := (A_1, \ldots ,A_m,B_1, \ldots ,B_n).$$ The definition of the sum of two morphisms is now obvious. The zero object is given by the empty tuple $0$. The construction is strictly associative.
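The following guiding example is standard and serves only as an illustration of the construction above. Let $R$ be a ring and let $\mathcal{A}$ be the $\mathbb{Z}$-category with a single object $\ast$ and $\mor_{\mathcal{A}}(\ast,\ast) = R$, composition being given by the multiplication in $R$. An object of $\mathcal{A}_{\oplus}$ is then an $n$-tuple $(\ast, \ldots, \ast)$, which we may think of as the finitely generated based free $R$-module $R^n$, a morphism from an $m$-tuple to an $n$-tuple is an $(m,n)$-matrix $(f_{i,j})$ over $R$, and the composition formula $$(g \circ f)_{i,k} = \sum_{j=1}^n g_{j,k} \circ f_{i,j}$$ becomes matrix multiplication. Thus $\mathcal{A}_{\oplus}$ is a functorial model for the category of finitely generated based free $R$-modules.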
These data define the structure of an additive category with functorial direct sum on $\mathcal{A}_{\oplus}$. Notice that this is more than an additive category since for an additive category the existence of the direct sum of two objects is required but not a functorial model. In the sequel functorial direct sum is always to be understood to be strictly associative, i.e., we have for three objects $A_1$, $A_2$ and $A_3$ the equality $(A_1 \oplus A_2) \oplus A_3 = A_1 \oplus (A_2 \oplus A_3)$ and we will and can omit the brackets from now on in the notation. We have constructed a functor from the category of $\mathbb{Z}$-categories to the category of additive categories with functorial direct sum $$\oplus \colon \matheurm{\mathbb{Z}\text{-}Cat} \to \matheurm{Add\text{-}Cat}_{\oplus}, \quad \mathcal{A} \mapsto \mathcal{A}_{\oplus}.$$ Let $$\forget \colon \matheurm{Add\text{-}Cat}_{\oplus} \to \matheurm{\mathbb{Z}\text{-}Cat}$$ be the forgetful functor. \begin{lemma} \label{lem:adjoint_pair_(oplus,forget)} \begin{enumerate} \item \label{lem:adjoint_pair_(oplus,forget):adjoint} We obtain an adjoint pair of functors $(\oplus,\forget)$. \item \label{lem:adjoint_pair_(oplus,forget):equivalence} We get for every $\mathbb{Z}$-category $\mathcal{A}$ a functor of $\mathbb{Z}$-categories $$Q_{\mathcal{A}} \colon \mathcal{A} \to \forget(\mathcal{A}_{\oplus})$$ which is natural in $\mathcal{A}$. If $\mathcal{A}$ is already an additive category, $Q_{\mathcal{A}}$ is an equivalence of additive categories.
\end{enumerate} \end{lemma} \begin{proof}\ref{lem:adjoint_pair_(oplus,forget):adjoint} We have to construct, for every $\mathbb{Z}$-category $\mathcal{A}$ and every additive category $\mathcal{B}$ with functorial direct sum, mutually inverse maps $$\alpha \colon \func_{\matheurm{Add\text{-}Cat}_{\oplus}}(\mathcal{A}_{\oplus},\mathcal{B}) \to \func_{\matheurm{\mathbb{Z}\text{-}Cat}}(\mathcal{A},\forget(\mathcal{B}))$$ and $$\beta \colon \func_{\matheurm{\mathbb{Z}\text{-}Cat}}(\mathcal{A},\forget(\mathcal{B})) \to \func_{\matheurm{Add\text{-}Cat}_{\oplus}}(\mathcal{A}_{\oplus},\mathcal{B}). $$ Given $F \colon \mathcal{A}_{\oplus} \to \mathcal{B}$, define $\alpha(F) \colon \mathcal{A} \to \mathcal{B}$ to be the composite of $F$ with the obvious inclusion $Q_{\mathcal{A}} \colon \mathcal{A} \to \mathcal{A}_{\oplus}$ which sends $A$ to $(A)$. Given $F \colon \mathcal{A} \to \forget(\mathcal{B})$, define $\beta(F) \colon \mathcal{A}_{\oplus} \to \mathcal{B}$ by sending $(A_1,A_2, \ldots , A_n)$ to $F(A_1) \oplus F(A_2) \oplus \cdots \oplus F(A_n)$. \\[2mm]\ref{lem:adjoint_pair_(oplus,forget):equivalence} We have defined $Q_{\mathcal{A}}$ already above. It is the adjoint of the identity on $\mathcal{A}_{\oplus}$. Obviously $Q_{\mathcal{A}}$ induces a bijection $\mor_{\mathcal{A}}(A,B) \to \mor_{\mathcal{A}_{\oplus}}(Q_{\mathcal{A}}(A),Q_{\mathcal{A}}(B))$ for all objects $A,B \in \mathcal{A}$. Suppose that $\mathcal{A}$ is an additive category. Then every object $(A_1,A_2, \ldots, A_n)$ in $\mathcal{A}_{\oplus}$ is isomorphic to an object in the image of $Q_{\mathcal{A}}$, namely to $Q_{\mathcal{A}}(A_1 \oplus A_2 \oplus \cdots \oplus A_n) = (A_1 \oplus A_2\oplus \cdots \oplus A_n)$. Hence $Q_{\mathcal{A}}$ is an equivalence of additive categories.
\end{proof} \begin{definition}[Additive category with functorial direct sum and involution] \label{def:additive_category_with_oplus_and_inv} An \emph{additive category with functorial sum and involution} is an additive category with (strictly associative) functorial sum $\oplus$ and involution $(I,E)$ which are strictly compatible with one another, i.e., if $A_1$ and $A_2$ are two objects in $\mathcal{A}$, then $I(A_1 \oplus A_2) = I(A_1) \oplus I(A_2)$ and $E(A_1 \oplus A_2) = E(A_1) \oplus E(A_2)$ hold. \end{definition} One easily checks that if the $\mathbb{Z}$-category $\mathcal{A}$ comes with an involution $(I_{\mathcal{A}},E_{\mathcal{A}})$, the additive category $\mathcal{A}_{\oplus}$ constructed above inherits the structure of an additive category with functorial direct sum and involution in the sense of Definition~\ref{def:additive_category_with_oplus_and_inv}. Namely, define \begin{eqnarray*} I_{\mathcal{A}_{\oplus}}\left((A_1,A_2, \ldots , A_n)\right) & = & \left(I_{\mathcal{A}}(A_1), I_{\mathcal{A}}(A_2), \ldots , I_{\mathcal{A}}(A_n)\right); \\ E_{\mathcal{A}_{\oplus}}\left((A_1,A_2, \ldots , A_n)\right) & = & E_{\mathcal{A}}(A_1) \oplus E_{\mathcal{A}}(A_2) \oplus \cdots \oplus E_{\mathcal{A}}(A_n). \end{eqnarray*} We obtain a functor from the category of $\mathbb{Z}$-categories with involution to the category of additive categories with functorial direct sum and involution $$\oplus \colon \matheurm{\mathbb{Z}\text{-}Cat_{\text{inv}}} \to \matheurm{Add\text{-}Cat_{\text{inv}}}_{\oplus}, \quad \mathcal{A} \mapsto \mathcal{A}_{\oplus}.$$ Let $$\forget \colon \matheurm{Add\text{-}Cat_{\text{inv}}}_{\oplus} \to \matheurm{\mathbb{Z}\text{-}Cat_{\text{inv}}}$$ be the forgetful functor. One easily extends the proof of Lemma~\ref{lem:adjoint_pair_(oplus,forget)} to the case with involution.
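As an illustration (again a standard observation): let $R$ be a ring with involution $r \mapsto \overline{r}$ and let $\mathcal{A}$ be the $\mathbb{Z}$-category with one object $\ast$, with $\mor_{\mathcal{A}}(\ast,\ast) = R$ and with the involution given by the identity on the object and by $f \mapsto \overline{f}$ on morphisms. Then the involution on $\mathcal{A}_{\oplus}$ described above fixes each $n$-tuple and sends a morphism $f = (f_{i,j})$ to its conjugate transpose, i.e., $$I_{\mathcal{A}_{\oplus}}(f)_{j,i} = \overline{f_{i,j}}.$$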
\begin{lemma} \label{lem:adjoint_pair_(oplus,forget)_with_inv} \begin{enumerate} \item \label{lem:adjoint_pair_(oplus,forget)_with_inv:adjoint} We obtain an adjoint pair of functors $(\oplus,\forget)$. \item \label{lem:adjoint_pair_(oplus,forget)_with_inv:equivalence} We get for every $\mathbb{Z}$-category with involution $\mathcal{A}$ a functor of $\mathbb{Z}$-categories with involution $$Q_{\mathcal{A}} \colon \mathcal{A} \to \forget(\mathcal{A}_{\oplus})$$ which is natural in $\mathcal{A}$. If $\mathcal{A}$ is already an additive category with involution, then $Q_{\mathcal{A}}$ is an equivalence of additive categories with involution. \end{enumerate} \end{lemma} \begin{definition}\label{def:Z-G-category_with_involution} A \emph{$\mathbb{Z}$-$G$-category with involution} $\mathcal{A}$ is the same as an additive $G$-category with involution in the sense of Definition~\ref{def:additive_G-category_with_involution} except that one forgets about the direct sum. \end{definition} \begin{definition}[Additive $G$-category with functorial sum and (strict) involution] \label{def:additive_G-category_with_oplus_and_inv} An \emph{additive $G$-category with functorial sum and involution} is an additive $G$-category with (strictly associative) functorial sum $\oplus$ and involution $(I,E)$ which are strictly compatible with one another, i.e., we have: \begin{enumerate} \item If $A_1$ and $A_2$ are two objects in $\mathcal{A}$, then $I(A_1 \oplus A_2) = I(A_1) \oplus I(A_2)$ and $E(A_1 \oplus A_2) = E(A_1) \oplus E(A_2)$ hold; \item If $A_1$ and $A_2$ are two objects in $\mathcal{A}$ and $g \in G$, then $R_g(A_1) \oplus R_g(A_2) = R_g(A_1 \oplus A_2)$ holds; \item If $A$ is an object in $\mathcal{A}$, then $I(R_g(A)) = R_g(I(A))$ and $E(R_g(A)) = R_g(E(A))$ hold.
\end{enumerate} If the involution is strict in the sense of Section~\ref{sec:Additive_categories_with_involution}, i.e., $E = \id$ and $I \circ I = \id$, we call $\mathcal{A}$ an \emph{additive $G$-category with functorial sum and strict involution}. Define a $\mathbb{Z}$-$G$-category with (strict) involution analogously, just forget the direct sum. \end{definition} We obtain a functor from the category of $\mathbb{Z}$-$G$-categories with involution to the category of additive categories with functorial direct sum and involution $$\oplus \colon \matheurm{\mathbb{Z}\text{-}G\text{-}Cat_{\text{inv}}} \to \matheurm{Add\text{-}G\text{-}Cat_{\text{inv}}}_{\oplus}, \quad \mathcal{A} \mapsto \mathcal{A}_{\oplus}.$$ Let $$\forget \colon \matheurm{Add\text{-}G\text{-}Cat_{\text{inv}}}_{\oplus} \to \matheurm{\mathbb{Z}\text{-}G\text{-}Cat_{\text{inv}}}$$ be the forgetful functor. One easily extends the proof of Lemma~\ref{lem:adjoint_pair_(oplus,forget)} to the case with $G$-action and involution. \begin{lemma} \label{lem:adjoint_pair_(oplus,forget)_with_inv_and_G} \begin{enumerate} \item \label{lem:adjoint_pair_(oplus,forget)_with_inv_and_G:adjoint} We obtain an adjoint pair of functors $(\oplus,\forget)$. \item \label{lem:adjoint_pair_(oplus,forget))_with_inv_and_G:equivalence} We get for every $\mathbb{Z}$-$G$-category with involution $\mathcal{A}$ a functor of $\mathbb{Z}$-$G$-categories with involution $$Q_{\mathcal{A}} \colon \mathcal{A} \to \forget(\mathcal{A}_{\oplus})$$ which is natural in $\mathcal{A}$. If $\mathcal{A}$ is already an additive $G$-category with involution, then $Q_{\mathcal{A}}$ is an equivalence of additive $G$-categories with involution; \item \label{lem:adjoint_pair_(oplus,forget))_with_inv_and_G:strict} The corresponding definitions and results carry over to the case of strict involutions.
\end{enumerate} \end{lemma} \begin{remark} \label{rem:comparison_int_ast} Given an additive $G$-category $\mathcal{A}$ and a $G$-set $T$, we have constructed the additive $G$-category $\left(\intgf{\mathcal{G}^G(T)}{\mathcal{A} \circ \pr_G}\right)_{\oplus}$. Let $\mathcal{A} \ast_G T$ be the additive $G$-category defined in~\cite[Definition~2.1]{Bartels-Reich(2005)}. We obtain a functor of $\mathbb{Z}$-categories $$\rho(T) \colon \intgf{\mathcal{G}^G(T)}{\mathcal{A} \circ \pr_G} \to \mathcal{A} \ast_G T$$ by sending an object $(x,A)$ to the object $\{B_t \mid t \in T\}$ for which $B_t = A$ if $t = x$ and $B_t = 0$ if $t \not= x$. It induces a functor of additive categories with functorial direct sum $$\rho(T)_{\oplus} \colon \left(\intgf{\mathcal{G}^G(T)}{\mathcal{A} \circ\pr_G}\right)_{\oplus} \to \left(\mathcal{A} \ast_G T\right)_{\oplus}.$$ Recall that we have the functor of $\mathbb{Z}$-categories $$Q_{\mathcal{A} \ast_G T} \colon \mathcal{A} \ast_G T \to \left(\mathcal{A} \ast_G T\right)_{\oplus}.$$ One easily checks that both $\rho(T)_{\oplus}$ and $Q_{\mathcal{A} \ast_G T}$ are equivalences of additive categories and natural in $T$. If $\mathcal{A}$ is an additive $G$-category with strict involution, then we obtain on the source and the target of $\rho(T)_{\oplus}$ and of $Q_{\mathcal{A} \ast_G T}$ strict involutions such that both $\rho(T)_{\oplus}$ and $Q_{\mathcal{A} \ast_G T}$ are equivalences of additive categories with strict involution. This implies that the $G$-homology theories constructed for $K$- and $L$-theory here and in~\cite[Definition~2.1]{Bartels-Reich(2005)} are naturally isomorphic and lead to isomorphic assembly maps.
\end{remark} \typeout{------------------ Section 11: $G$-homology theories and restriction ----------} \section{$G$-homology theories and restriction} \label{sec:G-homology_theories_and_restriction} Fix a functor \begin{eqnarray} & {\mathbf E} \colon \matheurm{Add\text{-}Cat_{\text{inv}}} \to \matheurm{Spectra} & \label{bfE_addcatinf_to_spectra} \end{eqnarray} which sends weak equivalences of additive categories with involution to weak homotopy equivalences of spectra. We call it \emph{compatible with direct sums} if for any family of additive categories with involution $\{\mathcal{A}_i\mid i \in I\}$ the map induced by the canonical inclusions $\mathcal{A}_i \to \bigoplus_{i\in I} \mathcal{A}_i$ for $i \in I$ $$\bigvee_{i \in I} {\mathbf E}(\mathcal{A}_i) \to {\mathbf E}\left(\bigoplus_{i\in I} \mathcal{A}_i\right)$$ is a weak homotopy equivalence of spectra. \begin{example} \label{rem:K-and_L_are_compatible_with_direct_sums} The most important examples of ${\mathbf E}$ for us are the functor which sends an additive category $\mathcal{A}$ to its non-connective algebraic $K$-theory spectrum ${\mathbf K}_{\mathcal{A}}$ in the sense of Pedersen-Weibel~\cite{Pedersen-Weibel(1985)}, and the functor which sends an additive category with involution $\mathcal{A}$ to its algebraic $L^{-\infty}$-spectrum ${\mathbf L}^{-\infty}_{\mathcal{A}}$ in the sense of Ranicki (see~\cite{Ranicki(1988)}, \cite{Ranicki(1992)} and~\cite{Ranicki(1992a)}). Both functors send weak equivalences to weak homotopy equivalences and are compatible with direct sums. The latter follows from the fact that they are compatible with finite direct sums and compatible with directed colimits. This is proven for rings in~\cite[Lemma~5.2]{Bartels-Echterhoff-Lueck(2007colim)}; the proof carries over to additive categories with involution.
\end{example} Given a $G$-$CW$-complex $X$ and a group homomorphism $\phi \colon K \to G$, let $\phi^* X$ be the $K$-$CW$-complex obtained from $X$ by restriction with $\phi$. Given a $K$-homology theory $\mathcal{H}_*^K$, we obtain a $G$-homology theory by sending a $G$-$CW$-complex $X$ to $\mathcal{H}^K_*(\phi^* X)$. Recall that we have assigned to an additive $G$-category $\mathcal{A}$ with involution a $G$-homology theory $H_*^G(-;{\mathbf E}_{\mathcal{A}})$ in~\eqref{HG_ast(-;bfEG_cala)}. The main result of this section is \begin{theorem} \label{the:homology_theories-and_restriction} Suppose that the functor ${\mathbf E}$ of~\eqref{bfE_addcatinf_to_spectra} is compatible with direct sums. Let $\phi \colon K \to G$ be a group homomorphism. Let $\mathcal{A}$ be a $\mathbb{Z}$-$K$-category with involution in the sense of Definition~\ref{def:Z-G-category_with_involution}. Let $\ind_{\phi} \mathcal{A}$ be the $\mathbb{Z}$-$G$-category with involution defined in~\eqref{ind_phi_cala}. Then there is a natural equivalence of $G$-homology theories $$\tau_* \colon H^K_*\bigl(\phi^*(-);{\mathbf E}_{\mathcal{A}_{\oplus}}\bigr) \xrightarrow{\cong} H^G_*\bigl(-;{\mathbf E}_{(\ind_{\phi} \mathcal{A})_{\oplus}}\bigr).$$ \end{theorem} Its proof needs some preparation. Given a contravariant functor $F \colon \mathcal{G} \to \matheurm{Add\text{-}Cat_{\text{inv}}}$ from a groupoid into the category $\matheurm{Add\text{-}Cat_{\text{inv}}}$ of additive categories with involution, we have defined an additive category with involution $\intgf{\mathcal{G}}{F}$ in~\eqref{int_calg_F,T}, provided that $\mathcal{G}$ is connected. We want to drop the assumption that $\mathcal{G}$ is connected. The connectedness of $\mathcal{G}$ was only used in the construction of the direct sum of two objects in $\intgf{\mathcal{G}}{F}$. Hence everything goes through if we restrict ourselves to the construction of $\mathbb{Z}$-categories with involution.
Namely, if we drop the connectivity assumption on $\mathcal{G}$, all constructions and all the functoriality properties explained in Section~\ref{sec:Connected_groupoids_and_additive_categories_with_involutions} remain true if we work within the category $\matheurm{\mathbb{Z}\text{-}Cat_{\text{inv}}}$ instead of $\matheurm{Add\text{-}Cat_{\text{inv}}}$. Let $G$ and $K$ be groups. Consider a (left) $K$-set $\xi$ and a $K$-$G$-biset $\eta$. Then $G$ acts from the right on the transport groupoid $\mathcal{G}^K(\eta)$. Namely, for an element $g \in G$ the map $R_g \colon \eta \to \eta, \; x \mapsto xg$ is $K$-equivariant and induces a functor $\mathcal{G}^K(R_g) \colon \mathcal{G}^K(\eta) \to \mathcal{G}^K(\eta)$. Consider a $K$-$\mathbb{Z}$-category with involution $\mathcal{A}$. Let $\pr_K \colon \mathcal{G}^K(\eta) \to \mathcal{G}^K(K/K) = K$ be the functor induced by the projection $\eta \to K/K$. Then $\mathcal{A} \circ \pr_K$ is a contravariant functor $\mathcal{G}^K(\eta) \to \matheurm{\mathbb{Z}\text{-}Cat_{\text{inv}}}$. We obtain a $\mathbb{Z}$-category with involution $\intgf{\mathcal{G}^K(\eta)}{\mathcal{A} \circ \pr_K}$ (compare~\eqref{int_calg_F,T}). Given $g \in G$, the functor $\mathcal{G}^K(R_g) \colon \mathcal{G}^K(\eta) \to \mathcal{G}^K(\eta)$ induces a functor of $\mathbb{Z}$-categories with involution (compare~\eqref{(W_ast,id)}) $$\mathcal{G}^K(R_g)_* \colon \intgf{\mathcal{G}^K(\eta)}{\mathcal{A} \circ \pr_K} = \intgf{\mathcal{G}^K(\eta)}{\mathcal{A} \circ \pr_K \circ \mathcal{G}^K(R_g)} \to \intgf{\mathcal{G}^K(\eta)}{\mathcal{A} \circ \pr_K},$$ which strictly commutes with the involution. Thus $\intgf{\mathcal{G}^K(\eta)}{\mathcal{A} \circ \pr_K}$ becomes a $\mathbb{Z}$-$G$-category with involution in the sense of Definition~\ref{def:Z-G-category_with_involution}.
We conclude that $\left(\intgf{\mathcal{G}^K(\eta)}{\mathcal{A} \circ \pr_K}\right) \circ \pr_G$ is a contravariant functor $\mathcal{G}^G(\xi) \to \matheurm{\mathbb{Z}\text{-}Cat_{\text{inv}}}$. We obtain a $\mathbb{Z}$-category with involution (compare~\eqref{int_calg_F,T}) $$\intgf{\mathcal{G}^G(\xi)}{\left(\intgf{\mathcal{G}^K(\eta)}{\mathcal{A} \circ \pr_K}\right) \circ \pr_G}.$$ Consider $\eta \times \xi$ as a left $G \times K$-set by $(g,k) \cdot (y,x) = (kyg^{-1},gx)$. Then $\mathcal{A} \circ \pr_{G \times K}$ is a contravariant functor $\mathcal{G}^{G \times K}(\eta \times \xi) \to \matheurm{\mathbb{Z}\text{-}Cat_{\text{inv}}}$. We obtain a $\mathbb{Z}$-category with involution (compare~\eqref{int_calg_F,T}) $$\intgf{\mathcal{G}^{G \times K}(\eta \times \xi)}{\mathcal{A} \circ \pr_{G\times K}}.$$ \begin{lemma} \label{lem:indent:int_G_times_K_and_int_G_int_K} There is an isomorphism of $\mathbb{Z}$-categories with involution $$\omega \colon \intgf{\mathcal{G}^G(\xi)}{\left(\intgf{\mathcal{G}^K(\eta)}{\mathcal{A} \circ \pr_K}\right) \circ \pr_G} \xrightarrow{\cong} \intgf{\mathcal{G}^{G \times K}(\eta \times \xi)}{\mathcal{A} \circ \pr_{G\times K}}$$ which is natural in both $\xi$ and $\eta$. \end{lemma} \begin{proof} An object in $\intgf{\mathcal{G}^G(\xi)}{\left(\intgf{\mathcal{G}^K(\eta)}{\mathcal{A} \circ \pr_K}\right) \circ \pr_G}$ is given by $(x,(y,A))$, where $x \in \xi$ is an object in $\mathcal{G}^G(\xi)$ and $(y,A)$ is an object in $\intgf{\mathcal{G}^K(\eta)}{\mathcal{A} \circ \pr_K}$ which is given by an object $y \in \eta$ in $\mathcal{G}^K(\eta)$ and an object $A$ in $\mathcal{A}$. The object $(x,(y,A))$ is sent under $\omega$ to the object $((y,x),A)$ given by the object $(y,x)$ in $\mathcal{G}^{G \times K}(\eta \times \xi)$ and the object $A \in \mathcal{A}$.
A morphism $\phi$ in $\intgf{\mathcal{G}^G(\xi)}{\left(\intgf{\mathcal{G}^K(\eta)}{\mathcal{A} \circ \pr_K}\right) \circ \pr_G}$ from $(x_1,(y_1,A_1))$ to $(x_2,(y_2,A_2))$ is given by $g \cdot \psi$ for a morphism $g \colon x_1 \to x_2$ in $\mathcal{G}^G(\xi)$ and a morphism $\psi \colon (y_1,A_1) \to \mathcal{G}^K(R_g)_*(y_2,A_2)$. The morphism $\psi$ itself is given by $k \cdot \nu$ for a morphism $k \colon y_1 \to y_2g$ in $\mathcal{G}^K(\eta)$ and a morphism $\nu \colon A_1 \to r_k(A_2)$ in $\mathcal{A}$. Define the image of $\phi$ under $\omega$ to be the morphism in $\intgf{\mathcal{G}^{G \times K}(\eta \times \xi)}{\mathcal{A} \circ \pr}$ given by the morphism $(g,k) \colon (y_1,x_1) \to (y_2,x_2)$ in $\mathcal{G}^{G \times K}(\eta \times \xi)$ and the morphism $\nu \colon A_1 \to r_k(A_2)$. This makes sense since $r_k(A_2)$ is the image of $A_2$ under the functor $\mathcal{A} \circ \pr(g,k)$. One easily checks that $\omega$ is an isomorphism of $\mathbb{Z}$-categories with involution and natural with respect to $\xi$ and $\eta$. \end{proof} Let $\phi \colon K \to G$ be a group homomorphism and $\xi$ be a $G$-set. Let $\phi^* \xi$ be the $K$-set obtained from the $G$-set $\xi$ by restriction with $\phi$. Consider a $K$-$\mathbb{Z}$-category with involution $\mathcal{A}$ in the sense of Definition~\ref{def:Z-G-category_with_involution}. Let $\phi^* G$ be the $K$-$G$-biset for which multiplication with $(k,g) \in K \times G$ sends $x \in G$ to $\phi(k)xg^{-1}$. We have explained above how $\intgf{\mathcal{G}^K(\phi^*G)}{\mathcal{A}}$ can be considered as a $\mathbb{Z}$-$G$-category with involution. We will denote it by \begin{eqnarray} \ind_{\phi} \mathcal{A} & := & \intgf{\mathcal{G}^K(\phi^*G)}{\mathcal{A}}.
\label{ind_phi_cala} \end{eqnarray} \begin{lemma} \label{lem:int_phiast_xi_A_is_int_xi_ind_A} For every $G$-set $\xi$ there is a natural equivalence of $\mathbb{Z}$-categories with involution $$\tau \colon \intgf{\mathcal{G}^G(\xi)}{\ind_{\phi} \mathcal{A}} \xrightarrow{\simeq} \intgf{\mathcal{G}^K(\phi^*\xi)}{\mathcal{A} \circ \pr_K}.$$ It is natural in $\xi$. \end{lemma} \begin{proof} Because of Lemma~\ref{lem:indent:int_G_times_K_and_int_G_int_K} it suffices to construct a natural equivalence $$\tau \colon \intgf{\mathcal{G}^{G \times K}(\phi^* G \times \xi)}{\mathcal{A} \circ \pr} \xrightarrow{\simeq} \intgf{\mathcal{G}^K(\phi^*\xi)}{\mathcal{A}\circ \pr_K}.$$ Consider the functor $$W \colon \mathcal{G}^{G \times K}(\phi^* G \times \xi) \to \mathcal{G}^K(\phi^*\xi)$$ sending an object $(x,y) \in G \times \xi$ in $\mathcal{G}^{G \times K}(\phi^* G \times \xi)$ to the object $xy \in \xi$ in $\mathcal{G}^K(\phi^*\xi)$ and a morphism $(g,k) \colon (x_1,y_1) \to (x_2,y_2)$ to the morphism $k \colon x_1y_1 \to x_2y_2$. Now define $\tau$ to be $$W_* \colon \intgf{\mathcal{G}^{G \times K}(\phi^* G \times \xi)}{\mathcal{A} \circ \pr} = \intgf{\mathcal{G}^{G \times K}(\phi^* G \times \xi)}{\mathcal{A} \circ \pr_K \circ W } \xrightarrow{\simeq} \intgf{\mathcal{G}^K(\phi^*\xi)}{\mathcal{A} \circ \pr_K}$$ (see~\eqref{(W_ast,id)}). Since $W$ is a weak equivalence of groupoids, $\tau$ is a weak equivalence of $\mathbb{Z}$-categories with involution by Lemma~\ref{lem:(F_1)_ast_and_int_calg_S_and_equivalences}~\ref{lem:(F_1)_ast_and_int_calg_S_and_equivalences:(F_1)_ast}. One easily checks that this construction is natural in $\xi$. \end{proof} Now we can give the proof of Theorem~\ref{the:homology_theories-and_restriction}.
\begin{proof} In the sequel we write ${\mathbf E}_{\oplus} \colon \matheurm{\mathbb{Z}\text{-}Cat_{\text{inv}}} \to \matheurm{Spectra}$ for the composite of the functor ${\mathbf E}$ of~\eqref{bfE_addcatinf_to_spectra} and the functor $\matheurm{\mathbb{Z}\text{-}Cat_{\text{inv}}} \to \matheurm{Add\text{-}Cat_{\text{inv}}}$ sending $\mathcal{A}$ to $\mathcal{A}_{\oplus}$. Given a $G$-$CW$-complex $X$, we have to define a weak equivalence of spectra \begin{multline*} \map_K(K/?,\phi^*X)_+ \wedge_{\OrG{K}} {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^K(K/?)}{\mathcal{A} \circ \pr_K} \right) \\ \to \map_G(G/?,X)_+ \wedge_{\OrG{G}} {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^G(G/?)}{\ind_{\phi}\mathcal{A} \circ \pr_G} \right). \end{multline*} The left hand side can be rewritten as \begin{eqnarray*} \lefteqn{\map_K(K/?,\phi^*X)_+ \wedge_{\OrG{K}} {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^K(K/?)}{\mathcal{A} \circ \pr_K}\right)} & & \\ & = & \map_G(\phi_*(K/?),X)_+ \wedge_{\OrG{K}} {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^K(K/?)}{\mathcal{A} \circ \pr_K}\right) \\ & = & \map_G(G/?,X)_+ \wedge_{\OrG{G}} \map_G(\phi_*(K/??),G/?)_+ \wedge_{\OrG{K}} {\mathbf E}_{\oplus}\left( \intgf{\mathcal{G}^K(K/??)}{\mathcal{A} \circ \pr_K}\right) \\ & = & \map_G(G/?,X)_+ \wedge_{\OrG{G}} \map_K(K/??,\phi^*G/?)_+ \wedge_{\OrG{K}} {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^K(K/??)}{\mathcal{A} \circ \pr_K}\right). \end{eqnarray*} Because of Lemma~\ref{lem:int_phiast_xi_A_is_int_xi_ind_A} the right hand side can be identified with \begin{eqnarray*} \lefteqn{\map_G(G/?,X)_+ \wedge_{\OrG{G}} {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^G(G/?)}{\ind_{\phi}\mathcal{A} \circ \pr_G}\right)} & & \\ & = & \map_G(G/?,X)_+ \wedge_{\OrG{G}} {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^K(\phi^*G/?)}{\mathcal{A} \circ \pr_K}\right).
\end{eqnarray*} Hence we need to construct for every $K$-set $\xi$ a weak homotopy equivalence, natural in $\xi$, $$ \rho(\xi) \colon \map_K(K/??,\xi)_+ \wedge_{\OrG{K}} {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^K(K/??)}{\mathcal{A} \circ \pr_K}\right) \to {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^K(\xi)}{\mathcal{A} \circ \pr_K} \right). $$ The map $\rho(\xi)$ sends an element in the source given by $(\phi,z)$ for a $K$-map $\phi \colon K/?? \to \xi$ and $z \in {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^K(K/??)}{\mathcal{A} \circ \pr_K}\right)$ to ${\mathbf E}_{\oplus}\left(\mathcal{G}^K(\phi)_*\right)(z)$, where $$\mathcal{G}^K(\phi)_* \colon \intgf{\mathcal{G}^K(K/??)}{\mathcal{A} \circ \pr_K} = \intgf{\mathcal{G}^K(K/??)}{\mathcal{A} \circ \pr_K \circ \mathcal{G}^K(\phi)} \to \intgf{\mathcal{G}^K(\xi)}{\mathcal{A} \circ \pr_K}$$ has been defined in~\eqref{(W_ast,id)}. Obviously it is natural in $\xi$ and is an isomorphism if $\xi$ is a $K$-orbit. For a family of $K$-sets $\{\xi_i \mid i \in I\}$ there is a natural isomorphism of spectra \begin{multline*} \bigvee_{i \in I} \left(\map_K(K/??,\xi_i)_+ \wedge_{\OrG{K}} {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^K(K/??)}{\mathcal{A} \circ \pr_K}\right)\right) \\ \xrightarrow{\cong} \map_K\left(K/??,\coprod_{i \in I} \xi_i\right)_+ \wedge_{\OrG{K}} {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^K(K/??)}{\mathcal{A} \circ \pr_K}\right). \end{multline*} We have \begin{eqnarray*} \coprod_{i \in I} \mathcal{G}^K(\xi_i) & \cong &\mathcal{G}^K\left(\coprod_{i \in I} \xi_i\right); \\ \coprod_{i \in I} \intgf{\mathcal{G}^K(\xi_i)}{\mathcal{A} \circ \pr_K} & \cong & \intgf{\coprod_{i \in I} \mathcal{G}^K(\xi_i)}{\mathcal{A} \circ \pr_K}; \\ \bigoplus_{i \in I} \left(\intgf{\mathcal{G}^K(\xi_i)}{\mathcal{A} \circ \pr_K}\right)_{\oplus} & \cong & \left(\coprod_{i \in I} \intgf{\mathcal{G}^K(\xi_i)}{\mathcal{A} \circ \pr_K}\right)_{\oplus}. \end{eqnarray*} By assumption ${\mathbf E}$ is compatible with direct sums.
Hence we obtain a weak equivalence $$\bigvee_{i \in I} {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^K(\xi_i)}{\mathcal{A} \circ \pr_K}\right) \xrightarrow{\simeq} {\mathbf E}_{\oplus}\left(\intgf{\mathcal{G}^K\left(\coprod_{i \in I} \xi_i\right)}{\mathcal{A} \circ \pr_K}\right).$$ We conclude that $\rho\left(\coprod_{i \in I} \xi_i\right)$ is a weak homotopy equivalence if and only if $\bigvee_{i \in I} \rho(\xi_i)$ is a weak homotopy equivalence. Since a $K$-set is the disjoint union of its $K$-orbits and a wedge of weak homotopy equivalences of spectra is again a weak homotopy equivalence, $\rho(\xi)$ is a weak homotopy equivalence for every $K$-set $\xi$. This finishes the proof of Theorem~\ref{the:homology_theories-and_restriction}. \end{proof} \typeout{------------------ Section 12: Proof of the main theorems ----------} \section{Proof of the main theorems} \label{sec:Proof_of_the_main_theorems} In this section we can finally give the proofs of Theorem~\ref{the:FJC_for_crossed_products}, Theorem~\ref{the:fibered_versus_unfibered} and Theorem~\ref{the:strict_coefficients}. \begin{proof}[Proof of Theorem~\ref{the:FJC_for_crossed_products}] This follows from Lemma~\ref{lem:(F_1,T_1)_ast_and_int_calg_S_and_equivalences} and Lemma~\ref{lem:int_G_R_FGF(R)_c,tau_w_is_R_ast_c,tau,w_G-FGF}. \end{proof} \begin{proof}[Proof of Theorem~\ref{the:fibered_versus_unfibered}] Let $\phi \colon K \to G$ be a group homomorphism and let $\mathcal{B}$ be an additive $K$-category with involution. We have to show that the following assembly map is bijective \begin{eqnarray*} & \asmb^{K,\mathcal{B}}_n \colon H_n^K(\EGF{K}{\phi^*{\mathcal{VC}\text{yc}}};{\mathbf L}_{\mathcal{B}}) \to H_n^K(\pt;{\mathbf L}_{\mathcal{B}}) = L_n\left(\intgf{K}{\mathcal{B}}\right).
& \end{eqnarray*} Since $\phi^*\EGF{G}{{\mathcal{VC}\text{yc}}}$ is a model for $\EGF{K}{\phi^*{\mathcal{VC}\text{yc}}}$, this follows from the commutative diagram \begin{eqnarray*} &\xymatrix@!C=19em{ H_n^K(\EGF{K}{\phi^*{\mathcal{VC}\text{yc}}};{\mathbf L}_{\mathcal{B}}) \ar[d]^-{H^K_n(\id;{\mathbf L}_{Q_{\mathcal{B}}})}_-{\cong} \ar[r]^-{H_n^K(\pr;{\mathbf L}_{\mathcal{B}}) } & H_n^K(\pt;{\mathbf L}_{\mathcal{B}}) = L_n\left(\intgf{K}{\mathcal{B}}\right) \ar[d]^-{H_n^K(\id;{\mathbf L}_{Q_{\mathcal{B}}})}_-{\cong} \\ H_n^K(\EGF{K}{\phi^*{\mathcal{VC}\text{yc}}};{\mathbf L}_{\mathcal{B}_{\oplus}}) \ar[d]^-{\tau_n^{\phi}(\EGF{G}{{\mathcal{VC}\text{yc}}})}_-{\cong} \ar[r]^-{H_n^K(\pr;{\mathbf L}_{\mathcal{B}_{\oplus}}) } & H_n^K(\pt;{\mathbf L}_{\mathcal{B}_\oplus}) = L_n\left(\intgf{K}{\mathcal{B}_{\oplus}}\right) \ar[d]^-{\tau_n^{\phi}(\pt)}_-{\cong} \\ H_n^G\bigl(\EGF{G}{{\mathcal{VC}\text{yc}}};{\mathbf L}_{(\ind_{\phi} \mathcal{B})_{\oplus}}\bigr) \ar[r]^-{H_n^G\bigl(\pr;{\mathbf L}_{(\ind_{\phi} \mathcal{B})_{\oplus}}\bigr)} & H_n^G\bigl(\pt;{\mathbf L}_{(\ind_{\phi} \mathcal{B})_{\oplus}}\bigr) = L_n\left(\intgf{G}{(\ind_{\phi} \mathcal{B})_{\oplus}}\right) } & \end{eqnarray*} where $\pr$ denotes the projection onto the one-point-space $\pt$ and $Q_{\mathcal{B}} \colon \mathcal{B} \to \mathcal{B}_{\oplus}$ is the natural equivalence coming from Lemma~\ref{lem:adjoint_pair_(oplus,forget)_with_inv_and_G} and the vertical arrows are isomorphisms because of Lemma~\ref{lem:cala_to_calb} and Theorem~\ref{the:homology_theories-and_restriction}. \end{proof} \begin{proof}[Proof of Theorem~\ref{the:strict_coefficients}] Given an additive category with involution $\mathcal{A}$, we can consider it as an additive category with $(\mathbb{Z}/2,v)$-operation as explained in Example~\ref{exa:additive_categories_with_involution}.
If we apply Lemma~\ref{lem:adjoint_pair_(cals,forget)}, we obtain an additive category with strict involution $\mathcal{S}^{\mathbb{Z}/2}(\mathcal{A})$ together with a weak equivalence of additive categories with involutions $$P_{\mathcal{A}} \colon \mathcal{A} \to \mathcal{S}^{\mathbb{Z}/2}(\mathcal{A}).$$ If $\mathcal{A}$ is an additive $G$-category with involution in the sense of Definition~\ref{def:additive_G-category_with_involution}, then $P_{\mathcal{A}}$ is an equivalence of additive $G$-categories with involution. If we apply Lemma~\ref{lem:adjoint_pair_(oplus,forget)_with_inv_and_G}, we obtain an additive $G$-category with functorial direct sum and strict involution $\mathcal{S}^{\mathbb{Z}/2}(\mathcal{A})_{\oplus}$ in the sense of Definition~\ref{def:additive_G-category_with_oplus_and_inv} and an equivalence of additive $G$-categories with strict involution $$Q_{\mathcal{S}^{\mathbb{Z}/2}(\mathcal{A})} \colon \mathcal{S}^{\mathbb{Z}/2}(\mathcal{A}) \to \mathcal{S}^{\mathbb{Z}/2}(\mathcal{A})_{\oplus}.$$ The composite $$f:= Q_{\mathcal{S}^{\mathbb{Z}/2}(\mathcal{A})} \circ P_{\mathcal{A}} \colon \mathcal{A} \to \mathcal{S}^{\mathbb{Z}/2}(\mathcal{A})_{\oplus}$$ is a weak equivalence of additive $G$-categories with involution.
Now the claim follows from the following commutative diagram \begin{eqnarray*} &\xymatrix@!C=19em{ H_n^G(\EGF{G}{{\mathcal{VC}\text{yc}}};{\mathbf L}_{\mathcal{A}}) \ar[d]^-{H_n^G(\id;{\mathbf L}_{f})}_-{\cong} \ar[r]^-{H_n^G(\pr;{\mathbf L}_{\mathcal{A}}) } & H_n^G(\pt;{\mathbf L}_{\mathcal{A}}) = L_n\left(\intgf{G}{\mathcal{A}}\right) \ar[d]^-{H_n^G(\id;{\mathbf L}_{f})}_-{\cong} \\ H_n^G\bigl(\EGF{G}{{\mathcal{VC}\text{yc}}};{\mathbf L}_{\mathcal{S}^{\mathbb{Z}/2}(\mathcal{A})_{\oplus}}\bigr) \ar[r]^-{H_n^G\bigl(\pr;{\mathbf L}_{\mathcal{S}^{\mathbb{Z}/2}(\mathcal{A})_{\oplus}}\bigr)} & H_n^G\bigl(\pt;{\mathbf L}_{\mathcal{S}^{\mathbb{Z}/2}(\mathcal{A})_{\oplus}}\bigr) = L_n\left(\intgf{G}{\mathcal{S}^{\mathbb{Z}/2}(\mathcal{A})_{\oplus}}\right) } & \end{eqnarray*} whose vertical arrows are isomorphisms by Lemma~\ref{lem:cala_to_calb}. \end{proof} \typeout{-------------------------------- References ---------------------------------}
\section{The tensions} \label{sec:Intro} The current concordance cosmological model consisting of a cosmological constant ($\Lambda$) and cold dark matter (CDM) has been remarkably successful in explaining cosmological observables at both high and low redshift \citep{planck15,alam16,Betoule:2014frx, Hildebrandt16,Aghanim:2018eyx}. However, within this $\Lambda$CDM model, some tensions between datasets have emerged that merit attention. One is the ``$H_0$ tension'', which is a mismatch between the direct measurement of the present expansion rate, or Hubble constant, and the value inferred from observations of the cosmic microwave background (CMB)~\cite{Riess:2016jrr,riess19,Aghanim:2018eyx}. The second is the ``$\sigma_8$ tension'', which is a discrepancy between the RMS of the linear matter density field ($\sigma(R,z)$) on $8 \, h^{-1} \, {\rm Mpc}$ scales at redshift $z=0$ ($\sigma_8$) inferred from the CMB and measured by Sunyaev-Zel'dovich (SZ) cluster counts (e.g.~\citep{sz2014,sz2016,Bocquet:2018ukq}). It should be noted that this tension depends on the adopted calibration of the SZ flux to cluster halo mass \cite{Pan:2018zha}, which is still uncertain. There are also indications that the value of $S_8=\sigma_8 \Omega_{\rm m}^{0.5}$, where $\Omega_{\rm m}$ is the matter density relative to the critical density today, as measured through weak gravitational lensing tomography is in tension with the inference from the CMB (e.g.~\cite{heymans13,joudaki16a,Hildebrandt16,joudaki16b,Kohlinger:2017sxk,Joudaki:2017zdt,vanUitert:2017ieu,Troxel:2017xyo,Abbott:2017wau,Hikage:2018qbn}). In addition to the discordance between the direct measurement of $H_0$ and the CMB, there is also a discordance between the distances calibrated by them. 
The supernova (SNe) distances calibrated by the local $H_0$ measurement do not agree at $z\simeq 0.5$ with the distances inferred from the baryon acoustic oscillation (BAO) feature in the correlation function of luminous red galaxies (LRGs)~\cite{Beutler:2016ixs} calibrated by the CMB~\cite{Aylor:2018drw}. One way to bring the SNe and LRG distances into agreement is if the true $r_{\rm drag}$ is smaller than the value from the $\Lambda$CDM fit to the Planck data~\cite{Aylor:2018drw}. Though these disagreements could be due to unknown systematic uncertainties in the measurements, an interesting possibility is that these tensions point to new physics. This point of view has merit because $H_0$ and $\sigma_8$ obtained from the CMB are derived parameters calculated from a model-dependent projection over three orders of magnitude in the scale factor, $a$. One way to evolve the Universe to low redshift in a manner different from $\Lambda$CDM is to relax the requirement that the dark energy is a cosmological constant. Evolving dark energy is often quantified by the Chevallier-Polarski-Linder parametrization \cite{Chevallier:2000qy,Linder:2002et}, but more dramatic changes are possible. For instance, scalar-tensor theories satisfying the LIGO/Virgo bound on the propagation speed of gravitational waves \cite{Monitor:2017mdv} can include a dark energy component with equation of state that varies rapidly with redshift \cite{Kase:2018iwp,DeFelice:2010pv}. An alternative possibility is that new physics at or before last scattering gives rise to a larger expansion rate in the early Universe and smaller sound horizon at the drag epoch, $r_{\rm drag}$, compared to $\Lambda$CDM. This shifts the CMB prediction and the low-redshift (BAO) features to better agree with the measured value of $H_0$ \cite{Bernal:2016gxb,Aylor:2018drw,Poulin:2018cxd}. The difficulty in this strategy comes from the fact that the CMB anisotropies are precisely measured~\cite{Aghanim:2018eyx}. 
It is difficult to modify the distance scales away from $\Lambda$CDM predictions without spoiling its successful predictions for the temperature and polarization anisotropies \cite{Bernal:2016gxb}. Models with new physics at $z>1000$ typically add one or more extra degrees of freedom. Three possibilities that have been studied are dark radiation \cite{Bernal:2016gxb}, strongly interacting massive neutrinos \cite{Kreisch:2019yzn}, and early dark energy \cite{Poulin:2018cxd}. By adding extra degrees of freedom, any predictions for low-redshift quantities from these models will be more uncertain relative to the $\Lambda$CDM prediction, and currently proposed modifications at $z>1000$ use this reduction in significance to alleviate the $H_0$ tension \cite{Joudaki:2017zhq,Bernal:2016gxb,Poulin:2018cxd,Kreisch:2019yzn}. However, should either the CMB polarization power spectrum or $H_0$ measurements become more precise while maintaining the same central values, the tension would reemerge. An appealing aspect of these high-$z$ modifications that lead to smaller $r_{\rm drag}$ is that they can make $z<1$ distance measurements consistent with each other~\cite{Aylor:2018drw}. Our work in this paper asks the complementary question of how these tensions may be alleviated if the expansion rate at last scattering and $r_{\rm drag}$ are unchanged from the $\Lambda$CDM inferences. In this case, the path forward is to explore the uncertainty in the dark energy density evolution. We do so in a model-independent (Gaussian process (GP) regression) framework, and with a parametric model (Transitional Dark Energy (TDE) model) for dark energy evolution. Our approach highlights the interesting possibility of evolving dark energy models that alleviate the $H_0$ tension and predict a slower growth of structure at late times~\cite{joudaki16b}.
\section{Model-independent expansion history}\label{sec:GP} We use a GP regression to infer the expansion history of the Universe following the procedure used in our previous work~\cite[][hereafter J18]{Joudaki:2017zhq}. The repository for that code is archived here~\cite{gphistdoi}. The present analysis differs from J18 in that we forecast results assuming a precision measurement of the Hubble constant at the 1\% level, a remarkable feat that could be achieved in the near future. This may be possible through better calibration of Cepheids with Gaia \cite{Freedman:2017yms,2012arXiv1202.4459S,Riess:2016jrr,2018ApJ...855..136R}, through a standard siren technique with LIGO and Virgo \cite{Chen:2017rfc}, via Population~II distance indicators with Gaia \cite{Beaton:2016nsw}, or via strong lensing time-delay measurements by the H0LiCOW collaboration \cite{Birrer:2018vtm}. As in J18, we condition the GP regression using direct measurements of the Hubble distance $D_H(z) = c/H(z)$, as well as indirect constraints on $D_H$ from measurements of the angular diameter distance $D_A(z) = D_C(z) / (1+z)$, and the luminosity distance $D_L(z) = (1+z)D_C(z)$, where $D_C(z) = \int_0^z D_H(z') \, dz'$. Throughout this work, we assume spatial flatness. We divide out a fiducial expansion history, $D_H^0$, based on the best-fit Planck+WP flat $\Lambda$CDM cosmology from Ade~et~al.~(2013)~\cite{planck13} (the differences between the 2013 and 2018 Planck results are small, and do not noticeably change the GP regression results) with $H_0 = 67.04~{\rm km}~{\rm s}^{-1}~{\rm Mpc}^{-1}$, present matter density $\Omega_{\rm m} = 0.3183$, present dark energy density $\Omega_{\rm DE} = 0.6817$, effective number of neutrinos $N_{\rm eff} = 3.046$, and one massive neutrino species with mass $m_\nu = 0.06$ eV.
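The distance definitions above are straightforward to evaluate numerically. The following is an illustrative sketch (not the analysis code) for the fiducial flat $\Lambda$CDM parameters quoted in the text; it neglects the small radiation and neutrino-mass contributions to $H(z)$:

```python
import numpy as np
from scipy.integrate import quad

# Fiducial flat LambdaCDM parameters quoted in the text (Planck+WP 2013).
C_KM_S = 299792.458  # speed of light [km/s]
H0 = 67.04           # [km/s/Mpc]
OMEGA_M = 0.3183
OMEGA_DE = 0.6817

def hubble(z):
    """H(z) for flat LCDM, ignoring radiation and neutrino-mass details."""
    return H0 * np.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_DE)

def D_H(z):
    """Hubble distance c/H(z) in Mpc."""
    return C_KM_S / hubble(z)

def D_C(z):
    """Comoving distance D_C(z) = int_0^z D_H(z') dz' (flat universe)."""
    val, _ = quad(lambda zp: D_H(zp), 0.0, z)
    return val

def D_A(z):
    """Angular diameter distance."""
    return D_C(z) / (1.0 + z)

def D_L(z):
    """Luminosity distance."""
    return (1.0 + z) * D_C(z)
```

With these definitions the Etherington relation $D_L = (1+z)^2 D_A$ holds identically, which is a useful consistency check.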
We define a GP for $\gamma(z)=\ln(D_H(z)/D_H^0(z))$ with zero mean and covariance function \begin{equation} \langle \gamma(z_1)\gamma(z_2) \rangle = h^2 \exp(-(s(z_1) - s(z_2))^2/(2\sigma^2)), \end{equation} where the evolution variable $s(z)$ is taken to be $s(z) = \ln(1+z)/\ln(1+z_*)$, and $z_* = 1090.48$ to match the redshift of last scattering for the Planck+WP best fit. Note that $s(z)$ goes from 0 to 1 as $z$ changes from 0 to $z_*$. We marginalize over the grid of hyperparameters $\{0.01<h<0.2,0.001<\sigma < 1.0 \}$. \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{referee__main.pdf} \caption{Posteriors for the expansion history as determined by the GP regression (inner 68\% and outer 95\% confidence levels). The Hubble and angular diameter distances, $D_H(z)$ and $D_A(z)$, are shown in the top and bottom panels, respectively. These distances are shown relative to the fiducial Planck $\Lambda$CDM model. The orange shaded regions correspond to the results with the Riess~et~al.~(2016)~\cite{Riess:2016jrr} uncertainty on $H_0$ as calculated in J18, while the blue shaded regions correspond to forecasts with 1\% precision on $H_0$. The orange and blue solid lines illustrate the median results of the GPs. Note the split linear-logarithmic redshift axis. } \label{fig:main} \end{figure*} We use the following datasets to constrain the GP: \begin{itemize} \item Planck 2015 CMB temperature and polarization dataset consisting of `TT', `TE', `EE', and `lowP' angular power spectra \citep{planckone2015,planck15} which was used to compute the posterior mean and covariance for $D_H$ and $D_A$ at the redshift of last scattering, $z_*$. \item Distances inferred from the BAO signal encoded in the clustering of LRGs from Beutler~et~al.~(2016) \cite{Beutler:2016ixs}. 
\item Distances inferred from the BAO signal in the auto-correlation of the flux transmission of the Ly$\alpha$ forest and cross-correlation with quasars from Bourboux~et~al.~(2017)~\cite{Bourboux:2017cbm}. Each of these BAO distances scales with $r_{\rm drag}$, which we fix to the best-fit value from Planck 2015~\cite{planck15}. \item The `Pantheon' binned Type Ia SNe from Scolnic~et~al.~(2018)~\cite{Scolnic:2017caz}, which measure the ratio $D_L/D_{H_0}$. \item A direct Hubble constant measurement, similar to the 2.4\% determination from Riess~et~al.~(2016)~\cite{Riess:2016jrr}. \end{itemize} In this paper, we forecast results using the same posterior mean for $H_0$ as in Riess~et~al.~(2016)~\cite{Riess:2016jrr}, but with uncertainties of only 1\%. The updated GP regression code is archived here~\cite{gphistdoi2} and can be found in the public repository: \href{https://github.com/rekeeley/GPHistTDE}{\faGithub}\footnote{\url{https://github.com/rekeeley/GPHistTDE}}. The recently released $H_0$ constraint in Riess~et~al.~(2019)~\cite{riess19}, after the completion of this work, is now at the level of 1.9\% and continues to be discrepant with the CMB-inferred value (now at $4.4\sigma$), further strengthening the approach taken here and conclusions reached in this paper. Following J18, we infer the evolution of the dark energy ($\rho_{\rm DE}(z)$) and matter densities from the expansion rate, by assuming flatness and no new physics at last scattering. We also infer the dark energy equation of state through the energy conservation equation as $w(z)=-1-\rho_{\rm DE}'/(3\rho_{\rm DE})$, where the prime denotes derivative with respect to $\ln(a)$. The results of the forecast GP regression appear in Fig.~\ref{fig:main}, which shows that the median inference (blue) favors $D_H$ values larger than the fiducial model for redshifts above $z\sim1.5$ and smaller below. At $z=0$, $D_A$ begins significantly below the fiducial values, eventually meeting them at $z=z_*$.
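The squared-exponential covariance function defined above for $\gamma(z)=\ln(D_H(z)/D_H^0(z))$ can be sketched as follows; the hyperparameter grid spacing shown is illustrative, not the one used in the analysis:

```python
import numpy as np

Z_STAR = 1090.48  # redshift of last scattering in the fiducial fit

def s_of_z(z):
    """Evolution variable s(z) = ln(1+z)/ln(1+z_*); runs from 0 to 1."""
    return np.log1p(z) / np.log1p(Z_STAR)

def gp_covariance(z1, z2, h, sigma):
    """Squared-exponential covariance <gamma(z1) gamma(z2)> for
    gamma(z) = ln(D_H(z)/D_H^0(z)), as defined in the text."""
    ds = s_of_z(z1) - s_of_z(z2)
    return h**2 * np.exp(-ds**2 / (2.0 * sigma**2))

# Illustrative sketch of the hyperparameter ranges marginalized over,
# {0.01 < h < 0.2, 0.001 < sigma < 1.0}; the grid resolution is arbitrary.
h_grid = np.linspace(0.01, 0.2, 20)
sigma_grid = np.linspace(0.001, 1.0, 20)
```

Here $h$ sets the amplitude of fluctuations away from the fiducial expansion history and $\sigma$ their correlation length in $s(z)$.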
Such a transition in $D_H$ arises from the need to satisfy the constraint on $D_A(z_*)$. In Fig.~\ref{fig:main}, we also show the constraints from J18 (orange) using the current 2.4\% precision on $H_0$. The error bars on the GP inference are larger at high redshift (where there are no constraints) despite the increased precision in the $H_0$ measurement. This is a feature of using GP as a prior. In GP regression the size of the error bars is tied to the size of the fluctuations away from the mean function (the fiducial cosmology) unless there is data to constrain it. As the increased precision on $H_0$ favors a significant deviation away from the fiducial cosmology, the error bars are larger at high redshift. The forecast inference of the dark energy evolution is shown in the upper panel of Fig.~\ref{fig:GP_w_f} and shows a transition in $w(z)$ that corresponds to the one in $D_H(z)$. Here, the GP regression picks out a median value for $w(0)$ greater than $-1$ and, interestingly, the median inference quickly transitions to values much less than $-1$. This $w(z)$ behavior is consistent with that found necessary to reconcile current cosmic shear and Planck CMB temperature measurements \cite{joudaki16b}. To understand why such an evolution in the dark energy component is preferred by the forecast data, note that the physical matter density at $z=0$ is set by the constraint on $D_H(z_*)$ (see J18 for discussion). With this information known, the physical dark energy density at $z=0$ is then set by the large value of $H_0$. In the case of a cosmological constant, the inferred matter and dark energy density would make $D_H(z)$ too small to explain the observed value of $D_A(z_*)$. Thus, in some interval between redshift zero and $z_*$, $D_H$ needs to be increased, and this can only be achieved by allowing the dark energy component to evolve. The other datasets constrain the redshift dependence of dark energy.
For example, the SNe constrain the shape of $D_H(z)$ and hence how fast the dark energy can evolve at low redshifts, which explains why the evolution starts above $z=1$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{referee__w_growth_only.pdf} \caption{The top panel of the figure shows the inferred dark energy equation of state as a function of redshift from the GP regression. The bottom panel shows the growth rate $f = d\ln(D)/d\ln(a)$ from the GP regression. As in Fig.~\ref{fig:main}, the blue shaded regions correspond to the 68\% and 95\% confidence levels and the solid line corresponds to the median of the GP inference. The black solid line in the bottom panel corresponds to the $\Lambda$CDM growth rate. } \label{fig:GP_w_f} \end{figure} \begin{figure*} \centering \includegraphics[width=0.24\textwidth]{H0w2.pdf} \includegraphics[width=0.24\textwidth]{H0r2.pdf} \includegraphics[width=0.24\textwidth]{H0fs8.pdf} \includegraphics[width=0.24\textwidth]{H0s8.pdf} \caption{Posteriors of $H_0$ and representative derived parameters from the MCMC inference of the TDE model (inner 68\% CL, outer 95\% CL), fitting the same datasets as the GP. Each of these panels shows the derived parameter evaluated at $z=0$, $z=0.5$, and $z=2$ (blue, green, violet). These representative parameters are the equation of state (left), dark energy density scaled to the present critical density (center left), $f\sigma_8$ (center right), and $\sigma_8$ (right). The dashed horizontal lines correspond to the fiducial $\Lambda$CDM values for $w(z)$, $f \sigma_8 (z)$, and $\sigma_8 (z)$. } \label{fig:MCMC_H0_de_growth} \end{figure*} The growth history is inferred in the same manner as in J18, by solving the following differential equation: \begin{equation}\label{eq:growth} \phi'' + (4+H'/H)\phi' + (3+2H'/H)\phi = 0 , \end{equation} where $\phi$ is the gravitational potential.
This equation can be derived from the spatial part of the perturbed Einstein equations in the conformal Newtonian gauge~\cite{mb95} by setting $\delta T^i_j=0$, i.e., no shear or pressure perturbations. Another way to derive this equation is to use the covariant conservation of the energy momentum tensor~\cite{mb95}, with the Poisson equation for $\phi$ on sub-horizon scales, and setting the two metric potentials ($\phi$ in the space-space part and $\psi$ in the time-time part of the metric) equal to each other and pressure perturbations to zero. For a cosmology with pressure-less matter and a cosmological constant, Eq.~\ref{eq:growth} is the same as the usual growth equation $\ddot{\delta}_m+2H\dot{\delta}_m-4\pi G \rho_m \delta_m=0$ on sub-horizon scales, where $\rho_m \delta_m$ is the perturbation to the matter density and overdot denotes derivative with respect to coordinate time. Eq.~\ref{eq:growth} is a good way to explore modifications of the expansion history because early on ($z \gtrsim 2$) data prefers a matter-dominated cosmology and at late times ($z \lesssim 0.5$) data prefers dark energy with $w\simeq -1$. In making this assessment, we are implicitly assuming that the effective Gravitational constant appearing in the Poisson equation for the metric potential $\psi$ is the same as the Newtonian one and that the gravitational ``slip" \cite{Bertschinger:2006aw,Caldwell:2007cw} is negligible on small scales (i.e., $\phi/\psi = 1$). Our results later indicate that a gravitational slip is not required to match the observed growth history. We define the growth function $D=a\phi$ and the growth rate $f = D'/D$. In Fig.~\ref{fig:GP_w_f}, we show how the inferred expansion history, which is significantly different from the fiducial $\Lambda$CDM expansion history, causes the corresponding growth history to differ from the fiducial $\Lambda$CDM expectation. 
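As a check of Eq.~\ref{eq:growth}, one can integrate it for the fiducial flat $\Lambda$CDM expansion history; starting deep in matter domination, where $\phi$ is constant, the recovered growth rate at $z=0$ is close to the familiar $\Omega_{\rm m}^{0.55}$ approximation. This sketch is ours and solves the fiducial case only, whereas the analysis uses the GP-inferred $H(z)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

OM, ODE = 0.3183, 0.6817  # fiducial flat LCDM densities

def dlnH_dlna(a):
    """H'/H for flat LCDM (primes denote d/dln a)."""
    e2 = OM * a**-3 + ODE
    return -1.5 * OM * a**-3 / e2

def rhs(lna, y):
    """Eq. for the potential: phi'' + (4 + H'/H) phi' + (3 + 2H'/H) phi = 0."""
    a = np.exp(lna)
    phi, dphi = y
    hl = dlnH_dlna(a)
    return [dphi, -(4.0 + hl) * dphi - (3.0 + 2.0 * hl) * phi]

# Deep in matter domination phi is constant, so start with phi' = 0.
sol = solve_ivp(rhs, [np.log(1e-2), 0.0], [1.0, 0.0],
                dense_output=True, rtol=1e-8)

def growth_rate(a):
    """f = dln D / dln a with D = a * phi."""
    phi, dphi = sol.sol(np.log(a))
    return 1.0 + dphi / phi
```

For matter domination $H'/H = -3/2$, so $\phi = {\rm const}$ solves the equation and $D = a\phi$ grows linearly with $a$, i.e. $f \to 1$ at early times, as expected.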
The key result to note is that for $z<1$, the expansion rate $H(z)$ is larger than the fiducial expansion rate and hence the growth of perturbations is slower. This demonstrates that the $H_0$ and $\sigma_8$ tensions could have a common origin. We have not provided a concrete model for the preferred dark energy evolution and the growth rate encapsulated in Eq.~\ref{eq:growth}. An interesting avenue to pursue is extensions to General Relativity (GR) that can motivate the kind of dark energy evolution that we have inferred. One such example is a new ``Galileon'' degree of freedom~\cite{DeFelice:2010pv}, which under the right initial conditions could have a dark energy equation of state $w=-2$ at high redshift and evolve towards $w=-1$ at low redshift, due to a de Sitter fixed point~\cite{Kase:2018iwp}. Such a $w(z)$ evolution is broadly consistent with our results but there is more growth than predicted by Eq.~\ref{eq:growth}. There are also Generalized Proca theories (vector-tensor theories) with three propagating degrees of freedom where the early universe $w(z)$ could be $-1-s$ with a late Universe de Sitter attractor~\cite{Heisenberg:2014rta,DeFelice:2016yws}. However, $s$ is constrained from cosmological data (expansion history and growth) to be around $0.2$ \cite{deFelice:2017paw} and hence not consistent with solving the $H_0$ tension. \section{Transitional Dark Energy model}\label{sec:Tanh} \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{cluster_counts_zoom.pdf} \caption{Sunyaev-Zel'dovich cluster counts $dN/dM/dz$ for a $\Lambda$CDM model consistent with Planck (black), a $\Lambda$CDM model with $\sigma_8=0.75$ (green), and an example TDE model (red). The shaded bands correspond to the cosmic variance 68\% and 95\% CLs. } \label{fig:tanh} \end{figure} To further investigate the implications of a rapid change in the dark energy density, we switch to a concrete parameterization of the dark energy evolution. 
This allows us to compute observables that are sensitive to the growth history, such as SZ cluster counts \cite{Ade:2015fva} using the cosmological Boltzmann solver \texttt{CLASS}~\cite{2011JCAP...07..034B}, which was modified to allow for a rapid transition in the dark energy equation of state. To this end, we define the TDE model as \begin{eqnarray} &&\rho_{\rm DE}(z) =\rho_{\rm DE,0}(1+z)^{3(1+W(z))},\\ && W(z) = ((w_0+w_1) + (w_1-w_0)\tanh((z-z_t)/\Delta_z))/2,\nonumber \end{eqnarray} where $W(z)$ is related to the equation of state, \begin{equation} W(z) = \frac{1}{\ln(1+z)} \int_{1/(1+z)}^1 w(a) \frac{da}{a}. \end{equation} This function is equivalent to the equation of state $w(z)$ in the regimes where $w(z)$ is constant. The equation of state tends towards $w_0$ at $z<z_t$ and towards $w_1$ at $z>z_t$. The width of this transition is parametrized by $\Delta_z$. The values that fit the {\em median} GP inference are $w_0=-0.95$, $w_1=-1.95$, $z_t=2.5$, and $\Delta_z=0.9$. These values are used to calculate the growth functions in \texttt{CLASS}. Effectively, at early times the dark energy component is completely absent and then rapidly turns on around a redshift of 1. Similar models have been explored in the past~\cite{Bassett:2002qu,Shafieloo:2009ti}. We performed a Markov Chain Monte Carlo (MCMC) analysis to sample the forecast posterior for the TDE model. We used the same datasets as in the forecast GP analysis, and varied all of the parameters of the TDE model ($h$, $\omega_{\rm m}$, $w_0$, $w_1$, $\Delta_z$, $z_t$) in the MCMC. In Fig.~\ref{fig:MCMC_H0_de_growth}, we show the forecast posteriors of the derived parameters $w(z)$, $\rho_{\rm DE}(z)/\rho_{\rm crit,0}$, $\sigma_8(z)$, and $f\sigma_8 (z)$ for redshifts $z=0,0.5,2$.
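The TDE parametrization is simple to evaluate directly. A minimal sketch, with the quoted median-fit values as defaults (the normalization $\rho_{\rm DE,0}=1$ here is arbitrary):

```python
import numpy as np

def W_of_z(z, w0=-0.95, w1=-1.95, zt=2.5, dz=0.9):
    """Log-averaged equation of state W(z) of the TDE model; the default
    parameter values are those quoted as fitting the median GP inference."""
    return 0.5 * ((w0 + w1) + (w1 - w0) * np.tanh((z - zt) / dz))

def rho_de(z, rho_de0=1.0, **kw):
    """Dark energy density rho_DE(z) = rho_DE0 * (1+z)^{3(1+W(z))}."""
    return rho_de0 * (1.0 + z)**(3.0 * (1.0 + W_of_z(z, **kw)))
```

One can verify the limiting behavior described in the text: $W(z)\to w_0$ well below $z_t$, $W(z)\to w_1$ well above it, and with $w_1<-1$ the density $\rho_{\rm DE}(z)$ decays rapidly with redshift, so the dark energy is effectively absent at early times.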
These forecast posteriors illustrate that the datasets considered indeed would favor a drastic change in the dark energy equation of state at intermediate redshift, with little to no dark energy at $z=2$ and a relatively sharp transition in the redshift range 0.5 to 2. We compute the Bayes factor ($K$) between the TDE model and the $\Lambda$CDM model and find $\ln K = 6.8$ in favor of the TDE model, corresponding to odds of 900 to 1 and `decisive' preference for the TDE model when using Jeffreys' scale~\cite{KassRaftery,Jeffreys}. A corresponding preference is found when computing the Deviance information criterion (DIC~\cite{spiegelhalter02}), with $\Delta \rm{DIC} = 24.2$. The correlations shown in the forecast posteriors are particularly interesting. The fact that large values of $H_0$ are correlated with smaller amounts of dark energy (or equivalently, more negative values of the equation of state) at $z=2$ favors the idea that in order to explain the observed value of $H_0 = 73 \,\rm km\,s^{-1}\,Mpc^{-1}$, along with all of the other considered datasets, dark energy must be evolving in some form \cite{DiValentino:2016hlg,joudaki16b,Zhao:2017cud}. Moreover, as larger values of $H_0$ are correlated with less growth at $z<2$ (most notably in $f\sigma_8$), this resolution to the $H_0$ problem would have interesting consequences for the $\sigma_8$ tension. When using the 2.4\% uncertainty on $H_0$ from Riess~et~al.~(2016)~\cite{Riess:2016jrr} instead of the projected 1\% uncertainty, the posteriors for the TDE model become consistent with a cosmological constant (as in the GP regression from J18) and the global fit favors a value of $H_0$ around 69~km\,s$^{-1}$\,Mpc$^{-1}$. In Sec.~\ref{app:CMBTDE} of the Appendix, we check that the TDE model can reproduce the best-fit $\Lambda$CDM $C_\ell$s by plotting the best-fit TDE parameters from a fit to the full Planck 2018 likelihoods~\cite{Aghanim:2018eyx}.
Fig.~\ref{fig:Cell} shows the $C_\ell$s from $\Lambda$CDM and TDE are identical above $\ell \sim 30$. The high $\ell$ agreement implies the best-fit $\Lambda$CDM and TDE models have the same values of $\theta_s$ and the diffusion damping scale, $k_D$, which further implies matching values for $D_H(z_*)$ and $D_A(z_*)$ for the best-fit $\Lambda$CDM and TDE models \cite{Joudaki:2017zhq}. Thus, we can conclude that using $\Lambda$CDM-derived values for $D_H(z_*)$ and $D_A(z_*)$ for TDE inferences and GP inferences is self-consistent. It is likely the low $\ell$ disagreement arises from the late-time integrated Sachs-Wolfe effect and bears future study. While the forecast TDE fitting and GP regression agree well on the preference for a transition in the dark energy evolution, the two methods show some differences in the details of the evolution. The GP inference allows for negative dark energy and so it has greater flexibility to fit both high and low-redshift data. By contrast, the dark energy density in the TDE model is constrained to be positive. Thus, in order to fit the CMB's $D_A$, the fit favors a transition in $D_H$ that is sharper and at lower redshift than the transition in the median of the GP inference (see Figs.~\ref{fig:main} and \ref{fig:MCMC_dist_ztdzw0w1}). Using \texttt{CLASS}, we calculate the various measures of the growth of perturbations, such as the matter power spectra and $\sigma(R,z)$. We also compare the growth function from \texttt{CLASS} with the less model-dependent solution to Eq.~\ref{eq:growth} in the Appendix. The TDE model can change $\sigma_8$ through the clustering of dark energy (depending on the microphysics) and the change in the growth function due to the modified expansion history~\cite{Fang:2008sn}. 
We assume that the dark energy density does not cluster significantly and, in keeping with this assumption, we keep the primordial power spectrum and transfer function fixed to those of $\Lambda$CDM but calculate the growth function at late times from our TDE dark energy model using {\tt CLASS}. The resulting inferences of $f\sigma_8$ and $\sigma_8$ at $z=\{0, 0.5, 2\}$ are shown in Fig.~\ref{fig:MCMC_H0_de_growth}. Relative to the $\Lambda$CDM expectation, we find a noticeably slower growth rate today and at $z=0.5$ ($\Delta f \sigma_8 \simeq 0.05$ for both redshifts), and a mildly larger one at $z=2$ (by approximately 1--2$\sigma$). We also find that $\sigma_8$ at present is smaller in the preferred TDE model, at mild significance ($\simeq2\sigma$). The predictions from the TDE model are consistent with the current measurements of $f\sigma_8$ at $z \lesssim 2$~\cite{Aghanim:2018eyx}. The differences from the $\Lambda$CDM predictions for $f\sigma_8$ are small compared to the uncertainties in current growth rate measurements but measurable by future surveys~\cite{surveywfirst, surveyhetdex, surveyeuclid, surveydesi, survey4most, surveyeboss, surveypfs}. \section{SZ cluster abundance}\label{sec:SZ} As a concrete test of the observable differences between the TDE model and $\Lambda$CDM, we focus on the SZ cluster abundance. Using $\sigma(R,z)$ for the $\Lambda$CDM and TDE models, we can calculate the expected number density per unit mass of gravitationally collapsed objects, \begin{equation} \frac{dN}{dM dV}(M,z) = -\frac{\rho}{M^2} f_m(\sigma(M,z)) \frac{d\ln\sigma}{d \ln M}, \end{equation} where $N$ is the number of clusters in some volume $V$, $M$ is the mass of the clusters, and $\rho$ is the matter density. The multiplicity function $f_m(\sigma(M,z))$ is determined by fitting $dN/dV/dM$ to large volume $N$-body simulations, and we use the fitting function from Tinker~et~al.~(2008)~\cite{Tinker:2008ff}.
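A sketch of the Tinker~et~al.~(2008) fitting form follows; the parameter values below are the commonly quoted $z=0$, $\Delta=200$ (mean-density) ones, written from memory, and the paper's exact overdensity convention and redshift scaling may differ.

```python
import math

# Tinker et al. (2008) fit: f(sigma) = A [ (sigma/b)^-a + 1 ] exp(-c / sigma^2)
# z = 0, Delta = 200 (mean density) parameters -- an assumption of this sketch.
A, a, b, c = 0.186, 1.47, 2.57, 1.19

def f_tinker(sigma):
    """Multiplicity function: mass fraction in halos per unit ln(1/sigma)."""
    return A * ((sigma / b)**(-a) + 1.0) * math.exp(-c / sigma**2)

# Exponential cutoff at small sigma (rare, massive halos), so a model that
# lowers sigma(M, z) suppresses the abundance of the most massive clusters:
print(f_tinker(0.5), f_tinker(1.0), f_tinker(2.0))
```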
The number of clusters per unit mass per redshift is \begin{equation} \frac{dN}{dMdz}= \frac{dN}{dMdV}\frac{dV}{dz}, \end{equation} where $dV/dz =4\pi D_C^2 D_H$. We evaluate these functions at masses that for the two cosmologies give the same values of the SZ flux $Y_{500}$. Using the $M_{500} - Y_{500}$ relation from~\cite{Ade:2015fva}, we take $5\times10^{14} \rm M_\odot$ for the $\Lambda$CDM case and $4.5\times10^{14} \rm M_\odot$ for the TDE case (the redshift dependence of the relation is averaged over the redshift window $0<z<0.4$ where the SZ clusters are observed). The results of this calculation are shown in Fig.~\ref{fig:tanh}. There are two main sources of differences in the expected number of clusters in a redshift survey between the $\Lambda$CDM and TDE models. One is in the multiplicity function, through the dependence of $\sigma(R,z)$ on the TDE parameters, and the other is in $dV/dz$, through the distances. We find that $f_m(\sigma(M,z))$ is smaller for the TDE model by 2--10\% between $z=1$ and $z=0$. For the volumetric factor, the TDE model predicts a roughly 15--30\% reduction in cluster counts between $z=1.5$ and $z=0$. Together, the smaller volumetric factor and smaller growth factors work to suppress the number of clusters relative to $\Lambda$CDM by 15--40\% at these redshifts. For tomographic cosmic shear measurements, similar considerations will apply and we expect the angular power spectra to be suppressed. We leave the potential of weak lensing and redshift space distortions to test these models for future work. \section{Internal consistency of low-z distance measurements}\label{sec:bao} \begin{figure} \centering \includegraphics[width=0.24\textwidth]{H0w2_vardrag.pdf} \includegraphics[width=0.24\textwidth]{H0w2_linscale.pdf} \caption{Posteriors of $H_0$ and $w(z)$ for $z=0,0.5,2.0$ (blue, green, violet) for the cases where $r_{\rm drag}$ is varied independently (left) and scaled linearly with $D_H(z_*)$ (right).
The black dashed line corresponds to the $\Lambda$CDM equation of state $w=-1$.} \label{fig:vardrag_w} \end{figure} Recent work \cite{Aylor:2018drw} has highlighted the tension between the BAO distances, calibrated to the value of $r_{\rm drag}$ from the $\Lambda$CDM fit to the Planck data, and the SN distances, calibrated to $H_0$. This tension is also present in our analysis. A possible resolution \cite{Aylor:2018drw} is that $r_{\rm drag}$ is smaller than the value inferred for $\Lambda$CDM from Planck, but here we have assumed that there is no new physics at $z>1000$ and hence $r_{\rm drag}$ is unchanged. In this section, we report on results when deviating from our main analysis in two ways. Fig.~\ref{fig:vardrag_w} summarizes these results. In the first test, we allow $r_{\rm drag}$ to vary independently of other distances and scale BAO distances accordingly. A transition in the DE density between $z=2$ and today is still inferred, but the equation of state varies more gently. The recovered value of $r_{\rm drag}$ is smaller, indicating the presence of a real tension between the BAO and SNe + $H_0$ datasets \cite{Aylor:2018drw}. We discuss these points in greater detail with relevant plots in the Appendix. In the second test, we allow $r_{\rm drag} \propto D_H(z_*)$ as an illustrative example to explore the degeneracy between new physics at early ($z>1000$) and late times ($z<3$). We allow the errors on $D_H(z_*)$ and $D_A(z_*)$ to be larger (TT+TE+EE+lowP constraints on $\Lambda{\rm CDM} + N_{\rm eff}$ model in J18), as would be expected with the addition of new parameters. The inferred errors on the dark energy evolution are larger and it is not possible to reach a strong conclusion about the DE density at $z=2$, although a sharp transition in the TDE equation of state is still allowed. 
\vspace{4.5mm} \section{Conclusions}\label{sec:discussion} We performed a GP regression for the expansion history of the Universe using Planck measurements of the CMB, BOSS measurements of the BAO signal in the Ly$\alpha$ forest and LRGs, the Pantheon compilation of Type Ia SNe, and a measurement of the present Hubble parameter with forecasted 1\% uncertainty. The forecast regression prefers a dark energy component with equation of state $w>-1$ at present, with its density transitioning to zero by $z\simeq2$. An interesting corollary of our result is the wide range of possibilities for the equation of state in the future, with a de Sitter phase not being favored. We calculated the growth history assuming no extra sources of clustering except for matter, and showed that the inferred growth rate in this model is measurably different from the Planck $\Lambda$CDM expectation. Our forecast GP results are recovered when using a parametric model for dark energy evolution that allows for a sharp transition in the dark energy density. We used \texttt{CLASS} to calculate the predictions of this TDE model for SZ cluster counts and found that the TDE model predicts noticeably fewer SZ clusters than the best-fit $\Lambda$CDM model, potentially alleviating the $\sigma_8$ tension. Similar, but less sharp, results were found when we allowed $r_{\rm drag}$ to vary independently of the CMB distances to explore the internal consistency between the $z<1$ distance measurements. However, when $r_{\rm drag}$ was taken to scale linearly with $D_H(z_*)$ as an illustrative example of new physics at $z>1000$, an evolving dark energy component was still allowed but not strongly preferred. In this case, the low-redshift distances agree better, but that comes at the cost of not fitting the CMB precisely.
Direct reconstruction of the Universe's expansion history via the BAO signal that will be observed by future surveys, such as DESI, LSST, WFIRST, and Euclid, should be able to robustly detect a transition in the dark energy equation of state. The fact that the TDE model predicts less growth of perturbations than $\Lambda$CDM offers another way to test this model in the future, through redshift space distortion measurements and tomographic weak lensing analysis. Our results suggest that a sharp transition in the dark energy equation of state for $1<z<2$ could simultaneously explain the $H_0$ and $\sigma_8$ tensions. \section{Acknowledgments} We are grateful to Lloyd Knox for discussions that prompted us to focus on the internal consistency of low redshift distance measurements. MK was supported by National Science Foundation Grant No. PHY-1620638. SJ acknowledges support from the Beecroft Trust and ERC 693024. This work was supported by the high performance computing cluster Seondeok at the Korea Astronomy and Space Science Institute. \bibliographystyle{plain}
\section{Introduction} In this work we deal with relativistic models described by a single real scalar field with generalized dynamics in four-dimensional space-time. The study is inspired by the Galileon field: a real scalar field $\pi=\pi(x)$ is a Galileon field if its Lagrange density is symmetric under the Galilean and shift transformation $\pi\to\pi+a {\cdot} x+b$, with $a$ being a constant vector and $b$ a constant scalar. The Galileon field was studied in \cite{g1,g2} with the aim of investigating self-accelerating solutions in the absence of ghosts, and has been further investigated in a diversity of contexts, with direct phenomenological applications, as one can see in the recent reviews \cite{r1,r2,r3}. In particular, in \cite{gs1,gs2,gs3,gs4,gsuper1,gsuper2} the authors deal with solitonic solutions and supersymmetrization. In \cite{gs1} it is shown that the Galileon field cannot give rise to static solitonic solutions; however, in \cite{gs2} the presence of soliton-like traveling waves for the Galileon field is investigated in two-dimensional space-time. Also, in Refs.~\cite{gs3,gs4} the authors offer other interesting results on solitons and Galileons. In \cite{gsuper1}, supersymmetry is implemented starting from ghost condensate theories \cite{gsuper3}; see also Refs.~\cite{s1,s2,gs5} for other studies on supersymmetry, generalized models and integrability. One motivation to study the Galileon field comes from the fact that the Galilean invariance induces an important feature: it keeps the equation of motion a second-order differential equation. This, together with the presence of supersymmetry, suggests that we search for a first-order framework, that is, for first-order differential equations whose solutions solve the equation of motion.
We shall do this by extending the model: we use the Galileon field to control the kinematics, but add other terms, which break the Galilean symmetry and allow for the presence of spontaneous symmetry breaking, giving rise to localized static solutions. We call this scalar field a generalized Galileon field. We remark that the Galilean symmetry forbids the appearance of static solutions \cite{gs1}, so we are forced to generalize the model, breaking the Galilean symmetry, in order to study the appearance of nontrivial static structures. Another motivation comes from gravity: we know that minimal coupling of Galileons to gravity leads to equations of motion which have higher-order derivatives of the metric; however, this can be remedied with non-minimal couplings, at the expense of breaking the Galilean symmetry \cite{DD}. Here we focus attention on the model \begin{equation} {\cal L}= K(\pi,X) + F(\pi,X) \Box \pi, \end{equation} in four-dimensional space-time. We consider that $K(\pi,X)$ and $F(\pi,X)$ are in principle arbitrary functions of $\pi$ and $X$, with $X$ being defined as \begin{equation} X=\frac12 \partial_\mu \pi \partial^\mu \pi. \end{equation} We are using $\Box \equiv g^{\alpha\beta}\partial_\alpha\partial_\beta$, the metric is diagonal $(+,-,-,-)$, and the scalar field, space and time coordinates, and coupling constants are all dimensionless. As in \cite{Kobayashi:2010cm,Deffayet:2010qz}, we change the term $\partial_\mu \pi \partial^\mu \pi \Box \pi$ to the more general form $F(\pi,X)\Box\pi$. The generalization that we consider may break the Galilean symmetry, but the equation of motion preserves the second-order structure. We are interested in solutions of these theories in the presence of spontaneous symmetry breaking, and we shall search in particular for planar domain walls and their classical stability.
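To fix the normalization (a remark we add here, consistent with the definition of $X$ above), note that $2X=\partial_\mu\pi\partial^\mu\pi$, so the choice \begin{equation} F(\pi,X)=2X \;\;\Longrightarrow\;\; F(\pi,X)\,\Box\pi=\partial_\mu\pi\,\partial^\mu\pi\,\Box\pi \end{equation} recovers the cubic Galileon term quoted above, while $F=0$ with $K=X-V(\pi)$ gives back the standard scalar field theory.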
As one knows, domain walls are non-perturbative classical solutions which find applications in many areas in physics, describing transitions between disconnected states of minimum energy \cite{V,MS}. The main issue here is to study domain walls in models of scalar fields with generalized dynamics of the Galileon type. We may also include k-field dynamics \cite{B}, as we have done before in Refs.~\cite{Bazeia:2013yza, Bazeia:2008tj, Avelino:2010bu,Bazeia:2013euc,Bazeia:2010vb}. Here we focus on similar issues, with the scalar field now having generalized dynamics. The results show that the Galileon-like field may make the static solution compact, and may split the zero mode. Moreover, if we add generalized kinematics to the dynamical field, making the scalar field a generalized k-Galileon, the two contributions may combine to split the static structure itself. The investigation is organized as follows. In the next two sections we introduce the model and study linear stability on general grounds. We focus in particular on the first-order framework, where we search for first-order ordinary differential equations whose solutions also solve the equation of motion, which is a second-order ordinary differential equation. In Sec.~\ref{sec3} we employ the method in order to investigate some distinct models explicitly, searching for static solutions and showing that they may engender interesting features. We end the work in Sec.~\ref{sec4}, where we include our comments and conclusions. \section{The model}\label{sec1} We consider the case of a single real scalar field in four-dimensional space-time with action \begin{equation}\label{action} {\cal S}=\int d^4x\,\left( K(\pi,X) + F(\pi,X) \Box \pi\right).
\end{equation} Here $K(\pi,X)$ and $F(\pi,X)$ are in principle generic functions, and we get the equation of motion \begin{eqnarray}\label{eqofmotiong} \partial_\mu \left(K_X \partial^\mu\pi\right) - K_\pi+\partial_\mu \left(F_X S^\mu\right)- \partial_\mu F_\pi \partial^\mu \pi -2 F_\pi \Box \pi = 0\,, \end{eqnarray} where \begin{eqnarray} S^\mu \equiv \Box \pi \partial^\mu \pi- \partial_\nu \pi\partial^\mu \partial^\nu \pi\,. \end{eqnarray} We see that for a generic field configuration, the above equation of motion is a second-order partial differential equation. We can use the general formulation for the energy-momentum tensor to obtain \cite{Moeller:2002vx} \begin{eqnarray}\label{emtensor} T_{\mu\nu}=&&-(K+F\Box \pi) g_{\mu\nu}+K_X \partial_\mu \pi \partial_\nu \pi+F_X \Box \pi \partial_\mu \pi \partial_\nu \pi -\partial_\mu F \partial_\nu \pi + F \partial_\mu \partial_\nu \pi\,. \end{eqnarray} Since we are interested in investigating domain walls, we suppose that the scalar field is static, that is, $\pi=\pi(x)$, such that \begin{equation}\label{boundary} \pi^{\prime} (x \to \pm \infty) \to 0\,, \end{equation} where prime stands for derivative with respect to the spatial coordinate $x$. In this case, $S^\mu$ vanishes and the equation of motion \eqref{eqofmotiong} reduces to \begin{eqnarray}\label{secondorde} \left[(K_X+2XK_{XX})-2(F_\pi+ X F_{\pi X})\right]\pi^{\prime\prime} -2X(K_{\pi X}-F_{\pi\pi}) +K_\pi=0\,. \end{eqnarray} It can be integrated once to give \begin{equation}\label{fistorde} K-2XK_X +2X F_\pi=C\,, \end{equation} where $C$ is a constant of integration. This equation only depends on the first derivative of the scalar field, so it is a first-order differential equation. We note that if we take the derivative of \eqref{fistorde} with respect to $x$, we get back to \eqref{secondorde}, so the solutions of \eqref{fistorde} also solve the equation of motion.
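To make this equivalence explicit (a short check we add here), take the $x$-derivative of \eqref{fistorde}, using $X=-\pi^{\prime2}/2$ for static fields, so that $dX/dx=-\pi^\prime\pi^{\prime\prime}$; collecting terms gives \begin{equation} \frac{d}{dx}\left(K-2XK_X+2XF_\pi\right)=\pi^\prime\Big\{\left[(K_X+2XK_{XX})-2(F_\pi+XF_{\pi X})\right]\pi^{\prime\prime}-2X(K_{\pi X}-F_{\pi\pi})+K_\pi\Big\}, \end{equation} so wherever $\pi^\prime\neq0$ the first-order equation \eqref{fistorde} implies the second-order equation \eqref{secondorde}.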
For the static field $\pi(x)$, the only non-trivial components of the energy-momentum tensor \eqref{emtensor} are \begin{subequations} \begin{eqnarray} \label{rho}T_{00}&=&-K + F \pi^{\prime\prime}\,, \\ T_{11}&=&K-2XK_X +2X F_\pi\,. \end{eqnarray} \end{subequations} We use the first-order equation \eqref{fistorde} to see that the stress component of $T_{\mu\nu}$ is constant, that is, $T_{11}=C$. The total energy of the field configuration $\pi(x)$ can be obtained as \begin{equation}\label{energy} E=\int^{\infty}_{-\infty} dx\,\left(-K(\pi,X) + F(\pi,X) \pi^{\prime\prime}\right)\,. \end{equation} This expression is important for elaborating on stability, following the Derrick/Hobart scaling argument \cite{DH}, which introduces a necessary condition for the stability of the solution. To do this, we follow \cite{bmm} and introduce $\pi_\lambda(x) = \pi(\lambda x)$. We use $\pi_\lambda(x)$ to define $E_\lambda$ in the form \begin{equation} E_\lambda=\int^{\infty}_{-\infty} dx\,\left(-K(\pi_\lambda,X_\lambda) + F(\pi_\lambda,X_\lambda) \pi_\lambda^{\prime\prime}\right)\,. \end{equation} It leads to \begin{equation}\label{enerderic} E_\lambda=\int^{\infty}_{-\infty} \!\!\!\!dx\,\left(-\lambda^{\!-1} K(\pi,\lambda^2 X) \!+ \!\lambda F(\pi,\lambda^2X) \pi^{\prime\prime}\right)\,. \end{equation} We see that $E_\lambda|_{\lambda\to1}\to E$, and so we search for \begin{equation}\label{condictions} \frac{\partial E_\lambda}{\partial \lambda}\bigg|_{\lambda\to1}\!\!\!\!\to0\,. \end{equation} Imposing this condition on Eq.~\eqref{enerderic} gives \begin{equation} \int^{\infty}_{-\infty} \!\!\!\!\!\!dx\left(K(\pi,X)\! -\! 2K_X X \!+\! (2F_X X\!+\!F(\pi,X)) \pi^{\prime\prime}\right)\!=\!0.\;\;\; \end{equation} We integrate by parts the last term and consider \eqref{boundary} to get \begin{equation} \int^{\infty}_{-\infty} \!\!\!dx\,(K - 2K_X X + 2F_\pi X)=\int^{\infty}_{-\infty}\!\!\! dx \,T_{11}=0\,.
\end{equation} Since $T_{11}$ is constant, we then have to set $T_{11}=0.$ This extends the result obtained in Ref. \cite{Bazeia:2007df} to the present situation. It is the stressless condition, and it is a necessary condition for stability of the static solution. \section{Linear Stability} \label{sec2} In this section we investigate linear stability of the static solution. For completeness, we start investigating the behavior of the general solution of the equation of motion \eqref{eqofmotiong}. We introduce general fluctuations for the scalar field in the form: $\pi(\vec x, t) = \pi(\vec x) + \eta(\vec x, t)$, where $\pi(\vec x)$ represents the static solution. In this case, up to first order in the fluctuations we have \begin{subequations} \begin{equation} X\rightarrow X + \partial_\nu \pi \partial^\nu \eta\,, \end{equation} with this we get the contributions for $S^\mu$ as \begin{eqnarray} S^\mu \rightarrow S^\mu+ M^{\mu\nu\alpha\beta} (\partial_\alpha\partial_\beta \pi \partial_\nu \eta+ \partial_\alpha\partial_\beta\eta\partial_\nu \pi)\,, \end{eqnarray} \end{subequations} where $M^{\mu\nu\alpha\beta}=g^{\mu\nu} g^{\alpha\beta} -g^{\mu\alpha}g^{\nu\beta}$. After some algebraic manipulations, we can write \begin{equation}\label{Eqofperturb} \partial_\beta \left(A^{\alpha\beta}\partial_\alpha\eta\right) = B \eta\,, \end{equation} where \begin{subequations} \begin{eqnarray} \!\!\!\!\!\!\!\!A^{\alpha\beta}(\vec x)\!\!\!&=&\!\!\!g^{\alpha\beta}K_X+K_{XX}\partial^\alpha\pi \partial^\beta\pi-2g^{\alpha\beta} F_\pi+M^{\mu\nu\alpha\beta}\left[F_X \partial_\mu \partial_\nu \pi+\partial_\mu(F_X\partial_\nu \pi)\right]-F_{\pi X}\partial^\alpha\pi\partial^\beta\pi+F_{XX}\partial^\alpha\pi S^\beta\,,\;\;\;\\ \!\!\!\!\!\!\!\!B(\vec x)\!\!\!&=&\!\!\!K_{\pi\pi}-\partial_\mu( K_{\pi X }\partial^\mu\pi)- \partial_\mu\left(S^\mu F_{\pi X}\right)+\left(\partial_\mu F_{\pi\pi}\right)\partial^\mu \pi+2F_{\pi\pi}\square \pi\,.
\end{eqnarray} \end{subequations} Despite the complexity of the above equation, it can be simplified for the specific case of a planar domain wall, where $\pi=\pi(x)$. Here we get \begin{eqnarray}\label{Epert} A^{00}\ddot \eta+\left(A^{11}\eta^{\prime}\right)^{\prime}+A^{ij}\partial_i\partial_j\eta = B \eta\,, \end{eqnarray} for $i,j\neq 1$, where: \begin{subequations} \begin{eqnarray} \!\!A^{00}\!\!&=&\!\!K_X-2F_\pi-F_X\pi^{\prime\prime} -\left(F_X\pi^\prime\right)^{\prime}\,;\\ \!\!A^{11}\!\!&=&\!\!-(K_X+2XK_{XX})+2(F_\pi+XF_{\pi X})\,;\\ \!\!A^{ij}\!\!&=&\!\!-\delta^{ij}A^{00}\,;\\ \!\!B\!&=&\!\!K_{\pi\pi}+( K_{\pi X }\pi^{\prime})^{\prime}-( F_{\pi\pi})^{\prime}\pi^{\prime}-2F_{\pi\pi} \pi^{\prime\prime}.\,\, \end{eqnarray} \end{subequations} We can separate variables and write the perturbation as \begin{equation} \eta(\vec x,t)=\sum_n\xi_n(t,y,z) \psi_n(x)\,, \end{equation} where \begin{equation} \xi_n(t,y,z)=\cos(w_n t)\cos(k_y y)\cos(k_z z). \end{equation} Thus, the above equation \eqref{Epert} can be written as \begin{equation}\label{ppp1} -\left(|A^{11}|\psi_n^{\prime}\right)^{\prime} =B\psi_n + A^{00}M_n^2\psi_n\,, \end{equation} where $M_n^2=w_n^2-k_y^2-k_z^2$. In order to ease the investigation, we consider the case with $k_y=k_z=0$. It is appropriate to introduce new variables, and we make the changes \begin{equation}\label{chances} dz=\frac{dx}{a(x)};\,\,\,\,\,\,\,\,\,\,\, \psi_n(x)=\frac{u_n(z)}{\sqrt{A^{00}a(x)}}, \end{equation} where \begin{equation}\label{stab} a^2(x)=\frac{|A^{11}|}{A^{00}}\,.
\end{equation} This allows us to obtain the Schr\"odinger-like equation \begin{equation}\label{schroeq} -(u_n)_{zz}+U(z)u_n=w_n^2 u_n\,, \end{equation} where \begin{eqnarray}\label{Uquant} U(z)&=&\frac{\left(\sqrt{A^{00}a}\right)_{zz}}{\sqrt{A^{00}a}}-\frac{1}{A^{00}}\left(K_{\pi\pi}+\frac1{a}\left( K_{\pi X }\frac{\pi_z}{a}\right)_{\!z}\right)+\frac{1}{A^{00}\pi_z}\left(\left(\frac{\pi_z}{a}\right)^2F_{\pi\pi}\right)_{\!z }\,, \end{eqnarray} is the stability potential of the eigenvalue problem we have to solve to get the corresponding eigenvalues and eigenstates. We see that if $F$ vanishes, we get back to the result obtained in \cite{Bazeia:2008tj}. This is the general result, and we see that linear stability requires the eigenvalues $w_n^2$ to be non-negative. This depends crucially on the potential $U(z)$, which has to be investigated for each one of the specific models that we explore in the next section. \section{Examples}\label{sec3} Let us now investigate some specific models, to illustrate how the above investigation works for particular cases. \subsection{Generalized Galileons} We start with the case $K=0$. The model describes the generalized Galileon field, and the equation of motion reduces to \begin{equation} (F_\pi)^\prime \pi^\prime + 2F_\pi \pi^{\prime\prime}=0\,, \end{equation} which leads to the first-order equation $F_\pi X =C$. If we consider stressless solutions, we have to take $C=0$, and so there are no nontrivial localized static solutions in this case, in agreement with the results of Ref.~\cite{gs1}. Here we note that the necessary condition that comes from the Derrick/Hobart scaling argument very much simplifies the investigation on stability. \subsection{Generalized Galileons and symmetry breaking} Let us now study generalizations with $F(\pi, X)$, but supposing that $K(\pi,X)$ represents the standard model, that is, \begin{equation} K(\pi,X)=X-V(\pi).
\end{equation} In this case the first-order Eq.~\eqref{fistorde} can be written as \begin{equation}\label{fistordes} \pi^{\prime2}\left(1-2F_\pi\right)=2V\,, \end{equation} where we used $C=0$. Note that if the potential $V(\pi)$ vanishes, we get back to the trivial result: the model supports no nontrivial localized static solutions. \begin{figure}[t] \includegraphics[scale=0.7]{fig1} \caption{\small{The potential \eqref{pot2}, plotted for $b=0$ (thicker line), $b=1.366$ (thick line), and $b=2$ (thinner line).}} \label{figure1} \end{figure} If the potential does not vanish, we show explicitly that the model engenders nontrivial solutions if we take \begin{equation}\label{model2} F(\pi,X)=b X \pi\,. \end{equation} Here the equation \eqref{fistordes} becomes \begin{equation}\label{fistorde2} \pi^{\prime2}\left(1+b \pi^{\prime2}\right)=2V\,. \end{equation} It can be written as \begin{equation}\label{foeq} \pi^{\prime 2}=\frac{\sqrt{1+8bV(\pi)\,}-1}{2b}, \end{equation} which has to be investigated after specifying the potential. We note that the above equation is consistent with \eqref{fistorde2} in the limit $b\to0$. We also note that there is another solution of \eqref{fistorde2}, with the minus sign for the square root in \eqref{foeq}. It leads to imaginary configurations, an issue which is out of the scope of the current work. As an interesting example, we consider the potential in the form \begin{equation}\label{pot2} V(\pi)=\frac12\left(1-\pi^2\right)^2\left[1+b\left(1-\pi^2\right)^2\right]\,. \end{equation} It allows us to solve the first-order equation analytically, with \begin{equation}\label{sta} \pi(x)=\tanh(x), \end{equation} which is the static solution we also have in the case $b=0$, in the standard model. Fig.~\ref{figure1} shows the behavior of the potential \eqref{pot2} for some values of $b$. The value $b=1.366$ is in fact $b=1/2+\sqrt{3}/2$, and is the value where the zero mode starts to split; see Fig.~\ref{figure4}.
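One can check symbolically that \eqref{sta} solves the first-order equation \eqref{fistorde2} with the potential \eqref{pot2}, for any $b$; a minimal sketch of ours:

```python
import sympy as sp

x, b = sp.symbols('x b', real=True)
pi = sp.tanh(x)                          # proposed kink solution (sta)
pip = sp.diff(pi, x)                     # pi' = 1 - tanh^2 x = sech^2 x
V = sp.Rational(1, 2)*(1 - pi**2)**2*(1 + b*(1 - pi**2)**2)   # potential (pot2)

# first-order equation (fistorde2): pi'^2 (1 + b pi'^2) = 2V
residual = sp.expand(pip**2*(1 + b*pip**2) - 2*V)
print(residual)                          # 0: the kink solves it identically
```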
In fact, there is another (negative) value of $b$, given by $1/2-\sqrt{3}/2$, where the zero mode also splits, but we will not consider it here. For positive $b$, the energy density becomes \begin{equation}\label{enerdensity2} \rho(x)=S^4\left(1+b\,S^2-\frac12\, b\,S^4\right)\,, \end{equation} where $S={\rm sech}(x)$. Fig.~\ref{figure2} shows the behavior of the energy density for some values of the parameter $b$. \begin{figure}[t] \includegraphics[scale=0.7]{fig2} \caption{\small{The energy density \eqref{enerdensity2}, plotted for $b=0$ (thicker line), $b=1.366$ (thick line), and $b=2$ (thinner line).}} \label{figure2} \end{figure} We go further and investigate linear stability of the model. We use Eq.~\eqref{ppp1} to get \begin{equation} -\left[(1+2b\pi^{\prime2})\psi_n^{\prime}\right]^{\prime} =-V_{\pi\pi}\psi_n + \left(1-2b\pi\pi^{\prime\prime}\right)w_n^2\psi_n\,. \end{equation} We can thus make the following change of variables \begin{subequations} \begin{equation} dz\!=\sqrt{\!\frac{1+4 bS^2\left(1-S^2\right)}{1+2bS^4}\;}\;dx\,, \end{equation} and \begin{equation} u_n(z)\!=\![(1\!+\!4 bS^2(1\!-\! S^2))(1+2bS^4)]^{1/4}\psi_n\,. \end{equation} \end{subequations} In this case the parameter $b$ should be greater than $-1/2$. The stability potential cannot be written analytically, but Fig.~\ref{figure3} shows how it behaves numerically for some values of $b$. It goes to the same value, $4$, as $z\to\pm\infty$, independently of $b$. We see that it engenders a double well behavior as $b$ increases, so we also investigate the zero mode, which is depicted in Fig.~\ref{figure4}. As expected, the zero mode responds to the double well behavior by splitting, to accommodate itself into the two wells. This splitting of the zero mode is an interesting new behavior: it does not appear in the standard case, for $b=0$. To see how the splitting appears, we note from \eqref{schroeq} that the zero mode $u_0(z)$ obeys $-(u_0)_{zz}+U(z)u_0=0$.
Moreover, in order to split, the zero mode has to change from a maximum to a local minimum at the origin, so it has to have an inflection point at $z=0$, such that $(u_0)_{zz}|_{z=0}=0$. As we see from the equation for the zero mode, this implies that the stability potential has to vanish at the origin, that is, $U(z=0)=0$. For the model under investigation, this is achieved for $b=1.366$, as we illustrate in Figs.~\ref{figure3} and \ref{figure4}. Let us further study this model with another potential. We change \eqref{pot2} to the new form \begin{equation}\label{pot3} V(\pi)=\frac12(1+b)(1-\pi^2)^{2}. \end{equation} We use \eqref{foeq} to write \begin{equation}\label{eqb} \pi^{\prime 2}=\frac{\sqrt{1+4b(1+b)(1-\pi^2)^{2}}-1}{2b}, \end{equation} which is consistent with \eqref{fistorde2} in the limit $b\to0$. \begin{figure}[t] \includegraphics[scale=0.7]{fig3} \caption{\small{The stability potential, plotted for $b=0$ (thicker line), $b=1.366$ (thick line), and $b=2$ (thinner line).}} \label{figure3} \end{figure} \begin{figure}[t] \includegraphics[scale=0.7]{fig4} \caption{\small{Plot of the zero mode for $b=0$ (thicker line), $b=1.366$ (thick line), and $b=2$ (thinner line).}} \label{figure4} \end{figure} \begin{figure}[t] \includegraphics[scale=0.7]{fig5} \caption{\small{Plot of the static solution that solves \eqref{eqb} for $b=0$ (thicker line), $b=5$ (thick line), and $b=100$ (thinner line)}.} \label{figure5} \end{figure} We solve this equation numerically, and we plot the solution in Fig.~\ref{figure5}, for some values of $b$. We note that as $b$ increases to very large values, the static solution shrinks, suggesting the appearance of a compact solution; see, e.g., Ref.~\cite{Bazeia:2010vb}. To see how the compact solution appears, we proceed as follows: we take $b$ very large, and from \eqref{eqb} we get \begin{equation} \pi^\prime=|1-\pi^2|^{1/2}. 
\end{equation} This equation is solved by \begin{eqnarray} \pi(x) = \left\{ \begin{array}{clc} \sin(x) & \mbox{ for } &|x|\leq \pi/2\,, \\ {\rm sign}(x) & \mbox{ for } & |x|> \pi/2\,, \end{array} \right. \end{eqnarray} which is a compact solution, depicted in Fig.~\ref{figure6}. We see that the solution for $b=100$ in Fig.~\ref{figure5} is essentially the compact solution plotted in Fig.~\ref{figure6}. We also note that if one changes the potential \eqref{pot3} to the new form \begin{equation} V(\pi)=\frac12(1+b)(1-\pi^2)^{4}, \end{equation} then in the limit of very large $b$ we get $\pi^\prime=(1-\pi^2)$. This result leads us back to the standard solutions, described by Eq.~\eqref{sta}. \begin{figure}[t] \includegraphics[scale=0.7]{fig6} \caption{\small{Plot of the static compact solution which appears for $b$ very large}.} \label{figure6} \end{figure} \subsection{Generalized k-Galileon and symmetry breaking} Let us consider another model, now changing the $K(\pi,X)$ contribution to the generalized form \begin{equation}\label{x2} K=X+b X^2-V. \end{equation} This generalized form is sometimes called k-field; see, e.g., Refs.~\cite{Bazeia:2013yza, Bazeia:2008tj, Avelino:2010bu,Bazeia:2013euc,Bazeia:2010vb}. This explains the term k-Galileon that we are using to name this subsection. Here we also take \begin{equation}\label{f2} F=\frac32b\pi X, \end{equation} which is essentially the same $F$ we have considered in the previous subsection; the factor $3/2$ is included in the above function for simplicity. We are adding the $X^2$ term in \eqref{x2} with the same parameter $b$, to simplify the first-order equation, as we show below. We study no other possibility in this work. We use \eqref{x2} and \eqref{f2}, and now the first-order equation changes to \begin{equation} \pi^{\prime 2}=2V.
\end{equation} If we take the standard potential \begin{equation} V(\pi)=\frac12(1-\pi^2)^2, \end{equation} the solution becomes the standard one, $\pi(x)=\tanh(x)$, as in \eqref{sta}. Here, however, the energy density has the form \begin{equation} \rho(x)=S^4-\frac{b}{4}S^6 \Big(7S^2-6\Big). \end{equation} It has an inflection point at $x=0$, for $b=4/5$. Thus, for $b>4/5$ the energy density starts to split, indicating that the static structure engenders the interesting behavior of splitting itself. To see how the splitting appears, in Fig.~\ref{figure7} we plot the energy density for some values of $b$. Moreover, to study the behavior of the model under stability, we note that in this new model, stability leads us to \begin{figure}[t] \includegraphics[scale=0.7]{fig7} \caption{{\small The energy density, depicted for $b=0$ (dashed line), $b=0.65$ (thicker line), $b=0.8$ (thick line), and $b=0.95$ (thinner line).}} \label{figure7} \end{figure} \begin{equation} -\psi_n^{\prime\prime}=-V_{\pi\pi}\psi_n+\Big[1-b \left(\pi^{\prime 2}+3\pi\pi^{\prime\prime}\right)\Big]w_n^2\psi_n. \end{equation} To write the Schr\"odinger-like equation and identify the stability potential we take \begin{equation} dz=\sqrt{1-b\,S^2\Big(7S^2-6 \Big)}\,dx, \end{equation} and \begin{equation} u_n(z)=\Big[1-b\,S^2\left(7 S^2-6 \right)\Big]^{1/4}\psi_n(x), \end{equation} which requires that $b<1$. In this case, the stability potential cannot be given explicitly, but we can depict it numerically, as we show in Fig.~\ref{figure8}. Also, we can investigate the zero mode numerically. The study shows that it also splits for $b\in(1/3,1)$, and in Fig.~\ref{figure9} we depict the zero mode for some values of $b$.
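The quoted inflection value $b=4/5$ for the energy density above can be verified symbolically; a short check of ours:

```python
import sympy as sp

x, b = sp.symbols('x b', real=True)
S = 1/sp.cosh(x)
rho = S**4 - (b/4)*S**6*(7*S**2 - 6)    # energy density of the k-Galileon model
curv0 = sp.diff(rho, x, 2).subs(x, 0)   # curvature at the center of the wall
print(sp.solve(sp.Eq(curv0, 0), b))     # [4/5]: rho''(0) changes sign at b = 4/5
```

For $b>4/5$ the center of the wall turns from a maximum into a local minimum of $\rho(x)$, which is the splitting discussed in the text.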
\begin{figure}[ht] \includegraphics[scale=0.7]{fig8} \caption{{\small The stability potential, depicted for $b=0$ (dashed line), $b=1/6$ (thicker line), $b=1/3$ (thick line), and $b=2/3$ (thinner line).}} \label{figure8} \end{figure} \begin{figure}[ht] \includegraphics[scale=0.7]{fig9} \caption{{\small The zero mode, depicted for $b=0$ (dashed line), $b=1/6$ (thicker line), $b=1/3$ (thick line), and $b=2/3$ (thinner line).}} \label{figure9} \end{figure} If we want to focus on the splitting of the static structure and make it more evident, we may choose another, more appropriate model. To illustrate this situation we follow \cite{S2}, and introduce the potential \begin{equation}\label{pmodel} V(\pi)=\frac12 \left(\pi^{\frac{p-1}{p}}-\pi^{\frac{p+1}{p}}\right)^2, \end{equation} where $p$ is an odd integer, $p=1,3,5,\ldots$. The case $p=1$ leads us back to the previous model. For arbitrary odd $p$, the solution is \begin{equation} \pi(x)=\tanh^p(x/p); \;\;\;p=1,3,5,\ldots, \end{equation} and the energy density takes the form \begin{eqnarray} \rho(x)&=&S_p^4\; T_p^{2 p-2}\Biggl(1+\frac{3b}{4}\;S_p^2\; T_p^{2 p}+b\;\Big(\frac{3-4p}{4p}\Big)\; S_p^4\;T_p^{2p-2}\;\Biggr), \end{eqnarray} where $S_p={\rm sech}(x/p)$ and $T_p=\tanh(x/p)$. It depends on $b$ and $p$, and it is depicted in Fig.~\ref{figure10} for $b=0.95$ and for $p=1,3$ and $5$. The figure shows explicitly that the new parameter $p$ directly contributes to enlarge the splitting of the defect structure. Similar effects appear in the corresponding zero modes; the calculations follow the previous model, so we omit them here. The model \eqref{pmodel} is of interest, since one knows that the parameter $p$ mimics the presence of temperature, as it was introduced in another model \cite{S1}, described by a complex scalar field, used to split the brane in the braneworld scenario with a single extra dimension of infinite extent. See, e.g., Refs.~{\cite{S2,S1}} for further investigations on this issue.
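The solution $\pi(x)=\tanh^p(x/p)$ can be checked directly against the first-order equation $\pi^{\prime}=\sqrt{2V}$ with the potential \eqref{pmodel}. The sketch below does this numerically for $p=3$, restricting to $x>0$ so that the fractional powers of $\pi$ stay real:

```python
import numpy as np

p = 3
x = np.linspace(0.1, 6.0, 400)   # x > 0: fractional powers of pi stay real

T = np.tanh(x / p)
pi = T**p                        # the solution pi(x) = tanh^p(x/p)

# Left-hand side: chain-rule derivative of tanh^p(x/p)
dpi = T**(p - 1) * (1.0 - T**2)

# Right-hand side: sqrt(2 V(pi)) for the potential of the p-model
rhs = pi**((p - 1) / p) - pi**((p + 1) / p)

# pi^{(p-1)/p} - pi^{(p+1)/p} = T^{p-1}(1 - T^2) = pi', as expected
assert np.max(np.abs(dpi - rhs)) < 1e-12
```

The identity holds for any odd $p$, since $\pi^{(p\mp1)/p}=T_p^{p\mp1}$.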
\begin{figure}[t] \includegraphics[scale=0.7]{fig10} \caption{{\small The energy density, depicted for $b=0.95$, and for $p=1$ (thinner line), $p=3$ (thick line), and $p=5$ (thicker line).}} \label{figure10} \end{figure} \section{Conclusions}\label{sec4} In this work we investigated the presence of localized static domain wall solutions in generalized models, described by the Galileon field, but enlarged to accommodate spontaneous symmetry breaking in order to support localized static solutions. The study is implemented within the first-order framework, with the help of the Derrick/Hobart scaling argument and the stressless condition for stability. The general investigation is then illustrated with some distinct models, from which we could construct stable domain wall configurations having the form of the standard domain wall, which appears analytically as the hyperbolic tangent. Moreover, we could find compact solutions, depending on the way the scalar field self-interacts. In particular, we identified an interesting behavior: the splitting of the zero mode, controlled by $b$, the parameter that induces the deviation of the model from the standard one, making the scalar field a Galileon-like field. The splitting of the zero mode may modify the scattering of static structures, and may contribute to change their collective behavior, a subject of current interest in high energy physics; see, e.g., \cite{Sc1,Sc2,Sc3} and references therein. We also investigated another model, which includes k-field kinematics together with the Galileon-like behavior. We studied the case where the two effects cancel each other in the first-order equation, leaving it as in the standard model. However, they change the energy density and the stability, and split the zero mode and the static solution itself.
The splitting of the static structure is another interesting feature, which we think is generic and will persist in the braneworld scenario with a single extra dimension of infinite extent; see, e.g., Refs.~\cite{S2,S1,S3,S4,S5,S6}. This fact motivates us to investigate the models studied in this work minimally coupled to gravity, in the thick braneworld scenario with a single extra spatial dimension of infinite extent. We shall report on this elsewhere. The authors would like to thank CAPES and CNPq for partial financial support.
\section{Introduction} In a recent paper, \cite{all1}, a model of interaction between political parties has been proposed. The model describes a decision-making procedure, deducing the time evolution of three so-called \emph{decision functions} (DFs), one for each party considered in our system. These functions describe the interest of each party to form an alliance with some other party or not. Their decisions are driven by the interaction of each party with the other parties, with their own electors, and with a set of undecided voters (i.e., people who have not yet decided which party to vote for, if they decide to vote at all). The approach adopted in \cite{all1} uses an operatorial framework (see also \cite{bagbook}), in which the DFs are suitable mean values of certain number operators associated to the parties. The dynamics are driven by a suitable hamiltonian which implements the various interactions between the different actors of the system. The limitation of the model, as described in \cite{all1}, is that the hamiltonian is quadratic and, as a consequence, the equations of motion are linear. This simplifies the analysis of the time evolution of the system quite a bit; in fact, an exact solution can be deduced in that case. The price we pay is that the model is not entirely realistic, since the hamiltonian does not include contributions which might be relevant in a concrete situation. In this paper we introduce several \textit{non-linear contributions} in the model, and we solve, adopting a suitable approximation, the related non-linear differential equations. These non-linear terms are needed to introduce in the model some sort of three-body interactions, which were not included in \cite{all1}.
These terms are interesting because they describe (see below for more details) the role of, say, the first party ($\mathcal{P}_{1}$) in the explicit strength of the interaction between the other two parties, $\mathcal{P}_{2}$ and $\mathcal{P}_{3}$. This is important, since it is natural to assume that the DFs of both $\mathcal{P}_{2}$ and $\mathcal{P}_{3}$ also depend on what $\mathcal{P}_{1}$ is doing. It is important to notice that not many contributions exist in the mathematical and physics literature on politics, and only very few of them adopt a quantum mechanical (or operator) point of view, as the one used in \cite{all1}. We refer to \cite{otto,havkhre,galam1,galam2} for some recent and not so recent contributions on this topic. After a long discussion on politics, we also show how the same hamiltonian can be used, with just some minor changes, to deduce the dynamics of a buy-and-sell financial system. The paper is organized as follows: in the next section we introduce the model, derive the differential equations and propose an approximation scheme to solve them. In Section III we show how to model a simple financial system using the same general settings. Section IV contains our conclusions. To keep the paper self-contained, and to make it more readable also to those who are not familiar with quantum mechanics, we have added an appendix where a few crucial aspects of operators and quantum dynamics are reviewed. \section{Modelling alliances in politics and its dynamics} \label{sectII} In this section we discuss the details of our model: we first construct the vectors describing the players and the hamiltonian of the system, and we then deduce the differential equations of motion. To keep the paper self-contained, we first recall a few important facts which were already discussed in \cite{all1}.
In our system we have three parties, $\mathcal{P}_{1}$, $\mathcal{P}_{2}$ and $\mathcal{P}_{3}$, which, together, form the system $\mathcal{S}_{\mathcal{P}}$. Each party has to make a choice, and it can choose only `one' or `zero', corresponding, respectively, to \emph{forming a coalition} or not. This is, in fact, the only aspect of the parties we are interested in. Hence, we have eight different possibilities, to which we associate eight different and mutually orthogonal vectors in an eight-dimensional Hilbert space $\mathcal{H}_{\mathcal{P}}$. These vectors are $\varphi _{i,k,l}$, with $i,k,l=0,1$. As an example, the first vector, $\varphi _{0,0,0}$, describes the fact that, at $t=0$, no party wants to ally with the other parties. Of course, this attitude can change during the time evolution. What is interesting to know is: how does this attitude change? And how can one describe this change? As another example, $\varphi _{0,1,0}$ describes the fact that, at $t=0$, $\mathcal{P}_{1}$ and $\mathcal{P}_{3}$ do not want to form any coalition, while $\mathcal{P}_{2}$ does. $\mathcal{F}_{\varphi }=\{\varphi _{i,k,l},\,i,k,l=0,1\}$ is an orthonormal basis for $\mathcal{H}_{\mathcal{P}}$. A generic vector of $\mathcal{S}_{\mathcal{P}}$, for $t=0$, is a linear combination of the form \begin{equation} \Psi =\sum_{i,k,l=0}^{1}\alpha _{i,k,l}\varphi _{i,k,l}, \label{24} \end{equation} where we assume $\sum_{i,k,l=0}^{1}|\alpha _{i,k,l}|^{2}=1$ in order to normalize the total probability, \cite{khren2}. In particular, $|\alpha _{0,0,0}|^{2}$ represents the probability that $\mathcal{S}_{\mathcal{P}}$ is, at $t=0$, in the state $\varphi _{0,0,0}$, i.e. that $\mathcal{P}_{1}$, $\mathcal{P}_{2}$ and $\mathcal{P}_{3}$ have all chosen `0' (no coalition).
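The probabilistic reading of the coefficients $\alpha_{i,k,l}$ can be made concrete with a few lines of code. The sketch below is purely illustrative: the lexicographic index $(i,k,l)\mapsto 4i+2k+l$ and the randomly drawn amplitudes are our own assumptions, not taken from \cite{all1}:

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic state Psi = sum alpha_{ikl} phi_{ikl} over the 8 basis vectors,
# normalized so that the squared amplitudes sum to one
alpha = rng.normal(size=8) + 1j * rng.normal(size=8)
alpha /= np.linalg.norm(alpha)

# Probabilities of the eight configurations; index 0 corresponds to
# phi_{0,0,0} in the ordering (i,k,l) -> 4i + 2k + l
prob = np.abs(alpha) ** 2
assert abs(prob.sum() - 1.0) < 1e-12
assert 0.0 <= prob[0] <= 1.0   # |alpha_{0,0,0}|^2: nobody wants a coalition
```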
As in \cite{all1}, and for the same reasons (see below), we construct the vectors $\varphi _{i,k,l}$ in a very special way, starting with the vacuum of three fermionic operators, $p_{1}$, $p_{2}$ and $p_{3}$, i.e. three operators which, together with their adjoints, satisfy the canonical anticommutation relations (CAR) $\{p_{k},p_{l}^{\dagger }\}=\delta _{k,l}$ and $\{p_{k},p_{l}\}=0$. Here $\{x,y\}=xy+yx$, for all pairs $x$ and $y$. More in detail, $\varphi _{0,0,0}$ is such that $p_{j}\varphi _{0,0,0}=0$, $j=1,2,3$. The other vectors $\varphi _{i,j,k}$ can be constructed by acting on $\varphi _{0,0,0}$ with the operators $p_{1}^{\dagger }$, $p_{2}^{\dagger }$ and $p_{3}^{\dagger }$: \begin{equation*} \varphi _{1,0,0}=p_{1}^{\dagger }\varphi _{0,0,0},\quad \varphi _{0,1,0}=p_{2}^{\dagger }\varphi _{0,0,0},\quad \varphi _{1,1,0}=p_{1}^{\dagger }\,p_{2}^{\dagger }\varphi _{0,0,0},\quad \varphi _{1,1,1}=p_{1}^{\dagger }\,p_{2}^{\dagger }\,p_{3}^{\dagger }\varphi _{0,0,0}, \end{equation*} and so on. Let now $\hat{P}_{j}=p_{j}^{\dagger }p_{j}$ be the so-called \emph{number operator} of the $j$-th party, which is constructed using $p_{j}$ and its adjoint, $p_{j}^{\dagger }$. Since $\hat{P}_{j}\varphi _{n_{1},n_{2},n_{3}}=n_{j}\varphi _{n_{1},n_{2},n_{3}}$, for $j=1,2,3$, it is clear that the $\varphi _{n_{1},n_{2},n_{3}}$ are eigenvectors of these operators, while their eigenvalues, zero and one, correspond to the only possible choices admitted for the three parties at $t=0$. This is, in fact, the main reason why we have used here the fermionic operators $p_{j}$: they automatically produce only these eigenvalues. Our first effort now consists in \emph{giving a dynamics} to the number operators $\hat{P}_{j}$, following the general scheme proposed in \cite{bagbook}. Hence, we look for a hamiltonian $H$ which describes the interactions between the various constituents of the system.
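For readers who prefer explicit matrices, the construction above can be realized concretely. The sketch below uses a Jordan--Wigner representation of the three fermionic modes; this particular representation is our own choice for illustration, not something used in \cite{all1}, but it satisfies the CAR exactly and reproduces the $0/1$ eigenvalues of the number operators:

```python
import numpy as np

# Jordan-Wigner matrices for three fermionic modes on an 8-dimensional space
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])                 # parity factor
a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode lowering operator

def kron3(A, B, C):
    return np.kron(np.kron(A, B), C)

p = [kron3(a, I2, I2), kron3(Z, a, I2), kron3(Z, Z, a)]

anti = lambda A, B: A @ B + B @ A

# CAR: {p_k, p_l^dagger} = delta_{kl} 1 and {p_k, p_l} = 0
for k in range(3):
    for l in range(3):
        target = np.eye(8) if k == l else np.zeros((8, 8))
        assert np.allclose(anti(p[k], p[l].conj().T), target)
        assert np.allclose(anti(p[k], p[l]), np.zeros((8, 8)))

# Number operators P_j = p_j^dagger p_j only have eigenvalues 0 and 1
for pj in p:
    N = pj.conj().T @ pj
    vals = np.abs(np.round(np.linalg.eigvalsh(N), 10))
    assert set(vals) == {0.0, 1.0}
```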
Once $H$ is given, we can first compute the time evolution of the number operators as $\hat{P}_{j}(t):=e^{iHt}\hat{P}_{j}e^{-iHt}$, and we can then ascertain their mean values on some suitable state describing the system at $t=0$, in order to get what we have already called \emph{decision functions} (DFs), see below. The \emph{rules} needed to write down $H$ are described in \cite{bagbook}, and adopted in \cite{all1}, where it is also discussed why the three parties are just part of a larger system which must also include the set of electors. In fact, it is mainly this interaction which creates the final decision. Hence, $\mathcal{S}_{\mathcal{P}}$ must be \emph{open}, meaning that there must exist some \emph{large} environment, $\mathcal{R}$, which interacts with $\mathcal{P}_{1}$, $\mathcal{P}_{2}$ and $\mathcal{P}_{3}$, and produces some sort of feedback used by each $\mathcal{P}_{j}$ to decide. Fermionic operators (depending also on a continuous index) are used to describe the environment as well, \cite{all1}. The various elements of our model are described in Figure \ref{figscheme}, where the arrows show all the admissible interactions.
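Before introducing the reservoir, a minimal closed-system toy example may clarify how a DF is computed from the Heisenberg evolution. The sketch below keeps only two parties and an exchange interaction; the matrix representation and all parameter values are our own illustrative assumptions, not taken from \cite{all1}:

```python
import numpy as np
from scipy.linalg import expm

# Two-mode Jordan-Wigner representation of the party sector
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])
p1, p2 = np.kron(a, I2), np.kron(Z, a)
N1, N2 = p1.conj().T @ p1, p2.conj().T @ p2

# Toy hamiltonian: free terms plus an exchange interaction (invented values)
omega1, omega2, mu = 1.0, 1.0, 0.3
H = omega1 * N1 + omega2 * N2 + mu * (p1.conj().T @ p2 + p2.conj().T @ p1)

# Initial state phi_{1,0}: party 1 wants an alliance, party 2 does not
phi = np.zeros(4)
phi[2] = 1.0        # index 2 = |n1=1, n2=0> in this tensor-product ordering

# Decision function P_j(t) = <phi, e^{iHt} N_j e^{-iHt} phi>
def P(N, t):
    U = expm(-1j * H * t)
    return float(np.real(phi.conj() @ (U.conj().T @ N @ U) @ phi))

assert abs(P(N1, 0.0) - 1.0) < 1e-12
for t in (1.0, 2.0, 5.0):
    # the exchange term only transfers the "decision" between the parties
    assert abs(P(N1, t) + P(N2, t) - 1.0) < 1e-10
```

Coupling such a closed system to the reservoirs of Figure \ref{figscheme} is what produces the damped, decision-like behavior discussed below.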
\vspace*{1cm} \begin{figure}[tbp] \begin{center} \begin{picture}(450,90) \put(160,65){\thicklines\line(1,0){45}} \put(160,85){\thicklines\line(1,0){45}} \put(160,65){\thicklines\line(0,1){20}} \put(205,65){\thicklines\line(0,1){20}} \put(183,75){\makebox(0,0){${\cal P}_2$}} \put(300,35){\thicklines\line(1,0){45}} \put(300,55){\thicklines\line(1,0){45}} \put(300,35){\thicklines\line(0,1){20}} \put(345,35){\thicklines\line(0,1){20}} \put(323,45){\makebox(0,0){${\cal P}_3$}} \put(10,35){\thicklines\line(1,0){45}} \put(10,55){\thicklines\line(1,0){45}} \put(10,35){\thicklines\line(0,1){20}} \put(55,35){\thicklines\line(0,1){20}} \put(33,45){\makebox(0,0){${\cal P}_1$}} \put(10,-55){\thicklines\line(1,0){45}} \put(10,-35){\thicklines\line(1,0){45}} \put(10,-55){\thicklines\line(0,1){20}} \put(55,-55){\thicklines\line(0,1){20}} \put(33,-45){\makebox(0,0){${\cal R}_1$}} \put(140,-55){\thicklines\line(1,0){85}} \put(140,-35){\thicklines\line(1,0){85}} \put(140,-55){\thicklines\line(0,1){20}} \put(225,-55){\thicklines\line(0,1){20}} \put(183,-45){\makebox(0,0){${\cal R}_2$}} \put(300,-55){\thicklines\line(1,0){45}} \put(300,-35){\thicklines\line(1,0){45}} \put(300,-55){\thicklines\line(0,1){20}} \put(345,-55){\thicklines\line(0,1){20}} \put(323,-45){\makebox(0,0){${\cal R}_3$}} \put(140,-155){\thicklines\line(1,0){85}} \put(140,-95){\thicklines\line(1,0){85}} \put(140,-155){\thicklines\line(0,1){60}} \put(225,-155){\thicklines\line(0,1){60}} \put(183,-125){\makebox(0,0){${\cal R}_{und}$}} \put(70,44){\thicklines\vector(1,0){220}} \put(70,44){\thicklines\vector(-1,0){3}} \put(70,44){\thicklines\vector(3,1){80}} \put(70,44){\thicklines\vector(-3,-1){3}} \put(290,44){\thicklines\vector(-3,1){80}} \put(290,44){\thicklines\vector(3,-1){3}} \put(31,27){\thicklines\vector(0,-1){55}} \put(31,27){\thicklines\vector(0,1){3}} \put(322,27){\thicklines\vector(0,-1){55}} \put(322,27){\thicklines\vector(0,1){3}} \put(165,57){\thicklines\vector(0,-1){85}} 
\put(165,57){\thicklines\vector(0,1){3}} \put(35,27){\thicklines\vector(1,-1){115}} \put(35,27){\thicklines\vector(-1,1){3}} \put(318,27){\thicklines\vector(-1,-1){115}} \put(318,27){\thicklines\vector(1,1){3}} \put(195,57){\thicklines\vector(0,-1){145}} \put(195,57){\thicklines\vector(0,1){3}} \end{picture} \end{center} \par \vspace*{5.3cm} \caption{The system and its multi-component reservoir.} \label{figscheme} \end{figure} In this figure $\mathcal{R}_{j}$ represents the set of the electors of $\mathcal{P}_{j}$, while $\mathcal{R}_{und}$ is the set of all the undecided voters. Figure \ref{figscheme} shows, for instance, that $\mathcal{P}_{1}$ can interact with $\mathcal{R}_{1}$ and $\mathcal{R}_{und}$, but neither with $\mathcal{R}_{2}$ nor with $\mathcal{R}_{3}$. We also see that $\mathcal{P}_{1}$ interacts with both $\mathcal{P}_{2}$ and $\mathcal{P}_{3}$. To define the hamiltonian which describes, in our framework, the scheme in Figure \ref{figscheme}, we start by introducing the following purely quadratic operator, which is, essentially, the one adopted in \cite{all1}: \begin{equation} \left\{ \begin{array}{ll} h=H_{0}+H_{PBs}+H_{PB}+H_{int,l}, & \\ H_{0}=\sum_{j=1}^{3}\omega _{j}p_{j}^{\dagger }p_{j}+\sum_{j=1}^{3}\int_{\mathbb{R}}\Omega _{j}(k)B_{j}^{\dagger }(k)B_{j}(k)\,dk+\int_{\mathbb{R}}\Omega (k)B^{\dagger }(k)B(k)\,dk, & \\ H_{PBs}=\sum_{j=1}^{3}\lambda _{j}\int_{\mathbb{R}}\left( p_{j}B_{j}^{\dagger }(k)+B_{j}(k)p_{j}^{\dagger }\right) \,dk, & \\ H_{PB}=\sum_{j=1}^{3}\tilde{\lambda}_{j}\int_{\mathbb{R}}\left( p_{j}B^{\dagger }(k)+B(k)p_{j}^{\dagger }\right) \,dk, & \\ H_{int,l}=\mu _{12}^{(0)}\left( p_{1}^{\dagger }p_{2}+p_{2}^{\dagger }p_{1}\right) +\nu _{12}^{(0)}\left( p_{1}^{\dagger }p_{2}^{\dagger }+p_{2}p_{1}\right) +\mu _{13}^{(0)}\left( p_{1}^{\dagger }p_{3}+p_{3}^{\dagger }p_{1}\right) + & \\ \qquad +\nu _{13}^{(0)}\left( p_{1}^{\dagger }p_{3}^{\dagger }+p_{3}p_{1}\right) +\mu _{23}^{(0)}\left( p_{2}^{\dagger }p_{3}+p_{3}^{\dagger
}p_{2}\right) +\nu _{23}^{(0)}\left( p_{2}^{\dagger }p_{3}^{\dagger }+p_{3}p_{2}\right) . & \end{array} \right. \label{22} \end{equation} Here $\omega _{j}$, $\lambda _{j}$, $\tilde{\lambda}_{j}$, $\mu _{ij}^{(0)}$ and $\nu _{ij}^{(0)}$ are real quantities, while $\Omega _{j}(k)$ and $\Omega (k)$ are real-valued functions. Their meaning is explained in detail in \cite{all1}. As already anticipated, the following CARs for the operators of the reservoir are assumed: \begin{equation} \{B_{i}(k),B_{l}^{\dagger }(q)\}=\delta _{i,l}\delta (k-q)\,1\!\!1,\qquad \{B_{i}(k),B_{l}(k)\}=0, \label{23} \end{equation} as well as \begin{equation} \{B(k),B^{\dagger }(q)\}=\delta (k-q)\,1\!\!1,\quad \{B(k),B(k)\}=0. \label{23b} \end{equation} Moreover, each $p_{j}^{\sharp }$ anti-commutes with each $B_{j}^{\sharp }(k)$ and with $B^{\sharp }(k)$: $\{p_{j}^{\sharp },B_{l}^{\sharp }(k)\}=\{p_{j}^{\sharp },B^{\sharp }(k)\}=0$ for all $j$, $l$ and for all $k$, and we further assume that $\{B^{\sharp }(q),B_{l}^{\sharp }(k)\}=0$. Here $X^{\sharp }$ stands for $X$ or $X^{\dagger }$.
The full hamiltonian is now obtained by adding to $h$ another term, $\delta h$, which contains some non-quadratic terms: \begin{equation} \left\{ \begin{array}{ll} H=h+\delta h, & \\ \delta h=h_{int}^{ex}+h_{int}^{coop}, & \\ h_{int}^{ex}=\left( \mu _{12}^{(2)}+(\mu _{12}^{(1)}-\mu _{12}^{(2)})N_{3}\right) \left( p_{1}^{\dagger }p_{2}+p_{2}^{\dagger }p_{1}\right) +\left( \mu _{13}^{(2)}+(\mu _{13}^{(1)}-\mu _{13}^{(2)})N_{2}\right) \left( p_{1}^{\dagger }p_{3}+p_{3}^{\dagger }p_{1}\right) + & \\ \qquad +\left( \mu _{23}^{(2)}+(\mu _{23}^{(1)}-\mu _{23}^{(2)})N_{1}\right) \left( p_{2}^{\dagger }p_{3}+p_{3}^{\dagger }p_{2}\right) , & \\ h_{int}^{coop}=\left( \nu _{12}^{(2)}+(\nu _{12}^{(1)}-\nu _{12}^{(2)})N_{3}\right) \left( p_{1}^{\dagger }p_{2}^{\dagger }+p_{2}p_{1}\right) +\left( \nu _{13}^{(2)}+(\nu _{13}^{(1)}-\nu _{13}^{(2)})N_{2}\right) \left( p_{1}^{\dagger }p_{3}^{\dagger }+p_{3}p_{1}\right) + & \\ \qquad +\left( \nu _{23}^{(2)}+(\nu _{23}^{(1)}-\nu _{23}^{(2)})N_{1}\right) \left( p_{2}^{\dagger }p_{3}^{\dagger }+p_{3}p_{2}\right) , & \end{array} \right. \label{24b} \end{equation} where, again, $\mu _{ij}^{(1)}$, $\mu _{ij}^{(2)}$, $\nu _{ij}^{(1)}$ and $\nu _{ij}^{(2)}$ are real quantities. Let us now explain the various terms in $H$. The first contribution in (\ref{22}) is $H_{0}$, which describes the free evolution of the operators of $\mathcal{S}=\mathcal{S}_{\mathcal{P}}\otimes \mathcal{R}$, where $\mathcal{R}=(\mathcal{R}_{1}\otimes \mathcal{R}_{2}\otimes \mathcal{R}_{3})\otimes \mathcal{R}_{und}$. If, in particular, all the interaction parameters $\lambda _{j},\tilde{\lambda}_{j}$, $\mu _{ij}^{(l)}$ and $\nu _{ij}^{(l)}$ are zero, then $H=H_{0}$. Hence, since in this case $[H,\hat{P}_{j}]=0$, the number operators describing the choices of the three parties (and their related DFs) stay constant in time. In other words, in the absence of interactions, the original choice of each $\mathcal{P}_{j}$ is not affected by the time evolution.
Translating this into the Schr\"{o}dinger representation, this means that if $\mathcal{S}_{\mathcal{P}}$ is in an eigenstate $\varphi _{n_{1},n_{2},n_{3}}$ of $H_{0}$, then it remains in the same state also for $t>0$. However, we should also add that if $\mathcal{S}_{\mathcal{P}}$ is in the state $\Psi $ in (\ref{24}), we might have non-trivial dynamics already at this level. As discussed in \cite{all1}, $H_{PBs}$ describes the interaction between the three parties and their related groups of electors: $p_{j}B_{j}^{\dagger }(k)$ describes the fact that, when some sort of \emph{global reaction against alliance} (GRAA) increases, then $\mathcal{P}_{j}$ tends to choose `0' (no coalition). On the other hand, $B_{j}(k)p_{j}^{\dagger }$ describes the fact that $\mathcal{P}_{j}$ looks for some coalition when the GRAA of its electors decreases. This is because of the raising and lowering operators $p_{j}^{\dagger }$ and $p_{j}$ in these interaction terms, coupled respectively with the lowering ($B_{j}(k)$) and raising ($B_{j}^{\dagger }(k)$) operators of the electors of $\mathcal{P}_{j}$. A similar interpretation holds for $H_{PB}$, with the difference that the interaction is now between the parties and a single set of undecided voters. The last contribution in $h$, $H_{int,l}$, is introduced to describe the fact that the parties also attempt to talk to each other to reach some agreement. Two possibilities are allowed: \textbf{i)} the parties act \emph{cooperatively} (they make the same choice, and we have terms like $p_{j}^{\dagger }p_{k}^{\dagger }$), and \textbf{ii)} they make opposite choices; for instance, $\mathcal{P}_{1}$ tries to form some alliance, while $\mathcal{P}_{2}$ excludes this possibility (and we have terms like $p_{1}^{\dagger }p_{2}$). Of course, the relative magnitude of $\mu _{jk}^{(0)}$ and $\nu _{jk}^{(0)}$ decides which is the leading contribution in $H_{int,l}$.
It is important to stress that all the terms in $H_{int,l}$ are quadratic, so that the contributions they produce in the Heisenberg differential equations turn out to be linear. This is the reason why it was possible, in \cite{all1}, to produce an analytical solution for the time evolution of the system. However, the extra terms in (\ref{24b}) make, in our opinion, the situation more interesting from the point of view of the interpretation. In fact, whilst in $H_{int,l}$ the will of $\mathcal{P}_{1}$ to form or not an alliance with $\mathcal{P}_{2}$ is totally independent of what $\mathcal{P}_{3}$ is doing, this is no longer so when we also consider $\delta h$. For instance, let us consider the interaction between $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$, and in particular let us focus on the \emph{exchange} term, which we now rewrite as follows: \begin{equation*} \left( \mu _{12}^{(2)}+(\mu _{12}^{(1)}-\mu _{12}^{(2)})N_{3}\right) \left( p_{1}^{\dagger }p_{2}+p_{2}^{\dagger }p_{1}\right) =\mu _{12}^{(1)}N_{3}\left( p_{1}^{\dagger }p_{2}+p_{2}^{\dagger }p_{1}\right) +\mu _{12}^{(2)}(1\!\!1-N_{3})\left( p_{1}^{\dagger }p_{2}+p_{2}^{\dagger }p_{1}\right) . \end{equation*} The meaning of the two contributions is now evident: the first term, i.e. the one proportional to $\mu _{12}^{(1)}$ in the RHS, describes the fact that the more $\mathcal{P}_{3}$ is willing to ally with $\mathcal{P}_{1}$ or $\mathcal{P}_{2}$, the more these two parties tend to behave differently: one is \emph{pleased} with $\mathcal{P}_{3}$'s attentions, the other is not. The other term, the one proportional to $\mu _{12}^{(2)}$, describes the opposite situation: $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ tend to behave differently when the interest of $\mathcal{P}_{3}$ to form a coalition is low.
In other words, what decides the relative strength of the $\mathcal{P}_{1}\leftrightarrow \mathcal{P}_{2}$ interaction is not (only) the relative value of $\mu _{12}^{(1)}$ and $\mu _{12}^{(2)}$ but also, and more interestingly, the attitude of $\mathcal{P}_{3}$ to form (or not) a coalition: the behavior of $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ is related also to what $\mathcal{P}_{3}$ is doing. Of course, a similar analysis can be repeated for the other terms in $h_{int}^{ex}$, while as far as $h_{int}^{coop}$ is concerned, the presence of $N_{j}$ or $1\!\!1-N_{j}$ introduces, again, different weights in the various terms of the hamiltonian. However, the other two parties now tend to behave in the same way. For instance, rewriting \begin{equation*} \left( \nu _{12}^{(2)}+(\nu _{12}^{(1)}-\nu _{12}^{(2)})N_{3}\right) \left( p_{1}^{\dagger }p_{2}^{\dagger }+p_{2}p_{1}\right) =\nu _{12}^{(1)}N_{3}\left( p_{1}^{\dagger }p_{2}^{\dagger }+p_{2}p_{1}\right) +\nu _{12}^{(2)}(1\!\!1-N_{3})\left( p_{1}^{\dagger }p_{2}^{\dagger }+p_{2}p_{1}\right) , \end{equation*} we see that when $\mathcal{P}_{3}$ wants to form some coalition, then both $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ react in the same way: they both try to form (or not to form) a coalition, with $\mathcal{P}_{3}$ or between themselves. Moreover, we are also considering the possibility in which the strength of the interaction is proportional to $1\!\!1-N_{3}$ rather than to $N_{3}$. Of course, we stress again that, besides the value of $N_{3}$, the numerical values of the parameters $\mu _{ij}^{(k)}$ and $\nu _{ij}^{(k)}$ are also crucial in deciding the strength of the various terms in $\delta h$. \vspace{1mm} We are now ready to continue with the analysis of the dynamics of the system. The Heisenberg equations of motion $\dot X(t)=i[H,X(t)]$, \cite{bagbook}, can be deduced by using the CAR (\ref{23}) and (\ref{23b}) above.
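As a consistency check on the structure of these equations, one can verify a single Heisenberg commutator explicitly in the finite-dimensional party sector (reservoir omitted; the two-mode matrices and parameter values below are illustrative assumptions of ours, not taken from \cite{all1}):

```python
import numpy as np

# Two-mode matrix representation of the party sector only
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])
p1, p2 = np.kron(a, I2), np.kron(Z, a)
dag = lambda A: A.conj().T

# Quadratic hamiltonian pieces acting on p1 (invented parameter values)
omega1, mu, nu = 1.0, 0.4, 0.2
H = omega1 * dag(p1) @ p1 \
    + mu * (dag(p1) @ p2 + dag(p2) @ p1) \
    + nu * (dag(p1) @ dag(p2) + p2 @ p1)

# Heisenberg equation: dp1/dt = i [H, p1]
dp1 = 1j * (H @ p1 - p1 @ H)

# Expected structure, matching the -i omega p1 - i mu p2 - i nu p2^dagger
# pattern of the linear part l_1(t) below
expected = -1j * (omega1 * p1 + mu * p2 + nu * dag(p2))
assert np.allclose(dp1, expected)
```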
The result can be written as follows: \begin{equation} \left\{ \begin{array}{ll} \dot p_1(t)=l_1(t)+nl_1(t), & \\ \vspace{1mm} \dot p_2(t)=l_2(t)+nl_2(t), & \\ \vspace{1mm} \dot p_3(t)=l_3(t)+nl_3(t), & \\ \vspace{1mm} \dot B_j(q,t)=-i\Omega_j(q) B_j(q,t)+i\lambda_j p_j(t),\qquad j=1,2,3, & \\ \vspace{1mm} \dot B(q,t)=-i\Omega(q) B(q,t)+i\sum_{j=1}^3\tilde\lambda_j p_j(t), \label{26} & \end{array} \right. \end{equation} where we have introduced the following quantities: \begin{equation} \left\{ \begin{array}{ll} l_1(t)=-i\omega_1 p_1(t)+i\lambda_1\int_{\mathbb{R}}B_1(q,t)\,dq+i\tilde\lambda_1\int_{\mathbb{R}}B(q,t)\,dq-i(\mu_{12}^{(0)}+\mu_{12}^{(2)})p_2(t)+ & \\ \qquad \quad -i(\mu_{13}^{(0)}+\mu_{13}^{(2)})p_3(t) -i(\nu_{12}^{(0)}+\nu_{12}^{(2)})p_2^\dagger(t)-i(\nu_{13}^{(0)}+\nu_{13}^{(2)})p_3^\dagger(t), & \\ \vspace{1mm} l_2(t)=-i\omega_2 p_2(t)+i\lambda_2\int_{\mathbb{R}}B_2(q,t)\,dq+i\tilde\lambda_2\int_{\mathbb{R}}B(q,t)\,dq-i(\mu_{12}^{(0)}+\mu_{12}^{(2)})p_1(t)+ & \\ \qquad \quad -i(\mu_{23}^{(0)}+\mu_{23}^{(2)})p_3(t) +i(\nu_{12}^{(0)}+\nu_{12}^{(2)})p_1^\dagger(t)-i(\nu_{23}^{(0)}+\nu_{23}^{(2)})p_3^\dagger(t), & \\ \vspace{1mm} l_3(t)=-i\omega_3 p_3(t)+i\lambda_3\int_{\mathbb{R}}B_3(q,t)\,dq+i\tilde\lambda_3\int_{\mathbb{R}}B(q,t)\,dq-i(\mu_{13}^{(0)}+\mu_{13}^{(2)})p_1(t)+ & \\ \qquad \quad -i(\mu_{23}^{(0)}+\mu_{23}^{(2)})p_2(t) +i(\nu_{13}^{(0)}+\nu_{13}^{(2)})p_1^\dagger(t)+i(\nu_{23}^{(0)}+\nu_{23}^{(2)})p_2^\dagger(t), \label{26b} & \end{array} \right.
\end{equation} which are all linear in their entries, and the following functions, which are not linear: \begin{equation} \left\{ \begin{array}{ll} nl_1(t)=-i(\mu_{12}^{(1)}-\mu_{12}^{(2)})N_3(t)p_2(t)-i(\mu_{13}^{(1)}-\mu_{13}^{(2)})N_2(t)p_3(t)+ & \\ \qquad \quad -i(\nu_{12}^{(1)}-\nu_{12}^{(2)})N_3(t)p_2^\dagger(t)-i(\nu_{13}^{(1)}-\nu_{13}^{(2)})N_2(t)p_3^\dagger(t)+ & \\ \qquad \quad -i (\mu_{23}^{(1)}-\mu_{23}^{(2)})p_1(t)(p_2^\dagger(t) p_3(t)+p_3^\dagger(t) p_2(t))+ & \\ \qquad \quad -i (\nu_{23}^{(1)}-\nu_{23}^{(2)})p_1(t)(p_2^\dagger(t) p_3^\dagger(t)+p_3(t) p_2(t)), & \\ \vspace{1mm} nl_2(t)=-i(\mu_{12}^{(1)}-\mu_{12}^{(2)})N_3(t)p_1(t)-i(\mu_{23}^{(1)}-\mu_{23}^{(2)})N_1(t)p_3(t)+ & \\ \qquad \quad +i(\nu_{12}^{(1)}-\nu_{12}^{(2)})N_3(t)p_1^\dagger(t)-i(\nu_{23}^{(1)}-\nu_{23}^{(2)})N_1(t)p_3^\dagger(t)+ & \\ \qquad \quad -i (\mu_{13}^{(1)}-\mu_{13}^{(2)})p_2(t)(p_1^\dagger(t) p_3(t)+p_3^\dagger(t) p_1(t))+ & \\ \qquad \quad -i (\nu_{13}^{(1)}-\nu_{13}^{(2)})p_2(t)(p_1^\dagger(t) p_3^\dagger(t)+p_3(t) p_1(t)), & \\ \vspace{1mm} nl_3(t)=-i(\mu_{13}^{(1)}-\mu_{13}^{(2)})N_2(t)p_1(t)-i(\mu_{23}^{(1)}-\mu_{23}^{(2)})N_1(t)p_2(t)+ & \\ \qquad \quad +i(\nu_{13}^{(1)}-\nu_{13}^{(2)})N_2(t)p_1^\dagger(t)+i(\nu_{23}^{(1)}-\nu_{23}^{(2)})N_1(t)p_2^\dagger(t)+ & \\ \qquad \quad -i (\mu_{12}^{(1)}-\mu_{12}^{(2)})p_3(t)(p_1^\dagger(t) p_2(t)+p_2^\dagger(t) p_1(t))+ & \\ \qquad \quad -i (\nu_{12}^{(1)}-\nu_{12}^{(2)})p_3(t)(p_1^\dagger(t) p_2^\dagger(t)+p_2(t) p_1(t)). \label{26c} & \end{array} \right.
\end{equation} The last two equations in (\ref{26}) can be rewritten as \begin{equation*} B_j(q,t)=B_j(q)e^{-i\Omega_j(q)t}+i\lambda_j\int_0^t p_j(t_1)e^{-i\Omega_j(q)(t-t_1)}\,dt_1 \end{equation*} and \begin{equation*} B(q,t)=B(q)e^{-i\Omega(q)t}+i\int_0^t \sum_{j=1}^3\tilde\lambda_j p_j(t_1)e^{-i\Omega(q)(t-t_1)}\,dt_1, \end{equation*} which, assuming that $\Omega_j(k)=\Omega_j\, k$ and $\Omega(k)=\Omega\, k$, with $\Omega,\Omega_j>0$, and using $\int_{\mathbb{R}}e^{-i\Omega_j q(t-t_1)}\,dq=\frac{2\pi}{\Omega_j}\,\delta(t-t_1)$, together with the standard convention that the delta function centered at the endpoint $t_1=t$ contributes one half to the integral, produce \begin{equation} \int_{\mathbb{R}}B_j(q,t)\,dq=\int_{\mathbb{R}}B_j(q)e^{-i\Omega_j q t}\,dq+i\pi\frac{\lambda_j}{\Omega_j}\,p_j(t), \label{27} \end{equation} and \begin{equation} \int_{\mathbb{R}}B(q,t)\,dq=\int_{\mathbb{R}}B(q)e^{-i\Omega q t}\,dq+i\pi\frac{\sum_{j=1}^3\tilde\lambda_j\,p_j(t)}{\Omega}. \label{28} \end{equation} Now, long but straightforward computations allow us to rewrite $l_j(t)$ and $nl_j(t)$ in a simpler form. In particular, we find \begin{equation} \left\{ \begin{array}{ll} l_1(t)=-\tilde\omega_1 p_1(t)-\tilde\gamma_{12}p_2(t)-\tilde\gamma_{13}p_3(t)-i\nu_{12}p_2^\dagger(t)-i\nu_{13}p_3^\dagger(t)+\eta_1(t), & \\ l_2(t)=-\tilde\omega_2 p_2(t)-\tilde\gamma_{12}p_1(t)-\tilde\gamma_{23}p_3(t)+i\nu_{12}p_1^\dagger(t)-i\nu_{23}p_3^\dagger(t)+\eta_2(t), & \\ l_3(t)=-\tilde\omega_3 p_3(t)-\tilde\gamma_{13}p_1(t)-\tilde\gamma_{23}p_2(t)+i\nu_{13}p_1^\dagger(t)+i\nu_{23}p_2^\dagger(t)+\eta_3(t), \label{29} & \end{array} \right.
\end{equation} and \begin{equation} \left\{ \begin{array}{ll} nl_1(t)=-i\delta_{12}^\mu N_3(t) p_2(t)-i\delta_{13}^\mu N_2(t) p_3(t)-i\delta_{12}^\nu N_3(t) p_2^\dagger(t)-i\delta_{13}^\nu N_2(t) p_3^\dagger(t)+ & \\ \qquad \quad -i\delta_{23}^\mu p_1(t)(p_2^\dagger(t)p_3(t)+p_3^\dagger(t)p_2(t))-i\delta_{23}^\nu p_1(t)(p_2^\dagger(t)p_3^\dagger(t)+p_3(t)p_2(t)), & \\ nl_2(t)=-i\delta_{12}^\mu N_3(t) p_1(t)-i\delta_{23}^\mu N_1(t) p_3(t)+i\delta_{12}^\nu N_3(t) p_1^\dagger(t)-i\delta_{23}^\nu N_1(t) p_3^\dagger(t)+ & \\ \qquad \quad -i\delta_{13}^\mu p_2(t)(p_1^\dagger(t)p_3(t)+p_3^\dagger(t)p_1(t))-i\delta_{13}^\nu p_2(t)(p_1^\dagger(t)p_3^\dagger(t)+p_3(t)p_1(t)), & \\ nl_3(t)=-i\delta_{13}^\mu N_2(t) p_1(t)-i\delta_{23}^\mu N_1(t) p_2(t)+i\delta_{13}^\nu N_2(t) p_1^\dagger(t)+i\delta_{23}^\nu N_1(t) p_2^\dagger(t)+ & \\ \qquad \quad -i\delta_{12}^\mu p_3(t)(p_1^\dagger(t)p_2(t)+p_2^\dagger(t)p_1(t))-i\delta_{12}^\nu p_3(t)(p_1^\dagger(t)p_2^\dagger(t)+p_2(t)p_1(t)). \label{210} & \end{array} \right. \end{equation} Here we have introduced the following simplifying notation: \begin{equation*} \tilde\omega_l:=i\omega_l+\pi\left(\frac{\lambda_l^2}{\Omega_l}+\frac{\tilde\lambda_l^2}{\Omega}\right), \quad \tilde\gamma_{k,l}:=i\left(\mu_{k,l}^{(0)}+\mu_{k,l}^{(2)}\right)+\frac{\pi}{\Omega}\tilde\lambda_k\tilde\lambda_l, \end{equation*} \begin{equation*} \nu_{kl}=\nu_{kl}^{(0)}+\nu_{kl}^{(2)}, \quad \delta_{kl}^\mu=\mu_{kl}^{(1)}-\mu_{kl}^{(2)}, \quad \delta_{kl}^\nu=\nu_{kl}^{(1)}-\nu_{kl}^{(2)}, \end{equation*} for $k,l=1,2,3$, as well as the operator-valued functions: \begin{equation*} \eta_j(t)=i\left(\lambda_j\beta_j(t)+\tilde\lambda_j\beta(t)\right), \end{equation*} where \begin{equation*} \beta_j(t)=\int_{\mathbb{R}}B_j(q)e^{-i\Omega_j q t}dq, \quad\mbox{ and }\quad \beta(t)=\int_{\mathbb{R}}B(q)e^{-i\Omega q t}dq.
\end{equation*} \vspace{2mm} \textbf{Remark:--} We notice that these equations return those in \cite{all1} when we put to zero all the coefficients measuring the non-linearity. Therefore, in this case, they can be explicitly solved. \vspace{2mm} Once we have deduced $p_{j}(t)$, we need to compute the DFs $P_{j}(t)$, which are defined as follows: \begin{equation} P_{j}(t):=\left\langle \hat{P}_{j}(t)\right\rangle =\left\langle p_{j}^{\dagger }(t)p_{j}(t)\right\rangle , \label{add1} \end{equation} for $j=1,2,3$. Here $\left\langle .\right\rangle $ is a state over the full system. These states, \cite{bagbook}, are taken to be suitable tensor products of vector states for $\mathcal{S}_{\mathcal{P}}$ and states on the reservoir which obey some standard rules (see below). More in detail, for each operator of the form $X_{\mathcal{S}}\otimes Y_{\mathcal{R}}$, $X_{\mathcal{S}}$ being an operator of $\mathcal{S}_{\mathcal{P}}$ and $Y_{\mathcal{R}}$ an operator of the reservoir, we put \begin{equation} \left\langle X_{\mathcal{S}}\otimes Y_{\mathcal{R}}\right\rangle :=\left\langle \varphi _{n_{1},n_{2},n_{3}},X_{\mathcal{S}}\varphi _{n_{1},n_{2},n_{3}}\right\rangle \,\omega _{\mathcal{R}}(Y_{\mathcal{R}}). \label{add2} \end{equation} Here $\varphi _{n_{1},n_{2},n_{3}}$ is one of the vectors introduced at the beginning of this section, and each $n_{j}$ represents, as discussed before, the tendency of $\mathcal{P}_{j}$ to form (or not) some coalition at $t=0$.
Moreover, $\omega _{\mathcal{R}}(.)$ is a state on $\mathcal{R}$ satisfying the following standard properties, \cite{bagbook}: \begin{equation} \omega _{\mathcal{R}}(1\!\!1_{\mathcal{R}})=1,\quad \omega _{\mathcal{R}}(B_{j}(k))=\omega _{\mathcal{R}}(B_{j}^{\dagger }(k))=0,\quad \omega _{\mathcal{R}}(B_{j}^{\dagger }(k)B_{l}(q))=N_{j}(k)\,\delta _{j,l}\delta (k-q), \label{211} \end{equation} as well as \begin{equation} \omega _{\mathcal{R}}(B(k))=\omega _{\mathcal{R}}(B^{\dagger }(k))=0,\quad \omega _{\mathcal{R}}(B^{\dagger }(k)B(q))=N(k)\,\delta (k-q), \label{211bis} \end{equation} for some suitable functions $N_{j}(k)$ and $N(k)$, which we take here to be constant in $k$: $N_{j}(k)=N_{j}$ and $N(k)=N$. Also, we assume $\omega _{\mathcal{R}}(B_{j}(k)B_{l}(q))=\omega _{\mathcal{R}}(B(k)B(q))=0$, for all $j$ and $l$. The reason why we use the state in (\ref{add2}) is that it describes, in our framework, the fact that, at $t=0$, $\mathcal{P}_{j}$'s decision is $n_{j}$, while the overall feeling of the voters $\mathcal{R}_{j}$ is $N_{j}$, and that of the undecided ones is $N$. Of course, these might appear as oversimplifying assumptions, but they still produce, in many concrete applications, rather interesting dynamics for the model. \subsection{The solution} \label{sectII1} To begin with, we consider now a simple but still non-trivial situation, which allows us to write the differential equations of the system in a reasonably simple way and to find an approximate solution. This suggests a strategy which can be easily generalized to other situations. This is, in fact, what we will do in the last part of this section.
Let us assume for the moment that the coefficients in $\delta h$ are such that \begin{equation*} \delta _{13}^{\mu }=\delta _{13}^{\nu }=\delta _{23}^{\mu }=\delta _{23}^{\nu }=\delta _{12}^{\nu }=0, \end{equation*} while $\delta _{12}^{\mu }=\mu _{12}^{(1)}-\mu _{12}^{(2)}\neq 0$, and for simplicity we call this difference $\delta $: $\delta =\delta _{12}^{\mu }$. This makes the system non-linear, but not extremely complicated (at least not from the point of view of the notation). The first three equations of system (\ref{26}), together with their adjoints, can be rewritten as \begin{equation} \dot{P}(t)=TP(t)+\eta (t)+i\delta \Lambda (P(t)), \label{41} \end{equation} where we have introduced the following vectors: \begin{equation*} P(t)=\left( \begin{array}{c} p_{1}(t) \\ p_{2}(t) \\ p_{3}(t) \\ p_{1}^{\dagger }(t) \\ p_{2}^{\dagger }(t) \\ p_{3}^{\dagger }(t) \\ \end{array}\right) ,\quad \eta (t)=\left( \begin{array}{c} \eta _{1}(t) \\ \eta _{2}(t) \\ \eta _{3}(t) \\ \eta _{1}^{\dagger }(t) \\ \eta _{2}^{\dagger }(t) \\ \eta _{3}^{\dagger }(t) \\ \end{array}\right) ,\quad \Lambda (P(t))=\left( \begin{array}{c} -N_{3}(t)p_{2}(t) \\ -N_{3}(t)p_{1}(t) \\ -p_{3}(t)\left( p_{1}^{\dagger }(t)p_{2}(t)+p_{2}^{\dagger }(t)p_{1}(t)\right) \\ p_{2}^{\dagger }(t)N_{3}(t) \\ p_{1}^{\dagger }(t)N_{3}(t) \\ \left( p_{1}^{\dagger }(t)p_{2}(t)+p_{2}^{\dagger }(t)p_{1}(t)\right) p_{3}^{\dagger }(t) \\ \end{array}\right) , \end{equation*} as well as the matrix \begin{equation*} T=\left( \begin{array}{cccccc} -\tilde{\omega}_{1} & -\tilde{\gamma}_{12} & -\tilde{\gamma}_{13} & 0 & -i\nu _{12} & -i\nu _{13} \\ -\tilde{\gamma}_{12} & -\tilde{\omega}_{2} & -\tilde{\gamma}_{23} & i\nu _{12} & 0 & i\nu _{23} \\ -\tilde{\gamma}_{13} & -\tilde{\gamma}_{23} & -\tilde{\omega}_{3} & i\nu _{13} & i\nu _{23} & 0 \\ 0 & i\nu _{12} & i\nu _{13} & -\overline{\tilde{\omega}_{1}} & -\overline{\tilde{\gamma}_{12}} & -\overline{\tilde{\gamma}_{13}} \\ -i\nu _{12} & 0 & i\nu _{23} &
-\overline{\tilde{\gamma}_{12}} & -\overline{\tilde{\omega}_{2}} & -\overline{\tilde{\gamma}_{23}} \\ -i\nu _{13} & -i\nu _{23} & 0 & -\overline{\tilde{\gamma}_{13}} & -\overline{\tilde{\gamma}_{23}} & -\overline{\tilde{\omega}_{3}} \\ \end{array}\right) . \end{equation*} Solving exactly equation (\ref{41}) is quite hard, if not impossible, due to the non-linearity included in $\Lambda (P(t))$. However, it is easy to set up a recursive approximation approach which might converge to, or at least approximate, the solution. The idea is simple, and it works better under the assumption that $\delta $ is sufficiently small. In this case we replace (\ref{41}) with the following, much simpler, equation: $\dot{P}_{0}(t)=TP_{0}(t)+\eta (t)$, which is linear and can be easily solved. The solution is \begin{equation*} P_{0}(t)=e^{Tt}\left( P(0)+\int_{0}^{t}e^{-Tt_{1}}\eta _{0}(t_{1})dt_{1}\right) , \end{equation*} where we have introduced, for reasons which will be clear in a moment, $\eta _{0}(t)\equiv \eta (t)$. We can now use this zeroth-order approximation of $P(t)$ in $\Lambda (P(t))$ in equation (\ref{41}), which becomes $\dot{P}_{1}(t)=TP_{1}(t)+\eta _{1}(t)$, where $\eta _{1}(t)=\eta _{0}(t)+i\delta \Lambda (P_{0}(t))$. Notice that $\eta _{1}(t)$ is now a known function. The solution of this equation is \begin{equation*} P_{1}(t)=e^{Tt}\left( P(0)+\int_{0}^{t}e^{-Tt_{1}}\eta _{1}(t_{1})dt_{1}\right) . \end{equation*} Of course, we can iterate the procedure, and the $n$-th approximation is \begin{equation} P_{n}(t)=e^{Tt}\left( P(0)+\int_{0}^{t}e^{-Tt_{1}}\eta _{n}(t_{1})dt_{1}\right) , \label{42} \end{equation} where $\eta _{n}(t)=\eta (t)+i\delta \Lambda (P_{n-1}(t))$, for $n\geq 1$. Hence, at least in principle, we can reach the level of approximation we want. However, we should also say that it is not guaranteed that the sequence $\{P_{n}(t)\}$ really converges to the solution of (\ref{41}), even if this might appear rather reasonable.
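The behavior of the recursion (\ref{42}) is easy to probe numerically on a scalar caricature of equation (\ref{41}). In the sketch below (plain Python), the values of $T$, $\eta$, $\Lambda$ and $\delta$ are arbitrary illustrative choices, and ordinary complex numbers replace the operator-valued unknowns of the true problem; the Picard-type iterates are compared against a direct Runge--Kutta integration of the same equation.

```python
import cmath

# Toy scalar version of dP/dt = T P + eta(t) + i*delta*Lambda(P):
# T, eta, Lambda and delta are arbitrary illustrative choices, and the
# operator-valued nature of the true unknowns is deliberately ignored.
T = -1.0
delta = 0.05
P0 = 1.0 + 0.0j
eta = lambda t: 0.1 + 0.0j
Lam = lambda p: -p * p
tmax, nsteps = 1.0, 1000
dt = tmax / nsteps
ts = [k * dt for k in range(nsteps + 1)]

def rk4_reference():
    """Direct Runge-Kutta integration of the full non-linear equation."""
    f = lambda t, p: T * p + eta(t) + 1j * delta * Lam(p)
    p, out = P0, [P0]
    for k in range(nsteps):
        t = ts[k]
        k1 = f(t, p)
        k2 = f(t + dt / 2, p + dt / 2 * k1)
        k3 = f(t + dt / 2, p + dt / 2 * k2)
        k4 = f(t + dt, p + dt * k3)
        p = p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(p)
    return out

def picard(niter):
    """P_n(t) = e^{Tt} ( P(0) + int_0^t e^{-Ts} eta_n(s) ds ),
    with eta_0 = eta and eta_n = eta + i*delta*Lambda(P_{n-1})."""
    Pn = None
    for n in range(niter + 1):
        if n == 0:
            eta_n = [eta(t) for t in ts]
        else:
            eta_n = [eta(ts[k]) + 1j * delta * Lam(Pn[k])
                     for k in range(nsteps + 1)]
        integ, acc = [0.0j], 0.0j
        for k in range(nsteps):  # trapezoidal rule for the integral
            acc += dt / 2 * (cmath.exp(-T * ts[k]) * eta_n[k]
                             + cmath.exp(-T * ts[k + 1]) * eta_n[k + 1])
            integ.append(acc)
        Pn = [cmath.exp(T * t) * (P0 + J) for t, J in zip(ts, integ)]
    return Pn

ref = rk4_reference()
approx = picard(4)
err = max(abs(x - y) for x, y in zip(approx, ref))
```

With these (arbitrary) values, the fourth iterate agrees with the direct integration to well below $10^{-3}$ over $0\leq t\leq 1$, consistent with the expectation that the scheme behaves well for small $\delta$ and $t$.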
Similar problems often occur when non-linear differential equations are considered, as happens in our system. Summarizing, we cannot say a priori that (i) $\lim_{n\rightarrow \infty }P_{n}(t)$ exists (in some suitable topology), and (ii) even if it exists, whether this limit is the solution of equation (\ref{41}). Nevertheless, what we can safely say is that $P_{n}(t)$ is a certain approximation of $P(t)$, and we suspect that this approximation is sufficiently good for small $\delta $ and $t$, and for large $n$. Of course, more could be said only after numerical computations or by looking for some a priori estimates. This is indeed part of our work in progress. However, there is a situation in which the computations can be carried out explicitly. In fact, if $\delta _{kl}^{\mu }=\delta _{kl}^{\nu }=0$ for all $k,l$, then, as already observed, the equations reduce to those for the linear system\footnote{Incidentally, we observe that this does not imply that all the parameters of $\delta h$ are zero. It only means that they coincide in pairs.}. Hence, they are exactly solvable, and the result has been discussed in \cite{all1}. Looking at the analytical form of $\delta h$ in (\ref{24b}), this can be understood since it corresponds to the fact that, for instance, $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ react with the same strength to the will of $\mathcal{P}_{3}$ either to create an alliance or not. In the next subsection we will briefly show that we can consider cases other than the one considered above. In fact, a general solution can also be found even when the parameters in $\delta h$ are different from each other. \subsection{A more general situation} It is clear that when we give up the working assumptions we have considered above (i.e. $\delta _{13}^{\mu }=\delta _{13}^{\nu }=\delta _{23}^{\mu }=\delta _{23}^{\nu }=\delta _{12}^{\nu }=0$), the explicit form of the non-linear term $i\delta \Lambda (P(t))$ changes.
This is due to the presence of several parameters, and not just one. Consequently, it is convenient to modify the strategy, and this can be done as follows: the starting point is the equation $$\dot{P}(t)=TP(t)+\eta (t)+\tilde{\Lambda}(P(t)),$$ where $\tilde{\Lambda}(P(t))$ is the general non-linear contribution which extends the term $i\delta \Lambda (P(t))$ in (\ref{41}). Introducing now $P_1(t)=e^{-Tt}P(t)$, $\eta_1(t)=e^{-Tt}\eta (t)$ and $\tilde{\Lambda}_1(P_1(t))=e^{-Tt}\tilde{\Lambda}(e^{Tt}P_1(t))$, the equation for $P_1(t)$ becomes $$\dot{P_1}(t)=\eta_1 (t)+\tilde{\Lambda}_1(P_1(t)),$$ which can still be rewritten in a more convenient form by further introducing $\eta_2(t)=\int_0^t\eta_1(t_1)dt_1$ and the new unknown $P_2(t)=P_1(t)-\eta_2(t)$. In fact, calling now $\tilde{\Lambda}_2(P_2(t))=\tilde{\Lambda}_1(P_2(t)+\eta_2(t))$, we get a very simple differential equation, $$ \dot P_2(t)=\tilde{\Lambda}_2(P_2(t)), $$ whose formal solution is \begin{equation}\int dP_2 \,\tilde{\Lambda}_2^{-1}(P_2)=t+\alpha,\label{43}\end{equation} $\alpha$ being an integration constant. Of course, this solution is {\em formal} for several reasons: firstly, we don't know a priori if $\tilde{\Lambda}_2^{-1}(P_2)$ exists. Secondly, we are not sure we can compute its integral. Thirdly, we are working with operators (and not with simple functions). This makes the situation even more complicated. However, in principle, formula (\ref{43}) produces the solution of the general problem, without any approximation. Hence, from a certain point of view, it looks much more interesting than the solution deduced in the previous subsection. We will devote future work to a deeper, and more explicit, analysis of the results arising from equation (\ref{43}).
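Although (\ref{43}) is only formal, the chain of substitutions leading to $\dot P_2(t)=\tilde\Lambda_2(P_2(t))$ can be checked concretely on a scalar caricature, where complex numbers replace operators and $T$, $\eta$ and $\tilde\Lambda$ are arbitrary illustrative choices. Integrating the transformed equation and mapping back through $P(t)=e^{Tt}(P_2(t)+\eta_2(t))$ must then reproduce the direct integration of the original equation:

```python
import cmath

# Scalar caricature of dP/dt = T P + eta + Lambda(P) and of its transformed
# version dP2/dt = Lambda_2(P2); all choices below are illustrative only.
T = -0.5
eta0 = 0.2
Lam = lambda p: -0.1 * p * p
P_init = 1.0 + 0.0j
tmax, nsteps = 1.0, 2000
dt = tmax / nsteps

def rk4(f, y0):
    """Runge-Kutta integration of dy/dt = f(t, y) from t = 0 to t = tmax."""
    y, t = y0, 0.0
    for _ in range(nsteps):
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return y

# (1) direct integration of the original equation
P_direct = rk4(lambda t, p: T * p + eta0 + Lam(p), P_init)

# (2) transformed problem: with eta(t) = eta0 constant,
#     eta2(t) = int_0^t e^{-Ts} eta0 ds is known in closed form, and
#     Lambda_2(P2) = e^{-Tt} Lambda( e^{Tt} (P2 + eta2(t)) ).
eta2 = lambda t: eta0 * (1 - cmath.exp(-T * t)) / T
Lam2 = lambda t, p2: cmath.exp(-T * t) * Lam(cmath.exp(T * t) * (p2 + eta2(t)))
P2_final = rk4(Lam2, P_init)                 # P2(0) = P1(0) = P(0)
P_back = cmath.exp(T * tmax) * (P2_final + eta2(tmax))

err = abs(P_direct - P_back)
```

Note that, in this scalar setting, $\tilde\Lambda_2$ carries an explicit dependence on $t$ through the factors $e^{\pm Tt}$, which is one more reason why the separated form (\ref{43}) has to be handled with care.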
\section{Dynamics of buying and selling} We have already remarked in several papers (see in particular \cite{baghav1,baghav2} for recent results) that the above extended hamiltonian framework could be applied to economics and finance. We now show that this is true and, in so doing, we change the interpretation of the model considered here. In particular, we will now discuss how the resulting framework becomes akin to a formal structure which can describe the dynamics of buying and selling (of financial assets, for instance). However, the framework does not explicitly provide a mechanism by which prices can be generated. We note first that, when considering the different terms which are part of $H=h+\delta h$, we can in effect make an argument that the hamiltonians $H_{0}$, $H_{PBs}$, $H_{PB}$ are associated with public information, which occurs at a macroscale, since they are connected with some reservoirs which in fact describe (see below) large groups of people. As is reported in \cite{MS2000}, the reaction of traders to this public information is then transferred onto smaller scales, i.e. to the traders themselves. The scale at which this happens is cast by the hamiltonians $H_{int,l}$ and $h$. The above framework, we insist, hints back to the binary choice of either buying or selling. The key reason for that is that the eigenvalues of the number operators are either `$0$' or `$1$'. The financial system which we want to emulate with $H=h+\delta h$ must contain interactions, and hence we cannot be satisfied with just using $H_{0}$. This interaction, in the framework proposed here, can be either at the macroscale and/or the microscale (i.e. between the traders). The division into two broad types of information, i.e. public and private information, occurs typically (and intuitively) at, respectively, the macroscale and the microscale. One can of course be rigorous about this.
Work by \cite{Scheinkman2004}, for instance, shows that private information has no effect at all on traders when they behave in a rational expectations model. To make sure we use some reference framework from the economics literature on how to properly define public versus private information, we resort to \cite{BHR2013}, who define public information as having the potential to be known by everyone, whilst private information may be known by one single individual. In our situation, the decision functions $P_{j}(t)$ describe the will of the three traders\footnote{Of course, we are sticking here to just three traders because of our previous application to political alliances, but it is not difficult to extend the model to more traders.}, $\mathcal{P}_{1}$, $\mathcal{P}_{2}$ and $\mathcal{P}_{3}$, to buy (zero) or to sell (one) some assets. This choice is driven by public information (i.e. by $\mathcal{R}_{j}$ and $\mathcal{R}_{und}$, see below) and by private information (i.e. by the mutual interaction between the traders). On the basis of public information, traders can adjust their portfolio holdings and this, as \cite{BHR2013} indicates, can affect prices in the market. The opposite may well be true in the case of private information, where a single party profits but with no necessary effect on price behavior. What is interesting is the statement by \cite{BHR2013} that there will almost always (see p.~224) be processes operating which `publicize' private information. Please consider again $H_{PBs}$ and $H_{PB}$, which were mentioned in the context of the politics example above. Assume we have three traders who have the binary elemental task of either selling or buying. Denote by $H=h+\delta h$ the hamiltonian which describes the dynamics of buying and selling over time, under the influence of both private and public information.
Besides the no-interaction hamiltonian, $H_{0}$, the dynamic drivers which are associated with public information are, as stated, $H_{PBs}$ and $H_{PB}$. From an economics point of view, the baths $\mathcal{R}_{1}$, $\mathcal{R}_{2}$ and $\mathcal{R}_{3}$ now signify a vast collection of \textit{informed} traders with which our three traders interact (in view of performing the elemental task of buying and selling). The bath $\mathcal{R}_{und}$, on the other hand, consists of a vast collection of traders who can be interpreted as \textit{noise} traders. This can be easily achieved in our model by assuming some randomness of the $\tilde{\lambda}_{j}$ in (\ref{22}). A question arises as to whether we can be rigorous in defining those two types of traders. In \cite{HK2013} noise traders are defined as traders who act upon information which is, more often than not, spurious. Informed traders have at their command information which can be objectively used in decisions involving buying or selling. The contribution of $H_{PBs}$ in $H$ (i.e. the full driver of the dynamics of buying and selling) describes the interaction between the three traders and the baths of informed traders. Clearly, we want to point out that this interaction occurs via the medium of public information, given the size of the baths. The mechanism that $p_{j}B_{j}^{\dagger }(k)$ describes now leads to (say) the action of selling by traders, given that some public information (from informed traders) has been released that `selling' is what one should do. In identical fashion we argue for a buying signal when $B_{j}(k)p_{j}^{\dagger }$ occurs. But note also the contribution of $H_{PB}$, which now influences traders to sell or buy given public information coming from noise traders.
Both those buying and selling signals, whether they derive from the interaction with the baths $\mathcal{R}_{1}$, $\mathcal{R}_{2}$ and $\mathcal{R}_{3}$ or with the bath $\mathcal{R}_{und}$, have the potential to ultimately influence price setting, given that public information is at stake. What is contained in $H_{int,l}$ and $\delta h$ are communications between traders, without recourse to the public information baths. We have three traders, and by virtue of this very small size, it is perfectly intuitive to call the information upon which traders make decisions within this interaction setting private information. But, as has been remarked above, whilst in $H_{int,l}$ an individual trader's decision to buy or sell does not affect the `partner' traders, a very explicit dependence between the individual traders is built in when considering $\delta h$. However, private information as such is not expected to influence price behavior. Private information, as we have remarked above, seems to be subject to acts of `publicizing'. Well-known notions like information leakage and uncertainty creation can follow from such an act. See \cite{BHR2013} and \cite{MA2011} for a discussion. In \cite{MA2011} (see also \cite{HK2013}), information leakage is defined as \textquotedblleft situations where agents wish to reveal truthfully their private possessed information to others\textquotedblright . Such a release of information invites cooperation amongst agents, and it also very clearly creates an interdependence between them. Information leakage can be selective, i.e. agent 1 can release information only to agent 2 and thereby alienate agent 3. Similarly, in the case of so-called `uncertainty creation', information is created which is purposely false or erroneous, so as to induce other agents into error and thereby serve one's own investment strategy.
This is again an example of private information which, on purpose, creates dependencies between traders. One can be even more precise by considering the quality of the private information. Trader 1 can release private information with noise to trader 2 but without any noise to trader 3. See \cite{BRUNNER2001} (p. 71). One can be even more refined and introduce so-called knowledge operators in the modeling of information. See again \cite{BRUNNER2001} (p. 4-). Of course, these several different effects all suggest the relevance of the full hamiltonian $H$ in (\ref{24b}), and importance is given to its various contributions also in this economics context. Incidentally, this means that the differential equations governing this particular application are again (\ref{26}), so that the same approximation procedure discussed in Section \ref{sectII1} can be adopted. Needless to say, for this particular application our next step will surely be to produce numerical solutions and/or analytical estimates. This is, for the present model, a hard task. However, it can be easily done in the linear case, simply by adapting what we have done in \cite{all1} to the present situation. Before doing that, we would like to mention that the analogy presented in this section does raise the question, however, of how departures from equilibrium can be caused by $H_{int,l}$ and $\delta h$ if we align those hamiltonians with the existence of private information. As such, buying and selling ensuing from private information is unlikely to affect price behavior, and hence unlikely to affect the equilibrium price obtained out of public-information-based buying and selling. \subsection{Back to the linear case} In this subsection we see what happens when $\delta h=0$, i.e. when $\mu _{kl}^{(1)}=\mu _{kl}^{(2)}=\nu _{kl}^{(1)}=\nu _{kl}^{(2)}=0$ for all $k$ and $l$.
In this case, $H=h$, which is quadratic in creation and annihilation operators, and the differential equations (\ref{41}) become linear. Essentially, we go back to what we have done, in a political context, in \cite{all1}. In fact, the numerical plots are completely analogous. For instance, Figures \ref{fig_a} and \ref{fig_b} show the three DFs for two different choices of the parameters of the hamiltonian and for certain initial conditions (see the figures' captions). These two sets of parameters correspond to two different situations. In the first situation, Figure \ref{fig_a}, each trader interacts with its related $\mathcal{R}_{j}$, but not with $\mathcal{R}_{und}$. The traders also interact amongst themselves, but only through the mechanism described by terms like $p_{1}^{\dagger }p_{2}+p_{2}^{\dagger }p_{1}$ in (\ref{22}). In the second situation, Figure \ref{fig_b}, we describe a similar setting, but with the difference that the only possible interaction between the traders is of the cooperative type: only terms like $p_{1}^{\dagger }p_{2}^{\dagger }+p_{2}p_{1}$ survive.
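In the reservoir-free situation ($\lambda_j=\tilde\lambda_j=0$, as in Figure \ref{fig_c} below), the DFs can even be computed exactly by brute force, since the three fermionic modes live in an $8$-dimensional Hilbert space. The sketch below (plain Python) builds the CAR operators through the standard Jordan--Wigner construction and borrows the $\omega_j$ and the exchange parameters $\mu^{(0)}_{k,l}$ of Figure \ref{fig_c}; for simplicity, only these exchange terms of $h$ are kept, so that $\hat N=\hat n_1+\hat n_2+\hat n_3$ is conserved and $P_1(t)+P_2(t)+P_3(t)$ must stay constant, a useful sanity check on the computation.

```python
# Three fermionic modes via Jordan-Wigner: a_1 = a x 1 x 1, a_2 = Z x a x 1,
# a_3 = Z x Z x a, with a = |0><1| and Z = diag(1,-1), on an 8-dim space.

def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dag(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def expm(M):
    """e^M by scaling-and-squaring plus a Taylor series."""
    n = len(M)
    norm, s = max(sum(abs(x) for x in row) for row in M), 0
    while norm > 0.5:
        norm, s = norm / 2, s + 1
    Ms = [[x / 2 ** s for x in row] for row in M]
    R = [[1.0 + 0j if i == j else 0j for j in range(n)] for i in range(n)]
    term = [row[:] for row in R]
    for k in range(1, 25):
        term = [[x / k for x in row] for row in mul(term, Ms)]
        R = [[R[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(s):
        R = mul(R, R)
    return R

A2 = [[0j, 1 + 0j], [0j, 0j]]
I2 = [[1 + 0j, 0j], [0j, 1 + 0j]]
Z2 = [[1 + 0j, 0j], [0j, -1 + 0j]]
a = [kron(A2, kron(I2, I2)), kron(Z2, kron(A2, I2)), kron(Z2, kron(Z2, A2))]

# H = sum_j omega_j a_j^dag a_j + sum_{k<l} mu_kl (a_k^dag a_l + a_l^dag a_k)
omega = [0.1, 0.2, 0.2]
mu = {(0, 1): 2.0, (0, 2): 1.0, (1, 2): 3.0}
H = [[0j] * 8 for _ in range(8)]
terms = [(omega[j], mul(dag(a[j]), a[j])) for j in range(3)]
terms += [(m, mul(dag(a[k]), a[l])) for (k, l), m in mu.items()]
terms += [(m, mul(dag(a[l]), a[k])) for (k, l), m in mu.items()]
for coef, M in terms:
    for i in range(8):
        for j in range(8):
            H[i][j] += coef * M[i][j]

def decision_functions(t, n_init=(0, 1, 1)):
    """P_j(t) = <phi, n_j(t) phi> for the number state phi_{n1,n2,n3}."""
    U = expm([[-1j * t * H[i][j] for j in range(8)] for i in range(8)])
    idx = 4 * n_init[0] + 2 * n_init[1] + n_init[2]
    v = [U[i][idx] for i in range(8)]          # v = e^{-iHt} phi
    return [sum(abs(v[k]) ** 2 for k in range(8) if (k >> b) & 1)
            for b in (2, 1, 0)]                # bit b <-> mode occupation
```

For the initial state $\varphi_{0,1,1}$ one finds oscillating $P_j(t)$ with $P_1(t)+P_2(t)+P_3(t)=2$ at all times, consistent with the purely oscillatory behavior observed in Figure \ref{fig_c}.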
\begin{figure}[ht] \begin{center} \includegraphics[width=0.4\textwidth]{figura_3_P1t.pdf}\hspace{8mm} \includegraphics[width=0.4\textwidth]{figura_3_P2t.pdf}\hfill\\[0pt] \includegraphics[width=0.4\textwidth]{figura_3_P3t.pdf} \end{center} \caption{{\protect\footnotesize $P_1(t)$ (top left), $P_2(t)$ (top right) and $P_3(t)$ (bottom) for $\protect\mu_{1,2}^{(0)}=0.2$, $\protect\mu_{1,3}^{(0)}=0.1$, $\protect\mu_{2,3}^{(0)}=0.15$, $\protect\nu_{k,l}^{(0)}=\protect\tilde\lambda_j=0$, $\protect\omega_1=0.1$, $\protect\omega_2=\protect\omega_3=0.2$, $\Omega_1=\Omega_3=1$, $\Omega_2=2$, $\Omega=0.1$, $\protect\lambda_1=0.1$, $\protect\lambda_2=0.2$, $\protect\lambda_3=0.05$, and $n_1=0$, $n_2=n_3=1$, $N_1=0$, $N_2=N_3=N=1$.}} \label{fig_a} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=0.4\textwidth]{figura_6_P1t.pdf}\hspace{8mm} \includegraphics[width=0.4\textwidth]{figura_6_P2t.pdf}\hfill\\[0pt] \includegraphics[width=0.4\textwidth]{figura_6_P3t.pdf} \end{center} \caption{{\protect\footnotesize $P_1(t)$ (top left), $P_2(t)$ (top right) and $P_3(t)$ (bottom) for $\protect\nu_{1,2}^{(0)}=0.1$, $\protect\nu_{1,3}^{(0)}=0.08$, $\protect\nu_{2,3}^{(0)}=0.1$, $\protect\mu_{k,l}^{(0)}=\protect\tilde\lambda_j=0$, $\protect\omega_1=0.1$, $\protect\omega_2=\protect\omega_3=0.2$, $\Omega_1=\Omega_3=1$, $\Omega_2=2$, $\Omega=0.1$, $\protect\lambda_1=0.1$, $\protect\lambda_2=0.2$, $\protect\lambda_3=0.05$, and $n_1=n_2=n_3=1$, $N_1=N_2=1$, $N_3=N=0$.}} \label{fig_b} \end{figure} From both figures we see that, with these choices of parameters and initial conditions, the three DFs begin oscillating and then reach some asymptotic value, which is not just zero or one. In \cite{all1} we have discussed why this is so, and when a sharp result can really be deduced. The conclusion, here, is that it is quite unlikely that the traders reach some decision they are completely satisfied with.
However, considering for instance $P_{1}(t)$ and $P_{2}(t)$ in Figure \ref{fig_b}, the asymptotic values of both these DFs are close to one. Hence, we see that the decision process produces a sort of unique decision. On the other hand, $\mathcal{P}_{3}$ is not really sure of what he has to do, since $P_{3}(t)$ for large $t$ approaches 0.4, which is not so close to zero. \vspace{2mm} A different story is described by Figure \ref{fig_c}, where we are assuming that the traders only interact among themselves and not with any $\mathcal{R}_{j}$ or with $\mathcal{R}_{und}$. When this happens, it is clear that none of the traders is able to reach a final decision on whether to buy or sell the asset. They just oscillate between different \emph{feelings}, but a conclusion can only be reached when the traders also have some input from the larger sets of informed and noise traders. \begin{figure}[ht] \begin{center} \includegraphics[width=0.4\textwidth]{figura_7_P1t.pdf}\hspace{8mm} \includegraphics[width=0.4\textwidth]{figura_7_P2t.pdf}\hfill\\[0pt] \includegraphics[width=0.4\textwidth]{figura_7_P3t.pdf} \end{center} \caption{{\protect\footnotesize $P_1(t)$ (top left), $P_2(t)$ (top right) and $P_3(t)$ (bottom) for $\protect\nu_{1,2}^{(0)}=0.1$, $\protect\nu_{1,3}^{(0)}=0.08$, $\protect\nu_{2,3}^{(0)}=0.1$, $\protect\mu_{1,2}^{(0)}=2$, $\protect\mu_{1,3}^{(0)}=1$, $\protect\mu_{2,3}^{(0)}=3$, $\protect\tilde\lambda_j=\protect\lambda_j=0$, $\protect\omega_1=0.1$, $\protect\omega_2=\protect\omega_3=0.2$, $\Omega_1=\Omega_3=\Omega=0.1$, $\Omega_2=0.2$ and $n_1=0$, $n_2=n_3=1$, $N_1=N_2=1$, $N_3=N=0$.}} \label{fig_c} \end{figure} \section{Conclusions} In this paper we have shown how to use operatorial techniques, and a Heisenberg-like dynamics, to describe two different, but somehow related, decision-making processes. One such process is related to political alliances, and the other relates to buying and selling phenomena.
A non-linear model, which extends the model proposed in \cite{all1}, has been introduced, and an approximate procedure for the solution of the related equations of motion has also been proposed. We postpone to a second part of the paper the explicit analysis of these solutions, and a detailed analysis of the role of the parameters of the model. We claim that, for small values of the parameters governing the non-linearity, and for sufficiently small time intervals, these solutions do not differ significantly from those deduced in \cite{all1}. It is of course of interest to check what happens for longer intervals, and this will form part of a forthcoming project. Also, it could be interesting to extend the system described in Figure \ref{figscheme} by adding more arrows. In particular, a natural extension of the model discussed in Section \ref{sectII} can be constructed by admitting that, for instance, $\mathcal{P}_{1}$ also interacts with $\mathcal{R}_{2}$ and $\mathcal{R}_{3}$ (i.e. tries to convince them to change their voting intentions). \section*{Acknowledgements} This work was partially supported by the University of Palermo and by G.N.F.M. The authors thank Prof. Andrei Khrennikov for many useful discussions. F.B. acknowledges the warm hospitality of the IQSCS institute at the University of Leicester. \renewcommand{\theequation}{A.\arabic{equation}} \section*{Appendix: A few results on the number representation} To keep the paper self-contained, we discuss here a few important facts in quantum mechanics and in the so--called number representation. Let $\mathcal{H}$ be a Hilbert space, and $B(\mathcal{H})$ the set of all the (bounded) operators on $\mathcal{H}$. Let $\mathcal{S}$ be our physical system, and ${\mathfrak{A}}$ the set of all the operators useful for a complete description of $\mathcal{S}$, which includes the \emph{observables} of $\mathcal{S}$.
For simplicity, it is convenient (but not really necessary) to assume that ${\mathfrak{A}}$ coincides with $B(\mathcal{H})$ itself. The description of the time evolution of $\mathcal{S}$ is related to a self--adjoint operator $H=H^{\dagger }$ which is called the \emph{hamiltonian} of $\mathcal{S}$, and which in standard quantum mechanics represents the energy of $\mathcal{S}$. In this paper, we have adopted the so--called \emph{Heisenberg} representation, in which the time evolution of an observable $X\in {\mathfrak{A}}$ is given by \begin{equation} X(t)=\exp (iHt)X\exp (-iHt), \label{a1} \end{equation} or, equivalently, by the solution of the differential equation \begin{equation} \frac{dX(t)}{dt}=i\exp (iHt)[H,X]\exp (-iHt)=i[H,X(t)], \label{a2} \end{equation} where $[A,B]:=AB-BA$ is the \emph{commutator} between $A$ and $B$. The time evolution defined in this way is a one--parameter group of automorphisms of ${\mathfrak{A}}$. An operator $Z\in{\mathfrak{A}}$ is a \emph{constant of motion} if it commutes with $H$. Indeed, in this case, equation (\ref{a2}) implies that $\dot Z(t)=0$, so that $Z(t)=Z$ for all $t$. In some previous applications, \cite{bagbook}, a special role was played by the so--called \emph{canonical commutation relations}. Here, these are replaced by the so--called \emph{canonical anti--commutation relations} (CAR): we say that a set of operators $\{a_{\ell },\,a_{\ell }^{\dagger },\ell =1,2,\ldots ,L\}$ satisfies the CAR if the conditions \begin{equation} \{a_{\ell },a_{n}^{\dagger }\}=\delta _{\ell n}1\!\!1,\hspace{8mm}\{a_{\ell },a_{n}\}=\{a_{\ell }^{\dagger },a_{n}^{\dagger }\}=0 \label{a3} \end{equation} hold true for all $\ell ,n=1,2,\ldots ,L$. Here, $1\!\!1$ is the identity operator and $\{x,y\}:=xy+yx$ is the \emph{anticommutator} of $x$ and $y$.
These operators, which are widely analyzed in any quantum mechanics textbook (see, for instance, \cite{mer,rom}), are those which are used to describe $L$ different \emph{modes} of fermions. From these operators we can construct $\hat{n}_{\ell }=a_{\ell }^{\dagger }a_{\ell }$ and $\hat{N}=\sum_{\ell =1}^{L}\hat{n}_{\ell }$, which are both self--adjoint. In particular, $\hat{n}_{\ell }$ is the \emph{number operator} for the $\ell $--th mode, while $\hat{N}$ is the \emph{number operator of $\mathcal{S}$}. Compared with bosonic operators, the operators introduced here have a very important feature: if we try to square them (or to raise them to higher powers), we simply get zero: for instance, from (\ref{a3}), we have $a_{\ell }^{2}=0$. This is related to the fact that fermions satisfy the Pauli exclusion principle \cite{rom}. The Hilbert space of our system is constructed as follows: we introduce the \emph{vacuum} of the theory, that is a vector $\varphi_{\mathbf{0}}$ which is annihilated by all the operators $a_\ell$: $a_\ell\varphi_{\mathbf{0}}=0$ for all $\ell=1,2,\ldots,L$. Such a non-zero vector surely exists. Then we act on $\varphi_{\mathbf{0}}$ with the operators $a_\ell^\dagger$ (but not with higher powers, since these powers are simply zero!): \begin{equation} \varphi_{n_1,n_2,\ldots,n_L}:=(a_1^\dagger)^{n_1}(a_2^\dagger)^{n_2}\cdots (a_L^\dagger)^{n_L}\varphi_{\mathbf{0}}, \label{a4} \end{equation} $n_\ell=0,1$ for all $\ell$. These vectors form an orthonormal set and are eigenstates of both $\hat n_\ell$ and $\hat N$: $\hat n_\ell\varphi_{n_1,n_2,\ldots,n_L}=n_\ell\varphi_{n_1,n_2,\ldots,n_L}$ and $\hat N\varphi_{n_1,n_2,\ldots,n_L}=N\varphi_{n_1,n_2,\ldots,n_L},$ where $N=\sum_{\ell=1}^Ln_\ell$.
Moreover, using the CAR, we deduce that \begin{equation*} \hat n_\ell\left(a_\ell\varphi_{n_1,n_2,\ldots,n_L}\right)=(n_\ell-1)(a_\ell\varphi_{n_1,n_2,\ldots,n_L}) \end{equation*} and \begin{equation*} \hat n_\ell\left(a_\ell^\dagger\varphi_{n_1,n_2,\ldots,n_L}\right)=(n_\ell+1)(a_\ell^\dagger\varphi_{n_1,n_2,\ldots,n_L}), \end{equation*} for all $\ell$. Then $a_\ell$ and $a_\ell^\dagger$ are called the \emph{annihilation} and the \emph{creation} operators. Notice that, in some sense, $a_\ell^\dagger$ is also an annihilation operator since, acting on a state with $n_\ell=1$, we destroy that state. The Hilbert space $\mathcal{H}$ is obtained by taking the linear span of all these vectors. Of course, $\mathcal{H}$ has a finite dimension. In particular, for just one mode of fermions, $\dim(\mathcal{H})=2$. This also implies that, contrary to what happens for bosons, all the fermionic operators are bounded. The vector $\varphi_{n_1,n_2,\ldots,n_L}$ in (\ref{a4}) defines a \emph{vector (or number) state} over the algebra ${\mathfrak{A}}$ as \begin{equation} \omega_{n_1,n_2,\ldots,n_L}(X)= \langle\varphi_{n_1,n_2,\ldots,n_L},X\varphi_{n_1,n_2,\ldots,n_L}\rangle, \label{a5} \end{equation} where $\langle\,,\,\rangle$ is the scalar product in $\mathcal{H}$. As we have discussed in \cite{bagbook}, these states are useful to \emph{project} from quantum to classical dynamics and to fix the initial conditions of the considered system.
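These relations are easy to verify mechanically for a small number of modes. The sketch below (plain Python, an illustration not tied to the specific model of the paper) realizes $L=2$ fermionic modes by the usual Jordan--Wigner recipe, $a_1=a\otimes 1\!\!1$ and $a_2=Z\otimes a$, with $a=|0\rangle\langle 1|$ and $Z=\mathrm{diag}(1,-1)$, and checks the CAR (\ref{a3}), the nilpotency $a_\ell^2=0$, and the eigenvalues of the number operators on the vectors (\ref{a4}).

```python
def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dag(A):  # adjoint; all entries here are real, so just transpose
    return [[A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

def anticom(X, Y):
    XY, YX = mul(X, Y), mul(Y, X)
    return [[XY[i][j] + YX[i][j] for j in range(len(XY))]
            for i in range(len(XY))]

A = [[0, 1], [0, 0]]      # single-mode annihilation operator a = |0><1|
I = [[1, 0], [0, 1]]
Z = [[1, 0], [0, -1]]

a1 = kron(A, I)           # L = 2 modes via Jordan-Wigner
a2 = kron(Z, A)
n1 = mul(dag(a1), a1)     # number operators
n2 = mul(dag(a2), a2)

# The basis vector phi_{n1,n2} of (a4) has index 2*n1 + n2; the number
# operators are diagonal, so their eigenvalues can be read off the diagonal.
eig_n1 = [n1[k][k] for k in range(4)]   # expected: [0, 0, 1, 1]
eig_n2 = [n2[k][k] for k in range(4)]   # expected: [0, 1, 0, 1]
```

The tensoring with $Z$ is precisely what turns commuting blocks into operators that anticommute between different modes, which is why the construction reproduces the CAR exactly.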
\section{Introduction} Consider the following question from the theory of linear inequalities over the reals: Given a (finite) system $Ax \leq b$, exactly which linear inequalities $\langle a,x\rangle \leq \beta$ are \emph{valid}, i.e., satisfied for every $x$ that satisfies the given system? The answer is given, of course, by the Farkas Lemma, or, equivalently, by the strong duality theory of linear optimization. As is well-known, this duality theory is symmetric: The dual of a linear optimization problem is again a linear optimization problem, and the dual of the dual is the original (primal) optimization problem. The question becomes much harder when all or some of the variables are constrained to be integers. The theory of valid linear inequalities here is called \emph{cutting plane theory}. Over the past 60 years, a vast body of research has been carried out on this topic, the largest part of it regarding the polyhedral combinatorics of integer hulls of particular families of problems. The general theory again is equivalent to the duality theory of integer linear optimization problems. Here the dual objects are not linear, but \emph{superadditive} (or subadditive) functionals, making the general form of this theory infinite-dimensional even though the original problem started out with only finitely many variables. These superadditive (or subadditive) functionals appear in integer linear optimization in various concrete forms, for example in the form of \emph{dual-feasible functions} \cite{alves-clautiaux-valerio-rietz-2016:dual-feasible-book}, \emph{superadditive lifting functions} \cite{louveaux-wolsey-2003:lifting-superadditivity-mir-survey}, and \emph{cut-generating functions}~\cite{conforti2013cut}. In the present paper, we describe some aspects of our software \cite{infinite-group-relaxation-code} for cut-generating functions in the classic 1-row Gomory--Johnson \cite{infinite,infinite2} model. 
In this theory, the main objects are the so-called \emph{minimal valid functions}, which are the $\bb Z$-periodic, subadditive functions $\pi\colon \bb R\to\bb R_+$ with $\pi(0)=0$, $\pi(f)=1$, that satisfy the \emph{symmetry condition} $\pi(x) + \pi(f - x) = 1$ for all $x\in\bb R$. (Here $f$ is a fixed number.) We refer the reader to the recent survey \cite{igp_survey,igp_survey_part_2}. Our software is a tool that enables mathematical exploration and research in this domain. It can also be used in an educational setting, where it enables hands-on teaching about modern cutting plane theory based on cut-generating functions. It removes the limitations of hand-written proofs, which would be dominated by tedious case analysis. The first version of our software \cite{infinite-group-relaxation-code} was written by the first author, C.~Y. Hong, during a Research Experience for Undergraduates in summer 2013. It was later revised and extended by M.~K\"oppe and again by Y.~Zhou. The latter added an electronic compendium \cite{zhou:extreme-notes} of extreme functions found in the literature, and added code that handles the case of discontinuous functions. Version 0.9 of our software was released in 2014 to accompany the survey \cite{igp_survey,igp_survey_part_2}; the software has received continuous updates by the second and third authors since.\footnote{Two further undergraduate students contributed to our software. P.~Xiao contributed some documentation and tests. M.~Sugiyama contributed additional functions to the compendium, and added code for superadditive lifting functions.} Our software is written in Python, making use of the convenient framework of the open-source computer algebra system SageMath \cite{sage}. It can be run on a local installation of SageMath, or online via \emph{SageMathCloud}. \section{Continuous and discontinuous piecewise linear $\bb Z$-periodic functions} The main objects of our code are the $\bb Z$-periodic functions~$\pi\colon \bb R\to\bb R$. 
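A concrete instance of a minimal valid function is the classic Gomory mixed-integer function, $\pi(x)=x/f$ for $0\le x\le f$ and $\pi(x)=(1-x)/(1-f)$ for $f\le x\le 1$, extended $\bb Z$-periodically. The following plain-Python sketch (ours, independent of the software) checks the minimality conditions stated above, with subadditivity tested only on a finite grid of rationals; for piecewise linear functions an exact finite test is possible, and that is what the software automates.

```python
from fractions import Fraction

f = Fraction(2, 5)

def pi(x):
    """Gomory mixed-integer function, extended Z-periodically."""
    x = x % 1
    return x / f if x <= f else (1 - x) / (1 - f)

# normalization: pi(0) = 0 and pi(f) = 1
assert pi(0) == 0 and pi(f) == 1

grid = [Fraction(k, 60) for k in range(60)]
# symmetry condition: pi(x) + pi(f - x) = 1 for all x
assert all(pi(x) + pi(f - x) == 1 for x in grid)
# subadditivity: pi(x) + pi(y) >= pi(x + y), checked on the grid
assert all(pi(x) + pi(y) >= pi(x + y) for x in grid for y in grid)
```

Exact rational arithmetic (`Fraction`) avoids the false negatives that floating-point comparison would produce at breakpoints.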
Our code is limited to the case of piecewise linear functions, which are allowed to be discontinuous; see the definition below. In the following, we connect to the systematic notation introduced in \cite[section 2.1]{basu-hildebrand-koeppe:equivariant}; see also \cite{igp_survey,igp_survey_part_2}. In our code, the periodicity of the functions is implicit; the functions are represented by their restriction to the interval $[0,1]$.\footnote{The functions are instances of the class \sage{FastPiecewise}, which extends an existing SageMath class for piecewise linear functions.} They can be constructed in various ways using Python functions named \sage{piecewise\_function\_from\_breakpoints\_and\_values} etc.; see the source code of the electronic compendium for examples. We also suppress the details of the internal representation; instead we explain the main ways in which the data of the function are accessed. \begin{description} \item[\sage{$\pi$.end\_points()}] is a list $0=x_0 < x_1 < \dots < x_{n-1} < x_n=1$ of possible breakpoints \begin{comment} \footnote{If the function~$\pi$ has been constructed with \sage{merge=True} (the default), then it is guaranteed that all end points $x_i$, with the possible exception of $0$ and $1$, are actual breakpoints of the $\bb Z$-periodic function~$\pi$.} \end{comment} of the function in $[0,1]$. In the notation from \cite{basu-hildebrand-koeppe:equivariant,igp_survey,igp_survey_part_2}, these endpoints are extended periodically as \begin{math} B = \{\, x_0 + t, x_1 + t, \dots, x_{n-1}+t\st t\in\bb Z\,\} \end{math}. Then the set of 0-dimensional faces is defined to be the collection of singletons, $\bigl\{\, \{ x \} \st x\in B\,\bigr\}$, and the set of one-dimensional faces to be the collection of closed intervals, \begin{math} \bigl\{\, [x_i+t, x_{i+1}+t] \st i=0, \dots, {n-1} \text{ and } t\in\bb Z \,\bigr\}. \end{math} Together, we obtain $\P = \P_{B}$, a locally finite polyhedral complex, periodic modulo~$\bb Z$. 
\item[\sage{$\pi$.values\_at\_end\_points()}] is a list of the function values $\pi(x_i)$, $i=0, \dots,n$. This list is most useful for continuous piecewise linear functions, as indicated by \sage{$\pi$.is\_continuous()}, in which case the function is defined on the intervals $[x_i, x_{i+1}]$ by linear interpolation. \item[\sage{$\pi$.limits\_at\_end\_points()}] provides data for the general, possibly discontinuous case in the form of a list \sage{limits} of 3-tuples, with \begin{align*} \sage{limits[$i$][0]} &= \pi(x_i) \\ \sage{limits[$i$][1]} &= \pi(x_i^+) = \lim_{x\to x_i, x>x_i} \pi(x)\\ \sage{limits[$i$][-1]} &= \pi(x_i^-) = \lim_{x\to x_i, x<x_i} \pi(x). \end{align*} The function is defined on the open intervals $(x_i, x_{i+1})$ by linear interpolation of the limit values $\pi(x_i^+)$, $\pi(x_{i+1}^-)$. \item[\sage{$\pi$($x$)} and \sage{$\pi$.limits($x$)}] evaluate the function at $x$ and provide the 3-tuple of its limits at $x$, respectively. \item[\sage{$\pi$.which\_function($x$)}] returns a linear function, denoted $\pi_I\colon\bb R\to\bb R$ in \cite{basu-hildebrand-koeppe:equivariant,igp_survey,igp_survey_part_2}, where $I$ is the smallest face of~$\P$ containing $x$, so $\pi(x) = \pi_I(x)$ for $x \in \relint(I)$. \end{description} Functions can be plotted using the standard SageMath function \sage{plot($\pi$)}, or using our function \sage{plot\_with\_colored\_slopes($\pi$)}, which assigns a different color to each different slope value that a linear piece takes.\footnote{See also our function \sage{number\_of\_slopes}. We refer the reader to \cite[section 2.4]{igp_survey} for a discussion of the number of slopes of extreme functions, and \cite{bcdsp:arbitrary-slopes} and \sagefunc{bcdsp_arbitrary_slope} for the latest developments in this direction.} Examples of such functions are shown in Figures \ref{fig:2d_diagram_with_cones} and \ref{fig:2d_diagrams_discontinuous_function}. 
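The access methods above suggest a compact data model. The following plain-Python stand-in (ours, not the actual \sage{FastPiecewise} class) stores the breakpoints together with the 3-tuples of limits, using the same indexing convention (\sage{[0]} value, \sage{[1]} right limit, \sage{[-1]} left limit), and evaluates a small discontinuous example by the interpolation rule just described.

```python
from fractions import Fraction as F

# Hypothetical example with breakpoints 0, 1/2, 1 and a jump at x = 1/2:
# on (0, 1/2) the function rises from 0 to 1, pi(1/2) = 1,
# on (1/2, 1) it falls from 1/2 to 0; pi(0) = pi(1) = 0.
end_points = [F(0), F(1, 2), F(1)]
limits = [(F(0), F(0), F(0)),        # at 0: (value, right limit, left limit)
          (F(1), F(1, 2), F(1)),     # jump at 1/2
          (F(0), F(0), F(0))]        # at 1, same as 0 by periodicity

def value(x):
    """Breakpoint values at the x_i; linear interpolation of the
    one-sided limits on the open intervals (x_i, x_{i+1})."""
    x = x % 1                        # Z-periodicity
    for i in range(len(end_points) - 1):
        xi, xj = end_points[i], end_points[i + 1]
        if x == xi:
            return limits[i][0]
        if xi < x < xj:
            lam = (x - xi) / (xj - xi)
            return (1 - lam) * limits[i][1] + lam * limits[i + 1][-1]

assert value(F(1, 4)) == F(1, 2)     # interior of (0, 1/2)
assert value(F(1, 2)) == 1           # breakpoint value, not a limit
assert value(F(3, 4)) == F(1, 4)     # interior of (1/2, 1)
assert value(F(5, 4)) == F(1, 2)     # periodicity
```

The continuous case is recovered when each 3-tuple has three equal entries.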
\section{The diagrams of the decorated 2-dimensional polyhedral complex $\Delta\P$} We now describe certain 2-dimensional diagrams which record the subadditivity and additivity properties of a given function. These diagrams, in the continuous case, have appeared extensively in \cite{igp_survey,igp_survey_part_2,zhou:extreme-notes}. An example for the discontinuous case appeared in \cite{zhou:extreme-notes}. We have engineered these diagrams from earlier forms that can be found in \cite{tspace} (for the discussion of the \sage{merit\_index}) and in \cite{basu-hildebrand-koeppe:equivariant}, to become power tools for the modern cutgeneratingfunctionologist. Not only is the minimality of a given function immediately apparent on the diagram, but also the extremality proof for a given class of piecewise minimal valid functions follows a standard pattern that draws from these diagrams. See \cite[prelude]{igp_survey_part_2} and \cite[sections 2 and 4]{zhou:extreme-notes} for examples of such proofs. \subsection{The polyhedral complex and its faces} Following \cite{basu-hildebrand-koeppe:equivariant,igp_survey,igp_survey_part_2}, we introduce the function \[\Delta\pi \colon \bb R \times \bb R \to \bb R,\quad \Delta\pi(x,y) = \pi(x)+\pi(y)-\pi(x+y),\] which measures the slack in the subadditivity condition.\footnote{It is available in the code as \sage{delta\_pi($\pi$, $x$, $y$)}; in \cite{infinite}, it was called $\nabla(x,y)$.} Thus, if $\Delta\pi(x,y)<0$, subadditivity is violated at $(x, y)$; if $\Delta\pi(x,y)=0$, additivity holds at $(x,y)$; and if $\Delta\pi(x,y)>0$, we have strict subadditivity at $(x,y)$. The piecewise linearity of $\pi(x)$ induces piecewise linearity of $\Delta\pi(x,y)$. To express the domains of linearity of $\Delta\pi(x,y)$, and thus domains of additivity and strict subadditivity, we introduce the two-dimensional polyhedral complex $\Delta\P$. The faces $F$ of the complex are defined as follows. 
Let $I, J, K \in \mathcal{P}$, so each of $I, J, K$ is either a breakpoint of $\pi$ or a closed interval delimited by two consecutive breakpoints. Then $$ F = F(I,J,K) = \setcond{\,(x,y) \in \bb R \times \bb R}{x \in I,\, y \in J,\, x + y \in K\,}.$$ In our code, a face is represented by an instance of the class \sage{Face}. It is constructed from $I, J, K$ and is represented by the list of vertices of $F$ and its projections $I'=p_1(F)$, $J'=p_2(F)$, $K'= p_3(F)$, where $p_1, p_2, p_3 \colon \bb R \times \bb R \to \bb R$ are defined as $p_1(x,y)=x$, $p_2(x,y)=y$, $p_3(x,y) = x+y$. The vertices $\verts(F)$ are obtained by first listing the basic solutions $(x,y)$ where $x$, $y$, and $x+y$ are fixed to endpoints of $I$, $J$, and $K$, respectively, and then filtering the feasible solutions. The three projections are then computed from the list of vertices. \begin{comment} \footnote{We do not use the formulas for the projections given by \cite[Proposition 3.3]{bhk-IPCOext}, \cite[equation (3.11)]{igp_survey}.} \end{comment} Due to the $\bb Z$-periodicity of $\pi$, we can represent a face as a subset of $[0,1]\times[0,1]$. See \autoref{fig:construct_a_face} for an example. Because of the importance of the projection $p_3(x,y)=x + y$, it is convenient to imagine a third, $(x+y)$-axis in addition to the $x$-axis and the $y$-axis, which traces the bottom border for $0 \leq x+y \leq 1$ and then the right border for $1 \leq x+y \leq 2$. To make room for this new axis, the $x$-axis should be drawn on the top border of the diagram. \begin{figure}[tp] \centering \includegraphics[width=.5\linewidth]{graphics-for-algo-paper/construct_a_face.png} \caption{An example of a face $F = F(I, J, K)$ of the 2-dimensional polyhedral complex $\Delta\P$, set up by \sage{F = Face([[0.2, 0.3], [0.75, 0.85], [1, 1.2]])}. 
It has vertices (\emph{blue}) $(0.2, 0.85), (0.3, 0.75), (0.3, 0.85), (0.2, 0.8), (0.25, 0.75)$, whereas the other basic solutions (\emph{red}) $(0.2, 0.75), (0.2, 1), (0.3, 0.9), (0.35, 0.85), (0.45, 0.75)$ are filtered out because they are infeasible. The face $F$ has projections (\emph{gray shadows}) $I' = p_1(F) = [0.2, 0.3]$ (\emph{top border}), $J' = p_2(F) = [0.75, 0.85]$ (\emph{left border}), and $K' = p_3(F) = [1, 1.15]$ (\emph{right border}). Note that $K'\subsetneq K$. } \label{fig:construct_a_face} \end{figure} \subsection{\sage{plot\_2d\_diagram\_with\_cones}} We now explain the first version of the 2-dimensional diagrams, plotted by the function \sage{plot\_2d\_diagram\_with\_cones($\pi$)}; see \autoref{fig:2d_diagram_with_cones}. \begin{figure}[t] \centering \begin{minipage}{.49\textwidth} \centering \includegraphics[width=.8\linewidth]{graphics-for-algo-paper/2d_diagram_with_cones_continuous_deco.png} \end{minipage} \begin{minipage}{.49\textwidth} \centering \includegraphics[width=.8\linewidth]{graphics-for-algo-paper/2d_diagram_with_cones_discontinuous.png} \end{minipage} \caption{Two diagrams of functions and their polyhedral complexes $\Delta\mathcal{P}$ with colored cones at $\verts(\Delta\mathcal{P})$, as plotted by the command \sage{plot\_2d\_diagram\_with\_cones(h)}. \textit{Left}, continuous function \sage{h = not\_minimal\_2()}. \textit{Right}, a random discontinuous function generated by \sage{h = random\_piecewise\_function(xgrid=5, ygrid=5, continuous\_proba=1/3, symmetry=True)}.} \label{fig:2d_diagram_with_cones} \end{figure} At the border of these diagrams, the function $\pi$ is shown twice (\emph{blue}), along the $x$-axis (\emph{top border}) and along the $y$-axis (\emph{left border}). The solid grid lines in the diagrams are determined by the breakpoints of $\pi$: vertical, horizontal and diagonal grid lines correspond to values where $x$, $y$ and $x+y$ are breakpoints of $\pi$, respectively. 
The vertices of the complex $\Delta\mathcal{P}$ are the intersections of these grid lines. \textbf{In the continuous case}, we indicate the sign of $\Delta\pi(x,y)$ for all vertices by colored dots on the diagram: \emph{red} indicates $\Delta\pi(x,y)<0$ (subadditivity is violated); \emph{green} indicates $\Delta\pi(x,y)=0$ (additivity holds). \begin{example} In \autoref{fig:2d_diagram_with_cones} (left), the vertex $(x, y)=(\frac{1}{5}, \frac{3}{5})$ is marked green, since \begin{align*} \Delta\pi(\tfrac{1}{5},\tfrac{3}{5}) & = \pi(\tfrac{1}{5})+\pi(\tfrac{3}{5})-\pi(\tfrac{4}{5}) = \tfrac{1}{5} +\tfrac{4}{5} -1 = 0. \end{align*} \end{example} \textbf{In the discontinuous case}, besides the subadditivity slack $\Delta\pi(x,y)$ at a vertex $(x, y)$, one also needs to study the limit value of $\Delta\pi$ at the vertex $(x,y)$ approaching from the interior of a face $F \in \Delta\mathcal{P}$ containing the vertex $(x,y)$. This limit value is defined by \[\Delta\pi_F(x,y) = \lim_{\substack{(u,v) \to (x,y)\\ (u,v) \in \relint(F)}} \Delta\pi(u,v), \quad \text{where } F \in \Delta\mathcal{P} \text{ such that } (x, y) \in F.\] We indicate the sign of $\Delta\pi_F(x,y)$ by a colored cone inside $F$ pointed at the vertex $(x, y)$ on the diagram. There could be up to $12$ such cones (including rays for one-dimensional $F$) around a vertex $(x, y)$. 
\begin{example} In \autoref{fig:2d_diagram_with_cones} (right), the lower right corner $(x, y)=(\frac{2}{5}, \frac{4}{5})$ of the face $F = F(I, J, K)$ with $I =[\frac{1}{5}, \frac{2}{5}]$, $J=[\frac{4}{5}, 1]$, $K=[1, \frac{6}{5}]$ is green, since \begin{align*} \Delta\pi_F(x,y) & = \lim_{\substack{(u,v) \to (\frac{2}{5},\frac{4}{5})\\ (u,v) \in \relint(F)}} \Delta\pi(u,v) \\ & = \lim_{u\to\frac{2}{5},\; u < \frac{2}{5}}\pi(u)+\lim_{v\to\frac{4}{5}, \; v > \frac{4}{5}}\pi(v)-\lim_{w\to\frac{6}{5},\; w <\frac{6}{5}}\pi(w) \\ & = \pi(\tfrac{2}{5}^-) +\pi(\tfrac{4}{5}^+) -\pi(\tfrac{1}{5}^-) \quad \text{(as } \pi(\tfrac{6}{5}^-) =\pi(\tfrac{1}{5}^-) \text{ by periodicity)} \\ & = 0 + 1 - 1 = 0. \end{align*} The horizontal ray to the left of the same vertex $(x, y)=(\frac{2}{5}, \frac{4}{5})$ is red, because approaching from the one-dimensional face $F' = F(I', J', K')$ that contains $(x,y)$, with $I' =[\frac{1}{5}, \frac{2}{5}]$, $J'=\{\frac{4}{5}\}$, $K'=[1, \frac{6}{5}]$, we have the limit value \[\Delta\pi_{F'}(x,y) = \lim_{\substack{(u,v) \to (\frac{2}{5},\frac{4}{5})\\ (u,v) \in \relint(F')}} \hspace{-1.5em}\Delta\pi(u,v) = \lim_{\substack{u\to\frac{2}{5} \\ u < \frac{2}{5}}}\pi(u)+\pi(\tfrac{4}{5})-\lim_{\substack{w\to\frac{6}{5}\\ w <\frac{6}{5}}}\pi(w) = 0 + \tfrac{3}{5} - 1 < 0.\] \end{example} \subsection{\sage{plot\_2d\_diagram} and additive faces} Now assume that $\pi$ is a subadditive function. Then there are no red dots or cones on the above diagram of the complex $\Delta\mathcal{P}$. See \autoref{fig:2d_diagrams_discontinuous_function}. \textbf{For a continuous subadditive function $\pi$}, we say that a face $F \in \Delta\mathcal{P}$ is \emph{additive} if $\Delta\pi =0$ over all $F$. Note that $\Delta\pi$ is affine linear over $F$, and so the face $F$ is additive if and only if $\Delta\pi(x, y) = 0$ for all $(x, y) \in \verts(F)$. It is clear that any subface $E$ of an additive face $F$ ($E \subseteq F$, $E \in \Delta\mathcal{P}$) is still additive. 
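Whether a face is additive is decided by the values (or limits) of $\Delta\pi$ at $\verts(F)$, and these vertices come from the basic-solution enumeration described earlier. A plain-Python sketch (ours, independent of the class \sage{Face}) reproduces the vertex computation for the example face of \autoref{fig:construct_a_face}:

```python
from fractions import Fraction as F
from itertools import product

def face_vertices(I, J, K):
    """Vertices of F(I,J,K) = {(x,y) : x in I, y in J, x+y in K}: list the
    basic solutions (two of x, y, x+y fixed at interval endpoints), then
    keep only the feasible ones."""
    candidates = set(product(I, J))                       # x and y at endpoints
    candidates |= {(x, s - x) for x, s in product(I, K)}  # x and x+y at endpoints
    candidates |= {(s - y, y) for y, s in product(J, K)}  # y and x+y at endpoints
    return sorted((x, y) for (x, y) in candidates
                  if I[0] <= x <= I[1] and J[0] <= y <= J[1]
                  and K[0] <= x + y <= K[1])

# the example face: I = [0.2, 0.3], J = [0.75, 0.85], K = [1, 1.2]
I, J, K = [F(1, 5), F(3, 10)], [F(3, 4), F(17, 20)], [F(1), F(6, 5)]
verts = face_vertices(I, J, K)
assert len(verts) == 5                   # five feasible basic solutions
assert (F(1, 5), F(4, 5)) in verts       # the vertex (0.2, 0.8)

# projection p3(F) computed from the vertices: [1, 1.15], strictly inside K
p3 = (min(x + y for x, y in verts), max(x + y for x, y in verts))
assert p3 == (F(1), F(23, 20))
```

The three gray shadows in the figure are the intervals spanned by the coordinates $x$, $y$ and $x+y$ of these vertices; in particular $p_3(F)=[1,1.15]\subsetneq K$.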
Thus the additivity domain of~$\pi$ can be represented by the list of inclusion-maximal additive faces of $\Delta\mathcal{P}$; see \cite[Lemma~3.12]{igp_survey}.\footnote{This list is computed by \sage{generate\_maximal\_additive\_faces($\pi$)}.} \begin{figure}[tp] \centering \begin{minipage}{.49\textwidth} \centering \includegraphics[width=.8\linewidth]{graphics-for-algo-paper/2d_with_cones_discontinuous.png} \end{minipage} \begin{minipage}{.49\textwidth} \centering \includegraphics[width=.8\linewidth]{graphics-for-algo-paper/2d_with_faces_discontinuous.png} \end{minipage} \caption{Diagrams of $\Delta\mathcal{P}$ of a discontinuous function \sage{h = hildebrand\underscore 2\underscore sided\underscore discont\underscore 2\underscore slope\underscore 1()}, with (\textit{left}) additive limiting cones as plotted by the command \sage{plot\_2d\_diagram\_with\_cones(h)}; (\textit{right}) additive faces as plotted by the command \sage{plot\_2d\_diagram(h)}.} \label{fig:2d_diagrams_discontinuous_function} \end{figure} \textbf{For a discontinuous subadditive function $\pi$}, we say that a face $F \in \Delta\mathcal{P}$ is \emph{additive} if $F$ is contained in a face $F' \in \Delta\mathcal{P}$ such that $\Delta\pi_{F'}(x,y) =0$ for any $(x,y) \in F$.\footnote{Summarizing the detailed additivity and additivity-in-the-limit situation of the function using the notion of additive faces is justified by \cite[Lemmas 2.7 and 4.5]{basu-hildebrand-koeppe:equivariant} and their generalizations. } Since $\Delta\pi$ is affine linear in the relative interiors of each face of $\Delta\mathcal{P}$, the last condition is equivalent to $\Delta\pi_{F'}(x,y) =0$ for any $(x,y) \in \verts(F)$. Depending on the dimension of $F$, we do the following. \begin{enumerate} \item Let $F$ be a two-dimensional face of $\Delta\mathcal{P}$. If $\Delta\pi_{F}(x,y) =0$ for any $(x,y) \in \verts(F)$, then $F$ is additive. 
Visually on the 2d-diagram with cones, each vertex of $F$ has a green cone sitting inside $F$. \item Let $F$ be a one-dimensional face, i.e., an edge of $\Delta\mathcal{P}$. Let $(x_1, y_1), (x_2, y_2)$ be its vertices. Besides $F$ itself, there are two other faces $F_1, F_2 \in \Delta\mathcal{P}$ that contain $F$. If $\Delta\pi_{F'}(x_1,y_1)=\Delta\pi_{F'}(x_2,y_2) =0$ for $F' = F$, $F_1$, or $F_2$, then the edge $F$ is additive. \item Let $F$ be a zero-dimensional face of $\Delta\mathcal{P}$, $F = \{(x, y)\}$. If there is a face $F' \in \Delta\mathcal{P}$ such that $(x,y) \in F'$ and $\Delta\pi_{F'}(x,y)=0$, then $F$ is additive. Visually on the 2d-diagram with cones, the vertex $(x,y)$ is green or there is a green cone pointing at $(x,y)$. \end{enumerate} On the diagram \autoref{fig:2d_diagrams_discontinuous_function} (right), the additive faces are shaded in green. The projections $p_1(F)$, $p_2(F)$, and $p_3(F)$ of a two-dimensional additive face $F$ are shown as gray shadows on the $x$-, $y$- and $(x+y)$-axes of the diagram, respectively. \section{Additional functionality} \begin{description} \item[\sage{minimality\_test($\pi$)}] implements a fully automatic test whether a given function is a minimal valid function, using the information that the described 2-dimensional diagrams visualize. The algorithm is equivalent to the one described, in the setting of discontinuous pseudo-periodic superadditive functions, in Richard, Li, and Miller \cite[Theorem 22]{Richard-Li-Miller-2009:Approximate-Liftings}. \item[\sage{extremality\_test($\pi$)}] implements a grid-free generalization of the automatic extremality test from \cite{basu-hildebrand-koeppe:equivariant}, which is suitable also for piecewise linear functions with rational breakpoints that have huge denominators. 
Its support for functions with algebraic irrational breakpoints such as \sagefunc{bhk_irrational} \cite[section~5]{basu-hildebrand-koeppe:equivariant} is experimental and will be described in a forthcoming paper. \item[\sage{generate\_covered\_intervals($\pi$)}] computes connected components of covered (affine imposing \cite{basu-hildebrand-koeppe:equivariant}) intervals. This is an ingredient in the extremality test. \item[\sage{extreme\_functions}] is the name of a Python module that gives access to the electronic compendium of extreme functions; see \cite{zhou:extreme-notes} and \cite[Tables 1--4]{igp_survey}. \item[\sage{procedures}] provides transformations of extreme functions; see \cite[Table 5]{igp_survey}. \item[\sage{random\_piecewise\_function()}] generates a random piecewise linear function with prescribed properties, to enable experimentation and exploration. \item[\textsf{demo.sage}] demonstrates further functionality and the use of the help system. \end{description} \providecommand\ISBN{ISBN } \bibliographystyle{../amsabbrvurl}
\section{Introduction} Let $\mathcal{H}$ consist of all complex-valued harmonic mappings $f=h+\bar{g}$ in the unit disk $\mathbb{D}:=\left\lbrace z\in\mathbb{C}, |z|<1\right\rbrace$, where $h$ and $g$ are analytic mappings. Let $\mathcal{S}_H^0$ be the sub-class of $\mathcal{H}$ consisting of all mappings $f$ in the class $\mathcal{H}$ that are univalent, sense-preserving and normalized by the conditions $f(0)=f_{\bar{z}}(0)=f_z(0)-1=0$. Let $\mathcal{K}_H^0$ and $\mathcal{S}_H^{*0}$ denote the sub-classes of $\mathcal{S}_H^0$ consisting of mappings which map $\mathbb{D}$ onto convex and starlike domains respectively. The sub-classes $\mathcal{S}$, $\mathcal{K}$ and $\mathcal{S}^*$ of analytic mappings consisting of univalent, convex and starlike mappings are respectively sub-classes of $\mathcal{S}_H^0$, $\mathcal{K}_H^0$ and $\mathcal{S}_H^{*0}$. Clunie and Sheil-Small \cite{1} constructed two important mappings, the harmonic right-half plane mapping $L \in \mathcal{K}_H^0$ and the harmonic Koebe mapping $K\in{\mathcal{S}_H^0}$. The mappings $K$ and $L$ are expected to play the role of extremal mappings in the classes $\mathcal{S}_H^0$ and $\mathcal{K}_H^0$, respectively, just as the analytic Koebe mapping and the analytic right-half plane mapping are extremal in the classes $\mathcal{S}^*$ and $\mathcal{K}$, respectively. These mappings $K=H+\overline{G}$ and $L=M+\overline{N}$ are defined in $\mathbb{D}$ by \[H(z)=\frac{z-z^2/2+z^3/6}{(1-z)^3},\quad G(z)=\frac{z^2/2+z^3/6}{(1-z)^3}\] and \[M(z)=\frac{z-z^2/2}{(1-z)^2}, \quad N(z)=\frac{-z^2/2}{(1-z)^2}.\] A domain $D$ is said to be \textit{convex in direction ${\theta}$} $(0\leq\theta<2\pi)$ if every line parallel to the line joining $0$ and $\emph{e}^{\textit{i}\theta}$ lies completely inside or outside the domain $D$. If $\theta=0$ (or $\pi/2$), such a domain $D$ is called convex in the direction of the real (or imaginary) axis. 
In this paper we study the directional convexity of the convolution of these and some other mappings with the $n$-starlike mappings introduced by S\~{a}l\~{a}gean \cite{2}, and of their partial sums. Let $\mathcal{A}$ be the class of all analytic mappings $f:\mathbb{D}\longrightarrow\mathbb{C}$ with $f(0)=0$ and $f'(0) = 1$. The function $f\in\mathcal{A}$ has the Taylor series expansion $f(z)=z+\sum_{k=2}^{\infty}a_kz^k$. For the function $f \in \mathcal{A}$, $n\geq0$, S\~{a}l\~{a}gean \cite{2} defined the differential operator $\mathcal{D}^n:\mathcal{A} \longrightarrow \mathcal{A}$ by \[\mathcal{D}^nf(z)=z+\sum_{k=2}^{\infty}k^na_kz^k.\] By using this operator, S\~{a}l\~{a}gean introduced the class of $n$-starlike mappings of order $\alpha$ $(0\leq{\alpha<{1}})$ defined by \[\mathcal{S}_n(\alpha):= \bigg\{f\in \mathcal{A}: \RE\frac{\mathcal{D}^{n+1}f(z)}{\mathcal{D}^{n}f(z)}>\alpha,\quad z\in \mathbb{D} \bigg\}.\] Equivalently, a function $f\in\mathcal{A}$ is $n$-starlike of order $\alpha$ if and only if the function $\mathcal{D}^nf$ is starlike of order $\alpha$. Clearly $\mathcal{S}^*(\alpha)= \mathcal{S}_0(\alpha)$ and $\mathcal{K}(\alpha)= \mathcal{S}_1(\alpha)$ are respectively the classes of starlike and convex mappings of order $\alpha$ introduced by Robertson \cite{21}. Denote $\mathcal{S}_n(0)$ by $\mathcal{S}_n$; the mappings in this class are called $n$-starlike mappings. Also, $\mathcal{S}^*= \mathcal{S}_0$ and $\mathcal{K}= \mathcal{S}_1$. 
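Numerically, $\mathcal{D}^n$ simply rescales Taylor coefficients, which makes the classes $\mathcal{S}_n(\alpha)$ easy to experiment with. The following plain-Python sketch (ours, with a made-up coefficient sequence) applies the operator and the sufficient coefficient test of Theorem \ref{theomd2} below.

```python
from fractions import Fraction as F

def salagean(coeffs, n):
    """Salagean operator on Taylor coefficients: a_k -> k^n a_k
    (coeffs: dict {k: a_k}, with coeffs[1] == 1)."""
    return {k: F(k) ** n * a for k, a in coeffs.items()}

def coeff_test(coeffs, m, alpha):
    """Sufficient condition for f to be (m-1)-starlike of order alpha:
    sum_{n>=2} n^(m-1) (n - alpha) |a_n| <= 1 - alpha."""
    return sum(F(k) ** (m - 1) * (k - alpha) * abs(a)
               for k, a in coeffs.items() if k >= 2) <= 1 - alpha

# made-up example f(z) = z + z^2/8: then D f(z) = z f'(z) = z + z^2/4
f_coeffs = {1: F(1), 2: F(1, 8)}
assert salagean(f_coeffs, 1) == {1: F(1), 2: F(1, 4)}

# the test with m = 2, alpha = 0 certifies that f is 1-starlike, i.e. convex
assert coeff_test(f_coeffs, 2, 0)
```

Such finite-coefficient checks only certify truncated polynomials, but they are a convenient way to generate examples satisfying the hypotheses of the results below.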
Suppose $\phi$ and $\psi$ are analytic mappings on $\mathbb{D}$ with $\phi(z)=\sum_{n=1}^\infty a_nz^n$ and $\psi(z)=\sum_{n=1}^\infty b_nz^n$; their convolution $\phi *\psi$ is defined by $(\phi *\psi)(z):=\sum_{n=1}^\infty a_n b_nz^n$. The convolution of harmonic mappings $f=h+\bar{g}$ and $F=H+\overline{G}$ is defined by $f*F:= h*H+ \overline{g*G}$. Also, the convolution $\tilde{*}$ of a harmonic mapping $f=h+\bar{g}$ with an analytic mapping $\phi$ is defined by $f\tilde{*}\phi:=h*\phi+\overline{g*\phi}$. It is well known that the convolution of two harmonic convex mappings is not necessarily convex or even univalent. In \cite{6}, Dorff studied the directional convexity of harmonic mappings and proved that the convolution of two right half-plane mappings is univalent and convex in the direction of the real axis provided the convolution is locally univalent. Later, Dorff \textit{et al.} \cite{7} extended such results to slanted half-plane and strip mappings. Other recent related work in this direction can be found in \cite{1,2,5,9,10,11,13,14,15,16,17,18,19,20,25}. For analytic convex mappings $\phi$, the convolution $K\tilde{*}\phi$ is not necessarily univalent. However, Nagpal and Ravichandran \cite{8} showed that $K\tilde{*}\phi\in\mathcal{S}_H^0$ and is convex in the direction of the real axis if $\phi$ is a $2$-starlike mapping. In Section 2, we prove that the convolution of certain harmonic mappings with $n$-starlike mappings is univalent and convex in a particular direction. In particular, for $0\leq\alpha<\pi$, we prove that the convolution of an analytic convex mapping with the slanted half-plane mapping is univalent and convex in the direction of $\pi/2-\alpha$. Lastly, in Section 3, we discuss the partial sums of $n$-starlike mappings and prove that all the partial sums of $n$-starlike mappings with $n\geq 4$ are $(n-4)$-starlike. 
By using this, we prove that all the partial sums of the convolution of $6$-starlike mappings with the mappings $L$ and $K$ are univalent and convex in the direction of the real axis. \section{Convolution of some harmonic mappings with $n$-starlike mappings} We first give some convolution properties of $n$-starlike mappings, which will be useful throughout the paper. From the definition of $\mathcal{S}_n(\alpha)$, one can easily see that \begin{equation}\label{eq0} f\in\mathcal{S}_n(\alpha)\Leftrightarrow\mathcal{D}^{n-m}f\in\mathcal{S}_m(\alpha). \end{equation} Using this relation, we get the following result regarding the convolution of mappings in the class $\mathcal{S}_n$. \begin{lemma}\label{theom1a} Let $n + m\geq{1}$. If the function $f \in \mathcal{S}_n$ and the function $g\in \mathcal{S}_m$, then the convolution $f*g\in \mathcal{S}_{n+m-1}.$ \end{lemma} \begin{proof} Assume, without loss of generality, that $n\geq1$. Since the function $f \in \mathcal{S}_n$ and the function $g\in \mathcal{S}_m$, by \eqref{eq0}, the function $\mathcal{D}^{n-1}f\in \mathcal{K}$ and the function $\mathcal{D}^{m}g\in \mathcal{S}^*$. Therefore, from \cite{22}, we have\[\mathcal{D}^{n+m-1}(f*g)=\mathcal{D}^{n-1}f*\mathcal{D}^{m}g\in\mathcal{S}^*.\] Hence, by \eqref{eq0}, it follows that the convolution $f*g\in \mathcal{S}_{n+m-1}$. \end{proof} \begin{theorem}\label{theomd}\cite{24} If the function $f(z)=z+\sum_{n=2}^\infty a_n z^n\in \mathcal{A}$ satisfies the inequality $\sum_{n=2}^\infty (n-\alpha)|a_n|\leq1-\alpha,$ then the function $f$ is starlike of order $\alpha.$ \end{theorem} By using \eqref{eq0} and Theorem \ref{theomd}, we get the following result. \begin{theorem}\label{theomd2}\cite{24} Let $0\leq\alpha<1$. 
If the function $f(z)=z+\sum_{n=2}^\infty a_n z^n$ in $\mathbb{D}$ satisfies the inequality $\sum_{n=2}^\infty n^{m-1}(n-\alpha)|a_n|\leq1-\alpha,$ then the function $f$ is $(m-1)$-starlike of order $\alpha.$ \end{theorem} The harmonic mappings $f$ considered in this paper are assumed to be normalized by $f(0)=f_{z}(0)-1=f_{\bar{z}}(0)=0$, unless otherwise specified. The following result due to Clunie and Sheil-Small is used for constructing univalent harmonic mappings convex in a given direction. \begin{lemma}\label{lem130}\cite{3} A locally univalent harmonic mapping $f=h+\overline{g}$ on $\mathbb{D}$ is univalent and maps $\mathbb{D}$ onto a domain convex in the direction of $\phi$ if and only if the analytic mapping $h-\emph{e}^{2\textit{i}\phi}g$ is univalent and maps $\mathbb{D}$ onto a domain convex in the direction of $\phi$. \end{lemma} \begin{lemma} \label{lem 13a} Let the function $ f= h+ \bar {g} $ be harmonic and the function $\phi$ be analytic in $\mathbb{D}$. If the function $ (h-e^{-2\textit{i}\beta}g)*\phi$ is convex and, for some real number $\gamma$, \begin{equation}\label{eq11} \RE\frac{(\phi*h)'(z)}{\left((\phi*h)'-e^{-2\textit{i}\gamma}(\phi*g)'\right)(z)}>\frac{1}{2} \quad \text{for } z \in \mathbb{D}, \end{equation} then the convolution $f\tilde{\ast}\phi \in \mathcal{S}_H^0$ and is convex in the direction of $-\beta$. \end{lemma} \begin{proof} Since the function $(h-e^{-2\textit{i}\beta}g)*\phi$ is convex and hence convex in the direction of $-\beta$, in view of Lemma \ref{lem130}, it is enough to prove that the mapping $f\tilde{\ast}\phi$ is locally univalent. Clearly \eqref{eq11} shows that $(h*\phi)'(z)\neq0$ for $z\in \mathbb{D}$. 
Therefore, using \eqref{eq11}, we see that the dilatation $w_{e^{\textit{i}\gamma}f\tilde{\ast}\phi}=(e^{-\textit{i}\gamma}g*\phi)'/(e^{\textit{i}\gamma}h*\phi)'$ of $e^{\textit{i}\gamma}f\tilde{\ast}\phi$ satisfies \begin{align} \RE\frac{1+w_{e^{\textit{i}\gamma}f\tilde{\ast}\phi}(z)}{1-w_{e^{\textit{i}\gamma}f\tilde{\ast}\phi}(z)}&= \RE\frac{(e^{\textit{i}\gamma}h*\phi)'(z)+(e^{-\textit{i}\gamma}g*\phi)'(z)}{(e^{\textit{i}\gamma}h*\phi)'(z)-(e^{-\textit{i}\gamma}g*\phi)'(z)}\notag\\&=2\RE\frac{(\phi*h)'(z)}{\left((\phi*h)'-e^{-2\textit{i}\gamma}(\phi*g)'\right)(z)}-1>0, \quad z\in \mathbb{D}.\label{eq12} \end{align} This shows that $|w_{e^{\textit{i}\gamma}f\tilde{\ast}\phi}(z)|<1$ for $z\in\mathbb{D}$, or equivalently $|w_{f\tilde{\ast}\phi}(z)|<1$ for $z\in\mathbb{D}$, where $w_{f\tilde{\ast}\phi}=(g*\phi)'/(h*\phi)'$ is the dilatation of the function $f\tilde{\ast}\phi$. The result now follows by Lewy's theorem. \end{proof} \begin{theorem}\label{theom15a} Let the harmonic function $ f= h+ \bar {g} $ in $\mathbb{D}$ satisfy $h(z)- e^{-2\textit{i}\gamma}g(z)=z$ for some real number $\gamma$. If the function $h*\phi\in\mathcal{S}_2$ and the function $(h-e^{-2\textit{i}\beta}g)*\phi\in\mathcal{K}$ for some analytic function $\phi$, then the convolution $f\tilde{*}\phi\in\mathcal{S}_H^0$ and is convex in the direction of $-\beta$. \end{theorem} \begin{proof} Since the function $h*\phi\in\mathcal{S}_2$, we have $z(h*\phi)'\in\mathcal{K}$. Hence, from \cite[Corollary 1, p.251]{23}, we get \[\RE\frac{(\phi*h)'(z)}{\left((\phi*h)'-e^{-2\textit{i}\gamma}(\phi*g)'\right)(z)}=\RE(h*\phi)'(z)>\frac{1}{2} \quad\text{for }z\in\mathbb{D}.\] Also, the function $(h-e^{-2\textit{i}\beta}g)*\phi$ is convex. The result now follows from Lemma \ref{lem 13a}. 
\end{proof} \begin{corollary}\label{corl15b} If the function $\phi(z)=z+\sum_{n=2}^\infty a_nz^n\in\mathcal{S}_2$ satisfies the inequality $\sum_{n=2}^\infty n^2|a_n|\leq 1/\sqrt{2(1-\cos2\theta)}$ for some $\theta \in [0,\pi/2]$, then, for any real number $\gamma$, the function $\phi+\overline{e^{2\textit{i}\gamma}(\phi-z)}\in\mathcal{S}_H^0$ and is convex in the direction $-\beta$ for all $\beta$ satisfying $|\beta+\gamma|\leq\theta$. \end{corollary} \begin{proof} The harmonic mapping $f=h+\bar g$, with $h(z)=z/(1-z)$ and $g(z)=e^{2\textit{i}\gamma}(z/(1-z)-z)$, satisfies $h(z)-e^{-2\textit{i}\gamma}g(z)=z$. Also, the function $h*\phi=\phi\in\mathcal{S}_2.$ Furthermore, we see that \begin{align*} (h(z)- e^{-2\textit{i}\beta}g(z))*\phi(z)&=\phi(z)-e^{-2\textit{i}(\beta+\gamma)}(\phi(z)-z)\\&=z+\sum_{n=2}^\infty(1-e^{-2\textit{i}(\beta+\gamma)})a_nz^n. \end{align*} Now, for $|\beta+\gamma|\leq\theta$, we have \begin{align*} \sum_{n=2}^\infty n^2|(1-e^{-2\textit{i}(\beta+\gamma)})a_n|&=|1-e^{-2\textit{i}(\beta+\gamma)}|\sum_{n=2}^\infty n^2|a_n|\\&=\sqrt{2(1-\cos2(\beta+\gamma))}\sum_{n=2}^\infty n^2|a_n|\\&\leq\sqrt{2(1-\cos2(\beta+\gamma))}\frac{1}{\sqrt{2(1-\cos2\theta)}}\leq1. \end{align*} Therefore, by Theorem \ref{theomd2}, the function $(h- e^{-2\textit{i}\beta}g)*\phi$ is convex for every $\beta$ such that $|\beta+\gamma|\leq\theta$. Hence, by Theorem \ref{theom15a}, the function $f\tilde{*}\phi=\phi+\overline{e^{2\textit{i}\gamma}(\phi-z)}\in\mathcal{S}_H^0$ and is convex in every direction $-\beta$ satisfying $|\beta+\gamma|\leq\theta$. 
\end{proof} \remark\label{remaka1} For $\theta=\pi/2$ in Corollary \ref{corl15b}, we see that the function $\phi+\overline{e^{2\textit{i}\gamma}(\phi-z)}\in\mathcal{K}_H^0$ if the function $\phi=z+\sum_{n=2}^\infty a_nz^n\in\mathcal{S}_2$ satisfies the inequality $\sum_{n=2}^\infty n^2|a_n|\leq 1/2.$ \remark\label{remaka2} If the function $\phi=z+\sum_{n=2}^\infty a_nz^n$ satisfies the inequality $\sum_{n=2}^\infty n^3|a_n|\leq 1$, then $\sum_{n=2}^\infty n^2|a_n|\leq 1/2$ and, by Theorem \ref{theomd2}, the function $\phi \in \mathcal{S}_2$. Therefore, Remark \ref{remaka1} shows that $\phi+\overline{e^{2\textit{i}\gamma}(\phi-z)}\in\mathcal{K}_H^0$. \begin{theorem}\label{theom16a} Let the function $\phi\in\mathcal{S}_2$ and let the function $f= h+ \bar {g} $ be a harmonic mapping in $\mathbb{D}$ satisfying $h(z)- e^{-2\textit{i}\gamma}g(z)=h(z)*\log(1/(1-z))$ for some real number $\gamma$, where the function $h\in\mathcal{S}^*$. Then the convolution $f\tilde{*}\phi\in\mathcal{S}_H^0$ and is convex in the direction $-\gamma$. Furthermore, if, for a real number $\beta$, the function $h-e^{-2\textit{i}\beta}g\in\mathcal{S}^*$, then the convolution $f\tilde{*}\phi$ is convex in the direction $-\beta$. \end{theorem} \begin{proof} Since the function $h\in\mathcal{S}^*$ and the function $\log(1/(1-z))\in\mathcal{S}_2$, by Lemma \ref{theom1a}, we have the function $h-e^{-2\textit{i}\gamma}g=h*\log(1/(1-z))\in\mathcal{K}$. Hence, from \cite[Corollary 1, p.251]{23}, we have \begin{equation}\label{eq16b} \RE\frac{h(z)}{h(z)*\log\frac{1}{1-z}}>\frac{1}{2}, \quad z \in\mathbb{D}. \end{equation} Note that we can write \begin{equation}\label{eq16c} \RE\frac{(\phi*h)'(z)}{\left((\phi*h)'-e^{-2\textit{i}\gamma}(\phi*g)'\right)(z)} =\RE\frac{z\phi'(z)*(h(z)*\log\frac{1}{1-z})\left(\frac{h(z)}{h(z)*\log(\frac{1}{1-z})}\right)}{z\phi'(z)*(h(z)*\log\frac{1}{1-z})}, \end{equation} where the function $z\phi'\in\mathcal{K}$ and the function $h*\log(1/(1-z))\in\mathcal{S}^*$. 
Therefore, in view of \eqref{eq16b}, \eqref{eq16c} and \cite[Theorem 2.4, p.54]{8}, it follows that \begin{equation}\label{eq16d} \RE\frac{(\phi*h)'(z)}{\left((\phi*h)'-e^{-2\textit{i}\gamma}(\phi*g)'\right)(z)}>\frac{1}{2}, \quad z\in \mathbb{D}. \end{equation} As the function $h\in\mathcal{S}^*$ and the function $\phi\in\mathcal{S}_2$, Lemma \ref{theom1a} gives that the function $(h-e^{-2\textit{i}\gamma}g)*\phi=h*\log(1/(1-z))*\phi\in\mathcal{K}$. Similarly, the function $(h-e^{-2\textit{i}\beta}g)*\phi\in\mathcal{K}$. Therefore, in view of \eqref{eq16d}, the result follows by Lemma \ref{lem 13a}. \end{proof} \begin{corollary}\label{corla1}Let the function $\phi\in\mathcal{S}_2$ and let the function $f= h+ \bar {g} $ be a harmonic mapping in $\mathbb{D}$ satisfying $h(z)- g(z)=h(z)*\log(1/(1-z))$. Then, we have the following.\begin{itemize} \item[(1)] If the function $h\in\mathcal{S}^*$, then the convolution $f\tilde{*}\phi\in \mathcal{S}_H^0$ and is convex in the direction of the real axis. \item[(2)] If, for some $\theta$ $(0\leq\theta<\pi)$, the function $h(z)=z+\sum_{n=2}^\infty a_nz^n\in\mathcal{S}^*$ satisfies the inequality $\sum_{n=2}^\infty |a_n|\sqrt{2n(n-1)(1-\cos2\theta)+1}\leq1$, then the convolution $f\tilde{*}\phi\in \mathcal{S}_H^0$ and is convex in every direction $-\beta$ such that $|\beta|\leq \theta$. \item[(3)] If, for some $\theta$ $(0\leq\theta<\pi)$ such that $\cos2\theta\leq1/4$, the function $h(z)=z+\sum_{n=2}^\infty a_nz^n$ satisfies the inequality $\sum_{n=2}^\infty |a_n|\sqrt{2n(n-1)(1-\cos2\theta)+1}\leq1$, then the convolution $f\tilde{*}\phi\in \mathcal{S}_H^0$ and is convex in every direction $-\beta$ such that $|\beta|\leq \theta$. \end{itemize} \end{corollary} \begin{proof} Part $(1)$ is obvious from Theorem \ref{theom16a}. For $(2)$ and $(3)$, in view of Theorem \ref{theom16a}, it is enough to prove that the function $h-e^{-2\textit{i}\beta}g\in\mathcal{S}^*$ for every $\beta$ such that $|\beta|\leq \theta$, and that $h\in\mathcal{S}^*$.
If $\cos2\theta\leq1/4$, then the inequality $\sum_{n=2}^\infty |a_n|\sqrt{2n(n-1)(1-\cos2\theta)+1}\leq1$ implies the inequality $\sum_{n=2}^\infty n|a_n|\leq1$. Hence, by Theorem \ref{theomd}, the function $h\in\mathcal{S}^*$. Also, for $\beta$ real, we have \begin{align*} h(z)-e^{-2\textit{i}\beta}g(z)&=h(z)-e^{-2\textit{i}\beta}(h(z)-h(z)*\log(1/(1-z)))\\&=z+\sum_{n=2}^\infty a_n\left(1-e^{-2\textit{i}\beta}(n-1)/n \right)z^n. \end{align*} Therefore, in view of the hypotheses of $(2)$ and $(3)$, we see, for $|\beta|\leq\theta$, that the function $h-e^{-2\textit{i}\beta}g$ satisfies the inequality \begin{align*} \sum_{n=2}^\infty n\left| a_n\left(1-e^{-2\textit{i}\beta}(n-1)/n\right)\right|&=\sum_{n=2}^\infty | a_n|\sqrt{2n(n-1)(1-\cos2\beta)+1}\\&\leq\sum_{n=2}^\infty | a_n|\sqrt{2n(n-1)(1-\cos2\theta)+1}\leq1. \end{align*}Hence, by Theorem \ref{theomd}, the function $h-e^{-2\textit{i}\beta}g\in\mathcal{S}^*$. \end{proof} \remark\label{remaks1} Taking $\theta=\pi/2$ in Corollary \ref{corla1}, we see that, if the function $\phi\in\mathcal{S}_2$ and the function $h(z)=z+\sum_{n=2}^\infty a_nz^n$ satisfies the inequality $\sum_{n=2}^\infty (2n-1)|a_n|\leq1$, then the convolution $f\tilde{*}\phi\in\mathcal{K}_H^0$, where the function $f=h+\bar g$ and the function $g(z)=h(z)-h(z)*\log(1/(1-z)) $. \begin{example}Let the harmonic function $f=h+\bar {g}$ be given by the function $h(z)=z/(1-z)^2$ and the function $g(z)=z/(1-z)^2-z/(1-z)=z/(1-z)^2-z/(1-z)^2*\log(1/(1-z))$. Now, taking the function $\phi(z)=\log(1/(1-z))$ in Theorem \ref{theom16a}, we get that the convolution $(f\tilde{*}\phi)(z)=z/(1-z)+\overline{z/(1-z)-\log(1/(1-z))}\in\mathcal{S}_H^0$ and is convex in the direction of the real axis. \end{example} \begin{example}Let the harmonic function $f=h+\bar {g}$ be given by the function $h(z)=z+z^2/3$ and the function $g(z)=z^2/6$, and let the function $\phi(z)=\log(1/(1-z))$.
Then, the functions $f$ and $\phi$ satisfy the conditions in Remark \ref{remaks1}, and hence the convolution $(f\tilde{*}\phi)(z)=z+z^2/6+\overline{z^2/12}\in\mathcal{K}_H^0$. \end{example} We denote the convolution $f*f*\dots*f$ ($n$ times) by $\left(f\right)_*^n$. A simple calculation shows that, for $|\alpha|=1$ and $|z|<1$, \[\left(\frac{z}{(1-z\alpha)^2}\right)^n_**\left(\alpha \log\frac{1}{1-z/\alpha}\right)_*^n=\frac{z}{1-z}\quad\text{ for all } n\in \mathbb{N}.\] Since the function $z/(1-z)$ is the convolution identity, the inverse under convolution of the function $\left(\frac{z}{(1-z\alpha)^2}\right)^n_*$ is the function $\left(\alpha \log\frac{1}{1-z/\alpha}\right)_*^{n}.$ For the function $f$, we denote the inverse of $\left(f\right)_*^n$ by $\left(f\right)_*^{-n}.$ \begin{theorem}\label{theom4a} Let $n\in\mathbb{N}$, $|\alpha| =|\gamma|=1$ and $0\leq\beta<2\pi$. Let the function $ f= h+ \bar {g} $ be a harmonic mapping in $\mathbb{D}$ satisfying \[ h(z)-e^{-2\textit{i}\beta}g(z)=\frac{z}{(1-\alpha z)^2}*\bigg( \frac{z}{(1-\gamma z)^2}\bigg)_*^{n-2},\quad z \in\mathbb{D},\] and let the function $\phi \in \mathcal{S}_n$. If the function $ (h-e^{-2\textit{i}\delta}g)*\phi\in\mathcal{K}$ for some real number $\delta$ and \begin{equation}\label{eq13} \RE\left\lbrace\frac{(1-\alpha z)^2}{z}\left( h(z)*\bigg(\gamma \log\frac{1}{1- z/\gamma}\bigg)_*^{n-2}\right) \right\rbrace>\frac{1}{2},\quad z \in \mathbb{D}, \end{equation} then the convolution $f\tilde{\ast}\phi \in \mathcal{S}_H^0$ and is convex in the direction $-\delta$. \end{theorem} \begin{proof}We have\[ (h(z)-e^{-2\textit{i}\beta}g(z))*\phi(z)=\frac{z}{(1-\alpha z)^2}*\bigg( \frac{z}{(1-\gamma z)^2}\bigg)_*^{n-2}*\phi(z),\] where the functions $z/(1-\alpha z)^2$, $ z/(1-\gamma z)^2\in\mathcal{S}^*$ and the function $\phi\in\mathcal{S}_n$.
Therefore, by repeated application of Theorem \ref{theom1a}, we see that the function $(h-e^{-2\textit{i}\beta}g)*\phi\in\mathcal{K}.$ Also, we have \begin{align} \lefteqn{\frac{(\phi*h)'(z)}{\left((\phi*h)'-e^{-2\textit{i}\beta}(\phi*g)'\right)(z)}}\notag\\ & =\frac{z\phi'(z)*\left\lbrace h(z)*\bigg(\frac{z}{(1-\gamma z)^2}\bigg)^{n-2}_**\bigg(\gamma \log\frac{1}{1-z/\gamma}\bigg)_*^{n-2}\right\rbrace}{z\phi'(z)*(h-e^{-2\textit{i}\beta}g)(z)}\notag\\& =\frac{z\phi'(z)*\bigg(\frac{z}{(1-\gamma z)^2}\bigg)^{n-2}_**\frac{z}{(1-\alpha z)^2} \left\lbrace \frac{(1-\alpha z)^2}{z}\left( h(z)*\bigg(\gamma \log\frac{1}{1- z/\gamma}\bigg)_*^{n-2}\right)\right\rbrace}{z\phi'(z)*\bigg(\frac{z}{(1-\gamma z)^2}\bigg)^{n-2}_**\frac{z}{(1-\alpha z)^2}}.\label{eq13a} \end{align} Since the function $ \phi \in \mathcal{S}_n$ and the function $z/(1-\gamma z)^2\in\mathcal{S}^*$, from Lemma \ref{theom1a}, we have \begin{equation}\label{eq13b} z\phi'(z) *\left(\frac{z}{(1-\gamma z)^2}\right)^{n-2}_* \in \mathcal{K}. \end{equation} Since the function $z/(1-\alpha z)^2 \in\mathcal{S}^*$, in view of \eqref{eq13}, \eqref{eq13a} and \eqref{eq13b}, it follows, by \cite[Theorem 2.4, p.54]{8}, that \[\RE\frac{(\phi*h)'(z)}{\left((\phi*h)'-e^{-2\textit{i}\beta}(\phi*g)'\right)(z)}>1/2,\quad z\in\mathbb{D}.\] The result now follows from Lemma \ref{lem 13a}. \end{proof} \remark\label{remak38} Take $n=2$ in Theorem \ref{theom4a}. Let $|\alpha| =1$, $0\leq\beta<2\pi$. Let the harmonic mapping $f=h+\bar g$ satisfy \begin{equation}\label{eq1} \RE {\frac{(1-\alpha z)^2}{z}}h(z)> \frac{1}{2}, \quad h(z)-e^{-2\textit{i}\beta}g(z)=\frac{z}{(1-\alpha z)^2 }, \quad z \in \mathbb{D}, \end{equation} and the function $ (h-e^{-2\textit{i}\delta}g)\in\mathcal{S}^*$ for some real number $\delta$. If the function $\phi \in \mathcal{S}_2$, then the convolution $f\tilde{\ast}\phi \in \mathcal{S}_H^0$ and is convex in the direction $-\delta$, and in particular in the direction $-\beta$.
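To make the specialization in Remark \ref{remak38} explicit (a routine check spelled out here for the reader's convenience): for $n=2$ the zeroth convolution powers reduce to the convolution identity,

```latex
\bigg(\frac{z}{(1-\gamma z)^2}\bigg)_*^{0}=\frac{z}{1-z},
\qquad
h(z)*\bigg(\gamma \log\frac{1}{1-z/\gamma}\bigg)_*^{0}=h(z)*\frac{z}{1-z}=h(z),
```

so the hypothesis \eqref{eq13} of Theorem \ref{theom4a} collapses to the first condition in \eqref{eq1}.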
\remark\label{remak39} Take $n=\gamma=1$ in Theorem \ref{theom4a} and notice that \begin{align*} \frac{z}{(1-\alpha z)^2}*\bigg( \frac{z}{(1-z)^2}\bigg)_*^{-1}=\frac{z}{(1-\alpha z)^2}* \log \frac{1}{1-z}=\frac{z}{(1-\alpha z)}, \end{align*} and \begin{align*} \frac{(1-\alpha z)^2}{z}\left( h(z)*\bigg(\log\frac{1}{1-z}\bigg)_*^{-1}\right) = \frac{(1-\alpha z)^2}{z}\left( h(z)*\frac{z}{(1-z)^2}\right)= \frac{(1-\alpha z)^2}{z} \mathcal{D}h(z). \end{align*} Let $|\alpha| =1$, $0\leq\beta<2\pi$. Let the harmonic mapping $f=h+\bar g$ satisfy \begin{equation} h(z)-e^{-2\textit{i}\beta}g(z)=\frac{z}{(1-\alpha z)},\quad\RE\left\lbrace\frac{(1-\alpha z)^2}{z} \mathcal{D}h(z)\right\rbrace>\frac{1}{2},\quad z\in\mathbb{D}, \end{equation} and the function $ h-e^{-2\textit{i}\delta}g\in\mathcal{K}$ for some real number $\delta$. If the function $\phi\in\mathcal{K}$, then the convolution $f\tilde{\ast}\phi \in \mathcal{S}_H^0$ and is convex in the direction $-\delta$, and in particular in the direction $-\beta$. \remark\label{remak40} Take $n=3$, $\gamma=1$ in Theorem \ref{theom4a}. First notice that \[ \frac{z}{(1-\alpha z)^2}*\frac{z}{(1-z)^2} = \frac{z+\alpha z^2}{(1-\alpha z)^3}.\] Let $|\alpha| =1$, $0\leq\beta<2\pi$. Let the harmonic mapping $f=h+\bar g$ satisfy\[ h(z)-e^{-2\textit{i}\beta}g(z)= \frac{z+\alpha z^2}{(1-\alpha z)^3},\quad \RE\frac{(1-\alpha z)^2}{z}\left\lbrace h(z)*\log\frac{1}{1-z}\right\rbrace >1/2,\quad z\in\mathbb{D}.\]If the function $\phi\in\mathcal{S}_3$, then the convolution $f\tilde{\ast}\phi \in \mathcal{S}_H^0$ and is convex in the direction $-\beta$. Taking $\alpha=1$, $\beta=0$ in Remark \ref{remak38}, we get the following result. \begin{corollary} \label{theom2a}\cite{8} Let the function $\phi \in \mathcal{S}_2$ and the function $ f= h+ \bar {g} $ be a harmonic mapping in $\mathbb{D}$ satisfying $ h(z)-g(z)=z/(1-z)^2$ for all $z \in\mathbb{D}$.
If\[\RE{\frac{(1-z)^2}{z}}h(z)> 1/2 \quad \text{for } z \in \mathbb{D},\] then the convolution $f\tilde{\ast}\phi \in \mathcal{S}_H^0$ and is convex in the direction of the real axis. \end{corollary} Next, we give two examples of non-univalent convolution products. \begin{example}\label{exam42} For $a>{-1}$ $(a\neq 0)$, consider the harmonic mapping $f_a=h+\bar{g}$ given by $h(z)=(1+ z/a)l(z)$ and $g(z)= zl(z)/a$, where $l(z)=z/(1-z)$ is the analytic right half-plane mapping. Then, for the function $\phi(z)=z+z^2/2\in\mathcal{S}^*$, we have \[(f_a\tilde{*}\phi)(z)=z+\frac{1+a}{2a}z^2+\frac{1}{2a}\bar{z}^2.\]Its Jacobian $\mathit{J}_{f_a\tilde{*}\phi}$, given by\[\mathit{J}_{f_a\tilde{*}\phi}(z)=|(h*\phi)'(z)|^2-|(g*\phi)'(z)|^2=1+\frac{2+a}{a}|z|^2+2\frac{1+a}{a}\RE z,\]vanishes at $z=-a/(a+2)\in\mathbb{D}.$ \end{example} \begin{example}\label{exam42a} For $0<b<1/2$, consider the harmonic mapping $F_b=h+\bar{g}$ given by \[h(z)=\frac{z+(1+2b)z^2}{(1-z)^3}\quad\text{and}\quad g(z)= \frac{2bz^2}{(1-z)^3}.\]Then, for the function $\phi(z)=z+z^2/8\in\mathcal{S}_2$, the Jacobian $\mathit{J}_{F_b\tilde{*}\phi}$ of the convolution $F_b\tilde{*}\phi$, given by\[\mathit{J}_{F_b\tilde{*}\phi}(z)=1+(1+b)|z|^2+(2+b)\RE z,\]vanishes at $z=-1/(1+b)\in\mathbb{D}.$ \end{example} Examples \ref{exam42}-\ref{exam42a} show that, for the function $\phi\in\mathcal{S}^*$ and $\phi\in\mathcal{S}_2$ respectively, the convolutions $f_a\tilde{*}\phi$ and $F_b\tilde{*}\phi$ need not be univalent, where the functions $f_a$ and $F_b$ are given in Examples \ref{exam42}-\ref{exam42a}. However, the results are true for the function $\phi\in\mathcal{K}$ and $\phi\in\mathcal{S}_3$, respectively. In fact, we have the following results. \begin{corollary} \label{cor2a} For $a\geq6$, let the function $ f_a=h+ \bar{g}$ be the harmonic mapping given in Example \ref{exam42}.
If the function $\phi\in\mathcal{K}$, then the convolution $f_a\tilde{\ast}\phi \in \mathcal{S}_H^0$ and is convex in the direction of the real axis. \end{corollary} \begin{proof} We have $h(z)-g(z)=l(z)=z/(1-z)$. Also, for $a\geq6$, we see that \begin{align*} \RE\left\lbrace\frac{(1-z)^2}{z} \mathcal{D}h(z)\right\rbrace &=\RE\left\lbrace(1-z)^2h'(z)\right\rbrace\\&=\RE\left\lbrace 1+\frac{z}{a}(2-z)\right\rbrace>1/2 \quad\text{for }z\in\mathbb{D}. \end{align*} By Remark \ref{remak39}, the result follows. \end{proof} \begin{corollary}\label{corl3a} Let the function $ F_b=h+ \bar{g}$ be the harmonic mapping given in Example~\ref{exam42a}. If the function $\phi\in\mathcal{S}_3$, then the convolution $F_b\tilde{\ast}\phi \in \mathcal{S}_H^0$ and is convex in the direction of the real axis. \end{corollary} \begin{proof} From the definition of $ F_b$, we have \[h(z)-g(z)= \frac{z+z^2}{(1-z)^3},\] and \[ \mathcal{D}\bigg( \frac{z+bz^2}{(1-z)^2} \bigg) =\frac{z+(1+2b)z^2}{(1-z)^3}= h(z).\] Therefore, for $|b|\leq1/2$, \begin{align*} \RE\left\lbrace\frac{(1-z)^2}{z}\left( h(z)*\log\frac{1}{(1-z)}\right)\right\rbrace &=\RE\left\lbrace\frac{(1-z)^2}{z}\left( \mathcal{D} \frac{z+bz^2}{(1-z)^2} *\log\frac{1}{(1-z)}\right)\right\rbrace\\&=\RE\left\lbrace\frac{(1-z)^2}{z} \frac{z+bz^2}{(1-z)^2}\right\rbrace\\ &=\RE (1+bz) > 1/2. \end{align*} The result now follows from Remark \ref{remak40}. \end{proof} For $ 0\leq \alpha <2\pi $, let $\mathcal{S}^0(H_\alpha)\subset\mathcal{S}_H^0 $ denote the class of all harmonic mappings that map $\mathbb{D} $ onto $H_\alpha $, where\[ H_\alpha :=\left\lbrace z \in \mathbb{C}:\RE (e^{\textit{i}\alpha}z)>-\frac{1}{2}\right\rbrace .\]In \cite{7}, it is shown that if $f=h+\bar{g}\in\mathcal{S}^0(H_\alpha),$ then \begin{equation}\label{eq3} h(z)+e^{-2\textit{i}\alpha}g(z)=\frac{z}{1-e^{\textit{i}\alpha} z}.
\end{equation} Using this result, we check the convexity of the convolution $f \tilde{*} \phi$, where the function $f\in\mathcal{S}^0(H_\alpha)$ and the function $\phi\in\mathcal{K}.$ Before this, we give an example of a mapping in the class $\mathcal{S}^0(H_\alpha)$. \begin{example}\label{exam5}In \cite{3}, it is shown that the harmonic right half-plane mapping $L=M+\overline{N}$, where \[M(z)=\frac{z-z^2/2}{(1-z)^2} \quad \text {and} \quad N(z)=-\frac{z^2/2}{(1-z)^2},\] maps $\mathbb{D}$ onto the right half-plane $\left\lbrace w:\RE w>-1/2\right\rbrace.$ Consider the mapping $f_{\alpha}=h_{\alpha}+\bar g_{\alpha}$, where \[h_{\alpha}(z)=\frac{z-e^{\textit{i}\alpha}z^2/2}{(1-e^{\textit{i}\alpha}z)^2} \quad \text{and} \quad g_{\alpha}(z)=-\frac{e^{3\textit{i}\alpha}z^2/2}{(1-e^{\textit{i}\alpha}z)^2}.\] Now, for $w=e^{\textit{i}\alpha}z$, we have \begin{align*} e^{\textit{i}\alpha}f_{\alpha}(z)=e^{\textit{i}\alpha}h_{\alpha}(z)+\overline{e^{-\textit{i}\alpha}g_{\alpha}(z)}&=\frac{ze^{\textit{i}\alpha}-e^{2\textit{i}\alpha}z^2/2}{(1-e^{\textit{i}\alpha}z)^2}+\overline{\frac{-e^{2\textit{i}\alpha}z^2/2}{(1-e^{\textit{i}\alpha}z)^2}}\\&=\frac{w-w^2/2}{(1-w)^2}+\overline{\frac{-w^2/2}{(1-w)^2}}=L(w). \end{align*} Therefore the function $e^{\textit{i}\alpha}f_{\alpha}$ maps $\mathbb{D}$ onto the right half-plane $\left\lbrace w':\RE w'>-1/2\right\rbrace$. Hence the function $f_{\alpha}\in \mathcal{S}^0(H_\alpha).$ \end{example} \begin{example}\label{exam7} Consider the function $\phi(z)=z+z^2/2\in\mathcal{S}^*$.
Then, for the harmonic mapping $f_{\alpha}$ given in Example \ref{exam5}, the convolution $f_{\alpha}\tilde{*}\phi$ is given by\[(f_{\alpha}\tilde{*}\phi)(z)=(h_{\alpha}*\phi)(z)+\overline{(g_{\alpha}*\phi)(z)}=z+\frac{3}{4}e^{\textit{i}\alpha}z^2-\frac{1}{4}\overline{e^{3\textit{i}\alpha}z^2}.\]Its Jacobian $\mathit{J}_{f_{\alpha}\tilde{*}\phi}$, given by\[\mathit{J}_{f_{\alpha}\tilde{*}\phi}(z)=|(h_{\alpha}*\phi)'(z)|^2-|(g_{\alpha}*\phi)'(z)|^2=1+2|z|^2+3\RE(e^{\textit{i}\alpha} z),\]vanishes at $z=-e^{-\textit{i}\alpha}/2\in\mathbb{D}.$ \end{example} Example \ref{exam7} shows that, for the function $\phi\in\mathcal{S}^*$, the convolution $f_{\alpha}\tilde{*}\phi$ need not be univalent, where $f_{\alpha}$ is given in Example \ref{exam5}. However, the result is true for the function $\phi\in\mathcal{K}$. In fact, we have the following strong result. \begin{theorem}\label{theom9a} Let the harmonic mapping $f=h+\bar{g} \in\mathcal{S}^0(H_\alpha)$. If the function $\phi\in\mathcal{K}$, then the convolution $f \tilde{*} \phi\in \mathcal{S}_H^0$ and is convex in the direction $ \pi/2-\alpha.$ \end{theorem} \begin{proof} Since the harmonic mapping $f=h+\bar g \in\mathcal{S}^0(H_\alpha)$, from equation \eqref{eq3}, we have \begin{equation}\label{eq4a} h(z)+e^{-2\textit{i}\alpha}g(z)=\frac{z}{(1-e^{\textit{i}\alpha} z)} \end{equation} or, equivalently,\begin{equation}\label{eq4b} h(z)-e^{-2\textit{i}(\alpha-\pi/2)}g(z)=\frac{z}{(1-e^{\textit{i}\alpha} z)}. \end{equation} Upon differentiating \eqref{eq4a} and denoting the dilatation $g'/h'$ of $f$ by $\omega$, we get $$h'(z)=\frac{1}{(1-e^{\textit{i}\alpha} z)^2(1+e^{-2\textit{i}\alpha} \omega(z))}.$$Therefore \begin{equation}\label{eq4} \RE\frac{(1-e^{\textit{i}\alpha} z)^2}{z} \mathcal{D}h(z)=\RE\frac{1}{1+e^{-2\textit{i}\alpha} \omega(z)}>\frac{1}{2} \quad\text{for }z\in\mathbb{D}.
\end{equation} Using \eqref{eq4b} and \eqref{eq4} in Remark \ref{remak39}, we see that the convolution $f \tilde{*} \phi\in \mathcal{S}_H^0$ and is convex in the direction $\pi/2-\alpha$. \end{proof} In the next result, we show that the convolution of the harmonic mapping $f_{\alpha}=h_{\alpha}+\overline{g_{\alpha}}$, given in Example \ref{exam5}, with the mappings in the class $\mathcal{S}_2$ is convex in two perpendicular directions. \begin{theorem}\label{theomab} Let the function $f_{\alpha}=h_{\alpha}+\overline{g_{\alpha}}$ be the harmonic mapping defined in Example \ref{exam5}. If the function $\phi\in\mathcal{S}_2$, then the convolution $f_{\alpha}\tilde{*}\phi\in\mathcal{S}_H^0$ and is convex in the directions $-\alpha$ and $\pi/2-\alpha$. \end{theorem} \begin{proof} From Example \ref{exam5}, we have \begin{equation}\label{eqab1} h_{\alpha}(z)-e^{-2\textit{i}\alpha}g_{\alpha}(z)=\frac{z}{(1-e^{\textit{i}\alpha}z)^2}, \end{equation} and \begin{equation}\label{eqab2} \RE\left\lbrace\frac{(1-e^{\textit{i}\alpha}z)^2}{z}h_{\alpha}(z)\right\rbrace=1-\frac{1}{2}\RE (e^{\textit{i}\alpha}z)>\frac{1}{2}\quad \text{for }z \in\mathbb{D}. \end{equation} Also, the function $f_{\alpha}\in\mathcal{S}^0(H_\alpha)$. Therefore, \eqref{eq3} gives \begin{equation}\label{eqab3} h_{\alpha}(z)-e^{-2\textit{i}(\alpha-\pi/2)}g_{\alpha}(z)=\frac{z}{1-e^{\textit{i}\alpha} z}. \end{equation} Since the functions $z/(1-e^{\textit{i}\alpha}z)^2$ and $z/(1-e^{\textit{i}\alpha}z)$ are starlike, in view of \eqref{eqab1}, \eqref{eqab2}, \eqref{eqab3} and Remark \ref{remak38}, the result follows. \end{proof} \section{Partial sums of functions in class $\mathcal{S}_n$ and their convolution\\ with harmonic mappings} For $m$, $n\in\mathbb{N}$, we define the $n$th partial sum of the function $f_m(z)= \sum_{l=1}^{\infty} z^l/l^m $ by $f_{m,n}(z):=\sum_{l=1}^n z^l/l^m$. We now investigate the starlikeness of these partial sums.
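These partial sums interact in a simple way with the operator $\mathcal{D}$; coefficientwise, for $m\geq1$,

```latex
\mathcal{D}f_{m,n}(z)=zf_{m,n}'(z)=\sum_{l=1}^{n} l\,\frac{z^l}{l^{m}}=\sum_{l=1}^{n} \frac{z^l}{l^{m-1}}=f_{m-1,n}(z),
```

and, by iteration, $\mathcal{D}^m f_{m+l,n}=f_{l,n}$; this identity is used repeatedly below.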
\begin{lemma}\label{theom5a} For $m$, $n\in\mathbb{N}$, the partial sum $f_{m,n}(z)=\sum_{l=1}^n z^l/l^m$ of the function $f_m(z)= \sum_{l=1}^{\infty} z^l/l^m $ satisfies the following: \begin{itemize} \item[(1)]$f_{1,2}(z)= z+\frac{z^2}{2} \in\mathcal{S}^*$; \item[(2)]$f_{2,3}(z)= z+\frac{z^2}{2^2}+ \frac{z^3}{3^2}\in\mathcal{S}^*$; \item[(3)]$f_{3,n}(z)=\sum_{l=1}^n z^l/l^3\in\mathcal{S}^*$, for all $n\in \mathbb{N} $. \end{itemize} \end{lemma} \begin{proof} Parts $(1)$ and $(2)$ follow from Theorem \ref{theomd}. For $l\in\mathbb{N}$, let $a_l=1/l^3$ for $1\leq l\leq n$ and $a_l=0$ for all $l>n$. Then, we have \begin{align*} \sum_{l=2}^\infty l|a_l|&\leq\sum_{l=2}^\infty\frac{1}{l^2}=-1+\frac{\pi^2}{6}<1. \end{align*}Therefore, $(3)$ follows from Theorem \ref{theomd}. \end{proof} Using $\mathcal{D}^mf_{{m+l},n}= f_{l,n}$, Lemma \ref{theom5a} and equation \eqref{eq0} give the following: \noindent \begin{lemma}\label{remak1b}The partial sums satisfy the following: \begin{itemize} \item[(1)]$f_{m,2}(z)= z+\frac{z^2}{2^{m}} \in\mathcal{S}_{m-1}$, if $m\geq{1}$; \item[(2)]$f_{m,3}(z)= z+\frac{z^2}{2^{m}}+ \frac{z^3}{3^m}\in\mathcal{S}_{m-2}$, if $m\geq{2}$; \item[(3)]$f_{m,n}(z)= z+\frac{z^2}{2^m}+ \frac{z^3}{3^m}+\dots+\frac{z^n}{n^m} \in\mathcal{S}_{m-3}$, for all $n\in\mathbb{N}$, if $m\geq{3}$. \end{itemize} \end{lemma} \begin{lemma}\label{theom7a} Let $\phi_p$ denote the $p$th partial sum of the function $ \phi\in \mathcal{S}_m$. Then, we have the following: \begin{itemize} \item[(1)] $\phi_2 \in \mathcal{S}_{m-2}$, if $m\geq{2}$; \item[(2)] $\phi_3 \in \mathcal{S}_{m-3}$, if $m\geq{3}$; \item[(3)] $\phi_p \in \mathcal{S}_{m-4}$, for all $p\in\mathbb{N}$, if $m\geq{4}$. \end{itemize} \end{lemma} \begin{proof} Let $\phi(z)= z+ a_2z^2+a_3z^3+\dots$, $z\in \mathbb{D}$.
Then, for any $n\in\mathbb{N}$ with $n\leq m$, we can write $\phi_p $ as \begin{align*} \phi_p(z)&= z+ a_2z^2+a_3z^3+\dots+a_pz^p\\ &=\left(z+2^n a_2z^2+3^na_3z^3+\dots\right)*\bigg(z+\frac{z^2}{2^n}+ \frac{z^3}{3^n}+\dots+\frac{z^p}{p^n}\bigg)\\&= \mathcal{D}^n\phi(z)*f_{n,p}(z). \end{align*} Since the function $ \phi\in \mathcal{S}_m$, \eqref{eq0} shows that the function $\mathcal{D}^n\phi \in \mathcal{S}_{m-n}$. Therefore, by using Lemma \ref{remak1b} and Lemma \ref{theom1a}, we get the result. \end{proof} Let the function $f=h +\bar{g}$ be a harmonic mapping, where \[h(z)=\sum_{k=1}^{\infty}a_kz^k\quad \text{and}\quad g(z)=\sum_{k=1}^{\infty}b_kz^k.\] We define the $n$th partial sum of $f $ by\[f_n(z):=\sum_{k=1}^{n}a_kz^k+\overline{\sum_{k=1}^{n}b_kz^k}.\] Therefore, we can write $f_n=f\tilde{\ast}l_n$, where $l_{n}(z)= \sum_{k=1}^{n}z^k$ is the $n$th partial sum of the right half-plane mapping $l(z)=z/(1-z)$. \begin{theorem}\label{theom8a} For $n\in\mathbb{N}$, let the function $f=h +\bar{g}$ be a harmonic mapping in $\mathbb{D}$ with \begin{equation}\label{eq8b} h(z)-g(z)=\left(\frac{z}{(1-z)^2}\right)_*^{n-1}\quad\text{for } z \in\mathbb{D}, \end{equation} and \begin{equation}\label{eq8c} \RE{\frac{(1-z)^2}{z}}\left\lbrace h(z)*\bigg(\log\frac{1}{1-z}\bigg)_*^{n-2}\right\rbrace> 1/2\quad\text{for } z \in \mathbb{D}. \end{equation} Also, let the function $\phi\in \mathcal{S}_m$.
Then, for the partial sum $(f\tilde{\ast}\phi)_p$ of the convolution $f\tilde{\ast}\phi$, we have the following: \begin{itemize} \item[(1)] If $m\geq n+2 $, then $(f\tilde{\ast}\phi)_2 \in \mathcal{S}_H^0$ and is convex in the direction of the real axis; \item[(2)] If $m\geq n+3$, then $(f\tilde{\ast}\phi)_2$, $(f\tilde{\ast}\phi)_3 \in \mathcal{S}_H^0$ and are convex in the direction of the real axis; \item[(3)] If $m\geq n+4$, then $(f\tilde{\ast}\phi)_p \in \mathcal{S}_H^0$ and is convex in the direction of the real axis for all $p\in\mathbb{N}.$ \end{itemize} \end{theorem} \begin{proof} We know that $(f\tilde{\ast}\phi)_p =(f\tilde{\ast}\phi)\tilde{\ast}l_p=f\tilde{\ast}(\phi*l_p)= f\tilde{\ast}\phi_p$, where $\phi_p$ is the $p$th partial sum of the function $\phi$. Therefore, in order to apply Theorem \ref{theom4a}, we need $\phi_p$ to be in the class $\mathcal{S}_n$, which follows from Lemma \ref{theom7a}. The result now follows by Theorem \ref{theom4a}. \end{proof} \begin{corollary} For $a\geq6$, let the function $ f_a=h+ \bar{g}$ be the harmonic mapping given in Example \ref{exam42}. Then, we have the following: \begin{itemize} \item[(1)] If the function $\phi \in \mathcal{S}_3$, then $(f_a\tilde{*}\phi)_2\in\mathcal{S}_H^0$ and is convex in the direction of the real axis; \item[(2)] If the function $\phi \in \mathcal{S}_4$, then $(f_a\tilde{*}\phi)_2$, $(f_a\tilde{*}\phi)_3\in\mathcal{S}_H^0$ and are convex in the direction of the real axis; \item[(3)] If the function $\phi \in \mathcal{S}_5$, then $(f_a\tilde{*}\phi)_p\in\mathcal{S}_H^0$ and is convex in the direction of the real axis for all $p\in\mathbb{N}.$ \end{itemize} \end{corollary} \begin{proof} We have $h(z)-g(z)=l(z)=z/(1-z)$. Also, for $a\geq6$ we have \begin{align*} \RE\frac{(1-z)^2}{z}\left(h(z)*\left(\frac{z}{(1-z)^2}\right)\right) &=\RE(1-z)^2h'(z)\\&=\RE\left( 1+\frac{z}{a}(2-z)\right)>\frac{1}{2}. \end{align*} Therefore, the mapping $f_a$ satisfies \eqref{eq8b} and \eqref{eq8c} with $n=1$.
Hence, the result follows from Theorem \ref{theom8a}. \end{proof} \begin{corollary}\label{corl14a} For the harmonic Koebe mapping $K= H+\overline{G}$ and the harmonic half-plane mapping $ L = M+ \overline{N}$, we have the following: \begin{itemize} \item[(1)] If the function $\phi \in \mathcal{S}_4$, then $(K\tilde{*}\phi)_2$, $(L\tilde{*}\phi)_2\in\mathcal{S}_H^0$ and are convex in the direction of the real axis; \item[(2)] If the function $\phi \in \mathcal{S}_5$, then $(K\tilde{*}\phi)_2$, $(K\tilde{*}\phi)_3$, $(L\tilde{*}\phi)_2$, $(L\tilde{*}\phi)_3\in\mathcal{S}_H^0$ and are convex in the direction of the real axis; \item[(3)] If the function $\phi \in \mathcal{S}_6$, then $(K\tilde{*}\phi)_p$, $(L\tilde{*}\phi)_p\in\mathcal{S}_H^0$ and are convex in the direction of the real axis for all $p\in\mathbb{N}$. \end{itemize} \end{corollary} \begin{proof} The harmonic Koebe mapping $K(z)= H(z)+\overline{G(z)}$ is given by \[H(z)=\frac{z-z^2/2+z^3/6}{(1-z)^3} \quad\text{ and }\quad G(z)=\frac{z^2/2+z^3/6}{(1-z)^3}.\] Therefore, \[H(z)-G(z)=\frac{z}{(1-z)^2}\] and \[\RE\frac{(1-z)^2}{z}H(z) =\RE\frac{1-z/2+z^2/6}{1-z} =\RE\bigg(\frac{2/3}{1-z} + \frac{1}{3} - \frac{z}{6}\bigg) >\frac{1}{2}.\] Also, the half-plane mapping $L(z)= M(z)+\overline{N(z)}$ is given by \[M(z)=\frac{z-z^2/2}{(1-z)^2}\quad \text{and} \quad N(z)=\frac{-z^2/2}{(1-z)^2}.\] Then, \[M(z)-N(z)=\frac{z}{(1-z)^2}\quad\text{and}\quad \RE\frac{(1-z)^2}{z}M(z) =\RE\left(1-\frac{z}{2}\right)>\frac{1}{2}.\] Therefore, the functions $K$ and $L$ satisfy \eqref{eq8b} and \eqref{eq8c} with $n=2$. Hence, the result follows from Theorem \ref{theom8a}.
\end{proof} \begin{example} For $m\in\mathbb{N}$, we can easily see that the function \[f_m(z):= z+\frac{z^2}{2^m}+ \frac{z^3}{3^m}+\dots \in \mathcal{S}_{m+1}.\] Therefore, by Corollary \ref{corl14a}, the following functions \begin{itemize} \item[(1)]$(K\tilde{*}f_3)_2(z) =z+\frac{5}{16}z^2 +\frac{1}{16}\bar{z}^2$; \item[(2)]$(K\tilde{*}f_4)_3(z) =z+\frac{5}{32}z^2+\frac{14}{243}z^3 +\frac{1}{32}\bar{z}^2+\frac{5}{243}\bar{z}^3 $; \item[(3)]$(K\tilde{*}f_5)_4(z) =z+\frac{5}{64}z^2+\frac{14}{729}z^3 +\frac{15}{2048}z^4+\frac{1}{64}\bar{z}^2+\frac{5}{729}\bar{z}^3+ \frac{7}{2048}\bar{z}^4$ \end{itemize} belong to $\mathcal{S}_H^0$ and are convex in the direction of the real axis. \end{example} \begin{corollary} For $|b|\leq1/2$, let the function $ F_b=h+ \bar{g}$ be the harmonic mapping given in Example \ref{exam42a}. Then, we have the following: \begin{itemize} \item[(1)] If the function $\phi \in \mathcal{S}_5$, then $(F_b\tilde{*}\phi)_2\in\mathcal{S}_H^0$ and is convex in the direction of the real axis. \item[(2)] If the function $\phi \in \mathcal{S}_6$, then $(F_b\tilde{*}\phi)_2$, $(F_b\tilde{*}\phi)_3\in\mathcal{S}_H^0$ and are convex in the direction of the real axis. \item[(3)] If the function $\phi \in \mathcal{S}_7$, then $(F_b\tilde{*}\phi)_p\in\mathcal{S}_H^0$ and is convex in the direction of the real axis for all $p\in\mathbb{N}$. \end{itemize} \end{corollary} \begin{proof} We have \[h(z)-g(z)= \frac{z+z^2}{(1-z)^3} = \left(\frac{z}{(1-z)^2}\right)_*^{2},\] and \[ \mathcal{D} \left(\frac{z+bz^2}{(1-z)^2}\right) =\frac{z+(1+2b)z^2}{(1-z)^3}= h(z).\] Using the above equation, we get, for $|b|\leq1/2$, \begin{align*} \RE\frac{(1-z)^2}{z}\left(h(z)*\log\frac{1}{(1-z)}\right) &=\RE\frac{(1-z)^2}{z}\left( \mathcal{D} \frac{z+bz^2}{(1-z)^2} *\log\frac{1}{(1-z)}\right)\\&=\RE\frac{(1-z)^2}{z} \frac{z+bz^2}{(1-z)^2}\\ &=\RE {(1+bz)} > 1/2. \end{align*} Therefore, the function $F_b$ satisfies \eqref{eq8b} and \eqref{eq8c} with $n=3$.
Hence, the result follows from Theorem \ref{theom8a}. \end{proof} \begin{theorem} For the mapping $f=h+\bar{g} \in\mathcal{S}^0(H_\alpha)$, we have the following: \begin{itemize} \item[(1)] If the function $\phi \in \mathcal{S}_3$, then $(f\tilde{*}\phi)_2\in \mathcal{S}_H^0$ and is convex in the direction $\pi/2-\alpha.$ \item[(2)] If the function $\phi \in \mathcal{S}_4$, then $(f\tilde{*}\phi)_2$, $(f\tilde{*}\phi)_3\in \mathcal{S}_H^0$ and are convex in the direction $\pi/2-\alpha.$ \item[(3)] If the function $\phi \in \mathcal{S}_5$, then $(f\tilde{*}\phi)_p\in \mathcal{S}_H^0$ and is convex in the direction $\pi/2-\alpha$ for all $p\in\mathbb{N}.$ \end{itemize} \end{theorem} \begin{proof} We have $(f\tilde{\ast}\phi)_p =f\tilde{\ast}\phi_p$, where $\phi_p$ is the $p$th partial sum of the function $\phi$. Therefore, in order to apply Theorem \ref{theom9a}, we need the function $\phi_p$ to be in the class $\mathcal{K}$, which follows from Lemma \ref{theom7a}. The result now follows from Theorem \ref{theom9a}. \end{proof} \begin{theorem} For the harmonic mapping $f_{\alpha}$ defined in Example \ref{exam5}, we have the following: \begin{itemize} \item[(1)] If the function $\phi \in \mathcal{S}_4$, then $(f_{\alpha}\tilde{*}\phi)_2\in \mathcal{S}_H^0$ and is convex in the directions $-\alpha$ and $\pi/2-\alpha.$ \item[(2)] If the function $\phi \in \mathcal{S}_5$, then $(f_{\alpha}\tilde{*}\phi)_2$, $(f_{\alpha}\tilde{*}\phi)_3\in \mathcal{S}_H^0$ and are convex in the directions $-\alpha$ and $\pi/2-\alpha.$ \item[(3)] If the function $\phi \in \mathcal{S}_6$, then $(f_{\alpha}\tilde{*}\phi)_p\in \mathcal{S}_H^0$ and is convex in the directions $-\alpha$ and $\pi/2-\alpha$ for all $p\in\mathbb{N}$.
\end{itemize} \end{theorem} \begin{proof} We have $(f_{\alpha}\tilde{\ast}\phi)_p= f_{\alpha}\tilde{\ast}\phi_p$, where $\phi_p$ is the $p$th partial sum of the function $\phi$. Therefore, in order to apply Theorem \ref{theomab}, we need the function $\phi_p$ to be in the class $\mathcal{S}_2$, which follows from Lemma \ref{theom7a}. The result now follows from Theorem \ref{theomab}. \end{proof}
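The truncation identity $(f\tilde{\ast}\phi)_p=f\tilde{\ast}\phi_p$ underlying the last two proofs can also be verified directly on coefficients: writing $h(z)=\sum_{k\geq1}a_kz^k$, $g(z)=\sum_{k\geq1}b_kz^k$ and $\phi(z)=\sum_{k\geq1}c_kz^k$,

```latex
(f\tilde{\ast}\phi)_p(z)
=\sum_{k=1}^{p}a_kc_kz^k+\overline{\sum_{k=1}^{p}b_kc_kz^k}
=\big(f\tilde{\ast}\phi_p\big)(z).
```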
\section{Transformers for LEGO}\label{sec:transformers} \begin{figure}[h] \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=15cm]{figures/transformer_diagram.pdf} \caption{Illustration of a transformer model applied to LEGO Task 1 on the input sentence \texttt{d=-c;\,b=-a;\, c=+b;\, a=+1;}. We apply a linear classification head to the output representation of each clause's first token to generate predictions for the variable assignments. } \label{fig:transformer-lego-diagram} \vspace{-0.2cm} \end{figure} We apply transformer models in the token classification pipeline to predict the assignments of the variables in the input sentence, as depicted in Figure~\ref{fig:transformer-lego-diagram}. To evaluate out-of-distribution generalization (referred to simply as generalization), we introduce a parameter $n_{tr}\le n$, such that during training, supervision is provided only on the first $n_{tr}$ clauses (first in the graph representation of the input sentence). We mainly focus on the BERT \cite{devlin2018bert} and ALBERT \cite{lan2019albert} architectures. These two models are representative large transformer architectures for NLP tasks, and we observe that they exhibit intriguing behavioral differences on our tasks, which we detail in Section~\ref{sec:4}. See the appendix for training hyper-parameters and dataset construction details.
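For concreteness, here is a minimal, hypothetical sketch of how a length-$n$ Task 1 instance could be generated (the variable names and clause format follow the examples in this section; this is not the authors' released code, and it assumes $n\le26$ single-letter variables):

```python
import random

def make_lego_sentence(n, seed=0):
    """Build one LEGO Task 1 sentence of n clauses plus its ground-truth
    assignment: a root clause "a=+1" followed by a chain in which each new
    variable is +/- the previous one; the clause order is then shuffled."""
    rng = random.Random(seed)
    variables = [chr(ord("a") + i) for i in range(n)]  # assumes n <= 26
    clauses, labels = [f"{variables[0]}=+1"], {variables[0]: 1}
    for prev, cur in zip(variables, variables[1:]):
        sign = rng.choice("+-")
        clauses.append(f"{cur}={sign}{prev}")
        labels[cur] = labels[prev] if sign == "+" else -labels[prev]
    rng.shuffle(clauses)  # sentence order need not follow the chain order
    return "; ".join(clauses) + ";", labels
```

A token classifier is then trained to predict `labels` from the sentence, with supervision restricted to the first $n_{tr}$ variables of the chain.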
\begin{figure}[t] \centering \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.05\textwidth} \caption{} \end{minipage} \begin{minipage}[l]{0.95\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=13cm]{figures/result_shortcut.pdf} \end{minipage} \end{subfigure} \hrule \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.05\textwidth} \caption{} \end{minipage} \begin{minipage}[l]{0.95\textwidth} \centering \includegraphics[trim={0cm .5cm 0cm 0cm},clip, width=13cm]{figures/result_gen.pdf} \end{minipage} \end{subfigure} \vspace{-0.2cm} \caption{Solving LEGO (Task 1) using two standard transformer models -- BERT and ALBERT, trained from a random initialization. Each curve corresponds to test accuracy of a single variable appearing in the sentence over the course of training. The variable numbers in the legend correspond to their position in the reasoning chain (or graph representation) of the input sentence, rather than the position in the sentence itself. For example, on the input sentence: \texttt{b=-a;\,d=-c;\,c=+b;\,a=+1;}, variable $\# 0$ is \texttt{a}, $\#1$ is \texttt{b}, $\#2$ is \texttt{c}, and $\#3$ is \texttt{d}. Top: models are trained to fit all variables, i.e., $n=12, n_{tr}=12$. Bottom: models are trained to fit the first $6$ variables but are tested on all 12 variables, i.e., $n=12, n_{tr}=6$. Dashed curves represent variables unseen during training.} \label{fig:result_shortcut} \vspace{-0.3cm} \end{figure} In Figure~\ref{fig:result_shortcut}, we report initial results on LEGO with $n=12$ and $n_{tr}=6, 12$. Both BERT and ALBERT are able to achieve good classical generalization, while only ALBERT appears to generalize even to slightly longer sequence lengths. We observe similar behavior across different input lengths as well. This suggests that classical generalization might be a deceptive metric for evaluating the learning of true logic/reasoning tasks.
Motivated by these initial results, in the next section we focus on breaking down the learning dynamics of BERT and ALBERT for the LEGO task towards carefully understanding their strengths and weaknesses. \section{Shortcut solutions and their effect on generalization}\label{app:shortcut} As explained in Section~\ref{sec:shortcut}, we have observed that the randomly initialized models first learn a ``shortcut'' solution---predicting the last variable in the chain by counting the overall number of minus signs---instead of the ``common sense'' iterative solution. This can be seen in the top two plots in Figure~\ref{fig:result_shortcut}, where the accuracy of variable \#11 improves earlier than most of the other variables.\footnote{This behavior is not expected in the bottom two plots, since in the top ones the models are trained to fit all 12 variables including \#11, while in the bottom plots the models are only trained to fit the first 6 variables, which precludes learning the shortcut solution.} To be more precise, let us describe the two solutions in detail with an example. Consider the input: $\texttt{a=+1;\,d=-c;\,b=-a;\,c=b;}$ Initially, only the variable $\texttt{a}$ is resolved. The iterative solution identifies an unresolved variable that appears in the same clause with an already resolved variable, resolves it according to that clause, and repeats. In this example, it would resolve \texttt{b} to $-1$ by the clause \texttt{b=-a}, then resolve \texttt{c} to $-1$ by the clause \texttt{c=b}, and then resolve \texttt{d} to $1$ by the clause \texttt{d=-c}. The shortcut solution identifies an unresolved variable that appears only once, and resolves it to $1$ if the overall number of minus signs is even, and to $-1$ otherwise. In the above example, where \texttt{d} is the last variable in the reasoning chain, the shortcut solution correctly resolves it to $1$. 
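The two solutions above can be made concrete in a short sketch (pure Python; the clause encoding and function names are ours for illustration, and we write the clause \texttt{c=b} with an explicit \texttt{+} sign):

```python
def solve_iterative(clauses):
    """The 'common sense' solution: repeatedly resolve a variable whose
    right-hand side is already resolved, one clause at a time."""
    values, pending = {"1": 1}, list(clauses)
    while pending:
        ready = [c for c in pending if c[2] in values]   # clauses we can resolve now
        if not ready:
            break
        for lhs, sign, rhs in ready:
            values[lhs] = sign * values[rhs]
            pending.remove((lhs, sign, rhs))
    del values["1"]
    return values

def shortcut_last_variable(clauses):
    """The shortcut: the last variable in the chain is the one never used on a
    right-hand side; it equals +1 iff the total number of minus signs is even."""
    used_on_rhs = {rhs for _, _, rhs in clauses}
    last = next(lhs for lhs, _, _ in clauses if lhs not in used_on_rhs)
    minus_count = sum(1 for _, sign, _ in clauses if sign == -1)
    return last, 1 if minus_count % 2 == 0 else -1

# The example "a=+1; d=-c; b=-a; c=+b;" as (lhs, sign, rhs) triples.
clauses = [("a", 1, "1"), ("d", -1, "c"), ("b", -1, "a"), ("c", 1, "b")]
```

On this input the two agree on \texttt{d}: the iterative solver resolves all four variables, while the shortcut resolves only \texttt{d} (two minus signs, hence $+1$), and it breaks as soon as clauses are repeated or chains are disjoint.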
As further mentioned in Section~\ref{sec:shortcut}, the shortcut solution to LEGO may be related to the phenomenon of spurious features, where models learn to perform tasks in ways that circumvent the intended ``common sense'' solution a human would use. Such spurious solutions are often considered undesirable, as they are known to generalize poorly even to mild variants of the task. Indeed, the shortcut solution to LEGO is brittle even under simple variations of the problem: \begin{CompactItemize} \item Repeated clauses, e.g., \texttt{a=+1;\,b=-a;\,b=-a;} \item Redundant clauses, e.g., \texttt{a=+1;\,b=a;\,c=-a;\,c=-b;} \item Multiple jointly rooted reasoning chains, e.g., \texttt{a=+1;\,b=-a;\,c=-a;} \item Multiple disjoint reasoning chains, e.g., \texttt{a=+1;\,b=+1;\,c=-a;\,d=-b;} \end{CompactItemize} and more. In all of these settings the shortcut solution would fail, whereas the ``common sense'' iterative solution would succeed. This motivates us to empirically study the effect of the shortcut solution on the ability of the models to generalize. We pose the following questions: \begin{figure}[t] \centering \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.05\textwidth} \caption{} \end{minipage} \begin{minipage}[l]{0.95\textwidth} \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=13cm]{figures/5-12_gen.pdf} \end{minipage} \end{subfigure} \vspace{0.1cm} \hrule \vspace{0.1cm} \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.05\textwidth} \caption{} \end{minipage} \begin{minipage}[l]{0.95\textwidth} \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=13cm]{figures/5-wo_gen.pdf} \end{minipage} \end{subfigure} \caption{Learning the shortcut impedes generalization. (a) Train on variables \#0-\#4 and \#11. (b) Train on variables \#0-\#4 only. In both plots we test on all 12 variables. } \label{fig:shortcut-gen} \end{figure} \subsubsection*{Q1.
How does reliance on shortcut solutions affect the ability of the network to generalize?} A first indication that the shortcut solution is undesirable for LEGO can already be gleaned from Figure~\ref{fig:result_shortcut}: along with the early improvement in the accuracy of variable \#11 (which indicates that the shortcut solution is being learned), we observe a drop in the accuracy of some of the variables that were already learned (\#2 in ALBERT and \#3 in BERT). This may suggest that the shortcut solution impedes even classical generalization. Indeed, the models seem to ``realize'' this: the accuracy of \#11 drops before improving again together with the other variables, indicating that the shortcut solution is abandoned in favor of a solution that predicts all variables correctly. To gain insight into the effect of the shortcut solution on out-of-distribution generalization, we performed an experiment where the models are trained to fit the first five variables (\#0-\#4) and the last one (\#11), and are asked to predict all 12 variables at test time. This is different from the top plots in Figure~\ref{fig:result_shortcut}, where the model was trained to fit all 12 variables (and thus there is no out-of-distribution generalization to measure), and from the bottom plots in Figure~\ref{fig:result_shortcut}, where the model is trained to fit the first six variables (\#0-\#5), without \#11 (and thus learning the shortcut solution is not possible). The results of this experiment are reported in the top two plots in Figure~\ref{fig:shortcut-gen}. The bottom plots depict a control experiment where the models are trained to fit only the first five variables, without \#11 (and thus, again, learning the shortcut solution is not possible).
The results show that the models exhibit inferior out-of-distribution generalization (to variables \#5 in BERT and \#6 in ALBERT) when provided supervision for \#11 (top plots), even though they are given strictly more information during training than in the bottom plots, ostensibly making the task easier. We thus infer that the shortcut solution in LEGO has an adverse effect on out-of-distribution generalization. \begin{figure}[t] \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=13cm]{figures/mimicking_shortcut.pdf} \caption{Shortcut solutions are avoided via either pre-training or mimicking. } \label{fig:avoid-shortcut} \end{figure} \subsubsection*{Q2. What are effective ways to prevent models from learning shortcuts, and do they result in better generalization?} In Section~\ref{sec:4} we studied the effect of pretraining on LEGO, and observed that the pretrained BERT model exhibits much better out-of-distribution generalization than the randomly initialized BERT model. This naturally suggests that pretrained BERT avoids the shortcut solution. We confirm this experimentally in Figure~\ref{fig:avoid-shortcut} (top), where pretrained BERT and ALBERT are finetuned to fit all 12 variables. Indeed, we observe the accuracy of \#11 improving either later than or concurrently with all other variables, suggesting that the shortcut solution is not being learned. We speculate that this may have to do with the number of epochs it takes to learn the iterative solution: by the time the randomly initialized BERT has learned the shortcut solution, pretrained BERT already attains full accuracy on the entire chain of variables. Avoiding the shortcut solution may partly explain the superior out-of-distribution generalization performance of pretrained BERT over randomly initialized BERT, seen in Figure~\ref{fig:pretrain}.
Section~\ref{sec:4} also showed that our mimicking technique---which directly mimics the attention patterns of the pretrained model without training on any data---can recover much of the benefit of pretraining for LEGO. This extends to avoiding the shortcut solution as well: the bottom plots in Figure~\ref{fig:avoid-shortcut} show that the mimicking BERT and ALBERT models exhibit similar accuracy patterns to their pretrained counterparts, suggesting that the shortcut solution is not being learned by them. We will show in Section~\ref{sec:conv-lego} below that a certain convolutional modification of the transformer architecture is also capable of avoiding the shortcut and generalizing better to longer sequences. \section{Data generation and training details for the LEGO task} \subsection{Data generation} \label{sec:data-generation} We specify the data generation mechanism for Task 1 in the following. We use lowercase letters as variables, $A=\{a,b,c,d,\ldots, z\}$. Given $n$, we generate a sentence from our distribution $s\sim \mathcal{D}(n)$ as follows: \begin{CompactItemize} \item[1.] Sample $n$ variables $a_1,a_2,\ldots,a_n \in A$ and their corresponding assignments (or labels) $y_1,y_2,\ldots, y_n\in X$ uniformly, i.e., $\forall\,i\in[n]$, $a_i\sim\text{Unif}(A)$ and $y_i=\pm 1 \text{ w.p. } 0.5$. \item[2.] The $n$ clauses are then generated as $a_i=g_i a_{i-1}$ for $i=1,2,\ldots, n$, where $a_0=r=1$ and the group elements $g_1,g_2,\ldots, g_n\in G$ are uniquely chosen so that the clauses are consistent with the assignments $a_i=y_i$ for all $i\in[n]$. \item[3.] The sentence $s$ is generated by concatenating a uniformly random ordering of the $n$ clauses, each terminated by a semicolon `;'. Finally, the sentence is padded with \texttt{[BOS]} and \texttt{[EOS]} tags to denote the beginning and end of the sentence, respectively. See Figure~\ref{fig:data_sample_task1} for example sentences from our distribution.
\end{CompactItemize} \begin{figure}[htb] \centering \begin{verbatim} [BOS] j=-f; f=-b; y=+t; o=+e; d=+y; v=+d; h=-o; b=-i; i=+1; t=+l; e=-j; l=-h; [EOS] [BOS] j=+o; s=-y; p=-r; y=-m; u=-a; a=-f; k=+p; o=-k; q=+u; m=+1; f=+s; r=+q; [EOS] [BOS] z=+d; b=+1; m=+t; d=-u; u=-h; a=-b; j=+m; i=-j; t=+x; f=+i; h=-f; x=-a; [EOS] [BOS] j=-f; f=-b; y=+t; o=+e; d=+y; v=+d; h=-o; b=-i; i=+1; t=+l; e=-j; l=-h; [EOS] [BOS] w=+l; m=+c; c=-i; f=-d; p=-m; a=+b; y=-a; b=+p; i=+f; l=-v; d=+1; v=+y; [EOS] \end{verbatim} \caption{Samples of sentences generated from our distribution $\mathcal{D}(n)$ for Task $1$ with $n=12$. } \label{fig:data_sample_task1} \end{figure} \subsection{Training} \label{training}Our vocabulary for data generated as above thus consists of the variable symbols $a\in A$, the group operations $+,-\in G$, the root node $1\in X$, the equal sign `=', and the semicolon `;', along with the \texttt{[BOS]} and \texttt{[EOS]} tags. To apply transformers to the LEGO task, we convert each symbol in our vocabulary to a vector in $\mathbb R^d$ (referred to as a {\em token}) using a learnable linear embedding layer. Thus, a sentence of $n$ clauses now corresponds to an ordered list of $5n+2$ tokens; since the attention mechanism is itself permutation-invariant, the ordering is made available to the model through positional embeddings (see \cite{Vas17}). These tokens are processed iteratively by {\em transformer blocks}. Each transformer block maps the $5n+2$ tokens in $\mathbb{R}^d$ to another set of $5n+2$ tokens using a multi-head attention layer followed by a one-hidden-layer feedforward net (there are also residual connections and layer normalization; for full details of the architecture see \cite{Vas17}). We use \texttt{bert-base-uncased}\footnote{\url{https://huggingface.co/bert-base-uncased}} and \texttt{albert-base-v1}\footnote{\url{https://huggingface.co/albert-base-v1}} along with their pretrained weights from the open source Huggingface transformers library~\cite{wolf-etal-2020-transformers}.
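The three-step sampling procedure for $\mathcal{D}(n)$ above can be sketched as follows (a simplification: we sample the $n$ variables without replacement, which matches the example sentences; the function name is ours):

```python
import random
import string

def generate_sentence(n, seed=None):
    """Sample one Task 1 sentence: steps 1-3 of the generation procedure."""
    rng = random.Random(seed)
    variables = rng.sample(string.ascii_lowercase, n)   # step 1: distinct variables a_i
    labels = [rng.choice([+1, -1]) for _ in range(n)]   # step 1: assignments y_i
    clauses, prev_var, prev_label = [], "1", 1          # root clause uses a_0 = 1
    for var, label in zip(variables, labels):
        g = "+" if label == prev_label else "-"         # step 2: g_i consistent with y_i
        clauses.append(f"{var}={g}{prev_var}")
        prev_var, prev_label = var, label
    rng.shuffle(clauses)                                # step 3: random clause ordering
    sentence = "[BOS] " + " ".join(c + ";" for c in clauses) + " [EOS]"
    return sentence, dict(zip(variables, labels))
```

By construction, resolving the shuffled clauses always recovers exactly the sampled labels.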
The Rand-Init models have identical configurations to their pretrained counterparts but randomly initialized weights. Our training and test datasets are i.i.d. samples from our distribution $\mathcal{D}(n)$ as described above. We generate $10^4\times n$ and $10^3\times n$ datapoints for training and test, respectively, and sanity-checked that there is no overlap between train and test data. Recall that during training we provide supervision on the first $n_{tr}$ clauses appearing in the graph representation of the sentence, but test the accuracy on all $n$ variables at test time. Note that since the clause positions are randomized in our input sentence (e.g., Figure~\ref{fig:data_sample_task1}), the first $n_{tr}$ clauses in the graph representation can appear at arbitrary positions in the sentence, which allows the positional encodings across the full sequence to receive training signal. In all the LEGO experiments, we use the cross-entropy loss averaged over the $n_{tr}$ supervised clauses as our sample loss during training. We train for $200$ epochs using the Adam optimizer with a batch size of $1000$ samples, $5\times10^{-5}$ learning rate, $\beta_1=0.9$, $\beta_2 = 0.999$, $\epsilon=1\times10^{-8}$, and a cosine learning rate schedule with $T_{\max}=200$. For tokenization, we use the pretrained BERT tokenizer for all the experiments, which merely converts the input symbols into integers that are used as token ids. Each run is conducted on a cluster with $4$ A100 GPUs. \paragraph{A note on variance of training across LEGO tasks} When training BERT and ALBERT models for the LEGO task using our experimental setup, we see non-trivial variance in absolute test accuracies across different runs. In Figure~\ref{fig:variance}, we show three different runs of BERT and ALBERT models trained on LEGO tasks of length $n=12$ (the configuration used in most results in the paper).
While we see that the absolute values of the test accuracies vary significantly, the qualitative observations made in our paper hold across all the runs: importantly, across all the runs, we see that using the iterative ALBERT architecture, as well as pretraining the non-iterative BERT architecture, leads to better generalization to unseen lengths. This shows that the conclusions derived in our paper hold despite the variance across runs. Methodologically, we attribute the variance in the absolute test accuracy to the relatively small size of our datasets (e.g., the number of training examples for $n=12$ is $120$K) for training standard language models, which are otherwise trained on hundreds of millions of tokens. Furthermore, note that in our experiments the variance across runs arises both from having different train and test datasets as well as from different random seeds for model initialization and the training algorithm. \begin{figure}[htb] \centering \begin{minipage}[l]{0.9\textwidth} \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.05\textwidth} \caption{} \end{minipage} \begin{minipage}[l]{0.95\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=13cm]{figures/n=12_run0_nolegend.pdf} \end{minipage} \end{subfigure} \vspace{0.1cm} \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.05\textwidth} \caption{} \end{minipage} \begin{minipage}[l]{0.95\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=13cm]{figures/n=12_run1_nolegend.pdf} \end{minipage} \end{subfigure} \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.05\textwidth} \caption{} \end{minipage} \begin{minipage}[l]{0.95\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=13cm]{figures/n=12_run2_nolegend.pdf} \end{minipage} \end{subfigure} \end{minipage} \begin{minipage}[l]{0.05\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=1.5cm]{figures/legend.pdf} \end{minipage} \caption{Three sample runs of models trained on
LEGO tasks with $n=12$ and $n_{tr}=6$. We observe that while there is variance across different runs of the models, the qualitative conclusions stated in the paper hold for all the runs, i.e., iterative ALBERT models and pretraining lead to better generalization to unseen task lengths. } \label{fig:variance} \end{figure} \subsection{The Mimicking procedure} \label{sec:mimic} Starting with a randomly initialized transformer model, we would like to make sure that some of its attention heads implement the manipulation and association functionalities \emph{prior to} fine-tuning. To do so, we craft the desired attention patterns of a manipulation head and an association head, and train the model with gradient descent to match them. Specifically, given a random input $x\in\mathbb R^{T}$ of length $T$, we hard code the following matrix $M\in\mathbb R^{T\times T}$ as the target attention pattern for the manipulation head\footnote{Here we choose the Gaussian filter $[1,2,4,2,1]$ for illustration. In practice, we find that the final performance is robust to various pattern choices as long as they are localized and shift-invariant.}, and derive from the input $x$ the following matrix $A\in\mathbb R^{T\times T}$ whose $A_{i,j}$ entry indicates whether $x_i$ and $x_j$ are identical tokens. Note that we further specify that $A$ has a zero diagonal, in observance of the association head in Figure~\ref{fig:attention_heads} having a vanishing diagonal. In reality, attention maps have unit row sums, so we normalize $M$ and $A$ accordingly to obtain $\Tilde{M}$ and $\Tilde{A}$ such that their rows $\Tilde{M}[t,:]$ and $\Tilde{A}[t,:]$ are valid distributions.
\begin{align*} M = \begin{bmatrix} &\ddots &\ddots& \ddots& & & & & & \\ & 1 & 2 & 4 & 2 & 1 & & & & \\ & & 1 & 2 & 4 & 2 & 1 & & & \\ & & & 1 & 2 & 4 & 2 & 1 & & \\ & & & & & \ddots & \ddots & \ddots & & \end{bmatrix},~~~ A_{ij}=\begin{cases}1~,~~~\text{if}~ x_i=x_j~\text{and}~i\neq j\\ \\ 0~,~~~\text{otherwise}\end{cases} \end{align*} At every layer, we randomly appoint two attention heads to mimic the manipulation and association operators, while leaving the other heads unconstrained. Upon seeing an input sequence $\mathbf{x}\in\mathbb R^T$, we denote the attention maps of the appointed heads at the $l$-th layer as $\mathtt{Attn}_0^{(l)}(\mathbf{x}), \mathtt{Attn}_1^{(l)}(\mathbf{x})\in\mathbb R^{T\times T}$. For the mimicking objective, we draw input sequences $\mathbf{x}$ whose tokens are independent and uniform over the vocabulary, and then compute the Kullback–Leibler divergence between each row of $\mathtt{Attn}_0^{(l)}(\mathbf{x}), \mathtt{Attn}_1^{(l)}(\mathbf{x})$ and the corresponding rows of $\Tilde{M}$, $\Tilde{A}$. Thus the overall mimicking loss is \begin{align*} L_{\mathtt{mimic}} = \underset{\text{rand. seq.}~ \mathbf{x}}{\mathbb E}\left[\sum_{l=0}^{L-1} \frac{1}{T}\sum_{t=1}^{T} \left[\mathbf{KL}\left(\mathtt{Attn}_0^{(l)}(\mathbf{x})[t, :] \bigg\|\Tilde{M}[t, :]\right) + \mathbf{KL}\left(\mathtt{Attn}_1^{(l)}(\mathbf{x})[t, :] \bigg\|\Tilde{A}[t, :]\right) \right]\right] \end{align*} Note that the above mimicking loss pertains to only two attention heads per layer; all the remaining heads are left unconstrained. The mimicking procedure then boils down to updating the transformer model's parameters to minimize $L_{\mathtt{mimic}}$. We find that a vanilla Adam optimizer drives the mimicking loss down to near zero in a negligible amount of time compared to large-scale pre-training, even for large models such as BERT and ALBERT.
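For concreteness, the two normalized target patterns and a single KL term of $L_{\mathtt{mimic}}$ can be sketched as follows (NumPy; the $\epsilon$-smoothing and the handling of all-zero rows of $\Tilde{A}$ are our simplifications, not taken from the procedure above):

```python
import numpy as np

def manipulation_target(T, filt=(1, 2, 4, 2, 1)):
    """Row-normalized local pattern (M-tilde) built from the filter [1,2,4,2,1]."""
    M = np.zeros((T, T))
    half = len(filt) // 2
    for i in range(T):
        for off, w in zip(range(-half, half + 1), filt):
            if 0 <= i + off < T:
                M[i, i + off] = w
    return M / M.sum(axis=1, keepdims=True)  # every row contains the center weight, so sums > 0

def association_target(tokens):
    """Row-normalized identical-token pattern (A-tilde) with a zero diagonal."""
    x = np.asarray(tokens)
    A = (x[:, None] == x[None, :]).astype(float)
    np.fill_diagonal(A, 0.0)
    # rows with no repeated token are left all-zero (a simplifying assumption)
    return A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)

def kl_rows(attn, target, eps=1e-9):
    """(1/T) * sum_t KL(attn[t,:] || target[t,:]) -- one summand of L_mimic."""
    return float(np.mean(np.sum(attn * (np.log(attn + eps) - np.log(target + eps)), axis=1)))
```

Summing `kl_rows` over the two appointed heads at every layer, and averaging over random token sequences, recovers the structure of $L_{\mathtt{mimic}}$.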
\section{Details of the hybrid convolutional transformer experiments} \subsection{Architecture details} In the original multi-head attention module of transformers, we have four matrices $W_Q, W_K, W_V, W_O\in\mathbb{R}^{d\times d}$ as learnable parameters. On a token sequence $h_{in}\in\mathbb R^{d\times T}$, the module computes the output as \begin{align*} h_{out} = W_O ~\mathbf{MHA}\left(W_Q h_{in}, W_K h_{in}, W_V h_{in}\right) \end{align*} where $\mathbf{MHA}:\mathbb R^{d\times T}\times\mathbb R^{d\times T}\times\mathbb R^{d\times T}\rightarrow \mathbb R^{d\times T}$ is the multi-head attention function that takes query, key, and value as inputs. In our proposed convolutional attention module, we keep all the existing components of the original multi-head attention module but introduce three depth-wise temporal convolution operators $\mathtt{DepthConv}_Q,$ $\mathtt{DepthConv}_K,$ $\mathtt{DepthConv}_V$, each with filters of length $k$, applied before the $W_Q, W_K, W_V$ maps. Unlike vanilla full convolutions, a depth-wise convolution treats each dimension of the input separately, i.e., it applies a single convolutional filter along the time axis for each dimension of the input sequence\footnote{We implement this using the PyTorch \texttt{torch.nn.Conv1d} module with the number of groups set equal to the number of input channels.}. As a result, each depth-wise convolution operator contains $d\times k$ learnable parameters. Overall, the convolutional attention module computes the output as \begin{align*} h_{out} = W_O ~\mathbf{MHA}\left(W_Q \mathtt{DepthConv}_Q(h_{in}), W_K \mathtt{DepthConv}_K(h_{in}), W_V \mathtt{DepthConv}_V(h_{in})\right) \end{align*} Notably, the depthwise convolution modules add relatively few extra parameters to the model, since each of the $W_Q, W_K, W_V, W_O$ matrices already contains $d\times d$ learnable parameters.
This provides a significant parameter count advantage over full convolution operators, which would have $d\times d\times k$ parameters. In our experiments, we find that full convolutions usually lead to slightly better results but make the models' parameter counts significantly larger. Thus we choose depth-wise convolutions as a trade-off. \subsection{Hybrid convolutional transformers on LEGO} \label{sec:conv-lego} \begin{figure}[htb] \centering \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.05\textwidth} \caption{} \end{minipage} \begin{minipage}[l]{0.95\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=13cm]{figures/conv_shortcut.pdf} \end{minipage} \end{subfigure} \vspace{0.1cm} \hrule \vspace{0.1cm} \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.05\textwidth} \caption{} \end{minipage} \begin{minipage}[l]{0.95\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=13cm]{figures/conv_gen.pdf} \end{minipage} \end{subfigure} \caption{Hybrid convolutional models on LEGO Task 1: (a) $n=n_{tr}=12$ (classical generalization), and (b) $n=12$, $n_{tr}=6$ (generalization to unseen lengths). We see that hybrid convolutional transformers avoid shortcuts and (almost) match the performance of pretrained models.} \label{fig:conv-lego} \end{figure} As a sanity check, we evaluate the convolutional variants of BERT and ALBERT trained from random initialization on the LEGO task. In Figure~\ref{fig:conv-lego}, we report the results for both classical generalization and out-of-distribution generalization. We find that, despite being randomly initialized, the convolutional models closely resemble the behavior of pretrained models in that they avoid shortcut solutions and generalize better to longer sequences than their non-convolutional, randomly initialized counterparts. This is further strong evidence supporting our hypothesis on the role of pretraining.
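A minimal NumPy sketch of the depth-wise temporal convolution used in the hybrid module (equivalent, up to initialization and bias terms, to PyTorch's \texttt{Conv1d} with \texttt{groups} equal to the channel count): a depth-wise operator with filter length $k$ carries $d\cdot k$ weights, versus $d^2 k$ for a full temporal convolution.

```python
import numpy as np

def depthwise_conv1d(h, filters):
    """Depth-wise temporal convolution with 'same' zero padding.

    h: (d, T) token sequence; filters: (d, k), one length-k filter per channel.
    """
    d, T = h.shape
    k = filters.shape[1]
    pad = k // 2
    hp = np.pad(h, ((0, 0), (pad, pad)))
    out = np.zeros_like(h)
    for c in range(d):                 # channels never mix: this is what makes it depth-wise
        for t in range(T):
            out[c, t] = hp[c, t:t + k] @ filters[c]
    return out

d, k = 16, 5
params_depthwise = d * k               # one k-tap filter per channel
params_full = d * d * k                # a full temporal conv mixes every channel pair
```

In the module above, $\mathtt{DepthConv}_Q$, $\mathtt{DepthConv}_K$, and $\mathtt{DepthConv}_V$ would each be one such operator applied to $h_{in}$ before the corresponding projection matrix.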
\subsection{The restricted C dataset and training hyper-parameters} We evaluated the convolutional version of the BERT architecture with various filter sizes on the task of executing C programs (see Section~\ref{sec:hybrid} and Figure~\ref{fig:cprog}). The details of the training and dataset are provided below. We use the open source restricted C dataset from \cite{chen2021latent}, consisting of $500$K/$1$K/$1$K training/validation/test snippets of C programs. Each program is also provided with $5$ pairs of input and output arrays of $5$ variables. Both the input values and final values are guaranteed to be integers between $-4$ and $4$ inclusive. For our purpose, we consider the task of predicting the variables' values in the output array, given the input array and the text of the program itself, largely resembling the LEGO task. We treat each pair of input and output arrays as an independent sample, even though different pairs may come from the same program. Since the output variables can only take $9$ different values (integers between $-4$ and $4$), we use a $9$-way softmax classifier to generate predictions. For all the experiments on the restricted C dataset, we train all the models from random initialization for $20$ epochs using the Adam optimizer with a batch size of $500$ samples, $5\times10^{-5}$ learning rate, $\beta_1=0.9$, $\beta_2 = 0.999$, $\epsilon=1\times10^{-8}$, and a cosine learning rate schedule with $T_{\max}=20$. For each configuration, we conduct $3$ independent runs, and report the standard deviations as error bars. We present the results in Figure~\ref{fig:cprog} in the main paper. \section{Effect of length of LEGO chains and depth of model} All our experiments in the main paper were on a typical instance of the LEGO task with chains of length $n=12$ and standard BERT and ALBERT models of depth $D=12$. In this appendix, we briefly explore the effect of the LEGO chain length $(n,n_{tr})$ and the transformer model depth $D$.
The chain structure of information flow in LEGO Task 1 suggests that for a transformer network of depth $D$, learning and generalization on the LEGO task should crucially depend on the maximum chain length $n$. If $n$ (or, more importantly, $n_{tr}$) is too small, the training data might not contain enough information to guide generalization to longer lengths; on the other hand, if $n$ is too large, the model might not be able to propagate information along the chain in a natural iterative manner. For example, implementing a ``natural'' iterative algorithm of resolving one clause of the task at a time (as described in the beginning of Appendix~\ref{app:shortcut}) would require models of depth $D\ge n$. We study this behavior by first repeating the experiments from our main paper with depth $D=12$ models on LEGO tasks of varying lengths $n$. In all cases, we proportionally increase the length of the chain for which supervision is provided by training on the first $n_{tr}=n-6$ clauses in the chain. In Figure~\ref{fig:varying-n}, we show the performance of a typical run of transformer models in this setting. When trained on short chains with $n=8$ and $n_{tr}=2$, we indeed see that all the models, including pretrained BERT, are able to learn the short training lengths (classical generalization), but the supervision does not contain enough information for the models to learn to generalize to unseen lengths. On the other hand, for larger chain lengths we see that generalization only gets better, with a very strong monotonic trend for pretrained BERT models. In particular, the strong generalization performance at $n=20$ with $n_{tr}=14$ suggests that these models might not really be implementing the ``natural'' iterative solution for the task. Rather, it is possible that training on longer sequences leads the models to learn a more compact representation that nevertheless generalizes remarkably well to much longer lengths than seen during training.
\begin{figure}[H] \centering \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.1\textwidth} \caption{$n=8$} \end{minipage} \begin{minipage}[l]{0.85\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=14cm]{figures/n=8_run0.pdf} \end{minipage} \end{subfigure} \vspace{0.1cm} \hrule \vspace{0.1cm} \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.10\textwidth} \caption{$n=12$} \end{minipage} \begin{minipage}[l]{0.85\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=14cm]{figures/n=12_run0.pdf} \end{minipage} \end{subfigure} \vspace{0.1cm} \hrule \vspace{0.1cm} \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.1\textwidth} \caption{$n=16$} \end{minipage} \begin{minipage}[l]{0.85\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=14cm]{figures/n=16_run2.pdf} \end{minipage} \end{subfigure} \vspace{0.1cm} \hrule \vspace{0.1cm} \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.1\textwidth} \caption{$n=20$} \end{minipage} \begin{minipage}[l]{0.85\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=14cm]{figures/n=20_run0.pdf} \end{minipage} \end{subfigure} \caption{Generalization performance of BERT and ALBERT models (depth $D=12$) trained on LEGO task of varying chain lengths $n$: (a) $n=8$, (b) $n=12$, (c) $n=16$, (d) $n=20$. } \label{fig:varying-n} \end{figure} Complementing our results on varying the length of LEGO chains, we also look at how BERT and ALBERT architectures of smaller depth $D<12$ learn our LEGO task of length $n=12$. In Figure~\ref{fig:varying-depth} we show the results of models trained from random initialization (we do not have pre-trained models at these depths). Here we do see the trend that larger depth improves generalization. 
In particular, for a task of chain length $n$, there does appear to be a minimum threshold of $D$ at which the models learn to generalize even in the classical sense (on lengths the models were trained on), but this relationship appears sub-linear rather than linear with $D\ge n$ as suggested by the ``natural'' iterative algorithm. It would be of interest in future studies to explore this relation between depth and length further. \begin{figure}[H] \centering \begin{minipage}[l]{0.75\textwidth} \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.2\textwidth} \caption{$D=2$} \end{minipage} \begin{minipage}[l]{0.75\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/D=2_run0.pdf} \end{minipage} \end{subfigure} \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.2\textwidth} \caption{$D=4$} \end{minipage} \begin{minipage}[l]{0.75\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/D=4_run0.pdf} \end{minipage} \end{subfigure} \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.2\textwidth} \caption{$D=8$} \end{minipage} \begin{minipage}[l]{0.75\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/D=8_run0.pdf} \end{minipage} \end{subfigure} \begin{subfigure}{\textwidth} \begin{minipage}[r]{0.2\textwidth} \caption{$D=12$} \end{minipage} \begin{minipage}[l]{0.75\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/D=12_run0.pdf} \end{minipage} \end{subfigure} \end{minipage} \begin{minipage}[l]{0.20\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=1.5cm]{figures/legend.pdf} \end{minipage} \caption{Rand-Init BERT and ALBERT models of varying depth $D$ trained on LEGO tasks of length $n=12$.} \label{fig:varying-depth} \end{figure} \section{LEGO Task 2: dihedral group} As a generalization of the main task (i.e., LEGO Task 1) analyzed so far, we present LEGO Task 2 of learning the dihedral
group\footnote{\url{https://en.wikipedia.org/wiki/Dihedral_group}} $D_3$ of order $6$, which is isomorphic to the symmetric group $S_3$. Note that LEGO Task 1 can be viewed as learning the dihedral group $D_1$ of order $2$. Clearly, the shortcut solution described in Section~\ref{app:shortcut} is not valid here. We repeat the out-of-distribution generalization experiments on Task 2 with the exact same model configurations and training hyper-parameters as Task 1 (see Section~\ref{training}). For dataset creation, we largely follow the pipeline detailed in Section~\ref{sec:data-generation}, with group elements from $D_3$ for which we create corresponding tokens. The only modification here is that the labels are categorical with $6$ classes, since every variable in Task 2 may take $6$ candidate values. \begin{figure}[htb] \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=15cm]{figures/mimic_dihedral.pdf} \caption{Out-of-distribution generalization on LEGO Task 2. Task 2 appears to be significantly more challenging than Task 1, and pretraining plays a more important role on top of rand-init models, while the mimicking procedure is able to match and even outperform pretraining.} \label{fig:dihedral} \end{figure} In Figure~\ref{fig:dihedral}, we show these preliminary results and observe that Task 2 is indeed more challenging than Task 1, and the extent to which pretraining provides benefits is noticeably larger. Without further hyper-parameter tuning, the randomly initialized models even face optimization issues in fitting the training labels. Interestingly, we also find that our proposed mimicking procedure introduced in Section~\ref{sec:mimic} is not only able to match pretraining's performance but even to outperform it. So far, none of the models here is capable of non-trivially generalizing to more than one extra variable.
It is an important future direction to search for suitable adaptations to the current transformer architectures/training algorithms that can eventually solve this task. \input{discussion} \section{Discussion of limitations} Our current work focuses on a synthetic task for logical reasoning. Even though we have gained valuable insights into Transformer models' behavior, it remains to be shown whether (or how much of) these insights carry over to general natural language tasks. Furthermore, so far we have used two representative Transformer architectures, namely BERT and ALBERT, throughout our investigation. It is well-known that these models are trained with the masked token prediction objective, and thus they tend to behave differently from models trained with a next-token prediction objective, such as the GPT models~\cite{brown2020language}. It would be interesting to see how these two types of Transformers perform differently on LEGO as well as on other logical reasoning tasks. These two main limitations shed light on directions for future work that we believe are of great importance.
\newpage \section{Attention maps of pretrained BERT on LEGO} \begin{figure}[H] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/attention_maps/attention_layer_0.pdf} \caption*{layer 0} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/attention_maps/attention_layer_1.pdf} \caption*{layer 1} \end{subfigure} \hfill \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/attention_maps/attention_layer_2.pdf} \caption*{layer 2} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/attention_maps/attention_layer_3.pdf} \caption*{layer 3} \end{subfigure} \hfill \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/attention_maps/attention_layer_4.pdf} \caption*{layer 4} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/attention_maps/attention_layer_5.pdf} \caption*{layer 5} \end{subfigure} \label{fig:all-attention} \end{figure} \begin{figure}[H] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/attention_maps/attention_layer_6.pdf} \caption*{layer 6} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/attention_maps/attention_layer_7.pdf} \caption*{layer 7} \end{subfigure} \hfill \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/attention_maps/attention_layer_8.pdf} \caption*{layer 8} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip,
width=8cm]{figures/attention_maps/attention_layer_9.pdf} \caption*{layer 9} \end{subfigure} \hfill \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/attention_maps/attention_layer_10.pdf} \caption*{layer 10} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=8cm]{figures/attention_maps/attention_layer_11.pdf} \caption*{layer 11} \end{subfigure} \end{figure} \makeatother \section{Shortcut solutions in learning with Transformers} \label{app:shortcut} \begin{figure}[H] \vspace{-0.5cm} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=12cm]{figures/n=2_shortcut_rand.pdf} \caption{Shortcut solutions. In randomly initialized models, the prediction for the last variable (\# 11) improves faster than any of the intermediate variables, suggesting these models find a shortcut by counting the number of \textquoteleft +\textquoteright~ and \textquoteleft -\textquoteright ~tokens appearing in the input sequence and implementing a parity function.} \label{fig:shortcut-rand} \end{figure} \subsection{Pre-training avoids shortcut solutions} \begin{figure}[H] \vspace{-0.5cm} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=12cm]{figures/n=2_shortcut_pretrained.pdf} \caption{Finetuning pre-trained Transformers avoids shortcut solutions. Starting with pre-trained models, learning progresses sequentially along the assignment chain, in sharp contrast to learning with randomly initialized models shown in Figure~\ref{fig:shortcut-rand}.} \label{fig:shortcut-pretrained} \end{figure} \begin{figure}[H] \vspace{-0.5cm} \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=12cm]{figures/attention_eyeball.pdf} \caption{Visualization of attention maps from a pre-trained but not yet fine-tuned ALBERT model's first layer.
On a LEGO ($N=2$) input sequence unseen by the model, certain heads tend to implement local, convolution-like operators (heads 2 and 3), while some others implement global, association operators (heads 7 and 10).} \label{fig:gen_longer_seq} \end{figure} \section*{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{} \item Did you describe the limitations of your work? \answerYes{} \item Did you discuss any potential negative societal impacts of your work? \answerNA{} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerYes{In Figure~\ref{fig:cprog}, we provide error bars from 3 independent runs per configuration. The other experimental results are qualitative and robust to various hyper-parameter settings. } \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{} \item Did you mention the license of the assets? \answerYes{} \item Did you include any new assets either in the supplemental material or as a URL? 
\answerYes{} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerYes{} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{} \end{enumerate} \end{enumerate} \section{Experiments with GPT-3} We tried zero-shot learning with GPT-3 on the OpenAI website\footnote{\url{https://beta.openai.com/playground/p/default-grammar}}. The red text is the completion given by the ``text-davinci-002'' model with Temperature $=0$ and Top P $=0$: \begin{Verbatim}[commandchars=\\\{\}]
a=+1; b=-a; c=-b; d=+c; e=+d; f=-e; g=-f; h=+g; i=+h; j=-i; k=-j; l=+k; m=+l; n=-m; o=+n; p=-o; q=-p; r=+q; s=+r; t=-s; u=+t; v=-u; w=-v; x=+w; y=+x; z=-y; a=1; b=-1; c=1; d=1; e=\textcolor{red}{1; f=-1; \underline{g=-1}; h=1; i=1; j=-1; k=-1; l=1; m=1;}
\textcolor{red}{n=-1; o=1; p=-1; q=-1; r=1; s=1; t=-1; u=1; v=-1; w=-1; x=1; y=1; z=-1}
\end{Verbatim} Note that the variable \texttt{g} is computed incorrectly (it is the first variable computed incorrectly in the chain of variables). Next, we added the text ``\texttt{Compute the variables from left to right:}'' in front of the previous prompt: \begin{Verbatim}[commandchars=\\\{\}]
Compute the variables from left to right: a=+1; b=-a; c=-b; d=+c; e=+d; f=-e; g=-f; h=+g; i=+h; j=-i; k=-j; l=+k; m=+l; n=-m; o=+n; p=-o; q=-p; r=+q; s=+r; t=-s; u=+t; v=-u; w=-v; x=+w; y=+x; z=-y; a=1; b=-1; c=1; d=1; e=\textcolor{red}{1; f=-1; g=1; h=1; i=1; j=-1; k=1; l=1; m=1;}
\textcolor{red}{n=-1; \underline{o=1}; p=-1; q=1; r=1; s=-1; t=1; u=1; v=-1; w=1; x=1; y=-1; z=1}
\end{Verbatim} This improves the performance, but the variable \texttt{o} is computed incorrectly (it is the first variable computed incorrectly in the chain of variables). Adding example computations (few-shot learning) did not help to improve the performance. There are three significant differences between our setup and GPT-3.
First, GPT-3 doesn't require any finetuning or training on our synthetic reasoning task. Second, GPT-3 is an entirely decoder-based architecture with a causal attention mask, while all our architectures have bidirectional attention. Third, the text generation done by GPT-3 proceeds in a loop, executing the model and generating tokens in a sequential manner, whereas our models generate the token embeddings all at once. This makes the generation process of GPT-3 more powerful, as it doesn't need to compute the values of the variables all at once but might do that sequentially. \section{Unveiling transformers with LEGO} \label{sec:4} \subsection{BERT vs.
ALBERT: Iterative reasoning in iterative architectures}\label{sec:iterative} A salient feature of many reasoning tasks is an iterative component, meaning they can (or must) be solved by sequentially repeating certain operations. In this section, we use LEGO to study and compare transformer architectures through the lens of iterative reasoning. A natural solution to LEGO---and arguably the go-to solution for a human---is to implement a ``for loop'', where each iteration resolves one step in the reasoning chain. The iteration could look for the next unresolved variable token whose value could be resolved in one step. Iterative transformer architectures such as ALBERT, where the weights are shared across different layers, inherently implement a for loop with a number of iterations equal to the number of layers. If the model manages to learn to implement one such iteration during training, the network would immediately be capable of out-of-distribution generalization, solving longer sequences than the ones it has been trained on. If this indeed occurs, it would point to a clear advantage of ALBERT over BERT in our setting. This leads to the following three questions, addressed in turn below: \subsubsection*{Q1. Do iterative architectures indeed exhibit better out-of-distribution generalization?} The bottom plots of Figure \ref{fig:result_shortcut} display the out-of-distribution generalization result for BERT and for ALBERT. They show the clear advantage of recurrence: while the non-iterative BERT achieves only somewhat better-than-random accuracy for one variable (\#6) beyond the ones accounted for during training (\#0--\#5), the iterative ALBERT reaches near-perfect accuracy on two additional variables (\#6 and \#7), and nontrivial accuracy on a third (\#8). These results clearly support that iterative architectures do generalize better in the iterative LEGO reasoning task. \subsubsection*{Q2.
To what extent does the ALBERT architecture actually implement the above for loop?} To a lesser extent, Figure~\ref{fig:result_shortcut} also hints at a positive answer to \textbf{Q2}. Observe that ALBERT exhibits out-of-distribution generalization to variable \#6 immediately (in terms of epochs) as soon as it fits the training variables (\#0 -- \#5), whereas for BERT, the corresponding plot (\#6) climbs gradually even after the training variables are predicted perfectly. This \emph{Eureka moment} behavior of ALBERT suggests that once it manages to learn the operations required for one step of reasoning, it can immediately implement those operations over a larger number of iterations not required in training. In order to gain stronger evidence regarding \textbf{Q2}, we designed an experiment attempting to gauge the dependence between the location of a variable token in the chain and the layer in which its value is typically resolved. To this end, given a trained model, we train one linear classifier per layer whose purpose is to predict the value of a variable token based only on its token representation at the corresponding layer (without using any other information), while keeping the original model unchanged. The function of the classifiers is therefore to probe the representations of resolved tokens at the intermediate layers. This allows us to gauge which variables are already resolved at each layer, and thus observe the rate at which information percolates along the reasoning chain, in terms of layers per reasoning step. If the model indeed implements a for loop, one expects a linear relation between the number of layers and the number of reasoning steps already completed. \begin{figure}[t] \centering \includegraphics[trim={7cm 0cm 0cm 2cm},clip, width=15cm]{figures/percolation_modified.pdf} \caption{Visualization of information percolation within the fine-tuned models. For a fine-tuned model, we associate every layer with a new binary linear classifier.
At each layer, we apply the associated classifier to the intermediate representation of a variable token from the input sequence, and we train the classifiers at all layers jointly to predict the variables' labels, while keeping the model parameters intact. Color indicates test accuracy of each classifier. Brighter is higher. } \label{fig:perco} \end{figure} We depict the results in Figure~\ref{fig:perco}, visualizing the test accuracy of prediction as a function of the layer in the network and depth in the chain. While not perfectly linear in either case, the relation clearly looks closer to linear in ALBERT, suggesting that the for loop which is (by definition) implemented by the architecture of the model is in fact aligned with the iterations of the ``natural'' for loop associated with the problem. {However, the fact that the ALBERT model does not generalize to ``all'' test variables suggests that the relationship is not ``exact''.} \subsubsection*{Q3. In iterative reasoning tasks, how can we incentivize models to learn iterative solutions?} We attempt to incentivize the model to implement the ``natural'' for loop solution. We rely on the observation that if each iteration of the for loop simply percolates the information one more step (assigning a value to the next variable), then adding more layers with the same weights should not affect the output; in fact, one should be able to read out the output of the calculation from any layer of the neural network, as long as its depth exceeds the length of the chain. Moreover, once the computation has resolved the values of the variable tokens, the tokens should be a fixed point of the iteration, meaning that adding more layers (with the same parameters) should not affect the output. With this observation in mind, we can think of the network as having variable depth, chosen at random independently of the input.
We train an ALBERT model where the depth is fixed, but the loss function is taken to be a mixture of the losses corresponding to testing the output at different layers, similar to stochastic depth from \cite{huang2016deep}. The results are depicted in Figure~\ref{fig:stochasticdepth}, which shows a clear improvement in out-of-distribution generalization using stochastic depth, suggesting that the modification of the loss function indeed incentivizes the model to implement the expected iteration. \begin{figure}[t] \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=13cm]{figures/stochastic_depth.pdf} \caption{Generalization of ALBERT trained with stochastic depth. At training time, depth is uniformly sampled from integers between 6 and 12 per mini-batch, while we fix depth to be 12 at test time.} \label{fig:stochasticdepth} \vspace{-0.5cm} \end{figure} \subsection{Rand-Init vs. Pretrained: Structural advantages emerging from pretraining} Pretraining large models has emerged as a prominent and highly successful paradigm in large-scale deep learning. It advocates first training the model on a large dataset to perform a generic task, followed by task-specific fine-tuning on the task at hand. Our goal in this section is to use LEGO as a testing ground for this paradigm. To this end, we compare (a) training the BERT architecture for LEGO from random initialization to (b) fine-tuning the standard pre-trained BERT model to solve LEGO. Figure \ref{fig:pretrain} (left and center plots) shows that pretraining helps generalization in LEGO dramatically: the pre-trained model generalizes to unseen sequence lengths (the dashed plots) much better, and within a far smaller number of epochs, than the randomly initialized model. We investigate the root causes of this advantage.
Since pre-trained transformer-based networks are demonstrably capable of performing reasoning tasks to a certain extent (see discussion in the appendix), one may postulate that the success of pre-trained BERT on LEGO is due to a reasoning capability acquired during pretraining. However, our findings will show that its success can be attributed---for the most part---not to ``reasoning'', nor to any actual data seen during pretraining, but rather to certain structural patterns of ``information transfer'' which are shared by many tasks. Indeed, we demonstrate that much of the advantage can be recovered by directly initializing the model to ``mimic'' those patterns explicitly, without pretraining and without seeing any prior data at all (see results in the right plot in Figure \ref{fig:pretrain}). Drawing on our findings, we study in Section~\ref{sec:hybrid} a hybrid transformer model that explicitly builds those patterns into the architecture. \begin{figure}[H] \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=14cm]{figures/mimicking_gen.pdf} \caption{Pre-trained BERT exhibits significant performance advantages over its Rand-Init counterpart, while the mimicking procedure (a simple initialization scheme we describe below) largely closes the gap.} \label{fig:pretrain} \end{figure} \textbf{Why does pretraining help in LEGO?} Since LEGO is a fundamental reasoning task, one possible explanation for the success of pre-trained BERT on this task is having acquired an innate ability to ``reason'' during pretraining. While this is hard to verify or disprove directly, one may come up with and test simpler explanations for this impressive performance. One simple explanation is that pre-trained BERT is already aware of the semantics of tokens like `=' or `-', and can immediately interpret them correctly upon encountering them in the LEGO task.
We have easily ruled out this possibility by replacing those tokens with arbitrary ones that do not carry the same semantic meanings; this does not affect the performance of pretrained BERT. A more intriguing explanation pertains to the attention mechanism itself. At its basis, LEGO requires two fundamental types of information transfer: \begin{itemize}[topsep=0pt,itemsep=0ex,partopsep=1ex,parsep=1ex, leftmargin=3ex] \item \textbf{\emph{Association:}} encoding long-range dependencies that transfer a value between two occurrences of the same variable. For example, if the input contains the two clauses ``$a=+1$'' and ``$b=-a$'' (with arbitrary separation between them), the architecture must associate the two occurrences of the variable $a$ in order to correctly set $b$ to $-1$. \item \textbf{\emph{Manipulation:}} encoding short-range dependencies that transfer a value from the right-hand side to the left-hand side of a clause. For example, to successfully process the clause ``$b=-a$'', the architecture must associate these particular occurrences of $a$ and $b$ with each other, in order to transfer the value of $a$ (after applying to it the group element $-1$) into $b$. \end{itemize} Notice that the two types correspond to different attention patterns. Association corresponds to a purely global attention pattern, completely reliant on the \emph{content} of the tokens and oblivious to their \emph{position} in the input sequence. Manipulation, in contrast, corresponds to a purely local attention pattern, where nearby positions attend to each other (note that feed-forward layers might be useful for manipulation too). The LEGO task crystallizes the roles of these fundamental patterns, so it is natural to ask whether they are indeed manifested in the transformer attention heads in practice.
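The two information-transfer patterns above can be made concrete as target attention matrices. The following is an illustrative sketch (the actual target matrices and mimicking loss used by our procedure are specified in the appendix); the function names are ours.

```python
def association_pattern(tokens):
    """Global, content-based pattern: each token attends (uniformly)
    to all other occurrences of the same token in the sequence."""
    n = len(tokens)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        matches = [j for j in range(n) if j != i and tokens[j] == tokens[i]]
        for j in matches:
            A[i][j] = 1.0 / len(matches)
    return A

def manipulation_pattern(n, width=1):
    """Local, position-based pattern: each token attends to its
    immediate neighbours (a banded, convolution-like matrix)."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        nbrs = [j for j in range(max(0, i - width), min(n, i + width + 1))
                if j != i]
        for j in nbrs:
            A[i][j] = 1.0 / len(nbrs)
    return A

tokens = list("a=+1;b=-a;")
A = association_pattern(tokens)
# The two occurrences of 'a' (positions 0 and 8) attend to each other:
assert A[0][8] > 0 and A[8][0] > 0
```

Association is purely a function of token content, while manipulation is purely a function of position, mirroring the global/local distinction above.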
\begin{figure}[t] \centering \includegraphics[trim={3cm 0cm 0cm 0cm},clip, width=15cm]{figures/attention_eyeball.pdf} \caption{Visualization of attention maps from a pre-trained BERT model not yet fine-tuned on LEGO. On a LEGO input sequence, certain heads implement local, convolution-like manipulation operators (left), while some others implement global, long-range association operators (right). More visualizations can be found in the appendix.} \label{fig:attention_heads} \end{figure} We answer in the affirmative. Fig.~\ref{fig:attention_heads} shows two selected attention heads in the first two layers of pre-trained BERT on an input LEGO sequence, without any fine-tuning. The right head clearly depicts association: each token attends to all other occurrences of the same token in the input sequence. The left one clearly depicts an attention pattern that facilitates learning manipulation: each token attends to the tokens immediately before and after it in the sequence. The other heads (shown in the appendix) are either similar to one of these two, or appear ``blank'', not exhibiting any clear pattern. It thus appears that pre-trained BERT has learned during pretraining to realize the association and manipulation patterns, as they indeed arise in many natural language tasks. We hypothesize that this is the explanation for the impressive performance of pretraining on the LEGO task. \textbf{Mimicking BERT.} To test this hypothesis, we craft a \emph{mimicking procedure} to directly initialize the attention heads to perform association and manipulation, without access to pretraining data. We achieve this by specifying the target attention matrices (one for association and one for manipulation), and training the model on random data to minimize a ``mimicking loss'' that measures how well the actual attention matrices at every layer match the target matrices. The precise mimicking loss and training protocol are specified in the appendix.
The results, depicted in the right plot in Figure \ref{fig:pretrain}, show that BERT with mimicking initialization attains significant advantage in generalization over randomly initialized BERT, despite not being pre-trained on any real data (and thus not having learned to ``reason''). This confirms that much of the advantage of pre-trained BERT stems from having learned these information transfer patterns. \subsection{Shortcut solutions and their effect on generalization} \label{sec:shortcut} As discussed in Section~\ref{sec:iterative}, the arguably natural solution to LEGO is to resolve variables iteratively by the order of their depth in the chain. Nonetheless, to our surprise, we found that the randomly initialized BERT and ALBERT models first learn a ``shortcut'' solution: they immediately resolve the \emph{last} variable in the reasoning chain, by counting the total number of minus signs. Indeed, the last variable can be easily identified as it appears only once (whereas every other variable appears twice), and its value is fully determined by the parity of the number of minus signs. This behavior is clearly seen in the top two plots in Figure~\ref{fig:result_shortcut}, in which the randomly initialized models are trained to fit all $12$ variables: the last variable (\#11) improves in accuracy earlier than almost all other ones. This behavior may be somewhat related to the well-observed phenomenon of \emph{spurious features}: a model trained to distinguish cows from boats could learn to identify cows by the grass around them, rather than relying on any actual features of cows, circumventing the intended solution. While possibly effective in fitting the training data, such solutions may interfere with generalization. We use LEGO as a case study of shortcut solutions and their effect on generalization. 
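For Task 1 the shortcut is easy to state in code. The sketch below uses a simplified clause format and hypothetical helper names (not our actual data pipeline), and contrasts the parity shortcut with the ``natural'' iterative resolution.

```python
def last_variable_shortcut(clauses):
    """Shortcut: in LEGO Task 1, the value of the last variable in the
    chain equals the product of all signs, i.e. it is fully determined
    by the parity of the number of '-' tokens in the input."""
    minuses = sum(rhs.startswith('-') for _, rhs in clauses)
    return -1 if minuses % 2 else +1

def resolve_chain(clauses):
    """The 'natural' iterative solution: repeatedly resolve any
    variable whose right-hand side is already known."""
    values, pending = {}, dict(clauses)
    while pending:
        for var, rhs in list(pending.items()):
            sign = -1 if rhs[0] == '-' else +1
            ref = rhs[1:]
            if ref == '1':            # root clause, e.g. a=+1
                values[var] = sign
            elif ref in values:       # one reasoning step
                values[var] = sign * values[ref]
            else:
                continue              # not resolvable yet
            del pending[var]
    return values

# Clauses may appear in any order; 'd' is the last link of the chain.
clauses = [('c', '-b'), ('a', '+1'), ('d', '+c'), ('b', '-a')]
assert last_variable_shortcut(clauses) == resolve_chain(clauses)['d'] == 1
```

The shortcut needs no chain-following at all, which is exactly why it can be learned before the intermediate variables are.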
Instead of training the model to fit the first six variables (as in the bottom of Figure~\ref{fig:result_shortcut}), we train it to fit the first five (\#0--\#4) and the last variable (\#11). This allows us to measure out-of-distribution generalization (to \#5--\#10) in a setting where models can learn the shortcut. The results show significantly degraded performance, implying that shortcut solutions impede generalization. We then study ways to prevent models from learning them, by pretraining and mimicking. The full section appears in the appendix. \section{Hybrid convolutional transformer architecture} \label{sec:hybrid} Our findings above suggest a natural modification to the transformer architecture. Of the two fundamental attention patterns---association and manipulation---only the latter depends on the positions of tokens in the sequence (while association is position-oblivious). Since this pattern appears crucial for LEGO, and indeed for many symbolic reasoning tasks, it seems helpful to encode it directly in the architecture, rather than letting the model learn it indirectly from positional encodings. This is akin to the way convolutional layers directly encode relations between adjacent pixels in their architecture. Since such ``local'' or convolutional attention patterns are likely valuable in virtually all natural language tasks, it is intriguing to also apply this idea to language models in general. We instantiate this idea by implementing the following hybrid between convolutional and transformer architectures. In each attention head, the key, query and value linear transformations are applied not just to each token on its own, but to the outcome of a one-dimensional convolution along the temporal dimension of the input sequence. To avoid an excessive number of additional parameters, we adopt depth-wise convolution~\cite{chollet2017xception}, i.e., one filter per input dimension. We give precise details in the appendix.
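The temporal depth-wise convolution can be sketched as follows. This is a pure-Python illustration for clarity only (the precise layer definition is in the appendix); filter values here are illustrative.

```python
def depthwise_conv1d(x, filters):
    """Depth-wise 1-D convolution along the temporal (sequence) axis:
    one filter per input dimension, 'same' zero padding.

    x       : list of T token embeddings, each of dimension d
    filters : d filters, each of odd length k
    """
    T, d = len(x), len(x[0])
    k = len(filters[0])
    pad = k // 2
    out = [[0.0] * d for _ in range(T)]
    for t in range(T):
        for c in range(d):            # each channel has its own filter
            acc = 0.0
            for j in range(k):
                s = t + j - pad
                if 0 <= s < T:        # zero padding outside the sequence
                    acc += filters[c][j] * x[s][c]
            out[t][c] = acc
    return out

# In the hybrid layer, the key/query/value projections are applied to
# this convolved sequence instead of the raw token embeddings.
x = [[1.0, 0.0], [2.0, 1.0], [3.0, -1.0]]        # T=3 tokens, d=2
filters = [[0.0, 1.0, 0.0], [1.0, 1.0, 1.0]]     # identity / local sum
y = depthwise_conv1d(x, filters)
assert y[1] == [2.0, 0.0]  # channel 0 unchanged; channel 1 sums neighbours
```

With one length-$k$ filter per channel, the layer adds only $k \cdot d$ parameters per head input, which is why the overall overhead stays small.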
We are aware of existing extremely similar, popularized ideas in computer vision~\cite{dai2021coatnet,liu2021Swin, liu2022convnet} and natural language processing~\cite{jiang2020convbert, Cordonnier2020On}. We intend to provide evidential motivations for its use in reasoning tasks rather than to claim novelty. On LEGO, modified BERT and ALBERT not only are capable of matching the performance of their pre-trained unmodified versions, but also avoid the shortcut solutions in Section~\ref{sec:shortcut}. \begin{figure}[H] \centering \begin{minipage}{0.45\textwidth} \begin{lstlisting}[breaklines] [BOS] a[0]=1; a[1]=-3; a[2]=2; a[3]=-2; a[4]=0; [SEP] int * func_1(int a[]){ int p_0 = 4; int l_7 = 2; ++a[l_7]; for (p_0 = 0; p_0 <= 2; p_0++) { a[p_0] = 0; for (int p_1 = 3; p_1 >= 2; p_1--) { a[p_1]--;} } return a; } [EOS] [BOS] a[0]=-4; a[1]=1; a[2]=3; a[3]=0; a[4]=1; [SEP] int * func_1(int a[]) { int p_0 = 1; int l_9 = 1; a[l_9] = (0 * a[p_0]); return a; } [EOS] [BOS] a[0]=0; a[1]=3; a[2]=-1; a[3]=4; a[4]=-2; [SEP] int * func_1(int a[]) { int p_0 = 0; int l_9 = 3; for (p_0 = 4; p_0 >= 3; p_0--) { a[p_0] = 0; a[p_0] = (a[l_9] * a[p_0]);} return a; }[EOS] ...... \end{lstlisting} \end{minipage} \hspace{\fill} \begin{minipage}{0.5\textwidth} \centering \includegraphics[trim={0cm 0cm 0cm 0cm}, clip, width=6cm, height=4.2cm]{figures/conv_ccode.pdf} \end{minipage} \caption{Left: Example input sequences adapted from the restricted C dataset. Right: Performance of vanilla BERT versus our hybrid architecture with various convolutional filter sizes (5, 10, 15). Error bars are standard deviations of 3 independent runs, plotted every 2 epochs for visibility.} \label{fig:cprog} \end{figure} We further test on a more advanced symbolic reasoning task than LEGO --- learning to execute C programs. We use the restricted C dataset~\cite{chen2021latent} of $500$K programs written in C, each provided with the initial values of input variables as well as the outcome. 
The task is to predict the outcome given the initial values and the program. To use a transformer here, we convert each program together with the initial assignment into plain text and feed it to the model as the input sentence. To make predictions, we apply a linear classifier to the output representations, as in LEGO. Exact setups are given in the appendix. Figure~\ref{fig:cprog} shows that the convolutional architecture attains a significant performance advantage over vanilla BERT on this task, with less than $0.5\%$ extra parameters. \subsection{LEGO: A synthetic reasoning task} Core components of reasoning include the ability to {\em associate} concepts, and to {\em manipulate} them. We propose a simple task that captures these two aspects, which we call LEGO (Learning Equality and Group Operations). In LEGO, the input describes a sequence of {\em variable assignments} as well as {\em operations} on these variables by a fixed (mathematical) group. One needs to be able to deal with both long-range assignments (the same variable appearing in different parts of the input should be viewed as being {\em equal} to the same quantity) and short-range operations (describing which group element is applied to which variable). A key parameter of an input sequence is its length, which roughly corresponds to the number of sequential reasoning steps one has to perform in order to resolve the value of each variable. We will mostly train with a fixed sequence length (say $12$). We often provide supervision only on part of the sequence (say the first $6$ variables). We do so in order to test the generalization capabilities from shorter sequences to longer ones without introducing potential errors due to the positional encoding in transformers.
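A minimal generator for such an input sequence might look like the following sketch. It is illustrative only (the actual data pipeline, tokenization, and clause format are described in the appendix); the helper name and clause formatting are ours.

```python
import random

def make_lego_sample(n, seed=None):
    """Generate one LEGO (Task 1) instance of chain length n.

    Returns the clause string (clauses presented in shuffled order, so
    that the same variable must be matched across distant positions)
    and the ground-truth value of every variable.
    """
    rng = random.Random(seed)
    names = rng.sample(list('abcdefghijklmnopqrstuvwxyz'), n)
    clauses, labels = [], {}
    prev_name, prev_val = '1', 1       # root of the chain
    for name in names:
        sign = rng.choice([+1, -1])    # group element for this clause
        clauses.append(f"{name}={'+' if sign > 0 else '-'}{prev_name}")
        labels[name] = sign * prev_val
        prev_name, prev_val = name, labels[name]
    rng.shuffle(clauses)               # forces long-range association
    return ' ; '.join(clauses), labels

sentence, labels = make_lego_sample(4, seed=0)
assert len(labels) == 4 and set(labels.values()) <= {+1, -1}
```

Supervising only a prefix of the chain (e.g. the first $6$ of $12$ labels) then corresponds to masking part of the returned label dictionary during training.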
\subsection{Some takeaways} \label{sec:takeaways} We distinguish {\em classical generalization} (i.e., training and test distribution are the same) and {\em out-of-distribution} generalization, which we refer to as simply {\em generalization}. In the context of LEGO, this (out-of-distribution) generalization refers to the setting where we train on shorter sequence lengths (e.g., supervision on only the first $6$ variables) and test on a long sequences (e.g., accuracy computed on $12$ variables). A summary of our empirical observations is as follows: \begin{CompactItemize} \item[1.] First, classical generalization happens reliably for all architectures and data regimes. \item[2.] More interestingly, (out-of-distribution) generalization seems to depend on architectural/data composition choices. Specifically, BERT-like models without special data preparation do {\em not} generalize to longer sequences, while other models like ALBERT, or BERT with carefully selected data (such as diverse sequence lengths, or pre-trained BERT) {\em do} generalize. \item[3.] The generalizing models all seem to evolve attention heads dedicated to either {\em association} (long-range identity matching) or {\em manipulation} (short-range operations). We provide evidence that pre-trained BERT (which is pretrained on a seemingly unrelated dataset) generalizes because it has learned such heads, rather than because it would have ``learned to reason" on LEGO task. \item[4.] The non-generalizing models seem to solve the classical generalization problem using shortcut-like solutions, whereby using the specificity of the group operations they are able to jump to the end of the chain of reasoning, and then complete the rest of the variables by following the reasoning both from the start {\em and} the end of the chain. 
\end{CompactItemize} We interpret these observations as suggesting the following more general insights: \begin{CompactItemize} \item[(i)] Classical generalization can be a deceptive metric, as there might be unexpected ways to solve the problem. This is famously related to the issue of endowing machine learning systems with {\em common sense reasoning}. Namely, we hope that when a ML system solves a task, it does so in ``the way humans do it", but of course nothing guarantees that this will happen. Our findings are consistent with the current methodology of increasing the diversity of the training data, which seems crucial for (out-of-distribution) generalization. \item[(ii)] ALBERT-like models, where a layer is repeated several times, seem to be an ideal structure for problems which could be described algorithmically as a ``for loop" (as is the case with following a chain of reasoning). Indeed we find that ALBERT generalizes in data regimes where BERT does not, clearly separating these two architectures. \item[(iii)] The success of pretraining/fine-tuning in vastly different tasks might actually come from a ``simple" better initialization, rather than complex knowledge encoded in the pretrained network. \item[(iv)] The interplay between short-range information (close-by in the sentence) and long-range information (the same concept appearing in different places in the sentence) is relevant more broadly than in our synthetic task. We observe that the networks effectively learn to deal with short-range information by implementing a sort of convolutional filter using the positional encoding information. It is natural to consider architectures that have such a mechanism built in from the start, motivating us to study a hybrid convolutional-transformer architecture, which basically replaces the linear operators defining key/query/value by convolutional operators.
\end{CompactItemize} \subsection{Related works} \label{sec:related} In \cite{PVR}, the PVR (Pointer Value Retrieval) task is introduced, with a similar high-level goal to ours in introducing the LEGO task, namely to study how neural networks learn to reason in a controlled setting. In a PVR task, part of the input indicates another part of the input where a function of potentially varying complexity has to be computed. Like us, they use distribution shift to investigate how various network architectures learn this task, and they observe that networks can learn the task at hand (``classical generalization") yet fail to generalize under mild distribution shift. They then ask the following questions: ``Are there architectural changes that can enforce better priors and withstand distribution shift? Can novel learning objectives prevent these adversarial correlations? Progress on these questions holds promise for greater robustness." Our study attacks these questions directly in the context of the LEGO task (e.g., ALBERT versus BERT, and training set composition investigations), and our preliminary results indicate that this is indeed a fruitful direction to obtain better models in some aspects (e.g., more interpretable). Other examples of recent synthetic benchmarks with a similar philosophy include SCAN (Simplified version of the CommAI Navigation) \cite{lake2018generalization}, CFQ (Compositional Freebase Questions) \cite{keysers2020measuring}, and BONGARD-LOGO \cite{nie2020bongard}. In SCAN, for example, one has to ``translate" a command of the form ``turn left twice and jump" into a sequence of actions ``LTURN LTURN JUMP" (see \cite{patel2022revisiting} for more recent progress on this dataset). Again, similarly to the PVR tasks, these works focus on understanding generalization (in these cases, {\em compositional generalization}).
Another related line of work studies the ability of transformers to recognize various formal languages, see e.g., \cite{bhattamishra-etal-2020-ability}. As far as we know, none of these works try to probe the inner workings of the networks in the same depth as we do here. On the other hand, networks trained on real data are being extensively scrutinized, see for example \cite{rogers2020primer} where they try to understand some of the attention heads of BERT (see also \cite{saha2020prover} and the references therein). However, making sense of these real-data-trained networks is a daunting task, and a key contribution of our work is to show that in a more limited setting one can obtain a much clearer picture of what transformers learn. The LEGO task is also naturally related to the growing literature on testing the mathematical/coding abilities of transformers (e.g., \cite{DBLP:conf/emnlp/SahaGSB20}), specifically the simpler tasks of checking the correctness of a proof (or simplifying one, such as in \cite{agarwal2021analyzing}, which studies simplification of polynomials), or executing code for a given input \cite{chen2021latent}. It would be interesting to see if some of the insights we derive in the present paper apply to currently challenging mathematical tasks such as MATH \cite{hendrycks2021measuring} and IsarStep \cite{li2021isarstep}. \section{Generalization to longer sequences} \begin{figure}[H] \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip, width=14cm]{figures/n=2_pretrained_vs_randinit.pdf} \caption{Predicting longer sequences. We finetune/train BERT and ALBERT models that are pre-trained/random-initialized on input sequences consisting of 12 variables in total. During training, we only provide label information on the first 6 variables (\#0 to \#5) along the chain.
For testing, however, we evaluate the models' predictions on all 12 variables (\#0 to \#11) of hold-out test sequences.} \label{fig:gen_longer_seq} \end{figure} \section{Introduction} \label{sec:intro} \input{intro} \section{Learning equality and group operation (LEGO)} \label{sec:LEGO} \input{LEGOtask} \input{experiment} \clearpage \bibliographystyle{alpha}
\section{Introduction} \label{sec:intro} Recently, in connection with the first positive measurements of the $\Lambda$--hyperon spin polarization~\cite{STAR:2017ckg,Adam:2018ivw}, a lot of interest has been triggered in theoretical studies analyzing the spin polarization and vorticity formation in heavy-ion collisions. One expects that the spin polarization can be related to the global rotation of the strongly interacting matter created in non-central collisions, in a~way similar to the magnetomechanical Barnett effect \cite{Barnett:1935} and the Einstein--de Haas effect \cite{dehaas:1915}. Vorticity can also give rise to new phenomena such as the chiral vortical effect \cite{Kharzeev:2010gr, Kharzeev:2015znc}. Interestingly, the longitudinal polarization of ${\bar \Lambda}$ was discussed already in the 1980s by Jacob and Rafelski in connection with the quark-gluon plasma formation \cite{Jacob:1987sj}. However, negative results were reported by the first heavy-ion experiments that measured the $\Lambda$ spin polarization in Dubna \cite{Anikina:1984cu}, at CERN \cite{Bartke:1990cn}, and at BNL \cite{Abelev:2007zk}. In the context of various effects associated with the spin polarization and vorticity, many theoretical studies have been performed that refer to the spin-orbit coupling \cite{Liang:2004ph,Liang:2004xn,Gao:2007bc,Chen:2008wh}, statistical properties of matter in equilibrium \cite{Weert:1982,Zubarev:1979,Becattini:2009wh,Becattini:2012tc,Becattini:2013fla,Becattini:2015nva,Hayata:2015lga}, and kinetic models with spin \cite{Gao:2012ix,Chen:2012ca,Fang:2016vpj,Fang:2016uds}. Moreover, closely related works on hydrodynamics with triangle anomalies \cite{Son:2009tf,Kharzeev:2010gr} and on the Lagrangian formulation of hydrodynamics have been reported in Refs.~\cite{Montenegro:2017rbu,Montenegro:2017lvf,Montenegro:2018bcf}. A natural framework for dealing simultaneously with polarization and vorticity would be relativistic hydrodynamics of polarized fluids.
An example of such a framework has been recently proposed in Refs.~\cite{Florkowski:2017ruc,Florkowski:2017dyn}. It is based on the local equilibrium distribution functions for particles and antiparticles with spin ${\nicefrac{1}{2}}$, in the form introduced in~Ref.~\cite{Becattini:2013fla}. This framework can describe the full space-time evolution of the spin polarization in systems created in high-energy nuclear collisions. We note that the inclusion of the spin degrees of freedom into a hydrodynamic approach represents one of several novel developments in relativistic hydrodynamics, which forms the basis for our understanding of the space-time evolution of matter created in heavy-ion collisions (for recent reviews on progress in relativistic hydrodynamics see \cite{Florkowski:2017olj,Romatschke:2017ejr}). \medskip In this paper we perform a detailed comparison of the thermodynamic and kinetic approaches that deal with the phenomenon of polarization-vorticity coupling in heavy-ion collisions. By the thermodynamic approach we mean a series of papers by Becattini and his collaborators~\cite{Becattini:2009wh,Becattini:2012tc,Becattini:2013fla,Becattini:2013vja,Becattini:2015nva,Becattini:2016gvu,Becattini:2017gcx}, where the authors analyze predominantly the properties of matter in global equilibrium with a rigid rotation. On the other hand, by the kinetic approach we mean here Refs.~\cite{Gao:2012ix,Chen:2012ca,Fang:2016uds,Fang:2016vpj}, where collisionless kinetic equations for the Wigner functions of spin-${\nicefrac{1}{2}}$ particles have been studied. Similarly to Refs.~\cite{Gao:2012ix,Chen:2012ca,Fang:2016uds,Fang:2016vpj}, we perform herein a semiclassical expansion of the Wigner function. This method was successfully used in the past (see, for example, Refs.
~\cite{Elze:1986qd,Vasak:1987um,Elze:1989un,Zhuang:1995pd,Florkowski:1995ei}) to construct a classical limit of quantum kinetic equations, which yields dynamic equations for both the phase-space distribution functions and the spin phase-space densities. The novel feature of our present work is that we use the form of the equilibrium functions for particles with spin~${\nicefrac{1}{2}}$, proposed in Ref.~\cite{Becattini:2013fla}, as an input for the semiclassical expansion. In this way, we can check directly how the thermodynamic and kinetic frameworks are complementary to each other and what one approach implies for the other. In order to make our formalism as simple as possible, and to concentrate primarily on the relation between the spin polarization and vorticity, we neglect in this work the effects of the electromagnetic and other mean fields. The inclusion of such fields is left for a separate analysis. One of our findings is that recent formulations of the kinetic theory~\cite{Gao:2012ix,Chen:2012ca,Fang:2016uds,Fang:2016vpj} do not imply that the vorticity induces spin polarization. Although there exist solutions of the kinetic equations where the two phenomena are interconnected, they are in general independent. This is due to the fact that the collision term is neglected in such frameworks, and the collisionless kinetic equation alone cannot imply the growth of polarization due to vorticity~\footnote{We do not discuss here the chiral kinetic theory \cite{Stephanov:2012ki,Chen:2014cla,Gorbar:2017toh} as its relation to the thermodynamic approach of Refs.~\cite{Becattini:2009wh,Becattini:2012tc,Becattini:2013fla} is not known at the moment and requires a separate analysis.}. We further show that the kinetic-theory results demonstrating relations between polarization and vorticity correspond to the exact solutions of the collisionless kinetic equation. Thus, they can be interpreted as a description of global thermodynamic equilibrium.
Only in this case are the thermodynamic and kinetic results fully consistent. To clarify this point, besides the concepts of global and local equilibrium, we introduce also the ideas of extended global and extended local equilibrium. Finally, we analyze different possible ways leading from the kinetic theory to the hydrodynamic equations with spin. They are all based on the application of the conservation laws for charge, energy, linear momentum, and angular momentum. Using the semiclassical expansion for the Wigner function, we introduce hydrodynamic equations starting from the kinetic-theory formulation by de Groot, van Leeuwen, and van Weert (GLW) \cite{deGroot:1980}, and using directly the canonical formalism \cite{Itzykson:1980rh}. In the GLW case the energy-momentum tensor is symmetric and the spin tensor is conserved, while in the canonical case the energy-momentum tensor is asymmetric and the spin tensor is not conserved (in both cases the total angular momentum is always conserved). Interestingly, the two approaches are connected by a pseudo-gauge transformation, which we have explicitly constructed. \smallskip {\it Conventions and notation:} Below we use the following conventions and notation for the metric tensor, Levi-Civita's tensor, and the scalar product: $g_{\mu\nu} = \hbox{diag}(+1,-1,-1,-1)$, $\epsilon^{0123} = -\epsilon_{0123} = 1$, $a \cdot b = g_{\mu \nu} a^\mu b^\nu = a^0 b^0 - {\boldsymbol a} \cdot {\boldsymbol b}$. Throughout the text we use $c = \hbar = k_B = 1$; however, we explicitly display $\hbar$ in the discussion of the semiclassical expansion of the Wigner function. All calculations are done using the Dirac representation for the gamma matrices. The operator $\Delta^{\mu\nu}$ projecting on the space orthogonal to the flow vector $u^\mu$ is defined as $\Delta^{\mu\nu} = g^{\mu\nu} - u^\mu u^\nu$.
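As an illustrative numerical sanity check of these conventions (a sketch with an arbitrarily chosen boost, not part of the formal development), one can verify the sign convention $\epsilon_{0123}=-\epsilon^{0123}$ and the defining properties of the projector $\Delta^{\mu\nu}$:

```python
import numpy as np
from itertools import permutations

g = np.diag([1.0, -1.0, -1.0, -1.0])      # g_{mu nu} = diag(+1,-1,-1,-1)

def parity(p):
    """Sign of a permutation p of (0,1,2,3) via inversion counting."""
    inv = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    return 1.0 if inv % 2 == 0 else -1.0

# Levi-Civita tensor with upper indices, convention epsilon^{0123} = +1.
eps_up = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps_up[p] = parity(p)

# Lowering all four indices with the (diagonal) metric multiplies each
# component by det(g) = -1, so epsilon_{0123} = -1.
eps_dn = np.einsum("abcd,ae,bf,cg,dh->efgh", eps_up, g, g, g, g)
assert np.isclose(eps_dn[0, 1, 2, 3], -1.0)

# Projector Delta^{mu nu} = g^{mu nu} - u^mu u^nu for a normalized,
# time-like flow vector u (u.u = 1); here an arbitrary boosted example.
u = np.array([np.cosh(0.7), np.sinh(0.7), 0.0, 0.0])
g_up = np.linalg.inv(g)                   # numerically g^{mu nu} = g_{mu nu}
Delta = g_up - np.outer(u, u)

u_dn = g @ u                                  # u_mu = g_{mu nu} u^nu
assert np.allclose(Delta @ u_dn, 0.0)         # Delta^{mu nu} u_nu = 0
assert np.allclose(Delta @ g @ Delta, Delta)  # idempotent projector
```

The trace $\Delta^{\mu}{}_{\mu}=3$ also follows, reflecting the three spatial directions orthogonal to $u^\mu$.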
The Lorentz invariant measure in the momentum space is denoted as $dP$, namely \bel{eq:dP} dP = \frac{d^3p}{(2 \pi )^3 E_p}, \end{eqnarray} where $E_p = \sqrt{m^2 + {\boldsymbol p}^2}$ is the on-mass-shell particle energy, and $p^\mu = (E_p, {\boldsymbol p})$. The particle momenta, which are not necessarily on the mass shell and appear as arguments of the Wigner functions, are denoted by the four-vector $k^\mu = (k^0, {\boldsymbol k})$. The square brackets denote antisymmetrization, $t^{[\mu \nu]} = \left(t^{\mu\nu} - t^{\nu\mu} \right)/2$. A tilde is used to denote dual tensors, which are obtained from rank-two antisymmetric tensors by contraction with the Levi-Civita symbol and division by a factor of two. For example, ${\tilde \omega}_{\mu\nu}$ denotes the dual spin polarization tensor defined by the equation \bel{eq:dual} {\tilde \omega}_{\mu\nu} = \f{1}{2} \epsilon_{\mu\nu\alpha\beta} \omega^{\alpha\beta}, \end{eqnarray} where $\omega^{\alpha\beta}$ is the original spin polarization tensor. The inverse transformation is \bel{eq:dualdual} \omega^{\rho \sigma} = -\f{1}{2} \epsilon^{\rho \sigma \mu \nu} {\tilde \omega}_{\mu \nu}. \end{eqnarray} \section{Basic concepts and methodology} \label{sec:bconcepts} \subsection{Spinless particles --- global and local equilibrium} \label{sec:spinless} Before we start our discussion of various effects connected with spin, it is useful to recall basic features of the kinetic theory for spinless particles. In this case, the relativistic Boltzmann equation for the phase-space distribution function $f(x,p)$ contains two terms: the drift term and the collision integral. This can be schematically written as \begin{eqnarray} p^\mu \partial_\mu f(x,p) = C[f(x,p)]. \label{eq:simpkeq} \end{eqnarray} The collision integral $C[f]$ vanishes in two special cases: i) for non-interacting, free-streaming particles, and ii) for global or local thermodynamic equilibrium.
In the first case the distribution function exactly satisfies the drift equation ($p^\mu \partial_\mu f_{\rm fs}(x,p) = 0$), which describes free motion of particles (a case unrelated to the present study). In the second case, which is of main interest for us, we have to distinguish between the global and local equilibrium. In global thermodynamic equilibrium, the equilibrium distribution function $f_{\rm eq}(x,p)$ satisfies again the equation of the form $p^\mu \partial_\mu f_{\rm eq}(x,p) = 0$, which in this case leads to constraints on the hydrodynamic parameters used to specify the form of $f_{\rm eq}(x,p)$. In particular, the $\beta_\mu(x)$ field, defined traditionally as the ratio of the local fluid four-velocity $u_\mu(x)$ to the local temperature $T(x)$, satisfies the Killing equation \begin{eqnarray} \partial_\mu \beta_\nu(x) + \partial_\nu \beta_\mu(x) = 0. \label{eq:Killing} \end{eqnarray} Equation~\rfn{eq:Killing} has a solution of the form~\footnote{The method of solving the Killing equation is presented in App.~\ref{sec:Killing}.} \begin{eqnarray} \beta_\mu(x) = \beta^0_\mu + \varpi^0_{\mu \nu} x^\nu, \label{eq:Killingsol} \end{eqnarray} where the vector $\beta^0_\mu$ and the antisymmetric tensor $\varpi^0_{\mu \nu}$ are constant. For any form of the field $\beta_\mu(x)$, we define the thermal vorticity as \bel{eq:thvor} \varpi_{\mu \nu} = -\frac{1}{2} \left(\partial_\mu \beta_\nu - \partial_\nu \beta_\mu \right). \end{eqnarray} Hence, Eqs.~\rfn{eq:Killing} and \rfn{eq:Killingsol} imply that the thermal vorticity in global equilibrium is constant, $\varpi_{\mu \nu}=\varpi^0_{\mu \nu}$. Additionally, in global equilibrium the ratio of the chemical potential to the local temperature should be constant, $\xi(x) = \mu(x)/T(x) = \xi^0 = \hbox{const}$. In the case of local equilibrium, the right-hand side of~\rf{eq:simpkeq} vanishes, while its left-hand side, strictly speaking, does not.
In this case one should add to the equilibrium function $f_{\rm eq}$ a correction $\delta f$, which describes dissipative phenomena. Nevertheless, if the gradients of the local hydrodynamic variables are sufficiently small, the dissipative terms can be neglected. In this case the hydrodynamic variables in $f_{\rm eq}$ remain unconstrained. In order to determine them, one adds further assumptions, most commonly, that specific moments of~\rf{eq:simpkeq} in the momentum space (those that yield the conservation laws for energy, momentum or charge) vanish. This methodology leads to the perfect-fluid description. \subsection{Particles with spin} \label{sec:spin} The treatment of the collisionless kinetic equation for the Wigner function ${\cal W}(x,k)$ that includes spin degrees of freedom has many features in common with the simple spinless system discussed above. As the free-streaming case is not interesting, we are left again with essentially two different physics cases which represent global and local thermodynamic equilibrium. Both of them can be analyzed with the help of the equilibrium distribution functions $f^+(x,p)$ and $f^-(x,p)$, for particles and antiparticles with spin ${\nicefrac{1}{2}}$, introduced by Becattini and collaborators in \cite{Becattini:2013fla}. As a matter of fact, these functions are two-by-two Hermitian matrices that can be interpreted as spin density matrices for each value of the space-time position $x$ and momentum $p$. Besides the typical dependence on the hydrodynamic variables $\beta_\mu = u_\mu/T$ and $\xi = \beta \mu = \mu/T$, they depend in addition on the antisymmetric spin polarization tensor $\omega_{\mu\nu}$ ($\omega_{\mu\nu} = -\omega_{\nu\mu}$). The equilibrium Wigner function ${\cal W}_{\rm eq}(x,k)$, constructed from the functions $f^+(x,p)$ and $f^-(x,p)$, also depends on $\beta_\mu$, $\xi$, and $\omega_{\mu\nu}$. Consequently, it turns out that we can distinguish between four rather than two different types of equilibrium.
They can be classified as follows: \begin{itemize} % \item{} global equilibrium --- in this case the $\beta_\mu$ field is a Killing vector satisfying \rf{eq:Killing}, $\varpi_{\mu \nu} = -\frac{1}{2} \left(\partial_\mu \beta_\nu - \partial_\nu \beta_\mu \right) = \hbox{const}$, the spin polarization tensor is constant and agrees with thermal vorticity, $\omega_{\mu\nu} = \varpi_{\mu \nu}$, in addition $\xi = \hbox{const}$, % \item{} extended global equilibrium --- $\beta_\mu$ field is a Killing vector, $\varpi_{\mu \nu} = -\frac{1}{2} \left(\partial_\mu \beta_\nu - \partial_\nu \beta_\mu \right) = \hbox{const}$, the spin polarization tensor is constant but $\omega_{\mu\nu} \neq \varpi_{\mu \nu}$, $\xi = \hbox{const}$, % \item{} local equilibrium --- $\beta_\mu$ field is not a Killing vector but we still have $\omega_{\mu\nu}(x) = \varpi_{\mu \nu}(x)$, $\xi$ is allowed to depend on space-time coordinates, $\xi = \xi(x)$, % \item{} extended local equilibrium --- $\beta_\mu$ field is not a Killing vector and $\omega_{\mu\nu}(x) \neq \varpi_{\mu \nu}(x)$, moreover $\xi = \xi(x)$. \end{itemize} The global and extended global equilibrium states correspond to the case where ${\cal W}_{\rm eq}(x,k)$ satisfies exactly the collisionless kinetic equations. On the other hand, in the local and extended local equilibrium states only certain moments of the kinetic equation for ${\cal W}_{\rm eq}(x,k)$ can be set equal to zero. They can be used to construct perfect-fluid hydrodynamic equations including spin. We stress that in this work we assume that the collision term vanishes for each type of equilibrium listed above, provided the equilibrium Wigner function ${\cal W}_{\rm eq}(x,k)$ has the form derived from the functions $f^+(x,p)$ and $f^-(x,p)$. This assumption should be verified in the future by detailed studies of various collision terms for particles with spin. Such studies may also shed new light on the form of the equilibrium distributions. 
Before the results of such investigations are known, we continue to assume that the collision term vanishes for ${\cal W}_{\rm eq}(x,k)$. Before turning to the kinetic equation for the Wigner function ${\cal W}_{\rm eq}(x,k)$, it is useful to characterize global thermodynamic equilibrium in the framework of relativistic quantum mechanics. This leads to a natural distinction between global and extended global equilibrium. \section{Global thermodynamic equilibrium in relativistic \\ quantum mechanics} \label{sec:global} In this section we introduce general features of global thermodynamic equilibrium constructed in the framework of relativistic quantum mechanics. We follow here closely the treatment of Zubarev \cite{Zubarev:1974} and Becattini \cite{Becattini:2012tc}. The main object of interest in this approach is a density operator ${\hat \rho}$ defined by the expression \bel{eq:rho} {\hat \rho}(t) = \exp\left[-\int d^3\Sigma_\mu(x) \left( {\hat T}^{\mu\nu}(x) b_\nu(x) - \f{1}{2} {\hat J}^{\mu, \alpha\beta}(x) \omega_{\alpha \beta}(x) - {\hat{N}^{\mu}}(x) \xi(x) \right)\right]. \end{eqnarray} Here $d^3\Sigma_\mu$ is an element of a space-like, three-dimensional hypersurface $\Sigma_\mu$. We may assume that it corresponds to a fixed value of the time coordinate. In this case $d^3\Sigma_\mu=(dV,0,0,0)$ and ${\hat \rho}$ becomes a function of $t$. The operators ${\hat T}^{\mu\nu}(x)$, ${\hat J}^{\mu, \alpha\beta}(x)$ and ${\hat{N}^{\mu}}(x)$ are quantum versions of the energy-momentum tensor, the angular momentum tensor, and the charge current. They obey the following conservation laws: \begin{eqnarray} \partial_\mu {\hat T}^{\mu\nu}(x) = 0, \label{cons_enm} \end{eqnarray} \begin{eqnarray} \partial_\mu {\hat J}^{\mu, \alpha\beta}(x) = 0, \label{cons_angm} \end{eqnarray} \begin{eqnarray} \partial_\mu {\hat N}^{\mu}(x) = 0.
\label{cons_crt} \end{eqnarray} Note that ${\hat J}^{\mu, \alpha\beta}(x)$ is antisymmetric in the last two indices, ${\hat J}^{\mu, \alpha\beta}(x)=-{\hat J}^{\mu, \beta \alpha}(x)$, and can be, in general, represented as a sum of the orbital and spin parts \begin{eqnarray} {\hat J}^{\mu, \alpha\beta}(x) = {\hat L}^{\mu, \alpha\beta}(x) + {\hat S}^{\mu, \alpha\beta}(x). \label{eq:angular_momentum} \end{eqnarray} The orbital part is expressed by the space-time coordinates and the energy-momentum-tensor components \begin{eqnarray} {\hat L}^{\mu, \alpha\beta}(x) = x^\alpha {\hat T}^{\mu \beta}(x) - x^\beta {\hat T}^{\mu \alpha}(x). \label{eq:orbital_ang_mntm} \end{eqnarray} Since \rf{cons_enm} implies $\partial_\mu {\hat L}^{\mu, \alpha\beta}(x) = {\hat T}^{\alpha \beta}(x) - {\hat T}^{\beta \alpha}(x)$, using \rf{cons_angm} we find \begin{eqnarray} \partial_\mu {\hat S}^{\mu, \alpha\beta}(x) = {\hat T}^{\beta \alpha}(x) - {\hat T}^{\alpha \beta}(x). \label{eq:spin_ang_mntm} \end{eqnarray} Thus, the spin contribution to the angular momentum is usually not conserved --- it is conserved only if the energy-momentum operator ${\hat T}^{\alpha \beta}(x)$ is symmetric. The functions $b_\nu(x)$, $\omega_{\alpha\beta}(x)$, and $\xi(x)$ are Lagrange multipliers that should be chosen to maximize the system's entropy. Note that $\omega_{\alpha\beta}(x) = - \omega_{\beta \alpha}(x)$ as any symmetric part of $\omega_{\alpha\beta}(x)$ does not contribute to \rf{eq:rho}. In global thermodynamic equilibrium we require that the operator ${\hat \rho}(t)$ is independent of time. This condition leads to the constraint \begin{eqnarray} && \partial_\mu \left({\hat T}^{\mu\nu}(x) b_\nu(x) - \f{1}{2} {\hat J}^{\mu, \alpha\beta}(x) \omega_{\alpha \beta}(x) -{\hat{N}^{\mu}(x)} \xi(x) \right) \nonumber \\ && \hspace{1cm} = {\hat T}^{\mu\nu}(x) \left( \partial_\mu b_\nu(x) \right) - \f{1}{2} {\hat J}^{\mu, \alpha\beta}(x) \left( \partial_\mu \omega_{\alpha \beta}(x)\right) - {\hat{N}^{\mu}}(x) \partial_\mu \xi(x)= 0.
\label{div} \end{eqnarray} From this equation we can conclude that the parameters $\xi$ and $\omega_{\alpha\beta}$ are constants, $\xi=\xi^0$ and $\omega_{\alpha\beta}=\omega^0_{\alpha\beta}$~\footnote{We note that if the tensor ${\hat J}^{\mu, \alpha\beta}$ has additional symmetries (for example, if it is completely antisymmetric), more general solutions for $\omega_{\alpha \beta}(x)$ may exist.}. The form of $b_{\nu}$ depends on the symmetry of the energy-momentum tensor ${\hat T}^{\mu\nu}(x)$. For symmetric ${\hat T}^{\mu\nu}$, we require that $\partial_\mu b_\nu + \partial_\nu b_\mu =0$, hence $b_\nu$ is a Killing vector, \bel{eq:bS} b_{\nu} = b^{0}_{\nu}+\delta\omega^{0}_{\nu\rho}\, x^{\rho}, \end{eqnarray} where $b^{0}_{\nu}$ and $\delta\omega^{0}_{\nu\rho}=-\delta\omega^{0}_{\rho\nu}$ are constants. On the other hand, for an asymmetric ${\hat T}^{\mu\nu}$ we require that $\partial_\mu b_\nu=0$, hence $b_\nu$ must be a constant vector, $b_{\nu} = b^{0}_{\nu}$. Using the decomposition of the angular momentum into the orbital and spin parts, see \rf{eq:angular_momentum}, one can show that the two cases discussed above can be expressed by a single form of the density operator \begin{eqnarray} {\hat \rho}_{\rm EQ}&=& \exp\left[-\int d^3\Sigma_\mu(x) \left( {\hat T}^{\mu\nu}(x)\beta_\nu(x) - \f{1}{2} {\hat S}^{\mu, \alpha\beta}(x) \omega^0_{\alpha \beta}-{\hat{N}^{\mu}}(x) \xi^0 \right) \right]. \label{eq:rhoEQ1} \end{eqnarray} For an asymmetric energy-momentum tensor, $\beta_\mu(x) = b^0_\mu + \omega^0_{\mu \gamma} x^\gamma$ (with constant $b^0_\mu$ and $\omega^0_{\mu \gamma}$). This implies that $\beta_\mu(x)$ is a Killing vector and the thermal vorticity defined by \rf{eq:thvor} agrees with the spin polarization tensor, $\varpi_{\mu \gamma}=\omega^0_{\mu \gamma}$.
On the other hand, for a symmetric energy-momentum tensor, $\beta_\mu(x) = b^0_\mu + (\delta\omega^0_{\mu \gamma} + \omega^0_{\mu \gamma} ) x^\gamma$ (with constant $b^0_\mu$, $\delta\omega^0_{\mu \gamma}$ and $\omega^0_{\mu \gamma}$). In this case $\beta_\mu(x)$ is again a Killing vector; however, the thermal vorticity defined by \rf{eq:thvor} does not necessarily agree with the spin polarization tensor. Our discussion indicates that depending on the symmetry of the energy-momentum tensor, we may deal with global or extended global equilibrium, as they have been defined at the end of Sec.~\ref{sec:bconcepts}. For completeness, we define the statistical operator for local equilibrium by the same form as \rf{eq:rhoEQ1}, \begin{eqnarray} {\hat \rho}_{\rm eq}&=& \exp\left[-\int d^3\Sigma_\mu(x) \left( {\hat T}^{\mu\nu}(x)\beta_\nu(x) - \f{1}{2} {\hat S}^{\mu, \alpha\beta}(x) \omega_{\alpha \beta}(x)-{\hat{N}^{\mu}} (x) \xi(x) \right) \right], \label{eq:rhoEQ2} \end{eqnarray} allowing for an arbitrary form of $\beta_\mu(x)$ and $\xi(x)$, and for two options for $\varpi_{\mu\nu}(x)$: either $\varpi_{\mu\nu}(x) = \omega_{\mu\nu}(x)$ (local equilibrium) or $\varpi_{\mu\nu}(x) \neq \omega_{\mu\nu}(x)$ (extended local equilibrium). \section{Equilibrium Wigner functions} \label{sec:geq} \subsection{Spin-dependent equilibrium distribution functions} \label{sec:spindistr} To include the spin degrees of freedom, the scalar equilibrium distribution functions are generalized to two-by-two spin density matrices for each value of the space-time position $x$ and momentum $p$~\cite{deGroot:1980}, \begin{eqnarray} \left[ f^+(x,p) \right]_{rs} \equiv f^+_{rs}(x,p) &=& \frac{1}{2m} {\bar u}_r(p) X^+ u_s(p), \label{fplusrsxp} \\ \left[ f^-(x,p) \right]_{rs} \equiv f^-_{rs}(x,p) &=& - \frac{1}{2m}{\bar v}_s(p) X^- v_r(p).
\label{fminusrsxp} \end{eqnarray} Here $m$ is the (anti)particle mass, while $u_r(p)$ and $v_r(p)$ are Dirac bispinors (with the spin indices $r$ and $s$ running from 1~to~2), satisfying the normalizations \bel{eq:unorm} {\bar u}_r(p) u_s(p)=\,2m\, \delta_{rs}, \qquad \sum_{r=1}^{2} u^r_\alpha(p) {\bar u}^r_\beta(p) = (\slashed{p}+m)_{\alpha \beta}, \end{eqnarray} \bel{eq:vnorm} {\bar v}_r(p) v_s(p)=-\,2m\, \delta_{rs}, \qquad \sum_{r=1}^{2} v^r_\alpha(p) {\bar v}^r_\beta(p) = (\slashed{p}-m)_{\alpha \beta}. \end{eqnarray} Note the minus sign and the different ordering of spin indices in \rf{fminusrsxp} compared to \rf{fplusrsxp}. The objects $f^\pm(x,p)$ are two-by-two Hermitian matrices with the matrix elements defined by \rftwo{fplusrsxp}{fminusrsxp}. Following Ref.~\cite{Becattini:2013fla}, we use the four-by-four matrices \bel{XpmM} X^{\pm} = \exp\left[\pm \xi(x) - \beta_\mu(x) p^\mu \right] M^\pm, \end{eqnarray} where \bel{Mpm} M^\pm = \exp\left[ \pm \f{1}{2} \omega_{\mu\nu}(x) {\Sigma}^{\mu\nu} \right] . \end{eqnarray} In \rftwo{XpmM}{Mpm} we use the same notation as that introduced in the previous sections, namely: $\beta^\mu(x)= \umU(x)/T(x)$ and $\xi(x) = \mu(x)/T(x)$, with $\mu(x)$ being the chemical potential (connected with a charge that can be identified, for example, with the baryon number or electric charge). The quantity $\omega_{\mu\nu}(x)$ is the spin polarization tensor, while ${\Sigma}^{\mu\nu}$ is the spin operator expressed in terms of the Dirac gamma matrices, ${\Sigma}^{\mu\nu} = (i/4) [\gamma^\mu,\gamma^\nu]$. For the sake of simplicity, we restrict ourselves to classical Boltzmann statistics in this work. Following \rfc{Florkowski:2017ruc} we further assume that the spin polarization tensor $\omega_{\mu\nu}$ satisfies the two conditions~\footnote{The conditions \rfn{eq:conditions} are satisfied in a natural way if only the space components $\omega_{ij}$ are different from zero.
This happens, for example, in the case of global equilibrium with a rigid rotation. The non-zero $\omega_{0i}$ components appear, on the other hand, for global equilibrium with a constant acceleration along the fluid stream lines, see Refs.~\cite{Becattini:2015nva,Becattini:2017ljh,Florkowski:2018myy,Prokhorov:2018qhq,Prokhorov:2018bql}}. \begin{eqnarray} \omega_{\mu\nu} \omega^{\mu\nu} \geq 0, \quad \omega_{\mu\nu} \tilde {\omega}^{\mu\nu} = 0. \label{eq:conditions} \end{eqnarray} In this case we introduce the variables $\zeta$ and $\Omega$ defined by the expression \bel{eq:zeta} \zeta = \f{\Omega}{T} = \f{1}{2} \sqrt{ \frac{1}{2} \omega_{\mu\nu} \omega^{\mu\nu} }. \end{eqnarray} It turns out, see \rfc{Florkowski:2017ruc}, that $\Omega$ plays the role of a chemical potential related to spin. Using \rf{eq:conditions}, which implies $\left( \f{1}{2} \omega_{\mu\nu} {\Sigma}^{\mu\nu} \right)^2 = \zeta^2$, one finds \bel{eq:Mpmexp} M^\pm &=& \cosh(\zeta) \pm \f{\sinh(\zeta)}{2\zeta} \, \omega_{\mu\nu} {\Sigma}^{\mu\nu}. \end{eqnarray} \subsection{Equilibrium Wigner functions} \label{sec:eqWig} The equilibrium phase-space distribution functions $f^+(x,p)$ and $f^-(x,p)$ can be used to determine explicit expressions for the corresponding equilibrium (particle and antiparticle) Wigner functions. We construct them using the expressions from Ref.~\cite{deGroot:1980}, \begin{eqnarray} {\cal W}^{+}_{\rm eq}(x,k) = \frac{1}{2} \sum_{r,s=1}^2 \int dP\, \delta^{(4)}(k-p) u^r(p) {\bar u}^s(p) f^+_{rs}(x,p), \label{eq:Weqpxk} \end{eqnarray} \begin{eqnarray} {\cal W}^{-}_{\rm eq}(x,k) = -\frac{1}{2} \sum_{r,s=1}^2 \int dP\, \delta^{(4)}(k+p) v^s(p) {\bar v}^r(p) f^-_{rs}(x,p). \label{eq:Weqmxk} \end{eqnarray} The total Wigner function is the sum of these two contributions, \bel{eq:totW} {\cal W}_{\rm eq}(x,k) = {\cal W}^{+}_{\rm eq}(x,k) + {\cal W}^{-}_{\rm eq}(x,k).
\end{eqnarray} Using Eqs.~\rfn{fplusrsxp}--\rfn{eq:vnorm} we find \begin{eqnarray} {\cal W}^{+}_{\rm eq}(x,k) = \frac{1}{4 m} \int dP\, \delta^{(4)}(k-p) (\slashed{p}+m) X^+ (\slashed{p}+m), \label{eq:Weqpxk1} \end{eqnarray} \begin{eqnarray} {\cal W}^{-}_{\rm eq}(x,k) = \frac{1}{4 m} \int dP\, \delta^{(4)}(k+p) (\slashed{p}-m) X^- (\slashed{p}-m). \label{eq:Weqmxk1} \end{eqnarray} With the help of \rf{eq:Mpmexp} we can further rewrite these equations in the following form \begin{eqnarray} {\cal W}^{+}_{\rm eq}(x,k) &=& \frac{e^\xi}{4 m} \int dP \,e^{-\beta \cdot p }\,\, \delta^{(4)}(k-p) \nonumber \\ && \times \left[2m (m+\slashed{p}) \cosh(\zeta)+ \f{\sinh(\zeta)}{2\zeta} \, \omega_{\mu\nu} \,(\slashed{p}+m) {\Sigma}^{\mu\nu} (\slashed{p}+m) \right], \label{eq:Weqpxk2} \end{eqnarray} \begin{eqnarray} {\cal W}^{-}_{\rm eq}(x,k) &=& \frac{e^{-\xi}}{4 m} \int dP\,e^{-\beta \cdot p }\,\, \delta^{(4)}(k+p) \nonumber \\ && \times \left[2m (m-\slashed{p}) \cosh(\zeta)- \f{\sinh(\zeta)}{2\zeta} \, \omega_{\mu\nu} \,(\slashed{p}-m) {\Sigma}^{\mu\nu} (\slashed{p}-m) \right]. \label{eq:Weqmxk2} \end{eqnarray} \section{Spinor decomposition of the equilibrium Wigner function} \label{sec:spdecWeq} \subsection{Clifford-algebra expansion} \label{sec:clifford} The equilibrium Wigner functions ${\cal W}^{\pm}_{\rm eq}(x,k)$, being four-by-four matrices satisfying the relations ${\cal W}^{\pm}_{\rm eq}(x,k) = \gamma_0 {\cal W}^{\pm}_{\rm eq}(x,k)^\dagger \gamma_0$, can be always expanded in terms of the 16 independent generators of the Clifford algebra \cite{Itzykson:1980rh,Vasak:1987um}, \begin{eqnarray} {\cal W}^{\pm}_{\rm eq}(x,k) &=& \f{1}{4} \left[ {\cal F}^{\pm}_{\rm eq}(x,k) + i \gamma_5 {\cal P}^{\pm}_{\rm eq}(x,k) + \gamma^\mu {\cal V}^\pm_{{\rm eq}, \mu}(x,k) \right. \nonumber \\ && \left. \hspace{1cm} + \gamma_5 \gamma^\mu {\cal A}^\pm_{{\rm eq}, \mu}(x,k) + {\Sigma}^{\mu\nu} {\cal S}^\pm_{{\rm eq}, \mu \nu}(x,k) \right]. 
\label{eq:wig_expansion} \end{eqnarray} The coefficient functions in the equilibrium Wigner function expansion \rfn{eq:wig_expansion} can be obtained by the following traces: \begin{eqnarray} {\cal F}^{\pm}_{\rm eq}(x,k)&=&{\rm tr}\left[{\cal W}^{\pm}_{\rm eq}(x,k)\right], \label{eq:Feqpm}\\ {\cal P}^{\pm}_{\rm eq}(x,k)&=&-i\,{\rm tr}\left[\gamma^5{\cal W}^{\pm}_{\rm eq}(x,k)\right], \label{eq:Peqpm} \\ {\cal V}^{\pm}_{{\rm eq}, \mu}(x,k)&=&{\rm tr}\left[\gamma_{\mu}{\cal W}^{\pm}_{\rm eq}(x,k)\right], \label{eq:Veqpm}\\ {\cal A}^\pm_{{\rm eq}, \mu}(x,k) &=& {\rm tr}\left[\gamma_{\mu}\gamma^5 {\cal W}^{\pm}_{\rm eq}(x,k)\right], \label{eq:Aeqpm}\\ {\cal S}^\pm_{{\rm eq}, \mu \nu}(x,k)&=&2\,{\rm tr}\left[{\Sigma}_{\mu\nu} {\cal W}^{\pm}_{\rm eq}(x,k)\right]. \label{eq:Seqpm} \end{eqnarray} Using \rftwo{eq:Weqpxk2}{eq:Weqmxk2} in the expressions \rfn{eq:Feqpm}--\rfn{eq:Seqpm}, and employing the identities for the Dirac matrices \rfn{eq:idf}--\rfn{eq:ids}, see App.~\ref{sec:trgammas}, we find \begin{eqnarray} {\cal F}^{\pm}_{\rm eq}(x,k) &=& 2 m \cosh(\zeta)\,\int dP\, \,e^{-\beta \cdot p \pm \xi}\,\,\delta^{(4)} (k\mp p), \label{eq:FEeqpm} \\ {\cal P}^{\pm}_{\rm eq}(x,k) &=& 0, \label{eq:PEeqpm} \\ {\cal V}^{\pm}_{{\rm eq}, \mu}(x,k) &=& \pm\,2 \cosh(\zeta) \, \int dP\,e^{-\beta \cdot p \pm \xi}\,\,\delta^{(4)} (k\mp p)\,p_{\mu}, \label{eq:VEeqpm} \\ {\cal A}^\pm_{{\rm eq}, \mu}(x,k) &=& -\frac{\sinh(\zeta)}{\zeta} \,\int dP\,e^{-\beta \cdot p \pm \xi}\,\,\delta^{(4)}(k\mp p)\, \tilde{\omega }_{\mu \nu}\,p^{\nu}, \label{eq:AEeqpm} \\ {\cal S}^\pm_{{\rm eq}, \mu \nu}(x,k) &=& \! \pm\frac{ \sinh(\zeta) }{m \zeta} \!\int \!dP\,e^{-\beta \cdot p \pm\xi}\,\,\delta^{(4)}(k\mp p) \left[ \left( p_\mu \omega_{\nu \alpha} - p_\nu \omega_{\mu \alpha} \right) p^\alpha \!+\!
m^2\omega_{\mu \nu} \right]\!.\,\,\,\,\,\,\, \label{eq:SEeqpm} \end{eqnarray} \subsection{Relations between equilibrium coefficient functions} \label{sec:relations} Using Eqs.~\rfn{eq:FEeqpm}--\rfn{eq:SEeqpm} one can verify that the equilibrium coefficient functions satisfy the following set of constraints: \bel{eq:Wid1} k^\mu \, {\cal V}^{\pm}_{{\rm eq}, \mu}(x,k) = m \, {\cal F}^{\pm}_{{\rm eq}}(x,k), \end{eqnarray} \bel{eq:Wid2} k_\mu \, {\cal F}^{\pm}_{{\rm eq}}(x,k) = m \, {\cal V}^{\pm}_{{\rm eq}, \mu}(x,k), \end{eqnarray} \bel{eq:Wid3} {\cal P}^\pm_{{\rm eq}}(x,k) = 0, \end{eqnarray} \bel{eq:Wid4} k^\mu \, {\cal A}^{\pm}_{{\rm eq}, \,\mu}(x,k) = 0, \end{eqnarray} \bel{eq:Wid5} k^\mu \, {\cal S}^{\pm}_{{\rm eq}, \,\mu \nu}(x,k) = 0, \end{eqnarray} \bel{eq:Wid6} k^\beta \, {\tilde {\cal S}}^{\pm}_{{\rm eq}, \mu \beta}(x,k) + m \, {\cal A}^{\pm}_{{\rm eq}, \,\mu}(x,k) = 0, \end{eqnarray} \bel{eq:Wid7} \epsilon_{\mu \nu \alpha \beta} \, k^\alpha \, {\cal A}^{\pm \, \beta}_{{\rm eq}}(x,k) + m \, {\cal S}^{\pm}_{{\rm eq}, \,\mu \nu}(x,k) = 0. \end{eqnarray} We note that such constraints are also fulfilled by the total Wigner function given by the sum of the particle and antiparticle contributions, see \rf{eq:totW}. We also note that Eqs.~\rfn{eq:Wid1}--\rfn{eq:Wid7} follow from the algebraic structure of the equilibrium Wigner functions and are satisfied for any form of the fields: $\beta_\mu(x)$, $\xi(x)$, and $\omega_{\mu \nu}(x)$. Thus, they hold for the four different types of equilibrium specified at the end of Sec.~\ref{sec:bconcepts}. \section{Semi-classical expansion} \label{sec:semiclass} In the previous section we have introduced the spinor decomposition of the equilibrium Wigner functions and obtained explicit expressions for the equilibrium coefficient functions.
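As a side check (not part of the derivation), the projection formulas \rfn{eq:Feqpm}--\rfn{eq:Seqpm} and the Clifford-algebra expansion can be verified numerically. The short script below is a minimal sketch using the Dirac representation of the gamma matrices; the test matrix and seed are arbitrary choices (the projections invert the expansion for any four-by-four matrix, the hermiticity condition only makes the coefficients real):

```python
# Numerical sketch: the trace projections invert the Clifford expansion.
# Conventions: g = diag(1,-1,-1,-1), Dirac representation,
# Sigma^{mu nu} = (i/4)[gamma^mu, gamma^nu]; the test matrix W is arbitrary.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g = np.diag([1.0, -1.0, -1.0, -1.0])
gamma = [np.block([[I2, 0 * I2], [0 * I2, -I2]]).astype(complex)]
for s in (sx, sy, sz):
    gamma.append(np.block([[0 * I2, s], [-s, 0 * I2]]))
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
Sigma = [[0.25j * (gamma[m] @ gamma[n] - gamma[n] @ gamma[m]) for n in range(4)]
         for m in range(4)]

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# Trace projections; gamma_mu = g_{mu mu} gamma^mu for the diagonal metric.
F = np.trace(W)
P = -1j * np.trace(gamma5 @ W)
V = [g[m, m] * np.trace(gamma[m] @ W) for m in range(4)]
A = [g[m, m] * np.trace(gamma[m] @ gamma5 @ W) for m in range(4)]
S = [[2 * g[m, m] * g[n, n] * np.trace(Sigma[m][n] @ W) for n in range(4)]
     for m in range(4)]

# Reconstruction according to the Clifford-algebra expansion.
W_rec = F * np.eye(4, dtype=complex) + 1j * P * gamma5
for m in range(4):
    W_rec = W_rec + V[m] * gamma[m] + A[m] * gamma5 @ gamma[m]
    for n in range(4):
        W_rec = W_rec + S[m][n] * Sigma[m][n]
W_rec = W_rec / 4

print(np.allclose(W_rec, W))  # True
```

The check relies only on the orthogonality of the sixteen Clifford generators under the trace, so it is insensitive to the particular gamma-matrix representation chosen above.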
Such a decomposition can be naturally used for any Wigner function (describing particles with spin ${\nicefrac{1}{2}}$) and, in fact, it was frequently used in the past to derive classical kinetic equations from the underlying quantum field theory~\cite{Elze:1986qd,Vasak:1987um,Elze:1989un,Zhuang:1995pd,Florkowski:1995ei}. In this section we closely follow this approach and write \begin{eqnarray} {\cal W}(x,k) &=& \f{1}{4} \left[ {\cal F}(x,k) + i \gamma_5 {\cal P}(x,k) + \gamma^\mu {\cal V}_{\mu}(x,k) \right. \nonumber \\ && \left. \hspace{1cm} + \gamma_5 \gamma^\mu {\cal A}_{\mu}(x,k) + {\Sigma}^{\mu\nu} {\cal S}_{\mu \nu}(x,k) \right]. \label{eq:gen_wig_expansion} \end{eqnarray} In the case where the effects of both the mean fields and collisions can be neglected, the Wigner function satisfies an equation of the form \bel{eq:eqforW} \left(\gamma_\mu K^\mu - m \right) {\cal W}(x,k) = 0. \end{eqnarray} Here $K^\mu$ is the operator defined by the expression \bel{eq:K} K^\mu = k^\mu + \frac{i \hbar}{2} \,\partial^\mu. \end{eqnarray} Using \rftwo{eq:gen_wig_expansion}{eq:K} in \rf{eq:eqforW} and comparing the real and imaginary parts of the coefficients in the Clifford-algebra basis we obtain two sets of equations.
The real parts give: \begin{eqnarray} k^\mu {\cal V}_\mu - m {\cal F} &=& 0, \label{eq:rF} \\ \frac{\hbar}{2} \partial^\mu {\cal A}_\mu + m {\cal P} &=& 0, \label{eq:rP} \\ k_\mu {\cal F} - \frac{\hbar}{2} \partial^\nu {\cal S}_{\nu\mu} - m {\cal V}_\mu &=& 0, \label{eq:rV} \\ -\frac{\hbar}{2} \partial_\mu {\cal P} + k^\beta {\tilde {\cal S}}_{\mu \beta} + m {\cal A}_\mu &=& 0, \label{eq:rA} \\ \frac{\hbar}{2} \left( \partial_\mu {\cal V}_\nu - \partial_\nu {\cal V}_\mu \right) - \epsilon_{\mu \nu \alpha \beta} k^\alpha {\cal A}^\beta - m {\cal S}_{\mu \nu} &=& 0, \label{eq:rS} \end{eqnarray} while the imaginary parts yield: \begin{eqnarray} \hbar \partial^\mu {\cal V}_\mu &=& 0, \label{eq:iF} \\ k^\mu {\cal A}_\mu &=& 0, \label{eq:iP} \\ \frac{\hbar}{2} \partial_\mu {\cal F} + k^\nu {\cal S}_{\nu\mu} &=& 0, \label{eq:iV} \\ k_\mu {\cal P} + \frac{\hbar}{2} \partial^\beta {\tilde {\cal S}}_{\mu \beta} &=& 0, \label{eq:iA} \\ \left(k_\mu {\cal V}_\nu - k_\nu {\cal V}_\mu \right) +\frac{\hbar}{2} \epsilon_{\mu \nu \alpha \beta} \partial^\alpha {\cal A}^\beta &=& 0. \label{eq:iS} \end{eqnarray} The form of Eqs.~\rfn{eq:rF}--\rfn{eq:iS} suggests that we can search for solutions for the expansion coefficient functions in the form of the series: \bel{eq:series1} {\cal F} = {\cal F}^{(0)} + \hbar {\cal F}^{(1)} + \hbar^2 {\cal F}^{(2)}+ \cdots, \quad {\cal P} = {\cal P}^{(0)} + \hbar {\cal P}^{(1)} + \hbar^2 {\cal P}^{(2)} + \cdots, \end{eqnarray} \bel{eq:series2} {\cal V}_\mu = {\cal V}^{(0)}_\mu + \hbar {\cal V}^{(1)}_\mu + \hbar^2 {\cal V}^{(2)}_\mu + \cdots, \quad {\cal A}_\mu = {\cal A}^{(0)}_\mu + \hbar {\cal A}^{(1)}_\mu + \hbar^2 {\cal A}^{(2)}_\mu + \cdots, \end{eqnarray} \bel{eq:series3} {\cal S}_{\mu\nu} = {\cal S}^{(0)}_{\mu\nu} + \hbar {\cal S}^{(1)}_{\mu\nu} + \hbar^2 {\cal S}^{(2)}_{\mu\nu} + \cdots. 
\quad \end{eqnarray} \subsection{Zeroth order} \label{sec:zeroth} The leading order (the zeroth order in $\hbar$) of the real parts gives: \begin{eqnarray} k^\mu {\cal V}^{(0)}_\mu - m {\cal F}^{(0)} &=& 0, \label{eq:rF0} \\ {\cal P}^{(0)} &=& 0, \label{eq:rP0} \\ k_\mu {\cal F}^{(0)} - m {\cal V}^{(0)}_\mu &=& 0, \label{eq:rV0} \\ k^\beta {\tilde {\cal S}}_{\mu \beta}^{(0)} + m {\cal A}^{(0)}_\mu &=& 0, \label{eq:rA0} \\ \epsilon_{\mu \nu \alpha \beta} k^\alpha {\cal A}_{(0)}^\beta + m {\cal S}_{\mu \nu}^{(0)} &=& 0, \label{eq:rS0} \end{eqnarray} while the leading order of the imaginary parts gives~\footnote{The imaginary part of the scalar zeroth-order part of \rf{eq:eqforW} vanishes, see \rf{eq:iF}, whereas the imaginary part of the axial-vector zeroth-order part of \rf{eq:eqforW} gives \rf{eq:rP0}, see \rf{eq:iA}. Therefore, we consider only three equations obtained from the imaginary parts.} \begin{eqnarray} k^\mu {\cal A}^{(0)}_\mu &=& 0, \label{eq:iP0} \\ k^\nu {\cal S}^{(0)}_{\nu\mu} &=& 0, \label{eq:iV0} \\ k_\mu {\cal V}^{(0)}_\nu - k_\nu {\cal V}^{(0)}_\mu &=& 0. \label{eq:iS0} \end{eqnarray} Equations \rfn{eq:rF0}--\rfn{eq:iS0} indicate that the coefficients ${\cal F}_{(0)}$ and ${\cal A}^\mu_{(0)}$ may be treated as the basic independent ones, provided ${\cal A}^\mu_{(0)}$ satisfies the orthogonality condition \rfn{eq:iP0}. The coefficient ${\cal V}^\mu_{(0)}$ is defined by \rf{eq:rV0}, which gives \bel{eq:rV0a} {\cal V}^\mu_{(0)} = \frac{k^\mu}{m} {\cal F}_{(0)}, \end{eqnarray} and the coefficient ${\cal S}_{\mu \nu}^{(0)}$ is obtained from \rf{eq:rS0}, \bel{eq:rS0a} {\cal S}_{\mu \nu}^{(0)} = -\frac{1}{m} \epsilon_{\mu\nu \alpha \beta} k^\alpha {\cal A}^\beta_{(0)}. \end{eqnarray} Equation \rfn{eq:rS0a} leads directly to the dual tensor ${\tilde {\cal S}}_{\mu \nu}^{(0)}$ of the form \bel{eq:rS0ad} {\tilde {\cal S}}_{\mu \nu}^{(0)} = \frac{1}{m} \left( k^\mu {\cal A}^\nu_{(0)} - k^\nu {\cal A}^\mu_{(0)} \right).
\end{eqnarray} One can easily check that expressions~\rfn{eq:rV0a}--\rfn{eq:rS0ad} solve Eqs.~\rfn{eq:rF0}--\rfn{eq:rS0} and Eqs.~\rfn{eq:iP0}--\rfn{eq:iS0} if the axial-vector coefficient ${\cal A}^\mu_{(0)}$ fulfills \rf{eq:iP0}. \subsection{First order} \label{sec:first} The next-to-leading order (the first order in $\hbar$) of the real parts gives: \begin{eqnarray} k^\mu {\cal V}^{(1)}_\mu - m {\cal F}^{(1)} &=& 0, \label{eq:rF1} \\ \frac{1}{2} \partial^\mu {\cal A}^{(0)}_\mu + m {\cal P}^{(1)} &=& 0, \label{eq:rP1} \\ k_\mu {\cal F}^{(1)} - \frac{1}{2} \partial^\nu {\cal S}^{(0)}_{\nu \mu}- m {\cal V}^{(1)}_\mu &=& 0, \label{eq:rV1} \\ -\frac{1}{2} \partial_\mu {\cal P}_{(0)} + k^\beta {\tilde {\cal S}}_{\mu \beta}^{(1)} + m {\cal A}^{(1)}_\mu &=& 0, \label{eq:rA1} \\ \frac{1}{2} \left(\partial_\mu {\cal V}^{(0)}_\nu - \partial_\nu {\cal V}^{(0)}_\mu \right) - \epsilon_{\mu \nu \alpha \beta} k^\alpha {\cal A}_{(1)}^\beta - m {\cal S}_{\mu \nu}^{(1)} &=& 0. \label{eq:rS1} \end{eqnarray} Equation \rfn{eq:rP1} defines the first order contribution to the pseudoscalar coefficient \bel{eq:rP1a} {\cal P}^{(1)} = -\frac{1}{2m} \, \partial^\mu {\cal A}^{(0)}_\mu. \end{eqnarray} Similarly, \rf{eq:rV1} can be interpreted as the definition of the first-order vector coefficient \bel{eq:rV1a} {\cal V}^{(1)}_\mu &=& \frac{1}{m} \left(k_\mu {\cal F}^{(1)} - \frac{1}{2} \partial^\nu {\cal S}^{(0)}_{\nu \mu} \right), \end{eqnarray} while \rf{eq:rS1} defines the first-order tensor coefficient \bel{eq:rS1a} {\cal S}_{\mu \nu}^{(1)} = \frac{1}{2m} \left(\partial_\mu {\cal V}^{(0)}_\nu - \partial_\nu {\cal V}^{(0)}_\mu \right) - \frac{1}{m} \epsilon_{\mu \nu \alpha \beta} k^\alpha {\cal A}_{(1)}^\beta. 
\end{eqnarray} By contraction of \rf{eq:rS1a} with the Levi-Civita tensor we find the dual first-order tensor coefficient \bel{eq:rS1ad} {\tilde {\cal S}}_{\mu \nu}^{(1)} = \frac{1}{4m^2} \epsilon^{\mu \nu \alpha \beta} \left(k_\alpha \partial_\beta - k_\beta \partial_\alpha \right) {\cal F}^{(0)} - \frac{1}{m} \epsilon_{\mu \nu \alpha \beta} k^\alpha {\cal A}_{(1)}^\beta. \end{eqnarray} Using \rf{eq:rS1ad} in \rf{eq:rA1} we find that the first-order axial coefficient should also be orthogonal to $k^{\mu}$, namely $k_\mu {\cal A}_{(1)}^\mu = 0$. The first-order imaginary parts give: \begin{eqnarray} \partial^\mu {\cal V}^{(0)}_\mu &=& 0, \label{eq:iF1} \\ k^\mu {\cal A}^{(1)}_\mu &=& 0, \label{eq:iP1} \\ \frac{1}{2} \partial_\mu {\cal F}^{(0)} + k^\nu {\cal S}^{(1)}_{\nu\mu} &=& 0, \label{eq:iV1} \\ k_\mu {\cal P}^{(1)} + \frac{1}{2} \partial^\beta {\tilde {\cal S}}^{(0)}_{\mu\beta} &=& 0, \label{eq:iA1} \\ k_\mu {\cal V}^{(1)}_\nu - k_\nu {\cal V}^{(1)}_\mu + \frac{1}{2} \epsilon_{\mu\nu \alpha \beta} \, \partial^\alpha {\cal A}_{(0)}^\beta &=& 0. \label{eq:iS1} \end{eqnarray} Combining \rf{eq:iF1} with \rf{eq:rV0a} we find the important formula \bel{eq:kineqF0} k^\mu \partial_\mu {\cal F}_{(0)}(x,k) = 0. \end{eqnarray} This is nothing but the kinetic equation to be satisfied by the scalar coefficient of the Wigner function. Equation \rfn{eq:iP1} confirms that the axial-vector coefficient is orthogonal to $k$ in both the zeroth and first orders. By straightforward algebraic manipulations one can check that \rf{eq:iV1} is satisfied provided \rf{eq:kineqF0} holds. Equation~\rfn{eq:iA1} leads directly to the kinetic equation obeyed by the axial-vector coefficient \bel{eq:kineqA0} k^\mu \partial_\mu \, {\cal A}^\nu_{(0)} (x,k) = 0, \quad k_\nu \,{\cal A}^\nu_{(0)} (x,k) = 0. \end{eqnarray} Using \rf{eq:kineqA0} and the orthogonality condition \rfn{eq:iP1} we can now check that \rf{eq:iS1} is also satisfied.
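The zeroth-order algebra can be illustrated with a short numerical sketch (the mass, momentum, and random seed below are arbitrary choices): given a scalar ${\cal F}_{(0)}$ and an axial vector orthogonal to an on-shell $k$, the definitions \rfn{eq:rV0a} and \rfn{eq:rS0a} reproduce the relations \rfn{eq:rF0}--\rfn{eq:iS0}:

```python
# Check that V0 and S0, built from F0 and A0 via (rV0a) and (rS0a), satisfy
# the zeroth-order algebraic relations. Conventions: g = diag(1,-1,-1,-1),
# eps_{0123} = +1 (a convention choice; the relations are convention independent).
import itertools
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])

# Levi-Civita symbol with lower indices.
eps = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    q, sgn = list(perm), 1
    for i in range(4):
        for j in range(i + 1, 4):
            if q[i] > q[j]:
                sgn = -sgn
    eps[perm] = sgn

rng = np.random.default_rng(1)
m = 1.3
kvec = rng.normal(size=3)
k = np.array([np.sqrt(m**2 + kvec @ kvec), *kvec])   # on-shell: k.k = m^2

F0 = 0.7                                             # arbitrary scalar coefficient
a = rng.normal(size=4)
A0 = a - ((g @ k) @ a) / m**2 * k                    # enforce k_mu A0^mu = 0

V0 = k * F0 / m                                      # Eq. (rV0a)
S0 = -np.einsum('mnab,a,b->mn', eps, k, A0) / m      # Eq. (rS0a)
S0t = 0.5 * np.einsum('mnrs,rs->mn', eps, g @ S0 @ g)  # dual tensor

assert np.isclose((g @ k) @ V0, m * F0)              # (rF0)
assert np.isclose((g @ k) @ A0, 0)                   # (iP0)
assert np.allclose(k @ S0, 0)                        # (iV0)
assert np.allclose(S0t @ k + m * (g @ A0), 0)        # (rA0)
print("zeroth-order relations verified")
```

The same script also confirms the dual form \rfn{eq:rS0ad}, since the relation (rA0) checked above is its direct contraction with $k^\beta$.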
\subsection{Second order} \label{sec:second} By studying the zeroth and first orders of the semiclassical expansion we have found that the basic coefficient functions are the scalar and axial-vector components. Their leading-order terms ${\cal F}_{(0)}(x,k)$ and ${\cal A}^\nu_{(0)} (x,k)$ satisfy the kinetic equations \rfn{eq:kineqF0} and \rfn{eq:kineqA0}. The axial-vector coefficient should be (in the zeroth and first orders) orthogonal to the four-vector~$k$. If the functions ${\cal F}_{(0)}(x,k)$ and ${\cal A}^\nu_{(0)} (x,k)$ are known, all other coefficient functions in the zeroth order can be determined through the algebraic relations \rfn{eq:rP0}, \rfn{eq:rV0a}, and \rfn{eq:rS0a}. We emphasize that although the system of equations derived above is consistent up to the first order in $\hbar$ (a property demonstrated in several previous studies), it is not sufficient to determine the first-order coefficient functions. We are missing dynamic equations that could be used to determine the evolution of the coefficient functions ${\cal F}_{(1)}(x,k)$ and ${\cal A}^\nu_{(1)} (x,k)$. This is expected, since we have just seen that the zeroth order is not sufficient to determine the evolution of the functions ${\cal F}_{(0)}(x,k)$ and ${\cal A}^\nu_{(0)} (x,k)$ --- this requires going to the first order. Thus, the functions ${\cal F}_{(1)}(x,k)$ and ${\cal A}^\nu_{(1)} (x,k)$ should be obtained from the analysis of the second order. Such an analysis is completely analogous to that done in the first order and, in fact, leads to the same form of equations: \bel{eq:kineqF1} k^\mu \partial_\mu {\cal F}_{(1)}(x,k) = 0, \end{eqnarray} \bel{eq:kineqA1} k^\mu \partial_\mu {\cal A}^\nu_{(1)} (x,k) = 0, \quad k_\nu {\cal A}^\nu_{(1)} (x,k) = 0.
\end{eqnarray} If ${\cal F}_{(1)}$ and ${\cal A}^\nu_{(1)}$ are determined, the quantities ${\cal P}^{(1)}$, ${\cal V}^{(1)}_\mu$, and ${\cal S}^{(1)}_{\mu \nu}$ are obtained from Eqs.~\rfn{eq:rP1a}, \rfn{eq:rV1a}, and \rfn{eq:rS1a}, respectively. \section{Exact solutions} \label{sec:exact} It is very interesting to observe that the algebraic structure of the equilibrium coefficient functions, defined by Eqs.~\rfn{eq:Wid1}--\rfn{eq:Wid7}, is consistent with the zeroth-order equations obtained from the semiclassical expansion of the Wigner function discussed in Sec.~\ref{sec:zeroth}, see Eqs.~\rfn{eq:rF0}--\rfn{eq:iS0}. This suggests that the global and extended global equilibrium distributions can indeed be constructed from the functions \rfn{eq:Weqpxk2} and \rfn{eq:Weqmxk2}, provided they additionally fulfill the kinetic equations \rfn{eq:kineqF0} and \rfn{eq:kineqA0}. We have to emphasize here, however, that the equilibrium coefficient functions defined by Eqs.~\rfn{eq:FEeqpm}--\rfn{eq:SEeqpm} specify only the leading-order terms in $\hbar$ of the ``true'' equilibrium function that solves the kinetic equation~\footnote{Our approach is based on the form postulated in Ref.~\cite{Becattini:2013fla} that may be missing some important quantum contributions. In particular, the functions ${\cal W}_{\rm eq}(x,k)$ are always on the mass shell, hence they neglect off-shell quantum propagation of particles.}.
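The role of the Killing condition in the free-streaming equations \rfn{eq:kineqF0} and \rfn{eq:kineqA0} can be illustrated with a short numerical sketch (the values of $1/T$, the angular velocity, and the mass are arbitrary choices): for a rigid-rotation field $\beta^\mu = b_0(1,-\Omega y,\Omega x,0)$ the Boltzmann weight $e^{-\beta\cdot p}$ obeys $p^\mu\partial_\mu e^{-\beta\cdot p}=0$ for every on-shell momentum, while a generic, non-Killing choice does not:

```python
# Free-streaming check: p^mu d_mu (beta.p) vanishes for a Killing beta field
# (rigid rotation) but not for a generic, non-Killing one.
import numpy as np

b0, Om, m = 1.0 / 0.15, 0.2, 0.5        # hypothetical 1/T, angular velocity, mass
g = np.diag([1.0, -1.0, -1.0, -1.0])

def beta_dot_p(x, p, killing=True):
    t, x1, y, z = x
    if killing:                          # rigid rotation: a Killing field
        beta_up = b0 * np.array([1.0, -Om * y, Om * x1, 0.0])
    else:                                # time-dependent 1/T: not a Killing field
        beta_up = b0 * (1.0 + 0.3 * t) * np.array([1.0, 0.0, 0.0, 0.0])
    return beta_up @ g @ p               # beta_mu p^mu

rng = np.random.default_rng(2)
x = rng.normal(size=4)                   # arbitrary space-time point
pv = rng.normal(size=3)
p = np.array([np.sqrt(m**2 + pv @ pv), *pv])   # on-shell momentum

h = 1e-5
def drift(killing):
    grad = np.array([(beta_dot_p(x + h * e, p, killing)
                      - beta_dot_p(x - h * e, p, killing)) / (2 * h)
                     for e in np.eye(4)])
    return p @ grad                      # p^mu d_mu (beta.p)

print(abs(drift(True)) < 1e-6)   # True: rigid rotation leaves exp(-beta.p) invariant
print(abs(drift(False)) > 1e-3)  # True: the non-Killing field does not
```

The vanishing drift reflects the identity $p^\mu p^\nu \partial_\mu \beta_\nu = 0$, which holds whenever $\partial_\mu\beta_\nu + \partial_\nu\beta_\mu = 0$.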
To summarize our findings we can write: \begin{eqnarray} {\cal F}^{(0)} &=& {\cal F}_{\rm eq}, \label{eq:FC1} \\ {\cal P}^{(0)} &=& 0, \label{eq:PC1} \\ {\cal V}^{(0)}_\mu &=& {\cal V}_{\rm eq, \mu}, \label{eq:VC1}\\ {\cal A}^{(0)}_\mu &=& {\cal A}_{\rm eq, \mu}, \label{eq:AC1}\\ {\cal S}^{(0)}_{\mu \nu} &=& {\cal S}_{{\rm eq}, \mu \nu}, \label{eq:SC1} \end{eqnarray} in the zeroth order, and similarly: \begin{eqnarray} {\cal P}^{(1)} &=& -\frac{1}{2m} \, \partial^\mu {\cal A}_{\rm eq, \mu}, \label{eq:rP1EQ} \\ {\cal V}^{(1)}_\mu &=& \frac{1}{m} \left(k_\mu {\cal F}^{(1)} - \frac{1}{2} \partial^\nu {\cal S}_{\rm eq, \nu \mu} \right), \label{eq:rV1EQ} \\ {\cal S}_{\mu \nu}^{(1)} &=& \frac{1}{2m} \left(\partial_\mu {\cal V}_{\rm eq, \nu} - \partial_\nu {\cal V}_{\rm eq, \mu} \right) - \frac{1}{m} \epsilon_{\mu \nu \alpha \beta} k^\alpha {\cal A}_{(1)}^\beta, \label{eq:rS1EQ} \end{eqnarray} in the first order. Let us now check the constraints imposed on the equilibrium coefficient functions by Eqs.~\rfn{eq:kineqF0} and \rfn{eq:kineqA0}. One can easily find that they lead to the equations: \bel{eq:kineqFC1} k^\mu \partial_\mu {\cal F}_{\rm eq}(x,k) = 0, \end{eqnarray} \bel{eq:kineqAC1} k^\mu \partial_\mu \, {\cal A}^\nu_{\rm eq} (x,k) = 0, \quad k_\nu \,{\cal A}^\nu_{\rm eq}(x,k) = 0. \end{eqnarray} Using Eqs.~\rfn{eq:FEeqpm} and \rfn{eq:AEeqpm} in Eqs.~\rfn{eq:kineqFC1} and \rfn{eq:kineqAC1} we conclude that the kinetic equations are exactly fulfilled if the $\beta_\mu$ field is the Killing vector defined by Eqs.~\rfn{eq:Killing} and \rfn{eq:Killingsol}, while the parameter $\xi$ and the spin polarization tensor $\omega_{\mu\nu}$ are constant (this implies that the parameter $\zeta$ defined by \rf{eq:zeta} is also constant).
Consequently, the kinetic equations considered in this work (and also in the previous works that used the same mathematical setup) do not constrain the spin polarization tensor $\omega_{\mu\nu}$ to be equal to the thermal vorticity $\varpi_{\mu\nu}$. In the semiclassical approach discussed here, both tensors should be constant but may not be related to each other. This situation corresponds to extended global equilibrium rather than to global equilibrium. Most likely, the equality of the tensors $\omega_{\mu\nu}$ and $\varpi_{\mu\nu}$ (a fact expected on very general thermodynamic grounds, see Sec.~\ref{sec:global}) could follow from the proper entropy maximization. The present approach, however, does not offer any reliable method for such a calculation. We note that the first-order equations~\rfn{eq:kineqF1} and \rfn{eq:kineqA1} are decoupled in our equilibrium scheme, and thus we assume below that ${\cal F}^{(1)}(x,k)={\cal A}^{(1)}_\mu(x,k)=0$. It is also possible that the relation $\omega_{\mu\nu} = \varpi_{\mu\nu}$ may be necessary for the collision term to vanish. The form of the latter is, however, not known. As we have mentioned above, in this work we assume that any Wigner function of the form \rfn{eq:wig_expansion}, with the coefficient functions given by Eqs.~\rfn{eq:FEeqpm}--\rfn{eq:SEeqpm}, yields a vanishing collision integral. \section{Local conservation laws} \label{sec:con} Having explored consequences of the assumption that the equilibrium Wigner function exactly satisfies the kinetic equation \rfn{eq:eqforW}, we now turn to a discussion of approximate solutions. Usually, they are obtained by demanding that only certain moments of the kinetic equation \rfn{eq:eqforW} yield zero. The selection of such moments for particles with spin is, however, not obvious and one of the aims of this work is to give some insight into this problem.
To set the stage, we discuss in this section local conservation laws, which suggest which moments of \rfn{eq:eqforW} may be relevant for the construction of the hydrodynamic framework. \subsection{Charge current} \label{sec:cc} Expressing the charge current ${\cal N}^\alpha(x)$ in terms of the Wigner function ${\cal W}(x,k)$ we obtain~\cite{deGroot:1980} \begin{eqnarray} {\cal N}^\alpha (x) &=& {\rm tr} \int d^4k \, \gamma^\alpha \, {\cal W}(x,k) = \int d^4k \, {\cal V}^\alpha (x,k). \label{eq:Nalphacal1} \end{eqnarray} In the equilibrium case we use Eqs.~\rfn{eq:VC1} and \rfn{eq:rV1EQ} for ${\cal V}^\alpha(x,k)$. In this way we find \begin{eqnarray} {\cal N}^\alpha_{\rm eq} (x) &=& N^\alpha_{\rm eq}(x) + \delta N^\alpha_{\rm eq}(x), \label{eq:Nalphacal2} \end{eqnarray} where \begin{eqnarray} N^\alpha_{\rm eq} (x) &=& \frac{1}{m} \int d^4k \, k^\alpha {\cal F}_{\rm eq} (x,k) \label{eq:Nalpha} \end{eqnarray} and \begin{eqnarray} \delta N^\alpha_{\rm eq}(x) &=& - \frac{\hbar}{2m} \int d^4k \, \partial_\lambda {\cal S}_{\rm eq}^{\lambda \alpha}(x,k). \label{eq:dNalpha} \end{eqnarray} We have assumed here that ${\cal F}^{(1)}(x,k)=0$, which is a trivial solution of the kinetic equation~\rfn{eq:kineqF1}. The charge current should be conserved, which is expressed by the equation \bel{eq:Ncon} \partial_\alpha N^\alpha_{\rm eq}(x) = 0. \end{eqnarray} Here we used the property $\partial_\alpha \, \delta N^\alpha_{\rm eq}(x) = 0$, which follows from the antisymmetry of the tensor ${\cal S}_{\rm eq}^{\lambda \alpha}(x,k)$. One can check that \rf{eq:Ncon} holds in (extended) global equilibrium, due to~\rf{eq:kineqFC1}. In (extended) local equilibrium \rf{eq:Ncon} becomes a condition for the hydrodynamic fields $\beta_\mu(x)$, $\xi(x)$, and $\omega_{\mu \nu}(x)$, which may vary in space and time.
Substituting \rf{eq:FEeqpm} into \rf{eq:Nalpha} we obtain \bel{eq:Nalpha1} N^\alpha_{\rm eq} = 4\cosh(\zeta) \sinh(\xi) \int \frac{d^3p}{(2\pi)^3 E_p} \, p^\alpha \,e^{-\beta \cdot p }, \end{eqnarray} which agrees with Eq.~(12) from \rfc{Florkowski:2017ruc}. Doing the integral over the momentum, one finds that the charge current is proportional to the flow vector, \bel{Nmu} N^\alpha_{\rm eq} = n u^\alpha, \end{eqnarray} where \bel{nden} n = 4 \, \cosh(\zeta) \sinh(\xi)\, n_{(0)}(T) \end{eqnarray} is the charge density~\footnote{One should include also the contribution from \rf{eq:dNalpha} to the charge current. We intend to analyze this issue in a separate paper~\cite{Florkowski:2018}. }. Here $n_{(0)}(T) = \langle(u\cdot p)\rangle_0$ is the number density of spin-0, neutral Boltzmann particles, obtained using the thermal average \bel{avdef} \langle \cdots \rangle_0 \equiv \int \f{d^3p}{(2\pi)^3 E_p} (\cdots) \, e^{- \beta \cdot p}. \end{eqnarray} \subsection{Energy-momentum and spin tensors} \label{sec:tmunu} \subsubsection{GLW formulation} \label{sec:slmnGLW} Adopting the kinetic-theory framework derived by de Groot, van Leeuwen, and van Weert in Ref.~\cite{deGroot:1980}, where the energy-momentum tensor is expressed directly by the trace of the Wigner function, we can use the following expression \bel{eq:tmunu1} T^{\mu\nu}_{\rm GLW}(x)=\frac{1}{m}{\rm tr} \int d^4k \, k^{\mu }\,k^{\nu }{\cal W}(x,k)=\frac{1}{m} \int d^4k \, k^{\mu }\,k^{\nu } {\cal F}(x,k). \end{eqnarray} In the equilibrium case, we consider \rf{eq:tmunu1} up to the first order in $\hbar$ using Eq.~\rfn{eq:FC1} and setting ${\cal F}^{(1)}(x,k)=0$, similarly as in the case of the charge current. Hence, with the help of \rf{eq:FEeqpm} we obtain \bel{eq:tmunu2} T^{\mu\nu}_{\rm GLW}(x)=4 \cosh (\zeta) \cosh (\xi) \int \frac{d^3p}{(2 \pi )^3 E_p}p^{\mu }p^{\nu }e^{-\beta \cdot p}. 
\end{eqnarray} In this way we reproduce the perfect-fluid formula given earlier in \rfc{Florkowski:2017ruc}, \bel{Tmn} T^{\mu\nu}_{\rm GLW}(x) &=& (\varepsilon + P ) u^\mu u^\nu - P g^{\mu\nu}, \end{eqnarray} where the energy density and pressure are given by the expressions \bel{enden} \varepsilon = 4 \, \cosh(\zeta) \cosh(\xi) \, \varepsilon_{(0)}(T) \end{eqnarray} and \bel{prs} P = 4 \, \cosh(\zeta) \cosh(\xi) \, P_{(0)}(T), \end{eqnarray} respectively. In analogy to the density $n_{(0)}(T)$, we define the auxiliary quantities $\varepsilon_{(0)}(T) = \langle(u\cdot p)^2\rangle_0$ and $P_{(0)}(T) = -(1/3) \langle \left[ p\cdot p - (u\cdot p)^2 \right] \rangle_0$. The energy-momentum tensor should be conserved, hence we demand \bel{eq:Tcon} \partial_\alpha T^{\alpha\beta}_{\rm GLW}(x) = 0. \end{eqnarray} Similarly to the case of charge conservation, one can check that \rf{eq:Tcon} holds in (extended) global equilibrium, provided~\rf{eq:kineqFC1} is satisfied. Again, in (extended) local equilibrium \rf{eq:Tcon} becomes a condition (strictly speaking, four equations) for the hydrodynamic fields: $\beta_\mu(x)$, $\xi(x)$, and $\omega_{\mu \nu}(x)$. The GLW spin tensor has the following form~\cite{deGroot:1980} \bel{eq:Smunulambda_de_Groot1} S^{\lambda , \mu \nu }_{\rm GLW} =\frac{\hbar}{4} \, \int d^4k \, {\rm tr} \left[ \left( \left\{\sigma ^{\mu \nu },\gamma ^{\lambda }\right\}+\frac{2 i}{m}\left(\gamma ^{[\mu }k^{\nu ]}\gamma ^{\lambda }-\gamma ^{\lambda }\gamma ^{[\mu }k^{\nu ]}\right) \right) {\cal W}(x,k) \right]. \end{eqnarray} For dimensional reasons, we have included here the Planck constant. Its presence implies that in equilibrium we may take the leading-order expression for the Wigner function and assume ${\cal W}(x,k)={\cal W}_{\rm eq}(x,k)$.
Using \rftwo{eq:Weqpxk2}{eq:Weqmxk2} in \rf{eq:Smunulambda_de_Groot1}, performing the appropriate traces, and then carrying out the integration over $k$ we get \begin{eqnarray} S^{\lambda , \mu \nu }_{\rm GLW}&=&\frac{\hbar \sinh (\zeta) \cosh(\xi)}{m^2\zeta }\int dP \, e^{-\beta \cdot p} p^{\lambda } \left(m^2\omega ^{\mu\nu}+2 p^{\alpha }p^{[\mu }\omega ^{\nu ]}{}_{\alpha } \right) \label{eq:Smunulambda_de_Groot22} \nonumber \\ &=& \frac{\hbar w}{4 \zeta} u^\lambda \omega^{\mu\nu} + \frac{2 \hbar \sinh (\zeta) \cosh(\xi)}{m^2\zeta} s^{\lambda , \mu \nu }_{\rm GLW}, \label{eq:Smunulambda_de_Groot2} \end{eqnarray} where we have introduced the spin density $w$ defined by the expression~\cite{Florkowski:2017ruc} \bel{eq:w} w = 4 \sinh(\zeta) \cosh(\xi) n_{(0)}(T), \end{eqnarray} the auxiliary tensor \begin{eqnarray} s^{\lambda , \mu \nu }_{\rm GLW} = Au^{\lambda}u^{\alpha}u^{[\mu }\omega ^{\nu ]}{}_{\alpha }+ B\left(\Delta ^{\lambda \alpha }u^{[\mu }\omega ^{\nu ]}{}_{\alpha }+u^{\lambda }\Delta ^{\alpha [\mu }\omega ^{\nu ]}{}_{\alpha }+u^{\alpha }\Delta ^{\lambda [\mu }\omega ^{\nu ]}{}_{\alpha}\right), \end{eqnarray} and the thermodynamic coefficients \begin{eqnarray} B=-\frac{1}{\beta} \left(\varepsilon_{(0)}+P_{(0)}\right), ~~~A=\frac{1}{\beta}\left[3 \varepsilon_{(0)}+\left(3 + \frac{m^2}{T^2}\right) P_{(0)}\right]=-3B+\frac{m^2}{T}P_{(0)}. \end{eqnarray} Since the energy-momentum tensor derived in Ref.~\cite{deGroot:1980} is symmetric, the spin tensor \rfn{eq:Smunulambda_de_Groot2} should also be conserved (see, for example, \rf{eq:spin_ang_mntm}) \begin{eqnarray} \partial_\lambda S^{\lambda , \mu \nu }_{\rm GLW}(x) = 0. \label{eq:SGLWcon} \end{eqnarray} This formula implies that the angular-momentum conservation holds separately for the orbital and spin parts.
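As a numerical sketch (the values $T=0.15$ and $m=0.5$ are arbitrary choices, not taken from the text), the auxiliary integrals $n_{(0)}$, $\varepsilon_{(0)}$, and $P_{(0)}$ can be evaluated by simple quadrature in the local rest frame, where $u\cdot p = E_p$. This also confirms the classical ideal-gas identity $P_{(0)} = T\, n_{(0)}$ and the last equality in the definition of the coefficient $A$:

```python
# Quadrature for the thermal averages <...>_0 in the local rest frame,
# where u.p = E_p and the Boltzmann weight is exp(-E_p/T).
import numpy as np

T, m = 0.15, 0.5                       # hypothetical temperature and mass
p = np.linspace(0.0, 40.0 * T + m, 200001)
E = np.sqrt(p**2 + m**2)
w = np.exp(-E / T)

def trap(y):                           # simple trapezoid rule on the uniform grid
    return float(np.sum((y[1:] + y[:-1]) / 2) * (p[1] - p[0]))

n0 = trap(p**2 * w) / (2 * np.pi**2)          # n_(0)   = <u.p>_0
eps0 = trap(p**2 * E * w) / (2 * np.pi**2)    # eps_(0) = <(u.p)^2>_0
P0 = trap(p**4 / E * w) / (6 * np.pi**2)      # P_(0)   = -(1/3)<p.p - (u.p)^2>_0

B = -T * (eps0 + P0)                          # B = -(1/beta)(eps0 + P0), beta = 1/T
A = T * (3 * eps0 + (3 + m**2 / T**2) * P0)

print(np.isclose(P0, T * n0, rtol=1e-6))      # ideal-gas identity: True
print(np.isclose(A, -3 * B + m**2 / T * P0))  # coefficient relation: True
```

The identity $P_{(0)} = T\,n_{(0)}$ follows from integrating $P_{(0)}$ by parts and is the Boltzmann-gas equation of state underlying the perfect-fluid form \rfn{Tmn}.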
At this point, it is interesting to stress that the coefficient function ${\cal F}_{\rm eq}(x,k)$ involves all hydrodynamic variables, i.e., $\beta_\mu$, $\xi$, and the spin polarization tensor $\omega_{\mu \nu}$ --- altogether 11 independent functions. This makes the system of Eqs.~\rfn{eq:Ncon} and \rfn{eq:Tcon} insufficient to determine their space-time dependence unless some other information is taken into account. One possibility is to assume a local equilibrium state, as defined at the end of Sec.~\ref{sec:bconcepts} (the third point). In this case the spin polarization tensor is equal to the thermal vorticity and the number of independent equations becomes equal to the number of unknown functions. However, since the spin polarization tensor depends on the space-time gradients of the field $\beta_\mu$ in this case, the conservation laws become second-order partial differential equations. Clearly, they do not resemble standard hydrodynamic equations and it is not obvious at the moment how one can treat and solve them. Another possibility is to introduce extended local equilibrium (the fourth point discussed at the end of Sec.~\ref{sec:bconcepts}) and to treat the spin polarization tensor and thermal vorticity as independent quantities. The evolution of the $\omega_{\mu\nu}$ components should follow from the angular momentum conservation, which for the case discussed in this section reduces to \rf{eq:SGLWcon}. This approach was originally proposed in \rfc{Florkowski:2017ruc} with a phenomenological version of the spin tensor that agrees with the first term in the second line of \rf{eq:Smunulambda_de_Groot2}.
\subsubsection{Canonical version} \label{sec:slmnCAN} The canonical forms of the energy-momentum and spin tensors, $T^{\mu\nu}_{\rm can}(x)$ and $S^{\lambda , \mu \nu }_{\rm can}(x)$, can be obtained directly from the Dirac Lagrangian by applying the Noether theorem~\cite{Itzykson:1980rh}: \bel{eq:tmunu1can1} T^{\mu\nu}_{\rm can}(x)= \int d^4k \,k^{\nu } {\cal V}^\mu(x,k) \end{eqnarray} and \begin{eqnarray} S^{\lambda , \mu \nu }_{\rm can}(x) &=& \frac{\hbar}{4} \, \int d^4k \,\text{tr}\left[ \left\{\sigma ^{\mu \nu },\gamma ^{\lambda }\right\} {\cal W}(x,k) \right] \nonumber \\ &=& \frac{\hbar}{2} \epsilon^{\kappa \lambda \mu \nu} \int d^4k \, {\cal A}_{ \kappa}(x,k) \equiv \frac{\hbar}{2} \epsilon^{\kappa \lambda \mu \nu} \, {\cal A}_{ \kappa}(x). \label{eq:Smunulambda_canonical1} \end{eqnarray} Here we have used the anticommutation relation $\left\{\sigma ^{\mu \nu },\gamma ^{\lambda }\right\} = -2 \epsilon^{\mu\nu\lambda\kappa} \gamma_\kappa \gamma_5$ to express directly the canonical spin tensor by the axial-vector coefficient function ${\cal A}_{\kappa}(x,k)$. Including the components of ${\cal V}^\mu(x,k)$ up to the first order in the equilibrium case we obtain \bel{eq:tmunu1can2} T^{\mu\nu}_{\rm can}(x) = T^{\mu\nu}_{\rm GLW}(x) + \delta T^{\mu\nu}_{\rm can}(x) \end{eqnarray} where \bel{deltaTmunu} \delta T^{\mu\nu}_{\rm can}(x) = -\frac{\hbar}{2m} \int d^4k k^\nu \partial_\lambda {\cal S}^{\lambda \mu}_{\rm eq}(x,k) = -\partial_\lambda S^{\nu , \lambda \mu }_{\rm GLW}(x). \end{eqnarray} The canonical energy-momentum tensor should be exactly conserved, hence, in analogy to \rf{eq:Tcon} we require \bel{eq:Tconcan} \partial_\alpha T^{\alpha\beta}_{\rm can}(x) = 0. \end{eqnarray} It is interesting to observe that the conservation laws \rfn{eq:Tcon} and \rfn{eq:Tconcan} are consistent, since $\partial_\mu \, \delta T^{\mu\nu}_{\rm can}(x) = 0$. The latter property follows directly from the definition of $\delta T^{\mu\nu}_{\rm can}(x) $, see \rf{deltaTmunu}. 
For the equilibrium spin tensor it is enough to consider the axial-vector component in \rf{eq:Smunulambda_canonical1} in the zeroth order, $ {\cal A}^{(0)}_{ \kappa}(x,k)= {\cal A}_{\rm eq, \kappa}(x,k)$. Then, using \rf{eq:AEeqpm} in \rf{eq:Smunulambda_canonical1} and carrying out the integration over the four-momentum $k$ we get \begin{eqnarray} S^{\lambda , \mu \nu }_{\rm can} &=& \frac{\hbar \sinh(\zeta)\cosh(\xi)}{\zeta} \int dP \, e^{-\beta \cdot p}\left(\omega ^{\mu \nu } p^{\lambda}+\omega ^{\nu \lambda } p^{\mu}+\omega ^{\lambda \mu } p^{\nu}\right) \nonumber \\ &=& \frac{\hbar w}{4 \zeta} \left( u^\lambda \omega^{\mu\nu} + u^\mu \omega^{\nu \lambda} + u^\nu \omega^{\lambda \mu} \right) \nonumber \\ &=& S^{\lambda , \mu \nu }_{\rm GLW} + S^{\mu , \nu \lambda }_{\rm GLW}+ S^{\nu , \lambda \mu }_{\rm GLW}. \label{eq:Smunulambda_canonical2} \end{eqnarray} It is interesting to notice that the energy-momentum tensor \rfn{eq:tmunu1can2} is not symmetric. In such a case, the spin tensor is not conserved and its divergence is equal to the difference of the energy-momentum tensor components. For the case discussed in this section we obtain \begin{eqnarray} \partial_\lambda S^{\lambda , \mu \nu }_{\rm can}(x) = T^{\nu\mu}_{\rm can} - T^{\mu\nu}_{\rm can} = -\partial_\lambda S^{\mu , \lambda \nu }_{\rm GLW}(x) + \partial_\lambda S^{\nu , \lambda \mu }_{\rm GLW}(x). \label{eq:Scancon} \end{eqnarray} One can immediately check, using the last line of \rf{eq:Smunulambda_canonical2}, that \rf{eq:Scancon} is consistent with the conservation of the spin tensor in the GLW approach. \subsubsection{Pseudo-gauge transformation} \label{sec:PsG} In the previous section we discussed the energy-momentum and spin tensors obtained from the canonical formalism and related them to the expressions introduced by de Groot, van~Leeuwen, and van~Weert. In this section we demonstrate that the two versions of the tensors are connected by a pseudo-gauge transformation. 
Indeed, if we introduce the tensor $\Phi^{\lambda, \mu\nu}$ defined by the relation \bel{Phi} \Phi^{\lambda, \mu\nu} \equiv S^{\mu , \lambda \nu }_{\rm GLW}-S^{\nu , \lambda \mu }_{\rm GLW}, \end{eqnarray} we can write \bel{psg1} S^{\lambda , \mu \nu }_{\rm can}= S^{\lambda , \mu \nu }_{\rm GLW} -\Phi^{\lambda, \mu\nu} \end{eqnarray} and \bel{psg2} T^{\mu\nu}_{\rm can} = T^{\mu\nu}_{\rm GLW} + \frac{1}{2} \partial_\lambda \left( \Phi^{\lambda, \mu\nu}+\Phi^{\mu, \nu \lambda} + \Phi^{\nu, \mu \lambda} \right). \end{eqnarray} Here, we have used the property that both $S^{\lambda , \mu \nu }_{\rm GLW} $ and $\Phi^{\lambda, \mu\nu} $ are antisymmetric with respect to the exchange of the last two indices. Equations \rfn{psg1} and \rfn{psg2} are an example of the pseudo-gauge transformations discussed widely in the literature~\cite{Hehl:1976vr}. The most common use of such a transformation is connected with a change from the canonical formalism to the Belinfante one \cite{Belinfante:1940} --- it yields a symmetric energy-momentum tensor and eliminates the spin tensor completely. In a very recent work, it has been argued that the use of tensors that differ by a pseudo-gauge transformation leads to different predictions for measurable quantities such as the spectrum and polarization of particles~\cite{Becattini:2018duy}. The results presented in this work can be useful to study such effects in more detail within explicitly defined hydrodynamic models. \subsubsection{Hydrodynamics from moments of the kinetic equations} \label{sec:hydro} In this section we finally analyze the question of how a hydrodynamic framework can be constructed from the kinetic theory, namely, which moments of the kinetic equations should be included to derive the hydrodynamic equations. 
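Before turning to the moments, we note that the purely algebraic content of the pseudo-gauge relation \rfn{psg1} is easy to check numerically, using only the antisymmetry of $\omega^{\mu\nu}$ and the GLW structure $S^{\lambda,\mu\nu}_{\rm GLW}\propto u^\lambda\omega^{\mu\nu}$. A minimal sketch (the numerical values of $u^\mu$, $\omega^{\mu\nu}$, and the scalar prefactor standing in for $\hbar w/4\zeta$ are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary placeholder data: flow vector u^mu, antisymmetric spin
# polarization tensor omega^{mu nu}, and a scalar c standing in for
# hbar*w/(4*zeta).
u = rng.normal(size=4)
M = rng.normal(size=(4, 4))
omega = M - M.T
c = 0.37

# GLW spin tensor, S^{lam, mu nu} = c * u^lam * omega^{mu nu}
S_glw = c * np.einsum('l,mn->lmn', u, omega)

# canonical spin tensor: cyclic sum S^{lam,mu nu} + S^{mu,nu lam} + S^{nu,lam mu}
S_can = (S_glw
         + np.einsum('mnl->lmn', S_glw)
         + np.einsum('nlm->lmn', S_glw))

# pseudo-gauge potential Phi^{lam, mu nu} = S^{mu, lam nu} - S^{nu, lam mu}
Phi = np.einsum('mln->lmn', S_glw) - np.einsum('nlm->lmn', S_glw)

# Phi is antisymmetric in its last two indices, and S_can = S_glw - Phi
assert np.allclose(Phi, -np.transpose(Phi, (0, 2, 1)))
assert np.allclose(S_can, S_glw - Phi)
```

The \texttt{einsum} subscript permutations implement the index reorderings of the rank-three tensors.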
As long as we concentrate on the charge, energy, and momentum conservation laws, the answer is known --- we should consider the zeroth and first moments of the kinetic equation \bel{eq:kineqFt} k^\mu \partial_\mu {\cal F}_{\rm eq}(x,k) = 0. \end{eqnarray} In this way we obtain \rf{eq:Ncon} and \rf{eq:Tcon}. In any case, the conservation laws for charge, energy, and momentum are not sufficient to determine the dynamics of spin and they should be supplemented by information coming from the equation for the axial coefficient of the equilibrium Wigner function. The latter can be rewritten in the following form \begin{eqnarray} 0 &=& k^\alpha \partial_\alpha \, \,\int dP\,e^{-\beta \cdot p }\, \frac{\sinh(\zeta)\, }{\zeta} \left[ \delta^{(4)}(k-p) e^{\xi} + \delta^{(4)}(k+p) e^{-\xi} \right] \, \tilde{\omega }_{\mu \nu}\,p^{\nu} . \label{eq:h0} \end{eqnarray} If we multiply \rf{eq:h0} by the four-vector $k_\beta$, contract it with the Levi-Civita tensor $\epsilon^{\mu\beta\gamma\delta}$, and then integrate the resulting equation over $k$, we obtain the conservation of the spin tensor in the GLW version, see \rf{eq:SGLWcon}.~\footnote{We recall that in the derivation of the hydrodynamic equations we do not assume that the kinetic equations are fulfilled but only expect that their specific moments vanish. We also note that the choice of the moments is not obvious. Some hints in this respect can be obtained, for instance, by comparing exact solutions of the kinetic equations with those of the hydrodynamic equations, see Ref.~\cite{Tinti:2015xra}. } This observation suggests that the form of the spin tensor derived by de Groot, van Leeuwen, and van Weert is, in fact, a very natural choice for the hydrodynamic treatment of spin. This would also indicate that one should attempt to derive hydrodynamic equations with spin using the GLW expression for the spin tensor. This can be done in a similar way as in Ref.~\cite{Florkowski:2017ruc}. 
However, it is not obvious at the moment how \rf{eq:Smunulambda_de_Groot2} can be included in a consistent construction of the hydrodynamic picture~\cite{Florkowski:2018}. We close this section with a remark concerning the hydrodynamic equations used in \cite{Gao:2012ix}. Equations (13) and (14) from this work imply that the flow vector $u^\mu$ satisfies the Killing equation, hence it is constant (see the end of Appendix \ref{sec:Killing}). Consequently, the vorticity considered in this work is zero and no conclusions about the vorticity-polarization coupling can be drawn from the analysis presented in \cite{Gao:2012ix}. \section{Summary and conclusions} \label{sec:summary} In this work we have compared thermodynamic and kinetic approaches used to study relations between the spin polarization tensor and fluid vorticity in systems consisting of spin-${\nicefrac{1}{2}}$ particles. We have first discussed the thermodynamic approach that refers to general properties of global thermal equilibrium with a rigid-like rotation. Such a framework demonstrates directly that the spin-polarization and thermal-vorticity tensors are indeed equal in global equilibrium (for asymmetric energy-momentum tensors). Then, we have turned to the discussion of the kinetic approach based on the concept of the semiclassical expansion of the Wigner function. We have analyzed in more detail the case where the Wigner functions satisfy kinetic equations with a vanishing collision term. We have found, in contrast to many earlier claims found in the literature, that this approach does not imply a direct relation between the thermal vorticity and spin polarization, except for the fact that the two should be constant in global equilibrium (we have dubbed this state an extended global equilibrium). Finally, we have outlined procedures for obtaining hydrodynamic equations from the kinetic equations with spin. 
In the GLW case the energy-momentum tensor is symmetric and the spin tensor is conserved, while in the canonical case the energy-momentum tensor has an antisymmetric part and the spin tensor is not conserved. Nevertheless, in both cases the total angular momentum is always conserved. We have also found that the two approaches are connected by a pseudo-gauge transformation, which we have explicitly constructed. This observation opens up new perspectives for studies of hydrodynamics with spin. From a broader point of view we notice that the classical part of the canonical energy-momentum tensor is symmetric, hence, it is suitable for use in the context of the general theory of relativity, which is a classical theory. Our results fill the gap between two apparently different approaches to study polarization. They indicate the importance of including the collision term in kinetic calculations involving the Wigner function. This may shed light on the form of the equilibrium distribution (Wigner) functions in connection with entropy production processes. An open question remains to what extent the equilibrium distribution functions used in this work remain a good approximation to more accurate, quantum equilibrium Wigner functions (with particles not necessarily on the mass shell). \acknowledgments We thank F. Becattini and E. Speranza for many illuminating discussions. This work was supported in part by the Polish National Science Center Grant No. 2016/23/B/ST2/00717.
\section{Introduction} The latest developments in network science have largely contributed to a better understanding of the structure and dynamics of many real-world complex systems \cite{Barrat08:book,Newman010:book,Boccaletti06:PR}. As a matter of fact, research done during the last 20 years has allowed us to take key steps in our comprehension of seemingly diverse phenomena such as the large-scale spreading of diseases \cite{Vespignani2015,Arruda2018}, information dissemination \cite{Newman010:book}, cascading failures \cite{stanley2010}, diffusion dynamics \cite{Gomez2013, tejedor2018,masuda2017} and, more recently, how multilayer systems work \cite{Kivela2014,boccaletti2014,Aleta2019}. These advances are not only at a theoretical level. The increasing availability of new and rich data, as well as our computational capabilities, has made it possible to move from studying synthetic models to characterizing and modeling realistic systems. During these years, networks have been studied from many different angles, ranging from more theoretically grounded studies (in the best tradition of graph theory) to fully data-driven models. Sometimes the architecture of the substrate network is known and thus can be modeled explicitly. However, it is often the case that the networks are synthetic, either because we do not know the real connection patterns or because we need to simplify the structure of the system to enable analytical approximations. In the latter scenario, one reasonable assumption is to generate random graphs, so that one gets rid of possible correlations and isolates the impact of the connectivity among the system's constituents on its dynamics. Besides, random versions are often very useful as null models that allow one to single out which properties of the system are truly unexpected and which are not \cite{cimini2019,payrato2019}. 
Among the many results that can be highlighted, perhaps the most useful ones are those that relate the structure of networks to their dynamics through the analysis of the spectral properties of the adjacency or Laplacian matrices of such networks. For instance, it has been shown that it is possible to characterize the critical properties of a disease-spreading process in terms of the largest eigenvalue of the adjacency matrix of the network on top of which the dynamics takes place \cite{Vespignani2015,Arruda2018}. Admittedly, the fact that the epidemic threshold, i.e., the point beyond which the system experiences a macroscopic outbreak, can be expressed in terms of topological properties makes it possible to study the effects of the topology on the dynamics of complex networked systems. Another important example of the previous relationship between structure and dynamics is given by synchronization phenomena, where one finds that the stability of a fully synchronized system can be studied in terms of the spectral properties of the substrate network \cite{Barrat08:book,Newman010:book,Boccaletti06:PR}. In this paper, we follow the line of research mentioned above and study a class of networks that is often found in natural and artificial systems, namely, bipartite graphs. Within the classes of networks that have been analyzed in the last two decades, bipartite graphs have gone unnoticed in many regards, for instance, in relation to their spectral properties. We intend to fill this gap by studying the localization and spectral properties of random bipartite graphs within random matrix theory (RMT) approaches. This viewpoint has been successfully used to study some topological~\cite{MMRS19}, spectral~\cite{MAM15,MM19,GAC18}, eigenvector~\cite{MAM15,MM19}, and transport~\cite{MAM13} properties of ER-type random networks with a special focus on universality. 
Moreover, we have also performed scaling studies on other random network models, such as multilayer and multiplex networks~\cite{MFR17,MFR17b} and random geometric and random rectangular graphs~\cite{AMG18}. The rest of the paper is organized as follows. In Sec.~\ref{model} we define the random bipartite graph model we shall use in our study. Then, in Sec.~\ref{entropy} we perform a scaling analysis of the eigenvector properties (characterized by the Shannon or information entropy) of our bipartite graph model. The scaling analysis allows us to define a universal parameter of the model that we validate in Sec.~\ref{spectra} with the scaling of the spectral properties (characterized by the distribution of ratios of consecutive energy-level spacings). We summarize our results in Sec.~\ref{conclusions}, where we also discuss possible applications within the domain of ecosystems and their stability. \section{Bipartite graph model} \label{model} We consider bipartite graphs composed of two disjoint sets with $m$ and $n-m$ vertices each, such that there are no adjacent vertices within the same set, where $n$ is the total number of vertices in the bipartite graph. The connectivity between both sets is quantified by the parameter $\alpha$, defined as the ratio of existing adjacent pairs to the total number of possible adjacent pairs; that is, vertices are isolated when $\alpha=0$, whereas the bipartite graph is complete for $\alpha=1$. Vertices are connected randomly. We add self-edges to our bipartite graph model and further consider all edges to have random strengths, which turns our bipartite graph model into an RMT model. 
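Such graphs are straightforward to sample. A minimal sketch follows (the function name and the use of \texttt{numpy} are our own choices; the nonzero entries follow the Gaussian weights with variance $(1+\delta_{ij})/2$ that define the ensemble):

```python
import numpy as np

def bipartite_adjacency(n, m, alpha, rng):
    """Sample one adjacency matrix of the (n, m, alpha) random bipartite
    graph: two sets of m and n-m vertices, cross-set edges present with
    probability alpha, plus random self-edges on every vertex.  Nonzero
    entries are Gaussian with variance (1 + delta_ij)/2."""
    A = np.zeros((n, n))
    # self-edges: diagonal entries, variance (1 + 1)/2 = 1
    A[np.diag_indices(n)] = rng.normal(0.0, 1.0, size=n)
    # random cross-set edges between the two sets, variance 1/2
    present = rng.random((m, n - m)) < alpha
    w = rng.normal(0.0, np.sqrt(0.5), size=(m, n - m)) * present
    A[:m, m:] = w
    A[m:, :m] = w.T
    return A

A = bipartite_adjacency(100, 25, 0.5, np.random.default_rng(7))
print(np.count_nonzero(A[:25, 25:]) / (25 * 75))  # cross-block density close to alpha
```

For $\alpha=0$ only the diagonal survives (the Poisson ensemble), while for $\alpha=1$ and $m=n/2$ one obtains a complete bipartite graph with Gaussian edge strengths.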
Therefore, we define the corresponding adjacency matrices as members of the ensemble of $n\times n$ sparse real symmetric matrices whose non-vanishing elements are statistically independent random variables drawn from a normal distribution with zero mean $\left\langle A_{ij} \right\rangle=0$ and variance $\left\langle |A_{ij}|^2 \right\rangle=(1+\delta_{ij})/2$. According to this definition, a diagonal adjacency random matrix is obtained for $\alpha=0$, which is known as the Poisson ensemble in RMT terms. In Fig.~\ref{Fig1}, we show examples of adjacency matrices of random bipartite graphs with $n=100$ vertices and some combinations of $m$ and $\alpha$. Note that when labeling the vertices according to the set they belong to, the adjacency matrices of bipartite graphs have a block structure. Here we define $m$ (resp. $n-m$) as the number of vertices of the smaller (bigger) set. In this respect, the case $m=n/2$ is a limiting case where both sets have the same number of vertices, $m=n-m$. Moreover, the case $m=1$ is another limiting case in which the smaller set consists of a single vertex. Thus, in what follows we will consider random bipartite graphs characterized by the parameter set $(n,m,\alpha)$ with $1\le m\le n/2$ and $0\le \alpha \le 1$. Notice that the case $m>n/2$ is redundant because it is equivalent to the interchange of the sets. \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{Fig1.pdf} \caption{Nonzero adjacency matrix elements of random bipartite graphs for some combinations of $m$ and $\alpha$: (a) $m=n/2$ and $\alpha =0.2$, (b) $m=n/4$ and $\alpha =0.75$, (c) $m=n/5$ and $\alpha =0.5$, (d) $m=n/10$ and $\alpha =0.25$. In all cases $n=100$.} \label{Fig1} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{Fig2.pdf} \caption{Shannon entropies $S^k$ of the eigenvectors of ten realizations of the adjacency matrices shown in Fig.~\ref{Fig1}. 
Dashed lines in panels (b-d) separate groups of entropies characterized by different average values.} \label{Fig2} \end{figure*} \section{Eigenvector properties. Scaling and universality} \label{entropy} In this study, we characterize the eigenvectors of random bipartite graphs by using information or Shannon entropy, which for the eigenvector $\Psi^k$ is given as \begin{equation} \label{S} S^k = -\sum_{j=1}^n \left| \Psi^k_j \right|^2 \ln \left| \Psi^k_j \right| ^2 \ . \end{equation} $S^k$ measures the number of principal components of the eigenvector $\Psi^k$ in a given basis. Therefore, the latter quantity is a good measure of eigenvector localization/delocalization. In fact, this quantity has already been used to characterize quantitatively the complexity and localization properties of the eigenvectors of the adjacency matrices of several random network models (see examples in~\cite{MAM15,MM19,MFR17,MFR17b,AMG18} and references therein). Below we use exact numerical diagonalization to compute the eigenvectors $\Psi^k$ and eigenvalues $\lambda_k$ ($k=1\ldots n$) of the adjacency matrices of large ensembles of random bipartite graphs characterized by the parameter set $(n,m,\alpha)$. In Fig.~\ref{Fig2}, we present the Shannon entropies $S^k$ of the eigenvectors of ten realizations of the adjacency matrices shown in Fig.~\ref{Fig1}. Note that for $m=n/2$ all rows of the adjacency matrix have the same average number of nonzero off-diagonal elements, see Fig.~\ref{Fig1}(a), therefore the corresponding eigenvectors are expected to be equivalent and they should have similar entropies; this can be verified in Fig.~\ref{Fig2}(a). In contrast, for any $m<n/2$, $m$ rows of the adjacency matrix have a larger number of nonzero off-diagonal elements than the remaining $n-m$ rows, see Figs.~\ref{Fig1}(b-d). 
Hence, as can be seen in Figs.~\ref{Fig2}(b-d), the entropies of the corresponding eigenvectors can be grouped into two sets characterized by different average values $\left\langle S \right\rangle$ (see the dashed lines in these panels, which separate the two sets having different averages). Despite these differences, since we want to use the average entropy to find scaling properties of random bipartite graphs, and for this purpose we need a single quantity regardless of the specific graph, we compute averages over all available eigenvectors, thus taking into account the contribution of both eigenvector sets. From definition~(\ref{S}), it follows that $\left\langle S \right\rangle=0$ when $\alpha=0$, since the eigenvectors of the (diagonal) adjacency matrices of our random bipartite graph model have only one non-vanishing component, with magnitude equal to one. On the other hand, for $\alpha=1$ the bipartite graph is complete and $\left\langle S \right\rangle$ attains its maximal value, $S_{\tbox{MAX}}$, for a given combination of $n$ and $m$. Thus, when $0<\alpha<1$ we should observe $0<\left\langle S \right\rangle<S_{\tbox{MAX}}$. In Fig.~\ref{Fig3} we present the average Shannon entropy $\left\langle S \right\rangle$ as a function of the connectivity parameter $\alpha$ for the eigenvectors of random bipartite graphs and for several parameter combinations. We observe that the curves of $\left\langle S \right\rangle$, for any combination of $n$ and $m$, have a very similar functional form as a function of $\alpha$: they show a smooth transition from approximately zero to $S_{\tbox{MAX}}$ when $\alpha$ increases from $\alpha\sim 0$ (mostly isolated vertices) to one (complete bipartite graphs). Recall that when $\left\langle S \right\rangle \approx 0$ the corresponding eigenvectors are localized (i.e.,~$\left\langle S \right\rangle \approx 0$ defines the localized regime). 
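The two limiting values of Eq.~(\ref{S}) invoked here, $S=0$ for a single-component eigenvector and $\ln n$ for a uniformly spread one, can be illustrated with a short numerical sketch (matrix size and random seed are arbitrary choices):

```python
import numpy as np

def shannon_entropy(vecs):
    """Shannon entropies S^k of the columns of an eigenvector matrix."""
    p = np.abs(vecs) ** 2
    with np.errstate(divide='ignore', invalid='ignore'):
        terms = np.where(p > 0, p * np.log(p), 0.0)
    return -terms.sum(axis=0)

n = 200
rng = np.random.default_rng(1)

# alpha = 0 limit: a diagonal (Poisson) matrix has basis-vector eigenvectors,
# i.e. a single non-vanishing component, so S^k = 0 for every k
_, v_loc = np.linalg.eigh(np.diag(rng.normal(size=n)))

# fully delocalized reference: a uniform vector attains the maximal value ln(n)
s_uniform = shannon_entropy(np.full((n, 1), 1.0 / np.sqrt(n)))[0]

print(shannon_entropy(v_loc).mean())   # ~ 0
print(s_uniform - np.log(n))           # ~ 0
```
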
In contrast, when $\left\langle S \right\rangle \approx S_{\tbox{MAX}}$, the corresponding eigenvectors are delocalized. Thus, the curves of $\left\langle S \right\rangle$ versus $\alpha$ in Fig.~\ref{Fig3} display the delocalization transition of the eigenvectors of our random bipartite model. As complementary information, in Fig.~\ref{Fig4} we report $S_{\tbox{MAX}}$, i.e.,~the value of $\left\langle S \right\rangle$ at $\alpha=1$, of random bipartite graphs for several combinations of $n$ and $m$. \begin{figure}[t] \centering \includegraphics[width=0.6\columnwidth]{Fig3.pdf} \caption{Average Shannon entropy $\left\langle S \right\rangle$ as a function of the connectivity $\alpha$ for random bipartite graphs (of sizes ranging from $n=100$ to 800) for several values of $m$ (as indicated in the panels). Each symbol was computed by averaging over $10^6$ eigenvectors.} \label{Fig3} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.55\columnwidth]{Fig4.pdf} \caption{Maximum values of the Shannon entropy $S_{\tbox{MAX}}$ as a function of the bipartite graph size $n$ for several values of $m$. The thick black line corresponds to $\ln(n/2.07)$, the approximate value of $\left\langle S \right\rangle_{\tbox{GOE}}$. The arrow indicates decreasing $m$.} \label{Fig4} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.6\columnwidth]{Fig5.pdf} \caption{Average information entropy $\left\langle S \right\rangle$ normalized to $S_{\tbox{MAX}}$ as a function of the connectivity $\alpha$. Same data as in Fig.~\ref{Fig3}.} \label{Fig5} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.55\columnwidth]{Fig6.pdf} \caption{Localization--to--delocalization transition point $\alpha^*$ (defined as the value of $\alpha$ for which $\left\langle S \right\rangle/S_{\tbox{MAX}} \approx 0.5$) as a function of the bipartite graph size $n$ for several values of $m$. Dashed lines are the fittings of the data with Eq.~(\ref{scalingEq1}). 
The arrow indicates decreasing $m$.} \label{Fig6} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.6\columnwidth]{Fig7.pdf} \caption{Average information entropy $\left\langle S \right\rangle$ normalized to $S_{\tbox{MAX}}$ as a function of the scaling parameter $\xi$, see Eq.~(\ref{xi}). Same data as in Fig.~\ref{Fig3}. Dashed vertical lines indicate the width of the transition region $\Delta$, defined as the full width at half maximum of the functions $d\left\langle S \right\rangle/d\xi$ vs.~$\xi$.} \label{Fig7} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.55\columnwidth]{Fig8.pdf} \caption{Scaled curves of the Shannon entropy for random bipartite graphs with several values of $m/n$. Arrows indicate decreasing $m/n$. All curves correspond to interpolated data with $n=800$. Inset: Width of the transition region $\Delta$ as a function of $m/n$.} \label{Fig8} \end{figure} It is important to stress that in our graph model with fixed $n$ the maximal number of nonzero adjacency matrix elements is obtained when $\alpha=1$ and $m=n/2$, but even in this case half of the off-diagonal adjacency matrix elements are equal to zero. Therefore, the adjacency matrices of our random bipartite graphs never reproduce the Gaussian Orthogonal Ensemble (GOE) of RMT --- the GOE is a random matrix ensemble formed by real symmetric random matrices $\bf A$ whose entries are statistically independent random variables drawn from a normal distribution with zero mean and variance $\left\langle |A_{ij}|^2\right\rangle=(1+\delta_{ij})/2$, see e.g.~\cite{metha}. Accordingly, one should expect $S_{\tbox{MAX}}<\left\langle S \right\rangle_{\tbox{GOE}}$, where $\left\langle S \right\rangle_{\tbox{GOE}}\approx\ln(n/2.07)$ is the average entropy of the (random and delocalized) eigenvectors of the GOE. 
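The GOE reference value $\left\langle S \right\rangle_{\tbox{GOE}}\approx\ln(n/2.07)$ can be reproduced with a few lines of code; a sketch (matrix size and the number of realizations are our own choices):

```python
import numpy as np

def goe(n, rng):
    # GOE member: real symmetric, entry variance (1 + delta_ij)/2
    X = rng.normal(size=(n, n))
    return (X + X.T) / 2.0

def mean_entropy(A):
    # average Shannon entropy of the eigenvectors of A
    _, vecs = np.linalg.eigh(A)
    p = vecs ** 2
    with np.errstate(divide='ignore', invalid='ignore'):
        terms = np.where(p > 0, p * np.log(p), 0.0)
    return float(np.mean(-terms.sum(axis=0)))

n, rng = 200, np.random.default_rng(3)
S_goe = np.mean([mean_entropy(goe(n, rng)) for _ in range(10)])
print(S_goe, np.log(n / 2.07))   # both ~ 4.57
```
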
However, surprisingly, we observe that $S_{\tbox{MAX}} \approx \left\langle S \right\rangle_{\tbox{GOE}}$ for $m=n/2$, while $S_{\tbox{MAX}}<\left\langle S \right\rangle_{\tbox{GOE}}$ indeed occurs for any $m<n/2$, see Fig.~\ref{Fig4}. Also, from Fig.~\ref{Fig4}, we can clearly see that \begin{equation} \label{SMAX} S_{\tbox{MAX}}\propto\ln(n) \ . \end{equation} Therefore, we can conclude that the maximal-entropy setup in our random bipartite graph model corresponds to $m=n/2$ and $\alpha=1$, for which GOE statistics is observed for $\left\langle S \right\rangle$ and expected for other quantities. Now, to ease our analysis, in Fig.~\ref{Fig5} we plot again $\left\langle S \right\rangle$, but normalized to $S_{\tbox{MAX}}$. The fact that these curves, plotted in semi-log scale, are just shifted to the left on the $\alpha$-axis when increasing $n$ makes it possible to hypothesize the existence of a scaling parameter that depends on $n$. In order to check this hypothesis and find such a scaling parameter, we first define a quantity that characterizes the position of the curves $\left\langle S \right\rangle/S_{\tbox{MAX}}$ on the $\alpha$-axis: we choose the value of $\alpha$, which we label $\alpha^*$, for which $\left\langle S \right\rangle/S_{\tbox{MAX}} \approx 0.5$. Notice that $\alpha^*$ characterizes the localization--to--delocalization transition of the eigenvectors of our graph model. Figure~\ref{Fig6} shows the localization--to--delocalization transition point $\alpha^*$ as a function of $n$ for several values of $m$. The linear trend of the data (in log-log scale) in Fig.~\ref{Fig6} implies a power-law relation of the form \begin{equation} \label{scalingEq1} \alpha^* = \mathcal{C} n^\delta \ . \end{equation} In fact, Eq.~(\ref{scalingEq1}) provides very good fits to the data. The values of $\delta$ obtained from the fits are very close to $-0.978$ for all the values of $m$ considered here (see the dashed lines in Fig.~\ref{Fig6}). 
From this observation we can propose the following scaling for the curves $\left\langle S \right\rangle/S_{\tbox{MAX}}$ vs~$\alpha$: By plotting again the curves of $\left\langle S \right\rangle/S_{\tbox{MAX}}$ now as a function of $\xi$, that we define as the ratio between the connectivity parameter and the localization--to--delocalization transition point \begin{equation} \label{xi} \xi = \frac{\alpha}{\alpha^*} \propto \frac{\alpha}{n^\delta} \approx \alpha n^{0.978}, \end{equation} we observe that curves for different bipartite graph sizes $n$ collapse on top of a single curve, see Fig.~\ref{Fig7}. That is, we conclude that, for a given ratio $m/n$, $\xi$ fixes the localization properties of the eigenvectors of the adjacency matrices of the random bipartite graphs, such that, when $\xi<1/10$ [$10<\xi$] the eigenvectors are localized [extended], while the localization--to--delocalization transition occurs in the interval $1/10<\xi<10$. Even though we were able to scale the Shannon entropy curves for random bipartite graphs, as shown in Fig.~\ref{Fig7}, there is still a dependence of those {\it universal} curves on the ratio $m/n$. To clearly show this, in Fig.~\ref{Fig8} we report scaled curves of the Shannon entropy for several values of $m/n$ in the localization--to--delocalization transition region. Here we can observe that the larger the ratio $m/n$, the sharper the localization--to--delocalization transition. Thus, we characterize the width of the transition region, that we call $\Delta$, as the full width at half maximum of the functions $d\left\langle S \right\rangle/d\xi$ vs.~$\xi$. In the inset of Fig.~\ref{Fig8} we report $\Delta$ as a function of $m/n$. From this figure, we observe a clear increase of $\Delta$ when decreasing the ratio $m/n$, an increase that seems to saturate for ratios as small as $m/n\sim 1/100$. \begin{figure*}[ht!] 
\centering \includegraphics[width=0.8\textwidth]{Fig9.pdf} \caption{Eigenvalues $\lambda_k$ of the adjacency matrices of random bipartite graphs for several parameter combinations $(m,n,\alpha)$. Columns [rows] are characterized by a fixed $m/n$ [$\xi$]. A single graph realization is considered for each curve. Dashed lines in panels (h) and (i) coincide with those in Figs.~\ref{Fig2}(b) and \ref{Fig2}(d), respectively.} \label{Fig9} \end{figure*} It is worth stressing that once we have found that $\xi$ exists and that this parameter scales the eigenvector properties (characterized by their Shannon entropy) of the model of random bipartite graphs here studied, it is natural to expect that other properties (i.e.,~spectral properties, dynamical properties, transport properties, etc.) of the graph model would also scale with the same parameter. This is what we explore next, when we validate the previous surmise by closely inspecting the corresponding eigenvalues. \section{Spectral properties} \label{spectra} In Fig.~\ref{Fig9}, we present the spectra of the adjacency matrices of random bipartite graphs for several combinations of the parameters $m$, $n$, and $\alpha$. Each panel is characterized by a fixed ratio $m/n$ and a fixed scaling parameter $\xi$. So, from the results in the previous Section, one should expect the four spectra, reported in each of the panels of Fig.~\ref{Fig9} and corresponding to different graph sizes $n$, to fall one on top of the other. This is in fact the case, except for a small-size effect clearly observed in Fig.~\ref{Fig9}(d,g) when $n=100$. It is also interesting to note that the block structure of the adjacency matrix clearly reveals itself in the spectra, for large $\xi$ and small ratio $m/n$, see Fig.~\ref{Fig9}(h-i). To characterize the spectral properties of the random bipartite graph model, we use the ratios of consecutive energy-level spacings $r$, which are defined as follows. 
Let $\{ \lambda \}$ be a set of ordered eigenvalues; the corresponding spacings $s_k$ are \begin{equation} s_k = \frac{\lambda_{k+1}-\lambda_k}{\left\langle \lambda \right\rangle} \ , \label{s} \end{equation} where $\left\langle \lambda \right\rangle$ is the local mean eigenvalue density, while the ratios $r_k$ are defined as~\cite{OH07} \begin{equation} r_k = \frac{\mbox{min}(s_k,s_{k-1})}{\mbox{max}(s_k,s_{k-1})} \ , \label{r} \end{equation} such that $r_k\in[0,1]$ $\forall k$. Moreover, the probability distribution function of $r$ in the Poisson limit (which is reproduced by our random bipartite graph model when $\alpha=0$) is~\cite{ABGR13} \begin{equation} P_{\mbox{\tiny P}}(r) = \frac{2}{(1+r)^2} \ . \label{Prp} \end{equation} Another important limit, that we will use as a reference, is the GOE case, for which $P(r)$ takes the form~\cite{ABGR13} \begin{equation} \label{Prg} P_{\mbox{\tiny GOE}}(r) = \frac{27}{4} \frac{r+r^2}{(1+r+r^2)^{5/2}} \ . \end{equation} It is important to stress that the nearest-neighbor energy-level spacing distribution $P(s)$~\cite{metha} is already a well-accepted quantity to measure the degree of {\it chaos} or disorder in complex systems and has been extensively used to characterize spectral properties of complex networks (see examples in~\cite{MAM15,MFR17b,AMG18} and references therein). However, the use of $P(r)$ is more convenient here since it does not require the process known in RMT as spectral unfolding~\cite{metha}, whose implementation for spectra with kinks, such as those in Figs.~\ref{Fig9}(h-i), could be cumbersome. Figure~\ref{Fig10} presents histograms of $P(r)$ for random bipartite graphs with several combinations of the parameters $(m,n,\alpha)$. As in Fig.~\ref{Fig9}, each panel is characterized by a fixed ratio $m/n$ and a fixed scaling parameter $\xi$. 
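Both reference limits are easy to sample numerically. A minimal sketch (matrix size and number of realizations are our own choices) compares the mean ratio $\left\langle r \right\rangle$ with the values implied by Eqs.~(\ref{Prp}) and (\ref{Prg}): $2\ln 2-1\approx 0.386$ for Poisson and, for the GOE, $\approx 0.54$ from the surmise (large-$N$ GOE numerics give a slightly smaller value, $\approx 0.53$):

```python
import numpy as np

def r_ratios(eigs):
    """Ratios r_k = min(s_k, s_{k-1}) / max(s_k, s_{k-1}) of consecutive spacings."""
    s = np.diff(np.sort(eigs))
    return np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])

rng = np.random.default_rng(5)
n, reps = 200, 50

# Poisson limit (alpha = 0): spectrum of a diagonal matrix with iid entries
r_p = np.concatenate([r_ratios(rng.normal(size=n)) for _ in range(reps)])

# GOE reference: real symmetric matrices with variance (1 + delta_ij)/2
def goe_eigs(n, rng):
    X = rng.normal(size=(n, n))
    return np.linalg.eigvalsh((X + X.T) / 2.0)

r_g = np.concatenate([r_ratios(goe_eigs(n, rng)) for _ in range(reps)])

print(r_p.mean())   # ~ 2 ln 2 - 1 ~ 0.386
print(r_g.mean())   # ~ 0.53 for large GOE matrices
```

Note that, as stated in the text, no unfolding is needed: the ratios are insensitive to the local eigenvalue density.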
With Fig.~\ref{Fig10} we verify the invariance of $P(r)$ for fixed $\xi$, except for a small-size effect that is enhanced as $r\to 0$; see the insets in panels (a-c,g-i), where the convergence to a steady $P(r)$ is obtained for large enough $n$. Besides, from Fig.~\ref{Fig10}, we observe the Poisson-to-GOE transition in the shape of $P(r)$ when increasing $\xi$. Also, at the transition borders, i.e.~at $\xi=0.1$ and $\xi=10$, the shape of $P(r)$ is well described by the corresponding RMT predictions in the Poisson and GOE limits, respectively. This confirms our definition of the localization--to--delocalization transition region: $0.1<\xi<10$. Meanwhile, as expected, for intermediate values of $\xi$, see e.g.,~Fig.~\ref{Fig10}(d-f), $P(r)$ has a shape intermediate between $P_{\mbox{\tiny P}}(r)$ and $P_{\mbox{\tiny GOE}}(r)$. \begin{figure*}[ht!] \centering \includegraphics[width=0.8\textwidth]{Fig10.pdf} \caption{Distribution of ratios of consecutive energy-level spacings $P(r)$ for the eigenvalues of the adjacency matrices of random bipartite graphs with several parameter combinations $(m,n,\alpha)$. Columns [rows] are characterized by a fixed $m/n$ [$\xi$]. Each histogram is constructed with $10^6$ ratios. Dashed lines in panels (a-c) [(g-i)] correspond to the RMT prediction for $P(r)$ in the Poisson [GOE] limit, see Eq.~(\ref{Prp}) [Eq.~(\ref{Prg})]. In panels (d-f) both equations, Eqs.~(\ref{Prp}) and~(\ref{Prg}), are shown as dashed lines. Insets are enlargements of the main panels for $r$ close to zero.} \label{Fig10} \end{figure*} Finally, we would like to add that it is quite surprising that even for $m/n=1/10$ the distribution $P(r)$ is very close to $P_{\mbox{\tiny GOE}}(r)$ when $\xi$ is large, see Fig.~\ref{Fig10}(i). Recall that for any $m/n\le 1/2$ the corresponding adjacency matrices have more null than non-null off-diagonal matrix elements (see Fig.~\ref{Fig1}), therefore being very different from members of the GOE. 
Moreover, recall that we found $S_{\tbox{MAX}}\approx \left\langle S \right\rangle_{\tbox{GOE}}$ only for $m/n=1/2$, while $S_{\tbox{MAX}}<\left\langle S \right\rangle_{\tbox{GOE}}$ for any $m/n<1/2$. Therefore, for our random bipartite graph model, we can claim that $P(r)$ is less sensitive to deviations from GOE statistics than $\left\langle S \right\rangle$. \section{Conclusions} \label{conclusions} In this paper we have numerically studied the eigenvector and eigenvalue properties of the adjacency matrices of random bipartite graphs. Specifically, we have considered random bipartite graphs with self-loops, where all non-vanishing adjacency matrix elements are Gaussian random variables. Our random bipartite graph model depends on three parameters: the graph size $n$, the graph connectivity $\alpha$, and the size $m$ of the smaller set composing the bipartite graph. First, through a proper scaling analysis of the Shannon entropy of the eigenvectors of the adjacency matrices of such a random bipartite graph model, we defined a scaling parameter $\xi\equiv\xi(n,m,\alpha)$ that fixes the localization properties of the eigenvectors for a given ratio $m/n$. Moreover, our analysis provides a way to predict the localization properties of the random bipartite graphs: for $\xi<0.1$ the eigenvectors are localized, the localization--to--delocalization transition occurs for $0.1<\xi<10$, whereas for $\xi>10$ the eigenvectors are extended. Next, to broaden the applicability of our findings, we demonstrated that for a fixed $\xi$ the spectral properties of the graph model (characterized by the distribution of ratios of consecutive energy-level spacings) are also universal, namely, they do not depend on the specific values of the bipartite graph parameters. The results derived here are important in at least one applied field of research.
Indeed, the study of the stability of ecological systems makes use of the two main ingredients of our study. On the one hand, many ecosystems, including prey-predator and mutualistic systems, are faithfully represented by bipartite graphs, which are assumed to be random matrices when no information about the real structure is known. On the other hand, the analysis of the stability of such systems is often reduced to understanding the eigenvalue and eigenvector structure of the interaction matrices (or their Jacobian). Our results are important insofar as they show that there are universal properties in such random bipartite networks, which might in turn help to understand robust dynamical patterns of such systems regardless of their specific details, such as size and interaction strengths. We plan to explore this potential application in more detail in the near future. \section*{Acknowledgements} JAM-B acknowledges financial support from FAPESP (Grant No.~2019/06931-2), Brazil, and VIEP-BUAP (Grant No.~100405811-VIEP2019) and PRODEP-SEP (Grant No.~511-6/2019.-11821), Mexico. YM acknowledges partial support from the Government of Aragon, Spain, through grant E36-17R (FENOL), by MINECO and FEDER funds (FIS2017-87519-P), and by Intesa Sanpaolo Innovation Center. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
\section{Introduction} \label{sec:intro} Many practical problems in machine learning and data science can be naturally modeled as learning high-dimensional geometric shapes from a given set of data points. In particular, learning simplices from sample points arises in many fields, ranging from bioinformatics to remote sensing \cite{satas2017tumor, chan2009convex, schwartz2010applying}. A simplex is defined as the set of all convex combinations of $K+1$ points in $\mathbb{R}^K$, for $K\in\mathbb{N}$. Formally speaking, our problem can be stated as follows: assume $n$ i.i.d. samples are drawn from a uniform measure over an unknown simplex in $\mathbb{R}^K$. Also, each sample is assumed to be corrupted by additive zero-mean Gaussian noise with a known variance $\sigma^2$. In this regard, the main question that we try to address in this paper is: how large does $n$ need to be, in terms of parameters such as $K$ and $\sigma$, so that the true simplex is approximated up to a reasonable accuracy? Learning of simplices is a well-studied task. Due to its practical importance, many efficient algorithms have been proposed that solve the problem in different scenarios, such as noisy and sparse cases. From a theoretical point of view, however, the problem is less studied, and fundamental results have not been derived for the diverse situations that the problem may face. Several efficient algorithms for learning simplices have already been proposed (see Section \ref{sec:intro:LR}), but they still suffer from a very large sample complexity. In a concurrent line of work, researchers have been deriving information-theoretic bounds on the sample complexity of the problem without putting any restrictions on the efficiency of the corresponding algorithms. Recently, optimal sample complexity bounds have been derived for the noiseless case, where the Maximum Likelihood Estimator (MLE) is used. The output of the MLE for this problem is the minimum-volume simplex which contains all the samples.
In \cite{najafi2021statistical}, Najafi et al. proved that the MLE is a PAC-learning algorithm for finding a $K$-simplex with an asymptotically decreasing Total Variation (TV) distance. They showed that a maximum of $\tilde{O}\left(K^2/\epsilon \right)$\footnote{Logarithmic terms are ignored.} samples are sufficient to estimate the true simplex up to a TV distance of $\epsilon$, for any $\epsilon>0$. However, we do not have access to noiseless data in real-world situations, and the minimum-volume inclusive simplex is no longer a valid answer to the problem. In fact, in noisy regimes the corrupted samples are not necessarily contained inside the true simplex. In this work, we aim at finding the information-theoretic sample complexity of learning simplices in noisy regimes. Mathematically speaking, samples are generated from the following equation: \begin{equation} \boldsymbol{y}_i = \boldsymbol{V} \boldsymbol{\phi}_i + \boldsymbol{z}_i \quad,\quad i=1,\ldots,n, \end{equation} where $\boldsymbol{V}$ is a $K\times (K+1)$ matrix that includes the vertices of the simplex as its columns. Each $\boldsymbol{\phi}_i \in \mathbb{R}^{K+1}$ is drawn from a uniform Dirichlet distribution, and the $\boldsymbol{z}_i \in \mathbb{R}^K$ are sampled from a multivariate Gaussian distribution $\mathcal{N}\left(0, \sigma^2\boldsymbol{I}_K\right)$. We tackle the task of estimating a noisy simplex as a density estimation problem. In particular, we prove that at most $\tilde{O}\left(K^2/\epsilon^2 \right)$ noisy samples from a $K$-simplex are sufficient to estimate the simplex up to a TV distance of at most $\epsilon + \frac{\sigma}{\mathrm{Vol}\left(\mathcal{S}\right)^{\frac{1}{K}}}$, for any $\epsilon>0$. The algorithm that achieves this bound has three main steps. First, we find a ball which, with high probability, contains the main simplex. Then, we quantize this ball by covering it with a finite set of points.
Each combination of $K+1$ such points forms a candidate simplex, and thus a candidate density to approximate the true simplex. Assuming the covering has been done with sufficiently high precision, there exists at least one candidate that has a small TV-distance from the main simplex. In the final step, we choose a density from the candidate set such that, with high probability, it has a small distance from the main simplex. It should be noted that our algorithm does not run in polynomial time, since our main goal is to find information-theoretic bounds on the sample complexity. To the best of our knowledge, our results are the first sample complexity bounds for this problem in noisy regimes, regardless of the efficiency of the corresponding algorithms. \subsection{Related Works} \label{sec:intro:LR} We categorize the existing works based on their focus, which could be either the efficiency of the proposed algorithms or the fundamental sample complexity of the problem. In the former, the main purpose is mostly to provide a heuristic solution with promising results in practice. Papers in the latter category, however, aim to find fundamental, information-theoretic limitations of the problem. In the remainder of this part, we review a number of works from both categories. In \cite{anderson2013efficient}, Anderson et al. proved that $O\left(K^{22}\right)$ noiseless samples suffice to estimate the true simplex via an efficient (polynomial-time) algorithm. The core idea behind their work is to use the third moment and local search techniques from Independent Component Analysis (ICA). In \cite{najafi2021statistical}, it is shown that the sample complexity of the MLE in the noiseless case is $\tilde{O}\left(K^2/\epsilon\right)$, where $\epsilon$ is the permissible TV-distance of the output of the algorithm from the true simplex. They also provide an alternative heuristic, yet efficient, approach as a practical surrogate for the MLE.
To the best of our knowledge, the theoretical study of this problem in the noisy setting is limited to the work of Bhattacharyya et al. \cite{bhattacharyya2020near}. Their main idea is based on a notion named sample compression, which is also used in a seminal work by Ashtiani et al. \cite{ashtiani2018nearly} on learning high-dimensional Gaussian mixture models. In \cite{bhattacharyya2020near}, the authors proved that a noisy simplex can be learned using $O\left(K^2\right)$ samples, assuming that there exists a sample near each vertex of the simplex. In fact, this is a very strong assumption, and a back-of-the-envelope calculation shows that one needs around $\tilde{O}\left((1/\epsilon)^{K}\right)$ samples to guarantee the existence of at least one sample within a distance of $\epsilon$ from every vertex. As mentioned earlier, several heuristic methods have been introduced so far that try to deal with real-world problems in, for example, bioinformatics and hyper-spectral imaging \cite{piper2004object, bioucas2012hyperspectral, lin2013endmember}. In hyper-spectral imaging, one aims at finding the distribution of the constituent elements in an area through examining its remote hyper-spectral images. Each pixel in such an image can be thought of as a random convex combination of some fixed prototype vectors that correspond to the pure elements of that region. The above assumptions turn the problem of finding the constituent minerals and their distribution in the field into estimating an unknown simplex from presumably uniform samples \cite{ambikapathi2011chance, agathos2014robust, zhang2017robust}. Learning of simplices in bioinformatics usually arises in the study of complex tissues. A complex tissue refers to a tissue composed of multiple cell-types (groups of cells with similar characteristics), like blood, brain, pancreas \cite{zuckerman2013self}, or even tumor cells \cite{tolliver2010robust}.
Bulk data from complex tissues, such as gene expression vectors, can again be modeled as convex combinations of the tissue's constituent cell-types. In this line of work, researchers aim at finding the structure of tissues through studying the state of cell-types, which naturally translates into learning a high-dimensional simplex \cite{shoval2012evolutionary, korem2015geometry}. The rest of the paper is organized as follows: In Section \ref{sec:notation}, we formally define the problem and present our notation and definitions. Our main theoretical results, as well as the algorithm that achieves our bounds, are discussed in Section \ref{sec:main}. Finally, Section \ref{sec:conc} concludes the paper and presents some suggestions for future works. \section{Notation and Definitions} \label{sec:notation} As mentioned before, a $K$-simplex is defined as the set of all convex combinations of $K+1$ affinely independent points in $\mathbb{R}^{K}$. Let $\boldsymbol{V}= \left[\boldsymbol{v}_0\vert\boldsymbol{v}_1\vert\cdots\vert \boldsymbol{v}_{K}\right]\in\mathbb{R}^{K\times(K+1)}$ be a matrix whose columns are such points; then the $K$-simplex $\mathcal{S}$ can be defined as follows: \begin{equation*} \mathcal{S} = \left\{ \boldsymbol{V}\boldsymbol{\phi} \bigg\vert~ \boldsymbol{\phi} \in \mathbb{R}^{K+1} ,~ \boldsymbol{\phi} \succ \boldsymbol{0},~ \boldsymbol{\phi}^{T}\boldsymbol{1} = 1 \right\}. \end{equation*} Here, $\boldsymbol{v}_i$ denotes the $i$th vertex of the simplex. Also, let us denote the set of all possible $K$-simplices in $\mathbb{R}^K$ by $\mathbb{S}_{K}$. We denote the uniform probability measure over a simplex $\mathcal{S}$ by $\mathbb{P}_{\mathcal{S}}$, and its probability density function by $f_\mathcal{S}\left(x\right)$.
Thus, $f_\mathcal{S}\left(x\right)$ can be written as: \begin{equation*} f_\mathcal{S}\left(x\right) = \frac{\boldsymbol{1}\left(\boldsymbol{x} \in \mathcal{S}\right)}{\mathrm{Vol}\left(\mathcal{S}\right)}, \end{equation*} where $\mathrm{Vol}\left(\mathcal{S}\right)$ denotes the Lebesgue measure (or simply, the volume) of $\mathcal{S}$. In order to measure the difference between two distributions, we use the total variation distance. Consider two probability measures $\mathbb{P}_1$ and $\mathbb{P}_2$, with respective density functions $f_1$ and $f_2$, defined over $\mathbb{R}^K$. Then, the TV distance between $\mathbb{P}_1$ and $\mathbb{P}_2$ is defined as \begin{equation*} \operatorname{TV}(\mathbb{P}_1, \mathbb{P}_2) \triangleq \sup_{\mathrm{A} \in \mathcal{B}}{|\mathbb{P}_1\left(\mathrm{A}\right) - \mathbb{P}_2\left(\mathrm{A}\right)|} = \frac{1}{2}\|f_1 - f_2\| _1 = \frac{1}{2}\int_{\mathbb{R}^K}\! |f_1(x) - f_2(x)|\, \mathrm{d}x, \end{equation*} where $\mathcal{B}$ is the set of all Borel sets in $\mathbb{R}^K$. \begin{definition}[PAC-learnability of a distribution class in the realizable setting] We say a class of distributions $\mathcal{F}$ is \textbf{PAC learnable} in the realizable setting if there exists a learning method which, for any distribution $g \in \mathcal{F}$ and any $\epsilon,\delta>0$, outputs an estimator $\hat{g}$ using $n \geq \mathrm{poly}\left(1/\epsilon, 1/\delta \right)$ i.i.d. samples from $g$ which, with probability at least $1- \delta$, satisfies \begin{equation} \label{eq: realizable pac learning} \|\hat{g} - g\|_{\mathrm{TV}} < \epsilon.
\end{equation} \end{definition} \begin{definition}[PAC-learnability of a distribution class in the agnostic setting] We say a class of distributions $\mathcal{F}$ is \textbf{PAC learnable} in the agnostic setting if there exists a learning method which, for any distribution $g$ (not necessarily in $\mathcal{F}$) and any $\epsilon,\delta>0$, outputs an estimator $\hat{g}$ using $n \geq \mathrm{poly}\left(1/\epsilon, 1/\delta \right)$ i.i.d. samples from $g$ which, with probability at least $1- \delta$, satisfies \begin{equation*} \|\hat{g} - g\|_{\mathrm{TV}} < C\cdot\min_{f \in \mathcal{F}}\|{f} - g\|_{\mathrm{TV}} + \epsilon. \end{equation*} \end{definition} \begin{definition}[Agnostic PAC-learnability of simplices] Recall that for any $K$-simplex $\mathcal{S}$ in $\mathbb{R}^K$, we define the uniform measure $\mathbb{P}_\mathcal{S}$ over $\mathcal{S}$. We say an algorithm $\mathcal{A}$, which takes a sample set as input and outputs a simplex $\mathcal{S}_{\mathcal{A}} \in \mathbb{S}_{K}$, is an \textbf{agnostic PAC learning method} for learning simplices if, for all $\mathcal{S} \in \mathbb{S}_K$, any $\epsilon_1,\epsilon_2,\delta>0$, and any distribution $\mathbb{G}$ with $\|\mathbb{G} - \mathbb{P}_\mathcal{S}\|_{\mathrm{TV}} \leq \epsilon_1$, given $n \geq \mathrm{poly}\left(1/\epsilon_2, 1/\delta \right)$ i.i.d. samples from $\mathbb{G}$, the following relation holds with probability at least $1-\delta$: \begin{equation*} \|\mathbb{P}_{\mathcal{S}_\mathcal{A}} - \mathbb{P}_{\mathcal{S}} \|_{\mathrm{TV}} < C_1\cdot\epsilon_1 + C_2\cdot\epsilon_2. \end{equation*} We say that the class of $K$-simplices is PAC-learnable in the agnostic setting if such an algorithm exists. \label{df:PacLearningOfsimplices} \end{definition} We also need to define a series of geometric restrictions for the simplex in noisy cases. We discuss the necessity of such definitions at later stages.
Similar to \cite{najafi2021statistical}, let us denote the volume of the largest facet of a $K$-simplex (here, the volume needs to be calculated in $\mathbb{R}^{K-1}$) by $\mathcal{A}_{\max}\left(\mathcal{S}\right)$, and the length of the largest line segment inside the simplex (its diameter) by $\mathcal{L}_{\max}\left(\mathcal{S}\right)$. In this regard, we define the isoperimetricity of a $K$-simplex as follows: \begin{definition}[$\left(\underline{\theta},\bar{\theta}\right)$-isoperimetricity of simplices] A $K$-simplex $\mathcal{S}\in\mathbb{S}_K$ is defined to be $\left(\underline{\theta},\bar{\theta}\right)$-isoperimetric if the following inequalities hold: \begin{align*} \mathcal{A}_{\max}\left(\mathcal{S}\right) ~\leq~ \bar{\theta} \mathrm{Vol}\left(\mathcal{S}\right)^{\frac{K-1}{K}} \quad,\quad \mathcal{L}_{\max}\left(\mathcal{S}\right) ~\leq~ \underline{\theta}K \mathrm{Vol}\left(\mathcal{S}\right)^{\frac{1}{K}}. \end{align*} \end{definition} The overall concept of isoperimetricity for simplices reflects the fact that for a simplex to be (even partially) recoverable from noisy data, it should not be stretched too much in any direction or have highly acute angles. In other words, the sample complexity in noisy regimes is also affected by the geometric shape of the simplex. \begin{definition}[$\epsilon$-representative set] For any $\epsilon>0$, we say that a finite set of distributions $\mathcal{G}$ is an $\epsilon$-representative set for a distribution class $\mathcal{F}$ if for any distribution $f\in \mathcal{F}$, there exists some $g \in \mathcal{G}$ that satisfies \begin{equation*} \mathrm{TV}\left(f, g\right) \leq \epsilon. \end{equation*} \end{definition} Throughout the paper, we use bold lowercase letters for vectors, bold uppercase letters for matrices, light uppercase letters for random variables, and light lowercase letters for realizations of random variables.
In this paper, and for the sake of simplicity in notation, for any fixed $\bar{\theta},\underline{\theta}>0$, whenever we say the class of simplices we actually mean the class of $\left(\underline{\theta},\bar{\theta}\right)$-isoperimetric simplices. \subsection{Problem Definition} In this paper, we aim at proving a special form of agnostic PAC-learnability for simplices. Based on Definition \ref{df:PacLearningOfsimplices}, and in order to show that the class of $K$-simplices is agnostic PAC-learnable, we should propose an algorithm that, for all $\mathcal{S} \in \mathbb{S}_K$, any $\epsilon_1,\epsilon_2,\delta>0$, and any probability measure $\mathbb{G}$ with $\|\mathbb{G} - \mathbb{P}_{\mathcal{S}}\|_{TV} \leq \epsilon_1$, given a dataset $\mathrm{S} = \{\boldsymbol{y}_1, \boldsymbol{y}_2, \cdots, \boldsymbol{y}_n \}$ of i.i.d. samples drawn from $\mathbb{G}$ with $n \geq \mathrm{poly}\left(1/\epsilon_2, 1/\delta \right)$, outputs a simplex $\hat{\mathcal{S}}$ such that \begin{equation} P\left(\|\mathbb{P}_{\widehat{\mathcal{S}}}- \mathbb{P}_{\mathcal{S}}\|_{TV} \geq C_1\cdot\epsilon_1 +C_2\cdot\epsilon_2\right) \leq \delta. \end{equation} In our special case, we assume that samples are drawn from $\mathbb{P}_{\mathcal{S}}$ and then corrupted by additive, independent, zero-mean Gaussian noise with covariance matrix $\sigma^2\boldsymbol{I}$. In fact, this procedure describes the distribution $\mathbb{G}$ in our work. In other words, we assume $\mathbb{G}$ belongs to the class of noisy simplices, which is the set of all convolutions of uniform measures over simplices in $\mathbb{R}^K$ with a zero-mean multivariate Gaussian distribution $\mathcal{N}\left(\boldsymbol{0},\sigma^2\boldsymbol{I}\right)$. Our ultimate goals are the following: First, we intend to show that $\epsilon_1$ in the above configuration, and w.r.t. our particular choice of $\mathbb{G}$, can be chosen as small as $O\left(\mathrm{SNR}^{-1}\right)$.
Second, we aim at deriving an explicit polynomial form for the lower bound $n\ge\mathrm{poly}\left(1/\epsilon_2,\log\left(1/\delta\right)\right)$, which (as we will show in Theorem \ref{Theorem2}) turns out to be $$ n\ge \tilde{O}\left[\frac{K^2}{\epsilon^2_2}\log\frac{1}{\delta}\right]. $$ \section{Statistical Learning of Simplices in the Agnostic Setting} \label{sec:main} Before going through the details, let us first present a sketch of the proof of agnostic PAC-learnability of noisy simplices. ({\it {Bounding the candidate set}}): In order to estimate the true simplex from noisy i.i.d. samples, we first split our data in half and use the first half to restrict the set of all $K$-simplices in $\mathbb{R}^{K}$ (denoted by $\mathbb{S}_{K}$) to a bounded set $\mathbb{S}^{\mathcal{D}}_{K}$, which consists of all the simplices that entirely fall within a finite ball. This way, we can eliminate very distant candidates and thus focus on simplices that lie near the data samples. In this regard, we construct a bounded version of $\mathbb{S}_K$, denoted by $\mathbb{S}^{\mathcal{D}}_{K}$, in a way that, with high probability, includes the true simplex. ({\it {Quantization}}): Then, we quantize this bounded set and create a finite $\epsilon$-representative set of $K$-simplices denoted by $\widehat{\mathbb{S}}^{\mathcal{D}}_{K} = \{\mathcal{S}_1, \mathcal{S}_2, \cdots, \mathcal{S}_M\}$ for $M\in\mathbb{N}$, such that for each simplex $\mathcal{S} \in \mathbb{S}^{\mathcal{D}}_{K}$, there exists some $i\in\{1,2,\cdots,M\}$ with $\|\mathbb{P}_{\mathcal{S}_i}- \mathbb{P}_{\mathcal{S}}\|_{TV} \leq \epsilon$. ({\it {Density selection}}): In the last part of our proof, we use the second half of the data and try to choose the best simplex in $\widehat{\mathbb{S}}^{\mathcal{D}}_{K}$. By the best simplex, we mean the one with minimum TV-distance from the true simplex.
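As a toy numerical reference for these steps, the generative model of Section \ref{sec:intro} and the simple pairwise statistics that the bounding step relies on (see the next subsection) can be sketched as follows; the dimension, vertices, and noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, sigma = 3, 20000, 0.05

# Noisy simplex samples y_i = V phi_i + z_i, with phi_i flat-Dirichlet
# (uniform over the probability simplex) and Gaussian noise z_i.
V = rng.normal(size=(K, K + 1))              # vertices as columns
phi = rng.dirichlet(np.ones(K + 1), size=n)  # n x (K+1) mixing weights
Y = phi @ V.T + sigma * rng.normal(size=(n, K))

# Empirical quantities of the kind used to bound the candidate set:
# the center p is the sample mean, and D averages squared distances
# over n/2 disjoint sample pairs (normalised by the sample size n).
p = Y.mean(axis=0)
pairs = Y.reshape(n // 2, 2, K)
D = np.sum((pairs[:, 0] - pairs[:, 1]) ** 2) / n

# Sanity checks: p estimates the centroid of the vertices, and D
# estimates the trace of the covariance of y.
print(np.linalg.norm(p - V.mean(axis=1)), abs(D - np.trace(np.cov(Y.T))))
```

Both printed values are small, confirming that these cheap statistics concentrate quickly and can anchor a data-driven bounding ball.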
\subsection{Bounding the Candidate Set} In this part, we show how the first half of the dataset can be utilized to bound the set of all candidate simplices into a ball with a finite radius in $\mathbb{R}^K$. This procedure is crucial for later stages of the proof. In this regard, suppose that for any $K$-simplex $\mathcal{S}$ we define $\mathbb{G}_{\mathcal{S}}$ as the noisy version of $\mathbb{P}_{\mathcal{S}}$ as follows: \begin{align} \boldsymbol{y} \sim \mathbb{G}_\mathcal{S} \implies & \boldsymbol{y} = \boldsymbol{x} + \boldsymbol{z}, \nonumber \\ & \boldsymbol{x} = \boldsymbol{V}_{\mathcal{S}}\boldsymbol{\phi}, \quad \boldsymbol{\phi} \sim \mathrm{Dir}\left(1,1,\cdots,1\right), \nonumber \\ & \boldsymbol{z} \sim \mathcal{N}\left(\boldsymbol{0}, \sigma^2 \boldsymbol{\mathrm{I}}\right), \label{noisy simplex definition} \end{align} where $\boldsymbol{V}_{\mathcal{S}}$ denotes the vertex matrix of $\mathcal{S}$. Also, for a noisy simplex $\mathbb{G}_{\mathcal{S}}$ we define the signal-to-noise ratio, or $\mathrm{SNR}$, as \begin{equation} \mathrm{SNR} = \frac{\mathrm{Vol}\left(\mathcal{S}\right)^\frac{1}{K}}{\sigma}. \end{equation} The following lemma shows that, having enough samples from a noisy simplex $\mathbb{G}_{\mathcal{S}}$, one can find a hyper-sphere in $\mathbb{R}^K$ which, with high probability, contains $\mathcal{S}$. \begin{lemma}[Creating $\mathbb{S}^{\mathcal{D}}_K$] Suppose that we have a set of i.i.d. samples $\mathrm{S} = \left\{\boldsymbol{y}_1,\boldsymbol{y}_2,\cdots,\boldsymbol{y}_{2m}\right\}$ from $\mathbb{G}_{\mathcal{S}}$ for $m\in\mathbb{N}$.
If $m\ge 72\underline{\theta}^4e^4\left(\frac{\left(K+1\right)\left(K+2\right)}{K}\right)^2\log{\frac{12}{\delta}}$, then, with probability at least $1- \delta$, the true simplex $\mathcal{S}$ will be confined in a $K$-dimensional sphere with center point $\boldsymbol{p}$ and radius $R$, where $R$ and $\boldsymbol{p}$ are as follows: \begin{align} \mathrm{R} = \sqrt{\frac{4e^2(K+1)(K+2)\cdot\underline{\theta}^{2}\cdot\mathrm{D} }{ 1+ \frac{4e^2(K-2)}{\mathrm{SNR}^2} - \frac{4}{3\underline{\theta}\mathrm{SNR}} }} \left(1+ \frac{\bar{\theta}}{\underline{\theta}^2e^2\mathrm{SNR}\sqrt{K}}\right) \quad,\quad \mathrm{\boldsymbol{p}} = \frac{1}{2m}\sum_{i=1}^{2m}{\boldsymbol{y}_i}, \end{align} where $D$ is defined as $$ \mathrm{D} = \frac{1}{2m}\sum_{i =1}^{m}{\|\boldsymbol{y}_{2i} - \boldsymbol{y}_{2i-1}\|_2^2}. $$ \label{lemma2} \end{lemma} The proof can be found in Appendix \ref{ProofLemma1}. This way, the set $\mathbb{S}^{\mathcal{D}}_K$ can be fixed. Next, we discuss how to quantize this set properly in order to choose an appropriate final candidate for the true simplex. \subsection{Quantization} Let us denote the hyper-sphere of Lemma \ref{lemma2} by $\mathrm{C}^{K}(\boldsymbol{p}, R)$; with high probability, it contains the main simplex. In this part, we choose a set of points $\mathrm{T}^l_\epsilon(\mathrm{C}^{K}(\boldsymbol{p}, R)) = \{\boldsymbol{p}_1, \boldsymbol{p}_2, \cdots, \boldsymbol{p}_l\}$ inside this sphere such that for any point $\boldsymbol{x} \in \mathrm{C}^{K}(\boldsymbol{p}, R)$, there exists some $i\in\{1,2,\cdots,l\}$ such that $\|\boldsymbol{x} - \boldsymbol{p}_i\|_{2} \leq \epsilon$. We call $\mathrm{T}^l_\epsilon(\mathrm{C}^{K}(\boldsymbol{p}, R))$ a covering set for $\mathrm{C}^{K}(\boldsymbol{p}, R)$. One way to construct such a covering set is to draw points uniformly from the sphere.
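This random-covering construction is easy to simulate; the sketch below (an illustration only, with small, arbitrary $K$, $R$, and $\epsilon$) draws uniform points in a ball and empirically checks the covering property on random probe points.

```python
import numpy as np

def uniform_ball(num, K, R, rng):
    # num points uniform in the K-ball of radius R centred at the origin:
    # isotropic Gaussian directions, radii distributed as R * U^(1/K).
    x = rng.normal(size=(num, K))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    return x * (R * rng.random(num) ** (1.0 / K))[:, None]

rng = np.random.default_rng(0)
K, R, eps = 2, 1.0, 0.3
cover = uniform_ball(5000, K, R, rng)    # candidate epsilon-cover
probes = uniform_ball(200, K, R, rng)    # points that must be covered
dists = np.linalg.norm(probes[:, None, :] - cover[None, :, :], axis=2)
print(dists.min(axis=1).max())           # worst probe-to-cover distance
```

With these (generously chosen) numbers, every probe point lies well within $\epsilon$ of some cover point, in line with the high-probability guarantee discussed next.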
We show that if the number of drawn random points exceeds $O\left(\left(1+\frac{2R}{\epsilon}\right)^{2K}\right)$, then, with high probability, they form an $\epsilon$-covering set for $\mathrm{C}^{K}(\boldsymbol{p}, R)$. Suppose that we construct a random covering set as described above and denote it by $\mathrm{T}_{\epsilon}(\mathrm{C}^{K}(\boldsymbol{p}, R))$. Now, using each set of $K+1$ distinct points in $\mathrm{T}_\epsilon(\mathrm{C}^{K}(\boldsymbol{p}, R))$, we can build a $K$-simplex. Assume we collect all such simplices in a set called $\widehat{\mathbb{S}}(\mathrm{C}^{K}(\boldsymbol{p}, R))$, i.e., \begin{equation*} \widehat{\mathbb{S}}(\mathrm{C}^{K}(\boldsymbol{p}, R)) = \left\{\mathcal{S}(\boldsymbol{x}_1, \ldots, \boldsymbol{x}_{K+1}) \bigg\vert~ \boldsymbol{x}_i \in \mathrm{T}_{\epsilon}(\mathrm{C}^{K}(\boldsymbol{p}, R)), ~ i = 1,\ldots,K+1 \right\}. \end{equation*} Obviously, there exist at most $\binom{\vert \mathrm{T}_{\epsilon}(\mathrm{C}^{K}(\boldsymbol{p}, R))\vert}{K+1}$ simplices in $\widehat{\mathbb{S}}(\mathrm{C}^{K}(\boldsymbol{p}, R))$. The following lemma states that this set is a sufficiently good representative of all $\left(\underline{\theta},\bar{\theta}\right)$-isoperimetric $K$-simplices inside $\mathrm{C}^{K}(\boldsymbol{p}, R)$. \begin{lemma} For any $\epsilon \in \left(0,1\right)$, denote the set of all possible $K$-simplices with vertices in $ \mathrm{T}_{\frac{\alpha \epsilon}{K+1}}(\mathrm{C}^{K}(\boldsymbol{p}, R))$ by $\widehat{\mathbb{S}}(\mathrm{C}^{K}(\boldsymbol{p}, R))$. Then, the resulting set is an $\epsilon$-representative set for all $\left(\underline{\theta},\bar{\theta}\right)$-isoperimetric $K$-simplices in $\mathrm{C}^{K}(\boldsymbol{p}, R)$, as long as we have: \begin{equation*} \alpha = \frac{\mathrm{Vol}\left(\mathcal{S}\right)^{\frac{1}{K}}}{5\cdot\bar{\theta}}. \end{equation*} \label{quantization lemma} \end{lemma} The proof of this lemma can be found in Appendix \ref{ProofLemma2}.
This way, the finite candidate set $\mathbb{S}^{\mathcal{D}}_K$ can be formed, and we can jump to the final stage of our algorithm: finding a ``good'' candidate for the true simplex. \subsection{Density Selection} First, let us state a core result from \cite{devroye2012combinatorial} which plays a central role in the remainder of our derivations. At this stage, we have already created the finite set of representative simplices $\widehat{\mathbb{S}}^{\mathcal{D}}_{K}$ and wish to find the best simplex in this set. This can be done using the following theorem: \begin{thm2}[Theorem 6.3 of \cite{devroye2012combinatorial}] Let $\mathcal{F}$ be a finite class of distributions consisting of $M$ distinct members $\{ f_{1}, f_2, \cdots,f_M \}$. Also, suppose we have $n \geq \frac{\log{(3M^2/\delta)}}{2\epsilon^2}$ i.i.d. samples from an arbitrary distribution $g$, for some $\epsilon>0$. Then, there exists a deterministic algorithm $\mathcal{A}$ which outputs a number $j \in \{1,\ldots,M\}$ satisfying the following inequality with probability at least $1-\delta$: \begin{equation} \| f_j - g \|_{TV} \leq 3\cdot \min_{i \in \{1,2,\cdots,M\}} \|f_i - g\|_{TV} +4\epsilon. \end{equation} \label{combinatoeic algorithm} \end{thm2} The proof can be found in the reference. Combining the results of Theorem \ref{combinatoeic algorithm} and Lemmas \ref{lemma2} and \ref{quantization lemma}, we can present one of our main results as follows: the set of all shape-restricted simplices in $\mathbb{R}^K$ which entirely fall inside a sphere of radius $R$ is agnostic PAC-learnable. \begin{thm2}[Agnostic PAC-Learnability of Simplices in $\mathrm{C}^{K}(\boldsymbol{p}, R)$] The class of $\left(\underline{\theta}, \bar{\theta}\right)$-isoperimetric $K$-simplices which are contained within a $K$-dimensional sphere of radius $R$, denoted by $\mathrm{C}^{K}(\boldsymbol{p}, R)$, is PAC-learnable in the agnostic setting.
Specifically, for some $\epsilon_1,\epsilon_2,\delta>0$, assume we have at least $n$ i.i.d. samples from a distribution $\mathbb{G}$ which is at most $\epsilon_1$-far from at least one simplex in $\mathrm{C}^{K}(\boldsymbol{p}, R)$, with \begin{equation} n \geq \frac{2K\cdot(K+1)\log \left(1+ \frac{10\bar{\theta}(K+1)}{\epsilon_2}\cdot\frac{R}{ \mathrm{Vol}\left(\mathcal{S}\right)^{\frac{1}{K}}} \right) + \log\frac{3}{\delta}}{\epsilon_2^2}. \end{equation} Then, there exists an algorithm $\mathcal{A}$ whose output $\mathcal{S}_{\mathcal{A}}$ satisfies the following inequality with probability at least $1-\delta$: \begin{equation} \|\mathbb{P}_{\mathcal{S}_\mathcal{A}} - \mathbb{P}_\mathcal{S}\|_{\mathrm{TV}} \leq 4\cdot\epsilon_1 + 7\cdot\epsilon_2. \end{equation} \label{corrolary1} \end{thm2} The proof can be found in Appendix \ref{proofThm2}. So far, we have proved that for any $\epsilon,R>0$ and $\boldsymbol{p}\in\mathbb{R}^K$, given $n\ge\tilde{O}\left(K^2\log R/\epsilon^2\right)$ samples from ``any'' distribution $\mathbb{G}$ (not necessarily a uniform measure over a simplex), there exists an algorithm $\mathcal{A}$ that outputs a simplex $\mathcal{S}_{\mathcal{A}}$ such that, with high probability, $$ \left\Vert \mathbb{P}_{\mathcal{S}_{\mathcal{A}}}-\mathbb{G} \right\Vert_{\mathrm{TV}} \leq 4 \min_{\mathcal{S}\in\mathrm{C}^{K}(\boldsymbol{p}, R)} \left\Vert \mathbb{G}-\mathbb{P}_{\mathcal{S}} \right\Vert_{\mathrm{TV}} + \epsilon. $$ Note that $\bar{\theta}$, which constrains the shape of the simplices, is also present inside the bounds. In fact, trying to learn a highly stretched simplex from noisy data can become very tricky, since even a small amount of noise can push almost all the samples outside of the simplex. Another limitation of Theorem \ref{corrolary1} is that it requires the simplices to be inside a sphere of radius $R$.
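To make the density-selection step concrete, the following toy sketch implements a Scheffé-type pairwise tournament in one dimension, with uniform-interval candidates standing in for the candidate simplices. It follows the general pairwise-comparison idea behind Theorem \ref{combinatoeic algorithm}, not the exact algorithm analyzed in \cite{devroye2012combinatorial}; the grid, the candidates, and the tie-breaking rule are our own illustrative choices.

```python
import numpy as np

def scheffe_select(pdfs, x_grid, samples):
    # Pairwise Scheffé tournament: for each pair (i, j) the Scheffé set is
    # A = {x : f_i(x) > f_j(x)}; the candidate whose mass on A is closer
    # to the empirical fraction of samples falling in A wins the duel,
    # and the candidate with the most wins is returned.
    dx = x_grid[1] - x_grid[0]
    wins = np.zeros(len(pdfs), dtype=int)
    cell = np.clip(np.searchsorted(x_grid, samples), 0, len(x_grid) - 1)
    for i in range(len(pdfs)):
        for j in range(i + 1, len(pdfs)):
            A = pdfs[i] > pdfs[j]
            emp = A[cell].mean()            # empirical mass of A
            mi = pdfs[i][A].sum() * dx      # mass candidate i assigns to A
            mj = pdfs[j][A].sum() * dx      # mass candidate j assigns to A
            wins[i if abs(mi - emp) <= abs(mj - emp) else j] += 1
    return int(np.argmax(wins))

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 4001)
uniform = lambda a, b: ((x >= a) & (x <= b)) / (b - a)

candidates = [uniform(0.0, 1.0), uniform(0.3, 1.3), uniform(-1.0, 1.0)]
samples = rng.random(2000)                  # the true density is U[0, 1]
print(scheffe_select(candidates, x, samples))
```

Here the tournament returns the index of the candidate closest in TV to the sampling density, which is exactly the guarantee the selection step needs.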
Using Lemma \ref{lemma2}, we show that the boundedness requirement is not an actual necessity if the distribution $\mathbb{G}$ is a noisy version of $\mathbb{P}_\mathcal{S}$ for some $\mathcal{S}\in\mathbb{S}_K$. The following theorem is our main result in this paper: \begin{thm2}[Agnostic PAC-Learnability of Simplices in General] The class of $\left(\underline{\theta}, \bar{\theta}\right)$-isoperimetric $K$-simplices is agnostic PAC-learnable, as long as samples are drawn i.i.d. from a noisy simplex. In other words, given $n\geq \tilde{O}\left({K^2}/{\epsilon_2^2}\right)$ i.i.d. samples from a noisy simplex $\mathbb{G}_\mathcal{S}$, there exists an algorithm which outputs a simplex $\hat{\mathcal{S}}$ that, with high probability, satisfies \begin{equation} \|\mathbb{P}_{\hat{\mathcal{S}}} - \mathbb{P}_\mathcal{S}\|_{\mathrm{TV}} \leq C_1\cdot\epsilon_1 + C_2\cdot\epsilon_2, \end{equation} where $\epsilon_1$ is the TV-distance between the noisy simplex $\mathbb{G}_\mathcal{S}$ and the uniform measure over the true one, $\mathbb{P}_\mathcal{S}$. \label{Theorem2} \end{thm2} The proof of the above theorem can be found in Appendix \ref{proofThm3}. Note that the $\log R$ dependence in the sample complexity of Theorem \ref{corrolary1} has disappeared in that of Theorem \ref{Theorem2}. What remains is to show that whenever $\mathbb{G}=\mathbb{G}_{\mathcal{S}}$, i.e., a noisy version of an actual simplex, the term $\left\Vert \mathbb{G}_{\mathcal{S}}-\mathbb{P}_{\mathcal{S}} \right\Vert_{\mathrm{TV}}$ also decreases as one increases the SNR.
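This decay can be checked numerically in the simplest case $K=1$, where the simplex is the unit interval and the density of $\mathbb{G}_{\mathcal{S}}$ is explicit in terms of Gaussian CDFs (a sanity check only; here $\mathrm{Vol}(\mathcal{S})^{1/K}=1$, so $\mathrm{SNR}=1/\sigma$).

```python
import math
import numpy as np

def tv_uniform_vs_blurred(sigma, num=60001):
    # TV distance between U[0,1] and its Gaussian blur, on a grid.
    # The blurred density is Phi(x/sigma) - Phi((x-1)/sigma), with Phi
    # the standard normal CDF; TV = (1/2) * integral |f_noisy - f_clean|.
    x = np.linspace(-1.0, 2.0, num)
    dx = x[1] - x[0]
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    f_noisy = np.array([Phi(t / sigma) - Phi((t - 1.0) / sigma) for t in x])
    f_clean = ((x >= 0.0) & (x <= 1.0)).astype(float)
    return 0.5 * np.sum(np.abs(f_noisy - f_clean)) * dx

# TV shrinks roughly linearly in sigma, i.e. like 1/SNR:
print(tv_uniform_vs_blurred(0.1), tv_uniform_vs_blurred(0.01))
```

The two printed values differ by roughly a factor of ten, in line with the $O(\mathrm{SNR}^{-1})$ behavior claimed above.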
The following lemma gives an upper bound on the TV-distance between a noisy simplex $\mathbb{G}_\mathcal{S}$ and the uniform measure over the noiseless one, i.e., $\mathbb{P}_\mathcal{S}$: \begin{lemma} For any $\left(\underline{\theta}, \bar{\theta}\right)$-isoperimetric simplex $\mathcal{S}$, the TV-distance between the noisy simplex $\mathbb{G}_{\mathcal{S}}$ defined via \ref{noisy simplex definition} and the noiseless one can be bounded as \begin{equation} \mathrm{TV}\left(\mathbb{G}_{\mathcal{S}}, \mathbb{P}_{\mathcal{S}}\right) \leq 3\frac{(K+1)\bar{\theta}}{\mathrm{SNR}}\cdot\sqrt{K+\sqrt{8K\log\frac{\mathrm{SNR}}{K+1}}} \end{equation} \end{lemma} The proof is given in appendix section \ref{proofLemma3}. This shows that the TV-distance between a simplex and its noisy version is at most $\tilde{O}\left(\mathrm{SNR}^{-1}\right)$, which completes the proof of our claims. \section{Conclusions} \label{sec:conc} We present sample complexity bounds for learning high-dimensional simplices in noisy regimes. Formally speaking, we prove that given a sufficient amount of noisy data, one can estimate a true high-dimensional simplex up to an error which is inversely proportional to the signal-to-noise ratio. Moreover, the sample complexity in the noisy regime has a presumably optimal polynomial dependence on parameters such as the dimension, matching the bounds of the already-solved noiseless case. Our proofs are based on a number of recent techniques which have been previously used for learning Gaussian mixture models. In particular, we use a parameter quantization technique to transform the problem into a density selection task, and ultimately prove the agnostic PAC-learnability of simplices, which gives us the desired sample complexity result. For future directions in this area, one can think of analyzing this problem under a broader noise and distortion model which might match the practice even more.
Also, improving the bounds, especially trying to eliminate the term $O\left(\mathrm{SNR}^{-1}\right)$ from the overall error, is another promising research path. \bibliographystyle{IEEEtran}
\section{Introduction} It is a notoriously difficult problem to find static solutions of Einstein's equations coupled to brane sources. Exact solutions can sometimes be found in supergravity theories in the BPS limit but little is known for non-supersymmetric compactifications. Codimension two branes are in this regard special. In the simplest cases, as particles in 2+1 dimensions \cite{djh}, the branes do not curve the space outside of the source but only create a deficit angle. The simplified dynamics of gravity then allows one to determine many interesting solutions \cite{carroll,navarro,codimension2}. Recently codimension two brane-worlds have also drawn a lot of attention, especially in relation to the cosmological constant problem. In this note we study generalizations of the so called football shaped extra dimensions scenario \cite{carroll,navarro} to include several codimension two branes. Our results can also be repeated almost verbatim for the Supersymmetric Large Extra Dimensions scenario \cite{burgess}, which can be considered as a supersymmetric extension of this model, and more generally for product compactifications where the internal space is a sphere (warped compactifications in $6D$ supergravity have also been considered in \cite{warped}). In \cite{carroll,navarro}, the authors considered a compactification of six dimensional gravity to Minkowski space times a sphere, obtained by tuning the magnetic flux of a $U(1)$ gauge field through the sphere with the bulk cosmological constant. It was found that by placing \emph{equal} tension branes at the antipodal points of the sphere the internal space is deformed into a sphere with a wedge removed (a ``football''). A very interesting feature of this scenario is that the large dimensions remain flat even in the presence of the branes. While the tuning between the tensions can be justified by assuming a $\mathbb{Z}_2$ symmetry, this solution certainly appears very special.
It is the purpose of this paper to show that these types of solutions are quite generic and \emph{no tuning} between the tensions needs to be invoked when several branes are considered. The mathematical problem consists in solving the Liouville equation with singularities, a topic which appears in $2D$ quantum gravity. Quite remarkably we will be able to find explicit solutions for the case with three branes, but solutions exist in general. The space so constructed describes a sphere with conical singularities at the brane locations. This paper is organized as follows. In section 2 we review our model and generalize it to an arbitrary number of branes and curved background. In section \ref{liouvillesec} the problem of determining the metric on the internal space is related to the Liouville equation with singularities. Some background material regarding the solution of the Liouville equation is reviewed in the appendix. In section \ref{3branes} we derive exact solutions for the metric with three branes. In sections \ref{morebranes} and \ref{riemann} we discuss the case where four or more branes are included and consider the scenario where the internal manifold is a Riemann surface. We derive the low energy effective action of the model in section \ref{effectiveaction}. In section \ref{conclusions} we summarize the results. \section{The model} \label{model} In this section we review and generalize the scenario introduced in \cite{carroll,navarro}. For appropriate values of the parameters this is just a truncation of the SLED scenario. The bulk action is $6D$ gravity with cosmological constant coupled to a $U(1)$ gauge field, \begin{equation} S_6=M_6^4 \int d^6x \sqrt{-G}\left(\frac 1 2 R -\frac 1 4 F^2-\lambda\right) \end{equation} The branes are assumed to be minimally coupled and infinitesimal so their action is just the Nambu-Goto action, \begin{equation} S_{branes}=-\sum_{i=1}^N T_i \int d^4 x \sqrt{-g_i} \end{equation} where $g_i$ is the induced metric on each brane.
Thick branes have been considered in \cite{thick}. We will be interested in product compactifications of the form $M_4 \times K$ where $M_4$ is maximally symmetric and $K$ is a compact two dimensional manifold. The metric is given by, \begin{equation} ds^2=g_{\mu\nu}dx^\mu dx^\nu +\psi(z,\bar{z}) dz d\bar{z} \label{ansatz} \end{equation} where for convenience we have introduced complex coordinates on the internal manifold. The branes are located at points $z_i$ in the internal space. Consistently with the equations of motion it is assumed that the gauge field has a magnetic flux threading the internal space, \begin{equation} F=i \ B_0 \ \psi(z,\bar{z}) \ dz\wedge d{\bar{z}}, \label{flux} \end{equation} where $B_0$ is a constant. Using the ansatz (\ref{ansatz}) and (\ref{flux}) one finds (see \cite{carroll}),\footnote{We use normalizations where $\int d^2z \delta(z,\bar{z})=1$.} \begin{eqnarray} R^4_{\mu\nu}&=&\frac 1 2 \left(\lambda-\frac 1 2 B_0^2\right)g_{\mu\nu}\label{einstein1} \\ \partial_z\partial_{\bar{z}} \log \psi&=&-\frac k 2 \psi-\sum_{i=1}^N \frac {T_i} {M_6^4} \delta(z-z_i,\bar{z}-\bar{z}_i), \label{einstein2} \end{eqnarray} where $k$ is the curvature of the internal manifold, \begin{equation} k=\frac \lambda 2 + \frac 3 4 B_0^2. \end{equation} Looking at (\ref{einstein1}) we note that a very remarkable thing has happened: the four dimensional metric does not depend on the brane sources. The only effect of the branes in the vacuum is to change the geometry of the internal space without affecting the vacuum energy of the four dimensional ground state. As pointed out in \cite{porrati,cline}, however, this should not lead to easy enthusiasms regarding solutions of the cosmological constant problem. Eq. (\ref{einstein2}) is the famous Liouville equation describing a two dimensional metric of constant curvature $k$. We will study at length this equation and its solutions in the next section. 
Depending on the value of $B_0$ and $\lambda$ the four dimensional ground state will be de Sitter, anti-de Sitter or Minkowski space,\footnote{This has also been discussed long ago in \cite{randjbar}.} \begin{equation} \begin{cases} \lambda> \frac {B_0^2} 2 ~~~~~~~~~dS_4 \\ \lambda< \frac {B_0^2} 2 ~~~~~~~~~AdS_4\\ \lambda= \frac {B_0^2} 2 ~~~~~~~~~M_4 \end{cases} \label{cases} \end{equation} For the Minkowski and de Sitter case one finds that the curvature $k$ of the internal space is positive. In section \ref{riemann} we will also consider the case with negative $k$ where the ground state is AdS. This leads naturally to compactifications on Riemann surfaces. In \cite{carroll,navarro} the authors considered the case of a brane located at $z=0$. Assuming axial symmetry one readily finds the solution, \begin{equation} \psi=\frac {(1-\alpha_1)^2} k \frac {4(z\bar{z})^{-\alpha_1}}{\big[1+(z\bar{z})^{1-\alpha_1}\big]^2} \label{2branes} \end{equation} where we have defined, \begin{equation} \alpha_1=\frac {T_1} {2\pi M_6^4}. \label{alpha} \end{equation} With a simple change of variables one can see that this is just the metric of a sphere with radius $1/\sqrt{k}$ with a wedge removed, the football. The deficit angle is $2\pi \alpha_1$ so clearly $\alpha_1<1$. Physically we will only allow positive tension branes so we also assume $0<\alpha_1<1$. The solution (\ref{2branes}) implies the existence of a second brane with exactly the same tension at $z=\infty$ (the north pole of the sphere). In fact, up to reparametrization, this is the only solution (with no warping) with two branes (see appendix). As we shall show the tuning between the tensions can be removed considering three or more branes. 
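As a consistency check on (\ref{2branes}), the area of the football can be computed by radial integration of $\psi$ and compared with $\frac{2\pi}{k}(2-2\alpha_1)$, the sphere area reduced by the two equal deficit angles. The following Python sketch (using \texttt{scipy}; the function name is ours) performs this integration:

```python
import math
from scipy.integrate import quad

def football_volume(alpha, k=1.0):
    """Area of the two-brane ('football') metric
    psi = (1-alpha)^2/k * 4 r^(-2 alpha) / (1 + r^(2(1-alpha)))^2,
    integrated in polar coordinates over the plane."""
    def radial(r):
        return 2 * math.pi * r * (1 - alpha) ** 2 / k \
               * 4 * r ** (-2 * alpha) / (1 + r ** (2 * (1 - alpha))) ** 2
    # Split at r = 1 to help the quadrature with the r -> 0 and r -> inf behavior.
    v1, _ = quad(radial, 0, 1)
    v2, _ = quad(radial, 1, math.inf)
    return v1 + v2
```

For $\alpha_1=0.3$ and $k=1$ the quadrature reproduces $2\pi(2-0.6)\approx 8.80$.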
\section{Liouville equation} \label{liouvillesec} The mathematical problem of determining the metric on the internal space consists in finding solutions of the Liouville equation with prescribed singularity on the complex plane, \begin{equation} \partial_z \partial_{\bar{z}}\log \psi=-\frac k 2 \psi-2\pi \sum_{i=1}^N \alpha_i~\delta(z-z_i,\bar{z}-\bar{z}_i) \label{liouville} \end{equation} where the $\alpha_i$'s are related to the tensions as in (\ref{alpha}). The left hand side of this equation is proportional to the two dimensional curvature $\sqrt{\gamma} R_2$ of the internal space. Integrating the Liouville equation and using the Gauss-Bonnet formula for compact surfaces with no boundaries, \begin{equation} \frac 1 {4 \pi}\int \sqrt{\gamma} R_2=2 -2 g \label{gaussbonnet} \end{equation} (where $g$ is the genus of the surface), one derives a simple formula for the volume, \begin{equation} V_2=\frac {2\pi} {k}(2-2g-\sum_i \alpha_i). \label{volume} \end{equation} Clearly a compact solution can only exist when $V_2>0$. For the case of negative curvature $k$ this equation has been extensively studied starting with the work of Poincar\'e and Picard, in particular in relation to the problem of uniformization of Riemann surfaces. The general result is that a unique solution describing a compact Riemann surface of genus $g$ exists unless it is forbidden by the volume formula (\ref{volume}) \cite{troyanov}. Until section \ref{riemann} we will be interested in the positive curvature case which is relevant for the Minkowski background. To the best of our knowledge much less is known in this case. In fact, we will find that an additional constraint on the tensions applies. Since we only allow positive tension branes, the positivity of the volume forces $g=0$ and\footnote{In the special case $k=0$ it is possible to compactify the space on the topology of the sphere but the tensions need to be tuned so that $\sum_i \alpha_i=2$ \cite{sundrum}. 
The metric in this case is easily found to be given by $\psi=A\, \Pi_i |z-z_i|^{-2 \alpha_i}$ and the volume remains arbitrary.} \begin{equation} \sum_{i=1}^N \alpha_i < 2. \end{equation} Away from the singularities the most general solution of the Liouville equation with positive curvature is given by, \begin{equation} \psi=\frac 1 k \frac {4 |w'|^2}{\left[1+|w|^2\right]^2} \label{solution} \end{equation} where $w(z)$ is an arbitrary holomorphic function. For the simplest case $w=z$ one recognizes (\ref{solution}) as the metric of the stereographically projected sphere.\footnote{A simple physical argument suggests the form of the solution (\ref{solution}). Since codimension two objects locally do not curve the space, away from the branes the metric must still be the metric of a sphere. In fact, starting with the metric of the Riemann sphere and performing the change of variables $z\to w(z)$ one obtains (\ref{solution}).} In terms of the K\"ahler potential the metric can be derived from, \begin{equation} K=\frac 4 k \log[1+w\bar{w}]. \end{equation} Given that in two dimensions, \begin{equation} \partial_z \partial_{\bar{z}}\log |z|^2=2\pi\,\delta(z,\bar{z}) \label{delta} \end{equation} the Liouville equation (\ref{liouville}) implies the following asymptotic behaviors near the singular points, \begin{eqnarray} \psi &\sim& |z-z_i|^{-2\alpha_i} ~~~~~\text{as $z\to z_i$} \nonumber \\ \psi &\sim& |z|^{-2(2-\alpha_{\infty})} ~~~~~~\text{as $z\to \infty$}. \end{eqnarray} Integrability of the metric around the singularities then requires \begin{equation} \alpha_i < 1. \end{equation} This is equivalent to the statement that the deficit angle around each singularity cannot exceed $2\pi$. For $\alpha_i \ge 1$ solutions can still be found but they do not describe compact spaces. Coming to the main point, the function $w(z)$ reproducing the prescribed singularities can be found using the technology of the fuchsian equations which we review in the appendix. 
In brief, given $N$ singularities ($z_i$, $\alpha_i$) one considers the fuchsian equation, \begin{equation} \frac {d^2 u} {dz^2}+\sum_{i=1}^{N}\left[\frac {\alpha_i(2-\alpha_i)}{4(z-z_i)^2}+\frac {c_i}{2(z-z_i)}\right]u=0. \label{fuch} \end{equation} where $c_i$ are known as the accessory parameters. The required function $w$ is then given by, \begin{equation} w(z)=\frac {u_1(z)} {u_2(z)} \end{equation} where $u_1$ and $u_2$ are two linearly independent solutions of (\ref{fuch}) such that their monodromy around the singular points is contained in $SU(2)$, i.e. $u_1$ and $u_2$ are multivalued functions on the complex plane and transform with an $SU(2)$ rotation going around the singularities. To see how this formalism works in practice, we now turn to the case with three singularities. In the appendix the solution with two singularities is also derived using the technique of the fuchsian equations. \subsection{Solution with 3 branes} \label{3branes} With three branes an explicit solution of the Liouville equation can be found in terms of hypergeometric functions. Using reparametrization invariance it is convenient and conventional to choose the singularities at $(0,1,\infty)$.\footnote{Notice that the physical position of the singularities does not depend on this choice.} The relevant fuchsian equation is given by, \begin{equation} \frac {d^2 u} {dz^2}+\frac 1 4\left[\frac {\alpha_1(2-\alpha_1)}{z^2}+\frac {\alpha_2(2-\alpha_2)}{(z-1)^2}+\frac {\alpha_1(2-\alpha_1)+\alpha_2(2-\alpha_2)-\alpha_{\infty}(2-\alpha_\infty)}{z(1-z)}\right]u=0. \end{equation} To determine solutions with $SU(2)$ monodromies we follow \cite{ciafaloni} where the same problem for the case of $SU(1,1)$ monodromies was considered (see also \cite{hasadz} for similar work).
Two linearly independent solutions of the previous equation are, \begin{eqnarray} u_1&=&\displaystyle{K_1 \, z^{(1-\frac {\alpha_1} 2)}\,(1-z)^{\frac {\alpha_2} 2}\, \tilde{F}[a_1,b_1,c_1,z]}\nonumber\\ u_2&=&\displaystyle{K_2 \, z^{\frac{\alpha_1} 2}\,(1-z)^{\frac {\alpha_2} 2}\,\tilde{F}[a_2,b_2,c_2,z]} \label{solutions} \end{eqnarray} where as in \cite{ciafaloni} we found it convenient to define modified hypergeometric functions, \begin{equation} \tilde{F}[a,b,c,z]=\frac {\Gamma[a]\Gamma[b]}{\Gamma[c]} ~ _2F_1[a,b,c,z], \end{equation} and the indexes are, \begin{eqnarray} a_1&=&\frac {(2-\alpha_1+\alpha_2-\alpha_{\infty})} 2 ~~~~~~~~~~~~~~~~~~~~~a_2=\frac {\alpha_1+\alpha_2-\alpha_{\infty}} 2 \nonumber\\ b_1&=&- \frac {(\alpha_1-\alpha_2-\alpha_{\infty})} 2 ~~~~~~~~~~~~~~~~~~~~~~~~b_2=\frac {-2+\alpha_1+\alpha_2+\alpha_{\infty}} 2 \nonumber\\ c_1&=& 2 -\alpha_1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~c_2= \alpha_1 \end{eqnarray} Since the hypergeometric functions are regular at the origin (they have a branch cut between 1 and $\infty$), the monodromy around $z=0$ is diagonal, \begin{equation} \displaystyle{M_0=\hat{M}(\alpha_1) \ = \ \left(\begin{array}{cc} e^{-i \pi \alpha_1} & 0 \\ 0 & e^{i \pi \alpha_1} \end{array} \right)}. 
\end{equation} Expanding (\ref{solutions}) around $z=1$ one finds, \begin{equation} u_i\sim a_{i1}(z-1)^{1-\frac {\alpha_2} 2}+ a_{i2} (z-1)^{\frac {\alpha_2} 2} \end{equation} where, \begin{equation} \displaystyle{a_{ij}\ =(A)_{ij}\ = } \left( \begin{array}{cc} \displaystyle{K_1\, \Gamma(\alpha_2-1)} & \displaystyle{K_1\, \frac {\Gamma(1-\alpha_2)\Gamma(a_1)\Gamma(b_1)}{\Gamma(c_1-a_1)\Gamma(c_1-b_1)}}\\ \\ \displaystyle{K_2\, \Gamma(\alpha_2-1)} & \displaystyle{K_2\,\frac {\Gamma(1-\alpha_2)\Gamma(a_2)\Gamma(b_2)}{\Gamma(c_2-a_2)\Gamma(c_2-b_2)}} \end{array} \right) \end{equation} This allows us to compute the monodromy around $z=1$, \begin{equation} M_1=A \hat{M}(\alpha_2) A^{-1}= \left( \begin{array}{cc} \cos\pi\alpha_2 - i \displaystyle{\frac {a_{11}\,a_{22}+ a_{12}\,a_{21}} {a_{11}\,a_{22}-a_{12}\,a_{21}}} \sin \pi \alpha_2 & \displaystyle{2 i \frac { a_{11}\,a_{12}}{a_{11}\,a_{22}-a_{12}\,a_{21}}} \sin \pi \alpha_2 \\ \\ \displaystyle{-2 i\frac { a_{21}\,a_{22}}{a_{11}\,a_{22}-a_{12}\,a_{21}}} \sin \pi \alpha_2 & \cos\pi\alpha_2 + i \displaystyle{\frac {a_{11}\,a_{22}+ a_{12}\,a_{21}} {a_{11}\,a_{22} - a_{12}\,a_{21}}} \sin \pi \alpha_2 \end{array} \right) \end{equation} In general this is an $SL(2,\mathbb{C})$ matrix. The condition that the monodromy is contained in $SU(2)$ then boils down to, \begin{equation} (M_1)^+_{12}=-(M_1)_{21}, \end{equation} which determines the ratio $|K_1/K_2|$. A short computation shows, \begin{equation} \left|\frac {K_1} {K_2} \right|^2=-\frac {\Gamma[a_2]\Gamma[b_2]\Gamma[c_1-a_1]\Gamma[c_1-b_1]} {\Gamma[a_1]\Gamma[b_1]\Gamma[c_2-a_2]\Gamma[c_2-b_2]}= - \frac{\cos\pi(\alpha_1-\alpha_2)-\cos \pi \alpha_{\infty}}{\cos \pi(\alpha_1+\alpha_2)-\cos \pi \alpha_{\infty}} \end{equation} The expression above is not positive for every set ($\alpha_1$, $\alpha_2$, $\alpha_\infty$) that satisfies $\sum_i \alpha_i<2$.
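The last equality above, relating the Gamma-function expression to its trigonometric form, can be checked numerically; it follows from Euler's reflection formula together with the relations $c_1-a_1=1-a_2$, $c_1-b_1=1-b_2$, $c_2-a_2=1-a_1$ and $c_2-b_2=1-b_1$. The following Python sketch (function names ours, using \texttt{scipy.special.gamma}) evaluates both sides and also exhibits the sign change when $\alpha_\infty$ exceeds $\alpha_1+\alpha_2$:

```python
import math
from scipy.special import gamma

def k_ratio_gamma(a1, a2, ainf):
    """|K1/K2|^2 via the Gamma-function expression."""
    A1 = (2 - a1 + a2 - ainf) / 2
    B1 = -(a1 - a2 - ainf) / 2
    C1 = 2 - a1
    A2 = (a1 + a2 - ainf) / 2
    B2 = (-2 + a1 + a2 + ainf) / 2
    C2 = a1
    return -(gamma(A2) * gamma(B2) * gamma(C1 - A1) * gamma(C1 - B1)) / \
            (gamma(A1) * gamma(B1) * gamma(C2 - A2) * gamma(C2 - B2))

def k_ratio_cos(a1, a2, ainf):
    """|K1/K2|^2 via the trigonometric expression."""
    return -(math.cos(math.pi * (a1 - a2)) - math.cos(math.pi * ainf)) / \
            (math.cos(math.pi * (a1 + a2)) - math.cos(math.pi * ainf))
```

For tensions satisfying $\alpha_\infty<\alpha_1+\alpha_2$ both expressions agree and are positive, while violating that condition drives the right-hand side negative.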
Assuming without loss of generality that $\alpha_{\infty}\ge \alpha_{1,2}$, the requirement that the right hand side be positive implies the nontrivial constraint, \begin{equation} \alpha_\infty < \alpha_1+\alpha_2. \label{extracondition} \end{equation} This is an important result as it is independent of the Gauss-Bonnet formula.\footnote{This restriction agrees with the result recently found in \cite{eremenko}.} One can also check using the formulas in \cite{ciafaloni} that the monodromy at infinity does not give extra constraints. In general this is a consequence of the fact that, \begin{equation} \Pi_i M_i= 1 \end{equation} Having determined the functions ($u_1$, $u_2$) with $SU(2)$ monodromies, the Liouville equation is solved by $w=u_1/u_2$. In summary we have shown that a solution for the metric of the internal space with three branes exists as long as $\sum_i \alpha_i<2$ and $\alpha_\infty<\alpha_1+\alpha_2$. The solution is given in terms of the holomorphic function $w$, \begin{equation} w(z)=\frac {K_1} {K_2}\, \frac {\tilde{F}[a_1,b_1,c_1,z]}{\tilde{F}[a_2,b_2,c_2,z]}\,z^{1-\alpha_1} \label{finalsolution} \end{equation} which determines the metric on the Riemann sphere through (\ref{solution}). Physically, when $\alpha_\infty \to \alpha_1+\alpha_2$ the proper distance between the points $z=0$ and $z=1$ goes to zero. In this limit the solution then reduces to the one with two singularities. In fact the condition (\ref{extracondition}) implies that when only two singularities are present $\alpha_1=\alpha_\infty$. \subsection{More branes} \label{morebranes} When four or more singularities are included the situation becomes immediately much more involved. In principle for $N$ singularities the canonical way to proceed would be to consider the fuchsian equation (\ref{fuch}). With an $SL(2,\mathbb{C})$ transformation we can again fix the positions of three singularities at (0, 1, $\infty$) leaving $N-3$ undetermined.
The accessory parameters $c_i$ satisfy three linear equations (see appendix) so one can express $c_1$, $c_2$ and $c_\infty$ as linear combinations of $c_3$,...,$c_{N-1}$. The remaining accessory parameters should then be determined from the requirement that the monodromy of two linearly independent solutions of the fuchsian equation (\ref{fuch}) belongs to $SU(2)$. Counting the number of equations one sees that the position of $N-3$ singularities remains unconstrained. In physical terms this means that the physical position of $N>3$ branes is not fixed; $N-3$ complex moduli label different vacua. Unfortunately the solution of the fuchsian equation with more than two singularities (plus the one at infinity) is not known in closed form so we could not find explicit solutions. Some progress in this direction was made in \cite{menotti} where the problem with three finite singularities and one infinitesimal was solved in the context of $SU(1,1)$ monodromies. The same methods could be applied here. Besides the problem of finding exact solutions, it would be important, from both the physical and mathematical points of view, to determine for which values of $\alpha_i$ a solution of the Liouville equation with positive curvature exists and is unique. To the best of our knowledge, contrary to the negative curvature case, this is not known \cite{eremenko}. With no pretence of giving a proof here we notice that from the discussion at the end of the previous paragraph it would seem natural that, \begin{equation} \alpha_\infty<\sum_{i=1}^{N-1} \alpha_i, \label{nbranes} \end{equation} where we have assumed $\alpha_\infty\ge \alpha_i$. This generalizes the formula with two and three singularities and reduces to it when $N-3$ tensions are taken to zero. \subsection{Riemann Surfaces} \label{riemann} We shall now consider compactifications where the internal manifold has negative curvature (similar compactifications of string theory have appeared very recently in \cite{silverstein}).
In the model under investigation this corresponds to, \begin{equation} \lambda<-\frac 3 2 B_0^2, \end{equation} which implies that the four dimensional ground state is AdS$_4$. In general, starting from a theory in AdS$_{d+3}$ we could consider compactifications to AdS$_{d+1}\times K$ which might have interest from the point of view of the AdS/CFT correspondence \cite{maldacena}. In the absence of singularities, the metric of the internal space is, \begin{equation} \psi=-\frac 1 k \frac 4 {\big[1-z\bar{z}\big]^2},~~~~~~~~|z|\,<\,1 \end{equation} i.e. the hyperbolic metric on the unit disk $D$. This manifold is non-compact but we can obtain a compact space by considering the coset $D/\Gamma$ where $\Gamma$ is an appropriately chosen discrete subgroup of the isometries $SU(1,1)$ that acts without fixed points in $D$. The space so constructed is a compact Riemann surface of constant negative curvature $k$ and genus $g$. Including branes leads again to the Liouville equation (\ref{liouville}) but $k$ is now negative. This is the case most commonly studied in the literature and a wealth of results is available (see \cite{takhtajan} and Refs. therein). The general solution of the Liouville equation with negative curvature is given by, \begin{equation} \psi=-\frac 1 {k} \frac {4|w'|^2}{\big[1-|w|^2\big]^2}. \label{solution2} \end{equation} The holomorphic function $w(z)$ can in principle be found using techniques similar to the ones described in section \ref{liouvillesec}. According to Picard's theorem (and its generalizations \cite{troyanov}), a solution of the Liouville equation with negative curvature exists and is unique provided that the topological constraint (\ref{volume}), \begin{equation} \sum_{i=1}^N \alpha_i>(2-2g) \label{topological} \end{equation} is satisfied. Curiously, deficit angles increase the volume when the curvature is negative.
Notice that the additional condition $\alpha_\infty<\sum_{i=1}^{N-1}\alpha_i$ that appears when $k$ is positive is automatically satisfied. It should be mentioned that in the negative curvature case the singularities $\alpha_i=1$ are also allowed. These are called parabolic points and play a special r\"ole due to their relation to the uniformization of Riemann surfaces. The asymptotic behavior of the metric is, \begin{equation} \psi\sim \frac 1 {|z-z_i|^2 (\log|z-z_i|)^2}~~~~~~~~~~\text{as $z\to z_i$} \end{equation} The singularity is integrable so that the volume remains finite. The proper distance from the singularity to any point at finite $z$ is however infinite so the space constructed with these singularities is non compact. As an example we can consider the case $g=0$, the so called hyperbolic sphere. This requires at least three singularities such that $\sum_{i=1}^3 \alpha_i>2$. The fuchsian equation is exactly the same as the one studied in section \ref{3branes} but we need to impose that the monodromies belong to $SU(1,1)$. This requires, \begin{equation} \left|\frac {K_1} {K_2}\right|^2= \frac{\cos \pi(\alpha_1-\alpha_2)-\cos \pi \alpha_{\infty}}{\cos \pi(\alpha_1+\alpha_2)-\cos \pi \alpha_{\infty}} \end{equation} By inspection it is not hard to show that the right hand side of this equation is always positive definite for the allowed values of $\alpha_i$ so that a solution always exists. The function $w$ is again given by (\ref{finalsolution}). \section{Effective action} \label{effectiveaction} In this section we discuss the low energy effective action valid at energies smaller than the curvature $k$. We start by noting that in absence of branes and for positive curvature the internal space is a sphere whose isometry group is $SO(3)$. 
Upon Kaluza-Klein (KK) reduction one obtains an unbroken $SO(3)$ gauge theory\footnote{As is well known Riemann surfaces do not possess any continuous isometry so there are no massless KK gauge bosons from the metric when the curvature is negative.} (for the detailed KK reduction see \cite{randjbar}). In addition to this, from the reduction of the $6D$ gauge field one also obtains an extra $U(1)$ gauge field which however will not play a role in what follows. Placing equal tension branes at the poles has the effect of removing a wedge from the sphere. This breaks $SO(3)\to U(1)$ so that a massless $U(1)$ gauge boson survives. The other two gauge bosons are Higgsed by the presence of the branes. From the low energy point of view we can understand this as follows. Each brane carries two physical degrees of freedom describing the fluctuations of the brane in the internal space. Two of these degrees of freedom are precisely the Goldstone bosons necessary to implement the breaking $SO(3)\to U(1)$ spontaneously. These modes correspond to the overall rotation of the system. In this language choosing the singularities at fixed positions (0, $\infty$) corresponds to the unitary gauge. The remaining two degrees of freedom describe the relative motion of the branes. These modes are massive as the branes repel each other. When the third brane is added the original $SO(3)$ symmetry is completely broken. Out of the two new degrees of freedom, the one describing the rotation around the axis is eaten by the $U(1)$ gauge boson while the other is massive (this is implied by the fact that the distance between the branes is fixed in the vacuum). Adding more branes obviously does not change this picture for the gauge bosons but introduces new massless degrees of freedom.
As we have seen in section \ref{morebranes}, for $N>3$ the physical positions of the branes are not determined in the vacuum and they will appear as $N-3$ complex flat directions of the potential in the low energy effective theory. An interesting object to consider in this case would be the metric on the moduli space. This is related in a deep way to the accessory parameters of the associated fuchsian equation \cite{takhtajan}. For completeness let us now turn to the effective action for the breathing mode of the internal manifold (see also \cite{porrati,Aghababaie,randjbar}). Depending on the values of the parameters this mode might be as heavy as the first KK modes, in which case it should be integrated out. It is however important to check that the mass is positive so that the compactification is stable. This is not guaranteed in general. To derive the effective action we consider the following ansatz for the metric, \begin{equation} ds^2=\phi^{-2}(x)g_{\mu\nu}(x)dx^\mu dx^\nu+\phi^2(x) \psi(z,\bar{z})dz d\bar{z} \end{equation} Conservation of the flux requires that $F$ remains at its ground state value (\ref{flux}). Plugging the ansatz into the action and using the Liouville equation for the background we obtain, \begin{equation} S_4=M_6^4\int \frac{\psi} 2 dz d\bar{z} \int d^4 x \sqrt{-g} \left(\frac {R_4} 2-2 \frac{\partial^\mu \phi \partial_\mu \phi} {\phi^2}-V\right) \label{s4} \end{equation} where, \begin{equation} V=\frac {\lambda} {\phi^2}-\left(\frac {\lambda} 2+\frac 3 4 B_0^2\right)\frac 1 {\phi^4}+\frac {B_0^2} {2 \phi^6} \end{equation} By means of the volume formula (\ref{volume}) the four dimensional Planck mass is, \begin{equation} M_4^2=M_6^4 V_2=M_6^4 \frac {2 \pi} {k}(2-2g-\sum_i \alpha_i) \end{equation} Notice that from the low energy point of view the only effect of the branes is to change the normalization of the Planck mass.
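As a check on the potential $V$ above, its stationary points can be found symbolically in the variable $u=\phi^2$. The following \texttt{sympy} sketch (ours) recovers the extremum at $\phi^2=1$ together with a second one at $\phi^2=3B_0^2/(2\lambda)$:

```python
import sympy as sp

u, lam, B0 = sp.symbols('u lam B0', positive=True)
# The breathing-mode potential rewritten with u = phi**2:
V = lam / u - (lam / 2 + sp.Rational(3, 4) * B0**2) / u**2 + B0**2 / (2 * u**3)
crit = sp.solve(sp.diff(V, u), u)  # stationary points in u = phi**2
```

The second root is the point the system rolls to when the $\phi=1$ vacuum is unstable.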
It should be stressed that, as can be seen from (\ref{s4}), the KK reduction is consistent so that no tadpole corrections arise to the classical effective action. As required the potential has a stationary point at $\phi=1$ which corresponds to dS, AdS or Minkowski space according to (\ref{cases}). The mass of $\phi$ is given by, \begin{equation} m_{\phi}^2=\frac 3 2 B_0^2- \lambda \end{equation} We conclude that the compactification is stable unless $\lambda>\frac 3 2 B_0^2$ which corresponds to dS space (see also \cite{santiago}). In this case the system will roll to the other stationary point of the potential at $\phi^2=3B_0^2/(2\lambda)$. \section{Conclusions} \label{conclusions} Let us summarize what we have achieved in this paper. Starting from the football shaped extra dimensions scenario with two equal tension branes \cite{carroll,navarro}, we have generalized the model to include an arbitrary number of branes. We have also considered the case where the ground state is dS or AdS space and the internal manifold is a Riemann surface. The internal space has constant curvature with conical singularities at the location of the branes. The problem of determining the metric consists in finding a solution of the Liouville equation with singularities, a topic which goes back to Poincar\'e and Picard. Explicit solutions have been presented for the case of three branes. Most importantly, contrary to the scenario with two branes, the tensions of the branes do not need to be tuned with each other but need only satisfy mild constraints. For the case relevant to the Minkowski background, topologically the internal space is a sphere. For three branes (say $T_3\ge T_2\ge T_1$) solutions exist when, \begin{eqnarray} T_1+T_2+T_3&<& 4\pi M_6^4\nonumber \\ T_3&<&T_1+T_2 \end{eqnarray} where the first condition is a direct consequence of the Gauss-Bonnet theorem while the second has a more mysterious geometrical origin.
We conjectured in (\ref{nbranes}) the generalization of this formula to the scenario with an arbitrary number of branes. Finally we have described the low energy effective action for the model. For more than three branes, the positions of the branes are not fixed in the ground state so $N-3$ complex moduli appear in the low energy effective theory. \section*{Acknowledgments} I am grateful to Massimo Porrati for very helpful discussions about the Liouville equation and Riemann surfaces. I would also like to thank Massimo Porrati and especially Jose Santiago for comments on the manuscript. This work was supported in part by the NSF grant PHY-0245068.
\subsection*{\textbf{{{Leaking cycle}}s imply no privacy}} \begin{lemma} \label{lem:criticalcnec} A {\dipa} $\mathcal{A}$ is not differentially private if it has a reachable {{leaking cycle}}. \end{lemma} Let $\mathcal{A}= \defaut.$ Assume that $\mathcal{A}$ has a {{leaking cycle}} reachable from the state $\qinit.$ We give the proof first assuming that all states of $\mathcal{A}$ are input states. The case when the automaton has both input and non-input states can be proved along similar lines and is left out. Let $\eta=\eabsexecl{m+n}$ for $k=0,\ldots,m+n-1$ be an {abstract path} such that $q_0=\qinit$, $q_m=q_{m+n}$, and the final $n$ transitions of $\eta$, i.e., the abstract path $C=\eabsexecsf{m}{m+n}$, form a {{leaking cycle}}. Let $t_k$ be the {$k$-th transition} of $\eta$ and $c_k$ be the {guard} of the $k$-th transition. Further, let $d_k$ and $\mu_k$ be such that $\parf(q_k) = (d_k,\mu_k)$ for each $k.$ We have that $c_0\:=\mathsf{true}$ and $t_0$ is an assignment transition. Let $i,j$ be the smallest integers such that $m\leq i<j<m+n$ and the following properties are satisfied: (a) $t_i$ is an assignment transition, (b) $c_j\neq \mathsf{true}$ and (c) for every $k_1$ such that $i<k_1<j$, $t_{k_{1}}$ is a non-assignment transition and $c_{k_{1}}=\mathsf{true}.$ We fix $i,j$ as above. Consider any integer $\ell>0$. We define an abstract path $\eta_\ell$ starting from $\qinit$ by repeating the cycle $t_m,\ldots, t_{m+n-1}$, $\ell$ times. Formally, $\eta_\ell=\eabsexecl{m+\ell n}$ such that $q_k=q_{k-n}$ and $\sigma_k=\sigma_{k-n}$ for $m+n\leq k\leq m+\ell n.$ Let $\gamma(\ell)=o_0\cdots o_{m+\ell n-1}$ be the output sequence of length $m+\ell n$ such that $o_k=\sigma_k$ if $\sigma_k\in \outalph,$ otherwise $o_k=(\sigma_k,-\infty,\infty).$ Once again, we let $t_k$ be the {$k$-th transition} of $\eta_\ell$ and $c_k$ be the {guard} of the $k$-th transition.
Now, given $\ell>0$, we define two neighboring input sequences $\alpha(\ell)=a_0\cdots a_{m+\ell n-1}$ and $\beta(\ell)=b_0\cdots b_{m+\ell n-1}$, each of length $m+\ell n.$ The sequence $\alpha(\ell)$ is chosen so that all the guards in the transitions of $\eta_{\ell}$ are satisfied with joint probability $>\frac{1}{2}$ {for large $\epsilon$}. The input $a_0=0$ and for $0<k<m+\ell n$, $a_k$ is defined inductively as given below: let $k'<k$ be the largest integer such that $t_{k'}$ is an assignment transition; then $a_k$ is given as follows: if $c_k$ is the guard ${\getest}$ then $a_k\:=\mu_{k'}-\mu_{k}+a_{k'}+1$, otherwise $a_k\:=\mu_{k'}-\mu_{k}+a_{k'}-1.$ Now, consider any $k$, $0\leq k<m+\ell n$, such that $c_k\neq \mathsf{true}$ and fix it. Let $k'<k$ be the largest integer such that $t_{k'}$ is an assignment transition. Let $X_{k'},X_{k}$ be the two random variables with distributions given by $\Lap{d_{k'}\epsilon,a_{k'}+\mu_{k'}}$ and $\Lap{d_{k}\epsilon,a_{k}+\mu_{k}}.$ Let $Y_k$ be the random variable denoting the $k^{th}$ output of $\eta_\ell$ on the input sequence $\alpha(\ell).$ Now consider the case when $c_k$ is ${\getest}.$ From the way we defined $\alpha(\ell)$, it is the case that $\mu_k+a_k\:=\mu_{k'}+a_{k'}+1.$ Now $\prbfn{Y_k\neq o_k} \:= \prbfn{X_k < X_{k'}}\:=\prbfn{X_k\leq X_{k'}}.$ Let $d_{\mathsf{mx}}=\max(d_k,d_{k'})$ and $d_{\mathsf{mn}} = \min (d_k,d_{k'}).$ From Lemma \ref{lem:problessequal}, we see that if $d_k\neq d_{k'}$ then $$\prbfn{X_k \leq X_{k'}}< \frac{{d_{\mathsf{mx}}}^2}{2({d_{\mathsf{mx}}}^2-{d_{\mathsf{mn}}}^2)}\eulerv{-d_{\mathsf{mn}}\epsilon}.$$ If $d_k=d_{k'}$ then $$\prbfn{X_k \leq X_{k'}}<\frac{1}{2}\eulerv{-d_{k}\epsilon}(1+\frac{d_{k}\epsilon}{2}).$$ From the above, we see that $$\prbfn{Y_k\neq o_k}\leq r \eulerv{-d_{\mathsf{mn}}\epsilon}(1+\frac{d_{\mathsf{mx}}\epsilon}{2})$$ where $r$ is a constant that depends only on $\mathcal{A}$ (and not on $k$).
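As an aside, the two bounds above can be spot-checked by Monte Carlo simulation. The sketch below is illustrative only: it assumes the convention that $\Lap{d\epsilon,\mu}$ is the Laplace distribution with mean $\mu$ and scale $1/(d\epsilon)$, and the constants `d1`, `d2`, `eps` are arbitrary choices, not values taken from any particular automaton.

```python
import math
import random

def laplace(mean, scale, rng):
    # inverse-CDF sampling of a Laplace(mean, scale) variate
    u = rng.random() - 0.5
    while u <= -0.5:  # guard against the measure-zero endpoint
        u = rng.random() - 0.5
    return mean - scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def prob_le(d1, d2, eps, n=200_000, seed=0):
    # Monte Carlo estimate of P[X <= X'] where X' has scale 1/(d1*eps) and
    # mean 0, and X has scale 1/(d2*eps) and mean 1 (means differing by 1,
    # as for mu_k + a_k = mu_k' + a_k' + 1 above)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x_prime = laplace(0.0, 1.0 / (d1 * eps), rng)
        x = laplace(1.0, 1.0 / (d2 * eps), rng)
        hits += x <= x_prime
    return hits / n

def bound(d1, d2, eps):
    # the d_mx^2 / (2 (d_mx^2 - d_mn^2)) * exp(-d_mn * eps) bound for d1 != d2
    d_mx, d_mn = max(d1, d2), min(d1, d2)
    return d_mx ** 2 / (2.0 * (d_mx ** 2 - d_mn ** 2)) * math.exp(-d_mn * eps)
```

For instance, with $d_{k'}=1$, $d_k=2$ and $\epsilon=1$ the empirical estimate falls below the stated bound, and it decays as $\epsilon$ grows.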
Now consider the case when $c_k$ is $\lttest.$ In this case, $\mu_k+a_k\:=\mu_{k'}+a_{k'}-1$ and $\prbfn{Y_k\neq o_k} \:= \prbfn{X_{k'}< X_{k}}.$ By a similar analysis, in this case also, $$\prbfn{Y_k\neq o_k}\leq r \eulerv{-d_{\mathsf{mn}}\epsilon}(1+\frac{d_{\mathsf{mx}}\epsilon}{2}).$$ Let $d_{\max}\:=\max\set{\pi_1(\parf(q))\st q\in Q}$ and $d_{\min}\:=\min\set{\pi_1(\parf(q))\st q\in Q}.$ Then, for every $k,0\leq k<m+\ell n$, $$\prbfn{Y_k\neq o_k}\leq r \eulerv{-d_{\min}\epsilon}(1+\frac{d_{\max}\epsilon}{2}).$$ Using the union bound, we see that $$\prbfn{\exists k<m+\ell n,\: Y_k\neq o_k}\:\leq r(m+\ell n) \eulerv{-d_{\min}\epsilon}(1+\frac{d_{\max}\epsilon}{2}).$$ Given $\ell>0$, let $\epsilon_\ell\in\mathbb{R}$ be the smallest value such that $$\forall \epsilon\geq \epsilon_\ell,\: r(m+\ell n) \eulerv{-d_{\min}\epsilon}(1+\frac{d_{\max}\epsilon}{2}) \leq \frac{1}{2}.$$ Let $\rho_{\alpha}(\ell)$ be the path with $\ensuremath{\mathsf{abstract}}(\rho_{\alpha}(\ell))=\eta_\ell$, $\inseq(\rho_{\alpha}(\ell))=\alpha(\ell)$ and $\outseq(\rho_{\alpha}(\ell))=\gamma(\ell).$ Now, $$\pathprob{\epsilon,\rho_{\alpha}(\ell)}\:=1-\prbfn{\exists k<m+\ell n,\: Y_k\neq o_k}. $$ From the construction of $\epsilon_\ell$ and the above observations, we see that $\forall \epsilon\geq \epsilon_\ell,\:{\pathprob{\epsilon,\rho_{\alpha}(\ell)}}\:\geq \frac{1}{2}.$ Now, recall the integers $i,j$ fixed earlier. Intuitively, we define $\beta(\ell)$ so that each of the guards in the transitions $t_{j+\ell' n}, 0\leq \ell'<\ell$, is satisfied with probability $<\frac{1}{2}$. For each $\ell',\:0\leq \ell'<\ell$, we let $b_{i+\ell' n}\:=a_{j+\ell' n}+\mu_j -\mu_i$ and $b_{j+\ell' n}\:=a_{i+\ell' n}+\mu_i-\mu_j.$ Now, for each $\ell',\:0\leq \ell'<\ell$, the following hold.
$c_{j+\ell' n}\:=c_j\neq \mathsf{true}.$ If $c_{j+\ell' n}$ is the guard ${\getest}$ then $b_{i+\ell' n}+\mu_i\:=b_{j+\ell' n}+\mu_j+1$ since $a_{j+\ell' n}+\mu_j\:=a_{i+\ell' n}+\mu_i+1.$ If $c_{j+\ell' n}$ is the guard $\lttest$ then $b_{j+\ell' n}+\mu_j\:=b_{i+\ell' n}+ \mu_i+1$ since $a_{i+\ell' n}+\mu_i\:=a_{j+\ell' n}+\mu_j+1.$ We define $b_{i'}$, for all values of $i'<m+\ell n$ with $i'\notin \set{ i+\ell' n, j+\ell' n \st 0\leq \ell'<\ell}$, so that $\beta(\ell)$ is a neighbor of $\alpha(\ell).$ It is not difficult to see that such a sequence $\beta(\ell)$ can be defined. Let $\rho_{\beta}(\ell)$ be the path such that $\ensuremath{\mathsf{abstract}}(\rho_{\beta}(\ell))=\eta_\ell$ and $\inseq(\rho_{\beta}(\ell))=\beta(\ell).$ For each $k, 0\leq k<m+\ell n$, let $U_k$ be the random variable with distribution given by $\Lap{d_{k}\epsilon,b_k+\mu_k}$ and let $Z_k$ be the random variable denoting the $k^{th}$ output of $\eta_\ell$ on the input sequence $\beta(\ell).$ Let $d'\:=\min(d_{i},d_{j})$ and $d''\:=\max(d_{i},d_{j}).$ Now, $\prbfn{Z_j=o_j}$ is given by $\prbfn{U_j\geq U_i}$ if $c_j$ is the guard ${\getest}$, otherwise it is given by $\prbfn{U_j\leq U_i}.$ Using Lemma \ref{lem:problessequal} and similar reasoning as given earlier, we see that $$\prbfn{Z_j=o_j}\leq r'\eulerv{-d'\epsilon}(1+\frac{d''\epsilon}{2})$$ for some constant $r'.$ For each $\ell',\:0<\ell'<\ell$, using the same reasoning as above with the random variables $U_{i+\ell' n},U_{j+\ell'n}$, we see that $$\prbfn{Z_{j+\ell' n}=o_{j+\ell' n}}\leq r'\eulerv{-d'\epsilon}(1+\frac{d''\epsilon}{2}).$$ Since for any $\ell_1,\ell_2$ such that $0\leq\ell_1< \ell_2<\ell$, the random variables $U_{i+\ell_1 n},U_{j+\ell_1 n}$ are independent of $U_{i+\ell_2 n},U_{j+\ell_2 n}$, we see that $$\prbfn{\forall \ell',0\leq \ell'<\ell,\:Z_{j+\ell' n}=o_{j+\ell' n}}\leq {r'}^\ell \eulerv{-d'\ell \epsilon}(1+\frac{d''\epsilon}{2})^\ell.$$ Thus, $$\prbfn{\forall k, 0\leq k<m+\ell n, Z_k=o_k}\leq {r'}^\ell \eulerv{-d'\ell
\epsilon}(1+\frac{d''\epsilon}{2})^\ell.$$ The LHS of the above equation is exactly {$\pathprob{\epsilon,\rho_{\beta}(\ell)}$.} Thus, for any $\ell>0$, $$\forall \epsilon\geq \epsilon_\ell,\: \frac{{\pathprob{\epsilon,\rho_{\alpha}(\ell)}}}{{\pathprob{\epsilon,\rho_{\beta}(\ell)}}}\geq\frac{1}{2}(\frac{\eulerv{d'\epsilon}}{r'(1+\frac{d''\epsilon}{2})})^\ell.$$ We claim that for any $s>0$, $\exists \ell,\epsilon$ such that $$\frac{1}{2}(\frac{\eulerv{d'\epsilon}}{r'(1+\frac{d''\epsilon}{2})})^\ell>\eulerv{s\epsilon}.$$ Now the above inequality holds if $$\frac{\eulerv{(d'\ell-s)\epsilon}}{(1+\frac{d''\epsilon}{2})^\ell}>2{r'}^\ell.$$ Choose $\ell$ so that $d'\ell>s.$ Since the denominator of the left-hand side of the last inequality grows polynomially in $\epsilon$, while its numerator grows exponentially in $\epsilon$, it is easy to see that $\exists \epsilon_{0}>\epsilon_\ell$ such that $$ \forall \epsilon\geq \epsilon_0, \:\frac{\eulerv{(d'\ell-s)\epsilon}}{(1+\frac{d''\epsilon}{2})^\ell}>2{r'}^\ell.$$ The crucial observation we now make is that, thanks to output determinism, for every input sequence $\alpha$ and output sequence $\gamma$, there is at most one path $\rho_{\alpha,\gamma}$ such that $\inseq(\rho_{\alpha,\gamma})=\alpha$ and $\outseq(\rho_{\alpha,\gamma})=\gamma.$ This observation combined with the above inequality shows that $\mathcal{A}$ is not differentially private. \subsection*{\textbf{{{Leaking pair}}s imply no privacy}} \begin{lemma} \label{lem:criticalpnec} A {\dipa} $\mathcal{A}$ is not differentially private if it has a {{leaking pair}} of cycles $(C,C')$ such that $C$ is reachable from the initial state of $\mathcal{A}.$ \end{lemma} \begin{proof} Thanks to Lemma~\ref{lem:criticalcnec}, we can assume $\mathcal{A}$ does not have a {{leaking cycle}}.
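As an aside, the growth argument that closes the proof of Lemma~\ref{lem:criticalcnec} above — the numerator $\eulerv{(d'\ell-s)\epsilon}$ eventually dominating the polynomially growing denominator — can be checked numerically. In the sketch below the constants $d'=1$, $d''=2$, $r'=2$ are illustrative placeholders, not values derived from any particular automaton.

```python
import math

def log_ratio_lb(eps, ell, d1=1.0, d2=2.0, r=2.0):
    # log of (1/2) * (exp(d1*eps) / (r * (1 + d2*eps/2)))**ell,
    # the lower bound on P[rho_alpha] / P[rho_beta] derived above
    return -math.log(2.0) + ell * (
        d1 * eps - math.log(r) - math.log(1.0 + d2 * eps / 2.0)
    )

def witnesses(s, d1=1.0, d2=2.0, r=2.0):
    # choose ell with d1*ell > s, then grow eps until the lower bound
    # exceeds exp(s*eps); termination is guaranteed because the
    # exponential term beats the polynomial one
    ell = int(s / d1) + 1
    eps = 1.0
    while log_ratio_lb(eps, ell, d1, d2, r) <= s * eps:
        eps *= 2.0
    return ell, eps
```

For any target slope $s$, `witnesses(s)` thus produces a pair $\ell,\epsilon$ at which the ratio of path probabilities exceeds $\eulerv{s\epsilon}$, ruling out $s\epsilon$-differential privacy under these assumed constants.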
Let $\mathcal{A}= \defaut.$ Assume that $\mathcal{A}$ has a {{leaking pair}} of cycles $(C,C')$ such that $C$ is reachable from $\qinit.$ Assume that $C$ is an {\ensuremath{\mathsf{L}}-cycle} and $C'$ is a {\ensuremath{\mathsf{G}}-cycle}. (The proof for the case when $C$ is a {\ensuremath{\mathsf{G}}-cycle} and $C'$ is an {\ensuremath{\mathsf{L}}-cycle} is symmetric and is left out.) Since we have assumed that there are no {{leaking cycle}}s, neither $C$ nor $C'$ has an assignment transition. We further assume that $C,C'$ are distinct. If they are the same then it is straightforward to prove that $\mathcal{A}$ is not differentially private, using more or less the same proof. We also assume that all the states in $\mathcal{A}$ are input states. The case when $\mathcal{A}$ has both input and non-input states can also be proved using more or less the same proof. Let the lengths of $C,C'$ be $n_1,n_2$, respectively. Now, for any $\ell>0$, consider the following abstract path $\eta_{\ell}$ in $\mathcal{A}$ starting from $\qinit$ in which the cycles $C,C'$ are repeated $\ell$ times each. The path $$\begin{array}{lcl} \eta_{\ell}&=& q_0\sigma_0 \cdots q_u \sigma_u \cdots q_v \sigma_v \cdots q_{v+n_1\ell-1} \sigma_{v+n_1\ell-1}\cdots \\ && \hspace*{1cm} \cdots q_{w}\sigma_w\cdots \sigma_{w+n_2\ell-1} q_{w+n_2\ell} \end{array}$$ satisfies the following conditions. For each $k$, let $t_k$ be the $k$-th transition of $\eta_{\ell}$ and $c_k$ be the guard of the $k$-th transition.
\begin{enumerate} \item $q_0=\qinit$ \item $\eabsexecsf{v}{v+n_1}$ is the cycle $C$ \item $t_{j+n_1}\:=t_{j}$ for all $j,\: v\leq j<v+n_1(\ell-1)$ \item $\eabsexecsf{w}{w+n_2}$ is the cycle $C'$ \item $t_{j+n_2}\:=t_{j}$ for all $j,\:w\leq j<w+n_2(\ell-1)$ \item $t_u$ is an assignment transition and $\forall\:j, u<j<v+n_1\ell$ and $\forall\:j, j\geq w$, $t_j$ is a non-assignment transition \item for all $j,\:v+n_1\ell\leq j<w$, if $t_j$ is an assignment transition then $c_j$ is the guard $\getest.$ \end{enumerate} Observe that the last assignment transition before $t_{v+n_1\ell}$ is $t_u$, all assignment transitions from $t_{v+n_1\ell}$ up to $t_w$ have ${\getest}$ as their guard, the segment of the path from $t_v$ to $t_{v+n_1\ell-1}$ is the part where cycle $C$ is repeated $\ell$ times, and the segment of the path from $t_w$ to $t_{w+n_2\ell-1}$ is the part where cycle $C'$ is repeated $\ell$ times. Let $d_k$ and $\mu_k$ be such that $\parf(q_k) = (d_k,\mu_k)$ for each $k.$ We have that $c_0\:=\mathsf{true}$ and $t_0$ is an assignment transition. Let $\gamma(\ell)=o_0\cdots o_{w+n_2\ell-1}$ be the output sequence of length $w+n_2\ell$ such that $o_k=\sigma_k$ if $\sigma_k\in \outalph,$ otherwise $o_k=(\sigma_k,-\infty,\infty).$ Now, given $\ell>0$, we define two neighboring input sequences $\alpha(\ell)=a_0\cdots a_{w+n_2\ell-1}$ and $\beta(\ell)\:=b_0\cdots b_{w+n_2\ell-1}$, each of length $w+n_2\ell$, as follows.
For all $j,0\leq j< v$ and for all $j,\:v+n_1\ell \leq j<w$, $a_j\:=b_j\:=0$; for all $j,\:v\leq j<v+n_1\ell$ and for all $j,\:w\leq j<w+n_2\ell$, if $c_j$ is the guard $\getest$ then $a_j\:=\frac{1}{2}-\mu_j,\: b_j\:=-\frac{1}{2}-\mu_j$, if $c_j$ is the guard $\lttest$ then $a_j\:=-\frac{1}{2}-\mu_j,\: b_j\:=\frac{1}{2}-\mu_j$ and if $c_j$ is $\mathsf{true}$ then $a_j\:=b_j\:=0.$ It is not difficult to see that $\alpha(\ell)$ and $\beta(\ell)$ are neighbors. Let $\rho_{\alpha}(\ell)$ be the path such that $\ensuremath{\mathsf{abstract}}(\rho_{\alpha}(\ell))=\eta_{\ell}$ and $\inseq(\rho_{\alpha}(\ell))=\alpha(\ell).$ Let $\rho_{\beta}(\ell)$ be the path such that $\ensuremath{\mathsf{abstract}}(\rho_{\beta}(\ell))=\eta_{\ell}$ and $\inseq(\rho_{\beta}(\ell))=\beta(\ell).$ Let $X_j,U_j$ be random variables with distributions given by $\Lap{d_{j}\epsilon,a_j+\mu_j}$ and $\Lap{d_{j}\epsilon,b_j+\mu_j}$, respectively. Observe that $t_u$ is the last assignment transition in $\eta_{\ell}.$ For each $j>u$, for any given $y\in \mathbb{R}$, let $g_j(y),h_j(y)$ be the probabilities defined as follows: if $c_j$ is the guard ${\getest}$ then $g_j(y)\:=\prbfn{X_j\geq y}$ and $h_j(y)\:=\prbfn{U_j\geq y}$; if $c_j$ is the guard $\lttest$ then $g_j(y)\:=\prbfn{X_j< y}$ and $h_j(y)\:=\prbfn{U_j< y}$; if $c_j$ is $\mathsf{true}$ then $g_j(y)\:=h_j(y)\:=1.$ It should be easy to see that, for all $j,\:u<j<v$ and for all $j,\:v+n_1\ell \leq j<w$, $a_j\:=b_j$ and hence $g_j(y)\:=h_j(y).$ Now, we have the following claim. {\bf Claim:} For all $j, v\leq j <v+n_1\ell$, and for all $j,\:w\leq j<w+n_2\ell$, it is the case that $g_j(y)\geq h_j(y)$ for all $y\in \mathbb{R}$, and the following additional inequalities hold.
\begin{enumerate} \item If $y\leq 0$ and $c_j$ is the guard $\lttest$ then $g_j(y) \geq \eulerv{\frac{1}{2}d_{j}\epsilon}h_j(y).$ \item If $y>0$ and $c_j$ is the guard ${\getest}$ then $g_j(y) \geq \eulerv{\frac{1}{2}d_{j}\epsilon}h_j(y).$ \end{enumerate} \begin{proof} Observe that when $c_j\:=\mathsf{true}$ then trivially $g_j(y)\:=h_j(y).$ Now, consider the case when $y<-\frac{1}{2}.$ If $c_j$ is the guard $\getest$ then $g_j(y)\:= 1-\frac{1}{2}\eulerv{-d_{j}\epsilon(\frac{1}{2}-y)}$ and $h_j(y)\:=1-\frac{1}{2}\eulerv{-d_{j}\epsilon(-\frac{1}{2}-y)}$ (this is so since $a_j+\mu_j=\frac{1}{2}$ and $b_j+\mu_j=-\frac{1}{2}$); in this case $\frac{1}{2}-y>-\frac{1}{2}-y$ and hence $g_j(y)\geq h_j(y).$ If $c_j$ is the guard $\lttest$ then $g_j(y)\:= \frac{1}{2}\eulerv{-d_{j}\epsilon(-\frac{1}{2}-y)}$ and $h_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(\frac{1}{2}-y)}$; from this we see that $g_j(y)\geq \eulerv{d_{j}\epsilon}h_j(y).$ Now consider the case when $y\in [-\frac{1}{2},0].$ If $c_j$ is the guard $\getest$ then $g_j(y)\:= 1-\frac{1}{2}\eulerv{-d_{j}\epsilon(\frac{1}{2}-y)}$ and $h_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(y+\frac{1}{2})}$; since $g_j(y)\geq \frac{1}{2}$ and $h_j(y)\leq \frac{1}{2}$, we see that $g_j(y) \geq h_j(y).$ If $c_j$ is the guard $\lttest$ then $g_j(y)\:= 1-\frac{1}{2}\eulerv{-d_{j}\epsilon(y+\frac{1}{2})}$ and $h_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(\frac{1}{2}-y)}$; since $g_j(y)\geq \frac{1}{2}$, we see that $g_j(y)\geq \eulerv{\frac{1}{2}d_{j}\epsilon}h_j(y).$ Now consider the case when $y>0.$ If $y\leq \frac{1}{2}$ and $c_j$ is ${\getest}$ then $g_j(y)\:= 1-\frac{1}{2}\eulerv{-d_{j}\epsilon(\frac{1}{2}-y)}$ and $h_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(y+\frac{1}{2})}$; observe that $g_j(y)\geq \frac{1}{2}$ and $h_j(y)\leq \frac{1}{2}\eulerv{-\frac{1}{2}d_{j}\epsilon}$; from this we get the desired inequality.
If $y\leq \frac{1}{2}$ and $c_j$ is $\lttest$ then $g_j(y)\:= 1-\frac{1}{2}\eulerv{-d_{j}\epsilon(y+\frac{1}{2})}$ and $h_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(\frac{1}{2}-y)}$; since $g_j(y)\geq \frac{1}{2}$ and $h_j(y)\leq \frac{1}{2}$, we see $g_j(y)\geq h_j(y).$ If $y>\frac{1}{2}$ and $c_j$ is ${\getest}$ then $g_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(y-\frac{1}{2})}$ and $h_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(y+\frac{1}{2})}$; from this the desired inequality follows easily. If $y>\frac{1}{2}$ and $c_j$ is $\lttest$ then $g_j(y)\:=1-\frac{1}{2}\eulerv{-d_{j}\epsilon(y+\frac{1}{2})}$ and $h_j(y)\:=1-\frac{1}{2}\eulerv{-d_{j}\epsilon(y-\frac{1}{2})}$; it is easy to see that $g_j(y)\geq h_j(y).$ \end{proof} Let $S_1(\ell)$ be the set of all $j$ such that $v\leq j<v+n_1\ell$ and $c_j$ is the guard $\lttest.$ Let $S_2(\ell)$ be the set of all $j$ such that $w\leq j<w+n_2\ell$ and $c_j$ is the guard ${\getest}.$ Since $C$ is an {\ensuremath{\mathsf{L}}-cycle} and $C'$ is a {\ensuremath{\mathsf{G}}-cycle}, we see that the cardinalities of both $S_1(\ell)$ and $S_2(\ell)$ are $\geq \ell.$ Let $d_{\min}\:= \min\set{d_{j}\st j\in S_1(\ell)\cup S_2(\ell)}.$ Clearly $d_{\min}>0.$ For $k\leq w+n_2\ell$, let $\rho_{\alpha}(\ell)||k$ (resp.\ $\rho_{\beta}(\ell)||k$) be the suffix of $\rho_{\alpha}(\ell)$ (resp.
$\rho_{\beta}(\ell)||k$) starting with $q_k.$ Since $C'$ is a {\ensuremath{\mathsf{G}}-cycle}, from the above claim, we see that $\forall y\in \mathbb{R}$, $\pathprob{y,\rho_{\alpha}(\ell)||w}\geq \pathprob{y,\rho_{\beta}(\ell)||w}$, and $\forall y>0$, $\pathprob{y,\rho_{\alpha}(\ell)||w}\geq \eulerv{\frac{1}{2}d_{\min}\ell\epsilon}\pathprob{y,\rho_{\beta}(\ell)||w}.$ Using the above property and the previous claim, together with the assumption that $\forall j,\:v+n_1\ell\leq j<w$, if $t_j$ is an assignment transition then its guard is $\getest$, the following can be proved by downward induction on $k$, $\forall k,\:v+n_1\ell \leq k <w$: $\forall y\in \mathbb{R}$, $\pathprob{y,\rho_{\alpha}(\ell)||k}\geq \pathprob{y,\rho_{\beta}(\ell)||k}$, and $\forall y>0$, $\pathprob{y,\rho_{\alpha}(\ell)||k}\geq \eulerv{\frac{1}{2}d_{\min}\ell\epsilon}\pathprob{y,\rho_{\beta}(\ell)||k}.$ Now, it should be easy to see that $\forall y\in\mathbb{R},$ $$ \begin{array}{ll} \pathprob{y,\rho_{\alpha}(\ell)||v} = (\prod_{v\leq j<v+n_1\ell}g_j(y)) \pathprob{y,\rho_{\alpha}(\ell)||v+n_{1}\ell} \\ \pathprob{y,\rho_{\beta}(\ell)||v} = (\prod_{v\leq j<v+n_1\ell}h_j(y)) \pathprob{y,\rho_{\beta}(\ell)||v+n_{1}\ell}. \end{array}$$ Observe that $\forall j,\: v\leq j<v+n_1\ell,$ $$\begin{array}{ll} \forall y\leq 0: & g_j(y)\geq \eulerv{\frac{1}{2}d_{\min}\epsilon} h_j(y), \\ &\hspace*{0.2cm}\pathprob{y,\rho_{\alpha}(\ell)||j}\geq \pathprob{y,\rho_{\beta}(\ell)||j} \\ \mbox{and}\\ \forall y>0: & g_j(y)\geq h_j(y),\\ &\hspace*{0.2cm} \pathprob{y,\rho_{\alpha}(\ell)||j}\geq \eulerv{\frac{1}{2}d_{\min}\ell \epsilon} \pathprob{y,\rho_{\beta}(\ell)||j}.
\end{array}$$ From this we get the following: $$\forall y\in \mathbb{R},\: \pathprob{y,\rho_{\alpha}(\ell)||v}\geq \eulerv{\frac{1}{2}d_{\min}\ell \epsilon} \pathprob{y,\rho_{\beta}(\ell)||v}.$$ Using this, we can show from the definition of the probability of a path that $${\frac{\pathprob{\epsilon,\rho_{\alpha}(\ell)}}{\pathprob{\epsilon,\rho_{\beta}(\ell)}}}\geq \eulerv{\frac{1}{2}d_{\min}\ell\epsilon}.$$ Since $\ell$ can be made arbitrarily large, we see that $\mathcal{A}$ is not $d\epsilon$-differentially private for any $d>0$. Hence $\mathcal{A}$ is not differentially private. \end{proof} \subsection*{\textbf{{{Disclosing cycle}}s imply no privacy}} \begin{lemma} \label{lem:violatingcnec} A {\dipa} $\mathcal{A}$ is not differentially private if it has a reachable {{disclosing cycle}}. \end{lemma} \begin{proof} Thanks to Lemma~\ref{lem:criticalcnec} and Lemma~\ref{lem:criticalpnec}, we can assume $\mathcal{A}$ does not have {{leaking cycle}}s or {{leaking pair}}s. Assume that $\mathcal{A}$ is well-formed, but there is a reachable {{disclosing cycle}} $C$ in $\mathcal{A}$ that has a transition whose output is $\svar.$ The proof for the case when $C$ has a transition whose output is $\svar'$ is simpler and is left out. Now, if the transition of $C$ whose output is $\svar$ has the guard $\mathsf{true},$ then it can be shown easily that repeating the cycle $\ell$ times incurs a privacy cost linear in $\ell\epsilon,$ and hence $\mathcal{A}$ cannot be $d\epsilon$-differentially private for any $d>0.$ Thus, we consider the more interesting case when the guard is $\lttest$ or $\getest.$ Since $\mathcal{A}$ is well-formed, the cycle $C$ has no assignment transitions. Let $\eta=\eabsexecl{j+m}$ for $k=0,\ldots,j+m-1$ be an {abstract path} such that $q_0=\qinit$, $q_j=q_{j+m}$, and the final $m$ transitions of $\eta$ form the {abstract cycle} corresponding to $C.$
Fix $r$, $0\leq r<m$, such that $\sigma_{j+r}=\svar.$ We assume that the guard of the $(j+r)$-th transition is $\getest.$ The case when it is $\lttest$ is similar and left out. Further, let $d_k$ and $\mu_k$ be such that $\parf(q_k) = (d_k,\mu_k)$ for each $k.$ Fix $\ell>0.$ We define an abstract path $\eta_\ell$ starting from $\qinit$ by repeating the cycle $C$ $\ell$ times. Formally, $\eta_\ell=\eabsexecl{j+\ell m}$ such that $q_k=q_{k-m}$ and $\sigma_k=\sigma_{k-m}$ for $j+m\leq k\leq j+\ell m.$ Let $t_k$ be the {$k$-th transition} of $\eta_\ell$ and $c_k$ be the {guard} of the $k$-th transition. We have that $\sigma_{j+nm+r}\:=\svar$ for all $n$ such that $0\leq n<\ell.$ Now we construct two input sequences $\alpha(\ell)=a_0\cdots a_{j+\ell m-1}$ and $\beta(\ell)=b_0\cdots b_{j+\ell m-1}$ as follows. We take $a_k=-\mu_k$ for all $k, 0\leq k< j+\ell m$, such that $t_k$ is an input transition; otherwise we take $a_k=\tau.$ We take $b_k=-\mu_k-1$ if $k=j+nm+r$ for some $0\leq n<\ell$, and $b_k=a_k$ otherwise. Let $\rho(\ell)=\execl{j+\ell m}$ be the path such that \begin{itemize} \item $\eta_\ell=\ensuremath{\mathsf{abstract}}(\rho(\ell)),$ \item $\inseq(\rho(\ell))=\alpha(\ell),$ and \item for all $k$, i) $o_k=\sigma_k$ if $\sigma_k \in \Gamma$, ii) $o_k=(\sigma_k,0,\infty)$ if $k=j+nm+r$ for some $0\leq n<\ell$, and iii) $o_k=(\sigma_k,-\infty,\infty)$ otherwise.
\end{itemize} Let $\rho'(\ell)=\execlb{j+\ell m}$ be the path that is equivalent to $\rho(\ell)$ and such that $\inseq(\rho'(\ell))=\beta(\ell).$ Let $\rho(\ell)|| k$ and $\rho'(\ell) || k$ be the suffixes of the executions $\rho(\ell)$ and $\rho'(\ell)$ starting from state $q_k.$ Using backward induction, we can easily show that, for each $x_0,$ $\pathprob{x_0,\rho(\ell)||k}$ and $\pathprob{x_0,\rho'(\ell)||k}$ are non-zero and that $$ \pathprob{x_0,\rho(\ell)||k} = e^{ {\#(k) d_{j+r}\epsilon} } \pathprob{x_0,\rho'(\ell)||k}$$ where $\#(k)$ is the number of indices $k_1$ such that $k\leq k_1 < j+m\ell-1$ and $k_1=j+nm+r$ for some $0\leq n<\ell.$ Thus, $$\pathprob{\epsilon,\rho(\ell)}=e^{ {\ell d_{j+r}\epsilon} } \pathprob{\epsilon,\rho'(\ell)}.$$ Now, $\ell$ is arbitrary and hence for every $d>0$, there is an $\ell$ such that $\pathprob{\epsilon,\rho(\ell)}>e^{ {d\epsilon}} \pathprob{\epsilon,\rho'(\ell)}.$ Hence $\mathcal{A}$ is not differentially private. \end{proof} \subsection*{\textbf{{{Privacy violating path}}s imply no privacy}} \begin{lemma} \label{lem:violatingpnec} A {\dipa} $\mathcal{A}$ is not differentially private if it has a reachable {{privacy violating path}}. \end{lemma} \begin{proof} Thanks to Lemma~\ref{lem:criticalcnec}, Lemma~\ref{lem:violatingcnec} and Lemma~\ref{lem:criticalpnec}, we can assume $\mathcal{A}$ does not have {{leaking cycle}}s, {{disclosing cycle}}s or {{leaking pair}}s. We give the proof for one of the cases of a {{privacy violating path}}, where the path starts with a transition whose guard is $\lttest$ and which lies on an {\ensuremath{\mathsf{L}}-cycle} $C$, which is followed by an {\ensuremath{\mathsf{AG}}-path} ending in a transition with guard $\getest$ and whose output is $\svar.$ (The proofs for the other cases of the privacy violating path are similar and are left out.) Since $\mathcal{A}$ is well-formed, the cycle $C$ does not have an assignment transition.
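Both the construction below and the disclosing-cycle argument of Lemma~\ref{lem:violatingcnec} rest on the same pointwise fact about Laplace densities: once the released output is confined to one side of both means, shifting the mean by $1$ rescales the density by exactly $\eulerv{d\epsilon}$, so $\ell$ repetitions cost a factor $\eulerv{\ell d\epsilon}$. A minimal numerical sketch, in which the constants $d$ and $\epsilon$ are arbitrary illustrative choices:

```python
import math

def lap_pdf(x, mean, d, eps):
    # density of the Laplace distribution with mean `mean` and scale 1/(d*eps)
    return 0.5 * d * eps * math.exp(-d * eps * abs(x - mean))

def ratio(x, d, eps):
    # density at x under mean 0 versus mean -1; for x >= 0 both |x| and
    # |x + 1| grow at the same linear rate, so the ratio is the constant
    # exp(d*eps), independent of x
    return lap_pdf(x, 0.0, d, eps) / lap_pdf(x, -1.0, d, eps)
```

Repeating the cycle multiplies these constant per-step ratios, matching the $\eulerv{\ell d_{j+r}\epsilon}$ factor in the disclosing-cycle lemma.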
Fix $\ell>0.$ Consider an abstract path $\eta(\ell)=\eabsexecl{n}$ of length $n$ from the initial state $\qinit$ such that $\eta(\ell)$ contains the cycle $C$ repeated $\ell$ times, and upon exiting the cycle continues onto the {\ensuremath{\mathsf{AG}}-path} $p$ such that the last transition of the {\ensuremath{\mathsf{AG}}-path} has guard $\getest$ and outputs $\svar.$ Fix a transition of $C$ with guard $\lttest,$ and let $k_1,k_2,\ldots,k_\ell$ be the indices where this transition occurs in $\eta(\ell).$ Let $\parf(q_k)=(d_k,\mu_k).$ Next, we construct two input sequences $\alpha(\ell)=a_0\cdots a_{n-1}$ and $\beta(\ell)=b_0\cdots b_{n-1}$ of length $n$ as follows. If the $k$th transition of $\eta(\ell)$ is a non-input transition then $a_k=b_k=\tau.$ If $k\in \set{k_1,k_2,\ldots,k_\ell}$ then $a_k=-\mu_k$ and $b_k=-\mu_k+1.$ For all other $k$s, $a_k=b_k=-\mu_k.$ Let $\rho(\ell)=\execl{n}$ be the path such that \begin{itemize} \item $\eta(\ell)=\ensuremath{\mathsf{abstract}}(\rho(\ell)),$ \item $\inseq(\rho(\ell))=\alpha(\ell),$ and \item for all $k$, i) $o_k=\sigma_k$ if $\sigma_k \in \Gamma$, ii) $o_k=(\sigma_k,-\infty,0)$ if $k=n-1$, and iii) $o_k=(\sigma_k,-\infty,\infty)$ otherwise. \end{itemize} Let $\rho'(\ell)=\execlb{n}$ be the path that is equivalent to $\rho(\ell)$ and such that $\inseq(\rho'(\ell))=\beta(\ell).$ Note that in $\rho(\ell),\rho'(\ell),$ the last output is a non-positive number. As the path $p$ is an {\ensuremath{\mathsf{AG}}-path}, this implies that the stored value of $\rvar$ during the $\ell$ executions of $C$ is also a non-positive number. Combined with the fact that $C$ is an {\ensuremath{\mathsf{L}}-cycle} and the construction of $\rho(\ell),\rho'(\ell)$, it can be shown that $$\pathprob{\epsilon, \rho(\ell)}=e^{ {\ell d_{k_1}\epsilon}} \pathprob{\epsilon,\rho'(\ell)}.$$ As in the case of {{disclosing cycle}}s (see Lemma~\ref{lem:violatingcnec}), we can conclude that $\mathcal{A}$ is not differentially private.
\end{proof} \subsection*{\textbf{{\dipautop} with Finite Outputs}} \begin{lemma} \label{lem:main} Let $\mathcal{A}=\defaut$ be a well-formed {\dipa} with finite outputs. Let $\rho$ be a path of length $n>0$ such that the initial transition (i.e. the $0$th transition), $t_0$, of $\rho$ is an assignment transition. Let $c_0$ be the guard of $t_0.$ Let $\rho'$ be a path that is equivalent to $\rho$ such that $\inseq(\rho')$ is a neighbor of $\inseq(\rho).$ Then the following properties hold for all $x_0\in \mathbb{R}.$ \begin{enumerate} \item If the guard $c_0$ is $\getest$, and the first cycle transition in $\rho$ is a {\ensuremath{\mathsf{G}}-cycle} transition and no assignment transition with guard $\lttest$ appears before it, then $$\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon} \pathprob{x_0+1,\rho}.$$ \item If the guard $c_0$ is $\getest$ and one of the following holds: (a) $\rho$ has no cycle transitions, (b) the first cycle transition in $\rho$ is a {\ensuremath{\mathsf{G}}-cycle} transition and an assignment transition with guard $\lttest$ appears before it, (c) the first cycle transition in $\rho$ is an {\ensuremath{\mathsf{L}}-cycle} transition, then $$\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon} \pathprob{x_0-1,\rho}. 
$$ \item If the guard $c_0$ is $\lttest$ and the first cycle transition in $\rho$ is an {\ensuremath{\mathsf{L}}-cycle} transition and no assignment transition with guard $\getest$ appears before it, then $$\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon}\pathprob{x_0-1,\rho}.$$ \item If the guard $c_0$ is $\lttest$ and one of the following holds: (a) $\rho$ has no cycle transitions, (b) the first cycle transition in $\rho$ is an {\ensuremath{\mathsf{L}}-cycle} transition and an assignment transition with guard $\getest$ appears before it, (c) the first cycle transition in $\rho$ is a {\ensuremath{\mathsf{G}}-cycle} transition, then $$\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon}\pathprob{x_0+1,\rho}.$$ \item If the guard $c_0$ is $\mathsf{true}$, then $\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon} \pathprob{x_0,\rho}.$ \end{enumerate} \end{lemma} \begin{proof} Let $\rho=\execl{n}$ and $\rho'=\execlb{n}.$ Let $t_0,\ldots,t_{n-1}$ be the transitions of $\rho$ and let $c_0,\ldots,c_{n-1}$ be their respective guards. For each $k\leq n,$ let $d_k,\mu_k$ be such that $\parf(q_k)=(d_k,\mu_k).$ Recall that, for any $k,$ $\rho||k$ denotes the suffix of $\rho$ starting from $q_k.$ We assume that there are no cycle transitions that are assignments. This is because if there is a cycle with an assignment then the guards on all its other transitions must be $\mathsf{true}.$ Hence, we can never exit the cycle.
Further, it is easy to see that this cycle has the same \lq\lq behavior\rq\rq\ in both $\rho$ and $\rho'.$ For each $k$, such that $0\leq k<n$, let $g_k,g'_k,\theta_k$ be functions of a single variable given by $$g_k(y)\:=\begin{cases} \frac{d_k\epsilon}{2}\eulerv{-d_k\epsilon \abs{y-a_k-\mu_k}} &{t_k} \mbox{ is an input transition}\\ \frac{d_k\epsilon}{2}\eulerv{-d_k\epsilon \abs{y-\mu_k}} & \mbox{otherwise} \end{cases}, $$ $$g'_k(y)\:=\begin{cases} \frac{d_k\epsilon}{2}\eulerv{-d_k\epsilon \abs{y-b_k-\mu_k}} &{t_k} \mbox{ is an input transition}\\ \frac{d_k\epsilon}{2}\eulerv{-d_k\epsilon \abs{y-\mu_k}} & \mbox{otherwise} \end{cases} $$ and $$\theta_k\:= \begin{cases} b_k-a_k &{t_k} \mbox{ is an input transition}\\ 0 & \mbox{otherwise}. \end{cases} $$ Observe that, for each $k\geq 0$, $g'_k(y)\:=g_k(y-\theta_k).$ Since $\abs{\theta_k}\leq 1$, we see that $g'_k(y)\geq \eulerv{-d_k\epsilon}g_k(y)$, for all $y\in \mathbb{R}.$ We prove the lemma by induction on the number of assignment transitions in $\rho.$ \paragraph*{\textbf{Base Case}} In the base case, $\rho$ has one assignment transition, which is $t_0.$ Let $S_1$ and $S_2$ be the sets of $k>0$ such that $c_k$ is $\getest$ and $c_k$ is $\lttest$, respectively. Now, assume the condition of statement (1) of the lemma is satisfied. Observe that $S_1$ includes all {\ensuremath{\mathsf{G}}-cycle} transitions whose guard is $\getest.$ Observe that, since $\mathcal{A}$ is well-formed, for all $k\in S_2$, $t_k$ does not lie on a cycle and hence is a {{critical transition}}. Similarly, $t_0$ is also a {{critical transition}}.
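As a quick sanity check, the inequality $g'_k(y)\geq \eulerv{-d_k\epsilon}g_k(y)$ invoked above is just the triangle inequality for the Laplace density: shifting the argument by $\theta_k$ with $\abs{\theta_k}\leq 1$ changes $\abs{y-\mu}$ by at most $1$. A numerical confirmation, with arbitrary illustrative values of $d$ and $\epsilon$:

```python
import math

def g(y, mean, d, eps):
    # Laplace density with scale 1/(d*eps), as in the definition of g_k
    return 0.5 * d * eps * math.exp(-d * eps * abs(y - mean))

def shift_ok(y, theta, d, eps):
    # |y - theta - mean| <= |y - mean| + |theta| <= |y - mean| + 1,
    # hence g(y - theta) >= exp(-d*eps) * g(y) whenever |theta| <= 1
    return g(y - theta, 0.0, d, eps) >= math.exp(-d * eps) * g(y, 0.0, d, eps) - 1e-15
```

The small tolerance only absorbs floating-point round-off; the inequality itself is exact.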
Now, we see that $$\pathprob{x_0,\rho'}\:=\:\int^\infty_{x_{0}}f(x)\prod_{k\in S_{1}}\int^\infty_xg'_k(y) dy\: dx$$ where $\displaystyle{f(x)\:=\:g'_0(x)\prod_{k\in S_{2}}\int^x_{-\infty} g'_k(y) dy}.$ Now, substituting $g'_k(y)=g_k(y-\theta_k)$ (for $k\in S_1$) in the above equation and using inequality (1) of Lemma \ref{lem:integralineq}, we see that $$\displaystyle{\pathprob{x_0,\rho'}\geq\int^\infty_{x_{0}+1}f(x-1)\prod_{k\in S_{1}}\int^\infty_xg_k(y)dy\:dx}.$$ Observe that $$f(x-1)\:=g_0(x-(1+\theta_0))\prod_{k\in S_{2}}\int^{x-1}_{-\infty}g_k(y-\theta_k) dy.$$ Now, by introducing a new variable $z$ such that $z\:=y+1$, we see that $$\int^{x-1}_{-\infty}g_k(y-\theta_k) dy\:= \int^{x}_{-\infty}g_k(z-(1+\theta_k)) dz.$$ From this, it is easy to see that $$f(x-1)\geq \eulerv{-2(d_0+\sum_{k\in S_{2}}d_k)\epsilon}g_0(x)\prod_{k\in S_{2}}\int^{x}_{-\infty}g_k(y) dy.$$ Observe that $\weight{\rho}\:\geq 2(d_0+\sum_{k\in S_{2}}d_k).$ Putting all the above observations together, we get \begin{dmath*} \pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon} \int^\infty_{x_{0}+1 }g_0(x)\prod_{k\in S_2}\int^x_{-\infty}g_k(y) dy\:\prod_{k\in S_{1}} \int^\infty_xg_k(y) dy. \end{dmath*} Observe that the right-hand side of the above inequality is $\eulerv{-\weight{\rho}\epsilon}\pathprob{x_0+1,\rho}$. Property (1) of the lemma follows for the base case from this observation. Now, we prove the base case for property (2). Assume the condition of (2a) is satisfied, i.e., there are no cycle transitions in $\rho.$ Now, we see that \begin{dmath*}\pathprob{x_0,\rho'}\:=\:\int^\infty_{x_{0}}g'_0(x)\prod_{k\in S_{1}}\int^\infty_xg'_k(y) dy \prod_{k\in S_{2}}\int^x_{-\infty}g'_k(z) dz\:dx.\end{dmath*} By introducing new variables $u,v,w$ such that $u\:=x-1,\:v=y-1,\:w=z-1$, we get \begin{dmath*} \pathprob{x_0,\rho'}\:=\:\int^\infty_{x_{0}-1}g'_0(u+1)\prod_{k\in S_{1}}\int^\infty_ug'_k(v+1) dv \prod_{k\in S_{2}}\int^u_{-\infty}g'_k(w+1) dw\:du.
\end{dmath*} Observing that, for each $k\geq 0$, $g'_k(u+1)\geq \eulerv{-2d_k\epsilon}g_k(u)$ and that each $t_k$ is a {{critical transition}}, we get the inequality of property (2). Now observe that the condition of (2b) cannot be satisfied, as $t_0$ is the only assignment transition in $\rho$. Next, assume the condition of (2c) is satisfied. Observe that, for all $k\in S_1$, $t_k$ is a {{critical transition}}. As before, we see that $$\pathprob{x_0,\rho'}\:=\:\int^\infty_{x_{0}}f(x)\prod_{k\in S_{2}}\int^x_{-\infty} g'_k(y) dy\: dx$$ where $\displaystyle{f(x)\:=\:g'_0(x)\prod_{k\in S_{1}}\int^\infty_x g'_k(y) dy}.$ Now, using inequality (2) of Lemma \ref{lem:integralineq}, we see that $$\displaystyle{\pathprob{x_0,\rho'}\geq\int^\infty_{x_{0}-1}f(x+1)\prod_{k\in S_{2}}\int^x_{-\infty} g_k(y)dy\:dx}.$$ Now, observe that $$f(x+1)\:=g_0(x-(\theta_0-1))\prod_{k\in S_{1}}\int_{x+1}^\infty g_k(y-\theta_k)dy.$$ Introducing a new variable $z$ and setting $z=y-1$, we see that $$f(x+1)\:=g_0(x-(\theta_0-1))\prod_{k\in S_{1}}\int_x^\infty g_k(z-(\theta_k-1))dz$$ and $$f(x+1)\geq \eulerv{-2(d_0+\sum_{k\in S_{1}}d_k)\epsilon}g_0(x)\prod_{k\in S_{1}}\int_x^\infty g_k(z)dz.$$ From this and the above inequality, it is easily seen that $$\pathprob{x_0,\rho'}\geq \eulerv{-2(d_0+\sum_{k\in S_{1}}d_k)\epsilon} \pathprob{x_0-1,\rho}.$$ From this we see that the inequality of property (2) holds. The proofs for the base case of properties (3) and (4) are symmetric to those of properties (1) and (2) and are left out. To prove property (5) for the base case, we see that the proof is similar to those of properties (1) and (3), depending on whether {\ensuremath{\mathsf{G}}-cycle} or {\ensuremath{\mathsf{L}}-cycle} transitions appear. There are two minor differences.
The first difference is that if the first transition is a non-input transition then $\theta_0=0$ and hence it only incurs a cost of $d_0$ and not $2 d_0.$ The second difference is that the lower limit of the outer integral will be $-\infty$ in the former case, while the upper limit of the outer integral will be $\infty$ in the latter case. In either case, it is straightforward to see that property (5) holds. \paragraph*{\textbf{Inductive Step}} Now, we prove the inductive step as follows. Assume that all the properties hold when $\rho$ has $\ell>0$ assignment transitions. Now, consider the case when $\rho$ has $\ell+1$ assignment transitions. Let $t_i$, for $i>0$, be the second assignment transition in $\rho.$ Let $S_1$ (resp., $S_2$) be the set of $k$, $0<k<i$, such that $c_k$ is $\getest$ (resp., $\lttest$). Consider the case when $c_0$ is $\getest.$ Now, we consider two sub-cases. We first consider the sub-case when there are no cycle transitions before $t_i.$ We have $\pathprob{x_0,\rho'}\:=\int^\infty_{x_{0}}f'(x) \pathprob{\rho'||i,x}dx$ where $$f'(x) \:=g'_0(x)\prod_{k\in S_{1}}\int_x^{\infty}g'_k(y)dy\prod_{k\in S_{2}}\int^x_{-\infty}g'_k(y)dy.$$ Applying the inductive hypothesis for the suffix $\rho||i$, we get an inequality involving $\pathprob{\rho'||i,x}$ and $\pathprob{x+1,\rho||i}$, or $\pathprob{x-1,\rho||i}$, or $\pathprob{x,\rho||i}$, based on which of the five properties of the lemma is satisfied by $\rho||i.$ Suppose the condition of property (1) is satisfied by $\rho||i$; then, using the inductive hypothesis, we get $\pathprob{x_0,\rho'}\geq \int^{\infty}_{x_{0}} f'(x) h(x) dx$, where $h(x)\:= \eulerv{-2 \weight{\rho||i}\epsilon}\pathprob{x+1,\rho||i}.$ Now, by taking $f(x)\:=f'(x)h(x)$, using inequality (1) of Lemma \ref{lem:integralineq} (with $k=0$ in that inequality, which yields property (1) for the path $\rho$ via the same simplification/reasoning used in the base case), and by observing that
$$\begin{array}{l}\displaystyle{\pathprob{x_0+1,\rho}\:=\int^\infty_{x_{0}+1} g_0(x) \prod_{k\in S_1}\int_x^{\infty}g_k(y)dy}\\ \hspace*{3cm}\displaystyle{\prod_{k\in S_{2}}\int^x_{-\infty}g_k(y)dy \pathprob{x,\rho||i}dx}. \end{array}$$ We can similarly prove the inductive step when the suffix $\rho||i$ satisfies the other properties (i.e., 2 through 5) of the lemma. Now consider the sub-case when a cycle transition appears before $t_i.$ Assume that the cycle transitions are {\ensuremath{\mathsf{G}}-cycle} transitions. If $c_i$ is also $\getest$, then the suffix $\rho||i$ can satisfy any of the conditions of the first two properties of the lemma; in this situation, let $f(x)\:=f'(x)h(x)$ where $f'(x)\:=g'_0(x)\prod_{k\in S_{2}}\int^x_{-\infty}g'_k(y)dy$ and $h(x) \:=\eulerv{-2 \weight{\rho||i}\epsilon}\pathprob{x+1,\rho||i}.$ Observe that, if $\rho||i$ satisfies the condition of property (1), then $h(x)$ is the RHS of the inequality we get by applying the inductive hypothesis to $\rho||i.$ If $\rho||i$ satisfies the condition of property (2) of the lemma then, by applying the inductive hypothesis to $\rho||i$, we get $\pathprob{\rho'||i,x}\geq \eulerv{-2 \weight{\rho||i}\epsilon}\pathprob{x-1,\rho||i}.$ Since $\pathprob{x-1,\rho||i}\geq \pathprob{x+1,\rho||i}$, we see that $\pathprob{\rho'||i,x}\geq \eulerv{-2 \weight{\rho||i}\epsilon}\pathprob{x+1,\rho||i}.$ Now, we have $\pathprob{x_0,\rho'}\:\geq \int^\infty_{x_{0}}f'(x)h(x)\prod_{k\in S_{1}}\int^\infty_{x}g'_k(z) dz\: dx.$ Applying inequality (1) of Lemma \ref{lem:integralineq}, we get the desired result for the inductive step. On the other hand, if $c_i$ is $\lttest$ then the suffix $\rho||i$ cannot satisfy the condition of property (3) of the lemma due to the well-formedness of $\mathcal{A}$; however, it can satisfy the condition of property (4).
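The monotonicity fact used above, that $\pathprob{x,\cdot}$ does not increase as $x$ increases, is immediate in the special case of an assignment-free suffix whose guards are all $\getest$: the path probability is then a product of Laplace survival functions, each nonincreasing in $x$. A quick numerical illustration of this special case (Python, not part of the proof; the rate/mean values are arbitrary):

```python
import math

def laplace_sf(x, rate, mean):
    # survival function Pr[X >= x] for X ~ Lap(rate, mean)
    z = x - mean
    if z <= 0:
        return 1.0 - 0.5 * math.exp(rate * z)
    return 0.5 * math.exp(-rate * z)

def g_cycle_prob(x, params):
    # probability that every >=-guard in an assignment-free suffix passes
    # when the stored value is x: each step independently needs X_j >= x
    p = 1.0
    for rate, mean in params:
        p *= laplace_sf(x, rate, mean)
    return p

params = [(0.8, 0.0), (1.2, 0.5), (0.6, -0.2)]  # arbitrary (rate, mean) pairs
for x in (-1.0, 0.0, 0.7, 2.0):
    # Pr(x-1) >= Pr(x+1): the path probability is nonincreasing in x
    assert g_cycle_prob(x - 1, params) >= g_cycle_prob(x + 1, params)
```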
In this sub-case too, we obtain the result for the inductive step as above, by using the inductive hypothesis for $\rho||i$, using similar reasoning as in the base case, and applying the first inequality of Lemma \ref{lem:integralineq}. Now consider the situation where the cycle transitions appearing before $t_i$ are {\ensuremath{\mathsf{L}}-cycle} transitions. Now, we apply inequality (2) of Lemma \ref{lem:integralineq} to prove that property (2) of the lemma is satisfied by $\rho.$ To do this, we define $f(x)\:=f'(x)h(x)$ where $f'(x)\:=g'_0(x)\prod_{k\in S_{1}}\int_x^{\infty}g'_k(y)dy$ and $h(x) \:= \eulerv{-2 \weight{\rho||i}\epsilon}\pathprob{x-1,\rho||i}.$ Next, applying the induction hypothesis to $\rho||i$, we show that $$\pathprob{x_0,\rho'}\geq \int^{\infty}_{x_{0}} f'(x)h(x) \prod_{k\in S_{2}}\int^x_{-\infty}g'_k(y)dy\: dx.$$ Since $\mathcal{A}$ is well-formed, $\rho||i$ cannot satisfy the condition of property (1) of the lemma. If $\rho||i$ satisfies the condition of property (2) or that of property (3), then the above inequality follows directly from the induction hypothesis; if $\rho||i$ satisfies the condition of property (4), then the above inequality follows from the induction hypothesis and the observation that $\pathprob{x+1,\rho||i}\geq \pathprob{x-1,\rho||i}$; if $\rho||i$ satisfies the condition of property (5), then the above inequality follows from the induction hypothesis and the observation that $\pathprob{x,\rho||i}=\pathprob{x-1,\rho||i}$, as $\pathprob{x,\rho||i}$ is independent of $x.$ Rewriting the above inequality, we get $$\pathprob{x_0,\rho'}\geq \int^{\infty}_{x_{0}} f'(x)h(x) \prod_{k\in S_{2}}\int^x_{-\infty}g_k(y-\theta_k)dy\: dx.$$ Now, using inequality (2) of Lemma \ref{lem:integralineq}, and using simplifications and reasoning as in the base cases, we see that property (2) of the lemma is satisfied by $\rho.$ The proof for the inductive step for the case when $c_0$ is $\lttest$ is symmetric.
For the case when $c_0$ is $\mathsf{true}$, the proof proceeds along the same lines, except that if $t_0$ is a non-input transition then it incurs a cost of $d_0$ only and the limits of the outer integrals are $-\infty$ and $\infty.$ \end{proof} \subsection*{\textbf{{\dipautop} with Finite and Infinite Outputs}} \input{esvt-sufficient} \section{Proof of Theorem~\ref{thm:main} for {\dipa}} \label{app:diapa} In this section, we consider {\dipautop}. Later, we show how the proof extends to {\edipautop}. We start with some auxiliary definitions that will help us in the proof. \subsection{Auxiliary definitions} Let $\mathcal{A}=\defaut$ be an {\edipautos}. For any execution/path $\rho=\execl{n}$ of $\mathcal{A}$, the \emph{abstraction} of $\rho$, denoted $\ensuremath{\mathsf{abstract}}(\rho)$, is the word $\eabsexecl{n}$ where $$\sigma_i=\begin{cases} o_i & \mbox{if }o_i\in \outalph \\ \svar & \mbox{if } o_i=(\svar,r,s)\\ \svar' & \mbox{otherwise} \end{cases}$$ Note that for {\dipautop}, $\sigma_i=o_i$ for each $i.$ A sequence $\eta=\eabsexecl{n}$ is said to be an \emph{abstract path} if $\eta=\ensuremath{\mathsf{abstract}}(\rho)$ for some execution $\rho$. Further, such a $\rho$ is called an execution of $\eta$ on input $\alpha=a_0\cdots a_n.$ Note that $\rho$ is unique if $\sigma_i\in \outalph$ for each $i.$ In general, two distinct sequences $\rho$ and $\rho'$ having the same abstraction $\eta$ will only differ at indices $i$ such that $\sigma_i \notin\outalph.$ At those indices, we would need to specify the values of the interval end-points $r_i,s_i$ to which the real output is assumed to belong.
Fix an abstract path $\eta=\eabsexecl{n}.$ The $i$-th transition, denoted $\trname[i]$, is the word $q_i\sigma_{i}q_{i+1}.$ The Boolean condition of the $i$-th transition is the unique $c$ such that $\delta(q_i,c)=(q_{i+1},\sigma_i,b).$ The output sequence of $\eta$, denoted $\outseq(\eta)$, is the sequence $\sigma_0\cdots \sigma_n.$ Note that we can classify transitions of an abstract path as input, non-input, assignment and non-assignment as expected. The notions of cycles, reachability, {{leaking cycle}}, {{leaking pair}}, {{disclosing cycle}}, {{privacy violating path}} and {{critical transition}} extend naturally to abstract paths. \subsection{Proof of Lemma~\ref{lem:if1main}} \label{app:if1main} Let $\mathcal{A}= \defaut.$ Assume that $\mathcal{A}$ has a {{leaking cycle}} reachable from the state $\qinit.$ We give the proof first assuming that all states of $\mathcal{A}$ are input states. Later we will show how we can modify the construction for automata that have both input and non-input states. Let $\eta=\absexecl{m+n}$ be an \red{abstract path} such that $q_0=\qinit$, $q_m=q_{m+n}$, and the final $n$ transitions of $\eta$, i.e., the abstract path $C=\absexecsf{m}{m+n}$, form a {{leaking cycle}}. Let $t_k$ be the \red{$k$-th transition} of $\eta$ and $c_k$ be the \red{Boolean condition} of the $k$-th transition. Further, let $d_k$ and $\mu_k$ be such that $\parf(q_k) = (d_k,\mu_k)$ for each $k.$ \red{Without loss of generality}, we assume that $c_0\:=\mathsf{true}$ and $t_0$ is an assignment transition. Let $i,j$ be the smallest integers such that $m\leq i<j<m+n$ and the following properties are satisfied: (a) $t_i$ is an assignment transition, (b) $c_j\neq \mathsf{true}$ and (c) for every $k_1$ such that $i<k_1<j$, $t_{k_{1}}$ is a non-assignment transition and $c_{k_{1}}=\mathsf{true}.$ We fix $i,j$ as above. Consider any integer $\ell>0$.
We define an abstract path $\eta_\ell$ starting from $\qinit$ by repeating the cycle $t_m,\ldots t_{m+n-1}$, $\ell$ times. Formally, $\eta_\ell=\absexecl{m+\ell n}$ such that $q_k=q_{k-n}$ and $o_k=o_{k-n}$ for $m+n\leq k\leq m+\ell n.$ Once again, we let $t_k$ be the \red{$k$-th transition} of $\eta_\ell$ and $c_k$ be the \red{Boolean condition} of the $k$-th transition. Now, given $\ell>0$, we define two neighboring input sequences $\alpha(\ell)=a_0\cdots a_{m+\ell n-1}$ and $\beta(\ell)=b_0\cdots b_{m+\ell n-1}$ each of length $m+\ell n.$ The sequence $\alpha(\ell)$ is chosen so that all the conditions in the transitions of $\eta_{\ell}$ are satisfied with joint probability $>\frac{1}{2}$ \red{for large $\epsilon$}. The input $a_0=0$ and for $0<k<m+\ell n$, $a_k$ is defined inductively as given below: let $k'<k$ be the largest integer such that $t_{k'}$ is an assignment transition, then $a_k$ is given as follows: if $c_k$ is the condition ${\getest}$ then $a_k\:=\mu_{k'}-\mu_{k}+a_{k'}+1$, otherwise $a_k\:=\mu_{k'}-\mu_{k}+a_{k'}-1.$ Let $\gamma(\ell)$ be the sequence of outputs given by $o_k$, for $0\leq k<m+\ell n.$ Now, consider any $k$, $0\leq k<m+\ell n$, such that $c_k\neq \mathsf{true}$ and fix it. Let $k'<k$ be the largest integer such that $t_{k'}$ is an assignment transition. 
Let $X_{k'},X_{k}$ be the two random variables with distributions given by $\Lap{d_{k'}\epsilon,a_{k'}}$ and $\Lap{d_{k}\epsilon,a_{k}}.$ Let $Y_k$ be the random variable denoting the $k^{th}$ output of $\eta$ on the input sequence $\alpha(\ell).$ Now consider the case when $c_k$ is ${\getest}.$ From the way we defined $\alpha(\ell)$, it is the case that $\mu_k+a_k\:=\mu_{k'}+a_{k'}+1.$ Now $\prbfn{Y_k\neq o_k} \:= \prbfn{X_k < X_{k'}}\:=\prbfn{X_k\leq X_{k'}}.$ Let $d_{\mathsf{mx}}=\max(d_k,d_{k'})$ and $d_{\mathsf{mn}} = \min (d_k,d_{k'}).$ From Lemma \ref{lem:problessequal}, we see that if $d_k\neq d_{k'}$ then $$\prbfn{X_k \leq X_{k'}}< \frac{{d_{\mathsf{mx}}}^2}{2({d_{\mathsf{mx}}}^2-{d_{\mathsf{mn}}}^2)}\eulerv{-d_{\mathsf{mn}}\epsilon}.$$ If $d_k= d_{k'}$ then $$\prbfn{X_k \leq X_{k'}}<\frac{1}{2}\eulerv{-d_{k}\epsilon}(1+\frac{d_{k}\epsilon}{2}).$$ From the above, we see that $$\prbfn{Y_k\neq o_k}\leq r \eulerv{-d_{\mathsf{mn}}\epsilon}(1+\frac{d_{\mathsf{mx}}\epsilon}{2})$$ where $r$ is a constant that depends only on $\mathcal{A}$ (and not on $k$).
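The first bound quoted from Lemma \ref{lem:problessequal} can be checked by simulation. The sketch below (illustrative Python, not part of the proof; the parameter values are arbitrary) estimates $\prbfn{X_k\leq X_{k'}}$ by Monte Carlo for two Laplace variables whose means are one apart, as in the construction of $\alpha(\ell)$, and compares it with the analytic bound for $d_k\neq d_{k'}$:

```python
import math
import random

def lap(rng, rate, mean):
    # inverse-CDF sample from Lap(rate, mean)
    u = rng.random() - 0.5
    if u >= 0.0:
        return mean - math.log(1.0 - 2.0 * u) / rate
    return mean + math.log(1.0 + 2.0 * u) / rate

def prob_le(d_k, d_kp, eps, trials=200_000, seed=7):
    # Monte Carlo estimate of Pr[X_k <= X_{k'}] with X_{k'} ~ Lap(d_kp*eps, 0)
    # and X_k ~ Lap(d_k*eps, 1), i.e. the means are one apart
    rng = random.Random(seed)
    hits = sum(lap(rng, d_k * eps, 1.0) <= lap(rng, d_kp * eps, 0.0)
               for _ in range(trials))
    return hits / trials

eps, d_k, d_kp = 2.0, 1.0, 0.5       # arbitrary illustrative parameters
d_mx, d_mn = max(d_k, d_kp), min(d_k, d_kp)
bound = d_mx ** 2 / (2 * (d_mx ** 2 - d_mn ** 2)) * math.exp(-d_mn * eps)
est = prob_le(d_k, d_kp, eps)
```

For these parameters the bound evaluates to roughly $0.245$, and the empirical frequency stays below it.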
Now consider the case when $c_k$ is $\lttest.$ In this case, $\mu_k+a_k\:=\mu_k+a_{k'}-1$ and $\prbfn{Y_k\neq o_k} \:= \prbfn{X_{k'}< X_{k}}.$ By a similar analysis, in this case also, $$\prbfn{Y_k\neq o_k}\leq r \eulerv{-d_{\mathsf{mn}}\epsilon}(1+\frac{d_{\mathsf{mx}}\epsilon}{2}).$$ Let $d_{\max}\:=\max\set{\pi_1(\parf(q))\st q\in Q}$ and $d_{\min}\:=\min\set{\pi_1(\parf(q))\st q\in Q}.$ Then, for every $k,0\leq k<m+\ell n$, $$\prbfn{Y_k\neq o_k}\leq r \eulerv{-d_{\min}\epsilon}(1+\frac{d_{\max}\epsilon}{2})$$ Using the union rule of probabilities, we see that, $$\prbfn{\exists k<m+\ell n,\: Y_k\neq o_k}\:\leq r(m+\ell n) \eulerv{-d_{\min}\epsilon}(1+\frac{d_{\max}\epsilon}{2}).$$ Given $\ell>0$, let $\epsilon_\ell\in\mathbb{R}$, be the smallest value such that $$\forall \epsilon\geq \epsilon_\ell,\: r(m+\ell n) \eulerv{-d_{\min}\epsilon}(1+\frac{d_{\max}\epsilon}{2}) \leq \frac{1}{2}.$$ Now, $$\red{Pr(\alpha(\ell),\gamma(\ell),\epsilon)}\:=1-\prbfn{\exists k<m+\ell n,\: Y_k\neq o_k}. $$ From the construction of $\epsilon_\ell$ and above observations, we see that $\forall \epsilon\geq \epsilon_\ell,\:\red{Pr(\alpha(\ell),\gamma(\ell),\epsilon)}\:\geq \frac{1}{2}.$ Now, recall the integers $i,j$ fixed earlier. Intuitively, we define $\beta(\ell)$ so that each of the conditions in the transitions $t_{j+\ell' n}, 0\leq \ell'<\ell$ are satisfied with probability $<\frac{1}{2}$. For each $\ell',\:0\leq \ell'<\ell$, we let $b_{i+\ell' n}\:=a_{j+\ell' n}+\mu_j -\mu_i$ and $b_{j+\ell' n}\:=a_{i+\ell' n}+\mu_i-\mu_j.$ We observe the following. Now, for each $\ell',\:0\leq \ell'<\ell$, the following hold. 
$c_{j+\ell' n}\:=c_j\neq \mathsf{true}.$ If $c_{j+\ell' n}$ is the condition ${\getest}$ then $b_{i+\ell' n}+\mu_i\:=b_{j+\ell' n}+\mu_j+1$ since $a_{j+\ell' n}+\mu_j\:=a_{i+\ell' n}+\mu_i+1.$ If $c_{j+\ell' n}$ is the condition $\lttest$ then $b_{j+\ell' n}+\mu_j\:=b_{i+\ell' n}+ \mu_i+1$ since $a_{i+\ell' n}+\mu_i\:=a_{j+\ell' n}+\mu_j+1.$ We define $b_{i'}$, for all values of $i'<m+\ell n$ and $i'\notin \set{ i+\ell' n, j+\ell' n \st 0\leq \ell'<\ell}$, so that $\beta(\ell)$ is a neighbour of $\alpha(\ell).$ It is not difficult to see that such a sequence $\beta(\ell)$ can be defined. For each $k, 0\leq k<m+\ell n$, let $U_k$ be the random variable with distribution given by $\Lap{d_{q_{k}}\epsilon,b_k}$ and $Z_k$ be the random variable denoting the $k^{th}$ output of $\eta$ on the input sequence $\beta(\ell).$ Let $d'\:=\min(d_{i},d_{j})$ and $d''\:=\max(d_{i},d_{j}).$ Now, $\prbfn{Z_j=o_j}$ is given by $\prbfn{U_j\geq U_i}$ if $c_j$ is the condition ${\getest}$, otherwise it is given by $\prbfn{U_j\leq U_i}.$ Using Lemma \ref{lem:problessequal} and similar reasoning as given earlier, we see that $$\prbfn{Z_j=o_j}\leq r'\eulerv{-d'\epsilon}(1+\frac{d''\epsilon}{2})$$ for some constant $r'.$ For each $\ell',\:0<\ell'<\ell$, using the same reasoning as above with the random variables $U_{i+\ell' n},U_{j+\ell'n}$, we see that $$\prbfn{Z_{j+\ell' n}=o_{j+\ell' n}}\leq r'\eulerv{-d'\epsilon}(1+\frac{d''\epsilon}{2}).$$ Since for any $\ell_1,\ell_2$ such that $0\leq\ell_1< \ell_2<\ell$, the random variables $U_{i+\ell_1 n},U_{j+\ell_1 n}$ are independent of $U_{i+\ell_2 n},U_{j+\ell_2 n}$, we see that $$\prbfn{\forall \ell',0\leq \ell'<\ell,\:Z_{j+\ell' n}=o_{j+\ell' n}}\leq {r'}^\ell \eulerv{-d'\ell \epsilon}(1+\frac{d''\epsilon}{2})^\ell.$$ Thus, $$\prbfn{\forall k, 0\leq k<m+\ell n, Z_k=o_k}\leq {r'}^\ell \eulerv{-d'\ell \epsilon}(1+\frac{d''\epsilon}{2})^\ell.$$ The LHS of the above equation is exactly \red{$Pr(\beta(\ell),\gamma(\ell),\epsilon)$.} Thus, for any $\ell>0$, $$\forall
\epsilon\geq \epsilon_\ell,\: \frac{\red{Pr(\alpha(\ell),\gamma(\ell),\epsilon)}}{\red{Pr(\beta(\ell),\gamma(\ell),\epsilon)}}\geq\frac{1}{2}(\frac{\eulerv{d'\epsilon}}{r'(1+\frac{d''\epsilon}{2})})^\ell.$$ We claim that for any $s>0$, $\exists \ell,\epsilon$ such that $$\frac{1}{2}(\frac{\eulerv{d'\epsilon}}{r'(1+\frac{d''\epsilon}{2})})^\ell>\eulerv{s\epsilon}.$$ Now the above inequality holds if $$\frac{\eulerv{(d'\ell-s)\epsilon}}{(1+\frac{d''\epsilon}{2})^\ell}>2{r'}^\ell.$$ Choose $\ell$ so that $d'\ell>s.$ Since the denominator of the left-hand-side term of the last inequality grows polynomially in $\epsilon$, while its numerator grows exponentially in $\epsilon$, it is easy to see that $\exists \epsilon_{0}>\epsilon_\ell$ such that $$ \forall \epsilon\geq \epsilon_0, \:\frac{\eulerv{(d'\ell-s)\epsilon}}{(1+\frac{d''\epsilon}{2})^\ell}>2{r'}^\ell.$$ This shows that $\mathcal{A}$ is not differentially private. \subsection{{{leaking pair}} implies non-differential privacy} \label{app:if2main} \begin{definition} Let $\alpha$ be an input sequence, $\gamma$ an output sequence, such that there is a path $\rho$ from the initial state $\qinit$ with $\inseq(\rho)=\alpha$ and $\outseq(\rho)=\gamma.$ Given $\epsilon>0$, let $Y(\epsilon)$ be the random variable that models the value of the variable $\rvar$ at the end of the execution of $\rho.$ Let $\pdfassgn{\alpha}{\gamma}{z}$ be the probability density function of the random variable $Y(\epsilon).$ \end{definition} \begin{proof} Let $\mathcal{A}= \defaut.$ Assume that $\mathcal{A}$ has a {{leaking pair}} of cycles $(C,C')$ such that $C$ is reachable from $\qinit.$ Assume that $C$ is an {\ensuremath{\mathsf{L}}-cycle} and $C'$ is a {\ensuremath{\mathsf{G}}-cycle}. (The proof for the case when $C$ is a {\ensuremath{\mathsf{G}}-cycle} and $C'$ is an {\ensuremath{\mathsf{L}}-cycle} is similar but symmetric and is left out.)
Without loss of generality, we assume that neither $C$ nor $C'$ is by itself a {{leaking cycle}}; otherwise, by Lemma \ref{lem:if1main}, $\mathcal{A}$ is not differentially private. This assumption means that neither $C$ nor $C'$ has assignment transitions. We further assume that $C,C'$ are distinct. If they are the same, then it is straightforward to prove that $\mathcal{A}$ is not differentially private, using more or less the same proof. We also assume that all the states in $\mathcal{A}$ are input states. The case when $\mathcal{A}$ has both input and non-input states can also be proved using more or less the same proof. Let the lengths of $C,C'$ be $n_1,n_2$, respectively. Now, for any $\ell>0$, consider the following abstract path $\eta_{\ell}$ in $\mathcal{A}$ starting from $\qinit$ in which the cycles $C,C'$ are repeated $\ell$ times each. The path $$\begin{array}{lcl} \eta_{\ell}&=& q_0o_0 \cdots q_u o_u \cdots q_v o_v \cdots q_{v+n_1\ell-1} o_{v+n_1\ell-1}\cdots \\ && \hspace*{1cm} \cdots q_{w}o_w\cdots o_{w+n_2\ell-1} q_{w+n_2\ell} \end{array}$$ where the following conditions are satisfied. For each $k$, let $t_k$ be the $k$-th transition of $\eta_{\ell}$ and $c_k$ be the Boolean condition of the $k$-th transition.
\begin{enumerate} \item $q_0=\qinit$ \item $\absexecsf{v}{v+n_1}$ is the cycle $C$ \item $t_{j+n_1}\:=t_{j}$ for all $j,\: v\leq j<v+n_1(\ell-1)$ \item $\absexecsf{w}{w+n_2}$ is the cycle $C'$ \item $t_{j+n_2}\:=t_{j}$ for all $j,\:w\leq j<w+n_2(\ell-1)$ \item $t_u$ is an assignment transition and $\forall\:j, u<j<v+n_1\ell$ and $\forall\:j, j\geq w$, $t_j$ is a non-assignment transition \item for all $j,\:v+n_1\ell\leq j<w$, if $t_j$ is an assignment transition then $c_j$ is the condition $\getest.$ \end{enumerate} Observe that the last assignment transition before $t_{v+n_1\ell}$ is $t_u$, all assignment transitions from $t_{v+n_1\ell}$ up to $t_w$ have ${\getest}$ as their condition, the segment of the path from $t_v$ to $t_{v+n_1\ell-1}$ is the part where cycle $C$ is repeated $\ell$ times, and the segment of the path from $t_w$ to $t_{w+n_2\ell-1}$ is the part where cycle $C'$ is repeated $\ell$ times. Let $d_k$ and $\mu_k$ be such that $\parf(q_k) = (d_k,\mu_k)$ for each $k.$ \red{Without loss of generality}, we assume that $c_0\:=\mathsf{true}$ and $t_0$ is an assignment transition. Now, we define two adjacent input sequences $\alpha(\ell)=a_0\cdots a_{w+n_2\ell-1}$ and $\beta(\ell)\:=b_0\cdots b_{w+n_2\ell-1}$ as follows. For all $j,0\leq j< v$ and for all $j,\:v+n_1\ell \leq j<w$, $a_j\:=b_j\:=0$; for all $j,\:v\leq j<v+n_1\ell$ and for all $j,\:w\leq j<w+n_2\ell$, if $c_j$ is the condition $\getest$ then $a_j\:=\frac{1}{2}-\mu_j,\: b_j\:=-\frac{1}{2}-\mu_j$, if $c_j$ is the condition $\lttest$ then $a_j\:=-\frac{1}{2}-\mu_j,\: b_j\:=\frac{1}{2}-\mu_j$ and if $c_j$ is $\mathsf{true}$ then $a_j\:=b_j\:=0.$ It is not difficult to see that $\alpha(\ell)$ and $\beta(\ell)$ are adjacent. Let $\gamma(\ell)\:=o_0\cdots o_j\cdots$ be the sequence of outputs in the transitions of $\eta_{\ell}.$ Let $X_j,U_j$ be random variables with distributions given by $\Lap{d_{j}\epsilon,a_j+\mu_j}$ and $\Lap{d_{j}\epsilon,b_j+\mu_j}$, respectively.
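The case analysis defining $a_j,b_j$ on the cycle positions can be sketched as follows (illustrative Python, not part of the proof; the string encoding of guards is our own assumption):

```python
# For a >= guard: a_j = 1/2 - mu_j and b_j = -1/2 - mu_j; for a < guard the
# signs are swapped; a 'true' guard contributes 0 to both sequences.
def adjacent_inputs(guards_and_means):
    a, b = [], []
    for guard, mu in guards_and_means:
        if guard == ">=":
            a.append(0.5 - mu)
            b.append(-0.5 - mu)
        elif guard == "<":
            a.append(-0.5 - mu)
            b.append(0.5 - mu)
        else:  # guard is "true"
            a.append(0.0)
            b.append(0.0)
    return a, b

a, b = adjacent_inputs([(">=", 0.25), ("<", -0.25), ("true", 0.0)])
# adjacency: the two sequences differ by at most 1 in every coordinate
assert all(abs(x - y) <= 1 for x, y in zip(a, b))
```

Note that the sampled value at position $j$ then has mean $a_j+\mu_j=\pm\frac{1}{2}$ under $\alpha(\ell)$ and mean $b_j+\mu_j=\mp\frac{1}{2}$ under $\beta(\ell)$, which is what the claim below exploits.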
Recall that $\pdfassgn{\alpha(\ell)|v}{\gamma(\ell)|v}{z}$ is the p.d.f.\ of the random variable that models the value of $\rvar$ after the first $v$ steps of $\eta_{\ell}$ on input $\alpha(\ell).$ Since $\alpha(\ell)|v\:=\beta(\ell)|v$, we see that $\pdfassgn{\alpha(\ell)|v}{\gamma(\ell)|v}{z}\:= \pdfassgn{\beta(\ell)|v}{\gamma(\ell)|v}{z}.$ Observe that $t_u$ is the last assignment transition in $\eta_{\ell}.$ For each $j>u$, for any given $y\in \mathbb{R}$, let $g_j(y),h_j(y)$ be the probabilities defined as follows: if $c_j$ is the condition ${\getest}$ then $g_j(y)\:=\prbfn{X_j\geq y}$ and $h_j(y)\:=\prbfn{U_j\geq y}$; if $c_j$ is the condition $\lttest$ then $g_j(y)\:=\prbfn{X_j< y}$ and $h_j(y)\:=\prbfn{U_j< y}$; if $c_j$ is $\mathsf{true}$ then $g_j(y)\:=h_j(y)\:=1.$ It should be easy to see that, for all $j,\:u<j<v$ and for all $j,\:v+n_1\ell \leq j<w$, $a_j\:=b_j$ and hence $g_j(y)\:=h_j(y).$ Now, we have the following claim. {\bf Claim:} For all $j, v\leq j <v+n_1\ell$, and for all $j,\:w\leq j<w+n_2\ell$, it is the case that $g_j(y)\geq h_j(y)$ for all $y\in \mathbb{R}$, and the following additional inequalities hold.
\begin{enumerate} \item If $y\leq 0$ and $c_j$ is the condition $\lttest$ then $g_j(y) \geq \eulerv{\frac{1}{2}d_{j}\epsilon}h_j(y).$ \item If $y>0$ and $c_j$ is the condition ${\getest}$ then $g_j(y) \geq \eulerv{\frac{1}{2}d_{j}\epsilon}h_j(y).$ \end{enumerate} \begin{proof} Observe that when $c_j\:=\mathsf{true}$ then trivially $g_j(y)\:=h_j(y).$ Now, consider the case when $y<-\frac{1}{2}.$ If $c_j$ is the condition $\getest$ then $g_j(y)\:= 1-\frac{1}{2}\eulerv{-d_{j}\epsilon(\frac{1}{2}-y)}$ and $h_j(y)\:=1-\frac{1}{2}\eulerv{-d_{j}\epsilon(-\frac{1}{2}-y)}$ (this is so since $a_j+\mu_j=\frac{1}{2}$ and $b_j+\mu_j=-\frac{1}{2}$); in this case $\frac{1}{2}-y>-\frac{1}{2}-y$ and hence $g_j(y)\geq h_j(y).$ If $c_j$ is the condition $\lttest$ then $g_j(y)\:= \frac{1}{2}\eulerv{-d_{j}\epsilon(-\frac{1}{2}-y)}$ and $h_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(\frac{1}{2}-y)}$; from this we see that $g_j(y)\geq \eulerv{d_{j}\epsilon}h_j(y).$ Now consider the case when $y\in [-\frac{1}{2},0].$ If $c_j$ is the condition $\getest$ then $g_j(y)\:= 1-\frac{1}{2}\eulerv{-d_{j}\epsilon(\frac{1}{2}-y)}$ and $h_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(y+\frac{1}{2})}$; since $g_j(y)\geq \frac{1}{2}$ and $h_j(y)\leq \frac{1}{2}$, we see that $g_j(y) \geq h_j(y).$ If $c_j$ is the condition $\lttest$ then $g_j(y)\:= 1-\frac{1}{2}\eulerv{-d_{j}\epsilon(y+\frac{1}{2})}$ and $h_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(\frac{1}{2}-y)}$; since $g_j(y)\geq \frac{1}{2}$, we see that $g_j(y)\geq \eulerv{\frac{1}{2}d_{j}\epsilon}h_j(y).$ Now consider the case when $y>0.$ If $y\leq \frac{1}{2}$ and $c_j$ is ${\getest}$ then $g_j(y)\:= 1-\frac{1}{2}\eulerv{-d_{j}\epsilon(\frac{1}{2}-y)}$ and $h_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(y+\frac{1}{2})}$; observe that $g_j(y)\geq \frac{1}{2}$ and $h_j(y)\leq \frac{1}{2}\eulerv{-\frac{1}{2}d_{j}\epsilon}$; from this we get the desired inequality.
If $y\leq \frac{1}{2}$ and $c_j$ is $\lttest$ then $g_j(y)\:= 1-\frac{1}{2}\eulerv{-d_{j}\epsilon(y+\frac{1}{2})}$ and $h_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(\frac{1}{2}-y)}$; since $g_j(y)\geq \frac{1}{2}$ and $h_j(y)\leq \frac{1}{2}$, we see $g_j(y)\geq h_j(y).$ If $y>\frac{1}{2}$ and $c_j$ is ${\getest}$ then $g_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(y-\frac{1}{2})}$ and $h_j(y)\:=\frac{1}{2}\eulerv{-d_{j}\epsilon(y+\frac{1}{2})}$; from this we see that the desired inequality follows easily. If $y>\frac{1}{2}$ and $c_j$ is $\lttest$ then $g_j(y)\:=1-\frac{1}{2}\eulerv{-d_{j}\epsilon(y+\frac{1}{2})}$ and $h_j(y)\:=1-\frac{1}{2}\eulerv{-d_{j}\epsilon(y-\frac{1}{2})}$; it is easy to see that $g_j(y)\geq h_j(y).$ \end{proof} Let $S_1(\ell)$ be the set of all $j$ such that $v\leq j<v+n_1\ell$ and $c_j$ is the condition $\lttest.$ Let $S_2(\ell)$ be the set of all $j$ such that $w\leq j<w+n_2\ell$, and $c_j$ is the condition ${\getest}.$ Since $C$ is an {\ensuremath{\mathsf{L}}-cycle} and $C'$ is a {\ensuremath{\mathsf{G}}-cycle}, we see that the cardinalities of both $S_1(\ell)$ and $S_2(\ell)$ are $\geq \ell.$ Let $d_{\min}\:= \min\set{d_{j}\st j\in S_1(\ell)\cup S_2(\ell)}.$ Clearly $d_{\min}>0.$ Let $\rho_\ell=\eta[\alpha]$ be the path that results from the execution of $\eta$ on input $\alpha.$ Let $\rho'_\ell=\eta[\beta]$ be the path that results from the execution of $\eta$ on input $\beta.$ Recall that $\rho_{\ell}||w$ is the suffix of $\rho_{\ell}$ starting with $q_w$ and it is of length $n_2\ell.$ Since $C'$ is a {\ensuremath{\mathsf{G}}-cycle}, from the above claim, we see that $\forall y\in \mathbb{R}$, $\pathprob{\rho_{\ell}||w,y}\geq \pathprob{\rho'_{\ell}||w,y}$, and $\forall y>0$, $\pathprob{\rho_{\ell}||w,y}\geq \eulerv{\frac{1}{2}d_{\min}\ell\epsilon}\pathprob{\rho'_{\ell}||w,y}.$ Using the above property and the previous claim, together with the assumption that $\forall j,\:v+n_1\ell\leq j<w$, if $t_j$ is an assignment transition then its
condition is $\getest$, the following can be proved by downward induction on $k$, $\forall k,\:v+n_1\ell \leq k <w$: $\forall y\in \mathbb{R}$, $\pathprob{\rho_{\ell}||k,y}\geq \pathprob{\rho'_{\ell}||k,y}$, and $\forall y>0$, $\pathprob{\rho_{\ell}||k,y}\geq \eulerv{\frac{1}{2}d_{\min}\ell\epsilon}\pathprob{\rho'_{\ell}||k,y}.$ Now, it should be easy to see that $\forall y\in\mathbb{R},$ $$ \begin{array}{lcl} \pathprob{\rho_{\ell}||v,y} &=& (\prod_{v\leq j<v+n_1\ell}g_j(y)) \pathprob{\rho_{\ell}||v+n_{1}\ell,y} \\ \pathprob{\rho'_{\ell}||v,y} &=& (\prod_{v\leq j<v+n_1\ell}h_j(y)) \pathprob{\rho'_{\ell}||v+n_{1}\ell,y}. \end{array}$$ Observe that $\forall j,\: v\leq j<v+n_1\ell,$ $$\forall y\leq 0: g_j(y)\geq \eulerv{\frac{1}{2}d_{\min}\epsilon} h_j(y),\: \pathprob{\rho_{\ell}||j,y}\geq \pathprob{\rho'_{\ell}||j,y}$$ and $$\forall y>0,\: g_j(y)\geq h_j(y),\: \pathprob{\rho_{\ell}||j,y}\geq \eulerv{\frac{1}{2}d_{\min}\ell \epsilon} \pathprob{\rho'_{\ell}||j,y}.$$ From this we get the following: $$\forall y\in \mathbb{R},\: \pathprob{\rho_{\ell}||v,y}\geq \eulerv{\frac{1}{2}d_{\min}\ell \epsilon} \pathprob{\rho'_{\ell}||v,y}.$$ Now, we observe that $$ \begin{array}{l} \red{\Pr(\alpha(\ell),\gamma(\ell),\epsilon)}= \int^{\infty}_{-\infty} \pdfassgn{\alpha(\ell)|v}{\gamma(\ell)|v}{y} \pathprob{\rho_{\ell}||v,y}dy.\\ \red{ \Pr(\beta(\ell),\gamma(\ell),\epsilon)}= \int^{\infty}_{-\infty} \pdfassgn{\beta(\ell)|v}{\gamma(\ell)|v}{y} \pathprob{\rho'_{\ell}||v,y}dy.\\ \end{array} $$ From the facts that $\pathprob{\rho_{\ell}||v,y} \geq \eulerv{\frac{1}{2}d_{\min}\ell\epsilon}\pathprob{\rho'_{\ell}||v,y}$ and that $\pdfassgn{\alpha(\ell)|v}{\gamma(\ell)|v}{y}\:= \pdfassgn{\beta(\ell)|v}{\gamma(\ell)|v}{y}$, we see that $\frac{\red{\Pr(\alpha(\ell),\gamma(\ell),\epsilon)}}{\red{\Pr(\beta(\ell),\gamma(\ell),\epsilon)}}\geq \eulerv{\frac{1}{2}d_{\min}\ell\epsilon}.$ Since $\ell$ can be made arbitrarily large, we see that $\mathcal{A}$ is not $a\epsilon$-differentially private, 
for any $a>0$. Hence $\mathcal{A}$ is not differentially private. \end{proof} \section{\dipautop} \label{sec:dipauto} {\diptext} ({\bf \textsf{Di}}fferentially {\bf \textsf{P}}rivate) automata ({\dipa} for short) are a simple model to describe some differential privacy mechanisms known in the literature. Some of the features we hope to capture are those highlighted by Algorithms~\ref{fig:SVT} and~\ref{fig:NumSp}. Recall that the input to a differential privacy mechanism is a sequence of real numbers that correspond to answers to queries. The differential privacy mechanism is a randomized algorithm that processes this input, samples values from distributions like Laplace, and produces a sequence of values as output. These outputs could include real numbers (Algorithm~\ref{fig:NumSp}). Further, as observed in Example~\ref{ex:diff-privacy}, the behavior of the mechanism depends on the privacy budget $\epsilon$. {\dipautop} are a formal model that has these features. \subsection{Syntax} \blue{A {\dipa} is a \emph{parametric} automaton with finitely many control states and three real-valued variables $\svar,\svar'$ and $\rvar$. While the variables $\svar$ and $\svar'$ are freshly sampled in each step, the variable $\rvar$ can store real values to be used in later steps.} The value of the parameter $\epsilon$ (the privacy budget) influences the distribution from which real values are sampled during an execution. The input to such an automaton is a finite sequence of real numbers. In each step, the automaton does the following. \begin{enumerate} \item It samples two values, called $\svar$ and $\svar'$, drawn from the distributions $\Lap{d\epsilon, \mu}$ and $\Lap{d'\epsilon,\mu'}$, respectively. The scaling factors $d,d'$ and means $\mu, \mu'$ of these distributions depend on the current state. \item Depending on the current state, the automaton will either read a real number from the input, or not read anything from the input.
If an input value $a$ is read, then $\svar$ and $\svar'$ are updated by adding $a$ to them. \item The transition results in changing the control state and outputting a value. The value output could either be a symbol from a finite set (like $\bot/\top$ in Algorithm~\ref{fig:SVT}) or one of the two real numbers $\svar$ and $\svar'$ that are sampled in this step (like in Algorithm~\ref{fig:NumSp}). If an input value is read then the transition could be guarded by the result of comparing the sampled value $\svar$ and the stored value $\rvar$. It is possible that for certain values of $\rvar$ and $\svar$, no transition is enabled from the current state. In such a case, the computation ends. \item Finally, the automaton may choose to store the sampled value $\svar$ in $\rvar$. \end{enumerate} The above intuition is captured by the formal definition of {\dipa} below and its semantics described later in this section. \begin{definition}[{\dipa}] \label{def:dipa} Let $\cnds$ be the set of \emph{guard conditions} $\set{\mathsf{true},\getest,\lttest}$. 
A \emph{\dipautos} is a tuple $\mathcal{A} = \defaut$ where \begin{itemize} \item $Q$ is a finite set of states partitioned into two sets: the set of input states $Q_{\mathsf{in}}$ and the set of non-input states $Q_{\mathsf{non}}$, \item $\inalph = \mathbb{R}$ is the input alphabet, \item $\outalph$ is a finite output alphabet, \item $\qinit \in Q$ is the initial state, \item $\vars = \set{\rvar,\svar,\svar'}$ is the set of variables, \item $\parf : Q \to \mathbb{Q}^{\geq 0} \times \mathbb{Q} \times \mathbb{Q}^{\geq 0} \times \mathbb{Q}$ is the parameter function that assigns to each state a 4-tuple $(d,\mu,d',\mu')$, where $\svar$ is sampled from $\Lap{d\epsilon,\mu}$ and $\svar'$ is sampled from $\Lap{d'\epsilon,\mu'}$, \item and $\delta: (Q \times \cnds) \pto (Q \times (\outalph \cup \set{\svar,\svar'}) \times \set{\mathsf{true},\mathsf{false}})$ is the transition (partial) function that, given a current state and the result of comparing $\rvar$ with $\svar$, determines the next state, the output, and whether $\rvar$ should be updated to store $\svar$. The output could either be a symbol from $\outalph$ or one of the values $\svar$ and $\svar'$ that were sampled. \end{itemize} The transition function $\delta$ of a {\dipa} must satisfy the following four conditions. \vspace*{0.05in} \noindent {\bf \Detcond:} For any state $q \in Q$, if $\delta(q,\mathsf{true})$ is defined then $\delta(q,\getest)$ and $\delta(q,\lttest)$ are undefined. \vspace*{0.05in} \noindent {\bf \Outcond:} For any state $q \in Q$, if $\delta(q,\getest)$ is defined to be $(q_1,o_1,b_1)$ and $\delta(q,\lttest)$ is defined to be $(q_2,o_2,b_2)$ then $o_1 \neq o_2$, i.e., distinct transitions from a state have different outputs. Further, at least one of $o_1$ and $o_2$ belongs to $\outalph$, i.e., both transitions cannot output real values.
\vspace*{0.05in} \noindent {\bf \Initcond:} The initial state $\qinit$ has only one outgoing transition of the form $\delta(\qinit,\mathsf{true}) = (q,o,\mathsf{true})$ where $q$ is a state and $o$ is an output symbol. In other words, the guard of the first transition is always $\mathsf{true}$ and the first value sampled is stored in $\rvar$. \vspace*{0.05in} \noindent {\bf \Noninpcond:} From any $q \in Q_{\mathsf{non}}$, if $\delta(q,c)$ is defined, then $c = \mathsf{true}$; that is, there is at most one transition from a non-input state which is always enabled. \end{definition} It is useful to classify transitions of a {\dipa} into different types. Consider a transition $\delta(q,c) = (q',o,b)$. If $q \in Q_{\mathsf{in}}$ then it is an \emph{input transition} and if $q \in Q_{\mathsf{non}}$ then it is a \emph{non-input transition}. If $b = \mathsf{true}$ then the transition will set $\rvar = \svar$, and hence it is called an \emph{assignment transition}. On the other hand, if $b = \mathsf{false}$, the transition is said to be a \emph{non-assignment} transition. A \emph{pure assignment} transition is an assignment transition with $c = \mathsf{true}$. The {\initcond} condition says that the (only) transition out of the initial state of a {\dipa} is a pure assignment transition. \begin{example} \label{ex:autos} The differential privacy mechanisms in Example~\ref{ex:diff-privacy} can be modeled as {\dipautop}. These are shown in Fig.~\ref{fig:svt-auto} and~\ref{fig:numsp-auto}. When drawing {\dipa}s in this paper, we will follow these conventions. Input states will be represented as circles, while non-input states will be shown as rectangles. The name of each state is written above the line, while the scaling factor $d$ and mean $\mu$ of the distribution used to sample $\svar$ are written below the line.
The parameters $d'$ and $\mu'$ for sampling $\svar'$ are not shown in the figures, but are mentioned in the caption and text when they are important; they are relevant only when $\svar'$ is output on a transition. Edges will be labeled with the guard of the transition, followed by the output, and a Boolean to indicate whether the transition is an assignment transition. \begin{figure} \begin{center} \begin{tikzpicture} \footnotesize \node[mynonstate, initial] (q0) {\vrule height .2cm depth .2cm width 0pt $q_0$ \nodepart{two} \vrule height .2cm depth .2cm width 0pt $\frac{1}{2},\ 0$}; \node[myinstate, right of=q0] (q1) {$q_1$ \nodepart{lower} $\frac{1}{4},\ 0$}; \node[myinstate, right of=q1] (q2) {$q_2$ \nodepart{lower} $\frac{1}{4},\ 0$}; \draw (q0) edge node[myedgelabel] {$\mathsf{true}$ \nodepart{two} $\bot, \mathsf{true}$} (q1); \draw (q1) edge[loop above] node[myedgelabel] {$\lttest$ \nodepart{two} $\bot, \mathsf{false}$} (q1) edge node[myedgelabel] {$\getest$ \nodepart{two} $\top, \mathsf{false}$} (q2); \end{tikzpicture} \end{center} \caption{{\dipa} $\svtauto$ modeling Algorithm~\ref{fig:SVT}. Threshold for the algorithm is $0$ (mean for sampling $\svar$ in state $q_0$).} \label{fig:svt-auto} \end{figure} The SVT algorithm (Algorithm~\ref{fig:SVT}) can be modeled as a {\dipa} $\svtauto$ shown in Fig.~\ref{fig:svt-auto}. Since $\svtauto$ does not output $\svar'$ in any transition, the parameters used for sampling $\svar'$ are not relevant. In this representation of SVT, the threshold used for comparison in the algorithm is hard-coded in the automaton as the mean parameter of the initial state $q_0$. In fact, without loss of generality we can take this to be $0$ as shown in Fig.~\ref{fig:svt-auto}. The initial state $q_0$ of the automaton is a non-input state with $d = \frac{1}{2}$ and $\mu = 0$ (the threshold for the algorithm). From $q_0$, the algorithm samples a value that corresponds to the perturbed threshold and stores this in variable $\rvar$. 
In state $q_1$, in each step it reads a query value (input), perturbs it by sampling, and compares this with the perturbed threshold stored in variable $\rvar$. If the sampled value is less than $\rvar$ it stays in $q_1$, outputs $\bot$ and leaves $\rvar$ unchanged. On the other hand, if $\getest$ holds, then it outputs $\top$ and transitions to a terminal state $q_2$. $\svtauto$ can be used to illustrate our classification of transitions. The transition from $q_0$ to $q_1$ is the only non-input transition and the only assignment transition in the automaton; all other transitions are non-assignment, input transitions. In addition, the transition from $q_0$ to $q_1$ is also a pure assignment transition, since the guard is $\mathsf{true}$. \begin{figure} \begin{center} \begin{tikzpicture} \footnotesize \node[mynonstate, initial] (q0) {\vrule height .2cm depth .2cm width 0pt $q_0$ \nodepart{two} \vrule height .2cm depth .2cm width 0pt $\frac{4}{9},\ 0$}; \node[myinstate, right of=q0] (q1) {$q_1$ \nodepart{lower} $\frac{2}{9},\ 0$}; \node[myinstate, right of=q1] (q2) {$q_2$ \nodepart{lower} $\frac{2}{9},\ 0$}; \draw (q0) edge node[myedgelabel] {$\mathsf{true}$ \nodepart{two} $\bot, \mathsf{true}$} (q1); \draw (q1) edge[loop above] node[myedgelabel] {$\lttest$ \nodepart{two} $\bot, \mathsf{false}$} (q1) edge node[myedgelabel] {$\getest$ \nodepart{two} $\svar', \mathsf{false}$} (q2); \end{tikzpicture} \end{center} \caption{{\dipa} $\numspauto$ modeling Algorithm~\ref{fig:NumSp}. The threshold is taken to be $0$. The label of each state below the line shows the parameters for sampling $\svar$. Parameters for sampling $\svar'$ are not shown in the figure; they are $\frac{1}{9}$ (scaling factor) and $0$ (mean) in every state.} \label{fig:numsp-auto} \end{figure} Automaton $\numspauto$ modeling Numeric Sparse (Algorithm~\ref{fig:NumSp}) is shown in Fig.~\ref{fig:numsp-auto}.
As in the case of $\svtauto$ (Fig.~\ref{fig:svt-auto}), the threshold is hard-coded in the automaton and is taken to be $0$ (without loss of generality). Parameters used to sample $\svar'$ are not shown in the diagram depicting $\numspauto$. We take those to be $\frac{1}{9}$ (scaling factor) and $0$ (mean) in every state; in fact, these parameters for $\svar'$ are only important for state $q_1$. The automaton is very similar to $\svtauto$ (Fig.~\ref{fig:svt-auto}), with the only differences being the parameters used when sampling in each state, and the fact that $\svar'$ is output on the transition from $q_1$ to $q_2$ instead of $\top$. \end{example} \subsection{Paths and executions} A {\dipa} $\mathcal{A}$ defines a probability measure on the \emph{executions} or \emph{paths} of $\mathcal{A}$ (henceforth just called a path). Informally, a path is just a sequence of transitions taken by the automaton. Observe that the {\outcond} condition ensures that the current state and the output together determine which transition is taken. The input read influences the values of $\svar$ and $\svar'$, and therefore, to define the probability of a path, we need to know the inputs read as well. Finally, on transitions where either $\svar$ or $\svar'$ is output, to define a meaningful measure space, we need to associate an interval $(v,w)$ in which the output value lies. For these reasons, we define a path to be one that describes the sequence of (control) states the automaton goes through and the sequence of inputs read and outputs produced. Before defining a path formally, it is useful to introduce the following notation. For a pair of states $p,q \in Q$, $a \in \inalph \cup \set{\tau}$ and $o \in \outalph \cup (\set{\svar,\svar'} \times \mathbb{R}_\infty \times \mathbb{R}_\infty)$, we say $p \trns{a,o} q$ if $a = \tau$ whenever $p \in Q_{\mathsf{non}}$ and $a \in \inalph$ whenever $p \in Q_{\mathsf{in}}$, and one of the following two conditions holds.
\begin{itemize} \item If $o \in \outalph$ then there is a guard $c \in \cnds$ and Boolean $b \in \set{\mathsf{true},\mathsf{false}}$ such that $\delta(p,c) = (q,o,b)$. \item If $o$ is of the form $(y,v,w)$ where $y \in \set{\svar,\svar'}$ and $v,w \in \mathbb{R}_\infty$ then there is a guard $c \in \cnds$ and Boolean $b \in \set{\mathsf{true},\mathsf{false}}$ such that $\delta(p,c) = (q,y,b)$. Intuitively, an ``output'' of the form $(\svar,v,w)$ (or $(\svar',v,w)$) indicates that the value of $\svar$ ($\svar'$) was output in the transition and the result was a number in the interval $(v,w)$. \end{itemize} The \emph{unique} transition, or rather the quintuple $(p,c,q,o',b)$, that witnesses $p \trns{a,o} q$ will be denoted by $\trname(p \trns{a,o} q)$. \begin{definition}[Path] \label{def:exec} Let $\mathcal{A} = \defaut$ be a {\dipa}. An \emph{execution} or \emph{path} $\rho$ of $\mathcal{A}$ is a sequence of the form \[ \rho = \defexec \] where $q_i \in Q$ for $0 \leq i \leq n$, $a_j \in \inalph \cup \set{\tau}$ and $o_j \in \outalph \cup (\set{\svar,\svar'} \times \mathbb{R}_\infty \times \mathbb{R}_\infty)$ for $0 \leq j < n$. In addition, we require that $q_j \trns{a_j,o_j} q_{j+1}$ for all $0 \leq j < n$. Such a path $\rho$ is said to be from state $q_0$ ($\fstst(\rho)$) to state $q_n$ ($\lstst(\rho)$). Its \emph{length} (denoted $\len{\rho}$) is the number of transitions, namely, $n$. If the starting state and ending state of a path are the same (i.e., $q_0 = q_n$) and $\len{\rho} > 0$ then $\rho$ is said to be a \emph{cycle}. \end{definition} It will be convenient to introduce some notation associated with paths. \begin{notation} Let us consider a path \[ \rho = \defexec \] of length $n$. If $\len{\rho} > 0$, then the \emph{tail} of $\rho$, denoted $\tl(\rho)$, is the path of length $n-1$ given by \[ \tl(\rho) = q_1 \trns{a_1,o_1} q_2 \cdots q_{n-1} \trns{a_{n-1},o_{n-1}} q_n. 
\] The $i$th state of the path is $\stname(\ith{\rho}) = q_i$ and the $i$th transition is $\trname(\ith{\rho}) = \trname(q_i \trns{a_i,o_i} q_{i+1})$. The guard of the $i$th transition is $\grdname(\ith{\rho}) = c$, where $\trname(\ith{\rho}) = (q_i,c,q_{i+1},o',b)$. Finally, it will be useful to introduce notation for the sequence of inputs read and outputs produced in a path. The output produced will be an element of $(\outalph \cup (\mathbb{R}_\infty \times \mathbb{R}_\infty))^*$ that ignores the variable name that was output when a real value is output. For $o \in \outalph$, define $\tuple{o} = o$, and for $o$ of the form $(y,v,w)$ where $y \in \set{\svar,\svar'}$ and $v,w \in \mathbb{R}_\infty$, define $\tuple{o} = (v,w)$. \[ \begin{array}{l} \inseq(\rho) = a_0a_1\cdots a_{n-1}\\ \outseq(\rho) = \tuple{o_0}\tuple{o_1}\cdots \tuple{o_{n-1}} \end{array} \] Two paths $\rho_1$ and $\rho_2$ will be said to be \emph{equivalent} if they only differ in the sequence of inputs read. In other words, equivalent paths are of the same length, go through the same states, and produce the same outputs (and hence take the same transitions). \blue{Thanks to output distinction, two paths are equivalent if and only if they have the same output sequences. Thus, paths are uniquely determined by input and output sequences. Finally, modifying the values input in a path yields an equivalent path. 
\begin{proposition} \label{prop:factspath} Let $\rho_1$ and $\rho_2$ be two paths of a {\dipa} $\mathcal{A}.$ \begin{itemize} \item $\rho_1$ and $\rho_2$ are equivalent if and only if $\outseq(\rho_1)=\outseq(\rho_2).$ \item If $\inseq(\rho_1)=\inseq(\rho_2)$ and $\outseq(\rho_1)=\outseq(\rho_2)$ then $\rho_1=\rho_2.$ \item For any sequence of reals $\overline a \in \Sigma^*$ such that $\len {\overline{a}}=\len{\inseq (\rho_1)}$, there is a path $\rho_3$ equivalent to $\rho_1$ such that $\inseq(\rho_3)=\overline{a}.$ \end{itemize} \end{proposition}} \end{notation} \subsection{Path probabilities} We will now formally define the probability of each path. Recall that in each step, the automaton samples two values from Laplace distributions, and if the transition is from an input state, it adds the read input value to the sampled values and compares the result with the value stored in $\rvar$. The step also outputs a value, and if the value output is one of the two sampled values, the path requires it to belong to the interval that labels the transition. The probability of such a transition is thus the probability of drawing a sample that satisfies the guard of the transition and (if the output is a real value) producing a number that lies in the interval in the output label. This intuition is formalized in the precise definition below. Let us fix a path \[ \rho = \defexec \] of {\dipa} $\mathcal{A} = \defaut$. Recall that the parameters of the Laplace distribution in each step depend on the privacy budget $\epsilon$. In addition, the value stored in the variable $\rvar$ at the start of $\rho$ influences the behavior of $\mathcal{A}$. Thus, the probability of path $\rho$ depends on both the value of $\epsilon$ and the value of $\rvar$ at the start of $\rho$; we will denote this probability as $\pathprob{\epsilon,x,\rho}$, where $x$ is the initial value of $\rvar$. We define this inductively on $\len{\rho}$.
For any $\epsilon$ and any path $\rho$ with $\len{\rho} = 0$, $\pathprob{\epsilon,x,\rho} = 1$. For a path $\rho$ of length $> 0$, let $(q_0,c,q_1,o_0,b) = \trname(q_0 \trns{a_0,o_0} q_1)$ be the $0$th transition of $\rho$. Let $\parf(q_0) = (d,\mu,d',\mu')$ and let $\tuple{a_0} = a_0$ if $a_0 \in \mathbb{R}$ and $\tuple{a_0} = 0$ if $a_0 = \tau$. We will define constants $\ell$ and $u$ as follows. If $o_0 \in \outalph$ then $\ell = -\infty$ and $u = \infty$. Otherwise, $o_0$ is of the form $(y,v,w)$ where $y \in \set{\svar,\svar'}$, and then we take $\ell = v$ and $u = w$. We assume that any integral of the form $\int_e^f g(y)dy = 0$ when $e > f$. Finally, when $o_0$ is of the form $(y,v,w)$ where $y \in \set{\svar,\svar'}$ (i.e., $o_0 \not\in \outalph$), define \[ \begin{array}{l} k = \int_v^w \frac{d\epsilon}{2}\eulerv{-d\epsilon\card{z-\mu-\tuple{a_0}}}dz\\ k' = \int_v^w \frac{d'\epsilon}{2}\eulerv{-d'\epsilon\card{z-\mu'-\tuple{a_0}}}dz \end{array} \] The function $\pathprob{\cdot}$ is defined based on what $c$ and $b$ are. Let us fix $\nu = \mu+\tuple{a_0}$. We begin by considering the case when the $0$th transition of $\rho$ is a non-assignment transition, i.e., when $b = \mathsf{false}$. \begin{itemize} \item {\bf Case $c = \mathsf{true}$:} If $o_0 \in \outalph$ then $\pathprob{\epsilon,x,\rho} = \pathprob{\epsilon,x,\tl(\rho)}$. If $o_0 = (\svar,v,w)$ then $\pathprob{\epsilon,x,\rho} = k\pathprob{\epsilon,x,\tl(\rho)}$ and if $o_0 = (\svar',v,w)$ then $\pathprob{\epsilon,x,\rho} = k'\pathprob{\epsilon,x,\tl(\rho)}$ \item {\bf Case $c = \getest$:} If $o_0$ is of the form $(\svar',v,w)$ (i.e., $\svar'$ is output) then \[ \pathprob{\epsilon,x,\rho} = k'\left(\int_x^\infty \frac{d\epsilon}{2}\eulerv{-d\epsilon\card{z-\nu}}dz \right)\pathprob{\epsilon,x,\tl(\rho)}. \] Otherwise, taking $\ell' = \max(x,\ell)$, \[ \pathprob{\epsilon,x,\rho} = \left(\int_{\ell'}^u \frac{d\epsilon}{2}\eulerv{-d\epsilon\card{z-\nu}}dz \right)\pathprob{\epsilon,x,\tl(\rho)}. 
\] \item {\bf Case $c = \lttest$:} If $o_0$ is of the form $(\svar',v,w)$ (i.e., $\svar'$ is output) then \[ \pathprob{\epsilon,x,\rho} = k'\left(\int_{-\infty}^x \frac{d\epsilon}{2}\eulerv{-d\epsilon\card{z-\nu}}dz \right)\pathprob{\epsilon,x,\tl(\rho)}. \] Otherwise, taking $u' = \min(x,u)$, \[ \pathprob{\epsilon,x,\rho} = \left(\int_{\ell}^{u'} \frac{d\epsilon}{2}\eulerv{-d\epsilon\card{z-\nu}}dz \right)\pathprob{\epsilon,x,\tl(\rho)}. \] \end{itemize} Next, when the $0$th transition of $\rho$ is an assignment transition, i.e., $b = \mathsf{true}$, $\pathprob{\cdot}$ is defined as follows. \begin{itemize} \item {\bf Case $c = \mathsf{true}$:} If $o_0$ is of the form $(\svar',v,w)$ (i.e., $\svar'$ is output) then \[ \pathprob{\epsilon,x,\rho} = k'\int_{-\infty}^\infty \left(\frac{d\epsilon}{2}\eulerv{-d\epsilon\card{z-\nu}}\right) \pathprob{\epsilon,z,\tl(\rho)} dz. \] Otherwise, \[ \pathprob{\epsilon,x,\rho} = \int_{\ell}^u \left(\frac{d\epsilon}{2}\eulerv{-d\epsilon\card{z-\nu}}\right) \pathprob{\epsilon,z,\tl(\rho)} dz. \] \item {\bf Case $c = \getest$:} If $o_0$ is of the form $(\svar',v,w)$ (i.e., $\svar'$ is output) then \[ \pathprob{\epsilon,x,\rho} = k'\int_{x}^\infty \left(\frac{d\epsilon}{2}\eulerv{-d\epsilon\card{z-\nu}}\right) \pathprob{\epsilon,z,\tl(\rho)} dz. \] Otherwise, taking $\ell' = \max(x,\ell)$, \[ \pathprob{\epsilon,x,\rho} = \int_{\ell'}^u \left(\frac{d\epsilon}{2}\eulerv{-d\epsilon\card{z-\nu}}\right) \pathprob{\epsilon,z,\tl(\rho)} dz. \] \item {\bf Case $c = \lttest$:} If $o_0$ is of the form $(\svar',v,w)$ (i.e., $\svar'$ is output) then \[ \pathprob{\epsilon,x,\rho} = k'\int_{-\infty}^x \left(\frac{d\epsilon}{2}\eulerv{-d\epsilon\card{z-\nu}}\right) \pathprob{\epsilon,z,\tl(\rho)} dz. \] Otherwise, taking $u' = \min(u,x)$, \[ \pathprob{\epsilon,x,\rho} = \int_{\ell}^{u'} \left(\frac{d\epsilon}{2}\eulerv{-d\epsilon\card{z-\nu}}\right) \pathprob{\epsilon,z,\tl(\rho)} dz. 
\] \end{itemize} We will abuse notation and use $\pathprob{\cdot}$ to also refer to $\pathprob{x,\rho} = \lambda \epsilon.\ \pathprob{\epsilon,x,\rho}$. Notice that when $\rho$ starts from $\qinit$, because of the {\initcond} condition of {\dipa}, the value of $\pathprob{\cdot}$ does not depend on the initial value of $\rvar$. For such paths, we may drop the initial value of $\rvar$ from the argument list of $\pathprob{\cdot}$ to reduce notational overhead. Even though we use the same function name, the number of arguments to $\pathprob{\cdot}$ will disambiguate what we mean. \begin{example} \label{ex:execsem} Let us consider the {\dipa} $\svtauto$ shown in Fig.~\ref{fig:svt-auto}. A couple of example paths of the automaton are the following. \[ \begin{array}{l} \rho_1 = q_0 \trns{\tau,\bot} q_1 \trns{0,\bot} q_1 \trns{1,\top} q_2\\ \rho_2 = q_0 \trns{\tau,\bot} q_1 \trns{1,\bot} q_1 \trns{1,\top} q_2 \end{array} \] Paths $\rho_1$ and $\rho_2$ only differ in the inputs they read: $\inseq(\rho_1) = \tau\cdot 0\cdot 1 = 01$, while $\inseq(\rho_2) = 11$. Thus, $\rho_1$ and $\rho_2$ are equivalent paths. Notice that $\inseq(\rho_1)$ and $\inseq(\rho_2)$ are adjacent (Definition~\ref{def:adjacency}). The outputs produced in these executions are given by $\outseq(\rho_1) = \outseq(\rho_2) = \bot\bot\top$. Let us now consider $\pathprob{\epsilon,0,\rho_1}$. Since the transition out of $q_0$ is a pure assignment transition, the initial value of $\rvar$ (namely $0$ in this example) does not influence the value of $\pathprob{\epsilon,0,\rho_1}$. Let $X_T,X_1,X_2$ be random variables where $X_T \sim \Lap{\frac{\epsilon}{2},0}$, $X_1 \sim \Lap{\frac{\epsilon}{4},0} + 0$, and $X_2 \sim \Lap{\frac{\epsilon}{4},0}+1$. We can see that \[ \pathprob{\epsilon,0,\rho_1} = \prbfn{X_1 < X_T\ \wedge\ X_2 \geq X_T}.
\] Based on how the random variables are distributed, this can be calculated to be \[ \prbfn{X_1 < X_T\: \wedge\: X_2 \geq X_T} = \frac{24 \eulerv{\frac{3\epsilon}{4}} - 1 + 8 \eulerv{\frac{\epsilon}{4}} - 21 \eulerv{\frac{\epsilon}{2}} }{48 \eulerv{\frac{3\epsilon}{4}}}. \] The calculation of $\pathprob{\epsilon,0,\rho_2}$ is similar. Let $X_1'$ be the random variable with $X_1' \sim \Lap{\frac{\epsilon}{4},0}+1$. Then the desired probability is the same as $\prbfn{X_1' < X_T\ \wedge\ X_2 \geq X_T}$. This can be calculated to be \[ \begin{array}{rl} \pathprob{\epsilon,0,\rho_2} & = \prbfn{X_1' < X_T\: \wedge\: X_2 \geq X_T}\\ & = \frac{-22 + 32 \eulerv{\frac{\epsilon}{4}} -3 \epsilon }{48 \eulerv{\frac{\epsilon}{2}}}. \end{array} \] \end{example} The focus of this paper is to study the computational problem of checking differential privacy for {\dipautop}. We conclude this section with a precise definition of this problem. In order to do that, we first specialize the definition of differential privacy to the setting of {\dipa}. \blue{Recall that two paths are equivalent if and only if they have the same output sequences, and a path is uniquely determined by its input and output sequences (see Proposition~\ref{prop:factspath}). \begin{definition} \label{def:diff-priv-auto} A {\dipa} $\mathcal{A}$ is said to be $d\epsilon$-differentially private (for $d > 0$, $\epsilon > 0$) if for every pair of \emph{equivalent} paths $\rho_1, \rho_2$ such that $\inseq(\rho_1)$ and $\inseq(\rho_2)$ are \emph{adjacent}~\footnote{See Definition~\ref{def:adjacency} on Page~\pageref{def:adjacency}}, \[ \pathprob{\epsilon,\rho_1} \leq \eulerv{d\epsilon} \; \pathprob{\epsilon,\rho_2}. \] \end{definition}} \vspace*{0.05in} \noindent {\bf Differential Privacy Problem:} Given a {\dipa} $\mathcal{A}$ (with privacy parameter $\epsilon$), determine if there is a $d > 0$ such that for every $\epsilon > 0$, $\mathcal{A}$ is $d\epsilon$-differentially private.
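As a sanity check on Example~\ref{ex:execsem}, the two closed-form probabilities can be estimated by direct Monte Carlo simulation of $\svtauto$. The following Python sketch is purely illustrative (the function names and encoding are ours, not part of the formal development); it samples the perturbed threshold and the perturbed inputs exactly as in the semantics above and counts how often the specified path is followed.

```python
import math
import random

def lap(rng, d, mu, eps):
    """Sample from Lap(d*eps, mu), i.e., density (d*eps/2) * exp(-d*eps*|z - mu|).

    Uses the fact that the difference of two unit exponentials is Laplace(0, 1).
    """
    b = 1.0 / (d * eps)      # scale of the distribution
    u1 = 1.0 - rng.random()  # in (0, 1], so log() is safe
    u2 = 1.0 - rng.random()
    return mu + b * math.log(u1 / u2)

def svt_path_prob(inputs, eps, trials=200_000, seed=2024):
    """Monte Carlo estimate of the probability of the path of svt-auto that
    answers bot on every input except the last, and top on the last one."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = lap(rng, 0.5, 0.0, eps)  # q0: perturbed threshold, stored in x
        # every input but the last must stay below the stored threshold ...
        below = all(lap(rng, 0.25, 0.0, eps) + a < x for a in inputs[:-1])
        # ... and the last perturbed input must reach it
        if below and lap(rng, 0.25, 0.0, eps) + inputs[-1] >= x:
            hits += 1
    return hits / trials

# Paths rho_1 and rho_2 of the example at eps = 1:
p1 = svt_path_prob([0.0, 1.0], eps=1.0)  # closed form gives ~0.2505
p2 = svt_path_prob([1.0, 1.0], eps=1.0)  # closed form gives ~0.2033
```

Both estimates land within Monte Carlo error of the closed-form expressions, and, as expected for these adjacent inputs, the two probabilities are within a modest multiplicative factor of each other.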
\section{Deciding Differential Privacy} \label{sec:decidability} The central computational problem that this paper studies is the following: Given a {\dipa} ${\mathcal{A}}$ determine if there is a $d > 0$ such that for all $\epsilon > 0$, $\mathcal{A}$ is $d\epsilon$-differentially private. In this section we present the main result of this paper, namely, that this problem is efficiently decidable in {linear} time. We also show that we can compute an upper bound on $d$ in linear time if $\mathcal{A}$ is differentially private. The crux of the proof is the identification of simple graph-theoretic conditions that are both \emph{necessary and sufficient} to ensure a {\dipa} is $d\epsilon$-differentially private for all $\epsilon$ and some $d$. Before presenting the properties that are needed to guarantee differential privacy, we first define the notion of reachability. Let us fix a {\dipa} $\mathcal{A} = \defaut$. A state $q$ is said to be \emph{reachable} if there is a path $\rho$ starting from state $\qinit$ and ending in $q$. In addition, we say that a path (cycle) $\rho$ is reachable if there is a path $\rho'$ from $\qinit$ to $\fstst(\rho)$. We now start by identifying the first interesting property. \begin{definition} \label{def:leaky-paths} A path $\rho$ in a {\dipa} $\mathcal{A}$ is said to be a \emph{{leaking path}} if there exist indices $i,j$ with $0 \leq i < j < \len{\rho}$ such that the $i$th transition $\trname(\ith{\rho})$ is an assignment transition and the guard of the $j$th transition $\grdname(\ith[j]{\rho}) \neq \mathsf{true}$. A {{leaking path}} $\rho$ is said to be a \emph{{leaking cycle}} if it is also a cycle. \end{definition} Intuitively, in a {{leaking path}}, the variable $\rvar$ is assigned a value in some transition which is used in the guard of a later transition. Observe that if a path is {{leaking}} then all paths equivalent to it are also {{leaking}}. 
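The leaking-path condition is straightforward to check mechanically for a concrete path. In the Python sketch below (an illustration with an ad hoc encoding of our own: a transition is a tuple of source state, guard, output, assignment flag, and target state), a path is scanned once while remembering whether an assignment transition has been seen.

```python
def is_leaking(path):
    """A path is leaking if some assignment transition is strictly followed
    by a transition whose guard is not 'true'."""
    seen_assignment = False
    for (_src, guard, _out, assign, _dst) in path:
        if seen_assignment and guard != 'true':
            return True
        if assign:
            seen_assignment = True
    return False

# Two traversals of the self-loop of sort-auto: the loop is an assignment
# transition guarded by '<', so the cycle of length 2 is leaking.
sort_cycle = [('q1', '<', 'bot', True, 'q1')] * 2
# Two traversals of the self-loop of svt-auto: guarded by '<' but never
# assigning, hence not leaking.
svt_cycle = [('q1', '<', 'bot', False, 'q1')] * 2
```

Note that a single traversal of the sort-auto loop is not leaking by itself, since the definition requires the guarded transition to come strictly after the assignment ($i < j$); traversing the cycle twice exposes the leak.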
The presence of a reachable {{leaking cycle}} is a witness that the {\dipa} is not differentially private. The intuition behind this is as follows. One can show that there is a pair of adjacent inputs such that traversing a {{leaking cycle}} $C$ on these inputs results in two paths the ratio of whose probabilities is at least $\eulerv{k\epsilon}$ for some number $k$. Thus, given $d$, we can find an $\ell$ and $\epsilon$ such that traversing the cycle $\ell$ times ``exhausts the privacy budget'', i.e., the adjacent inputs corresponding to these $\ell$ repetitions have probabilities that differ by a factor of more than $\eulerv{d\epsilon}$. We illustrate this through our next example. \begin{figure} \begin{center} \begin{tikzpicture} \footnotesize \node[myinstate, initial] (q0) {$q_0$ \nodepart{lower} $\frac{1}{2},\ 0$}; \node[myinstate, right of=q0] (q1) {$q_1$ \nodepart{lower} $\frac{1}{4},\ 0$}; \node[myinstate, right of=q1] (q2) {$q_2$ \nodepart{lower} $\frac{1}{4},\ 0$}; \draw (q0) edge node[myedgelabel] {$\mathsf{true}$ \nodepart{two} $\bot, \mathsf{true}$} (q1); \draw (q1) edge[loop above] node[myedgelabel] {$\lttest$ \nodepart{two} $\bot, \mathsf{true}$} (q1) edge node[myedgelabel] {$\getest$ \nodepart{two} $\top, \mathsf{false}$} (q2); \end{tikzpicture} \end{center} \caption{{\dipa} $\sortauto$ modeling an algorithm that checks whether the sequence of real numbers given as input is sorted in descending order. Since $\svar'$ is not output in any state, the parameters used in sampling $\svar'$ are not important.} \label{fig:sort-auto} \end{figure} \begin{example} \label{ex:leakcycle} Consider an algorithm that checks whether the input sequence of real numbers is sorted in descending order. The goal of the algorithm is to read a sequence of numbers, output $\bot$ as long as it is sorted, and output $\top$ the first time it encounters two numbers in the wrong order, and then stop.
A ``differentially private'' version of this algorithm is modeled by the {\dipa} $\sortauto$ shown in Fig.~\ref{fig:sort-auto}. It works as follows. It starts by reading an input in state $q_0$, perturbing it by sampling from the Laplace distribution, outputting $\bot$, and storing the perturbed input in $\rvar$. In state $q_1$, $\sortauto$ repeatedly reads an input, perturbs it, and checks if it is less than the previous perturbed value read by the automaton, which is stored in $\rvar$. If it is, the automaton outputs $\bot$, saves the new perturbed value, and stays in $q_1$ to read the next input symbol. On the other hand, if the new value is greater, then it outputs $\top$ and moves to a terminal state. $\sortauto$ is almost identical to the automaton $\svtauto$ (Fig.~\ref{fig:svt-auto}) --- the only differences are that the initial state of $\sortauto$ is an input state as opposed to a non-input state, and that the self loop on state $q_1$ is an assignment transition. This difference (that the self loop on $q_1$ is an assignment transition) turns out to be critical; $\sortauto$ is not differentially private even though $\svtauto$ is. Observe that the cycle $q_1 \trns{a_0,\bot} q_1 \trns{a_1,\bot} q_1$ is a {{leaking cycle}} as the $0$th transition is an assignment transition and the $1$st transition's guard is $\lttest$. We can exploit this cycle to demonstrate why $\sortauto$ is not differentially private. Consider the paths of length $n$ given as \[ \begin{array}{l} \rho_1^n = q_0 \trns{0,\bot} q_1 \trns{-1,\bot} q_1 \trns{-2,\bot} q_1 \trns{-3,\bot} q_1 \trns{-4,\bot} q_1 \cdots\\ \rho_2^n = q_0 \trns{0,\bot} q_1 \trns{-2,\bot} q_1 \trns{-1,\bot} q_1 \trns{-4,\bot} q_1 \trns{-3,\bot} q_1 \cdots \end{array} \] Observe that for all $n$, $\inseq(\rho_1^n)$ and $\inseq(\rho_2^n)$ are adjacent (Definition~\ref{def:adjacency}).
Moreover, for any $d > 0$, there are an $n$ and an $\epsilon$ such that the ratio of $\pathprob{\epsilon,\rho_1^n}$ and $\pathprob{\epsilon,\rho_2^n}$ is $> \eulerv{d\epsilon}$. Thus, $\sortauto$ is not $d\epsilon$-differentially private for any $d$. \end{example} Absence of a {{leaking cycle}} does not guarantee differential privacy. \blue{Privacy leaks can occur with other types of paths and cycles. We define these next.} \begin{definition} \label{def:lg-cycle} A cycle $\rho$ of a {\dipa} $\mathcal{A}$ is called an {\ensuremath{\mathsf{L}}-cycle} (respectively, {\ensuremath{\mathsf{G}}-cycle}) if there is an $i < \len{\rho}$ such that $\grdname(\ith{\rho}) = \lttest$ (respectively, $\grdname(\ith{\rho}) = \getest$). We say that a path $\rho$ of a {\dipa} $\mathcal{A}$ is an {\ensuremath{\mathsf{AL}}-path} (respectively, {\ensuremath{\mathsf{AG}}-path}) if all assignment transitions on $\rho$ have guard $\lttest$ (respectively, $\getest$). \end{definition} Observe that a cycle can be both an {\ensuremath{\mathsf{L}}-cycle} and a {\ensuremath{\mathsf{G}}-cycle}. Further, a path with no assignment transitions (including the empty path) is simultaneously both an {\ensuremath{\mathsf{AL}}-path} and an {\ensuremath{\mathsf{AG}}-path}. \begin{definition} \label{def:leakingpair} A pair of cycles $(C,C')$ in a {\dipa} $\mathcal{A}$ is called a \emph{{leaking pair}} if one of the following two conditions is satisfied.
\begin{enumerate} \item $C$ is an {\ensuremath{\mathsf{L}}-cycle}, $C'$ is a {\ensuremath{\mathsf{G}}-cycle} and there is an {\ensuremath{\mathsf{AG}}-path} from a state in $C$ to a state in $C'.$ \item $C$ is a {\ensuremath{\mathsf{G}}-cycle}, $C'$ is an {\ensuremath{\mathsf{L}}-cycle} and there is an {\ensuremath{\mathsf{AL}}-path} from a state in $C$ to a state in $C'.$ \end{enumerate} \end{definition} Observe that if $C$ is an {\ensuremath{\mathsf{L}}-cycle} as well as a {\ensuremath{\mathsf{G}}-cycle}, then the pair $(C,C)$ is a {{leaking pair}} with the empty path connecting $C$ to itself. Also, if $(C,C')$ is a {{leaking pair}}, then for any $C_1,C_2$ that are equivalent to $C,C'$ respectively, the pair $(C_1,C_2)$ is also a {{leaking pair}}. The presence of a {{leaking pair}} is also a witness to a {\dipa} not being differentially private. Consider a {\dipa} $\mathcal{A}$ that has no {{leaking cycle}} but has a {{leaking pair}} of cycles $(C,C')$ such that $C$ is reachable. Assume that $C'$ is a {\ensuremath{\mathsf{G}}-cycle}. The case when $C'$ is an {\ensuremath{\mathsf{L}}-cycle} is symmetric. Since $\mathcal{A}$ has no {{leaking cycle}}s, the value stored in $\rvar$ does not change while the automaton is executing the transitions in either $C$ or $C'$. Let $y$ be the value of $\rvar$ when $C'$ starts executing. One can show that if $y > 0$ then there are a pair of adjacent inputs such that traversing $C'$ on those inputs results in paths whose probabilities have ratios that are at least $\eulerv{k\epsilon}$ for some $k$. Moreover, this pair of inputs does not depend on the actual value of $y$. This once again means that by repeating $C'$ $\ell$ times, we can get adjacent inputs whose probabilities violate the $d\epsilon$ privacy budget (for any $d$). 
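The cycle classifications and the leaking-pair condition above are likewise simple syntactic checks. The Python sketch below uses the same ad hoc tuple encoding of transitions as before (our own convention, not the paper's notation): a transition is (source, guard, output, assignment flag, target).

```python
def _guards(path):
    return [g for (_src, g, _out, _assign, _dst) in path]

def is_l_cycle(cycle):   # some transition guarded by '<'
    return '<' in _guards(cycle)

def is_g_cycle(cycle):   # some transition guarded by '>='
    return '>=' in _guards(cycle)

def is_ag_path(path):    # every assignment transition guarded by '>='
    return all(g == '>=' for (_s, g, _o, a, _d) in path if a)

def is_al_path(path):    # every assignment transition guarded by '<'
    return all(g == '<' for (_s, g, _o, a, _d) in path if a)

def leaking_pair(c, connector, c_prime):
    """(C, C') is a leaking pair if C is an L-cycle, C' a G-cycle, and the
    connecting path is an AG-path -- or symmetrically with L and G swapped."""
    return ((is_l_cycle(c) and is_g_cycle(c_prime) and is_ag_path(connector)) or
            (is_g_cycle(c) and is_l_cycle(c_prime) and is_al_path(connector)))

# The leaking pair of svtp-auto:
c = [('q1', '<', 'bot', False, 'q1')]          # L-cycle at q1
c_prime = [('q2', '>=', 'top', False, 'q2')]   # G-cycle at q2
bridge = [('q1', '>=', 'top', False, 'q2')]    # AG-path (no assignments at all)
```

A path with no assignment transitions is vacuously both an AG-path and an AL-path, so a cycle that is simultaneously an L-cycle and a G-cycle forms a leaking pair with itself via the empty connecting path.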
A similar observation holds for {\ensuremath{\mathsf{L}}-cycle} $C$ --- if the value of $\rvar$ at the start of $C$ is $\leq 0$ then we can find adjacent inputs such that traversing $C$ for those inputs results in paths whose probabilities have a ``high'' ratio. The next observation is that the value stored in $\rvar$ at the end of an {\ensuremath{\mathsf{AG}}-path} is at least the value at the beginning of the path. We can now put all these pieces together to get our witness for a violation of differential privacy. If the value of $\rvar$ is $\leq 0$ at the start of $C$, then repeating $C$ $\ell$ times gives us a pair of adjacent inputs that violate the privacy budget. On the other hand, if $\rvar$ at the start of $C$ is $> 0$ then it will be $> 0$ even at the start of $C'$, and then repeating $C'$ $\ell$ times gives us the desired witnessing pair. Let us illustrate this through an example. \begin{figure} \begin{center} \begin{tikzpicture} \footnotesize \node[mynonstate, initial] (q0) {\vrule height .2cm depth .2cm width 0pt $q_0$ \nodepart{two} \vrule height .2cm depth .2cm width 0pt $\frac{1}{2},\ 0$}; \node[myinstate, right of=q0] (q1) {$q_1$ \nodepart{lower} $\frac{1}{4},\ 0$}; \node[myinstate, above of=q1] (q2) {$q_2$ \nodepart{lower} $\frac{1}{4},\ 0$}; \node[myinstate, left of=q2] (q3) {$q_3$ \nodepart{lower} $\frac{1}{4},\ 0$}; \draw (q0) edge node[myedgelabel] {$\mathsf{true}$ \nodepart{two} $\bot, \mathsf{true}$} (q1); \draw (q1) edge[loop right] node[myedgelabel] {$\lttest$ \nodepart{two} $\bot, \mathsf{false}$} (q1) edge node[myedgelabel, right] {$\getest$ \nodepart{two} $\top, \mathsf{false}$} (q2); \draw (q2) edge[loop right] node[myedgelabel] {$\getest$ \nodepart{two} $\top, \mathsf{false}$} (q2) edge node[myedgelabel] {$\lttest$ \nodepart{two} $\bot,\mathsf{false}$} (q3); \end{tikzpicture} \end{center} \caption{{\dipa} $\svtpauto$ modeling an algorithm that processes a sequence of real numbers and implements a ``noisy'' version of the following
process. As long as the input numbers are less than the threshold $T$ ($=0$), it outputs $\bot$. Once it sees the first number $\geq T$, it moves to the second phase. In this phase, it outputs $\top$ as long as the numbers are $\geq T$. When it sees the first number $< T$, it outputs $\bot$ and stops. Since $\svar'$ is never output, parameters used in its sampling are not shown and are not important.} \label{fig:svtp-auto} \end{figure} \begin{example} \label{ex:leakingpair} Consider the automaton $\svtpauto$ shown in Fig.~\ref{fig:svtp-auto}. It implements an algorithm that is a slight modification of Algorithm~\ref{fig:SVT} (or the {\dipa} $\svtauto$ in Fig.~\ref{fig:svt-auto}). As in SVT, the automaton starts in state $q_0$ by sampling a value that is a perturbed value of a threshold $T$ (which is $0$ here). It stores this sampled value in $\rvar$ and moves to the first phase (state $q_1$). In this phase, the automaton outputs $\bot$ and stays in $q_1$ as long as a perturbed value of the input read is less than the perturbed threshold stored in $\rvar$. The first time it encounters a perturbed value that is at least $\rvar$, it moves to phase two (state $q_2$) and outputs $\top$. In state $q_2$, it outputs $\top$ as long as the perturbed inputs it samples are $\geq \rvar$. The first time it encounters a value $< \rvar$, it outputs $\bot$ and terminates. Throughout the computation, the automaton never overwrites the value stored in the first step in variable $\rvar$. $\svtpauto$ has a {{leaking pair}}. Observe that $C=q_1\trns{a_1,\bot}q_1$ is an {\ensuremath{\mathsf{L}}-cycle} and $C'=q_2\trns{a_2,\top}q_2$ is a {\ensuremath{\mathsf{G}}-cycle}. The path $q_1\trns{a_3,\top}q_2$ is an {\ensuremath{\mathsf{AG}}-path} from $C$ to $C'.$ Hence $(C,C')$ is a {{leaking pair}}. The presence of this {{leaking pair}} can be exploited to show that $\svtpauto$ is not $d\epsilon$-differentially private for any $d > 0$. Consider the following two paths.
\[ \begin{array}{l} \rho_1^\ell = q_0 \trns{\tau,\bot} \left[ q_1 \trns{-\frac{1}{2},\bot} q_1 \right]^\ell \trns{0,\top} \left[ q_2 \trns{\frac{1}{2},\top} q_2 \right]^\ell \trns{0,\bot} q_3\\ \rho_2^\ell = q_0 \trns{\tau,\bot} \left[ q_1 \trns{\frac{1}{2},\bot} q_1 \right]^\ell \trns{0,\top} \left[ q_2 \trns{-\frac{1}{2},\top} q_2 \right]^\ell \trns{0,\bot} q_3 \end{array} \] In the above, $[ p \trns{a,o} q ]^\ell$ means that the path consists of repeating this transition $\ell$ times. Notice that $\inseq(\rho_1^\ell) = (-\frac{1}{2})^\ell 0 (\frac{1}{2})^\ell 0$ and $\inseq(\rho_2^\ell) = (\frac{1}{2})^\ell 0 (-\frac{1}{2})^\ell 0$ are adjacent. Moreover, for any $d > 0$, there is an $\ell$ such that for every $\epsilon$ the ratio of $\pathprob{\epsilon,\rho_1^\ell}$ and $\pathprob{\epsilon,\rho_2^\ell}$ is $> \eulerv{d\epsilon}$. Thus, for an appropriately chosen value for $\ell$, $\rho_1^\ell$ and $\rho_2^\ell$ witness the violation of differential privacy. \end{example} The two conditions we have identified thus far --- the existence of a reachable {{leaking cycle}} or {{leaking pair}} --- demonstrate differential privacy violations even in {\dipa}s that do not output any real value. In automata that output real values, there are additional sources of privacy violations. We identify these conditions next. \begin{definition} \label{def:disclosingcycle} A cycle $C$ of a {\dipa} $\mathcal{A}$ is a \emph{{disclosing cycle}} if there is an $i$, $0 \leq i < \len{C}$ such that $\trname(\ith{C})$ is an input transition that outputs either $\svar$ or $\svar'$. \end{definition} Again, the existence of a reachable {{disclosing cycle}} demonstrates that the {\dipa} is not differentially private --- outputting a perturbed input repeatedly exhausts the privacy budget. We now present the last property of importance that pertains to paths that have transitions that output the value of $\svar$.
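The divergence claimed in Example~\ref{ex:leakingpair} can also be checked empirically. The following Python sketch (an illustration only, not part of the formal development) simulates $\svtpauto$ with $\epsilon = 1$ and estimates, by Monte Carlo sampling, the probability of the output sequence $\bot^{\ell+1}\top^{\ell+1}\bot$ under the two adjacent input sequences of the example; the string encoding of $\bot/\top$ and the choice $\ell = 2$ are ours.

```python
import math
import random

def lap(k, mu, rng):
    # inverse-CDF sample from Lap(k, mu): density (k/2) * exp(-k|x - mu|)
    u = rng.random() - 0.5
    sgn = 1.0 if u >= 0 else -1.0
    return mu - (1.0 / k) * sgn * math.log(max(1.0 - 2.0 * abs(u), 1e-300))

def run_svtp(inputs, eps, rng):
    """One run of the automaton of Fig. svtp-auto; returns the output word."""
    r = lap(eps / 2.0, 0.0, rng)        # q0: sample noisy threshold (T = 0)
    out = ["bot"]                       # the initial assignment transition outputs bot
    in_q2 = False
    for a in inputs:
        x = lap(eps / 4.0, a, rng)      # fresh per-input noise
        if not in_q2:
            if x < r:
                out.append("bot")       # stay in q1
            else:
                out.append("top")       # move to q2
                in_q2 = True
        else:
            if x >= r:
                out.append("top")       # stay in q2
            else:
                out.append("bot")       # move to q3 and stop
                break
    return out

def prob(inputs, target, eps, trials, rng):
    return sum(run_svtp(inputs, eps, rng) == target for _ in range(trials)) / trials

rng = random.Random(0)
ell, eps, trials = 2, 1.0, 100_000
target = ["bot"] * (ell + 1) + ["top"] * (ell + 1) + ["bot"]
in1 = [-0.5] * ell + [0.0] + [0.5] * ell + [0.0]   # inseq(rho_1^ell)
in2 = [0.5] * ell + [0.0] + [-0.5] * ell + [0.0]   # inseq(rho_2^ell), adjacent to in1
p1 = prob(in1, target, eps, trials, rng)
p2 = prob(in2, target, eps, trials, rng)
```

The estimated ratio $p_1/p_2$ stays bounded away from $1$ and grows as $\ell$ increases, matching the claim that the $d\epsilon$ budget is eventually violated for every $d$.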
We say that a state $q$ \emph{is in a cycle} ({\ensuremath{\mathsf{G}}-cycle} or {\ensuremath{\mathsf{L}}-cycle}) if there is a cycle ({\ensuremath{\mathsf{G}}-cycle}/{\ensuremath{\mathsf{L}}-cycle}) $C$ and an index $i$ such that $q = \stname(\ith{C})$. \begin{definition} \label{def:violating} We say that a path $\rho = \defexec$ of length $n$ of {\dipa} $\mathcal{A}$ is a \emph{{privacy violating path}} if one of the following conditions holds. \begin{itemize} \item $\tl(\rho)$ is an {\ensuremath{\mathsf{AG}}-path} (resp., {\ensuremath{\mathsf{AL}}-path}) such that $\lstst(\rho)$ is in a {\ensuremath{\mathsf{G}}-cycle} (resp., {\ensuremath{\mathsf{L}}-cycle}) and the $0$th transition $\trname(\ith[0]{\rho})$ is an assignment transition that outputs $\svar$. \item $\rho$ is an {\ensuremath{\mathsf{AG}}-path} (resp., {\ensuremath{\mathsf{AL}}-path}) such that $\lstst(\rho)$ is in a {\ensuremath{\mathsf{G}}-cycle} (resp., {\ensuremath{\mathsf{L}}-cycle}) and the $0$th transition has $\grdname(\ith[0]{\rho}) = \lttest$ (resp., $\grdname(\ith[0]{\rho}) = \getest$) and outputs $\svar$. \item $\rho$ is an {\ensuremath{\mathsf{AG}}-path} (resp., {\ensuremath{\mathsf{AL}}-path}) such that $\fstst(\rho)$ is in an {\ensuremath{\mathsf{L}}-cycle} (resp., {\ensuremath{\mathsf{G}}-cycle}) and the last transition has guard $\grdname(\ith[n-1]{\rho}) = \getest$ (resp., $\grdname(\ith[n-1]{\rho}) = \lttest$) and outputs $\svar$. \end{itemize} \end{definition} Once again, the presence of a reachable {{privacy violating path}} demonstrates that the automaton is not differentially private. Let us provide some intuition for why that is the case. We do this for some of the cases that form a {{privacy violating path}}; the reasoning for the remaining cases is similar. As before, let us assume that there is no {{leaking cycle}} because if there is one then we already know that the automaton is not differentially private.
A consequence of this is that there are no assignment transitions in a {\ensuremath{\mathsf{G}}-cycle} or {\ensuremath{\mathsf{L}}-cycle} and hence the value stored in $\rvar$ remains unchanged in these cycles. Let us recall a couple of crucial observations that we used when we argued the case of a {{leaking pair}}. First, the value stored in $\rvar$ at the end of an {\ensuremath{\mathsf{AG}}-path} is at least as large as the value at the beginning. Next, if a {\ensuremath{\mathsf{G}}-cycle} ({\ensuremath{\mathsf{L}}-cycle}) is traversed when the starting value in $\rvar$ is $>0$ ($\leq 0$) then we have a family of pairs of adjacent inputs that correspond to traversing the cycle multiple times with the property that the ratio of their probabilities diverges as the cycle is traversed more times. Let us now consider each of the cases in the definition of {{privacy violating path}}. If $\rho$ starts with an assignment transition that outputs $\svar$ and if the output of this first step is in the interval $(0,\infty)$ then the value of $\rvar$ is $>0$ at the end of $\rho$, where a {\ensuremath{\mathsf{G}}-cycle} can be traversed. These observations can be used to give us a pair of adjacent inputs that violate privacy. If $\rho$ starts with a transition whose guard is $\lttest$ that outputs $\svar$, and the value output in this step is in the interval $(0,\infty)$, then the value in $\rvar$ at the start is $> 0$. As in the previous case, this can be used to get a violating pair of inputs. Finally, if $\rho$ ends in a transition with guard $\getest$ that outputs $\svar$, and the value output in this last step is in the interval $(-\infty,0)$, then we can conclude that the value in $\rvar$ at the end of $\rho$ is $\leq 0$. This, combined with properties of {\ensuremath{\mathsf{AG}}-path}s, means that $\rvar$ has a value $\leq 0$ at the beginning of $\rho$.
This means the {\ensuremath{\mathsf{L}}-cycle} at the start of $\rho$ can be traversed with $\rvar$ having a value $\leq 0$, which means that a violating pair of inputs can be constructed. Let us illustrate this last condition through another example. \begin{figure} \begin{center} \begin{tikzpicture} \footnotesize \node[mynonstate, initial] (q0) {\vrule height .2cm depth .2cm width 0pt $q_0$ \nodepart{two} \vrule height .2cm depth .2cm width 0pt $\frac{4}{9},\ 0$}; \node[myinstate, right of=q0] (q1) {$q_1$ \nodepart{lower} $\frac{2}{9},\ 0$}; \node[myinstate, right of=q1] (q2) {$q_2$ \nodepart{lower} $\frac{2}{9},\ 0$}; \draw (q0) edge node[myedgelabel] {$\mathsf{true}$ \nodepart{two} $\bot, \mathsf{true}$} (q1); \draw (q1) edge[loop above] node[myedgelabel] {$\lttest$ \nodepart{two} $\bot, \mathsf{false}$} (q1) edge node[myedgelabel] {$\getest$ \nodepart{two} $\svar, \mathsf{false}$} (q2); \end{tikzpicture} \end{center} \caption{{\dipa} $\mathcal{A}_{\s{mod}}$ is a modification of $\numspauto$. Label of each state below the line shows the parameters for sampling $\svar$. Parameters for sampling $\svar'$ are not shown in the figure; they are $\frac{1}{9}$ (scaling factor) and $0$ (mean) in every state.} \label{fig:mod-auto} \end{figure} \begin{example} \label{ex:violating} Consider the automaton $\mathcal{A}_{\s{mod}}$ (Fig.~\ref{fig:mod-auto}), which is a modification of the Numeric Sparse algorithm modeled by automaton $\numspauto$ (Fig.~\ref{fig:numsp-auto}). The only difference is that the transition from $q_1$ to $q_2$ outputs $\svar$ as opposed to $\svar'$. This change causes the automaton to no longer be differentially private. Observe that the state $q_1$ is in an {\ensuremath{\mathsf{L}}-cycle} $q_1 \trns{a,\bot} q_1$ and the path $\rho = q_1 \trns{a,(\svar, (0,\infty))} q_2$ is an {\ensuremath{\mathsf{AG}}-path}. Finally, the last transition (or rather the only transition) of $\rho$ has guard $\getest$ and outputs $\svar$.
Thus, $\rho$ is a {{privacy violating path}}. We can use $\rho$ to find a violation of privacy. Consider the following pair of paths. \[ \begin{array}{l} \rho_1^\ell = q_0 \trns{\tau,\bot} \left[ q_1 \trns{-\frac{1}{2},\bot} q_1 \right]^\ell \trns{0,(\svar,(0,\infty))} q_2\\ \rho_2^\ell = q_0 \trns{\tau,\bot} \left[ q_1 \trns{\frac{1}{2},\bot} q_1 \right]^\ell \trns{0,(\svar,(0,\infty))} q_2 \end{array} \] Observe that $\inseq(\rho_1^\ell) = (-\frac{1}{2})^\ell 0$ and $\inseq(\rho_2^\ell) = (\frac{1}{2})^\ell 0$ are adjacent. Moreover, for any $d > 0$, there is an $\ell$ such that for any $\epsilon$, the ratio of $\pathprob{\epsilon,\rho_1^\ell}$ and $\pathprob{\epsilon,\rho_2^\ell}$ is $> \eulerv{d\epsilon}$. Thus, $\rho_1^\ell$ and $\rho_2^\ell$ demonstrate the violation of privacy. \end{example} As the discussion and examples above illustrate, the absence of {{leaking cycle}}s, {{leaking pair}}s, {{disclosing cycle}}s, and {{privacy violating path}}s is necessary for a {\dipa} to be differentially private. We call such automata \emph{well-formed}. \begin{definition} \label{def:well-formed} A {\dipa} $\mathcal{A}$ is said to be {\em well-formed} if $\mathcal{A}$ has no reachable {{leaking cycle}}, no {{leaking pair}} $(C,C')$ where $C$ is reachable, no reachable {{disclosing cycle}}, and no reachable {{privacy violating path}}. \end{definition} Our main theorem is that well-formed {\dipa}s are exactly the class of automata that are differentially private. \ifdefined The proof of this theorem is carried out in the Appendix (see Appendix~\ref{app:necessity} for the \lq\lq only if\rq\rq\ direction and Appendix~\ref{app:sufficient} for the \lq\lq if\rq\rq\ direction). \fi \begin{theorem} \label{thm:main} Let $\mathcal{A}$ be a {\dipa}. There is a $d > 0$ such that for every $\epsilon > 0$, $\mathcal{A}$ is $d\epsilon$-differentially private if and only if $\mathcal{A}$ is well-formed.
\end{theorem} \begin{remark} Before presenting a proof sketch for Theorem~\ref{thm:main}, it is useful to point out one special case of the result. Observe that {{disclosing cycle}}s and {{privacy violating path}}s pertain to paths that have transitions that output real values. For {\dipa}s that do not have real outputs, {{disclosing cycle}}s and {{privacy violating path}}s are not needed to get an exact characterization of differential privacy. More precisely, we say that a {\dipa} $\mathcal{A} = \defaut$ has \emph{finite valued outputs} if every transition in $\mathcal{A}$ outputs a value in $\outalph$. Now, a {\dipa} with finite valued outputs is differentially private if and only if it has no reachable {{leaking cycle}}s and {{leaking pair}}s. \end{remark} The discussion in this section has provided intuition for why well-formed-ness is necessary for an automaton to be differentially private; the formal proof that captures these intuitions is subtle, long, and non-trivial. \ifdefined The proof is postponed to Appendix~\ref{app:necessity}. \fi We sketch some key properties that show why it is sufficient. Let us fix a transition $t = (p,c,q,o,b)$ in a {\dipa} $\mathcal{A} = \defaut$. The transition $t$ is said to lie on a cycle if there is a reachable cycle $\rho$ and an index $i$ such that $\trname(\ith{\rho}) = t$. On the other hand, we will say $t$ is a \emph{{critical transition}} if $t$ does not lie on a cycle. Let $\parf(p) = (d,\mu,d',\mu')$ be the parameters for sampling $\svar$ and $\svar'$ in state $p$. We define the cost of $t$ as follows. \[ \cost{t}= \begin{cases} d & t \mbox{ is a {{critical non-input transition}}}\\ 2 d & t \mbox{ is a {{critical input transition}} and }\\ & o \neq \svar'\\ 2d+d' & t \mbox{ is a {{critical input transition}} and }\\ & o = \svar'\\ 0 & \mbox{otherwise} \end{cases}.
\] For a path $\rho$, define the weight of $\rho$ as $\weight{\rho} = \sum_{i=0}^{\len{\rho}-1} \cost{\trname(\ith{\rho})}$, i.e., the sum of the costs of all the transitions in $\rho$. Finally, define $\weight{\mathcal{A}}$ to be the supremum of $\weight{\rho}$ over all paths $\rho$. In fact, the weight of $\mathcal{A}$ could have been defined as a maximum (as opposed to a supremum) because they are the same in this case. The crucial observation about the weight of an automaton, used in proving the sufficiency of well-formed-ness for differential privacy, is that it provides an upper bound on the privacy budget for $\mathcal{A}$. \begin{lemma} \label{lem:sufficiency} A well-formed {\dipa} $\mathcal{A}$ is $\weight{\mathcal{A}}\epsilon$-differentially private for all $\epsilon > 0$. \end{lemma} \begin{proof}(Sketch.) The Lemma is a consequence of the proof of Lemma~\ref{lem:main2} given in Appendix~\ref{app:sufficient}. This lemma relates the probabilities of two paths, $\rho$ and $\rho'$ of $\mathcal{A}$, such that $\rho$ and $\rho'$ are equivalent, $\inseq(\rho)$ and $\inseq(\rho')$ are adjacent, and the initial transitions of $\rho$ and $\rho'$ are assignment transitions. More precisely, for an initial value of $\rvar$, $x_0,$ Lemma~\ref{lem:main2} shows that $\pathprob{\epsilon,x_0,\rho'}$ is at least $e^{-\weight{\rho}\epsilon}$ times one of three quantities: $\pathprob{\epsilon,x_0,\rho}$, $\pathprob{\epsilon,x_0+1,\rho}$ or $\pathprob{\epsilon,x_0-1,\rho}.$ The specific quantity the Lemma compares $\pathprob{\epsilon,x_0,\rho'}$ to depends on some properties of the path $\rho$ stated in Lemma~\ref{lem:main2}. Together, these mutually exclusive properties serve as an exhaustive list of properties that the path $\rho$ can satisfy. The fact that the list is exhaustive is a consequence of well-formed-ness.
In particular, one of the parts of the Lemma is that when the guard of the initial transition is $\mathsf{true}$ then $\pathprob{\epsilon,x_0,\rho'} \geq e^{-\weight{\rho}\epsilon} \pathprob{\epsilon,x_0,\rho}.$ This immediately implies the statement of the current Lemma. The proof of Lemma~\ref{lem:main2} itself is intricate and proceeds by induction on the number of assignment transitions in $\rho$. \end{proof} \begin{example} \label{ex:diff-priv-proof} Let us consider the automata $\svtauto$ (Fig.~\ref{fig:svt-auto}) and $\numspauto$ (Fig.~\ref{fig:numsp-auto}). Both these automata are well-formed and hence they are differentially private. Moreover, we can use Lemma~\ref{lem:sufficiency} to provide an upper bound on the required privacy budget. Observe that the only {{critical transition}}s in $\svtauto$ are $t_{01}$, the transition from $q_0$ to $q_1$, and $t_{12}$, the transition from $q_1$ to $q_2$. Now $\cost{t_{01}} = \frac{1}{2}$, while $\cost{t_{12}} = 2(\frac{1}{4}) = \frac{1}{2}$. Thus, $\weight{\svtauto} = \frac{1}{2}+\frac{1}{2} = 1$, i.e., $\svtauto$ is $\epsilon$-differentially private for all $\epsilon > 0$. Similarly, the only {{critical transition}}s in $\numspauto$ are again transition $t_{01}$ from $q_0$ to $q_1$ and transition $t_{12}$ from $q_1$ to $q_2$. They have the following costs: $\cost{t_{01}} = \frac{4}{9}$ and $\cost{t_{12}} = 2(\frac{2}{9}) + \frac{1}{9} = \frac{5}{9}$. Thus, $\weight{\numspauto} = \frac{4}{9} + \frac{5}{9} = 1$ and $\numspauto$ is $\epsilon$-differentially private for all $\epsilon > 0$. \end{example} \begin{remark} Observe that the means used in sampling $\svar$ and $\svar'$ do not play any role in the definition of well-formed (Definition~\ref{def:well-formed}). They also do not play any role in the calculation of the weight of an automaton or Lemma~\ref{lem:sufficiency}. This allows one to make some simple observations. Recall that $\svtauto$ and $\numspauto$ were defined by taking the threshold $T = 0$.
However, these observations allow us to conclude that no matter what value is chosen for the threshold $T$, $\svtauto$ and $\numspauto$ are $\epsilon$-differentially private for all $\epsilon > 0$. \end{remark} We get as a corollary of Theorem~\ref{thm:main} that whether a {\dipa} $\mathcal{A}$ is differentially private can be checked using graph-theoretic algorithms in linear time. \begin{corollary} The differential privacy problem for {\dipautop} is decidable in linear time. In addition, $\weight{\mathcal{A}}$ can be computed in linear time, assuming addition and comparison of numbers takes constant time. \end{corollary} \begin{proof} We describe a linear time algorithm that checks whether a {\dipa} $\mathcal{A}$ is well-formed. The Corollary then follows from Theorem~\ref{thm:main}. Let us fix $\mathcal{A} = \defaut$. Consider the edge-labeled directed graph $\mathcal{G}$ whose vertex set is $Q$ and there is an edge labeled $(c,b)$ from $p$ to $q$ if $\delta(p,c) = (q,o,b)$ for some $o$. Without loss of generality, we can assume that every state is reachable from $\qinit$. It is worth observing that because of the {\detcond} condition of {\dipa}s, the number of edges in $\mathcal{G}$ is at most twice the number of vertices. The subgraph $\mathcal{G}_{\s{AG}}$ of $\mathcal{G}$ has the same vertex set but an edge labeled $(c,b)$ is present in $\mathcal{G}_{\s{AG}}$ only if whenever $b = \mathsf{true}$, $c = \getest$. Similarly, the subgraph $\mathcal{G}_{\s{AL}}$ of $\mathcal{G}$ only has those edges labeled $(c,b)$ with the property that if $b = \mathsf{true}$ then $c = \lttest$. Notice that the graphs $\mathcal{G}$, $\mathcal{G}_{\s{AG}}$ and $\mathcal{G}_{\s{AL}}$ can each be constructed in linear time from $\mathcal{A}$. Next, we compute the maximal strongly connected components (SCC) of $\mathcal{G}$; this can also be done in linear time.
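The SCC computation and the cycle-membership bookkeeping that follows can be sketched in a few lines of Python (an illustrative encoding with transitions as (state, guard, state) triples and Kosaraju's algorithm; names are ours):

```python
def sccs(states, edges):
    # Kosaraju's two-pass SCC algorithm; edges are (p, guard, q) triples
    adj = {s: [] for s in states}
    radj = {s: [] for s in states}
    for p, _, q in edges:
        adj[p].append(q)
        radj[q].append(p)
    order, seen = [], set()
    def dfs1(v):
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                dfs1(w)
        order.append(v)                 # post-order
    for s in states:
        if s not in seen:
            dfs1(s)
    comp = {}
    def dfs2(v, root):
        comp[v] = root
        for w in radj[v]:
            if w not in comp:
                dfs2(w, root)
    for v in reversed(order):           # second pass on the reversed graph
        if v not in comp:
            dfs2(v, v)
    return comp                         # maps each state to its SCC representative

def states_on_cycle_with_guard(states, edges, guard):
    # a state lies on a cycle containing a transition with this guard
    # iff its SCC contains an edge carrying that guard
    comp = sccs(states, edges)
    hot = {comp[p] for p, g, q in edges if g == guard and comp[p] == comp[q]}
    return {s for s in states if comp[s] in hot}
```

With the guard `"<"` this computes the states on {\ensuremath{\mathsf{L}}-cycle}s, and with `">="` those on {\ensuremath{\mathsf{G}}-cycle}s; the subsequent BFS steps run on the same representation.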
Observe that a state $q$ is part of some {\ensuremath{\mathsf{G}}-cycle} if its SCC has an edge with label $(\getest,b)$. Similarly, $q$ is part of some {\ensuremath{\mathsf{L}}-cycle} if its SCC has an edge with label $(\lttest,b)$. Notice that the set of all states that belong to some {\ensuremath{\mathsf{G}}-cycle} and those that belong to some {\ensuremath{\mathsf{L}}-cycle} can be computed in linear time. Next, the set of all vertices that can be reached by an {\ensuremath{\mathsf{AG}}-path} from an {\ensuremath{\mathsf{L}}-cycle} can be computed in linear time by performing a BFS on $\mathcal{G}_{\s{AG}}$ starting from vertices that are on {\ensuremath{\mathsf{L}}-cycle}s. Similarly, we can compute all vertices from which a {\ensuremath{\mathsf{G}}-cycle} can be reached by an {\ensuremath{\mathsf{AG}}-path} in linear time. Using BFS on $\mathcal{G}_{\s{AL}}$ we can also compute the set of all vertices that can be reached from a {\ensuremath{\mathsf{G}}-cycle} by an {\ensuremath{\mathsf{AL}}-path}, and the set of all vertices from which an {\ensuremath{\mathsf{L}}-cycle} can be reached by an {\ensuremath{\mathsf{AL}}-path} in linear time. We can now check each of the conditions of well-formed-ness in linear time using the sets computed in the previous paragraph. \begin{itemize} \item \emph{{leaking cycle}}: Check if there is an SCC of $\mathcal{G}$ that has an edge labeled $(c,\mathsf{true})$ and an edge labeled $(c',b')$ where $c' \neq \mathsf{true}$. \item \emph{{leaking pair}}: Check if there is a state on an {\ensuremath{\mathsf{L}}-cycle} that can reach a {\ensuremath{\mathsf{G}}-cycle} by an {\ensuremath{\mathsf{AG}}-path} and check if there is a state on a {\ensuremath{\mathsf{G}}-cycle} that can reach an {\ensuremath{\mathsf{L}}-cycle} by an {\ensuremath{\mathsf{AL}}-path}. \item \emph{{disclosing cycle}}: Check if there is an SCC of $\mathcal{G}$ that contains an edge from an input state that outputs $\svar$ or $\svar'$.
\item \emph{{privacy violating path}}: Check if any of the following conditions holds: (a) there is an {\ensuremath{\mathsf{AG}}-path} ({\ensuremath{\mathsf{AL}}-path}) from the target of an assignment transition that outputs $\svar$ to a state on a {\ensuremath{\mathsf{G}}-cycle} ({\ensuremath{\mathsf{L}}-cycle}); (b) there is an {\ensuremath{\mathsf{AG}}-path} ({\ensuremath{\mathsf{AL}}-path}) from the target of a non-assignment transition with output $\svar$ and guard $\lttest$ ($\getest$) to a state on a {\ensuremath{\mathsf{G}}-cycle} ({\ensuremath{\mathsf{L}}-cycle}); (c) there is an {\ensuremath{\mathsf{AG}}-path} ({\ensuremath{\mathsf{AL}}-path}) from a state on an {\ensuremath{\mathsf{L}}-cycle} ({\ensuremath{\mathsf{G}}-cycle}) to the source of a transition with guard $\getest$ ($\lttest$) that outputs $\svar$. \end{itemize} We now show how $\weight{\mathcal{A}}$ can be computed in linear time assuming that arithmetic operations take constant time. Observe that we can construct the graph of SCCs of $\mathcal{G}$ in linear time and that {{critical transition}}s are those that correspond to edges in this graph of SCCs. $\weight{\mathcal{A}}$ is the length of the longest path in this graph, where the weight of an edge is the cost of the corresponding transition. Note that this can be computed in linear time because the graph of SCCs is a DAG. \end{proof} \begin{remark} Observe that the well-formed-ness of an automaton $\mathcal{A}$ does not depend on the parameter function $\parf$ of the automaton. Hence, once we have established that $\mathcal{A}$ is differentially private, we have established it for all possible parameter functions. The weight of a well-formed $\mathcal{A}$, however, does indeed vary with the scaling parameters given by $\parf.$ It is independent of the mean parameters given by $\parf.$ \end{remark} \section{Preliminaries} \label{sec:prelims} \paragraph*{Sequences} For a set $\inalph$, $\inalph^*$ denotes the set of all finite sequences/strings over $\inalph$.
We shall use $\tau$ to denote the empty sequence/string over $\inalph$. For two sequences/strings $\rho,\sigma \in \inalph^*$, we use their juxtaposition $\rho\sigma$ to indicate the sequence/string obtained by concatenating them in order. Consider $\sigma = a_0a_1\cdots a_{n-1} \in \inalph^*$ (where $a_i \in \inalph$). We use $\len{\sigma}$ to denote its length $n$ and use $\ith{\sigma}$ to denote its $i$th symbol $a_i$. \paragraph*{Sets and functions} Let $\mathbb{N}, \mathbb{Z},\mathbb{Q},\mathbb{Q}^{\geq 0}, \mathbb{R}, \mathbb{R}^{>0}$ denote the set of natural numbers, integers, rational numbers, non-negative rationals, real numbers and positive real numbers, respectively. In addition, $\mathbb{R}_\infty$ will denote the set $\mathbb{R} \cup \set{-\infty,\infty}$, where $-\infty$ is the smallest and $\infty$ is the largest element in $\mathbb{R}_\infty$. For a real number $x \in \mathbb{R}$, $\card{x}$ denotes its absolute value, and $\s{sgn}(x)$ denotes the \emph{sign} function, i.e., $\s{sgn}(x) = 0$ if $x=0$, $\s{sgn}(x) = -1$ if $x < 0$ and $\s{sgn}(x) = 1$ if $x > 0$. For any partial function $f\::A \pto B$, where $A,B$ are some sets, we let $\dom(f)$ be the set of $x\in A$ such that $f(x)$ is defined. \paragraph*{Laplace Distribution} Differential privacy mechanisms often add noise by sampling values from the \emph{Laplace distribution}. The distribution, denoted $\Lap{k, \mu}$, is parameterized by two values --- $k \geq 0$ which is called the scaling parameter, and $\mu$ which is the mean. The probability density function of $\Lap{k, \mu}$, denoted $f_{k,\mu}$, is given by \[ f_{k,\mu}(x) = \frac{k}{2}\eulerv{-k\card{x-\mu}}. \] Therefore, for a random variable $X \sim \Lap{k,\mu}$ and $c \in \mathbb{R}$, we have \[ \prbfn{X \leq c} = \frac{1}{2}\left[1 + \s{sgn}(c-\mu)(1 - \eulerv{-k\card{c-\mu}})\right]. \] Finally, observe that for any $\mu_1,\mu_2 \geq 0$, $\Lap{k,\mu_1+\mu_2}$ and $\Lap{k,\mu_1}+\mu_2$ are identically distributed.
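The density and the closed-form CDF above translate directly into code; a small Python sketch (function names are ours):

```python
import math

def lap_pdf(x, k, mu):
    # density f_{k,mu}(x) = (k/2) * exp(-k * |x - mu|)
    return (k / 2.0) * math.exp(-k * abs(x - mu))

def lap_cdf(c, k, mu):
    # P[X <= c] = (1/2) * (1 + sgn(c - mu) * (1 - exp(-k * |c - mu|)))
    sgn = 0.0 if c == mu else (1.0 if c > mu else -1.0)
    return 0.5 * (1.0 + sgn * (1.0 - math.exp(-k * abs(c - mu))))
```

The shift property stated above, that $\Lap{k,\mu_1+\mu_2}$ and $\Lap{k,\mu_1}+\mu_2$ are identically distributed, shows up here as `lap_cdf(c, k, mu1 + mu2)` agreeing with `lap_cdf(c - mu2, k, mu1)`.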
\paragraph*{Differential Privacy} \blue{Differential privacy~\cite{dmns06} is a framework that enables statistical analysis of databases containing sensitive, personal information of individuals while ensuring that individuals in the database are not adversely affected by the results of the analysis. In the differential privacy framework, a randomized algorithm, $M$, called the \emph{differential privacy mechanism}, mediates the interaction between a (possibly dishonest) data analyst asking queries and a database $D$ responding with answers. Queries are deterministic functions and typically include aggregate questions about the data, like the mean, median, or standard deviation of fields in the database. In response to such a sequence of queries, the differential privacy mechanism $M$ will respond with a series of answers, whose values are computed using the actual answers and random sampling, resulting in \lq\lq noisy\rq\rq\ answers. Thus, the differential privacy mechanism provides privacy at the cost of accuracy. Typically, the differential privacy mechanism's noisy response depends on a \emph{privacy budget} $\epsilon > 0$.} \blue{The crucial definition of differential privacy captures the privacy guarantees of individuals in the database $D$. For an individual $i$ in $D$, let $D \setminus \set{i}$ denote the database where $i$'s information has been removed. A secure mechanism $M$ ensures that for any individual $i$ in $D$, and any sequence of possible outputs $\overline{o}$, the probability that $M$ outputs $\overline{o}$ on a sequence of queries is approximately the same whether the interaction is with the database $D$ or with $D \setminus \set{i}.$} To capture this definition formally, we need to characterize the inputs on which $M$ is required to behave similarly. Inputs to a differential privacy mechanism could be seen as answers to a sequence of queries asked by the data analyst.
If queries are aggregate queries, then the answers to a query $q$ on $D$ and $D \setminus \set{i}$, for an individual $i$, are likely to differ by at most $1$. This intuition leads to an \blue{often-}used definition of \emph{adjacency}\blue{, such as in SVT~\cite{DNRRV09,lyu2016understanding,DR14} and NumericSparse~\cite{DR14},} that characterizes pairs of inputs on which the differential privacy mechanism $M$ is expected to behave similarly. \begin{definition} \label{def:adjacency} Two sequences $\rho,\sigma \in \mathbb{R}^*$ are said to be \emph{adjacent} if $\len{\rho} = \len{\sigma}$ and for \blue{each} $i < \len{\rho}$, $\card{\ith{\rho} - \ith{\sigma}} \leq 1$. \end{definition} Having defined adjacency between inputs, we are ready to formally define the notion of privacy. In response to a sequence of inputs, a differential privacy mechanism produces a sequence of outputs from the set (say) $\outalph$. Since a differential privacy mechanism $M$ is a randomized algorithm, it will induce a probability distribution on $\outalph^*$. \begin{definition}[$\epsilon$-differential privacy] \label{def:diff-priv} A randomized algorithm $M$ that gets as input a sequence of real numbers and produces an output in $\outalph^*$ is said to be \emph{$\epsilon$-differentially private} if for all measurable sets $S \subseteq \outalph^*$ and adjacent $\rho,\sigma \in \mathbb{R}^*$ (Definition~\ref{def:adjacency}), \[ \prbfn{M(\rho) \in S} \leq \eulerv{\epsilon}\, \prbfn{M(\sigma) \in S}. \] In the above equation, $e$ is Euler's number. \end{definition} \begin{example} \label{ex:diff-privacy} Let us look at a couple of classical differential privacy mechanisms from the literature. These will serve as running examples to motivate our definitions and highlight our results.
\RestyleAlgo{boxed} \begin{algorithm} \DontPrintSemicolon \SetAlgoLined \KwIn{$q[1:N]$} \KwOut{$out[1:N]$} \; $\mathsf{r}_T \gets \Lap{ \frac{\epsilon}{2} , T}$\; \For{$i\gets 1$ \KwTo $N$} { $\mathsf{r}\gets \Lap{\frac \epsilon {4} , q[i]}$\; \uIf{$\mathsf{r} \geq \mathsf{r}_T$}{ $out[i] \gets \top$\; exit } \Else{ $out[i] \gets \bot$} } \caption{SVT algorithm} \label{fig:SVT} \end{algorithm} Sparse Vector Technique (SVT)~\cite{DNRRV09,lyu2016understanding} is an algorithm to answer the following question in a privacy preserving manner: Given a sequence of query answers $q[1:N]$ and threshold $T$, find the first index $i$ such that $q[i] \geq T$. The algorithm is shown as Algorithm~\ref{fig:SVT}. It starts by sampling a value from the Laplace distribution with mean $T$, and stores this ``noisy threshold'' in the variable $\mathsf{r}_T$. After that, the algorithm reads the query answer $q[i]$, perturbs it by sampling from the Laplace distribution with mean $q[i]$ to get $\mathsf{r}$, and compares this ``noisy query'' $\mathsf{r}$ with the ``noisy threshold'' $\mathsf{r}_T$. If $\mathsf{r} < \mathsf{r}_T$ then the algorithm outputs $\bot$ and continues by reading the next query. On the other hand, if $\mathsf{r} \geq \mathsf{r}_T$ then the algorithm outputs $\top$ and stops. This algorithm is known to be $\epsilon$-differentially private. It is worth observing that SVT is parameterized by $\epsilon$; each value of $\epsilon$ gives us a new algorithm which is $\epsilon$-differentially private for that particular value of $\epsilon$.
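A direct Python transcription of Algorithm~\ref{fig:SVT} may be helpful; this is a sketch, with an inverse-CDF Laplace sampler and the outputs $\bot/\top$ encoded as strings:

```python
import math
import random

def lap(k, mu, rng):
    # inverse-CDF sample from Lap(k, mu): density (k/2) * exp(-k|x - mu|)
    u = rng.random() - 0.5
    sgn = 1.0 if u >= 0 else -1.0
    return mu - (1.0 / k) * sgn * math.log(max(1.0 - 2.0 * abs(u), 1e-300))

def svt(q, T, eps, rng):
    """Sparse Vector Technique: report the first q[i] whose noisy value
    reaches the noisy threshold, outputting bot until then."""
    r_T = lap(eps / 2.0, T, rng)       # noisy threshold
    out = []
    for qi in q:
        r = lap(eps / 4.0, qi, rng)    # noisy query answer
        if r >= r_T:
            out.append("top")          # flag and stop
            break
        out.append("bot")
    return out
```

Every run produces a block of $\bot$'s followed by at most one $\top$, which is exactly the output shape captured by the automaton model $\svtauto$.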
\RestyleAlgo{boxed} \begin{algorithm} \DontPrintSemicolon \SetAlgoLined \KwIn{$q[1:N]$} \KwOut{$out[1:N]$} \; $\mathsf{r}_T \gets \Lap{ \frac{4\epsilon}{9} , T}$\; \For{$i\gets 1$ \KwTo $N$} { $\mathsf{r}\gets \Lap{\frac{2\epsilon}{9} , q[i]}$\; \uIf{$\mathsf{r} \geq \mathsf{r}_T$}{ $out[i] \gets \Lap{\frac{\epsilon}{9}, q[i]}$\; exit } \Else{ $out[i] \gets \bot$} } \caption{Numeric Sparse algorithm} \label{fig:NumSp} \end{algorithm} Consider Algorithm~\ref{fig:NumSp}, which shows a differential privacy mechanism called Numeric Sparse~\cite{DR14}. The problem solved by this algorithm is very similar to the one solved by SVT (Algorithm~\ref{fig:SVT}) --- given a sequence of query answers $q[1:N]$ and threshold $T$, find the first index $i$ such that $q[i] \geq T$ \emph{and output $q[i]$}. Algorithm~\ref{fig:NumSp} is similar to Algorithm~\ref{fig:SVT}. The only difference is that instead of outputting $\top$ when $\mathsf{r} \geq \mathsf{r}_T$, it outputs a perturbed value of $q[i]$. This algorithm is also known to be $\epsilon$-differentially private for each possible value of $\epsilon$. \end{example} \section{Decidability of Differential Privacy for {\dipautos}} \label{sec:decidability} We shall now establish necessary and sufficient conditions for a {\dipa} $\mathcal{A}$ to be differentially private. These conditions will turn out to be simple graph-theoretic conditions on the ``underlying graph'' of the {\dipa} $\mathcal{A}$, and shall be used to establish the decidability of the differential privacy problem. We start by formalizing these graph-theoretic properties. \begin{definition} Let $\mathcal{A}$ be a {\dipa}.
A path $\rho$ of length $n>0$ is called a {\em {{leaking path}}} if there exist indices $i,j$ with $0\leq i<j<n$ such that the $i$-th transition of $\rho$ is an assignment transition and the Boolean condition of the $j$-th transition is $\lttest$ or $\getest.$ If the {{leaking path}} $\rho$ is a cycle then we call $\rho$ a {\em {{leaking cycle}}}. \end{definition} Intuitively, in a {{leaking path}}, the variable $\rvar$ is assigned a value in some transition which is used in the condition of a later transition. Observe that if a path is {leaking}, then all paths equivalent to it are also {{leaking}}. As we shall see later, the presence of a {{leaking cycle}} rules out the possibility of a {\dipa} being differentially private. \begin{example} \label{ex:leakcycle} Consider the {\dipa} $\sortauto$ from Example~\ref{ex:autos} on Page~\pageref{ex:autos}. The cycle $q_1\trns{a_1,\bot}q_1\trns{a_2,\bot}q_1$ is a {{leaking cycle}} for any $a_1,a_2$ as the 0th transition is an assignment transition and the first transition has Boolean condition $\lttest.$ \end{example} \begin{definition} A cycle of a {\dipa} $\mathcal{A}$ is called an {\ensuremath{\mathsf{L}}-cycle} (respectively, {\ensuremath{\mathsf{G}}-cycle}) if it has a transition whose Boolean condition is $\lttest$ (respectively, $\getest$). We say that a path of a {\dipa} $\mathcal{A}$ is an {\ensuremath{\mathsf{AL}}-path} (respectively, {\ensuremath{\mathsf{AG}}-path}) if all the assignment transitions on it have the condition $\lttest$ (respectively, $\getest$). \end{definition} Observe that a cycle can be both an {\ensuremath{\mathsf{L}}-cycle} as well as a {\ensuremath{\mathsf{G}}-cycle}. Further, a path with no assignment transitions (including the empty path) is simultaneously both an {\ensuremath{\mathsf{AL}}-path} and an {\ensuremath{\mathsf{AG}}-path}.
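These path classifications are purely syntactic, so each can be checked by a single scan over a path's transitions. The following Python sketch spells this out; the tuple encoding of transitions is our own convention, not the paper's.

```python
# A transition is encoded (our own convention) as a pair
# (is_assignment, guard), with guard one of '<', '>=', 'true'.

def is_leaking(path):
    """A path is leaking if some assignment transition at index i is
    followed by a transition at index j > i whose guard is a comparison."""
    seen_assignment = False
    for is_assignment, guard in path:
        if seen_assignment and guard in ('<', '>='):
            return True
        if is_assignment:
            seen_assignment = True
    return False

def is_L_cycle(cycle):
    """An L-cycle has some transition guarded by '<'."""
    return any(guard == '<' for _, guard in cycle)

def is_G_cycle(cycle):
    """A G-cycle has some transition guarded by '>='."""
    return any(guard == '>=' for _, guard in cycle)

def is_AL_path(path):
    """All assignment transitions are guarded by '<'; paths without
    assignment transitions (including the empty path) qualify vacuously."""
    return all(guard == '<' for is_assignment, guard in path if is_assignment)

def is_AG_path(path):
    """All assignment transitions are guarded by '>='."""
    return all(guard == '>=' for is_assignment, guard in path if is_assignment)
```

For instance, an assignment transition followed by a transition guarded by a comparison is classified as leaking, while the empty path is both an {\ensuremath{\mathsf{AL}}-path} and an {\ensuremath{\mathsf{AG}}-path}, matching the observation above.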
\begin{definition} Consider a pair of cycles $(C,C')$ in a {\dipa} $\mathcal{A}.$ We call the pair $(C,C')$ a {\em {{leaking pair}}} if one of the following two conditions is satisfied. \begin{enumerate} \item $C$ is an {\ensuremath{\mathsf{L}}-cycle}, $C'$ is a {\ensuremath{\mathsf{G}}-cycle} and there is an {\ensuremath{\mathsf{AG}}-path} from a state in $C$ to a state in $C'.$ \item $C$ is a {\ensuremath{\mathsf{G}}-cycle}, $C'$ is an {\ensuremath{\mathsf{L}}-cycle} and there is an {\ensuremath{\mathsf{AL}}-path} from a state in $C$ to a state in $C'.$ \end{enumerate} \end{definition} Observe that if $C$ is an {\ensuremath{\mathsf{L}}-cycle} as well as a {\ensuremath{\mathsf{G}}-cycle}, then the pair $(C,C)$ is a {{leaking pair}} with the empty path connecting $C$ to itself. Also, if $(C,C')$ is a {{leaking pair}}, then for any $C_1,C_2$ that are equivalent to $C,C'$ respectively, the pair $(C_1,C_2)$ is also a {{leaking pair}}. As we shall see, the presence of a {{leaking pair}} also rules out the possibility of a {\dipa} being differentially private. \begin{example} \label{ex:leakingpair} Consider the {\dipa} $\svtpauto$ from Example~\ref{ex:autos} on Page~\pageref{ex:autos}. The cycle $C=q_1\trns{a_1,\bot}q_1$ is an {\ensuremath{\mathsf{L}}-cycle} for any $a_1$ and the cycle $C'=q_2\trns{a_2,\top}q_2$ is a {\ensuremath{\mathsf{G}}-cycle} for any $a_2.$ The path $q_1\trns{a_3,\top}q_2$ is an {\ensuremath{\mathsf{AG}}-path} from $C$ to $C'.$ Hence $(C,C')$ is a {{leaking pair}}. \end{example} We say that a state of $\mathcal{A}$ is reachable from the initial state if there is a path of $\mathcal{A}$ from the initial state to it; a cycle is reachable from the initial state if some state on it is. If $\mathcal{A}$ does not have any {{leaking cycle}}s and {{leaking pair}}s then $\mathcal{A}$ will be differentially private. We shall call such {\dipautop} well-formed: \begin{definition} A {\dipa} $\mathcal{A}$ is said to be {\em well-formed} iff it has no {{leaking cycle}} that is reachable from the initial state and it has no {{leaking pair}} of cycles $(C,C')$ where $C$ is reachable from the initial state of $\mathcal{A}$.
\end{definition} We have the following theorem which we shall establish shortly. \begin{theorem} \label{thm:main} A {\dipa} $\mathcal{A}$ is differentially private iff it is well-formed. \end{theorem} As a corollary of Theorem~\ref{thm:main}, whether a {\dipa} $\mathcal{A}$ is differentially private can be checked using graph-theoretic algorithms. \begin{corollary} The problem of checking whether a {\dipa} $\mathcal{A}$ is differentially private is decidable in time linear in the number of states of $\mathcal{A}$. \end{corollary} \begin{proof} We describe a linear time algorithm that checks whether a {\dipa} $\mathcal{A}$ is well-formed. The statement of the corollary then follows from Theorem~\ref{thm:main}. The algorithm proceeds as follows. Given the {\dipa} $\mathcal{A}=\defaut$, it constructs an edge-labeled directed graph $\mathcal{G}$ as follows. The set of vertices of $\mathcal{G}$ is $Q$, the set of states of $\mathcal{A}.$ There is an edge from vertex $p$ to vertex $q$ labeled $(c,b)$ iff there is an output $o$ such that $\delta(p,c)=(q,o,b).$ Without loss of generality, we assume that all states of $\mathcal{G}$ are reachable from $\qinit.$ The graph $\mathcal{G}$ can be constructed in linear time. Using depth-first search on the graph $\mathcal{G}$, the algorithm computes the strongly connected components of $\mathcal{G}$ and constructs the component graph $\mathcal{G}_{\mathsf{com}}.$ It is easy to see that $\mathcal{A}$ has a {{leaking cycle}} if and only if $\mathcal{G}$ has a strongly connected component that has an edge labeled $(c,\mathsf{true})$ and an edge labeled $(c',b')$ for some $c,b'$ and $c'\ne \mathsf{true}.$ The algorithm checks for the latter condition and outputs NO if it finds that $\mathcal{A}$ has a {{leaking cycle}}. If $\mathcal{A}$ does not have a {{leaking cycle}} then the algorithm proceeds to check whether $\mathcal{A}$ has a {{leaking pair}} as follows.
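The leaking-cycle test just described can be sketched concretely. Below, the underlying graph is encoded (our own convention, not the paper's) as edges $(p,q,c,b)$ with guard $c\in\{\mathtt{'<'},\mathtt{'>='},\mathtt{'true'}\}$ and assignment flag $b$; for brevity the sketch uses Kosaraju's two-pass algorithm for the strongly connected components rather than a single DFS pass.

```python
from collections import defaultdict

def scc_labels(n, edges):
    """Kosaraju's algorithm: return a component index for each of the
    n states.  Edges are tuples (src, dst, guard, is_assignment)."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v, _, _ in edges:
        adj[u].append(v)
        radj[v].append(u)
    order, visited = [], [False] * n
    def dfs(u):                      # first pass: record finish order
        visited[u] = True
        for v in adj[u]:
            if not visited[v]:
                dfs(v)
        order.append(u)
    for u in range(n):
        if not visited[u]:
            dfs(u)
    comp = [-1] * n
    def assign(u, c):                # second pass on the reversed graph
        comp[u] = c
        for v in radj[u]:
            if comp[v] == -1:
                assign(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] == -1:
            assign(u, c)
            c += 1
    return comp

def has_leaking_cycle(n, edges):
    """A leaking cycle exists iff some strongly connected component
    contains both an assignment edge and an edge whose guard is a
    comparison (guard != 'true'), as argued in the proof."""
    comp = scc_labels(n, edges)
    has_assign, has_cmp = defaultdict(bool), defaultdict(bool)
    for u, v, guard, is_assignment in edges:
        if comp[u] == comp[v]:       # only intra-component edges lie on cycles
            has_assign[comp[u]] |= is_assignment
            has_cmp[comp[u]] |= (guard != 'true')
    return any(has_assign[c] and has_cmp[c] for c in set(comp))
```

On a hand-encoding of the SVT-style automaton (initial assignment of the threshold, then comparison self-loops without assignments), no component mixes an assignment edge with a comparison edge, so no leaking cycle is reported; adding an assignment to a comparison self-loop immediately produces one.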
A {{leaking pair}} can be detected similarly: the algorithm marks every strongly connected component that contains an {\ensuremath{\mathsf{L}}-cycle} (respectively, a {\ensuremath{\mathsf{G}}-cycle}), and then checks, by traversing the component graph $\mathcal{G}_{\mathsf{com}}$, whether some marked component can reach a component of the other kind along a path all of whose assignment edges carry the appropriate guard. This check also takes time linear in the size of $\mathcal{G}$. The algorithm outputs NO if it finds such a {{leaking pair}}, and YES otherwise. \end{proof} \subsection{Necessary Condition for Differential Privacy} We shall now show that if a {\dipa} $\mathcal{A}$ is not well-formed then it is not $d\epsilon$-differentially private for any $d>0$. We start by considering the case when $\mathcal{A}$ has a {{leaking cycle}}. The intuition behind this result is that each time a {{leaking cycle}} $C$ is executed, it incurs a cost that is asymptotically linear in the parameter $\epsilon$. Thus, given $d,$ we can always find an $\ell$ and an $\epsilon$ such that executing the cycle $\ell$ times exhausts the privacy budget $d\epsilon.$ This intuition requires careful analysis and is captured in the following Lemma, whose non-trivial proof is given in Appendix~\ref{app:if1main}. \begin{lemma} \label{lem:if1main} A {\dipa} $\mathcal{A}$ is not differentially private if it has a {{leaking cycle}} reachable from its initial state. \end{lemma} We now discuss the intuition behind the case when $\mathcal{A}$ has no {{leaking cycle}}s, but has a {{leaking pair}} of cycles $(C,C')$ such that $C$ is reachable from the initial state. Assume that $C'$ is a {\ensuremath{\mathsf{G}}-cycle}. The case when $C'$ is an {\ensuremath{\mathsf{L}}-cycle} is symmetric. Let $y$ be the stored value of $\rvar$ when $C'$ starts executing (which does not change during the execution of $C'$ as $C'$ is not a {{leaking cycle}}).
If $y>0,$ then executing $C'$ incurs a privacy cost linear in the parameter $\epsilon.$ Executing $C'$ $\ell$ times would thus exhaust the privacy budget $d\epsilon$ for large $\ell$ if $y>0.$ Similarly, executing the {\ensuremath{\mathsf{L}}-cycle} $C$ $\ell$ times also exhausts the privacy budget $d\epsilon$ for large $\ell$ if the stored value of $\rvar$ is $\leq 0$ during the execution of $C.$ Consider the path in which $C$ is repeated $\ell$ times and upon exiting follows the {\ensuremath{\mathsf{AG}}-path} to $C'$ which is executed $\ell$ times. The {\ensuremath{\mathsf{AG}}-path} condition now ensures that the privacy budget $d\epsilon$ gets exhausted for every possible initial stored value of $\rvar$ for large $\ell$. This intuition is captured in the following Lemma, whose non-trivial proof is given in Appendix~\ref{app:if2main}. \begin{lemma} \label{lem:if2main} A {\dipa} $\mathcal{A}$ is not differentially private if it has a {{leaking pair}} of cycles $(C,C')$ such that $C$ is reachable from the initial state of $\mathcal{A}.$ \end{lemma} \section{Related Work} \label{sec:related} \paragraph*{Privacy proof construction} Several works~\cite{GHHNP13, AGHK18,RP10,AmorimAGH15,ZK17,CheckDP} have proposed the use of type systems to construct proofs of differential privacy. Some of the type-based approaches such as \cite{GHHNP13, AGHK18,RP10,AmorimAGH15} rely on linear dependent types, for which type-checking and type-inference may be challenging. For example, the type checking problem for the type system in~\cite{AmorimAGH15} is undecidable. The type systems in Zhang and Kifer~\cite{ZK17}, later expanded on in~\cite{CheckDP}, rely on the technique of randomness alignment and can handle advanced examples such as the sparse vector technique.
Barthe et al.~\cite{BKOZ13,BGGHS16,BFGGHS16} develop several program logics based on probabilistic couplings for reasoning about differential privacy, which have been used successfully to analyze standard examples from the literature, including the sparse vector technique. The probabilistic couplings and randomness alignment arguments are synthesized into coupling strategies by Albarghouthi and Hsu~\cite{AH18}. A shadow-execution-based method is introduced in~\cite{WDWKZ19}. Both~\cite{AH18} and~\cite{WDWKZ19} are automated and can handle advanced examples such as the sparse vector technique efficiently. Probabilistic I/O automata are used in~\cite{TschantzKD11} to model interactive differential privacy algorithms. Simulation-based methods are used to verify differential privacy. They assume that inputs and outputs take values from a discrete domain and that the sampling is from discrete probability distributions. While these approaches can handle arbitrarily long sequences of inputs and verify $\epsilon$-differential privacy, they are not shown to be complete and may fail to construct a proof of differential privacy even when the mechanism is differentially private. \paragraph*{Counterexample generation} Another line of investigation develops automated techniques to search for privacy violations. Ding et al.~\cite{DingWWZK18} use statistical techniques based on hypothesis testing for automatic generation of counterexamples. Bichsel et al.~\cite{BichselGDTV18} use optimization-based techniques and symbolic differentiation to search for counterexamples. These methods search only amongst a bounded sequence of inputs and assume a concrete value of the parameter $\epsilon.$ Wang et al.~\cite{CheckDP} use program analysis techniques to generate counterexamples when the proof construction fails.
\paragraph*{Model-checking/Markov Chain approaches} The probabilistic model checking approach for verifying $\epsilon$-differential privacy is employed in~\cite{ChatzGP14,LiuWZ18}, where it is assumed that the program is given as a Markov Chain. These approaches do not allow for sampling from continuous random variables. Instead, they assume that the program behavior is given as a finite Markov Chain, and the transition probabilities are specified as inputs. Thus, they also implicitly assume a bounded sequence of inputs and a concrete value of $\epsilon.$ In~\cite{ChistikovKMP20}, the authors use labeled Markov Chains to model differential privacy algorithms. They consider discrete probability only, and can only model inputs taking values from a finite set. They also implicitly assume a concrete value of $\epsilon.$ Further, they check whether the ratio of probabilities of observations on neighboring inputs is bounded by a constant. If it is bounded, it implies the algorithm is $\epsilon$-differentially private for sufficiently large $\epsilon$. However, they do not provide a method to compute a possible $\epsilon$. \paragraph*{Decision Procedures} The decision problem of checking whether a randomized program is differentially private is studied in~\cite{BartheCJS020}, where it is shown to be undecidable for programs with a single input and single output, assuming that the program can sample from Laplace distributions. They identify a language that restricts the mechanisms in order to obtain decidability. The restriction forces sampling from the Laplace distribution only a bounded number of times. The numbers of inputs and outputs are also bounded and constrained to take values from a finite domain. The decision procedure in~\cite{BartheCJS020} relies on the decision procedure for checking the validity of a sentence in the fragment of the theory of Reals with exponentiation identified in~\cite{mccallum2012deciding}, and has very high complexity.
The decision procedure allows for verification of differential privacy for all $\epsilon.$ \paragraph*{Complexity} Gaboardi et al.~\cite{GaboardiNP19} study the complexity of deciding differential privacy for randomized Boolean circuits, and show that the problem is $\mathbf{coNP^{\#P}}$-complete. Their results are proved by reduction to majority problems. They assume a finite number of inputs; the only probabilistic choices in~\cite{GaboardiNP19} are fair coin tosses, and $e^{\epsilon}$ is taken to be a fixed rational number. \section*{Acknowledgment} The authors would like to thank the anonymous reviewers for their interesting and valuable comments. Rohit Chadha was partially supported by NSF CNS 1553548 and NSF CCF 1900924. A. Prasad Sistla was partially supported by NSF CCF 1901069, and Mahesh Viswanathan was partially supported by NSF CCF 1901069 and NSF CCF 2007428. \subsection{Sufficiency proof} We shall now show that if a {\dipa} $\mathcal{A}$ is well-formed then it is differentially private. For simplicity, we will assume that all states are input states. The case when $\mathcal{A}$ includes non-input states can be dealt with similarly. Finally, we also assume that there are no transitions that output the value of $\svar'.$ In case there are transitions that output $\svar',$ Lemma~\ref{lem:main2} can be proved by appealing to the composition theorem of differential privacy (see Theorem 3.14 of~\cite{DR14}). The following proposition follows directly from the definition of well-formed {\dipautop}. \begin{proposition} \label{prop:abs} Let $\mathcal{A}$ be a well-formed {\dipa} and $\rho$ be a path of $\mathcal{A}$ starting from a reachable state. Then $\rho$ satisfies the following properties. \begin{itemize} \item If $\rho$ starts with an assignment transition $t_0$ and has no further assignment transitions, and has a {\ensuremath{\mathsf{G}}-cycle} or an {\ensuremath{\mathsf{L}}-cycle} transition then the output of $t_0$ is from $\outalph$.
\item If $\rho$ has no assignment transitions and has a {\ensuremath{\mathsf{G}}-cycle} (resp., {\ensuremath{\mathsf{L}}-cycle}) transition then the output of every transition in $\rho$, with guard $\lttest$ (resp., $\getest$), is from $\outalph$. \item If $\rho$ starts with an {\ensuremath{\mathsf{L}}-cycle} (resp., {\ensuremath{\mathsf{G}}-cycle}) transition and is an {\ensuremath{\mathsf{AG}}-path} (resp., {\ensuremath{\mathsf{AL}}-path}) then the output of every transition, with guard $\getest$ (resp., $\lttest$), is from $\outalph.$ \item If $\rho$ is an {\ensuremath{\mathsf{AG}}-path} (resp., {\ensuremath{\mathsf{AL}}-path}) ending with a {\ensuremath{\mathsf{G}}-cycle} (resp., {\ensuremath{\mathsf{L}}-cycle}) then the output of every transition, with guard $\lttest$ (resp., $\getest$), is from $\outalph.$ \end{itemize} \end{proposition} Note that Lemma~\ref{lem:sufficiency} is an immediate consequence of the following lemma. \begin{lemma} \label{lem:main2} Let $\mathcal{A}=\defaut$ be a well-formed {\dipa} and $\rho$ be a path of length $n>0$. Let $t_0$ be the initial transition, i.e., the $0$th transition of $\rho$, $c_0$ be its guard and $o_0$ be its output.
Let $t_0$ be an assignment transition, and let $\rho'$ be a path that is equivalent to $\rho$ such that $\inseq(\rho')$ is a neighbor of $\inseq(\rho).$ Then the following properties hold for all $x_0\in \mathbb{R}.$ \begin{enumerate} \item If the guard $c_0$ is $\getest$, and the first cycle transition in $\rho$ is a {\ensuremath{\mathsf{G}}-cycle} transition and no assignment transition with guard $\lttest$ appears before it, then $o_0\in \outalph$ and $$\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon} \pathprob{x_0+1,\rho}.$$ \item If the guard $c_0$ is $\getest$ and either (a) $\rho$ has no cycle transitions; or (b) the first cycle transition in $\rho$ is a {\ensuremath{\mathsf{G}}-cycle} transition and an assignment transition with guard $\lttest$ appears before it; or (c) the first cycle transition in $\rho$ is an {\ensuremath{\mathsf{L}}-cycle} transition, then $$\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon} \pathprob{x_0,\rho}.$$ Furthermore, if the output of every transition, whose guard is $\getest$, is from $\outalph$, until the first assignment transition whose guard is $\lttest$ or until the end of $\rho$, then $$\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon} \pathprob{x_0-1,\rho}.$$ \item If the guard $c_0$ is $\svar < x$ and the first cycle transition in $\rho$ is an {\ensuremath{\mathsf{L}}-cycle} transition and no assignment transition with guard $\getest$ appears before it, then $o_0\in \outalph$ and $$\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon}\pathprob{x_0-1,\rho}.$$ \item If the guard $c_0$ is $\svar < x$, and either (a) $\rho$ has no cycle transitions; or (b) the first cycle transition in $\rho$ is an {\ensuremath{\mathsf{L}}-cycle} transition and an assignment transition with guard $\getest$ appears before it; or (c) the first cycle transition in $\rho$ is a {\ensuremath{\mathsf{G}}-cycle} transition, then $$\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon}\pathprob{x_0,\rho}.$$ Furthermore, if
the output of every transition, whose guard is $\lttest$, is from $\outalph$, until the first assignment transition whose guard is $\getest$ or until the end of $\rho$, then $$\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon} \pathprob{x_0+1,\rho}.$$ \item If the guard $c_0$ is $\mathsf{true}$, then $\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon} \pathprob{x_0,\rho}.$ \end{enumerate} \end{lemma} \begin{proof} Let $\rho=\execl{n}$ and $\rho'=\execlb{n}.$ Let $t_0,\ldots,t_{n-1}$ be the transitions of $\rho$ and let $c_0,\ldots,c_{n-1}$ be their respective guards. For each $k\leq n,$ let $d_k,\mu_k$ be such that $\parf(q_k)=(d_k,\mu_k).$ Recall that, for any $k,$ $\rho||k$ denotes the suffix of $\rho$ starting from $q_k.$ Once again, we assume that there are no cycle transitions that are assignments. We show how the proof of Lemma \ref{lem:main} can be modified to prove this lemma. First, observe that properties (1), (3) and (5) of the lemma are identical to the corresponding properties of Lemma \ref{lem:main}. When $o_i\in \outalph$, for all $i, 0\leq i<n$, the second parts of the properties (2) and (4) subsume their first parts, and these two properties become identical to properties (2) and (4) of Lemma \ref{lem:main}, respectively. For each $i, 0\leq i<n$, let $(u_i,v_i)$ be such that $o_i=(\svar,u_i,v_i)$ if $o_i\notin\outalph$, otherwise it is the interval $(-\infty,\infty).$ Let $g_k(y),g'_k(y)$ be the functions as defined in the proof of Lemma \ref{lem:main}, and $\theta_k=b_k-a_k$ for $0\leq k<n.$ As before, we prove the lemma by induction on the number of assignment transitions in $\rho.$ In the base case, $\rho$ has one assignment transition, which is $t_0.$ Let $S_1$ and $S_2$ be the sets of $k>0$ such that $c_k$ is $\svar \geq x$ and $c_k$ is $\svar < x$, respectively. Now, assume the condition of (1) is satisfied.
Observe that $S_1$ includes all {\ensuremath{\mathsf{G}}-cycle} transitions whose guard is $\getest.$ Let $S_1'$ be the set of $k\in S_1$ such that $t_k$ is a {\ensuremath{\mathsf{G}}-cycle} transition and $S_1''= S_1 \setminus S_1'.$ Observe that, using the fact that $\mathcal{A}$ is well-formed and using Proposition \ref{prop:abs}, we see that the following hold: (i) for all $k \in S_1'\cup S_2$, $o_k\in \outalph$; (ii) $t_0$ is a critical transition and $o_0\in \outalph$; (iii) for all $k\in S_2\cup S_1''$, $t_k$ does not lie on a cycle and hence is a critical transition. Note that, for any $k\in S_1''$, $o_k$ may be $\svar.$ Now, we see that $$\pathprob{x_0,\rho'}\:=\:\int^\infty_{x_{0}}f(x)\prod_{k\in S_{1}'}\int^\infty_xg'_k(y) dy\: dx$$ where $$\displaystyle{f(x)\:=\:g'_0(x)\prod_{k\in S_{2}}\int^x_{-\infty} g'_k(y) dy \prod_{k\in S_1''}\int^{v_{k}}_{\max(x,u_{k})}g'_k(z) dz}.$$ Now, substituting $g'_k(y)=g_k(y-\theta_k)$ (for $k\in S_1$) in the above equation and using inequality (1) of Lemma \ref{lem:integralineq}, we see that $$\displaystyle{\pathprob{x_0,\rho'}\geq\int^\infty_{x_{0}+1}f(x-1)\prod_{k\in S_{1}'}\int^\infty_xg_k(y)dy\:dx}.$$ Now, using the same argument as in the proof of Lemma \ref{lem:main}, and observing that, for $k\in S_1''$, $\int^{v_{k}}_{\max(x-1,u_{k})}g'_k(z) dz\geq \int^{v_{k}}_{\max(x,u_{k})}g'_k(z) dz,$ it is easy to see that $$ \begin{array}{lcl} f(x-1)&\geq& \displaystyle{\eulerv{-2(d_{q_{0}}+\sum_{k\in S_{1}''\cup S_{2}}d_{q_{k}})\epsilon}g_0(x)}\\ && \hspace*{0.6cm} \displaystyle{ \prod_{k\in S_{2}}\int^{x}_{-\infty}g_k(y) dy} \\ && \hspace*{1.2cm} \displaystyle{\prod_{k\in S_{1}''}\int^{v_{k}}_{\max(x,u_{k})}g'_k(z) dz.} \end{array}$$ Putting all the above observations together, we see that property (1) holds. Now, we prove the base case for property (2).
Assume the condition of (2a) is satisfied, i.e., there are no cycle transitions in $\rho.$ Now, we see that $$\begin{array}{lcl} \pathprob{x_0,\rho'}&=&\displaystyle{\int^{v_{0}}_{\max(x_{0},u_{0})}g'_0(x)\prod_{k\in S_{1}}\int^{v_{k}}_{\max(x,u_{k})}g'_k(y) dy}\\ && \hspace*{2.2cm} \displaystyle{ \prod_{k\in S_{2}}\int_{u_{k}}^{\min(x,v_{k})}g'_k(z) dz\:dx.}\\ \end{array} $$ It is fairly straightforward to see that $\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon}\pathprob{x_0,\rho}$ since $g'_k(y)\geq \eulerv{-d_{q_{k}}\epsilon}g_k(y)$, for all $y\in \mathbb{R}$, $0\leq k<n.$ From this, we see that the first part of property (2) holds. To see that the second part of property (2) holds, assume that $o_0\in \outalph$, and for all $k\in S_{1}, o_k\in \outalph$. This means that \begin{dmath*} \pathprob{x_0,\rho'}\:=\:\int^\infty_{x_{0}}g'_0(x)\prod_{k\in S_{1}}\int^\infty_xg'_k(y) dy \prod_{k\in S_{2}}\int_{u_{k}}^{\min(x,v_{k})}g'_k(z) dz\:dx. \end{dmath*} Now introducing new variables $w,y'$ and setting $w=x-1$ and $y'=y-1$, we see that $$ \begin{array}{lcl}\pathprob{x_0,\rho'}&=&\displaystyle{\int^\infty_{x_{0}-1}g'_0(w+1)\prod_{k\in S_{1}}\int^\infty_wg'_k(y'+1) dy' }\\ &&\hspace*{1.4cm} \displaystyle{\prod_{k\in S_{2}}\int_{u_{k}}^{\min(w+1,v_{k})}g'_k(z) dz\:dw.} \end{array} $$ Now, observe that, for $k\in S_{2}$, $\int_{u_{k}}^{\min(w+1,v_{k})}g'_k(z) dz\geq \int_{u_{k}}^{\min(w,v_{k})}g'_k(z) dz$. Using this, we get $$\begin{array}{lcl} \pathprob{x_0,\rho'}&\geq&\displaystyle{\int^\infty_{x_{0}-1}g'_0(w+1)\prod_{k\in S_{1}}\int^\infty_wg'_k(y'+1) dy' }\\ && \hspace*{1.6cm} \displaystyle{\prod_{k\in S_{2}}\int_{u_{k}}^{\min(w,v_{k})}g'_k(z) dz\:dw.} \end{array}$$ Now, the second part of property (2) follows from the above inequality and the reasoning employed earlier. Now, the condition of (2b) cannot be satisfied, as $t_0$ is the only assignment transition in $\rho$. Now, assume the condition of (2c) is satisfied.
Let $S_2'$ be the set of all $k\in S_2$ such that $t_k$ is an {\ensuremath{\mathsf{L}}-cycle} transition and $S_2''= S_2\setminus S_2'.$ Now, using the fact that $\mathcal{A}$ is well-formed and using Proposition \ref{prop:abs}, we observe that the following hold: (i) for all $k\in S_1\cup S_2''$, $t_k$ is a critical transition; (ii) $t_0$ is a critical transition and $o_0\in \outalph$; (iii) for all $k\in S_1\cup S_2'$, $o_k\in \outalph.$ Now, we see that $$\pathprob{x_0,\rho'}\:=\:\int^\infty_{x_{0}}f(x)\prod_{k\in S'_{2}}\int^x_{-\infty} g'_k(y) dy\: dx$$ where $$\displaystyle{f(x)\:=\:g'_0(x)\prod_{k\in S_{1}}\int^\infty_x g'_k(y) dy \prod_{k\in S''_{2}}\int_{u_{k}}^{\min(x,v_{k})} g'_k(y) dy}.$$ Now, using inequality (2) of Lemma \ref{lem:integralineq}, we see that $$\displaystyle{\pathprob{x_0,\rho'}\geq\int^\infty_{x_{0}-1}f(x+1)\prod_{k\in S'_{2}}\int^x_{-\infty} g_k(y)dy\:dx}.$$ Now, observe that $$\begin{array}{lcl} f(x+1)&=&\displaystyle{g_0(x-(\theta_0-1))\prod_{k\in S_{1}}\int_{x+1}^\infty g_k(y-\theta_k)dy}\\ && \displaystyle{\prod_{k\in S''_{2}}\int_{u_{k}}^{\min(x+1,v_{k})} g'_k(y) dy.}\end{array}$$ Introducing a new variable $z$ and setting $z=y-1$, we see that $$\begin{array}{lcl} f(x+1)&=&\displaystyle{g_0(x-(\theta_0-1))\prod_{k\in S_{1}}\int_x^\infty g_k(z-(\theta_k-1))dz} \\ && \displaystyle{\prod_{k\in S''_{2}}\int_{u_{k}}^{\min(x+1,v_{k})} g'_k(y) dy}\end{array}$$ and $$\begin{array}{lcl}f(x+1)&\geq& \displaystyle{\eulerv{-2(d_{q_{0}}+\sum_{k\in S_{1}\cup S''_2}d_{q_{k}})\epsilon}g_0(x)}\\ && \hspace*{0.6cm}\displaystyle{\prod_{k\in S_{1}}\int_x^\infty g_k(z)dz \prod_{k\in S''_{2}}\int_{u_{k}}^{\min(x,v_{k})} g_k(y) dy.}\end{array}$$ From this and the above inequality, it is easily seen that $\pathprob{x_0,\rho'}\geq \eulerv{-\weight{\rho}\epsilon} \pathprob{x_0-1,\rho}$. From this we see that the inequalities of both parts of property (2) hold.
As before, the proofs for the base case of properties (3) and (4) are symmetric to those of properties (1) and (2) and are left out. Property (5) is proved as in the case of Lemma \ref{lem:main}. Now, we prove the inductive step as follows. Assume that all the properties hold when $\rho$ has $\ell>0$ assignments. Now, consider the case when $\rho$ has $\ell+1$ assignments. Let $t_i$, for $i>0$, be the second assignment transition in $\rho.$ Let $S_1$ (resp., $S_2$) be the set of $k$, $0<k<i$, such that $c_k$ is $\svar \geq x$ (resp., $\lttest$). Now, consider the case when $c_0$ is $\getest.$ Now, we consider two sub-cases. We first consider the sub-case when there are no cycle transitions before $t_i.$ We have $\pathprob{x_0,\rho'}\:=\int^{v_{0}}_{\max(x_{0},u_{0})}f'(x) \pathprob{x,\rho'||i} dx$ where $f'(x) \:=g'_0(x)\prod_{k\in S_{1}}\int^{v_{k}}_{\max(x,u_{k})}g'_k(y)dy\prod_{k\in S_{2}}\int_{u_{k}}^{\min(x,v_{k})}g'_k(y)dy.$ Applying the inductive hypothesis for the suffix ${\rho||i}$, we get an inequality involving $\pathprob{x,\rho'||i}$ and $\pathprob{x+1,\rho||i}$, or $\pathprob{x-1,{\rho||i}}$, or $\pathprob{x,\rho||i}$, based on which of the five properties of the lemma is satisfied by ${\rho||i}.$ Suppose the condition of property (1) is satisfied by ${\rho||i}$. Let $j\geq i$ be the smallest integer such that $t_j$ is a {\ensuremath{\mathsf{G}}-cycle} transition.
Now, since $\rho|j$, the prefix of $\rho$, is an {\ensuremath{\mathsf{AG}}-path}, using the fact that $\mathcal{A}$ is well-formed and using Proposition \ref{prop:abs}, it is easy to see that $o_0\in \outalph$, and for all $k\in S_2$, $o_k\in \outalph.$ By using the inductive hypothesis, we get $\pathprob{x_0,\rho'}\geq \int^{\infty}_{x_{0}} f'(x) h(x) dx$, where $h(x)\:= \eulerv{-\weight{{\rho||i}}\epsilon}\pathprob{x+1,\rho||i}.$ Because of the previous observation, we see that $f'(x) \:=g'_0(x)\prod_{k\in S_{1}}\int^{v_{k}}_{\max(x,u_{k})}g'_k(y)dy\prod_{k\in S_{2}}\int^x_{-\infty}g'_k(y)dy.$ Now, observe that, for each $k\in S_1$, $\int^{v_{k}}_{\max(x-1,u_{k})}g'_k(y)dy\geq \int^{v_{k}}_{\max(x,u_{k})}g'_k(y)dy.$ From this, using the reasoning employed in the base case, we see that $$\begin{array}{lcl}f'(x-1)&\geq&\displaystyle{ \eulerv{-2(d_{q_{0}}+\sum_{k\in S_{1}\cup S_{2}}d_{q_{k}})\epsilon} g_0(x)}\\ && \hspace*{0.6cm} \displaystyle{\prod_{k\in S_{1}}\int^{v_{k}}_{\max(x,u_{k})}g_k(y)dy}\\ && \hspace*{1.2cm} \displaystyle{\prod_{k\in S_{2}}\int^x_{-\infty}g_k(y)dy.}\end{array}$$ Now, by taking $f(x)\:=f'(x)h(x)$, using inequality (1) of Lemma \ref{lem:integralineq} and by taking $k=0$ in that inequality, we get property (1) for the path $\rho$ using the same simplification/reasoning used in the base case and by observing that $$\begin{array}{lcl}G_{p}(x_0+1)&=&\displaystyle{\int^\infty_{x_{0}+1} g_0(x) \prod_{k\in S_1}\int^{v_{k}}_{\max(x,u_{k})}g_k(y)dy}\\ &&\hspace*{1cm}\displaystyle{\prod_{k\in S_{2}}\int^x_{-\infty}g_k(y)dy \pathprob{x,\rho||i} dx}. \end{array}$$ We can similarly prove the inductive step when the suffix ${\rho||i}$ satisfies the other properties (i.e., (2) through (5)) of the lemma. Now consider the sub-case when a cycle transition appears before $t_i.$ Assume that the cycle transitions are {\ensuremath{\mathsf{G}}-cycle} transitions.
Let $S_1'$ be the set of $k\in S_1$ such that $t_k$ is a {\ensuremath{\mathsf{G}}-cycle} transition and $S_1''=S_1\setminus S_1'.$ Since $\mathcal{A}$ is well-formed, using Proposition \ref{prop:abs}, we see that $o_0\in \outalph$, and for every $k\in S_1'\cup S_2$, $o_k\in \outalph.$ Let $f(x)\:=f'(x)h(x)$ where $f'(x)\:=g'_0(x) \prod_{k\in S_{2}}\int^x_{-\infty}g'_k(y)dy \prod_{k\in S_{1}''}\int_{\max(x,u_{k})}^{v_{k}}g'_k(y)dy$ and $h(x) \:=\eulerv{-\weight{{\rho||i}}\epsilon}\pathprob{x+1,\rho||i}.$ If $c_i$ is also $\getest$, then the suffix ${\rho||i}$ can satisfy any of the conditions of the first two properties of the lemma; in this situation, observe that, if ${\rho||i}$ satisfies the condition of property (1) then $h(x)$ is the right-hand side of the inequality we get by applying the inductive hypothesis to ${\rho||i}$; if ${\rho||i}$ satisfies the condition of property (2) of the lemma then, by applying the inductive hypothesis to ${\rho||i}$, we get $\pathprob{x,\rho'||i}\geq \eulerv{-\weight{{\rho||i}}\epsilon}\pathprob{x,\rho||i}$; since $\pathprob{x,\rho||i}\geq \pathprob{x+1,\rho||i}$, we see that $\pathprob{x,\rho'||i}\geq \eulerv{-\weight{{\rho||i}}\epsilon}\pathprob{x+1,\rho||i}.$ Now, assume that $c_i$ is $\lttest.$ Now, since $\mathcal{A}$ is well-formed, it is easy to see that the condition of property (3) of the lemma cannot be satisfied. Assume that ${\rho||i}$ satisfies the condition of property (4) of the lemma. Let $k'$ be the smallest integer such that, $i\leq k'\leq n$, and either $k'=n$, or $t_{k'}$ is an assignment transition and $c_{k'}$ is $\getest.$ Now, we see that the path starting with $t_i$ and ending with $t_{k'-1}$ is an {\ensuremath{\mathsf{AL}}-path}.
Using Proposition \ref{prop:abs} and the fact that $\mathcal{A}$ is well-formed, we see that, for all $j, i\leq j<k'$, such that $c_j$ is $\lttest$, $o_j\in \outalph.$ Now, applying the induction hypothesis for ${\rho||i}$, using the second part of property (4), we get $\pathprob{x,\rho'||i}\geq \eulerv{-\weight{{\rho||i}}\epsilon}\pathprob{x+1,\rho||i}.$ Now, if $c_i$ is $\mathsf{true}$, applying the induction hypothesis and using property (5), we see that $\pathprob{x,\rho'||i}\geq \eulerv{-\weight{{\rho||i}}\epsilon}\pathprob{x,\rho||i}$; since $\pathprob{x,\rho||i}$ is independent of $x$, we see that $\pathprob{x,\rho'||i}\geq \eulerv{-\weight{{\rho||i}}\epsilon}\pathprob{x+1,\rho||i}.$ Thus, irrespective of what guard $c_i$ is, we have $\pathprob{x,\rho'||i}\geq h(x).$ Now, we have $\pathprob{x_0,\rho'}\:\geq \int^\infty_{x_{0}}f'(x)h(x)\prod_{k\in S'_{1}}\int^\infty_{x}g'_k(z) dz dx.$ Applying the inequality (1) of Lemma \ref{lem:integralineq}, we get $\pathprob{x_0,\rho'}\:\geq \int^\infty_{x_{0}+1}f'(x-1)h(x-1)\prod_{k\in S'_{1}}\int^\infty_{x}g_k(z) dz dx.$ Observe that, for $k\in S_1''$, $\int_{\max(x-1,u_{k})}^{v_{k}}g'_k(y)dy\geq \int_{\max(x,u_{k})}^{v_{k}}g'_k(y)dy.$ Using this observation and the reasoning/simplification as in the base case, we see that property (1) is satisfied by $\rho.$ Now consider the situation where the cycle transitions appearing before $t_i$ are {\ensuremath{\mathsf{L}}-cycle} transitions. 
Now, we apply inequality (2) of Lemma \ref{lem:integralineq} to prove that property (2) of the Lemma is satisfied by $\rho.$ Let $S_2'$ be the set of $k\in S_2$ such that $t_k$ is an {\ensuremath{\mathsf{L}}-cycle} transition and $S_2''=S_2\setminus S_2'.$ Since $\mathcal{A}$ is well-formed, using Proposition \ref{prop:abs}, we see that $o_0\in \outalph$, and for every $k\in S_1\cup S'_2$, $o_k\in \outalph.$ Now, let $f(x)\:=f'(x)h(x)$ where $f'(x)\:=g'_0(x)\prod_{k\in S_{1}}\int_x^{\infty}g'_k(y)dy\prod_{k\in S_{2}''}\int_{u_{k}}^{\min(x,v_{k})}g'_k(z)dz$ and $h(x) \:= \eulerv{-\weight{{\rho||i}}\epsilon}\pathprob{x-1,{\rho||i}}.$ Now, applying the induction hypothesis to ${\rho||i}$, we show that $$\pathprob{x_0,\rho'}\geq \int^{\infty}_{x_{0}} f'(x)h(x) \prod_{k\in S'_{2}}\int^x_{-\infty}g'_k(y)dy dx.$$ Since $\mathcal{A}$ is well-formed, ${\rho||i}$ cannot satisfy the condition of property (1). Now, consider the case when ${\rho||i}$ satisfies the condition of property (2). Let $k'$ be the smallest integer such that $i\leq k'\leq n$, and either $k'=n$ or $t_{k'}$ is an assignment transition and $c_{k'}$ is $\lttest.$ Now, we see that the path starting with $t_i$ and ending with $t_{k'-1}$ is an {\ensuremath{\mathsf{AG}}-path}.
From this observation, using the fact that $\mathcal{A}$ is well-formed and using Proposition \ref{prop:abs}, we see that, for all $j, i\leq j<k'$, such that $c_j$ is $\getest$, $o_j\in \outalph.$ Now, applying the induction hypothesis for ${\rho||i}$, using the second part of property (2), we get $\pathprob{x,\rho'||i}\geq \eulerv{-\weight{{\rho||i}}\epsilon}\pathprob{x-1,{\rho||i}}.$ If ${\rho||i}$ satisfies property (3), then we directly see from the induction hypothesis that $\pathprob{x,\rho'||i}\geq \eulerv{-\weight{{\rho||i}}\epsilon}\pathprob{x-1,{\rho||i}}.$ If ${\rho||i}$ satisfies property (4), we get the above inequality, using the first part of the induction hypothesis and the observation that $\pathprob{x,\rho||i}\geq \pathprob{x-1,{\rho||i}}.$ If ${\rho||i}$ satisfies property (5), then we get the above inequality from the induction hypothesis and the observation that $\pathprob{x,\rho||i}$ is independent of $x.$ In all the above cases, it is easy to see that $$\pathprob{x_0,\rho'}\geq \int^{\infty}_{x_{0}} f'(x)h(x) \prod_{k\in S'_{2}}\int^x_{-\infty}g_k(y-\theta_k)dy.$$ Now, using the inequality (2) of Lemma \ref{lem:integralineq}, and observing that, for all $k\in S_2''$, $\int_{u_{k}}^{\min(x+1,v_{k})}g'_k(z)dz \geq \int_{u_{k}}^{\min(x,v_{k})}g'_k(z)dz$, and using simplifications and reasoning as in the base cases, we see that property (2) of the Lemma is satisfied by $\rho.$ \end{proof} \section{Introduction} \input{Section1_introduction} \input{Section2_prelim} \input{NewSection3_dipaut} \input{NewSection4_decidability} \input{Section5__related} \section{Conclusion} \label{sec:conclusions} \input{Section6_conclusions} \bibliographystyle{IEEEtran}
\section{Introduction} Recommender systems aim to discover users' interests and have been extensively employed in various online systems \cite{wang2018billion, ying2018graph}, such as E-commerce platforms, online advertising and social platforms. Despite the success of traditional matrix factorization (MF) models \cite{bokde2015matrix} and popular deep learning models \cite{NCF}, one of the major challenges for most recommendation methods is the cold-start problem \cite{schein2002methods, gantner2010learning} because of the lack of user-item interactions. Since new users may abandon the system when they receive poor recommendations initially \cite{mcnee2006being}, it is critical to address this problem. \begin{figure}[] \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.2cm} \centering \subfigcapskip=-8pt \subfigure[Conventional meta-learning framework]{ \begin{minipage}[c]{0.5\textwidth} \centering \includegraphics[width=0.9\linewidth]{latex/figures/concept1-sig.pdf} \label{fig:concept_a} \end{minipage}% }\vspace{-4mm} \subfigcapskip=-6pt \subfigure[Proposed TMAG framework]{ \begin{minipage}[c]{0.5\textwidth} \centering \includegraphics[width=0.9\linewidth]{latex/figures/concept2-sig.pdf} \label{fig:concept_b} \end{minipage}% } \caption{Visualization of model parameter $\boldsymbol{\theta}$ in the conventional framework with 4 tasks and in the proposed TMAG with 2 tasks aligned by age. The gradient descent direction in the conventional meta-learning framework is biased towards $u_4$, while in TMAG users in the same group have a consistent optimization direction, thus avoiding such a local optimum.} \label{fig:concept} \end{figure} A traditional way to alleviate the cold-start problem is to utilize \textbf{feature-level} strategies, which can be categorized into two groups.
Firstly, modeling \textit{inherent information} (e.g., user profiles \cite{BriandSBMT21, gantner2010learning}, item attributes \cite{roy2016latent, chen2020esam} and cross-domain knowledge \cite{hu2018conet, krishnan2020transfer}) to enhance the representations of new users or items. Secondly, modeling \textit{feature interaction} via graph neural networks (GNN) \cite{WangZWMHW21} and heterogeneous information networks (HIN) \cite{liu2020heterogeneous, MvDGAE} to capture the high-order collaborative signal. Despite progress, these methods handle the cold-start issue at the feature level, which heavily relies on the availability and quality of features. Along another line, at the \textbf{model-level}, recent works in few-shot learning \cite{vinyals2016matching} and meta-learning \cite{vanschoren2018meta} have made prominent progress in solving the data sparsity problem in various fields. Most of the current meta-learning methods \cite{MeLU, wei2020fast, MetaHIN} adopt optimization-based algorithms (e.g., MAML \cite{finn2017model}) to address the cold-start problem. The main idea is to learn a global parameter to initialize the parameters of personalized models. These methods construct diverse few-shot user preference tasks that mimic the cold-start scenarios and extract meta-knowledge across meta-training tasks as a strong generalization prior. Then the learned prior knowledge can be rapidly adapted to new users with scarce interactions during meta-testing. Practically, they have achieved promising results in cold-start recommendation. However, the existing meta-learning methods have the following limitations. They formulate each user as a task and learn globally shared meta-knowledge across all users. The coarse-grained global knowledge leads the model to a local optimum when dealing with users whose gradient descent directions differ from those of the majority of users \cite{dong2020mamo}.
As illustrated in Figure \ref{fig:concept_a}, $\nabla\mathcal{L}_4$ dominates the direction of gradient descent due to the age difference. Therefore, the parameter $\boldsymbol{\theta}$ is biased away from the optimal solution. In addition, existing methods are deficient in taking full advantage of both inherent information and feature interactions, which is critical to modeling new users and items. These observations raise two research challenges: \begin{itemize}[leftmargin=*] \item How to ease the local optimum of meta-learning in cold-start scenarios? \item How to make full use of both inherent information and feature interactions to alleviate the sparsity? \end{itemize} With these challenges in mind, in this paper, we propose a $\textbf{T}$ask aligned $\textbf{M}$eta-learning based $\textbf{A}$ugmented $\textbf{G}$raph (TMAG) approach to address cold-start recommendation at \textbf{both feature-level and model-level}. For the first challenge, a fine-grained task aligned constructor is proposed to cluster similar users and divide tasks for meta-learning. Specifically, an attribute-oriented autoencoder is used to extract latent representations for users and items according to their inherent attributes. Then, users with similar representations are clustered into a group, which is regarded as a task and has a consistent optimization direction, thus easing the local optimum during meta-training. For the second challenge, we propose an augmented graph neural network to capture the high-order user-item interactions. Specifically, two graph enhanced approaches are utilized to alleviate the data sparsity and explore potential interactive signals from the perspectives of attributes and graph structure, respectively.
The major contributions of our work are summarized as follows: \begin{itemize}[leftmargin=*] \item We propose a fine-grained task aligned constructor to capture the latent clustering knowledge that can be rapidly adapted to new users, which addresses the local optimum issue. \item We augment the adjacency matrix of the graph by combining the graph structure information and attribute information, which alleviates the data sparsity problem. \item We employ a task-wise attribute contrastive regularization to enhance the latent clustering knowledge. \item We conduct extensive experiments on three public real-world datasets in various cold-start scenarios to demonstrate the state-of-the-art performance of TMAG. \end{itemize} \section{Related Work} \subsection{Cold-start Recommendation} A prevalent strategy to address the cold-start problem relies on side information, incorporating user profiles or item content \cite{volkovs2017dropoutnet, van2013deep, BriandSBMT21, gantner2010learning} into traditional collaborative filtering \cite{ItemCF, linden2003amazon, NCF}. In particular, many approaches focus on clustering warm users by modeling group-level behaviors and then assigning cold users to existing clusters using the side information \cite{wu2016cccf, ma2019dbrec, krishnan2018insights, hu2018conet}. Building on these works, we will also utilize side information and incorporate a clustering component in Section \ref{method}. Beyond these content-based features and user-item interactions, another strategy is to augment data via adversarial regularization \cite{krishnan2018adversarial, chae2019rating, chae2020ar}, but adversarial learning is computationally intensive and scales poorly to massive data. Some transfer learning-based methods \cite{man2017cross, bi2020dcdir, krishnan2020transfer} alleviate the cold-start problem by transferring well-learned representations of overlapped objects from the source domain to the target domain.
The active learning scheme \cite{zhu2020addressing, jacobson2016music, shi2017local} explicitly encourages new users to rate items from the catalog through various interview processes, at extra cost or budget. Additionally, GNN-based models \cite{Pinsage, NGCF, LightGCN, WangZWMHW21} have been developed for recommendation, which utilize the user-item bipartite graph to capture the high-order collaborative signal. In general, GNN-based methods maximize a user's likelihood of adopting an item, rather than directly improving the embedding quality of cold-start users or items. We will improve them in our method to achieve fast adaptation in cold-start scenarios. While these solutions demonstrate promising performance, they only address the cold-start issue at the feature level and heavily rely on the availability and quality of side information or extra domains. \subsection{Meta-learning for Recommendation} Known as learning to learn, meta-learning mines the underlying patterns behind user-item interactions. Meta-learning aims to extract meta-knowledge across tasks that can be rapidly adapted to a new task with limited examples. Previous works on meta-learning for cold-start recommendation can be classified into two categories. One is the metric-based method \cite{vartak2017meta, sankar2021protocf}, which learns a similarity metric between new instances and examples in the training set. ProtoCF \cite{sankar2021protocf} learns to compose robust prototype representations for few-shot items without using side information. Most meta-learning methods are optimization-based and introduce the framework of MAML \cite{finn2017model} into cold-start recommendation \cite{song2021cbml, yu2021personalized, zheng2021cold, du2019sequential}. MeLU \cite{MeLU} divides the model parameters into personalized parameters and embedding parameters, which are updated in the inner loop and outer loop, respectively.
MetaHIN \cite{MetaHIN} applies heterogeneous information networks (HIN) to capture the rich semantics of meta-paths. MetaCF \cite{wei2020fast} learns an accurate collaborative filtering model well-generalized for fast adaptation to new users. The above works learn globally shared parameters that are the same for all tasks. This may lead the model to a local optimum and slow convergence. To address this problem, MAMO \cite{dong2020mamo} improves MeLU by introducing task-specific memories and feature-specific memories to provide personalized bias terms when initializing the parameters. PAML \cite{wang2021preference} improves the generalization capacity of the existing framework by leveraging users’ social relations. In contrast to existing works, to solve the above issue, we cluster users with similar attributes into the same task to capture latent clustering knowledge that can be rapidly adapted to new users. \begin{figure*}[t] \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.1cm} \centering \includegraphics[width=1.0\linewidth]{latex/figures/structure1.pdf} \caption{The overview of the proposed TMAG framework. (a) In the pretrain phase, we use two attribute-oriented autoencoders to learn user and item attribute embeddings. (b) In the meta-training phase, we cluster users into different groups and formulate each group of users as a task. These tasks are leveraged to train the meta-training model, and we obtain the parameter $\boldsymbol{\theta}$. (c) In the meta-testing phase, firstly, we employ the learned $\boldsymbol{\theta}$ as the initialization parameter to get user and item representations. Then, we augment the adjacency matrix of the graph by combining the graph structure information and attribute information.
Finally, we employ the updated $\boldsymbol{\theta}'$ for recommendation.} \label{fig:framework} \end{figure*} \section{Preliminary} \subsection{Problem Definition} Let $\mathcal{U}$ = $\left\{u_{1}, u_{2},...,u_{M}\right\}$ and $\mathcal{I}$ = $\left\{i_{1}, i_{2},...,i_{N}\right\}$, where $\mathcal{U}$ denotes the user set, $\mathcal{I}$ denotes the item set, and $M$ and $N$ denote the number of users and items, respectively. We use a matrix $\boldsymbol{R} \in \mathbb{R}^{M \times N}$ to denote the interaction behavior between users and items. Formally, \begin{equation} \boldsymbol{R}_{ui}= \begin{cases} 1,& \text{if user $u$ has interacted with item $i$;}\\ 0,& \text{otherwise.} \end{cases} \label{eq:R} \end{equation} The user-item interactions can be transformed into a user-item bipartite graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ denotes the node set and $\mathcal{E}$ denotes the edge set in the graph, and we have $|\mathcal{V}| = (M + N)$ denoting the number of nodes in the graph. $\boldsymbol{A} \in \mathbb{R}^{M \times N}$ is the adjacency matrix of $\mathcal{G}$. The cold-start issue is a fundamental challenge in recommender systems, which can be categorized into complete cold-start and incomplete cold-start \cite{wei2020fast}. In the former, there are no interactions for new users; in the latter, there are only a few interactions for new users. In this paper, we are interested in the incomplete cold-start problem. Our model serves as an additional cold-start recall method. To evaluate the model performance, we divide the recommendation task into three sub-tasks as follows, \noindent$\boldsymbol{Task1:}$ Recommend existing (old) items to new users; \noindent$\boldsymbol{Task2:}$ Recommend new items to existing (old) users; \noindent$\boldsymbol{Task3:}$ Recommend new items to new users. \subsection{Background on Meta-learning} Meta-learning learns generic knowledge from a large class of tasks and generalizes it to new tasks.
In meta-learning, we have a task set $\mathcal{T}$ which consists of meta-training tasks $\mathcal{T}^{tr}$ and meta-testing tasks $\mathcal{T}^{te}$. We define two sets, the support set $\mathcal{S}_k$ and the query set $\mathcal{Q}_k$, for each task $\boldsymbol{T}_k \sim p(\mathcal{T})$. Optimization-based meta-learning methods attempt to find a desirable parameter $\boldsymbol{\theta}$ of the model $f$. During meta-training, there are two rounds of updates for $T_k \in \mathcal{T}^{tr}$. In the inner-loop update, the model adjusts the parameter $\boldsymbol{\theta}$ to $\boldsymbol{\theta}_k$ by gradient $\nabla_{\boldsymbol{\theta}}\mathcal{L}_k(f_{\boldsymbol{\theta}})$ on $\mathcal{S}_k^{tr}$, where $\mathcal{L}_k(f_{\boldsymbol{\theta}})$ denotes the training loss on task $T_k$ with global parameter $\boldsymbol{\theta}$. In the outer-loop update, the model trains the parameter to minimize the loss on the query set $\mathcal{L}_k(f_{\boldsymbol{\theta}_k})$ on $\mathcal{Q}_k^{tr}$, where $\mathcal{L}_k(f_{\boldsymbol{\theta}_k})$ is the testing loss on task $T_k$ with task-specific parameter $\boldsymbol{\theta}_k$. During the meta-testing process, the model only updates the parameter in the inner loop on the support set $\mathcal{S}^{te}$ and evaluates on the query set $\mathcal{Q}^{te}$. \section{METHODOLOGY} \label{method} \subsection{Overview of TMAG} The framework of TMAG is illustrated in Figure \ref{fig:framework}. There are three components in the framework: (1) a task aligned constructor that extracts attribute embeddings and constructs tasks to capture latent clustering knowledge; (2) an augmented graph neural network to alleviate data sparsity and capture the high-order user-item interactions; and (3) a contrastive regularization that enhances the latent clustering prior knowledge.
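To make the two-loop optimization from the preliminaries concrete, below is a minimal, stdlib-only Python sketch that meta-trains a one-parameter least-squares model over toy tasks. The data, learning rates, and the first-order update (standing in for MAML's second-order gradient) are illustrative assumptions, not the actual TMAG model.

```python
import random

# Toy task: each task k fits y = theta * x on its own (x, y) pairs.
# A task is (support_set, query_set), mirroring (S_k, Q_k) above.

def loss_grad(theta, data):
    # Mean-squared-error loss and its gradient w.r.t. theta.
    loss = sum((theta * x - y) ** 2 for x, y in data) / len(data)
    grad = sum(2 * (theta * x - y) * x for x, y in data) / len(data)
    return loss, grad

def meta_train(tasks, theta=0.0, alpha=0.1, beta=0.05, epochs=200):
    random.seed(0)
    for _ in range(epochs):
        support, query = random.choice(tasks)
        # Inner loop: adapt the global theta to the sampled task.
        _, g_s = loss_grad(theta, support)
        theta_k = theta - alpha * g_s
        # Outer loop: update the global initialization on the query set
        # (a first-order stand-in for MAML's second-order update).
        _, g_q = loss_grad(theta_k, query)
        theta = theta - beta * g_q
    return theta

# Two "aligned" tasks whose true slopes are close (2.0 and 2.2), so their
# gradient directions are consistent, as in a task-aligned group.
tasks = [
    ([(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]),
    ([(1.0, 2.2), (2.0, 4.4)], [(3.0, 6.6)]),
]
theta = meta_train(tasks)  # converges near the shared slope
```

Because both toy tasks pull $\boldsymbol{\theta}$ in consistent directions, the learned initialization lands between their optima, which is the behavior the task-aligned grouping aims for.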
\subsection{Task Aligned Constructor} \subsubsection{Attribute-Oriented Autoencoder} The proposed attribute-oriented autoencoder is shown in Figure \ref{fig:framework} (a). The model takes the user content (e.g., age and gender) and the item content (e.g., actor and genre) as the input. We employ an attribute-oriented autoencoder to obtain the user and item feature embeddings. First, we project the input into a low-dimensional hidden embedding. Second, from the latent embedding, we reconstruct the input content in the output space. Most attributes are categorical features. We define the user content input $\boldsymbol{x}_u$ for user $u$ and item content input $\boldsymbol{x}_i$ for item $i$ as follows, \begin{equation} \boldsymbol{x}_u = [c_{u1};\cdots;c_{uP}]; \quad \boldsymbol{x}_i = [c_{i1};\cdots;c_{iQ}], \end{equation} where $P$ is the number of user content fields, $c_{up}$ is a $d_p$-dimensional one-hot or multi-hot vector for content $p \in \left\{1,\cdots,P \right\} $ of user $u$. Similarly, $Q$ is the number of item content fields, $c_{iq}$ is a $d_q$-dimensional one-hot or multi-hot vector for content $q \in \left\{1,\cdots,Q \right\} $ of item $i$. We choose the Rectified Linear Unit (ReLU) as the non-linear activation function to obtain the latent user embedding as follows, \begin{equation} \boldsymbol{z}_u = \sigma (\boldsymbol{W}_u^1 \boldsymbol{x}_u + \boldsymbol{b}_u^1), \end{equation} where $\boldsymbol{W}_u^1, \boldsymbol{b}_u^1$ are trainable parameters. Here we do not utilize a graph convolutional network (GCN) \cite{GCN} to encode the user content because we note that aggregating too much information from neighbors affects the expressive power of the user's attributes for the aligned task in meta-learning.
We then design a decoder to reconstruct $\boldsymbol{x}_u$ as follows, \begin{equation} \boldsymbol{x}_u^r = \sigma (\boldsymbol{W}_u^2 \boldsymbol{z}_u + \boldsymbol{b}_u^2), \end{equation} where $\boldsymbol{W}_u^2, \boldsymbol{b}_u^2$ are trainable parameters. Mathematically, the objective function of the user attribute-oriented autoencoder is, \begin{equation} \boldsymbol{L}_u = \sum_{u \in \mathcal{U}} \left \| \boldsymbol{x}_u - \boldsymbol{x}_u^r \right \|^2 + \lambda \left \| \boldsymbol{W}_u \right \|^2, \label{auto_loss} \end{equation} where $\boldsymbol{W}_u$ denotes all trainable model parameters; $\lambda$ controls the $L_2$ regularization strength to prevent over-fitting. Analogously, we can obtain the latent item embedding $\boldsymbol{z}_i$ utilizing a separate item attribute-oriented autoencoder. \subsubsection{Task Construction} In incomplete cold-start recommendation scenarios, users only have a few interactions with items. Therefore, user and item representations learned from user-item interaction pairs are inadequate. When new users (items) arrive, we are supposed to capture latent clustering knowledge among users with similar attributes to obtain more expressive representations. Similar to the classical meta-learning method MAML, we construct various meta tasks to locally share latent clustering knowledge. To be specific, we utilize the K-Means algorithm \cite{likas2003global} to divide all users $\mathcal{U}$ into $K$ task groups according to $\boldsymbol{z}_u$ ($u \in \mathcal{U}$) learned from the attribute-oriented module, i.e., $\boldsymbol{C} = \{\boldsymbol{C}_1, \boldsymbol{C}_2, \cdots, \boldsymbol{C}_K\}$, where $\boldsymbol{C}_k$ denotes the $k$-th user cluster.
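The clustering step can be sketched as follows, assuming the latent attribute embeddings are already available. The 2-D toy embeddings and the plain Lloyd's-algorithm K-Means are illustrative stand-ins; in practice a library implementation would serve equally well.

```python
import math
import random

def kmeans(points, K, iters=20, seed=0):
    """Plain Lloyd's algorithm, standing in for the K-Means step of the
    task aligned constructor."""
    rng = random.Random(seed)
    centers = rng.sample(points, K)  # distinct initial centers
    clusters = [[] for _ in range(K)]
    for _ in range(iters):
        clusters = [[] for _ in range(K)]
        # Assignment step: each point joins its nearest center.
        for p in points:
            k = min(range(K), key=lambda k: math.dist(p, centers[k]))
            clusters[k].append(p)
        # Update step: recompute each center as the cluster mean.
        for k, members in enumerate(clusters):
            if members:  # keep the old center if a cluster empties out
                centers[k] = tuple(sum(c) / len(members)
                                   for c in zip(*members))
    return centers, clusters

# Toy latent attribute embeddings z_u: two well-separated user groups.
z_u = [(0.1, 0.0), (0.2, 0.1), (0.0, 0.2),   # one group of similar users
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]   # another group
centers, tasks = kmeans(z_u, K=2)

# Each recovered cluster C_k becomes one meta-task; its users'
# interactions are then split into support and query sets.
sizes = sorted(len(t) for t in tasks)
```

With well-separated groups like these, Lloyd's iterations recover the two intended clusters of three users each, regardless of the random initialization.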
Following the meta-learning paradigm, we construct the meta-training support set $\mathcal{S}^{tr}$ = $\{\mathcal{S}^{tr}_1, \mathcal{S}^{tr}_2 \cdots \mathcal{S}^{tr}_K\}$ based on $\boldsymbol{C}$, where $\mathcal{S}^{tr}_k$ is a partial set of items that have been rated by users $u \in \boldsymbol{C}_k$. For new users in cold-start scenarios, we allocate them to their closest cluster center in $\boldsymbol{C}$. These new users form a new cluster set $\boldsymbol{C}'$, and we construct the meta-testing support set $\mathcal{S}^{te}$ according to $\boldsymbol{C}'$. The set $\mathcal{S}^{te}_k$ consists of a subset of items interacted by new users. Likewise, we can generate the meta-training query set $\mathcal{Q}^{tr}$, which contains user-item pairs simulated as unseen interactions. $\mathcal{Q}^{tr}$ is used to accumulate the task loss in meta-training, while the meta-testing query set $\mathcal{Q}^{te}$ is constructed to evaluate the recommendation results in meta-testing. It should be noted that the support set and query set are mutually exclusive in each task $T_k$ (i.e., $\mathcal{S}_k \bigcap \mathcal{Q}_k = \emptyset$). \subsection{Augmented Graph Neural Network} \subsubsection{Graph Embedding Propagation} After obtaining $K$ different tasks, we construct the user-item bipartite graph and leverage interactions of users in the $k$-th task as training data for $\boldsymbol{T}_k$. We perform GCN to capture the high-order structural information of the interaction graph. Firstly, we randomly initialize free embedding vectors $\boldsymbol{e}_u \in \mathbb{R}^d$ ($\boldsymbol{e}_i \in \mathbb{R}^d$) to denote a user $u$ (item $i$), where $d$ denotes the embedding dimension. 
Then we perform the light graph convolution (LGC) \cite{LightGCN} from user $u$'s neighbors, which is formulated as follows, \begin{equation} \boldsymbol{e}_u^{(l + 1)} = \sum_{i \in \mathcal{N}_u}\frac{1}{\sqrt{|\mathcal{N}_u||\mathcal{N}_i|}} \boldsymbol{e}_i^{(l)}, \label{eq:u_agg} \end{equation} where $\boldsymbol{e}_u^{(l+1)}$ ($\boldsymbol{e}_i^{(l+1)}$) denotes the embedding of node $u$ ($i$) in layer $l+1$, memorizing the messages from its $l$-layer neighbors; $\mathcal{N}_u$ ($\mathcal{N}_i$) denotes the set of neighbors of node $u$ ($i$); $|\mathcal{N}_u|$ ($|\mathcal{N}_i|$) denotes the neighbor size of node $u$ ($i$); $\boldsymbol{e}_u^{(0)}$ = $\boldsymbol{e}_u$ and $\boldsymbol{e}_i^{(0)}$ = $\boldsymbol{e}_i$. Similarly, we can obtain item $i$'s embedding through LGC. The purpose of this module is to learn more effective embeddings only through the information propagation on the user-item bipartite graph. \subsubsection{Graph Enhanced Generator} Considering that the limited interactions of users are insufficient to learn expressive representations in cold-start scenarios, we further adopt two strategies to generate potential interactions for users. In the first strategy, we mine the structure of the interaction graph and capture potential dependencies for user-item pairs that have not appeared in the graph. Specifically, we add interactions with items that are similar to users' existing interacted items, based on the graph embeddings.
In the spirit of Occam’s razor, we adopt a simple weighted inner product to measure the interest of the user $u$ to the item $i$ as, \begin{equation} \boldsymbol{E}_{u,i}^1 = \sigma( \sum_{j \in \mathcal{N}_u} \boldsymbol{e}_j^{(L)} \boldsymbol{W}_g \boldsymbol{e}_i^{(L)}), \end{equation} where $\boldsymbol{e}_j^{(L)}$ refers to the final $L$-th layer embedding of item $j$, $\boldsymbol{W}_g$ refers to the matrix parameter capturing the structure information between users and items; and $\sigma$ is the sigmoid function. Different from ItemCF \cite{ItemCF}, this strategy not only leverages the neighborhood information but also captures the high-order relationships in the bipartite graph. In the second strategy, we utilize interacted items to represent users and incorporate potential items according to their attribute similarity with users. We also apply a weighted inner product to define the similarity between user $u$ and item $i$ as, \begin{equation} \boldsymbol{E}_{u,i}^2 = \sigma( \sum_{j \in \mathcal{N}_u} \boldsymbol{z}_j \boldsymbol{W}_a \boldsymbol{z}_i), \end{equation} where $\boldsymbol{W}_a$ refers to the matrix parameter capturing the attribute information between users and items; and $\sigma$ is the sigmoid function. We employ a hyper-parameter $\alpha$ in the range of $\left[0,1\right]$ to balance the two strategies, \begin{equation} \boldsymbol{E}_{u,i} = \alpha \boldsymbol{E}_{u,i}^1 + (1 - \alpha) \boldsymbol{E}_{u,i}^2. \label{gen} \end{equation} The loss function for training the potential interaction generator is as follows, \begin{equation} \mathcal{L}_{gen} = \left \| \boldsymbol{E} - \boldsymbol{A} \right \|^2.
\label{gen_loss} \end{equation} We put the generated edges into the adjacency matrix by setting a threshold $t$, \begin{equation} \boldsymbol{\hat{A}}\left[ u, i\right]= \begin{cases} 1,& \text{if $\boldsymbol{E}_{u,i}$ > $t$;}\\ 0,& \text{otherwise.} \end{cases} \label{eq:new_A} \end{equation} where $\boldsymbol{\hat{A}}$ is the generated adjacency matrix. We utilize $\boldsymbol{\hat{A}}$ to augment the interaction, which is beneficial to alleviate the cold-start problem. \subsubsection{Model Prediction} In order to model the representation in a more fine-grained way, we concatenate the final $L$-th layer graph embedding of user $u$ with its corresponding attribute embedding as follows, \begin{equation} \boldsymbol{f}_u = \boldsymbol{e}_u^{(L)} \oplus \boldsymbol{z}_u \ , \label{eq:final} \end{equation} where $\oplus$ denotes concatenation operation. Analogously, we can obtain the final embedding $\boldsymbol{f}_i$ of item $i$. We adopt the inner product to estimate the user's preference towards the target item, \begin{equation} \hat{y}_{ui} = \boldsymbol{f}_u^T \boldsymbol{f}_i \ . \label{eq:prediction} \end{equation} We utilize the Bayesian Personalized Ranking (BPR) loss \cite{rendle2009bpr} to optimize model parameters, which is a pairwise loss that encourages the prediction of an observed interaction to be assigned higher than its unobserved ones. The prediction loss is defined as follows, \begin{equation} \mathcal{L}_{pre} = \sum_{(u,i,j) \in \mathcal{D}} -{\rm log}\sigma(\hat y_{ui} - \hat y_{uj}), \label{eq:bpr_loss} \end{equation} where $\mathcal{D}=\{(u,i,j)|(u,i) \in \mathcal{D}^+, (u,j) \in \mathcal{D}^- \}$ denotes the pairwise training data; $\mathcal{D}^+$ denotes the observed interactions and $\mathcal{D}^-$ denotes the unobserved interactions. $\sigma$ is the sigmoid function. 
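The graph enhanced generator's blend-and-threshold rule can be sketched as follows. The score matrices below are made-up stand-ins for the sigmoid-squashed weighted inner products $\boldsymbol{E}^1$ and $\boldsymbol{E}^2$, and the values of $\alpha$ and $t$ are illustrative.

```python
# Made-up potential-interaction scores for 2 users x 3 items, already in
# [0, 1]: E1 from graph structure, E2 from attribute similarity.
E1 = [[0.9, 0.2, 0.6],
      [0.1, 0.8, 0.4]]
E2 = [[0.7, 0.3, 0.9],
      [0.2, 0.9, 0.1]]

def augment(E1, E2, alpha=0.5, t=0.6):
    """Blend the two strategies with alpha, then threshold at t to
    obtain the augmented adjacency matrix A-hat."""
    A_hat = []
    for row1, row2 in zip(E1, E2):
        blended = [alpha * a + (1 - alpha) * b
                   for a, b in zip(row1, row2)]
        A_hat.append([1 if e > t else 0 for e in blended])
    return A_hat

A_hat = augment(E1, E2)
# With alpha=0.5 and t=0.6, user 0 gains edges to items 0 and 2,
# and user 1 gains an edge to item 1.
```

The thresholded matrix then replaces the sparse observed adjacency when propagating embeddings, which is how the augmentation counteracts cold-start sparsity.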
\subsection{Contrastive Regularization} Optimizing a ranking-motivated loss \cite{Hao0FX020, bojchevski2018deep} is effective to capture the relationships between each pair of training samples. Contrastive learning \cite{hadsell2006dimensionality} is one kind of ranking-motivated loss. As an extension of the Information Maximization (InfoMax) principle \cite{linsker1988self}, contrastive learning learns representations by maximizing the Mutual Information (MI), i.e., contrasting positive pairs with corresponding negative-sampled pairs. Our problem has a similar learning objective: to pull the latent attribute embeddings in the same cluster together while pushing the latent attribute embeddings in different clusters far away from each other. We design a task-wise contrastive regularization based on the attribute embeddings to enhance the latent clustering knowledge. More specifically, we treat attributes in the same task as positive pairs, denoted by $\{(\boldsymbol{z}_u, \boldsymbol{z}_{u'})|u, u' \in \boldsymbol{C}_k\}$, and attributes in different tasks as negative pairs, denoted by $\{(\boldsymbol{z}_u, \boldsymbol{z}_{v})|u \in \boldsymbol{C}_k, v \notin \boldsymbol{C}_k\}$. The supervision of positive pairs retains consistency within the same task. Meanwhile, the supervision of negative pairs enhances the discriminatory capability across different tasks. Following the contrastive design paradigm \cite{chen2020simple}, we propose the contrastive regularization to maximize MI task-wise as follows, \begin{equation} \begin{aligned} \mathcal{L}_{MI} = \sum_{k = 1}^K \sum_{u \in \boldsymbol{C}_k}-{\rm{log}}\frac{{\rm{exp}}({\rm{sim}}(z_u, z_{u'})/\tau)}{\sum_{v \notin \boldsymbol{C}_k}{\rm{exp}}({\rm{sim}}(z_u, z_v)/\tau)} , \label{eq:con_loss} \end{aligned} \end{equation} where ${\rm{sim}} (\cdot)$ is the function to measure the similarity between two vectors and $\tau$ is the hyper-parameter for softmax temperature.
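Instantiating the regularization with cosine similarity as ${\rm{sim}}(\cdot)$, a stdlib-only sketch looks as follows; the toy embeddings and cluster assignments are assumptions for illustration only.

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def contrastive_loss(z, clusters, tau=0.5):
    """Task-wise InfoNCE-style loss: for each user u in cluster C_k,
    contrast an in-cluster positive u' against out-of-cluster negatives."""
    total = 0.0
    for members in clusters:
        for u in members:
            pos = [cosine(z[u], z[v]) for v in members if v != u][0]
            negs = [cosine(z[u], z[v])
                    for other in clusters if other is not members
                    for v in other]
            num = math.exp(pos / tau)
            den = sum(math.exp(s / tau) for s in negs)
            total += -math.log(num / den)
    return total

# Toy attribute embeddings: users 0,1 are alike, and so are users 2,3.
z = {0: [1.0, 0.1], 1: [0.9, 0.0],
     2: [0.0, 1.0], 3: [0.1, 0.9]}
aligned = contrastive_loss(z, clusters=[[0, 1], [2, 3]])
shuffled = contrastive_loss(z, clusters=[[0, 2], [1, 3]])
# Well-aligned task assignments give a smaller loss than shuffled ones.
```

The comparison at the end illustrates the regularization's role: it rewards cluster assignments in which in-task embeddings are mutually similar and cross-task embeddings are not.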
There are many choices of ${\rm{sim}} (\cdot)$, such as cosine similarity, dot product, etc. We observe that using the cosine similarity can achieve good results in our model. The summation in the denominator can be approximated by in-batch negative samples with mini-batch training. \begin{algorithm}[t] \caption{Meta-training of TMAG} \label{training} \begin{algorithmic}[1] \Require the graph $\mathcal{G}$; learning rates $\alpha$ and $\beta$; \State Randomly initialize the two attribute-oriented autoencoders, graph enhanced generator and other parameters (e.g., $\boldsymbol{\theta}$) \State Fix other parts, train the attribute-oriented autoencoders until convergence by Equation \eqref{auto_loss} and obtain $\boldsymbol{z}_u$, $\boldsymbol{z}_i$ \State Construct aligned training tasks $\mathcal{T}^{tr}$, each task $T_{k} \in \mathcal{T}^{tr}$ consisting of a support set $\mathcal{S}_k^{tr}$ and a query set $\mathcal{Q}_k^{tr}$ \While{not convergence} \State Randomly select a task $T_k \in \mathcal{T}^{tr}$ \State Compute $\boldsymbol{e}_u^{(L)}$, $\boldsymbol{e}_i^{(L)}$ with $\mathcal{G} $ by Equation \eqref{eq:u_agg} \State Calculate the final embedding $\boldsymbol{f}_u$, $\boldsymbol{f}_i$ by Equation \eqref{eq:final} \State Evaluate $\mathcal{L}_{T_k}(\boldsymbol{\theta}_k, \boldsymbol{S}_k^{tr})$ by Equation \eqref{eq:total_loss} \State Local update $\boldsymbol{\theta}_k' = \boldsymbol{\theta}_k - \alpha \nabla_{\boldsymbol{\theta}}\mathcal{L}_{T_k}(\boldsymbol{\theta}_k, \boldsymbol{S}_k^{tr})$ \State Evaluate $\mathcal{L}_{T_k}(\boldsymbol{\theta}_k', \boldsymbol{Q}_k^{tr})$ with query set $\boldsymbol{Q}_k^{tr}$ \State Global update $\boldsymbol{\theta} = \boldsymbol{\theta} - \beta \nabla_{\boldsymbol{\theta}}\mathcal{L}_{T_k}(\boldsymbol{\theta}_k', \boldsymbol{Q}_k^{tr})$ \State Generate $\boldsymbol{\hat{A}}$ by Equation \eqref{eq:new_A} and update $\mathcal{G}$ \EndWhile \end{algorithmic} \end{algorithm} \subsection{Optimization} To improve 
recommendation in cold-start scenarios, we leverage a multi-task training strategy to jointly optimize the main recommendation task (cf. Equation \eqref{eq:bpr_loss}), the interaction generation task (cf. Equation \eqref{gen_loss}) and the self-supervised learning task (cf. Equation \eqref{eq:con_loss}). \begin{equation} \mathcal{L} = \mathcal{L}_{pre} + \lambda_1 \mathcal{L}_{gen} + \lambda_2 \mathcal{L}_{MI} + \lambda_3 || \boldsymbol{\Theta} ||^2, \label{eq:total_loss} \end{equation} where $\boldsymbol{\Theta}$ denotes all trainable parameters in $\mathcal{L}_{pre}$ and $\mathcal{L}_{gen}$, since $\mathcal{L}_{MI}$ adds no additional parameters; $\lambda_1$, $\lambda_2$ and $\lambda_3$ weight the different losses. Following the MAML framework, we perform a few second-order gradient descent updates in the meta-training process to obtain suitable initial parameters $\boldsymbol{\theta}$ for users and items. The representations of new users and items can then be rapidly adapted with only a few interactions. The whole algorithm is summarized in Algorithm \ref{training}. Taking the $k$-th task $T_k$ as an example, we perform a few gradient descent updates according to the loss function defined in Equation \eqref{eq:total_loss}, i.e., the inner-loop update. For simplicity, we perform one update on $\boldsymbol{S}_k^{tr}$ as follows: \begin{equation} \boldsymbol{\theta}_k' = \boldsymbol{\theta}_k - \alpha \nabla_{\boldsymbol{\theta}}\mathcal{L}_{T_k}(\boldsymbol{\theta}_k, \boldsymbol{S}_k^{tr}), \label{eq:inner} \end{equation} where $\mathcal{L}_{T_k}$ denotes the loss function of $T_k$, $\alpha$ is the inner-loop learning rate, and $\boldsymbol{\theta}_k'$ denotes the updated parameters.
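The local (support-set) and global (query-set) updates of Algorithm \ref{training} can be sketched as follows. This is a minimal first-order illustration with a scalar placeholder loss; the real model uses the full multi-task loss of Equation \eqref{eq:total_loss} and second-order meta-gradients.

```python
import numpy as np

def loss(theta, data):
    # Placeholder task loss: squared distance to the task's optimum.
    return 0.5 * np.sum((theta - data) ** 2)

def grad(theta, data):
    return theta - data                  # gradient of the placeholder loss

def meta_train(tasks, theta, alpha=0.1, beta=0.05, epochs=100):
    """tasks: list of (support, query) pairs; theta: shared initialization."""
    for _ in range(epochs):
        support, query = tasks[np.random.randint(len(tasks))]
        # Inner loop: adapt task-specific parameters on the support set.
        theta_k = theta - alpha * grad(theta, support)
        # Outer loop: update the shared initialization using the query-set
        # loss of the adapted parameters (first-order approximation).
        theta = theta - beta * grad(theta_k, query)
    return theta
```

After meta-training, the returned initialization sits close to the individual task optima, so a single inner-loop step already adapts it well to a new task — the behavior exploited for new users and items.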
In the outer-loop process, we calculate the query set loss on $\boldsymbol{Q}_k^{tr}$ to update the initial parameter $\boldsymbol{\theta}$ as: \begin{equation} \boldsymbol{\theta} = \boldsymbol{\theta} - \beta \nabla_{\boldsymbol{\theta}}\mathcal{L}_{T_k}(\boldsymbol{\theta}_k', \boldsymbol{Q}_k^{tr}), \label{eq:outer} \end{equation} where $\beta$ is the outer-loop learning rate. \section{Experiments} In this section, we conduct extensive experiments to answer the following four questions: \begin{itemize}[leftmargin=*] \item \textbf{RQ1}: How does TMAG perform compared to state-of-the-art cold-start recommendation methods? \item \textbf{RQ2}: How do different components of TMAG affect the recommendation performance? \item \textbf{RQ3}: How is TMAG impacted by the sparsity of the support set and by its hyper-parameters? \item \textbf{RQ4}: Can TMAG provide qualitative analyses of the learned representations with regard to aligned tasks? \end{itemize} \subsection{Experimental Setup} \subsubsection{Datasets} We conduct extensive experiments on the following three datasets, which are provided by \cite{MetaHIN}. \begin{itemize}[leftmargin=*] \item DBook\footnote{https://book.douban.com}: This is a widely used dataset for book recommendation obtained from Douban. We divide books into existing and new items based on their publishing year, with a roughly 8:2 ratio. Due to the lack of temporal information about users, we randomly pick 80\% of users as existing users and the remaining 20\% as new users. \item MovieLens\footnote{https://grouplens.org/datasets/movielens/}: This is a widely used benchmark dataset published by GroupLens for movie recommendation, in which the movies were released from 1919 to 2000. We divide movies into old items (released before 1998) and new items (released between 1998 and 2000) with a roughly 8:2 ratio.
To designate new users in the MovieLens dataset, we rank users by their first rating timestamp and consider the most recent 20\% of users as new. \item Yelp\footnote{https://www.yelp.com/dataset/challenge}: This dataset is from Yelp businesses and is widely used for recommendation. Users who joined Yelp before May 1, 2014 are considered existing users, while the rest are considered new users. Similarly, we divide businesses into old and new based on the date they were first rated. The ratio of existing users to new users is approximately 8:2. \end{itemize} In a nutshell, for each dataset, we split users and items into two groups, existing and new, based on the first user interaction time and the first item release time. Then we separate each dataset into meta-training and meta-testing data. We consider existing users' feedback on existing items as meta-training data and the rest as meta-testing data. We randomly select 10\% of the meta-training data as the validation set. The meta-testing data is divided into three tasks: (1) Task1: Recommend existing items for new users; (2) Task2: Recommend new items for existing users; (3) Task3: Recommend new items for new users. We only add the additional recall method in cold-start scenarios to learn better initial embeddings for new users and items. Therefore, the recommendation results of warm users and items are not influenced, and we do not report their performance. On these three datasets, users' ratings on items are explicit and range from 1 to 5. Since we evaluate on implicit feedback, following previous work \cite{kang2018self, sachdeva2019sequential}, we regard ratings higher than 3 as positive feedback. The dataset statistics are summarized in Table \ref{tab:data}. Following previous work \cite{MeLU, MetaHIN, MvDGAE}, we filter out users with more than 100 or fewer than 13 interactions.
For each user $u$, we randomly select 10 interacted items as the query set, and the rest items are used as the support set. We will study how TMAG is impacted by the size of the support set in Section \ref{sec:sparsity}. \begin{table}[t] \small \setlength{\abovecaptionskip}{0.1cm} \setlength{\belowcaptionskip}{-0cm} \caption{Statistics of Preprocessed Datasets} \label{tab:data} \begin{tabular}{c|c|c|c} \hline & DBook & MovieLens & Yelp \\ \hline \hline Users &10,592 &6,040 & 51,624 \\ Items &20,934 &3,881 & 34,199 \\ Interactions & 649,381 & 1,000,209 & 1,301,869 \\ Sparsity & 99.71\% & 95.73\% & 92.63\% \\ \hline User contents & \begin{tabular}[c]{@{}c@{}}Group,\\ Location\end{tabular} & \begin{tabular}[c]{@{}c@{}}Age, Occupation,\\ Gender, Zip code\end{tabular} & \begin{tabular}[c]{@{}c@{}}Fans,\\ Friends\end{tabular} \\ \hline Item contents & \begin{tabular}[c]{@{}c@{}}Publisher,\\ Author, Year\end{tabular} & \begin{tabular}[c]{@{}c@{}}Actor, Year,\\ Director, Genre\end{tabular} & \begin{tabular}[c]{@{}c@{}}Category, City, \\ Postal code, State\end{tabular} \\ \hline \end{tabular} \end{table} \subsubsection{Baselines} We compare our model against three different types of baselines, namely, traditional methods (MF and NeuMF), GNN-based methods (NGCF, GraphSAINT and LightGCN), and cold-start methods (MeLU, MetaHIN and CLCRec). For traditional methods and GNN-based methods, we train the base model on meta-training data. We further fine-tune the base model using support sets from the meta-testing data to adapt to the cold-start scenarios. \begin{itemize}[leftmargin=*] \item MF \cite{MF}: It utilizes conventional matrix factorization optimized by BPR loss for item recommendation, which only leverages direct interactions between users and items as the target value of the interaction function. 
\item NeuMF \cite{NCF}: It is the most representative deep-neural-network-based method for collaborative filtering, which unifies MLP and matrix factorization in a general framework to learn the embeddings of users and items. \item NGCF \cite{NGCF}: It is a highly effective collaborative filtering method based on GCN that captures the collaborative signal by propagating graph embeddings. It injects the collaborative signal into the embedding process in an explicit manner. \item GraphSAINT \cite{zeng2019graphsaint}: It is a general GNN model that builds a complete GCN from sampled subgraphs. It proposes a normalization technique to eliminate bias and sampling algorithms for variance reduction. \item LightGCN \cite{LightGCN}: It improves NGCF by discarding the feature transformation and nonlinear activation, learning user and item embeddings by linearly propagating them on the user-item interaction graph. \item MeLU \cite{MeLU}: It applies MAML with a few steps of gradient updates to alleviate the user cold-start problem. It only considers user-item interactions and features. \item MetaHIN \cite{MetaHIN}: It is a meta-learning method that leverages heterogeneous information networks (HINs) to capture rich semantics via higher-order graph structures. \item CLCRec \cite{wei2021contrastive}: It maximizes the mutual information between collaborative embeddings of users and items. Besides, it maximizes the mutual information between collaborative embeddings and feature representations of items. \end{itemize} \begin{table*}[t] \small \setlength{\abovecaptionskip}{0.2cm} \setlength{\belowcaptionskip}{-0cm} \caption{Comparison between our proposed model and other baselines at top-10. R is short for \emph{Recall}, ND for \emph{NDCG}, M for \emph{MAP}.
Boldface denotes the highest score and underlines denotes the best performing baselines.} \label{data} \centering \begin{tabular}{c||c||ccc||ccc||ccc} \hline \multirow{2}{*}{Scenario} & \multirow{2}{*}{Model} & \multicolumn{3}{c||}{DBook} & \multicolumn{3}{c||}{MovieLens} & \multicolumn{3}{c}{Yelp} \\ \cline{3-11} & & R@10 & ND@10 & M@10 & R@10 & ND@10 & M@10 & R@10 & ND@10 & M@10 \\ \hline \multirow{10}{*}{\begin{tabular}[c]{@{}c@{}}Task1\\ (Existing items \\ for new users)\end{tabular}} & MF & 0.0834 & 0.0941 & 0.0388 & 0.1403 & 0.1588 & 0.0718 & 0.0517 & 0.0552 & 0.0193 \\ & NeuMF & 0.0852 & 0.0955 & 0.0395 & 0.1439 & 0.1612 & 0.0729 & 0.0520 & 0.0557 & 0.0196 \\ \cline{2-11} & NGCF & 0.0862 & 0.0980 & 0.0417 & 0.1524 & 0.1729 & 0.0797 & 0.0544 & 0.0607 & 0.0223 \\ & GraphSAINT & 0.0998 & 0.1149 & 0.0497 & 0.1980 & 0.2233 & 0.1066 & {\ul 0.0634} & {\ul 0.0706} & {\ul 0.0268} \\ & LightGCN & {\ul 0.1007} & {\ul 0.1175} & {\ul 0.0516} & 0.2022 & 0.2279 & 0.1106 & 0.0630 & 0.0703 & 0.0266 \\ \cline{2-11} & MeLU & 0.0854 & 0.0969 & 0.0398 & 0.1452 & 0.1634 & 0.0742 & 0.0530 & 0.0564 & 0.0203 \\ & MetaHIN & 0.0860 & 0.0973 & 0.0399 & 0.1503 & 0.1685 & 0.0768 & 0.0536 & 0.0579 & 0.0211 \\ & CLCRec & 0.0998 & 0.1106 & 0.0465 & {\ul 0.2077} & {\ul 0.2387} & {\ul 0.1163} & 0.0551 & 0.0559 & 0.0192 \\ \cline{2-11} & $\textbf{TMAG}$ & $\boldsymbol{0.1046}^*$ & $\boldsymbol{0.1212}^*$ & $\boldsymbol{0.0530}^*$ & $\boldsymbol{0.2115}^*$ & $\boldsymbol{0.2451}^*$ & $\boldsymbol{0.1210}^*$ & $\boldsymbol{0.0665}^*$ & $\boldsymbol{0.0732}^*$ & $\boldsymbol{0.0276}^*$ \\ \cline{2-11} & \%Improv. 
& 3.87\% & 3.15\% & 2.71\% & 1.83\% & 2.68\% & 4.04\% & 4.89\% & 3.68\% & 2.99\% \\ \hline \hline \multirow{10}{*}{\begin{tabular}[c]{@{}c@{}}Task2\\ (New items for \\ existing users)\end{tabular}} & MF & 0.1278 & 0.1450 & 0.0667 & 0.2336 & 0.2706 & 0.1359 & 0.0554 & 0.0582 & 0.0214 \\ & NeuMF & 0.1292 & 0.1470 & 0.0680 & 0.2362 & 0.2734 & 0.1377 & 0.0562 & 0.0576 & 0.0206 \\ \cline{2-11} & NGCF & 0.1499 & 0.1691 & 0.0784 & 0.2666 & 0.3086 & 0.1630 & 0.0745 & 0.0816 & 0.0316 \\ & GraphSAINT & 0.1664 & 0.1861 & 0.0890 & 0.2729 & 0.3163 & 0.1682 & 0.0845 & {\ul 0.0938} & {\ul 0.0382} \\ & LightGCN & {\ul 0.1673} & {\ul 0.1868} & {\ul 0.0896} & {\ul 0.2768} & {\ul 0.3198} & {\ul 0.1715} & {\ul 0.0849} & 0.0933 & 0.0378 \\ \cline{2-11} & MeLU & 0.1370 & 0.1526 & 0.0705 & 0.2466 & 0.2891 & 0.1492 & 0.0584 & 0.0657 & 0.0256 \\ & MetaHIN & 0.1392 & 0.1549 & 0.0717 & 0.2509 & 0.2930 & 0.1518 & 0.0602 & 0.0677 & 0.0263 \\ & CLCRec & 0.1437 & 0.1620 & 0.0741 & 0.2417 & 0.2840 & 0.1455 & 0.0809 & 0.0844 & 0.0307 \\ \cline{2-11} & $\textbf{TMAG}$ & $\boldsymbol{0.1732}^*$ & $\boldsymbol{0.1969}^*$ & $\boldsymbol{0.0968}^*$ & $\boldsymbol{0.2871}^*$ & $\boldsymbol{0.3314}^*$ & $\boldsymbol{0.1791}^*$ & $\boldsymbol{0.0906}^*$ & $\boldsymbol{0.0998}^*$ & $\boldsymbol{0.0397}^*$ \\ \cline{2-11} & \%Improv. 
& 3.53\% & 5.41\% & 8.04\% & 3.72\% & 3.63\% & 4.43\% & 6.71\% & 6.40\% & 3.93\% \\ \hline \hline \multirow{10}{*}{\begin{tabular}[c]{@{}c@{}}Task3\\ (New items\\ for new users)\end{tabular}} & MF & 0.0983 & 0.1085 & 0.0457 & 0.2256 & 0.2649 & 0.1319 & 0.0280 & 0.0308 & 0.0109 \\ & NeuMF & 0.1026 & 0.1128 & 0.0471 & 0.2281 & 0.2650 & 0.1316 & 0.0288 & 0.0311 & 0.0109 \\ \cline{2-11} & NGCF & 0.1152 & 0.1252 & 0.0552 & 0.2366 & 0.2755 & 0.1388 & 0.0667 & 0.0687 & 0.0264 \\ & GraphSAINT & 0.1248 & 0.1340 & 0.0571 & 0.2537 & 0.2951 & 0.1519 & {\ul 0.0698} & {\ul 0.0717} & {\ul 0.0266} \\ & LightGCN & {\ul 0.1263} & {\ul 0.1363} & {\ul 0.0586} & {\ul 0.2568} & {\ul 0.2971} & {\ul 0.1536} & 0.0686 & 0.0704 & 0.0262 \\ \cline{2-11} & MeLU & 0.1056 & 0.1187 & 0.0497 & 0.2326 & 0.2700 & 0.1354 & 0.0328 & 0.0329 & 0.0113 \\ & MetaHIN & 0.1077 & 0.1200 & 0.0503 & 0.2346 & 0.2726 & 0.1369 & 0.0339 & 0.0337 & 0.0115 \\ & CLCRec & 0.1068 & 0.1195 & 0.0506 & 0.2162 & 0.2525 & 0.1244 & 0.0607 & 0.0627 & 0.0223 \\ \cline{2-11} & $\textbf{TMAG}$ & $\boldsymbol{0.1325}^*$ & $\boldsymbol{0.1427}^*$ & $\boldsymbol{0.0619}^*$ & $\boldsymbol{0.2649}^*$ & $\boldsymbol{0.3062}^*$ & $\boldsymbol{0.1612}^*$ & $\boldsymbol{0.0723}^*$ & $\boldsymbol{0.0749}^*$ & $\boldsymbol{0.0281}^*$ \\ \cline{2-11} & \%Improv. & 4.91\% & 4.70\% & 5.63\% & 3.15\% & 3.06\% & 4.95\% & 3.58\% & 4.46\% & 5.64\% \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item \hspace*{1cm} * indicates the improvements of TMAG over the best baseline are statistically significant (i.e., one-sample t-test with \emph{p} < 0.05) . \end{tablenotes} \end{table*} \subsubsection{Evaluation Metrics} Cold-start recommendation can be considered as making top-n recommendations for users. 
We select three widely used ranking metrics to evaluate the performance: Recall@$K$, Normalized Discounted Cumulative Gain (NDCG)@$K$ and Mean Average Precision (MAP)@$K$: \begin{itemize}[leftmargin=*] \item \emph{Recall}: It measures the proportion of actually interacted items that appear within the top-$K$ ranking list. \item \emph{NDCG}: It is a standard ranking metric that reflects both the relevance and the position of each recommended item. \item \emph{MAP}: It is the mean of the average precision scores over all users, where precision is the proportion of items in the recommended list that were actually interacted with. \end{itemize} Here we use $K = 10$. The two-tailed unpaired t-test~\cite{bhattacharya2002median} is performed to detect significant differences between TMAG and the best baseline. \subsubsection{Implementation Details} For MF and NeuMF, we use the codes in the NeuRec\footnote{https://github.com/wubinzzu/NeuRec} library. The source codes of NGCF, GraphSAINT, LightGCN, MeLU, MetaHIN and CLCRec have been released, and we follow their originally suggested settings. We change parts of the input data and evaluation to fit our experimental setting. We implement our TMAG model in TensorFlow\footnote{https://www.tensorflow.org/}. For a fair comparison, the embedding size of users and items is fixed to 64 for all models. We train all models with the Adam optimizer \cite{Adam} and use the Xavier initializer \cite{glorot2010understanding} to initialize model parameters. We also apply an early stopping strategy to prevent over-fitting.
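To make the evaluation protocol concrete, the three ranking metrics can be sketched as follows. This is a minimal single-user illustration using one common convention for AP@$K$ (normalizing by $\min(|{\rm relevant}|, K)$), not the exact evaluation code used in our experiments.

```python
import numpy as np

def recall_at_k(ranked, relevant, k=10):
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked, relevant, k=10):
    dcg = sum(1.0 / np.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    # Ideal DCG: all relevant items placed at the top of the list.
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / idcg

def ap_at_k(ranked, relevant, k=10):
    hits, score = 0, 0.0
    for i, item in enumerate(ranked[:k]):
        if item in relevant:
            hits += 1
            score += hits / (i + 1)       # precision at this cut-off
    # MAP@K is this average precision averaged over all test users.
    return score / min(len(relevant), k)
```

For a perfect ranking (all relevant items first), NDCG@$K$ equals 1; misplacing relevant items lower in the list discounts them logarithmically.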
In terms of hyper-parameters, we apply a grid search: the inner-loop and outer-loop learning rates are tuned amongst $\{0.0001, 0.0005, 0.001, 0.005\}$; the cluster number ranges from 10 to 50 with a step of 10; the number of inner updates varies from 1 to 5; the temperature coefficient is searched in $\{0.1, 0.2, 0.5, 1.0\}$; the augmentation threshold is tuned amongst $\{0.5, 0.6, 0.7, 0.8, 0.9\}$; and $\lambda_1$, $\lambda_2$ and $\lambda_3$ are searched in $\{0.005, 0.01, 0.05, 0.1, 0.5, 1.0\}$. \subsection{Performance Comparison (RQ1)} Table \ref{data} shows the top-10 recommendation performance on the three datasets. From the table, we have the following observations: \begin{itemize}[leftmargin=*] \item Among the traditional baselines, the performance of MF is slightly lower than that of the deep learning method NeuMF, highlighting the critical role of nonlinear feature interactions between user and item embeddings. However, neither MF nor NeuMF models the generalization to new users and items, which leads to poor performance in cold-start scenarios. \item The GNN-based methods achieve great improvements over the traditional methods on all datasets and scenarios, especially LightGCN and GraphSAINT. Because we focus on the incomplete cold-start problem, GNN-based models are capable of improving cold-start user and item representations by exploring high-order relationships. On the Yelp dataset, we observe that GraphSAINT slightly beats LightGCN. A possible reason is that the limited data on Yelp causes over-fitting, which restricts the performance of the model, while GraphSAINT alleviates this problem by utilizing subgraphs to promote robustness. \item Among the cold-start baselines, CLCRec achieves competitive performance because it captures more information related to the collaborative signal by maximizing the mutual information.
As for the meta-learning-based methods, they show superior performance to the traditional methods owing to a well-designed training process that results in a personalized parameter initialization for new users. MetaHIN surpasses MeLU in every scenario because it incorporates multiple semantics obtained from higher-order structures such as meta-paths. However, since these methods ignore the high-order graph structure, their performance is inferior to that of the GNN-based methods. \item TMAG consistently outperforms all baseline methods in all scenarios across the datasets. For instance, TMAG improves over the best baseline w.r.t. Recall@10 by 3.53-4.91\%, 1.83-3.72\%, and 3.58-6.71\% on the three datasets. The reasons are as follows: 1) The aligned tasks extract latent clustering knowledge among similar users that can be quickly adapted to new users. 2) The graph neural network captures high-order user-item relationships, whereas MeLU and MetaHIN ignore the graph structure information in the user-item graph; in addition, the interaction augmentation mitigates the impact of sparse interactions. 3) The task-wise contrastive regularization enhances the latent clustering prior knowledge. We analyze these components in detail in the ablation study.
\end{itemize} \begin{table}[] \small \setlength{\abovecaptionskip}{0.1cm} \setlength{\belowcaptionskip}{-0.1cm} \caption{Effect of the number of aligned tasks.} \label{cluster} \setlength{\tabcolsep}{1.3mm}{ \begin{tabular}{c|cc|cc|cc} \hline \multirow{2}{*}{Task1} & \multicolumn{2}{c|}{DBook} & \multicolumn{2}{c|}{MovieLens} & \multicolumn{2}{c}{Yelp} \\ & R@10 & ND@10 & R@10 & ND@10 & R@10 & ND@10 \\ \hline \hline TMAG-1 & 0.1015 & 0.1178 & 0.2082 & 0.2392 & 0.0636 & 0.0714 \\ TMAG-10 & 0.1022 & 0.1185 & 0.2094 & 0.2413 & 0.0648 & 0.0721 \\ TMAG-20 & 0.1038 & 0.1193 & 0.2106 & 0.2431 & 0.0662 & 0.0727 \\ TMAG-30 & 0.1039 & 0.1202 & 0.2113 & 0.2443 & \textbf{0.0665} & \textbf{0.0732} \\ TMAG-40 & \textbf{0.1046} & \textbf{0.1212} & \textbf{0.2115} & \textbf{0.2451} & 0.0664 & 0.0731 \\ TMAG-50 & 0.1041 & 0.1207 & 0.2111 & 0.2439 & 0.0660 & 0.0729 \\ \hline \end{tabular}} \end{table} \begin{table}[t] \small \setlength{\abovecaptionskip}{0.1cm} \setlength{\belowcaptionskip}{-0.1cm} \caption{Effect of the autoencoder.} \label{au} \setlength{\tabcolsep}{1.3mm}{ \begin{tabular}{c|cc|cc|cc} \hline \multirow{2}{*}{Task1} & \multicolumn{2}{c|}{DBook} & \multicolumn{2}{c|}{MovieLens} & \multicolumn{2}{c}{Yelp} \\ & R@10 & ND@10 & R@10 & ND@10 & R@10 & ND@10 \\ \hline \hline w/o AE & 0.1030 & 0.1186 & 0.2101 & 0.2443 & 0.0653 & 0.0727 \\ w/ AE & \textbf{0.1046} & \textbf{0.1212} & \textbf{0.2115} & \textbf{0.2451} & \textbf{0.0665} & \textbf{0.0732} \\ \hline \end{tabular}} \end{table} \subsection{Ablation Study (RQ2)} We perform an ablation study to investigate the effectiveness of the proposed TMAG under different circumstances. In the following experiments, we use task1 as the default task and report its performance. The evaluation metrics are R@10 and ND@10. \subsubsection{Effect of Task Alignment} To investigate whether TMAG can benefit from task alignment, we search the number of clusters in the range of $\{1, 10, 20, 30, 40, 50\}$. 
The experimental results are summarized in Table \ref{cluster}, where TMAG-1 denotes the variant that does not align tasks and regards each user as a single task. Jointly analyzing Table \ref{data} and Table \ref{cluster}, we have the following observations: \begin{itemize}[leftmargin=*] \item Increasing the number of clusters enhances the recommendation results. TMAG with task alignment is consistently superior to TMAG-1 in all cases. We attribute the improvement to the fine-grained modeling of similar user groups. \item TMAG-40 achieves the best result on MovieLens and DBook, and TMAG-30 performs the best on Yelp. A possible reason is that the user side information on Yelp is not as diverse as in the other two datasets, so fewer clusters suffice to extract the attribute knowledge. \item TMAG consistently outperforms the other approaches as the number of clusters is varied across the three datasets. This demonstrates the effectiveness of TMAG in extracting latent clustering knowledge and adapting the globally shared meta-knowledge to it, so that the model can be rapidly adapted to new users.
\end{itemize} \begin{table}[t] \small \setlength{\abovecaptionskip}{0.1cm} \setlength{\belowcaptionskip}{-0.1cm} \caption{Effect of the augmentation variants.} \label{aug} \setlength{\tabcolsep}{1.3mm}{ \begin{tabular}{c|cc|cc|cc} \hline \multirow{2}{*}{Task1} & \multicolumn{2}{c|}{DBook} & \multicolumn{2}{c|}{MovieLens} & \multicolumn{2}{c}{Yelp} \\ & R@10 & ND@10 & R@10 & ND@10 & R@10 & ND@10 \\ \hline \hline TMAG-ga & 0.1034 & 0.1198 & 0.2101 & 0.2438 & 0.0648 & 0.0721 \\ TMAG-a & 0.1044 & 0.1209 & 0.2113 & 0.2449 & 0.0661 & 0.0728 \\ TMAG-g & 0.1035 & 0.1201 & 0.2103 & 0.2441 & 0.0650 & 0.0725 \\ TMAG & \textbf{0.1046} & \textbf{0.1212} & \textbf{0.2115} & \textbf{0.2451} & \textbf{0.0665} & \textbf{0.0732} \\ \hline \end{tabular}} \begin{tablenotes} \footnotesize \item[$^*$]Subscript notation: TMAG-ga removes the graph and attribute augmentation, TMAG-a removes the attribute augmentation, and TMAG-g removes the graph augmentation. \end{tablenotes} \vspace{-1em} \end{table} \begin{table}[t] \small \setlength{\abovecaptionskip}{0.1cm} \setlength{\belowcaptionskip}{-0.1cm} \caption{Effect of the contrastive augmentation.} \label{con} \setlength{\tabcolsep}{1.3mm}{ \begin{tabular}{c|cc|cc|cc} \hline \multirow{2}{*}{Task1} & \multicolumn{2}{c|}{DBook} & \multicolumn{2}{c|}{MovieLens} & \multicolumn{2}{c}{Yelp} \\ & R@10 & ND@10 & R@10 & ND@10 & R@10 & ND@10 \\ \hline \hline w/o Con & 0.1034 & 0.1196 & 0.2104 & 0.2439 & 0.0652 & 0.0720 \\ w/ Con & \textbf{0.1046} & \textbf{0.1212} & \textbf{0.2115} & \textbf{0.2451} & \textbf{0.0665} & \textbf{0.0732} \\ \hline \end{tabular}} \end{table} \subsubsection{Effect of Autoencoder} We attempt to understand how the autoencoder facilitates the clustering for aligned tasks. We consider two methods of clustering: 1) directly using the side information of users and items by one-hot encoding; 2) employing an autoencoder to obtain the attribute embeddings. Table \ref{au} describes the results. 
We can observe that TMAG with the autoencoder performs better. We attribute the improvement to the effective attribute representations learned by the autoencoder. \subsubsection{Effect of Interaction Augmentation} In TMAG, we combine the graph structure information and attribute information to augment the adjacency matrix of the graph, alleviating the problem of lacking interactions. To investigate its rationality, we test different settings by removing one type of augmentation or both. Table \ref{aug} shows the results. We can observe that the best setting uses both types of augmentation; removing either type degrades the performance. Moreover, the graph augmentation consistently outperforms the attribute augmentation, which suggests that the collaborative signal captured from the graph structure is more beneficial than attribute information for mining potential interactions. Both augmentation variants are superior to the variant without augmentation, indicating that the interaction generator helps to learn more expressive representations. \subsubsection{Effect of Contrastive Regularization} To study the effectiveness of the contrastive regularization, we test our model in two cases: 1) TMAG without contrastive regularization; 2) TMAG with contrastive regularization. Table \ref{con} shows the results. We can observe that TMAG with the contrastive regularization performs better, which indicates that the task-wise attribute contrastive learning enhances the latent clustering prior knowledge and enables the model to learn more informative attribute representations.
\begin{comment} \begin{figure}[t] \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.3cm} \centering \centering \subfigcapskip=-6.5pt \subfigure[DBook]{ \begin{minipage}[c]{0.15\textwidth} \centering \includegraphics[width=\linewidth]{figures/inner-db-sig.pdf} \label{fig:ml1m-loss} \end{minipage}% } \subfigcapskip=-6pt \subfigure[MovieLens]{ \begin{minipage}[c]{0.15\textwidth} \centering \includegraphics[width=\linewidth]{figures/inner-ml-sig.pdf} \label{fig:yelp-loss} \end{minipage}% } \subfigcapskip=-6pt \subfigure[Yelp]{ \begin{minipage}[c]{0.15\textwidth} \centering \includegraphics[width=\linewidth]{figures/inner-ye-sig.pdf} \label{fig:amazon-loss} \end{minipage}% } \caption{Impacts of inner update steps on three datasets.} \label{fig:step} \end{figure} \end{comment} \begin{figure}[t] \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.3cm} \centering \centering \subfigcapskip=-6pt \subfigure[DBook]{ \begin{minipage}[c]{0.15\textwidth} \centering \includegraphics[width=\linewidth]{latex/figures/sparsity-db-sig.pdf} \label{fig:ml1m-loss} \end{minipage}% } \subfigcapskip=-6pt \subfigure[MovieLens]{ \begin{minipage}[c]{0.15\textwidth} \centering \includegraphics[width=\linewidth]{latex/figures/sparsity-ml-sig.pdf} \label{fig:yelp-loss} \end{minipage}% } \subfigcapskip=-6pt \subfigure[Yelp]{ \begin{minipage}[c]{0.15\textwidth} \centering \includegraphics[width=\linewidth]{latex/figures/sparsity-ye-sig.pdf} \label{fig:amazon-loss} \end{minipage}% } \caption{Impacts of the size of support sets.} \label{fig:sparsity} \end{figure} \begin{figure}[t] \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.3cm} \centering \centering \subfigcapskip=-6.5pt \subfigure[DBook]{ \begin{minipage}[c]{0.15\textwidth} \centering \includegraphics[width=\linewidth]{latex/figures/coe-db-sig.pdf} \label{fig:ml1m-loss} \end{minipage}% } \subfigcapskip=-8pt \subfigure[MovieLens]{ \begin{minipage}[c]{0.15\textwidth} \centering 
\includegraphics[width=\linewidth]{latex/figures/coe-ml-sig.pdf} \label{fig:yelp-loss} \end{minipage}% } \subfigcapskip=-6pt \subfigure[Yelp]{ \begin{minipage}[c]{0.15\textwidth} \centering \includegraphics[width=\linewidth]{latex/figures/coe-ye-sig.pdf} \label{fig:amazon-loss} \end{minipage}% } \caption{Impacts of augmentation coefficient.} \label{coe} \end{figure} \subsection{Model Sensitivity (RQ3)} \subsubsection{Impacts of the Size of Support Sets} \label{sec:sparsity} Interactions are crucial for cold-start recommendation, and data sparsity influences recommendation quality. To investigate the effect of support-set sparsity (i.e., data sparsity), we vary the size of the support set of interacted items from 5 to 70. Since users with more than 70 interactions are too few, we do not analyze them. The results are shown in Figure \ref{fig:sparsity}. Overall, all methods perform better with a larger support set. However, when the support set is reduced, the performance degradation of TMAG is the smallest among all approaches. TMAG thus performs well regardless of the length of the interaction history, even when the support set is small. Note that the longer the historical interactions, the fewer such users there are on all datasets; due to the limited sample size, the performance in this regime can be unstable. \subsubsection{Impacts of Augmentation Coefficient} We study the performance of interaction augmentation under various coefficients in order to evaluate its robustness. Experiments are conducted in a well-controlled setting on the three datasets by varying $\alpha$ in Equation \eqref{gen} in the range $\{0, 0.2, 0.4, 0.6, 0.8, 1.0\}$. The results are illustrated in Figure \ref{coe}. We can observe that TMAG generalizes well to different coefficients, which demonstrates the robustness of the proposed framework.
Between the two augmentation strategies, graph augmentation plays the dominant role. The model performs best on DBook and MovieLens when $\alpha$ is 0.8, while $\alpha$ is 1 on the Yelp dataset. This may be because the attribute information on Yelp is too scarce, which reduces the reliability of attribute augmentation. \begin{figure}[t] \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.1cm} \centering \subfigcapskip=-6pt \subfigure[NGCF]{ \begin{minipage}[c]{0.24\textwidth} \centering \includegraphics[width=0.9\linewidth]{latex/figures/NGCF_node.pdf} \label{fig:tsne_RAE} \end{minipage}% }\hspace{-5mm} \subfigcapskip=-6pt \subfigure[TMAG]{ \begin{minipage}[c]{0.24\textwidth} \centering \includegraphics[width=0.9\linewidth]{latex/figures/TMCG_node.pdf} \label{fig:tsne_SCVG} \end{minipage}% } \caption{Visualization of t-SNE projected representations derived from NGCF and TMAG. Users are randomly selected in the same task from Yelp and are represented by stars. The points with the same color are users' interacted items.} \label{fig:node} \vspace{-3mm} \end{figure} \subsection{Case Study (RQ4)} In this section, we attempt to understand how task alignment promotes representation learning in the embedding space. Towards this end, we randomly select 5 users in the same task from the Yelp dataset along with their interacted items and visualize the learned embedding vectors derived from NGCF and TMAG with the t-SNE algorithm \cite{van2008visualizing}. Figure \ref{fig:node} shows the results. Note that the items are from the query set in Task1, i.e., they are not paired with the users in the training phase. Compared with NGCF, we can observe that users in the same task tend to be closer and embeddings of items interacted by the same users tend to form clusters. This indicates that task alignment is capable of mining the latent clustering knowledge to learn more personalized node characteristics.
\section{Conclusion and Future Work} In this paper, we propose TMAG to solve the cold-start problem at the model level and the feature level simultaneously. At the model level, we propose a task-aligned constructor to capture the latent clustering knowledge that can be rapidly adapted to new users, which addresses the local-optimum issue. We also adopt a task-wise attribute contrastive regularization to enhance the latent clustering knowledge. At the feature level, we combine the graph structure information and attribute information to augment the adjacency matrix of the graph, which alleviates the data sparsity problem. Extensive experiments on three real-world datasets demonstrate the effectiveness of our model for cold-start recommendation. In the future, we would like to expand our work in two directions. First, we will extend our model to address the cold-start problem from the standpoint of items. Second, we would like to migrate our model to sequential recommendation scenarios to investigate its effectiveness. \balance \bibliographystyle{ACM-Reference-Format}
\section*{A relative notion of algebraic Lie group and applications to $n$-stacks} Carlos Simpson\newline {\small Laboratoire Emile Picard (UMR 5580 CNRS) \newline Universit\'e Paul Sabatier\newline 31062 Toulouse CEDEX, France} \bigskip Let ${\cal X}$ be the big etale site of schemes over $k={\bf C}$. If $S$ is a scheme of finite type over $k$, let ${\cal X} /S$ denote the big etale site of schemes over $S$. The goal of this paper is to introduce a full subcategory of the category of sheaves of groups on ${\cal X} /S$, which we will call {\em the category of presentable group sheaves} (\S 2), with the following properties. \newline 1. \, The category of presentable group sheaves contains those group sheaves which are representable by group schemes of finite type over $S$ (Corollary \ref{uvw}). \newline 2. \, The category of presentable group sheaves is closed under kernel, quotient (by a normal subgroup sheaf which is presentable), and extension (Theorem \ref{I.1.e}). \newline 3. \, If $S'\rightarrow S$ is a morphism then pullback takes presentable group sheaves on $S$ to presentable group sheaves on $S'$ (Lemma \ref{I.1.h}). \newline 4. \, If $S'\rightarrow S$ is a finite morphism then direct image takes presentable group sheaves on $S'$ to presentable group sheaves on $S$ (Lemma \ref{I.1.i}). \newline 5. \, If $S=Spec (k)$ then presentable group sheaves are just group schemes of finite type over $Spec (k)$ (Theorem \ref{I.1.m}). In particular if ${\cal G}$ is a presentable group sheaf over any $S$ then the pullback to each point $Spec (k )\rightarrow S$ is an algebraic group. \newline 6. \, There is a notion of connectedness extending the usual notion over $Spec(k )$ and compatible with quotients, extensions, pullbacks and finite direct images; and a presentable group sheaf ${\cal G}$ has a largest connected presentable subsheaf ${\cal G} ^0\subset {\cal G}$ which we call the {\em connected component} (Theorem \ref{I.1.o}). \newline 7. 
\, A presentable group sheaf ${\cal G}$ has a Lie algebra object $Lie({\cal G} )$ (Theorem \ref{lmn}) which is a vector sheaf with bracket operation (see below for a discussion of the notion of vector sheaf---in the case $S=Spec (k)$ it is the same thing as a finite dimensional $k$-vector space). \newline 8. \, If ${\cal G}$ is a connected presentable group sheaf then ${\cal G} /Z({\cal G} )$ is determined up to isomorphism by the Lie algebra sheaf $Lie ({\cal G} )$ (where $Z({\cal G} )$ denotes the center of ${\cal G}$). This is Theorem \ref{abc} below. \bigskip We envision the category of presentable group sheaves as a generalisation relative to an arbitrary base scheme $S$, of the category of algebraic Lie groups over $Spec ({\bf C} )$. We mention here a few questions related to the analogy with classical algebraic groups. Property 8 poses an obvious existence problem: given a Lie algebra object in the category of vector sheaves, does it come from a presentable group sheaf with vector sheaf center? I don't know the answer to this question. We do know, however, that $Aut(L)$ is a presentable group sheaf (Lemma \ref{AutLie}). Another question is the existence of a ``universal covering'', i.e. a morphism $\tilde{{\cal G} }\rightarrow {\cal G}$ surjective with finite kernel such that for any other such morphism ${\cal F} \rightarrow {\cal G}$ there is a factorization $\tilde{{\cal G} } \rightarrow {\cal F} \rightarrow {\cal G}$. There are obvious questions about the generalisation of the theory of representations to the case of presentable group sheaves. The first among these is whether there always exists a faithful representation into $Aut(V)$ for $V$ a vector sheaf. I suspect that the answer is no, but don't have a counterexample. For connected group sheaves this problem concerns only the center, because we always have the adjoint representation of ${\cal G}$ on $Lie ({\cal G} )$. 
Beyond the question of the description of the representations, there is also the question of whether a suitable tannakian theory exists, namely given a group ${\cal G} \subset Aut (V)$, is ${\cal G}$ defined as the stabilizer of some ${\cal G}$-invariant sub-vector-sheaf $U$ in a tensor power of $V$? The motivation for introducing presentable group sheaves comes from the theory of homotopy types over $Spec ({\bf C} )$, or what Grothendieck called ``schematization of homotopy types'' in \cite{Grothendieck}. We will discuss the application to this theory at the end of the paper---note also that it is explained in essentially the same way in \cite{kobe} where some applications to nonabelian de Rham cohomology are also announced. Briefly, the considerations are as follows. A homotopy type over ${\cal X}$ (which we call an ``$n$-stack'') is a presheaf of topological spaces on ${\cal X}$ satisfying a homotopic descent condition (``fibrant'' in the terminology of Jardine \cite{Jardine1}, cf \cite{kobe}). This condition is the generalisation of the descent condition that goes into the definition of $1$-stack. An $n$-stack or fibrant presheaf $T$ has homotopy sheaves as follows. First, $\pi _0(T)$ is a sheaf of sets on ${\cal X}$. Then for $i\geq 1$ if $S\in {\cal X}$ and $t\in T(S)$, $\pi _i (T|_{{\cal X} /S},t)$ is a sheaf of groups on ${\cal X} /S$ (abelian if $i\geq 2$). In the fibrant presheaf point of view, these homotopy sheaves are the sheafifications of the presheaves which one defines in the obvious way. These things satisfy the same sorts of properties as in the homotopy theory of spaces. In particular there are notions of homotopy fiber products and (as special cases) homotopy fibers and loop or path spaces. The homotopy groups of the homotopy fiber of a morphism fit into the usual long exact sequence (and there is a similar exact sequence for homotopy fiber products in general). 
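To fix notation, the long exact sequence can be written out as follows (this display is our transcription of the standard topological sequence into the sheaf setting). If $f: T\rightarrow T'$ is a morphism with homotopy fiber $F$ over a basepoint $t'\in T'(S)$, and $t\in T(S)$, $\tilde{t} \in F(S)$ are compatible basepoints, then there is a long exact sequence of sheaves of groups on ${\cal X} /S$ (abelian in degrees $\geq 2$)
$$
\cdots \rightarrow \pi _{i+1}(T', t') \rightarrow \pi _i (F, \tilde{t} ) \rightarrow \pi _i (T, t) \rightarrow \pi _i (T', t') \rightarrow \pi _{i-1}(F, \tilde{t} ) \rightarrow \cdots ,
$$
with the usual caveat that at the bottom of the sequence exactness means exactness of pointed objects.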
There are also notions of morphism spaces $Hom (T,T')$ which are spaces or $n$-groupoids (depending on the point of view) and internal morphism objects $\underline{Hom}(T,T')$ which are $n$-stacks whose global sections are the morphism spaces. The main particularity of this situation is that $\pi _0(T)$ can be nontrivial and not just the union of a set of points. Because of this, one must consider basepoints not only in $T(Spec (k ))$ but in $T(S)$ for any scheme $S$ (say, of finite type) in order to get the full picture of $T$. One is thus led to consider sheaves of groups on ${\cal X} /S$. We would like to define a restricted class of $n$-stacks or fibrant presheaves of spaces which we will call {\em presentable}. We would like this category to be closed under homotopy fiber products and also under the truncation (or coskeleton) operations of eliminating the homotopy groups above a certain level. From these requirements it follows that the condition for inclusion in the class of presentable presheaves of spaces should be expressed solely in terms of the homotopy group sheaves. From the exact sequences for homotopy fibers or more generally fiber products, one can see that the category of group sheaves allowable as homotopy group sheaves of presentable spaces must be closed under kernel, cokernel and extension. We would like our allowable group sheaves to be the algebraic Lie groups when the base space is $Spec (k )$, and of course for doing anything useful we need notions of connectedness and an infinitesimal (Lie algebra) picture. These are the reasons which lead us to look for a notion of sheaf of groups on ${\cal X} /S$ with the properties listed above. I should add a note of caution about the terminology, for we propose the terminology {\em presentable group sheaf} and also {\em presentable $n$-stack}. If ${\cal G}$ is a sheaf of sets on ${\cal X} /S$ (i.e.
a $0$-stack) which happens to have a group structure, then the condition that ${\cal G}$ be presentable as a $0$-stack is {\em not} the same as the condition that ${\cal G}$ be a presentable group sheaf on ${\cal X} /S$. The right way to think of a sheaf of groups is as corresponding to a $1$-stack which we can denote $K({\cal G} , 1)$ or $B{\cal G}$ (with a morphism to $S$). From this point of view the terminologies are compatible: ${\cal G}$ is a presentable group sheaf over $S$ if and only if $K({\cal G} , 1)$ is a presentable $1$-stack. Let's look more carefully at the reasoning that leads to our definition of presentable $n$-stack. What are we going to do with a presentable $n$-stack $T$? If $W$ is (the $n$-truncation of) a finite CW complex considered as a constant $n$-stack on ${\cal X}$ then we can look at the $n$-stack $Hom (W, T)$. This is the {\em nonabelian cohomology of $W$ with coefficients in $T$}. If $T= K({\cal O} , n)$ is the Eilenberg-MacLane presheaf with homotopy group sheaf equal to the structure sheaf of rings ${\cal O}$ on ${\cal X}$ in degree $n$, then $\pi _0Hom (W, T)$ is just the cohomology $H^n(W, {\bf C} )$---or rather the sheaf on ${\cal X}$ represented by this vector space. Similarly, if $G$ is a group scheme over ${\bf C}$ then for $T=K(G, 1)=BG$ we get that $Hom (W, T)$ is the moduli stack for flat principal $G$-bundles or equivalently representations $\pi _1(W)\rightarrow G$. We hope to obtain an appropriate mixture of these cases by considering a more general class of $n$-stacks $T$. In particular we would like to have a {\em Kunneth formula} for two CW complexes $U$ and $V$, $$ \underline{Hom} (U, \underline{Hom}(V,T))=\underline{Hom}(U\times V, T). $$ One can imagine for example the problem of trying to compute the moduli stack of flat principal $G$-bundles on $U\times V$ in terms of a Kunneth formula as above.
One is forced to consider the cohomology of $U$ with coefficients in the moduli stack $T'=\underline{Hom}(V,BG)$, and this stack is not necessarily connected ($\pi _0(T')$ is roughly speaking the moduli space of principal $G$-bundles). The Kunneth formula is not an end in itself, as it is rare for a space to decompose into a product. It points the way to a ``Leray-Serre theory'' which could be more generally useful. If $W\rightarrow U$ is a morphism we would be led to consider a relative morphism stack $T'=\underline{Hom}(W/U, T) \rightarrow U$ and then try to take the $n$-stack of sections $U\rightarrow T'$, a sort of {\em nonabelian cohomology with twisted coefficients}. I haven't fully thought about this yet (and in particular not about the de Rham theory---see below---which seems to be significantly more complicated than that which is needed in the constant coefficient case, for example to make sense of the Kunneth formula). The motivation for all of this is to be able to do geometric versions of the nonabelian cohomology in the case where $W$ is, say, a smooth projective variety. It is announced, with some sketches of proofs, in \cite{kobe} how to get a de Rham version of the morphism space $\underline{Hom}(W_{DR}, T)$ when $T$ is a presentable $n$-stack. We want of course to have the (analytic) isomorphism between de Rham and Betti cohomology. Needless to say, this will not work for an arbitrary $n$-stack $T$ on $X$ (for example if one takes $T=W$ to be a constant stack associated to a CW complex which is an algebraic variety then there will probably be nothing in $Hom (W_{DR}, W)$ corresponding to the identity in $Hom (W, W)$). We need to impose conditions on $T$ which guarantee that it is reasonably close to the examples $K({\cal O} , n)$ or $K(G, 1)$ given above (in these cases, the de Rham-Betti isomorphism works as is already well known).
As a first approach, the condition we want seems to be that the homotopy group sheaves should be representable by group schemes over the base $S$. In the case where $T$ is the moduli stack of flat principal $G$-bundles on a space $V$, encountered above when looking at the Kunneth formula, the $\pi _1$ sheaves are indeed representable (the moduli stack is an algebraic stack). Unfortunately the condition of being representable is not stable under cokernels, but as explained above this is important if we want our notion of good $n$-stack to be stable under homotopy fiber products. Before going directly to the conclusion that we need a category stable under kernels, cokernels and extensions, we can analyze a bit more precisely just what is needed. Notice first of all that the algebraic de Rham theory is not going to work well in the case of higher cohomology with coefficients in the multiplicative group scheme, i.e. when $T= K({\bf G}_m, n)$ for $n\geq 2$. I won't go into the explanation of that here! Thus, at least for the algebraic de Rham theory we would like to have an appropriate notion of unipotent abelian group sheaf. Not yet having come up with a reasonable general theory of this, we can replace this notion by the (possibly more restrictive) notion of {\em vector sheaf}. The notion of vector sheaf is explained in \S 4 below. The reader may actually wish to start by reading this section, since the theory of vector sheaves is in some sense a paradigm, applicable only for abelian group sheaves, of what we are trying to do in general. The notion of vector sheaf was introduced by A. Hirschowitz \cite{Hirschowitz} who called it ``U-coherent sheaf''. He defined the category of U-coherent sheaves as the smallest abelian category of sheaves of abelian groups containing the coherent sheaves (note that the category of coherent sheaves is not abelian on the big etale site or any big site). 
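To illustrate this failure concretely (a standard example, spelled out here for the reader; it is not quoted from \cite{Hirschowitz}): over $S={\bf A}^1 = Spec (k[t])$ consider the kernel, computed in the category of sheaves on the big site,
$$
{\cal K} := \ker ({\cal O} \stackrel{t}{\rightarrow} {\cal O} ), \;\;\;\;
{\cal K} (Y \stackrel{a}{\rightarrow} S) = \{ f \in {\cal O} (Y) : \; a^{\ast}(t)\cdot f = 0 \} .
$$
This vanishes on any $Y$ where $t$ pulls back to a nonzerodivisor, and equals all of ${\cal O} (Y)$ when $Y$ maps into the origin. One can check, for example using $Y= Spec (k[t]/t^2)$, that the restriction maps of ${\cal K}$ are not those of the pullback of any single coherent sheaf on $S$; so kernels already force one outside the coherent category.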
We take a more constructive approach, defining the notion of vector sheaf in terms of presentations, although in the end the two notions are equivalent. The notion of vector sheaf doesn't work too nicely in characteristic $p>0$, basically because the Frobenius automorphism of the sheaf ${\cal O}$ is not linear, so the linear structure is no longer encoded in the sheaf structure. As we try in the beginning of the paper to put off the hypothesis of characteristic zero as long as possible, and as the notion of vector sheaf comes into the analysis at a later stage (the infinitesimal study related to properties 7 and 8 listed above), I have decided not to put the section on vector sheaves at the beginning. Still, it is essentially self-contained for the reader who wishes to start there. In considering the algebraic de Rham theory we will only be looking at $n$-stacks $T$ with $\pi _i(T,t)$ a vector sheaf on $S$ for $t\in T(S)$ and $i\geq 2$. What does this mean for our restriction on $\pi _1(T, t)$? Going back to the question of stability under fiber products, we see from looking at the long exact homotopy sheaf sequence that the minimum that is absolutely necessary is that our class of group sheaves $G$ be stable under central extension by a vector sheaf. On the other hand it also must be stable under taking kernels. One could thus hope to make good on a {\em minimalist approach} saying that we should look at the category of group sheaves generated by representable group sheaves (affine, say---this again would be needed to make the de Rham theory work), under the operations of kernel and central extension by a vector sheaf. A vector sheaf always has a presentation as the cokernel of a morphism of {\em vector schemes}, i.e. representable vector-space objects over the base $S$ (these are sometimes called {\em linear spaces} in the complex analytic category \cite{Grauert} \cite{Fischer} \cite{Axelsson-Magnusson}).
The most natural approach then is to say, suppose a group sheaf $G$ has a presentation as the cokernel of a morphism $F_2 \rightarrow F_1$ of representable group sheaves, and suppose $E$ is a central extension of $G$ by a vector sheaf $U$ which is itself the cokernel of a morphism $V_2 \rightarrow V_1$ of vector schemes. Then try to combine these into a presentation of $E$ with, for example, a surjection $V_1\times F_1\rightarrow E$. The problem (which I was not able to solve although I don't claim that it is impossible) is then to lift the multiplication of $E$ to an associative multiplication on $V_1\times F_1$. As I didn't see how to do this, a slightly more general approach was needed, wherein we consider groups which have presentations by objects where the multiplication lifts but not necessarily to a multiplication satisfying the associativity property. This is the reasoning that leads to the definition of {\em $S$-vertical morphism:} \, a morphism where one can lift things such as multiplications in a nice way cf \S 2. We finally come to the definition of {\em presentable group sheaf} as a group sheaf $G$ which admits a vertical surjection $X\rightarrow G$ from a scheme of finite type over $S$, and such that there is a vertical surjection $R\rightarrow X\times _{G}X$ again from a scheme of finite type over $S$. One could, on the other hand, take a {\em maximalist approach} and try to include anything that seems vaguely algebraic. This would mean, for example, looking at sheaves $G$ such that there are surjections (in the etale sense, although not necessarily etale morphisms) $X\rightarrow G$ and $R\rightarrow X\times _GX$ with $X$ and $R$ schemes of finite type over the base $S$. We call this condition P2. This might also work (in fact it might even be the case that a P2 group sheaf is automatically presentable). 
However, I was not able to obtain a reasonable infinitesimal analysis which could lead, for example, to the notion of connected component---though again, I don't claim that this could never work. In a similar vein, one might point out that there is a fairly limited range of situations in which we use the lifting properties going into the definition of verticality. I have chosen to state the condition of verticality in what seems to be the most natural setting, but this leads to requiring that many more lifting properties be satisfied than what we actually use. One could rewrite the definition of verticality to include only those lifting properties that we use afterward. It might be interesting to see if this change makes any difference in which group sheaves are allowed as presentable. All in all, the definitions we give here of presentable group sheaf and of presentable $n$-stack are first attempts at giving useful and reasonable notions, but it is quite possible that they would need to be altered in the future in view of applications. A word about the characteristic of the ground field (or base scheme). While our aim is to work over a field of characteristic zero, there are certain parts of our discussion valid over any base scheme, namely those concerning the abstract method for defining conditions of presentability. When it comes down to finding conditions which result in a nice theory (and in particular which result in a theory having the required local structure) we must restrict ourselves to characteristic zero. It is possible that a variant could work nicely in positive characteristic, so we will present the first part of the argument concerning the definition of presentability (which is valid over any base scheme), in full generality (\S 1) before specifying in characteristic zero which morphisms we want to use in the presentations (\S 2).
Actually the definition given in \S 2 works in any characteristic but we can only prove anything about local properties in characteristic zero (\S\S 4-9), so it is probably the ``right'' definition only in characteristic zero. With an appropriate different definition of verticality (certainly incorporating divided powers) what we do in these later sections might be made to work in any characteristic. \subnumero{Notations} We fix a noetherian ground ring $k$, for sections 1-3. From section 4 on, we assume that $k$ is an uncountable field of characteristic zero, and we may when necessary assume that the ground field is $k={\bf C}$. Let ${\cal X}$ denote the site of noetherian schemes over $k$ with the etale topology (this is known as the ``big etale site''). If $S\in {\cal X}$ then we denote by ${\cal X} /S$ the site of schemes over $S$ (again with the etale topology). A {\em sheaf} on ${\cal X}$ means (unless otherwise specified) a sheaf of sets. For a sheaf of groups, we sometimes use the terminology {\em group sheaf}. We will confuse notations between an object of ${\cal X}$ and the sheaf it represents. Denote by $\ast$ the sheaf on ${\cal X}$ with values equal to the one-point set; it is represented by $Spec (k)$ (and we can interchange these notations at will). If $S$ is a sheaf on ${\cal X}$ (most often represented by an object) then we have the site ${\cal X} /S$ of objects of ${\cal X}$ together with morphisms to $S$. There is an equivalence between the notions of sheaf on ${\cal X} /S$ and sheaf on ${\cal X}$ with morphism to $S$. Since we will sometimes need to distinguish these, we introduce the following notations. If ${\cal F}$ is a sheaf on ${\cal X}$ then its {\em restriction up} to ${\cal X} /S$ is denoted by ${\cal F} |_{{\cal X} /S}$, with the formula $$ {\cal F} |_{{\cal X} /S}(Y\rightarrow S)= {\cal F} (Y). 
$$ If ${\cal F}$ is a sheaf on ${\cal X} /S$ then we denote by $Res_{S/\ast}{\cal F}$ the corresponding sheaf on ${\cal X}$ together with its morphism $$ Res_{S/\ast}{\cal F} \rightarrow S. $$ It is defined by the statement that $Res_{S/\ast}{\cal F} (Y)$ is equal to the set of pairs $(a, f)$ where $a: Y\rightarrow S$ and $f\in {\cal F} ( Y\stackrel{a}{\rightarrow} S)$. We call this the {\em restriction of ${\cal F}$ from $S$ down to $\ast$}. More generally if $S'\rightarrow S$ is a morphism and if ${\cal F}$ is a sheaf on ${\cal X} /S'$ then we obtain a sheaf $Res _{S'/S}{\cal F}$ on ${\cal X}/S$ called the {\em restriction of ${\cal F}$ from $S'$ down to $S$}. The operations of restriction up and restriction down are not inverses: we have the formula, for a sheaf ${\cal F} $ on ${\cal X} /S$, $$ Res _{S'/S}({\cal F} |_{{\cal X} /S'}) = {\cal F} \times _SS' . $$ On the other hand, suppose $p:{\cal F} \rightarrow S'$ is a morphism of sheaves on ${\cal X}/S$. Then we denote by ${\cal F} /S'$ the corresponding sheaf on ${\cal X} /S'$ (the data of the morphism is implicit in the notation). It is defined by the statement that ${\cal F} /S' ( Y\rightarrow S')$ is equal to the set of $u \in {\cal F} (Y\rightarrow S)$ such that $p(u)\in S'(Y\rightarrow S)$ is equal to the given morphism $Y\rightarrow S'$. For another point of view note that there is a tautological section of $(S'/S)|_{ {\cal X} /S'}$, and ${\cal F} /S'$ is the preimage of this section in ${\cal F} |_{{\cal X} /S'}$. As a special case we get that if ${\cal F}$ is a sheaf on ${\cal X} = {\cal X} /\ast$ with a morphism ${\cal F} \rightarrow S$ then we obtain a sheaf ${\cal F} /S$ on ${\cal X} /S$. The operations $$ ({\cal F} \rightarrow S')\mapsto {\cal F} /S' $$ from sheaves on ${\cal X} /S$ with morphisms to $S'$ to sheaves on ${\cal X} /S'$, and $$ {\cal F} ' \mapsto (Res _{S'/S}{\cal F} ' \rightarrow S' ) $$ from sheaves on ${\cal X} /S'$ to sheaves on ${\cal X} /S$ with morphisms to $S'$, are inverses.
For this reason it is often tempting to ignore the strict notational convention and simply use the same notations for the two objects. This is not too dangerous except in the last section of the paper where we will try to be careful. If a sheaf ${\cal F}$ on ${\cal X}$ is representable by an object $F\in {\cal X}$ and if $F\rightarrow S$ is a morphism then ${\cal F} /S$ is representable by the same object $F$ together with its morphism, considered as an object of ${\cal X} /S$. For this reason we will sometimes drop the notation ${\cal F} /S$ and just denote this as ${\cal F}$ when there is no risk of confusion (and in fact the attentive reader will notice that even in the definition two paragraphs ago we have written $S'$ when we should have written $S'/S$ in the first sentence...but the second version would have been impossible because not yet defined...!) Finally there is another natural operation: suppose $\pi : S'\rightarrow S$ is a morphism and ${\cal F}$ is a sheaf on ${\cal X} /S'$. Its {\em direct image} is the sheaf $\pi _{\ast}({\cal F} )$ defined by the statement that $$ \pi _{\ast}({\cal F} )(Y\rightarrow S):= {\cal F} (Y\times _SS' \rightarrow S'). $$ This is {\em not} the same thing as the restriction down from $S'$ to $S$. Think of the case where $S$ is one point and $S'$ is a collection of several points. The value of $Res_{S'/S}({\cal F} )$ at $S$ is the {\em union} of the values of ${\cal F}$ over the points in $S'$ whereas the value of $\pi _{\ast}({\cal F} )$ at $S$ is the {\em product} of the values of ${\cal F}$ at the points in $S'$. \numero{Presentability conditions for sheaves} We will define several conditions, numbered $P1$, $P2$, $P4({\cal M} )$, $P5({\cal M} )$ (whereas two other conditions $P3$ and $P3\frac{1}{2}$ will be defined later, in \S 2). The last two depend on a choice of a class ${\cal M}$ of morphisms in ${\cal X}$ subject to certain properties set out below. 
In the upcoming section we then specify which class ${\cal M}$ we are interested in (at least in characteristic zero), the class of {\em vertical morphisms}. Since the preliminary results depend only on the formal properties of ${\cal M}$ we thought it might be useful to state them in general rather than just for the class of vertical morphisms; this is why we have the seeming complication of introducing ${\cal M}$ into the notations for our properties. We also introduce {\em boundedness conditions} denoted $B1$ and $B2$. These conditions sum up what is necessary in order to be able to apply Artin approximation. Fix a base scheme $S\in {\cal X}$. In what follows, we work in the category of sheaves on ${\cal X} /S$. Thus a sheaf is supposed to be on ${\cal X} /S$ unless otherwise specified. \noindent {\bf P1.}\,\, We say that ${\cal F}$ is {\em P1} if there is a surjective morphism of sheaves $X\rightarrow {\cal F}$ where $X$ is represented by a scheme of finite type over $S$. We may assume that $X$ is affine. \noindent {\bf P2.}\,\, We say that ${\cal F}$ is {\em P2} if there are surjective morphisms of sheaves $X\rightarrow {\cal F}$ and $Y\rightarrow X\times _{{\cal F}}X$ where $X$ and $Y$ are represented by schemes of finite type over $S$. We may assume that $X$ and $Y$ are affine. \begin{lemma} \mylabel{I.t} If $G$ is a sheaf of groups which is P1, and $G$ acts on a sheaf $F$ which is P2, then the quotient sheaf $F/G$ is again P2. \end{lemma} {\em Proof:} Choose surjections $\varphi :X\rightarrow F$ and $(p_1,p_2):Y\rightarrow X\times _FX$. The action is a map $G\times F\rightarrow F$, and we can choose a surjection $(q_1,q_2):W\rightarrow G\times X$ (with $W$ an affine scheme, by condition P1 for $G$), such that the action lifts to a map $m:W\rightarrow X$. There is obviously a surjection $X\rightarrow F/G$.
We have a map $$ W\times _X Y\rightarrow X\times X $$ (where the maps used in the fiber product are $m:W\rightarrow X$ and $p_1:Y\rightarrow X$), defined by $$ (w,y)\mapsto (q_2(w), p_2(y)). $$ We claim that this map surjects onto the fiber product $X\times _{F/G}X$. It clearly maps into this fiber product. The map is surjective because if $(x,x')\in X\times X$ with $g\varphi (x)=\varphi (x')$ then for a point $w$ of $W$ lying above $(g,x)$ we have $\varphi (m(w))= g\varphi (x)=\varphi (x')$; in particular there is a point $y$ of $Y$ with $p_1(y)=m(w)$ and $p_2(y)=x'$, so the point $(w,y)$ maps to $(x,x')$. Our surjection $$ W\times _X Y\rightarrow X\times _{F/G}X $$ now shows that $F/G$ is P2. \hfill $\Box$\vspace{.1in} {\em Remark:} These conditions are independent of base scheme $S$ for finite-type morphisms. More precisely if $S'\rightarrow S$ is a morphism of finite type and if ${\cal F} '$ is a sheaf on ${\cal X} /S'$ then denoting by ${\cal F} = Res _{S'/S}{\cal F} '$ its restriction down to $S$ we have that ${\cal F} $ is $P1$ (resp. $P2$) if and only if ${\cal F} '$ is $P1$ (resp. $P2$). \subnumero{Boundedness conditions} We consider the following boundedness conditions for sheaves on ${\cal X}$. These two conditions are designed to contain exactly the information needed to apply the Artin approximation theorem \cite{Artin}. \newline {\bf B1.} \,\, We say that a sheaf ${\cal F}$ is {\em B1} if, for any $k$-algebra $B$, we have that $$ \lim _{\rightarrow}{\cal F} (Spec (B')) \rightarrow {\cal F} (Spec (B)) $$ is an isomorphism, where the limit is taken over the subalgebras $B'\subset B$ which are of finite type over $k$. This is equivalent to the local finite type condition of Artin \cite{Artin}. \noindent {\bf B2.} \,\, We say that a sheaf ${\cal F}$ is {\em B2} if, for any complete local ring $A$, we have that the morphism $$ {\cal F} (Spec (A)) \rightarrow \lim _{\leftarrow} {\cal F} (Spec (A/{\bf m} ^i)) $$ is an isomorphism.
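As a basic example (our own verification, recorded for illustration): a sheaf represented by an affine scheme $X=Spec (C)$ of finite type over $k$ is B1. Indeed
$$
X(Spec (B)) = Hom _{k{\rm -alg}}(C, B),
$$
and since $C$ is generated by finitely many elements $c_1, \ldots , c_r$, any homomorphism $C\rightarrow B$ factors through the finite type subalgebra $B'\subset B$ generated by the images of the $c_{\alpha}$; and two homomorphisms $C\rightarrow B'$ which agree in $B$ already agree in $B'$, since $B'\subset B$. Thus $\lim _{\rightarrow}X(Spec (B'))\rightarrow X(Spec (B))$ is an isomorphism.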
The {\bf Artin approximation theorem} (\cite{Artin}) can now be stated as follows. {\em Suppose ${\cal F}$ is a sheaf of sets which is B1 and B2. If $S=Spec (A)$ is an affine scheme with point $P\in S$ corresponding to a maximal ideal ${\bf m} \subset A$ then for any $$ \eta \in \lim _{\leftarrow} {\cal F} (Spec (A/{\bf m} ^i)) $$ and for $i_0\geq 0$ there exists an etale neighborhood $P\in S' \rightarrow S$ and an element $\eta ' \in {\cal F} (S')$ agreeing with $\eta$ over $Spec (A/{\bf m} ^{i_0})$. } \begin{lemma} \mylabel{I.t.1} 1.\,\, If ${\cal F}$ and ${\cal G}$ are B1 (resp. B2) and $f,g$ are two morphisms from ${\cal F}$ to ${\cal G}$ then the equalizer is again B1 (resp. B2). \newline 2.\,\, Suppose ${\cal F}\rightarrow {\cal G}$ is a surjective morphism of sheaves. If ${\cal F}$ and ${\cal F} \times _{{\cal G}}{\cal F}$ are B1 then ${\cal G}$ is B1. \end{lemma} {\em Proof:} Fix $S=Spec (A)$ and $\{ B_i\}$ our directed system of $A$-algebras. Let $B= \lim _{\rightarrow}B_i$. There is a natural morphism $$ \lim _{\rightarrow} {\cal G} (B_i)\rightarrow {\cal G} (B). $$ First we prove injectivity. Suppose $\varphi , \psi \in {\cal G} (B_i)$ map to the same element of ${\cal G} (B)$. We may choose an etale surjection of finite type $Spec (B'_i)\rightarrow Spec (B_i)$ such that the restrictions $\varphi '$ and $\psi '$ lift to elements $u,v\in {\cal F} (B'_i)$. Their images in ${\cal F} (B')$ give a point $(u,v)_{B'}$ in ${\cal F} \times _{{\cal G}} {\cal F} (B')$ (here $B':= B\otimes _{B_i}B'_i$). By the condition B1 for the fiber product, there is a $j\geq i$ such that this point comes from a point $\zeta \in {\cal F} \times _{{\cal G}}{\cal F} (B'_j)$. On the other hand, note that the product ${\cal F} \times {\cal F}$ is B1. The image of $\zeta$ in ${\cal F} \times {\cal F} (B')$ is the same as that of $(u,v)$; and by the B1 condition for the product, there is $k\geq j$ such that the image of $\zeta$ in ${\cal F} \times {\cal F} (B'_k)$ is equal to the image of $(u,v)$. In particular, $(u|_{Spec (B'_k)},v|_{Spec (B'_k)})$ lies in the fiber product ${\cal F} \times _{{\cal G}}{\cal F} (B'_k)$. In other words, $u$ and $v$ have the same images in ${\cal G} (B'_k)$. These images are the restrictions of the original $\varphi , \psi$. Since $Spec (B'_k)\rightarrow Spec (B_k)$ is an etale surjection, the images of $\varphi$ and $\psi $ in ${\cal G} (B_k)$ are the same. This proves the injectivity. Now we prove surjectivity. Suppose $\eta \in {\cal G} (B)$. Then there exists an etale surjection of finite type $$ Spec (B')\rightarrow Spec (B) $$ such that $\eta |_{Spec (B')}$ comes from an element $\rho \in {\cal F} (B')$. The functor ``etale surjections of finite type'' is itself B1, so there is an etale $Spec (B'_i)\rightarrow Spec (B_i)$ inducing $B'$. Then $B'=\lim _{\rightarrow} B'_j$ where $B'_j= B_j\otimes _{B_i}B'_i$ for $j\geq i$. By the property B1 for ${\cal F}$ there is some $j$ such that $\rho$ comes from $\rho _j\in {\cal F} ( B'_j)$. We obtain an element $\eta '_j\in {\cal G} (B'_j)$ mapping to $\eta ':=\eta |_{Spec (B')}$. The two pullbacks of $ \eta '$ to $Spec (B'\otimes _BB')$ are equal. Note that $$ B'\otimes _BB' = \lim _{\rightarrow} B'_k\otimes _{B_k}B'_k, $$ so by the above injectivity, there is some $k$ such that the two pullbacks of $\eta '_j|_{Spec (B'_k)}$ to $Spec (B'_k\otimes _{B_k}B'_k)$ are equal. Now the sheaf condition for ${\cal G}$ means that $\eta '_j|_{Spec (B'_k)}$ descends to an element $\eta _k\in {\cal G} (B_k)$. The restriction of $\eta _k$ to $B'$ is equal to the restriction of $\eta$, so the sheaf condition for ${\cal G}$ implies that the restriction of $\eta _k$ to $Spec (B)$ is $\eta$. \hfill $\Box$\vspace{.1in} {\em Remark:} The direct product of a finite number of B1 (resp.
B2) sheaves is again B1 (resp. B2) so part 1 of the lemma implies that the properties B1 and B2 are maintained under fiber products. \begin{theorem} \mylabel{I.t.2} Suppose ${\cal F}$ is a sheaf which is P2. Then ${\cal F}$ is B1. If the ground field is uncountable, then ${\cal F}$ is B2. \end{theorem} {\em Proof:} The condition B1 follows from the previous lemma. Indeed, let $X\rightarrow {\cal F}$ and $R\rightarrow X\times _{{\cal F}}X$ be the morphisms given by the property P2, with $X$ and $R$ of finite type (in particular, B1). Note that $R\times _{X\times _{{\cal F}}X}R= R\times _{X\times X}R$ is a scheme of finite type, so the lemma implies that $X\times _{{\cal F}}X$ is B1; another application of the lemma then shows that ${\cal F}$ is B1. For B2, let $S=Spec (A)$ with $A$ a complete local ring, and let $S_n:= Spec (A/{\bf m}^{n+1})$. Let $X\rightarrow {\cal F}$ and $R\rightarrow X\times _{{\cal F}}X$ be the morphisms given by the property P2, with $X$ and $R$ of finite type over $S$. Schemes of finite type are B2. We show surjectivity of the map $$ {\cal F} (S)\rightarrow \lim _{\leftarrow} {\cal F} (S_n). $$ Suppose $(\varphi _n )$ is a compatible system of elements of ${\cal F} (S_n)$. Let $$ E_n := \{ x_n \in X(S_n):\;\;\; x_n \mapsto \varphi _n \} . $$ Note that $E_n$ is a nonempty closed subset of the scheme $X(S_n )$ (that is, the scheme whose $Spec (k)$-valued points are $X(S_n)$). Let $$ E'_n:= \bigcap _{m\geq n} {\rm im}(E_m \rightarrow E_n); $$ this is an intersection of a decreasing family of nonempty constructible subsets of $E_n$. Since $k$ is uncountable, this intersection is nonempty. Indeed, the closures of the images form a decreasing family of closed sets, which stabilizes by the noetherian property of $E_n$; then within this closed subset, the dense constructible subsets contain open sets which are complements of proper closed subsets. 
The union of countably many proper closed subsets is a proper subset, so the intersection of the open complements is nonempty. (Note however that $E'_n$ is not necessarily constructible). The morphism $E'_{n+1} \rightarrow E' _n$ is surjective. To see this, suppose $u\in E' _n$. We can consider the subsets $$ D_m := \{ v\in E_m, \;\; v\mapsto u\} . $$ These are closed subsets of $E_m$, nonempty by the condition $u\in E'_n$. Let $D' _{n+1}= \bigcap _{m\geq n+1} {\rm im} (D_m \rightarrow D_{n+1})$. By the same proof as above, $D'_{n+1}$ is nonempty. But it is contained in $E'_{n+1}$ and maps to $u\in E'_n$. The surjectivity of the maps implies that the inverse limit $\lim _{\leftarrow} E'_n $ is nonempty. It is a subset of $\lim _{\leftarrow} X(S_n)=X(S)$, consisting of elements mapping to $(\varphi _n)$ in $\lim _{\leftarrow}{\cal F} (S_n)$. (In fact, this subset is equal to the inverse image of $(\varphi _n)$.) We obtain an element of $X(S)$, hence an element of ${\cal F} (S)$, mapping to $(\varphi _n)$. This proves surjectivity. Note that this part of the proof only used property P1 for ${\cal F}$. We now prove injectivity. Note that $X\times _{{\cal F}}X$ is P1, so by the proof above, we obtain surjectivity of the morphism $$ X\times _{{\cal F}}X(S)\rightarrow \lim _{\leftarrow}X\times _{{\cal F}}X(S_n). $$ Suppose two elements $u,v\in {\cal F} (S)$ go to the same element of ${\cal F} (S_n)$ for all $n$ (we write this $u_n=v_n$). We can lift them to elements $x,y\in X(S)$, and we obtain a compatible sequence of elements $(x_n, y_n)\in X\times_{{\cal F}}X (S_n)$. By the surjectivity of the above morphism, there is an element $(x',y')\in X\times _{{\cal F}}X(S)$ with $x'_n=x_n$ and $y'_n=y_n$. The images $u'$ and $v'$ of $x'$ and $y'$ in ${\cal F} (S)$ are equal. By the B2 property for $X$, this implies that $x'=x$ and $y'=y$, which shows that $u=v$. \hfill $\Box$\vspace{.1in} We have the following Krull-type property.
\begin{lemma} \mylabel{Krull} Suppose ${\cal F}$ is a sheaf which is B1 and B2. Then for any scheme $S$ of finite type the natural morphism is an injection $$ {\cal F} (S ) \hookrightarrow \prod _{{\rm Art.} \, S'\rightarrow S} {\cal F} (S' ) $$ where the product is taken over $S'\rightarrow S$ which are artinian and of finite type. \end{lemma} {\em Proof:} Suppose $f,f'\in {\cal F} (S)$ agree over all artinian subschemes. Let ${\cal G} = S \times _{{\cal F} } S$ be the fiber product where $f$ and $f'$ provide the two morphisms from $S$ to ${\cal F}$. Then ${\cal G}$ is B1 and B2 (by the remark following Lemma \ref{I.t.1}). But ${\cal G}$ has a (unique) section over any artinian $S'\rightarrow S$ and applying B2, B1 and Artin approximation we obtain sections of ${\cal G}$ over an etale covering of $S$; that is, $f$ and $f'$ agree on an etale covering of $S$, so the sheaf condition gives $f=f'$. \hfill $\Box$\vspace{.1in} \subnumero{Choice of a class of morphisms ${\cal M}$} Fix a base scheme $S\in {\cal X}$. We assume fixed for the rest of this section a subset ${\cal M}\subset Mor ({\cal X} /S)$ of morphisms in ${\cal X} /S$, containing the identities and closed under composition (i.e. ${\cal M}$ is the set of morphisms of a subcategory of ${\cal X} /S$) subject to the following axioms: \newline {\bf M1}\,\, If $a$ and $b$ are composable morphisms such that $a$ and $ba$ are in ${\cal M}$, and $a$ is surjective, then $b$ is in ${\cal M}$. \newline {\bf M2}\,\, Compatibility with base change: if ${\cal F} \rightarrow {\cal G}$ is an ${\cal M}$-morphism and ${\cal H} \rightarrow {\cal G}$ any morphism, then ${\cal F} \times _{{\cal G}}{\cal H} \rightarrow {\cal H}$ is an ${\cal M}$-morphism; and conversely if $a:{\cal F}\rightarrow {\cal G}$ is a morphism such that ${\cal F} \times _{\cal G} Y\rightarrow Y$ is in ${\cal M}$ for every $S$-scheme $Y$ and morphism $Y\rightarrow {\cal G}$, then $a$ is in ${\cal M}$. \newline {\bf M3}\,\, Etale morphisms between schemes are in ${\cal M}$.
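{\em Example:} The following instance of these axioms recurs below (for example at the end of the proof of Theorem \ref{I.1.d}): if $$ a: {\cal F} \rightarrow {\cal G} $$ is a surjective ${\cal M}$-morphism, then the structural morphism ${\cal F} \rightarrow S$ is in ${\cal M}$ if and only if the structural morphism ${\cal G} \rightarrow S$ is. Indeed, if ${\cal G} \rightarrow S$ is in ${\cal M}$ then so is the composition ${\cal F} \rightarrow {\cal G} \rightarrow S$, since ${\cal M}$ is closed under composition; conversely, if ${\cal F} \rightarrow S$ is in ${\cal M}$, applying M1 with $b$ the structural morphism ${\cal G} \rightarrow S$ (so that $ba$ is ${\cal F} \rightarrow S$) shows that ${\cal G} \rightarrow S$ is in ${\cal M}$.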
{\em Remark:} It follows from these axioms that the direct product of morphisms in ${\cal M}$ is again a morphism in ${\cal M}$. In the next section we will specify a certain such subcategory ${\cal M}$, the class of {\em vertical morphisms}, and show that it satisfies these axioms. But there may be other interesting examples of such a class of morphisms ${\cal M}$ to which the following definitions and lemmas could be applied. We can now extend our list of presentability properties which refer to the class ${\cal M}$. We use the notation ${\cal M}$-morphism for a morphism lying in ${\cal M}$. The gap in the numbering is to leave a place for the property $P3$ later. This property (which is absolute rather than relative to a base scheme $S$) will come up only at the end of the paper, but it turns out to be more logical to number it in between $P2$ and $P4$ (this is the numbering used in \cite{kobe}). \noindent ${\bf P4({\cal M} )}$\,\, We say that a sheaf ${\cal F}$ is $P4({\cal M} )$ if there exist surjective ${\cal M}$-morphisms $$ X\rightarrow {\cal F} $$ and $$ R\rightarrow X\times _{{\cal F}}X $$ with $X$ and $R$ represented by affine schemes of finite type over $S$. \noindent ${\bf P5({\cal M} )}$\,\, We say that ${\cal F}$ is $P5({\cal M} )$ if it is $P4({\cal M} )$ and if, in addition, the structural morphism ${\cal F} \rightarrow S$ is in ${\cal M}$. \begin{lemma} \mylabel{I.z.1} If ${\cal F}$ and ${\cal G}$ are $P4({\cal M} )$ (resp. $P5({\cal M} )$) then so is ${\cal F} \times _S{\cal G}$. \end{lemma} {\em Proof:} The presentation is just the product of the presentations for ${\cal F}$ and ${\cal G}$. \hfill $\Box$\vspace{.1in} \subnumero{Kernels and extensions} \begin{lemma} \mylabel{I.1.a} If $f,g:{\cal G} \rightarrow {\cal H}$ are two morphisms, and if ${\cal G}$ and ${\cal H}$ are $P4({\cal M} )$, then the equalizer ${\cal F}$ is $P4({\cal M} )$.
\end{lemma} {\em Proof:} Let $X\rightarrow {\cal H}$, $R\rightarrow X\times _{{\cal H}}X$, $Z\rightarrow {\cal G}$ and $T\rightarrow Z\times _{{\cal G}}Z$ be ${\cal M}$-morphisms with $X$, $R$, $Z$ and $T$ schemes of finite type over $S$. Replacing $Z$ by an etale cover if necessary, we may assume that we have liftings $f',g': Z\rightarrow X$ of $f$ and $g$. Set $$ W:= Z\times _{X\times _SX}R. $$ It is a scheme of finite type over $S$. Note that the composed map $Z\times _{{\cal G}} {\cal F} \rightarrow Z\rightarrow X\times _SX$ factors through $X\times _{{\cal H}}X$, and we have $$ W= (Z\times _{{\cal G}} {\cal F} )\times _{X\times _{{\cal H}}X}R. $$ From this and property $M2$, it is clear that the morphism $W\rightarrow {\cal F}$ is surjective and in ${\cal M}$. Now set $$ V:= (W\times _SW)\times _{Z\times _SZ}T. $$ Again, this is of finite type over $S$. We have $$ W\times _{{\cal F}}W= W\times _{{\cal G}}W = (W\times _SW)\times _{Z\times _SZ}(Z\times _{{\cal G}}Z). $$ Therefore $$ V= (W\times _{{\cal F}} W)\times _{Z\times _{{\cal G}}Z}T. $$ From this and property $M2$ it is clear that the morphism $V\rightarrow W\times _{{\cal F}}W$ is surjective and in ${\cal M}$. We obtain the property $P4({\cal M} )$ for ${\cal F}$. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{I.1.a.1} If ${\cal F}\rightarrow {\cal H}$ and ${\cal G} \rightarrow {\cal H}$ are two morphisms between $P4({\cal M} )$ sheaves, then the fiber product ${\cal F} \times _{{\cal H}}{\cal G}$ is $P4({\cal M} )$. \end{corollary} {\em Proof:} The fiber product is the equalizer of the two morphisms ${\cal F} \times _S{\cal G} \rightarrow {\cal H}$. \hfill $\Box$\vspace{.1in} \begin{lemma} \mylabel{I.1.b} Suppose ${\cal H}$ is a group sheaf which is $P5({\cal M} )$. If ${\cal H}$ acts freely on a sheaf ${\cal G}$ with quotient ${\cal F} = {\cal G} /{\cal H}$, then the morphism ${\cal G} \rightarrow {\cal F}$ is in ${\cal M}$. \end{lemma} {\em Proof:} Make a base change by a scheme $Y\rightarrow {\cal F}$.
Let ${\cal G} '':= {\cal G} \times _{{\cal F}}Y$. Then ${\cal H}$ acts freely on ${\cal G} ''$ with quotient $Y$. Since the morphism ${\cal G} '' \rightarrow Y$ is surjective in the etale topology, we may find an etale morphism (of finite type and surjective) $Y'\rightarrow Y$ such that the base change ${\cal G} '''$ of ${\cal G} ''$ to $Y'$ admits a section. Then ${\cal G} '''= Y'\times _S{\cal H}$. In particular, the morphism ${\cal G} '''\rightarrow Y'$ is in ${\cal M}$, hence also the morphism ${\cal G} ''' \rightarrow Y$. Finally, the morphism ${\cal G} ''' \rightarrow {\cal G} ''$ is surjective, since $Y'\rightarrow Y$ is surjective, and is an ${\cal M}$-morphism because it becomes an etale morphism after base change to any scheme. By property $M1$, the morphism ${\cal G} '' \rightarrow Y$ is in ${\cal M}$; then by $M2$ the morphism ${\cal G} \rightarrow {\cal F}$ is in ${\cal M}$. \hfill $\Box$\vspace{.1in} \begin{lemma} \mylabel{I.1.c} Suppose ${\cal G}$ is a $P4({\cal M} )$ sheaf, and suppose $X\rightarrow {\cal G}$ is a morphism with $X$ a scheme of finite type over $S$. Then there exists a surjective ${\cal M}$-morphism $R\rightarrow X\times _{{\cal G}}X$ with $R$ a scheme of finite type over $S$. \end{lemma} {\em Proof:} Let $Y\rightarrow {\cal G}$ and $Q\rightarrow Y\times _{{\cal G}}Y$ be the surjective ${\cal M}$-morphisms. There is an etale surjection $X'\rightarrow X$ such that the lifting $X'\rightarrow Y$ exists. Note that $$ X'\times _{{\cal G}} X' = (X' \times _S X')\times _{Y\times _SY}(Y\times _{{\cal G}}Y). $$ We get that $$ R:= (X'\times _{{\cal G}}X')\times _{Y\times _{{\cal G}}Y}Q= (X' \times _S X')\times _{Y\times _SY}Q $$ is a scheme of finite type. But also the morphism $$ R= (X'\times _{{\cal G}}X')\times _{Y\times _{{\cal G}}Y}Q\rightarrow X'\times _{{\cal G}}X' $$ is in ${\cal M}$, by property $M2$.
Finally, $$ X'\times _{{\cal G}} X' = (X\times _{{\cal G}}X) \times _{X\times _SX} X'\times _SX' $$ and $X'\times _SX'\rightarrow X\times _SX$ is an ${\cal M}$-morphism by $M3$ and the remark following the properties $M$. Thus $X'\times _{{\cal G}} X'\rightarrow X\times _{{\cal G}} X$ is in ${\cal M}$ (it is also surjective), so the surjection $R\rightarrow X\times _{{\cal G}}X$ is in ${\cal M}$. \hfill $\Box$\vspace{.1in} \begin{theorem} \mylabel{I.1.d} Suppose ${\cal H}$ is a group sheaf which is $P5({\cal M} )$, and suppose that ${\cal H}$ acts freely on a sheaf ${\cal G}$ with quotient ${\cal F} = {\cal G} /{\cal H} $. Then ${\cal F}$ is $P4({\cal M} )$ (resp. $P5({\cal M} )$) if and only if ${\cal G}$ is $P4({\cal M} )$ (resp. $P5({\cal M} )$). \end{theorem} {\em Proof:} By the lemma, the morphism ${\cal G} \rightarrow {\cal F}$ is in ${\cal M}$. If ${\cal G}$ is $P4({\cal M} )$ then there is a surjective ${\cal M}$-morphism $X\rightarrow {\cal G}$ with $X$ a scheme of finite type over $S$. The morphism $X\rightarrow {\cal F}$ is then surjective and in ${\cal M}$. Let $Y\rightarrow {\cal H}$ be a surjective ${\cal M}$-morphism. Now we have a surjective ${\cal M}$-morphism $$ X\times _SY\rightarrow X\times _S{\cal H} = X\times _{{\cal F}}{\cal G} , $$ and another surjective ${\cal M}$-morphism $$ X\times _{{\cal F}}{\cal G} \rightarrow {\cal F} \times _{{\cal F}}{\cal G} ={\cal G} . $$ Apply the previous lemma to the composition of these two morphisms, using the property $P4({\cal M} )$ of ${\cal G}$. We obtain the existence of a surjective ${\cal M}$-morphism $$ T\rightarrow (X\times _SY)\times _{{\cal G}} (X\times _SY) $$ with $T$ a scheme of finite type over $S$. 
On the other hand, note that we have a surjective ${\cal M}$-morphism $$ X\times _{{\cal F}}X\times _S{\cal H}=(X\times _{{\cal F}}{\cal G} )\times _{{\cal G}} (X\times _{{\cal F}}{\cal G} )\rightarrow X\times _{{\cal F}} X, $$ and a surjective ${\cal M}$-morphism $$ (X\times _SY )\times _{{\cal G}} (X\times _SY )\rightarrow (X\times _{{\cal F}}{\cal G} )\times _{{\cal G}} (X\times _{{\cal F}}{\cal G} ). $$ Composing these three morphisms we obtain a surjective ${\cal M}$-morphism $$ T\rightarrow X\times _{{\cal F}}X. $$ This proves that ${\cal F}$ is $P4({\cal M} )$. Suppose now that ${\cal F}$ is $P4({\cal M} )$. Let $$ X\rightarrow {\cal F} , \;\;\; R\rightarrow X\times _{{\cal F}} X $$ be the presentation given by the property $P4({\cal M} )$. We may choose $X$ in such a way that there exists a lifting $X\rightarrow {\cal G}$ (the freedom to replace $X$ by an etale cover comes from Property $M3$ and Lemma \ref{I.1.c}). This gives an isomorphism $X\times _{{\cal F}}{\cal G}\cong X\times _S{\cal H}$. Let $$ Y\rightarrow {\cal H} , \;\;\; W\rightarrow Y\times _{{\cal H}}Y $$ be the presentation given by the property $P4({\cal M} )$ of ${\cal H}$. We obtain surjective ${\cal M}$-morphisms $$ X\times _SY\rightarrow X\times _S{\cal H} $$ and (defining $U:= X\times _SW$) $$ U:=X\times _SW \rightarrow (X\times _SY)\times _{X\times _S{\cal H} }(X\times _SY). $$ Put $Z:= X\times _SY$. Then we have surjections in ${\cal M}$ $$ Z\rightarrow X\times _{{\cal F}}{\cal G} \rightarrow {\cal G} $$ (giving the first part of property $P4({\cal M} )$), and $$ U\rightarrow Z\times _{X\times _{{\cal F}}{\cal G}}Z. $$ Now, $$ (X\times _{{\cal F}}{\cal G} )\times _{{\cal G}} (X\times _{{\cal F}}{\cal G} )= $$ $$ X\times _{{\cal F}}(X\times _{{\cal F}}{\cal G} )=(X\times _{{\cal F}}X)\times _{{\cal F}}{\cal G} , $$ and we have an ${\cal M}$-surjection $$ R\times _{{\cal F}}{\cal G} \rightarrow (X\times _{{\cal F}}X)\times _{{\cal F}} {\cal G} . 
$$ Since $R\rightarrow {\cal F}$ lifts to $R\rightarrow {\cal G}$ we have $R\times _{{\cal F}}{\cal G}=R\times _S{\cal H}$ and letting $V\rightarrow R\times _SY$ be an etale surjection (needed for a certain step below), we obtain ${\cal M}$-surjections $$ V\rightarrow R\times _SY \rightarrow R\times _S{\cal H} \rightarrow (X\times _{{\cal F}}{\cal G} )\times _{{\cal G}} (X\times _{{\cal F}}{\cal G} ). $$ On the other hand, $$ Z\times _{{\cal G}} Z= Z\times _{X\times _{{\cal F}}{\cal G} }((X\times _{{\cal F}}{\cal G} ) \times _{{\cal G}} (X\times _{{\cal F}}{\cal G} ))\times _{X\times _{{\cal F}}{\cal G} }Z $$ so we obtain a surjection in ${\cal M}$ $$ Z\times _{X\times _{{\cal F}}{\cal G} }V\times _{X\times _{{\cal F}}{\cal G} }Z\rightarrow Z\times _{{\cal G}}Z. $$ We can assume (by choosing $V$ appropriately) that the morphism $$ V\rightarrow (X\times _{{\cal F}}{\cal G} )\times _{{\cal F}} (X\times _{{\cal F}}{\cal G} ) $$ lifts to a morphism $$ V\rightarrow Z\times_{{\cal F}} Z. $$ We then have an ${\cal M}$-surjection $$ U\times _ZV\times _ZU \rightarrow (Z\times _{X\times _{{\cal F}}{\cal G} }Z)\times _ZV\times _Z (Z\times _{X\times _{{\cal F}}{\cal G} }Z) $$ (where the two maps from $V$ to $Z$ used in the fiber product are the two projections composed with $V\rightarrow Z\times _{{\cal F}}Z$). Note that the right hand side is equal to $$ Z\times _{X\times _{{\cal F}}{\cal G} }V\times _{X\times _{{\cal F}}{\cal G} }Z, $$ which admits, as we have seen above, an ${\cal M}$-surjection to $Z\times _{{\cal G}}Z$. Since $U\times _ZV\times _ZU$ is a scheme of finite type over $S$, this completes the verification of the property $P4({\cal M} )$ for ${\cal G}$. We have now shown the equivalence of the conditions $P4({\cal M} )$ for ${\cal F}$ and ${\cal G}$. By the lemma, the morphism ${\cal G} \rightarrow {\cal F}$ is in ${\cal M}$. 
By Property $M1$, the structural morphism ${\cal F} \rightarrow S$ is in ${\cal M}$ if and only if the structural morphism ${\cal G} \rightarrow S$ is. Given the equivalence of the conditions $P4({\cal M} )$, this gives equivalence of the conditions $P5({\cal M} )$. \hfill $\Box$\vspace{.1in} Finally we give a lemma which allows us some flexibility in specifying resolutions. \begin{lemma} \mylabel{I.1.j} Suppose that $F$ is a sheaf on $S$ with surjective ${\cal M}$-morphisms $X\rightarrow F$ and $R \rightarrow X\times _F X$ such that $X$ and $R$ are $P4({\cal M} )$. Then $F$ is $P4({\cal M} )$. \end{lemma} {\em Proof:} Let $X'\rightarrow X$ and $Q\rightarrow X'\times _XX'$, and $R'\rightarrow R$ be the ${\cal M}$-surjections given by the hypotheses. We obtain a surjection $X'\rightarrow F$ in ${\cal M}$. On the other hand, $R'\rightarrow X\times _FX$ is in ${\cal M}$ and surjective, so $$ X'\times _XR'\times _XX' = R'\times _{X\times _FX}(X'\times _FX')\rightarrow X'\times _FX' $$ is an ${\cal M}$-surjection. But the left side is equal to $$ (X'\times _XX')\times _{X'}R' \times _{X'}(X'\times _XX') $$ if we choose (as we may assume is possible) a lifting $R'\rightarrow X'\times _FX'$ over $X\times _FX$. There is thus a surjection in ${\cal M}$ $$ Q\times _{X'}R'\times _{X'}Q\rightarrow X'\times _XR'\times _XX'. $$ Composing we get the required $$ Q\times _{X'}R'\times _{X'}Q\rightarrow X'\times _FX'. $$ \hfill $\Box$\vspace{.1in} \subnumero{Stability of the condition $P5({\cal M} )$} In the following corollary and theorem we will make use of a supplementary condition on the class ${\cal M}$: \newline {\bf M4}\,\, If $f: {\cal F} \rightarrow {\cal G}$ is a surjective morphism of sheaves of groups, then $f$ is in ${\cal M}$. \begin{corollary} \mylabel{I.z} Suppose ${\cal M}$ satisfies condition M4 in addition to the conditions M1-3. If ${\cal G}$ is a $P4({\cal M} )$ group sheaf then it is also $P5({\cal M} )$. 
\end{corollary} Indeed, M4 applied with ${\cal G} = \{ 1\}=S$ gives that the structural morphism ${\cal F} \rightarrow S$ of any sheaf of groups ${\cal F}$ is in ${\cal M}$. \hfill $\Box$\vspace{.1in} \begin{theorem} \mylabel{I.1.e} Suppose ${\cal M}$ satisfies condition M4 in addition to the conditions M1-3. Then if $$ 1\rightarrow {\cal F} \rightarrow {\cal E} \rightarrow {\cal G} \rightarrow 1 $$ is an extension of group sheaves and if any two of the elements are $P5({\cal M} )$, the third one is too. \end{theorem} {\em Proof:} Suppose that ${\cal F}$ is $P5({\cal M} )$. Then ${\cal E}$ is $P5({\cal M} )$ if and only if ${\cal G}$ is $P5({\cal M} )$ (by applying the previous theorem in view of the fact that ${\cal F}$ acts freely on ${\cal E}$ with quotient ${\cal G}$). The remaining case is if ${\cal E}$ and ${\cal G}$ are $P5({\cal M} )$. Then by Lemma \ref{I.1.a}, the kernel ${\cal F}$ (which is an equalizer of two maps ${\cal E} \rightarrow {\cal G}$) is $P4({\cal M} )$. By the above corollary, ${\cal F}$ is $P5({\cal M} )$. \hfill $\Box$\vspace{.1in} \numero{Lifting properties and verticality} We now fill in what class of morphisms ${\cal M}$ we would like to use in the theory sketched above. We could, of course, take ${\cal M} = {\cal X}$ to be the full set of morphisms of ${\cal X}$. This might well be a reasonable choice, but I don't see how to get a good infinitesimal theory in characteristic zero out of this choice. We could also try, for example, to take ${\cal M}$ as the class of flat (or maybe smooth) morphisms. But then any non-flat group scheme over $S$ would be a counterexample to property M4, and as we have seen this property is essential to be able to specify a class of presentable groups closed under kernels, cokernels and extensions. Thus we have to work a little harder to find an appropriate class of morphisms.
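{\em Example:} To make the last point concrete (a standard example, recalled here only for illustration): take $S= Spec (k[t])$ and let $$ G := Spec \, (k[t][x]/(tx)) , $$ the kernel of multiplication by $t$ on the additive group scheme over $S$; the comultiplication $x\mapsto x\otimes 1 + 1\otimes x$ is well defined on the quotient since $t(x\otimes 1 + 1\otimes x)=0$. As a $k[t]$-module, $k[t][x]/(tx)$ has $t$-torsion (the classes of $x, x^2,\ldots$ are killed by $t$), so $G$ is not flat over $S$; its fiber over $t=0$ is the additive group while all other fibers are trivial. The morphism of group sheaves $G\rightarrow \{ 1\} =S$ is surjective, so M4 would require it to be in ${\cal M}$, which fails if ${\cal M}$ is taken to be the class of flat morphisms.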
We say that a morphism of sheaves $a:{\cal F} \rightarrow {\cal G}$ is {\em vertical} (or {\em $S$-vertical}, if the base needs to be specified), if it satisfies the following lifting properties for all $n\geq 1$: Suppose $Y$ is a scheme with $n$ closed subschemes $Y_i\subset Y$, with retractions $r_i:Y\rightarrow Y_i$---commuting pairwise ($r_ir_j=r_jr_i$)---such that for $j\leq i$, $r_i$ retracts $Y_j$ to $Y_j\cap Y_i$. Suppose given a morphism $Y\rightarrow {\cal G}$, and liftings $\lambda _i:Y_i\rightarrow {\cal F}$ such that $\lambda _i|_{Y_i\cap Y_j}= \lambda _j|_{Y_i\cap Y_j}$. Then for any $P\in Y$ lying on at least one of the $Y_i$ there exists an etale neighborhood $P\in Y' \rightarrow Y$ and a lifting $\lambda : Y' \rightarrow {\cal F}$ which agrees with the given liftings $\lambda _i|_{Y_i\times _YY'}$ on $Y_i\times _YY'$. For future reference we call this lifting property $Lift _n(Y; Y_i)$. \begin{lemma} \mylabel{I.u.1} Suppose $f:{\cal F} \rightarrow {\cal G}$ is a morphism of sheaves which are P2. Then $f$ is vertical if and only if $Lift _n(Y; Y_i)$ holds for all systems $(Y; Y_i)$ with $Y$ (and hence $Y_i$) artinian. \end{lemma} {\em Proof:} Suppose given a system $(Y, Y_i)$ which is not artinian. Choose a point $y_0$ (in one of the $Y_i$) and try to find a lifting in an etale neighborhood of $y_0$. We can find liftings on $Y^{(n)}$ (the infinitesimal neighborhoods of $y_0$) by hypothesis. Using the P2 property of ${\cal F}$ and an argument similar to that of Theorem \ref{I.t.2}, we can choose a compatible sequence of liftings. Since ${\cal F}$ is B2 we obtain a lifting over the spectrum of the complete local ring, then by Artin approximation (using B1) we obtain a lifting on an etale neighborhood of $y_0$. \hfill $\Box$\vspace{.1in} \begin{theorem} \mylabel{I.u} We have the following statements: \newline 1.
\,\, If ${\cal F} \rightarrow {\cal G}$ is vertical and if ${\cal H} \rightarrow {\cal G}$ is any morphism of sheaves, then ${\cal F} \times _{{\cal G}}{\cal H} \rightarrow {\cal H}$ is vertical. \newline 2. \,\, If $a:{\cal F} \rightarrow {\cal G}$ is a morphism of sheaves such that for any $S$-scheme $Y$ and morphism $Y\rightarrow {\cal G}$, we have that ${\cal F} \times _{{\cal G}}Y\rightarrow Y$ is vertical, then $a$ is vertical; \newline 3. \,\, If $a:{\cal F} \rightarrow {\cal G}$ and $b:{\cal G} \rightarrow {\cal H}$ are two morphisms which are vertical, then $ba$ is vertical (also the identity is vertical); and \newline 4. \,\, If $a:{\cal F} \rightarrow {\cal G}$ and $b:{\cal G} \rightarrow {\cal H}$ are two morphisms such that $a$ and $ba$ are vertical, and $a$ is surjective, then $b$ is vertical. \newline 5.\,\, The etale surjections between schemes are vertical. \newline 6.\,\, Any injective morphism ${\cal F} \hookrightarrow S$ is vertical. \newline 7.\,\, If $f: {\cal F} \rightarrow {\cal G}$ is a surjective morphism of sheaves of groups, then $f$ is vertical. \end{theorem} {\em Proof:} The lifting property concerns only maps from schemes to ${\cal G}$, so it obviously satisfies parts 1 and 2. For part 3, just lift two times successively (for the identity the lifting property is tautological). For part 4, the proof is by induction on $n$. Keep the notations $a$, $b$, ${\cal F}$, ${\cal G}$ and ${\cal H}$ of part 4. Suppose $n=1$. Then we just have to note that if we have a lifting $Y_1 \rightarrow {\cal G}$ for $b$, then since $a$ is surjective, we can lift further to $Y_1\rightarrow {\cal F}$ (locally in the etale topology). The lifting for $ba$ gives $Y\rightarrow {\cal F}$ and we just project back to ${\cal G}$ to get the lifting for $b$. This gives the case $n=1$. We may assume that the present lemma is known when there are strictly fewer than $n$ subschemes.
Suppose we have liftings $\lambda _i: Y_i\rightarrow {\cal G}$; in order to get a lifting $\lambda$, and using the lifting property for the morphism $ba$, it suffices to choose liftings $\mu _i : Y_i\rightarrow {\cal F}$ with $\mu _i|_{Y_i\cap Y_j}= \mu _j|_{Y_i\cap Y_j}$. We can do this by induction. Suppose we have chosen $\mu _1,\ldots , \mu _{k-1}$. Since $k-1<n$, we know the lemma when there are $k-1$ subschemes; apply the lifting property for the morphism $a$ with respect to the morphism $Y_k\rightarrow {\cal G}$, with respect to the subschemes $Y_k\cap Y_i$, $i=1,\ldots , k-1$, and with respect to the liftings $\mu _j|_{Y_k\cap Y_j}$. We obtain a lifting $\mu _k:Y_k\rightarrow {\cal F}$ such that $\mu _k|_{Y_k\cap Y_j}=\mu _j|_{Y_k\cap Y_j}$. By induction now we obtain all of the liftings $\mu _1,\ldots , \mu _n$. The lifting property for $ba$ gives a lifting $\mu$ and we can set $\lambda := a\mu$. This completes the verification of part 4. For the etale surjections (part 5), use the previous lemma. Suppose $i:{\cal F} \rightarrow S$ is injective (part 6). To verify the lifting property for $Y\rightarrow S$ we just have to verify that this morphism factors through $Y\rightarrow {\cal F}$. For this, use the facts that $Y$ retracts onto $Y_1$ (over $S$) and that the morphism $Y_1\rightarrow S$ factors through $Y_1\rightarrow {\cal F}$. Finally we verify $Lift _n(Y, Y_i)$ for the morphism $f:{\cal F} \rightarrow {\cal G}$ in part 7. Let $r_i: Y\rightarrow Y_i$ denote the retractions. Suppose given $\mu : Y\rightarrow {\cal G}$ and $\lambda _i : Y_i \rightarrow {\cal F}$ satisfying the necessary compatibility conditions. Since $f$ is surjective, we may suppose that there is a lifting $\sigma : Y\rightarrow {\cal F}$ of $\mu$ (by restricting to an etale neighborhood in $Y$). We construct inductively $\phi _i : Y\rightarrow {\cal F}$ lifting $\mu$, with $\phi _i |_{Y_j}=\lambda _j$ for $j\leq i$. 
Denote the multiplication operations in ${\cal F}$ or ${\cal G}$ by $\cdot$. Let $$ h_1:= (\lambda _1\cdot (\sigma |_{Y_1})^{-1})\circ r_1: Y\rightarrow \ker (f). $$ Put $\phi _1 := h_1\cdot \sigma $. Then $\phi _1$ restricts to $\lambda _1$ on $Y_1$, and lifts $\mu$. Suppose we have chosen $\phi _i$. Let $$ h_{i+1}:= (\lambda _{i+1}\cdot (\phi _i |_{Y_{i+1}})^{-1})\circ r_{i+1} :Y\rightarrow \ker (f), $$ and put $\phi _{i+1}:= h_{i+1}\cdot \phi _i $. This lifts $\mu$ because $h_{i+1}$ is a section of $\ker (f)$. For $j\leq i$, $r_{i+1}$ maps $Y_j$ to $Y_j\cap Y_{i+1}$, and there $\lambda _{i+1}=\lambda _j$ agrees with $\phi _i$ so $h_{i+1}|_{Y_j}=1$. We don't destroy the required property for $j\leq i$. On the other hand, we gain the required property for $j=i+1$, by construction. This completes the inductive step to construct $\phi _i$. Finally, $\phi _n$ is the lifting required for property $Lift _n(Y,Y_i)$. This completes the proof of part 7. \hfill $\Box$\vspace{.1in} From the above results, the class ${\cal M}$ of vertical morphisms satisfies the axioms M1, M2, M3, {\em and} M4 of the previous section. This is the principal class ${\cal M}$ to which we will refer, in view of which we drop ${\cal M}$ from the notation when ${\cal M}$ is the class of vertical morphisms. Thus the conditions $P4$ and $P5$ refer respectively to $P4({\cal M} )$ and $P5({\cal M} )$ with ${\cal M}$ the class of vertical morphisms. In particular we obtain the results \ref{I.z.1}, \ref{I.1.a}, \ref{I.1.a.1}, \ref{I.1.b}, \ref{I.1.c}, \ref{I.1.d}, \ref{I.z}, and \ref{I.1.e} for the properties P4 and P5. We have some further results about $P4$ and $P5$. \begin{lemma} \mylabel{I.x} Suppose that ${\cal F}$ is $P4$. In the situation of the lifting property $Lift_n(Y; Y_i)$ for the morphism $X\rightarrow {\cal F}$ given by property $P4$, suppose that $Y$ is the scheme-theoretic union of $Y_1,\ldots , Y_n$. Then the lifting is unique.
\end{lemma} {\em Proof:} Indeed, for morphisms $Y\rightarrow X$ with $X$ a scheme, if $Y$ is the scheme-theoretic union of the $Y_i$ then the morphism is determined by its restrictions to the $Y_i$. \hfill $\Box$\vspace{.1in} \begin{proposition} \mylabel{aaa} The property of being vertical is stable under base change of $S$: suppose $p:S'\rightarrow S$ is a morphism of schemes. If $f:{\cal F} \rightarrow {\cal G}$ is vertical then $p^{\ast}(f):p^{\ast}({\cal F})\rightarrow p^{\ast}({\cal G} )$ is vertical. Furthermore if ${\cal H} \rightarrow {\cal K}$ is an $S'$-vertical morphism of sheaves on ${\cal X} /S'$ then the restriction down to $S$, $$ Res _{S'/S}({\cal H} )\rightarrow Res _{S'/S}({\cal K} ) $$ is $S$-vertical. \end{proposition} {\em Proof:} This follows from the form of the lifting properties. \hfill $\Box$\vspace{.1in} {\em Remark:} We often ignore the notation of ``restriction down''; then the first part of the proposition states that if ${\cal F} \rightarrow {\cal G} \rightarrow S$ with the first morphism being $S$-vertical, then ${\cal F} \times _SS'\rightarrow {\cal G} \times _SS'$ is $S'$-vertical. The last part of the proposition states that if ${\cal H} \rightarrow {\cal K} \rightarrow S'$ with the first morphism being $S'$-vertical, then it is also $S$-vertical. \begin{corollary} \mylabel{I.1.j.1} Suppose ${\cal F}$ is a sheaf over $S$, and suppose $S'\rightarrow S$ is a surjective etale morphism such that ${\cal F} |_{S'}$ is $P4$ over $S'$. Then ${\cal F}$ is $P4$ over $S$. \end{corollary} {\em Proof:} If $(Y,Y_i)$ is a system for the lifting property over $S$, then their pullbacks $(Y',Y_i')$ form such a system over $S'$. If a morphism of sheaves over $S'$ satisfies the lifting property, then we can lift for the system $(Y', Y'_i)$. This gives a lifting over $Y'$ for the system $(Y,Y_i)$, that is a lifting etale locally, thus satisfying the lifting property over $S$. Thus a morphism which is $S'$-vertical is also $S$-vertical.
It follows that ${\cal F} |_{S'}$ is $P4({\cal M} )$ over $S$. Now $$ ({\cal F} |_{S'})\times _{{\cal F}} ({\cal F} |_{S'})= {\cal F} |_{S'}\times _{S'} (S'\times _SS'), $$ and $S'\times _SS'$ is $P4({\cal M} )$ over $S$. Thus $({\cal F} |_{S'})\times _{{\cal F}} ({\cal F} |_{S'})$ is $P4({\cal M} )$ over $S$. We can now apply Lemma \ref{I.1.j} with $X={\cal F} |_{S'}$ and $R=({\cal F} |_{S'})\times _{{\cal F}} ({\cal F} |_{S'})$. \hfill $\Box$\vspace{.1in} \subnumero{Presentable group sheaves} In view of the nice properties of $P5$ group sheaves, we make the following change of notation. A sheaf of groups ${\cal G}$ over ${\cal X} /S$ is a {\em presentable group sheaf} if it is $P5$. Note that we use this terminology only for sheaves of groups. \begin{corollary} \mylabel{uvw} A sheaf of groups which is representable by a scheme of finite type $G$ over $S$, is presentable. \end{corollary} {\em Proof:} This is because we can take $X=G$ and $R$ equal to the diagonal $G$ in the definition of property $P4$; and property $P5$ is then Corollary \ref{I.z}. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{vwx} The category of presentable group sheaves contains the category generated by representable group sheaves under the operations of extensions, kernels, and division by normal subgroups. \end{corollary} {\em Proof:} Theorem \ref{I.1.e}. \hfill $\Box$\vspace{.1in} In particular, the category of presentable group sheaves is much bigger than the category of representable group sheaves. I believe that the category of presentable group sheaves is strictly larger than the category generated by representable group sheaves under the operations of kernel, cokernel and extension. For example the group sheaves $Aut (V)$ for a vector sheaf $V$, which are presentable as shown below, are probably not generated from representable group sheaves by kernels, extensions and quotients (although I don't have a counterexample).
In an intuitive sense, however, the two categories are about the same. The two previous corollaries would also hold for the category of $P5({\cal M} )$ group sheaves for any class ${\cal M}$ satisfying $M1$ through $M4$. We now give the main argument where we use the lifting properties and the notion of verticality, i.e. the special definition of ${\cal M}$. \begin{lemma} If ${\cal G}$ is a sheaf of groups and $X\rightarrow {\cal G}$ is a vertical surjection, with identity section $e:S\rightarrow X$ then (choosing a point $P$ on $e(S)$) there is a lifting of the multiplication to a map of etale germs $$ \mu : (X,P)\times _S(X,P) \rightarrow (X,P) $$ such that $\mu (x,e)=\mu (e,x)=x$. \end{lemma} {\em Proof:} Let $Y=X\times _SX$ and $Y_1 = X\times _Se(S)\cong X$ and $Y_2 = e(S)\times _SX \cong X$. We have retractions $Y\rightarrow Y_1$ and $Y\rightarrow Y_2$ as in the lifting property. The multiplication map ${\cal G} \times _S{\cal G} \rightarrow {\cal G}$ composes to give a map $Y=X\times _SX\rightarrow {\cal G}$. The identity gives liftings $Y_1\rightarrow X$ and $Y_2\rightarrow X$ agreeing on $Y_1\cap Y_2 = e(S)\times _S e(S)$. By the definition of verticality of the morphism $X\rightarrow {\cal G}$, there is an etale neighborhood $P\in Y'\rightarrow Y$ and a lifting to a map $Y'\rightarrow X$ agreeing with our given lifts on $Y'_1$ and $Y'_2$. This gives the desired map (note that when we have written the product of two etale germs, this means the germ of the product rather than the product of the two spectra of henselian local rings). \hfill $\Box$\vspace{.1in} We use this result in the following way. A map $\mu : X\times X\rightarrow X$ (defined on germs at a point $P$) such that $\mu (e,x)=\mu (x,e) = x$, gives rise to an exponential map $T(X)_e^{\wedge}\rightarrow X$ where $T(X)_e$ is the tangent vector scheme (see \S\S 5-8 below) to $X$ along the identity section $e$ and $T(X)_e^{\wedge}$ denotes the formal completion at the zero section. 
To define this exponential map note that the multiplication takes tangent vectors at $e$ to tangent vector fields on $X$ which we can then exponentiate in the classical way. The formal exponential map is an isomorphism between $T(X)_e^{\wedge}$ and the completion $X^{\wedge}$ along $e$. This is a fairly strong condition on $X$ which we will exploit below, notably to get $Lie ({\cal G} )$ and to develop a theory of connectedness. In particular this technique allows us to prove directly (in \S 6 below) that when $k$ is a field of characteristic zero, presentable group sheaves over $Spec (k)$ are just algebraic Lie groups over $k$. It is possible that in characteristic $p$ there would be an appropriate notion of verticality taking into account divided powers, which would have the same effect of enabling a good infinitesimal theory. This is why we have left the class ${\cal M}$ as an indeterminate in the first part of our discussion above. \subnumero{The conditions $P3$ and $P3\frac{1}{2}$} We now add the following two conditions, which will be used as conditions on $\pi _0$ in the last section (in contrast to the condition $P5$ which is to be used on $\pi _1$ and even $\pi _i$, $i\geq 2$). These conditions depend on a functorial choice of class ${\cal M} (Y)$ of morphisms of sheaves over $Y$ for each $Y\in {\cal X}$. We will leave to the reader the (easy) job of stating these properties in this generality, and instead we will state them directly when ${\cal M} (Y)$ is taken as the class of $Y$-vertical morphisms. Note that the properties we are about to state are {\em absolute} properties of sheaves on ${\cal X}$ rather than relative properties of sheaves over some base $S$. 
\noindent {\bf P3.} \,\, A sheaf ${\cal F}$ on ${\cal X}$ is $P3$ if there is a surjection $X\rightarrow {\cal F}$ from a scheme $X$ of finite type over $Spec (k)$, and if there is a surjection $\varphi : R\rightarrow X\times _{{\cal F}}X$ from a scheme $R$ of finite type over $Spec (k)$ such that $\varphi$ is an $X\times X$-vertical morphism. \noindent ${\bf P3\frac{1}{2}.}$ \,\, A sheaf ${\cal F}$ on ${\cal X}$ is $P3\frac{1}{2}$ if there is a surjection $X\rightarrow {\cal F}$ from a scheme $X$ of finite type over $Spec (k)$, and if there is a surjection $\varphi : R\rightarrow X\times _{{\cal F}}X$ from a scheme $R$ of finite type over $Spec (k)$ such that $\varphi$ is an $X$-vertical morphism, where the map to $X$ is the first projection of $X\times _{{\cal F}}X$. {\em Remark:} These properties seem almost identical. The first was referred to in \cite{kobe} (already as property $P3$). However it will turn out that the second version (which I hadn't yet thought of at the time of writing \cite{kobe}) seems more useful---cf.\ \S 10 below. The author apologizes for this complication of the notation! {\em Remark:} $$ P5 \Rightarrow P4\Rightarrow P3\frac{1}{2}\Rightarrow P3 \Rightarrow P2 \Rightarrow P1. $$ These properties will not come into our study of group sheaves over a base $S$. Rather, they come in as conditions on $\pi _0$ of $n$-stacks on ${\cal X}$, in our brief discussion at the end of the paper. In fact we could have put off stating these properties until \S 10, but the reader has probably been wondering for some time already why we are skipping number $3$ in our list of properties. We quickly give the analogues, for $P3\frac{1}{2}$, of some of the basic facts about our other properties. We leave to the reader the task of elucidating the corresponding properties for $P3$. \begin{lemma} \mylabel{P3a} Suppose ${\cal G}$ is $P3\frac{1}{2}$ and suppose $X$ is a scheme of finite type with a morphism $X\rightarrow {\cal G}$.
Then there is a surjection from a scheme of finite type $R\rightarrow X\times _{{\cal G}} X$ which is vertical with respect to the first factor $X$. \end{lemma} {\em Proof:} Let $Y\rightarrow {\cal G}$ and $W\rightarrow Y\times _{{\cal G}}Y$ be the surjections with the second one being vertical with respect to the first factor $Y$. There is an etale covering $X' \rightarrow X$ and a lifting of our morphism to $X'\rightarrow Y$. Then $$ X'\times _{{\cal G}}X'=(X'\times X') \times _{Y\times Y} (Y\times _{{\cal G}}Y) $$ so $$ R:=(X' \times X' )\times _{Y\times Y} W \rightarrow X'\times _{{\cal G}}X' $$ is surjective. It is vertical with respect to the first factor $Y$ and hence vertical with respect to the first factor $X'$. Since $X'\rightarrow X$ is etale, this morphism is also vertical with respect to $X$ (via the first factor). The surjection $$ X'\times _{{\cal G}}X' \rightarrow X\times _{{\cal G}}X $$ is the pullback of the etale morphism $X'\times X'\rightarrow X\times X$ so it is also vertical with respect to the first factor $X$. Composing we obtain $$ R\rightarrow X\times _{{\cal G}}X $$ vertical with respect to the first factor. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{P3b} Suppose ${\cal G}$ is $P3\frac{1}{2}$ and suppose ${\cal F} \subset {\cal G}$. If ${\cal F}$ is $P1$ then it is $P3\frac{1}{2}$. \end{corollary} {\em Proof:} Let $X\rightarrow {\cal F}$ be a surjection from a scheme of finite type $X$. From the above lemma we get a surjection $R\rightarrow X\times _{{\cal G}}X$ which is vertical with respect to the first factor, but since ${\cal F}\rightarrow {\cal G}$ is injective $X\times _{{\cal G}} X=X\times _{{\cal F}}X$ and we're done. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{P3c} Suppose ${\cal G}$ is $P3\frac{1}{2}$ and ${\cal H}$ is $P2$, then the equalizer ${\cal F}$ of any two morphisms $f,g: {\cal G} \rightarrow {\cal H}$ is again $P3\frac{1}{2}$.
\end{corollary} {\em Proof:} By Lemma \ref{I.1.a} with ${\cal M}$ being the class of all morphisms, we obtain that ${\cal F}$ is $P2$ and in particular $P1$. Since it is a subsheaf of ${\cal G}$, the previous corollary applies to show that ${\cal F}$ is $P3\frac{1}{2}$. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{P3d} Suppose ${\cal F} \rightarrow {\cal H}$ and ${\cal G} \rightarrow {\cal H}$ are two morphisms such that ${\cal H}$ is $P2$ and ${\cal F}$ and ${\cal G}$ are $P3\frac{1}{2}$. Then the fiber product ${\cal F} \times _{{\cal H}} {\cal G}$ is $P3\frac{1}{2}$. \end{corollary} {\em Proof:} The fiber product is the equalizer of two morphisms ${\cal F} \times {\cal G} \rightarrow {\cal H}$. Note that the product of two $P3\frac{1}{2}$ sheaves is again $P3\frac{1}{2}$---this comes from the general statement that if ${\cal A} \rightarrow {\cal B}$ is $S$-vertical and if ${\cal A} '\rightarrow {\cal B}'$ is $S'$-vertical then ${\cal A} \times {\cal A}'\rightarrow {\cal B} \times {\cal B} '$ is $S\times S'$-vertical (a direct consequence of the form of the lifting properties). \hfill $\Box$\vspace{.1in} Finally we have the analogue of one half of Theorem \ref{I.1.d}. I didn't quite see how to do the other half. \begin{proposition} \mylabel{P3e} Suppose $S$ is a scheme of finite type, and suppose ${\cal H}$ is a group sheaf over $S$ which is $P5$. Suppose that ${\cal G}\rightarrow S$ is a sheaf and that ${\cal H}$ acts freely on ${\cal G}$ over $S$, with quotient ${\cal F} = {\cal G} /{\cal H}$. If ${\cal F}$ is $P3\frac{1}{2}$ then ${\cal G}$ is $P3\frac{1}{2}$ (here ${\cal F}$ and ${\cal G}$ are being considered as the restrictions down to $Spec (k)$ of the corresponding sheaves over $S$). \end{proposition} {\em Proof:} We follow the proof of the second half of Theorem \ref{I.1.d}. Let $$ X\rightarrow {\cal F} , \;\;\; R\rightarrow X\times _{{\cal F}} X $$ be the presentation given by the property $P3\frac{1}{2}$. 
We may choose $X$ in such a way that there exists a lifting $X\rightarrow {\cal G}$, giving an isomorphism $X\times _{{\cal F}}{\cal G}\cong X\times _S{\cal H}$. Let $$ Y\rightarrow {\cal H} , \;\;\; W\rightarrow Y\times _{{\cal H}}Y $$ be the presentation given by the property $P4({\cal M} )$ of ${\cal H}$. We obtain surjective $S$-vertical morphisms $$ X\times _SY\rightarrow X\times _S{\cal H} $$ and (defining $U:= X\times _SW$) $$ U:=X\times _SW \rightarrow (X\times _SY)\times _{X\times _S{\cal H} }(X\times _SY). $$ Put $Z:= X\times _SY$. Then we have a surjection $$ Z\rightarrow X\times _{{\cal F}}{\cal G} \rightarrow {\cal G} $$ and an $S$-vertical surjection $$ U\rightarrow Z\times _{X\times _{{\cal F}}{\cal G}}Z. $$ Now, $$ (X\times _{{\cal F}}{\cal G} )\times _{{\cal G}} (X\times _{{\cal F}}{\cal G} )= $$ $$ X\times _{{\cal F}}(X\times _{{\cal F}}{\cal G} )=(X\times _{{\cal F}}X)\times _{{\cal F}}{\cal G} , $$ and we have a surjection vertical with respect to the first factor $X$, $$ R\times _{{\cal F}}{\cal G} \rightarrow (X\times _{{\cal F}}X)\times _{{\cal F}} {\cal G} . $$ Since $R\rightarrow {\cal F}$ lifts to $R\rightarrow {\cal G}$ we have $R\times _{{\cal F}}{\cal G}=R\times _S{\cal H}$ and letting $V\rightarrow R\times _SY$ be an etale surjection, we obtain surjections $$ V\rightarrow R\times _SY \rightarrow R\times _S{\cal H} \rightarrow (X\times _{{\cal F}}{\cal G} )\times _{{\cal G}} (X\times _{{\cal F}}{\cal G} ). $$ The first is etale, the second is $S$-vertical, and the third is $X$-vertical for the first factor, so the composition is $X$-vertical. As before $$ Z\times _{{\cal G}} Z= Z\times _{X\times _{{\cal F}}{\cal G} }((X\times _{{\cal F}}{\cal G} ) \times _{{\cal G}} (X\times _{{\cal F}}{\cal G} ))\times _{X\times _{{\cal F}}{\cal G} }Z $$ so we obtain an $X$-vertical surjection $$ Z\times _{X\times _{{\cal F}}{\cal G} }V\times _{X\times _{{\cal F}}{\cal G} }Z\rightarrow Z\times _{{\cal G}}Z. 
$$ We can assume by choosing $V$ appropriately that the morphism $$ V\rightarrow (X\times _{{\cal F}}{\cal G} )\times _{{\cal F}} (X\times _{{\cal F}}{\cal G} ) $$ lifts to a morphism $$ V\rightarrow Z\times_{{\cal F}} Z. $$ We then have an $S$-vertical surjection $$ U\times _ZV\times _ZU \rightarrow (Z\times _{X\times _{{\cal F}}{\cal G} }Z)\times _ZV\times _Z (Z\times _{X\times _{{\cal F}}{\cal G} }Z). $$ The right hand side is equal to $$ Z\times _{X\times _{{\cal F}}{\cal G} }V\times _{X\times _{{\cal F}}{\cal G} }Z, $$ which admits, as we have seen above, an $X$-vertical surjection to $Z\times _{{\cal G}}Z$. By composing we obtain an $X$-vertical, and hence $Z$-vertical surjection $$ U\times _ZV\times _ZU\rightarrow Z\times _{{\cal G}}Z. $$ This completes the proof. \hfill $\Box$\vspace{.1in} \numero{Functoriality} Suppose $F$ is a sheaf over $S$, and suppose $\pi : S'\rightarrow S$ is a morphism. We denote by $\pi ^{\ast}(F)$ the restriction $F|_{{\cal X} /S'}$, which is the sheaf associated to the presheaf $Y\rightarrow S' \mapsto F(Y\rightarrow S)$. If $F$ is representable then $\pi ^{\ast}F$ is also representable by the fiber product $F\times _SS'$. In general, we allow ourselves to use the notations $\pi ^{\ast}F$, $F\times _SS'$ and $F|_{S'}$ interchangeably. We have defined, for a sheaf $G$ on $S'$, the {\em restriction down $Res _{S'/S}(G )$}. Suppose $G$ is a sheaf on $S'$. We defined the direct image by $$ \pi _{\ast}G(Y\rightarrow S):= G(Y\times _SS' \rightarrow S'). $$ The morphism $F(Y\rightarrow S) \rightarrow F(Y\times _SS' \rightarrow S)$ gives a natural morphism $$ F\rightarrow \pi _{\ast} \pi ^{\ast}(F), $$ and the morphism $G(Y\times _S S'\rightarrow S) \rightarrow G (Y\rightarrow S')$ (coming from the graph morphism $Y\rightarrow Y\times _S S'$) gives a natural morphism $$ \pi ^{\ast}\pi _{\ast} (G)\rightarrow G. $$ These functors are adjoints and the above are the adjunction morphisms. 
More precisely, we have a natural isomorphism $$ Hom (F, \pi _{\ast}G)\cong Hom (\pi ^{\ast}F,G). $$ This may be verified directly. {\em Remark:} If $f:A\rightarrow B$ is a vertical morphism over $S'$ then $\pi _{\ast}(f):\pi _{\ast}A\rightarrow \pi _{\ast}B$ is vertical over $S$. To see this, note that if $Y, Y^{(n)}$ is a collection of $S$-schemes with retractions etc. as in the definition of verticality, then $Y\times _S S', Y^{(n)} \times _SS'$ is a collection with retractions over $S'$. The verticality of $\pi _{\ast}(f)$ for the case of $Y, Y^{(n)}$ follows from the verticality of $f$ for the case of $Y\times _S S', Y^{(n)} \times _SS'$. {\em Remark:} Direct and inverse images are compatible with fiber products. For inverse images this is easy. For direct images, suppose we have morphisms $A\rightarrow C$ and $B\rightarrow C$ on $S'$. We obtain morphisms $A\times _CB\rightarrow A$ and $A\times _CB\rightarrow B$ satisfying a universal property. These give morphisms $$ \pi _{\ast}(A\times _CB)\rightarrow \pi _{\ast} A \;\; (resp. \;\; \pi _{\ast}B \, ). $$ We show the universal property: suppose $$ (u,v)\in (\pi _{\ast}A\times _{\pi _{\ast}C}\pi _{\ast}B )(Y), $$ that is $u\in A(Y\times _SS') $ and $v\in B(Y\times _SS')$ with the same image in $C(Y\times _SS')$. We obtain a unique element of $(A\times _CB)(Y\times _SS')$ mapping to $(u,v)$. This gives the claim. \begin{lemma} \mylabel{I.1.g.2} If ${\cal F}$ is a coherent sheaf on $S'$ and $\pi :S'\rightarrow S$ is a finite morphism then $\pi _{\ast}({\cal F} )$ is a coherent sheaf on $S$. \end{lemma} {\em Proof:} We may assume $S$ and $S'$ affine, so that $S=Spec (A)$ and $S'=Spec (A')$ with $A'$ a finite $A$-algebra. The coherent sheaf ${\cal F}$ corresponds to an $A'$-module $M$. This implies that $$ \pi _{\ast}({\cal F} ) (Spec (B)\rightarrow Spec (A)) = {\cal F} ( Spec (B\otimes _AA')) $$ $$ = M\otimes _{A'}(B\otimes _AA') = M\otimes _AB.
$$ This formula means that $\pi _{\ast}({\cal F} )$ corresponds to the same module $M$ considered as an $A$-module; in particular it is coherent. \hfill $\Box$\vspace{.1in} \begin{lemma} \mylabel{I.1.h} If $F$ is $P4$ (resp. $P5$) on $S$ then $\pi ^{\ast}F$ is $P4$ (resp. $P5$) on $S'$. \end{lemma} {\em Proof:} Note first of all that $S$-verticality of a morphism of sheaves over $S'$ implies $S'$-verticality. Now if $F$ is $P4$, let $X\rightarrow F$ and $R\rightarrow X\times _FX$ be the corresponding vertical surjections. We get $\pi ^{\ast}(X) \rightarrow \pi ^{\ast}(F)$ and $$ \pi ^{\ast}(R)\rightarrow \pi ^{\ast}(X\times _FX)= \pi ^{\ast}(X)\times _{\pi ^{\ast}(F)}\pi ^{\ast}(X), $$ surjective and $S$-vertical (hence $S'$-vertical) morphisms. Note that $\pi ^{\ast}(X)$ and $\pi ^{\ast}(R)$ are schemes of finite type over $S'$, so we obtain the proof for $P4$. For $P5$ note that $\pi ^{\ast}(F)=F\times _SS'$, so by Theorem \ref{I.u}, $\pi ^{\ast}(F)\rightarrow S'$ is $S$-vertical; hence it is $S'$-vertical as required. \hfill $\Box$\vspace{.1in} \begin{lemma} \mylabel{I.1.i} Suppose $\pi :S'\rightarrow S$ is a finite morphism and suppose $G'$ is a $P4$ sheaf on $S'$. Then $\pi _{\ast}(G')$ is a $P4$ sheaf on $S$. \end{lemma} {\em Proof:} Let $X'\rightarrow G'$ and $R'\rightarrow X'\times_{G'}X'$ be the surjective vertical morphisms with $X'$ and $R'$ schemes of finite type over $S'$. Let $G:= \pi _{\ast}(G')$ and similarly for $X$ and $R$. By the above remark, we obtain vertical morphisms $X\rightarrow G$ and $R\rightarrow X\times _GX$. (Note that $X\times _GX= \pi _{\ast}(X'\times _{G'}X')$ by above.) In the case of a finite morphism $\pi : S'\rightarrow S$, note that if $f:A\rightarrow B$ is surjective over $S'$ then $\pi _{\ast}(f):\pi _{\ast}A\rightarrow \pi _{\ast}B$ is surjective. This is a general property of sheaves on the etale topology, for which we sketch the proof (an application of Artin approximation).
If $\eta \in \pi _{\ast}(B)(Y)$, this means $\eta : Y\times _SS'\rightarrow B$. For $y'\in Y\times _SS'$ there is an etale neighborhood $U\rightarrow Y\times _SS'$ and a lifting $U\rightarrow A$. We need to find an etale neighborhood $V$ of the image $y\in Y$ and a lifting $V\times _SS' \rightarrow U$. Define a functor $L(V/Y)$ to be the set of liftings $V\times _SS' \rightarrow U$ over $Y\times _SS'$. It is B1, and a lifting exists on $\hat{V}= Spec ({\cal O} _{Y,y}^{\wedge})$, so by Artin approximation there is an etale neighborhood $V$ with a lifting. Applying this to our case, the morphisms $X\rightarrow G$ and $R\rightarrow X\times _GX$ are surjective. By Lemma \ref{I.1.j}, it suffices to prove that $X$ and $R$ are $P4$. Thus it suffices in general to show: if $Z$ is a scheme of finite type over $S'$ then $\pi _{\ast}(Z)$ is $P4$. We make a further reduction: a scheme of finite type can be presented as the kernel of a morphism ${\bf A}^n \rightarrow {\bf A}^m$; the direct image is then the kernel of $\pi _{\ast}{\bf A}^n \rightarrow \pi _{\ast}{\bf A}^m$. The kernel of a morphism of $P4$ sheaves is again $P4$ (Lemma \ref{I.1.a}) so it suffices to treat the case $Z={\bf A}^n$. But in this case, $Z$ is a coherent sheaf and its direct image is also a coherent sheaf. One can see directly that a coherent sheaf ${\cal F}$ is $P4$ by using the fact that it has a resolution of the form $$ {\cal O} ^a \rightarrow {\cal O} ^b \rightarrow {\cal F} \rightarrow 0 $$ (exact even on the big site ${\cal X} /S$), or by looking at Lemmas \ref{I.1.g.1} (below) and \ref{I.1.g.2}. This completes the proof. \hfill $\Box$\vspace{.1in} \begin{lemma} \mylabel{restrictionPreserves?} Suppose $S'\rightarrow S$ is a morphism of schemes of finite type. Suppose ${\cal F}$ is a sheaf on ${\cal X} /S'$. Then $Res _{S'/S}{\cal F}$ is $P3\frac{1}{2}$ if and only if ${\cal F}$ is $P3\frac{1}{2}$. 
\end{lemma} {\em Proof:} This is only a matter of terminology since, $P3\frac{1}{2}$ being a global property, the statement that ${\cal F}$ is $P3\frac{1}{2}$ really means that the restriction of ${\cal F}$ down to $Spec (k)$ is $P3\frac{1}{2}$. It is obviously equivalent to say this after first restricting down to $S$. \hfill $\Box$\vspace{.1in} \numero{Vector sheaves} With this section we begin the part of our study which requires working over a ground field $k$ of characteristic zero. From now on ${\cal X}$ denotes the big etale site of schemes over $Spec (k)$. Before returning to the definition of presentability and its infinitesimal study, we make a detour to discuss {\em vector sheaves}. These are objects which will be the linearizations of presentable group sheaves---we are also interested in vector sheaves as candidates for the $\pi _i (T,t)$ with $t\in T(S)$, for an $n$-stack $T$ on ${\cal X} /S$ for $i\geq 2$. To be slightly more precise, suppose $S\in {\cal X}$ is a scheme over $k$, and let ${\cal X} /S$ denote the category of schemes over $S$. We will define a notion of {\em vector sheaf} on ${\cal X} /S$. This notion is what was called ``U-coherent sheaf'' by Hirschowitz in \cite{Hirschowitz}. The particular case which we call ``vector scheme'' below has already been well known for some time as the ``linear spaces'' of Grauert \cite{Grauert}, appearing notably in Whitney's tangent cones \cite{Whitney}. We feel that the terminology ``vector sheaf'' is more suggestive. Many of the results below seem to be due to Hirschowitz \cite{Hirschowitz} (in particular, the observation that duality is involutive) although some parts of the theory are certainly due to \cite{Fischer}, \cite{Grauert}, \cite{Whitney}. We have integrated these results into our treatment for the reader's convenience. Essentially the only thing new in our treatment is the first lemma (and the analogous statement about extensions). 
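{\em Example:} For orientation, we recall the classical construction underlying these ``linear spaces'' (the notation $V(M)$ is used only in this example). If $S=Spec (A)$ and $M$ is a finitely generated $A$-module, put $$ V(M):= Spec (Sym ^{\cdot}_A M), $$ an affine scheme of finite type over $S$, with $$ V(M)(Y\rightarrow S)= Hom _{{\cal O} (Y)}(M\otimes _A{\cal O} (Y), {\cal O} (Y)), $$ which is an abelian group with a scalar multiplication by ${\cal O} (Y)$. For $M=A^m$ this gives ${\cal O} ^m$; for $M$ not locally free the fibers $V(M)\times _S Spec (k(s))$, whose points form $(M\otimes _Ak(s))^{\vee}$, jump in dimension, which is the phenomenon appearing in Whitney's tangent cones.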
Before starting in on the definition, I would like to make one note of caution. The category of vector sheaves will not satisfy any nice (ascending or descending) chain condition. This is one of the principal differences with vector spaces or modules over a noetherian ring, and could in the long run pose a major problem if one wants to consider an ``infinite dimensional'' version of the theory such as by looking at $ind$- or $pro$- objects. We have a sheaf of rings ${\cal O}$ on ${\cal X}$, defined by ${\cal O} (X):= \Gamma (X, {\cal O} _X)$. Note that it is represented by the affine line. \begin{lemma} \mylabel{I.a} Suppose $F$ is a sheaf of abelian groups on ${\cal X} /S$, representable by a scheme which is affine and of finite type over $S$. If there exists a structure of ${\cal O}$-module for $F$, then this structure is unique. If $F$ and $G$ are two such sheaves, and if $a:F\rightarrow G$ is a morphism of sheaves of groups, then $a$ is a morphism of ${\cal O}$-modules. \end{lemma} {\em Proof:} The first statement of the lemma follows from the second. For the second statement, suppose $u\in F_X$. Consider the element $tu\in F_{X\times {\bf A}^1}$. For any positive integer $n$ we have $tu|_{X\times \{ n\}}=u+\ldots +u$ ($n$ times). The same is true for the image $a(u)$. Therefore $$ a(tu)|_{X\times \{ n\}} =ta(u)|_{X\times \{ n\}} . $$ We obtain two morphisms $X\times {\bf A}^1\rightarrow G$ which are equal on the subschemes $X\times \{ n\}$; this implies that they are equal. (Here is a proof of this: we may suppose that $X$ and the base $S$ are affine, so $X=Spec (A)$ and $G=Spec (B)$ and a morphism $X\times {\bf A}^1\rightarrow G$ corresponds to a morphism $\phi : B\rightarrow A[t]$. Pick any $b\in B$ and write $$ \phi (b)= \sum _{j=0}^m p_{j}t^j; $$ but the matrix $a_{nj}= n^j$ for $n=1,\ldots , m+1$ and $j=0,\ldots , m$ is invertible as a matrix with coefficients in $k$ (a Vandermonde matrix), so there is a matrix $c_{nj}$ with $$ p_j= \sum _{n=1}^{m+1} c_{nj}\phi (b)(n) .
$$ Thus $\phi (b)$ is determined by the values at positive integers $\phi (b)(n)$.) \hfill $\Box$\vspace{.1in} A {\em vector scheme over $S$} is a sheaf $V$ of abelian groups on ${\cal X} /S$ which is a sheaf of ${\cal O}$-modules and such that there exists an etale covering $\{ S_{\alpha}\rightarrow S\}$ such that each $V|_{S_{\alpha}}$ is representable by a scheme $F_{\alpha}$ which is affine of finite type over $S_{\alpha}$. The above lemma shows that the category of vector schemes is a full subcategory of the category of sheaves of abelian groups on ${\cal X}$. In the complex analytic category these objects were called ``linear spaces'' by Grauert and were studied in \cite{Grauert}, \cite{Fischer}. The first remark is that, in fact, the locality in the definition of vector scheme was extraneous. In effect, since the representing schemes $F_{\alpha}$ are unique up to unique isomorphism, they glue together to give a scheme $F$, affine and locally of finite type over $S$. \begin{lemma} \mylabel{I.b} Suppose $V$ is a vector scheme on ${\cal X} /S$, and suppose $S$ is affine. Then there is an exact sequence $$ 0\rightarrow V\rightarrow {\cal O} ^m \rightarrow {\cal O} ^n $$ of abelian sheaves on ${\cal X}$. \end{lemma} {\em Proof:} Write $S=Spec (A)$ and $V=Spec (B)$. The action of ${\bf G} _m$ gives a decomposition $$ B = \bigoplus B^{\lambda} $$ where $B^{\lambda}$ consists of functions $b$ such that $b(tv)= t^{\lambda}b(v)$. The sum is over $\lambda \geq 0$ (integers), since the action extends to an action of the multiplicative monoid ${\bf A}^1$. Furthermore, if $b\in B^0$ then $b(tv)=b(v)$ for all $t$ (including $t=0$), in particular $b(v)=b(0)$. Thus $B^0= A$. If $b\in B^{\lambda }$ for $\lambda >0$ then $b(0)=b(0\cdot v)= 0^{\lambda}b(v)= 0$. Thus the zero section corresponds to the projection onto $B^0=A$. The decomposition is compatible with multiplication in $B$.
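{\em Example:} In the model case $V={\cal O} ^m$ over $S=Spec (A)$ (recalled only as an illustration of the grading), we have $B=A[x_1,\ldots ,x_m]$, the action of ${\bf G} _m$ rescales the variables, and $B^{\lambda}$ is the module of homogeneous polynomials of degree $\lambda$; in particular $B^0=A$ and $B^1=Ax_1\oplus \cdots \oplus Ax_m$, while the comultiplication corresponding to addition is determined by $x_i\mapsto x_i\otimes 1+1\otimes x_i$.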
It is also compatible with the comultiplication $B\rightarrow B\otimes _A B$ corresponding to the addition law on $V$. The comultiplication is $$ B^{\lambda } \rightarrow \bigoplus _{\mu + \nu = \lambda} B^{\mu} \otimes _A B^{\nu}, $$ and furthermore the coefficients $B^{\lambda} \rightarrow B^{\lambda}\otimes _A B^0= B^{\lambda}$ and $B^{\lambda} \rightarrow B^0\otimes _A B^{\lambda}= B^{\lambda}$ are the identity (corresponding to the formula $v+0=v=0+v$). On the other hand, the composition $B\rightarrow B\otimes _A B \rightarrow B$ corresponds to the map $v\mapsto v+v=2v$, which is also scalar multiplication by $t=2$. Thus the composition $$ B^{\lambda } \rightarrow \bigoplus _{\mu + \nu = \lambda} B^{\mu} \otimes _A B^{\nu}\rightarrow B^{\lambda} $$ is equal to multiplication by $2^{\lambda}$. The first and last terms in the sum give a contribution of $b\mapsto 2b$ (by the observation $v+0=v=0+v$), so for $\lambda \geq 2$, the composition $$ B^{\lambda } \rightarrow \bigoplus _{\mu + \nu = \lambda , 0< \mu , \nu < \lambda } B^{\mu} \otimes _A B^{\nu}\rightarrow B^{\lambda} $$ is multiplication by $2^{\lambda}-2$, invertible. Hence every element of $B^{\lambda}$ is expressed as a sum of products of elements of $B^{\mu}$ and $B^{\nu}$ for $\mu , \nu < \lambda$. This proves that $B^1$ generates $B$ as an $A$-algebra. Since $B$ is of finite type over $A$ (a consequence of the fact that we have supposed all of our schemes noetherian), we can choose a finite number of elements $x_1,\ldots , x_m \in B^1$ which generate $B$ as an $A$-algebra, and these elements give an embedding $V\subset {\cal O} ^m$. This embedding is linear, since the elements are elements of $B^1$ (from the above discussion one sees that for $b\in B^1$ we have $b(u+v)=b(u)+ b(v)$). Write $$ B= A[x_1,\ldots , x_m]/I $$ for a homogeneous ideal $I=\bigoplus I^{\lambda}$. We claim that $I$ is generated as an ideal by $I^1$. To see this, let $I'$ be the ideal generated by $I^1$ and put $B'= A[x]/I'$.
Under the comultiplication of $A[x]$ we have $$ I^1 \rightarrow I^1 \otimes _A A \oplus A \otimes _A I^1 , $$ so $I^1$ maps to zero in $B'\otimes _A B'$. Thus so does $I'$. We obtain a comultiplication $$ B'= A[x]/I'\rightarrow B'\otimes _A B', $$ so $Spec (B')$ is a vector scheme too. But the map $B'\rightarrow B$ is surjective and an isomorphism on the pieces of degree $1$. It is compatible with the comultiplication. We claim that it is injective, showing this on the part of degree $\lambda$ by induction on $\lambda$ (starting at $\lambda =2$). If an element $b\in (B')^{\lambda}$ maps to zero in $B$, then by applying the process given above (in the algebra $B'$) we can write $b= \sum b_{\mu} b_{\nu}$ for $\mu , \nu < \lambda$. But $b_{\mu}$ and $b_{\nu}$ map to the elements in $B$ given by applying the same process to the image of $b$; as this image is $0$, so are the images of $b_{\mu}$ and $b_{\nu}$. By the induction hypothesis, the map is injective on the pieces of degrees $\mu , \nu$, so $b_{\mu}=b_{\nu}=0$, giving $b=0$. This induction shows that $B'\cong B$, so $I'=I$ is generated by $I^1$. Since $B$ is of finite type over $A$ (which is noetherian), $I$ is generated by a finite number of elements. This implies that it is generated by a finite number of elements $y_1, \ldots , y_n$ of $I^1$. These elements give a linear map ${\cal O} ^n \rightarrow {\cal O} ^m$, and $V$ is the kernel. \hfill $\Box$\vspace{.1in} We come now to the main definition of this section. A {\em vector sheaf on $S$} is a sheaf of abelian groups $F$ on ${\cal X} /S$ such that there exists an etale covering $\{ S_{\alpha}\rightarrow S\}$ such that for each $\alpha$ there exists an exact sequence $$ U_{\alpha}\rightarrow V_{\alpha}\rightarrow F|_{S_{\alpha}}\rightarrow 0 $$ of sheaves of abelian groups, with $U_{\alpha}$ and $V_{\alpha}$ vector schemes over $S_{\alpha}$. Denote by ${\cal V} (S)$ the category of vector sheaves over $S$.
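{\em Example:} As an illustration of the definition (a standard example, not used in what follows), take $S={\bf A}^1=Spec (k[t])$ and let $F$ be the cokernel, in the category of sheaves of abelian groups on ${\cal X} /S$, of the multiplication map $$ {\cal O} \stackrel{t}{\rightarrow} {\cal O} . $$ Both terms are vector schemes, so $F$ is a vector sheaf. The restriction of $F$ to the open subset $\{ t\neq 0\}$ is zero, while its restriction to the origin is ${\cal O}$; thus a vector sheaf, unlike a vector bundle, may have fibers which jump.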
If $X\rightarrow S$ is an element of ${\cal X} /S$, we denote by $F|_X$ the restriction of $F$ to the category ${\cal X} /X$. It is a vector sheaf over $X$ (this is easy to see from the definitions). If $F$ is a vector sheaf and $f\in F(Y)$ and $a:X\rightarrow Y$ is a morphism, we denote the restriction of $f$ to $X$ by $a^{\ast}(f)$ or just $f|_X$. \begin{lemma} \mylabel{I.c} If $F$ is a vector sheaf, and $S$ is an affine variety, then the cohomology groups $H^i (S, F)$ vanish for $i>0$. If $$ F_1\rightarrow F_2\rightarrow F_3 $$ is an exact sequence of vector sheaves (that is, an exact sequence in the category of abelian sheaves on ${\cal X}$, where the elements are vector sheaves) then for any $X$ over $S$ which is itself an affine scheme, the sequence $$ F_1(X)\rightarrow F_2(X)\rightarrow F_3(X) $$ is exact. \end{lemma} {\em Proof:} Treat first the case where $F$ is a vector scheme. We have an exact sequence $$ 0\rightarrow F\rightarrow {\cal O} ^a \rightarrow {\cal O} ^b $$ by Lemma \ref{I.b}. Let $G$ be the kernel of the morphism ${\cal O} ^a \rightarrow {\cal O} ^b$ on the small etale site over $S$. It is a coherent sheaf. Let $F'$ be the sheaf on ${\cal X}$ whose value on $Y\rightarrow S$ is the space of sections of the pullback (of coherent sheaves) of $G$ to $Y$. There is a surjective morphism $F'\rightarrow F$, which induces $F'(U)\stackrel{\cong}{\rightarrow} F(U)$ for any $U$ etale over $S$ (or even any $U$ which is flat over $S$). Let $K$ denote the kernel of $F'\rightarrow F$. We claim that if $Y$ is any scheme etale over $S$, then $H^i(Y, K)=0$. Prove this by ascending induction on $i$. If the cohomology in degrees $<i$ of all fiber products of elements in all etale covering families of $Y$ vanishes, then the degree $i$ sheaf cohomology is equal to the degree $i$ \v{C}ech cohomology. But the \v{C}ech cohomology is calculated only in terms of the values of the sheaf on the fiber products, and here the values of $K$ are zero. 
Thus $H^i(Y,K)=\check{H}^i(Y, K)=0$, completing the induction. We obtain $H^i(S, F)= H^i(S, F')$. But the higher cohomology of a coherent sheaf on an affine scheme $S$ vanishes (even in the big etale site). We obtain the desired vanishing. For the second part, suppose that $X=S$ is affine. The restriction of the exact sequence to the small etale site (over $X$) remains exact. It can be completed to a $5$-term exact sequence where the first and last terms are also coherent sheaves; then broken down into short exact sequences. The vanishing of $H^1$ of coherent sheaves on the small etale site yields the desired exactness of all the short exact sequences of global sections, and hence the exactness of the sequence in question. \hfill $\Box$\vspace{.1in} {\em Remark:} One can show that a vector sheaf $V$ over an affine $S$ has a resolution by vector schemes, over $S$ rather than over an etale covering of $S$ \cite{Hirschowitz}. \begin{lemma} \mylabel{I.d} Suppose $F$ is a vector sheaf over $S$. Then for any $X\in {\cal X} /S$ and $Y$ a scheme of finite type over $k$, we have $$ F(X\times _{Spec (k)}Y )= F(X)\otimes _k {\cal O} (Y). $$ The isomorphism is given by the pullback $F(X)\rightarrow F(X\times _kY)$ and the scalar multiplication by the pullback of functions on $Y$. \end{lemma} {\em Proof:} We first prove this when $F$ is a vector scheme. There is an exact sequence $$ 0\rightarrow F\rightarrow {\cal O} ^a \stackrel{M}{\rightarrow} {\cal O} ^b . $$ We have $F(X)= \ker (M(X))$ and $F(X\times _kY)= \ker (M(X\times _kY))$. But ${\cal O} (X\times _kY)={\cal O} (X)\otimes _k{\cal O} (Y)$, and $M(X\times _kY)=M(X)\otimes 1$. Since tensoring over $k$ is exact, $$ \ker (M(X)\otimes 1) = \ker (M(X)) \otimes _k {\cal O} (Y) $$ as desired. Now suppose $F$ is a vector sheaf. There is an exact sequence $$ U\rightarrow V\rightarrow F\rightarrow 0. $$ If $Z$ is affine then the sequence $$ U(Z)\rightarrow V(Z)\rightarrow F(Z)\rightarrow 0 $$ remains exact. 
To see this, replace $F$ by a coherent sheaf $F'$ on the small etale site over $Z$. The restriction of $F$ to the small etale site over $Z$ is the quotient $F'$ of the restriction of $U\rightarrow V$ to the small etale site over $Z$, that is to say the sections of $F$ and $F'$ are the same on schemes etale over $Z$. But if $Z$ is affine, then taking global sections preserves surjectivity of a morphism of coherent sheaves. This gives the desired exact sequence (proceed in a similar way for exactness at $V(Z)$). Suppose now that $X$ and $Y$ are affine. Then applying the above to $Z=X$ and $Z=X\times _k Y$ we get $$ U(X)\otimes _k {\cal O} (Y) \rightarrow V(X)\otimes _k{\cal O} (Y) \rightarrow F(X\times _kY)\rightarrow 0 . $$ The first morphism is the same as in the tensor product of $$ U(X)\rightarrow V(X)\rightarrow F(X)\rightarrow 0 $$ with ${\cal O} (Y)$, so the two quotients are isomorphic: $F(X\times _kY)\cong F(X)\otimes _k {\cal O} (Y)$. This completes the case where $X$ and $Y$ are affine. But both sides of the equation have the property that they are sheaves in each variable $X$ and $Y$ separately; thus we may first localize on $X$ and then localize on $Y$, to reduce to the case where $X$ and $Y$ are affine. Finally, suppose $F$ is a vector sheaf, and write $F= \bigcup _{i\in I} F_i$ as a directed union of finite vector sheaves. The tensor product of the union is equal to the union of the tensor products: $$ F(X)\otimes _k{\cal O} (Y) = \bigcup _{i\in I} F_i (X)\otimes _k{\cal O} (Y) = \bigcup _{i\in I} F_i (X\times _kY) = F(X\times _kY). $$ Note that the inclusion maps in the two directed unions are the same (since the isomorphisms established above are uniquely determined by compatibility with the morphisms $F_i (X)\rightarrow F_i (X\times _kY)$ and with scalar multiplication by elements of ${\cal O} (Y)$). This completes the proof. \hfill $\Box$\vspace{.1in} {\em Remark:} We will mostly use this lemma in the following two cases.
Suppose $F$ is a vector sheaf over $S$. Then for any $X\in {\cal X} /S$ we have $F(X\times {\bf A}^1)= F(X)\otimes _k k[t]$. The isomorphism is given by the pullback $F(X)\rightarrow F(X\times {\bf A}^1)$ and the scalar multiplication by the pullback of the coordinate function $t$ on ${\bf A}^1$. Similarly, $F(X\times {\bf G} _m) = F(X)\otimes _k k[t,t^{-1}]$, with the isomorphism uniquely determined by compatibility with the previous one under the inclusion ${\bf G} _m \subset {\bf A}^1$. \begin{lemma} \mylabel{I.e} A vector sheaf has a unique structure of ${\cal O}$-module, and any morphism of vector sheaves is automatically compatible with the ${\cal O}$-module structure. \end{lemma} {\em Proof:} Suppose that $\phi :F\rightarrow G$ is a morphism of vector sheaves. Suppose $X\in {\cal X} /S$. Suppose $f\in F(X\times {\bf A}^1)$. The difference $g=\phi (tf)-t\phi (f )$ is an element of $G(X\times {\bf A}^1)$ which restricts to zero on $X\times \{ n\}$ for any integer $n$. We can write $$ g=\sum _{i=1}^pg_i t^i $$ with $g_i\in G(X)$ (by the previous lemma; the constant term vanishes since $g$ restricts to zero on $X\times \{ 0\}$). We know that $$ g(n)=\sum _{i=1}^pg_i n^i = 0 $$ for any integer $n$. But in $k$ the matrix $(n^i)_{1\leq n, i\leq p}$ has an inverse $(c_{ni})$, and we have $$ g_i = \sum _{n=1}^p c_{ni}g(n) = 0. $$ Therefore $\phi (tf)-t\phi (f )=g=0$, for any $f$. Thus $\phi $ is compatible with multiplication by $t$. Now suppose $\lambda \in {\cal O} (X)$. This gives a morphism $\gamma : X\rightarrow X\times {\bf A} ^1$ such that $\gamma ^{\ast} (tp_1^{\ast}(f))= \lambda f$ for any $f\in F(X)$ or $G(X)$ (here $p_1:X\times {\bf A}^1\rightarrow X$ is the projection). The fact that $\phi$ is a morphism of sheaves means that it is compatible with $\gamma ^{\ast}$ and $p_1^{\ast}$, so we have $$ \phi (\lambda f)= \phi (\gamma ^{\ast} (tp_1^{\ast}(f)))= \gamma ^{\ast} (\phi (tp_1^{\ast}(f))) $$ $$ = \gamma ^{\ast}(t\phi (p_1^{\ast} (f)))= \gamma ^{\ast}(tp_1^{\ast}(\phi (f)))= \lambda \phi (f).
$$ Thus $\phi$ is compatible with scalar multiplication. This fact, applied to the identity of $F$, implies that the scalar multiplication is unique if it exists. For existence, note that any morphism of vector schemes is automatically a morphism of ${\cal O}$-modules, so the quotient has a structure of ${\cal O}$-module. Thus any finite vector sheaf has a structure of ${\cal O}$-module. If $F$ is a vector sheaf expressed as a directed union $F= \bigcup _{i\in I} F_i$ of finite vector sheaves, then the inclusions in the directed union are compatible with the ${\cal O}$-module structures; thus the union has an ${\cal O}$-module structure. \hfill $\Box$\vspace{.1in} The conclusion of this lemma is that the category of vector sheaves, with morphisms equal to those morphisms of abelian sheaves compatible with the ${\cal O}$-module structure, is a full subcategory of the category of sheaves of abelian groups on ${\cal X} /S$. Next we establish a Krull-type property. \begin{lemma} \mylabel{I.f} Suppose that $F$ is a vector sheaf over $S$, with $f\in F(Y)$, and suppose that for every $X\rightarrow Y$ where $X$ is an artinian scheme, $f|_X=0$. Then $f=0$. Suppose $\phi : F\rightarrow G$ is a morphism of vector sheaves such that for every $X\rightarrow S$ with $X$ artinian, $\phi |_X=0$. Then $\phi =0$. \end{lemma} {\em Proof:} We work with vector schemes over base schemes which are not necessarily of finite type over $k$ (the definition is the same, but we require additionally that the vector scheme be of finite type over the base). If $U\rightarrow V$ is a morphism of vector schemes over a henselian local ring $A$, and if $v$ is a section of $V$ over $A$ such that for each $n$ there exists $u_n\in U(Spec (A/{\bf m}^n))$ with $u_n$ mapping to the restriction of $v$, then there exists a section $u$ of $U$ over $A$ which maps to $v$.
This follows from the strong Artin approximation theorem at maximal ideals, applied to finding sections of the morphism $U\times _VSpec (A)\rightarrow Spec (A)$. Now onto the proof of the lemma. For the first statement, any section $f$ is contained in a finite vector subsheaf of $F$, so we may suppose that $F$ is a finite vector sheaf. Choose a presentation $$ U\rightarrow V \rightarrow F \rightarrow 0 $$ by vector schemes. We may replace $Y$ by a covering, so we may suppose that our section $f$ comes from a section $v$ of $V$. From the previous paragraph, for every henselized local ring $A$ of $Y$, there exists a section $u$ of $U(Spec (A))$ mapping to $v$. But any such $A$---henselization at a point $P$---is the direct limit of algebras $A_i$ etale of finite type over $Y$ (which give etale neighborhoods of $P$), and the space of sections is the direct limit: $$ U(Spec (A))= \lim _{\rightarrow } U( Spec (A_i )). $$ Thus there is a section $u_i$ over some $Spec (A_i)$ mapping to $v$. Thus every point $P$ of $Y$ has an etale neighborhood on which there is a lifting of $v$ to a section of $U$. This implies that the image of $v$ in the cokernel $F$, computed in the etale topology, is zero; that is, $f=0$. This gives the first statement, and the second statement follows easily from this. \hfill $\Box$\vspace{.1in} {\em Remark:} An alternative to the above proof is to use Lemma \ref{Krull}. The utility of this property comes from the following fact. \begin{corollary} \mylabel{I.g} If $F$ is a vector sheaf, and if $Y\rightarrow S$ is an element of ${\cal X} /S$ with $Y$ artinian, then the functor $F_Y: Z\mapsto F(Y\times _{Spec (k)}Z)$ from schemes over $Spec (k)$ to sets, is represented by an additive group scheme (that is, a finite dimensional vector space) over $k$. This vector space is the $k$-module $F(Y)$. \end{corollary} {\em Proof:} By Lemma \ref{I.d}, we have $F_Y(Z)= F(Y)\otimes _k {\cal O} (Z)$ which is the space of morphisms of schemes from $Z$ to the vector space $F(Y)$.
Thus $F_Y$ is represented by the vector space $F(Y)$. Note that from the exact sequences used in the proof of Lemma \ref{I.d}, $F(Y)$ is a finite-dimensional $k$-vector space. \hfill $\Box$\vspace{.1in} The group scheme ${\bf G} _m$ acts on every vector sheaf, by scalar multiplication. This action may be thought of as an action of the functor ${\bf G} _m (X)$ on $F(X)$, or as an automorphism of $F(X\times {\bf G} _m )$ (multiplication by $t$) which is natural in $X$. We have seen above that if $F\rightarrow G$ is a morphism of sheaves of abelian groups between two vector sheaves, then it is compatible with the ${\bf G} _m$ action. Suppose $A$ is a vector scheme, and $F$ is a vector sheaf. We look at $F(A)$, the space of sections over the scheme $A$. Let $F(A)^{\lambda}$ denote the subgroup of elements $f\in F(A)$ such that $f(ta)=t^{\lambda} f(a)$. Here $a\mapsto ta$ is considered as a morphism $A\times {\bf G} _m\rightarrow A$ over $S$, and $f(a)\mapsto t^{\lambda}f(a)$ is the automorphism of $F(A\times {\bf G} _m)$ given by scalar multiplication by $t^{\lambda} \in k[t,t^{-1}]$; the notation $f$ in the second half of the formula actually denotes the pullback of $f$ to $A\times {\bf G} _m$. \begin{lemma} \mylabel{I.h} With the above notations, $F(A)$ decomposes as a direct sum $$ F(A) = \bigoplus _{\lambda \in {\bf Z} ,\lambda \geq 0} F(A)^{\lambda} . $$ This direct sum decomposition is natural with respect to morphisms $F\rightarrow G$, and the linear piece $F(A)^1$ is exactly the space of morphisms of vector sheaves $A\rightarrow F$. \end{lemma} {\em Proof:} Recall that $F(A\times {\bf A}^1)= F(A)\otimes _kk[t]$, which we will just write as $F(A)[t]$. The morphism of scalar multiplication $A\times{\bf A}^1\rightarrow A$ gives $\Psi _t: F(A)\rightarrow F(A)[t]$ defined by $(\Psi _tf)(a):= f(ta)$ (to be accurate, this should be defined in terms of restriction maps for the morphisms involved, but we keep this notation for simplicity).
Then $F(A)^{\lambda}$ is the set of $f$ such that $\Psi _tf = t^{\lambda}f$ in $F(A)[t]$. Let $\Psi _s[t]: F(A)[t]\rightarrow F(A)[s,t]$ denote the extension of $\Psi _s: F(A)\rightarrow F(A)[s]$ to the polynomials in $t$. We have $$ (\Psi _s[t]\Psi _tf)(a)= f(tsa)= (\Psi _{st}f)(a). $$ Write $$ \Psi _t(f)= \sum _{i=0}^{\infty} \psi _i(f)t^i, $$ where $\psi _i(f)\in F(A)$ and for any $f$, there are only a finite number of nonzero $\psi _i (f)$. Our previous formula becomes $$ \sum _{i,j} \psi _i (\psi _j (f))s^it^j = \sum _k \psi _k (f)(st)^k. $$ Comparing terms we see that $\psi _i(\psi _j (f))=0$ for $i\neq j$ and $\psi _i(\psi _i(f))=\psi _i (f)$. But in general $f\in F(A)^{\lambda}$ if and only if $\psi _i(f)=0$ for $i\neq \lambda$ and $\psi _{\lambda}(f)=f$. Therefore $\psi _i (f)\in F(A)^i$. Restrict to $t=1$, and note that the composed morphism $a\mapsto (a,1)\mapsto a$ is the identity so $\Psi _1(f)=f$. We get $$ f= \sum _{i=0}^{\infty} \psi _i (f) , $$ and this sum is actually finite. Thus every element of $F(A)$ can be expressed as a finite sum of elements of the $F(A)^{\lambda}$. On the other hand, this expression is unique: if $f= \sum f_i$ with $f_i\in F(A)^i$ then $$ \sum \psi _i(f)t^i=\Psi _t(f)=\sum \Psi _t(f_i)= \sum \psi _i (f_i)t^i = \sum f_i t^i, $$ and comparing coefficients of $t^i$ we get $f_i = \psi _i (f)$. This completes the proof of the decomposition (note that in working with ${\bf A}^1$ instead of ${\bf G} _m$ we obtain automatically that the exponents are nonnegative). We have to show that $F(A)^1$ is equal to the space of linear morphisms from $A$ to $F$. A linear morphism gives an element of $F(A)^1$ (since it is compatible with the action of ${\cal O}$ by Lemma \ref{I.e}), and the resulting map from the space of morphisms to $F(A)^1$ is injective, since $F(A)$ is the space of morphisms of functors $A\rightarrow F$. Finally, we show surjectivity. For this, suppose given an element $\phi \in F(A)^1$.
Suppose $Y$ is artinian, and $F$ is a vector sheaf over $S$. Then the functor $Z\mapsto F(Y\times Z )$ is represented by a vector space $F_Y$ over $k$ (Corollary \ref{I.g}). Our element of $F(A)$ now gives a morphism of schemes $\phi _Y: A_Y \rightarrow F_Y$ between these two vector spaces. It is compatible with scalar multiplication, so it is linear. In particular, if $u,v\in A_Y(Spec (k))= A(Y)$ then $\phi (u+v)= \phi (u)+\phi (v)$ in $F_Y(Spec (k))= F(Y)$. Now suppose $X$ is any element of ${\cal X} /S$. We show that $\phi : A(X) \rightarrow F(X)$ is a morphism of abelian groups. Suppose $u,v\in A(X)$. Let $f=\phi (u+v)-\phi (u)-\phi (v)\in F(X)$. By the previous paragraph, for any $Y\rightarrow X$ with $Y$ artinian, we have $f|_Y=0$. But the Krull property of Lemma \ref{I.f} then implies that $f=0$. This shows that $\phi$ is a morphism of sheaves of abelian groups. \hfill $\Box$\vspace{.1in} If $F$ and $G$ are sheaves of abelian groups, we denote by $Hom (F,G)$ the internal $Hom$, that is the sheaf of homomorphisms of sheaves of abelian groups from $F$ to $G$. The value $Hom (F,G)(X)$ is the space of morphisms of sheaves of abelian groups from $F|_{{\cal X} /X}$ to $G|_{{\cal X} /X}$ (this is already a sheaf). \begin{corollary} \mylabel{I.i} If $F\rightarrow G $ is a surjection of vector sheaves, and if $A$ is a vector scheme, then the morphism of sheaves $$ Hom (A, F) \rightarrow Hom (A, G) $$ is surjective. If $X$ is affine then $$ Hom (A,F)(X)\rightarrow Hom (A, G)(X) $$ is surjective. \end{corollary} {\em Proof:} It suffices to prove the second statement. We may assume that $X=S$. We have $$ Hom (A, G)(S)= G(A)^1 $$ by the last statement of Lemma \ref{I.h}. Since $A$ is affine, the morphism $F(A)\rightarrow G(A)$ is surjective, and by the naturality of the decomposition in Lemma \ref{I.h} this implies that $Hom (A,F)(S)=F(A)^1\rightarrow G(A)^1$ is surjective, giving the corollary.
\hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{I.j} If $\phi : F\rightarrow G$ is a morphism of vector sheaves, then ${\rm coker}(\phi )$ and ${\rm ker} (\phi )$ are vector sheaves. \end{corollary} {\em Proof:} We may suppose that $S$ is affine and small enough. Choose presentations by vector schemes (cf the remark before Lemma \ref{I.d}) $$ U\rightarrow V\rightarrow F\rightarrow 0 $$ and $$ 0\rightarrow P \rightarrow R\rightarrow T \rightarrow G \rightarrow 0 $$ (note that the kernel $P$ is automatically a vector scheme). The morphism $V\rightarrow G$ lifts to a morphism $V\rightarrow T$, by the previous corollary, and we obtain a presentation $$ R \oplus V \rightarrow T \rightarrow {\rm coker} (\phi ) \rightarrow 0. $$ The fiber products $U\times _T R$ and $V\times _T R$ are vector schemes, and we have a presentation $$ U\times _T R\rightarrow V\times _T R \rightarrow {\rm ker}(\phi )\rightarrow 0. $$ \hfill $\Box$\vspace{.1in} Now we have shown that the category of vector sheaves is an abelian subcategory of the category of sheaves of abelian groups on ${\cal X} /S$. Suppose $A$ is a vector scheme and $F$ is a vector sheaf. Let ${\bf 3}$ denote the automorphism of $A$ obtained by multiplication by the scalar $3$ (any integer $\neq 0, \pm 1$ will do). We have $$ F(A)=\bigoplus F(A)^{\lambda } $$ (the decomposition given by Lemma \ref{I.h}) where $F(A)^{\lambda}$ may be characterized as the subspace of elements $f$ such that ${\bf 3}^{\ast}(f)= 3^{\lambda}f$. In particular, the linear subspace $Hom (A, F)= F(A)^1$ is characterized as the subspace of elements $f$ such that ${\bf 3}^{\ast}(f)= 3f$. \begin{theorem} \mylabel{I.k} Suppose $E$ and $G$ are vector sheaves, and $$ 0\rightarrow E \rightarrow F\rightarrow G \rightarrow 0 $$ is an extension in the category of sheaves of abelian groups on ${\cal X} /S$. Then $F$ is a vector sheaf. \end{theorem} {\em Proof:} We proceed in several steps. We may assume that $S$ is affine and small enough.
Let $$ \begin{array}{ccccccc} & V & & & & B & \\ & \downarrow & & & & \downarrow & \\ & U & & & & A & \\ & \downarrow & & & & \downarrow & \\ 0 \rightarrow & E &\rightarrow & F & \rightarrow & G & \rightarrow 0 \\ & \downarrow & & & & \downarrow & \\ & 0 & & & & 0 & \end{array} $$ be presentations for $E$ and $G$. {\em Step 1.} {\em There exists a lifting of the morphism $A\rightarrow G$ to an element $\phi \in F(A)$ with $({\bf 3}^{\ast} - 3)^2\phi =0$.} The cohomology of $E$ over the affine scheme $A$ is zero, so $$ 0\rightarrow E(A)\rightarrow F(A)\rightarrow G(A)\rightarrow 0 $$ is exact. Let $\alpha : A\rightarrow G$ denote the morphism in the presentation above, and choose $f\in F(A)$ mapping to $\alpha$. Then write $$ ({\bf 3}^{\ast} - 3)f = \sum e_{\lambda} $$ with $e_{\lambda} \in E(A)^{\lambda}$ (thus ${\bf 3}^{\ast} e_{\lambda} = 3^{\lambda}e_{\lambda}$). Let $$ \phi = f-\sum c_{\lambda} e_{\lambda} $$ for $c_{\lambda} = (3^{\lambda }-3)^{-1}$ when $\lambda \neq 1$ (and $c_1=0$). We then have \begin{eqnarray*} ({\bf 3}^{\ast} - 3)\phi &=& ({\bf 3}^{\ast} - 3)f -\sum c_{\lambda}({\bf 3}^{\ast} - 3) e_{\lambda} \\ &=&\sum e_{\lambda}-\sum c_{\lambda}(3^{\lambda }-3)e_{\lambda} \\ &=& e_1. \end{eqnarray*} On the other hand, $({\bf 3}^{\ast} - 3)e_1=0$, so we get $$ ({\bf 3}^{\ast} - 3)^2\phi =0. $$ In other words, $\phi$ is in the generalized eigenspace for the eigenvalue $3$ of the transformation ${\bf 3}^{\ast}$. {\em Step 2.} {\em The extension $F$ satisfies the Krull property of Lemma \ref{I.f}: if $f\in F(X)$ such that for any artinian $Y\rightarrow X$, $f|_Y=0$, then $f=0$.} Under these hypotheses, $f$ maps to an element $g\in G(X)$ satisfying the same vanishing, so by Lemma \ref{I.f} we have $g=0$; thus $f$ comes from an element $e\in E(X)$. This element again satisfies the same vanishing, so by Lemma \ref{I.f}, $e=0$.
{\em Step 3.} {\em If $A$ is a vector scheme and $F$ is an extension of two vector sheaves, then any element $\phi \in F(A)$ with $({\bf 3}^{\ast} - 3)^2\phi =0$ is a morphism of sheaves of abelian groups from $A$ to $F$.} Suppose $Y$ is artinian, and $G$ is a vector sheaf over $S$. Then the functor $Z\mapsto G(Y\times Z )$ is represented by a vector space $G_Y$ over $k$. If $F$ is an extension of two vector sheaves $E$ and $G$, then let $F_Y$ denote the functor $Z\mapsto F(Y\times Z)$. We obtain an extension $$ 0\rightarrow E_Y \rightarrow F_Y \rightarrow G_Y \rightarrow 0 $$ in the category of sheaves of abelian groups over $Spec (k)$. But since the cohomology of the affine space $G_Y$ with coefficients in the additive group $E_Y$ vanishes, there is a lifting of the identity to a section $u\in F_Y(G_Y)$. Using $u$ we obtain an isomorphism of functors $F_Y \cong E_Y \times G_Y$, so $F_Y$ is a scheme. Since $F_Y$ is a sheaf of abelian groups, $F_Y$ is an abelian group-scheme over $k$. Since it is an extension of two additive groups, it is additive. Our element of $F(A)$ now gives a morphism of schemes $\phi _Y: A_Y \rightarrow F_Y$ between these two vector spaces. We still have $({\bf 3}^{\ast}-3)^2\phi _Y =0$. But $F_Y(A_Y)$ decomposes into eigenspaces $$ F_Y(A_Y)=\bigoplus F_Y(A_Y)^{\lambda} $$ where $f\in F_Y(A_Y)^{\lambda} \Leftrightarrow f(ta)= t^{\lambda}f(a)$. In particular, $F_Y(A_Y)^{\lambda}$ is the $3^{\lambda}$-eigenspace for ${\bf 3}^{\ast}$. But since the space $F_Y(A_Y)$ is the direct sum of eigenspaces, the generalized eigenspaces are equal to the eigenspaces, so $\phi _Y \in F_Y(A_Y)^1= Hom (A_Y, F_Y)$. In particular, if $u,v\in A_Y(Spec (k))= A(Y)$ then $\phi (u+v)= \phi (u)+\phi (v)$ in $F_Y(Spec (k))= F(Y)$. Now suppose $X$ is any element of ${\cal X} /S$. We show that $\phi : A(X) \rightarrow F(X)$ is a morphism of abelian groups. Suppose $u,v\in A(X)$. Let $f=\phi (u+v)-\phi (u)-\phi (v)\in F(X)$.
By the previous paragraph, for any $Y\rightarrow X$ with $Y$ artinian, we have $f|_Y=0$. But the Krull property of Step 2 then implies that $f=0$. This shows that $\phi$ is a morphism of sheaves of abelian groups. {\em Step 4.} {\em There is a surjection from a vector scheme to $F$.} The direct sum of the morphism $U\rightarrow F$ with our lifting $\phi : A\rightarrow F$ gives a surjection $U\oplus A \rightarrow F \rightarrow 0$. In fact, this fits into a diagram $$ \begin{array}{ccccccc} 0 \rightarrow &U& \rightarrow & U\oplus A& \rightarrow & A&\rightarrow 0\\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \rightarrow & E &\rightarrow & F & \rightarrow & G & \rightarrow 0\\ & \downarrow & & \downarrow & & \downarrow & \\ & 0 & & 0 & & 0 . & \end{array} $$ {\em Step 5.} {\em There is a surjection from a vector scheme to the kernel of $U\oplus A \rightarrow F$ (proving the theorem).} Taking the kernels along the top row of the above diagram gives $$ \begin{array}{ccccccc} 0 \rightarrow & K & \rightarrow & L & \rightarrow & M & \rightarrow 0\\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \rightarrow &U& \rightarrow & U\oplus A& \rightarrow & A&\rightarrow 0\\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \rightarrow & E &\rightarrow & F & \rightarrow & G & \rightarrow 0\\ & \downarrow & & \downarrow & & \downarrow & \\ & 0 & & 0 & & 0 . & \end{array} $$ But $K$ and $M$ are vector sheaves, and we have surjections $V\rightarrow K \rightarrow 0$ and $B\rightarrow M \rightarrow 0$. By repeating the above argument in this case, we obtain a surjection $$ V\oplus B \rightarrow L \rightarrow 0, $$ finally giving our presentation $$ V\oplus B \rightarrow U\oplus A \rightarrow F \rightarrow 0. $$ Thus $F$ is a vector sheaf. \hfill $\Box$\vspace{.1in} Our abelian category ${\cal V}$ of vector sheaves is therefore closed under extensions of sheaves of abelian groups. \subnumero{Duality} Suppose $F, G$ are vector sheaves. 
We have defined $Hom (F,G)$ which is for now a sheaf of abelian groups. Put $$ F^{\ast} := Hom (F, {\cal O} ). $$ If $\phi :F\rightarrow G$ is a morphism of vector sheaves, then we obtain a morphism $\phi ^t:G^{\ast} \rightarrow F^{\ast}$, and the construction $\phi \mapsto \phi ^t$ preserves composition (reversing the order, of course). \begin{lemma} \mylabel{I.l} {\rm (Hirschowitz \cite{Hirschowitz})} Suppose $$ 0\rightarrow U\rightarrow V\rightarrow W \rightarrow F \rightarrow 0 $$ is an exact sequence with $U$, $V$ and $W$ vector schemes. Then taking the dual gives an exact sequence $$ 0\rightarrow F^{\ast}\rightarrow W^{\ast}\rightarrow V^{\ast} \rightarrow U^{\ast} \rightarrow 0. $$ \end{lemma} {\em Proof:} Note first that the compositions are zero, since taking the dual is compatible with compositions (and the dual of the zero map is zero!). The map $F^{\ast}\rightarrow W^{\ast}$ is injective because $W\rightarrow F$ is surjective (so any morphism $F\rightarrow {\cal O}$ restricting to $0$ on $W$, must be zero). The morphism $V^{\ast }\rightarrow U^{\ast}$ is surjective: if $a:U\rightarrow {\cal O}$ is a morphism, it can be interpreted as a section of ${\cal O} (U)^1$; but since $U\subset V$ is a closed subscheme, we can extend this to a section $a'\in {\cal O} (V)$, then let $a''$ be the component of $a'$ in ${\cal O} (V)^1$; restriction from ${\cal O} (V)$ to ${\cal O} (U)$ is compatible with the ${\bf G} _m$ action, hence with the decomposition of Lemma \ref{I.h}, so $a''$ restricts to $a$. Suppose $b : W\rightarrow {\cal O}$ restricts to zero on $V$; then it factors through the quotient sheaf $F=W/V$, so it comes from $F^{\ast}$. Thus the sequence is exact at $W^{\ast}$. We still have to prove exactness at $V^{\ast}$. Choose embeddings $U\hookrightarrow {\cal O} ^m$ and $W\hookrightarrow {\cal O} ^n$.
Then extend the first to a morphism $V\rightarrow {\cal O} ^m$; combining with the second we obtain $V\hookrightarrow {\cal O} ^{m+n}$, fitting into a diagram $$ \begin{array}{ccccccc} 0\rightarrow & {\cal O} ^m& \rightarrow & {\cal O} ^{m+n} & \rightarrow & {\cal O} ^n & \rightarrow 0 \\ &\downarrow &&\downarrow && \downarrow & \\ 0\rightarrow & U& \rightarrow & V & \rightarrow & W & . \end{array} $$ Furthermore, $U= {\cal O} ^m \cap V$ as subschemes of ${\cal O} ^{m+n}$ (by the injectivity of $W\rightarrow {\cal O} ^n$). Given a linear map $\lambda : V\rightarrow {\cal O}$ such that $\lambda |_{U}=0$, extend it to $\varphi : {\cal O} ^{m+n}\rightarrow {\cal O}$ such that $\varphi |_{{\cal O} ^{m}}=0$. Replace $\varphi$ by its linear part under the decomposition of Lemma \ref{I.h} (this will preserve the property $\varphi |_{{\cal O} ^{m}}=0$ as well as the property of restricting to $\lambda$). Our $\varphi$ now descends to a map ${\cal O} ^n\rightarrow {\cal O}$, restricting to $\varphi |_W$ which extends $\lambda$. Note that in the previous paragraph, we have used the following general fact: if $X,Y\subset Z$ are closed subschemes of an affine scheme, and $\lambda \in {\cal O} (X)$ such that $\lambda |_{X\cap Y}=0$, then there exists $\varphi \in {\cal O} (Z)$ such that $\varphi |_X=\lambda$ and $\varphi |_Y=0$. To prove this, let $I_X$, $I_Y$ and $I_{X\cap Y}$ denote the ideals of $X$, $Y$ and $X\cap Y$ in the coordinate ring ${\cal O} (Z)$. The definition of the scheme-theoretic intersection $X\cap Y$ is that $I_{X\cap Y}= I_X+I_Y$, and our statement follows from the translation that $$ I_Y \rightarrow I_{X\cap Y}/I_X \subset {\cal O} (Z)/I_X $$ is surjective. We have completed the proof of the lemma. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{I.m} The functor $F\mapsto F^{\ast}$ is an exact functor from the category of finite vector sheaves to the category of sheaves of abelian groups.
\end{corollary} {\em Proof:} Suppose $$ 0\rightarrow F' \rightarrow F \rightarrow F'' \rightarrow 0 $$ is an exact sequence of finite vector sheaves. Choose presentations $$ 0\rightarrow U'\rightarrow V'\rightarrow W'\rightarrow F' \rightarrow 0 $$ and $$ 0\rightarrow U''\rightarrow V''\rightarrow W''\rightarrow F'' \rightarrow 0, $$ and combine these into a presentation $$ 0\rightarrow U\rightarrow V\rightarrow W\rightarrow F \rightarrow 0 $$ with $U=U'\oplus U''$, $V=V'\oplus V''$ and $W=W'\oplus W''$ (using the method of Theorem \ref{I.k}, which is easier since we now have the required lifts automatically). These fit together into a diagram $$ \begin{array}{ccccccc} &0&&0&&0 & \\ & \downarrow & & \downarrow && \downarrow \\ 0\rightarrow &U'& \rightarrow &U& \rightarrow &U''& \rightarrow 0 \\ & \downarrow & & \downarrow && \downarrow \\ 0\rightarrow &V'& \rightarrow &V& \rightarrow &V''& \rightarrow 0 \\ & \downarrow & & \downarrow && \downarrow \\ 0\rightarrow &W'& \rightarrow &W& \rightarrow &W''& \rightarrow 0 \\ & \downarrow & & \downarrow && \downarrow \\ 0\rightarrow &F'& \rightarrow &F& \rightarrow &F''& \rightarrow 0 \\ & \downarrow & & \downarrow && \downarrow \\ &0&&0&&0 & \end{array} $$ where all the rows and columns are exact. Apply duality to this diagram; we obtain a diagram with the arrows reversed, with the columns exact, by the lemma. Furthermore, the same lemma shows that the upper three rows are exact (in fact, this is easier because the rows in the original diagram are split, by construction). This implies that the bottom row is exact, as desired. \hfill $\Box$\vspace{.1in} A {\em coherent sheaf} is a sheaf which (locally) has a presentation of the form $$ {\cal O} ^n\rightarrow {\cal O} ^m\rightarrow F\rightarrow 0. $$ In particular, note that it is a vector sheaf.
This coincides with the usual definition: if $S$ is affine and $X\rightarrow S$ is a morphism, then $F(X)=F(S)\otimes _{{\cal O} (S)}{\cal O} (X)$ (this is because the same is true for ${\cal O}$, and the presentation remains exact on the right after tensoring). As usual, we can assume that a presentation as above exists globally over any affine base. \begin{corollary} \mylabel{I.n} The dual of a coherent sheaf is a vector scheme and vice-versa. \end{corollary} {\em Proof:} Note that ${\cal O} ^{\ast}={\cal O}$. Taking the dual of a presentation of a coherent sheaf gives $$ 0\rightarrow F^{\ast} \rightarrow {\cal O} ^m \rightarrow {\cal O} ^n, $$ so $F^{\ast}$ is a vector scheme (the kernel here is a closed subscheme of ${\cal O} ^n$). Conversely, if $V$ is a vector scheme, take an exact sequence such as given in Lemma \ref{I.b}, and apply the dual. We obtain a presentation for $V^{\ast}$ as a coherent sheaf. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{I.o} The dual of a vector sheaf is again a vector sheaf. \end{corollary} {\em Proof:} If $F$ is a vector sheaf, choose a presentation $$ U\rightarrow V\rightarrow F\rightarrow 0 $$ by vector schemes. Taking the dual gives $$ 0\rightarrow F^{\ast}\rightarrow U^{\ast}\rightarrow V^{\ast}. $$ By the previous corollary, $U^{\ast}$ and $V^{\ast}$ are coherent sheaves, in particular vector schemes. Thus $F^{\ast}$ is the kernel of a morphism of vector sheaves, so $F^{\ast}$ is a vector sheaf. \hfill $\Box$\vspace{.1in} \begin{lemma} \mylabel{I.p} If $F$ is a vector sheaf, then $F^{\ast\ast}=F$ (via the natural morphism). \end{lemma} {\em Proof:} If $F$ is a vector scheme, this follows from the construction given in Corollary \ref{I.n}: write $F=\ker (M)$ as the kernel of a matrix $M:{\cal O} ^m\rightarrow {\cal O} ^n$; then $F^{\ast} = {\rm coker} (M^t)$ is the cokernel of the transpose matrix (and this $M^t$ is really just the transpose, keeping the same coefficients as in $M$). 
Finally, $F^{\ast\ast}=\ker (M^{tt})$, but the transpose of the transpose is the same matrix $M=M^{tt}$, so $F=F^{\ast\ast}$. (The same argument works for coherent sheaves, of course). If $F$ is any vector sheaf, choose a presentation $$ U\rightarrow V\rightarrow F\rightarrow 0 $$ and take the double dual. Since $U^{\ast\ast}=U$ and $V^{\ast\ast}=V$, and duality is exact, we get $$ U\rightarrow V\rightarrow F^{\ast\ast}\rightarrow 0, $$ so $F^{\ast\ast}=F$. \hfill $\Box$\vspace{.1in} We have now shown that duality is an exact contravariant involution on the category ${\cal V} $ of vector sheaves, interchanging vector schemes and coherent sheaves. \begin{lemma} \mylabel{I.q} The vector schemes are projective objects in ${\cal V} $, and the coherent sheaves are injective objects. There exist enough projectives and injectives (assuming that $S$ is affine). \end{lemma} {\em Proof:} The argument given above shows that a vector scheme $A$ is a projective object: if $F\rightarrow G$ is a surjection of vector sheaves then, since $A$ is affine, $F(A)^1\rightarrow G(A)^1$ is surjective. By definition, every vector sheaf admits a surjection from a vector scheme, so there are enough projectives. By duality, the coherent sheaves are injective and there are enough injectives. \hfill $\Box$\vspace{.1in} Taking the dual of the three step resolution by vector schemes shows that every vector sheaf $F$ admits a resolution $$ 0\rightarrow F \rightarrow U\rightarrow V\rightarrow W\rightarrow 0, $$ with $U$, $V$ and $W$ coherent sheaves (in particular, injective). \subnumero{Internal $Hom$ and tensor products} We begin with a corollary to the last lemma. \begin{corollary} \mylabel{I.r} If $A$ is a vector scheme, then the functor $V\mapsto Hom (A,V)$ from ${\cal V} $ to the category of abelian sheaves, is exact. If $F$ is a coherent sheaf, then the functor $V\mapsto Hom (V,F)$ is exact.
\end{corollary} {\em Proof:} If $S$ is affine, the functors $V\mapsto Hom (A,V)(S)$ and $V\mapsto Hom (V,F)(S)$ are exact, by the lemma. But the restriction of a vector scheme or a coherent sheaf, to any object $X\in {\cal X} /S$ is again a vector scheme or coherent sheaf over $X$, so we obtain exactness over every affine object; and since exactness is a local condition, we get exactness. \hfill $\Box$\vspace{.1in} \begin{lemma} \mylabel{I.s} If $F$ and $G$ are vector sheaves, then $Hom (F,G)$ is a vector sheaf. \end{lemma} {\em Proof:} Suppose $F$ and $G$ are vector schemes. Then the exact sequence $$ 0\rightarrow G\rightarrow {\cal O} ^a \rightarrow {\cal O} ^b $$ yields an exact sequence $$ 0\rightarrow Hom (F,G)\rightarrow Hom (F, {\cal O} ^a) \rightarrow Hom (F,{\cal O} ^b); $$ but the middle and right terms are direct sums of the dual $F^{\ast}$ which is a vector sheaf, so the kernel $Hom (F,G)$ is a vector sheaf. Now suppose $F$ is a vector scheme and $G$ is a vector sheaf; resolving $G$ by vector schemes we obtain a resolution of $Hom (F,G)$ by vector sheaves, from the previous sentence. Thus $Hom (F,G)$ is a vector sheaf in this case too. Now suppose $F$ is a vector sheaf, and choose a resolution $$ U\rightarrow V\rightarrow F\rightarrow 0 $$ by vector schemes. The functor $W\mapsto Hom (W,G)$ is contravariant and left exact for any $G$, so we obtain an exact sequence $$ 0\rightarrow Hom (F,G)\rightarrow Hom (V,G)\rightarrow Hom (U,G). $$ The middle and right terms are vector sheaves by the previous arguments, so the kernel is also. This completes the proof in general. \hfill $\Box$\vspace{.1in} We now define the {\em tensor product} $F\otimes _{{\cal O}} G$ of two vector sheaves to be $$ F\otimes _{{\cal O}}G:= (Hom (F, G^{\ast}))^{\ast}. $$ Beware that this is not just the tensor product of sheaves of ${\cal O}$-modules (although this will be the case if $F$ and $G$ are coherent sheaves).
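{\em Remark:} To illustrate the definition in the simplest case, suppose $S=Spec (k)$ and $F$, $G$ are the coherent sheaves associated to finite dimensional vector spaces $V$ and $W$. Then $Hom (F, G^{\ast})$ is the coherent sheaf associated to the space of linear maps $V\rightarrow W^{\ast}$, and since $Hom _k(V, W^{\ast})^{\ast}\cong V\otimes _kW$ canonically for finite dimensional spaces, we get $$ F\otimes _{{\cal O}}G = (Hom (F, G^{\ast}))^{\ast} \cong V\otimes _k W , $$ recovering the usual tensor product of vector spaces in this case.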
We can also define the {\em cotensor product} $$ F\otimes ^{{\cal O}} G:= Hom (F^{\ast} , G). $$ Again, beware here that this is not equal to the tensor product. The difference is seen in noting that the tensor product is right exact as usual, whereas the cotensor product is left exact. (These exactness statements hold in both variables since the tensor and cotensor products are commutative, as we see below). Duality permutes the tensor and cotensor products: $$ (F\otimes _{{\cal O}}G)^{\ast}= F^{\ast}\otimes ^{{\cal O}}G^{\ast} $$ and $$ (F\otimes ^{{\cal O}}G)^{\ast}= F^{\ast}\otimes _{{\cal O}}G^{\ast}. $$ Define recursively $$ V_1\otimes \ldots \otimes V_n := V_1\otimes (V_2\otimes \ldots \otimes V_{n}) $$ for either one of the tensor products. By {\em multilinear form} $V_1\times \ldots \times V_n\rightarrow W$ we mean simply a multilinear morphism of sheaves of groups. In the same way as above for the linear morphisms, we obtain a vector sheaf $Multi (V_1\times \ldots \times V_n , W)$ of multilinear forms (denoted $Bil (\;\;\; )$ when $n=2$). \begin{proposition} \mylabel{I.s.1} 1. \,\, There is a natural isomorphism $\alpha _{U,V}:Hom (U^{\ast}, V)\cong Hom (V^{\ast}, U)$ and $\alpha _{U,V}\alpha _{V,U}$ is the identity. \newline 2. \,\, There is a natural isomorphism $$ Multi (V_1\times \ldots \times V_n, W)\cong Hom (W^{\ast}, Multi (V_1\times \ldots \times V_n, {\cal O} )). $$ 3. \,\, There is a natural isomorphism $$ Multi (V_1\times \ldots \times V_n , W)\cong Hom (V_1, Multi (V_2\times\ldots \times V_n, W)). $$ \end{proposition} {\em Proof:} In each case one defines natural maps in both directions and checks that the two compositions are the identity. \hfill $\Box$\vspace{.1in} \begin{theorem} \mylabel{I.s.2} Suppose $V_i$ are vector sheaves, $i=1,\ldots , n$.
There is a multilinear form $$ \mu : V_1\times \ldots \times V_n \rightarrow V_1 \otimes _{{\cal O}} \ldots \otimes _{{\cal O}} V_n $$ which is universal in the sense that if $$ \phi: V_1\times \ldots \times V_n\rightarrow W $$ is a multilinear form then there is a unique morphism $$ \psi : V_1 \otimes _{{\cal O}} \ldots \otimes _{{\cal O}} V_n \rightarrow W $$ such that $\phi = \psi \circ \mu $. \end{theorem} {\em Proof:} Note first that for $n=2$ there is a natural bilinear map $U\times V \rightarrow Hom (U, V^{\ast})^{\ast}= U\otimes _{{\cal O}}V$. Inductively this gives the multilinear map for any $n$. The universal property says that the induced map $$ Hom (V_1\otimes _{{\cal O}}\ldots \otimes _{{\cal O}}V_n,W)\rightarrow Multi (V_1,\ldots , V_n , W) $$ should be an isomorphism. We prove this by induction on $n$, so we may suppose it is true for $n-1$. By the definition of the multiple tensor product, the quantity on the left is $$ Hom (Hom (V_1, (V_2\otimes _{{\cal O}}\ldots \otimes _{{\cal O}}V_n)^{\ast})^{\ast},W). $$ By part 1 of the proposition, this is equal to $$ Hom (W^{\ast}, Hom (V_1, (V_2\otimes _{{\cal O}}\ldots \otimes _{{\cal O}}V_n)^{\ast})). $$ By induction, $(V_2\otimes _{{\cal O}}\ldots \otimes _{{\cal O}}V_n)^{\ast}=Multi (V_2\times \ldots \times V_n,{\cal O} )$. Coupled with part 3 of the proposition we get $$ Hom (W^{\ast}, Multi (V_1\times \ldots \times V_n,{\cal O} )) $$ which then is equal to the right hand side above, by part 2 of the proposition. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{I.s.3} The tensor and cotensor products have natural commutativity and associativity isomorphisms satisfying the usual constraints. \end{corollary} {\em Proof:} For the tensor product this follows from the universal property and the fact that the notion of multilinear form is independent of the order of the variables. For the cotensor product this follows because it is the dual of the tensor product.
\hfill $\Box$\vspace{.1in} We can define symmetric and exterior powers, either with respect to the tensor product or with respect to the cotensor product. Let $S_n$ denote the symmetric group on $n$ objects. Let $V^{\otimes _{{\cal O}}n}$ (resp. $V^{\otimes ^{{\cal O}}n}$) denote the tensor product (resp. cotensor product) of $n$ copies of a vector sheaf $V$. Then $S_n$ acts on $V^{\otimes _{{\cal O}}n}$ (resp. $V^{\otimes ^{{\cal O}}n}$) because of the commutativity and associativity. This representation is completely reducible (this can be seen object-by-object). The components are vector sheaves; this can be seen by noting that the fixed part is the kernel of a morphism of vector sheaves (the direct sum of $1-\gamma $ for $\gamma \in S_n$); to get the other components, apply the same remark to tensor products with the irreducible representations of $S_n$. The trivial component of $V^{\otimes _{{\cal O}}n}$ (resp. $V^{\otimes ^{{\cal O}}n}$) is denoted by $Sym _{{\cal O}}^n(V)$, the symmetric power (resp. $Sym ^{{\cal O}}_n(V)$, the symmetric copower). The component corresponding to the sign representation is denoted by $\bigwedge _{{\cal O}}^n(V)$, the exterior power (resp. $\bigwedge ^{{\cal O}}_n(V)$, the exterior copower). {\em Remark:} There is a natural morphism $U\otimes _{{\cal O}}V\rightarrow U\otimes ^{{\cal O}}V$, that is to say $$ Hom (U,V^{\ast})^{\ast}\rightarrow Hom (U^{\ast}, V). $$ However, this is not an isomorphism. A counterexample can be constructed by looking for a case where the cotensor product is left exact but not right exact (and noting that the tensor product is right exact), or vice-versa. We have the expression $U\otimes ^{{\cal O}}V= Bil (U^{\ast}\times V^{\ast},{\cal O})$. The failure of the above morphism to be an isomorphism implies in particular that there are bilinear functions on $U^{\ast}\times V^{\ast}$ which are not sums of tensors $u\otimes v$.
This is a big difference from the case of schemes (for example if $U$ and $V$ are coherent, so that $U^{\ast}$ and $V^{\ast}$ are vector schemes, then the bilinear functions {\em are} sums of tensor products). \subnumero{Automorphisms of vector sheaves} We end our discussion of vector sheaves by showing how they give examples of presentable group sheaves. \begin{lemma} \mylabel{I.1.g.1} If $V$ is a vector sheaf over $S$, then $V$ is presentable. \end{lemma} {\em Proof:} Suppose $V$ is a vector scheme. Then taking $X=V$ and $R= V\times _VV=V$ we obtain the required presentation (note that the identity morphisms are vertical)---so $V$ is $P4$, and then $P5$ by Corollary \ref{I.z}. It follows from Theorem \ref{I.1.d} that the quotient of one vector scheme by another is again $P5$; and finally that the quotient of a vector scheme by such a quotient is $P5$. In view of the 3-stage resolution of any vector sheaf by vector schemes, we obtain the lemma. \hfill $\Box$\vspace{.1in} One of the main examples of presentable group sheaves is given by the following theorem. \begin{theorem} \mylabel{I.1.g} Suppose $V$ is a vector sheaf over $S$. Then the group sheaf $Aut (V)$ is presentable. \end{theorem} {\em Proof:} By the previous lemma and Lemma \ref{I.s}, $Hom (V,V)$ is $P5$. We can express $$ Aut (V)\subset Hom (V,V)\times Hom (V,V) $$ as the equalizer of the two morphisms $$ \begin{array}{ccc} Hom (V,V)\times Hom (V,V)&\rightarrow &Hom (V,V)\times Hom (V,V)\\ (a,b) & \mapsto & (ab,ba) \\ (a,b)&\mapsto & (1,1). \end{array} $$ Apply Lemma \ref{I.1.a} to obtain that $Aut (V)$ is $P4$, and then Corollary \ref{I.z} to obtain that it is $P5$. \hfill $\Box$\vspace{.1in} A particular case of this construction is when $V$ is a coherent sheaf which we denote by ${\cal F}$. There is a presentation $$ U_2 \stackrel{\phi}{\rightarrow} U_1 \rightarrow {\cal F} \rightarrow 0 $$ where $U_i = {\cal O} ^{a_i}$.
Let $Aut (U_2, U_1, \phi )$ denote the group sheaf of automorphisms of the morphism $U_2 \rightarrow U_1$. Any such automorphism gives an automorphism of ${\cal F}$ so we have a morphism $$ Aut (U_2, U_1, \phi )\rightarrow Aut ({\cal F} ). $$ \begin{lemma} \mylabel{surjection} This morphism is a surjection onto $Aut ({\cal F} )$, and $Aut (U_2, U_1, \phi )$ is representable by a group scheme over $S$. \end{lemma} {\em Proof:} The representability by a group scheme is clear, since $Aut (U_i)$ are group schemes (isomorphic to $GL(a_i)$) and the condition of compatibility with $\phi$ is a closed condition so $Aut (U_2, U_1, \phi )$ is a closed subscheme of $Aut (U_1)\times Aut (U_2)$. Suppose $S' \rightarrow S$ is a scheme and $P\in S'$ is a point. Suppose $\eta : {\cal F} |_{S'}\rightarrow {\cal F} |_{S'}$ is an automorphism. Let $$ U'_2 \stackrel{\phi '}{\rightarrow }U'_1 \rightarrow {\cal F} |_{S'}\rightarrow 0 $$ be a minimal resolution of ${\cal F} |_{S'}$ at the point $P$ (that is to say that the value $\phi '(P)$ is identically zero and the rank of $U'_2$ is minimal). Then there are locally free $W_i\cong {\cal O} ^{b_i}$ on $S'$ such that $U_i|_{S'} \cong U'_i \oplus W_i$ and such that the map $\phi |_{S'}$ can be written in block form with respect to this decomposition, with a morphism $\psi '$ in the block of the $W_i$ and the map $\phi '$ in the block of the $U'_i$, such that $\psi '$ is surjective. Our morphism $\eta$ extends to a morphism of resolutions $U'_{\cdot} \rightarrow U'_{\cdot}$ which is an isomorphism near $P$ by the minimality of the resolution (in fact the values $U'_i(P)$ are the $Tor ^i_{{\cal O} _{S'}}({\cal F} |_{S'}, k_P)$ and an isomorphism of ${\cal F} |_{S'}$ induces an isomorphism on the $Tor ^i$). We can complete this with the identity in the block of the $W_i$ to get an isomorphism of resolutions $U_i |_{S'}$ inducing $\eta$. This gives the desired surjectivity.
\hfill $\Box$\vspace{.1in} {\em Question:} Does a similar result hold for the automorphisms of any vector sheaf? \numero{Tangent sheaves of presentable sheaves} Suppose $S'\rightarrow S$ is an $S$-scheme. Put $$ Y:= S' \times Spec (k[\epsilon _1 ,\epsilon _2, \epsilon _3]/(\epsilon _i^2, \epsilon _i\epsilon _j )) $$ with the subschemes $$ Y_i:= S' \times Spec (k[\epsilon _i ]/(\epsilon _i^2)) $$ and $$ Y_{ij}:= S' \times Spec (k[\epsilon _i ,\epsilon _j]/(\epsilon _i^2, \epsilon _j^2, \epsilon _i\epsilon _j )). $$ Note that $Y=Y_1\cup Y_2\cup Y_3$, and $Y_{ij}=Y_i\cup Y_j$, as well as $Y_i\cap Y_k = S'\subset Y$ and $Y_{ij}\cap Y_{jk}=Y_j$ (for $i\neq k$). It should be stated explicitly that $Y_{ij}$ is the closed subscheme defined by the ideal $(\epsilon _k)$, $k\neq i,j$; and $Y_i$ is the closed subscheme defined by the ideal $(\epsilon _j,\epsilon _k)$, $j,k\neq i$. We need a weaker version of the notion of verticality. We say that a morphism ${\cal F} \rightarrow {\cal G}$ of sheaves is {\em $T$-vertical} if it satisfies the lifting property $Lift_2(Y_{ij};Y_i ,Y_j)$ and $Lift _3(Y; Y_{12},Y_{23}, Y_{13})$ (for any $S'$). Note that these systems satisfy the retraction hypotheses in the lifting property, so the property of $T$-verticality is weaker than the property of verticality. The result of Theorem \ref{I.u} holds also for $T$-verticality, so the class ${\cal T}$ of $T$-vertical morphisms satisfies the axioms M1-M4. In particular the properties $P4$ and $P5$ imply $P4({\cal T} )$ and $P5({\cal T})$ respectively. The advantage of the weaker property of $T$-verticality is that if $X\rightarrow Z$ is a morphism of schemes over $S$, then it is $T$-vertical. To prove this, note that the properties $Y=Y_1\cup Y_2\cup Y_3$, $Y_{ij}=Y_i\cup Y_j$, $Y_i\cap Y_k = S'\subset Y$ and $Y_{ij}\cap Y_{jk}=Y_j$ mean that for defining morphisms from $Y$ to a scheme (or from $Y_{ij}$ to a scheme) it suffices to have compatible morphisms on the $Y_{ij}$ or on the $Y_i$.
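To make this gluing concrete, here is a sketch at the level of rings, in the case $S'=Spec (k)$: the decomposition $Y_{ij}=Y_i\cup Y_j$ with $Y_i\cap Y_j=S'$ corresponds to the cartesian square $$ k[\epsilon _i ,\epsilon _j]/(\epsilon _i^2, \epsilon _j^2, \epsilon _i\epsilon _j )\;\cong \; k[\epsilon _i]/(\epsilon _i^2)\times _k k[\epsilon _j]/(\epsilon _j^2), $$ a pair $(a+b\epsilon _i ,\, a+c\epsilon _j )$ agreeing modulo $(\epsilon _i )$ and $(\epsilon _j )$ corresponding exactly to $a+b\epsilon _i+c\epsilon _j$. Thus a morphism from $Y_{ij}$ to a scheme is the same thing as a compatible pair of morphisms on $Y_i$ and $Y_j$, and similarly for $Y$.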
({\em Caution:} We did not include the lifting condition $Lift _1(Y_1; S')$ in the notion of $T$-verticality; morphisms of schemes do not necessarily satisfy this lifting property!) The conclusion of the previous paragraph and property $M1$ for $T$-verticality is that if ${\cal F}$ is a $P4({\cal T} )$ sheaf then the structural morphism $p:{\cal F} \rightarrow S$ is $T$-vertical; thus $P4({\cal T} )\Leftrightarrow P5({\cal T})$. \begin{lemma} \mylabel{I.1.e.1} Suppose $f:{\cal F}\rightarrow {\cal G} $ is a morphism of $P4$ sheaves. Then $f$ is $T$-vertical. Furthermore, the liftings in the lifting properties for $f$, for the systems $(Y_{ij};Y_i ,Y_j)$ and $(Y; Y_{12},Y_{23}, Y_{13})$, are unique. \end{lemma} {\em Proof:} For $T$-verticality, we can choose vertical surjections $X\rightarrow {\cal F}$ and $Y\rightarrow {\cal G}$ so that there is a lifting $X\rightarrow Y$. This lifting is $T$-vertical since it is a morphism between schemes (cf the above remark). Hence the composition $X\rightarrow {\cal G}$ is $T$-vertical. By Theorem \ref{I.u}, part 4 for $T$-verticality, applied to the composition $X\rightarrow {\cal F} \rightarrow {\cal G}$, we obtain $T$-verticality of the morphism $f$. To prove the uniqueness, note that liftings to schemes are unique since $Y=Y_1\cup Y_2\cup Y_3$ and $Y_{ij}=Y_i\cup Y_j$. Then descend the uniqueness down from $X$ to ${\cal F}$ where $X\rightarrow {\cal F}$ is the vertical (hence $T$-vertical) morphism provided by the property $P4$. This descent of the uniqueness property is immediate from the lifting property for $X\rightarrow {\cal F}$. \hfill $\Box$\vspace{.1in} In the statement of the following theorem, the condition is $P4$ and not $P4({\cal T} )$ (i.e. that isn't a misprint). \begin{theorem} \mylabel{I.1.f} Suppose ${\cal F}\rightarrow {\cal G} $ is a morphism of $P4$ sheaves on $S$. Suppose $u :S\rightarrow {\cal F}$ is a section. 
Then the relative tangent sheaf $T(f)_{u}$ over $S$, defined by $$ T(f)_{u} (b:S'\rightarrow S):= \{ \eta :S' \times Spec (k[\epsilon ]/(\epsilon ^2))\rightarrow {\cal F}\;\; :\;\;\;\; f \eta = fubp_1 \;\; \mbox{and} \;\; \eta |_{S'}= ub \} , $$ has a natural structure of sheaf of abelian groups making it a vector sheaf. \end{theorem} {\em Proof:} We first define the natural abelian group structure on this sheaf. Suppose $$ \eta _i: S'\times Spec(k[\epsilon _i]/(\epsilon _i^2))\rightarrow {\cal F} $$ are sections of $T(f )_u$ over $S'$ ($i=1,\ldots , 3$). ({\em Note:} for the definition of the group law we only need $i=1,2$; we need $i=1,2,3$ only to check that it is associative.) Here (and below) we attach various subscripts to the variables $\epsilon$. Use the notations established above: $$ Y:= S' \times Spec (k[\epsilon _1 ,\epsilon _2, \epsilon _3]/(\epsilon _i^2, \epsilon _i\epsilon _j )) $$ with the subschemes $$ Y_i:= S' \times Spec (k[\epsilon _i ]/(\epsilon _i^2)) $$ and $$ Y_{ij}:= S' \times Spec (k[\epsilon _i ,\epsilon _j]/(\epsilon _i^2, \epsilon _j^2, \epsilon _i\epsilon _j )). $$ Note that $Y=Y_1\cup Y_2\cup Y_3$, and $Y_{ij}=Y_i\cup Y_j$. Again $Y_{ij}$ is the closed subscheme defined by the ideal $(\epsilon _k)$, $k\neq i,j$; and $Y_i$ is the closed subscheme defined by the ideal $(\epsilon _j,\epsilon _k)$, $j,k\neq i$. The systems $(Y_{ij};Y_i ,Y_j)$ and $(Y; Y_{12},Y_{23}, Y_{13})$ satisfy a unique lifting property for the morphism $f$ (Lemma \ref{I.1.e.1}). Note that $Y_{ij}\cap Y_{jk}=Y_j$ (for $i\neq k$). We apply this first to the system $(Y_{ij}; Y_i,Y_j)$. There is a unique morphism $$ \eta _{ij}: Y_{ij}\rightarrow {\cal F} $$ over the base morphism $Y_{ij}\rightarrow S\rightarrow {\cal G}$ and agreeing with $\eta _i$ (resp. $\eta _j$) on $Y_i$ (resp. $Y_j$).
Let $$ \delta _{ij}: S' \times Spec (k[\epsilon ]/(\epsilon ^2))\rightarrow Y_{ij} $$ be the diagonal and---for future use---let $$ \delta _{123}: S' \times Spec (k[\epsilon ]/(\epsilon ^2))\rightarrow Y $$ be the triple diagonal. Then we put $$ \eta _i+\eta _j:= \eta _{ij} \circ \delta _{ij} . $$ This gives a composition which is obviously commutative (the definition is symmetric in the two variables). To check that it is associative, apply unique lifting for $(Y,Y_{ij})$ to get a unique $\eta _{123}: Y\rightarrow {\cal F}$ restricting to the $\eta _{ij}$ on $Y_{ij}$. Next, note that the triple diagonal is equal to the composition of $1\times \delta _{23}$ with the diagonal $$ S' \times Spec (k[\epsilon _0]/(\epsilon _0^2))\rightarrow S'\times Spec (k[\epsilon _1,\epsilon ]/(\epsilon _1^2, \epsilon ^2, \epsilon _1\epsilon )). $$ Using this, we get $$ \eta _1+(\eta _2 +\eta _3)= \eta _{123}\circ \delta _{123}. $$ Similarly, we have $$ (\eta _1+\eta _2) +\eta _3= \eta _{123}\circ \delta _{123}, $$ giving associativity. The identity element (which we denote by $0$) is the composition $$ S'\times Spec (k[\epsilon ]/(\epsilon ^2))\rightarrow S \rightarrow {\cal F} . $$ This construction is natural: if $$ \begin{array}{ccc} {\cal F} & \rightarrow & {\cal F} ' \\ \downarrow &&\downarrow \\ {\cal G} & \rightarrow & {\cal G} ' \end{array} $$ is a diagram in which the vertical arrows are vertical morphisms, and if $u:S\rightarrow {\cal F}$ is a section projecting to $u':S\rightarrow {\cal F} '$, then composition with the morphism ${\cal F} \rightarrow {\cal F} '$ respects the conditions in the definition of the tangent sheaves, and so it gives a morphism $T(f)_u\rightarrow T(f')_{u'}$. The addition we have defined is natural, so this morphism of tangent sheaves respects the addition (it also respects the identity). The inverse is obtained by applying the automorphism $\epsilon \mapsto -\epsilon$. This completes the construction of the natural structure of sheaf of abelian groups.
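As a sanity check, here is a sketch of this addition in coordinates in the simplest case: take ${\cal F} ={\bf A} ^1$ over $S=Spec (k)$, with $f$ the structural morphism, $u$ the origin, and $S'=Spec (k)$. A section $\eta _i$ of $T(f)_u$ is then a $k$-algebra map $k[x]\rightarrow k[\epsilon _i]/(\epsilon _i^2)$, $x\mapsto v_i\epsilon _i$. The unique lifting $\eta _{12}$ is $x\mapsto v_1\epsilon _1+v_2\epsilon _2$, and composing with the diagonal $\delta _{12}$ (given by $\epsilon _1, \epsilon _2\mapsto \epsilon$) gives $$ (\eta _1+\eta _2)^{\ast}(x)= (v_1+v_2)\epsilon , $$ the usual addition of tangent vectors.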
Next, we show that if $$ {\cal F} \stackrel{a}{\rightarrow }{\cal G} \stackrel{b}{\rightarrow }{\cal H} $$ is a sequence of morphisms of $P4$ sheaves, and if $u:S\rightarrow {\cal F}$ is a section, then we have an exact sequence $$ 0\rightarrow T(a)_u\rightarrow T(ba)_u \rightarrow T(b)_{au} $$ We certainly get such a sequence with the composition being zero. Furthermore, $T(a)_u$ is the subsheaf of $T(ba)_u$ consisting of those elements projecting to zero in $T(b)_{au}$ (this follows immediately from the definition). Furthermore, if $a$ is vertical, then the sequence is exact on the right. This follows from the lifting property in the definition of vertical, in view of the fact that $S'$ is a retraction of $S'\times Spec (k[\epsilon ]/(\epsilon ^2))$. (Note that we have not required this lifting property in the definition of $T$-verticality.) Let $p: {\cal F} \rightarrow S$ denote the structural morphism for a $P4$ sheaf ${\cal F}$, and define the tangent sheaf $T({\cal F} )_u:= T(p)_u$. If $f:{\cal F} \rightarrow {\cal G}$ is a morphism of $P4$ sheaves, the exact sequence of the previous paragraph becomes $$ 0\rightarrow T(f)_u\rightarrow T({\cal F})_u \rightarrow T({\cal G} )_{fu}. $$ Again, if $f$ is vertical then this sequence is exact on the right also. Finally, we show that if ${\cal F}$ is $P4$ then $T({\cal F} )_u$ is a vector sheaf. The above exact sequence implies that if $f$ is a morphism of $P4$ sheaves then $T(f)_u$ is a vector sheaf. Let $f: X\rightarrow {\cal F}$ be the vertical morphism given by the property $P4$. Since the question is etale local on $S$, we may assume that our section $u: S\rightarrow {\cal F}$ lifts to a section $v: S\rightarrow X$. We have an exact sequence $$ 0\rightarrow T(f)_v \rightarrow T(X)_v\rightarrow T({\cal F} )_u\rightarrow 0. $$ Note that $T(X)_v$ is a vector scheme (an easy thing to see---it is given by the linear parts of the equations of $X$ at the section $v$). 
Let $g:R\rightarrow X\times _{{\cal F}} X$ be the other vertical morphism given by the property $P4$. We claim that we have an exact sequence $$ 0\rightarrow T(X\times _{{\cal F}}X)_{(v,v)} \rightarrow T(X)_v \oplus T(X)_v \rightarrow T({\cal F} )_u \rightarrow 0. $$ To see this, note that an element of $T(X\times _{{\cal F}}X)_{(v,v)}$ consists of an element of $T(X\times _SX)_{(v,v)}$ mapping to $T({\cal F} )_u\subset T({\cal F} \times _S{\cal F} )_{(u,u)}$. Note that $$ T(X\times _SX)_{(v,v)}=T(X)_v \oplus T(X)_v, $$ and $$ T({\cal F} \times _S{\cal F} )_{(u,u)}=T({\cal F} )_u\oplus T({\cal F} )_u $$ with the map from $T({\cal F} )_u$ being the diagonal. The quotient of $T({\cal F} \times _S{\cal F} )_{(u,u)}$ by the diagonal $T({\cal F} )_u$ is thus isomorphic to $T({\cal F} )_u$ and we obtain the exact sequence in question. The surjectivity on the right is from surjectivity of $T(X)_v\rightarrow T({\cal F} )_u$. Lift $(v,v)$ to a section $w:S\rightarrow R$. The exact sequence for $g$ gives a surjection $$ T(R)_w \rightarrow T(X\times _{{\cal F}}X)_{(v,v)}\rightarrow 0. $$ Combining this with the above exact sequence, we obtain the right exact sequence $$ T(R)_w \rightarrow T(X)_v \oplus T(X)_v \rightarrow T({\cal F} )_u \rightarrow 0. $$ Since $T(R)_w$ and $T(X)_v$ are vector schemes, this shows that $T({\cal F} )_u$ is a vector sheaf. \hfill $\Box$\vspace{.1in} \numero{The case $S=Spec (k)$} We now analyse the definitions of the previous sections in the case where the base scheme is $S=Spec (k)$ (a hypothesis we suppose for the rest of this section). {\em Caution:} We will use throughout this section certain properties of vertical morphisms etc. which hold only in the context $S=Spec (k)$. The reader should not extrapolate these properties to other cases. Our first lemma is a preliminary version of the next lemma which we include because the argument may be easier to understand in a simpler context. 
\begin{lemma} \mylabel{I.1.k} Suppose $f: X\rightarrow Spec (k)$ is a morphism of finite type. Then $f$ is vertical if and only if $f$ is a smooth morphism. \end{lemma} {\em Proof:} Suppose $X$ is smooth. Then the required lifting properties hold. Indeed, $X$ is etale locally a vector space, and Theorem \ref{I.u} (part 7) implies that vector spaces are vertical over $Spec(k)$. Conversely, suppose $f$ is vertical, and suppose $x\in X$. The first claim is that for any $v\in T(X)_x$ there is a smooth germ of curve $(C,0)$ mapping to $(X,x)$ with tangent vector $v$ at the origin. Since $X$ is of finite type, and by Artin approximation, it suffices to construct a compatible family of morphisms $$ \gamma _n:Spec (k[t]/t^n)\rightarrow X $$ sending $Spec(k)$ to $x$ and with tangent vector $v$ (that is, the map $\gamma _2$ represents $v$). Before starting the construction, choose a morphism $$ \mu : X\times X\rightarrow X $$ with $\mu (x,y)=\mu (y,x)=y$ for any $y$ (the possibility of finding $\mu$ follows from the definition of verticality). We now construct $\gamma _n$ by induction, starting with $\gamma _2$ given by $v$. Suppose we have constructed $\gamma _{n}$ by the inductive procedure. Let $Y(n):= Spec (k[r]/r^{n})\times Spec (k[s]/s^2)$. The composition gives a morphism $$ \phi _n:= \mu \circ (\gamma _n , \gamma _2): Y(n)\rightarrow X. $$ We will show that $\phi _n$ factors through the morphism $$ d: Y(n)\rightarrow Spec (k[t]/t^{n+1}) $$ which is dual to the morphism $$ k[t]/t^{n+1} \rightarrow k[r,s]/(r^n,s^2) $$ $$ t\mapsto r+s. $$ We will then choose $\gamma _{n+1}$ equal to the resulting morphism $Spec (k[t]/t^{n+1})\rightarrow X$, that is with $\phi _n =\gamma _{n+1}d$. Since $\gamma _n$ restricts to $\gamma _{n-1}$, and since we have chosen $\gamma _n$ by the inductive procedure, we have that $$ \phi _n|_{Y(n-1)}=\phi _{n-1} = \gamma _n d.
$$ Writing $X=Spec (A)$ (in a neighborhood of $x$) the morphism $\phi _n$ corresponds to $$ \phi _n^{\ast}: A\rightarrow k[r,s]/(r^n,s^2). $$ We have that $\phi _n^{\ast} (a)$ reduces modulo $r^{n-1}$ to $d^{\ast}\gamma _n^{\ast} (a)$. Writing $$ \gamma _n^{\ast}(a)= \sum _{j=0}^{n-1}b_jt^j $$ we have $$ \phi _n^{\ast} (a)= \sum _{j=0}^{n-1}b_j (r+s)^j + \alpha r^{n-1} + \beta r^{n-1}s. $$ Write, on the other hand, the equation $\phi _n |_{Spec (k[r]/r^n)} = \gamma _n$. We get that $$ \phi _n^{\ast} (a) \sim \sum _{j=0}^{n-1}b_j r^j \;\; \mbox{mod} (s). $$ This gives $\alpha = 0$ in the above equation. Finally, note that $(r+s)^n= nr^{n-1}s$ modulo $(r^n, s^2)$. Thus we may set $b_n:= \beta / n$ and get $$ \phi _n^{\ast} (a)= \sum _{j=0}^{n}b_j (r+s)^j . $$ Put $$ \gamma _{n+1}^{\ast} (a):= \sum _{j=0}^{n}b_j t^j , $$ and we get the desired factorization $\phi _n = \gamma _{n+1}d$. This completes the inductive step for the construction of the $\gamma _n$. We obtain the desired formal curve and hence a curve $(C,0)$ as claimed. {\em Remark:} Intuitively what we have done above is to integrate the vector field on $X$ given by the tangent vector $v$ and the multiplication $\mu$. Of course, the curve $C$ is an approximation to the integral curve, which might only exist formally. The next step in the proof of the lemma is to choose a collection of vectors $v_1,\ldots , v_m$ generating $T(X)_x$, and to choose resulting curves $C_1, \ldots , C_m$. Using the map $\mu$ in succession (or applying directly the definition of verticality) we obtain a map $$ \Phi : (U,0):=(C_1\times \ldots \times C_m , 0)\rightarrow (X,x), $$ inducing the given morphisms on the factors $C_i$ (considered as subspaces of the product by putting the origin in the other places). By construction the differential $d\Phi _0$ is given by the vectors $v_1,\ldots , v_m$, in particular it gives a surjection $$ d\Phi _0: T(U)_0 \rightarrow T(X)_x \rightarrow 0. $$ Note that $U$ is smooth of dimension $m$. 
We claim that this implies $dim _x (X) \geq dim T(X)_x$. To see this, let $d:= dim _x(X)$ and $n:= dim T(X)_x$. By semicontinuity, the dimension of the fiber $\Phi ^{-1}(x)$ at the origin is at least equal to $m-d$. In particular, the tangent space to the fiber has dimension at least $m-d$; but this gives a subspace of dimension $m-d$ of $T(U)_0$ which maps to zero in $T(X)_x$; by the surjectivity of $d\Phi _0$ we get $n \leq m-(m-d)=d$, the desired inequality. Finally, it follows from this inequality that $X$ is regular at $x$ and hence smooth at $x$ (and, of course, the inequality is an equality!). This proves the lemma. \hfill $\Box$\vspace{.1in} \begin{lemma} \mylabel{smooth} Suppose $X$ and $Y$ are schemes of finite type over $k$ and $f: X\rightarrow Y$ is a morphism. Then $f$ is $Spec (k)$-vertical if and only if $f$ is smooth. \end{lemma} {\em Proof:} Note first that if $f$ is smooth then it is etale-locally a product with affine space so we get all of the lifting properties. Suppose now that $f$ is vertical. If $Q\in Y$ and $P\in f^{-1}(Q)$ then $Lift _1(Y, Q)$ implies that, after replacing $Y$ by an etale neighborhood of $Q$ we may suppose that there is a section $\sigma : Y\rightarrow X$ with $\sigma (Q)=P$. Let $T(X/Y)_{\sigma}$ denote the relative tangent vector scheme along the section $\sigma$. It is easy to see that the morphism $T(X/Y)_{\sigma}\rightarrow Y$ is $Spec(k)$-vertical. We then obtain that the morphism $$ \Gamma (Y, T(X/Y)_{\sigma})\rightarrow (T(X/Y)_{\sigma})_Q=T(f^{-1}(Q))_P $$ is surjective, and this then implies that $T(X/Y)_{\sigma}$ is a vector bundle over $Y$. The same argument as in the previous lemma allows us to ``exponentiate'' in a formal neighborhood of $P$, to get a map $\varphi$ from $T(X/Y)_{\sigma}^{\wedge}$ (the formal completion in a neighborhood of $0(Q)$) to $X$, which sends the zero section $0$ to $\sigma$ and whose tangent map is the identity along $\sigma$. 
We claim that if $S'$ is artinian local with a morphism $S'\rightarrow X$ sending the origin to $P$, then the morphism factors via $\varphi$ through a map $S'\rightarrow T(X/Y)_{\sigma}^{\wedge}$ sending the origin to $0(Q)$. Prove this claim using the standard deformation theory argument by induction on the length of $S'$: suppose $S''\subset S'$ is defined by an ideal $I$ annihilated by the maximal ideal, and suppose we know the claim for $S''$. Then there exists a map $S'\rightarrow T(X/Y)_{\sigma}^{\wedge}$ extending the known map on $S''$ since $T(X/Y)_{\sigma}^{\wedge}$ is a vector bundle over $Y$. The space of such extensions is a principal homogeneous space over $I\otimes _k (T(X/Y)_{\sigma})_Q$ whereas the space of extensions of $S''\rightarrow X$ to morphisms $S'\rightarrow X$ is a principal homogeneous space over $I\otimes _kT(f^{-1}(Q))_P$. The map $\varphi$ induces an isomorphism $$ (T(X/Y)_{\sigma})_Q\cong T(f^{-1}(Q))_P $$ so there is an extension to a map $S' \rightarrow T(X/Y)_{\sigma}^{\wedge}$ which projects to our given map $S'\rightarrow X$. This proves the claim. Now we can prove that $X\rightarrow Y$ is formally smooth at $P$. If $S''\subset S'$ are artinian local and if $a:S'\rightarrow Y$ is a map lifting over $S''$ to a map $b:S'' \rightarrow X$ sending the origin to $P$, then we get (from the previous claim) that the map $b$ factors through a map $S'' \rightarrow T(X/Y)_{\sigma}^{\wedge}$. Since $T(X/Y)_{\sigma}^{\wedge}$ is a vector bundle and in particular smooth over $Y$, this extends to a map $S' \rightarrow T(X/Y)_{\sigma}^{\wedge}$. This extension projects into $X$ to an extension $S' \rightarrow X$ of the map $b$. This shows formal smoothness. Since $X$ and $Y$ are of finite type, $f$ is smooth. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{I.1.l} Suppose $G$ is a presentable sheaf of groups on ${\cal X} /Spec (k)$ (which is equal to ${\cal X}$ in this case), and suppose $f:X\rightarrow G$ is a vertical morphism.
Then $X$ is smooth over $Spec (k)$. \end{corollary} {\em Proof:} The morphism $G\rightarrow Spec (k)$ is vertical by Theorem \ref{I.u} (7). The composed morphism $X\rightarrow Spec (k)$ is vertical hence smooth by Lemma \ref{I.1.k}. \hfill $\Box$\vspace{.1in} \begin{theorem} \mylabel{I.1.m} If $G$ is a presentable group sheaf on ${\cal X} /Spec (k)$ then it is represented by a smooth separated scheme of finite type over $k$ (in other words it is an algebraic Lie group over $k$). \end{theorem} {\em Proof:} We assume $k={\bf C}$ for this proof. Choose vertical surjections $f:X\rightarrow G$ and $R\rightarrow X\times _GX$. Note that $R\rightarrow G$ is vertical, so $X$ and $R$ are smooth schemes of finite type. By adding some factors of affine spaces we can assume that the components of $X$ and $R$ all have the same dimension. By the previous section, the morphism $df:T(X)\rightarrow f^{\ast}T(G)$ is a morphism of vector sheaves on $X$, hence it is a morphism of vector bundles. It is surjective, so the kernel is a strict sub-vector bundle ${\cal F} \subset T(X)$. For each $x\in X$ we have $$ {\cal F} _x:= \ker (T(X)_x \rightarrow T(G)_{f(x)}). $$ The morphism $p_1: R\rightarrow X$ is vertical (since $X\times _GX\rightarrow X$ is the pullback of the vertical $X\rightarrow G$ by the morphism $X\rightarrow G$, and $p_1$ is the composition of the vertical $R\rightarrow X\times _GX$ with this projection). Therefore, by Lemma \ref{smooth} $p_1$ is smooth. Suppose $r\in R$ maps to $(x,y)\in X\times X$. Let $g\in G$ denote the common image of $x$ and $y$. We have an exact sequence $$ T(R)_r \rightarrow T(X)_x \oplus T(X)_y \rightarrow T(G)_g \rightarrow 0. $$ From this we get that the image of the map on the left always has the same dimension; in particular this shows that the map $T(R)\rightarrow (p_1, p_2)^{\ast}T(X\times X)$ is strict. For any point $g$ in $G$ we can identify $T(G)_g\cong T(G)_1$ by left multiplication. 
The morphism on the right in the exact sequence then comes from a morphism of the form $p_1^{\ast}(\alpha )-p_2^{\ast}(\alpha )$ where $\alpha : T(X) \rightarrow T(G)_1$ is obtained from the differential of $f$ by the left-multiplication trivialization. This morphism is a morphism of vector bundles from the tangent bundle of $X\times X$ to the constant bundle $T(G)_1$, so its kernel is a distribution in the tangent bundle of $X\times X$. The image of $R$ is an integral leaf of this distribution. In particular, the image of $R$ is a smooth complex submanifold of $X\times X$ (note that the map from $R$ to the leaf is smooth since, by the above exact sequence, the differential is surjective at any point---this implies that the image is open in the leaf). Choose a subvariety $X'\subset X$ which is everywhere transverse to the distribution ${\cal F}$, and which meets every subvariety of $X$ of the form $p_2(p_1^{-1}(x))$ for $p_i$ denoting the projections $R\rightarrow X$. We may assume that $X'$ is of finite type. Let $R'$ be the intersection of $X'\times X'$ with the image of $R$ in $X\times X$. We claim that the morphism $X'\rightarrow G$ is surjective and vertical, and that $R'= X'\times _GX'$. To see this, note that by hypothesis $X'\times _X R\rightarrow X$ is surjective on closed points. By our transversality assumptions this morphism is also smooth. Thus any point in $X$ is equivalent via $R$ (etale-locally) to a point in $X'$. For verticality, it suffices to prove that $X'\times _GX \rightarrow X$ is vertical (Theorem \ref{I.u}, parts 3 and 4). And for this it suffices to note that $X'\times _X R \rightarrow X' \times _GX$ is surjective and vertical (being the pullback of $X\times _XR\rightarrow X\times _GX$ by $X'\times _GX\rightarrow X\times _GX$), that $X'\times _XR\rightarrow X$ is smooth and hence vertical, and to apply Theorem \ref{I.u}, part 4. We get $X'\rightarrow G$ surjective and vertical. 
If we put $R''$ equal to the pullback of $R$ to $X'\times X'$ then $R'' \rightarrow X'\times _G X'$ is surjective and vertical (it being also the pullback of $R$ via $X'\times _GX' \rightarrow X\times _GX$). The previous proof applied to this case shows that $R''$ is smooth over its image $R'$, and that $R'$ is a smooth subvariety of $X'\times X'$. But now, by our previous transversality assumptions, the projections $R'\rightarrow X'$ are etale. We can now conclude that $G$, which is the quotient of $X'$ by the equivalence relation $R'$, is a smooth algebraic space. We will find an open subset $U\subset G$ which is a smooth variety over $k$. In order to do this, let $d$ be the maximum number of points in the fibers of $X'\rightarrow G$. The fiber through a point $x$ is equal to $p_2(p_1^{-1}(x))$ where $p_i: R' \rightarrow X'$ here denote the projections. Let $W\subset X'$ be the set of points $x$ where the maximum number $d$ of points in the fiber $p_1^{-1}(x)$ is achieved. Since the morphism $p_1: R'\rightarrow X'$ is etale, it is easy to see that $W$ is an open subset, and that if we let $R'_W $ denote $p_1^{-1}(W)$ then $R'_W\rightarrow W$ is a finite etale morphism of degree $d$. On the other hand, if $x\in W$ and $y$ is in the fiber through $x$ then $y$ is also in $W$. This means that $p_2(R'_W)\subset W$. The correspondence $$ x\mapsto p_2(p_1^{-1}(x)) $$ gives a morphism $\chi$ from $W$ to the symmetric product $W^{(d)}$ having image in the complement of the singular locus. Then $W\times _{W^{(d)}}W= R'_W$. In particular, the quotient of $W$ by the equivalence relation $R'_W$ is the image of $\chi$. Note that $\chi $ is etale over its image, which is thus a locally closed subscheme of $W^{(d)}$. This shows that the quotient of $W$ by the equivalence relation is a scheme $U$ of finite type. It is also smooth. The morphism $W\rightarrow G$ factors through $U\rightarrow G$.
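As a toy illustration of this symmetric-product mechanism (an example chosen for orientation only, not arising from any particular $G$), take $W={\bf A}^1-\{ 0\}$ over ${\bf C}$ with equivalence relation $R'_W=\{ (x,y):\; y=\pm x\}$, so that $d=2$. The correspondence sends $x$ to the unordered pair $$ \chi (x)= \{ x,-x\} \in W^{(2)}, $$ whose two points are distinct since $x\neq 0$; thus the image of $\chi$ avoids the singular locus of $W^{(2)}$. Here $W\times _{W^{(2)}}W=R'_W$, the morphism $\chi$ is etale of degree $2$ onto its image, and the quotient $U$ of $W$ by $R'_W$ is identified with ${\bf A}^1-\{ 0\}$ via $x\mapsto x^2$.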
We claim that the morphism $U\rightarrow G$ is an open subfunctor, that is for any $Y\rightarrow G$ the fiber product $U\times _GY$ is an open subset of $Y$. The fiber product is the quotient of $W\times _GY$ by the induced equivalence relation; and the quotient of $X'\times _GY$ by the equivalence relation is equal to $Y$. Choosing local liftings $Y\rightarrow X'$ we find that $X'\times _GY$ is the image of $R'\times _{X'\times X'}(X'\times Y)\rightarrow X'\times Y$, that is it is the pullback of $R'$. In particular it is a subscheme of $X'\times Y$. This subscheme surjects to $Y$ by a vertical morphism, a morphism which is hence smooth. The image of the open subset $W\times _GY$ (which is the intersection of $X'\times _GY$ with $W\times Y$) is therefore an open set in $Y$. This shows that $U\subset G$ is an open subfunctor. We can choose a finite number of elements $g_i \in G(S)$ such that $g_i\cdot U$ cover $G$. For the finiteness use the surjection $X\rightarrow G$ with $X$ of finite type (in particular, quasi-compact). We now apply Grothendieck's theorem about representability which says that if a functor $G$ is a sheaf (in the Zariski topology, which is the case here since Zariski is coarser than etale), and if it is covered by a finite number of open subfunctors $G_i$ which are representable by schemes, then the functor $G$ is representable by a scheme (the union of the schemes $G_i$). In our case the $G_i$ are the $g_i\cdot U$, representable by $U$. Since $U$ is of finite type, the union of a finite number of copies is again of finite type. We obtain that $G$ is a scheme of finite type. Note that $U$ is smooth so $G$ is smooth (alternatively, use that any group scheme is smooth). To complete the proof we just have to show that $G$ is separated. Note first that all connected components of $G$ must have the same dimension, so we can speak of the dimension of $G$ without problem. Let $\Delta \subset G\times G$ denote the diagonal. 
It is preserved by the diagonal left action of $G(k)$ on $G\times G$ (that is, the action $g(a,b)=(ga, gb)$). The complement $K:=\overline{\Delta}-\Delta$ is a closed subset of $G\times G$, of dimension strictly smaller than the dimension of $G$. But $K$ is invariant under the diagonal left action of $G(k)$, so its image $pr_1(K)\subset G$ is invariant by the left action of $G(k)$. Since $dim (K)< dim (G)$ the image $pr _1(K)$ (which is a constructible subset of dimension $\leq dim (K)$) is not dense in $G$. On the other hand, if $K$ were nonempty then this image, being left invariant, would contain a right translate of $G(k)$ which is Zariski dense. This contradiction implies that $K$ is empty, in other words $G$ is separated. This completes the proof of the theorem. \hfill $\Box$\vspace{.1in} {\em Application:} Suppose $S$ is any base scheme of finite type over $Spec (k)$ now, and suppose $S'\rightarrow S$ is an artinian scheme of finite type. Let $\pi : S' \rightarrow Spec (k)$ denote the structural morphism. If $G$ is a presentable group sheaf over $S$ the pullback $G|_{S'}$ is presentable (Lemma \ref{I.1.h}) and the direct image $\pi _{\ast} (G|_{S'})$ is presentable over $Spec (k)$ (Lemma \ref{I.1.i}). By Theorem \ref{I.1.m}, $\pi _{\ast}(G|_{S'})$ is represented by a group scheme of finite type which we denote $G_{S'}$ over $k$. We have $$ G(S')= G_{S'}(Spec (k)). $$ Furthermore, if $X\rightarrow G$ is a vertical surjection then we obtain a scheme of finite type $X_{S'}= \pi _{\ast}(X|_{S'})$ with a morphism $X_{S'}\rightarrow G_{S'}$. This morphism is smooth. \numero{Local study of presentable group sheaves} In this section we return to the case of general base scheme $S$ (in particular, the hypothesis $S=Spec (k)$ is no longer in effect). First we establish some notations for formal completions. Suppose $G$ is a presentable group sheaf. 
Let $\widehat{G}$ denote the sheaf which associates to $Y\in {\cal X}$ the set of values in $G(Y)$ which restrict to the identity on $Y^{\rm red}$. More generally, use the same notation $\widehat{{\cal F}}$ whenever ${\cal F}$ is a sheaf with a given section playing the role of the identity section (usually the section in question is understood from the context). \subnumero{Local structure} \begin{lemma} \mylabel{I.1.n} Suppose $G$ is a presentable group over a base $S$. Suppose $Z\rightarrow G$ is a vertical surjection with $Z$ an affine scheme of finite type over $S$. Let $T(Z)_e\rightarrow S$ be the tangent vector scheme at a lift $e$ of the identity section. For any $s\in S$ there is an etale neighborhood $$ e(s)\in W \stackrel{p}{\rightarrow} Z $$ and an etale $S$-morphism $q:W\rightarrow TZ$, such that $q=p$ over the section $e$ (which maps to the zero section of $TZ$). \end{lemma} {\em Proof:} Verticality of $Z\rightarrow G$ means that we can choose a lifting of the multiplication of $G$ to $m: Z\times Z \rightarrow Z$ such that $m(x,e)=x$ and $m(e,y)=y$. Let $Q: Z\rightarrow Z$ be the morphism $Q(x):= m(x,x)$. It has the effect of multiplication by $2$ on the tangent scheme $TZ$ at the identity section, because $$ \frac{\partial }{\partial x}m(x,x)(e)= \frac{\partial }{\partial x}m(x,e)(e)+ \frac{\partial }{\partial x}m(e,x)(e)= 2 \frac{\partial x}{\partial x} =2. $$ If we embed $Z\subset {\bf A}^N_S$ as a closed subscheme with the identity section going to the origin-section, then we may extend $Q$ to a morphism $Q': {\bf A}^N_S\rightarrow {\bf A}^N_S$ such that $Q'$ acts by multiplication by two on the tangent space at the origin. Let $\widehat{{\bf A}^N_S}$ denote the formal completion of the affine space along the origin-section.
Then $Q'$ induces an automorphism of $\widehat{{\bf A}^N_S}$, and it is well known---and easy to see using power series---that such an automorphism is conjugate to its linear part (since the eigenvalues are different from $1$). We obtain an automorphism $F: \widehat{{\bf A}^N_S}\rightarrow \widehat{{\bf A}^N_S}$ such that $F^{-1}\circ Q'\circ F = 2$. Let $\widehat{Z}\subset \widehat{{\bf A}^N_S}$ be the closed formal subscheme obtained by completing $Z$ at the identity section. Note that $\widehat{Z}$ is preserved by $Q'$. Thus the image $F(\widehat{Z})$ is a formal subscheme which is preserved by multiplication by $2$. It follows that it is a cone, and in particular that the linear parts of the equations defining $F(\widehat{Z})$ vanish on $F(\widehat{Z})$. This means that $F(\widehat{Z})$ is included in its tangent scheme $T(F(\widehat{Z}))$ along the identity section. Translating back by $F$ we obtain an immersion $$ \widehat{Z}\hookrightarrow TZ $$ which is the identity on the tangent space at the identity section. The image is a closed formal subscheme preserved by scalar multiplication. For any artinian scheme $S'$ over $S$, $Z(S')$ is a smooth scheme over $Spec (k)$ and $\widehat{Z(S')}\subset TZ(S')$ is a closed formal subscheme at the origin, with the same Zariski tangent space, and which is formally preserved by scalar multiplication. Therefore $\widehat{Z(S')}\cong \widehat{TZ(S')}$. Now $\widehat{Z}(S')$ is the inverse image of $e\in \widehat{Z(Spec (k))}$ via the map $$ \widehat{Z(S')}\rightarrow \widehat{Z(Spec (k))}. $$ The same is true for the tangent scheme $TZ$. From these properties we get that $\widehat{Z}(S')\rightarrow \widehat{TZ}(S')$ is an isomorphism for any $S'$. As that holds true for all artinian schemes $S'$ over $S$ we get that the morphism $\widehat{Z} \rightarrow \widehat{TZ}$ is an isomorphism. 
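(To spell out the cone argument used above: a closed formal subscheme preserved by multiplication by $2$ is defined by a homogeneous ideal. Indeed, if $f=\sum _i f_i$ is an equation of the subscheme, with $f_i$ its homogeneous part of degree $i$, then each $f(2^nx)=\sum _i 2^{ni}f_i(x)$ also vanishes on the subscheme; working modulo any power of the ideal of the section only finitely many homogeneous parts intervene, and since we are in characteristic zero the resulting Vandermonde system can be inverted, so each $f_i$ vanishes on the subscheme.)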
Artin approximation now gives the existence of such an isomorphism (inducing the same map on tangent schemes along the identity section) over an etale neighborhood in $Z$, as required for the lemma. \hfill $\Box$\vspace{.1in} \subnumero{Theory of the connected component} We need to develop a suitable theory of the connected component of a presentable group sheaf $G$. \begin{theorem} \mylabel{I.1.o} If $G$ is a presentable group sheaf over $S$, then there is a unique subsheaf of groups $G^0\subset G$ such that $G^0$ is presentable and such that for any artinian $S$-scheme $S'$, we have $G^0(S')$ equal to the connected component of $G(S')$ (when these are considered as algebraic groups over the ground field of $S'$---cf the application at the end of the section on the situation over $Spec (k)$). \end{theorem} {\em Proof:} We first show existence. Let $Z\rightarrow G$ be a vertical surjection with $Z$ a scheme of finite type. Let $\sigma : S\hookrightarrow Z$ be the identity section. We claim that there is an open neighborhood $U\subset Z$ of $\sigma (S)$ such that for any artinian $S$-scheme $S'$, $U(S')$ is connected. By Lemma \ref{I.1.n}, there is an etale neighborhood of the zero section $W \rightarrow TZ$ and another etale morphism $W\rightarrow Z$ giving an etale neighborhood of the section $\sigma$. We claim that (possibly throwing out a closed subset of $W$ not meeting the section) we can assume that the $W(S')$ are connected. In what follows we refer to the lifting of the zero section of $TZ$ as the section $\sigma$ of $W$. For any given $S'$, artinian located at $s\in S$, there is a surjection of vector spaces $$ (TZ)(S')\rightarrow V_i \subset (TZ)(s), $$ for some subspace $V_i$ which depends on $S'$. If $W\rightarrow TZ$ is our etale morphism, then we have $$ W(S')=W(s)\times _{TZ(s)}(TZ)(S') = W(s)\times _{TZ(s)}V_i \times_{V_i}(TZ)(S'), $$ since a point $S'\rightarrow TZ$ has a unique lifting to $W$ once the lifting is specified on the closed point. 
Thus $W(S')$ is connected if and only if, for all subspaces $V_i \subset (TZ)(s)$ we have that $W(s)\times _{TZ(s)}V_i$ is connected. Let $Gr (TZ)\rightarrow S$ be the disjoint union of the Grassmannian schemes of subspaces of different dimensions. It is proper over $S$. We have a universal subscheme $$ {\cal V} \subset Gr (TZ)\times _S TZ. $$ Note that the map ${\cal V} \rightarrow TZ$ is proper. Let $\tilde{W}:= W\times _{TZ} {\cal V} $; this is an etale covering of ${\cal V}$, and is proper over $W$. Let $\tilde{W}^N\subset \tilde{W}$ be the union of the connected components in fibers which do not pass through the section $\sigma$ (relative to $Gr (TZ)$). Note that $\tilde{W}^N$ is a constructible subset of $\tilde{W}$ (one can see this by noetherian induction). Let $W^N\subset W$ be the image of $\tilde{W}^N$. It is again a constructible subset. A point $w\in W$ is in $W^N$ if and only if there exists a vector subspace $V_i \subset (TZ)(s)$ such that $w$ is in a different connected component of $V_i \times _{TZ}W$ from $\sigma (s)$. In particular, if we choose an analytic neighborhood of the section $\sigma$ which is isomorphic to a tubular neighborhood of the zero-section of $TZ$, then this analytic neighborhood doesn't meet $W^N$. Thus there is a Zariski open neighborhood of $\sigma$ not meeting $W^N$. Since taking a Zariski open subset doesn't affect connectivity (the schemes $W(S')$ in question being smooth), we may replace $W$ by this open subset and hence assume that $W^N$ is empty. From the discussion of the previous paragraph, this implies that the $W(S')$ are connected, proving the first claim. Let $U$ be the image of $W$ in $Z$. Note that the set-theoretic image is an open set and is equal to the image of the functor, since $W\rightarrow Z$ is etale. Let $\eta : Z\times _SZ \rightarrow Z$ be a lifting of the multiplication map $(g,h)\mapsto gh$ such that $\eta (z, 1)= z$ and $\eta (1,z)=z$.
We claim that the composition law $Z\times _SZ \rightarrow G$ is a vertical morphism. Note that $Z\times _SZ\rightarrow G\times _SG$ is vertical, so it suffices to prove that the composition $G\times _SG\rightarrow G$ is vertical. For this, notice that there is an isomorphism $G\times _SG\cong G\times _S G$ sending $(a,b)$ to $(ab,b)$, and which interchanges the multiplication and the first projection. Since the first projection is vertical (this comes from the fact that $G\rightarrow S$ is vertical), we obtain that the composition law is vertical, yielding the claim. By Lemma \ref{I.1.c}, there exists a vertical surjection $$ R\rightarrow (Z\times _SZ )\times _G (Z\times _S Z) $$ with $R$ a scheme of finite type. Let $G^0\subset G$ be the image of the morphism $U\times _SU\rightarrow G$. Then the morphism $U\times _S U\rightarrow G^0$ is a vertical surjection, and we have a vertical surjection $$ R'\rightarrow (U\times _SU )\times _{G^0} (U\times _S U) $$ obtained by letting $R'$ be the inverse image of $(U\times _SU )\times _{G^0} (U\times _S U)$ in $R$. Note that $R'$ is also equal to the fiber product $$ U\times _SU\times _SU\times _SU\times _{Z\times _SZ\times _S Z\times _S Z}R, $$ so $R'$ is a scheme of finite type over $S$. We claim that for any artinian $S'$, the group $G^0(S')$ is equal to the connected component of $G(S')$. To see this, note first of all that $G^0(S')$ is connected (since it is the image of $U(S')\times U(S')$ which is connected). And secondly, note that the morphism $$ Z(S')\rightarrow G(S') $$ is an open map (this is a map of smooth varieties---cf the section on what happens over a field and in particular the application at the end). Therefore the image of $U(S')$ is an open subset $V\subset G(S')$. It is connected since $U(S')$ is connected. The image of $(U\times _SU)(S')$ is equal to the image of the multiplication map $V\times V\rightarrow G(S')$.
It is easy to see that if $V$ is a connected Zariski open subset of an algebraic group over a field (containing the identity), then the image of the multiplication map is a subgroup: $V$ is dense in the connected component, so for any $g$ in the subgroup generated by $V$ the open set $gV^{-1}$ meets $V$, whence $g\in V\cdot V$. Thus $G^0(S')$ is a subgroup of $G(S')$. It contains an open neighborhood of the identity and it is connected, so it is equal to the connected component. We claim now that $G^0$ is a sheaf of subgroups of $G$. If $g,h\in G^0(S')$ then the product $gh$ restricts into $G^0(S'')$ for any artinian ring $S''$ over $S'$. The sheaf $G^0$ is P2, hence it is B1 and B2 (Theorem \ref{I.t.2}). The inverse image of the section $gh$ by the morphism $G^0\rightarrow G$ is again B1 and B2. This inverse image is nonempty over each artinian $S''$. By Artin approximation, the inverse image has a section locally over $S'$, and since this section is unique if it exists, it gives a section $gh\in G^0(S')$. We have now shown existence of $G^0$ as required by the theorem. For uniqueness, suppose that $G^1$ were another candidate. Then $G^0$ and $G^1$ are both B1 and B2 subsheaves of $G$ having the same points over artinian $S'$. Artin approximation implies that they are equal. \hfill $\Box$\vspace{.1in} We say that a presentable group sheaf $G$ is {\em connected} if $G= G^0$. The above theorem immediately gives the characterization that $G$ is connected if and only if $G(S')$ is connected for all artinian $S'$. \begin{corollary} \mylabel{connex} We have the following properties. \newline 1. \, If $G$ is connected then any quotient group of $G$ is connected; \newline 2.\, If $G$ and $H$ are connected then any extension of $G$ by $H$ is connected; \newline 3. \, If $G$ is a connected group sheaf over a base $S$ and if $Y\rightarrow S$ is any morphism of schemes then $G|_{{\cal X} /Y}$ is a connected group sheaf over $Y$; and \newline 4. \, If $f:Y\rightarrow S$ is a finite morphism and if $G$ is a connected group sheaf over $Y$ then $f_{\ast}(G)$ is a connected group sheaf over $S$.
\newline 5.\, If $G$ is any presentable group sheaf then the connected component $G^0$ is the largest connected presentable subgroup. \end{corollary} {\em Proof:} Items 1-3 are immediate from the characterization. To prove 4 note that if $S'\rightarrow S$ then $f_{\ast}(G)(S')= G(Y\times _SS')$ and $Y\times _SS'$ is artinian, so this latter group is connected, thus by the above characterization $f_{\ast}(G)$ is connected. To prove 5 note that if $H$ is any connected subgroup of $G$ then $H(S') \subset G^0(S')$ for all artinian $ S'$, hence $H\subset G^0$. \hfill $\Box$\vspace{.1in} \subnumero{Finite presentable group sheaves} We say that a presentable group sheaf $G$ is {\em finite} if $G^0=\{ 1\}$. If $G$ is any presentable group sheaf, then the connected component $G^0$ is a normal subgroup sheaf, and the quotient $C:=G/G^0$ is again presentable. Over artinian $S'$, this quotient is just the group of connected components, in particular the connected component is trivial. Thus $C$ is finite. \begin{lemma} \mylabel{I.1.p} If $G$ is a finite presentable group sheaf, then there is an integer $N$ such that for any henselian local $S$-scheme $S'$ (with algebraically closed residue field), the number of elements in $G(S')$ is less than or equal to $N$. \end{lemma} {\em Proof:} We first treat the case where $S'$ is artinian local with algebraically closed residue field. Let $Z\rightarrow G$ and $R\rightarrow Z\times _GZ$ be the vertical surjections given by the fact that $G$ is $P4$. There is an etale neighborhood $U\rightarrow Z\times _SZ$ of the diagonal such that $U$ is isomorphic to an etale neighborhood of the zero section in the total scheme $TZ$ (and this isomorphism is compatible with the first projection to $Z$). This is seen as in the argument above. Furthermore, as above we may assume that the fibers of the first projection $U\rightarrow Z$ are connected (over any artinian scheme). 
Then for any artinian scheme $S'\rightarrow U$, the two elements of $G(S')$ obtained from the two projections $U\rightarrow Z$ are the same, by the hypothesis that $G$ is finite. (To see this, compare $(a,b): S'\rightarrow U$ with $(a,a): S'\rightarrow U$; they are in the same fiber over $a$, and this fiber is connected, so they have to have the same image in $G(S')$.) Thus, any artinian subscheme of $U$ lifts into $R$. This implies that there is (locally in the etale topology) a lifting $U\rightarrow R$. Let $V\subset Z\times _SZ$ be the image of $U$. It is a Zariski neighborhood of the diagonal, and locally there is a lifting from $V$ into $R$. Let $F\subset Z\times _SZ$ be the reduced closed subscheme corresponding to the closed subset which is the complement of $V$. Suppose $Y\rightarrow S$ is an artinian local scheme (with algebraically closed residue field). If $(\alpha _1,\ldots , \alpha _n)$ is an $n$-tuple of distinct points of $G(Y)$, then there is a lifting $(a_1,\ldots , a_n) \in Z\times _S \ldots \times _S Z(Y)$ such that for any $i\neq j$ we have that $(a_i, a_j): Y\rightarrow Z\times _SZ$ is not contained in $V$. In particular, the reduced point $(a_i,a_j)^{\rm red}$ is contained in $F$. Thus the reduced point $(a_1,\ldots , a_n)^{\rm red}$ is contained in the closed subscheme $$ F^{(n)}:= \bigcap _{i\neq j} pr_{ij}^{-1}(F)\subset Z\times _S\ldots \times _SZ. $$ We claim that there is an $n$ such that $F^{(n)}$ is empty. For any $(x_1,\ldots , x_k)\in F^{(k)}$, let $$ \Phi (x_1,\ldots , x_k):= \{ y\in Z, \;\; \pi (y)= \pi (x_i)\in S,\;\; (y,x_1,\ldots , x_k)\in F^{(k+1)}\} . $$ Note that these are closed subschemes of $Z$ with strict inclusions $$ \Phi (x_1,\ldots , x_k) \subset \Phi (x_1,\ldots , x_{k-1}). $$ Furthermore, $\Phi (x_1,\ldots , x_k)$ varies algebraically with $(x_1,\ldots , x_k)$. Let $d=dim (Z)$ and let $\Lambda = {\bf N} ^d$ with the lexicographic ordering giving the most importance to the $d$th coordinate.
For any algebraic set $Y$ of dimension $\leq d$, let $\lambda (Y)= (\lambda _1, \ldots , \lambda _d)$ be defined by setting $\lambda _i$ equal to the number of irreducible components of dimension $i$. Note that if $Y'\subset Y$ is a strict inclusion of a closed subset then $\lambda (Y')< \lambda (Y)$. Let $\Lambda ^{(k)}$ be the finite set of all $\lambda (\Phi (x_1,\ldots , x_k))$ for $(x_1,\ldots , x_k)\in F^{(k)}$ (it is finite because $\Phi (x_1,\ldots , x_k)$ varies algebraically with $(x_1,\ldots , x_k)$). Introduce an order relation on subsets $\Sigma \subset \Lambda$ by saying $$ \Sigma < \Sigma ' \; \Leftrightarrow \forall \sigma \in \Sigma ,\, \exists \sigma ' \in \Sigma ',\;\; \sigma < \sigma ' . $$ Then the sequence $\Lambda ^{(k)}$ is a sequence of finite subsets which is strictly decreasing for this order relation. We claim that this implies (by combinatorics) that one of the $\Lambda ^{(k)}$ is empty. To see this, assume that the combinatorial claim is true for $d-1$. We will show that the maximum value of $\lambda _d$ on $\Lambda ^{(k)}$ doesn't stabilize. If it were to stabilize after $k_0$ at a certain $y$, then for $k\geq k_0$ we could let $A^k\subset {\bf N} ^{(d-1)}$ be the subset of elements $(a_1,\ldots, a_{d-1})$ such that $(a_1,\ldots , a_{d-1},y)\in \Lambda ^{(k)}$. We obtain a strictly decreasing sequence of subsets for the case of $d-1$, so it is eventually empty, meaning that in fact the maximum value of $\lambda _d$ didn't stabilize. A decreasing sequence which doesn't stabilize can't exist, so eventually there is no maximum at all, in other words $\Lambda ^{(k)}$ becomes empty. This gives the claim. Since one of the $\Lambda ^{(k)}$ is empty, one of the $F^{(k)}$ is empty. Let $N$ be chosen so that $F^{(N)}$ is empty (and consequently $F^{(k)}$ is empty for $k\geq N$). Then by the above argument, if $(\alpha _1,\ldots , \alpha _n)$ is an $n$-tuple of distinct points of $G(Y)$, we must have $n<N$.
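To illustrate the combinatorial part of the argument in the simplest case $d=1$: each $\Lambda ^{(k)}$ is then a finite subset of ${\bf N}$, and the relation $\Lambda ^{(k+1)}<\Lambda ^{(k)}$ says that every element of $\Lambda ^{(k+1)}$ is strictly smaller than some element of $\Lambda ^{(k)}$; in particular $$ \max \Lambda ^{(k+1)} < \max \Lambda ^{(k)} $$ as long as both sets are nonempty. Since a strictly decreasing sequence of natural numbers cannot continue indefinitely, some $\Lambda ^{(k)}$ must be empty.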
This gives the lemma in the case of an artinian local $Y$. Now suppose $A$ is a henselian local ring and $S'= Spec (A)$. Let $S'_n:=Spec (A/{\bf m}_A^n)$. In the inverse system $\lim _{\leftarrow} G(S'_n)$ we have that all of the $G(S'_n)$ have cardinality bounded by $N$. In particular, the cardinality of the inverse limit is bounded by $N$. Now suppose that there are $N+1$ distinct points $y_i$ in $G(S')$. Two of the points go to the same point in $\lim _{\leftarrow} G(S'_n)$, which means that for two of the points, the liftings $z_i,z_j\in Z(S')$ give a point $(z_i,z_j)$ in $Z\times _SZ$ which lifts, over any $S'_n$, into $R$. By strong Artin approximation (a point which should be checked), the point $(z_i,z_j)$ must lift into $R$ so the two points in $G(S')$ are equal, a contradiction. This completes the proof of the lemma. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{I.1.q} If $G$ is a presentable group sheaf, then $G$ is finite if and only if $G(S')$ is finite for any henselian (resp. artinian) $S'$. \end{corollary} {\em Proof:} The lemma provides one direction. For the other, note that if $G(S')$ is finite for artinian $S'$ then $G^0(S')=\{ 1\}$. By uniqueness in the characterization of $G^0$ we get $G^0= \{ 1\}$. \hfill $\Box$\vspace{.1in} \numero{Local study of presentable subgroups} In this section we show that if $H\subset G$ is a presentable subgroup of a presentable group $G$ then locally at the identity, in an appropriate sense, $H$ is defined by the vanishing of a section of a vector sheaf. This is a generalisation of the basic result that a subgroup of an algebraic group is smooth, and hence a local complete intersection---cut out by a section of its normal bundle. We obtain this result only in a ``neighborhood of the identity'', or more precisely upon pullback by a vertical morphism $X\rightarrow G$ such that $X$ admits a lift of the identity.
If $Y$ is a scheme with morphism $Y\rightarrow G$ such that $P\in Y$ maps to the identity section in $G$, then there will be an etale neighborhood of $P\in Y$ lifting to $X$ (which is why we can think of $X$ as a neighborhood of the identity). This result will be used in a future study of de Rham cohomology (results announced in \cite{kobe}). There, it will be important to have a structure theory for presentable subgroups because of the general principle that if $G$ is a presentable group sheaf then $G/Z(G)\subset Aut ({\cal L} )$ where ${\cal L} = Lie (G)$ is the Lie algebra vector sheaf of $G$ (see \S 9 below). A good understanding of the structure of presentable subgroups will allow us to reduce to looking at de Rham cohomology with coefficients in $Aut ({\cal L} )$ for ${\cal L}$ a vector sheaf, and here we have a more concrete hold on what happens. \begin{theorem} \mylabel{D.1} Suppose $G$ is a connected presentable group sheaf over $S$, and suppose $H\subset G$ is a presentable subgroup sheaf. Suppose that $X_1\rightarrow G$ is a vertical morphism with lift of the identity section $e:S\rightarrow X_1$. Suppose $P\in S$. Then there is an etale neighborhood $X\rightarrow X_1$ of $e(P)$ with a lift of the identity $e: S\rightarrow X$ (possibly after localizing in the etale topology of $S$ here) and an etale morphism $\rho : X\rightarrow TX_e$ sending $e$ to the zero section, such that $$ X\times _GH = \rho ^{-1}(TX_e \times _{TG_e} TH_e). $$ In particular, there is a vector sheaf $V$ over $S$ and a section $\sigma : X\rightarrow V$ such that $X\times _GH= \sigma ^{-1}(0)$. \end{theorem} {\em Proof:} Let $X_1\rightarrow G$ be a surjective vertical morphism with $X_1(S')$ connected for all artinian $S'$ (with $X_1$ a scheme of finite type). Put $Y_1:= X_1\times _GH$. It is a subsheaf of $X_1$. We can choose a vertical surjection $Z_1\rightarrow Y_1$ (with $Z_1$ a scheme of finite type over $S$) together with a lift of $(e,e)$ also denoted by $e$. 
Note that the morphism $Z_1\rightarrow H$ is also vertical (using the composition property of vertical morphisms and the fact that the morphism $Y_1\rightarrow H$ is vertical by the pullback property). There is an etale neighborhood of $(e,e)\in X_1\times _SX_1$ denoted by $U_1\rightarrow X_1\times _SX_1$ together with a lifting $\psi : U_1\rightarrow X_1$ of the multiplication in $G$, such that $\psi $ restricted to the inverse images of $\{ e\} \times _SX_1$ or $X_1\times _S\{ e\} $ are the identity. We obtain a morphism $$ U_1\times _{X_1\times _SX_1}(Y_1\times _SY_1)\rightarrow Y_1 $$ compatible with the multiplication in $H$ and again having the property that the restrictions to the inverse images of the two ``coordinate axes'' are the identity. Now pull back our multiplication to $$ U_1\times _{X_1\times _SX_1}(Z_1\times _SZ_1) $$ and note that $Z_1\rightarrow Y_1$ being vertical, there is an etale neighborhood of the identity section (all of this is local on $S$!) $$ V_1\rightarrow U_1\times _{X_1\times _SX_1}(Z_1\times _SZ_1) $$ (which we can consider just as an etale neighborhood $V_1\rightarrow Z_1\times _SZ_1$) and a good lift of our multiplication $$ V_1\rightarrow Z_1 $$ restricting to the identity on the inverse images of the ``coordinate axes''. We obtain in this way morphisms on the etale germs $$ {\bf 2}_{Z_1}: (Z_1,e)\rightarrow (Z_1,e) $$ and $$ {\bf 2}_{X_1}: (X_1,e)\rightarrow (X_1,e) $$ compatible with the morphism $Z_1\rightarrow X_1$. These morphisms induce multiplication by $2$ on the tangent vector schemes. There are unique analytic isomorphisms of complex analytic germs $$ (X_1,e)^{\rm an}\cong (T(X_1)_e,0)^{\rm an} $$ and $$ (Z_1,e)^{\rm an}\cong (T(Z_1)_e,0)^{\rm an} $$ transforming the automorphisms ${\bf 2}$ into multiplication by $2$ and inducing the identity on tangent spaces at the identity section. 
(To see uniqueness, note that over artinian bases these are germs of vector spaces, and any germ of automorphism $f$ of a vector space, such that $f(2x)=2f(x)$, is linear; hence an automorphism which induces the identity on the tangent space at the origin must itself be the identity.) By uniqueness, these isomorphisms are compatible with the morphism $Z_1\rightarrow X_1$. On the formal level, we have an etale morphism of formal germs $$ \hat{\varphi}:\widehat{T(X_1)_e}\rightarrow X_1 $$ such that $\widehat{T(Z_1)_e}$ maps into $Y_1$. The {\em first claim} is that, in fact, this gives a map $$ Spec (\widehat{{\cal O}} _{T(X_1)_e,e(S)}) \rightarrow X_1 $$ such that $$ T(Z_1)_e \times _{T(X_1)_e}Spec (\widehat{{\cal O}} _{T(X_1)_e,e(S)}) $$ maps into $Y_1$. We can now apply Artin approximation to find an etale neighborhood $W_1\rightarrow T(X_1)_e$ of the identity section (of course locally on $S$) together with a morphism $W_1\rightarrow X_1$ inducing the identity on tangent vector schemes at the identity section, and sending $$ T(Z_1)_e\times _{T(X_1)_e}W_1\rightarrow Y_1. $$ We can suppose that the morphism $W_1\rightarrow X_1$ is etale. In particular the morphism $W_1\rightarrow G$ is vertical. We obtain two subsheaves $$ im (T(Z_1)_e\times _{T(X_1)_e}W_1\stackrel{pr_2}{\rightarrow} W_1) \subset W_1\times _{X_1} Y_1 \subset W_1. $$ They have the same tangent subsheaves at the identity. Our {\em main claim} is that by taking an open subset of $W_1$ (still a neighborhood of $e(P)$ for a given basepoint $P\in S$) we can assume that these two subsheaves are equal. The first subsheaf is given by the vanishing of the morphism $$ W_1\rightarrow T(X_1)_e /T(Z_1)_e = T(G)/T(H), $$ while the second subsheaf is equal to $W_1\times _GH$. Setting $X=W_1$ we obtain the result of the theorem. We just have to prove the {\em first claim} and the {\em main claim}.
{\em Proof of the first claim:} By the sheaf condition and the finite type condition B1 and B2 for $Y_1$, it suffices to prove that for any artinian $S'$, we have $$ T(Z_1)_e \times _{T(X_1)_e}Spec (\widehat{{\cal O}} _{T(X_1)_e,e(S)})(S') $$ mapping into $Y_1(S')$. That is to say, we have to prove that for any point $S'\rightarrow T(Z_1)_e$ mapping to a point of $T(X_1)_e$ located near the origin (that is to say factoring through $Spec (\widehat{{\cal O}} _{T(X_1)_e,e(S)})$), this point maps into $Y_1(S')$. We change to an algebraic notation. We can suppose that $S=Spec(A)$, $T(X_1)_e=Spec(B)$ and $T(Z_1)_e=Spec(C)$. Further we can suppose that $S'=Spec (K)$ with $K$ artinian (although not necessarily of finite type). We have $C\rightarrow K$. Since $T(Z_1)_e$ is a vector scheme we have a map $C\rightarrow C[t]$ corresponding to multiplication by $t$ (and compatible with the same map on $B$). Let $\hat{B}$ denote the completion of $B$ around the zero section (which corresponds to an ideal ${\bf b}\subset B$). We are provided with a factorisation $B\rightarrow \hat{B}\rightarrow K$. We can assume that $K$ is of finite type over $\hat{B}$, and in particular that $K$ is the total fraction ring of a subring $R\subset K$ such that $R$ is finite over $\hat{B}$. Let ${\bf r}\subset R$ denote the ideal corresponding to ${\bf b}\subset B$ (note that $R$ is complete with respect to ${\bf r}$). Let $K\{ t\} \subset K[[ t]]$ denote the set of formal series of the form $\sum a_it^i$ such that there exists $\eta \in R$ such that $\eta a_i \in {\bf r}^i$. With the same notations for $B$, multiplication by $t$ provides a map $\hat{B}\rightarrow B\{ t\}$ compatible with the map $B\rightarrow B[t]$, hence we get a map $\hat{B} \rightarrow K\{ t\}$. On the other hand we get a map $C\rightarrow C[t]\rightarrow K[t]\rightarrow K\{ t\}$. Putting these together we get a map $$ \hat{B}\otimes _BC \rightarrow K\{ t\} $$ corresponding to multiplication by $t$. 
There is an evaluation at $t=1$ which is a map $K\{ t\} \rightarrow K$ (this summability of the formal series comes from the definition of $K\{ t\}$ and the completeness of $R$), and the above map is compatible with this and with the map $\hat{B}\otimes _BC\rightarrow K$ given at the start. All in all we obtain a map $$ Spec (K\{ t\}) \rightarrow T(Z_1)_e \times _{T(X_1)_e}Spec (\widehat{{\cal O}} _{T(X_1)_e,e(S)}) $$ which induces on the subscheme $Spec (K)\rightarrow Spec (K\{ t\})$ (evaluation at $t=1$) the original inclusion. Now compose with the projection into $G$. We obtain a morphism $$ Spec (K\{ t\} )\rightarrow G $$ which sends $Spec (K[[t ]])$ into $H$ (this comes from the condition that $\widehat{T(Z_1)}$ maps into $Y_1$ together with B1 and B2 for $Y_1$ or $H$) and we would like to show that it sends $Spec (K)$ (at $t=1$) into $H$. It suffices to show that $Spec (K\{ t\} )\rightarrow H$. By Noether normalization there is a morphism $R'\rightarrow R$ such that $R'$ is integral and $R$ is finite over $R'$. Let $K'$ be the total fraction ring of $R'$: it is a field, and $K$ is finite over $K'$. There is an ideal ${\bf r'}\subset R'$ which induces ${\bf r}$, and $K\{ t\}$ is finite over the ring $K'\{ t\}$ defined in the same way as above with respect to this ideal. Let $G'$ and $H'$ denote the direct images to $Spec (K')$ of the groups $G$ and $H$ pulled back to $K$. We have that $H'$ is a presentable subgroup of the presentable group $G'$ (Lemma \ref{I.1.i}), but since $K'$ is a field, $H'\subset G'$ is a closed subgroup of the algebraic group $G'$ over $K'$. Since $K$ is finite over $K'$ we have $$ K\{ t\} = K'\{ t\} \otimes _{K'}K, $$ whence our point $Spec (K\{ t\} )\rightarrow G$ gives a point $Spec (K'\{ t\} )\rightarrow G'$ sending $Spec (K'[[ t]])$ into $H'$. 
Now since $H'$ is a closed subgroup of $G'$ both of which are algebraic groups (of finite type) over $K'$, we get that $Spec (K' \{ t\} )\rightarrow H'$, meaning that $Spec (K\{ t\} )\rightarrow H$. This completes the proof of the first claim. \hfill $\Box$\vspace{.1in} {\em Proof of the main claim:} Suppose that the main claim is not true. Note that there is a scheme of finite type surjecting to $W_1\times _{X_1}Y_1$. The falsity of the main claim means that the morphism from this scheme to $T(X_1)_e/T(Z_1)_e$ is nonzero on any subset of the form pullback of an open subset of $W_1$ containing $P$. In particular we can find a (possibly nonreduced) curve inside this scheme, such that the section pulls back to something nonzero on the generic (artinian) point, but such that the image of the curve in $W_1$ contains $P$ in its closure. We get an $S$-scheme $S'$ with reduced scheme equal to a curve, and a morphism $\psi :S'\rightarrow W_1\times _GH$ such that the projection into $T(X_1)_e/T(Z_1)_e$ is nontrivial at the generic point of $S'$, such that $P$ is in the closure of the image of $S'$. Let $\overline{S}'$ be a closure of $S'$ relative to $W_1$ obtained by adding one point over $P$. Call this point $P'$. Then for any $n$ there is an etale neighborhood of $P\in W_1$ on which the squaring map $n$-times is defined. We obtain an etale $\overline{S}'_n\rightarrow \overline{S}'$ on which the squaring map $n$-times is defined. We may assume that $\overline{S}'_n$ consists of an etale morphism $S'_n\rightarrow S'$, union one point $P'_n$ over $P'$. Denote by $\psi _n: \overline{S}'_n\rightarrow W_1$ the result of the squaring operation iterated $n$ times. There is an analytic isomorphism of a neighborhood of $P'_n$ in $\overline{S}'_n$ with a neighborhood of $P'$ in $S'$, and an analytic trivialization of a neighborhood of $P$ in $W_1$ (isomorphism with the tangent vector scheme) such that $\psi _n= 2^n\psi$ as analytic germs around the point $P'_n$. 
{\em Step 1.} There is an $n_0$ such that for any $n\geq n_0$, the projection of $S'_n$ into $T(X_1)_e/T(Z_1)_e$ is nontrivial at the generic point of $S'_n$. In particular for any $m$ the projection of $S'_{mn_0}$ into $T(X_1)_e/T(Z_1)_e$ is nontrivial at the generic point of $S'_{mn_0}$. Let $v: W_1\rightarrow T(X_1)_e/T(Z_1)_e$ denote our section. With respect to our analytic trivialization of $W_1$ where the squaring map becomes multiplication by $2$, we can take a Taylor expansion for $v$ around the identity section of $W_1$, $$ v= v_1 + v_2+ v_3 + \ldots + v_{i-1} + w_i, $$ with $v_j(2x)= 2^jv_j(x)$ and $w_i$ vanishing to order $i$ along $e$; this notion can be defined by considering $w_i$ as a section of a coherent sheaf ${\cal F}$ which contains $T(X_1)_e/T(Z_1)_e$. By hypothesis the restriction of $v$ to $S'$ is nonzero at the generic point of $S'$. Let ${\cal G} _{S'}$ be the quotient of ${\cal F} |_{S'}$ by the ``torsion'' subsheaf (i.e. the subsheaf of sections supported in dimension zero). That a section is nonzero at the generic point means that its projection into ${\cal G} _{S'}$ is nonzero. We may choose $i$ big enough so that $v$ is nonzero in ${\cal G} _{S'}$ modulo the image of sections which vanish to order $i$ along $e$. Let $\overline{v}_j$ denote the projection of $v_j$ into the space of sections of ${\cal G}_{S'}$ modulo the image of the sections vanishing to order $i$. At least one of the $\overline{v}_j$ is nonzero. Now notice that the projection of $v(2^nx)$ is equal to $$ \overline{v(2^nx)} = 2^n\overline{v}_1(x) + 2^{2n}\overline{v}_2(x) + \ldots + 2^{(i-1)n}\overline{v}_{i-1}(x). $$ A little $2$-adic argument shows that there is $n_0$ such that for $n\geq n_0$ this quantity must be nonzero (for instance: if it vanished for $i-1$ distinct values $n_1< \ldots < n_{i-1}$ then, the matrix $(2^{jn_k})_{1\leq j,k\leq i-1}$ being an invertible Vandermonde-type matrix, all of the $\overline{v}_j$ would vanish, a contradiction; so there are at most $i-2$ exceptional values of $n$). We obtain that $\overline{v(2^nx)}\neq 0$ and hence that $v(2^nx)=v(\psi_nx)$ is nonzero at the generic point of $S'_n$, as claimed for Step 1. {\em Step 2.} The Zariski closure of the union of the images of the $\psi _{mn_0}$ contains the zero-section.
To prove this, note that in the formal completion at $P$, the union of the closures of the $S'_{mn_0}$ is a subset stable under multiplication by $2^{n_0}$, hence its Zariski closure is stable under (fiberwise) multiplication by $2$, hence it is fiberwise homogeneous and thus contains the zero-section. The completion of the Zariski closure contains the Zariski closure of the intersection with the completion, so the zero-section is in the closure. {\em Step 3.} Over the generic point of $S$, the zero section is in the Zariski closure of the $S'_{mn_0}$. Otherwise we would obtain a function nonvanishing on the zero section and vanishing on the $S'_{mn_0}$; clearing denominators this function can be assumed defined over $S$ rather than the generic point of $S$, and since (we may assume) the $S'_{mn_0}$ are all schemes of pure dimension $1$ dominating $S$, this function defined over $S$ which vanishes generically on the $S'_{mn_0}$, must vanish identically on the $S'_{mn_0}$. This would contradict the fact that the zero section is in the Zariski closure globally over $S$. {\em End of proof of claim:} Now we work over the generic geometric artinian point of $S$. Change notations now to suppose that $S$ is artinian and $S'=S$; we denote the schemes $S'_{mn_0}$ by $S_{mn_0}$ (they are all isomorphic to $S$) with $S'=:S_1$. We have points $S_{mn_0}\rightarrow W_1\times _GH$ all mapping to something nonzero in $T(X_1)_e/T(Z_1)_e$. Note, as a bit of a detour, that the connected component of the identity in $W_1\times _GH (S)$ must map to zero in $T(X_1)_e/T(Z_1)_e(S)$. This is because $T(X_1)_e/T(Z_1)_e(S)= T(X_1)_e(S)/T(Y_1)_e(S) = T(G)_e(S)/T(H)_e(S)$, whereas verticality of $X_1\rightarrow G$ implies that $X_1(S)\rightarrow G(S)$ is smooth.
In particular $W_1\times _GH (S)$ is a smooth local complete intersection, so a morphism from $W_1(S)$ to the normal space $T(G)_e(S)/T(H)_e(S)$ of $W_1\times _GH (S)$, with zero set contained in the complete intersection, must have zero set which is a union of connected components of $W_1\times _GH (S)$. Containing the identity, it contains the connected component of the identity. In particular, our points $S_{mn_0}\rightarrow W_1\times _GH$ from before are never in the connected component of $W_1\times _GH (S)$ which contains the identity. On the other hand, these points all lift to $Z_2\rightarrow W_1$ (a scheme of finite type surjecting vertically to $W_1\times _GH$). Let $Z_2(S)'$ denote the union of components of $Z_2(S)$ which contain liftings of our points $S_{mn_0}\rightarrow W_1$. We have a morphism $Z_2(S)'\rightarrow W_1(Spec (k))$ whose image is a constructible set. But the image contains all of the points where the $S_{mn_0}$ are located, so the image must contain a generic point of any irreducible component of the Zariski closure of the $S_{mn_0}$. In particular, there is a component of $Z_2(S)'$ which maps to something in $W_1(Spec (k))$ containing the identity in its closure. Let $W_1(S)_e$ denote the inverse image of $e\in W_1(Spec (k))$ in $W_1(S)$. Let $N\subset G(S)$ denote the image of $W_1(S)_e\rightarrow G(S)$. We claim that $N$ is a unipotent subgroup of $G(S)$, and that the morphism $W_1(S)_e\rightarrow N$ is a fibration with connected fibers. Assume this claim for the moment. The image of $W_1(S)\rightarrow W_1(Spec (k))$ is a closed subvariety $R\subset W_1(Spec (k))$ (this can be seen since $W_1$ is etale over the vector scheme $TX_1$). We have a morphism $R\rightarrow G(S)/N$. On the other hand, the above morphism $Z_2(S)' \rightarrow W_1(Spec (k))$ factors through a morphism $Z_2(S)'\rightarrow R$, and the image of this map contains $1\in R$ in its closure.
The morphism $W_1(S)\rightarrow R$ is a fibration with fiber $W_1(S)_e$ in the etale topology. It suffices to prove that $TX_1 (S) \rightarrow TX_1(Spec (k))$ is a fibration over its image, since locally in the etale topology $W_1$ is isomorphic to $TX_1$. In fact if $V$ is any vector scheme then $V(S)$ and $V(Spec (k))$ are vector spaces so the morphism $V(S)\rightarrow V(Spec (k))$ is a fibration over its image, with fiber the inverse image of the origin. We now show that the morphism $$ W_1\times _GH(S) \rightarrow R \times _{G(S)/N} H(S) $$ is a fibration in the etale topology with fiber the kernel of $W_1(S)_e\rightarrow N$. Locally on $R$ we can choose a lifting $\lambda : R \rightarrow W_1(S)$ and then we have a morphism $$ R\times _{G(S)/N}H(S)\rightarrow N $$ given by $(r,h)\mapsto h^{-1}im(\lambda (r))$. We claim that (locally over $R$) $$ W_1(S)\times _{G(S)}H(S) = W_1(S)_e \times _N (R\times _{G(S)/N}H(S)). $$ The morphism from right to left associates to the point $(a,r,h)$ the point $(i(a)\ast \lambda (r) , h)$ where $i: W_1\rightarrow W_1$ is an etale-locally defined morphism covering the inverse. This shows that the morphism at the start of the paragraph is a fibration. Suppose $A$ is an algebraic group with connected algebraic subgroups $B\subset A$ and $N\subset A$. Then the morphism $$ B / (B\cap N) \rightarrow A/N $$ is proper over an open neighborhood of the class of the identity in $A/N$. To prove this, proceed as follows. Let $I\subset A/N$ denote the image. Let $Z\subset A/N$ denote the subset of points over which the map in question is not proper. This can be constructed as follows. Let $X:= B/(B\cap N)$, and let $\overline{X}$ be a relative completion with proper morphism $\overline{X}\rightarrow A/N$; and suppose that $X\subset \overline{X}$ is open and dense. Then $Z$ is the image of $\overline{X}-X$. 
Since the map $X\rightarrow A/N$ is injective, we have that the dimension of the image $Z$ is strictly less than the dimension of the image $I$ of $X$. In particular, there is a point $y\in I$ such that the morphism in question is proper over a neighborhood $U$ of $y$. But since $B$ acts on $X$ and compatibly on $A$ (by left multiplication) the morphism in question is proper over any translate of the form $bU$. Setting $b\in B$ equal to the inverse of a representative in $B$ for $y$ we obtain a neighborhood $bU$ of the identity over which the map is proper. Note that, by the claim which we are accepting for now, the fiber of the fibration $W_1(S)_e\rightarrow N$ is connected. On the other hand, by the previous paragraph the map $H(S)\rightarrow G(S)/N$ induces a map $H(S)/(H(S)\cap N)\rightarrow G(S)/N$ which is proper over a neighborhood of the class of the identity. Let $I\subset G(S)/N$ be the image of this map. It is a locally closed subset and the subset topology coincides with the topology of the base of the fibration, at least near the identity. Note that $H(S)$ is fibered over $I$ with fibers $H(S)\cap N$ which are connected because $N$ and hence $H(S)\cap N$ are unipotent groups (unipotent groups are always connected). Finally we have the following situation: $$ W_1\times _GH(S)\rightarrow R\times _{G(S)/N} I $$ is a fibration with connected fiber, whereas $I\subset G(S)/N$ is a locally closed subset. Since an etale fibration is an open map, the image of the connected component of the identity in $W_1\times _GH(S)$ is an open neighborhood of the identity in $R\times _{G(S)/N}I$. Since the fibers of the fibration are connected, the image of the complement of the identity component is the complement of the image of the identity component.
In particular, there is an open neighborhood of the identity in $R\times _{G(S)/N}I\subset R$ (and hence an open neighborhood of the identity in $R$) whose inverse image doesn't meet any other connected component of $W_1\times _GH(S)$. Finally, since $R$ is a closed subset of $W_1(Spec(k))$, we obtain an open neighborhood of the identity in $W_1(Spec (k))$ whose inverse image in $W_1\times _GH(S)$ is contained in the connected component of the identity. This is a contradiction to our earlier situation where $Z_2(S)' \rightarrow W_1(Spec (k))$ has image a constructible set with the identity in its closure. This completes the proof modulo the following part. We have to show that $N$ is a unipotent subgroup of $G(S)$ and that the morphism $W_1(S)_e\rightarrow N$ is a fibration with connected fibers. Write $S=Spec (A)$ with $A$ artinian, and choose a sequence of ideals $I_j\subset A$, for example $I_j= {\bf m}^j$. Let $$ W_1(S)_j $$ be the set of points of $W_1(S)$ which restrict to the identity on $Spec (A/I_j)$. In particular $W_1(S)_1=W_1(S)_e$. Choose a good lift of the multiplication $$ W_1\times W_1 \rightarrow W_1 $$ in a formal neighborhood of our point $P$. We obtain $$ \ast : W_1(S)_e\times W_1 (S)_e\rightarrow W_1(S)_e. $$ This operation is not a group law; however, we have the following property. $$ \ast : W_1(S)_j\times W_1 (S)_j\rightarrow W_1(S)_j. $$ Next, note that the fact that $W_1$ is etale over a vector scheme gives another operation which we denote $$ + : W_1(S)_e\times W_1 (S)_e\rightarrow W_1(S)_e $$ which is an abelian group structure. We can write $$ a*b = a+b + F(a,b) $$ where $$ F: W_1(S)_j\times W_1 (S)_j\rightarrow W_1(S)_{j+1}. $$ This is because there is a unique good operation on the set of elements of $W_1(Spec (A/I_{j+1}))$ which restrict to the identity in $W_1(Spec (A/I_j))$. (Also note that the $+$-quotient $W_1(S)_j/W_1(S)_{j+1}$ injects into this subset of $W_1(Spec(A/I_{j+1}))$.)
Because of this formula, we can define the quotient $W_1(S)_j/W_1(S)_{j+1}$ with respect to the operation $\ast$ and it is the same as the quotient with respect to $+$. In particular note that the morphism $$ W_1(S)_j\rightarrow W_1(S)_j/W_1(S)_{j+1} $$ is a fibration with fiber a vector space. The morphism $W_1(S)_e\rightarrow G(S)$ is compatible with the operation $\ast$. Let $N_j$ denote the image of $W_1(S)_j$ in $G(S)$ (in particular $N_1=N$). The $N_j$ are constructible sets and subgroups so they are algebraic subgroups of $G(S)$. We obtain a surjective morphism $$ W_1(S)_j /W_1(S)_{j+1} \rightarrow N_j/N_{j+1}, $$ but by the previous formula the group on the left is a unipotent algebraic group; this shows that $N_j/N_{j+1}$ is a unipotent group, and since extensions of unipotent groups are unipotent (and $N_j=\{ 1\}$ for $j$ large), $N=N_1$ is unipotent. We claim that $W_1(S)_e\rightarrow N$ is smooth. Suppose $R'\subset R$ is an inclusion of artinian schemes over $Spec(k)$. Look at the map $$ W_1(S)_e(R)\rightarrow W_1(S)_e(R')\times _{N(R')}N(R). $$ Suppose that we have a map $S\times R\rightarrow G$ and a lifting over $S\times R'$ to $W_1$, sending $Spec(k)\times R$ to $e$. We would like to find a lifting over $S\times R$ sending $Spec(k)\times R$ to $e$. We can do this whenever $R'$ is a union of $R_i$ and we have commuting retracts from $R'$ to $R_i$: just apply the verticality property to $R\times S$ with retracts to $R_i\times S$ and $R\times Spec (k)$. This proves that the morphism $W_1(S)_e\rightarrow G(S)$ is vertical with respect to $Spec(k)$. It is then immediate that the morphism $W_1(S)_e\rightarrow N$ is surjective (since $N$ is the image of the previous map). Note that $N$ is presentable, so by Lemma \ref{smooth}, the morphism $W_1(S)_e\rightarrow N$ is smooth. Let $K\subset W_1(S)_e$ be the inverse image of $1\in N$.
Then, writing $f$ for the morphism $W_1(S)_e\rightarrow N$, for any two points $a,b$ with $f(a)=f(b)$ there is a unique element $k\in K$ such that $b=k\ast a$ (the existence and uniqueness of such an element $k\in W_1(S)_e$ can be seen using the above grading and the expression for $\ast$, and then it is immediate that $k\in K$ from compatibility of $f$ with $\ast$). Any point $n\in N$ has an etale neighborhood $n\in U\stackrel{p}{\rightarrow} N$ with a section $\sigma :U\rightarrow W_1(S)_e$. Then we obtain a morphism $$ K\times U \rightarrow W_1(S)_e\times _N U $$ obtained by sending $(k,u)$ to $(k\ast \sigma (u), p(u))$. This is an isomorphism on the level of points by the above property for $K$, and both sides are smooth, so it is an isomorphism. This proves that $W_1(S)_e\rightarrow N$ is a fibration in the etale topology. Finally, to show that the fibers are connected it suffices to show that $K$ is connected. But since $W_1(S)_e$ and $N$ are vector spaces and the morphism $f$ is a fibration in the etale topology, the associated analytic morphism is a fibration in the usual topology, so the fiber is contractible. \hfill $\Box$\vspace{.1in} \numero{The Lie algebra sheaf} \begin{theorem} \mylabel{lmn} If ${\cal G}$ is a presentable group sheaf, and if we set $Lie ({\cal G}) := T({\cal G} )_1$, then there is a unique bilinear form (Lie bracket) $$ [\cdot , \cdot ] : Lie ({\cal G} )\times Lie ({\cal G} )\rightarrow Lie ({\cal G} ) $$ which, over artinian base schemes, reduces to the usual Lie bracket. \end{theorem} {\em Proof:} A section of $Lie ({\cal G} )$ over $S'= Spec (A)$ is a morphism $Spec (A[\epsilon ])\rightarrow {\cal G}$ sending $Spec (A)$ to the identity section (in our notation here $\epsilon$ denotes an element with $\epsilon ^2=0$).
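Before continuing, we recall the classical picture which the construction below imitates (a standard example, included only for orientation): for ${\cal G} =GL_n$, a morphism $Spec (A[\epsilon ])\rightarrow GL_n$ sending $Spec (A)$ to the identity is a matrix $1+\epsilon X$ with $X\in M_n(A)$ (invertibility is automatic since $(1+\epsilon X)^{-1}=1-\epsilon X$), so that $Lie (GL_n)(Spec (A))=M_n(A)$; and for $\alpha = 1+\epsilon X$ and $\beta = 1+\epsilon ' Y$ the commutator to be formed below works out to $$ (1+\epsilon X)(1+\epsilon ' Y)(1-\epsilon X)(1-\epsilon ' Y) = 1 + \epsilon \epsilon ' (XY-YX), $$ recovering the usual bracket via $\delta \mapsto \epsilon \epsilon '$.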
Given two such morphisms which we denote $\alpha$ and $\beta$ we obtain $$ \alpha p_1, \, \beta p_2 : Spec (A[\epsilon , \epsilon '])\rightarrow {\cal G} $$ (where also $(\epsilon ')^2=0$) and we can form the morphism $$ \gamma := \alpha p_1 \cdot \beta p_2 \cdot (\alpha m p_1) \cdot (\beta m p_2) : Spec (A[\epsilon , \epsilon '])\rightarrow {\cal G} $$ where $g\cdot h$ denotes composition in ${\cal G}$, and where $m: A[\epsilon ]\rightarrow A[\epsilon ]$ is the involution sending $\epsilon$ to $-\epsilon$. The morphism $\gamma$ restricts to the identity on $Spec (A[\epsilon , \epsilon ']/(\epsilon \epsilon '))$. Let $$ q: Spec (A[\epsilon , \epsilon '])\rightarrow Spec (A[\delta]) $$ denote the morphism sending $\delta$ to $\epsilon \epsilon '$ (here again $\delta ^2=0$). Our first claim is that if the morphism $\gamma$ factors as $\gamma = \varphi \circ q$ then $\varphi $ is unique. To see this, suppose that $\phi$ and $\varphi$ were two morphisms $Spec (A[\delta])\rightarrow {\cal G}$ with $\phi \circ q = \varphi \circ q$. Let $X\rightarrow {\cal G}$ and $R\rightarrow X\times _{{\cal G}}X$ be the morphisms in a presentation of ${\cal G}$, with a chosen lift of the identity section into $X$. Choose liftings $\tilde{\varphi}$ and $\tilde{\phi}$ from $Spec (A[\delta ])$ into $X$ sending $Spec (A)$ to the identity section of $X$ (here we may have to localize on $S'=Spec (A)$ in the etale topology---but henceforth ignore this point, much as we have already ignored it in lifting the identity section into $X$\ldots ). The fact that the compositions with $q$ are the same means that the pair $(\tilde{\varphi}\circ q, \tilde{\phi}\circ q)$ defines a point which we denote $$ \eta :Spec (A[\epsilon , \epsilon '])\rightarrow X\times _{{\cal G}}X.
$$ Note that $Spec (A[\epsilon , \epsilon ']/(\epsilon \epsilon '))$ projects by $q$ to $Spec (A)\subset Spec (A[\delta ])$ and both $\tilde{\varphi}$ and $\tilde{\phi}$ send $Spec (A)$ to the identity section (by hypothesis on our liftings) so in particular $\eta$ sends $Spec (A[\epsilon , \epsilon ']/(\epsilon \epsilon '))$ to the identity pair $(e,e)$ in $X\times _{{\cal G} }X$. On the other hand we can take $Y=Spec (A[\epsilon , \epsilon '])$ and $Y_1=Spec (A[\epsilon ])$ and $Y_2 = Spec (A[\epsilon '])$ and then apply the lifting property $Lift _2(Y, Y_i)$ which holds for the morphism $R\rightarrow X\times _{{\cal G}}X$ because (from the hypothesis in the property $P4$) this morphism is vertical. Fix a lifting $e_R: S\rightarrow R$ of the identity pair in $X\times _{{\cal G}}X$ and fix the values of the morphisms (denoted $\lambda _i$ in the definition of the lifting property) as being $e_R$ on $Y_1$ and $Y_2$. These are indeed liftings of our given morphisms $Y_i \rightarrow X\times _{{\cal G}}X$ since, as we have seen above, both $Y_1$ and $Y_2$ map to the identity pair (the subscheme defined by $(\epsilon \epsilon ')$ is the union of $Y_1$ and $Y_2$). We obtain by the lifting property a lifting $Y\rightarrow R$ which agrees with $e_R$ on $Y_1$ and $Y_2$. If we write (locally) $R=Spec (B)$ then this morphism corresponds to a morphism $a:B\rightarrow A[\epsilon , \epsilon ']$ such that the projection of $B$ modulo $\epsilon$ or modulo $\epsilon '$ is a constant morphism $B\rightarrow A$. It now follows that $a$ factors through $B\rightarrow A[\delta ]$. We obtain a morphism $Spec (A[\delta ])\rightarrow R$ whose projection into $X\times X$ is the pair $(\tilde{\varphi}, \tilde{\phi} )$ (that this is the case is easy to check directly again by supposing that $X$ is affine). 
This implies that $(\tilde{\varphi}, \tilde{\phi} )$ has image in $X\times _{{\cal G}}X$, in other words that the morphisms $\tilde{\varphi}$ and $\tilde{\phi}$ from $Spec (A[\delta ])$ into $X$ project to the same morphism into ${\cal G}$. Thus $\varphi = \phi$, completing the proof of uniqueness. Now we show existence of the factorization $\gamma = \varphi \circ q$. The preceding uniqueness result implies that it is sufficient to construct $\varphi$ after etale localization on $S'$. Thus we may assume that $\alpha$ and $\beta$ lift to points $\tilde{\alpha}, \tilde{\beta}: Spec (A[\epsilon ])\rightarrow X$ sending $Spec (A)$ to the identity section. There is a good lifting of the multiplication in ${\cal G}$ to a multiplication $X\times X \rightarrow X$ which we still denote $x\cdot y$, where goodness means the property $x\cdot e = e \cdot x = x$. We can now put $$ \tilde{\gamma }:= \tilde{\alpha }p_1 \cdot \tilde{\beta }p_2 \cdot (\tilde{\alpha }m p_1) \cdot (\tilde{\beta }m p_2) : Spec (A[\epsilon , \epsilon '])\rightarrow X. $$ We still have the formula that $$ \alpha \cdot (\alpha m) = e $$ (this is because the first-order term of the composition is just addition of vectors) and from this formula it follows that $\tilde{\gamma }$ sends the subschemes $Spec (A[\epsilon ])$ and $Spec (A[\epsilon '])$ to $e$ (through their projections to $Spec (A)$). Since now $X$ is a scheme, this implies directly the existence of $\tilde{\varphi } : Spec (A[\delta ])\rightarrow X$ such that $\tilde{\gamma } = \tilde{\varphi } \circ q$. Projecting from $X$ to ${\cal G}$ we get the desired factorization $\varphi$. Finally, we set $[\alpha , \beta ] := \varphi$ from the above construction. It is of course completely clear from the construction that if $S'$ is artinian, this gives the usual Lie bracket on the algebraic group ${\cal G} (S' )$. It remains to be seen that this morphism is bilinear and satisfies the Jacobi identity (i.e.
that a certain deduced trilinear form vanishes). But these properties can be checked on values over artinian schemes $S'$, and there, since the bracket we have defined coincides with the usual one, we get bilinearity and the Jacobi identity. \hfill $\Box$\vspace{.1in} {\em Remark:} The subtlety in our whole situation is essentially that the factorization, while immediate and obviously unique in the case where the target of the map is a scheme, does not necessarily exist and may not be unique even if it does exist, when the target of the map is just a sheaf. One can give examples of $P2$ sheaves ${\cal H}$ on ${\cal X} /S$ together with morphisms $Spec (A[\epsilon , \epsilon '])\rightarrow {\cal H}$ restricting to a given section $S\rightarrow {\cal H}$ over the subscheme defined by $(\epsilon \epsilon ')$, and where the morphism either doesn't factor through $Spec (A[\delta ])$ or else such that the factorization isn't unique. We indicate here a simpler example which shows the way toward the examples referred to in the above paragraph. Let $Y\rightarrow X$ be a degree $2$ morphism of smooth curves completely ramified above a point $x\in X$. Let ${\cal F}$ be the image of this morphism (considered as a sheaf on ${\cal X}$). Let $y$ be the point lying over $x$ and suppose $f: Spec (k[\tau ]/\tau ^3)\rightarrow Y$ is a nonzero tangent vector located at $y$. Then the associated element of ${\cal F}( Spec (k[\tau ]/\tau ^3))$ is constant (equal to the constant point $y$) on the subscheme $$ Spec (k[\tau ]/\tau ^2)\subset Spec (k[\tau ]/\tau ^3). $$ Nevertheless there exists no factorization of the form $$ Spec (k[\tau ]/\tau ^3)\rightarrow Spec (k[\delta ]/\delta ^2)\rightarrow {\cal F} $$ where the first map sends $\delta$ to $\tau ^2$ (this factorization would have existed had ${\cal F}$ been a scheme).
We can obtain an example where a factorization of the type needed in the above theorem doesn't exist, simply by composing this example with the morphism $Spec (k[\epsilon , \epsilon ']) \rightarrow Spec(k[\tau ]/\tau ^3)$ sending $\tau$ to $\epsilon + \epsilon '$. The sheaf ${\cal F}$ in this example is not $P4$ with respect to $Spec (k)$. Of course ${\cal F}$ is not a group sheaf. As stated elsewhere, I am not sure whether a $P2$ group sheaf might automatically have to be $P4$, for example (or at least satisfy some of the properties we use here). For example we have seen that an algebraic space (of finite type) which is a group is automatically a scheme. \subnumero{The adjoint representation} Suppose ${\cal G}$ is a presentable group sheaf. Then ${\cal G}$ acts on itself by conjugation, by the formula $$ Int (g)(h):= ghg^{-1}. $$ More precisely this action is a morphism ${\cal G} \times {\cal G} \rightarrow {\cal G}$ and if we put in the identity map on the first projection we obtain a morphism ${\cal G} \times{\cal G} \rightarrow {\cal G} \times {\cal G}$ which is a morphism of group objects (the second variable) over the first variable ${\cal G}$. From this and from the invariance of the above definition of the Lie algebra object, this action induces an action (the {\em adjoint action}) $$ {\cal G} \times Lie ({\cal G} )\rightarrow Lie ({\cal G} ) $$ which preserves the bracket. If $({\cal L} , [,])$ is a Lie algebra sheaf (that is to say, ${\cal L}$ is a vector sheaf with bilinear morphism $[,]: {\cal L} \times {\cal L} \rightarrow {\cal L}$ satisfying the Jacobi identity) then we obtain a group sheaf $Aut ({\cal L} , [,])$. \begin{lemma} \mylabel{AutLie} If $({\cal L} , [,])$ is a Lie algebra sheaf then $Aut ({\cal L} , [,])$ is a presentable group sheaf. \end{lemma} {\em Proof:} The group sheaf $Aut ({\cal L} )$ of automorphisms of the vector sheaf ${\cal L}$ is presentable and in particular $P4$ by Theorem \ref{I.1.g}.
The Lie bracket can be considered as a morphism $$ {\cal L} \otimes _{{\cal O}} {\cal L} \rightarrow {\cal L} . $$ The subgroup $Aut ({\cal L} , [,])\subset Aut ({\cal L} )$ may thus be represented as the equalizer of two morphisms $$ Aut ({\cal L} ) \rightarrow Hom ({\cal L} \otimes _{{\cal O}} {\cal L} , {\cal L} ). $$ Note that $Hom ({\cal L} \otimes _{{\cal O}}{\cal L} , {\cal L} )$ is a vector sheaf by Lemma \ref{I.s} and the definition of tensor product following that lemma; and presentable by Theorem \ref{I.1.g}. In particular $Aut ({\cal L} , [,])$ is $P4$ by Lemma \ref{I.1.a} and presentable by Corollary \ref{I.z}. \hfill $\Box$\vspace{.1in} The adjoint action may be interpreted as a morphism of presentable group sheaves $$ Ad : {\cal G} \rightarrow Aut (Lie ({\cal G} ), [,] ). $$ We can of course forget about the bracket and compose this with the morphism into $Aut(Lie ({\cal G} ))$ which is just the automorphism sheaf of a vector sheaf. \begin{proposition} \mylabel{Adjoint} Suppose ${\cal G}$ is a connected presentable group sheaf. Then the kernel of the morphism $Ad$ is the center $Z({\cal G} )$ (that is to say the sheaf whose values are the centers of the values of ${\cal G}$). \end{proposition} {\em Proof:} The statement amounts to saying that a section $g$ of ${\cal G}$ acts trivially on ${\cal G}$ if and only if it acts trivially on $Lie ({\cal G} )$. This statement is true, in fact, of any automorphism (defined over any base scheme $S'\rightarrow S$). It suffices to prove this last statement for the values over artinian base schemes (if an automorphism agrees with the identity on the values over all artinian base schemes then it must be equal to the identity). In the case of values over artinian base schemes it is just the statement that an automorphism which acts trivially on the Lie algebra of a connected algebraic group must act trivially on the whole group. 
\hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{centerPres} If ${\cal G}$ is a connected presentable group sheaf, then the center $Z({\cal G} )$ is again presentable. \end{corollary} {\em Proof:} By Proposition \ref{Adjoint} the center is the kernel of a morphism of presentable group sheaves. By Theorem \ref{I.1.e}, this kernel is presentable. \hfill $\Box$\vspace{.1in} {\em Question} Suppose ${\cal G}$ is a presentable group sheaf, not necessarily connected. Is the center $Z({\cal G} )$ presentable? This is related to the following question. {\em Question} Suppose $H$ is a finite presentable group sheaf. Is $Aut (H)$ presentable? A positive response here would allow us to prove that the center $Z({\cal G} )$ is presentable, because it is the kernel of the action of $Z({\cal G} ^o)$ on the group of connected components $H={\cal G} /{\cal G} ^o$. \subnumero{Determination of presentable group sheaves by their Lie algebras} The object of this section is to prove the following results, which generalize the well-known principle that a Lie group is determined by its Lie algebra, up to finite coverings, if the center is unipotent. \begin{lemma} \mylabel{123} Suppose $F, G \subset H$ are two presentable group subsheaves of a presentable group sheaf $H$, and suppose $F$ and $G$ are connected. If $Lie (F)=Lie (G)$ as subsheaves of $Lie (H)$ then $F=G$. \end{lemma} {\em Proof:} By the properties B1 and B2 and Artin approximation, it suffices to show that for any artinian $S'$ we have $F(S')=G(S')$. But these two are connected Lie subgroups of $H(S')$ which by hypothesis have the same Lie algebras; thus they are equal. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{abc} Suppose $F$ and $G$ are connected presentable group sheaves on ${\cal X}$. Suppose $Lie (F)\rightarrow Lie(G)$ is an isomorphism of Lie algebras. Then this isomorphism lifts to a unique isomorphism $F/Z(F)\cong G/Z(G)$ where $Z()$ denotes the center.
\end{corollary} {\em Proof:} Note that the center of a connected presentable group sheaf is presentable by Corollary \ref{centerPres}, so $F/Z(F)$ and $G/Z(G)$ are presentable. Let $L=Lie (F)=Lie (G)$ and let $A=Aut (L)$ (automorphisms of the vector sheaf or of the Lie algebra sheaf, we don't care). We get maps $F\rightarrow A$ and $G\rightarrow A$. Let $F_1$ and $G_1$ denote the images. We have $$ Lie (F_1) = im (L\rightarrow Lie (A)) = Lie (G_1) $$ as subsheaves of $Lie (A)$, so by Lemma \ref{123} we have $F_1=G_1$. On the other hand, note that $Z(F)$ is the kernel of the map $F\rightarrow A$ because if an element of $F$ acts trivially on $Lie (F)$ then by exponentiation and the fact that $F$ is connected, it acts trivially on all $F(S')$ for $S'$ artinian, hence in fact it acts trivially on $F$. Thus $F_1 = F/Z(F)$ and similarly $G_1=G/Z(G)$. \hfill $\Box$\vspace{.1in} We have now finished verifying that the class of presentable group sheaves satisfies the properties set out in the introduction. In effect: \newline Property 1 is Corollary \ref{uvw}; \newline Property 2 is Theorem \ref{I.1.e}; \newline Property 3 is Lemma \ref{I.1.h}; \newline Property 4 is Lemma \ref{I.1.i}; \newline Property 5 is Theorem \ref{I.1.m}; \newline Property 6 is Theorem \ref{I.1.o} and Corollary \ref{connex}; \newline Property 7 is Theorem \ref{lmn}; and \newline Property 8 is Corollary \ref{abc}. \subnumero{Questions} We present in further detail some other questions analogous to well-known properties of algebraic Lie groups, which seem to be more difficult here. {\bf 1.} \, (Existence) {\em If $({\cal L} , [,])$ is a Lie algebra sheaf (i.e. a vector sheaf with bilinear operation satisfying the Jacobi identity) then does there exist a presentable group sheaf ${\cal G}$ with $Lie ({\cal G} )= ({\cal L} , [,])$?} One has the following idea for a proof of existence in a formal sense.
Take a resolution of ${\cal L}$ by vector schemes, and lift the bracket to a bracket (not necessarily satisfying the Jacobi identity) on the vector scheme $X$ surjecting to ${\cal L}$. Then use an explicit version of Baker-Campbell-Hausdorff to define a composition law on the formal completion of $X$ along the zero section. This composition law will not be associative, but one should be able to use the second part of the resolution of ${\cal L}$ to define a relation scheme $R$ (formally), such that when we set ${\cal G}$ to be the quotient of $X$ by $R$ we get a group sheaf. One would have to check that the maps are vertical. Of course this idea for a proof skirts the main question of how to integrate the formal structure out into an actual presentable group sheaf. {\bf 2.} \, {\em Does every (connected, say) presentable group sheaf have a faithful representation on a vector sheaf?} I guess that the answer is probably no, but I don't have a specific example in mind. {\bf 3.} \, {\em Suppose $Lie ({\cal F} )\rightarrow Lie ({\cal G} )$ is a morphism of vector Lie algebras. Under what conditions does this lift to a morphism ${\cal F} '\rightarrow {\cal G}$ where ${\cal F}' \rightarrow {\cal F}$ is a finite covering?} {\bf 4.} \, {\em What happens in Corollary \ref{abc} if we don't divide out by the centers?} {\bf 5.} \, {\em Suppose $G\subset Aut (V)$ is a presentable subgroup of the automorphisms of a vector sheaf. Is there a vector subsheaf $U\subset T^{a,b}(V)$ of a tensor power of $V$ (or possibly a cotensor power or a mixture\ldots ) such that $U$ is preserved by the action of $G$ and such that $G$ is characterized as the subgroup of $Aut (V)$ preserving $U$?} One of the main problems in trying to prove such a statement is that the vector sheaves (and similarly $P4$ or $P5$ sheaves) don't satisfy any nice chain condition.
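For reference, the explicit Baker-Campbell-Hausdorff series invoked in the idea for question 1 begins as follows (a standard formula, recalled here for convenience; only the first few terms are shown, with $x,y$ denoting formal elements near the zero section):

```latex
\[
x \cdot y \;=\; x + y + \tfrac{1}{2}[x,y]
+ \tfrac{1}{12}\bigl[ x,[x,y]\bigr]
- \tfrac{1}{12}\bigl[ y,[x,y]\bigr] + \cdots
\]
```

Since the series is expressed entirely in terms of the bracket, it can be formally substituted with any bilinear bracket, not necessarily satisfying Jacobi; this is why the lifted composition law above makes sense but fails to be associative.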
Note that in the situation of question 5, for any sub-vector sheaf $U$ of a tensor and cotensor combination of $V$, the subgroup of $Aut (V)$ of elements preserving $U$ is a presentable subgroup, so at least we obtain a way of constructing examples, even if we don't know whether we get everything this way. \numero{Presentable $n$-stacks} Recall that an $n$-groupoid in the sense of \cite{Tamsamani} is essentially the same thing as an $n$-truncated homotopy type \cite{Tamsamani2}. In view of this, we can approach the theory of $n$-stacks (we assume from here on that this means $n$-stack of $n$-groupoids and drop the word ``groupoid'' from the notation) via the theory of presheaves of topological spaces or equivalently simplicial presheaves \cite{Jardine1}. We adopt a working convention that by {\em $n$-stack} we mean the presheaf of $n$-groupoids associated to a fibrant presheaf of spaces \cite{Jardine1} \cite{kobe} or, a bit more generally, any presheaf of $n$-groupoids such that the associated simplicial presheaf (taking the diagonal of the nerve) is fibrant in the sense of \cite{kobe}, which means that it satisfies the global part of the fibrant condition of Jardine \cite{Jardine1}. Some special cases are worth mentioning. A $0$-stack is simply a sheaf of sets. A $1$-stack is what is usually called a stack---it is a sort of sheaf of groupoids. The notions of $2$-stack and $3$-stack were explored heuristically from the category-theoretic point of view in \cite{Breen23}. We suppose given an adequate theory of morphism $n$-stacks $Hom (R,T)$; and of homotopy fiber products $T\times _RT'$ for $n$-stacks. These can be had, for example, within the realm of presheaves of spaces \cite{Jardine1} \cite{kobe} \cite{flexible}. The path-stack $P^{t_1,t_2}T$ on ${\cal X} /S$ between two basepoints (i.e. objects) $t_1,t_2\in T(S)$ is then well defined.
We denote by $\pi _0(T)$ the truncation down to a sheaf of sets, and from this and the path space construction we obtain the homotopy group sheaves $\pi _i(T,t)$ over ${\cal X} /S$ for an $n$-stack $T$ and object $t\in T(S)$. In terms of the easier-to-understand version of the theory involving presheaves of spaces, the homotopy group sheaves are defined as follows. If $t\in T(S)$ then for any $Y\rightarrow S$ we get a basepoint $t|_Y\in T(Y)$. The functor $$ Y/S\mapsto \pi _i (T(Y), t|_Y) $$ is a presheaf on ${\cal X} /S$ which we denote by $\pi _i^{\rm pre}(T,t)$. Then $\pi _i(T,t)$ is the sheaf associated to this presheaf. There is probably a good extension of the theory to $\infty$-stacks which would correspond to presheaves of spaces which are not necessarily truncated (and I suppose that it again becomes equivalent to Jardine's theory but there may be a few subtleties hidden here). Generally below when we speak of $n$-stacks, $n$ will be indeterminate. There is probably not too much difference between the theory of $\infty$-stacks and the projective limit of the theories of $n$-stacks, so we will stick to the notation $n$-stack. For $t_1, t_2\in T(S)$ use the notation $\varpi _1(T,t_1,t_2)$ for the sheaf on ${\cal X} /S$ of paths in $T|_{{\cal X} /S}$ from $t_1$ to $t_2$ up to homotopy. Thus $$ \varpi _1(T,t_1,t_2)= \pi _0 (P^{t_1,t_2}T). $$ We make the following definition. \newline ---We say that an $n$-stack $T$ on ${\cal X}$ is {\em presentable} if it satisfies the following conditions: \begin{enumerate} \item The sheaf $\pi _0(T)$ is P1 over $k$. \item For any finite type morphism of schemes $Z\rightarrow Y$ and any two sections $\eta : Y\rightarrow T$ and $\eta ': Z\rightarrow T$, the sheaf $\varpi _1(T|_{{\cal X} /Z}, \eta |_Z, \eta ' )$, when restricted down from $Z$ to $Y$, is $P4$ over $Y$.
\item For any scheme $Y$ and section $\eta : Y\rightarrow T$, the higher homotopy group sheaves $\pi _i( (T|_{{\cal X} /Y}), \eta )$, for $i\geq 1$, are presentable group sheaves ($P5$) over $Y$. \end{enumerate} (Recall that if ${\cal H}$ is a sheaf on ${\cal X} / Z$ then it can also be considered as a sheaf on ${\cal X}$ with a map to $Z$; the restriction down to $Y$ is the same sheaf taken with the composed map to $Y$, then considered as a sheaf on ${\cal X} /Y$. This shouldn't be confused with the direct image from $Z$ to $Y$. In heuristic topological terms the fiber over $y\in Y$ of the restriction is obtained by taking the direct union of the fibers of ${\cal H}$ over the points $z$ lying over $y$, whereas the fiber of the direct image is obtained by taking the direct product of the fibers of ${\cal H}$ over points $z$ lying over $y$.) {\bf Caution:} This definition of presentability is very slightly different from the definition given in \cite{kobe}. The older version of presentability for $T$ as defined in \cite{kobe} corresponds to the property $P3$ for $\pi _0$ (see Theorem \ref{I.1.q.1kobe} below); whereas the present definition corresponds to the property $P3\frac{1}{2}$ (see Theorem \ref{I.1.q.1} below). I hope that the present version corresponding to $P3\frac{1}{2}$ will be the most useful. The reason for changing the definition was to be able to state Theorem \ref{stability} in a nice way, i.e. to have a reasonable definition of {\em presentable morphism} of $n$-stacks. {\em Caution:} If $T$ is $0$-truncated, that is, a sheaf of sets, and happens to have a group structure, then this notion is not the same as the notion that $T$ be a presentable group sheaf. The presentability of $T$ as defined here refers to the higher homotopy groups. In fact, presentability in this case corresponds to the property $P3\frac{1}{2}$ rather than $P4$ (see below).
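The heuristic at the end of the parenthetical remark above can be written fiberwise as follows (an informal picture only, with $p: Z\rightarrow Y$ denoting the map and the ``fibers'' taken pointwise; this is not part of the formal development):

```latex
\[
\bigl( \mbox{restriction of } {\cal H} \mbox{ down to } Y \bigr) _y
\;=\; \coprod _{z\in p^{-1}(y)} {\cal H} _z ,
\qquad
\bigl( p_{\ast} {\cal H} \bigr) _y
\;=\; \prod _{z\in p^{-1}(y)} {\cal H} _z .
\]
```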
We can also reasonably use the notations {\em presentable homotopy sheaf}; {\em presentable space over ${\cal X}$} or just {\em presentable space}; or {\em presentable fibrant presheaf of spaces}, for the notion of presentable $n$-stack. Property $1$ implies the seemingly stronger statement that there is a section $f: Z\rightarrow T$ over a scheme $Z$ of finite type over $k$, such that the associated morphism $Z\rightarrow \pi _0(T)$ is surjective. The second condition reduces, in the case $\eta = \eta '$, to the statement that for any scheme $Y$ and section $\eta : Y\rightarrow T$, the fundamental group sheaf $\pi _1( (T|_{{\cal Z} /Y}), \eta )$ is a presentable group sheaf over $Y$. We can give an alternative characterization, from which it follows that any truncation $\tau _{\leq n}T$ of a presentable space is again presentable. Recall that we have defined a condition $P3\frac{1}{2}$ which is intermediate between $P2$ and $P4$. \begin{theorem} \mylabel{I.1.q.1} Suppose $T$ is an $n$-stack over $X$. Then $T$ is presentable if and only if the sheaf $\pi _0$ is $P3\frac{1}{2}$, and for any $Y\in {\cal X}$ and $t\in T(Y)$, the sheaves $\pi _i (T|_{{\cal X} /Y}, t)$ are presentable group sheaves ($P5$) over $Y$. \end{theorem} {\em Proof:} Suppose $T$ is presentable. Then we just have to show that $\pi _0$ is $P3\frac{1}{2}$. We know that it is P1, so there is a surjection $Y\rightarrow \pi _0$. By replacing $Y$ by an etale cover, we may assume that this comes from a point $t\in T(Y)$. The path space $P^{p_1^{\ast}t, p_2^{\ast}t}T$ maps to $Y\times Y$, and $$ \varpi _1(T, p_1^{\ast}t, p_2^{\ast}t )=\pi _0(P^{p_1^{\ast}t, p_2^{\ast}t}T)\rightarrow Y\times _{\pi _0} Y $$ is surjective. Let $G\rightarrow Y$ be the sheaf of groups $\pi _1(T|_Y,t)$. It is presentable by hypothesis, and $G$ acts freely on (the restriction from $Y\times Y$ down to $Y$ of) $\varpi _1(T, p_1^{\ast}t, p_2^{\ast}t )$ with quotient $Y\times _{\pi _0}Y$. 
Finally, we know that (the restriction from $Y\times Y$ down to $Y$ of) $\varpi _1(T, p_1^{\ast}t, p_2^{\ast}t )$ is $P4$ over $Y$; thus the quotient $Y\times _{\pi _0}Y$ is $P4$ over $Y$ by Theorem \ref{I.1.d}. Now by definition there exists a surjective morphism $Q\rightarrow Y\times _{\pi _0}Y$ which is $Y$-vertical. This is what is required to show that $\pi _0$ is $P3\frac{1}{2}$. Now suppose that $\pi _0$ is $P3\frac{1}{2}$ and that the other homotopy group sheaves are presentable. We obtain immediately that $\pi _0$ is P1. Let $X\rightarrow \pi _0$ be the surjection given by the property $P3\frac{1}{2}$. Then we have an $X$-vertical surjection $Q\rightarrow X\times _{\pi _0}X$ (where the morphism to $X$ is the first projection). Suppose $X'\rightarrow X$ is an etale surjection chosen so that the map $X\rightarrow \pi _0$ lifts to $t\in T(X')$. Let $Q'$ be the pullback of $Q$ to $X'\times X'$. Then $Q'= (X'\times _{\pi _0}X')\times _{X\times _{\pi _0}X}Q$ so $Q'\rightarrow X'\times _{\pi _0}X'$ is $X$-vertical, and hence $X'$-vertical. This implies that $X'\times _{\pi _0}X'$ is $P4$ over $X'$, because we can take as the relation scheme $$ Q'\times _{X'\times _{\pi _0}X'}Q'= Q'\times _{X'\times X'}Q' $$ which is already a scheme of finite type (and the identity is vertical). Now we have a sheaf of groups $G= \pi _1(T|_{X'}, t)$ over $X'$ which is by hypothesis presentable, and $G$ acts freely on $\varpi _1(T, p_1^{\ast}t, p_2^{\ast}t )$ with quotient $X'\times _{\pi _0}X'$. By Theorem \ref{I.1.d}, $\varpi _1(T, p_1^{\ast}t, p_2^{\ast}t )$ is $P4$ over $X'$. Now suppose that we have a finite type morphism $q:Z\rightarrow Y$ and two points $\eta _1 \in T(Y)$ and $\eta _2 \in T(Z)$, and we show that the restriction from $Z$ to $Y$ of the path space $\varpi _1(T, \eta _1 |_Z,\eta _2)$ is $P4$ over $Y$.
There are etale surjections $ Y'\rightarrow Y$ and $Z'\rightarrow Z$ (of finite type) with $Z'\rightarrow Y'$ and there are morphisms $f_1:Y'\rightarrow X'$ and $f_2: Z'\rightarrow X'$ such that $f_1^{\ast} (t)$ is homotopic to $\eta _1|_{Y'}$ and $f_2^{\ast} (t)$ is homotopic to $\eta _2|_{Z'}$. Let $(f_1|_{Z'},f_2): Z'\rightarrow X'\times X'$ denote the resulting morphism (the first projection of which factors through $Y'$). Then $$ \varpi _1(T,\eta _1|_{Z}, \eta _2)|_{Z'}= \varpi _1(T,\eta _1|_{Z'}, \eta _2|_{Z'})= (f_1|_{Z'},f_2)^{\ast} \varpi _1(T,p_1^{\ast}t, p_2^{\ast}t) $$ $$ = (q, f_2)^{\ast}[\varpi _1(T,p_1^{\ast}t, p_2^{\ast}t)|_{Y'\times X'}]. $$ Note that $\varpi _1(T,p_1^{\ast}t, p_2^{\ast}t)|_{Y'\times X'}$ is $P4$ with respect to $Y'$, so by the appendix to the proof below, one gets that the restriction down to $Y'$ of $\varpi _1(T,\eta _1|_{Z}, \eta _2)|_{Z'}$ is $P4$ with respect to $Y'$. By Corollary \ref{I.1.j.1}, the restriction down to $Y$ of $\varpi _1(T,\eta _1|_{Z}, \eta _2)$ is $P4$ over $Y$. \hfill $\Box$\vspace{.1in} {\em Appendix to the proof:} Suppose $Z\rightarrow Y$ is a finite type morphism, and suppose ${\cal F}$ is a sheaf on $Y$. Then the restriction from $Z$ down to $Y$ of the pullback ${\cal F} |_Z$ is equal to the fiber product $Z\times _Y{\cal F}$. Note also that $Z$ is $P4$ over $Y$. Thus if ${\cal F}$ is $P4$ over $Y$ then the restriction of the pullback is again $P4$. \begin{corollary} \mylabel{truncation} If $T$ is a presentable $n$-stack and if $m<n$ then $\tau _{\leq m}T$ is a presentable $m$-stack. \end{corollary} {\em Proof:} Indeed the truncation operation preserves the homotopy group sheaves (and the homotopy sheaf $\pi _0$). By the theorem, presentability is expressed solely in terms of these sheaves so it is preserved by truncation. \hfill $\Box$\vspace{.1in} We have a similar theorem for the old version of presentability of $T$ \cite {kobe}. 
\begin{theorem} \mylabel{I.1.q.1kobe} Suppose $T$ is an $n$-stack over $X$. Then $T$ is presentable in the sense of \cite{kobe} if and only if the sheaf $\pi _0$ is $P3$, and for any $Y\in {\cal X}$ and $t\in T(Y)$, the sheaves $\pi _i (T|_{{\cal X} /Y}, t)$ are presentable group sheaves ($P5$) over $Y$. \end{theorem} {\em Proof:} The proof is the same as above, only very slightly easier. The details are left to the reader. \hfill $\Box$\vspace{.1in} \subnumero{Very presentable $n$-stacks} We make the following more restrictive definition. Say that a presentable group sheaf $G$ on ${\cal X} /S$ is {\em affine} if, for any artinian $S$-scheme $S'$, the group scheme $G(S')$ over $Spec (k)$ is affine. A truncated homotopy sheaf $T$ is {\em very presentable} if $T$ is presentable and if for any $\eta \in T_Y$ we have that $\pi _1(T/Y,\eta )$ is affine, and $\pi _i(T/Y, \eta )$ are vector sheaves for $i\geq 2$. The idea behind the definition of ``very presentable'' is that we want to require the higher homotopy groups to be unipotent. Note that if we don't require $\pi _1$ to be affine, or $\pi _i$ to be unipotent ($i\geq 2$), then the comparison between algebraic and analytic de Rham cohomology (announced in \cite{kobe}) is no longer true, even over the base $S=Spec (k)$ when all of the groups are representable. This is the reason for making the definition of ``very presentable''. I make the following conjecture: \begin{conjecture} \mylabel{I.1.r} If $G$ is an abelian affine presentable group sheaf on ${\cal X} /S$ such that for any artinian $S'\rightarrow S$ the group scheme $G(S')$ over $k$ is a direct sum of additive groups, then $G$ is a vector sheaf. \end{conjecture} If we knew this conjecture, we could replace the condition of being a vector sheaf by the condition that the $G(S')$ are unipotent (hence additive) for $G=\pi _i$, $i\geq 2$; this would then be along the same lines as the affineness condition for $\pi _1$.
As it is, we need to require the condition of $\pi _i$ being vector sheaves ($i\geq 2$) for many of the arguments concerning de Rham cohomology sketched in \cite{kobe} to work. {\em Remark:} The categories of presentable and very presentable $n$-stacks are closed under weak equivalences and fiber products but not under cofiber products (push-outs); thus they are not closed model categories. {\em Remark:} We have the same statement as Corollary \ref{truncation} for very presentable stacks (if $T$ is very presentable then $\tau _{\leq m} T$ is very presentable). \subnumero{Other presentability conditions} Recall from \cite{kobe} that we used the notation $P6$ for affine presentable group sheaves and $P7$ for vector sheaves. An $n$-stack $T$ on ${\cal X}$ is {\em $(a_0,\ldots , a_n)$-presentable} (with $a_i \in \{ 0,1, 2 ,3, 3\frac{1}{2}, 4, 5,6, 7\}$) if $\pi _0(T)$ is $Pa_0$ and if for any scheme $Y$ and $t\in T(Y)$, $\pi _i (T, t)$ is $Pa_i$ over $Y$. Here by convention $P0$ means no condition at all. Thus a presentable $n$-stack in our previous notation becomes a $(3\frac{1}{2},5,5, \ldots )$-presentable $n$-stack in this notation. A very presentable $n$-stack is a $( 3\frac{1}{2}, 6, 7, 7, \ldots )$-presentable $n$-stack. The old notions of presentability and very presentability as defined in \cite{kobe} are respectively $(3,5,5,\ldots )$-presentability and $(3,6,7,7, \ldots )$-presentability. There may be some interest in considering, for example, the $(2,2,2,\ldots )$-presentable $n$-stacks, or the $(0,0, 7,7,7,\ldots )$-presentable $n$-stacks. Some other useful versions might be $(4, 5, 5, \ldots )$-presentable $n$-stacks, or $(4, 6, 7, 7, \ldots )$-presentable $n$-stacks for example. Here the condition $P4$ on $\pi _0$ would be with respect to $S=Spec (k)$.
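To keep the numbering straight, the named classes introduced above can be summarized in a table (compiled from the text, with $P0$ through $P7$ as previously defined):

```latex
\begin{center}
\begin{tabular}{ll}
presentable & $(3\frac{1}{2},5,5,\ldots )$-presentable \\
very presentable & $(3\frac{1}{2},6,7,7,\ldots )$-presentable \\
presentable in the sense of \cite{kobe} & $(3,5,5,\ldots )$-presentable \\
very presentable in the sense of \cite{kobe} & $(3,6,7,7,\ldots )$-presentable
\end{tabular}
\end{center}
```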
For example, an algebraic stack with smooth morphisms from the morphism scheme to the object scheme (or even more strongly a Deligne-Mumford stack where these morphisms are etale) would be a $(4, 5)$-presentable stack. The converse is not true since in the condition of $(4,5)$-presentability, the morphism sheaves are not necessarily representable. In fact we will never see the condition of representability of the morphism sheaves in our context, since this is unnatural from the point of view of higher-order stacks (and even in the context of algebraic stacks, one may wonder why the morphism object itself was never allowed to be an algebraic space). {\em Remark:} Again we have the statement of Corollary \ref{truncation}: if $T$ is an $(a_0,\ldots , a_n)$-presentable $n$-stack then $\tau _{\leq m}T$ is an $(a_0,\ldots , a_m)$-presentable $m$-stack. {\em Remark:} A good convention for using all of these different notions would be to choose some variables $A$, $B$, etc. and set them to be specific $(a_0, a_1, \ldots )$ at the start of a discussion, then to use the notation ``$A$-presentable'' or ``$B$-presentable'' throughout the discussion. \subnumero{A relative version of presentability} We can make a relative definition. In general, say that a morphism $T\rightarrow R$ of $n$-stacks is {\em $(a_0,\ldots , a_n)$-presentable} if for any scheme $Y\in {\cal X}$ and any morphism $Y\rightarrow R$, the fiber $T\times _RY$ is $(a_0,\ldots , a_n)$-presentable. In particular we obtain the notions of presentable and very presentable morphisms by taking $(3\frac{1}{2},5,5, \ldots )$ and $(3\frac{1}{2},6,7,7, \ldots )$ respectively. It is clear that if $T\rightarrow R$ is an $(a_0,\ldots , a_n)$-presentable morphism and if $R'\rightarrow R$ is any morphism of $n$-stacks then the morphism $T':= T\times _RR'\rightarrow R'$ is $(a_0,\ldots , a_n)$-presentable. \begin{lemma} \mylabel{structural} Suppose that $a_0 \leq 5$.
An $n$-stack $T$ on ${\cal X}$ is $(a_0,\ldots , a_n)$-presentable if and only if the structural morphism $T\rightarrow \ast$ is $(a_0,\ldots , a_n)$-presentable. \end{lemma} {\em Proof:} Since $\ast$ is itself a scheme of finite type (it is $Spec (k)$) the structural morphism being $(a_0,\ldots , a_n)$-presentable implies that $T$ is $(a_0,\ldots , a_n)$-presentable. For the other implication, suppose $T$ is $(a_0,\ldots , a_n)$-presentable, then for any scheme of finite type $Y$ we have that $T\times Y = T\times _{\ast}Y$ is $(a_0,\ldots , a_n)$-presentable (since a scheme $Y$ is $a_0$-presentable for any $a_0 \leq 5$). \hfill $\Box$\vspace{.1in} {\em Remark:} If ${\cal G}$ is a sheaf of groups on ${\cal X} /S$ then ${\cal G}$ is a presentable group sheaf if and only if $K({\cal G} , 1)\rightarrow S$ is a presentable morphism of $1$-stacks. This is the correct point of view relating our terminologies ``presentable group sheaf'' and ``presentable morphism'' or ``presentable $n$-stack'', i.e. the answer to the terminological problem posed by the caution at the start of this section. \begin{theorem} \mylabel{stability} Suppose $R$ is a presentable (resp. very presentable) $n$-stack. Then a morphism $T\rightarrow R$ is presentable (resp. very presentable) if and only if $T$ itself is presentable (resp. very presentable). \end{theorem} The proof of this theorem will be given in the next subsection below. We first state a few corollaries. \begin{corollary} \mylabel{fiberprod} Suppose $T\rightarrow R$ and $S\rightarrow R$ are morphisms between presentable (resp. very presentable) $n$-stacks. Then the fiber product $T\times _RS$ is presentable (resp. very presentable). \end{corollary} {\em Proof:} From the theorem, the morphism $T \rightarrow R$ is presentable, hence the morphism $T\times _RS$ is presentable and since $S$ is presentable, again from the theorem we conclude that $T\times_RS$ is presentable. The same goes for very presentable. 
\hfill $\Box$\vspace{.1in} \begin{lemma} \mylabel{basechange} Suppose $R'\rightarrow R$ is a morphism inducing a surjection on $\pi _0$. Then a morphism $T\rightarrow R$ is presentable (resp. very presentable) if and only if the morphism $T':= T\times _RR'\rightarrow R'$ is presentable (resp. very presentable). \end{lemma} {\em Proof:} One direction follows directly from the first remark after the definition above. For the other direction, suppose that $T'\rightarrow R'$ is presentable (resp. very presentable). Then for any scheme $Y\rightarrow R$ there is an etale covering $Y' \rightarrow Y$ and a lifting $Y'\rightarrow R'$, and we have $$ (T\times _RY)\times _YY'=T\times _RY'=T' \times _{R'}Y', $$ which is presentable (resp. very presentable) by hypothesis. The conditions on homotopy sheaves for being presentable (resp. very presentable) are etale-local, so $T\times _RY$ is presentable (resp. very presentable). \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{composition} Suppose $R\rightarrow S$ and $S\rightarrow T$ are presentable (resp. very presentable) morphisms of $n$-stacks. Then the composition $R\rightarrow T$ is a presentable (resp. very presentable) morphism. \end{corollary} {\em Proof:} Suppose $X$ is a scheme of finite type with a morphism $X\rightarrow T$. Then $$ X\times _TR = (X\times _TS) \times _SR. $$ By hypothesis, $(X\times _TS)$ is presentable (resp. very presentable), and by the other hypothesis and the base change property given at the start of the subsection, the morphism $(X\times _TS) \times _SR\rightarrow (X\times _TS)$ is presentable (resp. very presentable). Theorem \ref{stability} now implies that $X\times _TR$ is presentable (resp. very presentable). By definition then, the morphism $R\rightarrow T$ is presentable (resp. very presentable). \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{check} Suppose $f:T\rightarrow R$ is a morphism such that $R$ is presentable (resp. 
very presentable), and suppose $X\rightarrow R$ is a morphism from a scheme of finite type $X$ which is surjective on $\pi _0$. Then $T$ and the morphism $f$ are presentable (resp. very presentable) if and only if $T\times _RX$ is presentable (resp. very presentable). \end{corollary} {\em Proof:} By Lemma \ref{basechange} the morphism $f$ is presentable if and only if the morphism $p_2: T\times _RX\rightarrow X$ is presentable. On the other hand, $T$ is presentable if and only if $f$ is presentable, from Theorem \ref{stability}. Similarly $T\times _RX$ is presentable if and only if $p_2$ is presentable again by \ref{stability}. This gives the desired statement (the same proof holds for very presentable). \hfill $\Box$\vspace{.1in} We now give some results that will be used in the proof of Theorem \ref{stability}. \begin{lemma} \mylabel{vector?} Suppose $V$ is a vector sheaf and $G$ is a presentable group sheaf on ${\cal X} /S$. If $f: V\rightarrow G$ is a morphism of group sheaves then the kernel of $f$ is a vector sheaf. \end{lemma} {\em Proof:} There is a natural isomorphism of vector sheaves $\varphi : V \cong Lie (V)$, such that $\varphi$ reduces to the exponential on the values over artinian $S'$. To construct $\varphi$ note that a section of $V$ may be interpreted as a map ${\cal O} \rightarrow V$. We have a tautological section of $Lie ({\cal O} )$ so for every section of $V$ the image of this tautological section is a section of $Lie (V)$. This map is an isomorphism on values over artinian schemes, so it is an isomorphism. Let $U \subset Lie (V)$ be the kernel of $$ Lie (f) : Lie (V)\rightarrow Lie (G). $$ Since $Lie (f)$ is a morphism of vector sheaves, its kernel $U$ is a vector sheaf. We claim that $\varphi ^{-1}(U)$ is the kernel of $f$. In order to prove this claim it suffices to prove it for the values over artinian $S'$ (since both are presentable and contained in $V$, and using \ref{Krull}). 
Here it reduces to the following statement about Lie groups: the kernel of an algebraic morphism from a vector space to a Lie group is the exponential of the kernel of the corresponding morphism of Lie algebras. To prove this, notice first that this exponential is a vector subspace; we can take the quotient and then we are reduced to the case where the map is injective on Lie algebras. The kernel is thus a finite subgroup, but a vector space contains no finite subgroups so we are done. \hfill $\Box$\vspace{.1in} \begin{proposition} \mylabel{I.1.s.3} Suppose $R$, $S$ and $T$ are $n$-stacks over ${\cal X}$, with morphisms $R\rightarrow T$ and $S\rightarrow T$. Suppose $Z\in {\cal X}$ and $(r,s)\in R\times _TS(Z)$. Let $t\in T(Z)$ be the common image of $r$ and $s$. Then we have the following long exact sequence of homotopy group sheaves on ${\cal X} /Z$: $$ \ldots \rightarrow \pi _i (R\times _TS|_{{\cal X} /Z},(r,s))\rightarrow \pi _i(R|_{{\cal X} /Z},r)\times \pi _i (S|_{{\cal X} /Z},s)\rightarrow $$ $$ \pi _i (T|_{{\cal X} /Z},t)\rightarrow \pi _{i-1}(R\times _TS|_{{\cal X} /Z},(r,s)) \rightarrow \ldots , $$ terminating with the sequence $$ \pi _2(R|_{{\cal X} /Z},r)\times \pi _2(S|_{{\cal X} /Z},s)\rightarrow \pi _2(T|_{{\cal X} /Z},t)\rightarrow \pi _1(R\times _TS|_{{\cal X} /Z}, (r,s))\rightarrow $$ $$ \pi _1(R|_{{\cal X} /Z},r)\times \pi _1(S|_{{\cal X} /Z},s) \stackrel{\displaystyle \rightarrow }{\rightarrow } \pi _1(T|_{{\cal X} /Z},t) $$ (the last part meaning that the image is equal to the equalizer of the two arrows). Furthermore, we have a similar sequence for the path spaces. Suppose $(r_1,s_1)$ and $(r_2,s_2)$ are two points, with images $t_1$ and $t_2$.
We have the exact sequence $$ \pi _2(R|_{{\cal X} /Z},r_1)\times \pi _2(S|_{{\cal X} /Z},s_1)\rightarrow \pi _2(T|_{{\cal X} /Z},t_1)\stackrel{acts\; on}{\rightarrow} \varpi _1(R\times _TS|_{{\cal X} /Z}, (r_1,s_1),(r_2,s_2)) $$ $$ \mbox{with quotient the equalizer of }\; \varpi _1(R|_{{\cal X} /Z},r_1,r_2)\times \varpi _1(S|_{{\cal X} /Z},s_1,s_2) \stackrel{\displaystyle \rightarrow }{\rightarrow } \varpi _1(T|_{{\cal X} /Z},t_1,t_2). $$ \end{proposition} {\em Proof:} We show that we have similar exact sequences at the homotopy presheaf level; then the sequences for the homotopy sheaves follow by sheafification. To define the exact sequences at the presheaf level, we can work object by object, so it suffices to give functorial exact sequences for fibrations of topological spaces $R\rightarrow T$ and $S\rightarrow T$ with basepoints $(r,s)$ mapping to $t$. The morphisms are defined as follows. The morphism from $\pi _i (R\times _TS,(r,s))$ to $\pi _i(R,r)\times \pi _i (S,s)= \pi _i (R\times S, (r,s))$ comes from the inclusion $R\times _TS\rightarrow R\times S$. The morphism from the product to $\pi _i (T,t)$ is the difference of the two projection maps. The morphism from $\pi _i (T,t)$ to $\pi _{i-1}(R\times _TS,(r,s))$ is obtained as a composition $$ \pi _i (T,t)\stackrel{\delta}{\rightarrow} \pi _{i-1} (R_t,r) \stackrel{(1, 0_s)}{\rightarrow }\pi _{i-1}(R_t\times S_t,(r,s)) \stackrel{i}{\rightarrow}\pi _{i-1}(R\times _TS,(r,s)) $$ where $\delta$ is the connecting homomorphism for the fibration $R\rightarrow T$, $0_s$ is the constant class at the basepoint $s$, and $i$ is the inclusion of the fiber $i: R_t\times S_t\rightarrow R\times _TS$. If we took $(1,1)$ instead of $(1,0_s)$ we would get the connecting morphism for the fibration $R\times _TS\rightarrow T$, which goes to zero in the homotopy of the total space $R\times _TS$. Thus, our map is the same as the map which would be obtained by putting in $-(0_r,1)$ instead.
From the equality of these two maps, one obtains that the composition of this map with the difference of projections is equal to zero. That the other compositions are zero is easy to see. Exactness follows by making a diagram with this sequence on one horizontal row, with the sequence $$ \pi _i(R_t\times S_t, (r,s))= \pi _i(R_t,r)\times \pi _i (S_t,s)\rightarrow 0\rightarrow \ldots $$ on the row above, and the sequence $$ \pi _i (T,t)\rightarrow \pi _i (T,t)\times \pi _i (T,t)\rightarrow \pi _i (T,t) \stackrel{0}{\rightarrow} \pi _{i-1}(T,t) $$ on the row below. The columns then have the exact fibration sequences going downwards. One obtains the exactness of the sequence of homotopy groups in question (this works at the end by using the extension of the homotopy sequence for a fibration, to the action of $\pi _1$ of the base on $\pi _0$ of the fiber, with the $\pi _1$ of the total space being the stabilizer of the component of $\pi _0$ of the fiber containing the basepoint). Finally, we treat the case of the path spaces. What is written on the left means, more precisely, that the cokernel of the first map acts freely on the middle sheaf, with quotient equal to the equalizer. The action in question is by the map to $\pi _1(R\times _TS, (r_1,s_1))$ which itself acts on the path space. Now if $\varpi _1(R\times _TS, (r_1,s_1), (r_2,s_2))$ is empty then the equalizer in question is also empty (any element of the equalizer can be realized as a pair of paths mapping to exactly the same path in $T$, giving a path in the fiber product). Note that we count an action on the empty set as free. So we may assume that $\varpi _1(R\times _TS, (r_1,s_1), (r_2,s_2))$ is nonempty, and choose an element. This choice gives compatible choices in all the other path spaces, so composing with the inverse of this path we reduce to the exact sequence for fundamental groups.
\hfill $\Box$\vspace{.1in} {\em Remark:} We can extend this sequence to a statement involving $\pi _0$, especially in the case of a fibration sequence. This will be done as we need it below. \begin{lemma} \mylabel{kernel} Suppose $S$ is a base scheme and suppose ${\cal F}$ is a sheaf on ${\cal X} /S$ whose restriction down to ${\cal X}$ is $P3\frac{1}{2}$. Suppose that ${\cal G}$ is a $P4$ sheaf on ${\cal X} /S$ with morphism ${\cal G} \rightarrow {\cal F}$, and finally suppose that $\eta : S\rightarrow {\cal F}$ is a section. Then the inverse image ${\cal H} \subset {\cal G}$ of the section $\eta$ is a $P4$ sheaf. \end{lemma} {\em Proof:} Let $X\rightarrow {\cal G}$ and $W\rightarrow X\times _{{\cal G}}X$ be the $S$-vertical surjections for ${\cal G}$. Fix a surjection $Z\rightarrow {\cal F}$ and a surjection $W'\rightarrow Z\times _{{\cal F}}Z$ which is vertical with respect to the first projection to $Z$. Fix a lifting $\eta '$ of the section to $Z$ (note that we are allowed to etale-localize on the base $S$). Let $U:= S\times _{Z}W'$ where the morphism in the fiber product is the first projection from $W'$ to $Z$ (note that $U$ is a scheme of finite type over $S$). The surjective morphism $$ U\rightarrow S\times _{Z}(Z\times _{{\cal F}}Z) = S\times _{{\cal F}} Z $$ is $S$-vertical since the morphism $W'\rightarrow Z\times _{{\cal F}}Z$ was $Z$-vertical. We can choose a lifting $X\rightarrow Z$ of the morphism ${\cal G} \rightarrow {\cal F}$. Then $$ S\times _{{\cal F}} X= (S\times _{{\cal F}}Z)\times _ZX $$ so there is an $S$-vertical morphism $$ U\times _Z X \rightarrow S\times _{{\cal F}} X. $$ On the other hand the $S$-vertical morphism $X\rightarrow {\cal G}$ gives an $S$-vertical morphism $$ S\times _{{\cal F}} X\rightarrow S\times _{{\cal F}} {\cal G} = {\cal H} . $$ Note that $Y:= U\times _Z X$ is a scheme of finite type with a surjective vertical morphism to ${\cal H}$.
Since ${\cal G}$ is $P4$ there exists a scheme of finite type $V$ and an $S$-vertical surjection $$ V\rightarrow Y\times _{{\cal G}}Y = Y\times _{{\cal H}} Y. $$ This gives the condition $P4$ for ${\cal H}$. \hfill $\Box$\vspace{.1in} The following lemma is a consequence of Corollary \ref{fiberprod}, but we need it in the proof of Theorem \ref{stability}. \begin{lemma} \mylabel{I.1.s.?} If $R$ and $S$ are presentable (resp. very presentable) $n$-stacks over ${\cal X}$ and $X$ a scheme of finite type, with morphisms $R\rightarrow S$ and $X\rightarrow S$, then the homotopy fiber product $X\times _SR$ is presentable (resp. very presentable). \end{lemma} {\em Proof:} Suppose $f:Y\rightarrow X\times _SR$ is a morphism. Let $r: Y\rightarrow R$ and $s: Y\rightarrow S$ be the composed morphisms. Then (since $X$ is zero-truncated) for $i\geq 1$ we have $$ \pi _i(X\times _SR |_{{\cal X} /Y}, f)= \pi _i (Y\times _SR/Y, r). $$ The latter fits into a homotopy exact sequence, which we can therefore write $$ \ldots \pi _{i+1}(S|_{{\cal X} /Y}, s)\rightarrow \pi _i(X\times _SR |_{{\cal X} /Y}, f)\rightarrow \pi _i(R|_{{\cal X} /Y}, r)\rightarrow \ldots . $$ In the presentable case we obtain immediately from Theorem \ref{I.1.e} that $\pi _i(X\times _SR |_{{\cal X} /Y}, f)$ is a presentable group sheaf over $Y$. In the very presentable case, for $i\geq 3$ we obtain immediately (from Corollary \ref{I.j} and Theorem \ref{I.k}) that $\pi _i(X\times _SR |_{{\cal X} /Y}, f)$ is a vector sheaf. For $i=2$ we obtain the same conclusion but must also use Lemma \ref{vector?}. For $i=1$ we obtain that $\pi _1(X\times _SR |_{{\cal X} /Y}, f)$ is $P5$. In fact it is an extension, by a vector sheaf, of the kernel of a morphism of $P6$ group sheaves. Therefore it is also affine (since kernels, and at least extensions by vector sheaves, preserve the affineness property). Thus it is $P6$.
We just have to prove (in both the presentable and very presentable case) that $\pi _0(X\times _SR)$ is $P3\frac{1}{2}$. Let $a: X\rightarrow S$ denote the given morphism. Recall that $\pi _0(X\times _SR)/X$ denotes this sheaf considered as a sheaf on ${\cal X} /X$. We have an action of $\pi _0(S|_{{\cal X} /X}, a)$ (which is a $P5$ group sheaf over $X$) on $\pi _0(X\times _SR)/X$, and the quotient is the fiber product $X\times _{\pi _0(S)}\pi _0(R)/X$ (i.e. again considered as a sheaf over ${\cal X} /X$). This is the same thing as the inverse image of the given section $a$ via the map $\pi _0(R|_{{\cal X} /X})\rightarrow \pi _0(S|_{{\cal X} /X})$. By Corollary \ref{P3c} or \ref{P3d} the quotient by the action is $P3\frac{1}{2}$. Finally by Proposition \ref{P3e}, the sheaf $\pi _0(X\times _SR)$ is $P3\frac{1}{2}$. \hfill $\Box$\vspace{.1in} {\em Remark:} A similar technique allows one to directly prove Corollary \ref{fiberprod}, that if $R$, $S$ and $T$ are presentable (resp. very presentable) $n$-stacks with morphisms $R\rightarrow S$ and $T\rightarrow S$ then the fiber product $R\times _ST$ is presentable (resp. very presentable). This is left to the reader. Our technique is to use only the above special case to get Theorem \ref{stability}, and then to deduce Corollary \ref{fiberprod} as a consequence. \subnumero{The proof of Theorem \ref{stability}} Lemma \ref{I.1.s.?} immediately implies one direction in Theorem \ref{stability}, namely that if $R$ and $S$ are presentable then the morphism $f$ is presentable. We have to show the other direction: suppose $S$ is a presentable $n$-stack, $R$ is an $n$-stack, and $f:R\rightarrow S$ is a presentable morphism. Choose a scheme of finite type $X$ with a morphism $X\rightarrow S$ inducing a surjection on $\pi _0$. We will show that if $X\times _SR$ is presentable then $R$ is presentable. 
First of all the morphism $\pi _0(X\times _SR)\rightarrow \pi _0(R)$ is surjective so if $\pi _0(X\times _SR)$ is $P1$ then so is $\pi _0(R)$. For the higher homotopy groups, suppose that $r:Z\rightarrow R$ is a morphism. Lift its projection into $S$ (denoted by $s$) to a morphism $Z\rightarrow X$. This gives a point $f: Z\rightarrow X\times _SR$ and by composition $f_Z: Z\rightarrow Z\times _SR= Z\times _X(X\times _SR)$. Then we have the exact sequence $$ \ldots \rightarrow \pi _i(Z\times _SR |_{{\cal X} /Z}, f_Z) \rightarrow \pi _i(R|_{{\cal X} /Z}, r)\rightarrow \pi _i(S|_{{\cal X} /Z}, s)\rightarrow \ldots . $$ But since $Z$ and $X|_{{\cal X} /Z}$ are zero-truncated, and we have that $Z\times _SR = Z\times _X(X\times _SR)$, the higher homotopy groups $\pi _i(Z\times _SR |_{{\cal X} /Z}, f_Z)$ are the same as the $\pi _i(X\times _SR |_{{\cal X} /Z}, f)$. Thus we can write the exact sequence as $$ \ldots \rightarrow \pi _i(X\times _SR |_{{\cal X} /Z}, f) \rightarrow \pi _i(R|_{{\cal X} /Z}, r)\rightarrow \pi _i(S|_{{\cal X} /Z}, s)\rightarrow \ldots . $$ Note that (in the very presentable case) the kernel of the morphism $$ \pi _2(S|_{{\cal X} /Z}, s)\rightarrow \pi _1(X\times _SR |_{{\cal X} /Z}, f) $$ is a vector sheaf by Lemma \ref{vector?}. In the other cases the kernel (and the cokernel on the other end) are automatically vector sheaves by Corollary \ref{I.j}. Since the property of being a vector sheaf is preserved under extension we get the condition that the $\pi _i(R|_{{\cal X} /Z}, r)$ are vector sheaves ($i\geq 2$). In the presentable case the exact sequence immediately gives the property $P5$ for the $\pi _i(R|_{{\cal X} /Z}, r)$ for $i\geq 2$. We have to treat the case of $\varpi _1$. Suppose $Z\rightarrow Y$ is a morphism of finite type and suppose $r, r': Z\rightarrow R$ are points such that $r$ factors through $Y$.
Let $s,s'$ denote the images in $S$ and assume that they lift to points $f, f'$ and $f_Z, f'_Z$ as above (with $f$ or $f_Z$ factoring through $Y$). We first study everything on the level of sheaves on ${\cal X} /Z$. Note first that $$ Z \times _{S|_{{\cal X} /Z}}(R|_{{\cal X} /Z}) \rightarrow R|_{{\cal X} /Z}\rightarrow S|_{{\cal X} /Z} $$ is a fibration sequence (this should actually have been pointed out above in the treatment of the $\pi _i$, $i\geq 2$), over the basepoint $s\in S(Z)$. On the other hand note that $r': Z\rightarrow R$ is a point lying over $s'$. Consider the map $$ \varpi _1(S|_{{\cal X} /Z}, s, s')\rightarrow \pi _0(Z \times _{S|_{{\cal X} /Z}}(R|_{{\cal X} /Z})) $$ which sends a path to the point obtained by transporting $f'_Z$ along the path from $s'$ back to $s$. The fibration sequence gives the following statement: {\em The group $\pi _1(Z \times _{S|_{{\cal X} /Z}}(R|_{{\cal X} /Z}), f_Z)$ acts on $\varpi _1(R|_{{\cal X} /Z}, r, r')$ with quotient the inverse image in $\varpi _1(S|_{{\cal X} /Z}, s, s')$ of the section $f_Z: Z \rightarrow \pi _0( Z \times _{S|_{{\cal X} /Z}}(R|_{{\cal X} /Z}))$. } Now we note that $$ \pi _1(Z \times _{S|_{{\cal X} /Z}}(R|_{{\cal X} /Z}), f_Z) = \pi _1(X\times _SR|_{{\cal X} /Z}, f), $$ and $$ \pi _0(Z \times _{S|_{{\cal X} /Z}}(R|_{{\cal X} /Z}))\subset \pi _0(X\times _S R|_{{\cal X} /Z}). $$ The transport of $f'$ along the path from $s'$ to $s$ again gives a map $$ \varpi _1(S|_{{\cal X} /Z}, s, s')\rightarrow \pi _0(X\times _SR|_{{\cal X} /Z}) $$ and we obtain the following statement. {\em The group $\pi _1(X\times _SR|_{{\cal X} /Z}, f)$ acts on $\varpi _1(R|_{{\cal X} /Z}, r, r')$ with quotient the inverse image in \linebreak $\varpi _1(S|_{{\cal X} /Z}, s, s')$ of the section $f: Z \rightarrow \pi _0( X\times _SR|_{{\cal X} /Z})$. } Now we look at everything in terms of sheaves on ${\cal X} /Y$.
Let $Res _{Z/Y}$ denote the restriction from $Z$ down to $Y$, and let $\tilde{f}$ denote the $Y$-valued point corresponding to $f$. Note that $$ Res _{Z/Y} \pi _0(X\times _SR|_{{\cal X} /Z}) = \pi _0(X\times _SR|_{{\cal X} /Y})\times _YZ. $$ In general if ${\cal A}$ is a sheaf over $Z$ and ${\cal B}$ a sheaf over $Y$ with a section $Y\rightarrow {\cal B}$ then $$ Res _{Z/Y}({\cal A} \times _{{\cal B} |_{{\cal X} /Z}}Z) = (Res _{Z/Y}{\cal A} )\times _{{\cal B}}Y. $$ In particular the inverse image in $\varpi _1(S|_{{\cal X} /Z}, s, s')$ of the section $f: Z \rightarrow \pi _0( X\times _SR|_{{\cal X} /Z})$ restricts down to $Y$ to the inverse image in $Res _{Z/Y}\varpi _1(S|_{{\cal X} /Z}, s, s')$ of the section $\tilde{f}: Y \rightarrow \pi _0(X\times _SR|_{{\cal X} /Y})$. Another general principle is that if ${\cal G}$ is a group sheaf on $Y$ such that ${\cal G} |_{{\cal X} /Z}$ acts on a sheaf ${\cal H}$ then ${\cal G}$ acts on $Res _{Z/Y}{\cal H}$ with quotient equal to $Res _{Z/Y}({\cal H} /({\cal G} |_{{\cal X} /Z}))$. With these things in mind, our above statement becomes: {\em The group $\pi _1(X\times _SR |_{{\cal X} /Y}, \tilde{f})$ acts on $Res _{Z/Y}\varpi _1(R|_{{\cal X} /Z}, r, r')$ with quotient the inverse image in $Res _{Z/Y}\varpi _1(S|_{{\cal X} /Z}, s, s')$ of the section $\tilde{f}: Y \rightarrow \pi _0(X\times _SR|_{{\cal X} /Y})$. } Now the facts that $\pi _0(X\times _SR|_{{\cal X} /Y})$ is $P3\frac{1}{2}$ and that $Res _{Z/Y}\varpi _1(S|_{{\cal X} /Z}, s, s')$ is $P4$ (which comes by hypothesis) imply that the inverse image in question is $P4$ (Lemma \ref{kernel}); then the theorem on quotients (Theorem \ref{I.1.d}) and the fact that the group $\pi _1(X\times _SR |_{{\cal X} /Y}, \tilde{f})$ is $P5$ over $Y$ gives the condition that $Res _{Z/Y}\varpi _1(R|_{{\cal X} /Z}, r, r')$ is $P4$ over $Y$. This is the condition on $\varpi _1$ needed to ensure that $R$ is presentable. This completes the proof of Theorem \ref{stability}.
\hfill $\Box$\vspace{.1in} We have the following characterization of presentable morphisms via the relative homotopy group sheaves. \begin{proposition} \mylabel{characterization} Suppose $f: R\rightarrow S$ is a morphism of $n$-stacks. Then $f$ is presentable (resp. very presentable) if and only if the following conditions are satisfied for any scheme $X$ of finite type: \newline ---for any morphism $X\rightarrow S$, the sheaf $\pi _0(X\times _SR)$ is $P3\frac{1}{2}$; and \newline ---for any morphism $r: X\rightarrow R$ the sheaves $\pi _i(X\times _SR/X, r)$ on ${\cal X} /X$ are presentable group sheaves over $X$ (resp. $\pi _1$ is affine presentable and $\pi _i$ are vector sheaves for $i\geq 2$). \end{proposition} {\em Proof:} This falls out of the proof of Theorem \ref{stability}. \hfill $\Box$\vspace{.1in} {\em Exercise:} For which values of $(a_0,a_1,\ldots )$ does Lemma \ref{I.1.s.?} hold for $(a_0,a_1,\ldots )$-presentable spaces? Place these conditions in Corollary \ref{I.1.u} below. \subnumero{Going to the base of a fibration} It is an interesting question to ask: if $R\rightarrow S$ is a morphism of $n$-stacks such that $R$ is presentable and such that for every scheme-valued point $X\rightarrow S$ the fiber product $X\times _SR$ is presentable, is $S$ then presentable? The answer is surely no in this generality. We need to make additional hypotheses. Directly from the fibration exact sequences, one can see that if $\pi _0(S)$ is assumed to be $P3\frac{1}{2}$ (a hypothesis which seems unavoidable) and if we suppose that for any point $a:X\rightarrow S$, the action of $\pi _1(S|_{{\cal X} /X}, a)$ on $\pi _0(X\times _SR)$ factors through a presentable group sheaf over $X$, then $S$ will be presentable. As a particular case, if the morphism $R\rightarrow S$ is relatively $0$-connected (i.e. the fibers are connected) and surjective on $\pi _0$, then presentability of $R$ implies presentability of $S$.
One might look for other weaker conditions, for example that the fibers satisfy some sort of artinian condition (e.g. there is a surjection from a scheme finite over $X$, to $\pi _0(X\times _SR)$). I don't know if this can be made to work. \subnumero{Presentable shapes} We have a notion of internal $Hom$ for $n$-stacks. In the topological presheaf interpretation (\cite{kobe} \S 2), recall that $\underline{Hom}(R,T)$ is defined to be the presheaf $X\mapsto Hom (R'_X,T|_{{\cal X} /X})$ where $R'_X$ is a functorial replacement of $R|_{{\cal X} /X}$ by a cofibrant presheaf. \begin{corollary} \label{I.1.u} Suppose $W$ is a finite CW complex, and let $W_{{\cal X}}$ denote the constant $n$-stack with values $\Pi _n(W)$ (or in terms of presheaves of spaces, it is the fibrant presheaf associated to the constant presheaf with values $\tau _{\leq n}W$). If $T$ is a presentable (resp. very presentable) $n$-stack over ${\cal X}$ then the $n$-stack $\underline{Hom}(W_{{\cal X}}, T)$ is presentable (resp. very presentable). \end{corollary} {\em Proof:} We first show this for $W=S^m$, the $m$-sphere. Do this by induction on $m$. It is clear for $m=0$ because then $W$ consists of two points and $\underline{Hom}(W_{{\cal X}}, T)=T\times T$. For any $m$, write $S^m$ as the union of two copies of $B^m$ joined along $S^{m-1}$. We get $$ \underline{Hom}(S^m_{{\cal X}}, T)=T\times _{ \underline{Hom}(S^{m-1}_{{\cal X}}, T)}T, $$ since $\underline{Hom}(B^m_{{\cal X}}, T)=T$. By Lemma \ref{I.1.s.?}, $\underline{Hom}(S^m_{{\cal X}}, T)$ is presentable (resp. very presentable). This shows the corollary for the spheres. We now treat the case of general $W$, by induction on the number of cells. We may thus write $W=W'\cup B^m$ with the cell $B^m$ attached over an attaching map $S^{m-1}\rightarrow W'$, and where we know the result for $W'$. Then $$ \underline{Hom}(W_{{\cal X}}, T)=\underline{Hom}(W'_{{\cal X}}, T)\times _{ \underline{Hom}(S^{m-1}_{{\cal X}}, T)}T.
$$ Again by Lemma \ref{I.1.s.?}, we obtain the result for $W$. \hfill $\Box$\vspace{.1in} Let $Pres ^n/{\cal X}$ denote the $n+1$-category of presentable $n$-stacks. We define the {\em presentable shape} of $W$ to be the $n+1$-functor $$ Shape (W):T\mapsto \underline{Hom}(\underline{W}, T) $$ from $Pres ^n/{\cal X}$ to $Pres ^n/{\cal X}$. One can show (using the calculations of \cite{kobe} Corollary 3.9 over $S=Spec (k)$) that if $W$ is connected and simply connected then this functor is homotopy-representable by an object $Hull (W)\in Pres ^n/{\cal X} $. On the other hand, in most cases where $W$ is not simply connected, the presentable shape is not representable. We could try to interpret the hull of $W$ as the inverse limit of $Shape (W)$, but this is not a standard kind of inverse limit. It is a question for further study just what information is contained in $Shape (W)$. {\em Example:} Take $G=GL(n)$ and $T= K(G, 1)$. Fix a finite CW complex $U$. Then $M:=\underline{Hom}(U, T)$ is the moduli stack for flat principal $G$-bundles (i.e. flat vector bundles of rank $n$) on $U$. More generally it should be interesting to look at presentable or very presentable {\em connected} $T$; these are objects whose homotopy group sheaves are algebraic Lie groups over $Spec (k)$. Note that if $k$ is algebraically closed then there is an essentially unique choice of basepoint $t\in T(Spec (k))$. If $G= \pi _1(T, t)$ then we have a fibration $T\rightarrow K(G,1)$ and we get a morphism $$ \underline{Hom} (U, T) \rightarrow \underline{Hom}(U, K(G, 1)). $$ This expresses $\underline{Hom} (U, T)$ as a presentable $n$-stack over the moduli stack $M$ of flat principal $G$-bundles over $U$. One can see from this example that we should consider the notion of vector sheaf as a candidate for the higher homotopy group sheaves. \subnumero{Leray theory} We develop here a nonabelian Leray theory and K\"unneth formula.
This is in some sense one of the principal reasons for going to nonconnected $n$-stacks, as they can intervene as intermediate steps even when the original coefficient stacks were connected. We give some notation for the stack of sections. If $T\rightarrow S$ is a morphism of $n$-stacks on ${\cal X}$ (or on any site) then we denote by $\underline{\Gamma}(S, T)$ the $n$-stack of sections, i.e. of diagrams $$ \begin{array}{ccc} S&\rightarrow &T\\ & {\displaystyle =}\searrow&\downarrow \\ & & S \end{array} $$ (with homotopy making the diagram commutative). We also have a notion of relative morphism stack. Suppose that $T\rightarrow S$ and $T' \rightarrow S$ are two morphisms of $n$-stacks. Then we obtain an $n$-stack together with morphism to $S$ $$ \underline{Hom}(T/S, T'/S) \rightarrow S. $$ In topological language this corresponds to the space whose fiber over $s$ is the space of morphisms from $T_s$ to $T'_s$. This should not be confused with another useful construction in the same situation, the space $$ \underline{Hom}_S(T, T') $$ which is the $n$-stack of diagrams $$ \begin{array}{ccc} T&\rightarrow &T'\\ & \searrow&\downarrow \\ & & S \end{array} $$ (again with homotopy making the diagram commutative). These things can be constructed using the point of view of simplicial presheaves or presheaves of spaces---cf for example \cite{flexible}. It remains to be seen how to give constructions of these things purely within the realm of stacks (and consequently to extend the same constructions to stacks of $n$-categories which are not necessarily $n$-groupoids). We have the following relationships among the above constructions. First of all, $\underline{\Gamma}(S, T) = \underline{Hom}_S(S, T)$. Then, \begin{lemma} Suppose $T\rightarrow S$ and $T' \rightarrow S$ are morphisms of $n$-stacks. There is a natural equivalence $$ \underline{\Gamma} (S, \underline{Hom}(T/S, T'/S)) \cong \underline{Hom}_S(T,T'). 
$$ \end{lemma} {\em Proof:} From the point of view of presheaves of spaces, see \cite{flexible}. \hfill $\Box$\vspace{.1in} Finally note that if $T$ is an $n$-stack and $R\rightarrow S$ is a morphism of $n$-stacks then $$ \underline{Hom}_S(R, T\times S) \cong \underline{Hom}(R, T). $$ From the above lemma we obtain a method of ``devissage'': \begin{corollary} Suppose $T$ is an $n$-stack and $R\rightarrow S$ is a morphism of $n$-stacks, then $$ \underline{Hom}(R, T) \cong \underline{\Gamma} (S, \underline{Hom}(R/S, T\times S/S)). $$ \end{corollary} \vspace*{-.5cm} \hfill $\Box$\vspace{.1in} In words this says that to calculate the stack of morphisms from $R$ to $T$ we first look at the fiberwise morphisms from $R/S$ to $T$, and then we take the sections over $S$. Rather than taking the internal morphism and section spaces we can take the external ones, removing the underline in the notation, which means taking the sections over $\ast$ (which is $Spec (k)$ in our case). We get the statement $$ Hom(R, T) \cong \Gamma (S, \underline{Hom}(R/S, T\times S/S)). $$ Note that it is still essential to look at the internal $\underline{Hom}$ inside the space of sections. It might be worthwhile looking at how this works in the case of usual cohomology. Suppose ${\cal A}$ is a sheaf of abelian groups on ${\cal X}$. Let $T= K({\cal A} , n)$, so that $Hom(R, T)$ is an $n$-groupoid with homotopy groups $$ \pi _i = H^{n-i}(R, {\cal A} ). $$ Similarly $\underline{Hom}(R/S, T\times S/S)$ is an $n$-stack over $S$ whose relative homotopy group sheaves over $S$ are the higher direct images $$ \pi _i = R^{n-i}f_{\ast} ({\cal A} |_R). $$ There is a spectral sequence for the $n$-stack of sections going from the cohomology of $S$ with coefficients in the relative homotopy sheaves to the homotopy groups of the space of sections, which turns out to be the Leray spectral sequence in this case.
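In classical notation, and under the identifications just made (with $f$ denoting the morphism $R\rightarrow S$), this is the familiar statement:

```latex
$$
E_2^{p,q} = H^p\left( S,\, R^{q}f_{\ast}({\cal A} |_R)\right)
\;\Rightarrow\; H^{p+q}(R, {\cal A} ),
$$
$$
\pi _i\, \Gamma \left( S, \underline{Hom}(R/S, K({\cal A} ,n)\times S/S)\right)
= H^{n-i}(R, {\cal A} ).
$$
```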
This version of Leray theory is due to Thomason \cite{Thomason}, who developed it mostly in the context of presheaves of spectra. We finally introduce one more bit of notation combining the previous notations, that is the {\em relative section stack}. Suppose $R\rightarrow S\rightarrow T$ are morphisms of $n$-stacks. Then we obtain the $n$-stack $$ \underline{\Gamma}(S/T, R/T)\rightarrow T $$ which is geometrically the ``fiberwise space of sections of the morphism $R\rightarrow S$ along the fibers of $S\rightarrow T$''. The above Leray theory can itself be presented in a relative context: \begin{lemma} \mylabel{RelativeLeray} Suppose $R\rightarrow S\rightarrow T\rightarrow U$ are morphisms of $n$-stacks. Then $$ \underline{\Gamma}(T/U, \underline{\Gamma}(S/T, R/T)/U) \cong \underline{\Gamma}(S/U, R/U). $$ \end{lemma} \hfill $\Box$\vspace{.1in} Of course, given four morphisms there should be a diagram expressing compatibility of these Leray equivalences (and further diagrams of homotopy between the homotopies). \subnumero{Leray theory for presentable and very presentable $n$-stacks} Now we get back to presentable and very presentable $n$-stacks. Our goal is to show that in certain cases the Leray theory stays within the world of presentable $n$-stacks. The first task is to generalize Corollary \ref{I.1.u} to the case of a local coefficient system, i.e. a presentable morphism of $n$-stacks to our given finite CW complex. \begin{lemma} \mylabel{Leray2} Suppose $U$ is the constant $n$-stack associated to the $n$-groupoid of a finite CW complex. Suppose $T\rightarrow U$ is a presentable (resp. very presentable) morphism of $n$-stacks. Then the $n$-stack of sections $\underline{\Gamma}(U, T)$ is a presentable (resp. very presentable) $n$-stack. \end{lemma} {\em Proof:} The proof is identical to that of Corollary \ref{I.1.u} but we repeat it here for the reader's convenience. As before, we first treat the case $U=S^m$ by induction on $m$.
It is clear for $m=0$ because then $S^0$ consists of two points $a,b$ and $\underline{\Gamma }(S^0_{{\cal X}}, T)=T_a\times T_b$, with the fibers $T_a$ and $T_b$ being presentable (resp. very presentable). Now for any $m$, write $S^m$ as the union of two copies of $B^m$ joined along $S^{m-1}$ and let $T_a$ be the fiber of $T$ over a basepoint. This fiber is presentable (resp. very presentable). We get $$ \underline{\Gamma }(S^m_{{\cal X}}, T)=T_a\times _{ \underline{\Gamma }(S^{m-1}_{{\cal X}}, T)}T_a, $$ since $\underline{\Gamma }(B^m_{{\cal X}}, T)\cong T_a$. By the induction hypothesis and Lemma \ref{I.1.s.?}, $\underline{\Gamma }(S^m_{{\cal X}}, T)$ is presentable (resp. very presentable). This shows the lemma for the spheres. We now treat the case of general $U$, by induction on the number of cells. We may thus write $U=U'\cup B^m$ with the cell $B^m$ attached over an attaching map $S^{m-1}\rightarrow U'$, and where we know the result for $U'$. Again let $T_a$ be the fiber over a basepoint in the attached cell. Then $$ \underline{\Gamma }(U_{{\cal X}}, T)=\underline{\Gamma }(U'_{{\cal X}}, T)\times _{ \underline{\Gamma }(S^{m-1}_{{\cal X}}, T)}T_a. $$ By Lemma \ref{I.1.s.?} and the above result for spheres, we obtain the result for $U$. \hfill $\Box$\vspace{.1in} Say that a morphism $U\rightarrow V$ of $n$-stacks is {\em of finite CW type} if for any scheme of finite type $X$ with morphism $X\rightarrow V$ there is a covering family $\{ Y_{\alpha} \rightarrow X\}$ and finite CW complexes $W^{\alpha}$ such that $Y_{\alpha} \times _V U \cong Y_{\alpha} \times W^{\alpha}_{{\cal X}}$ (with $W^{\alpha}_{{\cal X}}$ being the constant $n$-stack associated to $\Pi _n(W^{\alpha})$ as defined previously). \begin{theorem} \mylabel{Leray} Suppose $U\rightarrow V$ is a morphism of $n$-stacks of finite CW type, and suppose $T\rightarrow U$ is a presentable (resp. very presentable) morphism of $n$-stacks.
Then $\underline{\Gamma} (U/V, T/V)\rightarrow V$ is a presentable (resp. very presentable) morphism. \end{theorem} {\em Proof:} Suppose $X$ is a scheme of finite type with a morphism $X\rightarrow V$. Let $\{ Y_{\alpha} \rightarrow X\}$ be the covering family and $\{ W^{\alpha}\}$ the collection of finite CW complexes with isomorphisms $U\times _VY_{\alpha}\cong Y_{\alpha}\times W^{\alpha}_{{\cal X}}$ given by the fact that $U\rightarrow V$ is a morphism of finite CW type. It suffices to prove that $$ \underline{\Gamma} (U/V, T/V)\times _V Y_{\alpha}= \underline{\Gamma} (U\times _VY_{\alpha}/Y_{\alpha}, T\times _VY_{\alpha}/Y_{\alpha}) $$ is presentable (resp. very presentable). Thus it suffices to prove the theorem in the case where $V$ is a scheme of finite type and $U=V\times W_{{\cal X}}$ for a finite CW complex $W$. With these hypotheses we return to the notation of the theorem. If $W$ is a finite union of components then the section space in question will be the product of the section spaces over each of the components. Thus we may assume that $W$ is connected. The $n$-stack of sections from $W_{{\cal X}}$ to $V\times W_{{\cal X}}$ is isomorphic to $V$. Thus the $n$-stack of sections of the morphism $T\rightarrow W_{{\cal X}}$ maps to $V$, and this $n$-stack of sections is the same as the relative section stack $\underline{\Gamma}(U/V, T/V)$. It suffices to prove that $\underline{\Gamma}(W_{{\cal X}}, T)$ is presentable (resp. very presentable). But the morphism $V\times W_{{\cal X}}\rightarrow W_{{\cal X}}$ is very presentable, so by Corollary \ref{composition} the morphism $T\rightarrow W_{{\cal X}}$ is presentable (resp. very presentable), and Lemma \ref{Leray2} applies to give that $\underline{\Gamma}(W_{{\cal X}}, T)$ is presentable (resp. very presentable) as needed. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{Leray1} Suppose $U\rightarrow V$ is a morphism of $n$-stacks of finite CW type and suppose $T\rightarrow V$ is a presentable morphism of $n$-stacks.
Then the morphism $$ \underline{Hom}(U/V, T/V)\rightarrow V $$ is a presentable morphism. \end{corollary} {\em Proof:} We have $$ \underline{Hom}(U/V, T/V) = \underline{\Gamma}(U/V, T\times _VU/V) $$ and $T\times _VU\rightarrow U$ is presentable by Corollary \ref{fiberprod}, so Theorem \ref{Leray} applies. \hfill $\Box$\vspace{.1in} \begin{corollary} \mylabel{Leray1a} Suppose $T$ is a presentable $n$-stack, and suppose $V\rightarrow U$ is a morphism whose fibers are finite CW complexes in the sense of the above theorem. Then $$ \underline{Hom}(V/U, T\times U/U)\rightarrow U $$ is a presentable morphism. \end{corollary} {\em Proof:} Indeed, the morphism $T\times U\rightarrow U$ is presentable. \hfill $\Box$\vspace{.1in} We look at the case of a morphism of $n$-groupoids $f:V\rightarrow U$ such that $U$ and $V$ are the $n$-groupoids associated to finite CW complexes. Suppose that the fibers of $f$ are the $n$-groupoids associated to finite CW complexes. This is the case for example if $f$ comes from a smooth morphism of manifolds. For a presentable $n$-stack $T$ we can calculate $$ \underline{Hom}(V, T) = \underline{\Gamma}(U, \underline{Hom}(V/U, T\times U/U)). $$ Corollary \ref{Leray1a} states that $\underline{Hom}(V/U, T\times U/U)\rightarrow U$ is a presentable morphism, and Lemma \ref{Leray2} (which is also a corollary of Theorem \ref{Leray}) states that for any presentable morphism $R\rightarrow U$ the space of sections is presentable. We obtain in particular the presentability of $\underline{Hom}(V, T)$ (which we already knew beforehand). The Leray devissage process thus stays within the realm of presentable $n$-stacks. {\em The K\"unneth formula:} We can apply the above discussion to the particular case where $V=U\times U'$ is a product.
In this case the formula simplifies: $$ \underline{Hom}(U\times U', T) = \underline{Hom}(U, \underline{Hom}(U', T)) $$ and again (this time using only Corollary \ref{I.1.u}) this process of first taking $\underline{Hom}(U', T)$ and then $\underline{Hom}(U, -)$ stays within the realm of presentable $n$-stacks. Of course the entire discussion above works equally well if we replace ``presentable'' by ``very presentable''. {\em Example:} Take $G=GL(n)$ and $T= K(G, 1)$. Then $M':=\underline{Hom}(U', T)$ is the moduli stack for flat principal $G$-bundles (i.e. flat vector bundles of rank $n$) on $U'$. After that, assuming that $U$ is connected, $\underline{Hom}(U, M')$ is the moduli stack of flat $G$-bundles on $U\times U'$.
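For usual cohomology the K\"unneth devissage can be made explicit. The following is only a sketch, using the classical splitting of a mapping stack into Eilenberg--MacLane pieces, valid for example for a constant sheaf of abelian groups ${\cal A}$:

```latex
$$
\underline{Hom}(U'_{{\cal X}}, K({\cal A} ,n)) \simeq
\prod _{q} K\left( H^q(U', {\cal A} ),\, n-q\right) ,
$$
$$
\pi _i\, \underline{Hom}((U\times U')_{{\cal X}}, K({\cal A} ,n)) =
\bigoplus _{p+q=n-i} H^p\left( U, H^q(U', {\cal A} )\right) .
$$
```

This recovers the usual K\"unneth formula.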
\section{Introduction}\setcounter{equation}{0} String theory is thought to be important to the construction of quantum gravity. The model that derives from string theory at tree level in two dimensions \cite{DEALIK} will be regarded here as a fundamental field theory of gravity in its own right, and methods of quantum field theory will be applied to it. This is in contrast with taking the fundamental theory to be a generally reparametrisation invariant sigma model on the two-dimensional world-sheet manifold of the string and then demanding that fields configure in such a way that is consistent with conformal invariance, {\it i.e.}, so that the $\beta$-functions vanish. To build up this theory one would have to expand in world-sheet perturbation theory, considering also topologies, whereas we shall work with the spacetime manifold. With this distinction in procedures in mind, this model will be referred to as the string-motivated model. In semi-classical gravity the expectation value of the matter energy-momentum tensor is coupled to the gravitational field. If this coupling is to the Einstein tensor, then the Bianchi Identities and energy conservation ensure mathematical consistency\footnote {In two dimensions, the result of applying this procedure to the Einstein Equation is either de Sitter or anti-de Sitter space. The backreaction problem is thus completely soluble\cite{NS}.}. Physical consistency of this procedure has been amusingly questioned in \cite{GEIK}. Using this quantum principle of equivalence, one has approximately included the effect of the matter upon the geometry of the spacetime. The aim is to see how a black hole would develop when such back-reaction is considered. This would naturally extend the original calculations\cite{SHA} in which the geometry of the spacetime is treated as a fixed background. 
This has been done with some success both generally\cite{SHM} and in the context of several other dilaton gravity models in two dimensions\cite{REVS}. We begin the second section by introducing the string-motivated model, and note classical solutions for which the tachyon field is set to zero. Two examples of solutions which have non-trivial tachyon field are then found and written down. The first is flat space; the second represents a naked singularity. Further examples of black holes which have undergone backreaction by the tachyon field are given. An ansatz for these black holes is applied, analogous to the metric outside an evaporating star in general relativity. The general solution is found in this form. The ansatz shows the position of the apparent horizon: its relationship to the singularity and event horizon is calculable via a certain integral. By ignoring backreaction one can solve the field equation for the tachyon in the fixed static black hole geometry. This has been done, for example in \cite{MK,ST}, in the Schwarzschild gauge assuming staticity. In the third section, it is found that in the ingoing null coordinates, for a particular tachyon potential, one obtains the same hypergeometric equation for the radial part as in \cite{ST}, but there is also a $u$-dependent piece. In the fourth section, the procedure for coupling the energy-momentum of the tachyon field to the field equations is described. It is noted that this gives back the CGHS\cite{CGHS} equations if one works in the double-null coordinates and drops tachyon terms. Thus the procedure used here is equivalent to adding the Polyakov term for the tachyon field to the action itself. A set of semi-classical equations is found in the ingoing `Eddington-Finkelstein' gauge. Unfortunately, these equations are at least as complicated as those of \cite{CGHS}, where numerical methods were resorted to\cite{BGHS,JH3}, before the model was adjusted so as to be exactly soluble\cite{RST}.
\section{The String-Motivated Model}\setcounter{equation}{0} The following is the action for the classical part of the string model \begin{equation} S = \frac {1} {2\pi} \int d^2 x \sqrt{-g} \, e^{-2\phi} \, ( R + 4(\nabla \phi)^2+4\lambda^2 - (\nabla T)^2 -V(T)) -\frac 1 {\pi} \int d \Sigma \sqrt{-h} \,e^{-2\phi} \, (K-2{\nabla}_{n}\phi) \label{eq:SMM} \end{equation} The fields present are the metric $g_{\mu\nu}$, the dilaton $\phi$ and the tachyon $T$. There is a boundary term which makes the variational problem well-defined and enables the thermodynamics of the theory to be derived. $K$ is the trace of the second fundamental form of the boundary, and $\bf n$ is the normal vector to the boundary. The equations of motion derived from (\ref{eq:SMM}) are \begin{equation} R_{\mu\nu} +2 \nabla_{\mu} \nabla_{\nu} \phi- \nabla_{\mu} T \nabla_{\nu} T=0 \end{equation} \begin{equation} -R +4 (\nabla \phi)^2 -4 {\nabla}^2 \phi + (\nabla T)^2 + V(T) - 4\lambda^2 = 0 \label{eq:DIL} \end{equation} \begin{equation} {\nabla}^2 T - 2\nabla \phi \nabla T - \frac 1 2 {dV \over dT}=0 \label{eq:TAC} \end{equation} where $\lambda^2$ sets the mass scale here but is related to the central charge in string theory, and $V(T)$ is the tachyon potential. Let us work in single null coordinates, \begin{equation} ds^2=-h(u,r) du^2-2dudr, \label{eq:eBG} \end{equation} where $h$ is a function on the spacetime to be determined. 
In these coordinates, the field equations are \begin{equation} h(h_{,rr}-2h_{,r}\phi_{,r})+2h_{,r}\phi_{,u} -2h_{,u}\phi_{,r}+ 4 \phi_{,uu}-2 T_{,u}^{2}=0 \label{eq:BGUoo} \end{equation} \begin{equation} h_{,rr}-2h_{,r}\phi_{,r} + 4 \phi_{,ur}-2 T_{,u}T_{,r}=0 \label{eq:BGUoi} \end{equation} \begin{equation} 2\phi_{,rr}- T_{,r}^{2}=0 \label{eq:BGUii} \end{equation} \begin{equation} h_{,rr}-4h_{,r}\phi_{,r}-8\phi_{,r}\phi_{,u}+8 \phi_{,ur} +4h{\phi_{,r}}^2-4 h\phi_{,rr} +h {T_{,r}}^{2}-2 T_{,u}T_{,r} =4{\lambda}^2-V \label{eq:BDILU} \end{equation} \begin{equation} h T_{,rr}-2T_{,ur}+h_{,r}T_{,r}-2h\phi_{,r}T_{,r} +2\phi_{,u}T_{,r}+2\phi_{,r}T_{,u}-\frac 1 2 \frac {dV} {dT} =0 \label{eq:eBT} \end{equation} \section{ Classical Solutions}\setcounter{equation}{0} \subsection{$T=0$} These equations simplify if one looks for solutions with zero tachyon field. There exists a timelike Killing vector in this case\cite{GIBPERB}, and so it is no restriction to drop terms which contain derivatives of u. One then has \begin{equation} h_{,rr}-2h_{,r}\phi_{,r}=0 \label{eq:BGUSTAToo} \end{equation} \begin{equation} \phi_{,rr}=0 \label{eq:BGUSTATii} \end{equation} \begin{equation} h_{,rr}-4h_{,r}\phi_{,r} +4h{\phi_{,r}}^2-4 h\phi_{,rr} -4{\lambda}^2= 0 \label{eq:BDILUSTAT} \end{equation} Thus there is a `linear dilaton' $\phi(r)=-\lambda r + \phi_0$ and a metric given by \begin{equation} h(r)=1-ae^{-2\lambda r}, \label{eq:hSTAT} \end{equation} where a is a constant. The curvature information is in $R=-h_{,rr}=4a\lambda^2 e^{-2\lambda r}.$ There is a curvature singularity at $r\to -\infty.$ It will be useful to transform solutions of the form (\ref{eq:eBG}) to null coordinates. One transforms to conformally flat null coordinates via \begin{equation} ds^2=-\Omega^2(u,r) dudv=-h du^2-2dudr. \label{eq:GLOB} \end{equation} If h is a function of r only, then the solution is $\Omega^2=h.$ A more general case is considered later. 
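The static solution above is easily verified by direct substitution; the following computer-algebra sketch (using sympy, with symbol names ours and $\phi_0=0$) checks the linear dilaton and metric against the three static equations, and reproduces the quoted curvature:

```python
import sympy as sp

r, lam, a = sp.symbols('r lambda a', positive=True)
phi = -lam*r                     # linear dilaton, phi_0 = 0
h = 1 - a*sp.exp(-2*lam*r)       # candidate metric function

# Residuals of the three static equations (eq:BGUSTAToo), (eq:BGUSTATii), (eq:BDILUSTAT)
eqs = [
    sp.diff(h, r, 2) - 2*sp.diff(h, r)*sp.diff(phi, r),
    sp.diff(phi, r, 2),
    (sp.diff(h, r, 2) - 4*sp.diff(h, r)*sp.diff(phi, r)
     + 4*h*sp.diff(phi, r)**2 - 4*h*sp.diff(phi, r, 2) - 4*lam**2),
]
print([sp.simplify(e) for e in eqs])       # [0, 0, 0]

# Curvature R = -h_{,rr} reproduces 4 a lambda^2 exp(-2 lambda r)
print(sp.simplify(-sp.diff(h, r, 2) - 4*a*lam**2*sp.exp(-2*lam*r)))   # 0
```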
One then finds that $h=(1\pm e^{-\lambda(v-u)})^{-1},$ where the positive sign corresponds to $a>0$. A further transform to `Kruskal' coordinates \begin{equation} \beta U=e^{-\lambda u} \label{eq:KRi} \end{equation} \begin{equation} \alpha V=e^{\lambda v} \label{eq:KRii} \end{equation} yields, in the case of $a>0$, the familiar metric form for the maximally-extended static black hole \cite{W}, \begin{equation} ds^2=-\frac {dUdV} {-\frac {\lambda^2} {\alpha\beta} - \lambda^2 UV}. \label{eq:WI} \end{equation} where $\alpha\beta<0.$ If $\alpha\beta=-\frac {\lambda^3} {M}$ then M is the ADM mass\cite{CGHS}. If $a<0$ one finds \begin{equation} ds^2=-\frac {dUdV} { \frac {\lambda^2} {\alpha\beta} -{\lambda^2} UV }. \label{eq:WII} \end{equation} This latter solution represents a naked singularity, whereas the former is the black hole described in the first half of \cite{CGHS}. The ground state solution, which has $M=0$, is the `linear dilaton vacuum'. This corresponds to $a=0$ in the ingoing coordinate solution for h. \subsection{\bf $T\neq 0$} $\underline {Example 1}$ A first example is a flat geometry bathed in a u-dependent tachyon. One starts with a linear dilaton $\phi=-\lambda r$, and it is assumed that $V(T)=aT^2.$ \begin{equation} T=Ae^{\frac {-au} {2\lambda}} \label{eq:UTAC} \end{equation} \begin{equation} h=1-\frac {V} {4\lambda ^2} \label{eq:UTACi} \end{equation} Since h is a function of u only, this space is flat. \vspace{10mm} $\underline {Example 2}$ If $V=\lambda=0$, another set of solutions is of the form \begin{equation} h=\alpha r^{n} \label{eq:POW} \end{equation} and the dilaton and tachyon are \begin{equation} \phi=-\frac 1 2 (1-n)\log r=\frac 1 2 T\sqrt {1-n}. \label{eq:POWi} \end{equation} The curvature is $R=\alpha n(n-1)r^{(n-2)}.$ In null coordinates, \begin{equation} ds^2=-\alpha\Big(\frac {\alpha(1-n)} 2 \Big)^{\frac {n} {1-n}} \frac {dudv} {(v-u)^{\frac {n} {n-1}}}. 
\label{eq:POWii} \end{equation} In the case $n=2$ there is a singularity-free space of constant curvature. For $n=0$ there is flat space and logarithmic $\phi=\frac 1 2 T.$ Otherwise, there is a timelike (naked) singularity on the line $v=u, r=0.$ \vspace{10mm} $\underline {Example 3}$ In order to find solutions which represent the black hole perturbed classically by the tachyon field, one can assume that the metric takes the form of a black hole with a dynamical horizon. That is, solutions are of the form \begin{equation} ds^2=-h(u,r) du^2-2dudr, \end{equation} where \begin{equation} h=1-e^{-2\lambda(r-f(u))} \label{eq:Ti} \end{equation} One then solves for the function f(u) which gives the position of the horizon\footnote {The motivation for this ansatz comes from four dimensional theory. The metric outside a radiating star is related to (\ref{eq:Ti}). This is called the Vaidya metric, and can be written \begin{equation} ds^2=-(1-\frac {2M(u)} r)du^2-2dudr+r^2d\Omega ^2 \label{eq:VY} \end{equation} The mass in the Schwarzschild metric has been upgraded from a constant to a function of the retarded time, which is reasonable if the radiation is made up of massless particles. The metric is a solution of the Einstein equations in a source field of pure radiation, \begin{equation} G_{uu}=R_{uu}=8\pi T_{uu}=-\frac 2 {r^2} \frac {dM} {du} \label{eq:eEIN} \end{equation} One might ask what is the future development of this system. To answer this, consider Stefan's law, \begin{equation} \frac {dM} {du}=aAT^4\propto M^{-2} \label{eq:eSTEF} \end{equation} where $A$ is the area of the star and $a$ is Stefan's constant. This implies that \begin{equation} \frac {dM} {du}\propto (u_0-u)^{-\frac 2 3}, \label{eq:eMU} \end{equation} where $u_0$ is the retarded time at which the mass vanishes. The rate of mass decrease therefore diverges at finite retarded time. This footnote will be expanded upon elsewhere.} $r_h=f(u).$ The implicit assumption that the horizon motion is u-dependent corresponds to the masslessness of the tachyon field. 
One might try to find solutions for which the dilaton background is linear, $\phi=-\lambda r$, but the field equations then become \begin{equation} -2a \lambda^2 e^{2\lambda f} f_{,uu} -e^{2\lambda r}T_{,u}^2=0 \label{eq:BUU} \end{equation} \begin{equation} T_{,u}T_{,r}=0 \label{eq:BUR} \end{equation} \begin{equation} T_{,r}^2=0, \label{eq:BRR} \end{equation} which means $T=T(u)$. Inspection of (\ref{eq:BUU}) shows that $a=0$ so that $T=0$, and no progress is made. If, by contrast, one tries the following dilaton field: \begin{equation} \phi=-\lambda[r-f(u)], \label{eq:DILi} \end{equation} then the value of the dilaton is fixed on the horizon. The field equations become \begin{equation} 2\lambda f_{,uu}- T_{,u}^2=0, \label{eq:BGA} \end{equation} \begin{equation} T_{,u}T_{,r}=0, \label{eq:BGB} \end{equation} \begin{equation} T_{,r}^2=0, \label{eq:BGC} \end{equation} \begin{equation} 8\lambda^2 f_{,u} + V(T) = 0, \label{eq:BDILA} \end{equation} \begin{equation} 4\lambda T_{,u}= -{dV\over dT}, \label{eq:BTA} \end{equation} Equation (\ref{eq:BGB}) implies that T is a function of u only, and given the potential V(T), one can solve for T in (\ref{eq:BTA}). One can substitute into equation (\ref{eq:BGA}) and integrate twice to obtain the function f and hence the backreacted metric. \footnote{It should be noted that in this form, the dilaton field is constant on the horizon. It can be shown however that the ADM mass of the solution is related to the dilaton there. The dilaton gives a coordinate independent measure of position, so if it is constant on the horizon, then the horizon is not moving. This suggests a static black hole, perturbed by the classical tachyonic backreaction. } The tachyon potential is given by $V(T)= a T^2+ b T^3+...$ where $a$ and $b$ are taken from string theory calculations, and won't be specified here. 
For $V(T)=0$, equation (\ref{eq:BTA}) implies that T is a constant, and integrating up (\ref{eq:BGA}) shows that f(u) is then linear, which is a static solution equivalent to (\ref{eq:WI}), the usual static black hole. For quadratic V(T), \begin{equation} T= e^{\frac {-a} {2\lambda}(u+u_0)}. \label{eq:QTi} \end{equation} The solution for $f(u)$ is then \begin{equation} f(u)=\frac 1 {8\lambda} e^{\frac {-a} {\lambda}(u+u_0)} \label{eq:ROT} \end{equation} If the $O(T^3)$ term is included, one obtains \begin{equation} T=\frac {2a} {3b} \frac 1 { e^{ \frac a {2\lambda} (u+u_0) }-1}. \label{eq:ROTT} \end{equation} Then \begin{equation} \lambda f(u) = A\left( \log |{ e^{ \frac a {2\lambda} (u+u_0) } -1} | + \frac { e^{ \frac a {2\lambda} (u+u_0) } } { (e^{ \frac a {2\lambda} (u+u_0) } -1)^2} \right) \label{eq:ROTH} \end{equation} where $A$ is a constant depending on $a$ and $b$. Note that these solutions (\ref{eq:QTi})-(\ref{eq:ROTH}) solve all the field equations at once. The geometry hasn't been fixed using the metric and dilaton equations in isolation and setting $T=0$, as is done in the following section. To see what these geometries look like globally, one can transform to conformal null coordinates. Then the position of the horizon and singularity are easily calculable. One must find $\Omega$ in \begin{equation} ds^2=-\Omega^2(u,r) dudv=-h du^2-2dudr. \end{equation} One then obtains the following equation for $\Omega$: \begin{equation} 4\Omega_{,u}=2h\Omega_{,r}-\Omega h_{,r} \label{eq:PART} \end{equation} For solutions of the form (\ref{eq:Ti}) one finds that $\Omega^2=e^{-2\lambda(r+\frac u 2)}$ and \begin{equation} \lambda v= 2 e^{\lambda (r+\frac u 2)} - \lambda c(u) \label{eq:SMV} \end{equation} where $c(u)=\int du e^{2\lambda f} e^{\lambda u}$. 
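These claims can be checked by direct substitution; the sympy sketch below (symbol names ours, with the tachyon amplitude and $u_0$ absorbed) verifies a quadratic-potential solution against (\ref{eq:BGA}), (\ref{eq:BDILA}) and (\ref{eq:BTA}), and verifies the quoted conformal factor against (\ref{eq:PART}) for an arbitrary horizon function $f(u)$:

```python
import sympy as sp

u, r, lam, a = sp.symbols('u r lambda a', positive=True)

# Check of the quadratic-potential solution against (eq:BGA), (eq:BDILA), (eq:BTA)
T = sp.exp(-a*u/(2*lam))          # tachyon, amplitude and u_0 absorbed
f = sp.exp(-a*u/lam)/(8*lam)      # horizon function
V = a*T**2
residuals = [2*lam*sp.diff(f, u, 2) - sp.diff(T, u)**2,   # (eq:BGA)
             8*lam**2*sp.diff(f, u) + V,                  # (eq:BDILA)
             4*lam*sp.diff(T, u) + 2*a*T]                 # (eq:BTA), dV/dT = 2aT

# Check of the conformal factor against (eq:PART), for an arbitrary f(u)
fgen = sp.Function('f')(u)
h = 1 - sp.exp(-2*lam*(r - fgen))     # ansatz (eq:Ti)
Omega = sp.exp(-lam*(r + u/2))        # Omega^2 = exp(-2 lambda (r + u/2))
residuals.append(4*sp.diff(Omega, u) - 2*h*sp.diff(Omega, r)
                 + Omega*sp.diff(h, r))

print([sp.simplify(res) for res in residuals])   # [0, 0, 0, 0]
```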
Thus \begin{equation} ds^2=-\frac {dudv} {{\frac \lambda 2}(c(u)+v)} \label{eq:NULA} \end{equation} By rescaling $V=\lambda v$, and transforming to $U= -e^{-\lambda u}$, one obtains the form \begin{equation} ds^2=\frac {dUdV} {\frac 1 2 U \lambda^2( V+\lambda c(U))} \label{eq:KRUS} \end{equation} For $f=0$, the static Witten black hole given in equation (\ref{eq:WI}) is recovered. If f is linear in u, this static metric still results. Unfortunately, this transformation is difficult to perform for the solutions (\ref{eq:ROTH}) and (\ref{eq:ROT}), except for the case $a=-\lambda^2$. Since the form of the geometry is that of a black hole by ansatz, and the tachyon is a scalar field which remains non-trivial in any coordinate system, these are black hole solutions with classical tachyon hair. The fact that the dilaton is constant at zero on the horizon suggests that the solution is in fact static. The tachyon field, however, is not constant on the horizon. \section{The Tachyon Field in Fixed Geometry} Now the static solution (\ref{eq:WI}) found by setting $T=0$ is fed into the dilaton and gravitational field equations, and the tachyon equation is determined. If one assumes that the solution is a separable function, the radial part is found to be a hypergeometric function in r, as was seen in \cite{ST}, which reduces to an exponential function in flat space, while the u-dependent piece is exponential. If the constant of integration is taken to be imaginary, this becomes a plane wave. One can substitute the real solution back into the field equations, expanding around the origin in r to try to find out how the tachyon backreacts upon the geometry perturbatively. 
The tachyon equation of the string model was \begin{equation} {\nabla}^2 T - 2\nabla \phi \nabla T = \frac 1 2 {dV \over dT} \label{eq:BT} \end{equation} which in the coordinates (\ref{eq:GLOB}) with $f=0$ becomes \begin{equation} h T_{,rr} +2\lambda T_{,r}-\frac 1 2 \frac {dV} {dT} -2T_{,ru}-2\lambda T_{,u}=0 \label{eq:TU} \end{equation} Let $U=e^{\lambda r}T$, and look for solutions $U=\rho(r)\xi(u)$, assuming a quadratic tachyon potential with coefficient $a=-\lambda^2$. One then finds that the function $\xi=e^{\frac c 2 u}$, where c is a constant. The equation for $\rho(r)$ is \begin{equation} x(1-x)\rho ''+ (1+\frac {c} {2\lambda}-2x)\rho ' -\frac 1 4 \rho =0 \label{eq:HYPE} \end{equation} This is a hypergeometric equation. The solutions are \begin{equation} \rho=A F(\frac 1 2 ;\frac 1 2 ;1+\frac c {2\lambda};e^{-2\lambda r}) + B F(\frac 1 2 ;\frac 1 2 ;1+\frac c {2\lambda}; 1-e^{-2\lambda r}) \label{eq:HYSOLN} \end{equation} This gives T immediately. We now return to the gravitational and dilaton equations. The dilaton and metric are expected to be perturbed near the origin by this T field, which is fed into the field equations. One finds that the new dilaton and metric must be static. The static field equations with $V(T)=aT^2$ are: \begin{equation} h_{,rr}-2h_{,r}\phi_{,r}=0 \end{equation} \begin{equation} 2\phi_{,rr}- T_{,r}^{2}=0 \end{equation} \begin{equation} h_{,rr}-4h_{,r}\phi_{,r} +4h{\phi_{,r}}^2-4 h\phi_{,rr}+h {T_{,r}}^{2} -4{\lambda}^2+ a T^2= 0 \end{equation} \begin{equation} h T_{,rr}+h_{,r}T_{,r}-2h\phi_{,r}T_{,r}-aT =0 \label{eq:BTr} \end{equation} T is known, hence (\ref{eq:BGUii}) implies $\phi(r)$ and (\ref{eq:BDILU}) acts as a check for this solution. One can calculate the power series solution for the metric and dilaton around the origin of r. This naturally depends on the expansion coefficients of the hypergeometric tachyon, and is not very instructive. 
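That the hypergeometric function solves (\ref{eq:HYPE}) can also be confirmed numerically; a sketch using mpmath, with an illustrative value of $c/(2\lambda)$ (the helper names are ours):

```python
from mpmath import mp, mpf, hyp2f1, diff

mp.dps = 30                      # working precision for the numerical derivatives

gamma = 1 + mpf('0.3')           # plays the role of 1 + c/(2*lambda); illustrative
rho = lambda x: hyp2f1(mpf(1)/2, mpf(1)/2, gamma, x)

def residual(x):
    """Left-hand side of eq. (HYPE) evaluated at x."""
    x = mpf(x)
    return (x*(1 - x)*diff(rho, x, 2)
            + (gamma - 2*x)*diff(rho, x)
            - rho(x)/4)

print(abs(residual('0.3')))      # ~ 0 to working precision
```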
In summary, the black hole solution has been taken as a fixed background in which the tachyon moves. This gives a hypergeometric function. When one iterates this solution, one can find an expansion for the perturbed dilaton and metric near the origin. The metric and dilaton must remain static, though in which global configuration we do not know. The initial assumption of $f(u)=0$ as a fixed background followed by many iterations doesn't necessarily lead to the result which would be obtained if one were to solve the equations of motion at once, and is thus of limited value. \section{Semi-Classical Treatment of the Model} In this section the tachyon field is treated as a quantum field. One simply adds to the expression for its classical stress tensor the quantum stress tensor, which is derivable in two dimensions using the trace anomaly and the conservation equations. The additional term might be produced by including a term in the action. This term is non-local, and it need not be specified here. The other fields are still treated classically, but one would need later to include dilaton and graviton loops. This question was addressed in the CGHS model by proliferating the number of scalar fields, which rendered other terms small and the semi-classical approximation exact. The tachyon in (\ref{eq:SMM}) is not coupled as a standard scalar field, {\it i.e.} \begin{equation} {\cal L}=\sqrt {-g}((\nabla T)^2 - (m^2+\xi R)T^2) \label{eq:LS} \end{equation} where $\xi$ is a numerical factor, which is zero for both minimal and conformal couplings in two dimensions, but rather, \begin{equation} {\cal L}_T=\sqrt {-g} e^{-2\phi}((\nabla T)^2+V(T)) \label{eq:LT} \end{equation} The Lagrangian (\ref{eq:LT}) for the tachyon field is clearly conformally coupled. The factor $e^{-2\phi}$ cannot be removed by a conformal transformation in two dimensions. 
Using dimensional analysis, the trace anomaly for this form of field must be \begin{equation} \alpha R + \beta \label{eq:anom} \end{equation} where $R$ is the Ricci scalar; $\alpha$ and $\beta$ are constants found in the explicit calculation through the heat equation. By functionally differentiating the tachyon part of the Lagrangian with respect to the metric one finds the classical stress tensor for the tachyon is \begin{equation} T_{\mu\nu}=e^{-2\phi}(\nabla_{\mu} T\nabla_{\nu} T-\frac 1 2 g_{\mu\nu} ({\nabla T}^2 + V)) \label{eq:cIT} \end{equation} Using the field equations, this can be written \begin{equation} T_{\mu\nu}=2e^{-2\phi}(g_{\mu\nu}({\nabla \phi}^2-{\nabla}^2\phi-\lambda^2) +\nabla_{\mu}\nabla_{\nu}\phi) \label{eq:cITi} \end{equation} The simple step that we propose in order to include quantum effects is to take the left hand side of this equation to be the sum of the classical and quantum stress tensors for the tachyon. For completeness, the equations of motion in the gauge (\ref{eq:eBG}) are written down: \begin{equation} e^{2\phi}T_{uu}= hh_{,r}\phi_{,r}+h_{,r}\phi_{,u}-h_{,u}\phi_{,r} -4h\phi_{,ru}+2h^2\phi_{,rr}-2h^2{\phi_{,r}}^2 +4h\phi_{,r}\phi_{,u}+2\phi_{,uu}+2h\lambda ^2 \label{eq:TI} \end{equation} \begin{equation} e^{2\phi}T_{ur}=h_{,r}\phi_{,r}-2\phi_{,ru}+2h\phi_{,rr} -2h{\phi_{,r}}^2+4\phi_{,r}\phi_{,u}+2\lambda ^2 \label{eq:TII} \end{equation} \begin{equation} e^{2\phi}T_{rr}=2\phi_{,rr} \label{eq:TIII} \end{equation} where $T_{\mu\nu}=T^{cl}_{\mu\nu}+\langle {T^q_{\mu\nu}}\rangle.$ The classical components of $T_{\mu\nu}$ are \begin{equation} T^{cl}_{rr}=e^{-2\phi}T_{,r}^{2} \label{eq:z} \end{equation} \begin{equation} T^{cl}_{ur}=\frac 1 2 e^{-2\phi}(hT_{,r}^{2}+V(T)) \label{eq:y} \end{equation} \begin{equation} T^{cl}_{uu}=e^{-2\phi}(T_{,u}^{2}+ \frac 1 2 h(hT_{,r}^{2}-2T_{,u}T_{,r}+V(T))) \label{eq:x} \end{equation} If one works in the ingoing null coordinate gauge, it may be seen that it isn't possible to solve for the energy 
momentum tensor components for general h (see (\ref{eq:eBG})), but if one tries the ansatz (\ref{eq:Ti}), the quantum piece of the energy momentum tensor can be found. If the trace anomaly for the tachyon field is $\alpha R$, then this is \begin{equation} \langle {T^q_{rr}}\rangle=2\lambda ^2\alpha +\xi \label{eq:TRR} \end{equation} \begin{equation} \langle {T^q_{ur}}\rangle=-3\lambda ^2\alpha e^{-2\lambda(r-f)} +\lambda ^2\alpha+ \frac 1 2 \xi(1-e^{-2\lambda(r-f)}) \label{eq:TUR} \end{equation} \begin{equation} \langle {T^q_{uu}} \rangle=-e^{-2\lambda(r-f)}\Big(2\lambda^2\alpha \dot f+ 4\lambda ^2\alpha+\frac 1 2 \xi\Big) +3\lambda^2\alpha e^{-4\lambda(r-f)}+\frac 1 4 \xi+t(u) \label{eq:TUU} \end{equation} where $\xi=Be^{2\lambda(2r+u)}$, and $t(u)$ is an arbitrary function of u determined by the boundary conditions. If terms involving $\xi$ are kept, there will be large-distance divergences in the components, so one sets $B=0$. These terms are added to the classical stress tensor, and substituted into the field equations (\ref{eq:TI})-(\ref{eq:TIII}). When $\alpha$ is set to zero, one recovers the classical field equations (\ref{eq:BGUoo})-(\ref{eq:BDILU}). These equations are clearly quite complicated, and one cannot find closed-form solutions. Numerical solutions might be interesting, but this is not pursued here. \subsection{Solutions in the Conformal Gauge} One can work in Kruskal double null coordinates, {\it i.e.} (\ref{eq:GLOB}). The equations of motion are then \begin{equation} e^{2\rho}(-4\lambda ^2 +V)-8\rho_{,uv}+ 16\phi_{,uv}-16\phi_{,u}\phi_{,v}-4T_{,u}T_{,v}=0 \label{eq:BDILQ} \end{equation} \begin{equation} \alpha e^{2\phi}(\rho_{,uu}-\rho_{,u}^2-t_{u}(u))+ 4\rho_{,u}\phi_{,u} -2\phi_{,uu}+T_{,u}^2=0 \label{eq:BQi} \end{equation} \begin{equation} e^{2\rho}(-4\lambda ^2 +V)-4\alpha e^{2\phi}\rho_{,uv}+8\phi_{,uv} -16\phi_{,u}\phi_{,v}=0 \label{eq:BQio} \end{equation} These equations reduce to the CGHS equations if one removes tachyon terms. 
Another approach is to define \begin{equation} \Theta_{\mu\nu}^{cl}(\tilde T)=\Theta_{\mu\nu}^{cl}(T)+ \Theta_{\mu\nu}^{q}(T) \label{eq:TT} \end{equation} where the quantity $\tilde T$ takes into account both the quantum and classical contributions to the energy-momentum tensor of the tachyon field. It is this field then that appears in the action (\ref{eq:SMM}). Taking $\tilde T=0$, so that $V(\tilde T)=0$, one has the classical CGHS equations with no matter. The solution to these is a one-parameter family of static black holes, with a vacuum state, the linear dilaton, given by the zero mass case (\ref{eq:WI}). But the relations (\ref{eq:TT}) will give equations for the potential V(T) in terms of the conformal factor and the dilaton, \begin{equation} e^{-2\phi}V(T)=\alpha e^{-2\rho}\rho_{,uv} \label{eq:VT} \end{equation} which are known, and which will determine the potential V(T) if one states the form of T. This will then determine the constraint functions $t_{u}$ and $t_{v}$. \begin{equation} T_{,u}^2=\alpha e^{2\phi}(\rho_{,uu}-\rho_{,u}^2-t_{u}(u)) \label{eq:CONT} \end{equation} and similarly for the advanced constraint equation in v. One could also choose the tachyon field to cancel out the quantum piece after combining (\ref{eq:BDILQ}) and (\ref{eq:BQio}), {\it i.e.} \begin{equation} T_{,u}T_{,v}=\alpha e^{2\phi}\rho_{,uv} \label{eq:RSI} \end{equation} If one then works in the gauge $\rho=\phi$ and chooses \begin{equation} V=2\alpha\rho_{,uv}, \label{eq:RSII} \end{equation} the remaining equation is just the dynamical equation of the RST model, \begin{equation} -4\lambda^2e^{2\rho}-2\alpha e^{2\rho}\rho_{,uv}+8\rho_{,uv}- 16\rho_{,u}\rho_{,v}=0. \label{eq:QS} \end{equation} These are the RST black holes, but generated by the tachyon field and its potential. The relations (\ref{eq:RSI}) and (\ref{eq:RSII}) imply the form of the tachyon potential in terms of T. 
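The combination leading to (\ref{eq:RSI}) can be checked symbolically: in the gauge $\rho=\phi$, subtracting (\ref{eq:BQio}) from (\ref{eq:BDILQ}) leaves exactly the combination appearing in (\ref{eq:RSI}). A sympy sketch of the check (names ours):

```python
import sympy as sp

u, v, lam, alpha, V = sp.symbols('u v lambda alpha V')
rho = sp.Function('rho')(u, v)     # gauge rho = phi
T = sp.Function('T')(u, v)

BDILQ = (sp.exp(2*rho)*(-4*lam**2 + V) - 8*sp.diff(rho, u, v)
         + 16*sp.diff(rho, u, v) - 16*sp.diff(rho, u)*sp.diff(rho, v)
         - 4*sp.diff(T, u)*sp.diff(T, v))
BQio = (sp.exp(2*rho)*(-4*lam**2 + V) - 4*alpha*sp.exp(2*rho)*sp.diff(rho, u, v)
        + 8*sp.diff(rho, u, v) - 16*sp.diff(rho, u)*sp.diff(rho, v))

# BDILQ - BQio = -4 (T_u T_v - alpha e^{2 rho} rho_uv): the content of eq. (RSI)
residual = sp.simplify((BDILQ - BQio)
                       + 4*(sp.diff(T, u)*sp.diff(T, v)
                            - alpha*sp.exp(2*rho)*sp.diff(rho, u, v)))
print(residual)   # 0
```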
To summarise, the equations for the string-motivated model have been found, which correspond to those of the CGHS model but in ingoing coordinates which give the position of the apparent horizon. It seems that one has to resort to numerical solutions, where one could consider tachyonic ingoing matter, for various potentials. Equilibrium static solutions and contact with `RST' black holes are found by working in the conformal gauge and shaping the tachyonic terms. \section{Conclusion} One can try to simulate black hole formation and evaporation in two dimensions: the hope is that the results will have bearing upon more realistic descriptions, as other scientific work in two dimensions often has. In this paper, a model of gravity arising from string theory is treated as a quantum field theory. First, classical solutions are noted for zero and non-trivial tachyon field configurations. Then, the equation of motion for the tachyon in a fixed flat space and black hole geometry is solved, and is iterated into the dilaton and gravitational field equations. Finally, the quantum stress tensor for the tachyon field is found and coupled appropriately to the classical field equations in a different gauge from the one which has usually been used. The aim was to consider an analogous coordinate system to that which highlights most clearly the behaviour of a radiating star in four dimensions. The solutions then immediately tell us where the apparent horizon is. Working in this gauge was useful in finding classical solutions and considering the behaviour of the tachyon in a fixed geometry. However, although this is another example of a coordinate system in which one can calculate the quantum stress tensor components, and thus derive semi-classical equations of motion, it does not yield simpler equations than those found in the conformal gauge. \section{Acknowledgements} I thank Malcolm Perry for his ideas and criticisms, and Gary Gibbons for a later reading of the paper. 
I am grateful to the Cambridge Philosophical Society for its financial support.
\section{Introduction} \label{sec:intro} Binary black hole (BBH) mergers are the most common sources of gravitational waves (GWs) detected by the LIGO/Virgo collaboration \citep{LIGO2019, LIGO2021a, LIGO2021b}. A variety of formation channels for these merging BBHs have been studied, including isolated binary evolution \citep[e.g.,][]{Lipunov1997, Podsiadlowski2003, Belczynski2010, Belczynski2016, Mandel2016, Marchant2016}, strong gravitational scatterings in dense star clusters \citep[e.g.,][]{PortegiesZwart2000, OLeary2006, Miller2009, Banerjee2010, Downing2010, Ziosi2014, Samsing2014, Rodriguez2015, Samsing2018, Kremer2019}, and tertiary-induced mergers in stellar triple/quadrupole systems \citep[e.g.,][]{Miller2002, Silsbee2017, Liu2018, Liu2019, Liu2019a, Fragione2019} or in nuclear clusters around a central supermassive BH \citep[e.g.,][]{Antonini2012, VanLandingham2016, Petrovich2017, Hamers2018, Liu2019b, Liu2021}. The probability and significance of each channel are still unclear. The possibility of BBH mergers in AGN accretion disks around supermassive black holes (SMBHs) has received much attention in recent years \citep[e.g.,][]{McKernan2012, McKernan2014, Bartos2017, Stone2017, Secunda2019, Secunda2020, Yang2019a, Yang2019b, Grobner2020, Ishibashi2020, Tagawa2020b, Tagawa2020a, Ford2021}. BBHs in flat disks may be hardened, or even driven to merger, by a series of nearly co-planar binary-single scatterings \citep[e.g.,][]{Stone2017, Leigh2018, Samsing2020}. Hydrodynamical interaction between the gaseous AGN disk and a pre-existing BH binary may also influence the orbital evolution of the binary \citep{Baruteau2011, Li2021a, Li2021, Li2022}. BH mergers that happen inside AGN disks could have several observable properties, such as possible associations with electromagnetic counterparts \citep{Stone2017, McKernan2019, Graham2020} and distinct mass and spin distributions \citep[e.g.,][]{McKernan2018, Yang2019a}. 
The AGN disk channel of BBH mergers typically relies on the disks being an ideal environment for dynamical formation of BH binaries. BBHs could be ``pre-existing'' in nuclear star clusters and get captured in the inner AGN disks \citep{Bartos2017} or form in situ in the extended region of the disks \citep{Stone2017}. It was suggested that AGN disks \citep{Sirko2003, Thompson2005} may contain migration traps where stellar-mass BHs (sBHs) can accumulate \citep{Bellovary2016}. Orbiters inside such traps can have close encounters with each other due to their mutual gravity and potentially form binaries \citep{Secunda2019, Secunda2020}. A direct gas-capture channel has also been proposed to form binaries from single sBHs \citep{Tagawa2020a}. An important unresolved issue of the AGN disk BBH merger scenario is the evolution of sBHs during close encounters and how bound BBHs actually form. Previous studies tend to adopt very simple prescriptions for BBH formation and follow-up evolution. For example, \cite{Secunda2019} performed $N$-body simulations of multiple sBHs and included disk force prescriptions to mimic the effects of eccentricity damping and migration traps, but they assumed that all close encounters between two BHs within the mutual Hill radius and with negative relative binary energy lead to the formation of bound binaries that quickly merge. This assumption is problematic as the vast majority of such bound binaries are short-lived and are quickly destroyed by the tidal force of the SMBH. \cite{Tagawa2020a} considered gas-assisted binary formation in their population synthesis study, including the time delay between BBH formation and merger, and allowing newly formed BBHs to be disrupted before mergers. However, their study is one-dimensional, and they assumed for simplicity that the relative orbits of all BBHs are circular. 
In this paper, we study how often two or more sBHs in closely-packed, initially circular and nearly co-planar orbits around a SMBH can be captured into a ``permanent'' binary and merge with the aid of gravitational radiation. Such close-packed orbits (with the difference in orbital radii less than a few times the Hill radius $R_{\rm H}$) can be naturally produced by the differential migrations of sBHs in the AGN disk and/or the migration traps \citep{Bellovary2016, Secunda2019}. Since the sBHs in close orbits are dynamically unstable, they exhibit chaotic orbital motion around the SMBH and undergo repeated close encounters with each other (with their separation less than $R_{\rm H}$). Binary capture occurs only in the very rare occasion when two sBHs experience an extreme close encounter (with their separation several orders of magnitude less than $R_{\rm H}$), during which energy dissipation through GW emission is effective. We perform a large number of $N$-body simulations to study the occurrence rate of such close encounters and the properties of the captured BBHs. Since the dynamical influence of the disk gas on sBHs during close encounters is highly uncertain, the major part of our paper focuses on the clean problem where the only dissipative effect is GW emission. Nevertheless, we also carry out an exploratory study on the gas effects by adding ``frictional'' forces on the sBHs to mimic the BH-gas interactions. The rest of this paper is structured as follows. In Section~\ref{sec:GWCE}, we introduce our scenario for BBH formation in AGN disks via close encounters between sBHs. In Section~\ref{sec:fiducial}, we describe our fiducial ``SMBH + 2 BHs'' simulations with no gas effects included. We use these simulations to obtain the time evolution of the close encounter rate, the distribution of the minimum sBH separations during encounters, and the timescale (and the probability) to form long-lived or merging BH binaries. 
Section~\ref{sec:inc} discusses how our results depend on the initial inclinations of the BH orbits around the SMBH. In Section~\ref{sec:friction} we apply simple models of the disk forces on the sBHs to assess the effects of gas disks on the evolution of the embedded sBHs and the formation rate of merging BBHs. Section~\ref{sec:NBH} explores the rate and properties of the close encounters in systems with more than two sBHs around the SMBH. In Section~\ref{sec:summary}, we summarize our findings. \section{Scenario} \label{sec:GWCE} AGN disks can help bring stellar-mass black holes (sBHs) circulating around a supermassive BH (SMBH) into close orbits due to the differential migrations of the BHs and migration traps \citep{Bellovary2016}, therefore promoting close encounters between the sBHs. While the encountering sBHs typically have too much relative kinetic energy to become bound in the presence of the tidal field of the SMBH, they may occasionally have a very close encounter, during which gravitational wave (GW) emission can take away the excess energy. Dynamical friction from the disk gas may also play a role, but its effect is more difficult to quantify. These very close encounters may turn the two BHs into a bound binary and lead to BBH merger. We consider a system with a central SMBH of mass $M$, around which orbit two or more sBHs on nearly circular and nearly co-planar trajectories. For simplicity, henceforth `BHs' always refer to stellar-mass black holes that are orbiting around the SMBH. Due to their migrations in the AGN disk, the BHs may have orbits very close to each other. For most of this paper, we set up two BHs with masses $m_1$, $m_2$ and initial semi-major axes $a_1$, $a_2$ around the SMBH. If the dimensionless orbital separation (in units of the mutual Hill radius $R_{\rm H12}$) is less than a critical value, i.e. 
\begin{eqnarray} \label{eq:a1-a2-criterion} \frac{a_2-a_1}{R_{\rm H12}} \equiv K < K_{\rm crit} \sim 1, \end{eqnarray} where \begin{eqnarray} \label{eq:RH} R_{\rm H12} \equiv \frac{a_1+a_2}{2} \left(\frac{m_1+m_2}{3M}\right)^{1/3}, \end{eqnarray} the orbits are dynamically unstable, such that the two BHs will soon develop orbital crossing and start chaotic evolution. The boundary between ``stable'' and ``unstable'' can be fuzzy but the critical value $K_{\rm crit}$ is of order unity and depends on the ``frictional'' force acting on the BH from the disk gas (Li, Rodet and Lai 2022). In the absence of BH-disk interaction, the Hill stability criterion gives $K_{\rm crit}=2\sqrt{3}$ \citep{Gladman1993}. There are three possible outcomes for the two BHs in unstable orbits: (i) The lighter BH ($m_2$) is ejected from the system; (ii) The two BHs experience a sufficiently close encounter such that GW emission and/or gas drag makes them into a bound binary and eventually merge; (iii) One of the BHs moves very close to the SMBH and gets ``swallowed'' by it. Outcome (iii) has a negligible probability when $a_1,a_2 \gg GM/c^2$, i.e. when the horizon radius of the SMBH is much less than the BH orbital distances. In our ``SMBH + 2 BHs'' systems with $M \gg m_1, m_2$, outcome (i) will take many orbital periods to happen. This can be understood as follows. Close encounters between $m_1$ and $m_2$ ($<m_1$) cause $m_2$ to experience energy diffusion, with the change (loss or gain) of energy during each encounter given by $\Delta E \sim \alpha (G m_1 m_2 / a_1)$, where $\alpha\gtrsim1$. Thus the average number of close encounters required for $m_2$ to be ejected is $\left< N_{\rm ej} \right> \sim (G M m_2/2a_1)^{2}(\Delta E)^{-2} \sim (4\alpha^2)^{-1}(M/m_1)^2$. 
Indeed, extensive numerical experiments carried out by \cite{Pu2021} show that $N_{\rm ej}$ has a broad distribution, with the mean value given by (see their Eq.~24) \begin{eqnarray} \left< N_{\rm ej} \right> \simeq 0.06^2 \left( \frac{M}{m_1} \right)^2 \left(1+\frac{m_2}{m_1}\right)^4, \end{eqnarray} and the distribution has a long tail at larger values (the $68\%$ quantile of $N_{\rm ej}$ ranges from $0.25\left< N_{\rm ej} \right>$ to $13\left< N_{\rm ej} \right>$). The ejection time is usually much longer than $N_{\rm ej}P_2$ (where $P_{2}$ is the initial orbital period of $m_{2}$) since the semi-major axis of $m_2$ increases as it approaches ejection. Thus, for $M/m_1 \gtrsim 10^{6}$, the ejection timescale $t_{\rm ej} \gg 10^{10}P_2$, which means ejection almost never happens. We are thus left with outcome (ii), i.e. BH binary formation due to dissipative processes. If we neglect the possible effect of gas drag, the only dissipation is GW emission (``gravitational bremsstrahlung''). During a very close encounter (i.e. when the separation between $m_1$ and $m_2$ is much less than $R_{\rm H}$), the two BHs lose their relative energy by the amount \citep{Peters1964,Quinlan1989} \begin{eqnarray} \label{eq:dEGW} \Delta E_{\rm GW} = \frac{85\pi}{12\sqrt{2}}\frac{G^{7/2} \mu^2 m_{12}^{5/2}}{c^5 r_{\rm p}^{7/2}}, \end{eqnarray} where $m_{12} = m_1+m_2$, $\mu=m_1m_2/m_{12}$, and $r_{\rm p}$ is the periastron separation of the relative trajectory of the BHs. To form a binary that is stable in the presence of the SMBH tidal field, we require \begin{eqnarray} \Delta E_{\rm GW} \gtrsim \eta\frac{G m_1 m_2}{R_{\rm H12}}, \end{eqnarray} with $\eta$ of order unity.
This implies that the pericenter distance of the relative orbit of $m_1$ and $m_2$ must be less than the critical value for capture, given by \begin{eqnarray} \nonumber \frac{r_{\rm cap}}{R_{\rm H12}} & \simeq & 2.85 \eta^{-2/7} \left(\frac{\mu}{m_{12}}\right)^{2/7} \left(\frac{m_{12}}{M}\right)^{10/21} \left(\frac{a_{12}}{GM/c^2}\right)^{-5/7}, \\ \label{eq:GWCE-rp_crit} & \simeq & 10^{-4} \eta^{-2/7} \left(\frac{4\mu}{m_{12}}\right)^{2/7} \left(\frac{10^{6}m_{12}}{M}\right)^{10/21} \\ \nonumber && \times \left(\frac{a_{12}}{100GM/c^2}\right)^{-5/7}, \end{eqnarray} where $a_{12}=(a_1+a_2)/2\simeq a_1$. Thus, capturing the two unstable BHs into a bound binary requires an extremely close encounter with $r_{\rm p}\lesssim10^{-4}R_{\rm H12}$. A major goal of our paper is to evaluate the rate of such extremely close encounters and the properties of the resulting merging BH binaries. \section{Formation of Bound Binaries in ``SMBH + 2 BHs'' Systems} \label{sec:fiducial} In this section, we study the formation rate of bound binaries for two BHs in unstable orbits around a SMBH. We neglect the effects of the gas disk (if any) here -- these effects will be studied in Section~\ref{sec:friction}. \subsection{Setup of simulations} \label{sec:fiducial-setup} The fiducial system of our simulations consists of a central SMBH with mass $M$ and two smaller BHs with masses $m_1=2\times10^{-5}M$ and $m_2=10^{-5}M$. The initial orbital separation is set as \begin{eqnarray} a_2-a_1 = 2 R_{\rm H12}, \end{eqnarray} so that their orbits are unstable. For convenience, henceforth we define \begin{eqnarray} R_{\rm H} \equiv a_1 \left(\frac{m_1}{3M}\right)^{1/3} = \left(\frac{m_1}{m_{12}}\right)^{1/3} R_{\rm H12} \end{eqnarray} to be the Hill radius of $m_1$ at the beginning of the simulation and use it as the unit of distance. We let $P_1$ be the initial orbital period of $m_1$ and use it as the unit of time. We note that $P_1$ at $a_1=100GM/c^2$ for $M=10^6 M_{\odot}$ is $10^{-3}$ yr.
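As a quick numerical check of this timescale (a sketch in CGS units; the physical constants are standard values, not taken from the text):

```python
import math

# Orbital period P1 = 2*pi*sqrt(a^3/(G*M)) at a = 100 GM/c^2 for M = 1e6 Msun.
G    = 6.674e-8        # cm^3 g^-1 s^-2
c    = 2.998e10        # cm/s
Msun = 1.989e33        # g
yr   = 3.156e7         # s

M  = 1e6 * Msun
a  = 100 * G * M / c**2                      # 100 gravitational radii
P1 = 2 * math.pi * math.sqrt(a**3 / (G * M))
print(f"P1 = {P1 / yr:.1e} yr")              # close to 1e-3 yr
```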
The BHs are given initial eccentricities $e_1=0$, $e_2=10^{-5}$, and inclinations $i_1=i_2=R_{\rm H}/a_1$. We carry out 2000 runs and sample the initial values of the argument of periapsis, the longitude of the ascending node, and the mean anomaly randomly in the range $[0,2\pi]$ for each BH, assuming they all have uniform distributions. We simulate the evolution of the system using the $N$-body code \textsc{REBOUND} \citep{Rein2012} with the \textsc{IAS15} integrator \citep{Rein2015}. Each simulation runs for $10^{5}$ $P_1$. \subsection{Close encounters} \label{sec:fiducial-CE} To characterize encounters with various degrees of ``closeness'', we designate a close encounter event to be ``CE$\alpha$'' when the separation between the BHs, $r_{\rm rel} = \left|\vec{r}_1 - \vec{r}_2\right|$, becomes less than $10^{-\alpha}R_{\rm H}$, and numerically keep track of CE0, CE1 and CE2 in each simulation. A CE$\alpha$ event starts when $r_{\rm rel}$ changes from $>10^{-\alpha}R_{\rm H}$ to $<10^{-\alpha}R_{\rm H}$, and ends when $r_{\rm rel}$ becomes greater than $10^{-\alpha}R_{\rm H}$ again and the relative energy $E_{\rm rel}=\frac{1}{2}\mu\left|\vec{v}_1 - \vec{v}_2\right|^2 - \frac{Gm_1m_2}{r_{\rm rel}}$ is positive. The whole process is regarded as a single CE$\alpha$ event no matter how long it lasts. In the simulation, a new CE$\alpha$ is logged only if the previous one has ended. \begin{figure}[t] \epsscale{1.2} \plotone{pics-NCE-fid.pdf} \caption{{\bf Left:} Cumulative number of CE0, CE1 and CE2 events as a function of time for our fiducial ``SMBH+2BHs'' systems. A CE$\alpha$ event occurs when the separation between the two BHs becomes less than $10^{-\alpha}R_{\rm H}$. In each panel, each of the 200 colored curves represents a simulation of the ``SMBH+2BHs'' system. The black curve shows the average number of CE$\alpha$ events (averaged over all 2000 runs). {\bf Right:} Distribution of $N_\alpha$ at $t=10^5P_1$ from all simulations.
The horizontal axis shows the probability of each bin. } \label{fig:NCE-vs-t} \end{figure} The left panels of Figure~\ref{fig:NCE-vs-t} show the time evolution of the CE0, CE1, and CE2 counts in 200 examples (out of a total of 2000) of our simulations. Because of the stochastic nature of the evolution, the cumulative event count, $N_{\alpha}(t)$, in one simulation can differ significantly from another. The right panels of Figure~\ref{fig:NCE-vs-t} show the probability distribution of $N_{\alpha}(t=10^{5}P_1)$ from all 2000 simulations. The total numbers of CE0, CE1, and CE2 events in those simulations are 248790, 43153, and 5238, respectively. Thus, at $t=10^{5}P_1$, the average numbers per simulation of CE0, CE1, and CE2 are $\left<N_{\alpha}(10^{5}P_1)\right> = $ 124, 22, and 2.6, respectively. Note that $N_{\alpha}$ has a wide distribution: for example, while $\left<N_{2}(10^{5}P_1)\right> = 2.6$, $5\%$ of the runs have $N_{2}(10^{5}P_1)>10$. Despite the difference in the numbers, the three kinds of CEs all have higher occurrence rates at early times than at later times. The time evolution of the average can be described by a power law, $\langle N_{\alpha}(t) \rangle \propto t^{n_{\alpha}}$. We perform least-squares fits of such power laws to the results at $t\gtrsim2\times10^4P_1$ to exclude the initial transient stage. We find \begin{eqnarray} \label{eq:N-vs-t} \langle N_{0}(t) \rangle & = & 0.67\left(\frac{t}{P_1}\right)^{0.45},\\ \langle N_{1}(t) \rangle & = & 0.067\left(\frac{t}{P_1}\right)^{0.50},\\ \label{eq:N-vs-t-fid} \langle N_{2}(t) \rangle & = & 0.0064\left(\frac{t}{P_1}\right)^{0.52}. \end{eqnarray} \subsection{Very close encounters and BBH formation rate} \label{sec:fiducial-rp} During each CE$\alpha$ event, we take a simulation snapshot every $10^{-3}P_1$ and monitor the separation $r_{\rm rel} = |\vec{r}_1 - \vec{r}_2|$ between the two BHs. The exact periapsis passage moment may lie between two of the snapshots.
We use the snapshot right after the true pericenter passage (when $r_{\rm rel}$ first increases) to calculate the relative energy and angular momentum of the encountering BHs: \begin{eqnarray} E_{\rm rel} & = & \frac{1}{2}\mu \vec{v}_{\rm rel}^2 - \frac{Gm_1m_2}{r_{\rm rel}}, \\ \vec{\ell}_{\rm rel} & = & \vec{r}_{\rm rel} \times \vec{v}_{\rm rel}. \end{eqnarray} We then compute the semi-major axis $a_{\rm rel}$ of the orbit via $E_{\rm rel} = -Gm_1m_2/(2a_{\rm rel})$, and the pericenter distance $r_{\rm p}$ using $\ell_{\rm rel}=\sqrt{Gm_{12}a_{\rm rel}(1-e_{\rm rel}^2)}\simeq\sqrt{2Gm_{12}r_{\rm p}}$. \begin{figure}[ht] \epsscale{1.0} \plotone{pics-PDF-aCE.pdf} \caption{Probability distribution of $R_{\rm H}/a_{\rm rel}$ in our fiducial simulations, where $a_{\rm rel}$ is the semi-major axis of the relative orbit of the BH pairs undergoing close encounters. } \label{fig:PDF-aCE} \end{figure} Figure~\ref{fig:PDF-aCE} shows the distribution of the semi-major axis $a_{\rm rel}$ of the BH binary undergoing close encounters. The most likely $a_{\rm rel}$ is about $R_{\rm H}$, and nearly all close encounters have $R_{\rm H}/a_{\rm rel}\leq2.3$. Given that the BH pairs during the encounters are either unbound or have $a_{\rm rel} \gtrsim R_{\rm H}$, they quickly depart from each other after reaching their minimum separation and get disrupted by the SMBH tide. This implies that the lifetime of most binaries formed by CEs is less than their orbital period, which is approximately equal to their period around the SMBH when $a_{\rm rel}\sim R_{\rm H}$. Indeed, we find that, among the 248790 CE0 events in our 2000 simulations, the vast majority ($96.35\%$) disentangle within one orbital period ($P_1$) around the SMBH, and only $0.02\%$ survive for more than $10P_1$. \begin{figure}[ht] \epsscale{1.0} \plotone{pics-CDF-rp.pdf} \caption{Cumulative distribution of $r_{\rm p}$ in close encounters in our fiducial simulations.
The solid colored curves show the distribution normalized for CE0, CE1, CE2 and CE3. The dashed black lines are given by equation~\eqref{eq:Probability-vs-rp}. } \label{fig:CDF-rp} \end{figure} Figure~\ref{fig:CDF-rp} shows the distribution of the pericenter (``closest'') distance of the two BHs during close encounters. We find that, for a CE$\alpha$ event, the cumulative distribution of $r_{\rm p}$ (i.e., the probability for the pericenter distance to be less than $r_{\rm p}$) is approximately given by \begin{eqnarray} \label{eq:Probability-vs-rp} P_{\alpha}(<r_{\rm p}) \simeq \frac{r_{\rm p}}{10^{-\alpha}R_{\rm H}} \qquad (\text{for }r_{\rm p}\leq10^{-\alpha}R_{\rm H}). \end{eqnarray} Equation~\eqref{eq:Probability-vs-rp} becomes increasingly accurate for closer encounters. The difference in $\langle N_{\alpha}(t) \rangle$ for CE0, CE1 and CE2 discussed in Section~\ref{sec:fiducial-CE} is a direct consequence of this cumulative distribution of $r_{\rm p}$. Equation~\eqref{eq:Probability-vs-rp} is equivalent to a constant probability distribution, $dP_{\alpha}/dr_{\rm p} = \text{const}$. Since the relative angular momentum is $\ell_{\rm rel} \simeq \sqrt{2Gm_{12}r_{\rm p}}$ for nearly parabolic encounters (for $r_{\rm p} \ll a_{\rm rel} \sim R_{\rm H}$; see Figure~\ref{fig:PDF-aCE}), this is equivalent to the probability distribution in $\ell_{\rm rel}$ given by \begin{eqnarray} \label{eq:PDF-vs-l} \frac{dP_{\alpha}}{d\ell_{\rm rel}} \propto \ell_{\rm rel}. \end{eqnarray} This relation has been previously obtained both analytically and numerically in the context of planetary collisions \citep{Li2020}.
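A quick Monte Carlo experiment (a sketch for illustration, not part of the simulation pipeline) reproduces this statistics: sampling the impact-parameter vector uniformly in a plane makes $r_{\rm p}\propto\ell_{\rm rel}^2$ uniformly distributed, i.e. $P(<r_{\rm p})\propto r_{\rm p}$.

```python
import random

# Monte Carlo check of P(<r_p) ~ r_p for inclined (3D) encounters:
# r_perp is sampled uniformly in a disk (the plane perpendicular to v_rel),
# l_rel ~ r_perp, and r_p ~ l_rel^2, so r_p should be uniformly distributed.
random.seed(1)
N = 100_000
rp = []
for _ in range(N):
    r_perp = random.random() ** 0.5   # uniform-in-area sampling, b_max = 1
    rp.append(r_perp ** 2)            # r_p in units of its maximum value

frac = sum(1 for v in rp if v < 0.3) / N
print(f"P(<0.3) = {frac:.3f}")        # close to 0.3 for a uniform distribution
```

If $r_\perp$ were instead restricted to a line (the coplanar limit), the same argument would give $P(<r_{\rm p})\propto\sqrt{r_{\rm p}}$.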
It can be understood as follows: When the two BHs reach a close separation $r_{\rm rel}\ll R_{\rm H}$, the angular momentum is $\ell_{\rm rel}=v_{\rm rel}r_{\perp}$, where $v_{\rm rel} \simeq \sqrt{2Gm_{12}/r_{\rm rel}}$ and $r_{\perp}$ is the projection of $\vec{r}_{\rm rel}$ perpendicular to $\vec{v}_{\rm rel}$; if the initial mutual inclination of the two BHs is much larger than $r_{\rm rel}/a_1$, the vector $\vec{r}_{\perp}$ is sampled uniformly in the plane perpendicular to $\vec{v}_{\rm rel}$ \citep{Li2020}, i.e. $dP/(r_{\perp}dr_{\perp})=\text{const}$, which implies Equation~\eqref{eq:PDF-vs-l}. While Figure~\ref{fig:CDF-rp} depicts the $r_{\rm p}$-distribution over the whole simulation time span, Figure~\ref{fig:CDF-rp-time} shows that the CEs from different time intervals also follow the same distribution. \begin{figure}[ht] \epsscale{1.0} \plotone{pics-CDF-rp-time.pdf} \caption{Cumulative distribution of $r_{\rm p}$ in close encounters (CE0s) detected at different time intervals (as labeled) in our fiducial simulations. The black line here is the same as the black line in Figure~\ref{fig:CDF-rp}. } \label{fig:CDF-rp-time} \end{figure} Thus, combining Equations~\eqref{eq:N-vs-t} and~\eqref{eq:Probability-vs-rp}, the averaged cumulative number of very close encounters with $r_{\rm p}<r_{\rm cap}$ is given by \begin{eqnarray} \label{eq:N-rcap-t} \langle N(t; r_{\rm p}<r_{\rm cap}) \rangle = \langle N_{\alpha}(t)\rangle P_{\alpha}(<r_{\rm cap}). \end{eqnarray} Evaluating Equation~\eqref{eq:N-rcap-t} using $\left< N_2(t) \right>$ and $P_2(<r_{\rm cap})$, we find \begin{eqnarray} \label{eq:Nprod} \langle N(t; r_{\rm p} < r_{\rm cap}) \rangle &\simeq& 6\times10^{-5} \left(\frac{t}{P_1}\right)^{0.52} \nonumber \\ && \times \left( \frac{r_{\rm cap}}{10^{-4}R_{\rm H}} \right), \end{eqnarray} where we have scaled $r_{\rm cap}$ according to the estimate given by Equation~\eqref{eq:GWCE-rp_crit}.
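Setting $\langle N\rangle = 1$ in Equation~\eqref{eq:Nprod} gives the characteristic capture time; a short numerical sketch (using the fitted coefficients quoted above) evaluates it:

```python
# Time (in units of P1) at which <N(t; r_p < r_cap)> reaches unity, from
# <N> = 6e-5 (t/P1)^0.52 (r_cap / 1e-4 R_H).  A and n are the fitted
# fiducial coefficients quoted in the text.
def capture_time(rcap_over_RH, A=6e-5, n=0.52):
    return (1.0 / (A * rcap_over_RH / 1e-4)) ** (1.0 / n)

print(f"t_cap ~ {capture_time(1e-4):.1e} P1")   # of order 1e8 P1
```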
Thus, for $r_{\rm cap} \simeq 10^{-4}R_{\rm H}$, it would take about $10^{8}P_1$ on average for the two BHs to be captured into a merging binary by GW emission. Note that, because of the wide spread of $N(t)$ (see Figure~\ref{fig:NCE-vs-t}), it is possible for an unstable pair of BHs to reach $N(t)\simeq 4\left<N(t)\right>$ (about $5\%$ of the systems) and be captured in $\sim10^7P_1$. \begin{figure}[ht] \epsscale{1.0} \plotone{pics-PDF-inc.pdf} \caption{Probability density function of the binary obliquity $i_{\rm bin}$ of BHs in close encounters. Note that $i_{\rm bin}$ measures the angle between the close encounter plane and the orbital plane around the SMBH, i.e. $\cos{i_{\rm bin}}\equiv\hat{\ell}_{\rm rel}\cdot\hat{z}$, where $\hat{\ell}_{\rm rel}$ is the relative angular momentum axis of the two BHs, and $\hat{z}$ is the initial orbital axis around the SMBH. } \label{fig:PDF-inc} \end{figure} \subsection{Orbital obliquities of BH binaries formed in close encounters} The orbital obliquities ($i_{\rm bin}$) of the BH binaries formed by CEs can be inferred from the inclination of the angular momentum vector $\vec{\ell}_{\rm rel}$ of the CE orbits with respect to the initial orbital axis ($\hat{z}$) around the SMBH, i.e. $\cos{i_{\rm bin}}=\hat{\ell}_{\rm rel}\cdot\hat{z}$. Figure~\ref{fig:PDF-inc} shows the distribution of $\cos{i_{\rm bin}}$ for the close encounters in our fiducial simulations. Encounters within the Hill radius (CE0, black) are predominantly prograde, with the most probable $i_{\rm bin}$ being zero. But closer encounters (CE1 and CE2) have a nearly uniform distribution of $\cos{i_{\rm bin}}$. This indicates that merging BH binaries formed in very close encounters ($r_{\rm p}\ll R_{\rm H}$) have a broad range of inclinations with respect to the AGN disk plane. The broad distribution of $\cos{i_{\rm bin}}$ is consistent with the findings of \cite{Li2020} in the context of planet collisions.
Such a broad distribution arises when the initial mutual inclination of the BH orbits $\Delta i$ is much larger than $r_{\rm p}/a_1$. Since our simulations have $\Delta i \sim R_{\rm H}/a_1$, all CEs with $r_{\rm p}\ll R_{\rm H}$ will have a broad $\cos{i_{\rm bin}}$ distribution. \footnote{\cite{Li2020} showed that when $\Delta i \gg r_{\rm p}/a_1$ and $\Delta i \ll R_{\rm H}/a_1$, the distribution of $\cos{i_{\rm bin}}$ is $f(\cos{i_{\rm bin}})\propto (1-\cos^2{i_{\rm bin}})^{-1/2}$, and that when $\Delta i \gg r_{\rm p}/a_1$ and $\Delta i \sim R_{\rm H}/a_1$, $f(\cos{i_{\rm bin}})$ becomes more uniform.} \begin{figure}[t] \epsscale{1.2} \plotone{pics-NCE-fid-5v.pdf} \caption{Same as Figure~\ref{fig:NCE-vs-t}, except showing the number of CE2 events in systems with different BH masses (as labeled). The text in the right panels shows the average $N_2$ at $t=10^{5}P_1$ and the percentage of runs that experience more than 10 CE2 events. } \label{fig:NCE-vs-t-3v} \end{figure} \subsection{Mass dependence of the close encounter rate} \label{sec:fid-mass} The mass ratio between the SMBH and the BHs in AGN disks can span several orders of magnitude. We repeat our fiducial experiment (which has $m_1=2m_2=2\times10^{-5}M$) with four different sets of BH masses: $(m_1,m_2)=(8,4)\times10^{-5}M$, $(m_1,m_2)=(2,1)\times10^{-6}M$, $(m_1,m_2)=(2,1.5)\times10^{-5}M$, and $(m_1,m_2)=(2,0.5)\times10^{-5}M$. These test the effects of larger BH masses, smaller BH masses, a smaller BH mass ratio ($m_1/m_2=4/3$), and a larger BH mass ratio ($m_1/m_2=4$), respectively. Figure~\ref{fig:NCE-vs-t-3v} compares the CE2 event rates in the fiducial system and in the four systems with different BH masses. In all cases, the $N_2$-distributions at $t=10^5P_1$ show large spreads, with the mean $\langle N_2(t=10^5P_1) \rangle$ in the range of $1.6$-$2.9$ (see Figure~\ref{fig:NCE-vs-t-3v}).
The probabilities of high $N_2$ (well beyond the mean) differ more: for example, the probability for a simulation to have more than 10 CE2 events is $5\%$, $9\%$, $1\%$, $4\%$, and $6\%$ for the five cases displayed in Figure~\ref{fig:NCE-vs-t-3v}. We find that, for the four systems with different BH masses, the probability distributions of $R_{\rm H}/a_{\rm rel}$, $r_{\rm p}/R_{\rm H}$ and $i_{\rm bin}$ are all similar to the fiducial results depicted in Figures~\ref{fig:PDF-aCE},~\ref{fig:CDF-rp} and~\ref{fig:PDF-inc}. Overall, our results suggest that the BBH formation rate obtained for our fiducial system in Section~\ref{sec:fiducial-rp} can be applied to other systems with different BH masses (as long as $m_1, m_2 \ll M$) to within a factor of 2. The biggest effect that we observe is a drop in the CE2 rate by a factor of 2 when the BH masses are lowered by a factor of 10 compared to the fiducial system. \begin{figure}[ht] \epsscale{1.0} \plotone{pics-NCE-vs-t-inc.pdf} \caption{Average number of CE2 events as a function of time in systems with different initial orbital inclinations ($i_1=i_2=i$ as labeled). The black curve is the same as the black curve in the bottom left panel of Figure~\ref{fig:NCE-vs-t}. } \label{fig:NCE-vs-t-all-inc} \end{figure} \section{Systems of different initial inclinations} \label{sec:inc} \subsection{Results with different initial inclinations} \label{sec:inc-coplanar} Our simulations in Section~\ref{sec:fiducial} assume that the initial orbits of the two BHs have inclinations $i_1=i_2=R_{\rm H}/a_1$ and random longitudes of ascending nodes. We expect the CE rates to increase when $i_1$ and $i_2$ are smaller. Figure~\ref{fig:NCE-vs-t-all-inc} shows the cumulative number of CE2 counts (averaged over 2000 runs) in simulations with different initial values of $i_1$ and $i_2$. We see that, as expected, $\left<N_2(t)\right>$ increases as $i_1$ and $i_2$ decrease.
The increase is most substantial in the exact coplanar systems ($i_1=i_2=0$), for which we find an average of 56 CE2s at $t=10^5P_1$, with the CE2 rate evolving in time as \begin{eqnarray} \label{eq:N-vs-t-coplanar} \left< N_2(t) \right> \simeq 0.05 \left(\frac{t}{P_1}\right)^{0.61} \quad \text{(exact coplanar)}. \end{eqnarray} This should be compared to Equation~\eqref{eq:N-vs-t-fid} for our fiducial runs (which assume $i_1=i_2=R_{\rm H}/a_1$). We notice that a smaller (but non-zero) initial inclination leads to more CE2s at early times, but the various $\left<N_2(t)\right>$ curves become roughly parallel after a few times $10^4$ orbits, indicating that the rate $d\langle N_2 \rangle/dt$ ``saturates'' to the fiducial value (see below). \begin{figure}[ht] \epsscale{1.0} \plotone{pics-CDF-rp-inc.pdf} \caption{Same as Figure~\ref{fig:CDF-rp}, but showing results for the CE0 events only and for different values of the initial orbital inclinations ($i_1=i_2=i$) as indicated by the labels. The black solid curve is the same as in Figure~\ref{fig:CDF-rp}. The black dashed line shows the linear $\propto r_{\rm p}$ scaling (see Equation~\ref{eq:Probability-vs-rp}) and the blue dashed line shows the $\sqrt{r_{\rm p}}$ scaling (see Equation~\ref{eq:Probability-vs-rp-coplanar}). } \label{fig:CDF-rp-inc} \end{figure} The distributions of the pericenter (``closest'') distance $r_{\rm p}$ of CE2 for the simulations with different initial inclinations are shown in Figure~\ref{fig:CDF-rp-inc}. For the exact coplanar systems, we find that the cumulative distribution of $r_{\rm p}$ is \begin{eqnarray} \label{eq:Probability-vs-rp-coplanar} P_{\alpha}(<r_{\rm p}) \simeq \left(\frac{r_{\rm p}}{10^{-\alpha}R_{\rm H}}\right)^{1/2} \quad (\text{for } r_{\rm p} \leq 10^{-\alpha}R_{\rm H}). \end{eqnarray} This should be contrasted with Equation~\eqref{eq:Probability-vs-rp} for our fiducial simulations.
Equation~\eqref{eq:Probability-vs-rp-coplanar} is equivalent to a uniform distribution of angular momentum, $dP/d\ell_{\rm rel} = \text{const}$. Such a uniform distribution is expected because for $i_1=i_2=0$, the dynamics is confined to the original orbital plane, and at $r_{\rm rel}\ll R_{\rm H}$ the projected separation $r_{\perp}$ is restricted to a line with $dP/dr_{\perp} = \text{const}$. Combining Equations~\eqref{eq:N-vs-t-coplanar} and~\eqref{eq:Probability-vs-rp-coplanar}, we find that for the exact coplanar systems, the cumulative BH binary formation rate in very close encounters is \begin{eqnarray} \label{eq:Nprod-flat} \langle N(t; r_{\rm p}<r_{\rm cap}) \rangle \simeq 0.005 \left(\frac{t}{P_1}\right)^{0.61} \left(\frac{r_{\rm cap}}{10^{-4}R_{\rm H}}\right)^{1/2}. \end{eqnarray} It would only take about $6\times10^{3}P_1$ on average for the two BHs to form a merging binary. Compared to Equation~\eqref{eq:Nprod}, we see that for such perfectly coplanar systems, the BH binary formation rate is much higher than in our fiducial systems (with $i_1=i_2=R_{\rm H}/a_1$). All of the other small-inclination simulations yield between 0.1 and 1.0 critical CEs with $r_{\rm p}<10^{-4}R_{\rm H}$ on average in their first $10^{5}P_1$. We expect their rates to be similar to the fiducial rates over longer timescales (see below). \begin{figure}[ht] \epsscale{1.0} \plotone{pics-theta-vs-t.png} \caption{Time evolution of the mutual inclination of the two BH orbits in the fiducial system (green) and in the nearly coplanar system with initial $i_1=i_2=10^{-5}R_{\rm H}/a_1$ (blue). The colored lines represent 200 individual simulations for each system. The black curves represent their averages.
} \label{fig:theta-vs-t} \end{figure} \subsection{Evolution of nearly coplanar systems} \label{sec:inc-time} Figures~\ref{fig:NCE-vs-t-all-inc} and~\ref{fig:CDF-rp-inc} show that for nearly coplanar systems (with initial inclinations $0<i_1,i_2\ll R_{\rm H}/a_1$), both the cumulative CE2 rate and the $r_{\rm p}$ distribution $P_{\alpha}(<r_{\rm p})$ lie between those of the exact coplanar system and our fiducial system. However, Figure~\ref{fig:NCE-vs-t-all-inc} also indicates that while the cumulative number of CE2 events in the first $\sim 10^4$ orbits is much larger than in the fiducial case, the rate $d\left<N_2\right>/dt$ seems to settle down to the fiducial rate at later times. This suggests that the mutual inclinations of the BH orbits grow in time in systems with nearly coplanar initial orbits. Figure~\ref{fig:theta-vs-t} shows the evolution of the mutual inclination $\theta_{12}$ of the BH orbits for our fiducial systems and for a system with initial $i_1=i_2=10^{-5}R_{\rm H}/a_1$. The mutual inclination is computed from $\cos{\theta_{12}}=\hat{\ell}_1\cdot\hat{\ell}_2$, where $\hat{\ell}_1$ and $\hat{\ell}_2$ are the unit angular momentum vectors of $m_1$ and $m_2$ around the SMBH. We see that in the nearly coplanar system, the mutual inclination $\theta_{12}$ gradually increases and then saturates at $\theta_{12}\sim R_{\rm H}/a_1$. For our fiducial system, the average mutual inclination remains at $\sim R_{\rm H}/a_1$ throughout the simulation. \begin{figure}[ht] \epsscale{1.0} \plotone{pics-CDF-rp-inc-vs-time.pdf} \caption{Same as Figure~\ref{fig:CDF-rp-time}, showing the time evolution of the cumulative distribution of $r_{\rm p}$. The colored lines show the results for a system with initial inclination $i_1=i_2=10^{-5}R_{\rm H}/a_1$, and different colors indicate different time intervals of the simulations. The black lines show the total distribution for the exact coplanar system (dash-dot) and the fiducial system (solid) for comparison.
} \label{fig:CDF-rp-inc-vs-time} \end{figure} Due to the time evolution of the mutual inclination $\theta_{12}$, we expect that the CE statistics, such as the $r_{\rm p}$-distribution, also evolve with time in the nearly coplanar systems. Figure~\ref{fig:CDF-rp-inc-vs-time} shows the time dependence of the $r_{\rm p}$-distribution for the $i_1=i_2=10^{-5}R_{\rm H}/a_1$ system. It is clear that the initial small inclinations ($i_1$, $i_2\ll R_{\rm H}/a_1$) only affect the results at earlier times. The long-term statistics of CEs for nearly coplanar systems are better represented by our fiducial simulations with $i_1=i_2=R_{\rm H}/a_1$ (see Section~\ref{sec:fiducial}). \section{Effects of Frictional Disk Forces} \label{sec:friction} The AGN disk can affect the dynamical evolution of the embedded BHs. For example, a BH (with mass $m_1$ and semi-major axis $a_1$) experiences eccentricity damping on the timescale \citep{Tanaka2004} \begin{eqnarray} \nonumber \tau_{e} & \simeq & \frac{M^2h^4}{2\pi m_1 \Sigma a_1^2}P_1 \\ \nonumber & \simeq & 1.2\times10^6 \left(\frac{a_1}{10^{2}GM/c^2}\right)^{-2}\left(\frac{\Sigma}{10^5\text{g/cm}^2}\right)^{-1} \\ & & \times \left(\frac{h}{0.03}\right)^4\left(\frac{m_1}{10M_\odot}\right)^{-1}P_1, \end{eqnarray} where $\Sigma$ is the disk surface density and $h$ is the disk aspect ratio, and we have adopted representative parameters for AGN disks (e.g., \citealp{Sirko2003}; see Figure 1 of \citealp{Secunda2019}). In the previous sections, we have studied BH binary captures via very close encounters in the absence of any disk force on the BHs. A full exploration of the effects of disk forces on BH binary formation would require long-term hydrodynamical simulations and is beyond the scope of this paper. Here, to qualitatively assess the effect of the disk, we apply simple prescriptions of disk forces on the BHs in our $N$-body simulations.
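The damping timescale above can be evaluated directly for the representative parameters quoted in the prefactor (a sketch in CGS; the physical constants are standard values, not taken from the text):

```python
import math

# Eccentricity damping time tau_e = M^2 h^4 / (2 pi m1 Sigma a1^2) in units
# of P1, for the representative AGN-disk parameters used in the scaling.
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33   # CGS

M     = 1e6 * Msun          # SMBH mass
m1    = 10.0 * Msun         # embedded BH mass
Sigma = 1e5                 # disk surface density, g/cm^2
h     = 0.03                # disk aspect ratio
a1    = 100 * G * M / c**2  # BH semi-major axis, 100 gravitational radii

tau_e = M**2 * h**4 / (2 * math.pi * m1 * Sigma * a1**2)  # in units of P1
print(f"tau_e ~ {tau_e:.1e} P1")   # close to 1.2e6 P1
```

Note that at fixed $a_1$ in units of $GM/c^2$, the $M^2$ in the numerator cancels against $a_1^2\propto M^2$, so $\tau_e/P_1$ is independent of the SMBH mass.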
We consider two simple models for the disk forces: \begin{enumerate} \item The first model includes the frictional force (per unit mass): \begin{eqnarray} \label{eq:df-drag} \vec{F}_{\rm drag} = -\frac{\vec{v} - \vec{v}_{\rm K}}{\tau_{\rm drag}}, \end{eqnarray} where $\vec{v}_{\rm K}=\sqrt{GM/r}\,\hat{\theta}$ is the Keplerian velocity and $r$ is the instantaneous distance of the BH to the SMBH. This force tends to damp the BH velocity $\vec{v}$ to $\vec{v}_{\rm K}$ and damp its eccentricity around the SMBH at the rate $\dot{e}\simeq-e/\tau_{\rm drag}$. \item The second model includes a force that mimics a migration trap at radius $r_0$: \begin{eqnarray} \label{eq:df-trap} \vec{F}_{\rm trap} = -\frac{\Omega_{\rm K,0} (r-r_0)}{\tau_{\rm trap}}\hat{\vec{\theta}}, \end{eqnarray} where $\Omega_{\rm K,0}=\sqrt{GM/r_0^3}$ is the Keplerian frequency at $r_0$. Equation~\eqref{eq:df-trap} assumes that the torque on the BH at $r$ is approximately linear in $(r-r_0)$ near the trap. In the following, we set $r_0$ to $a_1$, the initial semi-major axis of $m_1$. \end{enumerate} The constants $\tau_{\rm drag}$ and $\tau_{\rm trap}$ in Equations~\eqref{eq:df-drag} and~\eqref{eq:df-trap} characterize the strengths of the disk forces. We apply these disk forces to our fiducial systems (see Section~\ref{sec:fiducial}), considering different values of $\tau_{\rm drag}$ and $\tau_{\rm trap}$. For each case, we perform 200 simulations up to $10^5$ orbits. \begin{figure}[ht] \epsscale{1.2} \plotone{pics-NCE-disk.pdf} \caption{Similar to Figure~\ref{fig:NCE-vs-t-3v}, but for the fiducial systems (see Figure~\ref{fig:NCE-vs-t}) and including the effects of disk forces. The type and the strength (timescale) of the adopted disk force for each panel are as labeled.
} \label{fig:NCE-disk} \end{figure} Figure~\ref{fig:NCE-disk} presents three example simulations: $\vec{F}_{\rm drag}$ only with $\tau_{\rm drag}=10^{6}P_1$, $\vec{F}_{\rm trap}$ only with $\tau_{\rm trap}=10^{6}P_1$, and both $\vec{F}_{\rm drag}$ and $\vec{F}_{\rm trap}$ with $\tau_{\rm drag}=\tau_{\rm trap}=10^{6}P_1$. The left panels show the time evolution of $N_2$ and the right panels show the distribution of $N_2$ at $t=10^5P_1$. In all three cases, the average is $\langle N_2 (t=10^5P_1)\rangle \simeq 2$. Due to the stochastic nature of the evolution, all three sets of simulations exhibit large variations in the individual $N_2$, with about $2\%$ of the runs having $N_2(t=10^5P_1)\gtrsim10$. \begin{figure}[ht] \epsscale{1.0} \plotone{pics-NCE-vs-t-tau.pdf} \caption{Average number of CE2 events as a function of time in the fiducial systems (see Fig.~\ref{fig:NCE-vs-t}), including different types and strengths of disk forces. For example, the red solid line refers to the case that includes only $\vec{F}_{\rm drag}$ (with $\tau_{\rm drag}=10^{6}P_1$), the green solid line includes only $\vec{F}_{\rm trap}$ (with $\tau_{\rm trap}=10^6P_1$), and the blue solid line includes both $\vec{F}_{\rm drag}$ and $\vec{F}_{\rm trap}$ (with $\tau_{\rm drag}=\tau_{\rm trap}=10^6P_1$). } \label{fig:NCE-vs-t-tau} \end{figure} Figure~\ref{fig:NCE-vs-t-tau} shows the effects of different disk forces on the CE2 rates by comparing the time evolution of $\langle N_2(t)\rangle$ from our simulations with different force types and strengths. The drag force (Equation~\ref{eq:df-drag}) tends to stabilize the system by preventing orbit crossing between the BHs. When $\vec{F}_{\rm drag}$ with $\tau_{\rm drag}=10^5P_1$ (dashed red) or $10^6P_1$ (solid red) is applied, the system is still unstable initially and the two BHs experience CE2 events (also see Li, Rodet and Lai, in prep).
However, no more CE2s are found after about $t=10^4P_1$ for $\tau_{\rm drag}=10^5P_1$ and $t=8\times10^4P_1$ for $\tau_{\rm drag}=10^6P_1$. To check why the CEs cease, Figure~\ref{fig:scatter-rin-vs-rout-tau} compares the orbital separation of the two BHs in the no-drag simulations (fiducial, left panel) and the with-drag simulations ($\tau_{\rm drag}=10^{5}P_1$, right panel) at $t=4\times10^{3}P_1$, $1.2\times10^{4}P_1$, and $2\times10^{4}P_1$. In the fiducial simulations, $r_{\rm in}=a_{\rm in}(1+e_{\rm in})$ (the apocenter distance of the inner BH to the SMBH) and $r_{\rm out}=a_{\rm out}(1-e_{\rm out})$ (the pericenter distance of the outer BH) spread to the region with $r_{\rm out}-r_{\rm in} \lesssim 2.5 R_{\rm H12}$ during the first 4000 orbits and remain in the same region at later times. This allows the two BHs to continue to ``engage'' with each other. However, in the $\tau_{\rm drag}=10^{5}P_1$ simulations, the BH orbits evolve in time toward smaller $r_{\rm in}$ and larger $r_{\rm out}$. Eventually, the difference between $r_{\rm out}$ and $r_{\rm in}$ becomes too large and CEs become very rare. As a result, no CE2 happens between $t=2\times10^{4}P_1$ and $10^{5}P_1$ in the with-drag simulations. \begin{figure}[ht] \epsscale{1.2} \plotone{pics-scatter-rin-vs-rout-tau.pdf} \caption{Apocenter of the inner BH, $r_{\rm in} = a_{\rm in}(1+e_{\rm in})$, and pericenter of the outer BH, $r_{\rm out} = a_{\rm out}(1-e_{\rm out})$, from the fiducial (left) and the $\tau_{\rm drag}=10^{5}P_1$ simulations (right) at three different times. The region between the dashed lines corresponds to where $(r_{\rm out} - r_{\rm in})/R_{\rm H12} \in (2,3)$, with $R_{\rm H12}$ given by Equation~\eqref{eq:RH} and assuming $a_{1,2}\simeq r_{\rm in,out}$. It is marked in both panels to aid comparison of the distributions.
} \label{fig:scatter-rin-vs-rout-tau} \end{figure} One may expect that the trapping force (Equation~\ref{eq:df-trap}) can accelerate the CEs by keeping the BH orbits close to the trapping radius. Our simulations show that when $\vec{F}_{\rm trap}$ is applied, our systems indeed have more CE0 events. However, Figure~\ref{fig:NCE-vs-t-tau} shows that the CE2 rate is actually smaller than in the fiducial simulations without $\vec{F}_{\rm trap}$: the average number of CE2 events after $10^5$ orbits becomes $2.1$ for $\tau_{\rm trap}=10^6P_1$ (solid green) and $0.9$ for $\tau_{\rm trap}=10^5P_1$ (dashed green). Unlike in the simulations with $\vec{F}_{\rm drag}\neq0$, the CE2 rate in our $\vec{F}_{\rm trap}\neq0$ simulations does not drop to zero at later times. The stabilizing effects of the drag force and the trapping force can balance each other. The blue curves in Figure~\ref{fig:NCE-vs-t-tau} show that applying $\tau_{\rm drag}=\tau_{\rm trap}=10^{5}P_1$ produces more CE2 events than applying either force alone. With $\tau_{\rm drag}=\tau_{\rm trap}=10^{6}P_1$, the two BHs experience slightly more CE2 events than in the fiducial simulations (black curve). \begin{figure}[ht] \epsscale{1.0} \plotone{pics-CDF-rp-tau.pdf} \caption{Similar to Figure~\ref{fig:CDF-rp}, but showing the cumulative distribution of $r_{\rm p}$ in CE0 events for systems with different types and strengths of disk forces. } \label{fig:CDF-rp-tau} \end{figure} Figure~\ref{fig:CDF-rp-tau} shows that the $r_{\rm p}$-distributions of our simulations with drag forces are similar to the fiducial result. In particular, for $r_{\rm p}/R_{\rm H}\lesssim 0.01$, the linear scaling of the distribution with $r_{\rm p}$ remains accurate in all cases.
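For concreteness, the drag prescription of Equation~\eqref{eq:df-drag} can be written as a velocity-dependent acceleration (a minimal sketch in code units with $G=M=1$; this is an illustration, not the production implementation used in our simulations):

```python
import numpy as np

def drag_accel(pos, vel, tau_drag, GM=1.0):
    """Drag acceleration -(v - v_K)/tau_drag toward the local Keplerian flow.

    v_K = sqrt(GM/r) in the azimuthal direction, evaluated at the BH's
    instantaneous cylindrical radius r (code units with G = M = 1).
    """
    x, y, _ = pos
    r = np.hypot(x, y)
    theta_hat = np.array([-y / r, x / r, 0.0])   # azimuthal unit vector
    v_K = np.sqrt(GM / r) * theta_hat
    return -(np.asarray(vel, dtype=float) - v_K) / tau_drag

# Sanity check: a circular Keplerian orbit feels no drag.
a_circ = drag_accel((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), tau_drag=1.0)
print(a_circ)
```

A function like this can be attached to an $N$-body integration as an additional per-particle force; the trap force of Equation~\eqref{eq:df-trap} can be implemented analogously.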
In the simulations with the trap force, the mild encounters with $r_{\rm p}/R_{\rm H}\gtrsim0.01$ are relatively more frequent than in the fiducial system with no disk forces (as the CDFs with the trap force have a larger slope at $r_{\rm p}/R_{\rm H}\gtrsim0.01$). Thus the simple power-law distribution of $r_{\rm p}$ given by Equation~\eqref{eq:Probability-vs-rp} is still valid for CE2 but becomes less accurate for CE0 when $\vec{F}_{\rm trap}$ is applied. In a real AGN disk, multiple effects can take place at the same time. The outcome of their competition depends on the detailed disk properties. Our results in this section suggest that disk forces have little effect on the $r_{\rm p}$-distribution for very close encounters (Figure~\ref{fig:CDF-rp-tau}), but can influence the CE2 event rate (Figure~\ref{fig:NCE-vs-t-tau}), and therefore affect the BBH formation rate. Our simulations also suggest that in order for the two unstable BHs in an AGN disk to be captured into a merging binary, a combination of the drag force and trapping force from the disk is needed, or we would have to rely on the chance that an individual system is in the high-$N_2$ tail of the distribution. We emphasize that our results in this section are based on simple prescriptions of disk forces. Hydrodynamics simulations will be needed to fully capture the effects of disk forces. \section{Three and more BH\lowercase{s} around SMBH} \label{sec:NBH} While a ``SMBH + 2BHs'' system is unstable only if $a_2-a_1 \lesssim 2\sqrt{3}R_{\rm H}$ (see Equation~\ref{eq:a1-a2-criterion}), no precise stability criterion exists for systems with more than two BHs. In fact, such systems always exhibit instability eventually, although the instability time grows with increasing orbital spacings.
Numerical integrations suggest that a system of 3 or more bodies on nearly circular, coplanar orbits around a central massive object is stable for at least $N$ orbital periods if the separation between adjacent bodies satisfies $\left|a_{j+1}-a_{j}\right|\gtrsim K(N)R_{\rm H}$, where the constant $K(N)$ increases with increasing $N$ \citep[e.g., $K\simeq9-12$ for $N=10^{10}$;][]{Smith2009}. To explore close encounters in systems with more than two BHs, we consider a ``SMBH + 3BHs'' system with $(m_1,m_2,m_3)=(2,1,0.5)\times10^{-5}M$ and a ``SMBH + 5 BHs'' system with $(m_1,m_2,m_3,m_4,m_5)=(2,1,0.5,0.25,0.125)\times10^{-5}M$, where $M$ is the mass of the central SMBH. The initial orbital separation is set as \begin{eqnarray} a_{j+1}-a_j = K R_{\rm H,mut}, \end{eqnarray} where $R_{\rm H,mut}$ is the mutual Hill radius of $m_j$ and $m_{j+1}$ (see Equation~\ref{eq:RH}) and we set $K=2$ or $4$. The BHs are given initial eccentricities $e_1=0$, $e_j=10^{-5}$ for $j>1$ and inclinations $i=R_{\rm H}/a_1$ for all BHs. Similar to our fiducial simulations (Section~\ref{sec:fiducial}), we carry out 2000 runs and sample the initial values of the argument of periapsis, the longitude of the ascending node, and the mean anomaly randomly in the range $[0,2\pi]$ for each BH, assuming they all have uniform distributions. \begin{figure}[ht] \epsscale{0.9} \plotone{pics-NCE-vs-t-3p.png} \caption{Average number of CE2 as a function of time for systems with 3 and 5 BHs. The fiducial result for 2 BHs (the bottom left panel of Figure~\ref{fig:NCE-vs-t}) is also shown for comparison. } \label{fig:NCE-vs-t-3p} \end{figure} Figure~\ref{fig:NCE-vs-t-3p} shows the averaged cumulative CE2 rates for systems with two, three and five BHs. The two-BH (fiducial) systems experience nearly 3 to 5 times more CE2 events than the other systems. For the $K=2$ cases, the average number of CE2 events at $t=10^5P_1$ is 0.9 with 3 BHs and 1.1 with 5 BHs.
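The initial orbital configuration can be generated iteratively, since each spacing condition is linear in $a_{j+1}$. The following sketch (ours, not part of the simulation code) assumes the standard mutual Hill radius $R_{\rm H,mut}=[(m_j+m_{j+1})/(3M)]^{1/3}(a_j+a_{j+1})/2$ for Equation~(\ref{eq:RH}):

```python
def packed_orbits(masses, M, K, a1=1.0):
    """Semi-major axes a_j with spacing a_{j+1} - a_j = K * R_H,mut.

    Assumes R_H,mut = ((m_j + m_{j+1}) / (3 M))**(1/3) * (a_j + a_{j+1}) / 2,
    which makes each step a linear equation for a_{j+1}.
    """
    a = [a1]
    for mj, mj1 in zip(masses, masses[1:]):
        h = ((mj + mj1) / (3.0 * M)) ** (1.0 / 3.0)
        # a_{j+1} - a_j = K*h*(a_j + a_{j+1})/2  =>  solve for a_{j+1}
        a.append(a[-1] * (1.0 + K * h / 2.0) / (1.0 - K * h / 2.0))
    return a

# "SMBH + 3 BHs" configuration with (m1, m2, m3) = (2, 1, 0.5) x 1e-5 M, K = 2
print(packed_orbits([2e-5, 1e-5, 0.5e-5], M=1.0, K=2))
```

Each step satisfies $a_{j+1}-a_j=K\,R_{\rm H,mut}$ exactly, so the spacings reproduce the prescription above by construction.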
The fact that the CE2 rate is highest for the two-BH systems is somewhat surprising. It may be because the two-BH systems are subject to conservation constraints, which cause the BH orbits to repeatedly overlap. A system with three or more BHs has more degrees of freedom, and no conservation law constrains any individual pair of BHs. Adopting $K=4$ in the three- and five-BH simulations leads to no CE2 events before $t \simeq 2\times10^{3}P_1$. This is because the initial conditions with larger $K$ require longer times for the BHs to develop their first Hill sphere crossing. After the instability develops, the $K=4$ systems catch up in the growth rate. At $t=10^5P_1$, the three-BH and five-BH simulations with $K=4$ yield $\left< N_2 \right>=0.5$ and $0.62$, respectively. For systems with larger $K$, we expect a longer time before the CEs occur, but the CE rate will eventually converge. We fit the time evolution of the average number of CE2 events for $t>5\times10^4P_1$, by \begin{eqnarray} \left< N_2(t) \right> = \left(\frac{t}{T}\right)^{n_2}. \end{eqnarray} A least-squares fit gives $n_2 = 0.26$, $0.50$, $0.30$, $0.50$ and $T = 1.2\times10^5$, $4.0\times10^5$, $8.5\times10^4$, $2.6\times10^5$ for 3 BHs with $K=2$, 3 BHs with $K=4$, 5 BHs with $K=2$ and 5 BHs with $K=4$, respectively. Using $\langle N(t;r_{\rm p}<r_{\rm cap}) \rangle = \langle N_2 (t) \rangle P_2(<r_{\rm cap})$ (see Equation~\ref{eq:N-rcap-t}) with $P_2(<r_{\rm cap}) \simeq r_{\rm cap}/ (10^{-2}R_{\rm H})$ (see Equation~\ref{eq:Probability-vs-rp}), we find that it would take about $5\times10^{12}P_1$, $4\times10^{9}P_1$, $4\times10^{11}P_1$, $3\times10^{9}P_1$ on average to form a merging BH binary in each of the four cases. \begin{figure}[ht] \epsscale{1.0} \plotone{pics-CDF-rp-3p.pdf} \caption{Same as Figure~\ref{fig:CDF-rp}, except showing the cumulative distribution of $r_{\rm p}$ in close encounters (CE2 only) in simulations with different numbers of BHs and initial spacings.
} \label{fig:CDF-rp-3p} \end{figure} \begin{figure}[ht] \epsscale{1.0} \plotone{pics-PDF-inc-3p.pdf} \caption{Same as Figure~\ref{fig:PDF-inc}, except showing the results for the CE2 events in the simulations with different numbers of BHs and initial spacings. } \label{fig:PDF-inc-3p} \end{figure} Figure~\ref{fig:CDF-rp-3p} shows that the $r_{\rm p}$-distributions of CE2 for all of the simulations in this section are similar to the fiducial result (see Figure~\ref{fig:CDF-rp}). Figure~\ref{fig:PDF-inc-3p} shows that CE2 events always have a nearly uniform distribution of $\cos{i_{\rm bin}}$, and thus the merging BH binaries formed in our scenario have a wide range of inclination angles with respect to the AGN disk plane. This is expected: Regardless of the number and the initial spacings of the BHs, the relative orbit of the captured binary is dominated by the gravity of the BHs during the CEs. Having more BHs and different initial spacings only affects the CE rate. \section{Summary and Discussion} \label{sec:summary} In this paper, we have studied the long-term evolution of two or more stellar-mass BHs on closely-packed, dynamically unstable circular orbits around a SMBH, with the initial semi-major axes separated by a few times the Hill radius $R_{\rm H}$. Such an orbital configuration may naturally arise from BH migrations in the AGN disk and leads to recurring close encounters between the BHs. We use $N$-body simulations to study the statistics and rate of close encounters (of various degrees of ``closeness'' compared to $R_{\rm H}$), the properties of the relative orbits of the encountering BHs, and the probability for two BHs to be captured into a merging binary with the help of gravitational wave emission during very close encounters. Our fiducial simulations focus on ``SMBH + 2 BHs'' systems with $m_1=2m_2=2\times10^{-5}M$ and initial inclinations $i_1=i_2=R_{\rm H}/a_1$ (where $a_1$ and $a_2$ are the BHs' semi-major axes around the SMBH).
Additional simulations with different BH masses, initial orbital inclinations, prescriptions for disk forces, and different numbers of BHs are also performed. Our simulations show that close encounters (CEs) between the BHs in such systems exhibit three general characteristics: \begin{enumerate} \setcounter{enumi}{0} \item Close encounters (with the separation between two BHs less than $R_{\rm H}$) occur stochastically and the rate of CEs generally declines in time (see Fig.~\ref{fig:NCE-vs-t}). The average cumulative number of CEs, $\langle N(t)\rangle$, is approximately a power-law function of time, although there are wide spreads in $N(t)$ for different systems. For our fiducial ``SMBH + 2 BHs'' setup, we find $\langle N(t)\rangle\propto t^{0.5}$ (see Eqs.~\ref{eq:N-vs-t}-\ref{eq:N-vs-t-fid}). \item The vast majority of the CEs result in the formation of short-lived binaries with a small binding energy (of order $Gm_1m_2/R_{\rm H}$; see Fig.~\ref{fig:PDF-aCE}). Such binaries are quickly destroyed by the SMBH tide and do not lead to binary mergers. \item The closest separation ($r_{\rm p}$) during a close encounter follows a cumulative distribution $P(<r_{\rm p})\propto r_{\rm p}$ for $r_{\rm p}\ll R_{\rm H}$ (see Figs.~\ref{fig:CDF-rp},~\ref{fig:CDF-rp-time}, and Eq.~\ref{eq:Probability-vs-rp}). This distribution is robust, regardless of the BH masses (Section~\ref{sec:fid-mass}) and the number of BHs in the system (Section~\ref{sec:NBH}). For systems with very small but non-zero initial mutual inclinations, the same $r_{\rm p}$-distribution applies at later times as the mutual inclinations grow. In a system with two exactly co-planar BHs, the distribution becomes $P(<r_{\rm p})\propto r_{\rm p}^{1/2}$ (Fig.~\ref{fig:CDF-rp-inc} and Eq.~\ref{eq:Probability-vs-rp-coplanar}). \end{enumerate} These results imply that, to capture an encountering BH pair into a ``permanent'' binary, a fast dissipative mechanism is required.
Given the high close encounter rate, a promising mechanism is the GW emission when $r_{\rm p}$ is very small (less than the critical ``capture'' radius $r_{\rm cap}\sim 10^{-4}R_{\rm H}$, depending on the system parameters; see Eq.~\ref{eq:GWCE-rp_crit}): \begin{enumerate} \setcounter{enumi}{3} \item We provide a semi-analytical formula for the averaged cumulative rate of binary captures via GW emission in a ``SMBH + 2 BHs'' fiducial system, Equation~\eqref{eq:N-rcap-t} or~\eqref{eq:Nprod}, which is the product of the close encounter rate $\langle N(t)\rangle$ and the capture probability $P(<r_{\rm cap})$ during each encounter. \item Our formula suggests that, in the fiducial systems, the timescale for two BHs to be captured is $10^8P_1$ on average (assuming $r_{\rm cap}=10^{-4}R_{\rm H}$) and $\lesssim 10^7P_1$ for the $5\%$ of the systems with $N(t)\gtrsim4\langle N(t)\rangle$. In the exactly co-planar systems, the capture rate is much higher (see Eq.~\ref{eq:Nprod-flat}), and we find that the average binary capture time is $6\times10^3P_1$. \item After the two BHs are captured, we expect these BH binaries to merge in a few binary orbits. Their mergers will exhibit high eccentricities (see below) when entering the LIGO band ($\gtrsim 10$~Hz) and will have a broad distribution of orbital inclinations relative to the original AGN disk (see Figure~\ref{fig:PDF-inc}). \end{enumerate} We have carried out additional simulations to assess how the above results may be influenced by various system parameters and by the gas disk effects: \begin{enumerate} \setcounter{enumi}{6} \item The masses of the BHs (relative to the mass of the SMBH) affect the close encounter rate only in a modest way (Section~\ref{sec:fid-mass}). For example, the rate of CEs with $r_{\rm p}\le 10^{-2}R_{\rm H}$ decreases by a factor of $\sim 2$ when the BH masses are lowered by a factor of ten (see Figure~\ref{fig:NCE-vs-t-3v}). The $r_{\rm p}$-distribution (Eq.~\ref{eq:Probability-vs-rp}) is robust.
Thus, we expect that our fiducial binary capture rate, Equation~\eqref{eq:Nprod}, remains valid for systems with different BH masses. \item The optimal setup for binary captures in our scenario is two exactly co-planar BHs (see points 3 and 5 above). Such an exactly co-planar system would have a higher rate of close encounters (see Fig.~\ref{fig:NCE-vs-t-all-inc} and Eq.~\ref{eq:N-vs-t-coplanar}) and a flatter $r_{\rm p}$-distribution (Fig.~\ref{fig:CDF-rp-inc} and Eq.~\ref{eq:Probability-vs-rp-coplanar}), leading to a greatly enhanced binary capture rate (Eq.~\ref{eq:Nprod-flat}). However, if the mutual inclination between the BH orbits is initially small but non-zero, it will inevitably grow and evolve to an ``equilibrium'' value $\sim R_{\rm H}/a_1$ (see Fig.~\ref{fig:theta-vs-t}), causing $P(<r_{\rm p})$ to converge to the fiducial result (see Fig.~\ref{fig:CDF-rp-inc-vs-time}). \item We have explored the effects of gas disks by applying simple prescriptions of disk forces (see Eqs.~\ref{eq:df-drag} and~\ref{eq:df-trap}) on the BHs in our $N$-body simulations. We find that such prescribed disk forces do not necessarily lead to an enhanced binary capture rate. In fact, simple gas drags on the BHs may stop close encounters at late times. A ``migration trap'' force can sometimes balance the drag force and maintain the close encounter rate (see Fig.~\ref{fig:NCE-vs-t-tau}). \item The number of BHs and their initial spacings in a closely-packed system do not affect the $r_{\rm p}$-distribution (Fig.~\ref{fig:CDF-rp-3p}) and the orbital obliquity distribution (Fig.~\ref{fig:PDF-inc-3p}) during the very close encounters. We find that systems with more than 2 BHs have a lower close encounter rate than systems with two BHs.
\end{enumerate} Regarding point 6 above: Using Equation~\eqref{eq:GWCE-rp_crit} for $r_{\rm cap}$, we find that the GW emitted by the BH binary at capture has a frequency \begin{eqnarray} && f_{\rm cap}={1 \over \pi}\left({Gm_{12}\over r_{\rm cap}^3}\right)^{1/2}\nonumber\\ &&\qquad =(1.4\,{\rm Hz})\,\eta^{3/7}\left({4\mu\over m_{12}}\right)^{\!-3/7} \left({a_{12}\over 100 GM/c^2}\right)^{\!-3/7}\nonumber\\ &&\qquad \times \left({M\over 10^8M_\odot}\right)^{\!-2/7}\left({m_{12}\over 100M_\odot} \right)^{\!-5/7}. \end{eqnarray} For the adopted parameter values, this frequency lies slightly below the LIGO band. Such a newly captured binary will merge within a few binary orbits. Using the standard gravitational radiation formulae \citep{Peters1964}, it is easy to see that when the frequency of the GWs from the binary enters the LIGO band ($>10$~Hz), the binary can retain a very significant ($\gtrsim 0.5$) eccentricity. The event rate of binary BH mergers produced in our scenario depends on the population of BHs in AGN disks, which is very uncertain. Nevertheless, given the binary capture timescale obtained in this paper, a non-negligible event rate for such BH mergers may be expected. Perhaps the recently claimed eccentric merger event GW190521 \citep{Gayathri2022} is an example. The most uncertain aspects of the present study concern the effects of gas disks. Our conclusion on the gas effects (point 9 above) should be considered tentative. For future work, hydrodynamics simulations should be used for a more in-depth study of the disk effects. Although we have only considered GW emission in this paper, the broad range of $r_{\rm p}$ in close encounters may allow the physical processes at different distance scales to operate. For example, interaction between the circum-BH disks may generate dissipation for the BH pairs during close encounters.
Our results for the close encounter rate and the $r_{\rm p}$-distribution can be applied to these alternative dissipation mechanisms to explore different possibilities of binary BH formation in AGN disks. \acknowledgments This work is supported in part by NSF grant AST-2107796 and NASA grant 80NSSC19K0444. \software{Rebound \citep{Rein2012}, Matplotlib \citep{Hunter2007}, NumPy \citep{Walt2011}, SciPy \citep{Virtanen2020} } \vspace{2cm}
\section{Introduction} Most recent theories of neutrino oscillations have used a 3x3 S-matrix approach with three active neutrinos\cite{ahlo01,jo04,hjk11}. Recent experiments on neutrino oscillations\cite{mini13} have suggested the existence of at least one sterile neutrino with the mass and mixing angles used in the present work. See Ref\cite{mini13} for references to earlier experiments, and Refs\cite{kmms13,ggllz15} for reviews of sterile neutrino oscillations with references to experimental and theoretical publications. In the present work we use a U-matrix approach, introduced for active neutrinos with a 3x3 U-matrix\cite{as97}, and extended to a 4x4 U-matrix with one sterile neutrino in a recent study of $\mathcal{P}(\nu_\mu \rightarrow \nu_e)$, the transition probability for a muon neutrino to oscillate to an electron neutrino\cite{lsk14,lsk15}. We introduce a 6x6 U-matrix for three active and three sterile neutrinos, an extension of previous work with six neutrinos\cite{tg07}. An early study of the effect of adding 3 sterile neutrinos may be found in Ref\cite{gs81}, where it was found that in a broad class of theories consistent with grand unification, the neutrino mixing angles are likely to be comparable to the corresponding quark mixing angles and might be much larger in a special case. This result holds for a wide range of mass ratios for the light-neutrino Majorana masses. \section{ 6x6 U-Matrix} Active neutrinos with flavors $\nu_e,\nu_\mu,\nu_\tau$ and three sterile neutrinos, $\nu_{s_1},\nu_{s_2},\nu_{s_3}$ are related to neutrinos with definite mass by \begin{eqnarray} \label{f-mrelation} \nu_f &=& U\nu_m \; , \end{eqnarray} where $U$ is a 6x6 matrix and $\nu_f,\nu_m$ are 6x1 column vectors. 
We use the notation $s_{ij}=\sin\theta_{ij}$, $c_{ij}=\cos\theta_{ij}$, with $\theta_{12}, \theta_{23}, \theta_{13}$ the mixing angles for active neutrinos; and $s_\alpha=\sin\alpha$, $c_\alpha=\cos\alpha$, $s_\beta=\sin\beta$, etc., where $\alpha,\beta,\gamma$ are sterile-active neutrino mixing angles. \begin{eqnarray} \label{Uform} U &=& O^{23}O^{13} O^{12} O^{14} O^{24} O^{34} O^{15} O^{25} O^{35} O^{45} O^{16}O^{26} O^{36} O^{46} O^{56} \end{eqnarray} with ($O^{45}$, $O^{46}$, and $O^{56}$, giving sterile-sterile neutrino mixing, are not shown) \vspace{3mm} $O^{23}$= $\left( \begin{array}{ccclcr} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & c_{23} & s_{23} & 0 & 0 & 0 \\ 0 & -s_{23} & c_{23} & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right)$ \hspace{3mm}$O^{13}$= $\left( \begin{array}{ccclcr} c_{13} & 0 & s_{13} & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\-s_{13} & 0 & c_{13} & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right)$ \vspace{3mm} $O^{12}$= $\left( \begin{array}{ccclcr} c_{12} & s_{12} & 0 & 0 & 0 & 0\\ -s_{12} & c_{12} & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right)$ \hspace{3mm}$O^{14}$= $\left( \begin{array}{ccclcr} c_\alpha & 0 & 0 & s_\alpha & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ -s_\alpha & 0 & 0 & c_\alpha & 0 & 0\\ 0& 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right)$ \vspace{3mm} $O^{24}$= $ \left( \begin{array}{ccclcr} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & c_\alpha & 0 & s_\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & -s_\alpha & 0 & c_\alpha & 0 & 0 \\ 0& 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right)$ \hspace{3mm}$O^{34}$= $\left( \begin{array}{ccclcr} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & c_\alpha & s_\alpha & 0 & 0 \\ 0 & 0 & -s_\alpha & c_\alpha & 0 & 0\\ 0& 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0
& 0 & 0 & 1 \end{array} \right)$ $O^{15}$= $\left( \begin{array}{ccclcr} c_\beta & 0 & 0 & 0 & s_\beta & 0\\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ - s_\beta & 0 & 0 & 0 & c_\beta & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right)$ \hspace{3mm}$O^{25}$= $\left( \begin{array}{ccclcr} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & c_\beta & 0 & 0 & s_\beta & 0\\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 &-s_\beta & 0 & 0 & c_\beta & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right)$ \vspace{3mm} $O^{35}$= $\left( \begin{array}{ccclcr} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & c_\beta & 0 & s_\beta & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0& 0 & -s_\beta & 0 & c_\beta & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right)$ \hspace{3mm}$O^{16}$= $\left( \begin{array}{ccclcr} c_\gamma & 0 & 0 & 0 & 0 & s_\gamma\\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ - s_\gamma & 0 & 0 & 0 & 0 & c_\gamma \end{array} \right)$ \vspace{3mm} $O^{26}$= $\left( \begin{array}{ccclcr} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & c_\gamma & 0 & 0 & 0 & s_\gamma\\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 &-s_\gamma & 0 & 0 & 0 & c_\gamma \end{array} \right)$ \hspace{3mm}$O^{36}$= $\left( \begin{array}{ccclcr} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & c_\gamma & 0 & 0 & s_\gamma\\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 &- s_\gamma & 0 & 0 & c_\gamma \end{array} \right)$ \vspace{5mm} $ \mathcal{P}(\nu_\mu \rightarrow\nu_e)$ is obtained from the 6x6 U matrix and the neutrino mass differences $\delta m_{ij}^2=m_i^2-m_j^2$ for a neutrino beam with energy $E$ and baseline $L$ by \begin{eqnarray} \label{Pue-1} \mathcal{P}(\nu_\mu \rightarrow\nu_e) &=& Re[\sum_{i=1}^{6}\sum_{j=1}^{6} U_{1i}U^*_{1j}U^*_{2i}U_{2j} e^{-i(\delta m_{ij}^2/2E)L}] \; , \end{eqnarray} an extension of the 4x4\cite{lsk14,lsk15} theory with one sterile neutrino, which used the 3x3 formalism of
Ref\cite{as97}, to a 6x6 matrix formalism\cite{tg07}. From Eq(\ref{Uform}), multiplying the 12 6x6 $O$ matrices, we obtain the matrix U. With $\delta_{CP}$=0, $U^*_{ij}=U_{ij}$, so we only need $U_{1j},U_{2j}$. \begin{eqnarray} \label{U1j} U_{11}&=&.821 ca{\rm \;}cb{\rm \;}cg \nonumber \\ U_{12} &=& cg ((.554 ca - .821 sa^2) cb - .821 ca{\rm \;}sb^2) - .821 ca{\rm \;} cb{\rm \;}sg^2 \nonumber \\ U_{13}&=&cg ((.15 ca-.554 sa^2-.821ca{\rm \;}sa^2)cb-(.554 ca - .821 sa^2)sb^2 \nonumber \\ && -.821ca{\rm \;}cb{\rm \;}sb^2) - .821 ca{\rm \;}cb{\rm \;}cg{\rm \;} sg^2 - ((.554 ca - .821 sa^2) cb -.821 ca{\rm \;}sb^2)sg^2 \nonumber \\ U_{14} &=&cg(cb(.15sa +.554 ca{\rm \;}sa + .821 ca^2{\rm \;}sa)-.821ca{\rm \;} cb^2{\rm \;}sb^2 \nonumber \\ && -(.554 ca-.821 sa^2)cb{\rm \;}sb^2-(.15 ca-.554 sa^2-.821 ca sa^2)sb^2) - .821ca{\rm \;}cb{\rm \;}sg^2 cg^2\nonumber \\ &&-cg((.554ca-.821sa^2) cb -.821ca {\rm \;}sb^2)sg^2 -(cb (.15 ca - .554 sa^2 - .821 ca sa^2)\nonumber \\ && - .821ca{\rm \;}cb{\rm \;}sb^2 -(.554 ca-.821 sa^2) sb^2)sg^2 \nonumber \end{eqnarray} \newpage \begin{eqnarray} \label{U1j2} U_{15} &=&cg(.821ca{\rm \;}sb{\rm \;}cb^3+(.15sa+.554ca{\rm \;}sa+ .821 ca^2{\rm \;}sa)sb \nonumber \\ && +(.554 ca-.821 sa^2)cb^2{\rm \;}sb+(.15ca-.554sa^2-.821ca{\rm \;}sa^2) cb{\rm \;}sb) \nonumber \\ && -.821ca{\rm \;}cb{\rm \;}cg^3 sg^2 -cg^2(cb(.554ca-.821 sa^2)-.821 sb^2)sg^2 \nonumber \\ &&-cg(cb (.15 ca-.554 sa^2-.821ca{\rm \;}sa^2)-.821 ca{\rm \;}cb{\rm \;}sb^2 \nonumber \\ && -(.554 ca-.821 sa^2)sb^2 sg^2 -(cb(.15sa+.554ca{\rm \;} sa + .821ca^2 sa) - .821 ca{\rm \;}cb^2{\rm \;}sb^2 \nonumber \\ && -cb(.554ca -.821sa^2)sb^2+(.15ca-.554sa^2-.821ca{\rm \;}sa^2)sb^2) sg^2 \nonumber \\ U_{16}&=& .821ca{\rm \;}cb{\rm \;}sg{\rm \;}cg^4+(.821ca{\rm \;}cb^3{\rm \;}sb + (.15sa+.554ca{\rm \;}sa+.821ca^2{\rm \;}sa)sb\nonumber \\ && +cb^2 (.554 ca-.821 sa^2)sb + cb(.15 ca -.554 sa^2-.821 ca{\rm \;} sa^2) sb) sg \nonumber \\ &&+cg^3 ((.554 ca - .821 sa^2) cb - .821 ca{\rm \;}sb^2)sg 
+\nonumber \\ && cg^2 (cb (.15 ca -.554 sa^2 - .821ca{\rm \;}sa^2)-.821ca{\rm \;}cb {\rm \;}sb^2\nonumber \\ && - (.554 ca - .821 sa^2) sb^2) sg\nonumber \\ &&+cg(cb(.15 sa+.554ca{\rm \;}sa+.821 ca^2{\rm \;}sa)-.821ca{\rm \;}cb^2 sb^2 \nonumber \\ && -cb(.554ca-.821sa^2)sb^2-(.15 ca-.554 sa^2-.821ca{\rm \;}sa^2)sb^2) sg \; , \end{eqnarray} \begin{eqnarray} \label{U2j} U_{21}&=& -.484ca{\rm \;}cb{\rm \;}cg \nonumber \\ U_{22}&=&cg((.527ca+.484 sa^2)cb+.484ca{\rm \;}sb^2)+ .484ca{\rm \;}cb{\rm \;}sg^2 \nonumber \\ U_{23}&=& cg((.699ca-.527sa^2+.484ca{\rm \;}sa^2)cb-(.527ca+.484sa^2)sb^2 +.484ca{\rm \;}cb{\rm \;}sb^2)\nonumber \\ && +.484ca{\rm \;}cb{\rm \;}cg{\rm \;}sg^2-((.527ca + .484sa^2)cb +.484 ca{\rm \;}sb^2)sg^2 \nonumber \\ U_{24}&=& cg(cb(.699 sa+.527ca{\rm \;}sa-.484ca^2{\rm \;}sa)+ .484ca{\rm \;}cb^2{\rm \;}sb^2 \nonumber \\ && -(.527ca +.484sa^2)cb{\rm \;}sb^2 -(.699ca-.527sa^2+.484ca{\rm \;}sa^2) sb^2) +.484ca{\rm \;}cb{\rm \;}sg^2{\rm \;}cg^2 \nonumber \\ &&-cg((.527 ca +.484sa^2)cb+.484ca{\rm \;}sb^2)sg^2-(cb(.699 ca-.527sa^2+ .484ca{\rm \;}sa^2) +\nonumber \\ && .484ca{\rm \;}cb{\rm \;}sb^2-(.527ca + .484 sa^2)sb^2)sg^2 \nonumber \\ U_{25}&=& cg(-.484 ca{\rm \;}sb{\rm \;}cb^3 +(.699sa+.527ca{\rm \;}sa -.484ca^2{\rm \;}sa)sb\nonumber \\ && +(.527ca+.484sa^2)cb^2{\rm \;}sb +(.699ca -.527sa^2+.484 ca{\rm \;}sa^2) cb{\rm \;}sb)+.484ca{\rm \;}cb{\rm \;}cg^3{\rm \;}sg^2 \nonumber \\ &&-cg^2(cb(.527ca+.484sa^2)+.484ca{\rm \;}sb^2)sg^2 -cg(cb(.699ca-.527sa^2+.484ca{\rm \;}sa^2)+ \nonumber \\ && .484ca{\rm \;}cb{\rm \;}sb^2-(.527ca+.484sa^2)sb^2)sg^2 -(cb(.699sa+.527ca{\rm \;}sa-.484ca^2{\rm \;}sa)+ \nonumber \\ &&.484ca{\rm \;}cb^2{\rm \;}sb^2 -cb(.527ca +.484sa^2)sb^2 +(.699ca-.527sa^2+.484 ca{\rm \;}sa^2)sb^2)sg^2 \nonumber \\ U_{26}&=& -.484ca{\rm \;}cb{\rm \;}sg{\rm \;}cg^4+ (-.484ca{\rm \;}cb^3{\rm \;}sb + (.699 sa + .527 ca sa-.484ca^2{\rm \;}sa)sb \nonumber \\ && +cb^2 (.527ca + .484sa^2)sb+ cb(.699ca-.527sa^2+.484ca{\rm \;}sa^2)sb)sg \nonumber\\ &&
+cg^3((.527ca+.484 sa^2)cb + .484 ca{\rm \;}sb^2)sg +\nonumber\\ && cg^2(cb(.699ca-.527sa^2+.484ca{\rm \;}sa^2)+.484 ca{\rm \;}cb{\rm \;}sb^2 - (.527 ca + .484 sa^2) sb^2) sg \nonumber\\ &&+cg(cb(.699sa+.527ca{\rm \;}sa-.484ca^2 sa)+.484ca{\rm \;}cb^2{\rm \;}sb^2 - cb(.527ca +.484sa^2)sb^2 \nonumber\\ &&-(.699 ca-.527sa^2 +.484 ca{\rm \;}sa^2)sb^2)sg \; . \end{eqnarray} \newpage \section{$\mathcal{P}(\nu_\mu \rightarrow \nu_e)$ For equal sterile neutrino masses} Assuming that all three sterile neutrinos have the same mass, sterile-active neutrino mass differences are $\delta m_{4j}^2=m_4^2-m_j^2 \simeq .9 (eV)^2$, with $\delta m_{4j}^2$ taken from the best fit to neutrino oscillation data\cite{mini13} (see Ref\cite{mini13} for references to earlier experiments), from Eq(\ref{Pue-1}) $\mathcal{P}(\nu_\mu \rightarrow \nu_e)$ is \begin{eqnarray} \label{Pue-2} \mathcal{P}(\nu_\mu \rightarrow \nu_{e}) &=&Re[U_{11}U_{21}[ U_{11}U_{21}+ U_{12}U_{22} e^{-i\delta L}+ U_{13}U_{23} e^{-i\Delta L}+ \nonumber \\ && (U_{14}U_{24}+U_{15}U_{25} +U_{16}U_{26}) e^{-i\gamma L}]+ \nonumber \\ && U_{12}U_{22}[ U_{11}U_{21}e^{-i\delta L}+ U_{12}U_{22} + U_{13}U_{23} e^{-i\Delta L}+ \nonumber\\ && (U_{14}U_{24}+U_{15}U_{25} +U_{16}U_{26}) e^{-i\gamma L}]+ \nonumber \\ && U_{13}U_{23}[ U_{11}U_{21}e^{-i\Delta L}+ U_{12}U_{22}e^{-i\Delta L} \nonumber \\ && + U_{13}U_{23} +(U_{14}U_{24}+U_{15}U_{25} +U_{16}U_{26}) e^{-i\gamma L}]+ \nonumber \\ && (U_{14}U_{24}+U_{15}U_{25}+U_{16}U_{26})[(U_{11}U_{21}+ U_{12}U_{22} \nonumber \\ &&+ U_{13}U_{23})e^{-i\gamma L}+U_{14}U_{24}+U_{15}U_{25}+U_{16}U_{26}]] \; , \end{eqnarray} with $\delta=\delta m_{12}^2/2E,\; \Delta=\delta m_{13}^2/2E,\; \gamma= \delta m_{jk}^2/2E$ (j=1,2,3;k=4,5,6). The neutrino mass differences are $\delta m_{12}^2=7.6 \times 10^{-5}(eV)^2$, $\delta m_{13}^2 = 2.4\times 10^{-3} (eV)^2$; and $\delta m_{jk}^2 (j=1,2,3;k=4,5,6) =0.9 (eV)^2$\cite{mini13}. 
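As a cross-check of the expressions above, $U$ can be built numerically as the product of rotations in Eq(\ref{Uform}) and the oscillation probability evaluated directly from Eq(\ref{Pue-1}). The sketch below is ours, not part of the original calculation: the active mixing angles are inferred from the numerical coefficients in $U_{1j}$, $U_{2j}$ ($s_{13}\simeq0.15$, $s_{12}\simeq0.56$, $\theta_{23}\simeq45^o$), the sterile-sterile rotations are set to the identity, the three sterile masses are taken equal as in this section, and the standard conversion factor for the oscillation phase ($\delta m^2$ in $(eV)^2$, $L$ in km, $E$ in GeV) is assumed.

```python
import numpy as np

def rot(i, j, theta, n=6):
    """n x n rotation O^{ij} mixing states i and j (1-indexed as in the text)."""
    O = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    O[i - 1, i - 1] = O[j - 1, j - 1] = c
    O[i - 1, j - 1], O[j - 1, i - 1] = s, -s
    return O

def U_matrix(th12, th23, th13, alpha, beta, gamma):
    """U = O23 O13 O12 O14 O24 O34 O15 O25 O35 O16 O26 O36
    (the sterile-sterile rotations O45, O46, O56 are set to the identity)."""
    planes = [(2, 3, th23), (1, 3, th13), (1, 2, th12),
              (1, 4, alpha), (2, 4, alpha), (3, 4, alpha),
              (1, 5, beta),  (2, 5, beta),  (3, 5, beta),
              (1, 6, gamma), (2, 6, gamma), (3, 6, gamma)]
    U = np.eye(6)
    for i, j, th in planes:
        U = U @ rot(i, j, th)
    return U

def prob_mue(U, L_over_E):
    """P(nu_mu -> nu_e) from Eq. (Pue-1); L/E in km/GeV, dm2 in (eV)^2.
    The phase dm2*L/(2E) equals 2*1.267*dm2*L/E in these units."""
    m2 = np.array([0.0, 7.6e-5, 2.4e-3, 0.9, 0.9, 0.9])  # mass-squared values
    dm2 = m2[:, None] - m2[None, :]
    phase = np.exp(-1j * 2.0 * 1.267 * dm2 * L_over_E)
    return np.real(np.einsum('i,j,i,j,ij->', U[0], U[0], U[1], U[1], phase))

# Example: alpha = beta = gamma = 9.2 degrees, as for the solid curve in the figure
a = np.radians(9.2)
U = U_matrix(np.arcsin(0.560), np.pi / 4, np.arcsin(0.15), a, a, a)
print(prob_mue(U, 0.5))   # L/E = 0.5 km/GeV
```

With all sterile-active angles set to zero, the sterile block decouples and the code reduces to the 3x3 result, as in the dash-dotted curve of the figure.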
\vspace{3mm} From Eq(\ref{Pue-2}) \begin{eqnarray} \label{Pue-3} \mathcal{P}(\nu_\mu \rightarrow\nu_e) &=& U_{11}^2 U_{21}^2+ U_{12}^2 U_{22}^2+ U_{13}^2 U_{23}^2+ \nonumber \\ && (U_{14}U_{24}+U_{15}U_{25}+U_{16}U_{26})^2 + \nonumber \\ && 2U_{11} U_{21} U_{12} U_{22} \cos\delta L + \nonumber \\ && 2(U_{11} U_{21} U_{13} U_{23}+ U_{12} U_{22} U_{13} U_{23})\cos\Delta L+ \nonumber \\ &&2(U_{14}U_{24}+U_{15}U_{25}+U_{16}U_{26}) \nonumber \\ &&(U_{11} U_{21}+U_{12} U_{22}+U_{13} U_{23})\cos\gamma L \; . \end{eqnarray} \vspace{3mm} Note that $\alpha\simeq 9.2^o$ from a recent analysis of MiniBooNE data, which was used in a recent study of $\mathcal{P}(\nu_\mu \rightarrow\nu_e)$ with one sterile neutrino\cite{lsk14,lsk15}. The figure below shows $\mathcal{P}(\nu_\mu \rightarrow \nu_e)$ with $\alpha=\beta=\gamma= 0^o$, giving the results of a recent 3x3 S-matrix calculation\cite{lsk14-2}. In the figure, for the other curves, the sterile-active mixing angle $\alpha=9.2^o$, while $\beta$ and $\gamma$ are chosen to be $9.2^o$ and $20^o$ to compare the 6x6 to the previous 3x3 results. \newpage Using Eq(\ref{Pue-3}), one finds $\mathcal{P}(\nu_\mu \rightarrow \nu_e)$ for the 6x6 vs 3x3 theories: \vspace{5cm} \begin{figure}[ht] \begin{center} \epsfig{file=Pmue-6x6-9-11-15.eps,height=12cm,width=10cm} \end{center} \caption{$\mathcal{P}(\nu_\mu \rightarrow\nu_e)$ for MINOS(L=735 km), MiniBooNE(L=500m), JHF-Kamioka(L=295 km), and CHOOZ(L=1.03 km).
(a) solid, for $\alpha=\beta=\gamma$=$9.2^o$; (b) dashed, for $\alpha, \beta, \gamma$ =$9.2^o$, $20^o$, $20^o$; (c) dash-dotted, for $\alpha=\beta=\gamma$=$0^o$, giving the 3x3 result.} \end{figure} \vspace{3mm} \newpage \section{Conclusions} From the figure we note that with the small mixing angle, $\alpha=\beta= \gamma$=$9.2^o$, taken from the MiniBooNE analysis for $s_\alpha$, for MINOS, MiniBooNE, and JHF-Kamioka there is a significant difference between our 6x6 and the earlier 3x3 prediction for $\mathcal{P}(\nu_\mu \rightarrow \nu_e)$, given by $\alpha=\beta=\gamma=0^o$. For the larger $20^o$ mixing angles for $\beta$ and $\gamma$, which are not known, there is a much larger difference between the 6x6 and 3x3 theories for these three experimental set-ups. For CHOOZ, however, $\mathcal{P}(\nu_\mu \rightarrow \nu_e)$ is not significantly dependent on the mixing angles $\alpha, \beta, \gamma$ for the values used, and is similar to the 3x3 prediction. There are many different choices for the parameters needed for this study, which we shall investigate in future work. \Large {\bf Acknowledgements} \vspace{3mm} \normalsize This work was carried out while LSK was a visitor at Los Alamos National Laboratory, Group P25. The author thanks Dr. William Louis for information about recent and future neutrino oscillation experiments, and Dr. Terrance Goldman for advice on the mixing angles.
\subsection{An optimal basis of master integrals} It has been shown by Tarasov and Lee~\cite{Tarasov:1996br,Lee:2009dh} that the value of a Feynman integral in $d$ space-time dimensions can be directly related to that of the same integral in $d-2$ or $d+2$ space-time dimensions. This implies that, if all MIs of a given graph are known as a Laurent expansion in any \textsl{even} number of dimensions, $d =2\,n$, \begin{align} &\mathcal{I}_{i}(d;x_{ij}) = \sum_{\alpha=-b}^\infty \, \mathcal{I}_{i}^{(\alpha)}(2\,n;x_{ij}) \, (d-2\,n)^{\alpha}\,, \labbel{eq:ser2n} \end{align} then the coefficients of their series expansions in $d = 4$, $\mathcal{I}_{i}^{(\alpha)}(4;x_{ij})$ in~\eqref{eq:ser4}, can be obtained as linear combinations of the $\mathcal{I}_{i}^{(\alpha)}(2\,n;x_{ij})$. For more details see for example~\cite{Laporta:2004rb,Remiddi:2013joa} and the discussion in Appendix~\ref{App:Dim}. Indeed, changing the basis of MIs changes the form of the matrix $A(d;x_{ij})$ in equation~\eqref{eq:diffeqgen}. An interesting problem is therefore how to define an \textsl{optimal} basis of MIs in order to simplify as much as possible the system~\eqref{eq:sysdeq}. Since we are interested in computing the MIs as a Laurent expansion in $(d-4)$ (or, in general, in $(d-2\,n)$), an obvious simplification would occur if we could decouple some of the differential equations, at least in the considered limit. In particular, given a system of $N$ coupled equations, one could think of classifying the complexity of the latter by determining the \textsl{minimum number} of differential equations that cannot be decoupled in the limit $d \to 4$ (or more generally $d \to 2\,n$). At this point it is useful to clarify more precisely what we mean by \textsl{decoupling} in this context.
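The dimensional-shift relation mentioned above can be made explicit in the simplest case, the one-loop massive tadpole, for which the $d\to d+2$ connection is a purely algebraic consequence of the $\Gamma$-function recursion. The sketch below is ours and uses the standard Euclidean tadpole result $T(d)=\pi^{d/2}\,\Gamma(1-d/2)\,(m^2)^{d/2-1}$:

```python
import sympy as sp

d, m2 = sp.symbols('d m2', positive=True)

# One-loop massive tadpole in d (Euclidean) dimensions, standard result:
#   T(d) = int d^dk / (k^2 + m^2) = pi^(d/2) * Gamma(1 - d/2) * (m^2)^(d/2 - 1)
T = lambda dd: sp.pi**(dd / 2) * sp.gamma(1 - dd / 2) * m2**(dd / 2 - 1)

# Using Gamma(1 - d/2) = (-d/2) * Gamma(-d/2), the shifted integral is
# proportional to the original one: T(d+2) = -(2 pi m2 / d) * T(d),
# so the d and d+2 expansions carry the same information.
shift = sp.gammasimp(T(d + 2) / T(d))
check = (shift + 2 * sp.pi * m2 / d).subs([(d, sp.Rational(7, 2)), (m2, 2)])
assert abs(float(check.evalf())) < 1e-12
print(sp.simplify(shift))
```

For multi-scale integrals the same idea holds, except that the shift relations mix different MIs of the graph rather than giving a simple prefactor.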
Let us consider a $2 \times 2$ coupled system of differential equations\footnote{Again, we neglect the inhomogeneous terms everywhere.} \begin{align} \frac{\partial}{\partial x} \, \vec{\mathcal{I}}(d;x) = A(d;x)\, \vec{\mathcal{I}}(d;x)\,, \end{align} where $\vec{\mathcal{I}}(d;x) = \left( \mathcal{I}_1(d;x), \mathcal{I}_2(d;x) \right)$ is the 2-vector of unknown functions, $A(d;x)$ is a $2 \times 2$ matrix, $d$ are the space-time dimensions and $x$ is a variable the functions depend on\footnote{In the case of Feynman integrals $x$ represents a generic Mandelstam variable.}. Assume now for simplicity that the functions $\mathcal{I}_1(d;x), \mathcal{I}_2(d;x)$ are finite in the limit $d \to 4$ and that the matrix $A(d;x)$ does not contain any explicit poles in $1/(d-4)$. Assume finally that, in the limit $d \to 4$, the matrix $A(d;x)$ has \textsl{non-zero} non-diagonal entries, and therefore that the system is coupled in the limit $d \to 4$. Of course, by expanding the entries of $A(d;x)$ in $(d-4)$ we can write our system as \begin{align} \frac{\partial}{\partial x} \, \vec{\mathcal{I}}(d;x) = A^{(0)}(4;x)\, \vec{\mathcal{I}}(d;x) + (d-4) A^{(1)}(4;x)\, \vec{\mathcal{I}}(d;x) + \mathcal{O}\left( (d-4)^2 \right)\,.
\end{align} It is now clear that if, by any means, we can find two independent solutions to the $2 \times 2$ system \begin{align} \frac{\partial}{\partial x} \, \vec{f}(x) = A^{(0)}(4;x)\, \vec{f}(x)\,, \end{align} say $ \left( v_1(x), v_2(x)\right)$ and $ \left( w_1(x), w_2(x)\right)$, then we can define the new vector $\vec{\mathcal{J}}(d;x)$ through the rotation \begin{equation} \vec{\mathcal{J}}(d;x) = G^{-1}(x)\,\vec{\mathcal{I}}(d;x)\,, \quad \mbox{with} \quad G(x) = \left( \begin{array}{cc} v_1(x) & w_1(x) \\ v_2(x) & w_2(x) \end{array}\right)\,, \end{equation} such that the differential equations satisfied by $\vec{\mathcal{J}}(d;x)$ assume the form \begin{align} \frac{\partial}{\partial x} \, \vec{\mathcal{J}}(d;x) = (d-4) \, G^{-1}(x)\,A^{(1)}(4;x) G(x) \, \vec{\mathcal{J}}(d;x) + \mathcal{O}\left( (d-4)^2 \right)\,, \end{align} i.e. they become trivial in the limit $d \to 4$. The matrix $G(x)$ can of course be \textsl{arbitrarily complicated}, as it contains the solutions of a second order differential equation. In this case we would of course have achieved a decoupling, but at the price of having to solve a coupled system of differential equations, which is not possible in the general case. On the other hand, what we are really interested in is to determine whether a basis of MIs exists such that some of the non-diagonal terms of the matrix $A(d;x)$ become zero in the limit $d \to 4$, and such that this basis can still be reached from our starting basis only through IBPs (i.e. without having to solve a coupled system of differential equations!). What this means in practice is that, if such a basis existed, then the rotation matrix $G$ would assume a very simple form, namely it would contain only rational functions of the external invariants $x_{ij}$ (and of the dimensions $d$).
This new basis would therefore fulfil a system of differential equations where some (or all) of the MIs decouple in the limit $d \to 4$, and still it would be a system of linear differential equations with rational coefficients only. In this respect we note that, for all known cases of MIs which can be integrated in terms of multiple polylogarithms, a change of basis in the sense described above can be found and the system of differential equations can be put in \textsl{triangular} form as $d \to 4$ \begin{align} \frac{\partial}{\partial x_{ij}} \vec{\mathcal{J}}(d;x_{ij}) = T(4;x_{ij}) \vec{\mathcal{J}}(d;x_{ij})+ \mathcal{O}(d-4)\,, \end{align} where $T(4;x_{ij})$ is a triangular matrix and does not depend on the dimensions $d$. From the point of view of the classification outlined above this corresponds to the easiest case, where all equations decouple in the limit $d \to 4$ and, effectively, the problem reduces to a series of independent integrations by quadrature. Finding a basis in this form is often a first step towards a canonical basis in the sense introduced in~\cite{Henn:2013pwa}. We want to stress here that of course all these considerations apply in the very same way for any integer number of dimensions $d \to n$ (even or odd). Unfortunately a change of basis of this kind cannot always be found. Several cases are known where the system cannot be \textsl{completely triangularised} and instead at least two differential equations remain coupled. In these cases MPLs turn out not to be enough for describing the solution and the class of functions must be enlarged to include also elliptic generalisations of the latter~\cite{Broedel:2015hia}. It is unclear whether this will be the end of the story, since cases where three or more coupled equations survive are relatively easy to find, as we will show in the following. 
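The mechanism of removing the $O(1)$ coupling by rotating with a fundamental matrix of the $d \to 4$ system can be made concrete on a toy example. The following sketch uses \texttt{sympy}; the matrices $A^{(0)}$, $A^{(1)}$ and the two solutions used to build $G$ are arbitrary illustrative choices, not tied to any Feynman graph, and \texttt{epsilon} plays the role of $(d-4)$:

```python
import sympy as sp

x, ep = sp.symbols('x epsilon')    # epsilon plays the role of (d - 4)

# O(1) part of the system matrix and an O(epsilon) correction (toy choices)
A0 = sp.Matrix([[0, 1], [1, 0]]) / x
A1 = sp.Matrix([[1, 0], [0, -1]]) / x
A = A0 + ep*A1

# Fundamental matrix of f' = A0 f: its columns are two independent solutions
G = sp.Matrix([[x, 1/x], [x, -1/x]])
assert sp.simplify(G.diff(x) - A0*G) == sp.zeros(2, 2)

# Rotating to J = G^{-1} I, the new system matrix is G^{-1} A G - G^{-1} G'
Atilde = sp.simplify(G.inv()*A*G - G.inv()*G.diff(x))

# The O(1) coupling has been removed: the whole matrix is O(epsilon)
assert Atilde.subs(ep, 0) == sp.zeros(2, 2)
assert sp.simplify(Atilde - ep*G.inv()*A1*G) == sp.zeros(2, 2)
```

The price of this kind of decoupling is that $G$ contains the (possibly very complicated) solutions of the $d \to 4$ system; the point of the discussion above is that, when the IBPs degenerate, a rotation with only rational entries suffices.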
What still appears to be missing is a criterion to determine, given a Feynman graph, what is the minimum number of equations which cannot be decoupled. Together with simplifying as much as possible the problem at hand, this could also give a hint as to which class of functions is required for describing the solution. \section{Reading the IBPs in fixed numbers of dimensions} \labbel{sec:IBPsN} \setcounter{equation}{0} \numberwithin{equation}{section} In order to find a possible working criterion to determine the minimum number of coupled differential equations we should go back and think about how the differential equations are derived. We saw that differentiating a master integral with respect to the external invariants produces new integrals belonging to the same Feynman graph. By using the IBPs one can then reduce these integrals to MIs, ending up with a system of differential equations. If we start with $N$ master integrals we will obtain in general a coupled system of $N$ linear differential equations. The fact that the $N$ differential equations are coupled can be seen, in this respect, as due to the linear independence of the $N$ master integrals in $d$ dimensions. As we already discussed, for any physical application we are interested in computing Feynman integrals as Laurent series in $(d-4)$ or, more generally, in $(d-2\,n)$ with $n \in \mathbb{N}$. Of course, different integrals have different degrees of divergence, i.e. their Laurent expansion starts at different orders in $(d-2\,n)$. For any value of the dimensions, nevertheless, the maximal divergence can be computed in dimensional regularisation and depends only on the topology of the graph under consideration (i.e. on the number of loops, of external legs etc.). We can therefore imagine first generating the IBPs in $d$ dimensions and then expanding them as a Laurent series in $(d-2\,n)$, obtaining in this way a chained set of systems of IBPs, one for every order in $(d-2\,n)$.
It is clear that, by construction, at every order in $(d-2\,n)$, the homogeneous part of each system will be identical, while the inhomogeneous part will contain the previous orders of the expansion (and the sub-topologies, which we will neglect throughout). If we limit ourselves to the first order of the expansion, i.e. the one corresponding to the highest pole in $(d-2\,n)$, the system of equations that we are left with is equivalent to the original system of IBPs where $d$ is fixed to be $d=2\,n$, and corresponds to the sole homogeneous system. Now, it is very well known that upon fixing the number of space-time dimensions in the IBPs to an integer value it may happen that some of the equations degenerate and, in particular, that some of the integrals that used to be linearly independent for generic values of $d$ become linearly dependent on each other. From the point of view of the differential equations satisfied by the master integrals, if some of the integrals were to become linearly dependent in the limit $d \to 2\,n$, one would expect that those masters should not bring any new information in that limit and it should therefore be possible to \textsl{decouple} them from the system of differential equations as $d \to 2 \, n$. Let us try to state this point more precisely. As an example we consider a topology that is reduced to 2 master integrals, which we call $\mathcal{I}_1(d;x)$ and $\mathcal{I}_2(d;x)$, where $d$ are the dimensions and $x$ is a generic Mandelstam variable.
Neglecting the sub-topologies the system of differential equations that they satisfy can be written as \begin{equation} \left\{ \begin{array}{c} \frac{\partial}{\partial \, x}\,\mathcal{I}_1(d;x) = c_{11}(d;x)\,\mathcal{I}_1(d;x) + c_{12}(d;x) \, \mathcal{I}_2(d;x) \\ \\ \frac{\partial}{\partial \, x} \mathcal{I}_2(d;x) = c_{21}(d;x)\,\mathcal{I}_1(d;x) + c_{22}(d;x) \, \mathcal{I}_2(d;x) \end{array} \right.\,, \labbel{eq:sysgen} \end{equation} where the $c_{ij}(d;x)$ are rational functions. Let us now follow the argument above and generate the IBPs fixing $d=2\,n$. Let us assume that, by solving this simplified system, one of the two master integrals becomes linearly dependent on the other one and the new IBPs produce the relation \begin{equation} \mathcal{I}_2(2\,n;x) = b(x)\,\mathcal{I}_1(2\,n;x)\,, \labbel{eq:relgen} \end{equation} where $b(x)$ is a rational function of the Mandelstam variables\footnote{To be precise we should recall that, since the master integrals can be divergent, this relation cannot be seen, in general, as a real relation between the two masters.}. Equation~\eqref{eq:relgen} implies that in $d=2\,n$ one of the two master integrals becomes linearly dependent in the sense of the IBPs. According to the argument above we would therefore expect to be able to decouple the two differential equations in this limit. In order to see this it is useful to ask ourselves how such a relation can emerge from the original $d$-dimensional IBPs. Let us imagine that upon solving the IBPs for generic $d$, we can find a $d$-dimensional relation expressing a given integral of the graph under consideration, say $K(d;x)$, as a linear combination of the two masters and such that \begin{equation} K(d;x) = \frac{1}{d-2\,n} \left( b_1(d;x)\,\mathcal{I}_1(d;x) + b_2(d;x)\,\mathcal{I}_2(d;x) \right)\,, \labbel{eq:relgenibps} \end{equation} with $b(x) = -\,b_1(x)/b_2(x)$ and $\lim_{d \to 2\,n} b_i(d;x) = b_i(x)$, for $i=1,2$.
It is clear that, if this is the case, the IBPs which would generate this identity for generic $d$ would instead generate~\eqref{eq:relgen} once $d$ is fixed to be $d=2\,n$. These relations are precisely what we are looking for. To refer to such relations we will often use, throughout the paper, the notation \begin{equation} b_1(d;x)\,\mathcal{I}_1(d;x) + b_2(d;x)\,\mathcal{I}_2(d;x) = \mathcal{O}(d-2\,n), \end{equation} or equivalently \begin{equation} b_1(x)\,\mathcal{I}_1(d;x) + b_2(x)\,\mathcal{I}_2(d;x) = \mathcal{O}(d-2\,n), \end{equation} where it should be understood that, in general, this does not mean that the combination above is really of order $\mathcal{O}(d-2\,n)$, but simply that it \textsl{becomes zero upon setting $d=2\,n$ in the IBPs.} Note that, of course, using the $b_i(x)$ instead of the $b_i(d;x)$ can only produce corrections of order $\mathcal{O}(d-2\,n)$ due to~\eqref{eq:relgenibps}. We will see many examples of these relations in the sections below. Naively, the fact that only one integral is linearly independent for $d=2\,n$ suggests that the integral itself should satisfy a first order differential equation as $d \to 2\,n$. Finding a basis of master integrals for which this first order equation emerges is equivalent to finding a basis which decouples the system~\eqref{eq:sysgen}.
To this aim let us perform the following rotation of the master integral basis \begin{equation} \mathcal{J}_1(d;x) = b_1(x)\,\mathcal{I}_1(d;x) + b_2(x)\,\mathcal{I}_2(d;x) \,,\qquad \mathcal{J}_2(d;x) = \mathcal{I}_2(d;x)\,.\labbel{eq:rotation} \end{equation} The system~\eqref{eq:sysgen} under this rotation becomes \begin{align} &\frac{\partial}{\partial \, x}\,\mathcal{J}_1(d;x) = \left( c_{11}(d;x) + \frac{b_2(x) c_{21}(d;x) + b_1'(x)}{b_1(x)} \right) \,\mathcal{J}_1(d;x) \nonumber \\ &\quad + \left( b_1(x)c_{12}(d;x) + b_2(x) \left( c_{22}(d;x)-c_{11}(d;x) \right) + b_2'(x) - \frac{b_2(x) \left[ b_2(x) c_{21}(d;x) + b_1'(x) \right]}{b_1(x)}\right) \,\mathcal{J}_2(d;x) \nonumber \\ & \nonumber \\ &\frac{\partial}{\partial \, x}\,\mathcal{J}_2(d;x) = \left( c_{22}(d;x) - \frac{b_2(x)}{b_1(x)} c_{21}(d;x) \right) \mathcal{J}_2(d;x) + \frac{c_{21}(d;x)}{b_1(x)} \mathcal{J}_1(d;x)\,. \labbel{eq:gendecoupling} \end{align} Equations~\eqref{eq:gendecoupling} do not look particularly illuminating at first glance. We claim nevertheless that these equations are precisely what we were looking for. The basis $\mathcal{J}_1(d;x), \mathcal{J}_2(d;x)$ defined in~\eqref{eq:rotation}, in fact, has been chosen in order to exploit the linear dependence of the two master integrals in the limit $d \to 2\,n$. In this limit the IBPs tell us that $\mathcal{J}_1(d;x)$ is by construction suppressed by a factor $(d-2\,n)$ and therefore decouples from the problem. We expect therefore that the differential equation for the latter should decouple in this limit or, in other words, that \begin{equation} \left( b_1(x)c_{12}(d;x) + b_2(x) \left( c_{22}(d;x)-c_{11}(d;x) \right) + b_2'(x) - \frac{b_2(x) \left[ b_2(x) c_{21}(d;x) + b_1'(x) \right]}{b_1(x)}\right) \propto \mathcal{O}(d-2\,n)\,. 
\labbel{eq:gensuppr} \end{equation} If this is true then upon expanding the system of differential equations as Laurent series in $(d-2\,n)$ one can, at every order, first solve the differential equation for $\mathcal{J}_1(d;x)$ by quadrature, and then use this as an input for the second equation. A rigorous mathematical proof of equation~\eqref{eq:gensuppr} is outside the scope of this paper and we will limit ourselves to showing explicitly how this works in practice with several examples of different complexity. The considerations above can be easily generalised to $N$ master integrals $\mathcal{I}_1$,..., $\mathcal{I}_N$. In this case one starts with a system of $N$ coupled differential equations. By solving the IBPs for $d=2\,n$ one can then verify how many of the master integrals become linearly dependent in this limit. Assuming that $N-M$ integrals remain independent, this means that $M$ relations like~\eqref{eq:relgenibps} can be found, say \begin{alignat}{3} &K_1(d;x) &=& \frac{1}{d-2\,n}\left( b_{11}(d;x) \mathcal{I}_1(d;x) + ... + b_{1N}(d;x)\mathcal{I}_N(d;x) \right) \nonumber \\ &...&& \nonumber \\ &K_{M}(d;x) &=& \frac{1}{d-2\,n}\left( b_{M1}(d;x) \mathcal{I}_1(d;x) + ... + b_{MN}(d;x)\mathcal{I}_N(d;x) \right)\,, \labbel{eq:relgenibpsN} \end{alignat} and the $b_{ij}(d;x)$ are as always rational functions of the dimensions and of the Mandelstam variables\footnote{For this to be true the relations~\eqref{eq:relgenibpsN} must be linearly independent in the limit $d \to 2\,n$.}. As for the previous example we will often write these relations as \begin{align} &b_{11}(d;x) \mathcal{I}_1(d;x) + ... + b_{1N}(d;x)\mathcal{I}_N(d;x) = \mathcal{O}(d-2\,n) \nonumber \\ &... \nonumber \\ &b_{M1}(d;x) \mathcal{I}_1(d;x) + ... + b_{MN}(d;x)\mathcal{I}_N(d;x) = \mathcal{O}(d-2\,n)\,, \labbel{eq:genrelN} \end{align} where once more we imply that these combinations become zero upon setting $d=2\,n$ in the IBPs.
As before we define $b_{ij}(x) = \lim_{d \to 2\,n} b_{ij}(d;x)$ and, following the same reasoning, we can then try to rotate the basis of master integrals to \begin{alignat}{3} &\mathcal{J}_1(d;x) &&=\; && b_{11}(x) \mathcal{I}_1(d;x) + ... + b_{1N}(x)\mathcal{I}_N(d;x) \nonumber \\ & .. \nonumber \\ &\mathcal{J}_M(d;x) &&=\; && b_{M1}(x) \mathcal{I}_1(d;x) + ... + b_{MN}(x)\mathcal{I}_N(d;x) \nonumber \\ &\mathcal{J}_{M+1}(d;x) &&=\; && \mathcal{I}_{M+1}(d;x) \nonumber \\ & .. \nonumber \\ &\mathcal{J}_{N}(d;x) &&=\; && \mathcal{I}_{N}(d;x)\,. \labbel{eq:rotationN} \end{alignat} Under the rotation~\eqref{eq:rotationN}, we expect the $M$ integrals $\mathcal{J}_1(d;x)$, ..., $\mathcal{J}_M(d;x)$ to decouple from the remaining independent integrals in the limit $d \to 2\,n$, as in~\eqref{eq:gendecoupling}. One must be careful here about what is meant by decoupling. According to the arguments above, upon the change of basis~\eqref{eq:rotationN}, we expect the system of differential equations to split into two blocks in the limit $d \to 2\,n$, one $M \times M$ and the other $(N-M) \times (N-M)$. This would correspond, order by order in $(d-2\,n)$, to an $M$-th plus an $(N-M)$-th order differential equation, unless for some other reason the two blocks of differential equations internally decouple further in this limit. On the other hand, for the Feynman graphs that we considered so far (see for example Sections~\ref{sec:sun2} and~\ref{sec:ban}), an even stronger claim can be made. In these cases the rotation~\eqref{eq:rotationN} not only splits the system into two blocks, as described above, but it also produces an explicit $(d-2\,n)$ in front of the whole $M \times M$ block originating from relations~\eqref{eq:genrelN}. This explicit overall factor allows one to effectively reduce the problem to the solution of one single $(N-M)$-th order differential equation, plus $M$ integrations by quadrature. The reason for this behaviour is still partly unclear and deserves further study.
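Returning to the $2 \times 2$ case, the coefficients quoted in~\eqref{eq:gendecoupling} can be reproduced by a short symbolic computation. The following \texttt{sympy} sketch keeps the $c_{ij}$ and $b_i$ as generic functions of $x$ (the $d$-dependence of the $c_{ij}$ plays no role in the algebra and is dropped):

```python
import sympy as sp

x = sp.Symbol('x')
c11, c12, c21, c22, b1, b2 = [sp.Function(n)(x) for n in
                              ('c11', 'c12', 'c21', 'c22', 'b1', 'b2')]

A = sp.Matrix([[c11, c12], [c21, c22]])   # system matrix of (eq:sysgen)
B = sp.Matrix([[b1, b2], [0, 1]])         # rotation (eq:rotation)

# J = B I  implies  dJ/dx = (B' + B A) B^{-1} J
Anew = ((B.diff(x) + B*A) * B.inv()).applyfunc(sp.cancel)

db1, db2 = b1.diff(x), b2.diff(x)
# the four entries quoted in (eq:gendecoupling)
assert sp.simplify(Anew[0, 0] - (c11 + (b2*c21 + db1)/b1)) == 0
assert sp.simplify(Anew[0, 1] - (b1*c12 + b2*(c22 - c11) + db2
                                 - b2*(b2*c21 + db1)/b1)) == 0
assert sp.simplify(Anew[1, 0] - c21/b1) == 0
assert sp.simplify(Anew[1, 1] - (c22 - b2*c21/b1)) == 0
```

The same few lines, applied to the $N$-master rotation~\eqref{eq:rotationN}, can be used to inspect the block structure of the rotated system in concrete cases.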
Summarising, the discussion above brings us to the following conclusion. Given a topology with $N$ master integrals which fulfil a set of $N$ coupled differential equations in $d$ space-time dimensions, the study of the IBPs in fixed numbers of dimensions, say $d=2\,n$, provides a tool to determine how many master integrals can be decoupled from the differential equations as $d \to 2\,n$. Of course, the arguments given above are partly oversimplified and we have not provided here any rigorous mathematical proof. The structure of the differential equations can be in general very involved and, instead of embarking on complicated mathematical arguments, we prefer to show explicitly how these ideas can be simply applied to different cases of increasing complexity. In the next section we will start off by considering simple examples where, by fixing the number of dimensions to an even integer value, only one master integral remains linearly independent and therefore the problem can be reduced to the solution of one linear differential equation. We will then move to more interesting cases where, even in fixed numbers of dimensions, more than one master integral remains linearly independent and one cannot avoid the problem of solving higher order differential equations which give rise to more complicated mathematical structures. \section{Explicit examples} \labbel{sec:Ex} \setcounter{equation}{0} \numberwithin{equation}{section} In the previous section we outlined the main ideas behind this paper. We argued that the IBPs might degenerate in the limit of fixed (even) integer numbers of dimensions $d \to 2 \, n$, such that some of the master integrals become effectively linearly dependent on each other. While this is a very well known fact, we argued that this degeneracy, if present, can be used in order to simplify the system of differential equations satisfied by the master integrals. In this section we will present many explicit examples of this simple idea.
We will start by studying the two-loop sunrise graph with one massive and two massless propagators, Section~\ref{sec:sun1}, and a two-loop triangle with three off-shell legs, Section~\ref{sec:triangle}. In both examples there are only two master integrals and, by studying the IBPs in fixed even numbers of dimensions, one relation can be found, allowing us to decouple the differential equations in that limit. We will then consider the case of the two-loop massive sunrise, Section~\ref{sec:sun2}, and of a non-planar two-loop triangle, Section~\ref{sec:crossed}. In both cases not all equations can be decoupled, and a minimal block of two differential equations remains coupled, giving rise to elliptic functions. We will then study the case of a two-loop massive triangle with three master integrals, Section~\ref{sec:triangle2}. Here, similarly to the non-planar two-loop triangle, there are three master integrals. In this case, nevertheless, the differential equations can be completely decoupled and the solution can be written in terms of MPLs. Finally, as a last example, we will move to the three-loop massive banana graph, Section~\ref{sec:ban}. In this case, we will study different possible mass-arrangements of increasing complexity, showing how the number of master integrals changes accordingly, and how our method allows us to easily determine which subset of master integrals can be immediately decoupled from the differential equations. \subsection{The two-loop sunrise with one massive propagator} \labbel{sec:sun1} \setcounter{equation}{0} \numberwithin{equation}{section} Let us start off by considering the case of the two-loop sunrise with one massive and two massless propagators.
We define the following set of integrals belonging to its Feynman graph \begin{align} I(d;n_1,n_2,n_3,n_4,n_5) &= \Sunriseone{p^2} \nonumber \\ &= \int \mathfrak{D}^d k \mathfrak{D}^d l\, \frac{(k \cdot p)^{n_4}(l \cdot p)^{n_5}} {\left( k^2 \right)^{n_1} \left( l^2 \right )^{n_2} \left((k-l+p)^2-m^2 \right)^{n_3}}\,, \labbel{eq:sunr1} \end{align} where $p^2 = s$ is the momentum transfer. The integration measure is defined as \begin{equation} \mathfrak{D}^d k = C(d)\, \frac{d^dk}{(2 \pi)^d}\,, \labbel{eq:measure} \end{equation} and the explicit form of the function $C(d)$ is not relevant for the considerations below. Note that this Feynman graph does not contain any sub-topology. We keep explicit only the dependence on the space-time dimensions $d$ and on the powers of the denominators and scalar products, which will be important for what follows. Performing a usual reduction through IBPs one finds two independent MIs, which can be chosen as \begin{equation} \mathcal{I}_1(d;s) = I(d;1,1,1,0,0)\,, \qquad \mathcal{I}_2(d;s) = I(d;1,1,2,0,0)\,. \labbel{eq:missun1} \end{equation} Using the methods outlined in the previous sections we can now derive the differential equations fulfilled by $\mathcal{I}_1$ and $\mathcal{I}_2$ in the momentum squared $s$. This step can be performed automatically using, for example, Reduze 2~\cite{vonManteuffel:2012np}, and we end up with the following $2 \times 2$ linear system \begin{align} &\frac{d \mathcal{I}_1}{d s} = \frac{(d-3)}{s} \, \mathcal{I}_1 - \frac{m^2}{s}\, \mathcal{I}_2 \nonumber \\ &\frac{d \mathcal{I}_2}{d s} = \frac{(d-3)(3 d - 8)}{2\,m^2} \, \left(\frac{1}{s} - \frac{1}{s-m^2}\right) \mathcal{I}_1 + \left(\frac{2(d-3)}{s-m^2} - \frac{(3d-8)}{2\,s} \right)\, \mathcal{I}_2\,. \labbel{eq:deqsun1} \end{align} As one can immediately see, the equations are coupled for any \textsl{even} value of the dimensions $d$\footnote{On the other hand, the equations become triangular as $d \to 3$.}. 
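The structure of~\eqref{eq:deqsun1} is easy to verify symbolically: both off-diagonal entries survive at $d=4$, while the entry coupling $\mathcal{I}_1$ into the equation for $\mathcal{I}_2$ carries an explicit factor $(d-3)$, so the system becomes triangular at $d=3$. A quick check with \texttt{sympy} (where \texttt{m2} stands for $m^2$):

```python
import sympy as sp

s, m2, d = sp.symbols('s m2 d', positive=True)

# homogeneous system matrix of (eq:deqsun1), dI/ds = A I
A = sp.Matrix([
    [(d - 3)/s, -m2/s],
    [(d - 3)*(3*d - 8)/(2*m2)*(1/s - 1/(s - m2)),
     2*(d - 3)/(s - m2) - (3*d - 8)/(2*s)],
])

# coupled at d = 4: both off-diagonal entries survive
assert sp.simplify(A[0, 1].subs(d, 4)) != 0
assert sp.simplify(A[1, 0].subs(d, 4)) != 0

# triangular at d = 3: the (2,1) entry vanishes with its factor (d-3)
assert sp.simplify(A[1, 0].subs(d, 3)) == 0
```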
It is well known that these integrals can be computed as a series expansion around $d = 4$ in terms of HPLs only, see for example~\cite{Huber:2015bva}. Let us then try and use the ideas outlined in Section~\ref{sec:IBPsN} in order to decouple the differential equations in the limit $d \to 4$. First of all note that, in the limit $ d \to 4$, both master integrals are UV divergent and in particular they both develop a double pole \begin{align} &\mathcal{I}_1(d;s) = \frac{1}{(d-4)^2}\,\mathcal{I}_1^{(-2)}(4;s) + \frac{1}{(d-4)}\,\mathcal{I}_1^{(-1)}(4;s) + \mathcal{O}(1) \\ &\mathcal{I}_2(d;s) = \frac{1}{(d-4)^2}\,\mathcal{I}_2^{(-2)}(4;s) + \frac{1}{(d-4)}\,\mathcal{I}_2^{(-1)}(4;s) + \mathcal{O}(1)\,. \end{align} Moreover it is easy to show that any integral of the form~\eqref{eq:sunr1} can \textsl{at most} develop a double pole in $(d-4)$. Equipped with these considerations, let us now produce the IBPs for this Feynman graph as described above but, instead of solving them keeping the full dependence on the parameter $d$, we can set $d=4$.\footnote{The possibility of solving IBPs for fixed values of $d$ is already implemented in the development version of Reduze 2.} As we discussed in detail in the previous section, this is equivalent to expanding the IBPs in Laurent series, and considering the first of the chained systems of equations obtained, namely the one corresponding to the double pole in $(d-4)$. Following the arguments of the previous section, we would expect to find a degeneracy of the two master integrals in $d=4$, which should then allow us to decouple the two differential equations. As expected the two masters~\eqref{eq:missun1} become linearly dependent \begin{equation} \mathcal{I}_2^{(-2)}(4;s) = \frac{1}{m^2} \mathcal{I}_1^{(-2)}(4;s) \,. \labbel{eq:relsun1d4} \end{equation} As discussed in Section~\ref{sec:IBPsN}, such a relation must come from a corresponding $d$-dimensional IBP.
Indeed, if one considers the original $d$-dimensional system of IBPs and solves it for the two masters, it is easy to find the following relation \begin{align} I(d; 2,1,1,0,0) = - \left(\frac{1}{d-4}\right)\, \frac{(d-3)}{s-m^2} \left(\,(3d-8)\mathcal{I}_1(d;s)-4\,m^2\,\mathcal{I}_2(d;s) \right)\,. \labbel{eq:ibpd4} \end{align} In the limit $d \to 4$ Eq.~\eqref{eq:ibpd4} trivially generates Eq.~\eqref{eq:relsun1d4}. In the notation of Section~\ref{sec:IBPsN} we can write this relation as \begin{equation} (3d-8)\mathcal{I}_1(d;s)-4\,m^2\,\mathcal{I}_2(d;s) = \mathcal{O}(d-4)\,, \end{equation} or, equivalently, keeping also in the right-hand side only terms of $\mathcal{O}(d-4)$, \begin{equation} \mathcal{I}_1(d;s) - m^2\, \mathcal{I}_2(d;s) = \mathcal{O}(d-4)\,, \end{equation} recalling that this does not mean that this linear combination is of order $\mathcal{O}(d-4)$, but that it becomes zero if we fix $d=4$ in the IBPs. In this particular case, since the Feynman graph under consideration does not have any sub-topologies, Eq.~\eqref{eq:relsun1d4} can be seen as a \textsl{real relation} between the highest poles of the two master integrals. This relation, which is naturally derived from the IBPs only, can be easily verified by computing the highest poles of the two master integrals. A very simple exercise gives \begin{align} & \mathcal{I}_1(d;s) = \frac{1}{(d-4)^2} \left( \frac{m^2}{2}\right) + \mathcal{O}\left( \frac{1}{(d-4)} \right)\nonumber \\ & \mathcal{I}_2(d;s) = \frac{1}{(d-4)^2} \left( \frac{1}{2}\right) + \mathcal{O}\left( \frac{1}{(d-4)} \right)\,, \labbel{eq:ressun4} \end{align} in agreement with Eq.~\eqref{eq:relsun1d4}. The overall normalisation of Eq.~\eqref{eq:ressun4} is of course arbitrary and it has to do with the choice for the integration measure~\eqref{eq:measure}. Let us now try and exploit this relation in order to simplify the system of differential equations~\eqref{eq:deqsun1}. 
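Before doing so, note that the degeneracy can be cross-checked directly on the leading poles~\eqref{eq:ressun4}: the combination $(3d-8)\,\mathcal{I}_1 - 4\,m^2\,\mathcal{I}_2$, evaluated on the double-pole coefficients, is explicitly proportional to $(d-4)$. A minimal \texttt{sympy} check (\texttt{m2} stands for $m^2$):

```python
import sympy as sp

d, m2 = sp.symbols('d m2')

# leading 1/(d-4)^2 coefficients of the two masters, eq. (eq:ressun4)
I1pole = m2/2
I2pole = sp.Rational(1, 2)

# the combination (3d-8) I1 - 4 m^2 I2 evaluated on the double poles
expr = (3*d - 8)*I1pole - 4*m2*I2pole

# it vanishes at d = 4 and is explicitly proportional to (d - 4)
assert expr.subs(d, 4) == 0
assert sp.simplify(expr - 3*m2*(d - 4)/2) == 0
```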
We perform the change of basis from the ``standard'' MIs $\mathcal{I}_1(d;s)$, $\mathcal{I}_2(d;s)$, to the new MIs defined as \begin{align} &\mathcal{J}_1(d;s) = \mathcal{I}_1(d;s) - m^2\, \mathcal{I}_2(d;s)\,, \qquad \mathcal{J}_2(d;s) = \mathcal{I}_1(d;s) \,. \labbel{eq:newbasissun1} \end{align} Note that in this case the first of the two masters in~\eqref{eq:newbasissun1} has only a single pole in $(d-4)$ due to the exactness of relation~\eqref{eq:relsun1d4}. As the second master integral we can choose either of the two, and here we simply picked $\mathcal{I}_1(d;s)$. Choosing $\mathcal{I}_2(d;s)$ would indeed lead to equivalent results. Deriving the differential equations for the new basis we find \begin{align} \frac{d\, \mathcal{J}_1}{d\,s} &= \left[ \frac{2}{s-m^2} - \frac{1}{s} + (d-4) \left( \frac{2}{s-m^2} - \frac{3}{2\,s} \right)\right]\,\mathcal{J}_1 \nonumber \\ &+ (d-4) \left[ \frac{3}{2(s-m^2)} -\frac{1}{s} + \frac{3}{2}(d-4) \left( \frac{1}{s-m^2} - \frac{1}{s}\right)\right]\, \mathcal{J}_2 \nonumber \\ \frac{d\,\mathcal{J}_2}{d\,s} &= \frac{1}{s}\,\mathcal{J}_1 + \frac{(d-4)}{s}\,\mathcal{J}_2\,. \labbel{eq:deqsun1dec} \end{align} Equations~\eqref{eq:deqsun1dec} confirm the discussion in Section~\ref{sec:IBPsN} and can therefore be seen as one of the main results of this paper. Let us have a closer look at these two equations and compare them to~\eqref{eq:deqsun1}. We note immediately that the equations are not in canonical form. On the other hand, the matrix of the system does become \textsl{triangular} in the limit $d\to4$, and in particular the master $\mathcal{J}_2$ appears in the differential equation for integral $\mathcal{J}_1$ multiplied by an explicit factor $(d-4)$, as predicted in~\eqref{eq:gensuppr}.
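This change of basis is straightforward to verify with a computer algebra system: rotating the matrix of~\eqref{eq:deqsun1} with the ($s$-independent) matrix encoding $\mathcal{J}_1 = \mathcal{I}_1 - m^2\,\mathcal{I}_2$, $\mathcal{J}_2 = \mathcal{I}_1$ reproduces the structure of~\eqref{eq:deqsun1dec}. A \texttt{sympy} sketch (\texttt{m2} stands for $m^2$):

```python
import sympy as sp

s, m2, d = sp.symbols('s m2 d')

# homogeneous system matrix of (eq:deqsun1), dI/ds = A I
A = sp.Matrix([
    [(d - 3)/s, -m2/s],
    [(d - 3)*(3*d - 8)/(2*m2)*(1/s - 1/(s - m2)),
     2*(d - 3)/(s - m2) - (3*d - 8)/(2*s)],
])

# change of basis (eq:newbasissun1): J1 = I1 - m^2 I2, J2 = I1
B = sp.Matrix([[1, -m2], [1, 0]])

# B does not depend on s, so the rotated matrix is simply B A B^{-1}
Anew = sp.simplify(B*A*B.inv())

# coupling of J2 into dJ1/ds carries an explicit factor (d-4) ...
assert sp.simplify(Anew[0, 1] - (d - 4)*(sp.Rational(3, 2)/(s - m2) - 1/s
                   + sp.Rational(3, 2)*(d - 4)*(1/(s - m2) - 1/s))) == 0
# ... and the second equation matches dJ2/ds = J1/s + (d-4) J2/s
assert sp.simplify(Anew[1, 0] - 1/s) == 0
assert sp.simplify(Anew[1, 1] - (d - 4)/s) == 0
```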
For all practical purposes this is enough, since it means that one can expand the differential equations as Laurent series in $(d-4)$ and, order by order, first solve the differential equation for $\mathcal{J}_1$ by simple quadrature, and then use this result as input for the differential equation for $\mathcal{J}_2$, which can in turn be solved by quadrature. Needless to say, this procedure can in principle be iterated up to any order in $(d-4)$. A comment is in order. In this simple example the relation found by studying the IBPs in $d=4$ can be interpreted as an actual relation between the double poles of the two master integrals. Very often the first poles of arbitrarily chosen MIs are either constants or very simple rational functions. One might therefore naively think that, by evaluating explicitly the poles of a given set of MIs, one could simply look for simple relations among the latter. Such relations would indeed contain only simple rational functions. It is well known, though, that in several cases the poles of a master integral can be represented entirely through its sub-topologies. If this is the case, a relation between the poles of the masters would be useless, as it would not bring any new information as far as the master integrals are concerned. In order to achieve a decoupling one must therefore use a relation which is contained in the IBPs and as such represents an effective \textsl{degeneracy} of the master integrals in the limit $d \to 2\,n$, with $n \in \mathbb{N}$. \subsubsection{Simplification of the differential equations in $d=2$}\labbel{sec:Tri2} It is interesting to see what happens by repeating the same analysis for the graph~\eqref{eq:sunr1} in $d=2$ instead of $d=4$.
Again, an easy analysis of the two master integrals~\eqref{eq:missun1} shows that they both develop a double pole in $d=2$, which in this case is of IR origin \begin{align} &\mathcal{I}_1(d;s) = \frac{1}{(d-2)^2}\,\mathcal{I}_1^{(-2)}(2;s) + \frac{1}{(d-2)}\,\mathcal{I}_1^{(-1)}(2;s) + \mathcal{O}(1) \\ &\mathcal{I}_2(d;s) = \frac{1}{(d-2)^2}\,\mathcal{I}_2^{(-2)}(2;s) + \frac{1}{(d-2)}\,\mathcal{I}_2^{(-1)}(2;s) + \mathcal{O}(1)\,. \end{align} As before, one can easily see that also in this case all integrals of the form~\eqref{eq:sunr1} can develop \textsl{at most} a double pole in $(d-2)$. We can proceed and generate the IBPs for generic $d$ and then expand them as Laurent series, this time in $(d-2)$, starting from $1/(d-2)^2$. We are then left with a series of chained systems of IBPs, each for a different order in $(d-2)$. As in the previous case, we can now focus on solving the first system, corresponding to the double pole. This again is equivalent to considering the original system of IBPs and simply fixing $d=2$. Upon doing this one immediately sees that once more the two MIs become linearly dependent \begin{align} \mathcal{I}_2^{(-2)}(2;s) = \frac{1}{s-m^2}\mathcal{I}_1^{(-2)}(2;s)\,. \labbel{eq:relsun1d2} \end{align} As for the previous case, relation~\eqref{eq:relsun1d2} must come from a corresponding $d$-dimensional relation. Indeed, if one solves the IBPs in $d$ dimensions one finds, among others, the following relation \begin{align} I(d; 1,1,1,1,0) = \left(\frac{1}{d-2}\right)\, \frac{m^2}{3} \left(\,\left[ (2-d)\frac{s}{m^2} + (3-d) \right] \mathcal{I}_1(d;s) - (s-m^2)\mathcal{I}_2(d;s) \right)\,. \labbel{eq:ibpd2} \end{align} It is clear that in the limit $d \to 2$ Eq.~\eqref{eq:ibpd2} generates instead Eq.~\eqref{eq:relsun1d2}. Proceeding as above, we can choose as new basis of MIs \begin{align} &\mathcal{J}_1(d;s) = \mathcal{I}_1(d;s) - (s - m^2) \mathcal{I}_2(d;s)\,, \qquad \mathcal{J}_2(d;s) = \mathcal{I}_1(d;s) \,.
\end{align} Deriving the differential equations for $\mathcal{J}_1$ and $\mathcal{J}_2$ one finds immediately \begin{align} &\frac{d \mathcal{J}_1}{d s} = (d-2) \left[ \frac{2}{s-m^2} - \frac{3}{2\,s} \right] \mathcal{J}_1 + (d-2) \left[ \frac{3(d-2)}{2\,s} - \frac{2}{s-m^2}\right] \mathcal{J}_2 \nonumber \\ &\frac{d \mathcal{J}_2}{d s} = \left[ \frac{1}{s-m^2} - \frac{1}{s} \right] \mathcal{J}_1 + \left[ \frac{(d-2)}{s} - \frac{1}{s-m^2} \right] \,\mathcal{J}_2\,. \end{align} Again, as expected from the arguments of Section~\ref{sec:IBPsN}, we see that the differential equation for $\mathcal{J}_1$ decouples from the one for $\mathcal{J}_2$ in the limit $d\to 2$, respecting the same pattern described in equation~\eqref{eq:gensuppr}. Once more, for all practical purposes this is enough to reduce the solution of the system of differential equations to iterated integrations by quadrature. \subsection{A two-loop triangle with three legs off-shell} \labbel{sec:triangle} Let us consider now a massless two-loop three-point function with three legs off-shell. The problem has been widely studied in the literature, mainly in the context of vector boson pair production~\cite{Birthwright:2004kk,Chavez:2012kn,Gehrmann:2013cxs,Henn:2014lfa}, and it is well known that this Feynman graph can be reduced to two independent MIs, which can be integrated in terms of MPLs only. We define the Feynman graph as follows \begin{align} I(d;n_1,&n_2,n_3,n_4,n_5,n_6,n_7) = \triangleThree{q}{p_1}{p_2} \nonumber \\ &= \int \mathfrak{D}^d k \mathfrak{D}^d l \frac{ \left(l\cdot l \right )^{n_5} \left( k \cdot p_2\right)^{n_6} \left( l \cdot p_2 \right)^{n_7}} { \left( k^2 \right)^{n_1} \left( (k-l)^2 \right)^{n_2} \left( (l-p_1)^2 \right )^{n_3} \left( (k-p_1-p_2)^2 \right )^{n_4} }\,, \labbel{trian1} \end{align} where $p_1^2 = m_1^2$, $p_2^2=m_2^2$ and $q^2 = (p_1+p_2)^2 = s$.
We used Reduze 2 in order to reduce this graph to two independent MIs \begin{equation} \mathcal{I}_1(d;s,m_1^2,m_2^2) = I(d;1,1,1,1,0,0,0)\,,\quad \mathcal{I}_2(d;s,m_1^2,m_2^2) = I(d;2,1,1,1,0,0,0)\,. \end{equation} We can then proceed and derive the differential equations for these two MIs. As always we neglect the sub-topologies throughout. In this particular case the latter are simple two-loop corrections to massless two-point functions which have been known analytically for a very long time. The homogeneous part of the differential equations in the momentum transfer $s$ reads \begin{align} & P(s,m_1^2,m_2^2)\, \frac{d\,\mathcal{I}_1}{d\,s} = \frac{(d-4)(m_1^2-m_2^2)^2 + \left( (3d-8)m_1^2 - 3(d-4)m_2^2\right)s + 2(d-4)s^2}{2s}\, \mathcal{I}_1 + 2\,s\,m_1^2\,\mathcal{I}_2 \\ & P(s,m_1^2,m_2^2)\,\frac{d\,\mathcal{I}_2}{d\,s} = \frac{(10-3\,d)\left( (d-3)(m_1^2-m_2^2) + (2d-7)\,s \right)}{2s} \,\mathcal{I}_1 \nonumber \\ &\qquad \qquad \qquad \quad \;\;\, + \frac{(d-6) (m_1^2-m_2^2)^2 + \left( (22-5d)m_1^2 + (d+2)m_2^2 \right)\,s -2(d-2)\,s^2 }{2s} \, \mathcal{I}_2\,, \labbel{eq:deqtri} \end{align} where we defined the polynomial \begin{equation} P(s,m_1^2,m_2^2) = m_1^4+(s-m_2^2)^2-2 m_1^2(s+m_2^2)\,. \end{equation} The equations are coupled in the limit $d \to 4$. Again, as for the sunrise studied in Section~\ref{sec:sun1}, the integrals can develop at most a double pole in $(d-4)$. Instead of performing a complete Laurent expansion of the IBPs, we generate them and then fix explicitly $d=4$ before solving them. This is enough to check whether the two MIs degenerate in this limit. By solving the IBPs one finds that this is precisely the case and the following relation is extracted \begin{align} \mathcal{I}_1(d;s,m_1^2,m_2^2) +s\,\mathcal{I}_2(d;s,m_1^2,m_2^2) = \mathcal{O}(d-4)\,, \labbel{eq:trid4} \end{align} where again we used the notation introduced above, indicating that the combination~\eqref{eq:trid4} becomes zero if we fix $d=4$ in the IBPs.
Of course, also in this case, if we had expanded the IBPs as Laurent series starting from the double pole, relation~\eqref{eq:trid4} could have been interpreted as a relation between the double poles of the two master integrals \begin{align} \mathcal{I}_1^{(-2)}(4;s,m_1^2,m_2^2) +s\,\mathcal{I}_2^{(-2)}(4;s,m_1^2,m_2^2) = 0\,. \end{align} Note, nevertheless, that this time the relation is \textsl{not exact}, differently from~\eqref{eq:relsun1d4}, since the sub-topologies might in general contribute, modifying~\eqref{eq:trid4}. Eq.~\eqref{eq:trid4} is in any case sufficient to decouple the homogeneous part of the differential equations. We proceed as above and define the new basis \begin{align} \mathcal{J}_1(d;s,m_1^2,m_2^2)=\mathcal{I}_1(d;s,m_1^2,m_2^2) +s\,\mathcal{I}_2(d;s,m_1^2,m_2^2) \,, \quad \mathcal{J}_2(d;s,m_1^2,m_2^2)=\mathcal{I}_1(d;s,m_1^2,m_2^2)\,. \labbel{eq:basistri2} \end{align} Deriving the differential equations satisfied by~\eqref{eq:basistri2} we find \begin{align} & P(s,m_1^2,m_2^2)\, \frac{d \mathcal{J}_1}{d\,s} = \frac{ (d-4)(m_1^2-m_2^2)^2 + \left( (22-5d)m_1^2 + (d-2)m_2^2\right)s - 2(d-3)s^2 }{2s}\,\mathcal{J}_1 \nonumber \\ &\qquad \qquad \qquad \quad \;\;\, -\frac{(d-4)\,\left( 3(d-5) m_1^2 + (11-3d)m_2^2 - 3(7-2d)s\right)}{2}\,\mathcal{J}_2 \nonumber \\ & P(s,m_1^2,m_2^2)\, \frac{d \mathcal{J}_2}{d\,s} = 2 \, m_1^2\,\mathcal{J}_1 + \frac{(d-4)(s+m_1^2-m_2^2)(2\,s+m_1^2-m_2^2)}{2\,s}\,\mathcal{J}_2\,. \end{align} Again, as expected, we see that the differential equation for $\mathcal{J}_1$ contains an explicit factor $(d-4)$ multiplying the second integral $\mathcal{J}_2$. The result is consistent with the general structure described in Section~\ref{sec:IBPsN}. Once more we want to stress that, as expected, in this case all MIs can be integrated in terms of MPLs only.
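To make the iterative solution fully explicit, it can be useful to display the generic structure of a $2 \times 2$ system of the type of equation~\eqref{eq:gensuppr}; the coefficients $a$, $b$, $c$, $e$ and the inhomogeneous terms $R^{(k)}$ below are schematic placeholders and are not meant to reproduce literally the rational functions appearing above
\begin{align*}
\frac{d \mathcal{J}_1}{d s} = a\, \mathcal{J}_1 + (d-4)\, b\, \mathcal{J}_2\,, \qquad \frac{d \mathcal{J}_2}{d s} = c\, \mathcal{J}_1 + e\, \mathcal{J}_2\,.
\end{align*}
Expanding $\mathcal{J}_i = \sum_k (d-4)^k\, \mathcal{J}_i^{(k)}$, at every order in $(d-4)$ the first equation closes on known quantities
\begin{align*}
\frac{d \mathcal{J}_1^{(k)}}{d s} - a_0\, \mathcal{J}_1^{(k)} = R^{(k)}\,, \qquad a_0 = a \Big|_{d=4}\,,
\end{align*}
where $R^{(k)}$ collects the lower orders of the expansion (and the sub-topologies, which we neglect), and is solved by quadrature
\begin{align*}
\mathcal{J}_1^{(k)}(s) = e^{\int^s d s'\, a_0(s')} \left( c_k + \int^s d s'\, e^{-\int^{s'} d s''\, a_0(s'')}\, R^{(k)}(s') \right)\,,
\end{align*}
with $c_k$ an integration constant. The equation for $\mathcal{J}_2^{(k)}$ has the same first-order linear form, with $c\,\mathcal{J}_1^{(k)}$ contributing to its inhomogeneous term, and is solved by the same formula.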
\subsection{The two-loop massive sunrise} \labbel{sec:sun2} In the previous sections we considered two simple examples of $2 \times 2$ systems where the two equations could be decoupled in the limit $d \to 4$, such that the problem could always be reduced to integrations by quadrature. As we already discussed, this is not always possible, and the first known case where at least two differential equations remain coupled is the two-loop massive sunrise. The two-loop massive sunrise graph is defined as follows \begin{align} I(d;n_1,n_2,n_3,n_4,n_5) &= \Sunrisetwo{p} \nonumber \\ &= \int \mathfrak{D}^d k \mathfrak{D}^d l\, \frac{(k \cdot p)^{n_4}(l \cdot p)^{n_5}} {\left( k^2-m_1^2 \right)^{n_1} \left( l^2 -m_2^2\right )^{n_2} \left((k-l+p)^2-m_3^2 \right)^{n_3}}\,. \labbel{eq:sunr3} \end{align} This integral has been studied widely in the literature and in particular a lot of attention has been devoted to the differential equations that it fulfils. In the general case where all three masses assume different values, a normal reduction through IBPs shows that all integrals can be expressed as linear combinations of 4 independent MIs, which can be chosen to be \begin{align} &\mathcal{I}_1(d;s) = I(d;1,1,1,0,0)\,, \quad \mathcal{I}_2(d;s) = I(d;2,1,1,0,0)\,, \nonumber \\ &\mathcal{I}_3(d;s) = I(d;1,2,1,0,0)\,, \quad \mathcal{I}_4(d;s) = I(d;1,1,2,0,0)\,. \labbel{eq:missunr3} \end{align} In~\cite{Caffo:1998du} it was shown that these integrals fulfil a coupled system of 4 linear first order differential equations in $d$ dimensions. The system remains coupled in the limits $d \to 2\,n$, where $n \in \mathbb{N}$. It was later shown in~\cite{MullerStach:2012mp}, using algebraic geometry methods (and as such \textsl{a priori} orthogonal to the IBPs), that the scalar integral $\mathcal{I}_1(d;s)$ satisfies a second-order Picard-Fuchs differential equation in $d=2$.
This suggested the possibility of finding a proper change of basis of MIs, in the sense of the IBPs, such that two of the four differential equations satisfied by the latter would decouple in the limit $d \to 2$. Since the four MIs in~\eqref{eq:missunr3} are \textsl{finite} in $d=2$, it appeared natural to try and obtain the decoupling of the differential equations by finding \textsl{new relations} among the MIs, valid strictly only for $d=2$. Such relations can be found using the so-called \textsl{Schouten Identities} introduced in~\cite{Remiddi:2013joa}. In that reference the Schouten identities are defined and the case of the two-loop sunrise with different masses is worked out in detail. It is shown that, as expected, in $d=2$ only two master integrals are linearly independent. This made it possible to recover the second order differential equation found in reference~\cite{MullerStach:2012mp} in a completely independent manner. In this section we will show that those results can be recovered even more easily using the methods described in this paper, namely by solving the IBPs for the massive sunrise in $d=2$. The Schouten identities can be regarded as a tool for extracting this piece of information from the IBPs and are, in this respect, equivalent to the study of the IBPs in a fixed number of dimensions. We will show another example of this equivalence in Appendix~\ref{App:Sch}. Since the algebra in this case is rather heavy due to the large number of scales, we will only report the result of the solution of the IBPs in $d=2$, referring to~\cite{Remiddi:2013joa} for their use to simplify the system of differential equations. As already discussed above, solving the system with $d=2$ is in general easier and, as expected, two of the four MIs degenerate, leaving only two linearly independent MIs.
By choosing as MIs $\mathcal{I}_1(2;s)$ and $\mathcal{I}_2(2;s)$, we find the following additional relations (as everywhere else we neglect the sub-topologies for simplicity) \begin{align} m_2^2\,&P(s,m_1^2,m_2^2,m_3^2)\,\mathcal{I}_3(2;s) = (m_1^2 - m_2^2) (m_1^2 + m_2^2 - m_3^2 - s) \, \mathcal{I}_1(2;s) \nonumber \\ &+ m_1^2 \left( m_1^4 - 3 m_2^4 + 2 m_1^2 (m_2^2 - m_3^2 - s) + (m_3^2 - s)^2 + 2 m_2^2 (m_3^2 + s) \right) \mathcal{I}_2(2;s) \nonumber \\ & \nonumber \\ m_3^2\,&P(s,m_1^2,m_2^2,m_3^2)\,\mathcal{I}_4(2;s) = (m_1^2 - m_3^2) (m_1^2 - m_2^2 + m_3^2 - s) \mathcal{I}_1(2;s) \nonumber \\ &+ m_1^2 \left( m_1^4 + m_2^4 - 3 m_3^4 + 2 m_2^2 (m_3^2 - s) + 2 m_3^2 s + s^2 - 2 m_1^2 (m_2^2 - m_3^2 + s) \right) \mathcal{I}_2(2;s)\,, \labbel{eq:relsunr3} \end{align} where we defined the polynomial $$P(s,m_1^2,m_2^2,m_3^2) = -3 m_1^4 + m_2^4 + (m_3^2 - s)^2 - 2 m_2^2 (m_3^2 + s) + 2 m_1^2 (m_2^2 + m_3^2 + s)\,.$$ As we discussed in Section~\ref{sec:IBPsN}, we expect such relations to come from $d$-dimensional IBPs with an overall factor $1/(d-2)$. Indeed, by studying the reduction to MIs in $d$ dimensions it is easy to find the following two relations \begin{align} \mathcal{O}(d-2) = \frac{1 }{3} & \Big\{ \left[ (d-3) (2 m_1^2 - m_2^2 - m_3^2) - (d - 2) s\right] \, \mathcal{I}_1(d;s)\, +2\,m_1^2(s-m_1^2) \mathcal{I}_2(d;s) \nonumber \\ &+ m_2^2 (-3 m_1^2 + m_2^2 + 3 m_3^2 - s) \mathcal{I}_3(d;s) + m_3^2 (-3 m_1^2 + 3 m_2^2 + m_3^2 - s)\,\mathcal{I}_4(d;s) \Big\}\nonumber \\&\nonumber \\ \mathcal{O}(d-2) = \frac{1 }{3} & \Big\{ \left[ (d-3) (m_1^2 - 2m_2^2 + m_3^2) - (d - 2) s\right] \, \mathcal{I}_1(d;s)\, -2\,m_2^2(s-m_2^2) \mathcal{I}_3(d;s) \nonumber \\ &+ m_1^2 (-m_1^2 + 3m_2^2 - 3 m_3^2 + s) \mathcal{I}_2(d;s) + m_3^2 (-3 m_1^2 + 3 m_2^2 - m_3^2 + s)\,\mathcal{I}_4(d;s) \Big\}\,. \labbel{eq:relsunr3ibps} \end{align} Relations~\eqref{eq:relsunr3ibps} can be compared with the corresponding formulas (3.14) and (3.15) of~\cite{Remiddi:2013joa}.
It is easy to see that they are identical in the limit $d \to 2$, the only difference being the absence of the terms coming from the sub-topologies, which we are neglecting here. These two relations (and in particular their limiting value as $d \to 2$) can be used, as described in Section~\ref{sec:IBPsN}, in order to decouple two of the four differential equations of the two-loop massive sunrise graph, by choosing as new basis of master integrals \begin{align} &\mathcal{J}_1(d;s) = \mathcal{I}_1(d;s)\,,\quad \mathcal{J}_2(d;s) = \mathcal{I}_2(d;s)\nonumber \\&\nonumber \\ &\mathcal{J}_3(d;s) = - (2 m_1^2 - m_2^2 - m_3^2) \, \mathcal{I}_1(d;s)\, +2\,m_1^2(s-m_1^2) \mathcal{I}_2(d;s) \nonumber \\ &+ m_2^2 (-3 m_1^2 + m_2^2 + 3 m_3^2 - s) \mathcal{I}_3(d;s) + m_3^2 (-3 m_1^2 + 3 m_2^2 + m_3^2 - s)\,\mathcal{I}_4(d;s) \nonumber \\&\nonumber\\ &\mathcal{J}_4(d;s) = - (m_1^2 - 2m_2^2 + m_3^2) \, \mathcal{I}_1(d;s)\, -2\,m_2^2(s-m_2^2) \mathcal{I}_3(d;s) \nonumber \\ &+ m_1^2 (-m_1^2 + 3m_2^2 - 3 m_3^2 + s) \mathcal{I}_2(d;s) + m_3^2 (-3 m_1^2 + 3 m_2^2 - m_3^2 + s)\,\mathcal{I}_4(d;s)\,. \end{align} We do not give the explicit form of the differential equations, referring to~\cite{Remiddi:2013joa} for further details. In comparing, note that the basis presented here differs from the one in~\cite{Remiddi:2013joa} by the absence of sub-topologies and by terms of order $\mathcal{O}(d-2)$. Furthermore we want to stress, in relation to the discussion in Section~\ref{sec:IBPsN}, that using this basis produces an overall factor $(d-2)$ in front of the two differential equations for $\mathcal{J}_3(d;s)$ and $\mathcal{J}_4(d;s)$. This implies that one has, at every order in $(d-2)$, only one second order differential equation (needed to solve the block of $\mathcal{J}_1(d;s)$ and $\mathcal{J}_2(d;s)$), plus two integrations by quadrature (required to determine $\mathcal{J}_3(d;s)$ and $\mathcal{J}_4(d;s)$).
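Schematically, and neglecting as always the sub-topologies, the new basis casts the homogeneous system in the block form
\begin{align*}
\frac{d}{d s} \begin{pmatrix} \mathcal{J}_1 \\ \mathcal{J}_2 \\ \mathcal{J}_3 \\ \mathcal{J}_4 \end{pmatrix} = \begin{pmatrix} B_{11} & B_{12} & B_{13} & B_{14} \\ B_{21} & B_{22} & B_{23} & B_{24} \\ (d-2)\,C_{31} & (d-2)\,C_{32} & (d-2)\,C_{33} & (d-2)\,C_{34} \\ (d-2)\,C_{41} & (d-2)\,C_{42} & (d-2)\,C_{43} & (d-2)\,C_{44} \end{pmatrix} \begin{pmatrix} \mathcal{J}_1 \\ \mathcal{J}_2 \\ \mathcal{J}_3 \\ \mathcal{J}_4 \end{pmatrix}\,,
\end{align*}
where the entries $B_{ij}$ and $C_{ij}$ stand for rational functions of $s$ and $d$ whose explicit form we do not reproduce here. At every order in $(d-2)$ the overall factor $(d-2)$ pushes the right-hand sides of the last two equations to the previous order of the expansion, so that $\mathcal{J}_3$ and $\mathcal{J}_4$ are obtained by quadrature, while the remaining $2 \times 2$ block for $\mathcal{J}_1$ and $\mathcal{J}_2$ is solved through a single second order differential equation.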
\subsection{A two-loop non-planar crossed vertex} \labbel{sec:crossed} As a further application, let us consider a two-loop non-planar crossed vertex with two massive propagators. This graph is topologically completely unrelated to the two-loop sunrise and was studied thoroughly in~\cite{Aglietti:2007as}. There it was shown that it can be reduced to three MIs, which would therefore be expected to satisfy a system of three coupled differential equations. In~\cite{Aglietti:2007as} a basis of MIs was found such that one of the three differential equations decouples from the other two in the limit $d \to 4$. This effectively reduced the problem to that of solving, for every order in $(d-4)$, a second order differential equation, plus an integration by quadrature for the third MI. In this section we would like to study this Feynman graph with our method and show that the decoupling found in~\cite{Aglietti:2007as} also follows from a degeneracy of the master integrals in $d=4$ which can be read off directly from the IBPs. Following~\cite{Aglietti:2007as} we define the Feynman graph as follows \begin{align} I(&d;n_1,n_2,n_3,n_4,n_5,n_6,n_7) = \trianglecross{q}{p_1}{p_2} \nonumber \\ &=\int \mathfrak{D}^d k\, \mathfrak{D}^d l\, \frac{(\,k \cdot p_2)^{n_7}} {(k^2-m^2)^{n_1} (l^2-m^2)^{n_2} \left((k-p_1)^2\right)^{n_3} \left((l-p_2)^2\right)^{n_4} \left((k-l-p_1)^2\right)^{n_5} \left((k-l+p_2)^2\right)^{n_6}}\,, \end{align} with $p_1^2 = p_2^2 = 0$ and $(p_1+p_2)^2 = s$. It is easy to verify that this topology can be reduced to 3 MIs, for example \begin{align} \mathcal{I}_1(d;s) = I(d;1,1,1,1,1,1,0)\,, \quad \mathcal{I}_2(d;s) = I(d;2,1,1,1,1,1,0)\,, \quad \mathcal{I}_3(d;s) = I(d;1,1,2,1,1,1,0)\,. \labbel{eq:miscrossed} \end{align} Let us derive the differential equations in the momentum transfer $s$, neglecting as everywhere else all sub-topologies.
We get \begin{align} \frac{d\,\mathcal{I}_1}{d\,s} &= \frac{(d-6)}{s}\mathcal{I}_1 - \frac{2\,m^2}{s}\mathcal{I}_2\,, \nonumber \\ & \nonumber \\ \frac{d\,\mathcal{I}_2}{d\,s} &= \frac{(d-5)(2d-9)(s-4m^2)}{s(s-m^2)(s+8m^2)}\,\mathcal{I}_1 + \frac{14 (d-4) m^4 - (5 d-13) m^2 s - 2 s^2}{s(s-m^2)(s+8m^2)}\,\mathcal{I}_2 \nonumber \\ & + \frac{2 \,(d-4) \,m^2}{s(s+8\,m^2)}\,\mathcal{I}_3\,, \nonumber \\ & \nonumber \\ \frac{d\,\mathcal{I}_3}{d\,s} &= \frac{2(d-5)(2d-9)(s+2m^2)}{s(s-m^2)(s+8m^2)}\,\mathcal{I}_1 + \frac{2 m^2\left( (24 - 5 d) m^2 + (21 - 4 d) s \right) }{s(s-m^2)(s+8m^2)}\,\mathcal{I}_2 \nonumber \\ & - \frac{2 (3 d-8) m^2 + (d-3) s}{s(s+8\,m^2)}\,\mathcal{I}_3 \,. \labbel{eq:deqcrossed} \end{align} Looking at these equations we see immediately that $\mathcal{I}_3$ is already decoupled from the other two in the limit $d \to 4$. This means that, with this basis, the problem is reduced to that of solving, at every order in $(d-4)$, a coupled system for $\mathcal{I}_1$ and $\mathcal{I}_2$. With the explicit solution for the latter, one can then obtain $\mathcal{I}_3$ by quadrature from its differential equation. It would then be interesting to know whether this decoupling is also due to a degeneracy of the MIs in $d=4$. Moreover it would be even more interesting to verify whether a new basis could be found, for which the differential equations become completely triangular as $d \to 4$, reducing even further the complexity of the problem. Following the recipe described above, we can try and solve the IBPs for this Feynman graph for $d=4$. A word of caution is required here.
The three MIs selected above have Laurent expansions in $(d-4)$ which start at different orders, in particular one can easily find (for example using sector decomposition~\cite{Borowka:2015mxa}) that the first two masters are finite, while the third develops a cubic pole \begin{align} \mathcal{I}_1(d\to4;s) = \mathcal{O}(1)\,,\quad \mathcal{I}_2(d\to4;s) = \mathcal{O}(1)\,, \quad \mathcal{I}_3(d\to4;s) = \mathcal{O} \left( \frac{1}{(d-4)^3} \right)\,. \label{eq:polescrossed} \end{align} Nevertheless this poses no practical obstacle to the applicability of the method presented in this paper. As we already discussed in general, by expanding the system of IBPs in Laurent series around $d=4$, we will get, at every order in $(d-4)$, an independent linear system whose homogeneous part (i.e. the one containing the order of the MIs under consideration) has always the same form, while the non-homogeneous part will of course change and, in particular, depend on the previous orders of the expansion (and on the sub-topologies, which we neglect). What we are interested in is, indeed, only the homogeneous part of this system. Fixing $d=4$ is therefore enough in order to determine whether, for any order of the expansion, the MIs become linearly dependent. Upon doing this we find only one relation among the three masters, which reads \begin{equation} \mathcal{I}_1(d;s) + (5\,m^2+s)\mathcal{I}_2(d;s) + 3\,m^2\,\mathcal{I}_3(d;s) = \mathcal{O}(d-4)\,, \labbel{eq:relcrossed1} \end{equation} where, as always, we mean that this combination becomes zero when we set $d=4$ in the IBPs. Equivalently, one can also proceed in a more formal way, expanding all IBPs in Laurent series starting from the triple pole up to the finite piece, and supplementing the piece of information on the highest poles of the MIs~\eqref{eq:polescrossed}.
Upon doing this, one obtains four chained systems of IBPs (one for every order in $(d-4)$), which can be solved bottom-up starting from the one corresponding to the highest pole. Since the first two masters are finite, the first three systems give no information on the latter, while the fourth system (corresponding to the finite piece of the MIs) produces the relation \begin{equation} \mathcal{I}_1^{(0)}(4;s) + (5\,m^2+s)\mathcal{I}_2^{(0)}(4;s) + 3\,m^2\,\mathcal{I}_3^{(0)}(4;s) + \frac{2}{m^2} I^{(-1)}(4;1,1,1,1,1,1,1)=0\,. \labbel{eq:relcrossed2} \end{equation} Indeed, relations~\eqref{eq:relcrossed1} and~\eqref{eq:relcrossed2} are identical up to the presence of the previous order in the expansion of the integral $I(d;1,1,1,1,1,1,1)$. If we had solved the IBPs in $d$ dimensions, this integral would of course have been expressed in terms of the three masters~\eqref{eq:miscrossed}. Solving the system in $d=4$ instead does not allow one to express this integral in terms of the other three, but this comes as no surprise and can be very well understood in terms of the degeneracy of the system of IBPs in this limit. By studying explicitly the integral $I(d;1,1,1,1,1,1,1)$ it is easy to see that it is also finite in $d=4$, namely $$I(d\to4;1,1,1,1,1,1,1) = \mathcal{O}(1)\,, \quad \longrightarrow \quad I^{(-1)}(4;1,1,1,1,1,1,1) = 0\,.$$ With this piece of information one recovers again relation~\eqref{eq:relcrossed1}, which was found by simply solving the system of IBPs in $d=4$. Since only one relation has been found, which moreover involves all three masters $\mathcal{I}_1$, $\mathcal{I}_2$, and $\mathcal{I}_3$, we have no way to decouple more than one MI from the system. What we mean here is that, since in system~\eqref{eq:deqcrossed} only the differential equations for $\mathcal{I}_1$ and $\mathcal{I}_2$ are coupled, if we had found one relation but involving only $\mathcal{I}_1$ and $\mathcal{I}_2$, we could have used it to decouple this block of the system.
An example of this is given in Section~\ref{sec:triangle2}. This is not the case and we therefore expect the minimal number of coupled integrals in $d=4$ to be two, giving rise to a second order differential equation, as for the case of the two-loop massive sunrise, see Section~\ref{sec:sun2}. As an exercise, we can also try to change basis in this case, exploiting the piece of information found in~\eqref{eq:relcrossed1}. We expect to end up with a new system of differential equations, where nevertheless again two out of three equations are coupled as $d \to 4$ (and as such practically equivalent to~\eqref{eq:deqcrossed}), showing that the system cannot be further simplified. Let us introduce the new basis \begin{align} \mathcal{J}_1(d;s) = \mathcal{I}_1(d;s)\,,\quad \mathcal{J}_2(d;s) = \mathcal{I}_2(d;s)\,, \quad \mathcal{J}_3(d;s) = \mathcal{I}_1(d;s) + (5\,m^2+s)\mathcal{I}_2(d;s) + 3\,m^2\,\mathcal{I}_3(d;s)\,. \end{align} Deriving the differential equations and neglecting all sub-topologies we get \begin{align} \frac{d\, \mathcal{J}_1}{d\,s} &= \frac{(d-6)}{s}\mathcal{J}_1 - \frac{2\,m^2}{s}\mathcal{J}_2 \nonumber \\ & \nonumber \\ \frac{d\, \mathcal{J}_2}{d\,s} &= \frac{2(d-4)(s-m^2)+3(d-5)(2d-9)(s-4m^2)}{3s(s-m^2)(s+8m^2)}\,\mathcal{J}_1 \nonumber \\ &+ \frac{52(d-4)m^4-(23d-71)m^2\,s-2(d-1)s^2}{3\,s(s-m^2)(s+8m^2)}\,\mathcal{J}_2 + \frac{2 \,(d-4) \,m^2}{s(s+8\,m^2)}\,\mathcal{J}_3 \nonumber \\ & \nonumber \\ \frac{d\, \mathcal{J}_3}{d\,s} &= \frac{(d-4) (6 d-29)}{9 m^2 s} \mathcal{J}_1 + \frac{(d-4)(s-10m^2)}{9 m^2 s} \mathcal{J}_2 - \frac{(d-1) \mathcal{J}_3}{3\, s}\,, \nonumber \\ \labbel{eq:deqcrossed2} \end{align} which is again a system of three differential equations, two of which remain coupled in the limit $d \to 4$, giving rise to a second order differential equation for one of the two coupled masters.\footnote{In~\cite{Aglietti:2007as} it was shown that the homogeneous part of the second order differential equation satisfied by the scalar master
integral $\mathcal{I}_1(d;s)$ is equivalent to that of the two-loop massive sunrise with equal masses.} We note that the new system~\eqref{eq:deqcrossed2}, compared with the previous one~\eqref{eq:deqcrossed}, has a slightly different structure. As for the previous cases that we analysed, once we switch to the new basis defined through the IBPs degeneracy~\eqref{eq:relcrossed1}, the differential equation for the new master $\mathcal{J}_3$ develops an explicit factor $(d-4)$ in front of the other two masters $\mathcal{J}_1$ and $\mathcal{J}_2$, as predicted in equation~\eqref{eq:gensuppr}. \subsection{A two-loop massive triangle with three master integrals} \labbel{sec:triangle2} Before moving to a three-loop example, let us try and see what happens in a case similar to the one studied above, i.e. a Feynman graph reduced to three master integrals, but where the system of differential equations can be completely triangularised as $d \to 4$. Let us consider the following two-loop massive triangle \begin{align} I(d;n_1,&n_2,n_3,n_4,n_5,n_6,n_7) = \triangleMass{P}{p_1}{p_2} \nonumber \\ &= \int \mathfrak{D}^d k \mathfrak{D}^d l \frac{ \left(k\cdot p_1 \right )^{n_5} \left( l \cdot p_1\right)^{n_6} \left( l \cdot q \right)^{n_7}} { \left( l^2 - m^2\right)^{n_1} \left( (k-l)^2 \right)^{n_2} \left( (k-p_1)^2 -m^2\right )^{n_3} \left( (k-p_1-p_2)^2-m^2 \right )^{n_4} }\,, \labbel{trian2} \end{align} with two legs off-shell, namely $P^2 = (p_1+p_2)^2 = s$, $p_1^2 = 0$, $p_2^2 = q^2$. This graph has been studied in the context of the QCD corrections to $H \to Z\gamma$ in~\cite{Bonciani:2015eua,Gehrmann:2015dua}. Similarly to our previous example it is reduced to three master integrals, such that we are dealing with a system of three differential equations.
We start from an arbitrarily chosen basis of master integrals \begin{align} \mathcal{I}_1(d;s,q^2) = I(d;1,1,&1,1,0,0,0)\,, \quad \mathcal{I}_2(d;s,q^2) = I(d;1,1,2,1,0,0,0)\nonumber \\ &\mathcal{I}_3(d;s,q^2) = I(d;1,1,1,2,0,0,0)\,. \end{align} The masters depend on three variables $s, q^2$ and $m^2$, and therefore on two independent ratios. For simplicity we will consider only the differential equations in $s$, while all considerations done here work identically for the differential equations in the other variables. In order to simplify the formulas as much as possible, we write explicitly only the order zero of the homogeneous differential equations in $(d-4)$, which is also the part that we need to simplify. The equations read \begin{align} &\frac{\partial}{\partial s} \mathcal{I}_1(d;s) = \frac{1}{(q^2-s)} \mathcal{I}_1(d;s) + \frac{2\,m^2}{s}\mathcal{I}_2(d;s) + \frac{s\,q^2 - 2 m^2(s+q^2)}{s(q^2-s)}\mathcal{I}_3(d;s) + \mathcal{O}(d-4) \nonumber \\ &\frac{\partial}{\partial s} \mathcal{I}_2(d;s) = \frac{m^2}{s(q^2-s)-m^2q^2}\,\mathcal{I}_2(d;s) + \frac{q^2 s - m^2(2 q^2+s)}{(q^2-s)(s(q^2-s)-m^2q^2)}\,\mathcal{I}_3(d;s) + \mathcal{O}(d-4) \nonumber \\ &\frac{\partial}{\partial s} \mathcal{I}_3(d;s) = \frac{m^2(q^2-s)}{s\,(s(q^2-s)-m^2q^2)}\,\mathcal{I}_2(d;s) + \frac{m^2(s^2+s \,q^2 - q^4)}{s(q^2-s)(s(q^2-s)-m^2q^2)}\,\mathcal{I}_3(d;s) + \mathcal{O}(d-4) \,, \end{align} where the dependence on $q^2$ is left implicit in the master integrals for ease of notation. One can immediately see that only two of the differential equations are coupled. One should in principle first solve the $2 \times 2$ coupled system for $\mathcal{I}_2(d;s)$ and $\mathcal{I}_3(d;s)$, and then, with the latter as an input, one could attempt to solve the differential equation for $\mathcal{I}_1(d;s)$ by quadrature. Let us now try and study the IBPs in the limit $d \to 4$.
By solving them as discussed in the previous sections one sees that the master integrals $\mathcal{I}_2(d;s)$, $\mathcal{I}_3(d;s)$ become linearly dependent in this limit and one finds the relation \begin{align} (q^2 - s )(s-2\,m^2) \mathcal{I}_2(d;s) + \left[ s(s+2 m^2) - 2 \,q^2\,(s-m^2) \right] \mathcal{I}_3(d;s) = \mathcal{O}(d-4) \,. \labbel{eq:reltri2} \end{align} As for the case of the non-planar triangle studied in Section~\ref{sec:crossed}, we find only one relation, while we have three master integrals. Nevertheless, one of the three masters is already decoupled, and moreover relation~\eqref{eq:reltri2} involves only $\mathcal{I}_2(d;s)$ and $\mathcal{I}_3(d;s)$, which are precisely the two coupled integrals. We therefore expect this to be enough to decouple the system. We define the new basis \begin{align} &\mathcal{J}_1(d;s) = \mathcal{I}_1(d;s)\,,\qquad \mathcal{J}_2(d;s) = \mathcal{I}_2(d;s) \nonumber \\ & \mathcal{J}_3(d;s) = \frac{(q^2 - s )(s-2\,m^2)}{m^4} \mathcal{I}_2(d;s) + \frac{ s(s+2 m^2) - 2 \,q^2\,(s-m^2) }{m^4}\mathcal{I}_3(d;s)\,, \labbel{eq:newbasistri2} \end{align} where the $1/m^4$ has been added for dimensional reasons.
Deriving the differential equations for this new basis, and again keeping only the order zero in $(d-4)$, we find \begin{align} \frac{\partial}{\partial s} \mathcal{J}_1(d;s) &= \frac{1}{q^2 -s} \mathcal{J}_1(d;s) + \frac{s(q^2-4 m^2)}{2\,q^2(s-m^2)-s(s+m^2)}\mathcal{J}_2(d;s) \nonumber \\ &+ \frac{( 2 \,m^2 (q^2+s) - s \, q^2 ) \, m^4 }{s(q^2-s)(2q^2(s-m^2)-s(s+m^2))} \mathcal{J}_3(d;s) + \mathcal{O}(d-4) \nonumber \\ & \nonumber \\ \frac{\partial}{\partial s} \mathcal{J}_2(d;s) &= \frac{q^2(2 \, m^4 + (s-2m^2)s )}{(q^2(s-m^2) -s^2)(2\, q^2(s-m^2)-s(s+2 m^2))}\, \mathcal{J}_2(d;s) \nonumber \\ &+ \frac{(s \, m^2 - q^2(s-m^2))\,m^4}{(q^2-s) (q^2(s-m^2) -s^2)(2\, q^2(s-m^2)-s(s+2 m^2))}\, \mathcal{J}_3(d;s) + \mathcal{O}(d-4) \nonumber \\ & \nonumber \\ \frac{\partial}{\partial s} \mathcal{J}_3(d;s) &= \frac{q^2 - 2 s}{s (q^2-s)}\, \mathcal{J}_3(d;s) + \mathcal{O}(d-4)\,. \end{align} As expected the system of differential equations becomes triangular and, in particular, the equation for the new integral $\mathcal{J}_3(d;s)$, defined through relation~\eqref{eq:reltri2}, decouples from $\mathcal{J}_2(d;s)$, following the usual pattern of equation~\eqref{eq:gensuppr}. One can then proceed, order by order in $(d-4)$, integrating by quadrature first the differential equation for $\mathcal{J}_3(d;s)$, then the one for $\mathcal{J}_2(d;s)$ and finally the one for $\mathcal{J}_1(d;s)$. As a last comment we want to stress that, if we had considered the system in $\partial/\partial q^2$, the same change of basis~\eqref{eq:newbasistri2} would have indeed been sufficient to triangularise this one as well. \subsection{The three-loop massive banana graph}\labbel{sec:ban} As last example let us consider a more complicated three-loop graph. We choose the three-loop massive banana graph, which is the natural three-loop generalisation of the two-loop massive sunrise.
In the most general case this Feynman graph depends on the momentum squared $p^2=s$ and on four different masses $m_1$, $m_2$, $m_3$ and $m_4$ \begin{align} I_4(d;n_1,n_2,n_3,&n_4,n_5,n_6,n_7,n_8,n_9) = \threeban{p} \nonumber \\ &=\int \mathfrak{D}^d k_1\, \mathfrak{D}^d k_2\, \mathfrak{D}^d k_3\, \frac{(k_1\cdot p)^{n_5}(k_2 \cdot p)^{n_6}(k_3 \cdot p)^{n_7}(k_1 \cdot k_2)^{n_8}(k_1 \cdot k_3)^{n_9}} {(k_1^2-m_1^2)^{n_1}(k_2^2-m_2^2)^{n_2}(k_3^2-m_3^2)^{n_3}((k_1+k_2+k_3-p)^2-m_4^2)^{n_4}}\,, \end{align} where the subscript $4$ indicates that the four masses are all different. In the two-loop case there are 4 MIs when the 3 masses have all different values, which in turn degenerate to 2 MIs in the case of equal masses. On the other hand we saw that, irrespective of the explicit values of the internal masses, one is always left with only two independent MIs in $d=2$, see~\eqref{eq:relsunr3}. This allowed us to decouple two of the four MIs from the differential equations in the limit $d\to2$ and prove that the scalar amplitude satisfies a second order differential equation in this limit. It would therefore be interesting to verify whether a similar behaviour can also be seen in the three-loop banana graph. Since in the general case with four different masses the algebra becomes very cumbersome, we will consider different cases of increasing complexity, namely increasing at every step the number of different internal masses, and check how many MIs are found in $d$ dimensions and how many can be decoupled in the limit $d \to 2$. \subsubsection{The equal-mass case} Let us start considering the equal-mass case. We use the following notation $$I_1(d;n_1,n_2,n_3,n_4,n_5,n_6,n_7,n_8,n_9) = I_4(d;n_1,n_2,n_3,n_4,n_5,n_6,n_7,n_8,n_9) \Big|_{m_4 = m_3 = m_2 = m_1 = m}\,,$$ where the subscript ``$1$'' indicates now that all masses have the same value.
Running a reduction to MIs with a code of choice, it is easy to check that there are 3 independent MIs in $d$ dimensions which can be chosen to be \begin{align} \mathcal{I}_1(d;s) = I_1(d;1,1,1,&1,0,0,0,0,0)\,, \quad \mathcal{I}_2(d;s) = I_1(d;2,1,1,1,0,0,0,0,0)\,,\nonumber \\ & \mathcal{I}_3(d;s) = I_1(d;3,1,1,1,0,0,0,0,0)\,. \end{align} The differential equations in the momentum transfer for these three MIs read \begin{align} \frac{d \mathcal{I}_1}{d\,s} &= \frac{3d-8}{2\,s}\mathcal{I}_1 - \frac{4\,m^2}{s}\,\mathcal{I}_2 \nonumber \\ &\nonumber \\ \frac{d \mathcal{I}_2}{d\,s} &= \frac{(3d-8)(2d-5)}{8\,s\,m^2}\, \mathcal{I}_1 + \frac{ (d-4)\,s - 8(2d-5)m^2 }{8\,s\,m^2} \,\mathcal{I}_2 - \frac{1}{2}\,\mathcal{I}_3 \nonumber \\ &\nonumber \\ \frac{d \mathcal{I}_3}{d\,s} &= \frac{(2d-5)(3d-8) \left(16(11d-37)m^4 + 4(32-9d)\,m^2\,s + (d-4)\,s^2 \right)}{32\,m^2\,s\,(s-4 m^2)(s-16 m^2)}\,\mathcal{I}_1\nonumber \\ & \hspace{-0.5cm} - \frac{\left[ 64 \left( 440 + (47 d-289) d \right) m^6 - 16 \left( 668 + d (62 d-409) \right) m^4 s + 16 (d-4) (4 d-13) m^2 s^2 - (d-4)^2 s^3\right]}{32\,m^2\,s\,(s-4 m^2)(s-16 m^2)}\,\mathcal{I}_2 \nonumber \\ &\hspace{-0.5cm}+ \frac{\left[ 1024 (d-4) m^8 + 192 (27 - 8 d) m^6 s + 96 (2 d-7) m^4 s^2 - 4 (d-4) m^2 s^3\right]}{32\,m^2\,s\,(s-4 m^2)(s-16 m^2)} \, \mathcal{I}_3\,, \end{align} and we can easily verify that, in spite of the fact that $\mathcal{I}_3$ does not appear in the first equation, the system is still coupled as $d \to 2$. Trying to solve the system of IBPs in $d=2$ shows no further degeneracy, and we therefore conclude that the system cannot be further simplified with our method. Having a system of three coupled first-order equations means that we can rephrase it as a third-order differential equation for any of the three masters, and in particular for the scalar amplitude $\mathcal{I}_1(d;s)$.
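To see explicitly how the third-order equation arises, one can eliminate $\mathcal{I}_2$ and $\mathcal{I}_3$ from the system above (neglecting, as always, the sub-topologies). The first equation gives
\begin{align*}
\mathcal{I}_2 = \frac{3d-8}{8\,m^2}\,\mathcal{I}_1 - \frac{s}{4\,m^2}\,\frac{d \mathcal{I}_1}{d\,s}\,,
\end{align*}
the second gives
\begin{align*}
\mathcal{I}_3 = \frac{(3d-8)(2d-5)}{4\,s\,m^2}\,\mathcal{I}_1 + \frac{(d-4)\,s - 8(2d-5)m^2}{4\,s\,m^2}\,\mathcal{I}_2 - 2\,\frac{d \mathcal{I}_2}{d\,s}\,,
\end{align*}
and substituting both into the third equation produces the third-order equation for the scalar amplitude $\mathcal{I}_1(d;s)$ quoted below.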
The fact that the scalar amplitude fulfils a third-order differential equation is in agreement with the findings in~\cite{MullerStach:2012mp}. Deriving the third-order differential equation satisfied by $\mathcal{I}_1(d;s)$ we find \begin{equation} D_d^{(3)}\, \mathcal{I}_1(d;s) = 0\,, \end{equation} where the $d$-dimensional third order differential operator $D_d^{(3)}$ reads \begin{align} D_d^{(3)} = &\frac{d^3}{d\,s^3} + \frac{3\left( 64m^4 + 10(d-5)m^2 s - (d-4)s^2\right)}{s(s-4m^2)(s-16m^2)}\, \frac{d^2}{d\,s^2} \nonumber \\ & +\frac{(d-4)(11d-36) s^2-64(d-4)d\,m^4 - 4\left( 216+d(7d-88)\right)m^2\,s }{4\,s^2(s-4m^2)(s-16m^2)}\, \frac{d}{d\,s} \nonumber \\ & + \frac{(3-d)(3d-8)\left( 2(d+2)m^2 + (d-4)s\right)}{4\,s^2\,(s-4m^2)(s-16m^2)}\,, \end{align} and all sub-topologies are neglected as always. In the limit $d \to 2$ the differential operator simplifies to \begin{align} D_2^{(3)} = &\frac{d^3}{d\,s^3} + \frac{6\left( s^2 - 15 m^2 s + 32 m^4\right)}{s(s-4m^2)(s-16m^2)}\, \frac{d^2}{d\,s^2} +\frac{\left(7 s^2 - 68 m^2 s + 64 m^4\right) }{s^2(s-4m^2)(s-16m^2)}\, \frac{d}{d\,s} + \frac{1}{s^2\,(s-16m^2)}\,, \end{align} which is in agreement with~\cite{MullerStach:2012mp}. \subsubsection{The case of two different masses} Let us move now to a slightly more general case and let the masses take two different values. There are two possible arrangements, which we call $I_2^A$ and $I_2^B$, defined as follows $$I_2^A(d;n_1,n_2,n_3,n_4,n_5,n_6,n_7,n_8,n_9) = I_4(d;n_1,n_2,n_3,n_4,n_5,n_6,n_7,n_8,n_9) \Big|_{m_3 = m_2 = m_1=m_a, m_4=m_b}\,,$$ $$I_2^B(d;n_1,n_2,n_3,n_4,n_5,n_6,n_7,n_8,n_9) = I_4(d;n_1,n_2,n_3,n_4,n_5,n_6,n_7,n_8,n_9) \Big|_{m_2 = m_1=m_a , m_4 = m_3=m_b}\,.$$ The two configurations are intrinsically different and it makes sense to look at the two cases separately. 
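Before turning to the two-mass configurations, note that the $d\to2$ limit of $D_d^{(3)}$ quoted above is straightforward to check symbolically. The following sketch (ours, not part of the paper) substitutes $d=2$ into the three non-trivial coefficients of $D_d^{(3)}$ and compares them with the corresponding coefficients of $D_2^{(3)}$:

```python
# Sketch (not from the paper): verify that the d -> 2 limit of the
# coefficients of D_d^(3) reproduces the coefficients of D_2^(3).
import sympy as sp

d, s, m = sp.symbols('d s m', positive=True)
den = s * (s - 4*m**2) * (s - 16*m**2)

# coefficients of d^2/ds^2, d/ds and the identity term in D_d^(3)
c2 = 3*(64*m**4 + 10*(d-5)*m**2*s - (d-4)*s**2) / den
c1 = ((d-4)*(11*d-36)*s**2 - 64*(d-4)*d*m**4
      - 4*(216 + d*(7*d-88))*m**2*s) / (4*s**2*(s-4*m**2)*(s-16*m**2))
c0 = (3-d)*(3*d-8)*(2*(d+2)*m**2 + (d-4)*s) / (4*s**2*(s-4*m**2)*(s-16*m**2))

# the corresponding coefficients of D_2^(3)
c2_lim = 6*(s**2 - 15*m**2*s + 32*m**4) / den
c1_lim = (7*s**2 - 68*m**2*s + 64*m**4) / (s**2*(s-4*m**2)*(s-16*m**2))
c0_lim = 1 / (s**2*(s-16*m**2))

for full, lim in ((c2, c2_lim), (c1, c1_lim), (c0, c0_lim)):
    assert sp.simplify(full.subs(d, 2) - lim) == 0
print("d -> 2 limit of D_d^(3) matches D_2^(3)")
```

All three differences simplify to zero, confirming the quoted limit.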
\begin{itemize} \item[A)] In configuration A a reduction to MIs for generic $d$ gives 5 independent MIs which can be chosen as \begin{align} \mathcal{I}_1^A(d;s) = I_2^A(d;1,1,1,&1,0,0,0,0,0)\,, \quad \mathcal{I}_2^A(d;s) = I_2^A(d;2,1,1,1,0,0,0,0,0)\,,\nonumber \\ \mathcal{I}_3^A(d;s) = I_2^A(d;1,1,1,&2,0,0,0,0,0)\,,\quad \mathcal{I}_4^A(d;s) = I_2^A(d;3,1,1,1,0,0,0,0,0)\,,\nonumber \\ & \mathcal{I}_5^A(d;s) = I_2^A(d;2,2,1,1,0,0,0,0,0)\,. \end{align} \item[B)] In configuration B we find instead 6 independent MIs for generic $d$ \begin{align} \mathcal{I}_1^B(d;s) = I_2^B(d;1,1,1,&1,0,0,0,0,0)\,, \quad \mathcal{I}_2^B(d;s) = I_2^B(d;2,1,1,1,0,0,0,0,0)\,,\nonumber \\ \mathcal{I}_3^B(d;s) = I_2^B(d;1,1,2,&1,0,0,0,0,0)\,,\quad \mathcal{I}_4^B(d;s) = I_2^B(d;3,1,1,1,0,0,0,0,0)\,,\nonumber \\ \mathcal{I}_5^B(d;s) = I_2^B(d;2,2,1,&1,0,0,0,0,0)\,, \quad \mathcal{I}_6^B(d;s) = I_2^B(d;2,1,2,1,0,0,0,0,0)\,. \end{align} \end{itemize} A natural question at this point would be how many MIs degenerate in the two cases in the limit $d \to 2$, and therefore what is the order of the differential equation satisfied by the scalar amplitudes $\mathcal{I}_1^A(d;s)$ and $\mathcal{I}_1^B(d;s)$ respectively. A naive expectation, based on the two-loop sunrise, would be to see $2$ and $3$ MIs decouple in cases $A$ and $B$ respectively, such that the problem would reduce to the solution of a third-order differential equation, as in the equal-mass case. Unfortunately this naive expectation is not satisfied and we find that, by solving the IBPs in $d=2$, in both cases \textsl{four} MIs remain independent, corresponding in principle to a \textsl{fourth-order} differential equation for the scalar amplitude in both mass-configurations. On the other hand, it is interesting to see that in both configurations, in spite of the different number of MIs in $d$ dimensions, the problem can be reduced to an equation of the same order (i.e. four) in $d=2$. 
Neglecting the sub-topologies we find in configuration A the following relation, which allows us to express the fifth master integral in terms of the previous four \begin{align} m_a^2(s-5 m_a^2 + m_b^2)\mathcal{I}_5^A(2;s) &= \frac{3 m_a^2 + m_b^2 -s }{12 \,m_a^2}\, \mathcal{I}_1^A(2;s) + \frac{51 m_a^4 + (m_b^2 - s)^2 - 6 m_a^2 (m_b^2 + 2 s) }{12 \,m_a^2}\, \mathcal{I}_2^A(2;s) \nonumber \\ &+ \frac{ m_b^2 (m_b^2 - s)}{6 \, m_a^2} \, \mathcal{I}_3^A(2;s) + \frac{21 m_a^4 + (m_b^2 - s)^2 - 6 m_a^2 (m_b^2 + s) }{6}\, \mathcal{I}_4^A(2;s)\,. \labbel{eq:relban2mA} \end{align} In configuration B, instead, there are two different relations, which can be used to express $\mathcal{I}_5^B$ and $\mathcal{I}_6^B$ in terms of the other four MIs in $d=2$. We do not report the explicit solution of the IBPs in $d=2$, which looks rather cumbersome. As for the case of the two-loop sunrise, these identities originate from $d$-dimensional IBPs which degenerate in the limit $d \to 2$ due to an overall factor $1/(d-2)$. 
There are many of these relations, but only two of them are linearly independent in the limit $d \to 2$, and they read (keeping only the first order in $(d-2)$) \begin{align} \mathcal{O}(d-2) &=\left[ 2(m_a^2 + m_b^2)- s \right] \mathcal{I}_1(d;s) + \left[ 4\,m_a^2(5 m_a^2 + 4m_b^2) - 4\,(2 m_a^2 + m_b^2) s + s^2\right]\, \mathcal{I}_2(d;s) \nonumber \\ &+ 4 m_b^2 (2 m_a^2 + m_b^2 - s) \, \mathcal{I}_3(d;s) + 2 m_a^2 \left[ 4(m_a^2 + m_b^2)(2 m_a^2 - s) + s^2 \right] \, \mathcal{I}_4(d;s) \nonumber \\ &+ 4 m_a^4 \left[ 2(m_a^2 + m_b^2) -s\right]\, \mathcal{I}_5(d;s) + 8 m_a^2 m_b^2 (4 m_a^2-s)\, \mathcal{I}_6(d;s)\,, \labbel{eq:relban2mB1} \end{align} \begin{align} \mathcal{O}(d-2) &= (-2 m_a^2 + 6 m_b^2 - 3 s)\,\mathcal{I}_1(d;s) + \left[ -20 m_a^4 + 8m_a^2(7 m_b^2 -2\,s) + 3 \, s (s - 4 m_b^2)\right]\,\mathcal{I}_2(d;s) \nonumber \\ &+ 12 \, m_b^2 (m_b^2-s)\,\mathcal{I}_3(d;s) + 2 m_a^2\left[ -8 m_a^2(m_a^2 -3 m_b^2) - 4 (m_a^2 + 3 m_b^2)\,s + 3 s^2\right] \mathcal{I}_4(d;s) \nonumber \\ &- 4 m_a^4 (2 m_a^2 - 6 m_b^2 + s ) \,\mathcal{I}_5(d;s) + 32\, m_a^2\, m_b^2 (m_b^2-s)\, \mathcal{I}_6(d;s)\,. \labbel{eq:relban2mB2} \end{align} We stress again that relations~\eqref{eq:relban2mA},~\eqref{eq:relban2mB1} and~\eqref{eq:relban2mB2} are \textsl{not exact} since all sub-topologies have been neglected throughout. These relations can nevertheless be used to derive new systems of differential equations where, for both A and B configurations, only $4$ equations remain coupled in the limit $d \to 2$. 
For example, in the case of configuration A we can take as new basis $$\mathcal{J}_1^A(d;s) = \mathcal{I}_1^A(d;s)\,, \quad \mathcal{J}_2^A(d;s) = \mathcal{I}_2^A(d;s)\,, \quad\mathcal{J}_3^A(d;s) = \mathcal{I}_3^A(d;s)\,, \quad \mathcal{J}_4^A(d;s) = \mathcal{I}_4^A(d;s)\,,$$ plus the new master defined as \begin{align} \mathcal{J}_5^A(d;s) &= m_a^2(s-5 m_a^2 + m_b^2)\mathcal{I}_5^A(2;s) - \frac{3 m_a^2 + m_b^2 -s }{12 \,m_a^2}\, \mathcal{I}_1^A(2;s) \nonumber \\ & - \frac{51 m_a^4 + (m_b^2 - s)^2 - 6 m_a^2 (m_b^2 + 2 s) }{12 \,m_a^2}\, \mathcal{I}_2^A(2;s) \nonumber \\ &- \frac{ m_b^2 (m_b^2 - s)}{6 \, m_a^2} \, \mathcal{I}_3^A(2;s) - \frac{21 m_a^4 + (m_b^2 - s)^2 - 6 m_a^2 (m_b^2 + s) }{6}\, \mathcal{I}_4^A(2;s)\,. \end{align} Upon doing this one finds that the differential equation for the new master $\mathcal{J}_5^A$ assumes the form \begin{align} \frac{d\,\mathcal{J}_5}{d\,s} = (d-2) \left[ c_{51}(d;s)\mathcal{J}_1 + c_{52}(d;s)\mathcal{J}_2 + c_{53}(d;s)\mathcal{J}_3 + c_{54}(d;s)\mathcal{J}_4 \right] + c_{55}(d;s)\mathcal{J}_5\,, \end{align} where the functions $c_{ij}(d;s)$ are rational functions of the dimension $d$, the momentum squared $s$ and the two masses, and are \textsl{finite} as $d \to 2$. This ensures, thanks to the overall factor $(d-2)$, that the differential equation for $\mathcal{J}_5$ decouples completely from the other four, as expected. 
As far as configuration $B$ is concerned, in order to achieve the complete decoupling of two out of the six equations, we can take as basis \begin{align} \mathcal{J}_1^B(d;s) = \mathcal{I}_1^B(d;s)\,, \quad \mathcal{J}_2^B(d;s) = \mathcal{I}_2^B(d;s)\,, \quad\mathcal{J}_3^B(d;s) = \mathcal{I}_3^B(d;s)\,, \quad \mathcal{J}_4^B(d;s) = \mathcal{I}_4^B(d;s)\,, \end{align} together with \begin{align} \mathcal{J}_5^B(d;s) &=\left[ 2(m_a^2 + m_b^2)- s \right] \mathcal{I}_1(d;s) + \left[ 4\,m_a^2(5 m_a^2 + 4m_b^2) - 4\,(2 m_a^2 + m_b^2) s + s^2\right]\, \mathcal{I}_2(d;s) \nonumber \\ &+ 4 m_b^2 (2 m_a^2 + m_b^2 - s) \, \mathcal{I}_3(d;s) + 2 m_a^2 \left[ 4(m_a^2 + m_b^2)(2 m_a^2 - s) + s^2 \right] \, \mathcal{I}_4(d;s) \nonumber \\ &+ 4 m_a^4 \left[ 2(m_a^2 + m_b^2) -s\right]\, \mathcal{I}_5(d;s) + 8 m_a^2 m_b^2 (4 m_a^2-s)\, \mathcal{I}_6(d;s) \,, \end{align} \begin{align} \mathcal{J}_6^B(d;s) &=(-2 m_a^2 + 6 m_b^2 - 3 s)\,\mathcal{I}_1(d;s) + \left[ -20 m_a^4 + 8m_a^2(7 m_b^2 -2\,s) + 3 \, s (s - 4 m_b^2)\right]\,\mathcal{I}_2(d;s) \nonumber \\ &+ 12 \, m_b^2 (m_b^2-s)\,\mathcal{I}_3(d;s) + 2 m_a^2\left[ -8 m_a^2(m_a^2 -3 m_b^2) - 4 (m_a^2 + 3 m_b^2)\,s + 3 s^2\right] \mathcal{I}_4(d;s) \nonumber \\ &- 4 m_a^4 (2 m_a^2 - 6 m_b^2 + s ) \,\mathcal{I}_5(d;s) + 32\, m_a^2\, m_b^2 (m_b^2-s)\, \mathcal{I}_6(d;s)\,. \end{align} Using this basis one obtains a new system of differential equations, where the two equations for $\mathcal{J}_5^B$ and $\mathcal{J}_6^B$ develop an explicit overall factor $(d-2)$, such that, at every order in the Laurent expansion, they can be solved trivially by quadrature. Order by order, once the result for the latter is known, one is left with a system of four coupled differential equations for the remaining master integrals. For compactness we prefer not to give here explicitly the systems of differential equations. 
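The phrase ``solved trivially by quadrature'' can be made concrete with a toy example (ours; the coefficient below is hypothetical, since the actual systems are not reproduced here): at each order in $(d-2)$ the decoupled equation is first order and linear, so it is solved by an integrating factor.

```python
# Toy sketch with a made-up coefficient c(s): a decoupled equation of the
# form dJ/ds = c(s) J + (known inhomogeneous term) is first order and
# linear, hence solvable order by order by quadrature.
import sympy as sp

s = sp.symbols('s', positive=True)
J = sp.Function('J')
c = 1 / s  # hypothetical stand-in for a rational coefficient like c_55(2; s)

sol = sp.dsolve(sp.Eq(J(s).diff(s), c * J(s)), J(s))
# the returned solution satisfies the homogeneous equation by construction
assert sp.simplify(sol.rhs.diff(s) - c * sol.rhs) == 0
print(sol)
```

The inhomogeneous part, once the lower orders are known, is absorbed by the same integrating factor via variation of constants.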
\subsubsection{The case of three different masses} Generalising even further we can check what happens if three out of the four masses are allowed to take different values. In this case there is of course only one possibility, which we choose to be $$I_3(d;n_1,n_2,n_3,n_4,n_5,n_6,n_7,n_8,n_9) = I_4(d;n_1,n_2,n_3,n_4,n_5,n_6,n_7,n_8,n_9) \Big|_{m_4 = m_3}\,.$$ We start, as always, by performing a reduction for generic $d$. The complexity increases and we find $8$ independent MIs \begin{align} \mathcal{I}_1(d;s) = I_3(d;1,1,1,&1,0,0,0,0,0)\,, \quad \mathcal{I}_2(d;s) = I_3(d;2,1,1,1,0,0,0,0,0)\,,\nonumber \\ \mathcal{I}_3(d;s) = I_3(d;1,2,1,&1,0,0,0,0,0)\,,\quad \mathcal{I}_4(d;s) = I_3(d;1,1,2,1,0,0,0,0,0)\,,\nonumber \\ \mathcal{I}_5(d;s) = I_3(d;3,1,1,&1,0,0,0,0,0)\,, \quad \mathcal{I}_6(d;s) = I_3(d;2,2,1,1,0,0,0,0,0)\,,\nonumber \\ \mathcal{I}_7(d;s) = I_3(d;2,1,2,&1,0,0,0,0,0)\,, \quad \mathcal{I}_8(d;s) = I_3(d;1,2,2,1,0,0,0,0,0)\,. \end{align} We can then consider the system of IBPs for $d=2$. It is easy to check that in this case $3$ MIs degenerate, and therefore only $5$ MIs remain linearly independent. We do not report here the equivalent of relations~\eqref{eq:relban2mA},~\eqref{eq:relban2mB1} and~\eqref{eq:relban2mB2}, since they are considerably more lengthy, but one can easily work out the reduction in $d=2$ and find that, for example, $\mathcal{I}_6(2;s)$, $\mathcal{I}_7(2;s)$ and $\mathcal{I}_8(2;s)$ can be written as linear combinations of $\mathcal{I}_1(2;s),\dots,\mathcal{I}_5(2;s)$. Using the methods described above, $3$ out of the $8$ differential equations for this particular mass configuration can be decoupled in the limit $d \to 2$, and one can in principle derive a fifth-order differential equation for any of the MIs, and in particular for the scalar amplitude $\mathcal{I}_1(d;s)$. \subsubsection{The general case of four different masses} Last but not least, we move to considering the most general configuration with four different masses. 
In this case the complexity increases even further and solving the IBPs in $d$ dimensions leads to a reduction in terms of 11 different MIs \begin{align} \mathcal{I}_1(d;s) = I_4(d;1,1,1,&1,0,0,0,0,0)\,, \quad \mathcal{I}_2(d;s) = I_4(d;2,1,1,1,0,0,0,0,0)\,,\nonumber \\ \mathcal{I}_3(d;s) = I_4(d;1,2,1,&1,0,0,0,0,0)\,,\quad \mathcal{I}_4(d;s) = I_4(d;1,1,2,1,0,0,0,0,0)\,,\nonumber \\ \mathcal{I}_5(d;s) = I_4(d;1,1,1,&2,0,0,0,0,0)\,, \quad \mathcal{I}_6(d;s) = I_4(d;3,1,1,1,0,0,0,0,0)\,,\nonumber \\ \mathcal{I}_7(d;s) = I_4(d;2,2,1,&1,0,0,0,0,0)\,, \quad \mathcal{I}_8(d;s) = I_4(d;2,1,2,1,0,0,0,0,0)\,,\nonumber \\ \mathcal{I}_9(d;s) = I_4(d;2,1,1,&2,0,0,0,0,0)\,, \quad \mathcal{I}_{10}(d;s) = I_4(d;1,2,2,1,0,0,0,0,0)\,, \nonumber \\ & \mathcal{I}_{11}(d;s) = I_4(d;1,2,1,2,0,0,0,0,0)\,. \end{align} The number of independent master integrals is obviously very large and, if all differential equations for the $11$ MIs were to be coupled, this would imply an $11$th-order differential equation for any of the masters and in particular for the scalar amplitude $\mathcal{I}_1(d;s)$. It is therefore very interesting in this case to know how many MIs can be decoupled using the methods described above. Again it is enough to repeat the reduction to MIs, but fixing this time $d=2$, and we immediately find that 5 out of the 11 MIs become linearly dependent and can be expressed in terms of the other 6. Which MIs survive depends of course on the internal algorithm for the solution of the IBPs. In our case we find as independent MIs \begin{align} \mathcal{I}_1(2;s) = I_4(2;1,1,1,&1,0,0,0,0,0)\,, \quad \mathcal{I}_2(2;s) = I_4(2;2,1,1,1,0,0,0,0,0)\,,\nonumber \\ \mathcal{I}_3(2;s) = I_4(2;1,2,1,&1,0,0,0,0,0)\,,\quad \mathcal{I}_4(2;s) = I_4(2;1,1,2,1,0,0,0,0,0)\,,\nonumber \\ \mathcal{I}_5(2;s) = I_4(2;1,1,1,&2,0,0,0,0,0)\,, \quad \mathcal{I}_6(2;s) = I_4(2;3,1,1,1,0,0,0,0,0)\,. 
\end{align} This implies that the new basis of 11 $d$-dimensional MIs, defined following the recipe given above, fulfils a system of $11$ differential equations, $5$ of which decouple from the system as $d \to 2$. In this way we expect that a sixth-order differential equation for the scalar amplitude can be derived in $d=2$. \subsection{Comments and open questions} \labbel{sec:Comments} Before moving on to the conclusions we would like to draw attention to some issues that might have gone unnoticed and which nevertheless leave room for very interesting open questions. In the previous sections we have worked out different applications of the ideas outlined in Section~\ref{sec:IBPsN}. We have seen explicitly that studying the IBPs in $d=2$ or $d=4$ can provide, for different Feynman graphs, identities useful to decouple the system of differential equations they fulfil. However in this discussion there is a point that we have avoided mentioning on purpose. Let us imagine that we have to deal with a Feynman graph with three master integrals $\mathcal{I}_1$, $\mathcal{I}_2$ and $\mathcal{I}_3$, which fulfil a system of three coupled differential equations in the limit $d \to 4$, and let us suppose that we apply the methods described above to try and decouple the system. We can imagine that by solving the IBPs in $d=4$ only one relation can be found. In this case we know that we can use it in order to decouple one of the three integrals in the limit $d \to 4$, leaving therefore a system of two coupled equations, equivalent to a second order differential equation. Is this enough to say that there must be no other way to fully decouple all three differential equations? The answer is, in general, of course no. We have discussed already how the decoupling of the differential equations in \textsl{any even integer} number of dimensions is equivalent to the decoupling of the latter in $d=4$. 
In Appendix~\ref{App:Dim} we show explicitly how, if one can find a basis that decouples the differential equations in $d=2\,n$, a corresponding basis can be constructed which decouples them in $d=4$. Let us then go back to the problem of the three coupled master integrals. Let us imagine that studying the IBPs in $d=2$ two linearly independent relations are found among the three masters and that this allows us to completely decouple the system in $d=2$. In this case we know that a corresponding basis would have to exist in $d=4$ as well, and we could find it with the methods described in Appendix~\ref{App:Dim}. On the other hand, if we had not found any new relations in $d=2$, we could still decide to try in $d=6$, or in $d=8, 10, 12$ etc. apparently without end. Who or what tells us when to stop and, therefore, when we can assert without any doubt that not enough relations can be found, in any even number of dimensions, to decouple completely the system? This question is extremely interesting and we unfortunately do not have a conclusive answer to it. For sure in all examples considered so far it has always been enough to study the differential equations in $d=2$ and $d=4$ only in order to find all needed relations to decouple the system. In those cases where not all equations could be decoupled (see for example Sections~\ref{sec:sun2},~\ref{sec:crossed},~\ref{sec:ban}) an attempt to consider different numbers of dimensions would simply produce no new relations at all, suggesting that there is no way to further simplify the problem, at least in this framework. Of course this does not constitute a mathematical proof in any respect. While these considerations have not yet led to a definite answer, they nevertheless open the possibility for a different perspective in the way a system of differential equations for master integrals should be studied. 
We usually tend to think that the only physically relevant results are obtained when studying the system in the limit $d \to 4$. A lot of useful information, though, can be extracted studying the system as $d \to 2\,n$ and, sometimes, the relations found in this way appear to be independent and, in a sense, complementary. Whether these relations are really independent and how to determine the \textsl{maximum amount} of information that can be extracted by studying the IBPs in fixed integer numbers of dimensions remain open questions for now. It seems however reasonable to think that a more global approach, which allows one to study the systems of differential equations in general for any even number of dimensions (and not only in the limit $d \to 4$), could possibly bring a deeper insight into the structure and properties of the latter. \section{Conclusions} \labbel{sec:Concl} \setcounter{equation}{0} \numberwithin{equation}{section} The method of differential equations has proven to be one of the most effective and promising tools for the evaluation of multi-loop and multi-scale Feynman integrals. The usual procedure consists in reducing all Feynman integrals to a basis of master integrals through integration by parts identities, then deriving differential equations satisfied by the master integrals and finally trying to solve them as a Laurent expansion in $(d-4)$. For many problems of physical interest the coefficients of the Laurent expansion of the master integrals can be expressed in terms of a particular class of special functions called multiple polylogarithms. It has been noted that, whenever this is possible, a basis of master integrals can be found such that their differential equations become triangular in the limit $d \to 4$, allowing a simple integration of the differential equations by quadrature. 
It was moreover conjectured that in all such cases a canonical basis can be found, turning the integration of the differential equations into a straightforward algebraic problem. If a complete set of boundary conditions is also known, the problem can therefore be considered as completely solved. On the other hand, different cases are known where such a simplification cannot be achieved and a minimum number of differential equations remain coupled. Of course in all these cases also a canonical basis (in the original sense introduced in~\cite{Henn:2013pwa}) cannot be found. Whenever this happens, it becomes of crucial importance to be able to determine the \textsl{minimum number} of master integrals which cannot be decoupled from the system. If two or more equations are coupled, in fact, no general technique exists to find a solution and one must resort to different considerations in order to find a complete set of homogeneous solutions, which can then be used in order to build up the inhomogeneous solution using Euler's method of the variation of constants (see for example~\cite{Laporta:2004rb}). Of course, the larger the number of coupled equations is, the more difficult it becomes to find a complete set of solutions. Reducing the order of the system of differential equations is therefore essential from a practical point of view in order to be able to successfully tackle the problem. The issue is nevertheless interesting also from a more general point of view. Master integrals satisfying higher order differential equations, in fact, usually cannot be expressed in terms of multiple polylogarithms only, and a very intensive theoretical effort has recently been devoted to determining the properties of the new special functions required. The most famous example is the two-loop massive sunrise graph. In this case two differential equations remain coupled and therefore the problem amounts to solving a second-order differential equation. 
It has been recently shown that the solution of the latter can be expressed in terms of a new generalisation of the multiple polylogarithms, called elliptic polylogarithms. Many questions are nevertheless still to be answered. Are elliptic polylogarithms enough to describe all Feynman integrals whose evaluation can be reduced to a second order differential equation? And what about higher order equations? A first step towards an answer to these questions seems therefore to be a criterion to determine, given a set of master integrals and the system of differential equations they fulfil, the minimum number of coupled differential equations, and therefore the class of special functions required. In this paper we presented a simple idea which proved to be very useful in this respect. We showed in particular that the study of the IBPs for fixed integer values of the space-time dimensions, $d=n$, can provide the information required for decoupling the differential equations in the limit $d \to n$. Indeed our criterion is, in principle, a sufficient but not a \textsl{necessary} one, in the sense that we did not prove that if no extra relation can be found among the MIs when $d = n$, then no decoupling is possible for $d \to n$. The criterion has moreover proven to be extremely effective, inasmuch as it provided, in all cases that we considered, relations useful for decoupling some of the differential equations, therefore substantially simplifying the problem at hand. It would indeed be extremely interesting to establish whether this criterion is not only sufficient but also necessary, checking, for example, whether the number of independent MIs of the three-loop banana graph in $d=2$ (see Section~\ref{sec:ban}) can be further reduced by any other means, reducing in this way also the maximum degree of the differential equation satisfied by the scalar amplitude. 
The criterion is moreover extremely simple to apply, since it can be very easily implemented into any existing public or private IBPs reduction code. In this respect, the possibility of pairing the study of IBPs in fixed numbers of space-time dimensions together with the new concept of a (pseudo-)finite basis of MIs, recently introduced in~\cite{vonManteuffel:2014qoa}, looks particularly promising. The latter, in fact, could potentially provide a way to automatically determine the highest poles developed by different Feynman integrals for different values of the space-time dimensions. \section*{Acknowledgements} I am indebted to Ettore Remiddi, whose continuous advice and support were fundamental for the completion of this work. The idea of looking for identities between master integrals for fixed numbers of the space-time dimensions in order to simplify the systems of differential equations was first developed with him in~\cite{Remiddi:2013joa}. The generalisation of those ideas presented in this paper has benefited from many interesting discussions with him. I wish to thank Andreas von Manteuffel for the continuous encouragement to follow the ideas developed in this paper and for having allowed me to use the development version of Reduze 2 prior to its publication. Finally I need to thank Pierpaolo Mastrolia for many discussions and useful input during different stages of the project, Thomas Gehrmann for his continuous support and for his comments on the manuscript and Dominik Kara for carefully proofreading the present version of the paper.
\section{\label{sec:Introduction} Introduction} The field of cavity quantum electrodynamics (CQED) has seen rapid progress in the past several years. One of the main reasons for this is the development of high-quality-factor optical micro-cavities with mode volumes that are less than a cubic wavelength of light~\cite{VuckovicYamamoto03}. These high-Q cavities allow previously unattainable interaction strengths between a cavity mode and a dipole emitter such as a quantum dot. There are a large number of applications that require strong interactions between a cavity and dipole emitter. These include methods for conditional phase shifts on single photons~\cite{DuanKimble04}, single photon generation~\cite{KuhnHennrich02}, and quantum networking~\cite{CiracZoller97}. These applications either exploit modification of the dipole emission rate, or cavity spectrum, when the two systems are coupled. It is often perceived that in order to observe significant modification of the cavity spectrum, one must enter the so-called ``strong coupling'' regime. In this regime the interaction strength between the cavity and dipole is sufficiently large to fully split the cavity mode into a lower and upper polariton. In this paper we show that the strong coupling regime is not required in order to see significant modification of the cavity spectrum. We consider a single cavity that is coupled to two waveguides and behaves as a resonant drop filter. When an optical field whose frequency is resonant with the cavity is sent down one waveguide, the drop filter cavity would normally transmit all the field from one waveguide to another. Hence, the waveguide would appear opaque at the cavity resonance because all the light would be dropped to the other port. We show that if one places a resonant dipole in the drop-filter cavity, the waveguide becomes highly transparent, even in the weak coupling regime. This transparency is caused by destructive interference of the two cavity dressed states. 
We refer to this effect as Dipole Induced Transparency (DIT), because of its close analogy to Electromagnetically Induced Transparency (EIT) in atomic media~\cite{HarrisField90}. The fact that we do not need strong coupling to modify the transmission of a waveguide is extremely important for the field of semiconductor CQED. Although photonic crystal cavities allow us to approach the regime of strong coupling with a single emitter, it is very difficult to fabricate cavities that have sufficiently high quality factors to reach the strong coupling regime. Things become even more difficult when we attempt to integrate these cavities with waveguides. The cavity-waveguide coupling rate must be sufficiently large that we do not lose too much of the field to leaky modes. At the same time, leakage into the waveguide introduces additional losses, making strong coupling even more difficult to achieve. Thus strong coupling and efficient waveguide interaction place mutually conflicting demands on the performance of the cavity. Our result relaxes the constraint on strong coupling, allowing us to work in a practical parameter regime. To demonstrate the application of DIT, we conclude this paper by showing how it can be used to share entanglement between spatially separated dipoles, and to perform a full non-destructive Bell measurement on two dipoles. These operations are extremely useful for building quantum repeaters~\cite{BriegelDur98,DuanLukin01}. \begin{figure} \centering\includegraphics[width=5cm]{Figure1.eps} \caption{Cavity waveguide system for quantum repeaters.} \label{fig:CavWguide} \end{figure} Fig.~\ref{fig:CavWguide} shows a schematic of the type of system we are considering. A cavity containing a single dipole emitter is evanescently coupled to two waveguides. The cavity is assumed to have a single relevant mode, which couples only to the forward propagating fields (e.g. a whispering gallery mode). 
This system is equivalent to an input field reflecting off of a double-sided linear cavity, and our analysis applies equally to both cases. The dipole may be detuned by $\delta$ from cavity resonance, denoted $\omega_0$, while $g$ is the vacuum Rabi frequency of the dipole. Both waveguides are assumed to have equal coupling rates into the cavity. This condition is known as critical coupling, and should result in the input field from one waveguide being completely transmitted to the other when $\gamma\gg\kappa$~\cite{ManolatouKhan99}. When a dipole is placed inside the cavity, the cavity mode will split into two modes, the lower and upper polariton branches, that are shifted from the center frequency by the vacuum Rabi frequency. In the strong coupling regime, the vacuum Rabi frequency is sufficiently large that the cavity mode is split by more than a linewidth. In this regime, the cavity spectrum is no longer resonant with the input field, which now remains in its original waveguide. Our main interest, however, is in the weak coupling regime where the vacuum Rabi frequency does not exceed the cavity decay rate. In this case, the lower and upper polariton branches overlap significantly, and are still largely resonant with the input field. Nevertheless, the two branches can still destructively interfere in a narrow spectral region near zero detuning. This interference is analogous to the interference between the two dressed states of an atomic lambda system in Electromagnetically Induced Transparency. 
To establish this, we begin with the Heisenberg operator equations for the cavity field operator $\hat{\op{b}}$ and dipole operator $\op{\sigma_-}$, given by~\cite{WallsMilburn} \begin{eqnarray} \frac{d\hat{\op{b}}}{dt} & = & -\left(i\omega_0 + \gamma + \kappa/2 \right)\hat{\op{b}} - \sqrt{\gamma} \left(\a_{in} + \c_{in}\right) \nonumber\\ & & - \sqrt{\kappa}\e_{in} - ig\op{\sigma_-} \label{eq:Heisenbergb}\\ \frac{d\op{\sigma_-}}{dt} & = & -\left(i\left(\omega_0+\delta\right)+\frac{\tau}{2}\right)\op{\sigma_-} + ig\op{\sigma_z}\hat{\op{b}}-\hat{\op{f}} \label{eq:HeisenbergSig} \end{eqnarray} The operators $\a_{in}$ and $\c_{in}$ are the field operators for the flux of the two input ports of the waveguide, while $\e_{in}$ is the operator for potential leaky modes. The bare cavity has a resonant frequency $\omega_0$ and an energy decay rate $\kappa$ (in the absence of coupling to the waveguides). This decay rate is related to the cavity quality factor Q by $\kappa=\omega_0/Q$. The parameter $\gamma$ is the energy decay rate from the cavity into each waveguide. Similarly, the dipole operator $\op{\sigma_-}$ has a decay rate $\tau$, and $\hat{\op{f}}$ is a noise operator which preserves the commutation relation. The output fields of the waveguide, $\a_{out}$ and $\c_{out}$, are related to the input fields by~\cite{WallsMilburn} \begin{eqnarray} \a_{out} - \a_{in} & = & \sqrt{\gamma}\hat{\op{b}} \label{eq:ascat}\\ \c_{out} - \c_{in} & = & \sqrt{\gamma}\hat{\op{b}} \label{eq:cscat} \end{eqnarray} Eq.~\ref{eq:HeisenbergSig} is difficult to solve because the field operator $\hat{\op{b}}$ is multiplied by the time varying operator $\op{\sigma_z}$. However, we can significantly simplify the problem by looking at the weak excitation limit, where the quantum dot is predominantly in the ground state. In this limit, $\langle\op{\sigma_z}(t)\rangle\approx -1$ for all time, and we can substitute $\op{\sigma_z}(t)$ with its average value of $-1$. 
After deriving a solution, we will check the validity of this approximation. Assuming the cavity is excited by a weak monochromatic field with frequency $\omega$, we calculate the response of $\hat{\op{b}}$ and $\op{\sigma_-}$ using Fourier decomposition. We assume that the cavity decay rate is much faster than the dipole decay rate, so that $\tau/\gamma\approx 0$. This is a realistic assumption for a quantum dot coupled to a photonic crystal cavity, but does not necessarily apply in atomic systems coupled to very high-Q optical resonators. In this limit the waveguide input-output relations are given by the expressions \begin{eqnarray} \a_{out} & = & \frac{-\gamma \c_{in} + \left( -i\Delta\omega + \frac{\kappa}{2} +\frac{g^2}{-i\left(\Delta\omega - \delta\right) +\tau/2}\right)\a_{in} - \sqrt{\kappa\gamma}\e_{in}}{-i\Delta\omega + \gamma + \kappa/2 + \frac{g^2}{-i\left(\Delta\omega - \delta\right)+\tau/2}} \label{eq:asolved}\\ \c_{out} & = & \frac{-\gamma \a_{in} + \left( -i\Delta\omega + \frac{\kappa}{2} + \frac{g^2}{-i\left(\Delta\omega - \delta\right)+\tau/2}\right)\c_{in} -\sqrt{\kappa\gamma}\e_{in}}{-i\Delta\omega + \gamma + \kappa/2 + \frac{g^2}{-i\left(\Delta\omega - \delta\right)+\tau/2}} \label{eq:csolved} \end{eqnarray} where $\Delta\omega = \omega - \omega_0$. First, consider the case where the dipole is resonant with the cavity, so that $\delta=0$. In the ideal case, the bare cavity decay rate $\kappa$ is very small and can be set to zero. In this limit, when the field is resonant with the cavity and $g=0$ we have $\a_{in}=-\c_{out}$, as one would expect from critical coupling. In the opposite regime, when $2g^2/\tau\gg\gamma+\kappa/2$ we have $\a_{in}=\a_{out}$, so that the field remains in the original waveguide. This condition can be re-written as $F_p = 2g^2/[(\gamma + \kappa/2)\tau]\gg1$, where $F_p$ is the Purcell factor. Thus, in order to make the waveguide transparent, we need to achieve large Purcell factors. 
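The on-resonance behaviour is easy to check numerically. The following sketch (ours, not part of the paper) evaluates $|\a_{out}/\a_{in}|^2$ from Eq.~(\ref{eq:asolved}) with $\c_{in}=\e_{in}=0$, using the experimental parameter values quoted later in the text for Fig.~2:

```python
# Sketch (not from the paper): on-resonance transmission |a_out/a_in|^2
# implied by the input-output relation above, with c_in = e_in = 0.
# Parameters in THz: gamma = 1, kappa = 0.1, g = 0.33, tau = 0.001,
# i.e. the experimental values quoted later in the text for Fig. 2.

def transmission(dw, g, gamma, kappa, tau, delta=0.0):
    """|a_out / a_in|^2 for a single dipole in a critically coupled drop filter."""
    dipole = g**2 / (-1j * (dw - delta) + tau / 2)
    num = -1j * dw + kappa / 2 + dipole
    den = -1j * dw + gamma + kappa / 2 + dipole
    return abs(num / den) ** 2

T_dipole = transmission(0.0, g=0.33, gamma=1.0, kappa=0.1, tau=0.001)
T_empty  = transmission(0.0, g=0.0,  gamma=1.0, kappa=0.1, tau=0.001)
print(T_dipole)  # close to 1: the waveguide is nearly transparent
print(T_empty)   # close to 0: the bare drop filter removes the resonant field
```

On resonance the dipole term $2g^2/\tau$ dominates the extra $\gamma$ in the denominator, which is just the Purcell-factor condition $F_p\gg1$ stated above.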
However, we do not need the strong coupling regime $(g>\gamma+\kappa/2)$. When $\tau\ll\gamma+\kappa/2$ we can achieve transparency for much smaller values of $g$. In this sense, our scheme is best suited for implementation in photonic crystal cavities coupled to quantum dots. The small mode volumes of photonic crystal cavities, coupled with the large oscillator strength of quantum dots, allow us to achieve the large Purcell factors needed for proper operation~\cite{EnglundFattal05,BadolatoHennessy05,VuckovicFattal03}. The above condition has another interpretation that can be borrowed from atomic physics. The critical atom number $N_0=(2\gamma +\kappa)\tau/g^2$ and critical photon number $m_0=(\tau/2g)^2$ are defined as the number of atoms and photons in the cavity required to see modification of the cavity spectrum~\cite{Kimble}. Our condition is equivalent to $N_0\ll 1$, so a single emitter is enough to modify the cavity. Also, because $\tau\ll g$ we automatically have $m_0\ll 1$. We now go back and check the validity of our assumption that $\langle\op{\sigma_z}\rangle\approx -1$, which is equivalent to stating that $\langle\op{\sigma_+}\op{\sigma_-}\rangle\ll 1$. Using Equations~\ref{eq:Heisenbergb}-\ref{eq:cscat}, and assuming $F_p\gg 1$, we can show that on resonance, $\langle\op{\sigma_+}\op{\sigma_-}\rangle\ll 1$ is equivalent to the condition $\langle\a_{in}^{\dagger}\a_{in}\rangle\ll g^2/\gamma$. This condition basically states that the incoming photon flux $\langle\a_{in}^{\dagger}\a_{in}\rangle$ must be much smaller than the modified spontaneous emission decay rate of the emitter (in the limit that the cavity decay is dominated by $\gamma$), and is well satisfied in our operating regime. \begin{figure} \centering\includegraphics[width=5cm]{Figure2.eps} \caption{Probability for field in $\a_{in}$ to transmit into $\a_{out}$ and $\c_{out}$, respectively. (a) transmission with no dipole in cavity.
(b) transmission with a dipole in the cavity} \label{fig:scatter} \end{figure} Fig.~\ref{fig:scatter} plots the probability that $\a_{in}$ transmits into $\a_{out}$ and $\c_{out}$. We use realistic experimental parameters to create this plot. We set $\gamma=1$~THz, which is about a factor of 10 faster than $\kappa$ for a cavity with a quality factor of $Q=10,000$. We set $g=330$~GHz, a number calculated from FDTD simulations of cavity mode volume for a single defect dipole cavity in a planar photonic crystal coupled to a quantum dot~\cite{VuckovicYamamoto03}. The dipole decay rate is set to $\tau=1$~GHz, taken from experimental measurements~\cite{VuckovicFattal03}. Panel (a) of Fig.~\ref{fig:scatter} considers the case where the cavity does not contain a dipole. In this case $g=0$, representing a system where two waveguides are coupled by a cavity. This well-known structure is often referred to as a drop filter. The width of the transmission spectrum for the drop filter is determined by the lifetime of the cavity, which in our case is dominated by $\gamma$. The result for a cavity containing a dipole is plotted in panel (b). In this case, a very sharp peak in the transmission spectrum appears at $\Delta\omega=0$. This peak is caused by destructive interference of the cavity field, which prevents the input field from entering the cavity. On resonance, all of the field is now transmitted through the waveguide instead of being dropped to the other port. The spectral width of the transmission peak is roughly equal to $g$. It is important to note that the transmission is almost complete, even though $g$ is a factor of 3 smaller than the cavity decay rate of $\gamma+\kappa/2$. \begin{figure} \centering\includegraphics[width=5cm]{Figure3.eps} \caption{Transmission of waveguide as a function of $\delta$, the detuning of the dipole from the cavity.} \label{fig:DeltaPlot} \end{figure} We now consider the effect of detuning the dipole.
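The behavior plotted in Fig.~\ref{fig:scatter} can be reproduced directly from Eqs.~\ref{eq:asolved} and \ref{eq:csolved}. The sketch below (our own construction) assumes the field enters only port $\a_{in}$ (so $\c_{in}=\e_{in}=0$) and takes $\kappa=\gamma/10$ per the text; it also evaluates the figures of merit $F_p$, $N_0$, and $m_0$ discussed above:

```python
# Rates in THz; values follow the text: gamma = 1 THz, kappa ~ gamma/10
# for Q = 10,000, g = 330 GHz, tau = 1 GHz.
gamma, kappa, g, tau = 1.0, 0.1, 0.33, 1e-3

def transmission(dw, delta=0.0, g=g):
    """|a_out/a_in|^2 and |c_out/a_in|^2 from Eqs. (asolved)/(csolved),
    with the field injected only into port a (c_in = e_in = 0)."""
    dip = g**2 / (-1j*(dw - delta) + tau/2)   # dipole contribution
    den = -1j*dw + gamma + kappa/2 + dip      # common denominator
    t = (-1j*dw + kappa/2 + dip) / den        # amplitude staying in the waveguide
    r = -gamma / den                          # amplitude dropped to the other port
    return abs(t)**2, abs(r)**2

# Figures of merit quoted in the text.
Fp = 2*g**2 / ((gamma + kappa/2)*tau)         # Purcell factor, >> 1
N0 = (2*gamma + kappa)*tau / g**2             # critical atom number, << 1
m0 = (tau/(2*g))**2                           # critical photon number, << 1

T_dip, R_dip = transmission(0.0)              # dipole present, on resonance
T_bare, R_bare = transmission(0.0, g=0.0)     # empty cavity: drop filter
T_det, _ = transmission(2.0, delta=2.0)       # detuned dipole, probed at its frequency
```

With a dipole present the waveguide becomes almost fully transparent on resonance, while the empty cavity drops nearly all of the field to the other port; the transparency survives when the dipole is detuned, provided the probe follows the dipole frequency.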
The transmission spectrum for several values of $\delta$ is plotted in Fig.~\ref{fig:DeltaPlot}. Introducing a detuning in the dipole shifts the location of the transmission peak, so that destructive interference occurs when the field frequency is equal to the dipole frequency. Thus, we do not have to hit the cavity resonance very accurately to observe DIT. We only need to overlap the dipole resonance within the cavity transmission spectrum. \begin{figure} \centering\includegraphics[width=5cm]{Figure4.eps} \caption{Application of DIT to quantum repeaters. a) a method for generating entanglement between two dipoles using DIT. b) a non-destructive Bell measurement.} \label{fig:RepeaterFig} \end{figure} The fact that the state of a dipole can strongly modify the transmission spectrum of a waveguide can be extremely useful for quantum information processing. As one example, we now present a way in which DIT can be applied to engineering quantum repeaters for long distance quantum communication. Quantum repeaters can be implemented all optically~\cite{PanGasparoni03,WaksZeevi02}, as well as using atomic systems~\cite{DuanLukin01}. One of the main problems with these proposals is that it is difficult to implement the full Bell measurement required for swapping entanglement. This leads to a communication rate that decays exponentially with the number of repeaters. More recent proposals incorporate interaction between nuclear and electron spins to implement the full Bell measurement~\cite{ChildressTaylor05}. Here we propose a method for generating entanglement, as well as performing a full Bell measurement, on an atomic system using only its interaction with coherent fields. This leads to an extremely simple implementation of a quantum repeater. \begin{figure} \centering\includegraphics[width=5cm]{Figure5.eps} \caption{Panel (a), probability of detecting even parity for an odd parity state as a function of $\gamma$.
Panel (b), solid line plots the fidelity of the state $(\|gg>\pm\|mm>)/{\sqrt{2}}$ after a parity measurement. Dotted line plots the probability that the measuring field contains at least one photon for detection. \label{fig:FidSuc}} \end{figure} In panel (a) of Fig.~\ref{fig:RepeaterFig} we show how DIT can be used to generate entanglement between two spatially separated dipoles. A weak coherent beam is split on a beamsplitter, and each output port of the beamsplitter is then sent to one of two independent cavities containing dipoles. The waveguide fields are then mixed on a beamsplitter such that constructive interference is observed in ports $\hat{\op{f}}$ and $\hat{\op{h}}$. Each dipole is assumed to have three relevant states, a ground state, an excited state, and a long-lived metastable state, which we refer to as $\|g>$, $\|e>$, and $\|m>$ respectively. The transition from ground to excited state is assumed to be resonant with the cavity, while the metastable-to-excited-state transition is well off resonance from the cavity and is thus assumed not to couple to the cavity mode. The states $\|g>$ and $\|m>$ represent the two qubit states of the dipole. When the dipole is in state $\|m>$, it does not couple to the cavity, which then behaves as a drop filter. Thus, we have a system that transforms $\a_{in}^{\dagger}\|g>\|0>\to\a_{out}^{\dagger}\|g>\|0>$ and $\a_{in}^{\dagger}\|m>\|0>\to-\c_{out}^{\dagger}\|m>\|0>$. This operation can be interpreted as a C-NOT gate between the state of the dipole and the incoming light. When the dipole is in a superposition of the two states, this interaction generates entanglement between the path of the field and the dipole state. After the beamsplitter, this entanglement is transferred to the two dipoles. If the state of both dipoles is initialized to $(\|g> + \|m>)/\sqrt{2}$, it is straightforward to show that a detection event in port $\hat{\op{g}}$ or $\hat{\op{i}}$ collapses the system to $(\|g,m> - \|m,g> )/\sqrt{2}$.
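The claimed collapse can be checked with a toy amplitude calculation. The beamsplitter sign conventions and port ordering below are our assumptions, chosen so that constructive interference appears in ports $\hat{\op{f}}$ and $\hat{\op{h}}$; only the mapping $\a_{in}^{\dagger}\|g>\to\a_{out}^{\dagger}\|g>$, $\a_{in}^{\dagger}\|m>\to-\c_{out}^{\dagger}\|m>$ is taken from the text:

```python
import numpy as np

# Toy amplitude model for panel (a) of Fig. 4. Photon path index after the
# cavities: 0 = t1 (through, cavity 1), 1 = d1 (dropped, cavity 1),
# 2 = t2, 3 = d2. The output beamsplitters are taken (our convention) as
#   f = (t1 + t2)/sqrt(2),  g_port = (t1 - t2)/sqrt(2),
#   h = (d1 + d2)/sqrt(2),  i_port = (d1 - d2)/sqrt(2),
# so constructive interference appears in ports f and h.
plus = np.array([1.0, 1.0]) / np.sqrt(2)      # dipole state (|g> + |m>)/sqrt(2)

# Joint state over (qubit1, qubit2, photon path); qubit index 0 -> |g>, 1 -> |m>.
psi = np.zeros((2, 2, 4), dtype=complex)
for q1 in range(2):
    for q2 in range(2):
        amp = plus[q1] * plus[q2] / np.sqrt(2)   # 1/sqrt(2) from the input beamsplitter
        # photon sent to cavity 1: |g> -> through (+), |m> -> dropped with a minus sign
        psi[q1, q2, 0 if q1 == 0 else 1] += amp * (1.0 if q1 == 0 else -1.0)
        # photon sent to cavity 2, same rule with dipole 2
        psi[q1, q2, 2 if q2 == 0 else 3] += amp * (1.0 if q2 == 0 else -1.0)

# Rotate the path basis to the detector ports (f, g_port, h, i_port).
s = 1/np.sqrt(2)
B = np.array([[s, 0, s, 0], [s, 0, -s, 0], [0, s, 0, s], [0, s, 0, -s]])
psi_ports = np.einsum('pq,abq->abp', B, psi)

# Condition on a click in g_port (index 1): the dipoles collapse to the singlet.
cond = psi_ports[:, :, 1].ravel()             # order |gg>, |gm>, |mg>, |mm>
p_click = float(np.linalg.norm(cond)**2)      # probability of that click
cond = cond / np.linalg.norm(cond)
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
overlap = abs(np.vdot(singlet, cond))         # 1 up to a global phase
```

Under these conventions the conditional dipole state after a click in port $\hat{\op{g}}$ has unit overlap with $(\|g,m> - \|m,g>)/\sqrt{2}$, and the same holds for port $\hat{\op{i}}$.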
Another important operation for designing repeaters is a Bell measurement. Panel (b) of Fig.~\ref{fig:RepeaterFig} shows how to implement a complete Bell measurement between two dipoles using only cavity-waveguide interactions with coherent fields. The two cavities containing the dipoles are coupled to two waveguides. When a coherent field $\|\alpha>$ is sent down waveguide 1, each dipole will flip the field to the other waveguide if it is in state $\|m>$, and will keep the field in the same waveguide if it is in state $\|g>$. Thus, a detection event at ports $\a_{even}$ and $\a_{odd}$ corresponds to a parity measurement. A Bell measurement can be made by simply performing a parity measurement on the two dipoles, then a Hadamard rotation on both dipoles, followed by a second parity measurement. To understand why this works, consider the four Bell states $\|\phi_\pm> = (\|gg>\pm\|mm>)/\sqrt{2}$ and $\|\psi_\pm> = (\|gm>\pm\|mg>)/\sqrt{2}$. The first parity measurement distinguishes the states $\|\phi_\pm>$ from $\|\psi_\pm>$, since these two groups have opposite parity. After a Hadamard rotation on both dipoles, it is easy to verify that the states $\|\phi_+>$ and $\|\psi_->$ are unaffected (up to an overall phase), while $\|\phi_->\to\|\psi_+>$ and $\|\psi_+>\to\|\phi_->$, and thus flip parities. The second measurement then distinguishes $\|\phi_+>$ from $\|\phi_->$ and $\|\psi_+>$ from $\|\psi_->$, thereby completely distinguishing the four Bell states. It is important to note that this measurement is non-destructive, in that after the measurement the state of the dipoles remains in the measured state. The performance of the Bell apparatus is analyzed in Fig.~\ref{fig:FidSuc}. Panel (a) plots the probability that an odd parity state will falsely create a detection event in port $\a_{even}$, as a function of $\gamma$. The probability becomes high at large $\gamma$ due to imperfect transparency. It also increases at small $\gamma$ because of imperfect drop filtering.
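The parity bookkeeping behind the Bell measurement above is easy to verify directly; a minimal sketch in the basis $\{\|gg>,\|gm>,\|mg>,\|mm>\}$:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard on one dipole
HH = np.kron(H, H)                                     # rotation on both dipoles

s2 = np.sqrt(2)
# Basis order |gg>, |gm>, |mg>, |mm>  (g -> 0, m -> 1)
phi_p = np.array([1.0, 0.0, 0.0, 1.0]) / s2            # even parity
phi_m = np.array([1.0, 0.0, 0.0, -1.0]) / s2           # even parity
psi_p = np.array([0.0, 1.0, 1.0, 0.0]) / s2            # odd parity
psi_m = np.array([0.0, 1.0, -1.0, 0.0]) / s2           # odd parity

# H (x) H fixes phi_+ and psi_- (the latter up to a sign) and swaps
# phi_- <-> psi_+, so the two parity checks sandwiching the rotation
# resolve all four Bell states.
after = {name: HH @ v for name, v in
         [('phi_p', phi_p), ('phi_m', phi_m),
          ('psi_p', psi_p), ('psi_m', psi_m)]}
```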
The minimum value of about $10^{-3}$ is achieved at approximately 3~THz. In panel (b) of Fig.~\ref{fig:FidSuc} we plot both the fidelity and success probability of a parity measurement as a function of the number of photons in the probe field. The fidelity is calculated by applying the Bell measurement to the initial state $\|\psi_i>=(\|g,g> \pm \|m,m>)/\sqrt{2}$, and defining the fidelity of the measurement as $F=|\bk<\psi_f|\psi_i>|^2$, where $\|\psi_f>$ is the final state of the total system, which includes the external reservoirs. The probability of success is defined as the probability that at least one photon is contained in the field. The fidelity is ultimately limited by cavity leakage, which results in ``which path'' information being leaked to the environment. This information leakage depends on the strength of the measurement, which is determined by the number of photons in the probe fields. Using more probe photons results in a higher success probability, but a lower fidelity. To calculate this tradeoff, we use previously described values for cavity and reservoir losses, and set the coupling rate $\gamma$ to 4~THz, which is where the probability of false detection is near its minimum. At an average of three photons, a fidelity of over $90\%$ can be achieved with a success probability exceeding $95\%$. These numbers are already promising, and improved cavity and dipole lifetimes could lead to even better operation. This work was funded in part by the MURI center for photonic quantum information systems (ARO/DTO Program DAAD19-03-1-0199), and a Department of Central Intelligence postdoctoral grant.
\section{Introduction} \label{introduction} \setcounter{equation}{0} In many statistical applications, and especially in epidemiology and biostatistics, incomplete data arise for a variety of reasons; cf., Eaton and Kariya (1983), Garren and Peddada (2000), Little and Rubin (2002), Peddada, Harris, and Davidov (2010), Krishnamoorthy and Yu (2012), and Davidov and Peddada (2013). Consequently, much work has been done on explicit formulas that allow for statistical inference with incomplete data to achieve specified levels of significance. In this paper, we consider the problem of kurtosis tests for multivariate normality using two-step monotone incomplete data. Following our earlier work on monotone incomplete multivariate normal data (Chang and Richards, 2009, 2010; Richards and Yamada, 2010; Romer, 2009; Romer and Richards, 2010, 2013; Yamada, 2013), we write the data in the form \begin{equation} \label{monotonesample} \binom{{\boldsymbol{X}}_1}{{\boldsymbol{Y}}_1} \ \binom{{\boldsymbol{X}}_2}{{\boldsymbol{Y}}_2} \ \cdots \ \binom{{\boldsymbol{X}}_n}{{\boldsymbol{Y}}_n} \ \; \begin{matrix} \\ {\boldsymbol{Y}}_{n+1} \end{matrix} \ \ \begin{matrix} \\ {\boldsymbol{Y}}_{n+2} \end{matrix} \ \ \begin{matrix} \\ \cdots \end{matrix} \ \ \begin{matrix} \\ {\boldsymbol{Y}}_N \end{matrix} \end{equation} where each ${\boldsymbol{X}}_j$, $1 \le j \le n$, is $p \times 1$ and each ${\boldsymbol{Y}}_j$, $1 \le j \le N$, is $q \times 1$. As in our earlier work, we assume that the data are missing completely at random (MCAR). It follows from results of Eaton and Kariya (1983), Hao and Krishnamoorthy (2001), and others that an explicit solution for the likelihood equations for the covariance matrix requires the MCAR assumption. Thus, the MCAR assumption and the monotone data pattern ensure the validity of likelihood inference and a unique, explicit solution to the likelihood equations. 
In this paper, we define for data of the form (\ref{monotonesample}) a generalization of Mardia's statistic for testing kurtosis. We derive the asymptotic non-null and null distributions of the new statistic under certain regularity conditions. We apply our results to a well-known Pennsylvania cholesterol data set (Ryan, Joiner, and Cryer, 2005, p.~267), which has been used widely to illustrate statistical methods for analyzing monotone incomplete multivariate data. Our results provide an invariant test of normality for that data, adding to the literature on that subject (see Henze (2002) for an extensive account of invariant testing procedures in the complete case), and we complement results of Romer (2009) and Romer and Richards (2013), where exploratory methods were applied to the cholesterol data set. Our results are as follows. In Section \ref{prelim}, we provide the background for testing kurtosis with the data (\ref{monotonesample}). We define in Section \ref{kurtosis} the kurtosis statistic, $b_{2,p,q}$, and prove that $b_{2,p,q}$ is identical to a statistic constructed from the observed data and from data imputed by linear regression methods. Further, we establish an invariance property of $b_{2,p,q}$ which allows us to reduce the general problem to the canonical case in which the population has mean zero and identity covariance matrix. We state in Section \ref{asymptoticdistns} the null and non-null asymptotic distributions of $b_{2,p,q}$ corresponding, respectively, to the normal case and to a broad class of alternatives defined by moment conditions on the distribution, and we apply those results to the Pennsylvania cholesterol data, reaching conclusions similar to Romer (2009).
We derive in Appendix \ref{Sigma-asymptotics} asymptotic expansions of ${\widehat{\boldsymbol\Sigma}}$ and ${\widehat{\boldsymbol\Sigma}}^{-1}$, where ${\widehat{\boldsymbol\Sigma}}$ is an estimator of ${\boldsymbol\Sigma}$, the covariance matrix of the population underlying the sample (\ref{monotonesample}), and then in Appendix \ref{b1b2asymptotics} we apply those expansions to derive asymptotic expansions of two statistics used to construct $b_{2,p,q}$. Finally, we obtain in Appendix \ref{basymptotics} the null and non-null asymptotic distributions of $b_{2,p,q}$. \section{Notation and preliminaries} \label{prelim} \setcounter{equation}{0} Throughout the paper, we follow the notation of Chang and Richards (2009, 2010). Thus, all matrices and vectors are written in boldface type; ${\boldsymbol{I}}_d$ denotes the identity matrix of order $d$; and $\boldsymbol{0}$ denotes any matrix or vector of zeros, the dimension of which will be clear from the context. We also let $\tau = n/N$ denote the proportion of observations in (\ref{monotonesample}) that are complete, and set $\bar{\tau} = 1-\tau \equiv (N-n)/N$. Define the sample means \begin{equation} \label{samplemeans} \bar {\boldsymbol{X}} = \frac{1}{n} \sum_{j=1}^n {\boldsymbol{X}}_j, \qquad \bar {\boldsymbol{Y}}_1 = \frac{1}{n} \sum_{j=1}^n {\boldsymbol{Y}}_j, \qquad \bar {\boldsymbol{Y}}_2 = \frac{1}{N-n}\sum_{j=n+1}^N {\boldsymbol{Y}}_j, \end{equation} and \begin{equation} \label{grandmean} \bar {\boldsymbol{Y}} = \frac1N \sum_{j=1}^N {\boldsymbol{Y}}_j \equiv \tau \bar {\boldsymbol{Y}}_1+\bar{\tau} \bar {\boldsymbol{Y}}_2. 
\end{equation} We also define corresponding matrices of sums of squares and products, \begin{equation} \label{sumsqmatrices} \begin{array}{ll} {\boldsymbol{A}}_{11} = \sum\limits_{j=1}^n ({\boldsymbol{X}}_j-\bar {\boldsymbol{X}})({\boldsymbol{X}}_j-\bar {\boldsymbol{X}})', & {\boldsymbol{A}}_{12} = {\boldsymbol{A}}_{21}' = \sum\limits_{j=1}^n ({\boldsymbol{X}}_j-\bar {\boldsymbol{X}})({\boldsymbol{Y}}_j-\bar {\boldsymbol{Y}}_1)', \\ {\boldsymbol{A}}_{22,n} = \sum\limits_{j=1}^n ({\boldsymbol{Y}}_j-\bar {\boldsymbol{Y}}_1)({\boldsymbol{Y}}_j-\bar {\boldsymbol{Y}}_1)', & {\boldsymbol{A}}_{22,N} = \sum\limits_{j=1}^N ({\boldsymbol{Y}}_j-\bar {\boldsymbol{Y}})({\boldsymbol{Y}}_j-\bar {\boldsymbol{Y}})', \end{array} \end{equation} set ${\boldsymbol{A}}_{11\cdot2,n}={\boldsymbol{A}}_{11}-{\boldsymbol{A}}_{12}{\boldsymbol{A}}_{22,n}^{-1}{\boldsymbol{A}}_{21}$, and let \begin{equation} \label{Amatrix} {\boldsymbol{A}} = \begin{pmatrix}{\boldsymbol{A}}_{11} &{\boldsymbol{A}}_{12}\\ {\boldsymbol{A}}_{21}&{\boldsymbol{A}}_{22,n}\end{pmatrix}. \end{equation} Let ${\boldsymbol\mu}$ denote the mean and ${\boldsymbol\Sigma}$ the covariance matrix of the population underlying the data (\ref{monotonesample}); we assume that ${\boldsymbol\Sigma}$ is nonsingular. We partition ${\boldsymbol\mu}$ and ${\boldsymbol\Sigma}$ similar to (\ref{monotonesample}), so that $$ {\boldsymbol\mu} = \begin{pmatrix} {\boldsymbol\mu}_1 \\ {\boldsymbol\mu}_2 \end{pmatrix}, \qquad {\boldsymbol\Sigma} = \begin{pmatrix} {\boldsymbol\Sigma}_{11} & {\boldsymbol\Sigma}_{12} \\ {\boldsymbol\Sigma}_{21} & {\boldsymbol\Sigma}_{22} \end{pmatrix}, $$ where ${\boldsymbol\mu}_1$ and ${\boldsymbol\mu}_2$ are $p \times 1$ and $q \times 1$ vectors, respectively, and the submatrices ${\boldsymbol\Sigma}_{11}$, ${\boldsymbol\Sigma}_{12} = {\boldsymbol\Sigma}_{21}'$, and ${\boldsymbol\Sigma}_{22}$ are of order $p \times p$, $p \times q$, and $q \times q$, respectively. 
We also define $$ {\widehat{\boldsymbol\mu}} = \begin{pmatrix}{\widehat{\boldsymbol\mu}}_1 \\ {\widehat{\boldsymbol\mu}}_2\end{pmatrix}, \qquad {\widehat{\boldsymbol\Sigma}} = \begin{pmatrix} {\widehat{\boldsymbol\Sigma}}_{11} &{\widehat{\boldsymbol\Sigma}}_{12} \\ {\widehat{\boldsymbol\Sigma}}_{21}&{\widehat{\boldsymbol\Sigma}}_{22} \end{pmatrix}, $$ where \begin{equation} \label{muhat} {\widehat{\boldsymbol\mu}}_1 = \bar {\boldsymbol{X}} - \bar{\tau} {\boldsymbol{A}}_{12}{\boldsymbol{A}}_{22,n}^{-1}(\bar {\boldsymbol{Y}}_1-\bar {\boldsymbol{Y}}_2), \qquad {\widehat{\boldsymbol\mu}}_2 = \bar{{\boldsymbol{Y}}}, \end{equation} and \begin{equation} \begin{aligned} \label{Sigmahat} & {\widehat{\boldsymbol\Sigma}}_{11} = \frac1n {\boldsymbol{A}}_{11\cdot2,n} + \frac1N {\boldsymbol{A}}_{12}{\boldsymbol{A}}_{22,n}^{-1}{\boldsymbol{A}}_{22,N}{\boldsymbol{A}}_{22,n}^{-1}{\boldsymbol{A}}_{21}, \\ & {\widehat{\boldsymbol\Sigma}}_{12} = {\widehat{\boldsymbol\Sigma}}_{21}' = \frac1N {\boldsymbol{A}}_{12}{\boldsymbol{A}}_{22,n}^{-1}{\boldsymbol{A}}_{22,N}, \quad {\widehat{\boldsymbol\Sigma}}_{22} = \frac1N {\boldsymbol{A}}_{22,N}. \end{aligned} \end{equation} If the underlying population is multivariate normal, denoted by $N_{p+q}({\boldsymbol\mu},{\boldsymbol\Sigma})$, then ${\widehat{\boldsymbol\mu}}$ and ${\widehat{\boldsymbol\Sigma}}$ are the maximum likelihood estimators of ${\boldsymbol\mu}$ and ${\boldsymbol\Sigma}$, respectively. We refer to Chang and Richards (2009, 2010), Richards and Yamada (2010), Romer (2009), and Romer and Richards (2010) for results on the distributions of ${\widehat{\boldsymbol\mu}}$ and ${\widehat{\boldsymbol\Sigma}}$, and inference for ${\boldsymbol\mu}$ and ${\boldsymbol\Sigma}$. As noted by Chang and Richards (2009, p. 
1886), the statistical model underlying two-step monotone incomplete multivariate normal data is related to double sampling designs, in which additional data are collected on a subset of variables in order to improve estimation of a parameter; cf. Little (1976, p. 594) and Cohn, Davidov, and Haitovsky (2008). \section{Testing kurtosis with monotone incomplete data} \label{kurtosis} \setcounter{equation}{0} \subsection{The kurtosis statistic} \label{kurtosisstatistic} Consider a $d$-dimensional multivariate population represented by a random vector ${\boldsymbol{Z}}$ with mean vector ${\boldsymbol\mu}$ and nonsingular covariance matrix ${\boldsymbol\Sigma}$, and kurtosis parameter $$ \beta_{2,d} = E[({\boldsymbol{Z}}-{\boldsymbol\mu})'{\boldsymbol\Sigma}^{-1}({\boldsymbol{Z}}-{\boldsymbol\mu})]^2. $$ To perform inference for $\beta_{2,d}$ with the monotone incomplete data (\ref{monotonesample}), we define the statistic \begin{equation} \begin{aligned} \label{b2pq} b_{2,p,q} = \frac1N\Bigg\{c_1 \sum_{j=1}^n \Bigg[ & \Bigg(\binom{{\boldsymbol{X}}_j}{{\boldsymbol{Y}}_j}-{\widehat{\boldsymbol\mu}}\Bigg)'{\widehat{\boldsymbol\Sigma}}^{-1} \Bigg(\binom{{\boldsymbol{X}}_j}{{\boldsymbol{Y}}_j}-{\widehat{\boldsymbol\mu}}\Bigg)\Bigg]^2 \\ & + \, c_2 \sum_{j=n+1}^N \big[ ({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)'{\widehat{\boldsymbol\Sigma}}^{-1}_{22} ({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)\big]^2\Bigg\}, \end{aligned} \end{equation} where $c_1, c_2 > 0$ are constants. In general, $c_1$ and $c_2$ can depend on $n$ and $N$ subject to the conditions $c_1 = O(\tau)$, $c_2 = O(\bar{\tau})$, and $c_1, c_2 \not\to 0$ as $n, N \to \infty$; e.g., $(c_1,c_2) = (\tau,\bar{\tau})$ with $n/N \to \delta \in (0,1)$. 
Alternatively, $(c_1,c_2)$ can be chosen to minimize $\sigma^2$, the asymptotic variance of $b_{2,p,q}$ under the null hypothesis; this can be obtained by minimizing, over all $(c_1,c_2)$ subject to a suitable constraint on $c_1$ and $c_2$, a formula for $\sigma^2$ in (\ref{nullsigmasq}). Each term in (\ref{b2pq}) is an analog of the well-known statistic of Mardia (1970, 1974) for testing kurtosis with complete data, and our usage of different weights is due to the fact that the incomplete data ${\boldsymbol{Y}}_j$, $j=n+1,\ldots,N$ provide partial information about the population. We may also motivate the statistic $b_{2,p,q}$ as follows: First, we impute each missing observation ${\boldsymbol{X}}_j$, $j=n+1,\ldots,N$, using a linear regression imputation scheme, \begin{equation} \label{Xhatj} \widehat{\boldsymbol{X}}_j = \widehat{E}({\boldsymbol{X}}_j|{\boldsymbol{Y}}_j) \equiv {\widehat{\boldsymbol\mu}}_1 + {\widehat{\boldsymbol\Sigma}}_{12} {\widehat{\boldsymbol\Sigma}}_{22}^{-1} ({{\boldsymbol{Y}}}_{j}-{\widehat{\boldsymbol\mu}}_2), \end{equation} which is motivated by the formula for the conditional expectation of a partitioned multivariate normally distributed random vector. Under the hypothesis of multivariate normality, $\widehat{\boldsymbol{X}}_j$ is the maximum likelihood estimator of $E({\boldsymbol{X}}_j|{\boldsymbol{Y}}_j)$, the conditional expectation of ${\boldsymbol{X}}_j$ given ${\boldsymbol{Y}}_j$. Second, we use as our data the merged sets of observed and imputed data vectors, \begin{equation} \label{impute} \binom{{\boldsymbol{X}}_1}{{\boldsymbol{Y}}_1} \ \binom{{\boldsymbol{X}}_2}{{\boldsymbol{Y}}_2} \ \cdots \ \binom{{\boldsymbol{X}}_n}{{\boldsymbol{Y}}_n} \ \binom{\,\widehat{\boldsymbol{X}}_{n+1}}{{\boldsymbol{Y}}_{n+1}} \ \ \binom{\,\widehat{\boldsymbol{X}}_{n+2}}{{\boldsymbol{Y}}_{n+2}} \ \ \cdots \ \binom{\,\widehat{\boldsymbol{X}}_N}{{\boldsymbol{Y}}_N}. 
\end{equation} To perform inference about $\beta_{2,d}$, it is natural to use the statistic \begin{equation} \begin{aligned} \label{bhat} \widehat b_{2,p,q} = \frac1N\Bigg\{& c_1 \sum_{j=1}^n \Bigg[ \Bigg(\binom{{\boldsymbol{X}}_j}{{\boldsymbol{Y}}_j}-{\widehat{\boldsymbol\mu}}\Bigg)'{\widehat{\boldsymbol\Sigma}}^{-1} \Bigg(\binom{{\boldsymbol{X}}_j}{{\boldsymbol{Y}}_j}-{\widehat{\boldsymbol\mu}}\Bigg)\Bigg]^2 \\ & + \ c_2 \sum_{j=n+1}^N \Bigg[ \Bigg(\binom{\;\widehat{\boldsymbol{X}}_j}{{\boldsymbol{Y}}_j}-{\widehat{\boldsymbol\mu}}\Bigg)'{\widehat{\boldsymbol\Sigma}}^{-1} \Bigg(\binom{\;\widehat{\boldsymbol{X}}_j}{{\boldsymbol{Y}}_j}-{\widehat{\boldsymbol\mu}}\Bigg)\Bigg]^2\Bigg\}, \end{aligned} \end{equation} an analog of Mardia's statistic based on the vectors in (\ref{impute}). We again use possibly different weights, $c_1$ and $c_2$, to reflect the fact that some data are imputed, hence they provide less information about ${\boldsymbol\mu}$ and ${\boldsymbol\Sigma}$ than in the case in which all observations are fully observed. It is remarkable that $b_{2,p,q} \equiv \widehat{b}_{2,p,q}$, a result established in Theorem \ref{invariant} given in Section \ref{invariance}. \vskip5pt Set ${\boldsymbol{Z}}_j = \displaystyle\binom{{\boldsymbol{X}}_j}{{\boldsymbol{Y}}_j}$, $j=1,\ldots,n$; then, (\ref{b2pq}) becomes $$ b_{2,p,q} = \frac1N\Big\{c_1 \sum_{j=1}^n \big[({\boldsymbol{Z}}_j-{\widehat{\boldsymbol\mu}})'{\widehat{\boldsymbol\Sigma}}^{-1}({\boldsymbol{Z}}_j-{\widehat{\boldsymbol\mu}})\big]^2 + c_2 \sum_{j=n+1}^N \big[({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)'{\widehat{\boldsymbol\Sigma}}^{-1}_{22}({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)\big]^2\Big\}. 
$$ Also, let \begin{equation} \label{z1bar} \bar{\boldsymbol{Z}}_1 = \frac1n\sum_{j=1}^n {\boldsymbol{Z}}_j \equiv \binom{\bar{\boldsymbol{X}}}{\bar{\boldsymbol{Y}}_1}, \end{equation} and \begin{equation} \label{ytilde} \widetilde {\boldsymbol{Y}} = \begin{pmatrix} {\boldsymbol{A}}_{12}{\boldsymbol{A}}_{22,n}^{-1}(\bar{\boldsymbol{Y}}_1 - \bar{\boldsymbol{Y}}_2) \\ \bar{\boldsymbol{Y}}_1 - \bar{\boldsymbol{Y}}_2 \end{pmatrix} \equiv {\boldsymbol{A}} \begin{pmatrix} \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & {\boldsymbol{A}}_{22,n}^{-1} \end{pmatrix} \begin{pmatrix} \boldsymbol{0} \\ \bar {\boldsymbol{Y}}_1-\bar {\boldsymbol{Y}}_2 \end{pmatrix}. \end{equation} By a direct calculation using (\ref{Amatrix}) and (\ref{muhat}), we deduce that \begin{equation} \label{zjminusmuhat} {\boldsymbol{Z}}_j - {\widehat{\boldsymbol\mu}} = {\boldsymbol{Z}}_j - \bar{\boldsymbol{Z}}_1 + \bar{\tau} \widetilde{\boldsymbol{Y}}. \end{equation} \subsection{An invariance property of \texorpdfstring{$\boldsymbol{b_{2,p,q}}$}{b2pq}} \label{invariance} Define the statistics \begin{align*} b_{2,p,q}^{(1)} & = \sum_{j=1}^n \big[({\boldsymbol{Z}}_j-{\widehat{\boldsymbol\mu}})'{\widehat{\boldsymbol\Sigma}}^{-1}({\boldsymbol{Z}}_j-{\widehat{\boldsymbol\mu}})\big]^2, \\ \intertext{and} b_{2,p,q}^{(2)} & = \sum_{j=n+1}^N\big[({\boldsymbol{Y}}_j - {\widehat{\boldsymbol\mu}}_2)'{\widehat{\boldsymbol\Sigma}}^{-1}_{22}({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)\big]^2. \end{align*} Then, \begin{equation} \label{b-formula} b_{2,p,q} = \frac1N\Big(c_1b_{2,p,q}^{(1)} + c_2b_{2,p,q}^{(2)}\Big). \end{equation} Let ${\boldsymbol\Lambda}_{11}$ and ${\boldsymbol\Lambda}_{22}$ be $p \times p$ and $q \times q$ positive definite matrices, respectively; let ${\boldsymbol\Lambda}_{12}$ be a $p \times q$ matrix; and let ${\boldsymbol\nu}_1$ and ${\boldsymbol\nu}_2$ be $p \times 1$ and $q \times 1$ vectors, respectively. 
Set $$ {\boldsymbol\Lambda} = \begin{pmatrix} {\boldsymbol\Lambda}_{11} & \boldsymbol{0} \\ \boldsymbol{0} & {\boldsymbol\Lambda}_{22} \end{pmatrix}, \quad {\boldsymbol{C}} = \begin{pmatrix} {\boldsymbol{I}}_p & {\boldsymbol\Lambda}_{12} \\ \boldsymbol{0} & {\boldsymbol{I}}_q \end{pmatrix}, \quad {\boldsymbol\nu} = \begin{pmatrix} {\boldsymbol\nu}_1 \\{\boldsymbol\nu}_2 \end{pmatrix}, $$ and consider the group of affine transformations of the data (\ref{monotonesample}) of the form \begin{equation} \label{affine} \begin{array}{ccll} \displaystyle{\binom{{\boldsymbol{X}}_j}{{\boldsymbol{Y}}_j}} & \to & {\boldsymbol\Lambda} {\boldsymbol{C}} \displaystyle{\binom{{\boldsymbol{X}}_j}{{\boldsymbol{Y}}_j}} + {\boldsymbol\nu}, & j=1,\ldots,n \\ & &\\ {\boldsymbol{Y}}_j & \to & {\boldsymbol\Lambda}_{22}{\boldsymbol{Y}}_j + {\boldsymbol\nu}_2, & j=n+1,\ldots,N \end{array} \end{equation} Now we have the following result: \begin{theorem} \label{invariant} The kurtosis statistics $b_{2,p,q}^{(1)}$, $b_{2,p,q}^{(2)}$, and $b_{2,p,q}$ are invariant under the transformations (\ref{affine}). Moreover, $b_{2,p,q} \equiv \widehat{b}_{2,p,q}$. \end{theorem} \noindent{\it Proof}. Under the transformation (\ref{affine}), we verify using (\ref{samplemeans}) and (\ref{grandmean}) that $\bar{\boldsymbol{X}}$, $\bar{\boldsymbol{Y}}_1$, $\bar{\boldsymbol{Y}}_2$, and $\bar{\boldsymbol{Y}}$ are transformed to ${\boldsymbol\Lambda}_{11}(\bar{\boldsymbol{X}} + {\boldsymbol\Lambda}_{12}\bar{\boldsymbol{Y}}_1) + {\boldsymbol\nu}_1$, ${\boldsymbol\Lambda}_{22}\bar{\boldsymbol{Y}}_1 + {\boldsymbol\nu}_2$, ${\boldsymbol\Lambda}_{22}\bar{\boldsymbol{Y}}_2 + {\boldsymbol\nu}_2$, and ${\boldsymbol\Lambda}_{22}\bar{\boldsymbol{Y}} + {\boldsymbol\nu}_2$, respectively.
Further, by (\ref{sumsqmatrices}), the matrix ${\boldsymbol{A}}$ in (\ref{Amatrix}) is transformed to ${\boldsymbol\Lambda}{\boldsymbol{C}}{\boldsymbol{A}}{\boldsymbol{C}}'{\boldsymbol\Lambda}$, i.e., \begin{eqnarray} \label{Atransform} {\boldsymbol{A}}_{11} & \to & {\boldsymbol\Lambda}_{11}({\boldsymbol{A}}_{11} + {\boldsymbol\Lambda}_{12}{\boldsymbol{A}}_{21} + {\boldsymbol{A}}_{12}{\boldsymbol\Lambda}_{12}' + {\boldsymbol\Lambda}_{12}{\boldsymbol{A}}_{22,n}{\boldsymbol\Lambda}_{12}'){\boldsymbol\Lambda}_{11}, \nonumber \\ {\boldsymbol{A}}_{12} & \to & {\boldsymbol\Lambda}_{11}({\boldsymbol{A}}_{12} + {\boldsymbol\Lambda}_{12}{\boldsymbol{A}}_{22,n}){\boldsymbol\Lambda}_{22}, \\ {\boldsymbol{A}}_{22,n} & \to & {\boldsymbol\Lambda}_{22}{\boldsymbol{A}}_{22,n}{\boldsymbol\Lambda}_{22}. \nonumber \end{eqnarray} Hence ${\boldsymbol{A}}_{11\cdot 2,n} \to {\boldsymbol\Lambda}_{11}{\boldsymbol{A}}_{11\cdot 2,n}{\boldsymbol\Lambda}_{11}$ and ${\boldsymbol{A}}_{22,N} \to {\boldsymbol\Lambda}_{22}{\boldsymbol{A}}_{22,N}{\boldsymbol\Lambda}_{22}$. Further, it follows from (\ref{Sigmahat}) and (\ref{Atransform}) that ${\widehat{\boldsymbol\Sigma}}_{11\cdot 2}$ and ${\widehat{\boldsymbol\Sigma}}_{22}$ are transformed to ${\boldsymbol\Lambda}_{11}{\widehat{\boldsymbol\Sigma}}_{11\cdot 2}{\boldsymbol\Lambda}_{11}$ and ${\boldsymbol\Lambda}_{22}{\widehat{\boldsymbol\Sigma}}_{22}{\boldsymbol\Lambda}_{22}$, respectively.
By a well-known quadratic identity (Anderson, 2003, p.~63, Exercise~2.54), for $j=1,\ldots,n$, \begin{align} \label{quadidentity} ({\boldsymbol{Z}}_j-{\widehat{\boldsymbol\mu}})'&{\widehat{\boldsymbol\Sigma}}^{-1}({\boldsymbol{Z}}_j-{\widehat{\boldsymbol\mu}}) \nonumber \\ \equiv \ & \begin{pmatrix} {\boldsymbol{X}}_j-{\widehat{\boldsymbol\mu}}_1 \\ {\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2 \end{pmatrix}' \begin{pmatrix} {\widehat{\boldsymbol\Sigma}}_{11} &{\widehat{\boldsymbol\Sigma}}_{12} \\ {\widehat{\boldsymbol\Sigma}}_{21}&{\widehat{\boldsymbol\Sigma}}_{22} \end{pmatrix}^{-1} \begin{pmatrix} {\boldsymbol{X}}_j-{\widehat{\boldsymbol\mu}}_1 \\ {\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2 \end{pmatrix} \nonumber \\ = \ & \big({\boldsymbol{X}}_j-{\widehat{\boldsymbol\mu}}_1-{\widehat{\boldsymbol\Sigma}}_{12}{\widehat{\boldsymbol\Sigma}}_{22}^{-1}({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)\big)'{\widehat{\boldsymbol\Sigma}}_{11\cdot 2}^{-1}\big({\boldsymbol{X}}_j-{\widehat{\boldsymbol\mu}}_1-{\widehat{\boldsymbol\Sigma}}_{12}{\widehat{\boldsymbol\Sigma}}_{22}^{-1}({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)\big) \nonumber \\ & \qquad + ({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)'{\widehat{\boldsymbol\Sigma}}_{22}^{-1}({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2). 
\end{align} It follows from (\ref{muhat}) that ${\boldsymbol{X}}_j-{\widehat{\boldsymbol\mu}}_1-{\widehat{\boldsymbol\Sigma}}_{12}{\widehat{\boldsymbol\Sigma}}_{22}^{-1}({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2) = {\boldsymbol{X}}_j-\bar{\boldsymbol{X}}-{\boldsymbol{A}}_{12}{\boldsymbol{A}}_{22,n}^{-1}({\boldsymbol{Y}}_j-\bar{\boldsymbol{Y}}_1)$, and the latter expression is transformed by (\ref{affine}) to \begin{align*} {\boldsymbol\Lambda}_{11}({\boldsymbol{X}}_j + {\boldsymbol\Lambda}_{12}{\boldsymbol{Y}}_j) - {\boldsymbol\Lambda}_{11}(\bar{\boldsymbol{X}} + {\boldsymbol\Lambda}_{12}\bar{\boldsymbol{Y}}_1) - {\boldsymbol\Lambda}_{11}&({\boldsymbol{A}}_{12} + {\boldsymbol\Lambda}_{12}{\boldsymbol{A}}_{22,n}){\boldsymbol{A}}_{22,n}^{-1}({\boldsymbol{Y}}_j-\bar{\boldsymbol{Y}}_1) \\ & = {\boldsymbol\Lambda}_{11}\big({\boldsymbol{X}}_j-\bar{\boldsymbol{X}} - {\boldsymbol{A}}_{12}{\boldsymbol{A}}_{22,n}^{-1}({\boldsymbol{Y}}_j-\bar{\boldsymbol{Y}}_1)\big) \\ & \equiv {\boldsymbol\Lambda}_{11}\big({\boldsymbol{X}}_j-{\widehat{\boldsymbol\mu}}_1-{\widehat{\boldsymbol\Sigma}}_{12}{\widehat{\boldsymbol\Sigma}}_{22}^{-1}({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)\big). \end{align*} Recall that ${\widehat{\boldsymbol\Sigma}}_{11\cdot 2}$ is transformed to ${\boldsymbol\Lambda}_{11}{\widehat{\boldsymbol\Sigma}}_{11\cdot 2}{\boldsymbol\Lambda}_{11}$, so it follows that the first term in (\ref{quadidentity}) remains invariant under (\ref{affine}). In similar fashion, (\ref{affine}) transforms ${\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2$ to ${\boldsymbol\Lambda}_{22}({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)$, $j=1,\ldots,N$ and, as noted earlier, also transforms ${\widehat{\boldsymbol\Sigma}}_{22}$ to ${\boldsymbol\Lambda}_{22}{\widehat{\boldsymbol\Sigma}}_{22}{\boldsymbol\Lambda}_{22}$; consequently, the second term in (\ref{quadidentity}) remains invariant under (\ref{affine}). 
It follows that $b_{2,p,q}^{(1)}$ and $b_{2,p,q}^{(2)}$ are each invariant under (\ref{affine}); hence so is $b_{2,p,q}$. Finally, to show that $b_{2,p,q} \equiv \widehat{b}_{2,p,q}$, we apply the earlier quadratic identity (Anderson, 2003, loc. cit.) for $j=n+1,\ldots,N$ to obtain \begin{align*} \begin{pmatrix} \widehat{\boldsymbol{X}}_j-{\widehat{\boldsymbol\mu}}_1 \\ {\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2\end{pmatrix}' & \begin{pmatrix} {\widehat{\boldsymbol\Sigma}}_{11} &{\widehat{\boldsymbol\Sigma}}_{12} \\ {\widehat{\boldsymbol\Sigma}}_{21} &{\widehat{\boldsymbol\Sigma}}_{22} \end{pmatrix}^{-1} \begin{pmatrix} \widehat{\boldsymbol{X}}_j-{\widehat{\boldsymbol\mu}}_1 \\ {\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2\end{pmatrix} \\ = & \ \big(\widehat{\boldsymbol{X}}_j-{\widehat{\boldsymbol\mu}}_1-{\widehat{\boldsymbol\Sigma}}_{12} {\widehat{\boldsymbol\Sigma}}_{22}^{-1} ({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)\big)'{\widehat{\boldsymbol\Sigma}}_{11\cdot 2}^{-1} \big(\widehat{\boldsymbol{X}}_j-{\widehat{\boldsymbol\mu}}_1-{\widehat{\boldsymbol\Sigma}}_{12} {\widehat{\boldsymbol\Sigma}}_{22}^{-1} ({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)\big) \\ & \ + ({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)'{\widehat{\boldsymbol\Sigma}}_{22}^{-1}({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2). \end{align*} By the definition in (\ref{Xhatj}) of $\widehat{\boldsymbol{X}}_j$, it follows that $\widehat{\boldsymbol{X}}_j - {\widehat{\boldsymbol\mu}}_1 - {\widehat{\boldsymbol\Sigma}}_{12}{\widehat{\boldsymbol\Sigma}}_{22}^{-1}({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2) \equiv \boldsymbol{0}$. Therefore the last terms in (\ref{b2pq}) and (\ref{bhat}) are identical, so we obtain $b_{2,p,q} \equiv \widehat b_{2,p,q}$.
$\qed$ \smallskip \begin{remark} \label{normalcase} {\rm In the multivariate normal case, Romer and Richards (2010, Proposition 2.1) showed that the invariance of $b_{2,p,q}$ is due to the fact that ${\widehat{\boldsymbol\mu}}$ and ${\widehat{\boldsymbol\Sigma}}$, being maximum likelihood estimators, are equivariant. Specifically, ${\widehat{\boldsymbol\mu}}$ and ${\widehat{\boldsymbol\Sigma}}$ are transformed under (\ref{affine}) to ${\boldsymbol\Lambda}{\boldsymbol{C}}{\widehat{\boldsymbol\mu}} + {\boldsymbol\nu}$ and ${\boldsymbol\Lambda}{\boldsymbol{C}}{\widehat{\boldsymbol\Sigma}}{\boldsymbol{C}}'{\boldsymbol\Lambda}$, respectively; therefore the quadratic form $({\boldsymbol{Z}}_j-{\widehat{\boldsymbol\mu}})'{\widehat{\boldsymbol\Sigma}}^{-1}({\boldsymbol{Z}}_j-{\widehat{\boldsymbol\mu}})$, $j=1,\ldots,n$ is transformed to \begin{align*} \big(({\boldsymbol\Lambda}{\boldsymbol{C}}{\boldsymbol{Z}}_j+{\boldsymbol\nu})-({\boldsymbol\Lambda}{\boldsymbol{C}}{\widehat{\boldsymbol\mu}}+{\boldsymbol\nu})\big)' ({\boldsymbol\Lambda}{\boldsymbol{C}}{\widehat{\boldsymbol\Sigma}}{\boldsymbol{C}}'{\boldsymbol\Lambda})^{-1} \big(({\boldsymbol\Lambda}{\boldsymbol{C}}{\boldsymbol{Z}}_j+{\boldsymbol\nu})&-({\boldsymbol\Lambda}{\boldsymbol{C}}{\widehat{\boldsymbol\mu}}+{\boldsymbol\nu})\big) \\ & \equiv ({\boldsymbol{Z}}_j-{\widehat{\boldsymbol\mu}})'{\widehat{\boldsymbol\Sigma}}^{-1}({\boldsymbol{Z}}_j-{\widehat{\boldsymbol\mu}}), \end{align*} and this proves that $b_{2,p,q}^{(1)}$ is invariant under the group (\ref{affine}). It can be shown similarly that each quadratic form $({\boldsymbol{Y}}_j - {\widehat{\boldsymbol\mu}}_2)'{\widehat{\boldsymbol\Sigma}}^{-1}_{22}({\boldsymbol{Y}}_j-{\widehat{\boldsymbol\mu}}_2)$, $j=n+1,\ldots,N$ is invariant under (\ref{affine}), hence so is $b_{2,p,q}^{(2)}$. Therefore, in the multivariate normal case, the invariance of $b_{2,p,q}$ under (\ref{affine}) is a consequence of equivariance. 
In non-normal cases, however, the detailed computations in the proof of Theorem \ref{invariant} are necessary to prove that $b_{2,p,q}$ is invariant under (\ref{affine}). Moreover, now that it has been established that $b_{2,p,q}$ is invariant under (\ref{affine}), we then choose ${\boldsymbol\Lambda}_{11} = {\boldsymbol\Sigma}_{11 \cdot 2}^{-1/2}$, ${\boldsymbol\Lambda}_{22} = {\boldsymbol\Sigma}_{22}^{-1/2}$, ${\boldsymbol\Lambda}_{12} = -{\boldsymbol\Sigma}_{12}{\boldsymbol\Sigma}_{22}^{-1}$, and ${\boldsymbol\nu} = - {\boldsymbol\Lambda}{\boldsymbol{C}}{\boldsymbol\mu}$ to reduce the data to being mutually independent and monotone incomplete with mean $\boldsymbol{0}$ and covariance matrix ${\boldsymbol\Lambda}{\boldsymbol{C}}{\boldsymbol\Sigma}{\boldsymbol{C}}'{\boldsymbol\Lambda}' = {\boldsymbol{I}}_{p+q}$. Therefore, in deriving the distribution of $b_{2,p,q}$, we assume without loss of generality that ${\boldsymbol\mu} = \boldsymbol{0}$ and ${\boldsymbol\Sigma} = {\boldsymbol{I}}_{p+q}$. 
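As a numerical sanity check on this reduction (a Python sketch of our own; the random covariance matrix and block dimensions are arbitrary illustrations), one can verify that the map $({\boldsymbol{X}},{\boldsymbol{Y}}) \to ({\boldsymbol\Lambda}_{11}({\boldsymbol{X}}+{\boldsymbol\Lambda}_{12}{\boldsymbol{Y}}),{\boldsymbol\Lambda}_{22}{\boldsymbol{Y}})$, with the stated choices of ${\boldsymbol\Lambda}_{11}$, ${\boldsymbol\Lambda}_{12}$, and ${\boldsymbol\Lambda}_{22}$, indeed transforms ${\boldsymbol\Sigma}$ to the identity matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 2, 3

# An arbitrary positive definite covariance matrix, partitioned into blocks.
G = rng.standard_normal((p + q, p + q))
Sigma = G @ G.T + (p + q) * np.eye(p + q)
S11, S12 = Sigma[:p, :p], Sigma[:p, p:]
S21, S22 = Sigma[p:, :p], Sigma[p:, p:]

def inv_sqrt(M):
    """Symmetric inverse square root of a positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** -0.5) @ V.T

S11_2 = S11 - S12 @ np.linalg.inv(S22) @ S21   # Schur complement Sigma_{11.2}
L11 = inv_sqrt(S11_2)                          # Lambda_11 = Sigma_{11.2}^{-1/2}
L22 = inv_sqrt(S22)                            # Lambda_22 = Sigma_22^{-1/2}
L12 = -S12 @ np.linalg.inv(S22)                # Lambda_12 = -Sigma_12 Sigma_22^{-1}

# Combined linear map (X, Y) -> (L11 (X + L12 Y), L22 Y).
T = np.block([[L11, L11 @ L12], [np.zeros((q, p)), L22]])
print(np.allclose(T @ Sigma @ T.T, np.eye(p + q)))  # True
```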
}\end{remark} \section{The null and non-null asymptotic distributions of \texorpdfstring{$\boldsymbol{b_{2,p,q}}$}{b2pq}} \label{asymptoticdistns} For the $(p+q)$-dimensional random vector ${\boldsymbol{Z}} = \begin{pmatrix} {\boldsymbol{X}} \\ {\boldsymbol{Y}} \end{pmatrix}$, define \begin{equation} \label{Xidef} {\boldsymbol{\Xi}} = E\big(({\boldsymbol{Z}}\bZ')^2\big) = E\big({\boldsymbol{Z}}({\boldsymbol{Z}}'{\boldsymbol{Z}}){\boldsymbol{Z}}'\big) = E(\|{\boldsymbol{Z}}\|^2\,{\boldsymbol{Z}}\bZ'), \end{equation} and write ${\boldsymbol{\Xi}}$ in partitioned form, \begin{equation} \label{Xi} {\boldsymbol{\Xi}} \equiv \begin{pmatrix}{\boldsymbol{\Xi}}_{11}&{\boldsymbol{\Xi}}_{12}\\{\boldsymbol{\Xi}}_{21}&{\boldsymbol{\Xi}}_{22}\end{pmatrix} = \begin{pmatrix}E(\|{\boldsymbol{Z}}\|^2\,{\boldsymbol{X}}\bX')&E(\|{\boldsymbol{Z}}\|^2\,{\boldsymbol{X}}{\boldsymbol{Y}}')\\E(\|{\boldsymbol{Z}}\|^2\,{\boldsymbol{Y}}{\boldsymbol{X}}')&E(\|{\boldsymbol{Z}}\|^2\,{\boldsymbol{Y}}\bY')\end{pmatrix}. 
\end{equation} Define \begin{equation} \label{Xistar} {\boldsymbol{\Xi}}^* = E\big(({\boldsymbol{Y}}\bY')^2\big), \qquad {\boldsymbol{\Theta}}^*=E(\|{\boldsymbol{Y}}\|^2\,{\boldsymbol{Y}}); \end{equation} and set \begin{equation} \label{Xitilde} \wtilde{\boldsymbol{\Xi}} \equiv \begin{pmatrix} \wtilde{\boldsymbol{\Xi}}_{11}&\wtilde{\boldsymbol{\Xi}}_{12}\\ \wtilde{\boldsymbol{\Xi}}_{21}&\wtilde{\boldsymbol{\Xi}}_{22} \end{pmatrix} = \begin{pmatrix} c_1{\boldsymbol{\Xi}}_{11}&c_1{\boldsymbol{\Xi}}_{12}\\ c_1{\boldsymbol{\Xi}}_{21}&c_1\tau{\boldsymbol{\Xi}}_{22}+c_2\bar{\tau}{\boldsymbol{\Xi}}^* \end{pmatrix}, \end{equation} \begin{equation} \label{Theta} {\boldsymbol{\Theta}} \equiv \begin{pmatrix}{\boldsymbol{\Theta}}_1 \\ {\boldsymbol{\Theta}}_2\end{pmatrix} = \begin{pmatrix}E(\|{\boldsymbol{Z}}\|^2{\boldsymbol{X}}) \\ E(\|{\boldsymbol{Z}}\|^2{\boldsymbol{Y}}) \end{pmatrix}, \end{equation} and \begin{equation} \label{Thetatilde} \wtilde{\boldsymbol{\Theta}} \equiv \begin{pmatrix}\wtilde{\boldsymbol{\Theta}}_1 \\ \wtilde{\boldsymbol{\Theta}}_2 \end{pmatrix} = \begin{pmatrix}c_1{\boldsymbol{\Theta}}_1 \\ c_1\tau{\boldsymbol{\Theta}}_2+c_2\bar{\tau}{\boldsymbol{\Theta}}^*\end{pmatrix}. \end{equation} Our main result, given in the following theorem, provides the non-null distribution of $b_{2,p,q}$ for a large class of alternatives. \begin{theorem} \label{basympthm} Suppose that the monotone incomplete random sample (\ref{monotonesample}) is drawn from a population modeled by a random vector ${\boldsymbol{Z}}=\begin{pmatrix}{\boldsymbol{X}}\\{\boldsymbol{Y}}\end{pmatrix}$ such that $E({\boldsymbol{Z}}) = \boldsymbol{0}$, ${\rm{Cov}}({\boldsymbol{Z}}) = {\boldsymbol{I}}_{p+q}$, and $E\|{\boldsymbol{Z}}\|^8 < \infty$. 
For $n, N \rightarrow \infty$ with $n/N \to \delta \in (0,1)$, we have \begin{equation} \label{nonnulldistn} N^{1/2}\(b_{2,p,q} - \nu\)/\sigma \stackrel{\cal{L}}{\to} N(0,1), \end{equation} where \begin{equation} \label{nu} \nu = c_1\tau E(\|{\boldsymbol{Z}}\|^4) + c_2\bar{\tau} E(\|{\boldsymbol{Y}}\|^4) \end{equation} and \begin{equation} \label{sigmasq} \begin{aligned} \sigma^2 \ = \ & \tau \Big\{c_1^2 \, {\rm{Var}}\|{\boldsymbol{Z}}\|^4 + 4 \, {\rm{Var}}({\boldsymbol{Z}}'\widetilde{\boldsymbol{\Xi}}{\boldsymbol{Z}}) + 16 \, \widetilde{\boldsymbol{\Theta}}'\widetilde{\boldsymbol{\Theta}} \\ & \qquad - 4 \, c_1 \, {\rm{Cov}}(\|{\boldsymbol{Z}}\|^4, {\boldsymbol{Z}}'\widetilde{\boldsymbol{\Xi}}{\boldsymbol{Z}}) - 8 c_1 \, E\|{\boldsymbol{Z}}\|^4 {\boldsymbol{Z}}'\widetilde{\boldsymbol{\Theta}} + 16 \, E[{\boldsymbol{Z}}'\widetilde{\boldsymbol{\Xi}}{\boldsymbol{Z}}\bZ']\widetilde{\boldsymbol{\Theta}}\Big\} \\ & + \bar{\tau}\Big\{c_2^2 \, {\rm{Var}}\|{\boldsymbol{Y}}\|^4 + 4 \, {\rm{Var}}({\boldsymbol{Y}}'\widetilde{\boldsymbol{\Xi}}_{22}{\boldsymbol{Y}}) + 16 \, \widetilde{\boldsymbol{\Theta}}_2'\widetilde{\boldsymbol{\Theta}}_2 \\ & \qquad -4 \, c_2 {\rm{Cov}}(\|{\boldsymbol{Y}}\|^4, {\boldsymbol{Y}}'\widetilde{\boldsymbol{\Xi}}_{22}{\boldsymbol{Y}}) -8 c_2 \, E\|{\boldsymbol{Y}}\|^4{\boldsymbol{Y}}'\widetilde{\boldsymbol{\Theta}}_2 + 16 \, E[{\boldsymbol{Y}}'\widetilde{\boldsymbol{\Xi}}_{22}{\boldsymbol{Y}}\bY']\widetilde{\boldsymbol{\Theta}}_2\Big\}. \end{aligned} \end{equation} \end{theorem} The proof of this result is provided in Appendix \ref{basymptotics}. For the case in which ${\boldsymbol{Z}} \sim N_{p+q}(\boldsymbol{0},{\boldsymbol{I}}_{p+q})$, the limiting distribution (\ref{nonnulldistn}) reduces to the following result on the null distribution of $b_{2,p,q}$. \begin{corollary} \label{basympcor} Suppose that the monotone incomplete sample (\ref{monotonesample}) is drawn from $N_{p+q}(\boldsymbol{0},{\boldsymbol{I}}_{p+q})$. 
For $n, N \rightarrow \infty$ with $n/N \to \delta \in (0,1)$, we have \begin{equation} \label{nulldistn} N^{1/2}\(b_{2,p,q} - \nu\)/\sigma \stackrel{\cal{L}}{\to} N(0,1), \end{equation} where \begin{equation} \label{nullnu} \nu = c_1\tau (p+q)(p+q+2) + c_2\bar{\tau} q(q+2) \end{equation} and \begin{equation} \label{nullsigmasq} \begin{aligned} \sigma^2 = \ & 8 \tau\Big\{c^2_1(p+q)(p+q+2)(p+q+3)+ c^2_1p(p+q+2)^2 \\ & \qquad + q\big(c_1\tau(p+q+2)+c_2\bar\tau(q+2)\big)^2 \\ & \qquad - 2c_1(p+q+2)\Big(pc_1(p+q+2) + q\big(c_1\tau(p+q+2) + c_2\bar\tau(q+2)\big)\Big)\Big\} \\ & + 8 \bar\tau\Big\{c_2^2q(q+2)(q+3)+ q\big(c_1\tau(p+q+2)+c_2\bar\tau(q+2)\big)^2 \\ & \qquad\quad - 2c_2(q+2)q\big(c_1\tau (p+q+2)+ c_2\bar\tau(q+2)\big)\Big\}. \end{aligned} \end{equation} \end{corollary} \begin{remark} \label{specialcases} {\rm For $(c_1,c_2) = (\tau,\bar{\tau})$, (\ref{nullnu}) and (\ref{nullsigmasq}) reduce to \begin{align} \label{taumean} \nu & = \tau^2 (p+q)(p+q+2) + \bar\tau^2 q(q+2) \\ \intertext{and} \label{tauvariance} \sigma^2 & = 8\{\tau^3 (p+q)(p+q+2) + \bar\tau^3 q(q+2) + \tau\bar\tau(\tau(p+q+2)-\bar\tau(q+2))^2\}, \end{align} respectively. For $(c_1,c_2) = (1,1)$, (\ref{nullnu}) and (\ref{nullsigmasq}) reduce, respectively, to \begin{align} \label{mardiamean} \nu & = \tau (p + q)(p + q + 2) + \bar\tau q(q + 2) \\ \intertext{and} \label{mardiavariance} \sigma^2 & = 8\{\tau (p + q)(p + q + 2) + \bar\tau q(q + 2)+\tau\bar\tau p^2q\}. \end{align} Corollary \ref{basympcor} also generalizes the result of Mardia (1970) for complete data; indeed, when we set $\tau =1$, i.e., $n = N$ in (\ref{mardiamean}) and (\ref{mardiavariance}), we find that $\nu$ reduces to $(p+q)(p+q+2)$ and $\sigma^2$ reduces to $8(p+q)(p+q+2)$; hence (\ref{nulldistn}) reduces to the result of Mardia. }\end{remark} \subsection{Application to the Pennsylvania cholesterol data set} Assume that the Pennsylvania cholesterol data (Ryan, et al. 2005, p. 
267) consist of mutually independent vectors and that missing observations are MCAR. For that data set, we have $p = 1$, $q = 2$, $N = 28$, and $n = 19$, and we choose $c_1 = \tau$ and $c_2 = \bar{\tau}$. The asymptotic mean and variance of $b_{2,1,2}$ are obtained from (\ref{nullnu}) and (\ref{nullsigmasq}) to be $\nu = 7.7334$ and $\sigma^2 = 181.1658$, respectively. By Corollary \ref{basympcor}, the asymptotic null distribution of the statistic (\ref{nulldistn}) is $$ \sqrt{28}(b_{2,1,2} - 7.7334)/13.4598 \approx N(0,1). $$ We calculate using (\ref{b-formula}) that the observed value of $b_{2,1,2}$ is $5.8623$. Therefore, the observed value of the statistic (\ref{nulldistn}) is $$ \sqrt{28}(5.8623 - 7.7334)/13.4598 = -0.7356. $$ The approximate $P$-value of the test is $2\Phi(-0.7356) = 0.4620$, where $\Phi(\cdot)$ denotes the cumulative distribution function of the standard normal distribution. Therefore, we fail to reject the null hypothesis of multivariate normality at the 5\% level of significance. We note that the same conclusion is obtained by applying the classical Mardia statistic to the subset of the Pennsylvania cholesterol data set consisting of the $n = 19$ complete observations only. For this subset, the observed value of Mardia's statistic is $7.8176$, the corresponding observed value of the normal approximation to Mardia's statistic is $-0.1207$, hence the resulting approximate $P$-value for the Mardia test is $2\Phi(-0.1207) = 0.9038$. However, this $P$-value is so large that the test based on the complete data appears unable to assess the strength of the evidence against the null hypothesis of normality, and this reflects the loss of information inherent in discarding the incomplete observations; cf. Little and Rubin (2002), p. 41.
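The arithmetic in this example is easily reproduced; the following Python sketch (our own illustration, which takes the reported asymptotic standard deviation $13.4598$ as given) recomputes the asymptotic mean from (\ref{nullnu}) with $(c_1,c_2)=(\tau,\bar{\tau})$ and the observed value of the standardized statistic:

```python
import math

p, q, N, n = 1, 2, 28, 19
tau = n / N
taubar = 1 - tau

# Asymptotic null mean with (c1, c2) = (tau, taubar):
# nu = tau^2 (p+q)(p+q+2) + taubar^2 q(q+2)
nu = tau ** 2 * (p + q) * (p + q + 2) + taubar ** 2 * q * (q + 2)
print(round(nu, 4))  # 7.7334

# Standardized statistic for the observed b_{2,1,2} = 5.8623.
z = math.sqrt(N) * (5.8623 - nu) / 13.4598
print(round(z, 4))   # -0.7356
```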
By contrast, the $P$-value based on the full data set appears to provide some measure of the strength of the evidence against the null hypothesis, even though the strength of that evidence is assessed to be too weak to reject that hypothesis. We remark also that, in the case of the cholesterol data, the smaller sample size is less likely to yield an accurate normal approximation to Mardia's statistic, so the substantially larger $P$-value of the Mardia test for the complete data should perhaps be interpreted cautiously. \vskip 0.4truein \noindent {\bf Acknowledgments.} We are grateful to the referees and the associate editor for comments which helped us to improve the manuscript. The research of Richards was also supported by a 2013--2014 sabbatical leave-of-absence and a Romberg Guest Professorship at the Heidelberg University Graduate School for Mathematical and Computational Methods in the Sciences, funded by the German Universities Excellence Initiative grant GSC 220/2. \bigskip \bigskip \bigskip { \noindent {\bf\Large References} \medskip \parskip=3pt \parindent=0pt Anderson, T.~W.~(2003). {\sl An Introduction to Multivariate Statistical Analysis} (third edition). Wiley, New York. Chang, W.-Y., and Richards, D.~St.~P. (2009). Finite-sample inference with monotone incomplete multivariate normal data, I. {\it J. Multivariate Anal.}, {\bf 100}, 1883--1899. Chang, W.-Y., and Richards, D.~St.~P. (2010). Finite-sample inference with monotone incomplete multivariate normal data, II. {\it J. Multivariate Anal.}, {\bf 101}, 603--620. Cohn, N., Davidov, O., and Haitovsky, Y. (2008). Double sampling designs in multivariate linear models with missing data. {\it Comm. Statist. Simulation Comput.}, {\bf 37}, 1156--1166. Davidov, O., and Peddada, S.~D. (2013). The linear stochastic order and directed inference for multivariate ordered distributions. {\it Ann. Statist.}, {\bf 41}, 1--40. Eaton, M. L., and Kariya, T. (1983). Multivariate tests with incomplete data. {\it Ann.
Statist.}, {\bf 11}, 654--665. Garren, S. T., and Peddada, S. D. (2000). Asymptotic normality in multivariate nonlinear regression and multivariate generalized linear regression models under repeated measurements with missing data. {\it Statist. Probab. Lett.}, {\bf 48}, 293--302. Hao, J., and Krishnamoorthy, K. (2001). Inferences on a normal covariance matrix and generalized variance with monotone missing data. {\it J. Multivariate Anal.}, {\bf 78}, 62--82. Henze, N.~(1994). On Mardia's kurtosis test for multivariate normality. {\it Commun. Statist. -- Theory \& Methods}, {\bf 23}, 1031--1045. Henze, N. (2002). Invariant tests for multivariate normality: A critical review. {\it Statist. Papers}, {\bf 43}, 467--506. Krishnamoorthy, K., and Yu, J. (2012). Multivariate Behrens-Fisher problem with missing data. {\it J. Multivariate Anal.}, {\bf 105}, 141--150. Little, R. J. A. (1976). Inference about means from incomplete multivariate data. {\it Biometrika}, {\bf 63}, 593--604. Little, R. J. A., and Rubin, D. B. (2002). {\sl Statistical Analysis with Missing Data} (second edition). Wiley, Hoboken, NJ. Mardia, K.~V. (1970). Measures of multivariate skewness and kurtosis with applications. {\it Biometrika}, {\bf 57}, 519--530. Mardia, K.~V. (1974). Applications of some measures of multivariate skewness and kurtosis in testing normality and robustness studies. {\it Sankhy\=a} B, {\bf 36}, 115--128. Peddada, S.~D., Harris, S., and Davidov, O. (2010). Analysis of correlated gene expression data on ordered categories. {\it J. Ind. Soc. Agric. Statist.}, {\bf 64}, 45--60. Richards, D.~St.~P., and Yamada, T. (2010). The Stein phenomenon for monotone incomplete multivariate normal data. {\it J. Multivariate Anal.}, {\bf 101}, 657--678. Romer, M.~M. (2009). {\sl The Statistical Analysis of Monotone Incomplete Multivariate Normal Data}. Doctoral Dissertation, Penn State University. Romer, M. M., and Richards, D.~St.~P. (2010).
Maximum likelihood estimation of the mean of a multivariate normal population with monotone incomplete data. {\it Statist. \& Probab. Lett.}, {\bf 80}, 1284--1288. Romer, M.~M., and Richards, D.~St.~P. (2013). Finite-sample inference with monotone incomplete multivariate normal data, III: Hotelling's $T^2$-statistic. {\it Statist. Modelling}, {\bf 13}, 431--457. Ryan, B., Joiner, B., and Cryer, J. (2005). {\sl Minitab Handbook} (fifth edition). Duxbury Press, Boston. Yamada, T. (2013). Asymptotic properties of canonical correlation analysis for one group with additional observations. {\it J. Multivariate Anal.}, {\bf 114}, 389--401. }
\section{Introduction} \la{intro} In a previous paper \cite{Rahmfeld1}, we argued that the spectrum of elementary BPS ($N_R=1/2$) states of compactified heterotic strings could be identified with extremal electrically charged black holes. Further evidence for this interpretation was supplied in \cite{Sen2,Peet,Khurimyers,Callan,Shiraishi}. In particular, the $N_L=1$ states and the $N_L>1$ states (with vanishing left-moving internal momentum) admit a single scalar/Maxwell interpretation with parameters $a=\sqrt{3}$ or $a=1$, respectively. In other words, by choosing appropriate combinations of dilaton and moduli fields to be the scalar field $\phi$ and appropriate combinations of the field strengths to be the Maxwell field $F$, the field equations can be consistently truncated to a form given by the Lagrangian \begin{equation} {\cal L}= \frac{1}{32\pi}\sqrt{-g}\left [R-\frac{1}{2}(\partial \phi)^2-\frac{1}{4}e^{-a \phi}F^2 +...\right] \la{actiona} \end{equation} for these two values of $a$, these combinations being just those corresponding to the quantum numbers of the string states\footnote{A {\it consistent} truncation is defined to be one for which all solutions of the truncated theory are solutions of the original theory. The dots in (\ref{actiona}) refer to terms involving a pseudoscalar combination of axion and moduli fields which are in general required for consistency but which do not contribute to non-rotating black hole solutions. We are grateful to R. Myers for pointing out their omission in the original version of this paper.}. In the case of zero angular momentum, the ADM mass $M_{black}$ of the extremal black hole solutions of (\ref{actiona}) is given by \begin{equation} M_{black}^2=\frac{Q^2}{4(1+a^2)} \la{2} \end{equation} where $Q=\int \tilde F/8\pi=\int e^{-a\phi}\,{*F}/8\pi$ is the electric charge, where $*$ denotes the Hodge dual and where, for simplicity, we have set the asymptotic value of $\phi$ to zero.
The $a=1$ case yields the supersymmetric dilaton black hole \cite{Gibbons}. The $a=\sqrt{3}$ case corresponds to the Kaluza-Klein black hole and the ``winding'' black hole \cite{Khurinew}, which are related to each other by $T$-duality. The Kaluza-Klein solution has been known for some time \cite{Gibbons} but was only recently recognized \cite{Khurinew} as a heterotic string solution. We further argued that the corresponding solitonic magnetically charged and dyonic spectrum, predicted by $S$-duality \cite{Schwarz1}, is also described by extremal black holes. Indeed, we were motivated to make the black hole conjecture for the elementary states by first noting that the ``winding'' magnetic monopoles are extremal black holes \cite{Khurinew} and then noting that string/string duality interchanges the roles of $S$-duality and $T$-duality, and therefore that the solitonic monopoles play the same role for the dual string as the elementary winding states play for the fundamental string\footnote{Of course this involves extending the classical notion of a black hole from the weak coupling to the strong coupling regime. We will therefore take the liberty of describing a state by the words {\it black hole} if there exists at least one string picture in which its mass exceeds the Planck mass for weak coupling.}. By allowing $F$ to describe combinations of field strengths and their duals, other dyons, preserving fewer supersymmetries and therefore not predicted by $S$-duality alone, can also be assigned $a$ values. One finds $a=1/\sqrt{3}$ and $a=0$. The $a=0$ case yields the Reissner-Nordstrom solution\footnote{The Reissner-Nordstrom black hole is not a solution of dimensionally reduced pure gravity but has long been known to be a solution of $M$-theory \cite{Duffpope,Pope}.} which, notwithstanding contrary claims in the literature, does solve the low-energy string equations \cite{Khurinew,Rahmfeld1}.
The $a=1/\sqrt{3}$ black hole \cite{Horowitz2} was identified as a dyonic solution in \cite{Rahmfeld3}. These four values of $a$ yield solutions which are special cases of the most general solution subsequently found in \cite{Cveticyoum}, and shown to be exact to all orders in $\alpha'$ in \cite{Cvetictseytlin}. In the $N=2$ theory, black holes with $a=\sqrt 3,1,1/\sqrt 3,0$ all preserve $1$ supersymmetry and therefore belong to fundamental supermultiplets with maximum spin $1/2$ \cite{Rahmfeld3}. In the $N=4$ theory, black holes with $a=\sqrt{3},1,1/\sqrt{3},0$ preserve $2,2,1,1$ supersymmetries, respectively, and therefore belong to fundamental supermultiplets with maximum spins $1,1,3/2,3/2$ \cite{Kalloshpeet,Rahmfeld1,Rahmfeld3}. This same black hole interpretation could be extended to the spectrum of BPS states of toroidally compactified Type $II$ strings \cite{Hull}, where the supersymmetric black holes admitting a single scalar/Maxwell interpretation correspond once again to the same four values $a=\sqrt{3},1,1/\sqrt{3},0$. They preserve $4,2,1,1$ of the $N=8$ supersymmetries, respectively \cite{Popelu1,Khuriortin}, and therefore belong to fundamental supermultiplets with maximum spins $2,3,7/2,7/2$. On the basis of their mass and charge assignments, it was further suggested \cite{Rahmfeld1,Rahmfeld3} that we interpret these four values of $a$ as $1$-, $2$-, $3$- and $4$-particle bound states with zero binding energy. This is reviewed in section (\ref{bound}). For example, the Reissner-Nordstrom ($a=0$) black hole equals four Kaluza-Klein ($a=\sqrt{3}$) black holes! This zero-binding-energy bound-state conjecture can, in fact, be verified in the classical black hole picture by finding explicit $4$-centered black hole solutions which coincide with the $a=\sqrt{3},1,1/\sqrt{3},0$ solutions as we bring $1,2,3,4$ centers together and take the remaining $3,2,1,0$ centers out to infinity \cite{Rahmfeld4}.
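The zero-binding-energy arithmetic can be illustrated directly from (\ref{2}). In the Python sketch below, the assumption (ours, for illustration) is that the $n$-particle bound state carries unit charge $Q$ in each of $n$ orthogonal $U(1)$'s, so that the effective charge in the single scalar/Maxwell truncation is $\sqrt{n}\,Q$; the mass of the $n$-particle state then comes out to exactly $n$ times the mass of a single $a=\sqrt{3}$ constituent:

```python
import math

def adm_mass(a, Q):
    # Extremal ADM mass from M^2 = Q^2 / (4 (1 + a^2)), eq. (2).
    return Q / (2.0 * math.sqrt(1.0 + a * a))

Q = 1.0
M1 = adm_mass(math.sqrt(3.0), Q)  # a single a = sqrt(3) constituent

# n-particle bound states: a = sqrt(3), 1, 1/sqrt(3), 0 for n = 1, 2, 3, 4,
# with assumed effective charge sqrt(n) * Q in the truncated Maxwell field.
for n, a in [(1, math.sqrt(3.0)), (2, 1.0), (3, 1.0 / math.sqrt(3.0)), (4, 0.0)]:
    M_n = adm_mass(a, math.sqrt(n) * Q)
    assert abs(M_n - n * M1) < 1e-12  # zero binding energy: M_n = n M1
print("zero-binding-energy check passed")
```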
Such a construction is possible because of the appearance of four independent harmonic functions \cite{Cvetictseytlin}. Moreover, this provides a novel realization of the {\it no-force} condition in that the charge carried by each black hole corresponds to a different $U(1)$. Thus the gravitational attraction is cancelled not by an electromagnetic repulsion but rather by a subtle repulsion due to scalar exchange. This phenomenon was also observed in \cite{Kalloshcancel}. In section (\ref{supermultiplets}) we shall provide further evidence for the bound state interpretation in the quantum string state picture by showing that not only the masses, electric charges and magnetic charges but also the spins and supermultiplet structures are consistent with this interpretation of the $a=0,1/\sqrt 3,1$ string states being merely bound states of the fundamental $a=\sqrt{3}$ states. This is entirely consistent with the claim of \cite{Holzhey}, based on completely different reasoning, that only $a>1$ dilaton black holes can be interpreted as {\it elementary} particles. Can this interpretation also apply to the recently discussed {\it massless} black holes \cite{Strominger2,Cveticyoum2,Behrndt,Kalloshcancel,Chancvetic}? Classically, the answer is yes, in that there exist $2$-centered solutions which coincide with the massless black hole as we bring the two centers together. In this case, however, it is necessary to assume that one of the constituents has a negative mass, and it therefore seems unlikely that this bound state interpretation can survive quantum-mechanically. For the purely electric elementary string states, where it makes sense to assign an oscillator number, the massive BPS states in the heterotic theory are given by the $(N_R=1/2,N_L\geq1)$ states \cite{Rahmfeld1} and the massless BPS states belong to the $(N_R=1/2,N_L=0)$ sector \cite{Behrndt}.
Curiously, however, it is possible to extend the black hole bound state interpretation to non-BPS states: for example, the non-supersymmetric $a=1$ dilaton black hole of \cite{Garfinkle} corresponds to $(N_R=3/2,N_L=1)$. There is now a consensus that all of string theory and its duality properties follow from an underlying eleven-dimensional theory \cite{Howe,Townsend,Witten,Duffliuminasian,Schwarz2,Horava}, now known as {\it $M$-theory}. In section (\ref{multi}) we turn to the black branes of $M$-theory \cite{Dufflupope2,Klebanov1,Mukherji2,Muto}, where (\ref{actiona}) is now in an arbitrary spacetime dimension $D\leq11$ and $F$ is now a $(p+2)$-form. The parameter $a$ can be conveniently re-expressed as \cite{Popestainless} \begin{equation} a^2 = \Delta - \frac{2(p+1)(D-p-3)}{D-2}, \end{equation} since $\Delta$ is a parameter that is preserved under dimensional reduction \cite{Popestainless}. Originally, attention was focussed on the $\Delta=4$ solutions \cite{Strominger1,Dufflu,Khuristring}, but various new supersymmetric solitons with $\Delta\neq 4$ have recently been studied \cite{Popestainless,Popelu1,Popelu2}. These authors proposed to classify $p$-branes as ``rusty'' or ``stainless'' according as they can or cannot be ``oxidized'' to isotropic brane solutions of a higher-dimensional supergravity. (Oxidation is the opposite of double dimensional reduction \cite{Howe,fifteen}.) Examples of new stainless solutions included a $\Delta=2$ $5$-brane in $D=9$ and $\Delta=2,4/3$ strings in $D=5$\footnote{The $\Delta=4$ $6$-brane in $D=9$ can, in fact, be oxidized to a Type $IIB$ $7$-brane in $D=10$.}. The authors of \cite{Popestainless} then raised the question of whether these new $\Delta \neq 4$ branes deserve to be treated as fundamental in their own right.
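For black holes ($p=0$, $D=4$) this relation reduces to $a^2=\Delta-1$, so the four values $a=\sqrt{3},1,1/\sqrt{3},0$ correspond precisely to $\Delta=4,2,4/3,1$, i.e., to $\Delta=4/n$ with $n=1,2,3,4$. A quick numerical check (Python sketch, our own illustration):

```python
def a_squared(Delta, D, p):
    # a^2 = Delta - 2 (p + 1)(D - p - 3) / (D - 2)
    return Delta - 2.0 * (p + 1) * (D - p - 3) / (D - 2)

# Black holes: p = 0, D = 4, Delta = 4/n for n = 1, ..., 4.
expected = {1: 3.0, 2: 1.0, 3: 1.0 / 3.0, 4: 0.0}  # a^2 = 3, 1, 1/3, 0
for n, a2 in expected.items():
    assert abs(a_squared(4.0 / n, 4, 0) - a2) < 1e-12
print("D = 4 black holes: a^2 = Delta - 1 confirmed")
```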
In this section we shall generalize the treatment of extremal black holes \cite{Rahmfeld4} and confirm at the level of classical solutions that these new $\Delta=4/n$ $p$-branes can also be regarded as bound states with zero binding energy of $n$ fundamental $\Delta=4$ branes. We find new $m$-centered $p$-brane solutions which reproduce the $\Delta=4/n$ solutions of \cite{Popelu2}, for $1\leq n \leq m$, as we allow $n$ of the centers to coincide and take the remaining $(m-n)$ out to infinity. In particular, the $\Delta=2$ fivebrane is a bound state of two $\Delta=4$ fivebranes, and the $\Delta=2$ and $\Delta=4/3$ strings are, respectively, bound states of two and three fundamental $\Delta=4$ strings. In section (\ref{entropy}), we also discuss the temperature and entropy of the extreme $a=\sqrt{3}$, $1$, $1/\sqrt{3}$, $0$ black holes. Here a good deal more remains to be understood, since the dilaton vanishes only for the $a=0$ solution, and so the classical entropy prediction for the other three cases (namely zero) is unreliable. Similar remarks apply to the macroscopic entropy and temperature of the other $p$-branes. \section{The bound state conjecture} \la{bound} Let us begin by recalling the bound state conjecture in the context of the four-dimensional heterotic string obtained by toroidal compactification. At a generic point in the moduli space of vacuum configurations the unbroken gauge symmetry is $U(1)^{28}$ and the low energy effective field theory is described by $N=4$ supergravity coupled to $22$ abelian vector multiplets. Using the canonical metric, the bosonic part of the Lagrangian is given by \cite{Sen} \begin{equation} {\cal{L}}={1\over32\pi}\sqrt{-g}\left[R-\frac{1}{2}(\partial\eta)^2 -\frac{1}{12}e^{-2\eta}H^2+{1\over8}{\rm Tr}(\partial {\cal M} L\,\partial {\cal M} L)-{1\over4}e^{-\eta}F^T ( L {\cal M} L)F\right], \la{full} \end{equation} where $L$ is the metric of $O(6,22)$.
${\cal M}={\cal M}^T\in O(6,22)/O(6)\times O(22)$ parametrizes the scalars in the sigma model, the 28 $F_{\mu\nu}$'s are the $U(1)$ field strengths, $\eta$ is the four-dimensional dilaton and $H$ is the $3$-form field strength with the usual Chern-Simons terms. The string theory has a perturbative $O(6,22;Z)$ $T$-duality that transforms Kaluza-Klein states into winding states, and a non-perturbative $SL(2,Z)$ $S$-duality that transforms electric states into magnetic states. This is reflected in the $O(6,22;R)$ invariance of the Lagrangian (\ref{full}) and the $SL(2,R)$ invariance of its equations of motion. Let us work at a special point in the moduli space and set the asymptotic value of ${\cal M}$ to $I$, and the asymptotic dilaton field to zero. We shall return to the general case later. Let us define $\vec{Q}_{R,L}$ and $\vec{P}_{R,L}$ by \begin{eqnarray} \vec{Q}_{R,L}&=&\frac{1}{2}(I\pm L)\vec{Q}, \nonumber \\ \vec{P}_{R,L}&=&\frac{1}{2}(I\pm L)\vec{P}, \end{eqnarray} where $\vec{Q}$ and $\vec{P}$ are the $28$-dimensional electric and magnetic charge vectors. Denoting by $N_L$ and $N_R$ the numbers of left and right oscillators, respectively, the mass of an elementary (purely electric) string state is given by \begin{equation} M_{string}^2=\frac{1}{8}(\vec{Q}_R^2+2 N_R-1)=\frac{1}{8}(\vec{Q}_L^2+2 N_L-2). \la{stringmass} \end{equation} See, for example, \cite{Sen}.
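As a small consistency check of (\ref{stringmass}) (a Python sketch; the sample charges and oscillator numbers are our own illustration), the two expressions for $M_{string}^2$ agree precisely when $\vec{Q}_R^2-\vec{Q}_L^2 = 2N_L-2N_R-1$, so for a BPS state ($N_R=1/2$) the left-moving oscillator number is fixed to $N_L = (\vec{Q}_R^2-\vec{Q}_L^2)/2 + 1$:

```python
def mass_squared_right(QR2, NR):
    # M^2 = (Q_R^2 + 2 N_R - 1) / 8, first form of eq. (stringmass)
    return (QR2 + 2 * NR - 1) / 8.0

def mass_squared_left(QL2, NL):
    # M^2 = (Q_L^2 + 2 N_L - 2) / 8, second form of eq. (stringmass)
    return (QL2 + 2 * NL - 2) / 8.0

# Example BPS state (N_R = 1/2): level matching fixes N_L.
QR2, QL2, NR = 6.0, 2.0, 0.5
NL = (QR2 - QL2) / 2 + 1          # here N_L = 3
assert mass_squared_right(QR2, NR) == mass_squared_left(QL2, NL)
print(NL, mass_squared_right(QR2, NR))  # 3.0 0.75
```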
In this $N=4$ theory, states (whether elementary or solitonic) fall into $3$ categories according as they are annihilated by $q=2,1,0$ supersymmetries, in which case their masses are given by \cite{Rahmfeld3} \begin{eqnarray} M_{central}&=&Z_1=Z_2~~~~~~~q=2\nonumber\\ &=&Z_1>Z_2~~~~~~~q=1\nonumber\\ &>&Z_1 \geq Z_2~~~~~~~q=0 \end{eqnarray} where $Z_1$ and $Z_2$ are the moduli of the central charges in the supersymmetry algebra given by \cite{Cveticyoum,Rahmfeld3} \begin{equation} Z_{1,2}^2=\frac{1}{8}\left[\vec{Q}_R^2+\vec{P}_R^2\pm 2\left(\vec{Q}_R^2\vec{P}_R^2-(\vec{Q}_R\vec{P}_R)^2\right)^{\frac{1}{2}}\right]\ . \la{central} \end{equation} It follows by comparing $M_{string}$ of (\ref{stringmass}) and $M_{central}$ of (\ref{central}) that the elementary string states, being purely electric, are either BPS states with $N_R=1/2$ or else non-supersymmetric states with $N_R >1/2$. The $N_R=1/2$ states correspond to that subset of the full spectrum that is annihilated by half of the supersymmetry generators ($q=2$), belongs to short representations of the $N=4$ supersymmetry algebra and saturates the Bogomol'nyi bound $M_{central}=Z_1=Z_2$. The basic superspin $L=0$ multiplet is the $16$-dimensional ($J_{max}=1$) multiplet $(1,4,5)$. This is the only multiplet appearing for $N_L=0$. However, higher values of superspin $L$ may appear for higher $N_L$. Since the left-moving oscillators have spin $0$ if they lie in the $22$ compact dimensions or $1$ if they lie in the spacetime dimensions, the superspin obeys the bound $L\leq N_L$. In particular for $N_L=1$, we have in addition to the above $L=0$ multiplet the $48$-dimensional ($J_{max}=2$) multiplet $(1,4,6,4,1)$. For $N_L>1$ we have $J_{max} =L+1$. The $N_R >1/2$ states, belonging to the long representations, are annihilated by no supersymmetries ($q=0$) and satisfy $M_{central}>Z_1 \geq Z_2$.
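As a quick numerical illustration (our own check, not part of the original analysis), formula (\ref{central}) can be evaluated for sample charge vectors: a purely electric state has $Z_1=Z_2$ with $Z^2=\vec{Q}_R^2/8$, matching the BPS mass $M_{string}^2=\vec{Q}_R^2/8$ at $N_R=1/2$, while non-parallel electric and magnetic charges split the two central charges:

```python
import math

def central_charges_sq(QR, PR):
    # Z_{1,2}^2 = (1/8)[Q_R^2 + P_R^2 +/- 2 sqrt(Q_R^2 P_R^2 - (Q_R.P_R)^2)]
    q2 = sum(q * q for q in QR)
    p2 = sum(p * p for p in PR)
    qp = sum(q * p for q, p in zip(QR, PR))
    disc = math.sqrt(max(q2 * p2 - qp * qp, 0.0))
    return (q2 + p2 + 2 * disc) / 8.0, (q2 + p2 - 2 * disc) / 8.0

# purely electric state: Z_1 = Z_2 = Q_R^2/8, the BPS (q = 2) situation
z1, z2 = central_charges_sq([1.0, 0.0], [0.0, 0.0])
assert math.isclose(z1, 1.0 / 8.0) and math.isclose(z1, z2)

# mutually orthogonal electric and magnetic charges: Z_1 > Z_2
z1, z2 = central_charges_sq([1.0, 0.0], [0.0, 1.0])
assert z1 > z2 and math.isclose(z1, 4.0 / 8.0) and math.isclose(z2, 0.0)
```

Parallel $\vec{Q}_R$ and $\vec{P}_R$ make the discriminant vanish and restore $Z_1=Z_2$, which is why purely electric elementary states can only sit in the $q=2$ or $q=0$ categories.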
The $L=0$ multiplet is $256$-dimensional with $J_{max}=2$. No elementary states belong to the intermediate representations, which are annihilated by one supersymmetry ($q=1$) and satisfy $M_{central}=Z_1>Z_2$. These $L=0$ multiplets are $64$-dimensional with $J_{max}= 3/2$. The situation changes when we allow for solitonic states of the string theory which carry magnetic charge. Then we can have all three categories of supermultiplet \cite{Rahmfeld1,Rahmfeld3}, and indeed we predicted the existence of new dyonic ($J_{max}=3/2$) multiplets in the string spectrum, over and above the ($J_{max}=1$) dyonic states related by $S$-duality to the elementary states and predicted by Schwarz and Sen \cite{Schwarz1}. In \cite{Rahmfeld1} we considered these elementary electrically charged massive $N_R=1/2$ states, and showed that the spin zero, superspin zero, states correspond to extreme limits of non-rotating black hole solutions which preserve $1/2$ of the spacetime supersymmetries. By supersymmetry, the black hole interpretation then applies to all members of the $N=4$ supermultiplet \cite{Gibbonsperry,Aichelburg}, which has $J_{max}=1$. For a subset of states the low-energy string action can be truncated to (\ref{actiona}). The scalar-Maxwell parameter is given by $a=\sqrt 3$ for $N_L=1$ and $a=1$ for $N_L>1$ (and vanishing left-moving internal momenta). The other superspin zero states with $N_L>1$ are extremal non-rotating black holes too, but are not described by a single scalar truncation of the type (\ref{actiona}). We also made a similar identification for the dyonic states \cite{Rahmfeld3}. To see this, let us recall that general black hole solutions are determined by the $56$ components of the electric and magnetic charge vectors $\vec{Q}$ and $\vec{P}$. We can simplify the problem by applying an $O(6)\times O(22)$ $T$-duality transformation which eliminates all but four electric and four magnetic components.
This corresponds to a truncation involving just four field strengths: $F_1, \ F_2, \ F_3$ and $F_4$, and just three complex scalars: $S=a+ie^{-\eta}$, $T=b+ie^{-\sigma}$, $U=c+ie^{-\rho}$. The truncated Lagrangian is \begin{eqnarray} {\cal L}&={1\over 32\pi}\sqrt{-g}\biggl \{ &\hspace{-0.4truecm}R-\frac{1}{2}\left[(\partial\eta)^2 +(\partial\s)^2+(\partial\rho)^2\right]-\nonumber\\ & & -\frac{1}{2}\left[e^{2\s}(\partial b)^2 + e^{2\r}(\partial c)^2\right]-\frac{1}{12}e^{-2\eta}H^2- \nonumber \\ && \hspace*{-0.5truecm} -\frac{e^{-\eta+\s+\r}}{4} \biggl[ |T|^2 |U|^2 F_1^2 +|T|^2F_2^2+ F_3^2+ |U|^2 F_4^2 +2bF_2F_3+2cF_3F_4 +\nonumber\\ &&\hspace{20mm}-2c |T|^2F_1F_2 -2b|U|^2 F_1F_4 -cb (F_1F_3-F_2F_4)\biggr]\biggr\}, \la{action1} \end{eqnarray} It is noteworthy that this procedure keeps only the degrees of freedom which arise from the toroidal compactification from six to four dimensions: $N=2$ supergravity coupled to three vector multiplets. $F_1,F_2$ are the Kaluza-Klein fields, $F_3,F_4$ are the winding fields; $S$ is the axion/dilaton, $T$ the K\"ahler form and $U$ the complex structure. The axion field is obtained by dualizing $H$, which satisfies the standard Bianchi identity. The Lagrangian (\ref{action1}) was thoroughly analyzed in \cite{Rahmfeld3} where, in particular, the triality of the three duality groups, $SL(2;Z)_S$, $SL(2;Z)_T$, $SL(2;Z)_U$, was emphasized: (\ref{action1}) is obviously invariant under $T\leftrightarrow U, \ F_2\leftrightarrow F_4$ exchange, but the equations of motion are also invariant under $S\leftrightarrow T,\ S\leftrightarrow U $ exchange (accompanied by an appropriate electric/magnetic transformation of the field strengths), which leads to the interpretation of a triality of the $S$, $T$ and $U$ strings.
To simplify further, we shall here consider solutions with vanishing pseudoscalars, so that the reduced Lagrangian is \begin{eqnarray} {\cal L}&={1\over 32\pi}\sqrt{-g}\biggl \{ &\hspace{-0.4truecm}R-\frac{1}{2}\left[(\partial\eta)^2 +(\partial\s)^2+(\partial\rho)^2\right]-\nonumber\\ & & \hspace*{-0.5truecm} -\frac{e^{-\eta}}{4}\left[e^{-\s-\r}F_1^2+e^{-\s+\r}F_2^2 +e^{\s+\r}F_3^2 +e^{\s-\r}F_4^2\right] \biggr\}, \la{action} \end{eqnarray} where one has to keep in mind the constraints imposed by the requirement of vanishing pseudoscalars. The extremal black holes we have in mind to illustrate the point, namely those that allow for a description by an action of type (\ref{actiona}) with $a=\sqrt{3},1,1/\sqrt{3},0$, can be obtained from (\ref{action}) by making various truncations: \begin{eqnarray} a=\sqrt{3}: & F_{1}\neq0, &F_2=F_3=F_4=0,~~Q^2=1 \nonumber \\ a=1: & F_{1}=F_3\neq0, & F_2=F_4=0,~~~~~~~~~Q^2=2 \nonumber \\ a=1/\sqrt{3}: & F_{1}=F_3={\tilde F}_2\neq0, & F_4=0,~~~~~~~~~~~~~~~~Q^2=3 \nonumber \\ a=0: & F_{1}=F_3={\tilde F}_2 ={\tilde F}_4 \neq0, &~~~~~~~~~~~~~~~~~~~~~~~~~~Q^2=4 \end{eqnarray} Table 1 summarizes the charge and mass quantum numbers for these and a few more black holes and string states in the heterotic string theory. (If so desired, one may then use $S-T-U$ triality to describe them in the dual Type $II$ string pictures \cite{Rahmfeld3}.) Based on this Table it can be easily verified \cite{Rahmfeld1,Rahmfeld3} that the mass and charge quantum numbers of the $a=0,1/\sqrt{3},1,\sqrt{3}$ black holes admit the interpretation of $4,3,2,1$-particle bound states with zero binding energy.
For the purposes of illustration we have chosen the special cases where all non-zero charges are equal to unity, but this is easily generalized to the case of different charges $Q_1$, $P_2$, $Q_3$ and $P_4$ \cite{Cveticyoum}, where the interpretation is that of a $(Q_1+P_2+Q_3+P_4)$-particle bound state with zero binding energy. Note, by the way, that even the extremal, but {\it non-supersymmetric}, $a=1$ black hole with electric charges $(1,0,-1,0)$ fits into the string spectrum with the assignments $N_L=1,N_R=3/2$. (Since the masses of non-supersymmetric states are not protected from quantum corrections by any non-renormalization theorems, we do not usually expect (\ref{stringmass}) to give the correct answer for arbitrary black holes.) This solution is related by $T$-duality \cite{Rahmfeld1} to the non-supersymmetric black hole of \cite{Garfinkle} which has just one non-vanishing charge corresponding to one of the $16$ $U(1)$s in the Yang-Mills sector. It is frequently claimed that extremal black holes and supersymmetric black holes are synonymous, but while $M_{string}^2={\vec{Q}}_L^2/8$ and $M_{string}^2={\vec{Q}}_R^2/8$ are both extremal, only $M_{string}^2={\vec{Q}}_R^2/8$ is supersymmetric, owing to the left/right asymmetry of the heterotic string. (Note, however, that this solution preserves one quarter of the supersymmetries when embedded into maximal $N=8$ supergravity coming from the Type $II$ string.)
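The zero-binding-energy arithmetic behind this interpretation can be read off directly from the $M_{black}^2$ column of Table 1; the following snippet (our own check, not part of the original text) verifies that the $a=1$, $1/\sqrt{3}$ and $0$ black hole masses are exactly $2$, $3$ and $4$ times the mass of a single $a=\sqrt{3}$ constituent:

```python
import math

# M_black^2 values from Table 1 for the a = sqrt(3), 1, 1/sqrt(3), 0 black holes
masses_sq = [1 / 16, 1 / 4, 9 / 16, 1]

m1 = math.sqrt(masses_sq[0])  # mass of the singly charged a = sqrt(3) constituent
for n, m_sq in enumerate(masses_sq, start=1):
    # the n-charge black hole weighs exactly n constituents: zero binding energy
    assert math.isclose(math.sqrt(m_sq), n * m1)
```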
{\footnotesize \begin{table} \begin{center} \begin{tabular}{|c|c|c||c|c|c|c||c|c|} \hline \multicolumn{3}{|c||} {\em Quantum Numbers} & \multicolumn{4}{c||} {\em String States} & \multicolumn{2}{c|} {\em Black Holes }\\ \hline \hline {$(Q_1,Q_2,Q_3,Q_4)$} & {$ (P_1,P_2,P_3,P_4)$} &q &$N_L$ &$N_R$ &$M_{string}^2$ & $M_{central}^2$ & a & $M_{black}^2$ \\ \hline \hline & & & & & & & & \\ (1,0,0,0) & (0,0,0,0) & 2 &1 & $\frac{1}{2}$& $\frac{1}{16}$ & $\frac{1}{16}$ & $\sqrt{3}$ & $\frac{1}{16}$\\ (1,0,1,0) & (0,0,0,0) &2 &2 & $\frac{1}{2}$& $\frac{1}{4}$ & $\frac{1}{4}$ & 1 & $\frac{1}{4}$ \\ (2,1,1,1) & (0,0,0,0) &2 &4 & $\frac{1}{2}$& $\frac{13}{16}$ & $\frac{13}{16}$ &x& $\frac{13}{16}$ \\ (1,0,-1,0) & (0,0,0,0) & 2& 0 & $\frac{1}{2}$& $0$ & 0 & x &$0$ \\ (1,0,-1,0) & (0,0,0,0) & 0& 1 & $\frac{3}{2}$& $\frac{1}{4}$ & 0 & 1 &$\frac{1}{4}$ \\ & & & & & & & & \\ (1,0,0,0) & (0,1,0,0) & 1&x & x& x & $\frac{1}{4}$ & 1& $\frac{1}{4}$ \\ (1,0,1,0)& (0,1,0,0)& 1& x& x& x & $\frac{9}{16}$ & $\frac{1}{\sqrt{3}}$ & $\frac{9}{16}$ \\ (1,0,1,0) & (0,1,0,1) &1& x & x & x & $1$ & 0 & 1 \\ (1,0,-1,0) & (0,1,0,-1) & 0& x & x & x & 0 & 0 & 1 \\\hline \end{tabular} \end{center} \normalsize \begin{center}Table 1: Masses and charges of black holes and string states. \end{center} \end{table} \normalsize} As discussed in \cite{Rahmfeld1}, although the superpartners of the non-rotating black holes are themselves black holes, they are {\it not} rotating black holes in the sense of Kerr. On the contrary, as explained in \cite{Aichelburg}, it is their fermionic hair that carries the angular momentum, in contrast to conventional rotating black holes where the angular momentum is bosonic. Rather, this bosonic angular momentum is supplied by the left-moving oscillators, which leads us to identify the Kerr-type angular momentum with the superspin $L$.
However, as also discussed in \cite{Rahmfeld1}, these $N_R=1/2$ string states cannot then be rotating {\it black holes} since these mass=charge solutions have event horizons only for vanishing Kerr angular momentum. \section{Black hole supermultiplets} \la{supermultiplets} \begin{table} \begin{center} \begin{tabular}{|c||ccccc|}\hline & &$a=0$&$a=1/\sqrt{3}$&$a=1$&$a=\sqrt{3}$\\ \hline\hline $N=2$&&1&1&1&1\\ $N=4$&&1&1&2&2\\ $N=8$&&1&1&2&4\\ \hline \end{tabular} \end{center} \normalsize \begin{center}Table 2: Number of preserved supersymmetries, $q$, for black holes with parameters $a=0,1/\sqrt{3},1,\sqrt{3}$ in $N=2,4,8$ theories. \end{center} \end{table} \normalsize Since the basic quantities like mass and charge are not going to change if we move to the $N=8$ or $N=2$ theories, we may as well extend the conjecture to these theories also. The number of preserved supersymmetries is given in Table 2. In this section we check that the supermultiplet structure of the black holes is consistent with this bound state interpretation. The relevant group theory may be found in \cite{Ferrarasavoy}. Let us recall that an $N$-extended supersymmetry algebra admits $N/2$ central charges $Z_1,...,Z_{N/2}$. States fall into $1+N/2$ categories according as they are annihilated by $ N/2 \geq q \geq 0$ supersymmetry generators. $q$ also counts the number of $Z$s that obey the bound $M_{central}=Z_{max}$. We are primarily interested in multiplets with states with spin $J=0$ and superspin $L=0$, which we identify with the non-rotating black hole solutions. The rest of the $L=0$ supermultiplet may then be filled out using the fermionic zero modes \cite{Aichelburg}. In the fundamental multiplets the spin will run from $J=0$ up to $J= (N-q)/2$, whereas for multiplets with non-zero superspin $L$ the spin will run from $J=0$ to $J=L+(N-q)/2$ for $L<(N-q)/2$, and from $J=L-(N-q)/2$ to $J=L+(N-q)/2$ for $L \geq (N-q)/2$.
The multiplets are constructed in the spirit of \cite{Ferrarasavoy} by combining massless supermultiplets and then employing the Higgs mechanism to obtain the massive multiplet. For each number $q$ of preserved supersymmetries we give the results up to $J_{max}=4$ (implying superspins $L=0,\frac{1}{2},1,\frac{3}{2}$ and $L=2$ for the case of four preserved supersymmetries, and so on). The results are shown in Tables 3, 4 and 5 for the $N=8, \ N=4$ and $N=2$ theories, respectively. {\footnotesize \begin{table} \begin{center} \begin{tabular}{|r |r| r r r r r |} \hline $q$&$J$&$L=0$&$L=\frac{ 1}{2}$& $L=1$&$L=\frac{3}{2}$& $L=2$\\ \hline\hline 4 & 4 & & & & &1\\ & $\frac{7}{2}$ & & & &1 &8\\ &3 & & & 1 &8 &28\\ & $\frac{5}{2}$ & & 1 &8 & 28&56\\ & 2 & 1 & 8 & 28 & 56&70\\ & $\frac{3}{2}$ & 8 & 28& 56& 70&56\\ & 1 & 27 & 56& 70& 56&28\\ & $\frac{1}{2}$ &48 & 69& 56& 28&8\\ &0 & 42 & 48& 27&8 &1\\ \hline 3 & 4 & & & &1&\\ & $\frac{7}{2}$ & & &1 &10&\\ &3 & & 1 &10 &45&\\ & $\frac{5}{2}$ & 1 & 10 & 45&120&\\ & 2 & 10 & 45 & 120&210&\\ & $\frac{3}{2}$ & 44 & 120& 210&252&\\ & 1 & 110& 209& 252&210&\\ & $\frac{1}{2}$ & 165& 242& 209&120&\\ &0 & 132& 165&110 &44&\\ \hline 2 & 4 && &1&&\\ & $\frac{7}{2}$ & & 1 &12&&\\ &3 & 1 &12 &66&&\\ & $\frac{5}{2}$ & 12 & 66&220&&\\ & 2 & 65 & 220&495&&\\ & $\frac{3}{2}$ & 208& 494&792&&\\ & 1 & 429& 780&923&&\\ & $\frac{1}{2}$ & 572& 858&780&&\\ &0 & 429 & 572 &429&&\\ \hline 1 & 4 & &1&&&\\ & $\frac{7}{2}$ & 1 &14&&&\\ &3 &14 &91&&&\\ & $\frac{5}{2}$ & 90&364&&&\\ & 2 & 350&1000&&&\\ & $\frac{3}{2}$ & 910&1988&&&\\ & 1 & 1638&2912&&&\\ & $\frac{1}{2}$ & 2002 & 3068&&&\\ &0 &1430 &2002 &&&\\ \hline 0 & 4 & 1& & & &\\ & $\frac{7}{2}$ & 16& & & &\\ &3 & 119 & & & &\\ & $\frac{5}{2}$ &544& & & &\\ & 2 & 1700& & & &\\ & $\frac{3}{2}$ & 3808& & & &\\ & 1 &6188 & & & &\\ & $\frac{1}{2}$ & 7072 & & & &\\ &0 & 4862 & & & &\\ \hline \end{tabular} \end{center} \normalsize \begin{center}Table 3: Massive supersymmetry representations of
$N=8$\end{center} \end{table} \normalsize} \begin{table} \begin{center} \begin{tabular}{|r |r| r r r|} \hline $q$&$J$&$L=0$&$L=\frac{1}{2}$& $L=1$\\ \hline\hline 2 & 2 & & & 1 \\ & $\frac{3}{2}$ & & 1 & 4 \\ & 1 & 1 & 4 & 6 \\ & $\frac{1}{2}$ & 4 & 6 & 4 \\ &0 & 5 & 4& 1 \\ \hline 1 & 2 & &1 & \\ & $\frac{3}{2}$ & 1 &6 & \\ & 1 & 6 & 15 & \\ & $\frac{1}{2}$ & 14 & 20 & \\ &0 &14 & 14&\\ \hline 0 & 2 & 1 & & \\ & $\frac{3}{2}$ & 8 & & \\ & 1 & 27 & & \\ & $\frac{1}{2}$ & 48 & & \\ &0 & 42& & \\ \hline \end{tabular} \end{center} \normalsize \begin{center}Table 4: Massive supersymmetry representations for $N=4$\end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|r |r| r r|} \hline $q$&$J$&$L=0$&$L=\frac{1}{2}$\\ \hline\hline 1 & 1 & & 1 \\ & $\frac{1}{2}$ & 1 & 2 \\ &0 & 2 & 1\\ \hline 0 & 1 & 1 & \\ & $\frac{1}{2}$ & 4 & \\ &0 &5 & \\ \hline \end{tabular} \end{center} \normalsize \begin{center}Table 5: Massive supersymmetry representations for $N=2$\end{center} \end{table} \normalsize Let us begin by considering black hole solutions of the $N=8$ supergravity theory with $Z_1 \geq Z_2\geq Z_3 \geq Z_4$. Here we have the five categories: \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray} M_{central}&=&Z_1=Z_2=Z_3=Z_4~~~~~~~~~~~~~~~~~~~~~~~~~~q=4 \nonumber} \def\bd{\begin{document} \\ &=&Z_1=Z_2=Z_3>Z_4~~~~~~~~~~~~~~~~~~~~~~~~~~q=3 \nonumber} \def\bd{\begin{document} \\ &=&Z_1=Z_2>Z_3\geq Z_4~~~~~~~~~~~~~~~~~~~~~~~~~~q=2 \\ &=&Z_1=Z_2\geq Z_3\geq Z_4~~~~~~~~~~~~~~~~~~~~~~~~~~q=1 \nonumber} \def\bd{\begin{document} \\ &>&Z_1\geq Z_2 \geq Z_3 \geq Z_4~~~~~~~~~~~~~~~~~~~~~~~~~~q=0 \nonumber} \def\bd{\begin{document} \eea In particular, the $a=\sqrt{3}$ black holes preserve $q=4$ supersymmetries and must belong to short supermultiplets which we assume to be the maximum spin $J=2$, superspin $L=0$ multiplet $(1,8,27,48,42)$ appearing in Table 3. 
The bound state interpretation requires that the $a=1$ black holes preserving $q=2$ supersymmetries appear in the product of two $a=\sqrt{3}$ representations. Ignoring the internal quantum numbers, this product will decompose into multiplets with $4 \geq J_{max} \geq 2$ as follows \begin{equation} \begin{array}{ll} 1\times\left[ 4\right]\oplus 8\times\left[ \frac{7}{2}\right] \oplus27\times\left[3\right]\oplus48\times\left[\frac{5}{2}\right] \oplus42\times\left[2\right]&~~~q=4\\ 1\times \left[4\right] \oplus 6\times \left[\frac{7}{2}\right] \oplus14\times\left[3\right]\oplus14\times \left[\frac{5}{2}\right]&~~~q=3 \\ 1\times \left[4\right]\oplus 4\times \left[\frac{7}{2}\right] \oplus 5\times \left[3\right]&~~~q=2 \\ 1\times \left[4\right]\oplus 2\times \left[\frac{7}{2}\right]&~~~q=1\\ 1\times \left[4\right]&~~~q=0 \end{array} \end{equation} Which of the above possibilities is actually realized, however, will depend on the charge assignments of the two constituents. Suppose each is singly charged under just one of the Kaluza-Klein $U(1)$s; then the bound state will again belong to a $q=4$ multiplet if the $U(1)$s are the same. On the other hand, if one carries a Kaluza-Klein charge and the other a winding charge in the same dimension, then we get $q=2$. Since these are precisely the quantum numbers of the $a=1$ black hole, which has twice the mass of the $a=\sqrt{3}$ black hole, this is entirely consistent with our hypothesis. Note, however, that although one might also expect to obtain $q=3$ multiplets, a closer look at the internal quantum numbers shows that these do not in fact arise, since the $M_{central}=Z_1=Z_2=Z_3>Z_4$ configuration can never be achieved with $2$-particle states. The same arguments go through for an embedding of the black holes into $N=4$ or $N=2$ supergravity with an appropriate number of matter multiplets.
For the $N=4$ case the compositions for the product of two short multiplets are given by \footnote{ Since the same group theory applies, this suggests that higher superspin multiplets also appear in the spectrum of global $N=4$ Yang-Mills theories, where traditionally attention is focussed only on maximum spin $1$.} \begin{equation} \begin{array}{ll} 1\times \left[2\right]\oplus 4\times \left[\frac{3}{2}\right] \oplus5\times\left[1\right]& ~~~q=2\\ 1\times \left[2\right] \oplus 2\times \left[\frac{3}{2}\right] &~~~q=1 \\ 1\times\left[ 2\right]&~~~q=0 \end{array} \end{equation} For example, the appearance of the $q=0$ multiplet is consistent with the interpretation of the non-supersymmetric $a=1$ black hole as a bound state of a supersymmetric $a=\sqrt{3}$ positively charged Kaluza-Klein black hole and a supersymmetric $a=\sqrt{3}$ negatively charged winding black hole. Similarly, for $N=2$ we find \begin{equation} \begin{array}{ll} 1\times\left[ 1\right]\oplus 2\times\left[ \frac{1}{2}\right] & ~~~q=1\\ 1\times\left[ 1\right]&~~~q=0 \end{array} \end{equation} Therefore, we have shown that the supermultiplet structures are consistent with our hypothesis in the case of $2$-particle bound states. It is straightforward to show by taking further tensor products that things go through in a similar way for the $3$- and $4$-particle states. It should be stressed that we are not claiming that all bound states can be identified with oscillator states. In the $N=4$ table, for example, $L=1/2, J_{max}=3/2$ states appear in the tensor product, whereas there are no $L=1/2$ supermultiplets in the oscillator spectrum of the $N=4$ heterotic string because the left-moving sector is purely bosonic.
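A dimension count supports the $N=4$ decomposition above. Using the $q=2$ columns of Table 4, the $L=0$, $L=\frac{1}{2}$ and $L=1$ multiplets contain $16$, $32$ and $48$ states respectively, and the $q=2$ branch $1\times[2]\oplus4\times[\frac{3}{2}]\oplus5\times[1]$ then exactly exhausts the $16\times16=256$ states of the product of two short multiplets (our own check):

```python
def dim(column):
    # number of states: sum over spins J of (2J+1) x multiplicity
    return sum((2 * J + 1) * mult for J, mult in column)

# q = 2 columns of Table 4, listed as (J, multiplicity) pairs
L0 = [(1, 1), (0.5, 4), (0, 5)]                    # L = 0,   J_max = 1
Lhalf = [(1.5, 1), (1, 4), (0.5, 6), (0, 4)]       # L = 1/2, J_max = 3/2
L1 = [(2, 1), (1.5, 4), (1, 6), (0.5, 4), (0, 1)]  # L = 1,   J_max = 2

# product of two 16-state short multiplets = 1 x [2] + 4 x [3/2] + 5 x [1]
assert dim(L0) ** 2 == 1 * dim(L1) + 4 * dim(Lhalf) + 5 * dim(L0) == 256
```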
Similarly, the $L=1,J_{max}=2$ states appearing in the tensor product will be black holes whose angular momentum is fermionic in origin \cite{Aichelburg}, whereas the $L=1$ oscillator states have a bosonic Kerr-type angular momentum and cannot therefore be black holes. So far we have put ourselves at the special point in moduli space $S=T=U=i$. At generic points, the mass formula becomes \cite{Ceresole,Rahmfeld3} \begin{equation} M_{central}=\frac{1}{{\rm Im}~S~ {\rm Im}~T~{\rm Im}~U} {\rm max}|(\alpha_1+U\alpha_2+T\alpha_4-TU\alpha_3)\pm(S\beta_1-SU\beta_2-ST\beta_4 -STU\beta_3)| \end{equation} where $\alpha$ and $\beta$ are the quantized electric and magnetic charge vectors. Thus, although the bound state interpretation continues to apply, the zero binding energy phenomenon does not. The triangle inequality ensures that at generic points the bound states have non-zero binding energy (except of course when the charges are not relatively prime). \section{Multi-black brane solutions of $M$-theory} \la{multi} \vspace{1truecm} \begin{center} \begin{tabular}{|l||l|l|l|l|} \hline & 4-form & 3-form & 2-form & 1-form \\ \hline\hline D=11&4&&&\\ \hline D=10 &$4$& $4$ &$4$ & \\ \hline D=9 &$4$&$ 4$ &$4,2$ &$4$ \\ \hline D=8 &$4$&$ 4$ &$4,2$ &$4,2$ \\ \hline D=7 & &$4 $ &$4,2$ &$ 4,2$\\ \hline D=6 && $4,2$ & $4,2$&$4,2,\frac{4}{3},1$ \\ \hline D=5 && &$4,2,\frac{4}{3}$ &$4,2,\frac{4}{3},1$ \\ \hline D=4 && &$4,2,\frac{4}{3},1$ &$4,2,\frac{4}{3},1,\frac{4}{5}, \frac{2}{3},\frac{4}{7}$ \\ \hline \end{tabular} \la{brane} \end{center} \begin{center}Table 6: $\Delta$ values for supersymmetric $p$-branes\end{center} \vspace{1truecm} Let us now turn to the black branes of $M$-theory \cite{Dufflupope2,Klebanov1}. Starting from eleven dimensions, toroidal compactification gives rise to a variety of $(d +1)$-form field strengths and hence fundamental $(d -1)$-brane and solitonic $(\tilde d-1)$-brane solutions in the lower dimension $D=d+\tilde d+2$.
The compactified eleven-dimensional supergravity theory admits a consistent truncation to the following set of fields: the metric tensor $g_{MN}$, a set of $n$ scalars $\vec{\phi}=(\phi_1,...\phi_n)$, and $n$ field strengths $F_{\alpha}$ of rank $(d +1)$. The $D$-dimensional action describing the $p$-branes under consideration is then given by \cite{Popelu2} \begin{equation} \lag=\sqrt{-g}\left[R - \frac{1}{2} (\partial \vec{\phi})^2 -\frac{1} {2\times (d+1)!} \sum_{\a=1}^{n} e^{\vec {a}_\a \vec{\phi}} F_{d+1}^{\a \ 2}\right]\ , \label{actionp} \end{equation} where $n$ is the number of participating field strengths. If all active charges are equal, this can be further truncated to the Lagrangian (\ref{actiona}) involving a single scalar and single field strength, where $a$, $\phi$ and $F$ are given by \[ a^{-2}=\sum_{\alpha\beta}(M^{-1})_{\alpha\beta}\] \[\phi=a\sum_{\alpha\beta}(M^{-1})_{\alpha\beta}\vec {a}_\a \vec{\phi}\] \begin{equation} (F_{\alpha})^2=a^2\sum_{\alpha\beta}(M^{-1})_{\alpha\beta}F^2 \ , \end{equation} where the matrix $ M_{\a \b}$ is given by \begin{equation} M_{\a \b}=\vec{a}_{\a} \vec{a}_{\b}. \la{matrix} \end{equation} The parameter $a$ can conveniently be expressed as \begin{equation} a^2=\Delta-\frac{2d\tilde d}{d+\tilde d} \end{equation} As discussed in \cite{Popelu2}, supersymmetric $p$-brane solutions can arise only when the value of $\Delta$ is given by $\Delta=4/n$. This occurs when \begin{equation} M_{\a \b}=4\delta_{\a \b} -2\frac{d\tilde{d}}{d+\dt}. \end{equation} Originally, attention was focussed on the $\Delta=4$ solutions \cite{Strominger1,Dufflu,Khuristring} but various new supersymmetric solitons with $\Delta\neq 4$ have recently been studied \cite{Popestainless,Popelu1,Popelu2}.
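These relations can be checked numerically for the four-dimensional black holes, where $d=\tilde d=1$ and $M_{\alpha\beta}=4\delta_{\alpha\beta}-1$: summing the entries of $M^{-1}$ reproduces $a^2=\Delta-1=4/n-1$, i.e.\ $a=\sqrt{3},1,1/\sqrt{3}$, while for $n=4$ the matrix degenerates (its row sums vanish), which is the $a=0$ case. A short sketch of this check (our own, not part of the original text):

```python
import numpy as np

d = dt = 1  # D = 4 black holes: d = dtilde = 1
for n in (1, 2, 3):
    # M_ab = 4 delta_ab - 2 d dt/(d + dt) for the Delta = 4/n solutions
    M = 4 * np.eye(n) - (2 * d * dt / (d + dt)) * np.ones((n, n))
    a_sq = 1.0 / np.linalg.inv(M).sum()   # a^{-2} = sum_ab (M^{-1})_ab
    delta = a_sq + 2 * d * dt / (d + dt)  # a^2 = Delta - 2 d dt/(d + dt)
    assert np.isclose(delta, 4 / n)

# n = 4: M = 4 I - J has vanishing row sums, hence is singular (the a = 0 case)
assert abs(np.linalg.det(4 * np.eye(4) - np.ones((4, 4)))) < 1e-9
```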
In this section we shall generalize the treatment of extremal black holes \cite{Rahmfeld4} and confirm at the level of classical solutions that these new $p$-branes can also be regarded as bound states with zero binding energy of fundamental $\Delta=4$ branes. We find new $1\leq n \leq m$-centered $p$-brane solutions which reproduce the $\Delta=4/n$ solutions of \cite{Popelu2} as we allow $n$ of the centers to coincide and take the remaining $(m-n)$ out to infinity. Table 6 summarizes in which dimensions the various solitons in maximal supergravities arise. Below eight (six) dimensions the four(three)-form field strengths are dualized. For the purposes of exhibiting the multi-centered solutions, we shall work with (\ref{actionp}). The equations of motion are \begin{eqnarray} \frac{1}{\sqrt{-g}}\partial_{M}\left(\sqrt{-g} e^{\vec{a}_{\a} \vec\phi} F_{d+1}^{\a \ MN}\right)&=&0 \la{eom1}\\ \frac{1}{\sqrt{-g}}\partial_{M}\left(\sqrt{-g} \partial^{M} \phi_i\right) &=& \sum_{\a}\frac{a_{i\a}}{2\times (d+1)!}F_{d+1}^{\a \ 2} \la{eom2}\\ R_{MN}&=&\frac{1}{2}\partial_{M}\vec{\phi}\partial_{N}\vec{\phi} + \nonumber \\ & & \qquad\quad +\frac{1}{2 d !}\sum_{\a}e^{\vec{a}_{\a} \vec{\phi}}\biggl(F^{\a \ 2}_{MN}-\frac{d}{n(d+\tilde{d})} F^{\a \ 2} g_{MN}\biggr) \la{eom3} \end{eqnarray} The one-center solutions of these equations have been intensively studied.
Here we find the {\it multi-center} solution which reads for the solitonic case: \begin{eqnarray} f_{\a}&=&(1+\frac{\l_\a}{d|\vec{y}-\vec{y}_{0,\a}|^{d}}) \la{factor}\\ ds^2&=&\prod_{\a=1}^{n}f_{\a}^{-\frac{d}{d+\tilde{d}}}dx^{\mu}dx_{\mu} + \prod_{\a=1}^{n}f_{\a}^{\frac{\tilde{d}}{d+\tilde{d}}}dy^m dy_m \\ e^{-\vec{a}_{\a} \vec{\phi}}&=& f_{\a}^2 \prod_{\b=1}^{n}f_\b^{-\frac{d\tilde{d}}{d+\tilde{d}}} \\ F^\a_{m_1...m_{d+1}}&=&\l_\a\e_{m_1 ...m_{d+1}p} \frac{y^p}{|\vec{y}- \vec{y}_{0,\a}|^{d+2}} \end{eqnarray} where $\mu$ refers to the $\tilde{d}$ world-volume coordinates of the solitonic $(\tilde{d}-1)$-brane and $m$ to the $D-\tilde{d}=d+2$ transverse coordinates. In the special case $d=0$ we choose \begin{equation} f_{\a}=(1+\l_\a\ln{\frac{|\vec{y}- \vec{y}_{0,\a}|} {r_{0,\a}}}) \end{equation} instead of (\ref{factor}). The solutions for elementary multi-$p$-branes are easily obtained by generalizing the single-centered solutions of \cite{Popelu2} along the lines above. Since (\ref{eom1}) and (\ref{eom2}) are essentially linear in the contributions of each field strength, they are obviously satisfied for the multi-center solutions. The only slightly non-trivial check comes from the Einstein equations, since here the scalar fields and $R_{mn}$ involve non-linearities in the individual soliton contributions.
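The structural fact underlying the superposition is that each $f_\a-1$ is a harmonic function of the $d+2$ transverse coordinates, so independent centers $\vec{y}_{0,\a}$ can simply be added. A finite-difference sketch of this (our own illustration, with unit charges $\l_\a=1$):

```python
def laplacian(f, p, h=1e-3):
    """Second-order central-difference Laplacian of f at point p."""
    base = f(p)
    total = 0.0
    for i in range(len(p)):
        up = list(p); up[i] += h
        dn = list(p); dn[i] -= h
        total += (f(up) + f(dn) - 2 * base) / h ** 2
    return total

def harmonic_f(d, centers):
    """f = 1 + sum_a 1/(d |y - y_a|^d) in the d + 2 transverse dimensions."""
    def f(y):
        s = 1.0
        for c in centers:
            r = sum((yi - ci) ** 2 for yi, ci in zip(y, c)) ** 0.5
            s += 1.0 / (d * r ** d)
        return s
    return f

# two-center example with d = 2, i.e. four transverse dimensions
f = harmonic_f(2, [(0, 0, 0, 0), (3, 0, 0, 0)])
assert abs(laplacian(f, [1.0, 1.0, 0.5, 0.2])) < 1e-4  # harmonic away from sources
```

The $d=0$ case quoted above is the two-dimensional analogue, where the logarithm replaces the power law as the harmonic Green function.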
If we plug the ansatz into the field equations we find the following condition for vanishing non-linearities: \begin{eqnarray} \frac{d\tilde{d}^2}{2(d+\tilde{d})}+\tilde{d} M^{-1}_d\left[(n-2)(\frac{d\tilde{d}}{d+\dt})^2-2 (2-\frac{d\tilde{d}}{d+\dt}) \frac{d\tilde{d}}{d+\dt}\right] + \ \ \ \ \ \ \ \ & & \nonumber \\ \hspace*{-0.5truecm}+\tilde{d} M^{-1}_n\biggl[(2-\frac{d\tilde{d}}{d+\dt})^2 +(\frac{d\tilde{d}}{d+\dt})^2-2(n-2) (2-\frac{d\tilde{d}}{d+\dt}) \frac{d\tilde{d}}{d+\dt}+ \ \ & & \nonumber \\ +2(n-2) (\frac{d\tilde{d}}{d+\dt})^2+(n-2)(n-3)(\frac{d\tilde{d}}{d+\dt})^2\biggr]&=&0, \end{eqnarray} where $M^{-1}_{d(n)}$ denote the (off-)diagonal elements of the matrix $M_{\a \b}$ defined in (\ref{matrix}). Indeed, for all the new supersymmetric $p$-branes the above condition holds, allowing us to generalize the single-center solutions to multi-center ones. However, there is one subtlety: for certain values of $d$, $n$ and $D$, $M$ is singular and cannot be inverted. The problem arises for ($D=4, n=4, d=1$) and ($D=5, n=3, d=1$). The first case was considered in \cite{Rahmfeld4} and shown to be a valid solution. The second one can also be shown to work by an independent calculation. If so desired, one may now consider the special case of coincident centers and equal charges to obtain the $\Delta=4/n$ solutions, thus confirming that these admit the interpretation of bound states with zero binding energy of $n$ fundamental $\Delta=4$ branes. Most of our results are not based on the fact that we consider maximal supergravities, so we may ask which bound states survive in the heterotic theory. It appears that all $1$-form and $2$-form types do. In four dimensions we also have the four classic black hole types, as well as strings with up to seven participating field strengths.
In the heterotic theory the number seven finds a very natural explanation in the presence of the one $S$-field, three $T$-fields and three $U$-fields of section (\ref{bound}) \cite{Rahmfeld3,Rahmfeld2}. The multi-string solution found in \cite{Rahmfeld2} with non-vanishing $S$ and $T$ fields is given by \begin{eqnarray} ds^2&=&-dt^2+dz^2+S_2T^{(1)}_2T^{(2)}_2T^{(3)}_2(dx^2+dy^2) \nonumber \\ S&=&S_1+iS_2=\sum_{i=1}^{n}s_i\ln{\frac{x+iy-w_i}{r_{i}}}\\ T^{(a)}&=&T^{(a)}_1+iT^{(a)}_2= \sum_{i=1}^{n}t^{(a)}_i\ln{\frac{x+iy-w^{(a)}_i}{r^{(a)}_{i}}} \end{eqnarray} where $w_i=x_i+iy_i$ and $r_{i}$ denote the positions and sizes of the sources. The solution preserves $1/2$, $1/4$ and $1/8$ of the supersymmetries for one, two and three $T$-fields, respectively, in the absence of an $S$ charge. In the presence of the $S$ field the supersymmetries are halved, with the exception of the configuration with three $T$ and one $S$ field, which breaks either seven eighths or all of the supersymmetries, depending on the chirality choice. It is straightforward to generalize these results to include the $U$ fields. The presence of only one 3-form in all dimensions (above five) forbids the $D=6,d=2,n=2$ solution.
Nevertheless, we have a very interesting solution in $D=6,d=2$, which can also be viewed as a bound state: the dyonic string of \cite{Rahmfeld2,Dufflupope1}: \begin{eqnarray} \Phi&=&\Phi_E+\Phi_M, \nonumber \\ ds^2&=&e^{\Phi_E}(-dt^2+dz^2)+e^{\Phi_M}dx^idx_i \label{dyonstring}\\ e^{-\Phi_E}&=& 1+\frac{Q}{y^2} \qquad e^{-\Phi_M}= 1+\frac{P}{y^2} \nonumber \\ H_3&=& 2Q\e_3+2Pe^{\Phi}*\e_3 \nonumber \end{eqnarray} with $y^2=x^ix_i, \ \ i=1,2,3,4$ and (in general) \begin{equation} e^{\Phi_E}\partial^2 e^{-\Phi_E}=e^{\Phi_M}\partial^2 e^{-\Phi_M}=0 \end{equation} Since the electric and magnetic parts are determined by two independent harmonic functions, we can easily generalize (\ref{dyonstring}) to a multi-string solution. The same holds for the ten-dimensional dyonic string also found in \cite{Rahmfeld2}: \begin{eqnarray} ds^2&=&e^{\Phi_{E_1}+\Phi_{E_2}} (-dt^2+dz^2)+e^{\Phi_{M_1}}\delta^{ij}dy_idy_j +e^{\Phi_{M_2}}\delta^{ab}dy_ady_b \nonumber \\ e^{\Phi_{E_\a}} \partial^2_1 e^{-\Phi_{E_\a}}&=&0 \qquad e^{\Phi_{M_\a}} \partial^2_2 e^{-\Phi_{M_\a}}=0 \la{tendyon} \end{eqnarray} with $i,j=2,3,4,5; \ \ a,b=6,7,8,9$ and $\a=1,2$. $\partial^2_{1,2}$ denote the d'Alembertians in the $(2,3,4,5)$ and $(6,7,8,9)$ subspaces respectively. The antisymmetric tensor field is given as \begin{equation} B_{01}=e^{\Phi_{E_1+E_2}}, \ \ H_{ijk}=\e_{ijkl}\partial^l\Phi_{M_1}, \ \ H_{abc}=\e_{abcd}\partial^d\Phi_{M_2}. \end{equation} (\ref{tendyon}) is written in ten-dimensional string coordinates. Another interesting aspect of the multi-$p$-brane solutions is that the charge parameters of the individual constituents are independent.
In particular, if we have only two $p$-branes, we can choose their charges to be of equal magnitude but different sign, which leads to massless solutions \cite{Cveticyoum2,Behrndt,Kalloshcancel,Chancvetic,Popelu1,Ortin,Dufflupope1}. The interpretation is that two supersymmetric $p$-branes, one with positive and one with negative mass, can combine to form a supersymmetric massless $p$-brane. Certainly, an isolated brane with negative mass does not make sense quantum-mechanically, but there may be some quantum confinement mechanism that allows it to exist only as a bound state. \section{Entropy and temperature} \la{entropy} In this section, we ask whether the entropy and temperature of these black $p$-branes \cite{Dufflupope2,Klebanov1,Mukherji2} are consistent with the bound state interpretation. The mass per unit volume and the charges of the multi black $p$-branes may be written in terms of parameters $k$ and $\mu_{\alpha}$ as \cite{Dufflupope2} \[ M_{black}=k(\tilde d \sum_{\a=1}^{n}\sinh^2\mu_{\alpha} +\tilde d +1) \] \begin{equation} \lambda_{\alpha}=\frac{1}{2}\tilde d k \sinh 2\mu_{\alpha} \end{equation} The Hawking temperature $T$ and entropy $S$ of these multi black $p$-branes in the case where all centers are coincident are given by \cite{Dufflupope2} \[ T=\frac{\tilde d}{4\pi y_+} \prod_{\a=1}^{n}(\cosh\mu_{\alpha})^{-1} \] \begin{equation} S=\frac{1}{4}{y_+}^{\tilde d +1} \omega_{\tilde d +1} \prod_{\a=1}^{n}(\cosh\mu_{\alpha}) \end{equation} where the event horizon is located at $y={y_+}=k^{1/\tilde d}$ and where $\omega_{\tilde d +1}$ is the volume of the unit $(\tilde d +1)$-sphere. The form of the total entropy as a product of the individual entropies is puzzling from the point of view of the bound state interpretation. It remains a puzzle when we take the extremal limit. To illustrate this, let us consider the special case where all $n$ field strengths are equal.
Then we have \cite{Dufflupope2} \[ T=\frac{\tilde d}{4\pi y_+}(\cosh\mu)^{-n} \] \begin{equation} S=\frac{1}{4}{y_+}^{\tilde d +1} \omega_{\tilde d +1}(\cosh\mu)^n \end{equation} The extremal limit corresponds to $k\rightarrow 0$, $\mu\rightarrow \infty$, holding $\lambda=\tilde d \sqrt{n/2}\,ke^{2\mu}$ constant. In this limit $S\propto k^{(\tilde d+1)/\tilde d}e^{n\mu}\propto k^{(\tilde d+1)/\tilde d - n/2}$, which remains finite and non-zero only for $n=2(\tilde d +1)/\tilde d$. Thus the entropy vanishes unless the constant $a$ is zero and $d=1$. This happens only for $(D=4,n=4,d=1)$, which is just the Reissner--Nordstr\"om black hole, and for the five-dimensional black hole $(D=5,n=3,d=1)$. Remarkably, these were precisely the two cases where the matrix $M$ of (\ref{matrix}) was singular. At first sight this result seems strange: in $D=4$, for example, it seems very unnatural that combining three black holes still gives zero entropy, while adding a fourth suddenly forces it to be finite. To see in more detail how this comes about, let us invoke \cite{Ferrarakallosh}, where a new recipe for the calculation of the horizon area was given: the scalar fields on the horizon (in our case $\eta$, $\sigma$ and $\rho$) are determined by the requirement that the central charge becomes extremal. The entropy is then given by the value of the central charge evaluated with the scalar fields at the horizon. For the standard black holes with charges $Q_1,Q_3,P_2$ and $P_4$ the scalar fields on the horizon are fixed to \cite{Ferrarakallosh} \begin{equation} e^{-2\eta}\rightarrow\left|\frac{P_2 P_4}{Q_1Q_3}\right| \ , \qquad e^{-2\sigma}\rightarrow\left|\frac{P_2Q_3}{Q_1 P_4}\right| \ , \qquad e^{-2\rho}\rightarrow\left|\frac{Q_3 P_4}{Q_1 P_2}\right|. \end{equation} For each ``incomplete'' black hole, i.e.\ a state with not all four charges non-zero, at least one of the scalars blows up, either to plus or minus infinity. This ties in very nicely with our bound state hypothesis.
For an $a=1/{\sqrt{3}}$ black hole with (for example) $Q_1=Q_3=P_2=1$, all three scalars diverge: \begin{equation} \eta\rightarrow\infty \ , \qquad \sigma\rightarrow-\infty \ , \qquad \rho\rightarrow \infty. \end{equation} The elementary magnetic black hole with $P_4=1$, on the other hand, has diverging scalars with the opposite signs: \begin{equation} \eta\rightarrow-\infty \ , \qquad \sigma\rightarrow\infty \ , \qquad \rho\rightarrow-\infty! \end{equation} Since we know from \cite{Rahmfeld4,Cvetictseytlin} that there is a multi-black hole solution and the scalars are additive, we can safely conclude that the $a=1/\sqrt{3}$ and $a=\sqrt{3}$ states conspire to force the scalars, and therefore the entropy, to be finite. All this suggests that it is only the $a=0$ non-dilatonic $p$-brane entropies that can be trusted, since when the dilaton is non-trivial there will be strong coupling effects that we do not yet know how to handle. However, it was shown in \cite{Mukherji2} that if quantum corrections smooth out singularities, even black holes with $a \neq 0$ may have non-vanishing entropy. \section{Conclusions} We have reexamined the suggestion in \cite{Rahmfeld1} that, in the extremal limit, the non-rotating black hole solutions of string theory may be identified with both elementary and solitonic BPS string states, and have confirmed that the interpretation of certain multiply charged black holes as bound states at threshold of singly charged black holes is consistent with the masses, charges, spins and supermultiplet structure of the string states. We also confirm that the bosonic Kerr-type angular momentum, arising from the left-moving sector of the heterotic string, corresponds to the superspin $L$ of the oscillator states and hence that only $L=0$ BPS oscillator states can be black holes.
One is tempted to conclude that the $L>0$ oscillator states must therefore be described by naked singularities, but there also exist solutions that are asymptotically identical to such solutions but have a much milder singularity near the core, and whose angular momentum is naturally Regge-bounded \cite{Harveyblack}. Moreover, this bound state interpretation generalizes to super $p$-branes in $D$ dimensions. In doing so, of course, we have to put ourselves at special points in moduli space. In section (\ref{bound}), for example, we set the asymptotic value of ${\cal M}$ to $I$ and the asymptotic value of the dilaton to zero. For generic points in moduli space, the bound state interpretation would continue to apply, but we would no longer have the zero-binding-energy phenomenon (except, of course, if the charges are not relatively prime). We might add that these results are also consistent with the recent recognition that some $p$-branes carrying Ramond-Ramond charges admit an interpretation as Dirichlet-branes, or $D$-branes, and are therefore amenable to the calculational power of conformal field theory \cite{Polchinski}. Bound states of $p$-branes have been discussed from the somewhat different perspective of Dirichlet branes in \cite{Schwarz3,Witten2,Li,Sen3,Vafa,Tseytlin}. Apart from their intrinsic importance, therefore, these black holes and black $p$-branes have recently come to the fore as a way of providing a microscopic explanation of the Hawking entropy and temperature formulae, which have long been something of an enigma. See \cite{Horowitz} for a recent review. For reasons discussed in section (\ref{entropy}), however, only the $a=0$ branes are amenable to these calculations, given our current technology. An interesting special case is provided by the $a=\sqrt{3},1,1/\sqrt{3},0$ black holes, which admit the interpretation as $1,2,3,4$-particle bound states at threshold \cite{Rahmfeld1,Rahmfeld3}.
One feature which appeared mysterious to us at the time was why the unit-charge solutions singled out {\it four} values of $a$, and hence why only $1,2,3,4$- and not $5,6,\ldots$-particle bound states arise. An explanation of this has recently been given \cite{Townsendpapa,Klebanov2,Behrndt2,Gauntlett,Larsen} in terms of intersecting membranes \cite{Duffstelle} and fivebranes \cite{Guven} in $D=11$. This opens up a new direction for the study of black hole and black $p$-brane bound states. Finally, another crucial consistency check on the black hole, bound state, string state picture is supplied by comparing gyromagnetic and gyroelectric ratios. This turns out to be quite subtle \cite{Russo} and will be treated in a separate publication \cite{Duffliurahmfeld}. \section{Acknowledgements} We have enjoyed useful conversations with Jim Liu and Massimo Porrati on string state gyromagnetic ratios and with Sudipta Mukherji, Hong Lu and Chris Pope on $p$-brane entropy. \newpage \bibliographystyle{preprint}
\section{Introduction} \input{src/intro} \section{Methodology} \input{src/method} \input{table/main_results2.0} \section{Experiments} \input{src/experiments} \input{src/result} \section{Analysis} \input{src/analysis} \section{Related Work} \input{src/related} \section{Conclusion} \input{src/conclusion} \newpage \subsection{Task-Independent Rules} \paragraph{Connected Directed Graph Formalization} \label{sec:graph formalization} From a formal perspective, an AMR is a connected directed acyclic graph. Our first principle when conducting AMRization is therefore to transform the data structure of the auxiliary task so that it resembles that of AMR. For SRL, the original structure is a forest consisting of multiple unconnected trees. By adding a virtual root node and generating a directed edge from the virtual root to the root of each SRL tree, the SRL annotations become a connected graph, as depicted in Figure~\ref{fig:amrization}. For the dependency parsing task, no extra procedure is needed, since the structure of a dependency tree already agrees with AMR apart from reentrancy. For NLG tasks like machine translation, such a transformation is not available due to the lack of structural information. \paragraph{Variable Restoration} Following \citet{bevil-spring}, since the AMR parsing task uses a graph-isomorphic linearization which requires special tokens <R0>,...,<Rn> to represent variables and handle coreference, we apply variable restoration by assigning a numerical token to each node after graph formalization. \paragraph{Linearization} After graph formalization and variable restoration, the graph structure must be linearized before seq2seq training. Following \citet{bevil-spring}, we employ a DFS-based linearization with special tokens to indicate variables and parentheses to mark visit depth, which \citet{bevil-spring} found to be the best linearization method.
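The three task-independent rules can be sketched in a few lines (a toy Python illustration; the graph representation and names are ours, not the released code):

```python
# Sketch of AMRization's task-independent rules: connect an SRL forest
# under a virtual root, assign <Rk> variable tokens, and DFS-linearize.
# A node is a (label, children) pair; children are (relation, node) pairs.

def connect_forest(trees):
    """Join unconnected SRL trees under a virtual root (connectivity rule)."""
    return ("multi-sentence",
            [(":snt%d" % (i + 1), t) for i, t in enumerate(trees)])

def linearize(node, counter=None):
    """DFS linearization: <Rk> tokens mark variables, parentheses mark depth."""
    if counter is None:
        counter = [0]                      # shared variable counter
    label, children = node
    var = "<R%d>" % counter[0]             # variable restoration
    counter[0] += 1
    tokens = ["(", var, label]
    for rel, child in children:
        tokens.append(rel)
        tokens += linearize(child, counter)
    tokens.append(")")
    return tokens

# Two one-node SRL "trees" for "The boy wants to leave".
forest = [("want-01", []), ("leave-01", [])]
print(" ".join(linearize(connect_forest(forest))))
# ( <R0> multi-sentence :snt1 ( <R1> want-01 ) :snt2 ( <R2> leave-01 ) )
```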
\subsection{Transform SRL to Pseudo AMR} AMR captures the predicate-argument structure of a sentence, which is also the main goal of Semantic Role Labeling. In fact, SRL is an important sub-task of AMR parsing \citep{ban-AMR,dam-smatch-incremental}. As shown in Figure \ref{fig:srl_dp_amr}, the SRL and AMR of the sentence ``The boy wants to leave'' bear a strong resemblance. 1) They both locate the predicates (want, leave) of the sentence and conduct word-sense disambiguation according to PropBank. 2) They both capture the multiple arguments of the center predicate. However, there are undeniable gaps between SRL and AMR at both the formal and the semantic level. From the formal perspective: 1) AMR is a connected directed graph while the structure of SRL is a forest. This can be handled through \textbf{Graph Formalization} as discussed in Section~\ref{sec:graph formalization}. 2) Reentrancy is an important and nontrivial feature of AMR. Though SRL captures a predicate-argument structure similar to AMR's, it has no reentrancy. At the semantic level, the arguments of SRL are token spans (e.g., ``the boy'', ``that boy'', ``this boy'') while the arguments of AMR are concepts (``boy''). The latter are more abstract and capture deeper semantics. In the following sections, we propose \textbf{Argument Reduction} and \textbf{Reentrancy Restoration}, as depicted in Figure~\ref{fig:amrization}, attempting to bridge the formal and semantic gaps between SRL and AMR. \paragraph{Argument Reduction} We use the dependency parser from the Stanford CoreNLP Toolkit \citep{corenlp} to conduct argument reduction. If the argument of the current predicate spans more than one token, we run dependency parsing and replace the original token span with the root token of its dependency structure. A similar method has been applied by \citet{zhang-etal-2021-comparing} to find the heads of argument token spans. \paragraph{Reentrancy Restoration} We design a heuristic algorithm based on DFS to restore reentrancy in SRL, transforming the tree structure into a graph.
The core idea of the restoration is that we create a variable when the algorithm first sees a node; the next time a node with the same name is met, the destination of the edge is redirected to the variable created at the first encounter. The pseudo code is given in Appendix~\ref{alg:reen}. \subsection{Transform Dependency Structure to Pseudo AMR} Similar to AMR parsing, dependency parsing (DP) builds a directed tree structure over the tokens of a sentence. DP can be viewed as an edge prediction task, since all nodes in the graph are given, while AMR parsing requires predicting both nodes and edges; this makes DP a sub-task of AMR parsing from the perspective of edge prediction. There are three main families of dependency parsers: transition-based \citep{Chen2014AFA}, graph-based \citep{Kiperwasser2016SimpleAA} and seq2seq-based \citep{Li2018Seq2seqDP}; the same families exist for AMR parsing, again suggesting the resemblance between AMR parsing and dependency parsing. Different from constituency parsing (CP), there are no non-terminals in either AMR or DP, which makes DP more similar to AMR than CP. Formal and semantic gaps also exist between DP and AMR. First, some relations in dependency parsing focus on shallow syntax (e.g., punctuation, delimiters), far from the high-level semantic relations in AMR. Second, similar to SRL, the basic node of a dependency structure is the token while that of AMR is the concept, which captures deeper, syntax-independent semantics. We use \textbf{Redundant Relation Removal} and \textbf{Token Lemmatization} to better transform DP into Pseudo AMR. \paragraph{Redundant Relation Removal} We remove some of the dependency relations that are far from the relations in AMR, e.g., :ROOT and :DET. However, searching for the best set of redundant relations to remove is a hard combinatorial problem. We reserve the full exploration of relation removal for further studies.
By removing some relations of the dependency tree, the parsing result becomes sparser than the original DP tree, forcing the model to ignore some semantics-unrelated tokens during seq2seq training. \paragraph{Token Lemmatization} We use NLTK \citep{bird-2006-nltk} to conduct token lemmatization on the nodes of the dependency tree. Together with the smart initialization of \citet{bevil-spring}, which sets a concept token's embedding to the average of its subword constituents, the embedding of the lemmatized token (``want'') becomes closer to that of the concept (``want-01'') in the embedding matrix, therefore requiring the model to capture deeper semantics when conducting the DP task. \subsection{More Similar Sentence Representation} To examine how different auxiliary tasks affect AMR parsing, we collect sentence representations from the trained encoders of the different tasks\footnote{The computation of sentence representation distances is illustrated in Figure~\ref{fig:comp} in the appendix}. We use the average hidden state of the encoder's output as the sentence representation. We compute the cosine similarity and L2 distance between each auxiliary task's representation and AMR's representation of the same sentence. The test split of AMR 2.0 is used for evaluation. Finally, we fit a Gaussian distribution to the distances and draw the probability density curves shown in Figure~\ref{fig:repr}. It turns out that, under both distance metrics, SRL/DP consistently provide sentence representations more similar to AMR's than translation does. This empirically supports our hypothesis that semantically or formally related tasks can lead to a better initialization for AMR parsing. \subsection{Ablation Study on AMRization Methods} As shown in Table~\ref{tab:amrization_small}, we conduct an ablation study on how different AMRization methods affect the performance of AMR parsing.
For both SRL and DP, jointly adopting our AMRization techniques further improves the performance of AMR parsing significantly compared to trivial linearization. Even the imperfect reentrancy restoration method leads to a significant improvement in both the topology-related and concept-related scores. This reveals that transforming the structure to mimic features of AMR can improve the knowledge transfer between shallow and deep semantics. As shown in Table~\ref{tab:amrization}, compared with jointly using the two techniques, it is worth noting that the model with only Reentrancy Restoration reaches the highest fine-grained scores, especially on Reentrancy and SRL. To explore why it surpasses adopting both techniques, we analyse the number of restored reentrancies. The result shows that about 10k more reentrancies are added when Argument Reduction (AR) is executed first. This is expected, since AR replaces each token span with its root token. Compared with a token span, a single token is more likely to be recognized as a coreference variable by the Reentrancy Restoration (RR) algorithm, thus generating more reentrancies, which might introduce bias into the model. This explains why using RR alone can lead to better results on SRL and Reentrancy. \input{table/amrization} \input{table/finetuning_select} \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{fig/pic.pdf} \caption{The loss curve on the development set of AMR 2.0 for different training paradigms.} \label{fig:training-curve} \end{figure} \subsection{ITL Outweighs MTL} We report the results of different fine-tuning paradigms in Table~\ref{tab:funetuning}. They justify our assumption that classic multitask learning with task tags, as previously applied in \citet{xu-seqpretrain,Damonte2021OneSP}, does not measure up to the intermediate training paradigm for the AMR parsing task.
As shown in Figure~\ref{fig:training-curve}, intermediate-task training provides faster and better convergence than MTL. We assume this is due to the huge gap between AMR parsing and the auxiliary tasks, which may harm the optimization process of MTL: optimizing all auxiliary tasks simultaneously may introduce noise into AMR parsing. We also find that, under the ITL setting, sequentially training on the SRL and DP tasks does not bring further improvement to AMR parsing. We conjecture this is due to catastrophic forgetting. Further regularization during training might help the model progressively learn from different auxiliary tasks and relieve catastrophic forgetting. \subsection{Exploration in Out-of-Distribution Generalization} \input{table/OOD} Following \citet{bevil-spring,lam2021ensembling}, we assess the performance of our models on out-of-distribution (OOD) data. The models trained solely on the AMR 2.0 training data are used to evaluate out-of-distribution performance on the BIO, TLP and News3 datasets. Table~\ref{tab:ood} shows the results of our out-of-distribution experiments. Our model surpasses the other models, even the ensembled one \citep{lam2021ensembling}, setting a new state of the art for single models. \subsection{Exploration in Low-Resource Settings} \input{table/fewshot} Since the annotation of AMR is both time- and labor-consuming, it is natural to ask whether we can improve the learning ability of AMR parsers in low-resource settings. We set up three low-resource benchmarks, \textbf{BOLT, LORELEI, DFA}, for AMR parsing based on different degrees of training-data sufficiency. Details of the datasets are described in Appendix~\ref{app:few-shots}. Compared with the AMR 2.0 dataset, which has 36521 training samples, the numbers of training samples in \textbf{BOLT, LORELEI, DFA} are 2.9\%, 12.2\% and 17.7\% of that of AMR 2.0. Table~\ref{tab:few} reports the results.
Our model surpasses the SPRING model by a very large margin (about 25 Smatch points) on the BOLT dataset, which has the least training data, and gains consistent improvements on all datasets, suggesting that our pretraining method is effective under low-resource conditions. \section{Algorithms} \label{alg:reen} \begin{algorithm}[htb] \caption{Reentrancy Restoration for SRL} \hspace*{0.02in} {\bf Input:} Treenode: T\\ \hspace*{0.02in} {\bf Output:} Graph: G \\ \hspace*{0.02in} {\bf Description:} T is the root node of the original SRL after the node ROOT is added to form a tree structure. G is the output graph with possible reentrancies restored.\\ \hspace*{0.02in} {\bf Global Variables:} Dict: V=\{\}. Here Dict is the standard Python dictionary data structure. \begin{algorithmic}[1] \For{predicate in T.sons} \For{son in predicate.sons} \If{son.name in V.keys()} \State redirect the edge (predicate, son) to V[son.name] \ \ \# restore reentrancy \Else \State V[son.name] = son \EndIf \EndFor \EndFor \State G = T \State \Return G \end{algorithmic} \end{algorithm} \section{Ensemble Models' Methods} \label{app:baselines} \paragraph{Graphene-4S$^E$} \citet{lam2021ensembling} make use of 4 SPRING models trained with different random seeds and their proposed graph ensemble algorithm to do the ensembling. They also include another ensemble model, named Graphene All, which includes four checkpoints from models of different architectures: SPRING \citep{bevil-spring}, APT \citep{zhou2021amr}, T5, and Cai\&Lam \citep{cai2020amr}. We do not report the score of Graphene All since it aggregates models with different inductive biases, while our ensemble model only uses models of one architecture, so the comparison would not be fair. \paragraph{Structure-aware$^E$} \citet{saft} ensemble the results of a combination of 3 models. \paragraph{Ours (w/ SRL)$^E$} Using the same setting as \citet{saft}, we take the average of three models' parameters as the ensemble model.
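Parameter-averaging ensembling can be sketched as follows (plain dicts of floats stand in for real framework state dicts; names are illustrative):

```python
# Sketch of parameter-averaging ensembling: element-wise mean of the
# weights of several same-architecture checkpoints.

def average_checkpoints(state_dicts):
    keys = state_dicts[0].keys()
    assert all(sd.keys() == keys for sd in state_dicts), "same architecture required"
    n = len(state_dicts)
    return {k: [sum(ws) / n for ws in zip(*(sd[k] for sd in state_dicts))]
            for k in keys}

m1 = {"enc.w": [1.0, 2.0], "dec.w": [0.0]}
m2 = {"enc.w": [3.0, 4.0], "dec.w": [3.0]}
m3 = {"enc.w": [5.0, 6.0], "dec.w": [6.0]}
print(average_checkpoints([m1, m2, m3]))
# {'enc.w': [3.0, 4.0], 'dec.w': [3.0]}
```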
\section{Auxiliary Datasets Description} \label{app:dataset_des} \subsection{Summarization} \paragraph{\textsc{CNN/DM} \citep{Hermann2015TeachingMT}} The CNN/DailyMail dataset is an English-language dataset containing news articles written by journalists at CNN and the Daily Mail. The dataset is widely accepted as a benchmark for testing models' summarization performance. \paragraph{\textsc{DialogSum} \citep{chen-etal-2021-dialogsum}} The Real-Life Scenario Dialogue Summarization dataset (\textsc{DialogSum}) is a large-scale summarization dataset for dialogues. Unlike \textsc{CNN/DM}, which focuses on monologue news summarization, \textsc{DialogSum} covers a wide range of daily-life topics in the form of spoken dialogue. We use all the training data (13k) to conduct the intermediate training. \subsection{Translation} \paragraph{\textsc{WMT14 EN-DE}} We select the first 40k, 80k, 200k and 400k training examples from the WMT14 EN-DE training set to form EN-DE translation intermediate tasks. \subsection{Dependency Parsing} \paragraph{\textsc{Penn Treebank} \citep{ptb3}} The Penn Treebank (PTB) project selected 2,499 stories from a three-year Wall Street Journal (WSJ) collection of 98,732 stories for syntactic annotation. We only utilize the dependency structure annotations to form our intermediate dependency parsing task. There are 39,832 (\textasciitilde40k) sentences. \subsection{Semantic Role Labeling} \paragraph{\textsc{OntoNotes} \citep{Weischedel2017OntoNotesA}} The OntoNotes project is built on two resources, following the \textsc{Penn Treebank} \citep{ptb3} for syntax and the \textsc{Penn PropBank} for predicate-argument structure. We select 40k sentences with SRL annotations to form the intermediate task. \section{Low-resource Datasets Description} \label{app:few-shots} We set up three low-resource learning benchmarks for AMR parsing: \begin{enumerate} \item \textbf{BOLT}: using only the BOLT split of the AMR 2.0 dataset.
The training, validation and test sets contain 1061, 133 and 133 AMRs, respectively. \item \textbf{LORELEI}: using only the LORELEI split of the AMR 3.0 dataset. The training, validation and test sets contain 4441, 354 and 527 AMRs, respectively. \item \textbf{DFA}: using only the DFA split of the AMR 2.0 dataset. The training, validation and test sets contain 6455, 210 and 229 AMRs, respectively. \end{enumerate} Compared with the AMR 2.0 dataset, which has 36521 training samples, the numbers of training samples in \textbf{BOLT, LORELEI, DFA} are 2.9\%, 12.2\% and 17.7\% of that of AMR 2.0. \section{Training Details} \label{training_details} We tune the hyper-parameters on the SPRING baseline, and then add the auxiliary data using just those hyper-parameters without any change. We use RAdam \citep{liu-RAdam} as our optimizer, and the learning rate is $3\times10^{-5}$. The batch size is set to 2048 tokens with 10 steps of gradient accumulation. The dropout rate is set to 0.3. \begin{table}[h] \centering \footnotesize \resizebox{0.5\textwidth}{!}{ \begin{tabular}{ll} \toprule Parameter& Search Space \\ \midrule Learning rate & 1e-5, 3e-5, 5e-5, 1e-4 \\ Batch size & 256, 512, 1024, 2048, 4096 \\ Grad. accu. & 10 \\ Dropout & 0.1, 0.2, 0.3 \\ \bottomrule \end{tabular} } \caption{Hyper-parameter search space} \label{tab:hyper} \end{table} \input{table/amization_full} \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{fig/compute.pdf} \caption{Illustration of how we compute the sentence representation distance for different tasks. The sentences used for evaluation were never seen during the training of AMR parsing or the auxiliary tasks. Cosine similarity is computed in the same way. We collect the distances of all sentences for one encoder to draw the Gaussian distribution curve. } \label{fig:comp} \end{figure*} \section{Acknowledgements} We thank all reviewers for their valuable advice.
This paper is supported by the National Key Research and Development Program of China under Grant No.~2020AAA0106700 and the National Science Foundation of China under Grants No.~61936012 and 61876004. \section{Ethics Consideration} We collect our data from public datasets that permit academic use and purchase licenses for the datasets that are not free. The open-source tools we use for training and evaluation are freely accessible online without copyright conflicts. \subsection{Auxiliary Task Selection} We choose from a wide range of tasks, including summarization \citep{Hermann2015TeachingMT,chen-etal-2021-dialogsum}, translation and parsing \citep{ptb3,Weischedel2017OntoNotesA}, to evaluate the contributions of different auxiliary tasks to AMR parsing. We use gold annotation data in all tasks and maintain a close number (\textasciitilde40k) of examples for each task for fair comparison. Due to space limitations, we give the full dataset descriptions in Appendix~\ref{app:dataset_des}. \subsection{Datasets} \paragraph{AMR Datasets} We conducted our experiments on two AMR benchmark datasets, AMR 2.0 and AMR 3.0. AMR 2.0 contains $36521$, $1368$ and $1371$ sentence-AMR pairs in the training, validation and testing sets, respectively. AMR 3.0 has $55635$, $1722$ and $1898$ sentence-AMR pairs in the training, validation and testing sets, respectively. We also conducted experiments on out-of-distribution datasets (BIO, TLP, News3) and in low-resource settings. \paragraph{Auxiliary Task Datasets} Apart from DP/SRL, we choose NLG tasks including summarization and translation to evaluate the contributions of auxiliary tasks. Descriptions of the datasets are listed in Appendix~\ref{app:dataset_des}. \subsection{Evaluation Metrics} We use the Smatch score \citep{cai-smatch} and the fine-grained breakdown scores \citep{dam-smatch-incremental} to evaluate the performance.
To fully understand the aspects in which auxiliary tasks improve AMR parsing, we divide the fine-grained scores into two categories: \textbf{1) Concept-Related}, including the Concept, NER and Negation scores, which focus on concept-centered prediction. \textbf{2) Topology-Related}, including the Unlabeled, Reentrancy and SRL scores, which focus on edge and relation prediction. NoWSD and Wikification are listed as isolated scores, because NoWSD is highly correlated with the Smatch score and Wikification relies on an external entity linking system. \subsection{Experiment Setups} \paragraph{Model Setting} We use the current state-of-the-art seq2seq AMR parsing model SPRING \citep{bevil-spring} as our main baseline and apply BART-Large \citep{lew-bart} as our pretrained model. Blink \citep{li-etal-2020-efficient} is used to add wiki tags to the predicted AMR graphs. We do not apply re-categorization methods, and the other post-processing methods used to restore AMR from the token sequence are the same as in \citet{bevil-spring}. Please refer to Section \ref{training_details} of the appendix for more training details. \paragraph{AMRization Setting} For SRL, we explore four AMRization settings: 1) Trivial: the concept :multi-sentence and the relation :snt are used to represent the virtual root and its edges to each of the SRL trees. 2) With Argument Reduction: we use the dependency parser from the Stanford CoreNLP Toolkit \citep{corenlp} to conduct argument reduction. 3) With Reentrancy Restoration. 4) All techniques. For DP, we apply four AMRization settings: 1) Trivial: extra relations in the dependency tree are added to the vocabulary of BART. 2) With Lemmatization: we use NLTK \citep{bird-2006-nltk} to conduct token lemmatization. 3) With Redundant Relation Removal: we remove the PUNCT, DET, MARK and ROOT relations. 4) All techniques.
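The Redundant Relation Removal setting amounts to a simple label filter over dependency edges (a sketch; the edge triples and tokens below are hand-made for illustration):

```python
# Sketch of Redundant Relation Removal: drop dependency edges whose labels
# carry little sentence-level semantics. Edges are (head, relation, dependent).

REDUNDANT = {"PUNCT", "DET", "MARK", "ROOT"}

def remove_redundant(edges):
    return [(h, r, d) for h, r, d in edges if r not in REDUNDANT]

edges = [
    (None,    "ROOT",  "wants"),
    ("wants", "NSUBJ", "boy"),
    ("boy",   "DET",   "The"),
    ("leave", "MARK",  "to"),
    ("wants", "XCOMP", "leave"),
]
print(remove_redundant(edges))
# keeps only the semantics-bearing NSUBJ and XCOMP edges
```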
\subsection{Auxiliary Task Selection} \label{sec: task selection} When introducing auxiliary tasks for AMR parsing, the selected tasks should be formally or semantically related to AMR, so that the knowledge contained in them can be transferred to AMR parsing. Based on this principle of relevance, we choose semantic role labeling (SRL) and dependency parsing (DP) as our auxiliary tasks. We also include translation and summarization tasks for comparison. \paragraph{Semantic Role Labeling} SRL aims to recover the predicate-argument structure of a sentence, which can enhance AMR parsing because: (1) Recovering the predicate-argument structure is also a sub-task of AMR parsing. As illustrated in Figure \ref{fig:amrization}(a,b), both AMR and SRL locate the predicates (``want'', ``leave'') of the sentence and conduct word-sense disambiguation; they then both capture the multiple arguments of the center predicate. (2) SRL and AMR are known as shallow and deep semantic parsing, respectively. It is reasonable to expect that the shallow semantic knowledge in SRL is useful for deep semantic parsing. \paragraph{Dependency Parsing} DP aims to parse a sentence into a tree structure representing the dependency relations among tokens. The knowledge of DP is useful for AMR parsing since: (1) Linguistically, DP (a syntactic parsing task) can be seen as a precursor task of AMR parsing (semantic parsing). (2) The dependency relations of DP are also related to the semantic relations of AMR; e.g., as illustrated in Figure \ref{fig:srl_dp_amr}(c), ``NSUBJ'' in DP usually corresponds to ``:ARG0'' in AMR; both express the agent-patient relations in the sentence. (3) DP is similar to AMR parsing from the perspective of edge prediction, because both need to capture the relations between the nodes (tokens/concepts) of a sentence.
\subsection{AMRization} \label{sec: amrization} Although SRL and DP are highly related to AMR parsing, there still exist gaps between them; e.g., SRL annotations may be disconnected, while AMR is always a connected graph. To bridge these gaps, we transform them into PseudoAMR, a process we call AMRization. \subsubsection{Transform SRL to PseudoAMR} We summarize the typical gaps between SRL and AMR as: (1) \textit{Connectivity}. AMR is a connected directed graph while the structure of SRL is a forest. (2) \textit{Span-Concept Gap}. Nodes in an AMR graph represent concepts (e.g., ``boy'') while those of SRL are token spans (e.g., ``the boy'', ``that boy''). Actually, all the mentioned token spans correspond to the same concept. (3) \textit{Reentrancy}. Reentrancy is an important feature of AMR: as shown in Figure~\ref{fig:amrization}(a), the instance ``boy'' is referenced twice as ARG0. This feature can be used for coreference resolution. However, there is no reentrancy in SRL. To bridge these gaps, we propose \textbf{Connectivity Formation}, \textbf{Argument Reduction} and \textbf{Reentrancy Restoration} to transform SRL into PseudoAMR. \paragraph{Connectivity Formation} \label{sec:graph formalization} To address the connectivity gap, we need to merge all SRL trees into a connected graph. Note that the merging does not guarantee correctness at the semantic level. As shown in Figure~\ref{fig:amrization}(b-1), we first add a virtual root node and then generate a directed edge from the virtual root to the root of each SRL tree, so that the SRL annotation becomes a connected graph. \paragraph{Argument Reduction} To address the Span-Concept Gap, as shown in Figure~\ref{fig:amrization}(b-2), if the argument of the current predicate is a span with more than one token, we replace this span with its head token in the dependency structure. Thus token spans like ``the boy'' and ``that boy'' are transformed to ``boy'', more similar to the corresponding concept.
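Argument Reduction can be sketched as follows (a toy stand-in: the paper uses the CoreNLP dependency parser; here we assume a precomputed `heads` list, one parent index per token with $-1$ for the root, and the example values are hand-made):

```python
# Sketch of Argument Reduction: replace a multi-token argument span by the
# token that acts as its syntactic head, i.e. the token whose parent lies
# outside the span. `heads` is assumed to come from a dependency parser.

def span_head(span, heads):
    """Return the index of the span token whose head is outside the span."""
    span_set = set(span)
    for i in span:
        if heads[i] not in span_set:   # parent outside the span -> span root
            return i
    return span[0]                     # fallback for a degenerate parse

tokens = ["The", "boy", "wants", "to", "leave"]
heads  = [1, 2, -1, 4, 2]             # "The"->"boy", "boy"->"wants", ...
arg_span = [0, 1]                     # SRL argument "The boy"
print(tokens[span_head(arg_span, heads)])   # reduced to "boy"
```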
A similar method has been applied by \citet{zhang-etal-2021-comparing} to find the heads of argument token spans. \paragraph{Reentrancy Restoration} For the reentrancy gap, we design a heuristic algorithm based on DFS to restore reentrancy in SRL. As shown in Figure~\ref{fig:amrization}(b-3), the core idea of the restoration is that we create a variable when the algorithm first sees a node. If the DFS procedure meets a node with the same name again, the destination of the current edge is redirected to the variable created at the first encounter. Please refer to Appendix~\ref{alg:reen} for the pseudo code of the reentrancy restoration. \paragraph{Dependency Guided Restoration} The previous restoration algorithm cannot guarantee that the merging of nodes agrees with the meaning of reentrancy in AMR, since it merges concepts according to their order of appearance in the SRL structure, and it does not handle the merging of predicates. As shown in Figure~\ref{fig:amrization}(b-3), the nodes ``leave'' and ``leave-01'' should be merged, but we cannot get this information directly from the SRL annotations. We therefore propose another restoration method based on the dependency structure of the sentence corresponding to the SRL, as illustrated in Figure~\ref{fig:dpg}. This restoration algorithm takes the result of the previous Connectivity Formation as input. It first merges the leaf-nodes corresponding to the same token. This step is accurate, since merging leaf-nodes does not introduce divergence. The second step is to merge predicate nodes. For all sub-trees of the root node, it checks whether one predicate appears in another's argument span and whether that predicate directly depends on the span's predicate. If both conditions are satisfied, the algorithm merges the predicate and the span into one node. Last, it removes the root node and root-edges if the graph remains connected after the removal.
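The DFS-based restoration described above can be sketched as follows (toy dict-based nodes; the structures and names are ours, not the released code):

```python
# Sketch of Reentrancy Restoration: walk the tree depth-first, remember the
# first node seen under each name, and redirect later edges with the same
# name to that first node, turning the tree into a graph.

def restore_reentrancy(node, seen=None):
    if seen is None:
        seen = {}
    if node["name"] in seen:
        return seen[node["name"]]          # redirect edge: reentrancy restored
    seen[node["name"]] = node
    node["children"] = [(rel, restore_reentrancy(child, seen))
                        for rel, child in node["children"]]
    return node

# "The boy wants to leave": "boy" is ARG0 of both predicates.
tree = {"name": "ROOT", "children": [
    (":snt1", {"name": "want-01", "children": [
        (":ARG0", {"name": "boy", "children": []}),
        (":ARG1", {"name": "leave", "children": []})]}),
    (":snt2", {"name": "leave-01", "children": [
        (":ARG0", {"name": "boy", "children": []})]})]}

g = restore_reentrancy(tree)
want, leave01 = g["children"][0][1], g["children"][1][1]
print(want["children"][0][1] is leave01["children"][0][1])   # True: shared "boy"
```

Note that, as discussed above, the span node ``leave'' and the predicate ``leave-01'' are *not* merged by this name-based heuristic; that is what Dependency Guided Restoration addresses.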
\begin{figure}[t] \centering \includegraphics[width=1\linewidth]{ fig/dpguided.png} \caption{Illustration of Dependency Guided Restoration. In step 2, the leaf-nodes ``The boy'' are merged. In step 3, the non-leaf node ``leave-01'' is merged with the leaf-node ``to leave'', since ``leave-01'' appears in the word span ``to leave'' and the word ``leave'' depends on the word ``want''.} \label{fig:dpg} \end{figure} \subsubsection{Transform Dependency Structure to PseudoAMR} We summarize the gaps between the dependency tree and AMR as: (1) \textit{Redundant Relation}. Some relations in dependency parsing focus on syntax, e.g., ``:PUNCT'' and ``:DET'', which are far from the semantic relations in AMR. (2) \textit{Token-Concept Gap}. The basic element of the dependency structure is the token, while that of AMR is the concept, which captures deeper syntax-independent semantics. We use \textbf{Redundant Relation Removal} and \textbf{Token Lemmatization} to transform the dependency structure to PseudoAMR and bridge these gaps. \paragraph{Redundant Relation Removal} For the Redundant Relation Gap, we remove relations which are far from the sentence's semantics most of the time, such as ``PUNCT'' and ``DET''. As illustrated in Figure \ref{fig:amrization}(c-1), by removing some relations from the dependency tree, the parsing result becomes more compact compared with the original DP tree, forcing the model to ignore semantics-unrelated tokens during seq2seq training. \paragraph{Token Lemmatization} As shown in Figure \ref{fig:amrization}(c-2), for the Token-Concept Gap, we lemmatize the nodes of the dependency tree, based on the observation that the affixes of a single word do not affect the concept it corresponds to.
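The two dependency-side transformations can be sketched as follows; the relation blacklist and the tiny lemma table are illustrative assumptions standing in for the full relation inventory and a real lemmatizer.

```python
# Sketch of Redundant Relation Removal and Token Lemmatization on a
# dependency tree encoded as (head, relation, dependent) triples.
# The blacklist and lemma table are illustrative assumptions.

REDUNDANT = {"PUNCT", "DET"}   # syntax-only relations to drop

def remove_redundant(edges):
    """Drop dependency edges whose relation carries little semantics."""
    return [(h, rel, d) for h, rel, d in edges if rel not in REDUNDANT]

LEMMAS = {"wants": "want", "boys": "boy"}   # stand-in for a lemmatizer

def lemmatize_nodes(edges):
    """Strip affixes so tokens move closer to AMR concepts."""
    lem = lambda t: LEMMAS.get(t, t)
    return [(lem(h), rel, lem(d)) for h, rel, d in edges]

deps = [("wants", "NSUBJ", "boy"), ("boy", "DET", "the"),
        ("wants", "PUNCT", ".")]
pseudo = lemmatize_nodes(remove_redundant(deps))
print(pseudo)  # -> [('want', 'NSUBJ', 'boy')]
```

After both steps, only the semantically relevant edge survives and its head token matches the concept form ``want''.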
Together with the smart-initialization \citep{bevil-spring}, which sets each concept token's embedding to the average of its subword constituents, the embedding vector of the lemmatized token (`want') becomes closer to that of the concept (`want-01') in the embedding matrix, thereby requiring the model to capture deeper semantics when conducting the DP task. \subsubsection{Linearization} After all AMRization steps, the graph structure of SRL/DP must also be linearized before seq2seq training. As depicted in the right part of Figure~\ref{fig:amrization}, we linearize the graph via a DFS-based traversal, using special tokens \textit{<R0>, ..., <Rk>} to indicate variables and parentheses to mark the depth, which is the best AMR linearization method of \citet{bevil-spring}. \subsection{Training Paradigm Selection} \label{sec: training paradigm} After task selection and AMRization, we still need to choose an appropriate training paradigm to train on PseudoAMR and AMR effectively. We explore three training paradigms as follows: \paragraph{Multitask training} Following \citet{xu-seqpretrain,Damonte2021OneSP}, we use the classic schema of sequence-to-sequence multitask training, adding a special task tag at the beginning of each input sentence and training all tasks simultaneously. The validation of the best model is conducted only on the AMR parsing sub-task. \paragraph{Intermediate training} Similar to \citet{kun2020intermediate}, we first fine-tune the pretrained model on the intermediate task (PseudoAMR parsing), followed by fine-tuning on the target AMR parsing task under the same training setting. \paragraph{Multitask \& Intermediate training} We apply a joint paradigm to further explore how different paradigms affect AMR parsing. We first conduct multitask training, followed by fine-tuning on AMR parsing. Under this circumstance, multitask training plays the role of the intermediate task.
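The multitask input scheme above can be sketched as follows; the tag strings and the batch layout are illustrative assumptions, not the exact tokens used in our implementation.

```python
# Sketch of the multitask scheme: a special task tag is prepended to
# each source sentence so that one seq2seq model can be trained on all
# tasks simultaneously.  Tag strings are illustrative assumptions.

TASK_TAGS = {"amr": "<AMR>", "srl": "<SRL>", "dp": "<DP>"}

def tag_input(task, sentence):
    return f"{TASK_TAGS[task]} {sentence}"

def build_multitask_batch(examples):
    """examples: iterable of (task, sentence, linearized_graph) triples."""
    return [(tag_input(task, src), tgt) for task, src, tgt in examples]

batch = build_multitask_batch([
    ("amr", "The boy wants to leave", "( <R0> want-01 ... )"),
    ("srl", "The boy wants to leave", "( <R0> ... )"),
])
print(batch[0][0])  # -> <AMR> The boy wants to leave
```

Under the intermediate-training paradigm, the same model would instead first be fine-tuned only on the PseudoAMR pairs and then on the AMR pairs, with no task tags needed.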
\subsection{Main Results} We report the results (ITL + all AMRization techniques) on the benchmarks AMR 2.0 and 3.0 in Table~\ref{tab:main_results2.0}. On AMR 2.0, our models with DP or SRL as the intermediate task gain consistent improvements over the SPRING model by a large margin (1.2 Smatch) and reach a new state of the art for single models (85.2 Smatch). Compared with SPRING with 200k extra data, our models achieve higher performance with much less extra data (40k vs. 200k), suggesting the effectiveness of our intermediate tasks. We also compare our models with contemporary work \citep{lam2021ensembling,saft}. It turns out that our ensemble model beats its counterparts with less extra data, reaching a higher performance (85.3 Smatch). In fact, even without ensembling, our model still performs better than those ensemble models, and the model using the Dependency Guided Restoration method achieves higher performance than the trivial one, showing the effectiveness of our methods. On AMR 3.0, our models consistently outperform other models under both the single-model (83.9 Smatch) and the ensemble setting (84.0 Smatch). As on AMR 2.0, our single model reaches a higher Smatch score than those ensemble models, revealing the effectiveness of our proposed methods. \paragraph{Fine-grained Performance} To better analyse how the AMR parser benefits from intermediate training and how different intermediate tasks affect the overall performance, we report the fine-grained scores in Table~\ref{tab:main_results2.0}. By incorporating intermediate tasks, we observe considerable increases on most sub-metrics, especially on the topology-related ones. On both AMR 2.0 and 3.0, our single model with SRL as the intermediate task achieves the highest scores on the Unlabeled, Reentrancy and SRL metrics, suggesting that the SRL intermediate task improves our parser's capability in coreference and SRL.
DP leads to consistent improvements in topology-related metrics and also yields better results on the NER sub-task (92.5 on AMR 2.0, 89.2 on AMR 3.0). We suppose that the ``:nn'' relation, which signifies multi-word named entities in dependency parsing, helps the AMR parser recognize multi-word named entities. Generally speaking, the AMR parser gains large improvements in topology-related sub-tasks and NER, in terms of Smatch scores, by incorporating our intermediate tasks. \subsection{Exploration in Auxiliary Task Selection} \input{ table/task_small} We explore how tasks other than DP and SRL affect AMR parsing. We involve two classic conditional NLG tasks, summarization and machine translation (MT), for comparison, as shown in Table~\ref{tab:task_sel_small}. The comparison implies that SRL and DP are better auxiliary tasks for AMR parsing, even when their counterparts exploit far more data (40k vs. 400k). In fact, the performance of MT drops as more data is introduced, which contradicts the findings of \citet{xu-seqpretrain} that more MT data can lead to better results when pretraining the \textit{raw Transformer model}. However, this is not surprising in the setting of intermediate-task learning, where we already have a model with large-scale pretraining. Whether the form of the intermediate task fits the target task is far more important than the amount of intermediate-task data, as also revealed by \citet{poth2021intermediate}. According to their observation, the tasks with the most data (QQP 363k, MNLI 392k) perform far worse (up to -97.4\% relative performance degradation) on some target tasks than tasks with much smaller datasets (CommonsenseQA 9k, SciTail 23k), which, on the contrary, have a positive influence. In conclusion, our findings suggest that the selection of the intermediate task is important: it should be closely related to AMR parsing in form, as a poor choice can even lead to a performance drop for AMR parsing.
\section{Introduction} \label{intro_sec} \noindent Normal crossings (NC) divisors and varieties are the most basic and important classes of singular objects in complex algebraic and K\"ahler geometries. An \sf{NC divisor} in a smooth variety~$X$ is a subvariety~$V$ locally defined by an equation of the form \BE{Local1_e} z_1\cdots z_k = 0\EE in a holomorphic coordinate chart $(z_1,\ldots,z_n)$ on~$X$. From a global perspective, an NC divisor is the image of an immersion with transverse self-intersections. A \sf{simple normal crossings} (or~\sf{SC}) \sf{divisor} is a global transverse union of smooth divisors, i.e. $$V=\bigcup_{i\in S}\!V_i \subset X$$ with the \sf{singular locus} $$V_\prt=\bigcup_{\begin{subarray}{c}I\subset S\\ |I|=2\end{subarray}}\!\!\!V_{I}\subset V, \qquad\hbox{where}\quad V_I=\bigcap_{i\in I}\!V_i\quad \forall~I\!\subset\!S.$$ An~\sf{NC variety} of complex dimension $n$ is a variety $X_{\eset}$ that can be locally embedded as an NC divisor in~$\C^{n+1}$. Thus, every sufficiently small open set~$U_\eset$ in $X_{\eset}$ can be written~as $$U_\eset=\bigg(\bigsqcup_{i\in S}\!U_i\bigg)\!\Big/\!\!\!\sim, \qquad U_{ij}\approx U_{ji}\quad \forall~i,j\!\in\!S,~i\!\neq\!j,$$ where $\{U_{ij}\}_{j\in S-i}$ is an SC divisor in a smooth component~$U_i$ of~$U_\eset$. An \sf{SC~variety} is a global transverse union of smooth varieties $\{X_i\}_{i\in S}$ along SC divisors $\{X_{ij}\}_{j\in S-i}$ in~$X_i$,~i.e. \BE{Xesetdfn_e} X_{\eset}=\bigg(\bigsqcup_{i\in S}\!X_i\bigg)\!\Big/\!\!\!\sim,\qquad X_{ij}\approx X_{ji}\quad \forall~i,j\!\in\!S,~i\!\neq\!j,\EE with the \sf{singular locus} \BE{Xprtdfn_e} X_{\prt}=\bigcup_{\begin{subarray}{c}i,j\in S\\ i\neq j\end{subarray}} \!\!\!X_{ij}\subset X_\eset. 
\EE A two-dimensional 3-fold SC variety is shown in Figure~\ref{P2cut_fig}.\\ \noindent In parallel with his introduction of $J$-holomorphic curve techniques in symplectic topology in~\cite{Gr}, Gromov asked about the feasibility of introducing notions of singular (sub-)varieties of higher dimension suitable for this field; see \cite[p343]{GrBook}. Important developments since then, such as symplectic sum constructions \cite{Gf,MW}, degeneration and decomposition formulas for Gromov-Witten invariants \cite{Tian,CH,LR,Jun2,Brett}, log Gromov-Witten theory \cite{GS,AC}, affine symplectic geometry \cite{MAff,McLean}, homological mirror symmetry~\cite{Sheridan}, and a new perspective on Atiyah-Floer Conjecture \cite{DF} suggest the need for (soft) symplectic notions of NC divisors and varieties that are equivalent, in a suitable sense, to the corresponding (rigid) geometric notions.\\ \noindent A {\it smooth} symplectic divisor~$V$ in a symplectic manifold $(X,\om)$ is an almost K\"ahler divisor with respect to an $\om$-compatible almost complex structure~$J$ which is integrable in the normal direction to~$V$. Furthermore, the projection \BE{AKtoXproj_e}\AK(X,V)\lra \Symp(X,V), \qquad (\om,J)\lra \om,\EE from the space of such pairs $(\om,J)$ to the space of symplectic forms~$\om$ on~$X$ which restrict to symplectic forms on~$V$ is a weak homotopy equivalence. This property of~\eref{AKtoXproj_e}, rather than its surjectivity, is fundamental to applications of smooth divisors in symplectic topology. 
In~\cite{SympDivConf}, we propose\\ \begin{minipage}{6in}\label{DefPhil_minip} {\it to treat NC symplectic divisors/varieties up to deformation equivalence, showing that each deformation equivalence class has a subspace of sufficiently ``nice" representatives so that an appropriate analogue of~\eref{AKtoXproj_e} is a weak homotopy equivalence}.\\ \end{minipage} \noindent We also prove that unions of so-called {\it positively intersecting collections} of smooth symplectic divisors provide for a notion of SC symplectic divisors compatible with this perspective and lead to a compatible notion of SC symplectic varieties. An overview of the program initiated in~\cite{SympDivConf} and of its potential applications in symplectic topology and algebraic geometry appears in~\cite{SympNCSumm}.\\ \noindent The present paper extends Definitions~\ref{SCD_dfn} and~\ref{SCC_dfn} of SC symplectic divisors and varieties, the notions of regularizations for them, and the main theorems of~\cite{SympDivConf} to arbitrary NC divisors and varieties. This is done from both local and global perspectives, which are better suited for different types of applications. In the local perspectives of Definitions~\ref{NCD_dfn} and~\ref{NCC_dfn}, \sf{NC symplectic divisors} and \sf{varieties} are spaces that are locally SC symplectic divisors and varieties, respectively. In the global perspectives of Proposition~\ref{NCD_prp} and Section~\ref{NCCgl_subs}, NC symplectic divisors and varieties are images of immersions with transverse self-intersections. Regularizations are key to many applications of divisors in symplectic topology, including the symplectic sum constructions of \cite{Gf,MW,SympSumMulti}. They in particular ensure the existence of almost complex structures~$J$ on~$X$ that are ``nice" along divisors in smooth and NC symplectic varieties.
Such almost complex structures in turn play a central role in Gromov-Witten theory and in its interplay with algebraic geometry and string theory, for example.\\ \noindent After recalling the notions of regularizations for SC symplectic divisors and varieties in Section~\ref{SC_sec}, we define NC divisors and their regularizations from a local perspective in Section~\ref{NCDloc_subs} and from a global perspective in Section~\ref{NCDgl_subs}. The two perspectives are equivalent by Proposition~\ref{NCD_prp} and the last paragraph of Section~\ref{NCDgl_subs}. Theorem~\ref{NCD_thm} extends \cite[Theorem~2.13]{SympDivConf} to arbitrary NC symplectic divisors. We define NC varieties and their regularizations from a local perspective in Section~\ref{NCCloc_subs} and from a global perspective in Section~\ref{NCCgl_subs}. The two perspectives are shown to be equivalent in Section~\ref{NCCcomp_subs}. Theorem~\ref{NCC_thm} extends \cite[Theorem~2.17]{SympDivConf} to arbitrary NC symplectic varieties. Section~\ref{eg_subs} provides examples of non-SC normal crossings divisors and varieties. In Section~\ref{NCCpf_sec}, we deduce Theorem~\ref{NCC_thm} from the proof of \cite[Theorem~2.17]{SympDivConf} using the local perspective of Section~\ref{NCCloc_subs} and a seemingly weaker, but equivalent, version of the notion of regularization of \cite[Definition~2.15(1)]{SympDivConf}. \section{Simple crossings divisors and varieties} \label{SC_sec} \noindent We begin by introducing the most commonly used notation. For a set $S$, denote by $\cP(S)$ the collection of subsets of~$S$ and by $\cP^*(S)\!\subset\!\cP(S)$ the collection of nonempty subsets. 
If in addition $i\!\in\!S$, let $$\cP_i(S)=\big\{I\!\in\!\cP(S)\!:\,i\!\in\!I\big\}.$$ For $N\!\in\!\Z^{\ge0}$, let $$[N]=\{1,\ldots,N\}, \quad \cP(N)\!=\!\cP\big([N]\big), \quad \cP^*(N)\!=\!\cP^*\big([N]\big).$$ For $i\!\in\![N]$, let $\cP_i(N)\!=\!\cP_i([N])$.\\ \noindent For $k\!\in\!\Z^{\ge0}$, denote by $\bS_k$ the $k$-th symmetric group. For $k'\!\in\![k]$, we identify $\bS_{k'}$ with the subgroup of~$\bS_k$ consisting of the permutations of $[k']\!\subset\![k]$ and denote by $\bS_{[k]-[k']}\!\subset\!\bS_k$ the subgroup of~$\bS_k$ consisting of the permutations of $[k]\!-\![k']$. In particular, $$\bS_{k'}\times\!\bS_{[k]-[k']}\subset \bS_k\,.$$ For each $\si\!\in\!\bS_k$ and $i\!\in\![k]$, let $\si_i\!\in\!\bS_{k-1}$ be the permutation obtained from the bijection $$[k]\!-\!\{i\}\lra [k]\!-\!\{\si(i)\}, \qquad j\lra \si(j),$$ by identifying its domain and target with $[k\!-\!1]$ in the order-preserving fashions.\\ \noindent If $\cN\!\lra\!V$ is a vector bundle, $\cN'\!\subset\!\cN$, and $V'\!\subset\!V$, we define $$\cN'|_{V'}=\cN|_{V'}\cap\cN'\,.$$ Let $\bI\!=\![0,1]$. \subsection{Notation and definitions} \label{SCdfn_subs} \noindent Let $X$ be a (smooth) manifold. For any submanifold $V\!\subset\!X$, let $$\cN_XV\equiv \frac{TX|_V}{TV}\lra V$$ denote the normal bundle of~$V$ in~$X$. For a collection $\{V_i\}_{i\in S}$ of submanifolds of~$X$ and $I\!\subset\!S$, let $$V_I\equiv \bigcap_{i\in I}\!V_i\subset X\,.$$ Such a collection is called \sf{transverse} if any subcollection $\{V_i\}_{i\in I}$ of these submanifolds intersects transversely, i.e.~the homomorphism \BE{TransVerHom_e} T_xX\oplus\bigoplus_{i\in I}T_xV_i\lra \bigoplus_{i\in I}T_xX, \qquad \big(v,(v_i)_{i\in I}\big)\lra (v\!+\!v_i)_{i\in I}\,,\EE is surjective for all $x\!\in\!V_I$.
By the Inverse Function Theorem, each subspace $V_I\!\subset\!X$ is then a submanifold of~$X$ of codimension $$\codim_XV_I=\sum_{i\in I}\codim_XV_i$$ and the homomorphisms \BE{cNorient_e2}\begin{split} \cN_XV_I\lra \bigoplus_{i\in I}\cN_XV_i\big|_{V_I}\quad&\forall~I\!\subset\!S,\qquad \cN_{V_{I-i}}V_I\lra \cN_XV_i\big|_{V_I} \quad\forall~i\!\in\!I\!\subset\!S,\\ &\bigoplus_{i\in I-I'}\!\!\!\cN_{V_{I-i}}V_I\lra \cN_{V_{I'}}V_I \quad\forall~I'\!\subset\!I\!\subset\!S \end{split}\EE induced by inclusions of the tangent bundles are isomorphisms.\\ \noindent As detailed in \cite[Section~2.1]{SympDivConf}, a transverse collection $\{V_i\}_{i\in S}$ of oriented submanifolds of an oriented manifold~$X$ of even codimensions induces an orientation on each submanifold $V_I\!\subset\!X$ with $|I|\!\ge\!2$; we call it \sf{the intersection orientation of~$V_I$}. If $V_I$ is zero-dimensional, it is a discrete collection of points in~$X$ and the homomorphism~\eref{TransVerHom_e} is an isomorphism at each point $x\!\in\!V_I$; the intersection orientation of~$V_I$ at $x\!\in\!V_I$ then corresponds to a plus or minus sign, depending on whether this isomorphism is orientation-preserving or orientation-reversing. We call the original orientations of $X\!=\!V_{\eset}$ and $V_i\!=\!V_{\{i\}}$ \sf{the intersection orientations} of these submanifolds~$V_I$ of~$X$ with $|I|\!<\!2$.\\ \noindent Suppose $(X,\om)$ is a symplectic manifold and $\{V_i\}_{i\in S}$ is a transverse collection of submanifolds of~$X$ such that each $V_I$ is a symplectic submanifold of~$(X,\om)$. Each $V_I$ then carries an orientation induced by $\om|_{V_{I}}$, which we call the \sf{$\om$-orientation}. If $V_I$ is zero-dimensional, it is automatically a symplectic submanifold of~$(X,\om)$; the $\om$-orientation of~$V_I$ at each point $x\!\in\!V_I$ corresponds to the plus sign by definition. 
By the previous paragraph, the $\om$-orientations of~$X$ and~$V_i$ with $i\!\in\!I$ also induce intersection orientations on all~$V_I$. \begin{dfn}\label{SCD_dfn} Let $(X,\om)$ be a symplectic manifold. A \sf{simple crossings} (or \sf{SC}) \sf{symplectic divisor} in~$(X,\om)$ is a finite transverse collection $\{V_i\}_{i\in S}$ of closed submanifolds of~$X$ of codimension~2 such that $V_I$ is a symplectic submanifold of~$(X,\om)$ for every $I\!\subset\!S$ and the intersection and $\om$-orientations of~$V_I$ agree. \end{dfn} \noindent An SC symplectic divisor $\{V_i\}_{i\in S}$ with $|S|\!=\!1$ is a smooth symplectic divisor in the usual sense. If $(X,\om)$ is a 4-dimensional symplectic manifold, a finite transverse collection $\{V_i\}_{i\in S}$ of closed submanifolds of~$X$ of codimension~2 is an SC symplectic divisor if all points of the pairwise intersections $V_{i_1}\!\cap\!V_{i_2}$ with $i_1\!\neq\!i_2$ are positive. By \cite[Example~2.7]{SympDivConf}, it is not sufficient to consider the deepest (non-empty) intersections in higher~dimensions. \begin{dfn}\label{SCdivstr_dfn} Let $X$ be a manifold and $\{V_i\}_{i\in S}$ be a finite transverse collection of closed submanifolds of~$X$ of codimension~2. A \sf{symplectic structure on $\{V_i\}_{i\in S}$ in~$X$} is a symplectic form~$\om$ on~$X$ such that $V_I$ is a symplectic submanifold of $(X,\om)$ for all $I\!\subset\!S$. \end{dfn} \noindent For $X$ and $\{V_i\}_{i\in S}$ as in Definition~\ref{SCdivstr_dfn}, we denote by $\Symp(X,\{V_i\}_{i\in S})$ the space of all symplectic structures on $\{V_i\}_{i\in S}$ in~$X$ and by $$\Symp^+\big(X,\{V_i\}_{i\in S}\big)\subset \Symp\big(X,\{V_i\}_{i\in S}\big)$$ the subspace of the symplectic forms~$\om$ such that $\{V_i\}_{i\in S}$ is an SC symplectic divisor in~$(X,\om)$. The latter is a union of topological components of the former. \begin{dfn}\label{TransConf_dfn1} Let $N\!\in\!\Z^+$. 
An \sf{$N$-fold transverse configuration} is a tuple $\{X_I\}_{I\in\cP^*(N)}$ of manifolds such that $\{X_{ij}\}_{j\in[N]-i}$ is a transverse collection of submanifolds of~$X_i$ for each $i\!\in\![N]$ and $$X_{\{ij_1,\ldots,ij_k\}}\equiv \bigcap_{m=1}^k\!\!X_{ij_m} =X_{ij_1\ldots j_k}\qquad\forall~j_1,\ldots,j_k\in[N]\!-\!i.$$ A \sf{symplectic structure} on an $N$-fold transverse configuration~$\X$ such that $X_{ij}$ is a closed submanifold of~$X_i$ of codimension~2 for all $i,j\!\in\![N]$ distinct is a~tuple $$(\om_i)_{i\in[N]}\in \prod_{i=1}^N\Symp\big(X_i,\{X_{ij}\}_{j\in[N]-i}\big)$$ such that $\om_{i_1}|_{X_{i_1i_2}}\!=\!\om_{i_2}|_{X_{i_1i_2}}$ for all $i_1,i_2\!\in\![N]$. \end{dfn} \noindent For an $N$-fold transverse configuration~$\X$ as in Definition~\ref{TransConf_dfn1}, we define the spaces \hbox{$X_{\eset}\!\supset\!X_{\prt}$} as in~\eref{Xesetdfn_e} and~\eref{Xprtdfn_e}. If in addition $X_{ij}$ is a closed submanifold of~$X_i$ of codimension~2 for all $i,j\!\in\![N]$ distinct, let $\Symp(\X)$ denote the space of all symplectic structures on $\X$ and $$\Symp^+\big(\X\big)= \Symp\big(\X\big) \cap \prod_{i=1}^N\Symp^+\big(X_i,\{X_{ij}\}_{j\in[N]-i}\big)\,.$$ Thus, if $(\om_i)_{i\in[N]}$ is an element of $\Symp^+(\X)$, then $\{X_{ij}\}_{j\in[N]-i}$ is an SC symplectic divisor in $(X_i,\om_i)$ for each $i\!\in\![N]$. \begin{dfn}\label{SCC_dfn} Let $N\!\in\!\Z^+$. An \sf{$N$-fold simple crossings} (or \sf{SC}) \sf{symplectic configuration} is a~tuple \BE{SCCdfn_e}\X=\big((X_I)_{I\in\cP^*(N)},(\om_i)_{i\in[N]}\big)\EE such that $\{X_I\}_{I\in\cP^*(N)}$ is an $N$-fold transverse configuration, $X_{ij}$ is a closed submanifold of~$X_i$ of codimension~2 for all $i,j\!\in\![N]$ distinct, and $(\om_i)_{i\in[N]}\in\Symp^+(\X)$. The \sf{SC symplectic variety associated~to} such a tuple~$\X$ is the pair~$(X_{\eset},(\om_i)_{i\in[N]})$. 
\end{dfn} \begin{figure} \begin{pspicture}(-3,-2)(11,2) \psset{unit=.3cm} \psline[linewidth=.1](15,-2)(22,-2)\psline[linewidth=.1](15,-2)(15,5) \psline[linewidth=.1](15,-2)(10.5,-6.5)\pscircle*(15,-2){.3} \rput(19.5,2.5){\sm{$\wh\P^2$}}\rput(11.5,-1){\sm{$\wh\P^2$}}\rput(17,-5){\sm{$\wh\P^2$}} \rput(21.5,-2.9){\sm{$E$}}\rput(21.5,-1.1){\sm{$\bar{L}$}} \rput(14.2,4.5){\sm{$\bar{L}$}}\rput(15.7,4.5){\sm{$E$}} \rput(10,-5.7){\sm{$E$}}\rput(12,-6.1){\sm{$\bar{L}$}} \rput(15.8,-1.2){\sm{$P$}} \end{pspicture} \caption{A 3-fold NC variety} \label{P2cut_fig} \end{figure} \noindent A two-dimensional 3-fold SC configuration and associated NC variety are shown in Figure~\ref{P2cut_fig}. In this figure, $\wh\P^2$ denotes the blowup of the complex projective space~$\P^2$ at a point~$p$ and $E,\ov{L}\!\subset\!\wh\P^2$ are the exceptional divisor and the proper transform of a line through~$p$, respectively. In particular, we take $$X_{i,i-1}=\ov{L}\subset X_i=\wh\P^2, \quad X_{i,i+1}=E\subset X_i=\wh\P^2 \qquad\forall~i\in\Z_3\approx\{1,2,3\}$$ and choose an identification $\ov{L}\!\approx\!E$. \subsection{Regularizations} \label{SCDregul_subs} \noindent We next recall the notions of regularizations for a submanifold $V\!\subset\!X$, a symplectic submanifold with a split normal bundle, a transverse collection $\{V_i\}_{i\in S}$ of submanifolds of a manifold~$X$ with a symplectic structure~$\om$ on $(X,\{V_i\}_{i\in S})$, and an SC symplectic configuration~$\X$. A regularization in the sense of Definition~\ref{TransCollregul_dfn}\ref{sympregul_it} for $\{V_i\}_{i\in S}$ in~$(X,\om)$ symplectically models a neighborhood of $x\!\in\!V_I$ in~$X$ on a neighborhood of the zero section~$V_I$ in the normal bundle~$\cN_XV_I$ split as in~\eref{cNorient_e2} with a standardized symplectic form. 
A regularization for~$\X$ in the sense of Definition~\ref{TransConfregul_dfn}\ref{SCCreg_it} is a compatible collection of regularizations for the collections $\{X_{ij}\}_{j\in[N]-i}$ of submanifolds of~$X_i$.\\ \noindent If $B$ is a manifold, possibly with boundary, we call a family $(\om_t)_{t\in B}$ of 2-forms on~$X$ \sf{smooth} if the 2-form~$\wt\om$ on~$B\!\times\!X$ given~by $$\wt\om_{(t,x)}(v,w)= \begin{cases} \om_t|_x(v,w),&\hbox{if}~v,w\!\in\!T_xX;\\ 0,&\hbox{if}~v\!\in\!T_tB; \end{cases}$$ is smooth. Smoothness for families of other objects is defined similarly.\\ \noindent For a vector bundle $\pi\!:\cN\!\lra\!V$, we denote by $\ze_{\cN}$ \sf{the radial vector field} on the total space of~$\cN$; it is given~by $$\ze_{\cN}(v)=(v,v)\in\pi^*\cN =T\cN^{\ver} \lhra T\cN\,.$$ Let $\Om$ be a fiberwise 2-form on~$\cN\!\lra\!V$. A connection~$\na$ on~$\cN$ induces a projection $T\cN\!\lra\!\pi^*\cN$ and thus determines an extension~$\Om_{\na}$ of~$\Om$ to a 2-form on the total space of~$\cN$. If $\om$ is a closed 2-form on~$V$, the 2-form \BE{ombund_e2} \wt\om \equiv \pi^*\om+\frac12\nd\io_{\ze_{\cN}}\Om_{\na} \equiv \pi^*\om+\frac12\nd\big(\Om_{\na}(\ze_{\cN},\cdot)\big)\EE on the total space of $\cN$ is then closed and restricts to~$\Om$ on $\pi^*\cN\!=\!T\cN^{\ver}$. If $\om$ is a symplectic form on~$V$ and $\Om$ is a fiberwise symplectic form on~$\cN$, then~$\wt\om$ is a symplectic form on a neighborhood of~$V$ in~$\cN$.\\ \noindent We call $\pi\!:(L,\rho,\na)\!\lra\!V$ a \sf{Hermitian line bundle} if $V$ is a manifold, $L\!\lra\!V$ is a smooth complex line bundle, $\rho$ is a Hermitian metric on~$L$, and $\na$ is a $\rho$-compatible connection on~$L$. We use the same notation~$\rho$ to denote the square of the norm function on~$L$ and the Hermitian form on~$L$ which is $\C$-antilinear in the second input. 
Thus, $$\rho(v)\equiv\rho(v,v), \quad \rho(\fI v,w)=\fI\rho(v,w)=-\rho(v,\fI w) \qquad\forall~(v,w)\!\in\!L\!\times_V\!L.$$ Let $\rho^{\R}$ denote the real part of the form~$\rho$.\\ \noindent A Riemannian metric on an oriented real vector bundle \hbox{$L\!\lra\!V$} of rank~2 determines a complex structure on the fibers of~$L$. A \sf{Hermitian structure} on an oriented real vector bundle \hbox{$L\!\lra\!V$} of rank~2 is a pair $(\rho,\na)$ such that $(L,\rho,\na)$ is a Hermitian line bundle with the complex structure~$\fI_{\rho}$ determined by the Riemannian metric~$\rho^{\R}$. If $\Om$ is a fiberwise symplectic form on an oriented vector bundle \hbox{$L\!\lra\!V$} of rank~2, an \sf{$\Om$-compatible Hermitian structure} on~$L$ is a Hermitian structure $(\rho,\na)$ on~$L$ such that $\Om(\cdot,\fI_{\rho}\cdot)=\rho^{\R}(\cdot,\cdot)$.\\ \noindent Let $(L_i,\rho_i,\na^{(i)})_{i\in I}$ be a finite collection of Hermitian line bundles over~$V$. If each $(\rho_i,\na^{(i)})$ is compatible with a fiberwise symplectic form~$\Om_i$ on~$L_i$ and $$(\cN,\Om,\na)\equiv\bigoplus_{i\in I}\big(L_i,\Om_i,\na^{(i)}\big),$$ then the 2-form~\eref{ombund_e2} is given~by \BE{ombund_e} \wt\om=\om_{(\rho_i,\na^{(i)})_{i\in I}} \equiv \pi^*\om+\frac12 \sum_{i\in I} \pi_{I;i}^*\nd\big((\Om_i)_{\na^{(i)}}(\ze_{L_i},\cdot)\big),\EE where $\pi_{I;i}\!:\cN\!\lra\!L_i$ is the component projection map.\\ \noindent If in addition $\Psi\!:V'\!\lra\!V$ is a smooth map and $(L_i',\rho_i',\na'^{(i)})_{i\in I}$ is a finite collection of Hermitian line bundles over~$V'$, we call a (fiberwise) vector bundle isomorphism $$\wt\Psi\!: \bigoplus_{i\in I}L'_i\lra \bigoplus_{i\in I}L_i$$ covering~$\Psi$ a \sf{product Hermitian isomorphism} if $$\wt\Psi\!: (L_i',\rho_i',\na'^{(i)}) \lra \Psi^*(L_i,\rho_i,\na^{(i)})$$ is an isomorphism of Hermitian line bundles over~$V'$ for every $i\!\in\!I$.
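\noindent For a simple illustration of~\eref{ombund_e2}, let $L\!=\!V\!\times\!\C$ be the trivial Hermitian line bundle with the trivial connection~$\na$ and the standard fiberwise symplectic form $\Om\!=\!\nd x\!\wedge\!\nd y$ in the fiber coordinate $x\!+\!\fI y$. The radial vector field is then $\ze_L\!=\!x\partial_x\!+\!y\partial_y$, so~that $$\io_{\ze_L}\Om_{\na}=x\,\nd y-y\,\nd x \qquad\hbox{and}\qquad \wt\om=\pi^*\om+\frac12\,\nd\big(x\,\nd y-y\,\nd x\big) =\pi^*\om+\nd x\!\wedge\!\nd y\,;$$ in this case, \eref{ombund_e2} recovers the product symplectic form on $V\!\times\!\C$.\\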
\begin{dfn}\label{smreg_dfn} Let $X$ be a manifold and $V\!\subset\!X$ be a submanifold with normal bundle $\cN_XV\!\lra\!V$. A \sf{regularization for~$V$ in~$X$} is a diffeomorphism $\Psi\!:\cN'\!\lra\!X$ from a neighborhood of~$V$ in~$\cN_XV$ onto a neighborhood of~$V$ in~$X$ such that $\Psi(x)\!=\!x$ and the isomorphism $$ \cN_XV|_x=T_x^{\ver}\cN_XV \lhra T_x\cN_XV \stackrel{\nd_x\Psi}{\lra} T_xX\lra \frac{T_xX}{T_xV}\equiv\cN_XV|_x$$ is the identity for every $x\!\in\!V$. \end{dfn} \noindent If $(X,\om)$ is a symplectic manifold and $V$ is a symplectic submanifold in~$(X,\om)$, then $\om$ induces a fiberwise symplectic form~$\om|_{\cN_XV}$ on the normal bundle~$\cN_XV$ of~$V$ in~$X$ via the isomorphism $$\pi_{\cN_XV}\!: \cN_XV\equiv \frac{TX|_V}{TV}\approx TV^{\om} \equiv \big\{v\!\in\!T_xX\!:\,x\!\in\!V,\,\om(v,w)\!=\!0~\forall\,w\!\in\!T_xV\big\}\,.$$ We denote the restriction of~$\om|_{\cN_XV}$ to a subbundle $L\!\subset\!\cN_XV$ by~$\om|_L$. \begin{dfn}\label{sympreg1_dfn} Let $X$ be a manifold, $V\!\subset\!X$ be a submanifold, and $$\cN_XV=\bigoplus_{i\in I}L_i$$ be a fixed splitting into oriented rank~2 subbundles. If $\om$ is a symplectic form on~$X$ such that $V$ is a symplectic submanifold and $\om|_{L_i}$ is nondegenerate for every $i\!\in\!I$, then an \sf{$\om$-regularization for~$V$ in~$X$} is a tuple $((\rho_i,\na^{(i)})_{i\in I},\Psi)$, where $(\rho_i,\na^{(i)})$ is an $\om|_{L_i}$-compatible Hermitian structure on~$L_i$ for each $i\!\in\!I$ and $\Psi$ is a regularization for~$V$ in~$X$, such that $$\Psi^*\om=\om_{(\rho_i,\na^{(i)})_{i\in I}}\big|_{\Dom(\Psi)}.$$ \end{dfn} \vspace{.1in} \noindent Suppose $\{V_i\}_{i\in S}$ is a transverse collection of codimension~2 submanifolds of~$X$. For each $I\!\subset\!S$, the last isomorphism in~\eref{cNorient_e2} provides a natural decomposition $$\pi_I\!:\cN_XV_I\!=\!\bigoplus_{i\in I}\cN_{V_{I-i}}V_I \lra V_I$$ of the normal bundle of~$V_I$ in~$X$ into oriented rank~2 subbundles. 
We take this decomposition as given for the purposes of applying Definition~\ref{sympreg1_dfn}. If in addition $I'\!\subset\!I$, let \BE{cNIIprdfn_e}\pi_{I;I'}\!:\cN_{I;I'}\equiv \bigoplus_{i\in I-I'}\!\!\!\cN_{V_{I-i}}V_I=\cN_{V_{I'}}V_I\lra V_I\,.\EE There are canonical identifications \BE{cNtot_e}\cN_{I;I-I'}=\cN_XV_{I'}|_{V_I}, \quad \cN_XV_I=\pi_{I;I'}^*\cN_{I;I-I'}=\pi_{I;I'}^*\cN_XV_{I'} \qquad\forall~I'\!\subset\!I\!\subset\![N].\EE The first equality in the second statement above is used in particular in~\eref{overlap_e}. \begin{dfn}\label{TransCollReg_dfn} Let $X$ be a manifold and $\{V_i\}_{i\in S}$ be a transverse collection of submanifolds of~$X$. A \sf{system of regularizations for} $\{V_i\}_{i\in S}$ in~$X$ is a~tuple $(\Psi_I)_{I\subset S}$, where $\Psi_I$ is a regularization for~$V_I$ in~$X$ in the sense of Definition~\ref{smreg_dfn}, such~that \BE{Psikk_e} \Psi_I\big(\cN_{I;I'}\!\cap\!\Dom(\Psi_I)\big)=V_{I'}\!\cap\!\Im(\Psi_I)\EE for all $I'\!\subset\!I\!\subset\!S$. \end{dfn} \noindent Given a system of regularizations as in Definition~\ref{TransCollReg_dfn} and $I'\!\subset\!I\!\subset\!S$, let $$\cN_{I;I'}' = \cN_{I;I'}\!\cap\!\Dom(\Psi_I), \qquad \Psi_{I;I'}\equiv \Psi_I\big|_{\cN_{I;I'}'}\!: \cN_{I;I'}'\lra V_{I'}\,.$$ The map $\Psi_{I;I'}$ is a regularization for $V_I$ in~$V_{I'}$. As explained in \cite[Section~2.2]{SympDivConf}, $\Psi_I$ determines an isomorphism \BE{wtPsiIIdfn_e} \fD\Psi_{I;I'}\!: \pi_{I;I'}^*\cN_{I;I-I'}\big|_{\cN_{I;I'}'} \lra \cN_XV_{I'}\big|_{V_{I'}\cap\Im(\Psi_I)}\EE of vector bundles covering~$\Psi_{I;I'}$ and respecting the natural decompositions of $\cN_{I;I-I'}\!=\!\cN_XV_{I'}|_{V_I}$ and $\cN_XV_{I'}$. By the last assumption in Definition~\ref{smreg_dfn}, $$\fD\Psi_{I;I'}\big|_{\pi_{I;I'}^*\cN_{I;I-I'}|_{V_I}}\!=\!\id\!:\, \cN_{I;I-I'}\lra \cN_XV_{I'}|_{V_I}$$ under the canonical identification of $\cN_{I;I-I'}$ with $\cN_XV_{I'}|_{V_I}$. 
\begin{dfn}\label{TransCollregul_dfn} Let $X$ be a manifold and $\{V_i\}_{i\in S}$ be a transverse collection of submanifolds of~$X$. \begin{enumerate}[label=(\arabic*),leftmargin=*] \item A \sf{regularization for $\{V_i\}_{i\in S}$ in~$X$} is a system of regularizations $(\Psi_I)_{I\subset S}$ for $\{V_i\}_{i\in S}$ in~$X$ such~that \BE{overlap_e} \fD\Psi_{I;I'}\big(\Dom(\Psi_I)\big) =\Dom(\Psi_{I'})\big|_{V_{I'}\cap\Im(\Psi_I)}, \quad \Psi_I=\Psi_{I'}\circ\fD\Psi_{I;I'}|_{\Dom(\Psi_I)}\EE for all $I'\!\subset\!I\!\subset\!S$. \item\label{sympregul_it} Suppose in addition that $V_i$ is a closed submanifold of~$X$ of codimension~2 for every $i\!\in\!S$ and \hbox{$\om\!\in\!\Symp(X,\{V_i\}_{i\in S})$}. An \sf{$\om$-regularization for $\{V_i\}_{i\in S}$ in~$X$} is a~tuple $$(\cR_I)_{I\subset S} \equiv \big((\rho_{I;i},\na^{(I;i)})_{i\in I},\Psi_I\big)_{I\subset S}$$ such that $\cR_I$ is an $\om$-regularization for~$V_I$ in~$X$ for each $I\!\subset\!S$, $(\Psi_I)_{I\subset S}$ is a regularization for $\{V_i\}_{i\in S}$ in~$X$, and the induced vector bundle isomorphisms~\eref{wtPsiIIdfn_e} are product Hermitian isomorphisms for all $I'\!\subset\!I\!\subset\!S$. \end{enumerate} \end{dfn} \vspace{.1in} \noindent If $(\cR_I)_{I\subset S}$ is a regularization for $\{V_i\}_{i\in S}$ in~$X$, then \BE{SCDcons_e2} \Psi_{I;I''}=\Psi_{I';I''}\circ\fD\Psi_{I;I'}\big|_{\cN_{I;I''}'}\,, \quad \fD\Psi_{I;I''}=\fD\Psi_{I';I''}\circ \fD\Psi_{I;I'}\big|_{\pi_{I;I''}^*\cN_{I;I-I''}|_{\cN_{I;I''}'}}\EE for all $I''\!\subset\!I'\!\subset\!I\!\subset\!S$.\\ \noindent Suppose $\{X_I\}_{I\in\cP^*(N)}$ is a transverse configuration in the sense of Definition~\ref{TransConf_dfn1}. 
For each $I\!\in\!\cP^*(N)$ with $|I|\!\ge\!2$, let $$\pi_I\!:\cN X_I\equiv \bigoplus_{i\in I}\cN_{X_{I-i}}X_I\lra X_I\,.$$ If in addition $I'\!\subset\!I$, let $$\pi_{I;I'}\!:\cN_{I;I'}\equiv \bigoplus_{i\in I-I'}\!\!\!\cN_{X_{I-i}}X_I\lra X_I\,.$$ By the last isomorphism in~\eref{cNorient_e2} with $X\!=\!X_i$ for any $i\!\in\!I'$ and $\{V_j\}_{j\in S}\!=\!\{X_{ij}\}_{j\in[N]-i}$, $$ \cN_{I;I'}=\cN_{X_{I'}}X_I \qquad\forall~I'\!\subset\!I\!\subset\![N],~I'\!\neq\!\eset.$$ Similarly to~\eref{cNtot_e}, there are canonical identifications $$\cN_{I;I-I'}=\cN X_{I'}|_{X_I}, \quad \cN X_I=\pi_{I;I'}^*\cN_{I;I-I'}=\pi_{I;I'}^*\cN X_{I'} \qquad\forall~I'\!\subset\!I\!\subset\![N];$$ the first and last identities above hold if $|I'|\!\ge\!2$. \begin{dfn}\label{TransConfregul_dfn} Let $N\!\in\!\Z^+$ and $\X\!=\!\{X_I\}_{I\in\cP^*(N)}$ be a transverse configuration as in Definition~\ref{TransConf_dfn1}. \begin{enumerate}[label=(\arabic*),leftmargin=*] \item A \sf{regularization for $\X$} is a tuple $(\Psi_{I;i})_{i\in I\subset[N]}$, where for each $i\!\in\!I$ the tuple $(\Psi_{I;i})_{I\in\cP_i(N)}$ is a regularization for $\{X_{ij}\}_{j\in[N]-i}$ in~$X_i$ in the sense of Definition~\ref{TransCollregul_dfn}, such that \BE{SCCregCond_e0} \Psi_{I;i_1}\big|_{\cN_{I;i_1i_2}\cap\Dom(\Psi_{I;i_1})} =\Psi_{I;i_2}\big|_{\cN_{I;i_1i_2}\cap\Dom(\Psi_{I;i_2})}\EE for all $i_1,i_2\!\in\!I\!\subset\![N]$. \item\label{SCCreg_it} Suppose in addition that $X_{ij}$ is a closed submanifold of~$X_i$ of codimension~2 for all $i,j\!\in\![N]$ distinct and $(\om_i)_{i\in[N]}\!\in\!\Symp^+(\X)$. 
An \sf{$(\om_i)_{i\in[N]}$-regularization for~$\X$} is a~tuple \BE{SCCregdfn_e0} \fR\equiv (\cR_I)_{I\in\cP^*(N)} \equiv \big(\rho_{I;i},\na^{(I;i)},\Psi_{I;i}\big)_{i\in I\subset[N]}\EE such that $(\Psi_{I;i})_{i\in I\subset[N]}$ is a regularization for $\X$ and for each $i\!\in\![N]$ the~tuple $$\big((\rho_{I;j},\na^{(I;j)})_{j\in I-i},\Psi_{I;i}\big)_{I\in\cP_i(N)}$$ is an $\om_i$-regularization for $\{X_{ij}\}_{j\in[N]-i}$ in $X_i$ in the sense of Definition~\ref{TransCollregul_dfn}\ref{sympregul_it}. \end{enumerate} \end{dfn} \vspace{.1in} \noindent For a smooth family $(\om_t)_{t\in B}$ of symplectic forms in $\Symp(X,\{V_i\}_{i\in S})$, Definition~\ref{TransCollregul_dfn}\ref{sympregul_it} naturally extends to provide a notion of \sf{$(\om_t)_{t\in B}$-family of regularizations for $\{V_i\}_{i\in S}$ in~$X$}; see \cite[Definition~2.12(2)]{SympDivConf}. For a smooth family of symplectic structures $(\om_{t;i})_{t\in B,i\in[N]}$ on~$\X$, Definition~\ref{TransConfregul_dfn}\ref{SCCreg_it} similarly extends to provide a notion of \sf{$(\om_{t;i})_{t\in B,i\in[N]}$-family of regularizations for~$\X$}; see \cite[Definition~2.15(2)]{SympDivConf}. The first extension topologizes the set $\Aux(X,\{V_i\}_{i\in S})$ of pairs $(\om,(\cR_I)_{I\subset S})$ consisting of a symplectic structure~$\om$ on $\{V_i\}_{i\in S}$ in~$X$ and an $\om$-regularization $(\cR_I)_{I\subset S}$ for $\{V_i\}_{i\in S}$ in~$X$. The second extension topologizes the set $\Aux(\X)$ of pairs $((\om_i)_{i\in[N]},\fR)$ consisting of a symplectic structure $(\om_i)_{i\in[N]}$ on~$\X$ and an $(\om_i)_{i\in[N]}$-regularization~$\fR$ for~$\X$.\\ \noindent The existence of regularizations requires the symplectic divisors $V_i\!\subset\!X$ and $X_{ij}\!\subset\!X_i$ to meet $\om$-orthogonally and $\om_i$-orthogonally, respectively, which is rarely the case. 
However, \cite[Theorems~2.13 and~2.17]{SympDivConf} imply~that the projections \BE{AuxtoSympNC_e}\begin{aligned} \Aux\big(X,\{V_i\}_{i\in S}\big) &\lra \Symp^+\big(X,\{V_i\}_{i\in S}\big), & (\om,\fR)&\lra\om,\\ \Aux(\X) &\lra \Symp^+(\X), & \big((\om_i)_{i\in[N]},\fR\big)&\lra(\om_i)_{i\in[N]}, \end{aligned}\EE are weak homotopy equivalences; in particular, whenever $\{V_i\}_{i\in S}$ is an SC symplectic divisor in the sense of Definition~\ref{SCD_dfn} and $(X_{\eset},(\om_i)_{i\in[N]})$ is an NC symplectic variety in the sense of Definition~\ref{SCC_dfn}, regularizations exist after a suitable deformation of the symplectic structure. \section{Normal crossings symplectic divisors} \label{NCD_sec} \noindent NC~divisors are spaces that are locally SC~divisors. This local perspective makes it fairly straightforward to define NC divisors and notions of regularizations. NC~divisors can also be viewed as analogues of SC~divisors for immersions instead of submanifolds. This global perspective leads to a more succinct notion of regularizations for NC~divisors and fits better with some applications. \subsection{Local perspective} \label{NCDloc_subs} \noindent Definitions~\ref{NCD_dfn} and~\ref{NCDregul_dfn} below locally correspond to Definitions~\ref{SCD_dfn} and~\ref{TransCollregul_dfn}\ref{sympregul_it}, respectively. \begin{dfn}\label{NCsubsp_dfn} Let $X$ be a manifold. A subspace $V\!\subset\!X$ is a \sf{normal crossings} (or \sf{NC}) \sf{divisor} if for every $x\!\in\!X$ there exist an open neighborhood~$U$ of~$x$ in~$X$ and a finite transverse collection $\{V_i\}_{i\in S}$ of closed submanifolds of~$U$ of codimension~2 such~that $$V\cap U= \bigcup_{i\in S}\!V_i\,.$$ \end{dfn} \begin{dfn}\label{NCD_dfn} Let $(X,\om)$ be a symplectic manifold. A subspace $V\!\subset\!X$ is an \sf{NC symplectic divisor in~$(X,\om)$} if for every $x\!\in\!X$ there exist $U$ and $\{V_i\}_{i\in S}$ as in Definition~\ref{NCsubsp_dfn} such that $\{V_i\}_{i\in S}$ is an SC symplectic divisor in~$(U,\om|_U)$.
\end{dfn} \noindent By Definition~\ref{NCsubsp_dfn}, every NC divisor $V\!\subset\!X$ is a closed subspace. So is its \sf{singular locus} $V_{\prt}\!\subset\!V$ consisting of the points $x\!\in\!V$ such that there exist $U$ and $\{V_i\}_{i\in S}$ as in Definition~\ref{NCsubsp_dfn} and $I\!\subset\!S$ with $|I|\!=\!2$ and $V_I\!\ni\!x$. For an NC divisor $V\!\subset\!X$, denote by $\Symp^+(X,V)$ the space of all symplectic forms~$\om$ on~$X$ so that $V$ is an NC symplectic divisor in~$(X,\om)$. An SC symplectic divisor in the sense of Definition~\ref{SCD_dfn} is an NC symplectic divisor, as we can take $U\!=\!X$ for every $x\!\in\!X$.\\ \noindent Let $V\!\subset\!X$ be an NC divisor. For each \sf{chart} $(U,\{V_i\}_{i\in S})$ as in Definition~\ref{NCsubsp_dfn} and each $x\!\in\!U$, let $$S_x=\big\{i\!\in\!S\!:\,x\!\in\!V_i\big\}\,.$$ If $(U',\{V_i'\}_{i\in S'})$ is another chart for $V$ in~$X$ and $x\!\in\!U\!\cap\!U'$, there exist a neighborhood~$U_x$ of~$x$ in $U\!\cap\!U'$ and a bijection \BE{NCDoverlap_e0}h_x\!:S_x\lra S_x' \qquad\hbox{s.t.}\quad V_i\!\cap\!U_x=V_{h_x(i)}'\!\cap\!U_x~~\forall\,i\!\in\!S_x\,.\EE We also denote by $h_x$ the induced bijection $\cP(S_x)\!\lra\!\cP(S_x')$. By~\eref{NCDoverlap_e0}, $$\cN_XV_I\big|_{V_I\cap U_x} =\cN_XV_{I'}'\big|_{V_{I'}'\cap U_x} \qquad\forall~I\subset S_x,~I'\!=\!h_x(I)\,.$$ Suppose $$\big(\cR_I\big)_{I\subset S} \equiv \big((\rho_{I;i},\na^{(I;i)})_{i\in I},\Psi_I\big)_{I\subset S}, \quad \big(\cR_I'\big)_{I\subset S'} \equiv \big((\rho_{I;i}',\na'^{(I;i)})_{i\in I},\Psi_I'\big)_{I\subset S'}$$ are an $\om|_U$-regularization for $\{V_i\}_{i\in S}$ in~$U$ and an $\om|_{U'}$-regularization for $\{V_i'\}_{i\in S'}$ in~$U'$.
We define $$\big(\cR_I\big)_{I\subset S}\cong_X\big(\cR_I'\big)_{I\subset S'}$$ if for every $x\!\in\!U\!\cap\!U'$ there exist $U_x$ and~$h_x$ as in~\eref{NCDoverlap_e0} such~that \begin{gather*} \big(\rho_{I;i},\na^{(I;i)}\big)\big|_{V_I\cap U_x} =\big(\rho_{I';i'}',\na'^{(I';i')}\big)\big|_{V_{I'}'\cap U_x},\\ \Psi_I=\Psi_{I'}' \quad\hbox{on}\quad \Dom(\Psi_I)|_{V_I\cap U_x}\cap \Dom(\Psi_{I'}')|_{V_{I'}'\cap U_x} \end{gather*} for all $i\!\in\!I\!\subset\!S_x$, $i'\!\equiv\!h_x(i)\in I'\!\equiv\!h_x(I)$. \begin{dfn}\label{NCDregul_dfn} Let $X$ be a manifold, $V\!\subset\!X$ be an NC divisor, and $(U_y,\{V_{y;i}\}_{i\in S_y})_{y\in\cA}$ be a collection of charts for~$V$ in~$X$ as in Definition~\ref{NCsubsp_dfn}. \begin{enumerate}[label=(\arabic*),leftmargin=*] \item\label{NCDregul_it1} If $\om\!\in\!\Symp^+(X,V)$, an \sf{$\om$-regularization for $V$ in~$X$} is a~collection \BE{NCDregul_e1} \fR\equiv (\cR_{y;I})_{y\in\cA,I\subset S_y} \equiv \big((\rho_{y;I;i},\na^{(y;I;i)})_{i\in I},\Psi_{y;I}\big)_{y\in\cA,I\subset S_y}\EE such that $(\cR_{y;I})_{I\subset S_y}$ is an $\om|_{U_y}$-regularization for $\{V_{y;i}\}_{i\in S_y}$ in~$U_y$ for each $y\!\in\!\cA$ and \BE{NCDregul_e2}\big(\cR_{y;I}\big)_{I\subset S_y}\cong_X\big(\cR_{y';I}\big)_{I\subset S_{y'}} \qquad \forall\,y,y'\!\in\!\cA\,.\EE \item\label{NCDregul_it2} If $B$ is a manifold, possibly with boundary, and $(\om_t)_{t\in B}$ is a smooth family of symplectic forms in $\Symp^+(X,V)$, an \sf{$(\om_t)_{t\in B}$-family of regularizations for $V$ in~$X$} is a smooth family of~tuples $(\cR_{t;y;I})_{t\in B,y\in\cA,I\subset S_y}$ such that $(\cR_{t;y;I})_{y\in\cA,I\subset S_y}$ is an $\om_t$-regularization for $V$ in~$X$ for each $t\!\in\!B$ and $(\cR_{t;y;I})_{t\in B,I\subset S_y}$ is an $(\om_t|_{U_y})_{t\in B}$-family of regularizations for $\{V_{y;i}\}_{i\in S_y}$ in~$U_y$ for each $y\!\in\!\cA$.
\end{enumerate} \end{dfn} \vspace{.1in} \noindent Suppose $X$, $V$, and $(U_y,\{V_{y;i}\}_{i\in S_y})_{y\in\cA}$ are as in Definition~\ref{NCDregul_dfn} and $(\om_t)_{t\in B}$ is a family of symplectic forms in $\Symp^+(X,V)$. We define two $(\om_t)_{t\in B}$-families of regularizations for~$V$ in~$X$ to be \sf{equivalent}, $$\big(\cR_{t;y;I}^{(1)}\big)_{t\in B,y\in\cA,I\subset S_y}\cong \big(\cR_{t;y;I}^{(2)}\big)_{t\in B,y\in\cA,I\subset S_y},$$ if they agree on the level of germs. This means that for every $y\!\in\!\cA$ the families $$\big(\cR_{t;y;I}^{(1)}\big)_{t\in B,I\subset S_y} \qquad\hbox{and}\qquad \big(\cR_{t;y;I}^{(2)}\big)_{t\in B,I\subset S_y}$$ of regularizations for $\{V_{y;i}\}_{i\in S_y}$ in~$U_y$ agree on the level of germs as formally defined just before \cite[Theorem~2.13]{SympDivConf}. \begin{thm}\label{NCD_thm} Let $X$, $V$, and $(U_y,\{V_{y;i}\}_{i\in S_y})_{y\in\cA}$ be as in Definition~\ref{NCDregul_dfn} and $X^*\!\subset\!X$ be an open subset such that $\ov{X^*}\!\cap\!V_{\prt}\!=\!\eset$. Suppose \begin{enumerate}[label=$\bullet$,leftmargin=*] \item $B$ is a compact manifold, possibly with boundary, \item $N(\prt B),N'(\prt B)$ are neighborhoods of $\prt B$ in~$B$ such that $\ov{N'(\prt B)}\!\subset\!N(\prt B)$, \item $(\om_t)_{t\in B}$ is a family of symplectic forms in $\Symp^+(X,V)$, \item $(\cR_{t;y;I})_{t\in N(\prt B),y\in\cA,I\subset S_y}$ is an $(\om_t)_{t\in N(\prt B)}$-family of regularizations for $V$ in~$X$.
\end{enumerate} Then there exist a smooth family $(\mu_{t,\tau})_{t\in B,\tau\in\bI}$ of 1-forms on~$X$ such~that \begin{gather*} \mu_{t,0}=0, \quad \supp\big(\mu_{\cdot,\tau}\big)\subset \big(B\!-\!N'(\prt B)\big)\!\times\!(X\!-\!X^*) \qquad \forall~t\!\in\!B,\,\tau\!\in\!\bI,\\ \big(\om_{t,\tau}\equiv\om_t\!+\!\nd\mu_{t,\tau}\big) \in \Symp^+\big(X,V\big) \qquad \forall~t\!\in\!B,\,\tau\!\in\!\bI, \end{gather*} and an $(\om_{t,1})_{t\in B}$-family $(\wt\cR_{t;y;I})_{t\in B,y\in\cA,I\subset S_y}$ of regularizations for $V$ in~$X$ such~that $$(\wt\cR_{t;y;I})_{t\in N'(\prt B),y\in\cA,I\subset S_y} \cong (\cR_{t;y;I})_{t\in N'(\prt B),y\in\cA,I\subset S_y}\,.$$ \end{thm} \vspace{.1in} \noindent Definition~\ref{NCDregul_dfn}\ref{NCDregul_it2} topologizes the set $\Aux(X,V)$ of pairs $(\om,\fR)$ consisting of a symplectic structure~$\om$ on an NC divisor~$V$ in~$X$ and an $\om$-regularization $\fR$ for~$V$ in~$X$. Theorem~\ref{NCD_thm} above, which is the direct analogue of \cite[Theorem~2.13]{SympDivConf} for arbitrary NC divisors, implies that the first projection in~\eref{AuxtoSympNC_e} is a weak homotopy equivalence in this general setting as well. Similarly to the situation with \cite[Theorems~2.13,2.17]{SympDivConf}, Theorem~\ref{NCD_thm} is implied by Theorem~\ref{NCC_thm}; see the paragraph after \cite[Theorem~2.13]{SympDivConf} and Example~\ref{NCDvsC_eg}. \subsection{Global perspective} \label{NCDgl_subs} \noindent We now give an equivalent global description of the notions introduced in Section~\ref{NCDloc_subs}. We do so by viewing an NC symplectic divisor in the sense of Definition~\ref{NCD_dfn} as the image of a transverse immersion~$\io$ with certain properties.\\ \noindent For any map $\io\!:\wt{V}\!\lra\!X$ and $k\!\in\!\Z^{\geq 0}$, let \BE{tViotak_e} \wt{V}_{\io}^{(k)}=\big\{(x,\wt{v}_1,\ldots,\wt{v}_k)\!\in\!X\! 
\times\!(\wt{V}^k\!-\!\De_{\wt{V}}^{(k)})\!:\,\io(\wt{v}_i)\!=\!x~\forall\,i\!\in\![k]\big\},\EE where $\De_{\wt{V}}^{(k)}\!\subset\!\wt{V}^k$ is the big diagonal (at least two of the coordinates are the same). Define \begin{gather}\label{iotak_e} \io_k\!:\wt{V}_{\io}^{(k)}\lra X, \qquad \io_k(x,\wt{v}_1,\ldots,\wt{v}_k)=x,\\ \label{Xiotak_e} V_{\io}^{(k)}=\io_k(\wt{V}_{\io}^{(k)}) =\big\{x\!\in\!X\!:\,\big|\io^{-1}(x)\big|\!\ge\!k\big\}. \end{gather} For example, $$\wt{V}_{\io}^{(0)},V_{\io}^{(0)}=X, \qquad \wt{V}_{\io}^{(1)}\approx\wt{V}, \qquad V_{\io}^{(1)}=\io(\wt{V}).$$ \vspace{.2in} \noindent For $k',k\!\in\!\Z^{\ge0}$ and $i\!\in\!\Z^+$ with $i,k'\!\le\!k$, define \begin{alignat}{2} \label{whiokk_e} \wt\io_{k;k'}\!:\wt{V}_{\io}^{(k)}&\lra\wt{V}_{\io}^{(k')}, &\quad \wt\io_{k;k'}(x,\wt{v}_1,\ldots,\wt{v}_k)&=(x,\wt{v}_1,\ldots,\wt{v}_{k'}),\\ \label{iokcjdfn_e} \wt\io_{k;k-1}^{(i)}\!: \wt{V}_{\io}^{(k)}&\lra\wt{V}_{\io}^{(k-1)}, &\quad \wt\io_{k;k-1}^{(i)}(x,\wt{v}_1,\ldots,\wt{v}_k)&=(x,\wt{v}_1,\ldots,\wt{v}_{i-1},\wt{v}_{i+1},\ldots,\wt{v}_k),\\ \label{iokjdfn_e} \wt\io_k^{(i)}\!: \wt{V}_{\io}^{(k)} &\lra \wt{V}, &\quad \wt\io_k^{(i)}(x,\wt{v}_1,\ldots,\wt{v}_k)&=\wt{v}_i. \end{alignat} For example, \begin{alignat*}{2} \wt\io_{k;k'}\!=\!\wt\io_{k'+1;k'}^{(k'+1)}\!\circ\!\ldots\!\circ\!\wt\io_{k;k-1}^{(k)} \!&:\wt{V}_{\io}^{(k)}\lra\wt{V}_{\io}^{(k')}, &\qquad \wt\io_{k;1}\!\approx\!\wt\io_k^{(1)}\!&: \wt{V}_{\io}^{(k)}\lra\wt{V}_{\io}^{(1)}\!\approx\!\wt{V}, \\ \wt\io_{k;0}\!=\!\io_k\!&:\wt{V}_{\io}^{(k)}\lra\wt{V}_{\io}^{(0)}\!=\!X, &\qquad \wt\io_{1;0}\!\approx\!\io\!&:\wt{V}_{\io}^{(1)}\!\approx\!\wt{V}\lra X. \end{alignat*} We define an $\bS_k$-action on $\wt{V}_{\io}^{(k)}$ by requiring that \BE{SkVk_e} \wt\io_k^{(i)}=\wt\io_k^{(\si(i))}\!\circ\!\si\!: \wt{V}_{\io}^{(k)} \lra \wt{V}\EE for all $\si\!\in\!\bS_k$ and $i\!\in\![k]$. 
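
\noindent For example (a two-branch case included only as an illustration), suppose $\io\!:\wt{V}\!\equiv\!V_1\!\sqcup\!V_2\!\lra\!X$ is the inclusion of two distinct closed submanifolds intersecting transversely in $V_{12}\!\equiv\!V_1\!\cap\!V_2$. Then $$\wt{V}_{\io}^{(1)}\approx V_1\!\sqcup\!V_2, \qquad \wt{V}_{\io}^{(2)}\approx V_{12}\!\sqcup\!V_{12}, \qquad \wt{V}_{\io}^{(k)}=\eset~~\forall\,k\!\ge\!3,$$ with the two copies of~$V_{12}$ corresponding to the two orderings of the branches. The $\bS_2$-action determined by~\eref{SkVk_e} interchanges the two copies, $\io_2$ restricts to the inclusion of $V_{12}$ into~$X$ on each of them, and $V_{\io}^{(2)}\!=\!V_{12}$.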
The diagrams \BE{Vkdiag_e}\begin{split} \xymatrix{\wt{V}_{\io}^{(k)~{}} \ar[rr]^{\wt\io_k^{(i)}} \ar[d]^{\wt\io_{k;k-1}^{(i)}} \ar@/_2pc/[dd]_{\wt\io_{k;k'}} \ar@/^1pc/[rrdd]^{\io_k} && \wt{V} \ar[dd]^{\io} \\ \wt{V}_{\io}^{(k-1)} \ar[rrd]^<<<<<<{\!\io_{k-1}} \ar@{-->}[d]^{\wt\io_{k-1;k'}} && \\ \wt{V}_{\io}^{(k')} \ar[rr]^{\io_{k'}}& &X} \end{split} \hspace{.5in} \begin{split} \xymatrix{\wt{V}_{\io}^{(k)}\ar[rr]^{\si} \ar[d]_{\wt\io_{k;k-1}^{(i)}}&& \wt{V}_{\io}^{(k)}\ar[d]^{\wt\io_{k;k-1}^{(\si(i))}} \\ \wt{V}_{\io}^{(k-1)} \ar[rr]^{\si_i}&& \wt{V}_{\io}^{(k-1)}} \end{split}\EE of solid arrows then commute; the entire first diagram commutes if $i\!>\!k'$.\\ \noindent A smooth map $\io\!:\wt{V}\!\lra\!X$ is an \sf{immersion} if the differential $\nd_x\io$ of~$\io$ at~$x$ is injective for all $x\!\in\!\wt{V}$. This implies~that $$\codim\,\io\equiv\dim X-\dim\wt{V}\ge0.$$ Such a map has a well-defined normal bundle, $$\cN\io\equiv \io^*TX\big/\Im(\nd\io)\lra \wt{V}\,.$$ If $\io$ is a closed immersion, then the subsets $V_{\io}^{(k)}\!\subset\!X$ and $\wt{V}_{\io}^{(k)}\!\subset\!X\!\times\!\wt{V}^k$ are closed.\\ \noindent An immersion $\io\!:\wt{V}\!\lra\!X$ is \sf{transverse} if the homomorphism \BE{TransImmVerHom_e} T_xX\oplus\bigoplus_{i=1}^k T_{\wt{v}_i}\wt{V}\lra \bigoplus_{i=1}^kT_xX, \quad \big(w,(w_i)_{i\in[k]}\big)\lra \big(w\!+\!\nd_{\wt{v}_i}\io(w_i)\big)_{i\in[k]}\,,\EE is surjective for all $(x,\wt{v}_1,\ldots,\wt{v}_k)\!\in\!\wt{V}_{\io}^{(k)}$ and $k\!\in\!\Z^+$. By the Inverse Function Theorem, in such a case \begin{enumerate}[label=$\bullet$,leftmargin=*] \item each $\wt{V}_{\io}^{(k)}$ is a submanifold of $X\!\times\!\wt{V}^k$, \item\label{hatimerssion_it} the maps $\wt\io_{k;k-1}$ in \eref{whiokk_e} and the maps~\eref{iokcjdfn_e} are transverse immersions, \item the homeomorphisms~$\si$ determined by the elements of~$\bS_k$ as in~\eref{SkVk_e} are diffeomorphisms.
\end{enumerate} By the commutativity of the upper and middle triangles in the first diagram in~\eref{Vkdiag_e}, the inclusion of $\Im(\nd\io_k)$ into $\wt\io_k^{(i)*}\Im(\nd\io)$ and the homomorphism~$\nd\io_{k-1}$ induce homomorphisms \BE{iodecompmaps_e} \cN\io_k\lra \wt\io_k^{(i)*}\cN\io, \quad \cN\wt\io_{k;k-1}^{(i)}\lra \cN\io_k \qquad\forall~i\!\in\![k].\EE By the Inverse Function Theorem, the resulting homomorphisms \BE{ImmcNorient_e2} \cN\io_k\lra \bigoplus_{i\in[k]}\!\wt\io_k^{(i)*}\cN\io \qquad\hbox{and}\qquad \cN\wt\io_{k;k-1}^{(i)}\lra \wt\io_k^{(i)*}\cN\io~~~\forall\,i\!\in\![k]\EE are isomorphisms; they correspond to the first two isomorphisms in~\eref{cNorient_e2} if $\wt{V}$ is the disjoint union of submanifolds $V_i\!\subset\!X$. For $\si\!\in\!\bS_k$ and $i\!\in\![k]$, the homomorphisms~$\nd\si$ and~$\nd\si_i$ of the second diagram in~\eref{Vkdiag_e} induce an isomorphism \BE{Djsi_e}D_i\si\!:\cN\wt\io_{k;k-1}^{(i)}\lra \cN\wt\io_{k;k-1}^{(\si(i))}\EE covering~$\si$. \begin{lmm}\label{NCD_lmm} Let $X$ be a manifold. A subset $V\!\subset\!X$ is an NC divisor in the sense of Definition~\ref{NCsubsp_dfn} if and only if $V$ is the image of a closed transverse immersion $\io\!:\wt{V}\!\lra\!X$ of codimension~2. \end{lmm} \begin{proof} (1) Let $V\!\subset\!X$ be an NC divisor. Choose a locally finite open cover $\{U_y\}_{y\in\cA'}$ of~$X$ with associated transverse collections $\{V_{y;i}\}_{i\in S_y}$ as in Definition~\ref{NCsubsp_dfn}. Let $$\wt{V}=\bigg(\bigsqcup_{y\in\cA'}\bigsqcup_{i\in S_y} \{(y,i)\}\!\times\!V_{y;i}\bigg)\Big/\!\!\sim\,,$$ where we identify $(y,i,x)$ with $(y',i',x)$ if there exists a neighborhood~$U$ of~$x$ in $U_y\!\cap\!U_{y'}$ such that $V_{y;i}\!\cap\!U\!=\!V_{y';i'}\!\cap\!U$. The Hausdorffness of~$X$ implies the Hausdorffness of~$\wt{V}$. The latter inherits a smooth structure from the smooth structures of the submanifolds $V_{y;i}\!\subset\!X$ (which necessarily agree on the overlaps).
The smooth~map $$\io\!: \wt{V}\lra X, \qquad \big[y,i,x]\lra x,$$ is then a well-defined closed transverse immersion of codimension~2.\\ \noindent (2) Let $\io\!:\wt{V}\!\lra\!X$ be a closed transverse immersion of codimension~2. Given $x\!\in\!X$, let $$\io^{-1}(x)=\big\{\wt{v}_1,\ldots,\wt{v}_k\big\}.$$ By \cite[Proposition~1.35]{Warner} and the closedness of~$\io$, there exist a neighborhood $U\!\subset\!X$ of~$x$ and neighborhoods $\wt{V}_i\!\subset\!\wt{V}$ of~$\wt{v}_i$ with $i\!\in\![k]$ such~that $$\io^{-1}(U)=\bigsqcup_{i=1}^k\wt{V}_i\subset\wt{V}$$ and $\io|_{\wt{V}_i}$ is an embedding for every $i\!\in\![k]$. Then, $\{\io(\wt{V}_i)\}_{i\in[k]}$ is a finite transverse collection of closed submanifolds of~$U$ of codimension~2 such~that $$V\cap U= \bigcup_{i=1}^k\io(\wt{V}_i)\,.$$ Thus, $\io(\wt{V})$ is an NC divisor in~$X$. \end{proof} \noindent If $V\!\subset\!X$ is the NC divisor associated with a closed transverse immersion of codimension~2 as in Lemma~\ref{NCD_lmm}, then $V_{\prt}\!=\!V_{\io}^{(2)}$.\\ \noindent If $\io\!:\wt{V}\!\lra\!X$ is any immersion between oriented manifolds of even dimensions, the short exact sequence of vector bundles \BE{ImmcNorient_e1} 0\lra T\wt{V}\stackrel{\nd\io}\lra \io^*TX\lra \cN\io\lra 0\EE over $\wt{V}$ induces an orientation on~$\cN\io$. If in addition $\io$ is a transverse immersion, the orientation on~$\cN\io$ induced by the orientations of~$X$ and~$\wt{V}$ induces an orientation on~$\cN\io_k$ via the first isomorphism in~\eref{ImmcNorient_e2}. The orientations of~$X$ and~$\cN\io_k$ then induce an orientation on~$\wt{V}_{\io}^{(k)}$ via the short exact sequence~\eref{ImmcNorient_e1} with $\io\!=\!\io_k$ for all $k\!\in\!\Z^+$, which we call \sf{the intersection orientation of~$\wt{V}_{\io}^{(k)}$}. For $k\!=\!1$, it agrees with the original orientation of~$\wt{V}$ under the canonical identification $\wt{V}_{\io}^{(1)}\!\approx\!\wt{V}$.\\ \noindent Suppose $(X,\om)$ is a symplectic manifold. 
If $\io\!:\wt{V}\!\lra\!X$ is a transverse immersion such that $\io_k^*\om$ is a symplectic form on $\wt{V}_{\io}^{(k)}$ for all $k\!\in\!\Z^+$, then each $\wt{V}_{\io}^{(k)}$ carries an orientation induced by $\io_k^*\om$, which we call the $\om$-orientation. By the previous paragraph, the $\om$-orientations of~$X$ and $\wt{V}$ also induce intersection orientations on all~$\wt{V}_{\io}^{(k)}$. By definition, the intersection and $\om$-orientations of~$\wt{V}_{\io}^{(1)}$ are the same. The next statement follows readily from Definition~\ref{NCD_dfn} and the proof of Lemma~\ref{NCD_lmm}. \begin{prp}\label{NCD_prp} Suppose $(X,\om)$ is a symplectic manifold, $V\!\subset\!X$ is an NC divisor, and \hbox{$\io\!:\wt{V}\!\lra\!X$} is its normalization as in Lemma~\ref{NCD_lmm}. Then $V$ is an NC symplectic divisor in~$(X,\om)$ if and only if $\io_k^*\om$ is a symplectic form on $\wt{V}_{\io}^{(k)}$ for all $k\!\in\!\Z^+$ and the intersection and $\om$-orientations of~$\wt{V}_{\io}^{(k)}$ are the same. \end{prp} \noindent Suppose $\io\!:\wt{V}\!\lra\!X$ is a transverse immersion and $k\!\in\!\Z^{\ge0}$. For $k'\!\in\!\Z^{\ge0}$ with $k'\!\le\!k$, define \BE{kk'bundles} \pi_{k;k'}\!:\cN_{k;k'}\io=\!\bigoplus_{i\in [k]-[k']}\!\!\!\!\!\cN\wt\io_{k;k-1}^{(i)} \lra \wt{V}_{\io}^{(k)} \quad\hbox{and}\quad \pi_{k;k'}^c\!:\cN_{k;k'}^c\io=\!\bigoplus_{i\in [k']}\!\cN\wt\io_{k;k-1}^{(i)} \lra \wt{V}_{\io}^{(k)}\,.\EE By the commutativity of the first diagram in~\eref{Vkdiag_e}, the homomorphisms~$\nd\wt\io_{k-1;k'}$ and~$\nd\io_{k-1}$ induce homomorphisms $$\cN_{k;k'}\io\lra \cN\wt\io_{k;k'} \qquad\hbox{and}\qquad \cN_{k;k'}^c\io\lra\wt\io_{k;k'}^{\,*}\cN\io_{k'}.$$ By the Inverse Function Theorem, the last two homomorphisms are isomorphisms; they correspond to the last isomorphism in~\eref{cNorient_e2} and the first identification in~\eref{cNtot_e} if $\wt{V}$ is the disjoint union of submanifolds $V_i\!\subset\!X$.
For each $\si\!\in\!\bS_k$, the isomorphisms~\eref{Djsi_e} induce an isomorphism \BE{cNioksplit_e} D\si=(D_i\si)_{i\in [k]}\!: \cN\io_k\approx\cN_{k;0}\io\equiv\bigoplus_{i\in[k]}\!\cN\wt\io_{k;k-1}^{(i)} \lra \bigoplus_{i\in[k]}\!\cN\wt\io_{k;k-1}^{(\si(i))}\equiv \cN_{k;0}\io\approx \cN\io_k\EE lifting the action of $\si$ on~$\wt{V}_{\io}^{(k)}$; the last isomorphism permutes the components of the direct sum. In particular, the subbundles $\cN_{k;k'}\io$ and $\cN_{k;k'}^c\io$ of $\cN_{k;0}\io$ are invariant under the action of the subgroup \hbox{$\bS_{k'}\!\times\!\bS_{[k]-[k']}$} of~$\bS_k$, but not under the action of the full group~$\bS_k$. \begin{dfn}\label{NCsmreg_dfn} A \sf{regularization} for an immersion $\io\!:\wt{V}\!\lra\!X$ is a smooth map \hbox{$\Psi\!:\cN'\!\lra\!X$} from a neighborhood of~$\wt{V}$ in~$\cN\io$ such that for every $\wt{v}\!\in\!\wt{V}$, there exists a neighborhood $U_{\wt{v}}$ of $\wt{v}$ in~$\wt{V}$ so that the restriction of $\Psi$ to $\cN'|_{U_{\wt{v}}}$ is a diffeomorphism onto its~image, $\Psi(\wt{v})\!=\!\io(\wt{v})$, and the homomorphism $$ \cN\io|_{\wt{v}}=T_{\wt{v}}^{\ver}\cN\io \lhra T_{\wt{v}}\cN\io \stackrel{\nd_{\wt{v}}\Psi}{\lra} T_{\io(\wt{v})}X\lra \frac{T_{\io(\wt{v})}X}{\Im(\nd_{\wt{v}}\io)}\equiv\cN\io|_{\wt{v}}$$ is the identity. \end{dfn} \begin{dfn}\label{NCTransCollReg_dfn} A \sf{system of regularizations for} a transverse immersion $\io\!:\wt{V}\!\lra\!X$ is a tuple $(\Psi_k)_{k\in\Z^{\ge0}}$, where each $\Psi_k$ is a regularization for the immersion~$\io_k$, such~that \begin{alignat}{2} \label{NCPsikk_e}\Psi_k\big(\cN_{k;k'}\io\!\cap\!\Dom(\Psi_k)\big) &=V_{\io}^{(k')}\!\cap\!\Im(\Psi_k) &\quad&\forall~k\!\in\!\Z^{\ge0},~k'\!\in\![k],\\ \label{NCPsikk_e2} \Psi_k&=\Psi_k\!\circ\!D\si\big|_{\Dom(\Psi_k)} &\quad&\forall~k\!\in\!\Z^{\ge0},\,\si\!\in\!\bS_k.
\end{alignat} \end{dfn} \vspace{.1in} \noindent The stratification condition~\eref{NCPsikk_e} replaces~\eref{Psikk_e} and implies~that there exists a smooth~map \BE{Psikkprdfn_e}\begin{split} &\Psi_{k;k'}\!: \cN_{k;k'}'\io\!\equiv\!\cN_{k;k'}\io\!\cap\!\Dom(\Psi_k)\lra\wt{V}_{\io}^{(k')} \qquad\hbox{s.t.}\\ &\quad\Psi_{k;k'}\big|_{\wt{V}_{\io}^{(k)}}=\wt\io_{k;k'}\,,\quad \Psi_k\big|_{\cN_{k;k'}'\io}=\io_{k'}\!\circ\!\Psi_{k;k'}\,; \end{split}\EE see Proposition~1.35 and Theorem~1.32 in~\cite{Warner}. Similarly to~\eref{wtPsiIIdfn_e}, $\Psi_{k;k'}$ lifts to a (fiberwise) vector bundle isomorphism \BE{fDPsikk_e}\fD\Psi_{k;k'}\!: \pi_{k;k'}^*\cN_{k;k'}^c\io|_{\cN_{k;k'}'\io} \lra\cN\io_{k'}\big|_{\Im(\Psi_{k;k'})}.\EE This bundle isomorphism preserves the second splittings in~\eref{kk'bundles} and is $\bS_{k'}$-equivariant and $\bS_{[k]-[k']}$-invariant. The condition~\eref{NCoverlap_e} below replaces~\eref{overlap_e} in the present setting. \begin{dfn}\label{NCTransCollregul_dfn} A \sf{refined regularization} for a transverse immersion $\io\!:\wt{V}\!\lra\!X$ is a system $(\Psi_k)_{k\in\Z^{\ge0}}$ of regularizations for~$\io$ such~that \BE{NCoverlap_e}\fD\Psi_{k;k'}\big(\Dom(\Psi_k)\big) =\Dom(\Psi_{k'})\big|_{\Im(\Psi_{k;k'})}, \quad \Psi_k=\Psi_{k'}\circ\fD\Psi_{k;k'}|_{\Dom(\Psi_k)}\EE whenever $0\!\le\!k'\!\le\!k$ and $k\!\in\!\Z^{\ge0}$. \end{dfn} \noindent Suppose $(X,\om)$ is a symplectic manifold and $\io\!:\wt{V}\!\lra\!X$ is an immersion so that $\io^*\om$ is a symplectic form on~$\wt{V}$. The normal bundle $$\cN\io\equiv \frac{\io^*TX}{\Im(\nd\io)}\approx \big(\Im(\nd\io)\big)^{\om} \equiv \big\{w\!\in\!T_{\io(\wt{v})}X\!:\,\wt{v}\!\in\!\wt{V},\, \om\big(w,\nd_{\wt{v}}\io(w')\big)\!=\!0~ \forall\,w'\!\in\!T_{\wt{v}}\wt{V}\big\}$$ of~$\io$ then inherits a fiberwise symplectic form~$\om|_{\cN\io}$ from~$\om$. We denote the restriction of~$\om|_{\cN\io}$ to a subbundle $L\!\subset\!\cN\io$ by~$\om|_L$.
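
\noindent For instance (a flat model included only as an illustration), if $X\!=\!\R^4$ with $\om\!=\!\nd x_1\!\wedge\!\nd x_2\!+\!\nd x_3\!\wedge\!\nd x_4$ and $\io$ is the inclusion of $\wt{V}\!=\!\{x_3\!=\!x_4\!=\!0\}$, then $$\big(\Im(\nd\io)\big)^{\om}\big|_{\wt{v}}=\big\{(0,0)\big\}\!\times\!\R^2 \subset T_{\io(\wt{v})}\R^4 \qquad\forall~\wt{v}\!\in\!\wt{V},$$ and $\om|_{\cN\io}$ corresponds to the fiberwise symplectic form $\nd x_3\!\wedge\!\nd x_4$ under the resulting identification $\cN\io\!\approx\!\wt{V}\!\times\!\R^2$.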
\begin{dfn}\label{NCsympreg1_dfn} Suppose $(X,\om)$ is a symplectic manifold, $\io\!:\wt{V}\!\lra\!X$ is an immersion so that $\io^*\om$ is a symplectic form on~$\wt{V}$, and $$\cN\io=\bigoplus_{i\in I}L_i$$ is a fixed splitting into oriented rank~2 subbundles. If $\om|_{L_i}$ is nondegenerate for every $i\!\in\!I$, then an \sf{$\om$-regularization for~$\io$} is a tuple $((\rho_i,\na^{(i)})_{i\in I},\Psi)$, where $(\rho_i,\na^{(i)})$ is an $\om|_{L_i}$-compatible Hermitian structure on~$L_i$ for each $i\!\in\!I$ and $\Psi$ is a regularization for~$\io$, such that $$\Psi^*\om=\big(\io^*\om\big)_{(\rho_i,\na^{(i)})_{i\in I}}\big|_{\Dom(\Psi)}.$$ \end{dfn} \begin{dfn}\label{NCSCDregul_dfn} Suppose $(X,\om)$ is a symplectic manifold and $\io\!:\wt{V}\!\lra\!X$ is a transverse immersion of codimension~2 so that $\io_k^*\om$ is a symplectic form on $\wt{V}_{\io}^{(k)}$ for each $k\!\in\!\Z^+$. A \sf{refined $\om$-regularization for~$\io$} is a tuple \BE{NCSCDregul_e} \fR\equiv\big(\cR_k\big)_{k\in\Z^{\ge0}}\equiv \big((\rho_{k;i},\na^{(k;i)})_{i\in[k]},\Psi_k\big)_{k\in\Z^{\ge0}}\EE such that $(\Psi_k)_{k\in\Z^{\ge0}}$ is a refined regularization for~$\io$, $\cR_k$ is an $\om$-regularization for~$\io_k$ with respect to the splitting~\eref{cNioksplit_e} for every $k\!\in\!\Z^{\ge0}$, \BE{NCSCDregul_e2}\big(\rho_{k;i},\na^{(k;i)}\big) = \big\{D_i{\si}\big\}^{\!*}\big(\rho_{k;\si(i)},\na^{(k;\si(i))}\big) \quad\forall\,k\!\in\!\Z^{\ge0},\,\si\!\in\!\bS_k,\,i\!\in\![k],\EE and the induced vector bundle isomorphisms~\eref{fDPsikk_e} are product Hermitian isomorphisms for all $k'\!\le\!k$. \end{dfn} \noindent For a smooth family $(\om_t)_{t\in B}$ of symplectic forms on~$X$ satisfying the condition of Definition~\ref{NCSCDregul_dfn}, the notion of refined $\om$-regularization naturally extends to a notion of \sf{$(\om_t)_{t\in B}$-family of refined regularizations for~$\io$}.
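
\noindent For instance, if $\io$ is the disjoint-union inclusion of a transverse pair $V_1,V_2\!\subset\!X$ of closed symplectic submanifolds of codimension~2 (a special case included only as an illustration), then $\wt{V}_{\io}^{(2)}$ consists of two copies of $V_{12}\!\equiv\!V_1\!\cap\!V_2$ interchanged by the $\bS_2$-action, and the symmetry condition~\eref{NCSCDregul_e2} identifies the Hermitian structures carried by the two copies via~\eref{Djsi_e}. A refined $\om$-regularization for~$\io$ thus encodes essentially the same data as an $\om$-regularization for $\{V_1,V_2\}$ in~$X$ in the sense of Definition~\ref{TransCollregul_dfn}\ref{sympregul_it}.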
If $\io\!:\wt{V}\!\lra\!X$ corresponds to an NC divisor~$V$ in~$X$ as in Lemma~\ref{NCD_lmm}, then a refined $\om$-regularization for~$\io$ in the sense of Definition~\ref{NCSCDregul_dfn} determines an $\om$-regularization for~$V$ in~$X$ in the sense of Definition~\ref{NCDregul_dfn}\ref{NCDregul_it1}. Conversely, the local regularizations constituting an $\om$-regularization for~$V$ in~$X$ can be patched together to form a refined $\om$-regularization for~$\io$. These relations also apply to families of regularizations. \subsection{Resolutions of NC divisors} \label{EquivalentDfn_subs} \noindent In this section, we show that the normalization $\io\!:\!\wt{V}\!\lra\!X$ of an NC divisor~$V$ in~$X$ provided by Lemma~\ref{NCD_lmm} is unique and has a unique extension to a sequence of closed immersions compatible with group actions in the sense of Definition~\ref{NCRes_dfn} below. This structure can be used to simplify the notation in some applications; see Example~\ref{Succ_eg}. \begin{dfn}\label{NCRes_dfn} Let $X$ be a smooth manifold and $V\!\subset\!X$. A~\sf{resolution of~$V$} is a sequence of closed immersions \BE{absNC_e} \ldots \lra X^{(k)}\xlra{f_{k;k-1}}X^{(k-1)} \xlra{f_{k-1;k-2}}\cdots \xlra{f_{2;1}} X^{(1)} \xlra{f_{1;0}} X^{(0)}\!\equiv\!X\EE of a fixed codimension $\fc\!\in\!\Z^+$ such~that $f_{1;0}(X^{(1)})\!=\!V$ and \begin{enumerate}[label=(R\arabic*),leftmargin=*] \item for every $k\!\in\!\Z^{\ge0}$, $X^{(k)}$ is a manifold with a free $\bS_k$-action, \item\label{Requiv_it} for all $0\!\le\!k'\!\le\!k$, the map \BE{absNC_e2} f_{k;k'}\!\equiv\!f_{k'+1;k'}\!\circ\!\ldots\!\circ\!f_{k;k-1}\!: X^{(k)}\!\lra\!X^{(k')}\EE is $\bS_{k'}$-equivariant, and \item\label{inverse_it} for all $0\!\le\!k'\!\le\!k$ and $x\!\in\!X^{(k)}\!-\!f_{k+1;k}(X^{(k+1)})$, $f_{k;k'}^{-1}(f_{k;k'}(x))=\bS_{[k]-[k']}\!\cdot\!x$. \end{enumerate} \end{dfn} \vspace{.15in} \noindent Let $k\!\in\!\Z^{\ge0}$ and $x\!\in\!X^{(k)}$. 
Since any resolution~\eref{absNC_e} terminates, the number $$k(x)\equiv\max\big\{k'\!\in\!\Z^{\ge0}\!:k'\!\ge\!k,\,f_{k';k}^{-1}(x)\!\neq\!\eset\big\} \in\Z^{\ge0}$$ is well-defined. By \ref{Requiv_it} and~\ref{inverse_it} in Definition~\ref{NCRes_dfn} with $k$ replaced by~$k(x)$, $$f_{k;k'}(g\!\cdot\!x)=f_{k;k'}\big(f_{k(x);k}(g\!\cdot\!\wt{x})\big) \equiv f_{k(x);k'}(g\!\cdot\!\wt{x})=f_{k(x);k'}(\wt{x})=f_{k;k'}(x)$$ for all $g\!\in\!\bS_{[k]-[k']}$ and $\wt{x}\!\in\!f_{k(x);k}^{-1}(x)$. Thus, the immersion~\eref{absNC_e2} is $\bS_{[k]-[k']}$-invariant. Since the complement of $f_{k+1;k}(X^{(k+1)})$ in~$X^{(k)}$ is dense, this conclusion also follows directly from~\ref{inverse_it} in Definition~\ref{NCRes_dfn} and the continuity of $f_{k;k'}$ and of the $\bS_k$-action on~$X^{(k)}$.\\ \noindent Suppose $0\!\le\!k'\!\le\!k$ and $x\!\in\!X^{(k)}$. If $g,g'\!\in\!\bS_k$, then \BE{NCRes_e5} f_{k;k'}(g\!\cdot\!x)=f_{k;k'}(g'\!\cdot\!x) \quad\Llra\quad g'g^{-1}\in\bS_{[k]-[k']}\,.\EE The {\it if} implication is immediate from the conclusion of the previous paragraph. The {\it only if} implication with $(k,x)$ replaced by $(k(x),\wt{x})$ as in the previous paragraph follows from~\ref{inverse_it} in Definition~\ref{NCRes_dfn} with~$k$ replaced by~$k(x)$. Along with~\ref{Requiv_it}, this establishes the {\it only if} in~\eref{NCRes_e5}.\\ \noindent If in addition $k'\!\in\!\Z^+$ and $y\!\in\!X^{(k)}$, then \BE{NCRes_e5b}f_{k;k'}(g\!\cdot\!x)=f_{k;k'}(g\!\cdot\!y) ~~\forall\,g\!\in\!\bS_k \quad\Lra\quad x=y.\EE This is immediate if $k'\!=\!k$ or if $x$ or $y$ is not in the image of~$f_{k+1;k}$. Suppose $$k'<k, \quad k(x)\le k(y), \quad \wt{x}\in f_{k(x);k}^{-1}(x), \quad\hbox{and}\quad \wt{y}\in f_{k(x);k}^{-1}(y).$$ The $\bS_k$-equivariance of~$f_{k(x);k}$ and the assumption in~\eref{NCRes_e5b} then imply~that \BE{NCRes_e5c}f_{k(x);k'}(g\!\cdot\!\wt{x})=f_{k(x);k'}(g\!\cdot\!\wt{y})\EE for all $g\!\in\!\bS_k\!\subset\!\bS_{k(x)}$.
By the $\bS_{[k(x)]-[k']}$-invariance of~$f_{k(x);k'}$, \eref{NCRes_e5c} also holds for all $g\!\in\!\bS_{[k(x)]-[k']}$. Since $\bS_k$ and $\bS_{[k(x)]-[k']}$ generate~$\bS_{k(x)}$, it follows that~\eref{NCRes_e5c} holds for all $g\!\in\!\bS_{k(x)}$. Thus, $\wt{x}\!=\!\wt{y}$ and the conclusion in~\eref{NCRes_e5b} still holds. \begin{eg}\label{NCRes_eg} Let $\io\!:\wt{V}\!\lra\!X$ be a closed transverse immersion of codimension $\fc\!\in\!\Z^+$. With the notation as in~\eref{whiokk_e}, the sequence \BE{absNCCan_e} \cdots\lra \wt{V}_{\io}^{(k)}\xlra{\wt\io_{k;k-1}} \wt{V}^{(k-1)}_{\io}\xlra{\wt\io_{k-1;k-2}} \ldots \xlra{\wt\io_{2;1}} \wt{V}^{(1)}_{\io} \xlra{\wt\io_{1;0}} \wt{V}^{(0)}_{\io}\!=\!X \EE is a resolution of $V\!=\!\io(\wt{V})$. By Proposition~\ref{2TermToSeq_prp} below, every resolution of~$V$ is canonically isomorphic to~\eref{absNCCan_e}. \end{eg} \begin{prp}\label{2TermToSeq_prp} Suppose $X$ is a smooth manifold, $\io\!:\wt{V}\!\lra\!X$ is a closed transverse immersion of codimension $\fc\!\in\!\Z^+$, \eref{absNCCan_e} is the associated canonical resolution of $V\!=\!\io(\wt{V})$, and \eref{absNC_e} is any resolution of~$V$. Then there exist unique smooth maps~$h_k$ so that the diagram \BE{absNC_e10}\begin{split} \xymatrix{\ldots\ar[r]& X^{(k)}\ar[d]^{h_k}\ar[rr]^{f_{k;k-1}}&& X^{(k-1)}\ar[d]^{h_{k-1}}\ar[rr]^{f_{k-1;k-2}}&& \cdots \ar[r]^{f_{2;1}}& X^{(1)} \ar[d]^{h_1}\ar[r]^>>>>>{f_{1;0}}& X^{(0)}\!=\!X\ar[d]^{\id}\\ \ldots\ar[r] &\wt{V}_{\io}^{(k)}\ar[rr]^{\wt\io_{k;k-1}}&& \wt{V}_{\io}^{(k-1)} \ar[rr]^{\wt\io_{k-1;k-2}}&& \cdots\ar[r]^{\wt\io_{2;1}} & \wt{V}_{\io}^{(1)} \ar[r]^>>>>>{\wt\io_{1;0}}& \wt{V}_{\io}^{(0)}\!=\!X} \end{split}\EE commutes. These maps are $\bS_k$-equivariant diffeomorphisms. \end{prp} \begin{proof} The immersions~$f_{1;0}$ and~$\wt\io_{1;0}$ are of the same codimension~$\fc$ by \cite[Exercise~6]{Warner}. Thus, the dimensions of the manifolds~$X^{(k)}$ and~$\wt{V}_{\io}^{(k)}$ are the same for each $k\!\in\!\Z^+$. 
Since the restriction \BE{xx1xll_e2a}\wt\io_{k;k-1}\!: \wt{V}_{\io}^{(k)}\!-\!\wt\io_{k+1;k}(\wt{V}_{\io}^{(k+1)})\lra \wt\io_{k;k-1}\big(\wt{V}_{\io}^{(k)}\!-\!\wt\io_{k+1;k}(\wt{V}_{\io}^{(k+1)})\big) \subset\wt{V}_{\io}^{(k-1)}\EE is injective and the dimension of~$\wt{V}_{\io}^{(k+1)}$ is strictly less than the dimension of~$X^{(k)}$, there can be at most one map~$h_k$ so that the left-most square in~\eref{absNC_e10} commutes with a specific choice of~$h_{k-1}$. Below we construct~$h_1$ based on general geometric considerations and then define each $h_k$ with $k\!\ge\!2$ by an explicit formula.\\ \noindent Since the restriction~\eref{xx1xll_e2a} is injective for $k\!=\!1$, there is a unique~map \BE{xx1xll_e2b}h_1\!: X^{(1)}\!-\!f_{1;0}^{-1}\big(\wt\io_{2;0}(\wt{V}_{\io}^{(2)})\big) \lra \wt{V}_{\io}^{(1)}\!-\!\wt\io_{2;1}(\wt{V}_{\io}^{(2)}) \subset\wt{V}_{\io}^{(1)}\EE so that $f_{1;0}\!=\!\wt\io_{1;0}\!\circ\!h_1$ on the domain of~$h_1$. Since the restriction~\eref{xx1xll_e2a} is an embedding, the map~\eref{xx1xll_e2b} is smooth by \cite[Theorem~1.38]{Warner}. Since $f_{1;0}$ is an immersion, $$f_{1;0}^{-1}\big(\wt\io_{2;0}(\wt{V}_{\io}^{(2)})\big)\subset X^{(1)}$$ is the image of a smooth map from a manifold of a smaller dimension than~$X^{(1)}$. Suppose \hbox{$x\!\in\!f_{1;0}^{-1}(\wt\io_{2;0}(\wt{V}_{\io}^{(2)}))$}. The restriction of~$f_{1;0}$ to a sufficiently small neighborhood~$U_x$ of~$x$ is an embedding and its image agrees with the image of an embedding of a neighborhood of an element of $\wt\io_{1;0}^{-1}(f_{1;0}(x))$ in~$\wt{V}_{\io}^{(1)}$. By \cite[Theorem~1.38]{Warner}, $h_1$ extends smoothly over~$U_x$ as a map to $\wt{V}_{\io}^{(1)}$. We thus obtain a smooth map~$h_1$ so that the right-most square in~\eref{absNC_e10} commutes. Since~$f_{1;0}$ is a closed immersion which is injective on $X^{(1)}\!-\!f_{2;1}(X^{(2)})$, so is~$h_1$.
Since the image of~$f_{1;0}$ contains~$V$, the image of~$h_1$ contains $\wt{V}_{\io}^{(1)}\!-\!\wt\io_{2;1}(\wt{V}_{\io}^{(2)})$; thus, $h_1$ is surjective. We conclude that~$h_1$ is a diffeomorphism.\\ \noindent From now on, we identify $(X^{(1)},f_{1;0})$ with $(\wt{V}_{\io}^{(1)},\wt\io_{1;0})\!\approx\!(\wt{V},\io)$ via~$h_1$. For each $k\!\in\!\Z^+$, we then~have $$f_{k;1}\!:X^{(k)}\lra X^{(1)}\!=\!\wt{V}\,.$$ We denote~by $$\pi_{k;k-1}\!:X\!\times\!\wt{V}^k\lra X\!\times\!\wt{V}^{k-1}$$ the projection to the first $k$ components; it restricts to the map~\eref{whiokk_e}. For $i,j\!\in\![k]$, let $\tau_{i,j}\!\in\!\bS_k$ be the transposition interchanging~$i$ and~$j$.\\ \noindent For $k\!\in\!\Z^{\ge0}$, define \BE{xx1xll_e2} h_k\!: X^{(k)}\lra X\!\times\!\wt{V}^k, \quad h_k(x)=\big(f_{k;0}(x),f_{k;1}(\tau_{1,1}(x)),\ldots, f_{k;1}(\tau_{1,k}(x))\big).\EE In particular, $h_k$ is an immersion, $h_0\!=\!\id_X$, $h_1\!=\!\id_{\wt{V}}$, and the diagram \BE{absNC_e10a}\begin{split} \xymatrix{\ldots\ar[r]& X^{(k)}\ar[d]^{h_k}\ar[rr]^{f_{k;k-1}}&& X^{(k-1)}\ar[d]^{h_{k-1}}\ar[rr]^{f_{k-1;k-2}}&& \cdots \ar[r]^{f_{2;1}}& X^{(1)} \ar[d]^{h_1}\ar[r]^{f_{1;0}}& X^{(0)}\ar[d]^{\id}\\ \ldots\ar[r] &X\!\times\!\wt{V}^k\ar[rr]^{\pi_{k;k-1}}&& X\!\times\!\wt{V}^{k-1} \ar[rr]^{\pi_{k-1;k-2}}&& \cdots\ar[r]^{\pi_{2;1}} & X\!\times\!\wt{V} \ar[r]^{\pi_{1;0}}& X^{(0)}} \end{split}\EE commutes by the $\bS_{k-1}$-equivariance of $f_{k;k-1}$. Since \BE{absNC_e10c} \tau_{1,j}\tau_{1,i}=\tau_{i,j}\tau_{1,j} \qquad \forall~i,j\!\in\![k]\!-\![1],\,i\!\neq\!j,\EE the $\bS_{[k]-[1]}$-invariance of~$f_{k;1}$ implies~that $$h_k\big(\tau_{1,i}(x)\big)= \tau_{1,i}\!\cdot\!h_k(x) \qquad\forall\,x\!\in\!X^{(k)},\,i\!\in\![k],\,k\!\in\!\Z^+\,.$$ Thus, $h_k$ is $\bS_k$-equivariant. By~\eref{NCRes_e5b} with $k'\!=\!1$, $h_k$ is injective for all $k\!\ge\!1$.
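As an optional sanity check on the transposition identity~\eref{absNC_e10c}, the following Python sketch verifies it by direct composition for all admissible index pairs in a small symmetric group. The right-to-left composition convention used here is an assumption of the sketch, not of the text; the identity holds for either convention upon relabeling.

```python
def tau(n, a, b):
    """The transposition of {1,...,n} interchanging a and b, as a tuple of values."""
    p = list(range(1, n + 1))
    p[a - 1], p[b - 1] = b, a
    return tuple(p)

def compose(p, q):
    """(p o q)(x) = p(q(x)): q acts first (right-to-left convention)."""
    return tuple(p[q[x] - 1] for x in range(len(q)))

n = 7
for i in range(2, n + 1):          # i, j in [n] - [1], i != j
    for j in range(2, n + 1):
        if i != j:
            lhs = compose(tau(n, 1, j), tau(n, 1, i))
            rhs = compose(tau(n, i, j), tau(n, 1, j))
            assert lhs == rhs
print("tau_{1,j} tau_{1,i} = tau_{i,j} tau_{1,j} verified for n =", n)
```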
Since $f_{k;0}\!\equiv\!\io\!\circ\!f_{k;1}$ is $\bS_k$-invariant, $$\io\big(f_{k;1}(\tau_{1,i}(x))\big)=f_{k;0}(x) \qquad\forall\,x\!\in\!X^{(k)},\,i\!\in\![k],\,k\!\in\!\Z^+\,.$$ By~\eref{NCRes_e5}, $f_{k;1}(\tau_{1,i}(x))\!\neq\!f_{k;1}(\tau_{1,j}(x))$ for all $i\!\neq\!j$. Thus, $h_k(x)\!\in\!\wt{V}^{(k)}_{\io}$.\\ \noindent It remains to show that $\Im\,h_k\!=\!\wt{V}_{\io}^{(k)}$ for every $k\!\in\!\Z^{\ge0}$. Since each $h_k$ is an injective immersion, this would imply that each $h_k$ is a diffeomorphism. This is the case for $k\!=\!0,1$. Suppose $k\!\ge\!2$ and $h_{k'}$ is a diffeomorphism for all $k'\!<\!k$. Since~$f_{k;k-1}$ is a closed immersion, so is~$h_k$. By the $\bS_2$-invariance of~$f_{2;0}$ and $\bS_{[k]-[k-2]}$-invariance of~$f_{k;1}$ for $k\!\ge\!3$, $$h_{k-1}\big(f_{k;k-1}(X^{(k)})\big)\subset \big\{\wt{v}\!\in\!\wt{V}_{\io}^{(k-1)}\!: \big|\wt\io_{k-1;k-2}^{-1}(\wt\io_{k-1;k-2}(\wt{v}))\big|\!>\!1\big\} =\wt\io_{k;k-1}\big(\wt{V}_{\io}^{(k)}\big).$$ The opposite inclusion holds by~\ref{inverse_it} in Definition~\ref{NCRes_dfn}. Since $\wt\io_{k;k-1}\!\circ\!h_k\!=\!h_{k-1}\!\circ\!f_{k;k-1}$ and~\eref{xx1xll_e2a} is injective, it follows that the image of~$h_k$ contains $\wt{V}_{\io}^{(k)}\!-\!\wt\io_{k+1;k}(\wt{V}_{\io}^{(k+1)})$. Thus, $\Im\,h_k\!=\!\wt{V}_{\io}^{(k)}$. We conclude that~$h_k$ is a diffeomorphism from~$X^{(k)}$ to~$\wt{V}_{\io}^{(k)}$. \end{proof} \begin{eg}\label{Succ_eg} Let $\io\!:\wt{V}\!\lra\!X$ be as in Example~\ref{NCRes_eg} with $\fc\!=\!2$. For every $m\!\in\!\Z^{\ge0}$, the $\bS_m$-equivariant immersion \BE{ktok-1_e} \jo\!\equiv\!\wt\io_{m+1;m}\!: \wt{W}\!\equiv\!\wt{V}^{(m+1)}_\io \lra Y\!\equiv\!\wt{V}^{(m)}_\io\EE is the normalization of an NC divisor $W\!\equiv\!\jo(\wt{W})$ in~$Y$. This divisor is preserved by the $\bS_m$-action on~$\wt{V}^{(m)}_{\io}$.
By restricting the $\bS_{m+k}$-actions on $\wt{V}^{(m+k)}_{\io}$ to actions of the subgroups $\bS_{[m+k]-[m]}\!\approx\!\bS_k$, we obtain a resolution \BE{absNCCanShort_e} \ldots \xlra{\wt\io_{m+k+1;m+k}} \wt{V}^{(m+k)}_{\io} \xlra{\wt\io_{m+k;m+k-1}}\ldots \lra \wt{V}^{(m+1)}_{\io}\xlra{\wt\io_{m+1;m}}\wt{V}^{(m)}_{\io}\!=\!Y\EE of $W$ as in Definition~\ref{NCRes_dfn}. By Proposition~\ref{2TermToSeq_prp}, \eref{absNCCanShort_e} is canonically isomorphic to the resolution $$\ldots\xlra{\wt\jo_{k+1;k}} \wt{W}^{(k)}_{\jo}\xlra{\wt\jo_{k;k-1}}\ldots \lra \wt{W}^{(1)}_{\jo}\!\approx\!\wt{W} \xlra{\jo}Y,$$ with $\wt{W}^{(k)}_{\jo}$ and $\wt\jo_{k;k-1}$ defined as in~\eref{tViotak_e} and~\eref{whiokk_e}, respectively. An explicit identification is provided by the diffeomorphisms $$\phi_k\!:\wt{W}^{(k)}_{\jo}\lra \wt{V}^{(m+k)}_{\io}, \quad \phi_k\big(y,\wt{w}_1,\ldots,\wt{w}_k\big)= \big(y,\wt\io_{m+1}^{(m+1)}(\wt{w}_1),\ldots,\wt\io_{m+1}^{(m+1)}(\wt{w}_k)\!\big) \in \big(X\!\times\!\wt{V}^m\big)\!\times\!\wt{V}^k.$$ From such an identification, we obtain identifications \BE{cNapprox_e} \cN\wt\jo_{k;k-1}^{(i)} \approx \cN\wt\io_{m+k;m+k-1}^{(m+i)}\qquad \forall~k\!\in\!\Z^+,~i\!\in\![k],\EE of the normal bundles to the immersions defined by~\eref{iokcjdfn_e}. Therefore, a refined $\om$-regularization~$\fR$ for the immersion~$\io$ as in Definition~\ref{NCSCDregul_dfn} induces an $\io_m^*\om$-regularization $$\fR^{(m)}\approx\big((\rho_{m+k;m+i},\na^{(m+k;m+i)})_{i\in[k]},\Psi_{m+k;m}\big)_{k\in\Z^{\ge0}}$$ for the immersion~$\jo$, with $\Psi_{m+k;m}$ as in~\eref{Psikkprdfn_e}. We use the above identifications in~\cite{SympNCStructures} to construct canonical isomorphisms $$\io_m^*\cO_{X}(V) \approx\cO_{Y}(W)\otimes\!\bigotimes_{i\in[m]}\!\!\cN\wt\io_{m;m-1}^{(i)}, \quad \io_m^*TX(-\log V) \approx TY(-\log W)\!\oplus\! \big(Y\!\times\!\C^m\!\big)$$ of vector bundles over~$Y$. 
\end{eg} \section{Normal crossings symplectic varieties} \label{NCC_sec} \noindent NC~varieties are spaces that are locally varieties associated to SC~configurations. They can also be described as global quotients along images of immersions with compatible involutions on their domains. These two perspectives are presented in Sections~\ref{NCCloc_subs} and~\ref{NCCgl_subs}, respectively, and are shown to be equivalent in Section~\ref{NCCcomp_subs}. An alternative global characterization of regularizations for NC varieties is presented in Section~\ref{NCgl2_subs}. The (virtual) existence theorem for regularizations for NC symplectic varieties, Theorem~\ref{NCC_thm}, is justified in Section~\ref{NCCpf_sec}. Section~\ref{eg_subs} presents some examples of NC divisors and varieties. \subsection{Local perspective} \label{NCCloc_subs} \noindent Suppose \BE{XXprdfn_e}\X\equiv\big\{X_I\big\}_{I\in\cP^*(N)} \qquad\hbox{and}\qquad \X'\equiv\big\{X_I'\big\}_{I\in\cP^*(N')}\EE are transverse configurations with associated spaces~$X_{\eset}$ and~$X_{\eset}'$ as in~\eref{Xesetdfn_e}. A homeomorphism \BE{esetvphdfn_e}\vph\!:X_{\eset}\lra X_{\eset}'\EE is a \sf{diffeomorphism} if there exists a~map \hbox{$h\!:[N]\!\lra\![N']$} such that the~restriction \BE{vphhdfn_e}\vph\!:X_i\lra X_{h(i)}'\EE is a diffeomorphism between manifolds for every $i\!\in\![N]$. This implies that $X_j'\!=\!\eset$ unless $j\!=\!h(i)$ for some $i\!\in\![N]$. If $\vph$ is a diffeomorphism and $(\om_j')_{j\in[N']}$ is a symplectic structure on~$\X'$, then $$\big(\om_i\big)_{i\in[N]}\equiv\vph^*\big((\om_j')_{j\in[N']}\big) \equiv \big(\vph|_{X_i}^*\om_{h(i)}'\big)_{i\in[N]}$$ is a symplectic structure on~$\X$. \begin{dfn}\label{ImmTransConf_dfn0} Let $X$ be a topological space. 
\begin{enumerate}[label=(\arabic*),leftmargin=*] \item\label{NCchart_it} An \sf{NC chart for~$X$ around} a point~$x\!\in\!X$ is a tuple $(U,\X,\vph\!:U\!\lra\!X_{\eset})$, where $U$ is an open neighborhood of $x$ in~$X$, $\X$ is a finite transverse configuration with associated quotient space~$X_{\eset}$, and $\vph$ is a homeomorphism. \item\label{NCatlas_it} An \sf{NC atlas} for $X$ is a maximal collection $(U_y,\X_y,\vph_y)_{y\in\cA}$ of charts for~$X$ such that for all $y,y'\!\in\!\cA$ and $x\!\in\!U_y\!\cap\!U_{y'}$ there exists a neighborhood~$U_{yy';x}$ of~$x$ in~$U_y\!\cap\!U_{y'}$ so that the overlap~map \BE{ImmTransConf_e2} \vph_{yy';x}\!\equiv\!\vph_y\!\circ\!\vph_{y'}^{-1}\!: \vph_{y'}\big(U_{yy';x}\big)\lra \vph_y\big(U_{yy';x}\big)\EE is a diffeomorphism. \end{enumerate} \end{dfn} \begin{dfn}\label{ImmTransConf_dfn} An~\sf{NC variety} is a (Hausdorff) paracompact second-countable topological space~$X$ with an NC atlas $(U_y,\X_y,\vph_y)_{y\in\cA}$ with $\X_y\!=\!(X_{y;I})_{I\in\cP^*(N_y)}$ such that $X_{y;ij}$ is a closed submanifold of~$X_{y;i}$ of codimension~2 for all $i,j\!\in\![N_y]$ distinct. \begin{enumerate}[label=(\arabic*),leftmargin=*] \item\label{NCsympstr_it} A \sf{symplectic structure} on such an~NC variety is a tuple $(\om_{y;i})_{y\in\cA,i\in[N_y]}$, where each $(\om_{y;i})_{i\in[N_y]}$ is a symplectic structure on~$\X_y$, such~that $$\big(\om_{y';i}\big)_{i\in[N_{y'}]}\big|_{\vph_{y'}(U_{yy';x})} = \vph_{yy';x}^{~*}\big((\om_{y;i})_{i\in[N_y]}\big)$$ for all $y,y'\!\in\!\cA$ and $x\!\in\!U_{yy';x}\!\subset\!U_y\!\cap\!U_{y'}$ as in Definition~\ref{ImmTransConf_dfn0}\ref{NCatlas_it}. \item If $B$ is a manifold, possibly with boundary, a tuple $(\om_{t;y;i})_{t\in B,y\in\cA,i\in[N_y]}$ is a \sf{family of symplectic structures} on~$X$ if $(\om_{t;y;i})_{y\in\cA,i\in[N_y]}$ is a symplectic structure on~$X$ for each $t\!\in\!B$ and $(\om_{t;y;i})_{t\in B,i\in[N_y]}$ is a family of symplectic structures on~$\X_y$ for each~$y\!\in\!\cA$.
\end{enumerate} \end{dfn} \begin{dfn}\label{NCC_dfn} An \sf{NC symplectic variety} is a pair consisting of an~NC variety~$X$ and a symplectic structure $(\om_{y;i})_{y\in\cA,i\in[N_y]}$ on~$X$ as in Definition~\ref{ImmTransConf_dfn}\ref{NCsympstr_it} such~that $(\om_{y;i})_{i\in[N_y]}\!\in\!\Symp^+(\X_y)$ for every $y\!\in\!\cA$. \end{dfn} \noindent The symplectic variety~$X_{\eset}$ determined by an SC symplectic configuration~$\X$ as in Definition~\ref{SCC_dfn} is an NC symplectic variety in the sense of Definition~\ref{NCC_dfn}. The corresponding NC~atlas is the maximal collection of NC~charts on~$X_{\eset}$ containing the chart $(X_{\eset},\X,\id_{X_{\eset}})$.\\ \noindent The \sf{singular locus} of an~NC variety~$X$ as in Definition~\ref{ImmTransConf_dfn} is the closed subspace $$X_{\prt}\equiv \bigcup_{y\in\cA}\vph_y^{-1}\big(X_{y;\prt}\big)\subset X,$$ where $X_{y;\prt}\!\subset\!X_{y;\eset}$ is the subspace corresponding to~$\X_y$ as in~\eref{Xprtdfn_e}. Let $$\big(X_{\prt}\big)_{\prt}\equiv \bigcup_{y\in\cA} \bigcup_{\begin{subarray}{c}I\subset[N_y]\\ |I|=3\end{subarray}} \!\!\!\!\!\vph_y^{-1}\big(X_{y;I}\big)\subset X$$ denote the \sf{singular locus of~$X_{\prt}$}; it is also a closed subset.\\ \noindent For $k\!\in\!\Z^{\ge0}$, a \sf{$k$-form on~$X$} is a tuple $\mu\!\equiv\!(\mu_{y;i})_{y\in\cA,i\in[N_y]}$, where each $\mu_{y;i}$ is a $k$-form on~$X_{y;i}$ such~that \begin{alignat*}{2} \mu_{y;i}\big|_{X_{y;ij}}&=\mu_{y;j}\big|_{X_{y;ij}} &\qquad &\forall\,i,j\!\in\![N_y],\,y\!\in\!\cA, \\ \mu_{y';i}|_{\vph_{y'}(U_{yy';x})} &= \vph_{yy';x}^{~*}\,\mu_{y;h_{yy';x}(i)} &\qquad &\forall\,i\!\in\![N_{y'}],\,y,y'\!\in\!\cA,\,x\!\in\!U_{yy';x}\!\subset\!U_y\!\cap\!U_{y'}\,, \end{alignat*} with $\vph_{yy';x}$ as in Definition~\ref{ImmTransConf_dfn0}\ref{NCatlas_it} and with $h_{yy';x}$ being the associated map as in~\eref{vphhdfn_e}. The~tuple $$ \nd\mu\equiv\big(\nd\mu_{y;i}\big)_{y\in\cA,i\in[N_y]}$$ is then a $(k\!+\!1)$-form on~$X$. 
Let $$\supp(\mu)\equiv \ov{\bigcup_{y\in\cA}\bigcup_{i\in[N_y]} \!\!\! \vph_y^{-1}\big(\supp(\mu_{y;i})\big)}\subset X$$ denote the \sf{support of~$\mu$}. Denote by $\Symp^+(X)$ the space of all symplectic structures~$\om$ on~$X$ so that $(X,\om)$ is an NC symplectic variety.\\ \noindent Let $(\om_i)_{i\in[N]}$ and $(\om_i')_{i\in[N']}$ be symplectic structures on transverse configurations $\X$ and~$\X'$, respectively, as in~\eref{XXprdfn_e}. Suppose $\vph\!:U\!\lra\!U'$ is a symplectomorphism between open subsets of~$X_{\eset}$ and~$X_{\eset}'$ and $h\!:[N]\!\lra\![N']$ is as in~\eref{vphhdfn_e} with the two sides replaced by their intersections with~$U$ and~$U'$. We also denote by~$h$ the induced map from~$\cP(N)$ to~$\cP(N')$. The diffeomorphism~$\vph$ determines isomorphisms $$\nd\vph\!: \cN_{X_i}X_I\big|_{X_I\cap U}\lra \cN_{X_{i'}}X_{I'}\big|_{X_{I'}\cap U'} \qquad\forall\,i\!\in\!I\!\subset\![N],~i'\!\equiv\!h(i)\in I'\!\equiv\!h(I).$$ Suppose $$\big(\cR_I\big)_{I\in\cP^*(N)} \equiv \big(\rho_{I;i},\na^{(I;i)},\Psi_{I;i}\big)_{i\in I\subset[N]} \quad\hbox{and}\quad \big(\cR_I'\big)_{I\in\cP^*(N')} \equiv \big(\rho_{I;i}',\na'^{(I;i)},\Psi_{I;i}'\big)_{i\in I\subset[N']}$$ are an $(\om_i)_{i\in[N]}$-regularization for~$\X$ and an $(\om_i')_{i\in[N']}$-regularization for~$\X'$, respectively. We define $$\big(\cR_I\big)_{I\in\cP^*(N)}\cong_{\vph}\big(\cR_I'\big)_{I\in\cP^*(N')}$$ if \begin{gather*} \big(\rho_{I;i},\na^{(I;i)}\big)\big|_{X_I\cap U} =\big\{\nd\vph\big\}^*\big(\rho_{I';i'}',\na'^{(I';i')}\big),\\ \vph\!\circ\!\Psi_{I;i} =\Psi_{I';i'}'\!\circ\!\nd\vph \quad\hbox{on}\quad \Dom(\Psi_{I;i})\cap \nd\vph^{-1}\big(\Dom(\Psi_{I';i'}')\big) \end{gather*} for all $i\!\in\!I\!\subset\![N]$, $i'\!\equiv\!h(i)\in I'\!\equiv\!h(I)$. \begin{dfn}\label{NCCregul_dfn} Let $X$ be an NC variety with an NC atlas $(U_y,\X_y,\vph_y)_{y\in\cA}$. 
\begin{enumerate}[label=(\arabic*),leftmargin=*] \item\label{NCCregul_it1} If $\om\!\equiv\!(\om_{y;i})_{y\in\cA,i\in[N_y]}$ is a symplectic structure on~$X$, an \sf{$\om$-regularization for $X$} is a~collection $$(\fR_y)_{y\in\cA} \equiv (\cR_{y;I})_{y\in\cA,I\in\cP^*(N_y)} $$ such that $\fR_y$ is an $(\om_{y;i})_{i\in[N_y]}$-regularization for~$\X_y$ for each~$y\!\in\!\cA$ and \BE{NCCregulover_e}\fR_{y'}\big|_{\vph_{y'}(U_{yy';x})} \cong_{\vph_{yy';x}} \fR_y\big|_{\vph_y(U_{yy';x})}\EE for all $y,y'\!\in\!\cA$ and $x\!\in\!U_{yy';x}\!\subset\!U_y\!\cap\!U_{y'}$ as in Definition~\ref{ImmTransConf_dfn0}\ref{NCatlas_it}. \item\label{NCCregul_it2} If $B$ is a manifold, possibly with boundary, and $(\om_{t;y;i})_{t\in B,y\in\cA,i\in[N_y]}$ is a family of symplectic structures on~$X$, an \sf{$(\om_{t;y;i})_{t\in B,y\in\cA,i\in[N_y]}$-family of regularizations for~$X$} is a tuple $(\fR_{t;y})_{y\in\cA,t\in B}$ such that $(\fR_{t;y})_{y\in\cA}$ is an $(\om_{t;y;i})_{y\in\cA,i\in[N_y]}$-regularization for~$X$ for each~$t\!\in\!B$ and $(\fR_{t;y})_{t\in B}$ is an $(\om_{t;y;i})_{t\in B,i\in[N_y]}$-family of regularizations for~$\X_y$ for each~$y\!\in\!\cA$. \end{enumerate} \end{dfn} \vspace{.1in} \noindent For a family $(\om_t)_{t\in B}$ of symplectic structures on an NC variety~$X$, we define two $(\om_t)_{t\in B}$-families of regularizations for~$X$ to be \sf{equivalent} if they agree on the level of germs as defined before \cite[Theorem~2.17]{SympDivConf}. \begin{thm}\label{NCC_thm} Let $X$ be an NC variety with an NC atlas $(U_y,\X_y,\vph_y)_{y\in\cA}$ and $X^*\!\subset\!X$ be an open subset such that $\ov{X^*}\!\cap\!(X_{\prt})_{\prt}\!=\!\eset$. Suppose \begin{enumerate}[label=$\bullet$,leftmargin=*] \item $B$, $N(\prt B)$, and $N'(\prt B)$ are as in Theorem~\ref{NCD_thm}, \item $(\om_t)_{t\in B}$ is a family of symplectic structures in $\Symp^+(X)$, \item $(\fR_{t;y})_{t\in N(\prt B),y\in\cA}$ is an $(\om_t)_{t\in N(\prt B)}$-family of regularizations for~$X$. 
\end{enumerate} Then there exist a smooth family $(\mu_{t,\tau})_{t\in B,\tau\in\bI}$ of 1-forms on~$X$ such~that \BE{SCCom_e}\begin{aligned} \mu_{t,0}=0, ~~ &\supp\big(\mu_{\cdot,\tau}\big)\subset \big(B\!-\!N'(\prt B)\big)\!\times\!(X\!-\!X^*), &\qquad& \forall~t\!\in\!B,\,\tau\!\in\!\bI,\\ &\om_{t,\tau}\!\equiv\!\om_t\!+\!\nd\mu_{t,\tau}\in \Symp^+(X) &\qquad& \forall~t\!\in\!B,\,\tau\!\in\!\bI, \end{aligned}\EE and an $(\om_{t,1})_{t\in B}$-family $(\wt\fR_{t;y})_{t\in B,y\in\cA}$ of regularizations for~$X$ such~that \BE{SCCom_e2}\big(\wt\fR_{t;y}\big)_{t\in N'(\prt B),y\in\cA} \cong \big(\fR_{t;y}\big)_{t\in N'(\prt B),y\in\cA}\,.\EE \end{thm} \vspace{.1in} \noindent This statement, which is the direct analogue of \cite[Theorem~2.17]{SympDivConf} for arbitrary NC varieties, implies that the second projection in~\eref{AuxtoSympNC_e} is a weak homotopy equivalence in this general setting as well. The proof of \cite[Theorem~2.17]{SympDivConf} is mostly local in nature and readily extends to Theorem~\ref{NCC_thm}; see Section~\ref{NCCpf_sec} for more details. \subsection{Global perspective} \label{NCCgl_subs} \noindent Suppose $\wt{X}$ and $\wt{V}$ are sets. We call a pair \BE{iopsidfn_e}\big(\io\!:\wt{V}\!\lra\!\wt{X},\psi\!:\wt{V}\!\lra\!\wt{V}\big)\EE consisting of a map and an involution, i.e.~$\psi^2\!=\!\id_{\wt{V}}$, \sf{compatible}~if \BE{iopsiprop_e1}\begin{split} &\io\big(\psi(\wt{v})\big)\neq\io(\wt{v})~~\forall\,\wt{v}\!\in\! \wt{V},\quad \wt{v},\wt{v}'\!\in\!\wt{V},\,\wt{v}\!\neq\!\wt{v}',\, \io(\wt{v})=\io(\wt{v}')\Lra\io\big(\psi(\wt{v})\big)\neq\io\big(\psi(\wt{v}')\big),\\ &\qquad\psi\big(\io^{-1}\big(\{\wt{x}\}\!\cup\!\io(\psi(\io^{-1}(\wt{x})))\big)\big) =\io^{-1}\big(\{\wt{x}\}\!\cup\!\io(\psi(\io^{-1}(\wt{x})))\big) ~~\forall\,\wt{x}\!\in\!\wt{X}. 
\end{split}\EE The first condition in~\eref{iopsiprop_e1} is equivalent to the image of the~map \BE{wtXX2_e}\wt{V}\lra \wt{X}^2, \qquad \wt{v}\lra\big(\io(\wt{v}),\io(\psi(\wt{v}))\big),\EE being disjoint from the diagonal $\De_{\wt{X}}\!\subset\!\wt{X}^2$. The second condition in~\eref{iopsiprop_e1} is equivalent to the injectivity of this~map. The three conditions are illustrated in Figure~\ref{wtVpsi_fig}.\\ \noindent For a compatible pair $(\io,\psi)$ as in~\eref{iopsidfn_e}, define \BE{iopsiprop_e2} X_{\io,\psi}=\wt{X}\big/\!\!\sim, \quad \wt{x}\sim\wt{x}' ~~~\hbox{if}~~~\wt{x}=\wt{x}'~~\hbox{or}~~ \psi\big(\io^{-1}(\wt{x})\big)\cap\io^{-1}(\wt{x}')\neq\eset.\EE Since $\psi$ is an involution, the relation~$\sim$ above is symmetric. In light of the first two conditions in~\eref{iopsiprop_e1}, the last condition in~\eref{iopsiprop_e1} is equivalent to the transitivity of this relation. Let $$q\!:\wt{X} \lra X_{\io,\psi}$$ be the quotient projection. By~\eref{iopsiprop_e2}, the projection of~$\wt{X}_q^{(2)}$ to $\wt{X}^2$ is the image of~\eref{wtXX2_e}. For every $k\!\in\!\Z^{\ge0}$, the~map \BE{tiXprtX_e} \phi_k\!:\wt{V}_{\io}^{(k)}\lra \wt{X}_q^{(k+1)}\!\subset\!X_{\io,\psi}\!\times\!\wt{X}^{k+1}, ~~ \phi_k\big(\wt{x},\wt{v}_1,\ldots,\wt{v}_k\big)= \big(q(\wt{x}),\wt{x},\io(\psi(\wt{v}_1)\!),\ldots,\io(\psi(\wt{v}_k)\!)\!\big),\EE is thus a bijection. We note that the diagram \BE{phikiio_e}\begin{split} \xymatrix{\wt{V}_{\io}^{(k)} \ar[rr]^{\phi_k} \ar[rd]^{\io_k} \ar[dd]_{\io_{k;k-1}^{(i)}} && \wt{X}_q^{(k+1)} \ar[ld]|{q_{k+1;1}} \ar[dd]^{q_{k+1;k}^{(i+1)}}\\ & \wt{X} & \\ \wt{V}_{\io}^{(k-1)} \ar[rr]_{\phi_{k-1}} \ar[ru]|{\io_{k-1}} && \wt{X}_q^{(k)} \ar[ul]^{q_{k;1}\!\!}} \end{split}\EE commutes for all $i\!\in\![k]$ and $k\!\in\!\Z^+$. \begin{lmm}\label{iopsi_lmm} Suppose $(\io,\psi)$ is a compatible pair as in~\eref{iopsidfn_e} so that $\io$ is a closed transverse immersion and $\psi$ is smooth. 
\begin{enumerate}[label=(\arabic*),leftmargin=*] \item For every $k\!\in\!\Z^{\ge0}$, the bijection $\phi_k$ is a homeomorphism. \item\label{wtXqproj_it} The projection $\wt{X}_q^{(k+1)}\!\lra\!\wt{X}^{k+1}$ is an embedding with respect to the smooth structure on $\wt{X}_q^{(k+1)}$ induced by~$\phi_k$. \end{enumerate} \end{lmm} \begin{proof} Since $\io$ is a closed transverse immersion and $\psi$ is smooth, the map \BE{iopsi_e3}\wt{V}_{\io}^{(k)}\lra \wt{X}^{k+1}, \quad \big(\wt{x},\wt{v}_1,\ldots,\wt{v}_k\big)\lra \big(\wt{x},\io(\psi(\wt{v}_1)\!),\ldots,\io(\psi(\wt{v}_k)\!)\!\big),\EE is a closed immersion. Since this map is injective, it follows that it is a topological embedding. Along with \cite[Theorem~1.32]{Warner}, this implies that~\eref{iopsi_e3} is a smooth embedding. Since $q$ is continuous, the projection in~\ref{wtXqproj_it} is a topological embedding. The last two conclusions imply both claims of the lemma. \end{proof} \noindent By Lemma~\ref{iopsi_lmm}, the smooth structure on $\wt{X}_q^{(k+1)}$ induced by $\phi_k$ is $\bS_{k+1}$-invariant. 
Furthermore, the~maps \begin{alignat*}{2} q_{k+1}^{(i)}\!:\wt{X}_q^{(k+1)}&\lra\wt{X}, &\quad q_{k+1}^{(i)}(x,\wt{x}_1,\ldots,\wt{x}_{k+1})&=\wt{x}_i,\\ q_{k+1;k}^{(i)}\!:\wt{X}_q^{(k+1)}&\lra\wt{X}_q^{(k)},&\quad q_{k+1;k}^{(i)}\big(x,\wt{x}_1,\ldots,\wt{x}_{k+1}\big)&= \big(x,\wt{x}_1,\ldots,\wt{x}_{i-1},\wt{x}_{i+1},\ldots,\wt{x}_{k+1}\big), \end{alignat*} are immersions for all $k\!\in\!\Z^{\ge0}$ and $i\!\in\![k\!+\!1]$.\\ \begin{figure} \begin{pspicture}(-4,-4)(11,4.5) \psset{unit=.3cm} \psline[linewidth=.08](-1,14)(-1,8)\pscircle*(-1,11){.2}\rput(-2.1,11.7){\sm{$\wt{v}_{12}$}} \psline[linewidth=.08,linestyle=dashed](-3,9)(1,13)\rput(2.5,12.5){\sm{$\cN\io_{2;1}^{(1)}$}} \psline[linewidth=.05]{<->}(3,11)(10,11)\rput(6.5,11.7){$\psi$} \psline[linewidth=.08](14,14)(14,8)\pscircle*(14,11){.2}\rput(13,11.9){\sm{$\wt{v}_{21}$}} \psline[linewidth=.08,linestyle=dashed](11,11)(17,11)\rput(17,11.8){\sm{$\cN\io_{2;1}^{(1)}$}} \psline[linewidth=.08](27,9)(31,13)\pscircle*(29,11){.2}\rput(28.3,11.9){\sm{$\wt{v}_{31}$}} \psline[linewidth=.08,linestyle=dashed](26,11)(32,11)\rput(33,11.8){\sm{$\cN\io_{2;1}^{(1)}$}} \psline[linewidth=.09,linestyle=dashed](-1,6)(-1,0)\pscircle*(-1,3){.2}\rput(0.3,2.6){\sm{$\wt{v}_{13}$}} \psline[linewidth=.08](-3,1)(1,5)\rput(-2.8,5){\sm{$\cN\io_{2;1}^{(2)}$}} \psline[linewidth=.05]{<->}(3,5.3)(25,8.3)\rput(15,7.7){$\psi$} \psline[linewidth=.05,linestyle=dashed]{<->}(0,.5)(13,.5)\rput(6.5,1.3){$D_2\psi_2$} \psline[linewidth=.05]{->}(-1,-1.5)(-1,-5.5)\rput(-.3,-3.5){$\io$} \psline[linewidth=.08,linestyle=dashed](14,6)(14,0)\pscircle*(14,3){.2}\rput(15.1,2.2){\sm{$\wt{v}_{23}$}} \psline[linewidth=.08](11,3)(17,3)\rput(12.2,5){\sm{$\cN\io_{2;1}^{(2)}$}} \psline[linewidth=.05]{->}(14,-1.5)(14,-5.5)\rput(14.7,-3.5){$\io$} \psline[linewidth=.05]{<->}(18,3)(25,3)\rput(21.5,3.7){$\psi$} \psline[linewidth=.07,linestyle=dashed](27,1)(31,5)\pscircle*(29,3){.2}\rput(30,2.2){\sm{$\wt{v}_{32}$}} 
\psline[linewidth=.07](26,3)(32,3)\rput(29.2,5.5){\sm{$\cN\io_{2;1}^{(2)}$}} \psline[linewidth=.05]{->}(29,-1.5)(29,-5.5)\rput(29.7,-3.5){$\io$} \psline[linewidth=.09](-1,-7)(-1,-13)\pscircle*(-1,-10){.2}\rput(-1.8,-9.4){\sm{$\wt{x}_1$}} \psline[linewidth=.08](-3,-12)(1,-8) \psline[linewidth=.08](14,-7)(14,-13)\pscircle*(14,-10){.2}\rput(13.2,-9.2){\sm{$\wt{x}_2$}} \psline[linewidth=.08](11,-10)(17,-10) \psline[linewidth=.07](27,-12)(31,-8)\pscircle*(29,-10){.2}\rput(28.3,-9.2){\sm{$\wt{x}_3$}} \psline[linewidth=.07](26,-10)(32,-10) \end{pspicture} \caption{The solid lines in the bottom row represent the image of a transverse immersion \hbox{$\io\!:\wt{V}\!\lra\!\wt{X}$} around three double points $\wt{x}_i\!\in\!\wt{X}$. The solid lines above each~$\wt{x}_i$ are the preimages of the two branches at~$\wt{x}_i$ in~$\wt{V}$; they are interchanged by the involution~$\psi$ as indicated. The top and middle points in each column represent an element of $\wt{V}_{\io}^{(2)}$, with the top point being its first $\wt{V}$-component.} \label{wtVpsi_fig} \end{figure} \noindent By the commutativity of the diagram in~\eref{phikiio_e}, the identity on $q_{k+1;1}^*T\wt{X}$ and~$\nd\phi_{k-1}$ induce bundle isomorphisms $$D_k\!:\cN\io_k\lra \cN q_{k+1;1} \qquad\hbox{and}\qquad D_i\phi_k\!:\cN\io_{k;k-1}^{(i)}\lra \cN q_{k+1;k}^{(i+1)}$$ covering~$\phi_k$. By the commutativity of the right triangle in~\eref{phikiio_e} with $i$ replaced by $i\!-\!1$, $\nd q_{k;1}$ induces a bundle homomorphism \BE{cNqdecompmaps_e} \cN q_{k+1;k}^{(i)} \lra \cN q_{k+1;1}\EE over $\wt{X}_q^{(k+1)}$ for every $i\!\in\![k\!+\!1]\!-\!\{1\}$. 
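The set-theoretic content of the compatibility conditions~\eref{iopsiprop_e1} and of the quotient~\eref{iopsiprop_e2} can be checked directly on the finite configuration of Figure~\ref{wtVpsi_fig}. The following Python sketch, an illustration only, encodes the three double points $\wt{x}_1,\wt{x}_2,\wt{x}_3$ with their branches $\wt{v}_{ij}$ and verifies the three conditions and the symmetry and transitivity of~$\sim$; the extra element \texttt{y} models a point away from the double locus and is a choice of the sketch.

```python
# Finite model of Figure wtVpsi_fig: branches v_ij over double points x_i,
# with iota(v_ij) = x_i and the involution psi interchanging v_ij and v_ji.
Vt = [(i, j) for i in (1, 2, 3) for j in (1, 2, 3) if i != j]
Xt = ["x1", "x2", "x3", "y"]              # "y" models a point away from V
iota = {v: "x%d" % v[0] for v in Vt}
psi = {(i, j): (j, i) for (i, j) in Vt}

def preim(x):                              # iota^{-1}(x)
    return {v for v in Vt if iota[v] == x}

# First two conditions in (iopsiprop_e1): v -> (iota(v), iota(psi(v)))
# avoids the diagonal and is injective.
pairs = [(iota[v], iota[psi[v]]) for v in Vt]
assert all(a != b for (a, b) in pairs) and len(set(pairs)) == len(pairs)

# Third condition: psi preserves iota^{-1}({x} u iota(psi(iota^{-1}(x)))).
for x in Xt:
    S = {x} | {iota[psi[v]] for v in preim(x)}
    P = set().union(*[preim(s) for s in S])
    assert {psi[v] for v in P} == P

# The relation of (iopsiprop_e2) is symmetric and transitive; its classes
# give the quotient X_{iota,psi}.
def related(x, xp):
    return x == xp or any(psi[v] in preim(xp) for v in preim(x))

assert all(related(b, a) for a in Xt for b in Xt if related(a, b))
assert all(related(a, c) for a in Xt for b in Xt for c in Xt
           if related(a, b) and related(b, c))
classes = {frozenset(b for b in Xt if related(a, b)) for a in Xt}
print(sorted(sorted(c) for c in classes))  # -> [['x1', 'x2', 'x3'], ['y']]
```

In this model the quotient identifies the three double points $\wt{x}_1,\wt{x}_2,\wt{x}_3$ of $\wt{X}$ to a single point of $X_{\io,\psi}$, which becomes a triple point of the resulting NC variety.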
For all $\si\!\in\!\bS_k$ and $i\!\in\![k]$, the homomorphism~$\nd\si_i$ of the second diagram in~\eref{Vkdiag_e} with~$\io$ replaced by~$q$ induces an isomorphism $$D_i{\si}\!:\cN q_{k;k-1}^{(i)}\lra \cN q_{k;k-1}^{(\si(i))}$$ covering the action of~$\si$ on~$\wt{X}_q^{(k)}$.\\ \noindent For $k\!\ge\!2$, let \begin{gather*} \cN q_k=\bigoplus_{i\in[k]}\cN q_{k;k-1}^{(i)}\,, \quad \cN_i q_k=\!\!\bigoplus_{\begin{subarray}{c}j\in[k]\\ j\neq i\end{subarray}}\! \cN q_{k;k-1}^{(j)}\subset\cN q_k\, ~~\forall\,i\!\in\![k]. \end{gather*} By the commutativity of~\eref{phikiio_e}, the diagram $$\xymatrix{ \bigoplus\limits_{i\in[k]}\!\!\cN\io_{k;k-1}^{(i)} \ar[d] \ar[rrr]^{\bigoplus\limits_{i\in[k]}\!\!\!D_i\phi_k} &&& \cN_1q_{k+1}\ar[d] \\ \cN\io_k\ar[rrr]^{D_k} &&& \cN q_{k+1;1} }$$ commutes for every $k\!\in\!\Z^+$; the left and right vertical arrows above are the isomorphisms induced by the homomorphisms~\eref{iodecompmaps_e} and~\eref{cNqdecompmaps_e}, respectively. For $k\!\ge\!2$ and $\si\!\in\!\bS_k$, let \BE{DsiXdfn_e}D\si\!\equiv\!\bigoplus_{i\in[k]}\!D_i\si: \cN q_k\lra \cN q_k\,.\EE This bundle isomorphism preserves the splitting of~$\cN q_k$ and takes~$\cN_i q_k$ to~$\cN_{\si(i)}q_k$. \begin{dfn}\label{NCTransConfregul_dfn} Suppose $(\io,\psi)$ is a compatible pair as in~\eref{iopsidfn_e} so that $\io$ is a transverse closed immersion and $\psi$ is smooth. Let $q\!:\wt{X}\!\lra\!X_{\io,\psi}$ be the associated quotient. 
A \sf{regularization} for $(\io,\psi)$ is a tuple $(\Psi_{k;i})_{k\in\Z^+,i\in[k]}$, where each $\Psi_{k;i}$ is a regularization for the immersion~$q_k^{(i)}$, such~that the tuple $(\Psi_{k+1;1}\!\circ\!D_k)_{k\in\Z^{\ge0}}$ is a refined regularization for~$\io$ as in Definition~\ref{NCTransCollregul_dfn}, \begin{alignat}{2} \label{NCSCCregCond_e0} q\!\circ\!\Psi_{k;i_1}\big|_{\cN_{i_2}q_k\cap\Dom(\Psi_{k;i_1})} &=q\!\circ\!\Psi_{k;i_2}\big|_{\cN_{i_1}q_k\cap\Dom(\Psi_{k;i_2})} &\qquad &\forall~i_1,i_2\!\in\![k],~k\!\in\!\Z^+, \\ \label{NCSCCregCond_e2} \Psi_{k;i}&=\Psi_{k;\si(i)}\!\circ\!D\si\big|_{\Dom(\Psi_{k;i})} &\qquad &\forall~i\!\in\![k],~\si\!\in\!\bS_k,~k\!\ge\!2\,. \end{alignat} \end{dfn} \begin{dfn}\label{NCSCCregul_dfn} Suppose $(\wt{X},\wt\om)$ is a symplectic manifold, $(\io,\psi)$ is a compatible pair as in~\eref{iopsidfn_e} so that $\io$ is a closed transverse immersion of codimension~2, $\psi$ is smooth, $\io_k^*\wt\om$ is a symplectic form on~$\wt{V}_{\io}^{(k)}$ for all $k\!\in\!\Z^+$, and $\psi^*\io^*\wt\om\!=\!\io^*\wt\om$. An \sf{$\wt\om$-regularization for~$(\io,\psi)$} is a~tuple \BE{NCSCCregul_e}\fR\equiv \big(\cR_k)_{k\in\Z^+}\equiv \big(\rho_{k;i},\na^{(k;i)},\Psi_{k;i}\big)_{k\in\Z^+,i\in[k]}\EE such that $(\Psi_{k;i})_{k\in\Z^+,i\in[k]}$ is a regularization for~$(\io,\psi)$, the~tuple \BE{NCSCCregul_e0}\big(\big( \{D_i\phi_k\}^*(\rho_{k+1;i+1},\na^{(k+1;i+1)})\big)_{i\in[k]}, \Psi_{k+1;1}\!\circ\!D_k\big)_{k\in\Z^{\ge0}}\EE is a refined $\wt\om$-regularization for the immersion~$\io$ as in Definition~\ref{NCSCDregul_dfn}, and \BE{NCSCCregul_e3}\big(\rho_{k;i},\na^{(k;i)}\big) = \{D_i\si\}^*\big(\rho_{k;\si(i)},\na^{(k;\si(i))}\big) \quad \forall\,i\!\in\![k],\,\si\!\in\!\bS_k,\, k\!\in\!\Z^+.\EE \end{dfn} \vspace{.2in} \noindent The condition~\eref{NCSCCregCond_e0} replaces~\eref{SCCregCond_e0}.
By Proposition~\ref{NCD_prp}, the first condition on~$\wt\om$ in Definition~\ref{NCSCCregul_dfn} is equivalent to $V\!\equiv\!\io(\wt{V})$ being an NC symplectic divisor in~$(\wt{X},\wt\om)$. For a smooth family $(\wt\om_t)_{t\in B}$ of symplectic forms on~$\wt{X}$ satisfying the conditions of Definition~\ref{NCSCCregul_dfn}, the notion of $\wt\om$-regularization naturally extends to a notion of \sf{$(\wt\om_t)_{t\in B}$-family of refined regularizations for~$(\io,\psi)$}. We show in Section~\ref{NCCcomp_subs} that the NC symplectic varieties of Definition~\ref{NCC_dfn} correspond to the compatible pairs~$(\io,\psi)$ and symplectic forms~$\wt\om$ satisfying the conditions of Definition~\ref{NCSCCregul_dfn}. An $\om$-regularization in the sense of Definition~\ref{NCCregul_dfn} likewise corresponds to an $\wt\om$-regularization for~$(\io,\psi)$. \subsection{Local vs.~global perspective} \label{NCCcomp_subs} \noindent We now show that the local and global notions of NC symplectic variety are equivalent. If $(\wt{X},\wt\om)$ and $(\io,\psi)$ correspond to an NC variety~$(X,\om)$ as in Corollary~\ref{NCC_crl} below, then an \hbox{$\wt\om$-regularization} for~$(\io,\psi)$ in the sense of Definition~\ref{NCSCCregul_dfn} determines an $\om$-regularization for~$X$ in the sense of Definition~\ref{NCCregul_dfn}\ref{NCCregul_it1}. Conversely, the local regularizations constituting an $\om$-regularization for~$X$ can be patched together to form an $\wt\om$-regularization for~$(\io,\psi)$. These relations also apply to families of regularizations. \begin{lmm}\label{NCC_lmm} Suppose $\wt{X}$ and $\wt{V}$ are smooth manifolds and $(\io,\psi)$ is a compatible pair as in~\eref{iopsidfn_e} so that~$\io$ is a closed transverse immersion of codimension~2 and $\psi$ is a smooth involution. The associated quotient space~$X_{\io,\psi}$ is then canonically an NC variety. Furthermore, every NC variety is isomorphic to $X_{\io,\psi}$ for some~$(\io,\psi)$ as above. 
\end{lmm} \begin{proof} (1) Let $k\!\in\!\Z^+$, $V_{\io}^{(k)}\!\subset\!\wt{X}$ be as in~\eref{Xiotak_e}, and $\wt{x}\!\in\!V_{\io}^{(k)}\!-\!V_{\io}^{(k+1)}$. In light of~\eref{iopsiprop_e1}, we can write \begin{equation*}\begin{split} &\{\wt{x}\}\!\cup\!\io\big(\psi(\io^{-1}(\wt{x})\!)\big)\equiv\big\{\wt{x}_i\!:i\!\in\![k\!+\!1]\big\} \quad\hbox{and}\quad \io^{-1}(\wt{x}_i)\equiv\big\{\wt{v}_{ij}\!:\,j\!\in\![k\!+\!1]\!-\!i\big\}~~\forall\,i\!\in\![k\!+\!1]\\ &\hspace{.8in}\hbox{with}\qquad \wt{x}_i\neq\wt{x}_{i'}~~\forall\,i\!\neq\!i', \quad \wt{v}_{ij}\neq\wt{v}_{ij'}~~\forall\,j\!\neq\!j',\quad \psi\big(\wt{v}_{ij}\big)=\wt{v}_{ji}\,; \end{split}\end{equation*} see Figure~\ref{wtVpsi_fig}. By \cite[Proposition~1.35]{Warner}, the closedness of~$\io$, and the continuity of~$\psi$, there exist open neighborhoods $U_i\!\subset\!\wt{X}$ of~$\wt{x}_i$ with $i\!\in\![k\!+\!1]$ and neighborhoods $\wt{V}_{ij}\!\subset\!\wt{V}$ of~$\wt{v}_{ij}$ with $i,j\!\in\![k\!+\!1]$, $i\!\neq\!j$, such~that $$\io^{-1}(U_i)=\bigsqcup_{j\in[k+1]-i}\!\!\!\!\!\!\!\!\wt{V}_{ij}\subset\wt{V},\quad U_i\!\cap\!U_j=\eset, ~~~ \psi\big(\wt{V}_{ij}\big)=\wt{V}_{ji} \quad\forall~i,j\!\in\![k\!+\!1],~i\!\neq\!j,$$ and the restriction of $\io$ to each $\wt{V}_{ij}$ is injective. Let $U\!=\!q(U_1)\!\cup\!\ldots\!\cup\!q(U_{k+1})$.
Since $$q^{-1}(U)=\bigsqcup_{j\in[k+1]}\!\!\!\!\!U_j$$ by the above, $U$ is an open neighborhood of $q(\wt{x})$ in~$X_{\io,\psi}$.\\ \noindent For $i\!\in\!I\!\subset\![k\!+\!1]$ and $j\!\in\![k\!+\!1]\!-\!i$, let $$U_{ij}=\io(\wt{V}_{ij})\subset U_i, \qquad U_I=\bigcap_{j'\in I-i}\!\!\!\!U_{ij'}\subset U_i\,.$$ For each $i\!\in\![k\!+\!1]$, $\{U_{ij}\}_{j\neq i}$ is a transverse collection of closed submanifolds of~$U_i$ of codimension~2 such~that $$\io(\wt{V})\cap U_i= \bigcup_{j\in[k+1]-i}\!\!\!\!\!\!\!\!U_{ij}\,.$$ The diffeomorphisms $$\io\!: \wt{V}_{ij}\lra U_{ij}, \qquad \psi\!: \wt{V}_{ij}\lra\wt{V}_{ji}, \qquad\hbox{and}\qquad \io\!: \wt{V}_{ji}\lra U_{ji}$$ with $i\!\neq\!j$ induce an identification $\psi_{ij}\!:U_{ij}\!\lra\!U_{ji}$ restricting to identifications on the submanifolds $U_I\!\subset\!U_{ij},U_{ji}$ whenever $i,j\!\in\!I\!\subset\![k\!+\!1]$. By~\eref{iopsiprop_e2}, $$U=\bigg(\bigsqcup_{j\in[k+1]}\!\!\!\!\!U_j\bigg)\!\!\bigg/\!\!\!\sim \qquad\hbox{with}\qquad U_{ij}\ni \wt{x}' \sim \psi_{ij}(\wt{x}')\in U_{ji} ~~\forall\,\wt{x}'\!\in\!U_{ij},\,i\!\neq\!j\,.$$ Thus, $\X\!\equiv\!\{U_I\}_{I\in\cP^*(k+1)}$ is a transverse configuration such that $U_{ij}$ is a closed submanifold of~$U_i$ of codimension~2 for all $i,j\!\in\![k\!+\!1]$ distinct and the restriction of~$q$ to the union of the~$U_i$ descends to a homeomorphism $$\vph\!: U\lra X_{\eset}\,.$$ The tuple $(U,\X,\vph)$ is then a chart around $q(\wt{x})$ in~$X_{\io,\psi}$ in the sense of Definition~\ref{ImmTransConf_dfn0}\ref{NCchart_it}. Any two such charts overlap smoothly and thus generate an NC atlas for~$X_{\io,\psi}$.\\ \noindent (2) Let $X$ be an NC variety. Choose a locally finite collection $$\big(U_y,\X_y\!\equiv\!(X_{y;I})_{I\in\cP^*(N_y)}, \vph_y\!:U_y\!\lra\!X_{y;\eset}\big)_{y\in\cA'}$$ of NC charts covering~$X$. 
Let $$\wt{X}=\bigg(\bigsqcup_{y\in\cA'}\bigsqcup_{i\in[N_y]}\!\!\!\{(y,i)\}\!\times\!X_{y;i} \bigg)\!\Big/\!\!\sim,$$ where we identify $(y,i,z)$ with $z\!\in\!X_{y;i}$ and $(y',i',z')$ with $z'\!\in\!X_{y';i'}$~if there exist $x\!\in\!U_y\!\cap\!U_{y'}$ and an overlap map~$\vph_{yy';x}$ as in~\eref{ImmTransConf_e2} such~that \BE{NCvarglue_e}z'\in \vph_{y'}\big(U_{yy';x}\big),\quad z=\vph_{yy';x}(z'),\quad \vph_{yy';x}\big(\vph_{y'}\big(U_{yy';x}\big)\!\cap\!X_{y';i'}\big) =\vph_y\big(U_{yy';x}\big)\!\cap\!X_{y;i}.\EE Define $$\wt{V}=\bigg(\bigsqcup_{y\in\cA'}\bigsqcup_{i\in[N_y]}\bigsqcup_{j\in[N_y]-i}\!\!\!\!\!\! \{(y,i,j)\}\!\times\!X_{y;ij}\bigg)\!\Big/\!\!\sim,$$ where we identify $(y,i,j,z)$ with $z\!\in\!X_{y;ij}$ and $(y',i',j',z')$ with $z'\!\in\!X_{y';i'j'}$~if there exist $x\!\in\!U_y\!\cap\!U_{y'}$ and an overlap map~$\vph_{yy';x}$ as in~\eref{ImmTransConf_e2} such that~\eref{NCvarglue_e} holds~and $$\vph_{yy';x}\big(\vph_{y'}(U_{yy';x})\!\cap\!X_{y';j'}\big) =\vph_y\big(U_{yy';x}\big)\!\cap\!X_{y;j}\,.$$ The Hausdorffness of $X$ implies the Hausdorffness of $\wt{X}$ and~$\wt{V}$. The last two spaces inherit smooth structures from the smooth structures of~$X_{y;i}$ and~$X_{y;ij}$. Define $$\io\!:\wt{V}\lra\wt{X}, \quad \io\big([y,i,j,z]\big)=[y,i,z], \qquad \psi\!:\wt{V}\lra\wt{V}, \quad \psi\big([y,i,j,z]\big)=[y,j,i,z].$$ By the assumption on the charts in Definition~\ref{ImmTransConf_dfn}, $\io$ is a closed transverse immersion; the smooth map~$\psi$ is an involution. It is immediate that the pair~$(\io,\psi)$ is compatible. The well-defined~map $$X\lra X_{\io,\psi}, \qquad x\lra \big[y,i,\vph_y(x)\big]\quad \forall~x\in\vph_y^{-1}(X_{y;i}),~i\!\in\![N_y],~y\!\in\!\cA',$$ is an isomorphism of NC varieties. 
\end{proof} \noindent Following the usual terminology of algebraic geometry, we call the map $$q\!: \wt{X}\lra X_{\io,\psi}=X$$ provided by the last statement of Lemma~\ref{NCC_lmm} the \sf{normalization} of the NC variety~$X$; it is unique up to isomorphism. By the proof of Lemma~\ref{NCC_lmm}, $q$ pulls back a symplectic structure~$\om$ on~$X_{\io,\psi}$ to a symplectic form~$\wt\om$ on~$\wt{X}$. \begin{crl}\label{NCC_crl} Suppose $(\io,\psi)$ is a compatible pair as in~\eref{iopsidfn_e} so that~$\io$ is a closed transverse immersion of codimension~2 and $\psi$ is a smooth involution. A symplectic structure~$\om$ on the NC variety~$X_{\io,\psi}$ corresponds to a symplectic form~$\wt\om$ on~$\wt{X}$ such that $\io(\wt{V})$ is an NC symplectic divisor in~$(\wt{X},\wt\om)$ and $\psi^*\io^*\wt\om\!=\!\io^*\wt\om$. \end{crl} \begin{proof} By the proof of Lemma~\ref{NCC_lmm}, a symplectic structure~$\om$ on~$X_{\io,\psi}$ corresponds to a symplectic form~$\wt\om$ on~$\wt{X}$ such that $\io_k^*\wt\om$ is a symplectic form on $\wt{V}_{\io}^{(k)}$ for all $k\!\in\!\Z^+$, the intersection and \hbox{$\om$-orientations} of~$\wt{V}_{\io}^{(k)}$ are the same, and $\psi^*\io^*\wt\om\!=\!\io^*\wt\om$. Thus, the claim follows from Proposition~\ref{NCD_prp}. \end{proof} \noindent The local perspective on NC varieties in Section~\ref{NCCloc_subs} leads to natural notions of smoothness, immersion, and transverse immersion. These notions in turn make it possible to adapt the considerations of Section~\ref{EquivalentDfn_subs} to NC varieties. The codimension~2 condition in Definition~\ref{ImmTransConf_dfn} is not material for these considerations. \begin{dfn}\label{NCRes2_dfn} Let $X$ be an NC variety as in Definition~\ref{ImmTransConf_dfn}. 
A~\sf{resolution of~$X$} is a sequence of closed immersions \BE{absNC2_e} \ldots \lra X^{(k)}\xlra{f_{k;k-1}}X^{(k-1)} \xlra{f_{k-1;k-2}}\cdots \xlra{f_{2;1}} X^{(1)} \xlra{f_{1;0}} X^{(0)}\!\equiv\!X\EE such that \begin{enumerate}[label=(R\arabic*),leftmargin=*] \setcounter{enumi}{-1} \item $f_{1;0}$ is a transverse surjective immersion; \item for every $k\!\in\!\Z^+$, $X^{(k)}$ is a manifold with a free $\bS_k$-action and the codimension of~$f_{k;k-1}$ is~2; \item\label{Requiv2_it} for all $0\!\le\!k'\!\le\!k$, the map \BE{absNC2_e2} f_{k;k'}\!\equiv\!f_{k'+1;k'}\!\circ\!\ldots\!\circ\!f_{k;k-1}\!: X^{(k)}\!\lra\!X^{(k')}\EE is $\bS_{k'}$-equivariant, and \item\label{inverse2_it} for all $0\!\le\!k'\!\le\!k$ and $x\!\in\!X^{(k)}\!-\!f_{k+1;k}(X^{(k+1)})$, $f_{k;k'}^{-1}(f_{k;k'}(x))=\bS_{[k]-[k']}\!\cdot\!x$. \end{enumerate} \end{dfn} \vspace{.1in} \noindent By the reasoning after Definition~\ref{NCRes_dfn}, the assumptions in Definition~\ref{NCRes2_dfn} imply that the map~\eref{absNC2_e2} is $\bS_{[k]-[k']}$-invariant. If $(\io,\psi)$ is a compatible pair as in~\eref{iopsidfn_e} with $X\!=\!X_{\io,\psi}$, then the sequence $$\ldots \lra \wt{X}_q^{(k)}\xlra{q_{k;k-1}}\wt{X}_q^{(k-1)} \xlra{q_{k-1;k-2}}\cdots \xlra{q_{2;1}} \wt{X}_q^{(1)} \xlra{q_{1;0}} \wt{X}_q^{(0)}\!\equiv\!X$$ of immersions constructed in Section~\ref{NCCgl_subs} is a resolution of~$X$. 
By the reasoning in the proof of Proposition~\ref{2TermToSeq_prp}, for any other resolution~\eref{absNC2_e} of~$X$ there exist unique smooth maps~$h_k$ so that the diagram \BE{absNC2_e10}\begin{split} \xymatrix{\ldots\ar[r]& X^{(k)}\ar[d]^{h_k}\ar[rr]^{f_{k;k-1}}&& X^{(k-1)}\ar[d]^{h_{k-1}}\ar[rr]^{f_{k-1;k-2}}&& \cdots \ar[r]^{f_{2;1}}& X^{(1)} \ar[d]^{h_1}\ar[r]^>>>>>{f_{1;0}}& X^{(0)}\!=\!X\ar[d]^{\id}\\ \ldots\ar[r] &\wt{X}_q^{(k)}\ar[rr]^{q_{k;k-1}}&& \wt{X}_q^{(k-1)} \ar[rr]^{q_{k-1;k-2}}&& \cdots\ar[r]^{q_{2;1}} & \wt{X}_q^{(1)} \ar[r]^>>>>>{q_{1;0}}& \wt{X}_q^{(0)}\!=\!X} \end{split}\EE commutes. These maps are $\bS_k$-equivariant diffeomorphisms. \subsection{Another global perspective} \label{NCgl2_subs} \noindent By~\eref{NCSCCregCond_e2}, a regularization $(\Psi_{k;i})_{k\in\Z^+,i\in[k]}$ for $(\io,\psi)$ is determined by the refined regularization $(\Psi_{k+1;1}\!\circ\!D_{k;1})_{k\in\Z^{\ge0}}$ for~$\io$. By the proof of Proposition~\ref{NCCgl_prp} below, which re-interprets the notion of $\wt\om$-regularization for~$(\io,\psi)$ provided by Definition~\ref{NCSCCregul_dfn} in terms of the notion of refined $\wt\om$-regularization for~$\io$ provided by Definition~\ref{NCSCDregul_dfn}, the condition~\eref{NCSCCregCond_e0} is equivalent to the \hbox{$\psi$-equivariance} condition~\eref{NCCgl_e2} on $(\Psi_{k+1;1}\!\circ\!D_{k;1})_{k\in\Z^{\ge0}}$.\\ \noindent For $k\!\in\!\Z^+$ and $i,j\!\in\![k]$ distinct, let $\si_{k;ij}\!\in\!\bS_k$ be the transposition interchanging $i,j\!\in\![k]$. Let $(\io,\psi)$ be a compatible pair as in~\eref{iopsidfn_e}. The bijection~$\phi_k$ in~\eref{tiXprtX_e} is equivariant with respect to the inclusion of $\bS_k$ into $\bS_{k+1}$ induced by the inclusion $$[k]\lra[k\!+\!1], \qquad i\lra i\!+\!1;$$ the first inclusion identifies $\bS_k$ with the subgroup~$\bS_{[k+1]-[1]}$ of~$\bS_{k+1}$. 
The action of~$\si_{k+1;12}$ on~$\wt{X}_{\io}^{(k+1)}$ corresponds to an involution~$\psi_k$ on~$\wt{V}_{\io}^{(k)}$ via the bijection~$\phi_k$, i.e. \BE{psikdfn_e}\psi_k\!: \wt{V}_{\io}^{(k)}\lra \wt{V}_{\io}^{(k)}, \qquad \phi_k\!\circ\!\psi_k\equiv \si_{k+1;12}\!\circ\!\phi_k, \qquad \psi_k^2=\id_{\wt{V}_{\io}^{(k)}}\,.\EE This involution extends the natural $\bS_k$-action on~$\wt{V}_{\io}^{(k)}$ to an $\bS_{k+1}$-action, under the above identification of~$\bS_k$ with~$\bS_{[k+1]-[1]}$, so that $\phi_k$ becomes $\bS_{k+1}$-equivariant.\\ \noindent We note~that $$\psi_1\!\approx\!\psi\!: \wt{V}_{\io}^{(1)}\!\approx\!\wt{V}\lra\wt{V}\!\approx\!\wt{V}_{\io}^{(1)}$$ and that the diagram \BE{iopsidiag_e}\begin{split} \xymatrix{ \wt{V}_{\io}^{(k)} \ar[rr]^{\psi_k} \ar[d]_{\io_{k;k-1}^{(i)}} && \wt{V}_{\io}^{(k)} \ar[d]^{\io_{k;k-1}^{(i)}}\\ \wt{V}_{\io}^{(k-1)} \ar[rr]^{\psi_{k-1}} && \wt{V}_{\io}^{(k-1)}} \end{split}\EE commutes for all $k\!\in\!\Z^+$ and $i\!\in\![k]\!-\!\{1\}$. By the $\bS_{k+1}$-equivariance of~$\phi_k$ and the commutativity of the square in~\eref{phikiio_e}, the map~\eref{whiokk_e} is $\bS_{k'+1}$-equivariant and $\bS_{[k+1]-[k'+1]}$-invariant with respect to the extended actions for every $k'\!\in\![k]$.\\ \noindent Under the assumptions of Lemma~\ref{iopsi_lmm}, the involution~$\psi_k$ is smooth. For each $i\!\in\![k]\!-\!\{1\}$, the homomorphism \BE{Djpsi_e} D_i\psi_k\!: \cN\io_{k;k-1}^{(i)}\lra \cN\io_{k;k-1}^{(i)}\EE induced by the commutativity of~\eref{iopsidiag_e} in this case is an isomorphism. Let $$D\psi_k\!\equiv\!\bigoplus_{i\in[k]-\{1\}}\!\!\!\!\!\!\!D_i\psi_k\!: \cN_{k;1}\io\!\equiv\!\bigoplus_{i\in[k]-\{1\}}\!\!\!\!\!\!\cN\io_{k;k-1}^{(i)} \lra \cN_{k;1}\io\,.$$ \begin{prp}\label{NCCgl_prp} Suppose $(\wt{X},\wt\om)$ and $(\io,\psi)$ are as in Definition~\ref{NCSCCregul_dfn}. 
An $\wt\om$-regularization for~$(\io,\psi)$ is equivalent to a refined $\wt\om$-regularization \BE{NCCgl_e0}\wh\fR\equiv\big(\wh\cR_k\big)_{k\in\Z^{\ge0}}\equiv \big((\wh\rho_{k;i},\wh\na^{(k;i)})_{i\in[k]},\wh\Psi_k\big)_{k\in\Z^{\ge0}}\EE for~$\io$ as in~\eref{NCSCDregul_e} such~that \begin{gather} \label{NCCgl_e1} \big(\wh\rho_{k;i},\wh\na^{(k;i)}\big) =\big\{D_i\psi_k\big\}^*\big(\wh\rho_{k;i},\wh\na^{(k;i)}\big) \qquad \forall~i\!\in\![k]\!-\!\{1\},~k\!\in\!\Z^+,\\ \label{NCCgl_e2} \psi\!\circ\!\wh\Psi_{k;1}=\wh\Psi_{k;1}\!\circ\!D\psi_k ~~\hbox{on}~~\cN_{k;1}'\io=D\psi_k(\cN_{k;1}'\io) \qquad \forall~k\!\in\!\Z^+, \end{gather} with $\cN_{k;1}'\io\!\equiv\!\cN_{k;1}\io\!\cap\!\Dom(\wh\Psi_k)$ and $\wh\Psi_{k;1}\!:\cN_{k;1}'\io\!\lra\!\wt{V}_{\io}^{(1)}\!\approx\!\wt{V}$ as in~\eref{Psikkprdfn_e}. \end{prp} \begin{proof} Suppose $(\Psi_{k;i})_{k\in\Z^+,i\in[k]}$ is a tuple of regularizations for the immersions~$q_k^{(i)}$ as in Definition~\ref{NCTransConfregul_dfn}. Let $k\!\in\!\Z^+$. The $\bS_k$-invariance condition~\eref{NCSCCregCond_e2} is equivalent to the condition \BE{Psik1i_e2} \Psi_{k;i}=\Psi_{k;1}\!\circ\!D\si\!\circ\!D\si_{k;1i}\big|_{D\si_{k;1i}(\Dom(\Psi_{k;1}))} \quad\forall\,i\!\in\![k],\,\si\!\in\!\bS_{[k]-[1]}.\EE Since $D_{k-1}$ is equivariant with respect to the natural identification of~$\bS_{k-1}$ with~$\bS_{[k]-[1]}$, the above condition and~\eref{NCSCCregCond_e2} are equivalent~to \BE{Psik1i_e2b}\begin{aligned} \big\{\!\Psi_{k;1}\!\circ\!D_{k-1}\big\} &=\big\{\!\Psi_{k;1}\!\circ\!D_{k-1}\big\}\!\circ\!D\!\si\big|_{D_{k-1}^{\,-1}(\Dom(\Psi_{k;1}))} &\quad&\forall\,\si\!\in\!\bS_{k-1},\\ \Psi_{k;i}&=\Psi_{k;1}\!\circ\!D\si_{k;1i}\big|_{D\si_{k;1i}(\Dom(\Psi_{k;1}))} &\quad&\forall\,i\!\in\![k]\,. \end{aligned}\EE The condition~\eref{Psik1i_e2} implies \eref{NCSCCregCond_e0} for all $i_1,i_2\!\in\![k]\!-\!\{1\}$. Suppose $k\!\ge\!2$ and~\eref{Psik1i_e2} holds. 
The full condition~\eref{NCSCCregCond_e0} is then equivalent~to $$q\!\circ\!\Psi_{k;1}\big|_{\cN_2q_k\cap\Dom(\Psi_{k;1})} =q\!\circ\!\Psi_{k;1}\!\circ\!D\si_{k;12} \big|_{\cN_1q_k\cap D\si_{k;12}(\Dom(\Psi_{k;1}))}\,.$$ By the middle statement in~\eref{psikdfn_e} for $k\!-\!1$ instead of~$k$ and by the denseness of $V_{\io}^{(1)}\!-\!V_{\io}^{(2)}$ in \hbox{$V_{\io}^{(1)}\!\subset\!\wt{X}$}, \eref{iopsiprop_e2} implies that the above condition and~\eref{NCSCCregCond_e0} are equivalent~to \BE{Psik1i_e4}\begin{split} &\psi\!\circ\!\big\{\!\Psi_{k;1}\!\circ\!D_{k-1}\big\} \big|_{\cN_{k-1;1}\io\cap D_{k-1}^{\,-1}(\Dom(\Psi_{k;1}))}\\ &\hspace{1in}=\big\{\!\Psi_{k;1}\!\circ\!D_{k-1}\big\}\!\circ\!D\psi_{k-1} \big|_{D\psi_{k-1}(\cN_{k-1;1}\io\cap D_{k-1}^{\,-1}(\Dom(\Psi_{k;1})))}\,. \end{split}\EE Thus, a regularization $(\Psi_{k;i})_{k\in\Z^+,i\in[k]}$ for $(\io,\psi)$ as in Definition~\ref{NCTransConfregul_dfn} determines a refined regularization $(\wh\Psi_k\!\equiv\!\Psi_{k+1;1}\!\circ\!D_{k;1})_{k\in\Z^{\ge0}}$ for~$\io$ as in Definition~\ref{NCTransCollregul_dfn} satisfying~\eref{NCCgl_e2}.\\ \noindent If a tuple $(\wh\Psi_k)_{k\in\Z^{\ge0}}$ is a refined regularization for~$\io$, then $$\wh\Psi_{k-1}=\wh\Psi_{k-1}\!\circ\!D\!\si\big|_{\Dom(\wh\Psi_{k-1})} \qquad\forall~\si\!\in\!\bS_{k-1},~k\!\in\!\Z^+\,.$$ Via the second identity in~\eref{Psik1i_e2b} with $$\Psi_{k;1}\equiv \wh\Psi_k\!\circ\!D_{k-1}^{-1}\big|_{D_{k-1}(\Dom(\wh\Psi_{k-1}))},$$ the tuple $(\wh\Psi_k)_{k\in\Z^{\ge0}}$ thus determines a tuple $(\Psi_{k;i})_{k\in\Z^+,i\in[k]}$ of regularizations for the immersions~$q_k^{(i)}$ satisfying~\eref{NCSCCregCond_e2}. If the first tuple satisfies~\eref{NCCgl_e2}, the second tuple satisfies~\eref{Psik1i_e4} and thus~\eref{NCSCCregCond_e0}.\\ \noindent Suppose $\fR$ is a tuple as in~\eref{NCSCCregul_e} so that $(\Psi_{k;i})_{k\in\Z^+,i\in[k]}$ is a regularization for~$(\io,\psi)$. 
Let $(\wh\Psi_k)_{k\in\Z^{\ge0}}$ be the corresponding refined regularization for~$\io$ and $$\big(\wh\rho_{k;i},\wh\na^{(k;i)}\big) =\big\{D_i\phi_k\big\}^*\big(\rho_{k+1;i+1},\na^{(k+1;i+1)}\big) \quad\forall\,i\!\in\![k],\,k\!\in\!\Z^+.$$ Since $D_{k-1}$ is equivariant with respect to the natural identification of~$\bS_{k-1}$ with~$\bS_{[k]-[1]}$, the condition that~\eref{NCSCCregul_e0} is a refined $\wt\om$-regularization implies that \BE{Psik1i_e12a} \big(\rho_{k;i},\na^{(k;i)}\big)=\big\{\!D_i\si\big\}^{\!*} \big(\rho_{k;\si(i)},\na^{(k;\si(i))}\big) ~~\forall\,i\!\in\![k],\,\si\!\in\!\bS_{[k]-[1]},\,k\!\in\!\Z^+\,.\EE The condition~\eref{NCSCCregul_e3} is equivalent~to $$\big(\rho_{k;i},\na^{(k;i)}\big) = \big\{\!D_i\si_{k;1i}\big\}^{\!*} \big\{\!D_1\si\big\}^{\!*}\big\{\!D_1\si_{k;12}\big\}^{\!*} \big(\rho_{k;2},\na^{(k;2)}\big) \quad\forall\,i\!\in\![k],\,\si\!\in\!\bS_{[k]-[1]},\,k\!\in\!\Z^+.$$ This condition is in turn equivalent~to \BE{Psik1i_e12b}\begin{aligned} \big(\rho_{k;2},\na^{(k;2)}\big) &=\big\{\!D_2\si_{k;12}\big\}^{\!*}\big\{\!D_1\si\big\}^{\!*} \big\{\!D_1\si_{k;12}\big\}^{\!*}\big(\rho_{k;2},\na^{(k;2)}\big) &~~&\forall\,\si\!\in\!\bS_{[k]-[1]},\,k\!\in\!\Z^+,\\ \big(\rho_{k;i},\na^{(k;i)}\big)& =\big\{\!D_i\si_{k;1i}\big\}^{\!*}\big\{\!D_1\si_{k;12}\big\}^{\!*} \big(\rho_{k;2},\na^{(k;2)}\big)&~~&\forall\,i\!\in\![k],\,k\!\in\!\Z^+\,. 
\end{aligned}\EE Since $\si_{k;12}$ commutes with all elements of $\bS_{[k]-[2]}$ and $$\si_{k;12}\si_{k;2i}\si_{k;12}=\si_{k;1i}=\si_{k;2i}\si_{k;12}\si_{k;2i} \quad\forall\,i\!\in\![k]\!-\![2],\,k\!\in\!\Z^+,$$ the condition \eref{Psik1i_e12a} implies that the first condition in~\eref{Psik1i_e12b} is equivalent to $$\big(\rho_{k;i},\na^{(k;i)}\big)=\big\{\!D_i\si_{k;12}\big\}^{\!*}\big(\rho_{k;i},\na^{(k;i)}\big) \quad\forall\,i\!\in\![k]\!-\![2],\,k\!\in\!\Z^+\,.$$ By the middle statement in~\eref{psikdfn_e} for $k\!-\!1$ instead of~$k$, the above condition and~\eref{NCSCCregul_e3} are equivalent~to $$\big(\wh\rho_{k-1;i},\wh\na^{(k-1;i)}\big) =\big\{D_i\psi_{k-1}\big\}^*\big(\wh\rho_{k-1;i},\wh\na^{(k-1;i)}\big) \qquad \forall~i\!\in\![k\!-\!1]\!-\!\{1\},~k\!\in\!\Z^+.$$ Thus, an $\wt\om$-regularization for~$(\io,\psi)$ as in Definition~\ref{NCSCCregul_dfn} determines a refined $\wt\om$-regularization~\eref{NCCgl_e0} for~$\io$ satisfying~\eref{NCCgl_e1}.\\ \noindent Conversely, a refined $\wt\om$-regularization~\eref{NCCgl_e0} determines a tuple~\eref{NCSCCregul_e} such that $(\Psi_{k;i})_{k\in\Z^+,i\in[k]}$ is a regularization for~$(\io,\psi)$ and the associated tuple~\eref{NCSCCregul_e0} is a refined $\wt\om$-regularization for the immersion~$\io$ via the second identity in~\eref{Psik1i_e12b} with $$\big(\rho_{k;2},\na^{(k;2)}\big)\equiv \big\{(D_1\phi_{k-1})^{-1}\}^*\big(\wh\rho_{k-1;1},\wh\na^{(k-1;1)}\big).$$ If the tuple~\eref{NCCgl_e0} satisfies~\eref{NCCgl_e1}, then the tuple~\eref{NCSCCregul_e} satisfies~\eref{NCSCCregul_e3}. \end{proof} \subsection{Examples} \label{eg_subs} \noindent We now give some examples of NC divisors and varieties. \begin{eg}\label{NCD_eg0} Let $X$ be a manifold and $\{V_i\}_{i\in S}$ be a finite transverse collection of closed submanifolds of~$X$ of codimension~2. 
The associated normalization $$\io\!:\wt{V}\equiv\bigsqcup_{i\in S}\!V_i \lra X$$ as in Lemma~\ref{NCD_lmm} is induced by the inclusions $V_i\!\lra\!X$ and $$\wt{V}_{\io}^{(k)}= \bigsqcup_{\begin{subarray}{c}I\subset S\\ |I|=k\end{subarray}} \bigsqcup_{\tau\in\Aut(I)} \!\!\!\!\!\!\big\{\big(x,(\tau(i),x)_{i\in I}\big)\!:\,x\!\in\!V_I\big\}$$ is the disjoint union of $k!$ copies of the disjoint union of the submanifolds~$V_I$ with $|I|\!=\!k$. \end{eg} \begin{eg}\label{NCC_eg0} Let $\X\!\equiv\!(X_I)_{I\in\cP^*(N)}$ be an $N$-fold transverse configuration in the sense of Definition~\ref{TransConf_dfn1} such that $X_{ij}$ is a closed submanifold of~$X_i$ of codimension~2 for all $i,j\!\in\![N]$ distinct, $$\wt{X}=\bigsqcup_{i\in[N]}\!\!\{i\}\!\times\!X_i\,, \qquad \wt{V}= \bigsqcup_{i\in[N]}\bigsqcup_{j\in[N]-i}\!\!\!\!\{(i,j)\}\!\times\!X_{ij},$$ and $q\!:\wt{X}\!\lra\!X_{\eset}$ be the natural quotient map. The normalization of the preimage of the singular locus $X_{\prt}\!\subset\!X_{\eset}$ in~$\wt{X}$, $$\io\!:\wt{V}\lra q^{-1}\big(X_{\prt}\big)=\bigsqcup_{i\in[N]}\bigcup_{j\in[N]-i}\!\!\!\!\{i\}\!\times\!X_{ij}\,,$$ is induced by the inclusions $X_{ij}\!\lra\!X_i$. The associated involution~is $$\psi\!:\wt{V}\lra\wt{V}, \qquad (i,j,x)\lra (j,i,x).$$ \end{eg} \begin{eg}\label{NCDvsC_eg} An NC symplectic divisor $V$ in $(X,\om)$ gives rise to an NC symplectic variety as follows. Let $\io\!:\wt{V}\!\lra\!X$ be the associated closed transverse immersion as in Lemma~\ref{NCD_lmm}, $$(\wt{X},\wt\om)=(X,\om)\sqcup(\wt{V}\!\times\!\C,\pi_1^*\io^*\om\!+\!\pi_2^*\om_{\C}), \qquad \wt{V}'=\{1\}\!\times\!\wt{V}\sqcup\{2\}\!\times\!\wt{V}\sqcup\wt{V}_{\io}^{(2)}\!\times\!\C,$$ where $\pi_1,\pi_2\!:\wt{V}\!\times\!\C\!\lra\!\wt{V},\C$ are the two projection maps and $\om_{\C}$ is the standard symplectic form on~$\C$. 
We define a closed transverse immersion $\io'\!:\wt{V}'\!\lra\!\wt{X}$ and an involution~$\psi$ on~$\wt{V}'$ by \begin{alignat*}{3} \io'(1,\wt{v})&=\io(\wt{v})\in X, &\quad \io'(2,\wt{v})&=(\wt{v},0)\in\wt{V}\!\times\!\C,&\quad \io'\big((x,\wt{v}_1,\wt{v}_2),c\big)&=(\wt{v}_1,c)\in\wt{V}\!\times\!\C,\\ \psi(1,\wt{v})&=(2,\wt{v}), &\quad \psi(2,\wt{v})&=(1,\wt{v}),&\quad \psi\big((x,\wt{v}_1,\wt{v}_2),c\big)&=\big((x,\wt{v}_2,\wt{v}_1),c\big); \end{alignat*} see Figure~\ref{NCDvsC_fig}. The pair $(\io',\psi)$ satisfies the three conditions in~\eref{iopsiprop_e1} with $\io$ replaced by~$\io'$ and thus determines an NC variety~$X_{\io',\psi}$. Since $\psi^*\io'^*\wt\om\!=\!\io'^*\wt\om$, $X_{\io',\psi}$ is an NC symplectic variety by Corollary~\ref{NCC_crl}. \end{eg} \begin{figure} \begin{pspicture}(-4,-1.5)(11,4.5) \psset{unit=.3cm} \psarc[linewidth=.1](2,11){3}{100}{260}\psarc[linewidth=.1](12,11){3}{100}{260} \pscircle*(-.12,13.12){.25}\pscircle*(-.12,8.88){.25} \pscircle*(9.88,13.12){.25}\pscircle*(9.88,8.88){.25} \rput(-3.5,11){$\{1\}\!\times\!\wt{V}$}\rput(11.6,11){$\{2\}\!\times\!\wt{V}$} \psline[linewidth=.05]{<->}(0,11)(8,11)\rput(4,11.8){$\psi$} \psline[linewidth=.05]{->}(0,7.5)(0,3.5)\rput(0.7,5.5){$\io$} \psline[linewidth=.05]{->}(11,7.5)(18.7,0.5)\rput(15.6,4.5){$\io'$} \psline[linewidth=.05]{->}(25,8.5)(25,3.5)\rput(25.7,6){$\io'$} \pscircle*(20,11){.3}\pscircle*(30,11){.3} \psline[linewidth=.1](20,8)(20,14)\psline[linewidth=.1](30,8)(30,14) \rput(25,15){$\wt{V}_{\io}^{(2)}\!\times\!\C$} \pnode(24.5,14){A1}\pnode(20.5,12.5){B1}\ncline[linewidth=.07]{->}{A1}{B1} \pnode(25.5,14){A1}\pnode(29.5,12.5){B1}\ncline[linewidth=.07]{->}{A1}{B1} \psline[linewidth=.05]{<->}(21,10.6)(29,10.6)\rput(25,11.4){$\psi$} \psline[linewidth=.1](-1,-1)(1,1)\psline[linewidth=.1](-1,1)(1,-1) \pscircle*(0,0){.3}\psarc[linewidth=.1](-2,0){1.41}{45}{-45} \psline[linewidth=.05](-5,-3)(5,-3)\psline[linewidth=.05](-5,3)(5,3) 
\psline[linewidth=.05](-5,-3)(-5,3)\psline[linewidth=.05](5,3)(5,-3) \rput(4,1){$X$}\rput(-3.5,-1.8){$V$}\rput(2.9,-1.8){$V_{\io}^{(2)}$} \pnode(0,-.5){B}\pnode(1.7,-2){A} \nccurve[linewidth=.07,angle=180,angleB=-90,ncurv=1]{->}{A}{B} \pscircle*(20,0){.3}\pscircle*(30,0){.3} \psline[linewidth=.1](20,-3)(20,3)\psline[linewidth=.1](30,-3)(30,3) \psline[linewidth=.1](18,0)(32,0) \rput(25,2){$\wt{V}\!\times\!\C$}\rput(25,-4){$\io^{-1}(V_{\io}^{(2)})\!\times\!\C$} \pnode(24.5,-3){A1}\pnode(20.5,-1.5){B1}\ncline[linewidth=.07]{->}{A1}{B1} \pnode(25.5,-3){A1}\pnode(29.5,-1.5){B1}\ncline[linewidth=.07]{->}{A1}{B1} \rput(33.5,1){$\wt{V}\!\times\!\{0\}$} \end{pspicture} \caption{The normalization $\wt{X}\!\equiv\!X\!\sqcup\!(\wt{V}\!\times\!\C)$ of the NC variety~$X_{\io',\psi}$ associated with an NC divisor $V\!\subset\!X$ as in Example~\ref{NCDvsC_eg}.} \label{NCDvsC_fig} \end{figure} \begin{eg}\label{NCC_eg1} A generalization of the 2-fold SC symplectic configuration of Example~\ref{NCC_eg0} is obtained by taking two disjoint copies, $V_1$ and~$V_2$, of a smooth symplectic divisor~$V$ in the same symplectic manifold~$(\wt{X},\wt\om)$. Let $\psi\!:V_1\!\lra\!V_2$ be a symplectomorphism and $\psi\!:V_2\!\lra\!V_1$ be its inverse; thus, $\psi$ is an involution on $\wt{V}\!\equiv\!V_1\!\sqcup\!V_2$. In~this case, the normalization $$\io\!:\wt{V}\lra V\!\equiv\!V_1\!\cup\!V_2\subset \wt{X}$$ is just the inclusion into~$\wt{X}$. The pair~$(\io,\psi)$ satisfies the three conditions in~\eref{iopsiprop_e1} and thus determines an NC variety~$X_{\io,\psi}$; it is obtained by identifying $V_1$ with~$V_2$ in~$\wt{X}$ via~$\psi$. The singular locus~$X_{\prt}$ in this case can be identified with~$(V,\om|_V)$. 
\end{eg} \begin{eg}\label{NCC_eg2} A more elaborate 2-fold NC symplectic variety is obtained by taking~$V$ in Example~\ref{NCC_eg1} to be any closed symplectic submanifold of~$(\wt{X},\wt\om)$ and $\psi\!:V\!\lra\!V$ to be any symplectomorphism without fixed points such that $\psi\!\circ\!\psi\!=\!\id_V$. The normalization $$\io\!:\wt{V}\!\equiv\!V\lra V\subset\wt{X}$$ is again just the inclusion. The pair~$(\io,\psi)$ satisfies the three conditions in~\eref{iopsiprop_e1} and thus determines an NC variety~$X_{\io,\psi}$; it is obtained by ``folding'' $\wt{X}$ along~$V$ as directed by~$\psi$. The singular locus~$X_{\prt}$ in this case is the quotient of~$V$ by the $\Z_2$-action determined by~$\psi$. \end{eg} \begin{figure} \begin{pspicture}(-4,-2.5)(11,2) \psset{unit=.3cm} \pscircle[linewidth=.1](0,-3.08){4.14}\rput(8,-5.38){\sm{$\wt{V}$}} \psarc[linewidth=.06]{->}(0,-3.08){2.5}{60}{240}\rput(-1,-2.5){\sm{$\psi$}} \pscircle*(0,1.06){.25}\pscircle*(0,-7.22){.25} \pscircle*(3.59,-1.01){.25}\pscircle*(3.59,-5.15){.25} \pscircle*(-3.59,-1.01){.25}\pscircle*(-3.59,-5.15){.25} \rput(4.8,-1){\sm{$\wt{v}_{12}$}}\rput(0.1,2){\sm{$\wt{v}_{13}$}} \rput(-4.7,-1){\sm{$\wt{v}_{23}$}}\rput(-4.3,-5.6){\sm{$\wt{v}_{21}$}} \rput(0.1,-8.2){\sm{$\wt{v}_{31}$}}\rput(4.8,-5.2){\sm{$\wt{v}_{32}$}} \psline[linewidth=.06](3.69,-3.08)(4.49,-3.08)\psline[linewidth=.06](-3.69,-3.08)(-4.49,-3.08) \psline[linewidth=.06](2.27,.87)(1.77,0)\psline[linewidth=.06](2.27,-6.93)(1.77,-6.16) \psline[linewidth=.06](-2.27,.87)(-1.77,0)\psline[linewidth=.06](-2.27,-6.93)(-1.77,-6.16) \rput(4.9,-3.08){\sm{$r$}}\rput(-4.9,-2.9){\sm{$\bar{r}$}} \rput(2.8,1.2){\sm{$\bar{q}$}}\rput(-2.8,-7.26){\sm{$q$}} \rput(2.7,-7.26){\sm{$\bar{p}$}}\rput(-2.6,1.1){\sm{$p$}} \psline[linewidth=.1](30,.65)(30,-5.58)\psline[linewidth=.1](30.65,0)(24.42,0) \psline[linewidth=.1](30.85,-5.23)(24.77,.85) \psarc(30.65,.65){.65}{270}{180}\psarc(30.5,-5.58){.5}{180}{45} \psarc(24.42,.5){.5}{45}{270} 
\pscircle*(30,0){.3}\pscircle*(30,-4.38){.3}\pscircle*(25.62,0){.3} \rput(29.3,-.8){\sm{$\wt{x}_1$}}\rput(25.4,-.9){\sm{$\wt{x}_2$}}\rput(29.2,-4.8){\sm{$\wt{x}_3$}} \psline[linewidth=.06](29.6,-2.19)(30.4,-2.19)\rput(30.9,-2.19){\sm{$r$}} \psline[linewidth=.06](24.42,.5)(23.5,.88)\rput(23.1,1.1){\sm{$\bar{r}$}} \psline[linewidth=.06](27.81,-.4)(27.81,.4)\rput(27.81,1){\sm{$p$}} \psline[linewidth=.06](27.53,-2.47)(28.09,-1.91)\rput(27,-2.7){\sm{$q$}} \psline[linewidth=.06](30.83,.83)(31.39,1.39)\rput(31.8,1.5){\sm{$\bar{q}$}} \psline[linewidth=.06](30.5,-5.58)(30.88,-6.5)\rput(31.4,-6.5){\sm{$\bar{p}$}} \psline[linewidth=.1]{->}(10,-3.08)(20,-3.08)\rput(15,-2.4){$\io$} \rput(23,-5.38){$V\subset\wt{X}$} \end{pspicture} \caption{The normalization of an NC variety with $\io(\wt{v}_{ij})\!=\!\wt{x}_i$ and $\psi(\wt{v}_{ij})\!=\!\wt{v}_{ji}$.} \label{NCC_fig} \end{figure} \noindent A ``3-fold" version of Example~\ref{NCC_eg2} is shown in Figure~\ref{NCC_fig}. The topological space~$X_{\io,\psi}$ is obtained from~$\wt{X}$ by folding the NC divisor~$V$ as indicated by the action of~$\psi$ on its normalization~$\wt{V}$. This folding is not induced by an involution on~$V$ itself; while most points of~$V$ are identified in pairs, the three double points are identified into~one. \section{On the proof of Theorem~\ref{NCC_thm}} \label{NCCpf_sec} \noindent We now explain why the proof of \cite[Theorem~2.17]{SympDivConf}, which is outlined in \cite[Figure~2]{SympDivConf}, extends to Theorem~\ref{NCC_thm}. This proof revolves around a weaker version of the notion of regularization of Definition~\ref{TransConfregul_dfn}\ref{SCCreg_it}, which is readily adaptable to Definition~\ref{NCCregul_dfn}. \begin{dfn}\label{ConfRegulLoc_dfn} Suppose $\X\!\equiv\!\{X_I\}_{I\in\cP^*(N)}$ is a transverse configuration as in Definition~\ref{TransConf_dfn1}, \hbox{$I^*\!\in\!\cP^*(N)$}, and $U\!\subset\!X_{I^*}$ is an open subset. 
A \sf{regularization for~$U$ in~$\X$} is a tuple $(\Psi_i)_{i\in I^*}$, where $\Psi_i$ is a regularization for~$U$ in~$X_i$ in the sense of Definition~\ref{smreg_dfn}, such that \begin{alignat*}{2} \Psi_i\big(\cN_{I^*;I}\!\cap\!\Dom(\Psi_i)\big)&=X_I\!\cap\!\Im(\Psi_i) &\quad &\forall\,i\!\in\!I\!\subset\!I^*,\\ \Psi_{i_1}\big|_{\cN_{I^*;i_1i_2}\cap\Dom(\Psi_{i_1})} &=\Psi_{i_2}\big|_{\cN_{I^*;i_1i_2}\cap\Dom(\Psi_{i_2})} &\quad &\forall\,i_1,i_2\!\in\!I^*\,. \end{alignat*} \end{dfn} \begin{dfn}\label{LocalRegul_dfn} Suppose $\X\!\equiv\!\{X_I\}_{I\in\cP^*(N)}$ is a transverse configuration and $W\!\subset\!X_{\eset}$ is an open subset. A \sf{weak $(\om_i)_{i\in[N]}$-regularization for~$\X$ over~$W$} is a tuple as in~\eref{SCCregdfn_e0} such~that \begin{enumerate}[label=$\bu$,leftmargin=*] \item for every $I\!\in\!\cP^*(N)$, the tuple $(\Psi_{I;i})_{i\in I}$ is a regularization for $X_I\!\cap\!W$ in~$\X$; \item for all $i\!\in\!I\!\subset\![N]$, the tuple $((\rho_{I;j},\na^{(I;j)})_{j\in I-i},\Psi_{I;i})$ is an $\om_i$-regularization for $X_I\!\cap\!W$ in~$X_i$ in the sense of Definition~\ref{sympreg1_dfn}; \item for all $i\!\in\!I'\!\subset\!I\!\subset\![N]$, the bundle isomorphism~$\fD\Psi_{I;i;I'}$ associated with~$\fD\Psi_{I;i}$ as in~\eref{wtPsiIIdfn_e} is a product Hermitian isomorphism and \BE{LocalRegul_e2} \Psi_{I;i}\big|_{\Dom(\Psi_{I;i})\cap\fD\Psi_{I;i;I'}^{\,-1}(\Dom(\Psi_{I';i}))} =\Psi_{I';i}\circ\fD\Psi_{I;i;I'}|_{\Dom(\Psi_{I;i})\cap\fD\Psi_{I;i;I'}^{\,-1}(\Dom(\Psi_{I';i}))}\,.\EE \end{enumerate} \end{dfn} \vspace{.1in} \noindent Thus, an $(\om_i)_{i\in[N]}$-regularization for~$\X$ in the sense of Definition~\ref{TransConfregul_dfn}\ref{SCCreg_it} is a weak $(\om_i)_{i\in[N]}$-regularization for~$\X$ over $W\!=\!X_{\eset}$ such that $$\Dom(\Psi_{I;i})=\fD\Psi_{I;i;I'}^{\,-1}(\Dom(\Psi_{I';i})) \qquad\forall\,i\!\in\!I'\!\subset\!I\!\subset\![N],~|I'|\!\ge\!2,$$ as required by the first condition in~\eref{overlap_e}. 
If $W,W_1,W_2\!\subset\!X_{\eset}$ are open subsets with \hbox{$W\!\subset\!W_1\!\cap\!W_2$} and $(\fR_t^{(1)})_{t\in B}$ and $(\fR_t^{(2)})_{t\in B}$ are families of weak regularizations for~$\X$ over~$W_1$ and~$W_2$, respectively, we define $$\big(\fR_t^{(1)}\big)_{t\in B} \cong_W\big(\fR_t^{(2)}\big)_{t\in B}$$ if the restrictions of the two regularizations to~$W$ agree on the level of germs; see the first part of \cite[Section~5.1]{SympDivConf} for a formal definition.\\ \noindent Let $X$ be an NC variety with an NC atlas $(U_y,\X_y,\vph_y)_{y\in\cA}$ as in Definition~\ref{ImmTransConf_dfn}. For each open subset $W\!\subset\!X$, let $$\cA(W)=\big\{y\!\in\!\cA\!:\,U_y\!\subset\!W\big\}.$$ In particular, $(U_y,\X_y,\vph_y)_{y\in\cA(W)}$ is an NC atlas for~$W$. For a symplectic structure~$\om$ on~$X$ as in Definition~\ref{NCCregul_dfn}\ref{NCCregul_it1} and an open subset $W\!\subset\!X$, a \sf{weak $\om$-regularization for~$X$ over~$W$} is a tuple~$(\fR_y)_{y\in\cA(W)}$ as in Definition~\ref{NCCregul_dfn}\ref{NCCregul_it1} so that each~$\fR_y$ is a weak $(\om_{y;i})_{i\in[N_y]}$-regularization for $\X_y$ and \eref{NCCregulover_e} holds for all $y,y'\!\in\!\cA(W)$ and $x\!\in\!U_{yy';x}\!\subset\!U_y\!\cap\!U_{y'}$ as in Definition~\ref{ImmTransConf_dfn0}\ref{NCatlas_it}.\\ \noindent Suppose $(\wt{X},\wt\om)$ is a symplectic manifold, $(\io,\psi)$ is a compatible pair as in~\eref{iopsidfn_e} so that~$\io$ is a closed transverse immersion of codimension~2 and $\psi$ is a smooth involution, and $W$ is an open subset of the NC variety $X\!=\!X_{\io,\psi}$. 
From the global perspective of Definition~\ref{NCSCCregul_dfn}, a \sf{weak $\om$-regularization for~$X$ over~$W$} is a tuple as in~\eref{NCSCCregul_e} satisfying~\eref{NCSCCregul_e3} such~that \begin{enumerate}[label=$\bu$,leftmargin=*] \item the tuple $(\Psi_{k;i})_{k\in\Z^+,i\in[k]}$ is a regularization for the restriction of~$(\io,\psi)$ to $\io^{-1}(q^{-1}(W))$, \item the tuple~\eref{NCSCCregul_e0} is a refined $\wt\om$-regularization for the restriction of~$\io$ to $\io^{-1}(q^{-1}(W))$, except that the first condition in~\eref{NCoverlap_e} with $\Psi_k\!\equiv\!\Psi_{k+1;1}\!\circ\!D_{k;1}$ may not hold and the second is required to hold only over the intersection of the domains of the two sides, i.e.~over $\Dom(\Psi_k)\!\cap\!\fD\Psi_{k;k'}^{-1}(\Dom(\Psi_{k'}))$. \end{enumerate} By Lemma~5.8 and Corollary~5.9 in~\cite{SympDivConf}, weak regularizations and equivalences between them in the simple NC~setting can be cut down to regularizations and equivalences between regularizations. The same reasoning applies in the arbitrary NC~setting viewed from the global perspective of either Section~\ref{NCCgl_subs} or~\ref{NCgl2_subs}. Thus, it is sufficient to establish Theorem~\ref{NCC_thm} with {\it regularizations} replaced by {\it weak regularizations} everywhere.\\ \noindent The last task is readily accomplished by combining the proof of \cite[Theorem~2.17]{SympDivConf} with the local perspective of Section~\ref{NCCloc_subs} via an inductive construction. Let $$\big(U_y,\X_y\!\equiv\!(X_{y;I})_{I\in\cP^*(N_y)}, \vph_y\!:U_y\!\lra\!X_{y;\eset}\big)_{y\in\Z^+}$$ be a locally finite collection of NC charts covering~$X$ and $(U_y')_{y\in\Z^+}$ be an open cover of~$X$ such that $\ov{U_y'}\!\subset\!U_y$ for every $y\!\in\!\Z^+$. For each $y^*\!\in\!\Z^+$, the tuple $(\fR_{t;y})_{t\in N(\prt B),y\in\cA(U_{y^*})}$ is an $(\om_t)_{t\in N(\prt B)}$-family of weak regularizations for~$X$ over~$U_{y^*}$ in the local sense. 
It determines an $(\om_{t;y^*;i})_{t\in N(\prt B),i\in[N_y^*]}$-family $(\fR_{y^*;t})_{t\in N(\prt B)}$ of weak regularizations for~$\X_{y^*}$ over $X_{y^*;\eset}$ in the sense of Definition~\ref{LocalRegul_dfn}.\\ \noindent Suppose $y^*\!\in\!\Z^+$ and we have constructed \begin{enumerate}[label=(I\arabic*),leftmargin=*] \item\label{neighbX_it} an open neighborhood~$W_{y^*}$ of $$\ov{U_{y^*}^<}\equiv\bigcup_{y<y^*}\!\!\ov{U_y'}\subset X\,,$$ \item\label{neighbB_it} a neighborhood $N_{y^*}(\prt B)$ of $\ov{N'(\prt B)}$ in $N(\prt B)$, \item a smooth family $(\mu_{t,\tau})_{t\in B,\tau\in\bI}$ of 1-forms on~$X$ such~that \BE{NC_e0}\mu_{t,0}=0, ~~ \supp\big(\mu_{\cdot,\tau}\big)\!\subset\! \big(B\!-\!N_{y^*}(\prt B)\big)\!\times\!(X\!-\!X^*), ~~ \om_{t,\tau}\!\equiv\!\om_t\!+\!\nd\mu_{t,\tau}\in \Symp^+(X)\EE for all $t\!\in\!B$ and $\tau\!\in\!\bI$, \item an $(\om_{t,1})_{t\in B}$-family $(\fR_{t;y}')_{t\in B,y\in\cA(W_{y^*})}$ of weak regularizations for~$X$ over~$W_{y^*}$ such~that \BE{NC_e3}\big(\fR'_{t;y}\big)_{t\in N_{y^*}(\prt B),y\in\cA(W_{y^*})} \cong_{W_{y^*}} \!\!\big(\fR_{t;y}\big)_{t\in N_{y^*}(\prt B),y\in\cA(W_{y^*})}.\EE \end{enumerate} This family determines an $(\om_{t,1;y^*;i})_{t\in B,i\in[N_y^*]}$-family $(\fR_{y^*;t}')_{t\in B}$ of weak regularizations for~$\X_{y^*}$ over $$ X_{y^*;\eset}^<\equiv \vph_{y^*}\big(U_{y^*}\!\cap\!W_{y^*}\big) \subset X_{y^*;\eset}$$ such that $$\big(\fR'_{y^*;t}\big)_{t\in N_{y^*}(\prt B)} \cong_{X_{y^*;\eset}^<} \!\!\big(\fR_{y^*;t}\big)_{t\in N_{y^*}(\prt B)}.$$ \vspace{.2in} \noindent Let \hbox{$N_{y^*+1}(\prt B)\!\subset\!N_{y^*}(\prt B)$} be a neighborhood of~$\ov{N'(\prt B)}$, $W'\!\subset\!W_{y^*}$ be a neighborhood of $\ov{U_{y^*}^<}$, and $U'\!\subset\!U''\!\subset\!U_{y^*}$ be neighborhoods of~$\ov{U_{y^*}'}$ such~that $$\ov{N_{y^*+1}(\prt B)}\subset N_{y^*}(\prt B), \qquad \ov{W'}\subset W_{y^*}, \qquad \ov{U'}\subset U'', \quad\hbox{and}\quad\ov{U''}\subset U_{y^*}\,.$$ Define \begin{gather*} 
W_{y^*+1}=U'\!\cup\!W',\qquad X_{y^*;i}''=X_{y^*;i}\cap\!\vph_{y^*}\!(U'')\quad\forall\,i\!\in\![N_{y^*}], \\ X_{y^*;\eset}'^<=\vph_{y^*}\!\big(U_{y^*}'\!\cap\!W\big)\subset X_{y^*;\eset}, \qquad X_{y^*}^*=\vph_{y^*}\!\big(U_{y^*}\!\cap\!X^*\big)\,. \end{gather*} In particular, \ref{neighbX_it} and~\ref{neighbB_it} hold with $y^*$ replaced by $y^*\!+\!1$. By repeated applications of \cite[Proposition~5.3]{SympDivConf} as in the proof of \cite[Theorem~2.17]{SympDivConf} at the end of \cite[Section~5.1]{SympDivConf}, we obtain \begin{enumerate}[label=$\bullet$,leftmargin=*] \item a smooth family $(\mu_{y^*;t,\tau;i}')_{t\in B,\tau\in\bI,i\in[N_{y^*}]}$ of 1-forms on~$X_{y^*;\eset}$ such~that \begin{gather*} \mu_{y^*;t,0;i}'=0, \quad \supp\big(\mu_{y^*;\cdot,\tau;i}'\big)\subset \big(B\!-\!N_{y^*+1}(\prt B)\big)\!\times\!\big(X_{y^*;i}''\!-\!X_{y^*;\eset}'^<\!\cup\!X_{y^*}^*\big) \quad \forall~t\!\in\!B,\,\tau\!\in\!\bI,\,i\!\in\![N_{y^*}],\\ \big(\om_{y^*;t,\tau;i}'\equiv\om_{y^*;t,1;i}\!+\!\nd\mu_{y^*;t,\tau;i}'\big)_{i\in[N_{y^*}]} \in \Symp^+(\X_{y^*}) \quad \forall~t\!\in\!B,\,\tau\!\in\!\bI, \end{gather*} \item an $(\om'_{y^*;t,1;i})_{t\in B,i\in[N_{y^*}]}$-family $(\wt\fR_{y^*;t})_{t\in B}$ of weak regularizations for~$\X_{y^*}$ over $\vph_{y^*}(U')$ such~that $$\big(\wt\fR_{y^*;t}\big)_{t\in N_{y^*+1}(\prt B)} \cong_{\vph_{y^*}(U')}\! \!\!\big(\fR_{y^*;t}\big)_{t\in N_{y^*+1}(\prt B)}\,, \quad \big(\wt\fR_{y^*;t}\big)_{t\in B} \cong_{\vph_{y^*}\!(U'\cap W')} \!\big(\fR_{y^*;t}'\big)_{t\in B}\,\,.$$ \end{enumerate} \vspace{.1in} \noindent Since $X_{y^*;i}''\!\subset\!\vph_{y^*}\!(U'')$, the support of $\mu_{y^*;t,\tau;i}'$ is contained in $\vph_{y^*}(U'')$. 
For all $t\!\in\!B$ and $\tau\!\in\!\bI$, the tuple $(\mu_{y^*;t,\tau;i}')_{i\in[N_{y^*}]}$ thus determines a 1-form $\mu_{t,\tau}'$ on~$X$ such~that \begin{gather}\notag \vph_{y^*}^{\,*}\mu_{t,\tau}'=\big(\mu_{y^*;t,\tau;i}'\big)_{i\in[N_{y^*}]}\,,\\ \label{NC_e7} \mu_{t,0}'=0, ~~ \supp\big(\mu_{\cdot,\tau}'\big)\!\subset\! \big(B\!-\!N_{y^*+1}(\prt B)\big)\!\times\!(U_{y^*}\!-\!X^*), ~~ \om_{t,\tau}'\!\equiv\!\om_{t,1}\!+\!\nd\mu_{t,\tau}'\in \Symp^+(X). \end{gather} These forms vary smoothly with $t\!\in\!B$ and $\tau\!\in\!\bI$.\\ \noindent Let $\be_1,\be_2\!:\bI\!\lra\!\bI$ be smooth non-decreasing functions such that $$\be_1(\tau)=\begin{cases}\tau,&\hbox{if}~1\!-\!\tau\!\ge\!2^{-y^*};\\ 1,&\hbox{if}~1\!-\!\tau\!\le\!2^{-y^*-1}; \end{cases} \qquad \be_2(\tau)=\begin{cases}0,&\hbox{if}~1\!-\!\tau\!\ge\!2^{-y^*-1};\\ 1,&\hbox{if}~\tau\!=\!1. \end{cases}$$ We concatenate the families $(\mu_{t,\tau})_{t\in B,\tau\in\bI}$ and $(\mu_{t,\tau}\!+\!\mu_{t,\tau}')_{t\in B,\tau\in\bI}$ of \hbox{1-forms} on~$X$ into a new smooth family $(\mu_{t,\tau}'')_{t\in B,\tau\in\bI}$~by $$\mu_{t,\tau}''=\mu_{t,\be_1(\tau)}+\mu_{t,\be_2(\tau)}'\qquad\forall~t\!\in\!B,\,\tau\!\in\!\bI\,.$$ By~\eref{NC_e7}, \BE{NC_e8}\begin{aligned} \mu_{t,\tau}''(x)&=\mu_{t,1}''(x) &\qquad &\forall~x\!\in\!X\!-\!U_{y^*},~1\!-\!\tau\!\le\!2^{-y^*-1}, \\ \mu_{t,\tau}''(x)&=\mu_{t,\tau}(x) &\qquad &\forall~x\!\in\!X\!-\!U_{y^*} ~\hbox{s.t.}~\mu_{t,\tau'}(x)\!=\!\mu_{t,1}(x)~\forall~1\!-\!\tau'\!\le\!2^{-y^*}\,. \end{aligned}\EE Furthermore, \eref{NC_e0} holds with $\mu$ and $N_{y^*}(\prt B)$ replaced by~$\mu''$ and $N_{y^*+1}(\prt B)$, respectively.\\ \noindent The family $(\wt\fR_{y^*;t})_{t\in B}$ determines an $(\om'_{t,1})_{t\in B}$-family $(\wt\fR_{t;y})_{t\in B,y\in\cA(U')}$ of weak regularizations for~$X$ over~$U'$ such~that \begin{gather*} \big(\wt\fR_{t;y}\big)_{t\in N_{y^*+1}(\prt B),y\in\cA(U')}\cong_{U'}\!
\big(\fR_{t;y}\big)_{t\in N_{y^*+1}(\prt B),y\in\cA(U')}\,,\\ \quad \big(\wt\fR_{t;y}\big)_{t\in B}\cong_{U'\cap W'}\! \big(\fR_{t;y}'\big)_{t\in B}\quad\forall\,y\!\in\!\cA(U'\!\cap\!W')\,. \end{gather*} Along with $(\fR_{t;y}')_{t\in B,y\in\cA(W_{y^*})}$, it thus determines an $(\om'_{t,1})_{t\in B}$-family $(\fR_{t;y}'')_{t\in B,y\in\cA(W_{y^*+1})}$ of weak regularizations for $\X$ over~$W_{y^*+1}$ so~that \BE{NC_e9}\big(\fR_{t;y}''\big)_{t\in B} = (\fR_{t;y}')_{t\in B} \qquad\forall~y\!\in\!\cA\big(W_{y^*+1}\!-\!\ov{U'}\big) \EE and~\eref{NC_e3} holds with $\fR'$, $N_{y^*}(\prt B)$, and~$W_{y^*}$ replaced by $\fR''$, $N_{y^*+1}(\prt B)$, and~$W_{y^*+1}$, respectively.\\ \noindent Since the collection $(U_y)_{y\in\Z^+}$ is locally finite, for every point $x\!\in\!X$ there exist a neighborhood $U_x\!\subset\!X$ of~$x$ and $N_x\!\in\!\Z^+$ such~that $$U_x\cap U_{y^*}=\eset \qquad\forall~y^*\!\in\!\Z^+,\,y^*\!\ge\!N_x\,.$$ By~\eref{NC_e8} and~\eref{NC_e9}, $$\mu_{t,\tau}''\big|_{U_x}=\mu_{t,\tau}\big|_{U_x}, \quad \fR_{t;y}''=\fR_{t;y}' \qquad\forall~t\!\in\!B,\,\tau\!\in\!\bI,\,y\!\in\!\cA(U_x),~y^*\!>\!N_x.$$ The inductive construction above thus terminates after finitely many steps on a sufficiently small neighborhood $U_x\!\subset\!X$ of each point $x\!\in\!X$ and provides a smooth family $(\mu_{t,\tau})_{t\in B,\tau\in\bI}$ of 1-forms on~$X$ satisfying~\eref{SCCom_e} and an $(\om_{t,1})_{t\in B}$-family $(\wt\fR_t)_{t\in B}$ of weak regularizations for~$X$ over~$X$ satisfying~\eref{SCCom_e2}.\\ \noindent {\it Department of Mathematics, the University of Iowa, Iowa City, IA 52242\\ [email protected]}\\ \noindent {\it Department of Mathematics, Stony Brook University, Stony Brook, NY 11794\\ [email protected], [email protected]}
\section{INTRODUCTION} \label{sec:intro} The advent of large-format CCD mosaic cameras makes it possible to obtain deep imaging of large fractions of the sky. By measuring fluxes through multiple filters, near-future projects\footnote{E.g., the Vera C. Rubin Observatory's Legacy Survey of Space and Time \citep{LSSTScienceBook}, the Nancy Grace Roman Space Telescope \citep{Spergel2015}, and the Euclid mission \citep{Euclid}} will obtain high signal-to-noise but limited-spectral-resolution information on billions of objects. Distances based on redshift estimates will be essential for interpreting these observations. We can only feasibly obtain this redshift information from imaging data alone; we thus must determine {\it photometric redshifts}, also referred to as ``photo-$z$'s''. \ifarxiv \begin{marginnote} \entry{Take-away}{Distances based on photometric redshifts enable the inference of many properties from imaging data, key for studies in both galaxy evolution and cosmology.} \end{marginnote} \fi A broad range of extragalactic studies rely on photometric redshifts. Given redshift estimates, intrinsic physical properties of galaxies can be inferred from their observed spectral energy distributions. Photo-$z$'s thereby enable studies of how the demographics of galaxies have changed over time, constraining models of galaxy evolution. Photo-$z$'s are also frequently used to select objects of interest for follow-up spectroscopy (e.g., galaxies or quasars whose properties are consistent with extremely high redshifts), enabling large imaging samples to be mined for rare objects. They also are vital for identifying the host galaxies of transient sources. Photometric redshifts likewise are necessary for many methods of constraining cosmological models. Probes of cosmology generally rely on determining the relationship between observable quantities and redshift. 
As we analyze deeper and wider data sets for more and more stringent tests of cosmological models, requirements on photometric redshift methods steeply increase. A number of recent works have described the large variety of photometric redshift methodologies available to the community, including evaluation of their performance under more or less idealized conditions \citep{Sanchez2014,Tanaka2018,Salvato2019,Schmidt2020,Desprez2020,Brescia2021}. In this review, we instead focus on common challenges that any approach to photometric redshifts must account for, rather than the methods themselves. We concentrate upon evaluating the needs for the new generation of deep, wide-field imaging surveys that will come online in the 2020s -- the ``Stage IV'' dark energy surveys in the classification of the Dark Energy Task Force \citep{DETF} -- in light of the results of the current-generation, ``Stage III'' surveys. These needs can be broadly divided into the twin goals of performance and characterization. Throughout this review, we will use the term \textbf{performance} to refer to the ability to predict the redshift of an \textit{individual galaxy} precisely; i.e., with small uncertainty when compared to its true redshift. \textbf{Characterization}, in contrast, will refer to the ability to constrain the properties of the \textit{redshift distribution of an ensemble of galaxies} -- e.g., the mean redshift or its higher moments -- as for many high-precision cosmology measurements it is that distribution which we need to know well. \begin{marginnote} \entry{Performance}{The ability of a photometric redshift algorithm to deliver higher-precision redshift estimates for individual galaxies.} \end{marginnote} In the remainder of \S\ref{sec:intro} we will describe how the performance and characterization of photo-$z$'s impact both galaxy evolution and cosmological studies. 
In \S \ref{sec:methods}, we outline the principles underlying recent photometric algorithms, providing context for the issues discussed in this review. \S \ref{sec:spectroscopy} describes the needs for spectroscopic redshift measurements for large samples of objects to improve both photo-$z$ performance and characterization. In \S \ref{sec:openissues} we describe a variety of open issues, highlighting areas where future work would be valuable. Finally, in \S \ref{sec:vision}, we attempt to forecast how photo-$z$ methods might ultimately evolve to optimize both performance and characterization. \begin{marginnote} \entry{Characterization}{The recovery of the redshift distributions of ensembles of galaxies, including their \emph{mean} redshifts, variances, and higher-order moments.} \end{marginnote} \ifarxiv \begin{marginnote} \entry{Take-Away}{This review focuses on common challenges and strategies for applications of photometric redshifts to the next generation of deep, wide-area imaging surveys.} \end{marginnote} \fi \subsection{Performance and Characterization of Photometric Redshifts} \label{sec:definitions} Here we first describe fundamental limitations to both the performance and characterization of photometric redshifts. We then discuss the impact of these sources of uncertainty on different science cases. A photometric redshift method could be extremely well-characterized despite its poor performance, or vice versa; applications of photometric redshifts across different subfields will place widely varying requirements on each aspect. \textbf{Limits to the potential performance} of redshift determination are set by the quality of data collected on a galaxy. In the case of spectroscopy, the flux obtained from an object is measured as a function of wavelength with a resolution ($R = \frac{\lambda}{\Delta \lambda}$) typically ranging from 100 to $>30,000$. 
When multiple features (e.g., absorption or emission lines) are detected in an object's spectrum, its redshift may be determined securely, as any possible pair of significant features in a spectrum has a unique wavelength ratio. Even when individual features are not found at high signal-to-noise ratio (SNR), comparisons to spectral templates may enable a determination of redshift from the combination of many weaker lines. If the features in an object's spectrum are relatively narrow (with wavelength extent not much larger than the instrumental resolution) and SNRs are sufficiently high, the redshift of an object may be determined via spectroscopy with an uncertainty that is well below $R^{-1}$. Photometric redshifts rely on measurements of integrated fluxes within a set of filters, with resolutions of at most $R\sim 50$ when many narrow bands are used \citep[e.g.,][]{Molino2014, Marti2014,spherex}. More commonly, present and near future large area surveys such as the Dark Energy Survey \citep{DES2016}, the Hyper Suprime-Cam SSC Survey \citep{Aihara2018}, the Kilo Degree Survey \citep{Hildebrandt2021}, and the Rubin Observatory LSST \citep{LSSTScienceBook} use broad-band filters with $R<10$. As a result of the low effective spectral resolution provided by this imaging, fewer spectral features which may constrain redshift are available, and each of those features provides weaker constraints due to the poorer wavelength localization. Additionally, it is rare that one can identify multiple, well-localized spectral features at such low $R$, leading to ambiguity in their identification. This can lead to degeneracies between multiple fits to a galaxy's spectral energy distribution (SED) -- i.e., its observed flux as a function of wavelength -- that further increase the uncertainty of redshift estimates (cf. the left panel of \autoref{fig:galaxy_zoo}). 
For instance, in many cases it can be difficult to distinguish whether a strong jump in a galaxy's SED corresponds to the 4000~\AA~break at a lower redshift, or the Lyman break at a higher $z$ \citep{Stabenau2008}. In the best cases, for high-quality, uniform photometry, either with a large number of narrow-band filters or for object classes with uniform intrinsic spectra that have strong, broad spectral features, photometric redshift errors $\sigma_z \sim 0.01(1+z)$ (where $z$ is the redshift of the object) have been obtained \citep[e.g.,][]{Pandey2021}. A similar performance has been achieved for broader samples using narrow-band photometry \citep{Alarcon2021}. However, photometric redshifts for diverse galaxy populations estimated from few broad-band filters are necessarily more uncertain, with $\sigma_z$ from current surveys commonly $\sim 0.05 (1+z)$ or larger. \ifarxiv \begin{marginnote} \entry{Take-Away}{The performance of photo-$z$ algorithms is limited by having measurements in only a few, broad, noisy photometric bands -- but that trade-off enables studies of large samples of faint galaxies.} \end{marginnote} \fi Photometric redshifts are vital despite their inferior precision because they presently are the only available option for estimating redshifts for the large samples of faint galaxies required by many current science cases. At an extreme, one could imagine using a 5000-fiber spectrograph (the equivalent of the most highly-multiplexed among the current options, the DESI instrument \citep{DESIInstrument}) on a 10m telescope to obtain spectroscopy of the LSST ``gold sample'' of 4 billion galaxies with $i<25.3$ that are expected to be used for weak lensing and large-scale-structure measurements \citep{Ivezic2019}.
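The arithmetic behind such an extrapolation can be sketched as follows. This is a rough estimate, not a forecast: the one-hour DEEP2 integration time per target and the background-limited scaling of exposure time with the inverse square of source flux are assumptions of this sketch.

```python
# Back-of-the-envelope cost of spectroscopy for the LSST gold sample.
# Assumptions (illustrative): DEEP2 reached i < 22.5 with ~1-hour
# integrations, and in the background-limited regime the exposure time
# needed for fixed S/N scales as the inverse square of source flux.

N_GALAXIES = 4e9          # LSST gold sample, i < 25.3
N_FIBERS = 5000           # DESI-like multiplex on a 10m telescope
DEEP2_HOURS = 1.0         # assumed per-target integration at i < 22.5
DELTA_MAG = 25.3 - 22.5   # depth difference in i-band magnitudes

flux_ratio = 10 ** (0.4 * DELTA_MAG)                # ~13x fainter in flux
hours_per_pointing = DEEP2_HOURS * flux_ratio ** 2  # t ~ flux^-2 at fixed S/N
n_pointings = N_GALAXIES / N_FIBERS
total_years = n_pointings * hours_per_pointing / 8766  # hours per year

print(f"{total_years:.0f} years of continuous dark time")  # roughly 16,000
```

The dominant factor is the $\sim\!170\times$ longer integration needed per object at the fainter limit, multiplied by the 800,000 pointings required even at 5000-fold multiplexing.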
Extrapolating from past campaigns \citep{Newman2013}, it would require nearly 16,000 years of continuous integration time under clear, dark conditions to obtain spectra of the same signal-to-noise (S/N) for this entire sample as the DEEP2 survey attained for objects with $i<22.5$. Furthermore, at that S/N many faint objects (10-25\%) do not yield secure spectroscopic redshift measurements, a failure rate that is much higher than the catastrophic error rates in the best photo-$z$ algorithms. \textbf{The probability density function (PDF) of redshift, $p(z)$}, provides the best representation of the result of a photometric redshift algorithm, as the resulting distributions may be highly non-Gaussian or multimodal. Three different flavors of PDFs are commonly used in the literature, though they are not always clearly distinguished: \label{pz} \begin{itemize} \item \textbf{Bayesian posteriors}, with $\int_{z_0}^{z_1} p(z|\mathrm{photometry}) \, dz$ interpreted as the \textit{degree of belief} that the redshift of a given galaxy is between $z_0$ and $z_1$. These can properly incorporate any uncertainties in the underlying model (e.g., the prior of a template fitting scheme, or the limited training sample), broadening posteriors accordingly. \item \textbf{Frequentist probabilities}, with $\int_{z_0}^{z_1} p(z|\mathrm{photometry}) \, dz$ interpreted as an estimate of the fraction of times the true redshift of one out of an ensemble of galaxies with indistinguishable photometry is between $z_0$ and $z_1$. These $p(z)$'s assume a fixed model for the galaxy population -- they cannot marginalize over model uncertainties, as that procedure is inconsistent with the frequentist formalism. \item \textbf{Likelihoods} that interpret $p(z)$ as $p(\mathrm{photometry}|z)$ for a fixed template chosen in some way; e.g., for the best-fitting among a set of templates.
\end{itemize} \begin{marginnote} \entry{Redshift Probability Density Function (PDF)}{A function whose integral between two limits corresponds to the probability that the redshift lies in that range: $p(z)$.} \entry{Likelihood}{The probability of obtaining the actually observed values as a function of the assumed model parameters (including redshift): $p({data} | {model})$.} \entry{Prior}{A function describing the relative probability of a given set of model parameters, a key ingredient in Bayesian inference: $p({model})$.} \entry{Redshift Posterior}{A redshift PDF derived by multiplying the likelihood of the observations by the assumed prior probabilities for model parameters: $p({model} | {data})$.} \end{marginnote} Bayesian posteriors are required in schemes that allow the data of all galaxies to inform the model (see \S \ref{sec:bhm}). Frequentist approaches are particularly appropriate for applications where the $n(z)$ of a set of galaxies is estimated with a fixed model; e.g., directly from the spectroscopic redshift distribution of a set of reference galaxies (cf. \S \ref{sec:stage3}). A key distinction is that the stack (i.e., the sum over several galaxies) of frequentist $p(z)$'s is meaningful. It provides an estimate of the ensemble's redshift distribution under the assumption of the fixed model, as the frequentist $p(z)$ corresponds to the fraction of times a given object should be found at a particular redshift, such that the expected number of objects in a redshift bin from a sample must be a simple sum. However, in the Bayesian definition, likelihoods (not posteriors) must be combined to infer overall redshift distributions, as summing individual posteriors that marginalize over the same model does not properly account for the effect of model uncertainty on the inferred redshift distribution \citep{Malz2021}.
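The additivity of frequentist PDFs can be illustrated with a toy calculation. This is a sketch only: the redshift grid and the small ensemble of Gaussian $p(z)$'s are invented for illustration, and the fixed-model assumption discussed above is taken for granted.

```python
import numpy as np

# Toy illustration: under a fixed model, the n(z) of an ensemble is
# the simple sum (stack) of the galaxies' frequentist p(z)'s.
z_grid = np.linspace(0.0, 2.0, 401)
dz = z_grid[1] - z_grid[0]

def gaussian_pdf(mu, sigma):
    """Frequentist p(z) for one galaxy, normalized on the grid."""
    p = np.exp(-0.5 * ((z_grid - mu) / sigma) ** 2)
    return p / (p.sum() * dz)

# Invented ensemble: each tuple is (mean, width) of one galaxy's p(z).
galaxies = [(0.3, 0.04), (0.5, 0.06), (0.5, 0.06), (0.9, 0.10)]
pdfs = np.array([gaussian_pdf(mu, sig) for mu, sig in galaxies])

n_of_z = pdfs.sum(axis=0)   # stacked estimate of the ensemble n(z)
total = n_of_z.sum() * dz   # integrates to the number of galaxies
print(total)                # ~4.0
```

Because each frequentist $p(z)$ integrates to one, the stack integrates to the number of galaxies; the same operation applied to Bayesian posteriors would not yield a consistent estimate of $n(z)$, for the reasons given above.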
A variety of statistics derived from PDFs, including the redshift corresponding to the maximum likelihood or the maximum posterior probability, the expectation value of redshift evaluated across the full PDF, or the expectation calculated only on the highest peak \citep{Dahlen2013} have been used as single ``point'' estimates for the redshift of a galaxy in the past, although their use in cosmological studies has been waning due to the recognition that full information on redshift distributions is required already for present surveys and will continue to be needed for future applications. \ifarxiv \begin{marginnote} \entry{Take-Away}{Photo-$z$'s are best reported and interpreted as PDFs. Frequentist and Bayesian approaches differ.} \end{marginnote} \fi \begin{figure}[t] \centering \begin{subfigure}{.63\textwidth} \centering \includegraphics[width=\linewidth]{buchs} \end{subfigure}% \hspace{0.02\textwidth} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=\linewidth]{galaxy_zoo} \end{subfigure} \caption{\emph{Left:} Given flux information in only a few broad bands, degeneracies between different combinations of intrinsic rest-frame spectral energy distributions and redshift are common, as illustrated by templates for an elliptical galaxy at $z=0.3$ and a spiral galaxy at $z=0.52$ that are virtually indistinguishable if only $riz$ photometry is used (as may be optimal for ground-based lensing analyses, \citet{Sheldon2020}). The resulting redshift PDF $p(z)$ would have to be bimodal, with the relative weight of the two modes determined by a model for the galaxy population. Figure adapted with permission from \citet{Buchs2019}. \emph{Right:} A sample of galaxies with similar $riz$ color-magnitude \citep{desdr2}. These come from two sub-populations that are distinguishable using additional colors, such as $g-r$ and $r-y$ (black and green boxes, showing HSC images, cf. 
\citealt{hscdr}); the populations have different redshifts (black: $z\sim0.85$, green: $z\sim0.50$). Color coding of boxes corresponds to that used in Figure~1 in \citet{Myles2021}. } \label{fig:galaxy_zoo} \end{figure} \vspace{10pt} \textbf{The characterization of photometric redshifts is limited} by our incomplete knowledge of the galaxy population. For instance, the samples of galaxies with spectroscopic redshifts used for this characterization may systematically miss some populations or include objects with incorrect redshift measurements. In forward-modeling approaches, the set of model galaxy templates and associated prior probability distributions used may not be capable of describing the full ensemble of galaxies in the Universe, leading to biased results. Imperfect characterization of statistical or systematic error distributions in the photometric data can likewise distort the redshift distributions that result from such methods. Furthermore, the limited size of deep spectroscopic samples will reduce the fidelity with which redshift distributions can be constrained. A connected issue is the difficulty of confidently estimating uncertainties in the characterization of redshift distributions. In the absence of accurate knowledge about the population of galaxies one is trying to estimate the redshift for, error estimates often have to remain approximate. Worse, imperfections in photometric redshift methodology can cause slight but relevant errors in the estimated ensemble redshift distributions that are impossible to know \textit{a priori}, and difficult to evaluate on existing data. These issues do not represent fundamental limitations to the potential power of photometric redshifts -- there is no reason we could not, in principle, formulate and constrain a model of the galaxy population, or understand photometric data, in a way that is sufficient to characterize photometric redshifts at any level required, given sufficiently constraining data (including spectroscopy).
This is a major benefit of photo-$z$'s for science cases that do not require individual galaxy redshifts. So long as the distribution of redshifts is known well and with minimal bias, strong constraints may be obtained. This is particularly the case for cosmological studies using a small number of redshift bins, such as for weak gravitational lensing or angular galaxy clustering measurements. However, this implies that those applications have extremely stringent requirements for the characterization of redshifts. If the mean, width, or perhaps even full shape of the estimated redshift distribution is not close to what one would obtain from measuring the actual redshifts of the objects in that sample, significant biases can result. Given the increasing precision of cosmological experiments, the characterization of photometric redshift distributions has become perhaps the leading systematic uncertainty in these analyses. \ifarxiv \begin{marginnote} \entry{Take-Away}{The scarcity of suitable deep spectroscopy, combined with stringent requirements, makes photometric redshift characterization a leading challenge and source of systematic uncertainty.} \end{marginnote} \fi \subsection{Applications of Photometric Redshifts} \subsubsection{Galaxy Evolution} Studies of galaxy evolution which rely upon photometric redshifts vary in their sensitivity to photometric redshift performance and characterization. We illustrate these dependencies by considering three major applications: subdivision of samples according to their redshifts; measurements of the abundances of objects as a function of their properties and redshift (as in luminosity function studies); and studies which rely on measurement of galaxy clustering or environmental densities. We focus on applications in the not-too-distant future, when systematic uncertainties in modelling galaxy evolution should remain substantial. 
If this limitation is overcome and we wish to extract as much information as possible from the data, requirements on the characterization of photometric redshifts for galaxy evolution studies will become much more stringent, resembling the needs for cosmological studies (q.v. \S \ref{sec:cosmo_requirements}). \textbf{The subdivision of objects according to their redshifts} can be used to study evolution of galaxy demographics over time \citep[e.g.,][]{Finkelstein2015} or to select targets in a particular $z$ range for spectroscopic surveys \citep{Mclure2018, Takada2014}. There is a trade-off between contamination from objects at other redshifts (i.e., what fraction of objects selected are not in the desired redshift range) versus the completeness of samples selected to be at a given $z$ (i.e., what fraction of objects truly at that redshift are included). More stringent selections will reduce contamination, but will also cause some objects that are in the desired range to be missed. Improving the photometric redshift performance for typical objects will decrease contamination rates by causing fewer objects to scatter into a redshift bin due to errors, while simultaneously improving completeness by reducing the number of objects that scatter out. In contrast, if their prevalence is low, catastrophic outliers (i.e., objects for which the photometric redshift is far from the spectroscopic redshift, on a non-Gaussian tail of the error distribution) will have limited impact, contributing to incompleteness and contamination at levels proportional to their rates. If the rates at which outliers occur are known well, corrections for them can be included in analyses. In the case of targeting for spectroscopic surveys, the impact of outliers is further reduced, as better redshift measurements will show that such objects do not belong in the sample.
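The completeness--contamination trade-off described above can be made concrete with a small Monte Carlo sketch. All of the numbers here -- the sample size, the redshift range, the Gaussian scatter model, and the bin edges -- are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy sample: true redshifts uniform on [0, 2], Gaussian photo-z
# scatter with sigma_z = 0.05 (1 + z), typical of broad-band surveys.
z_true = rng.uniform(0.0, 2.0, 200_000)
z_phot = z_true + rng.normal(0.0, 1.0, z_true.size) * 0.05 * (1.0 + z_true)

TARGET = (0.8, 1.2)  # redshift range we actually want to study

def selection_stats(sel_lo, sel_hi):
    """Completeness and contamination of a photo-z cut vs. the target range."""
    selected = (z_phot >= sel_lo) & (z_phot <= sel_hi)
    in_target = (z_true >= TARGET[0]) & (z_true <= TARGET[1])
    completeness = (selected & in_target).sum() / in_target.sum()
    contamination = (selected & ~in_target).sum() / selected.sum()
    return completeness, contamination

loose = selection_stats(0.8, 1.2)   # cut matches the target range
tight = selection_stats(0.9, 1.1)   # more stringent cut

# Tightening the cut buys purity at the cost of completeness.
print(f"loose: completeness={loose[0]:.2f}, contamination={loose[1]:.2f}")
print(f"tight: completeness={tight[0]:.2f}, contamination={tight[1]:.2f}")
```

Shrinking the scatter parameter in this sketch improves both quantities at once, mirroring the statement above that better photo-$z$ performance simultaneously reduces contamination and raises completeness.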
\ifarxiv \begin{marginnote} \entry{Take-Away}{For most current galaxy evolution studies, the \emph{performance} of photo-$z$'s is the most important factor.} \end{marginnote} \fi In most cases errors in characterization will have limited impact on these analyses. For instance, a small additive bias in redshifts will have only minor effects on how observed samples are interpreted via galaxy evolution models at all but the lowest $z$. \begin{marginnote} \entry{Catastrophic outliers}{Objects for which the photometric (or spectroscopic) redshift is far from the true redshift, corresponding to a non-Gaussian tail of the error distribution.} \end{marginnote} In some analyses, integrals over the redshift PDF for an object are used to divide up samples \citep[as in][]{Finkelstein2015}; in that case, inaccuracies of those PDFs can affect binned analyses. For instance, one can consider a sample where photometry may be consistent with either redshift $z \sim 2$ or $z \sim 6$ due to confusion between the 4000~\AA\ and Lyman-$\alpha$ breaks, which will lead to bimodal (two-peaked) redshift PDFs for each object. In that case, if the relative amount of probability assigned to each of these redshifts is incorrect, the abundance of $z = 6$ galaxies could be badly mis-estimated. \textbf{Measuring distributions of galaxy properties} introduces additional complexity. Photo-$z$'s have been used to measure the distribution of galaxy luminosities or stellar masses (commonly referred to as a luminosity function or mass function, respectively), as in \citet{Wolf2003} and \citet{Bundy2017}. In these applications, modeling uncertainties are substantial, due to our limited knowledge of how to relate the star formation history of a galaxy to the observed light from it; for instance, changing the assumed initial mass function of stars can alter inferred stellar masses by $\sim$0.5 dex \citep{Courteau2014}.
In these applications, the gains from improving photo-$z$ performance are generally modest, as luminosity or mass functions are often measured in bins which are broader than photo-$z$ uncertainties (e.g., $\Delta z = 0.2$ or $0.5$). The effects are larger at the bright/high-mass end of the luminosity/mass function, as the propagation of distance errors into the inferred luminosity will alter the shape of the exponential cutoff of the Schechter function substantially \citep[to first order, it will be convolved with a Gaussian kernel determined by this propagation; cf.][]{Santini2015}, resulting in an Eddington-like bias \citep{Eddington1913}. The effects are weaker for fainter objects, as convolution with a Gaussian leaves a distribution unchanged when its second and higher derivatives are negligible. However, where incompleteness becomes large that condition will no longer hold, and photometric redshift errors can again bias results \citep{Sheth2007}. Catastrophic (i.e., large and non-Gaussian) photometric redshift errors will primarily affect the bright end of the luminosity function at higher redshifts. Since faint objects are common but bright ones are rare, if a luminous, higher-$z$ object is erroneously placed at low redshift there is little impact, as then it will have a low inferred luminosity and be outweighed by the numerous faint objects that are truly at that $z$. However, if a faint lower-$z$ object is falsely assumed to be at high redshift, it will have a high inferred luminosity; as a result, contaminants can dominate samples at the bright end. However, studies of luminosity and mass functions are not very sensitive to overall biases in photometric redshifts; a 1\% error in the mean redshift of all objects in a sample would alter their inferred stellar masses by less than 0.01 dex, in contrast to systematic uncertainties of 0.1-0.5 dex. 
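The Eddington-like bias at the bright end can be illustrated by convolving a Schechter function with a Gaussian kernel in log-luminosity. This is a sketch: the faint-end slope, the assumed scatter of 0.1 dex, and the grids are generic choices, not taken from any particular survey.

```python
import numpy as np

# Schechter luminosity function per dex of luminosity (generic alpha, L*).
log_l = np.arange(-3.0, 2.0, 0.01)   # log10(L / L*)
alpha = -1.2
lum_ratio = 10.0 ** log_l
phi_true = np.log(10) * lum_ratio ** (alpha + 1) * np.exp(-lum_ratio)

# Gaussian scatter of 0.1 dex in inferred log-luminosity, e.g. from
# propagating photo-z errors into distances.
sigma_dex = 0.1
kernel_x = np.linspace(-0.5, 0.5, 101)
kernel = np.exp(-0.5 * (kernel_x / sigma_dex) ** 2)
kernel /= kernel.sum()
phi_obs = np.convolve(phi_true, kernel, mode="same")

# Beyond the exponential cutoff, scatter inflates the observed counts
# (Eddington bias); on the nearly power-law faint end it changes little.
bright = np.searchsorted(log_l, 1.0)   # L = 10 L*
faint = np.searchsorted(log_l, -2.0)   # L = 0.01 L*
print(phi_obs[bright] / phi_true[bright])   # substantially > 1
print(phi_obs[faint] / phi_true[faint])     # ~ 1
```

The asymmetry arises exactly as described above: convolution leaves the slowly curving faint end essentially unchanged, but the steep exponential cutoff scatters many intrinsically fainter, more numerous objects upward in inferred luminosity.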
Nevertheless, characterization of the rate and distribution of catastrophic outliers can be important if their effects are to be removed to enable studies of the bright end of the luminosity function. The final category of galaxy evolution measurements where photometric redshifts have played an important role is measurements of the \textbf{clustering (or environments) of galaxies as a function of their properties}. In contrast to the previous cases, for such studies improving typical photometric redshift performance will have a large effect. As a simple illustration of this, one can consider counting the number of objects within a cylinder with length in the redshift direction $\Delta z$ and projected comoving radius $r_p$ about some object. The count of objects within the cylinder can be used as a measure of local overdensity \citep{Cooper2005}, and is equivalent to a measurement of the mean projected correlation function within the cylinder, $\langle w_p(r_p)\rangle$, integrating to a maximum separation $\pi_{\max} \equiv \Delta z / 2$. In the Poisson-dominated regime (common for environment measures), the fractional uncertainty on the density within the cylinder will be $\frac{\sigma(n)}{n} = \frac{1}{\sqrt{n \pi r_p^2 \Delta z \, dl/dz}} \propto \frac{1}{\sqrt{\Delta z}}$, where $n$ is the mean comoving density and $dl/dz$ is the derivative of comoving distance with respect to redshift. However, the number of objects truly associated with a cylinder -- the signal which one desires to measure -- remains fixed. In the simplest case, where $\Delta z$ is large compared to photometric redshift errors, the signal-to-noise of overdensity measurements will be proportional to $\frac{1}{\sqrt{\Delta z}}$; if objects scatter out of the cylinder due to photo-$z$ uncertainties, the S/N will only get worse.
Improving photometric redshift performance enables smaller redshift windows to be used without losing physical pairs of objects, reducing $\Delta z$ and increasing the signal-to-noise accordingly. \ifarxiv \begin{marginnote} \entry{Take-Away}{Photo-$z$ \emph{performance} strongly affects the signal-to-noise ratio of clustering and environment studies.} \end{marginnote} \fi Catastrophic outliers will cause systematic biases in the inferred clustering strength. If a fraction $f_{\rm outlier}$ of photo-$z$'s are far from their true redshifts, correlation function and overdensity measurements will be reduced by a factor of $(1-f_{\rm outlier})^2$, so large catastrophic outlier rates can cause clustering to be badly underestimated. Outliers will also cause the effective density of a sample (generally used as a constraint in halo model interpretations of clustering) to be overestimated by a factor of $(1-f_{\rm outlier})^{-1}$. For analyses not to be systematically biased as a result, it is necessary either for the outlier rate to be known and corrected for, or for outlier rates to be marginalized over in analysis (as in \citealt{Zhou2021}), at the cost of degraded constraints on other quantities. As for luminosity functions, biases in inferred photometric redshifts have only a modest effect on environment and clustering measurements, as differences in redshift between pairs of galaxies will remain unchanged. Instead, the impact is to alter the mean redshift of the samples for which clustering has been measured. Given current limitations on modeling galaxy evolution, small biases in effective redshift should be subdominant to other systematics in the near future. Accurate characterization of the uncertainties in individual redshift estimates, or particularly having accurate photo-$z$ PDFs for individual objects, is beneficial for clustering analyses. 
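The $(1-f_{\rm outlier})^2$ suppression can be verified with a short one-dimensional simulation (the cluster sizes and separations below are arbitrary toy values): scattering a fraction of a clustered sample to random redshifts removes a pair from the excess counts whenever either member is an outlier.

```python
import numpy as np

rng = np.random.default_rng(2)

def excess_pairs(z, r=1e-3):
    """Close-pair count minus the expectation for an unclustered sample on [0, 1)."""
    z = np.sort(z)
    dd = np.sum(np.searchsorted(z, z + r) - np.arange(z.size) - 1)
    rr = z.size * (z.size - 1) * r   # N(N-1)/2 pairs, each within r with prob ~2r
    return dd - rr

# Toy clustered sample: 500 tight groups of 20 galaxies along a unit redshift range
centers = rng.uniform(0.05, 0.95, 500)
z_true = np.repeat(centers, 20) + rng.normal(0.0, 2e-4, 10_000)

baseline = excess_pairs(z_true)
ratios = []
for f in (0.3, 0.6):
    z = z_true.copy()
    out = rng.random(z.size) < f     # catastrophic outliers...
    z[out] = rng.random(out.sum())   # ...scattered to random redshifts
    ratios.append(excess_pairs(z) / baseline)
    print(f"f={f:.1f}: measured {ratios[-1]:.2f}, predicted {(1 - f) ** 2:.2f}")
```
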
If the redshift PDF is well known, measurements can directly utilize the range of possible redshifts of each object, rather than relying only on (for instance) calculating angular correlation functions within fixed redshift bins. This can be exploited to maximize the S/N of measurements. For instance, \citet{Zhou2021} replaced each object with a large number of samples from its redshift PDF and measured the number of pairs within a fixed redshift window around each, eliminating the loss of pairs due to bin edges while taking PDF information fully into account. However, if errors (or PDFs or outlier rates) are mis-estimated, inferences based on clustering measurements can be systematically biased. Fitting for additional parameters characterizing the degree to which errors are over- or under-estimated can be used to account for this effect at the cost of degraded constraints on other parameters, as in \citet{Zhou2021}. \subsubsection{Cosmology} \label{sec:cosmo_requirements} The principal objective of observational cosmology is to test predictions for the expansion history of the universe and the growth of structure across time. Measurements based upon the cosmic microwave background, the distance-redshift relation with supernovae and baryonic acoustic peaks in the clustering of galaxies, the growth of structure observed through the clustering of galaxies, gravitational lensing, and through galaxy clusters broadly agree. Jointly they indicate that overall deviations of expansion and structure growth from a $\Lambda$CDM \emph{standard model} of cosmology are at most at the level of a few percent. The next steps of this research program thus must advance into a regime of highly precise and accurate measurements to significantly challenge $\Lambda$CDM predictions with data. Present and future experiments have been designed to reduce statistical uncertainties on cosmological measurements beyond the current state of the art.
Assuming successful data collection, the outcomes from these experiments are almost certain to be limited by systematic uncertainties. For this reason, the requirements on photometric redshifts for the purpose of cosmology are quite different from those for galaxy evolution, and largely shared among different probes. Redshift affects the observables predicted by a cosmological model -- e.g., the amplitude of large-scale matter density fluctuations, the number density of galaxy clusters, or a cosmological distance measure -- typically via an integral over the redshift distribution $n(z)$. As a consequence, reporting frequentist $p(z)$ estimates for individual galaxies or sets of galaxies, stacking them, and marginalizing over uncertain characterization by repeating the whole procedure at varied model parameters \citep[e.g.][]{Stoelzner2020, Cordero2021} can be well-matched to these applications. The relevant integrals change by of order unity per unit change in mean redshift. To improve upon the current few-percent-level tests of $\Lambda$CDM predictions, the mean redshifts of observed samples will need to be known to similarly high accuracy. This includes accurately characterizing the tails of the redshift distribution of photometric samples associated with catastrophic outliers. Characterization of higher order moments of the redshift distribution of samples is likely to be a secondary limiting factor. In addition to the stringent requirements placed upon the characterization of photometric redshifts, in some instances photo-$z$ performance will affect the signal-to-noise ratio of cosmological measurements greatly. We consider the requirements on photometric redshifts for each of the major imaging-based probes of cosmology below. 
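Before turning to the individual probes, the order-unity sensitivity to mean redshift can be made concrete with a toy flat-$\Lambda$CDM calculation (the parameter values, lens and source redshifts are illustrative): shifting the source redshift behind a fixed lens plane by $\delta z = 0.01$ changes the predicted lensing amplitude at the percent level.

```python
import numpy as np

# Flat LCDM comoving distance on a grid (H0 in km/s/Mpc; toy parameters)
c, H0, Om = 299792.458, 70.0, 0.3
zg = np.linspace(0.0, 3.0, 3001)
Hz = H0 * np.sqrt(Om * (1 + zg) ** 3 + (1 - Om))
chi = np.concatenate([[0.0], np.cumsum(c / Hz[1:] * np.diff(zg))])

def efficiency(z_source, z_lens=0.3):
    """Relative lensing amplitude for a source behind a fixed lens plane."""
    cs, cl = np.interp(z_source, zg, chi), np.interp(z_lens, zg, chi)
    return (cs - cl) / cs

w = efficiency(0.80)
w_biased = efficiency(0.81)   # mean source redshift biased by dz = 0.01
print((w_biased - w) / w)     # roughly a percent-level change in predicted amplitude
```
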
\ifarxiv \begin{marginnote} \entry{Take-Away}{Cosmological studies primarily require exquisite \emph{characterization} of photo-$z$'s.} \end{marginnote} \fi \begin{itemize} \item In \textbf{weak gravitational lensing}, the angular diameter distances of lensed galaxies, determined from their redshift by the cosmological model, set the amplitude of lensing signals \citep[for a recent review, see][particularly their sections 2.8 and 3.2]{Mandelbaum2018}. For weak gravitational lensing, only large ensembles of galaxies will yield a useful signal-to-noise ratio; subdividing samples into a small number of minimally overlapping redshift bins is sufficient to capture most information. The assignment of galaxies to redshift bins benefits from improvements to photo-$z$ performance, but due to the relatively shallow increase of lensing efficiency with source redshift, the gain in constraining power is comparatively modest \citep{Hu1999}. However, extremely stringent characterization of the redshift distribution of the ensemble of galaxies assigned to each bin is required for both present and future experiments to reach their goals. \item In \textbf{strong gravitational lensing} \citep[e.g.,][]{Treu2010}, photometric redshifts can aid in the identification of potential lens systems, as well as in the study of foreground galaxies which contribute additional lensing effects and influence the inferred properties for a given system; these applications will benefit only weakly from improvements to photo-$z$ performance. Photometric redshift estimates of the multiply imaged background galaxies are likewise useful for the reconstruction of lens geometries. Commonly, follow-up spectroscopy will be needed for cosmological analyses of strong lensing, so photo-$z$ characterization requirements are minimal.
\item The \textbf{clustering of galaxies} is a probe both of the amplitude of density fluctuations (when joined with lensing; see, e.g., \citealt{Baldauf2010}) and of the scale of baryonic acoustic oscillations (BAO). Large-scale-structure measurements can be performed either by simply measuring angular clustering in photometric redshift slices \citep[e.g.,][]{Sanchez2011,Carnero2012,DESY3BAO} or, for more sensitive results, by using photometric redshift estimates (or PDFs) for individual objects \citep[e.g.,][]{Padmanabhan2007,Seo2012, Zhou2021}. The observed amplitude of angular clustering will depend directly on the redshift distribution of the galaxy sample. As was the case for clustering-based studies of galaxy evolution, increasing the performance of photo-$z$'s will improve signal-to-noise for cosmological large-scale-structure studies. Characterization of the width of the ensemble redshift distribution is particularly important for minimizing systematic uncertainties in measurements of the clustering amplitude \citep[e.g.,][]{Cawthon2021}, while characterization of the mean redshift will be more important for constraints on the BAO distance scale. \item For \textbf{clusters of galaxies} \citep[e.g.,][]{Allen2011}, the expected abundance of clusters and the relationships of observables to the intrinsic physical properties of a cluster both depend on redshift. However, these should all be relatively slow functions of $z$, and photometric redshifts for clusters tend to be very well determined (due both to their red galaxy populations and the ability to average photo-$z$'s from many objects), so that improvements to photo-$z$ performance will have limited effect on cosmological inference for clusters selected based on their gas properties. Photo-$z$ performance is however a crucial factor for optically-selected cluster samples, where objects are selected as overdensities in the three dimensional distribution of galaxies. 
Uncertainties in the photometric redshifts of \emph{individual galaxies} will set the $\Delta z$ scale over which foreground or background objects may affect optical observables for a given cluster (such as richness, total flux, etc.). Photo-$z$ performance thus directly impacts the measured distribution of richnesses and the mass limit down to which physical clusters can be distinguished from the random galaxy background. The angular clustering of clusters depends on their redshift distribution (as was true of galaxy clustering), requiring uncertainties in cluster photometric redshifts to be accurately characterized for applications that depend on that quantity. Calibrations of cluster masses based upon weak lensing measurements will have stringent requirements on the characterization of the redshift distributions of background objects, much as for other weak lensing measurements \citep{DESC_SRD}. \item For analyses of \textbf{photometric supernovae} used to constrain the distance-redshift relation without spectroscopic follow-up, imaging alone must be used to determine redshifts \citep{Linder2019}. In this case, performance must be sufficient to help classify supernova type, with the important distinction that for these analyses extreme photo-$z$ outliers can be rejected based upon the observed brightnesses of supernovae regardless of the host galaxy photometry. Accurate characterization of photo-$z$'s will be needed to use such supernovae in cosmological analyses. Additionally, photometric redshifts can be used to select suitable targets for spectroscopic follow-up that are likely to be supernovae of the desired Type Ia (as opposed to other types of transients); this places only weak requirements on performance, however.
\end{itemize} A broad assessment of requirements on the level of accuracy of the characterization of photometric redshifts is presented in \citet{DESC_SRD}, which tested the impact of both biases and mis-estimations of photometric redshift errors on cosmological measurements with the Rubin Observatory LSST. This study found that for cosmological measurements combining weak gravitational lensing and large-scale structure, the mean redshift of each tomographic bin must be known to $\delta z < 0.001(1+z)$ by the end of the survey for systematic uncertainties in the dark energy equation of state not to exceed statistical uncertainties. Similarly, the photometric redshift scatter $\sigma_z$ must be known to better than $\delta \sigma_z < 0.003(1+z)$. Requirements on the characterization of the redshifts of the lensed objects behind galaxy clusters used to calibrate cluster masses are similar: biases must be below $\delta z < 0.001(1+z)$ and uncertainty in scatter $\delta \sigma_z < 0.005(1+z)$. These requirements are all extremely stringent and will be challenging both to meet and to demonstrate that they have been definitively achieved. The ultimate impact of photo-$z$ characterization on constraints based on strong lensing or supernova brightness measurements is much weaker, however, as for those probes analyses of only the subset of objects with spectroscopic measurements are already likely to be systematics limited; they will thus not be major drivers of photometric redshift requirements for the upcoming imaging surveys. We note that this study considered only a simple Gaussian model of photometric redshift distributions without outliers, but in real applications it is likely that higher moments of the redshift distribution, not only mean and variance, must be characterized stringently. 
\ifarxiv \begin{marginnote} \entry{Take-Away}{Weak lensing, large-scale-structure, and cosmology studies with upcoming surveys all require characterization at the 0.1\% level.} \end{marginnote} \fi \section{OVERVIEW OF PHOTOMETRIC REDSHIFT METHODS} \label{sec:methods} The idea that broad-band photometry of galaxies could be used to constrain their redshift goes back almost 60 years \citep[e.g.,][]{Baum1962,Koo1985,Loh1986}. Since then, two families of methods have commonly been considered separate -- one based on comparing observations to galaxy spectral energy distribution templates, and one based on empirical relations of photometry to redshift found, often by means of machine learning, on samples of galaxies with known redshift. This dichotomy, while useful in characterizing methods, is somewhat superficial. All photometric redshift methods can be interpreted in the same context of Bayesian statistics, of inferring the posterior probability (or some related statistic) of redshift given observational data \citep{Budavari2009}. The model of galaxy templates or the sample of reference galaxies with known redshift are part of the prior -- along with other implicit assumptions made in the respective method \citep{Schmidt2020}. \ifarxiv \begin{marginnote} \entry{Take-Away}{Photometric redshift methods can be categorized by how they use prior information, including training samples and SED templates.} \end{marginnote} \fi \subsection{Template-Based Methods} \label{sec:templates} The family of methods often described as \emph{template-based} performs inference based upon an \textit{a priori} model of the range of galaxy SEDs that exist. These codes commonly construct PDFs for the redshift of a galaxy via an application of Bayes' theorem, following \cite{Benitez2000}.
The posterior probability of the redshift $z$ given a set of observed fluxes $F$, $p(z | F)$, can be written as: \begin{equation} p(z | F) = \frac{\int p(F | z,t,O) p(z,t,O) \, dt \, dO}{p(F)}. \label{eqn:posterior} \end{equation} Here the prior, $p(z,t,O)$, represents the joint probability of a given combination of template $t$, redshift, and potential extra parameters such as luminosity or apparent magnitude in some band, absent any other information about the individual object of interest; the choice of extra parameters used varies amongst implementations. If the templates do not span a continuous space, the integral over $t$ reduces to a discrete sum. The likelihood, $p(F | z,t,O)$, corresponds to the probability of getting the particular values of the fluxes in each band observed for the object of interest, assuming a set of values of the redshift, template type, and any additional parameters. For Gaussian errors, the likelihood will be proportional to $\exp(-\chi^2/2)$, where the $\chi^2$ value is computed as $\sum_i [F_{{\rm observed},i} - F_{{\rm predicted},i}(z,t,O)]^2/\sigma_i^2$. Here $F_i$ corresponds to the $i$th element of either the observed flux vector or the flux vector predicted for a given set of parameters $z,t,O$, and $\sigma_i$ is the uncertainty in the $i$th observed flux. As long as the model for galaxy SEDs is considered fixed, the resulting $p(z)$ can be interpreted as either a Bayesian or frequentist estimate. Some template methods report the posterior probability of redshift inferred from the observed fluxes, $p(z | F)$; others instead provide the likelihood $p(F|z,t,O)$ without incorporating the prior term $p(z,t,O)$. Care must thus be taken to interpret outputs correctly. Commonly applied methods that use a $\chi^2$-based likelihood include LePhare \citep{Arnouts1999,Ilbert2006}, BPZ \citep{Benitez2000}, ZEBRA \citep{Feldmann2006}, and EAZY \citep{Brammer2008}.
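As a concrete sketch (a toy with made-up template fluxes and a flat prior, not the implementation of any of these codes), the $\chi^2$-based likelihood and the discrete marginalization over templates can be written in a few lines:

```python
import numpy as np

def photoz_posterior(F_obs, sigma, F_pred, prior):
    """Toy redshift posterior: chi^2 likelihood times prior, summed over templates."""
    chi2 = np.sum(((F_obs - F_pred) / sigma) ** 2, axis=-1)  # shape (n_z, n_t)
    like = np.exp(-0.5 * (chi2 - chi2.min()))                # p(F|z,t); offset for stability
    post = np.sum(like * prior, axis=1)                      # discrete sum over templates t
    return post / post.sum()                                 # normalized over the z grid

z_grid = np.linspace(0.0, 2.0, 201)
# two hypothetical 'templates': model fluxes in 3 bands varying smoothly with z
F_pred = np.stack(
    [np.stack([1.0 + 0.5 * z_grid, 1.0 - 0.3 * z_grid, 0.8 + 0.1 * z_grid], axis=-1),
     np.stack([0.5 + 0.8 * z_grid, 1.2 - 0.4 * z_grid, np.ones_like(z_grid)], axis=-1)],
    axis=1)                                # shape (n_z, n_t, n_bands)
prior = np.full((201, 2), 0.5)             # flat prior p(z, t)

F_obs = F_pred[100, 0]                     # noiseless fluxes of template 0 at z = 1
p_z = photoz_posterior(F_obs, 0.05 * np.ones(3), F_pred, prior)
print(f"{z_grid[np.argmax(p_z)]:.2f}")     # posterior peaks at the true redshift
```
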
These codes, and other template-based methods, differ primarily in their choice of: \begin{itemize} \item \textit{What observables are used to predict redshifts;} e.g., a set of fluxes (LePhare, ZEBRA), or instead a set of colors (i.e., differences between magnitudes in different photometric bands) which may be combined with a magnitude-dependent redshift prior (BPZ, EAZY). One could imagine exploiting morphological parameters in priors as well. We note that using magnitudes or magnitude-derived colors has the disadvantage of discarding the information present in negative flux measurements; dropping such measurements entirely can result in biased inference. \item \textit{What set of templates to use,} e.g.,~ones derived from spectroscopic observations (\citealt{Coleman1980,Kinney1996} as used in BPZ or \citealt{Connolly1995}) or synthetic spectra based upon stellar population synthesis models (e.g., \citealt{Bruzual1993,Bruzual2003} in LePhare). Variants use best-fitting linear combinations of templates at each redshift (as in EAZY) or iteratively update the initial template set to better match observed colors (e.g., ZEBRA). \item \textit{Whether to multiply the likelihood $p(F | z,t,O)$ by the prior probability, $p(z,t,O)$,} which some methods choose not to do (LePhare), others do using a redshift/luminosity prior (EAZY) or redshift/type/magnitude prior (BPZ) derived from training data, and others with an iteratively adjusted prior (ZEBRA). \item \textit{How to marginalize over templates:} formally, to calculate the posterior redshift distribution $p(z | F)$ one must integrate the multi-dimensional posterior on the right-hand side of equation \ref{eqn:posterior} over the template dimension $t$ (a process known as marginalization).
However, some codes (e.g., EAZY) instead approximate the marginalized likelihood $p(F | z)$ by $\exp(-\chi^2(t_{\rm best},z)/2)$, where $t_{\rm best}$ is the template (or combination of templates) which provides the best fit for a given $z$, replacing the integral by its value for only the highest-likelihood template at each $z$, ignoring other templates which may also be consistent with the photometry. \item \textit{What quantity to report,} whether the full redshift posterior (BPZ, ZEBRA) or single ``point'' values such as the combination of template type and redshift that has the maximum likelihood (e.g., LePhare, ZEBRA); the redshift which has the maximum posterior probability (e.g., EAZY); or the expectation value of redshift $\int z \, p(z) \, dz$ (e.g., BPZ, EAZY). \end{itemize} A strength of template-based methods is that they use an informative prior, the underlying model of the full galaxy population, to infer the redshift posterior from the noisy information from an individual galaxy. The importance of this prior, however, makes template-based methods subject to a number of potential problems: \begin{itemize} \item \textit{The template set is incomplete.} Since no two galaxies have exactly the same SED, no finite set of templates can fully describe a population. When more degrees of freedom are given to the set of templates, conversely, unphysical solutions or biases can result. \item \textit{The prior is wrong.} Even with a complete and sufficiently flexible set of templates, the priors on the abundance, redshift, and luminosity of galaxies resembling a template may not be accurate. Especially when photometry is noisy and only available in a few bands, it can be consistent with multiple combinations of template and redshift, making the photometric redshift estimate highly sensitive to the priors used.
\ifarxiv \begin{marginnote} \entry{Take-Away}{Incomplete, incorrect, and/or inflexible models for the galaxy population currently limit template-fitting redshift performance at levels of $|\langle\Delta z\rangle/(1+z)|>0.01$.} \end{marginnote} \fi \item \textit{The data do not inform the model.} In the limit where the templates and priors describing the galaxy population are already accurately known, separating their determination from the estimation of individual galaxy posterior PDFs, as done in most template-fitting codes, is an appropriate choice. However, such accurate knowledge does not exist about the general galaxy population. Photometric datasets could themselves be used to constrain models of the set of template SEDs needed and prior probability distributions for redshift and type. \end{itemize} Template-based methods have addressed these concerns in a variety of ways: e.g., by using flexible and/or optimized template sets (as in EAZY) or by adjusting templates and priors using the ensemble of galaxy data (as in ZEBRA). These and other recent developments bring them closer to the Bayesian Hierarchical methods described in Section~\ref{sec:bhm}. \subsection{Machine Learning Methods} Empirical methods for photometric redshift estimation find a relation between galaxy observables (e.g., fluxes and errors) and statistics related to the redshift. Most current methods use machine learning techniques, which rely on training samples of galaxies whose observables and known redshifts are taken as samples from the desired relation. Methods can be distinguished by: \begin{itemize} \item The \textit{training sample} of galaxies with known redshift, as well as the information about the training sample that will be used to predict redshifts (the ``features'' used for prediction, in machine learning parlance). 
Some methods only utilize galaxy colors (or flux ratios between bands) to predict redshift, while others incorporate the magnitude or flux in at least one band as a separate feature used for prediction, and some methods use full pixelized images. The selection of objects in the training sample and appropriate reweighting as a function of their properties can be used to reduce biases or the impact of sample variance on redshift characterization. \item The \textit{quantity they are trained to optimize}. Early methods were commonly optimized to minimize the variance of a point estimate of the redshift of a galaxy given its observables \citep[e.g., ANNz;][]{Collister2004}. Many newer approaches aim to provide an estimate of the PDF of redshift instead \citep[e.g., TPZ, ANNz2;][]{CarrascoKind2013,Sadeh2016,deVicente2016}. Approaches differ (and sometimes are ambiguous) in how exactly the resulting PDF should be interpreted (i.e., which of the types of PDFs described in \S \ref{sec:definitions} is being produced by a given algorithm). \item Further \textit{assumptions or choices} that affect the estimation of the target quantity. For instance, some methods choose to divide the training sample of galaxies with spectroscopic redshifts into subsets by distinguishable properties \citep[e.g., cells in self-organizing maps;][]{Masters2015,Buchs2019}. Other methods define a neighborhood in photometric space over which reference galaxies are used to estimate the redshift of an object with given photometry \citep[e.g., DNF, DIR, CMNN;][]{deVicente2016,Hildebrandt2017,Graham2018}, and potentially also a functional form (or, equivalently, machine learning architecture) relating photometry to redshift that is fitted within that neighborhood \citep[e.g., GPz, ANNz2, DNF;][]{Almosallam2016,Sadeh2016,deVicente2016}.
\end{itemize} Each of these characteristics can lead to limitations on performance or characterization: \begin{itemize} \item \textit{The training sample is not a superset of the target sample.} When the sample of galaxies for which photometric redshifts are needed occupy regions of observable or physical-property space that are not also populated by the training sample, empirical models that are not built upon physical knowledge about galaxies cannot be assumed to successfully extrapolate. For example, due to the expense of spectroscopy for faint galaxies, most objects with spectroscopic redshift measurements are much brighter than the objects for which photo-$z$'s are desired. \item \textit{The training sample is not representative of the target sample.} A more treacherous case is when the training sample covers the distribution of the target sample in the space of observables that are available for both, but is non-representative in additional dimensions that are not known for the target sample. For instance, spectroscopy may fail to deliver secure redshifts for objects of some types or at some redshifts while succeeding for other objects with similar colors, biasing training. \item \textit{The training sample contains faulty entries.} Errors in the redshifts or photometric observables assigned to the training sample will generally lead directly to systematic errors in the estimated photo-$z$'s. \item \textit{The trained quantity does not match the science}; for instance, in many cases a science analysis requires the full distribution of redshifts of a sample of galaxies, but the output of a photo-$z$ algorithm may be a single point estimate of redshift or some other quantity that does not allow the full distribution to be reconstructed accurately. 
\item \textit{Further choices introduce bias.} Even seemingly reasonable analysis choices -- e.g., estimating photo-$z$'s through nearest neighbors in photometry space, or simplifying the treatment of noise -- can be shown to introduce significant biases in mean redshift \citep[e.g.,][]{Wright2020}. \end{itemize} \ifarxiv \begin{marginnote} \entry{Take-Away}{Insufficient training samples or analysis choices that do not match the science case commonly limit empirical redshift methods at levels of $|\langle\Delta z\rangle/(1+z)|>0.008$ \citep{Pasquet2019,Hayat2020}. } \end{marginnote} \fi Whether a method is appropriate depends on the science case -- e.g., the target quantity that must be estimated, the level of performance needed, and whether PDFs are needed for individual objects or if instead only overall redshift distributions are required. Tomographic weak gravitational lensing analyses, for instance, need the full redshift distribution for a sample; point estimates are not suitable due to the non-linear dependence of lensing strength on redshift, making the tails of PDFs important. Determining these distributions will require a sufficiently representative reference sample of nearly outlier-free and precise redshifts to train. For this application, so long as the same selections can be performed on both the training and target sets of galaxies, it is sufficient to estimate the combined redshift distribution of all objects within a set of bins in parameter space, enabling the compression of continuous photometric information into a discrete number of subsets. \subsection{Methods Employed by Recent Surveys} \label{sec:stage3} Cosmological analyses from recent surveys such as DES, HSC, and KiDS have generally responded to the above concerns by not assuming the output of any particular classical template-fitting or empirical photo-$z$ method will be correct.
Where they did use such methods, they calibrated the result with a comparison to a reference catalog of spectroscopic \citep{Hildebrandt2019} or high-quality photometric \citep{Hoyle2018,Hikage2019} redshifts. Where they did not, they developed custom approaches designed to produce histograms of those reference redshifts re-weighted to represent the lensing source galaxy sample, so as to reproduce their redshift distribution $n(z)$ with minimal bias and variance \citep{Hikage2019,Wright2020,Myles2021}. By design, the results of these methods are frequentist $p(z)$'s for individual galaxies or, when stacked, $n(z)$'s. The uncertainties in the mean redshift resulting from applying these characterization methods under idealized conditions to recent Stage III\footnote{The current generation of imaging dark energy experiments -- including DES \citep{DES2016}, HSC \citep{hscdr}, and KiDS \citep{Hildebrandt2021} -- all are classified as ``Stage III'' surveys in the scheme of the Dark Energy Task Force \citep{DETF} based upon the level of constraints on the dark energy equation of state and its evolution that they will achieve. The Vera C. Rubin Observatory's LSST, Nancy Grace Roman Space Telescope, and Euclid mission are classified as Stage IV experiments. } dark energy experiments have been estimated to range from 0.004 to 0.05 with earlier methods \citep{Hoyle2018,Hildebrandt2019,Wright2020} and from 0.001 to 0.006 in the most recent studies \citep{Wright2020,Myles2021}, \ifarxiv as illustrated in the Figure within the margin. \else as illustrated in \autoref{fig:summary}. \fi Comparing to the requirements described in \S \ref{sec:cosmo_requirements}, one finds that the methods used for recent surveys are thus in principle capable of characterizing redshift distributions at a level approaching the requirements for future, Stage IV experiments.
However, these estimated characterization uncertainties all \emph{exclude} the effects discussed in \S \ref{sec:characterization}, whose impact can be several times larger. \ifarxiv \begin{marginnote} \entry{Characterization of photometric redshifts in ideal conditions, by method}{\includegraphics[width=3.5cm]{methods.pdf}} \label{fig:char_error_margin} \end{marginnote} \else \begin{figure}[htp!] {\includegraphics[width=0.9\textwidth]{methods_triptychon.pdf}} {\caption{{Uncertainties in characterization of redshift distributions for state-of-the-art photometric redshift methods under the assumption of ideal training data, summarized as the systematic uncertainty of the estimated mean redshift for an ensemble of galaxies. Horizontal bars indicate the ranges of redshift bias consistent with results from a given study; points correspond to the best estimate of a redshift bias. Grey vertical bands indicate the range within which redshift biases must lie for cosmological analyses with Year 1 or Year 10 Vera Rubin Observatory LSST data \citep{DESC_SRD} not to be impeded. The left panel shows the level of characterization errors for template-fitting and machine learning methods found in community challenges that provide sufficient, representative training data \citep{Sanchez2014,Schmidt2020}. The central and right panels show characterization errors on ideal data for earlier and later versions of Stage-III methods from the Dark Energy Survey \citep{Hoyle2018,Myles2021} and the Kilo Degree Survey \citep{Hildebrandt2020,Wright2020}. Most methods fail to meet LSST Y1 characterization requirements even under idealized conditions; the actual characterization errors will be further inflated by the effects described in \S \ref{sec:characterization} (see also \autoref{fig:char_error}). 
} } \label{fig:summary}} \vskip-0.15in \end{figure} \fi \subsection{Bayesian Hierarchical Methods} \label{sec:bhm} Photometric redshift methods are necessarily hierarchical, as the term is used in the statistics literature; i.e., there exist at least two levels of parameters, one set describing the properties of an individual galaxy, and one set describing (either via a reference sample or sets of templates and priors) the distribution of properties of the ensemble of galaxies. Methods can be distinguished by how they treat the parameters of the latter sort. Most commonly at present, the model of the underlying population of galaxies remains fixed while photometric redshift inference is performed. One may choose a template prescription or train an empirical method based upon a sample of trusted redshifts and photometry, and then estimate the redshift of each target galaxy given that input set of information. Some methods allow a limited degree of feedback from the inference of redshift distributions to the parameters describing the galaxy ensemble; examples include the BPZ or ZEBRA methods described in \S \ref{sec:templates}, or the combination of SOMPZ, 3sDIR, and WZ used in DES Y3 \citep{Myles2021,Gatti2021}. Bayesian Hierarchical Methods take this idea to its limit by performing a simultaneous, joint Bayesian inference to constrain parameters at both levels: i.e., determining posterior probability distributions both for the redshifts of individual galaxies and for parameters that describe the properties of the broader population of galaxies. In the context of photometric redshift estimation, such methods were first introduced by \citet{Leistedt2016}, who used a hierarchical model to constrain the underlying distributions of template types and redshifts while at the same time computing PDFs for the redshifts of each individual object using a mock data set. 
These underlying distributions correspond to the prior $p(z,t)$ used within a Bayesian photometric redshift method; by inferring $p(z,t)$ from the data itself, uncertainties in the prior can be propagated into the final redshift PDFs for each object. \citet{Leistedt2019} extended this method to also allow the set of rest-frame SED templates used to be modified via hierarchical inference. A different approach is taken by \citet{Sanchez2019,Alarcon2020}, who instead infer a prior probability distribution for the density of galaxies in the \emph{observed} high-dimensional color space, rather than in the space of intrinsic properties, via a hierarchical inference process. Another variant of hierarchical models, forward-modeling, has also been successfully applied to data \citep[e.g.,][]{Herbel2017,Tortorelli2021}. In these methods, a parametric model for the galaxy population is used to simulate galaxy images and/or catalogs. The model parameters, along with other cosmological and nuisance parameters, are constrained via Markov Chain Monte-Carlo methods to resemble the observed distributions of galaxy properties, including the measured redshifts of objects with spectroscopy. In addition to constraints on model parameters, hierarchical methods can simultaneously provide photo-$z$ PDFs for individual objects or redshift distributions for ensembles of galaxies, marginalizing over the values of those parameters. Hierarchical and forward-modeling approaches are still at early stages of development, but hold significant promise for addressing many of the challenges discussed in this review. They can, in principle, overcome the limitations posed by incomplete training sets or inaccurate templates or priors, so long as the models used are sufficiently general to encompass the underlying reality without providing so much freedom that redshifts are poorly constrained. 
\ifarxiv \begin{marginnote} \entry{Take-Away}{Methods that inform a model for the galaxy population with all collected data have promise for addressing current limitations of photo-$z$ algorithms.} \end{marginnote} \fi \section{IMPROVING PERFORMANCE AND CHARACTERIZATION VIA SPECTROSCOPY} \label{sec:spectroscopy} Modern photometric redshift methods are dependent upon having a set of objects whose redshifts are securely known. Most directly, cosmological analyses of current-generation (Stage III) surveys have estimated redshift distributions of galaxy samples by simply using a weighted histogram of the redshifts in reference samples (cf. \S \ref{sec:stage3}). In machine learning-based techniques, samples of objects with known redshifts and photometry provide the training data used to optimize algorithms for estimating photo-$z$'s. Template-based methods are less directly dependent upon having redshift measurements available; however, the best-performing algorithms today use such samples to optimize the libraries of galaxy spectral energy distributions used to compute likelihoods \citep{Crenshaw2020}; to optimize photometric passband throughput curves and zero-points \citep{Ilbert2006}; to refine error models \citep{Brammer2008}; and/or to develop redshift priors \citep{Benitez2000}. Sets of secure redshift measurements are also fundamental to testing for (and, if necessary, correcting) any systematic errors in photometric redshift measurements; if uncorrected, such errors can far exceed random errors in precision cosmology measurements \citep{DESC_SRD}. \ifarxiv \begin{marginnote} \entry{Take-Away}{There can be no photometric redshifts of high quality without spectroscopic redshifts of high quality \emph{and} quantity.} \end{marginnote} \fi In this section, we describe the needs for external redshift measurements to improve photo-$z$ performance and characterize redshift distributions, and lay out the scope of the problem for future imaging surveys. 
The limited availability of secure spectroscopic redshifts for objects as faint as those to which photo-$z$ algorithms will be applied in the near future may be a major stumbling block for photometric redshift methods. \subsection{The Twin Needs for Spectroscopy} One application of spectroscopic samples is to develop (in the case of machine-learning methods) or optimize (for template-based methods) photometric redshift algorithms, {\it reducing random uncertainties in redshift estimates for individual objects}. In these cases, a set of objects with precision redshift measurements is used to improve the \textbf{performance} of algorithms. This application of spectroscopy is referred to as ``training'' in \citet{Newman2015}. For template-based methods, when sufficiently large training samples are available, we should be able to refine the underlying model arbitrarily well, in which case photo-$z$ errors should be determined only by photometric uncertainties and not be degraded by our limited knowledge of the intrinsic SEDs of galaxies, the system used to obtain photometry, etc. For machine learning algorithms, a perfectly trained algorithm will have fully determined the mapping from observed properties to redshifts; the performance of photo-$z$ algorithms will then be limited only by the information contained in the photometry itself. However, for many precision studies (particularly in cosmology), photometric redshifts for individual galaxies do not need to be highly precise to constrain the quantities of interest. Instead, it is only requirements on the \textbf{characterization} of redshift distributions that are extremely stringent, as discussed in \S \ref{sec:cosmo_requirements}.
This characterization is generally performed using samples of objects with spectroscopic redshift measurements, which may be used to estimate redshift distributions directly when weighted to match photometric samples (this application is referred to as ``calibration'' in \citealt{Newman2015}). In work to date, the same basic spectroscopic samples are frequently used both to train photometric redshift algorithms and to characterize their results \citep[e.g.,][]{Hikage2019,Wright2020,Myles2021}. However, for upcoming imaging surveys, it will be very challenging to obtain low-error-rate sets of redshifts with minimal systematic incompleteness down to the magnitude limits of samples that will be used for cosmological measurements (cf. \S\S \ref{sec:incompleteness} and \ref{sec:outliers}). Large, deep samples are already systematically incomplete at $i=22.5$, whereas Rubin Observatory cosmology samples will extend to $i=25$ or greater, and Roman Observatory will utilize deep IR-limited samples that are even more challenging spectroscopically. In that case, small, deep but incomplete samples may be used to improve the performance of photometric redshift algorithms, but approaches that are less sensitive to incompleteness must be used for characterization (cf. \S\S \ref{sec:bhm}, \ref{sec:xcorr}, and \ref{sec:vision}); for instance, much larger but shallower samples of secure redshifts can characterize distributions by exploiting correlations from large-scale-structure \citep{Newman2008}. We note that the term ``spectroscopy'' should be interpreted broadly in this section. For improving photo-$z$ performance, at least, useful information may be obtained from extremely-high-quality, many-band photometric redshifts which are available in some limited areas of sky (as those from e.g.~\citealt{Laigle2016,Alarcon2021}). However, many-band photo-$z$'s exhibit much larger catastrophic outlier rates and redshift errors than higher-resolution spectroscopy does. 
This is a consequence of the limited spectral resolution of the information available from many-band surveys and the poorer signal-to-noise compared to broadband imaging, as well as the limited deep data available for training the many-band photo-$z$'s. The lower robustness of many-band redshifts will generally limit their utility for precision characterization. Slitless and prism spectroscopy exhibit similar characteristics to many-band imaging, providing less-secure redshifts for larger samples derived from lower-resolution spectral information. \subsection{Spectroscopy for Improving Photometric Redshift Performance} For all training-based methods of determining photometric redshifts, it is necessary to have a set of objects for which both the properties that will be used to make predictions and the quantity one wishes to predict (in this case, the redshift) have been measured in the same way as for the galaxies to which algorithms will be applied. Because the relationship between color and redshift is complex, many training objects are required to map it fully. A machine learning algorithm trained with only a sparse set of objects with spectroscopic redshifts may have to rely on information from galaxies with very different properties (color, brightness, $z$, etc.) to predict the redshift of a given galaxy, and prediction errors will be correspondingly degraded. This degradation will be even worse in regions of parameter space where training redshifts are unavailable, in which case algorithms will extrapolate (often nonlinearly) from objects with systematically different properties. The left and center panels of \autoref{fig:sparsesampling} illustrate the loss of information when training samples become sparse.
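This degradation can be demonstrated with a simple toy model. In the following sketch, the color-redshift relation, the noise level, and the sample sizes are all invented for illustration, and a basic nearest-neighbor estimator stands in for a full machine learning method:

```python
import numpy as np

rng = np.random.default_rng(7)

def mock_sample(n):
    # toy monotonic color-redshift relation with photometric noise
    # (functional form, noise level, and sizes are invented for illustration)
    z = rng.uniform(0.0, 2.0, n)
    color = z + 0.2 * np.sin(3.0 * z) + rng.normal(0.0, 0.02, n)
    return color, z

def knn_photoz(train_c, train_z, test_c, k=10):
    # predict z as the mean redshift of the k nearest training neighbors in color
    d = np.abs(test_c[:, None] - train_c[None, :])
    nn = np.argpartition(d, k, axis=1)[:, :k]
    return train_z[nn].mean(axis=1)

test_c, test_z = mock_sample(1000)
scatter = {}
for n_train in (5000, 100):
    train_c, train_z = mock_sample(n_train)
    scatter[n_train] = np.std(knn_photoz(train_c, train_z, test_c) - test_z)
    print(f"training size {n_train:>5d}: scatter = {scatter[n_train]:.3f}")
```

With a dense training set, the estimator is limited essentially by the photometric noise; with a sparse one, each prediction must average over neighbors spanning a wide range of redshift, and the scatter grows accordingly.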
In contrast, given a very large and representative input sample, the ability of a training-based algorithm to predict redshift will be limited only by the uncertainties in the fluxes provided as inputs and the intrinsic scatter in the mapping from noiseless photometry to redshift; at that point, results should not improve as sample sizes get larger. However, how fast this transition occurs will depend upon the algorithm used to predict photometric redshifts, the quantity that is being estimated (e.g., the full $p(z)$ as opposed to point estimates of redshift), and the photometric data it is estimated from (cf. \S \ref{sec:morphology}). Methods which effectively interpolate between members in the training sample in a manner which takes more advantage of the underlying, simpler structure of the distribution of galaxies in the space of rest-frame spectral energy distributions should be more effective at predicting redshifts for galaxies outside their training set. Scalings of photometric redshift errors when several standard machine learning algorithms are applied to the mock LSST dataset of \citet{Graham2018} are illustrated in Figure 1 of \citet{Newman2019}. Errors scale with training set size approximately as $\sigma_z \sim \sigma_{\infty}(1 + a N_{\rm training}^{-0.4})$, where $\sigma_z$ is the RMS scatter between a photometric redshift estimate and the true redshift, $\sigma_{\infty}$ is the scatter that would be obtained with an infinite, perfect training set, $N_{\rm training}$ is the number of objects in the training set, and $a$ is a constant which depends upon the algorithm used. The reason for the observed power-law exponent remains unknown. In machine learning methods applied to this dataset to date, improvements in errors generally become slow beyond sample sizes of 20,000-30,000.
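The diminishing returns implied by this scaling can be made concrete with a short sketch; the values of $\sigma_{\infty}$ and $a$ below are purely illustrative placeholders, since in reality they depend on the algorithm and data:

```python
import numpy as np

def sigma_z(n_training, sigma_inf=0.02, a=20.0):
    """Empirical error scaling sigma_z = sigma_inf * (1 + a * N^-0.4).
    sigma_inf and a are illustrative placeholders, not fitted values."""
    return sigma_inf * (1.0 + a * np.asarray(n_training, dtype=float) ** -0.4)

for n in (1_000, 10_000, 30_000, 100_000):
    print(f"N = {n:>7,d}  sigma_z = {sigma_z(n):.4f}")
```

Under these toy constants, growing the training set from $10^3$ to $3\times10^4$ redshifts reduces the scatter substantially, while a further factor-of-three increase in sample size yields only a marginal additional improvement.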
Deep-learning-based algorithms which utilize pixel-level information in galaxy images to predict redshift require larger training samples to reach their optimum performance; in the most recent algorithms, the core scatter does not plateau until training samples comprise $>100,000$ objects, and catastrophic outlier rates continue to fall even for training samples of $>400,000$ galaxies \citep{Dey2021b}. However, such methods are unlikely to yield large photo-$z$ performance improvements for the faint, poorly-resolved galaxies from next-generation surveys whose redshifts are most difficult to measure spectroscopically. If we therefore set aside the ambition to apply deep learning methods at faint magnitudes, a practical goal for near-optimal performance from near-future photo-$z$ algorithms would be to obtain redshifts for a sample of 20,000-30,000 objects in total, spanning the flux range of the samples that will be used for cosmological studies. \ifarxiv \begin{marginnote} \entry{Take-Away}{Roughly 30,000 deep spectroscopic redshift measurements are needed to optimize \emph{performance } of photo-$z$ algorithms in near-future surveys.} \end{marginnote} \fi \subsection{Spectroscopy for Characterizing Redshift Distributions} \label{sec:spec_characterization} As we shall see, if our goal is to characterize redshift distributions rather than optimize the performance of photometric redshift algorithms, one coincidentally obtains a very similar estimate of the sample size required. We note, however, that characterization of redshift distributions for precision cosmology measurements with future imaging surveys will require spectroscopic samples with very high redshift success rates (99\% or higher); very low incorrect-redshift rates ($<0.1\%$); \textbf{and} minimal sample/cosmic variance (cf. \S \ref{sec:sample_variance}), in order to ensure validity of results (assuming characterization through a simple reweighted redshift histogram, as is done in current analyses). 
No deep redshift survey to date has approached the required levels of completeness (i.e., the fraction of targets that yield extremely-secure redshifts) needed for direct characterization of photometric redshifts for future cosmology surveys, as will be discussed in \S \ref{sec:incompleteness}. However, new instruments and strategies could change this situation, so it is desirable for samples designed to improve photometric redshift performance also to be capable of fulfilling our characterization needs if the necessary high success rates are achieved. If the redshift estimation process, including its failure modes, can be forward-modeled accurately, higher failure rates may still be tolerable for characterization purposes. A number of theoretical works have explored strategies for the characterization of redshift distributions, and have each determined that sample sizes of 20,000-30,000 should be sufficient whether simple Gaussian errors or more complex scenarios including catastrophic outliers are considered \citep{Ma2008,Bernstein2010,Hearin2010}. The fact that improvements in errors for machine learning methods become slow beyond this sample size appears to occur purely by coincidence. \ifarxiv \begin{marginnote} \entry{Take-Away}{Only if the deep spectroscopic samples collected for training reach unprecedentedly low or well-understood incompleteness and outlier rates, or if characterization methods greatly improve, can the stringent requirements for future dark energy studies be met.} \end{marginnote} \fi An additional consideration when designing spectroscopic samples for improving the performance and particularly characterization of photo-$z$'s is sample (or ``cosmic'') variance: the variation in density from one region of the universe to another due to the underlying matter density fluctuations \citep{Cunha2012}.
Deep spectroscopic surveys generally target one or a few fields, each covering only a very small area of sky; as a consequence, the volume at a given redshift is low, and density fluctuations are correspondingly large. As a result, the redshift distribution in each field will exhibit large fluctuations (much larger than would be expected from Poisson statistics), with some redshifts being over-represented and others under-represented. This effect is illustrated in the right panel of \autoref{fig:sparsesampling}. Furthermore, the \textit{types} of galaxies will also vary, as the most massive quiescent galaxies will only be found in extreme overdensities, whereas bluer galaxies will comparatively favor underdensities. This will affect the characterization of any photo-$z$ method, whether training-based, direct, or Bayesian hierarchical. The effects of sample/cosmic variance can be mitigated in a variety of ways. One option is to obtain spectroscopic data sets over a larger number of small but widely-separated fields \citep{Cunha2012}, as in that case the fluctuations in each field will be independent and tend to average out. \citet{Newman2015} propose a baseline survey for future dark energy experiments in which spectroscopy is obtained over 15 widely-separated, 20 arcminute diameter fields. This proposed design would produce total density fluctuations similar to those of the C3R2 survey \citep{Masters2019}, but would require only 1.3 square degrees of sky to be sampled, rather than the $>6$ square degrees (spread over six fields) planned for the latter. A second option is to use spectroscopy to characterize the color-redshift relation in a photometric space that has a large number of bands \citep{Gruen2017}, possibly larger than the wide field survey itself. When the photometry alone constrains redshift well, sample variance largely manifests as a fluctuation of galaxy density in photometric space.
It can thus be mitigated by reweighting according to the density of purely photometric galaxies observed in those same photometric bands, reducing the need for spectroscopic data \citep{Buchs2019}, so long as the volume and number of objects surveyed is sufficient that all cells in the photometric space are well-characterized. At the same time, as the number of photometric bands increases, spectroscopic incompleteness should manifest as a variation of success rates across photometric space \citep{Masters2015} and its impact can thus be better isolated and potentially reduced. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{spectroscopiceffects_trim.pdf} \caption{Illustration of the impact of limitations on spectroscopic training sets on the ability to map out relations between color and redshift. In this toy model, the X and Y coordinates could correspond to measures of galaxy colors in different bands or to a dimensionality-reduced transformation of color space into two dimensions (such as that produced by a self-organizing map). The relationship between galaxy redshift and its position in this color space is determined by evaluating a Gaussian random field; redshifts ranging from 0 to 1 are mapped onto colors ranging from indigo at the lowest $z$ to red at the highest, as shown by the color bar. When large spectroscopic training sets that densely sample the color space are available, as in the panel at left, machine learning methods can easily predict the redshift at any location in this space by interpolation or determination of the distribution of training redshifts in a local neighborhood. When spectroscopic training samples are sparse, as in the middle panel, the relationship of color to redshift will be only poorly determined and photo-$z$ accuracy will be degraded. Finally, in the right panel we emulate the effect of sample variance by selecting galaxies at a rate which is a periodic function of $z$. 
This causes some redshifts to be overrepresented in training sets and others to be underrepresented, resulting in systematic gaps in the coverage of color space as well as biased characterization of $z$ distributions due to some redshifts being favored compared to others. } \label{fig:sparsesampling} \end{figure} \section{MAJOR CHALLENGES FOR NEXT-GENERATION PHOTOMETRIC REDSHIFTS} \label{sec:openissues} A great deal of progress has been made on improving both the performance and characterization of photometric redshift algorithms in recent years. However, upcoming large imaging surveys will reach their full promise only if further advances are made on a variety of fronts. In this section, we focus on areas where current algorithms fall short and the potential to make gains is clear, which can serve as targets for research in the near term. We do not discuss one open issue, the measurement of photometric redshifts for objects with (often time-variable) emission from an active galactic nucleus, as that is reviewed in detail in \citet{Salvato2019}. We first consider issues that primarily affect the performance of photo-$z$ algorithms, and then describe potential sources of problems that primarily influence characterization. As future imaging-based probes of cosmology will require exquisite calibration of redshift distributions, the latter section will focus primarily on characterization requirements for such analyses. The needs for galaxy evolution work will be comparatively easy to meet. \subsection{Challenges for Improving Photometric Redshift Performance} \subsubsection{Interpreting ``Probability Distributions'' from Photometric Redshift Algorithms} \label{sec:pdf} The probability distribution functions (PDFs) produced by photometric redshift codes often do not meet either frequentist or Bayesian expectations (cf.~\autoref{pz}).
A frequentist expects that the true redshift of an object, as measured e.g.~spectroscopically, lies between $z_0$ and $z_1$ in a fraction of trials equal to $\int_{z_0}^{z_1} p(z)\,\mathrm{d}z$. Alternatively, the criterion can be expressed via the Probability Integral Transform (PIT): the values of each object's cumulative distribution function, evaluated at that object's true redshift, should be uniformly distributed between 0 and 1 across the sample. Any inference that depends upon the assumption that PDFs accord with the expectations of frequentist statistics will be biased if that definition is not fulfilled. This failure mode is widespread. \citet{Dahlen2013} found that out of 11 different photometric redshift codes run on data from the CANDELS survey, all of which delivered point estimates with comparable scatter from spectroscopic redshifts, the fraction of objects whose true redshift fell within the 68.3\% confidence region of their PDF ranged from 2.5\% to 89\%, versus the expected 68.3\%. On the other hand, between 2.9\% and 97\% of the time the true redshift fell in the 95.4\% confidence region (and even when a code came close for the 68\% region the fraction within the 95\% region was badly off, and vice versa). \citet{Schmidt2020} performed a test of photometric redshift codes on simulated Rubin Observatory LSST data, in which a large and perfectly representative training set was provided for training-based algorithms, and the actual template sets used to generate photometry were made available for template-based methods. Even in this best-case scenario, all current photo-$z$ codes fell short at providing accurate PDFs in comparison to a control method. The latter, named \texttt{trainZ}, was designed to return a maximally broad yet perfectly frequentist $p(z)$, identical for each galaxy in the sample, corresponding to the histogram of the redshifts of all galaxies from the representative training set.
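The PIT criterion is straightforward to evaluate numerically. The following toy sketch, in which each PDF is a Gaussian and all numbers are illustrative, contrasts a well-calibrated set of PDFs with an overconfident one, whose PIT values pile up near 0 and 1:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 5000
z_true = rng.uniform(0.2, 1.2, n)
z_est = z_true + rng.normal(0.0, 0.05, n)  # point estimates with scatter 0.05

def pit(z_true, z_est, sigma_claimed):
    # PIT = each object's claimed CDF evaluated at its true redshift
    return stats.norm.cdf(z_true, loc=z_est, scale=sigma_claimed)

pit_good = pit(z_true, z_est, sigma_claimed=0.05)  # width matches the scatter
pit_bad = pit(z_true, z_est, sigma_claimed=0.02)   # overconfident width

# Kolmogorov-Smirnov distance from uniformity: small only for the calibrated case
print("KS distance, calibrated:   ", stats.kstest(pit_good, "uniform").statistic)
print("KS distance, overconfident:", stats.kstest(pit_bad, "uniform").statistic)
```

In real analyses the PDFs are not Gaussian, but the same test applies to any gridded $p(z)$.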
For any other code, the deviations of the PIT distribution from the expected uniform distribution were larger by an order of magnitude or more than those of \texttt{trainZ}, as illustrated in \autoref{fig:DC1}; this degree of inaccuracy would greatly compromise cosmological inference from future surveys. Conversely, the redshift performance for individual objects from \texttt{trainZ} would be badly insufficient for most analyses. The reasons for these shortcomings can be either errors in the redshift PDF estimation process, among them the issues discussed in this review, or incorrect interpretation of the outputs that a photo-$z$ code produces. The latter can occur if the output of a photometric redshift code is not actually \emph{intended} to meet the frequentist definition. The Bayesian definition of probability as a degree of belief that a value lies in some range (cf. \S \ref{sec:definitions}) is harder to test quantitatively. However, even codes which are nominally Bayesian sometimes fail to properly marginalize probability over parameter uncertainties (e.g., template types), causing them not to match any statistical definition of a PDF. As described in \S \ref{sec:templates}, template-based methods typically compute posterior probability distribution functions for galaxies via a simple application of Bayes' theorem. When this is done with a set of templates and priors for each that are matched to the true distribution of galaxy SEDs, with proper marginalization over all templates and redshifts and accounting for selection effects, the output should fulfill the frequentist definition of a PDF. However, when the output is calculated from (for instance) the likelihood of the best-fitting template at a given redshift, rather than marginalizing over templates, it is incorrect to interpret outputs as either frequentist or Bayesian PDFs.
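The consequences of skipping that marginalization can be seen in a toy calculation; the two color-redshift tracks, the observed color, and the uncertainty below are all invented for illustration:

```python
import numpy as np

z_grid = np.linspace(0.0, 2.0, 401)
dz = z_grid[1] - z_grid[0]

def color_model(t):
    # hypothetical color-redshift tracks for two toy "templates"
    return 1.5 * z_grid if t == 0 else 0.5 + 0.8 * z_grid

def likelihood(c_obs, sigma_c, t):
    return np.exp(-0.5 * ((c_obs - color_model(t)) / sigma_c) ** 2)

def posterior_marginalized(c_obs, sigma_c, prior_t=(0.5, 0.5)):
    # p(z|c) proportional to sum_t p(t) L(c|z,t): proper marginalization
    p = sum(pt * likelihood(c_obs, sigma_c, t) for t, pt in enumerate(prior_t))
    return p / (p.sum() * dz)

def posterior_best_template(c_obs, sigma_c):
    # common shortcut: keep only the best-fitting template at each redshift;
    # the result cannot be interpreted as a frequentist or Bayesian PDF
    p = np.maximum(likelihood(c_obs, sigma_c, 0), likelihood(c_obs, sigma_c, 1))
    return p / (p.sum() * dz)

p_marg = posterior_marginalized(1.0, 0.1)
p_best = posterior_best_template(1.0, 0.1)
print("max |difference| between the two PDFs:", np.abs(p_marg - p_best).max())
```

Where the two templates fit comparably well, summing their likelihoods (weighted by the prior) distributes probability differently than taking the pointwise maximum, so the two "PDFs" differ even after normalization.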
Even if one were to correctly marginalize over an appropriate model for templates and priors that is itself uncertain, the output could be correct in a Bayesian sense, but would not match the frequentist definition. For instance, when a set of discrete templates that are not evenly distributed within the underlying parameter space are all given equal prior probability, the result corresponds to an (incorrect) implicit prior on template type, with regions of parameter space having more templates getting extra weight in computing the PDF. It is even less clear how to properly perform marginalization when likelihoods are computed between the observed colors and the best-fit linear combination of templates, which is a common procedure. Although they generally do not compute either a posterior or likelihood directly, machine learning-based methods still often output a redshift PDF, potentially incorporating both measurement uncertainties and factors that contribute to the spread in redshift at fixed color. A variety of techniques exist for this \citep[see, e.g.,][]{Sadeh2016,QuantileRegressionPZ}, for which there is often no expectation that they meet the frequentist criterion. Even if the loss (i.e., the quantity that a machine learning algorithm is optimized to minimize) incorporates measures of PDF-ness, that loss will be minimized across the entire training set, but may yield biases for specific subsets of the training domain even if the distribution over the full sample appears consistent with expectations. Unlike typical template-based or machine learning approaches, the techniques employed in current-generation (Stage III) analyses are constructed to return frequentist PDFs if a set of underlying assumptions is valid (\S \ref{sec:stage3}), but are not necessarily consistent with Bayesian approaches.
\citet{Bordoloi2010} addressed the mismatch between photo-$z$ PDFs and the frequentist definition by remapping the PDFs of all objects according to the global PIT distribution for galaxies with known redshifts, redistributing probability for each PDF such that the PIT distribution will be uniform for the overall spectroscopic sample by construction. This procedure corrects the individual PDFs accurately, \textit{if} the set of objects with known redshifts is representative of the set of objects to which the corrections are applied, and if the degree of mis-specification of PDFs does not depend on object properties (e.g., brightness, intrinsic SED, or redshift). Those assumptions do not hold in general: the spectroscopic samples used to retune the PIT will typically be biased and incomplete compared to photometric samples, and (for instance) the contributions of incorrectly estimated photometric errors and of incorrect handling of templates will have different impact for galaxies of different brightnesses or different rest-frame colors. In such scenarios, the overall PIT distribution for the entire spectroscopic sample may match the ideal case even while the PDFs for particular subsets of the sample are poorly calibrated (cf. \citealt{Zhao2021}). This can be addressed by predicting the PIT distribution at all points in parameter space via machine learning regression \citep{Zhao2021} and then correcting each object's PDF according to the PIT prediction evaluated at its location. This procedure has produced good results in initial tests \citep{Dey2021a}. If spectroscopic samples are biased, this procedure may still fail to yield accurate PDFs, however; it would be even better to develop a fundamental understanding of \textit{why} current codes fall short of the ideal, and to then develop methods constructed such that they provide well-defined PDF outputs.
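A minimal sketch of such a global remapping is given below; the gridded-PDF representation and the histogram estimate of the PIT density are our own simplifications, not the exact published procedure. Since the remapped CDF of each object is $G(F(z))$, with $F$ the original CDF and $G$ the cumulative PIT distribution of the reference sample, the recalibrated PIT values for that sample are uniform by construction (up to binning):

```python
import numpy as np

def recalibrate_pdfs(z_grid, pdfs, ref_pdfs, z_ref, n_bins=20):
    """Remap gridded PDFs (rows of `pdfs`) so that the PIT distribution of a
    reference sample (`ref_pdfs`, true redshifts `z_ref`) becomes uniform.
    A simplified sketch of the global-remapping idea."""
    dz = z_grid[1] - z_grid[0]
    # PIT values of the reference sample under the original PDFs
    ref_cdfs = np.cumsum(ref_pdfs, axis=1) * dz
    idx = np.clip(np.searchsorted(z_grid, z_ref), 0, len(z_grid) - 1)
    pit = ref_cdfs[np.arange(len(z_ref)), idx]
    # empirical PIT density f(u); flat for well-calibrated PDFs, skewed otherwise
    f_u, edges = np.histogram(pit, bins=n_bins, range=(0.0, 1.0), density=True)
    # remap each PDF: p'(z) = p(z) * f(F(z)), so the new CDF is G(F(z))
    cdfs = np.cumsum(pdfs, axis=1) * dz
    u_bin = np.clip(np.digitize(cdfs, edges) - 1, 0, n_bins - 1)
    new = pdfs * f_u[u_bin]
    return new / (new.sum(axis=1, keepdims=True) * dz)
```

If the reference sample is unrepresentative, or the degree of miscalibration varies with object properties, the globally remapped PDFs can remain poorly calibrated for subsets of the sample, which is precisely the limitation discussed above.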
\ifarxiv \begin{marginnote} \entry{Take-away}{Current photo-$z$ codes optimized for predicting redshifts of individual objects fail to produce outputs that fulfill the frequentist definition of a PDF; Bayesian methods have also fallen short of the ideal.} \end{marginnote} \fi \begin{figure}[htp!] {\includegraphics[width=0.95\textwidth]{alex_pit_qq_trim.pdf}} {\caption{Photometric redshift codes which deliver good performance generally fail to produce results which match the frequentist definition of a PDF. Plotted are results for the best-performing template-based code in the controlled tests of \citet{Schmidt2020}, BPZ \citep{Benitez2000}; the best-performing training-based code, FlexZBoost \citep{Dalmasso2020}; and a control method developed for that work which has pessimal performance at predicting redshifts for individual objects but delivers well-characterized PDFs when given ideal training sets, \texttt{trainZ}. Blue histograms (corresponding to the gray axis labels) show the distribution of the Probability Integral Transform (PIT) statistic in the tests by \citet{Schmidt2020}, which should follow a uniform distribution for a proper frequentist PDF. Green curves (associated with the green axis labels) show the Quantile-Quantile (QQ) plot for each method, which shows the fraction of the time that the actual redshift of an object is below a given quantile in the redshift PDF ($Q_{data}$) as a function of the chosen quantile ($Q_{theory}$); ideally, this should fall along the dashed diagonal unity line. Figure adapted with permission from \citealt{Malz_thesis}. } \label{fig:DC1}} \vskip-0.15in \end{figure} \subsubsection{Combining Results from Multiple Methods} It has repeatedly been found that better photometric redshift performance can be attained by combining information from multiple photometric redshift codes than by considering only results from one, even when the same input data is used for each. 
For instance, \citet{Gorecki2014} found that neural-network-based and template-based codes exhibited catastrophic redshift errors for \textit{different} objects. As a result, requiring consistency between the results of two very different codes can produce much better performance than samples where only one code is used in isolation. More strikingly, \citet{Dahlen2013} found that even when only template codes are considered, combining results can improve performance, likely related to the use of complementary templates between the codes. In that work, the median redshift prediction from the five best-performing codes yielded both smaller scatter and smaller outlier rates than any single code achieved. It is clear that with current methods, no single photo-$z$ code performs best for all objects, allowing performance to be improved if the strengths of different codes can all be exploited. It is straightforward to apply a median or some other algorithm for defining a consensus value to combine point estimates for the redshift of an object. However, it is also possible to combine posterior PDF estimates from different codes to produce a single PDF that incorporates information from each. The fact that each code's estimate is influenced by its particular (perhaps inaccurate) choices can at the same time be an opportunity for more robust results when multiple results are combined. As can be seen in \autoref{fig:z-H_heatmap}, a variety of template-based codes which all yield excellent performance when tested on galaxies with spectroscopic redshifts \citep{Kodra2019} yield PDFs which correspond to very different redshift distributions from each other for faint objects. \citet{Dahlen2013} presented a ``hierarchical Bayesian'' method for combining such disparate PDFs based upon techniques employed in \citet{Licquia2015}; similar methods were introduced in other contexts in \citet{Press1997}, \citet{Newman1999} and \citet{Lang2012}. 
This algorithm calculates a posterior probability distribution for the redshift under the assumption that an unknown fraction $f_{bad}$ of the PDFs being combined are false and contain no information. A free parameter, $\alpha$, describes the degree of covariance between the outputs of different codes. $\alpha=1$ corresponds to the case where there is no covariance and PDFs may be treated as statistically independent, so that the posterior PDF is the product of the input PDFs; $\alpha=1/N$, where $N$ is the number of results combined, would instead correspond to the case where all estimates are completely redundant. \begin{figure}[htp!] {\includegraphics[width=0.95\textwidth]{z-H_heatmap.pdf}} {\caption{Even photo-$z$ codes which yield excellent performance for galaxies with known spectroscopic redshifts clearly disagree when applied to faint objects. Each panel shows averages of the photometric redshift PDF for objects in bins of $H$-band magnitude for five different template-based photo-$z$\ implementations applied to data from the CANDELS survey \citep{Grogin2011}, with lighter colors corresponding to higher average probability. The resulting redshift distribution estimates are plotted using a linear y-axis scale for $0 < z_{phot} < 1$ and a log scale for $1 < z_{phot} < 10$. At the lowest redshifts, differences in the assumed priors produce dramatically different distributions. At higher $z$ the codes do not agree on the redshifts at which overdensities of galaxies are found (which are visible as horizontal features in each panel), despite all exhibiting small scatter when compared to available samples of spectroscopic redshifts. Figure provided by Dritan Kodra (priv.~comm.). 
} \label{fig:z-H_heatmap}} \vskip-0.15in \end{figure} The results of the hierarchical Bayesian combination will be tighter than the input PDFs where the results all agree, so long as $\alpha > 1/N$; however, for objects for which the input PDFs disagree, the posterior PDF that results will be broader than any individual input PDF, reflecting this uncertainty. For the CANDELS test data, the most probable value of $\alpha$ (when marginalizing over all other parameters) was $\alpha=1/2.1$; i.e., even when provided identical input photometry, the PDFs from different template-based codes behave somewhat more like independent estimates than redundant ones. Using a hierarchical Bayesian combination of input PDFs will tend to complicate the deviations of the posteriors from meeting the statistical definition of a PDF, as different codes will contribute differently for different objects, and the choice of how to treat `noninformative' results can substantially change the output PDF: if the fit value of $f_{bad}$ is non-zero, and if bad measurements are treated as completely noninformative (corresponding to a uniform PDF across all redshifts), the hierarchical Bayesian result will have non-zero probability at all $z$. Kodra et al. (in prep.; cf. \citealt{Kodra2019}) introduce a new method to combine photo-$z$ PDFs which avoids this problem: the use of Fr\'echet means. Essentially, given a set of photo-$z$ PDFs for a given object, the Fr\'echet mean PDF will be the one with the smallest total distance from all the other PDFs considered, integrated over all redshifts. It is thus a functional analog of the median (which minimizes the sum of absolute values of the deviations from a set of data values) or the arithmetic mean (which minimizes the sum of the squares); similarly, Kodra et al. find the PDF for each object which minimizes the total $\textsl{l}^1$ or $\textsl{l}^2$ norm across the results from all codes. 
Since the Fr\'echet mean PDF must be one of the inputs, if the input PDFs all fulfill the frequentist definition of a PDF, so will the result of this process. Kodra et al. have tested this method using independent sets of spectroscopic or grism redshifts in the CANDELS fields, and found that the Fr\'echet mean of the four best-performing codes that provided PDFs yielded results that come closer to meeting the frequentist definition of a PDF than the outputs of any individual code. The fact that it is possible to realize gains in photo-$z$ performance by combining results from multiple methods suggests that further improvements in photo-$z$ techniques are clearly possible; ideally, only one code would be needed to produce the best results. Lacking such an ultimate code, the question remains how best to do that combination. Ultimately, this is a choice of what to optimize for: we may desire methods that minimize the typical deviations of point estimates from their true values, reduce the frequency of outliers, present us with a conservative range of possibilities for the redshift of an object, or provide PDFs that best fulfill the statistical definition. Each of these goals would lead us to a different method of combination. As long as we can define a loss function that we wish to minimize, this can be treated as a machine learning problem. There exist a variety of ``ensemble'' machine-learning methods that take as input the predictions from separate machine learning models and then output a result based on those inputs which minimizes the desired loss. As the concept is general, the final prediction could come from simple regression methods, decision trees, or deep neural networks and still fit within this framework. This remains an active area of machine learning research, but to date such methods have only begun to be explored in astrophysics \citep{Vilalta2017}. 
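The two combination schemes discussed in this subsection can be sketched in a few lines of code. In the toy below, each input PDF is tabulated on a common redshift grid; the specific mixture form used for the hierarchical product and the synthetic Gaussian inputs are illustrative assumptions, not the published implementations:

```python
import numpy as np

def hb_combine(pdfs, dz, f_bad=0.05, alpha=0.5):
    """Hierarchical-Bayes-style combination: mix each input PDF with a
    uniform 'noninformative' component (weight f_bad), then multiply the
    mixtures with exponent alpha to model covariance between codes."""
    pdfs = np.asarray(pdfs)
    uniform = np.full(pdfs.shape[1], 1.0 / (pdfs.shape[1] * dz))
    log_post = alpha * np.sum(np.log(f_bad * uniform + (1 - f_bad) * pdfs), axis=0)
    post = np.exp(log_post - log_post.max())
    return post / (post.sum() * dz)

def frechet_mean(pdfs, dz, power=1):
    """Frechet mean under the l^power metric: the input PDF with the smallest
    summed distance to all the others (the result is always one of the inputs)."""
    pdfs = np.asarray(pdfs)
    cost = [np.sum(np.abs(pdfs - p) ** power) * dz for p in pdfs]
    return pdfs[int(np.argmin(cost))]

z = np.linspace(0.0, 3.0, 601)
dz = z[1] - z[0]

def gauss(mu, sig):
    p = np.exp(-0.5 * ((z - mu) / sig) ** 2)
    return p / (p.sum() * dz)

inputs = [gauss(0.9, 0.10), gauss(1.0, 0.12), gauss(1.3, 0.20)]
post = hb_combine(inputs, dz)     # tightened where the inputs overlap
fmean = frechet_mean(inputs, dz)  # here, the middle input PDF
```

Because the Fr\'echet mean is always one of the inputs, it inherits whatever frequentist calibration the input PDFs possess, which is the property exploited by Kodra et al.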
\ifarxiv \begin{marginnote} \entry{Take-away}{Because photo-$z$ codes make analysis choices that fall short in different ways, combining PDFs from multiple methods can provide high-performing results, but better techniques for combination are still needed.} \end{marginnote} \fi \subsubsection{Improving Joint Inference of Redshift and Physical Properties from Photometry} \label{sec:phys_props} The observed colors of a galaxy depend not only upon its redshift, but also on its intrinsic physical properties such as its history of star formation (commonly summarized via the total stellar mass, current star formation rate, and stellar metallicity) and the amount and nature of dust attenuation along the line of sight. Some template-based codes can exploit this fact to determine multi-dimensional posterior probability distributions for redshift and a variety of intrinsic galaxy properties simultaneously (e.g., BEAGLE, \citealt{Chevallard2016}, and BAGPIPES, \citealt{Carnall2018}; see also \citealt{Acquaviva2011}). We can think of these codes as not producing just a PDF for redshift, $p(z\, | \,{\rm photometry})$, but rather for the combination of redshift and a vector of galaxy properties, \textbf{G}: i.e., $p(z, \mathbf{G} \,|\, {\rm photometry})$. A good example of such a multidimensional PDF is presented in Figure 8 of \citet{Chevallard2016}. Such a joint probability distribution can encode everything that we can infer about the nature of a given galaxy from its observed photometry. However, there are two current limitations on such analyses: our limited ability to predict the observed spectrum of a galaxy from its physical properties, and the extensive run time required per object for such an analysis. We consider these separately. Inference of physical properties from the observed SED of a galaxy will generally rely on population synthesis models, which predict the combined spectrum of a population of stars of varying masses and formation times \citep{Conroy2013}. 
Given an initial mass function of stars and a star formation history, the distribution of intrinsic properties of stars that should exist at a given time can be predicted from theoretical isochrones, and then the spectra of the resulting set of stars can be combined to produce a prediction for the overall SED. We can compare the SEDs predicted for different star formation histories to the observed properties of a galaxy in order to constrain the values of its intrinsic parameters (total stellar mass, star formation rate, etc.) in either a likelihood or Bayesian framework. However, the predictions from population synthesis models will only be as accurate as the inputs to those models are. Observed isochrones have been characterized well and the initial mass function has been studied extensively in conditions that can be explored within our Galaxy (e.g., higher-metallicity, young stellar clusters and lower-metallicity, old globular clusters); however, extragalactic objects may fall within other regions of parameter space. Similarly, the libraries of observed stellar spectra that may be used for population synthesis are limited both in their sampling of different stellar types -- including some populations of rare but luminous stars that can have a large impact on the SED \citep{Conroy2010} -- and in their wavelength coverage. Synthetic stellar spectra can alternatively be used, but the required modeling remains a difficult challenge; empirical and theoretical stellar spectral libraries produce differing results \citep{Coelho2019}. These issues can all cause systematic biases in the inference of physical parameters from observed photometry. However, even if population synthesis models were perfect, the computational complexity of determining the multi-dimensional joint probability distribution of galaxy properties and redshift is daunting. 
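The forward-modeling step described above can be written schematically: a predicted SED is a mass-weighted sum of single-age population spectra, which can then be compared to data via a likelihood. The "SSP" spectral shapes and star-formation-history weights below are invented placeholders, not real population synthesis outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
wave = np.linspace(0.3, 1.0, 200)        # observed wavelength grid, microns

# Placeholder single-age population spectra: one row per age bin, with the
# spectral shape shifting redward for older populations (purely illustrative)
ages = [0.1, 1.0, 5.0, 10.0]             # Gyr
ssp = np.array([np.exp(-((wave - 0.35 - 0.05 * i) ** 2) / 0.02)
                for i in range(len(ages))])

def predict_sed(sfh_weights):
    """Galaxy SED = mass-weighted sum of the single-age spectra."""
    return np.asarray(sfh_weights) @ ssp

def chi2(sfh_weights, observed, sigma):
    """Goodness of fit of a trial star-formation history to observed data."""
    return np.sum(((observed - predict_sed(sfh_weights)) / sigma) ** 2)

truth = np.array([0.5, 0.3, 0.15, 0.05])  # a young-star-dominated history
observed = predict_sed(truth) + rng.normal(0.0, 0.01, wave.size)

good = chi2(truth, observed, 0.01)        # ~ number of data points
bad = chi2(truth[::-1], observed, 0.01)   # an old-dominated history fits poorly
```

A sampler exploring the posterior must evaluate `predict_sed` (in reality, a full population synthesis calculation) at every proposed parameter vector, which is the source of the computational cost discussed below.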
In general, sampling-based techniques (Markov Chain Monte Carlo [MCMC] or related algorithms) are used to characterize distributions over this parameter space efficiently. Even so, for each object, complex population synthesis models must be evaluated many thousands of times in order to characterize the joint posterior probability distribution $p(z, \mathbf{G} \,|\, {\rm photometry})$. With current computational resources, it would be infeasible to apply such methods to the billions of objects that will be cataloged by upcoming surveys. There are two potential routes to make progress on this problem: we can either speed up sampling-based inference, or bypass it entirely. Methods have already begun to take advantage of newer sampling algorithms; e.g., BAGPIPES utilizes the MultiNest nested sampling algorithm, which performs better than MCMC for degenerate or multimodal probability distributions as are commonly encountered in this application \citep{Feroz2009}. Still greater speed improvements may be possible by substituting a deep neural network-based emulator trained to match population synthesis model calculations in place of new calculations at each sampling step. Such emulators can take extensive time to train but evaluate extremely rapidly \citep{Kasim2020}, making them well-suited for applications where large numbers of objects will be analyzed. Still greater speed-ups would be possible if we avoid sampling at all: deep neural networks could instead be utilized to predict physical parameters from the observed SED directly. This application would be similar in spirit to past work which used deep learning methods to fit models trained from simulations to strong lens observations \citep{Hezaveh2017} or gravitational wave signals \citep{George2018}. 
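The emulator idea can be illustrated with a deliberately simple stand-in: below, a polynomial surrogate replaces a slow model, where in practice a deep neural network would play the surrogate role and the "slow model" would be a population synthesis calculation (both functions here are invented toys):

```python
import time
import numpy as np

def slow_model(theta):
    """Stand-in for an expensive population synthesis evaluation."""
    time.sleep(1e-4)                    # pretend each call takes real time
    return np.sin(3.0 * theta) + 0.5 * theta ** 2

# One-time "training": evaluate the slow model on a parameter grid, then fit
# a cheap surrogate that can be called millions of times inside a sampler
train_theta = np.linspace(-1.0, 1.0, 50)
train_y = np.array([slow_model(t) for t in train_theta])
emulator = np.poly1d(np.polyfit(train_theta, train_y, deg=9))

# The surrogate reproduces the slow model closely on the interior of the
# training range, at a tiny fraction of the evaluation cost
test_theta = np.linspace(-0.9, 0.9, 1000)
max_err = np.max(np.abs(emulator(test_theta)
                        - (np.sin(3.0 * test_theta) + 0.5 * test_theta ** 2)))
```

The trade-off is the one noted in the text: training is expensive and must cover the relevant parameter space, but each subsequent evaluation is nearly free.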
Uncertainties on the physical parameters could then be determined by measuring the derivatives of the likelihood around the point in parameter space predicted by the machine learning analysis; neural network-based emulators of population synthesis models could be used to calculate those derivatives rapidly. This speed advantage would come at a cost: the derivatives about the peak would provide a local approximation to the likelihood surface around it, but would not capture the effects of parameter degeneracies or secondary maxima which often occur in the inference of galaxy parameters. Nevertheless, if we wish to characterize the physical parameters for all objects from upcoming surveys, this may prove to be the most feasible option for the near future. \ifarxiv \begin{marginnote} \entry{Take-away}{Photometry can be used to jointly constrain many galaxy physical parameters along with redshift, but this will be computationally prohibitive to perform with the large samples from upcoming surveys unless methods are improved.} \end{marginnote} \fi \subsubsection{Storing Multi-Dimensional Probability Distributions} Characterizing joint probability distributions for both redshift and galaxy properties -- $p(z, \mathbf{G} \,|\, {\rm photometry})$, as we defined it in \S \ref{sec:phys_props} -- poses difficulties that go beyond just computing the PDF. If methods for measuring these joint distributions are to be applied to the samples of billions of objects that will be cataloged by upcoming surveys, it quickly becomes infeasible to store the results directly. While a variety of methods have been produced to reduce the storage needs for one-dimensional photometric redshift probability distributions \citep{Carrasco2014,Malz2018}, those methods break down in multiple dimensions. 
If we consider a five-dimensional grid of redshift, stellar mass, specific star formation rate (i.e., star formation rate per unit mass), metallicity, and dust extinction -- a standard set of parameters to estimate from photometry -- using a 40-element grid for each dimension would mean that $10^8$ floating point numbers are needed to store $p(z, \mathbf{G} \,|\, {\rm photometry})$ \textit{for each individual object in the catalog}, corresponding to nearly an exabyte of storage per billion objects. This level of storage would cost millions of dollars per month to host at 2021 prices. Adding additional parameters (e.g., to allow for variation in dust attenuation curves or greater diversity in star formation histories) would only make this problem worse. It is clear that alternative methods are needed if such multi-dimensional posteriors are desired for large samples of objects. One option is simply to store a limited number of samples from the posterior PDF for each object. These samples would include the effects of all covariances between parameters, and can be used to construct aggregate distributions in parameter space by combining the samples from all objects of interest (though it is worth noting that to predict aggregate distributions one should combine likelihoods, not posteriors; cf. \citealt{Malz2021}). This option is quite inexpensive to store, requiring $N_{samples} \times N_{properties}$ numbers to be stored per object, where $N_{samples}$ is the number of samples from the posterior PDF that are stored (e.g., 10 or 100), and $N_{properties}$ is the number of different properties recorded for each sample (redshift included). However, a limited set of samples will provide correspondingly limited fidelity in describing the overall posterior probability distribution for individual objects. A second option is to be able to calculate posterior PDFs so quickly that storage is not needed. 
This would require many orders of magnitude of speed-up compared to current algorithms. A promising option may be to develop deep learning-based emulators of the entire posterior fitting process: although such methods would require millions of examples and considerable CPU resources to train, once that is completed they would take minimal time per object to run. \ifarxiv \begin{marginnote} \entry{Take-away}{The cost of storing joint many-dimensional PDFs for large samples would be prohibitive with current methods.} \end{marginnote} \fi A third possibility would be to exploit the fact that, while the joint distribution of redshift and galaxy properties is broad, in many cases (e.g., specific star formation rate or mass-to-light ratio) the distribution of a property \textit{conditioned on the redshift and observed photometry} is quite narrow; that is, if one knows (or assumes) the value of the redshift, the quantity of interest may be determined from that value and the observed photometry with only small scatter. In that case, we can store a one-dimensional PDF for redshift, $p(z\, | \,{\rm photometry})$, as well as a compact description for $p(\mathbf{G}\, |\, z, {\rm photometry})$; the product of those two probability distributions will be the $p(z, \mathbf{G} \,|\, {\rm photometry})$ we desire. As an example, if the conditional distribution at a single $z$ is well-described by a multi-dimensional Gaussian in parameter space, we might store the coefficients of polynomial fits to the parameters of that Gaussian as a function of $z$; that could be combined with $p(z\, | \,{\rm photometry})$ to reconstruct the full multi-dimensional PDF. The most simplified version of this would be to only store the peak position and derivatives about the peak (or, equivalently, the best-fit solution and the covariances between all parameters), as would result from the deep learning-based inference of properties described in \S \ref{sec:phys_props}. 
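A back-of-the-envelope comparison of these storage options is easy to make. The assumptions below (8-byte double-precision values, a $40^5 \approx 10^8$-cell grid, 100 posterior samples of 5 properties, and cubic polynomial fits for the conditional Gaussian parameters) are illustrative choices, not requirements:

```python
BYTES = 8               # double-precision values assumed
N_OBJ = int(1e9)        # one billion cataloged objects

# Option 1: full 5-D grid over (z, stellar mass, sSFR, metallicity, dust)
grid_per_obj = 40 ** 5 * BYTES
# Option 2: 100 posterior samples of 5 properties each
samples_per_obj = 100 * 5 * BYTES
# Option 3: 1-D p(z) on 100 bins, plus cubic polynomials (4 coefficients each)
# for the 4 conditional means and 10 covariance entries as functions of z
compact_per_obj = (100 + (4 + 10) * 4) * BYTES

for name, b in [("grid", grid_per_obj), ("samples", samples_per_obj),
                ("compact", compact_per_obj)]:
    print(f"{name:8s}: {b:>12,d} B/object = {b * N_OBJ / 1e15:10.4f} PB total")
```

Even the sampled or compact representations remain terabyte-scale for a billion objects, but that is roughly five orders of magnitude smaller than the full grid.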
If we wish full, high-dimensional posteriors to be stored for large numbers of objects (and not only samples from those posteriors), though, new methods will be necessary. \subsubsection{Incorporating Morphological Information} \label{sec:morphology} Most photometric redshift algorithms only utilize integrated measurements of flux or color as inputs. However, well-resolved galaxy images contain more information than just flux. For instance, a galaxy's morphological type exhibits strong correlations with its restframe spectral energy distribution; e.g., elliptical galaxies in the local universe exhibit de Vaucouleurs-like light profiles and red restframe colors characteristic of old, passively evolving stellar populations. If a spiral galaxy's image is well-resolved, the redder characteristic colors and stronger spectral breaks exhibited by its older bulge can provide increased information for a photometric redshift algorithm to exploit, even if that component is dominated by the younger, blue disk in integrated colors. Additionally, the observed angular size of a galaxy of fixed physical size will vary quickly with redshift at low $z$, though only slowly at high $z$; as a result, size information can constrain the possible redshift of a galaxy (e.g., a galaxy observed to be several arcminutes across could not plausibly be at a high redshift). Furthermore, the observed bolometric surface brightness of a galaxy is dimmed by a factor of $(1+z)^4$ compared to its restframe value \citep{Tolman1930,Hubble1935}. As a result, at higher redshifts the combination of galaxy brightness and size could provide additional information about redshift; this has been exploited for photometric redshift applications previously \citep{Stabenau2008}. However, galaxy evolution could easily obscure the surface brightness signal, making it difficult to exploit. 
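Both geometric effects can be computed in a minimal flat-$\Lambda$CDM sketch; the cosmological parameters ($H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.3$) and the 10 kpc fiducial galaxy size are assumptions chosen for illustration:

```python
import numpy as np

H0, OMEGA_M = 70.0, 0.3        # assumed flat LambdaCDM parameters
C_KMS = 299792.458             # speed of light, km/s
D_H = C_KMS / H0               # Hubble distance, Mpc

def angular_diameter_distance(z, n=20000):
    """D_A = D_C / (1 + z) in a flat universe (trapezoidal quadrature)."""
    zp = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(OMEGA_M * (1.0 + zp) ** 3 + (1.0 - OMEGA_M))
    d_c = D_H * (zp[1] - zp[0]) * (inv_e[0] / 2 + inv_e[1:-1].sum() + inv_e[-1] / 2)
    return d_c / (1.0 + z)

def apparent_size_arcsec(z, physical_kpc=10.0):
    """Angular size of a fixed-physical-size galaxy, in arcseconds."""
    return physical_kpc / 1000.0 / angular_diameter_distance(z) * 206265.0

for z in (0.1, 0.5, 1.0, 2.0, 4.0):
    theta = apparent_size_arcsec(z)
    dimming = (1.0 + z) ** 4   # Tolman surface-brightness dimming factor
    print(f"z={z:3.1f}: 10 kpc spans {theta:5.2f} arcsec, SB dimmed {dimming:6.1f}x")
```

The apparent size shrinks rapidly out to $z \sim 1$ but is nearly constant (and in fact slowly growing) beyond the minimum of the angular diameter distance, which is why size is informative at low $z$ but nearly degenerate with redshift at high $z$.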
Deep neural network-based methods that use images of a galaxy in multiple bands as inputs can exploit all of these phenomena, and have made considerable progress since their first application in \citet{Hoyle2016}. Such methods are now outperforming even the best algorithms which use color information alone when applied to the SDSS Main Galaxy Sample, with $\sim 40\%$ reduction in photometric redshift scatter and even larger gains in catastrophic outlier rates \citep{Pasquet2019,Hayat2020,Henghes2021}, as can be seen in \autoref{fig:morphology}. This clearly demonstrates that additional redshift information is available in galaxy images beyond what integrated photometry can capture. However, many open questions remain. First and foremost is whether such methods can continue to yield gains for fainter objects at higher redshifts. In that domain, galaxies will only be marginally resolved from the ground, greatly reducing the information available, and training sets will be much smaller and sparser (generally a challenge for deep learning methods). It is clear that at least some improvements are possible when morphological information is utilized even in this domain; for instance, photo-$z$ errors at $z \sim 0.2-1$ are $\sim 10\%$ lower when basic morphological information is provided to a random forest algorithm than when only integrated photometry is used \citep{Zhou2021}. The Nancy Grace Roman Space Telescope will provide multi-band resolved images even for higher-redshift galaxies in the future, with sufficient angular resolution to probe similar physical scales at $z \sim 2$ as SDSS imaging does at $z \sim 0.1$. However, at such redshifts the relationship between galaxy color and morphology can be very different from today. For instance, low star-formation rate but massive galaxies generally show clear evidence for disk components at $z\sim 2$, but that is rarely seen in quiescent objects locally \citep{Chevance2012,Chang2013}. 
As a result, it is not yet clear whether deep learning methods could offer similar gains at higher redshifts as they do for nearby galaxies from SDSS, even with the high resolution of space-based imaging. It also remains uncertain whether summary statistics can be identified that could measure morphological characteristics with minimal information loss compared to processing the entire image of each galaxy via a deep neural network, such that a wider array of machine learning methods can be used. How best to incorporate morphological information into template-based methods (e.g., via morphology-dependent priors on template type or size-dependent priors on redshift) has also not yet been determined. However, even with these unresolved questions, the recent gains in photometric redshift performance that deep learning methods have made possible for nearby galaxies from SDSS, after more than a decade when many different algorithms all yielded results of similar quality, offer tantalizing hope that it could be possible to make improvements in photometric redshifts for fainter objects as well. \ifarxiv \begin{marginnote} \entry{Take-away}{Exploiting morphological information has yielded significant improvements to photometric redshift performance at low redshift; it remains to be seen whether similar gains can be achieved at higher $z$.} \end{marginnote} \fi \begin{figure}[t] \centering \includegraphics[width=0.85\textwidth]{performance_vs_data_ARAA.pdf} \caption{An example of the improvements in photometric redshift performance that incorporation of morphological information can enable. Blue curves show the results from the deep neural network-based methods applied by \citet{Pasquet2019}, \citet{Beck2016}, and Dey et al. (in prep.), which use galaxy images as inputs, for training sets of different sizes; the red symbol shows the performance from the algorithm of \cite{Beck2016}, which employs only galaxy magnitude measurements to predict redshift. 
When large training sets are available, morphological information can be exploited to yield better performance, with smaller normalized median absolute deviation between photometric and spectroscopic redshifts, $\sigma_{\mathrm{NMAD}}$, as shown in the left panel, as well as smaller catastrophic outlier rates, $\mathrm{f}_{\mathrm{outlier}}$, as shown at right. Figure provided by Biprateep Dey (priv. comm.). } \label{fig:morphology} \end{figure} \subsubsection{Photometric Redshifts at Very Low $z$} \label{sec:lowz} A somewhat surprising challenge that has been encountered in recent work \citep[e.g.,][]{Mao2021} is the comparatively poor performance of both template-based and training-based techniques at the lowest redshifts ($z < 0.05-0.1$). This problem has many sources. The first and foremost challenge is that there is very little volume of the Universe at very low $z$, so the lowest-redshift galaxies have a correspondingly low surface density on the sky. There are only $\sim 10$ $z<0.02$ galaxies (or $\sim 100$ $z<0.05$ objects) per square degree down to $r=24$ \citep{MSEScienceBook}, as seen in \autoref{fig:lowz}. As a result of their rarity, deep surveys contain very few low-$z$ galaxies, while wide-field surveys contain only the very brightest objects, which tend to have intrinsically different SEDs than their fainter compatriots. Flux measurements for bright objects are also subject to a variety of systematic problems that do not affect more compact, fainter galaxies, as photometric pipelines tend to be optimized for the more numerous smaller objects rather than well-resolved ones. Due to their small numbers, low-redshift galaxies also contribute little to the calculation of losses used to optimize machine learning algorithms, so performance for them may be discarded in favor of improvements at the redshifts which dominate training samples. This effect is illustrated in the top left panel of \autoref{fig:training_issues}. 
This problem, which will tend to cause higher-redshift solutions to be favored, is compounded by the fact that redshifts have a lower bound of zero (modulo small contributions from peculiar velocities). This violates assumptions commonly made in solving regression problems (including when applying machine learning techniques) that distributions of errors about the predicted value should be normal (or at least symmetric). As a result of this effect, photometric redshift estimates at low $z$ tend to be biased high so that the zero bound will be at one end of the predicted distribution for an object rather than in the middle. The low volume of the Universe sampled also causes very large fluctuations in redshift distributions (or correspondingly, the occupation of cells in color/magnitude space by galaxies) at low $z$ due to sample/cosmic variance. These fluctuations will then tend to imprint on any training-based redshift distributions. Even though they should not be affected directly by these training issues, to date template-based methods which yield good performance at higher redshifts have still provided poorer results at low $z$. One possible cause is that, if $u$-band photometry is not available or is noisy, it can be extremely difficult to localize the 4000 Angstrom break, compromising the ability of an algorithm to determine the redshift with precision. A further challenge is that low-$z$ photometric redshift estimates will rely on the longest-wavelength portions of the templates used, which may be only poorly tested given their irrelevance for most objects. There are multiple approaches that may lead to improvement in this area. 
One is expanding spectroscopic training samples at the lowest redshifts; for instance, the DESI Bright Galaxy Sample will help by surveying galaxies down to a limit roughly two magnitudes fainter than SDSS \citep{Ruiz2020}, and the SAGA survey is obtaining many redshifts for low-$z$ galaxies as part of its search for satellites around Milky Way-mass objects \citep{Mao2021}. Even for template-based methods, this will allow better understanding of the sources of the problem and testing of solutions. Incorporating size or surface brightness information into photometric redshifts (cf. \S \ref{sec:morphology}) has also yielded substantial improvements at low $z$ \citep{Mao2021}. For machine learning methods, gains can also be made by changing the loss function used to optimize redshift predictions. For instance, penalizing deviations in ${\log{z}}$, rather than the raw redshift value, will give greater weight in the training to the low-redshift regime, as $\Delta(\log{z}) \sim \frac{\Delta(z)} {z}$. However, so long as only a tiny fraction of training samples are at low $z$, higher-redshift objects will dominate the loss calculation due to their much greater numbers in the training sets, unless the nearby objects are provided additional weight; but doing so would degrade performance for the bulk of the sample. As a result, photometric redshift applications which demand small errors even at low redshifts will likely require bespoke solutions if they employ machine learning-based methods. 
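A toy calculation makes the loss-function effect concrete; the specific redshift values, the 0.05 error, and the 1:1000 abundance ratio below are arbitrary illustrative choices:

```python
import numpy as np

z_true = np.array([0.02, 1.0])        # a very nearby and a typical galaxy
z_pred = z_true + 0.05                # identical absolute photo-z error for both

loss_z = (z_pred - z_true) ** 2       # plain MSE: both objects count equally
loss_logz = (np.log(z_pred) - np.log(z_true)) ** 2   # log-space loss

print(loss_z)     # [0.0025 0.0025]
# In log space the z=0.02 error dominates by a factor of several hundred,
# since Delta(log z) ~ Delta(z)/z

# But rarity still matters: with 1 low-z object per 1000 high-z objects,
# the *summed* log-loss is nonetheless dominated by the bulk of the sample
n_low, n_high = 1, 1000
total_low = n_low * loss_logz[0]
total_high = n_high * loss_logz[1]
```

This is why a change of loss function alone is insufficient without reweighting, and why reweighting in turn trades away performance on the bulk of the sample.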
\ifarxiv \begin{marginnote} \entry{Take-away}{Current photo-$z$ methods tend to perform very poorly at very low redshift ($z < 0.05)$.} \end{marginnote} \fi \begin{figure}[t] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{geha_fig_photz.pdf} \end{subfigure} ~ ~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{lowz_density.pdf} \end{subfigure} \caption{(Left panel) Performance of photometric redshift algorithms at low $z$ can be very poor. The plot shows the SDSS DR12 photometric redshift estimates from \citet{Beck2016} versus the spectroscopic redshifts of the same objects. Although in general this algorithm yields good results (with $\sigma_{\rm NMAD} \sim 0.013(1+z)$), many galaxies at $z < 0.015$ (shown by red or blue symbols) are assigned inaccurately high photo-$z$'s. This plot is adapted from Figure 4 of \cite{Geha2017}. \textcopyright AAS. Reproduced with permission. (Right panel) The low surface density of low-redshift objects makes the collection of training and validation sets of spectroscopic redshifts difficult. Curves show the number of objects per square degree brighter than a given $r$-band magnitude with redshift below a specified limit. Only $\sim 10$ galaxies per square degree have $z<0.02$, even when galaxies as faint as $r = 24$ are included. Adapted from Figure 66 of \cite{MSEScienceBook}, figure courtesy Yao-Yuan Mao. } \label{fig:lowz} \end{figure} \subsection{Challenges for Improving Photometric Redshift Characterization} \label{sec:characterization} In this section, we describe a number of effects that significantly affect redshift characterization. Several, if not all of them, can cause errors exceeding the requirements of near-future cosmological experiments. Careful calibration and accounting for their effects are therefore necessary for them not to limit the cosmological insights to be gained. 
However, this is not always achievable with currently available methods, so further research in these areas is needed. It is possible that other systematic issues associated with photo-$z$s that have not yet been identified could also compromise cosmological inference at a comparable level to the effects discussed here. \subsubsection{Spectroscopic Incompleteness} \label{sec:incompleteness} Any sample of galaxies can be characterized by a selection function that determines the probability with which an object will be included based on its relevant physical characteristics (e.g., its redshift, observed spectral energy distribution, and surface brightness profile). For any galaxy $i$, we can compute the ratio $r_i=\frac{p^{\rm spec-z}_i}{p^{\rm phot}_i}$ of the probability of obtaining a successful spectroscopic redshift for such an object in available surveys to the probability that the object has of being placed within some sample of interest (e.g., a particular redshift bin in a cosmological analysis). We will refer to such a sample, which can be defined based only on the photometric properties of the included objects (possibly incorporating photometric redshift estimates), as a target sample. A direct estimate of the redshift distribution for a target sample can then be written as \begin{equation} p^{\rm phot}(z) \propto \sum_{j\in\mathrm{spec-z}} r_j^{-1} \delta(z-z_j) \; , \label{eqn:incompleteness} \end{equation} where the sum runs over all galaxies $j$ with successful spectroscopic redshifts $z_j$. \ifarxiv \begin{marginnote} \entry{Impact of spectroscopic incompleteness}{\includegraphics[width=3.5cm]{impact421.pdf}} \end{marginnote} \fi Difficulties can arise when $r$ depends on the \emph{redshift} of a galaxy, which is very common. 
Two galaxies may have indistinguishable magnitudes and colors in a given imaging dataset, yet still have different true redshifts and restframe SEDs (e.g., due to confusion of one spectral break with another, or because of the nearly featureless, power-law-like spectra exhibited by highly star-forming galaxies over a broad wavelength range). Photometrically indistinguishable objects that are at different redshifts can have different probabilities of yielding secure redshifts (e.g., due to strong emission lines passing beyond the optical window at higher $z$). In such a situation, $p^{\rm spec-z}$ remains a crucial factor in redshift characterization but cannot be determined from the photometric data alone. As a simple example, if objects at a given point in color space have two possible redshifts, $z_1$ and $z_2$, which yield secure redshifts at rates of 100\% and 0\%, respectively, the objects at $z_1$ whose spectroscopic redshifts were measured would receive an enhanced weight in calculating the redshift distribution via \autoref{eqn:incompleteness}, whereas objects at $z_2$ would not contribute to the calculation of $p^{\rm phot}(z)$ at all. As a result of this effect, so long as spectroscopic success varies strongly across subsets of galaxies distinguished by $>0.1$ in redshift (as it does in real samples), spectroscopic completeness -- i.e., the fraction of targets that yield secure redshifts -- will need to be extremely high ($>99\%$ or even $>99.9\%$; G. Bernstein, private communication) for cosmological measurements from present and future surveys not to be degraded when direct calibration is performed. If such high completeness is not achieved, the selection of spectroscopic samples and the true nature of objects which failed to yield secure redshifts must be modeled carefully and understood well for direct calibration to yield accurate results. 
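The two-redshift example above can be simulated directly. The 100\%/0\% success rates follow the text; the specific redshift values and the 60/40 split between the two populations are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# One cell of color space whose (photometrically indistinguishable) members
# sit at either z1 or z2
z1, z2 = 0.4, 1.1
n = 100_000
z_true = np.where(rng.random(n) < 0.6, z1, z2)    # 60% at z1, 40% at z2

# Spectroscopic success depends on the *true* redshift: 100% at z1, 0% at z2
p_success = np.where(z_true == z1, 1.0, 0.0)
got_specz = rng.random(n) < p_success

# Direct characterization weights successes by 1/r, but r can only depend on
# photometric quantities -- so it is constant within this cell and cannot
# restore the missing z2 population
frac_z2_direct = np.mean(z_true[got_specz] == z2)  # -> 0.0
frac_z2_true = np.mean(z_true == z2)               # -> ~0.4
```

No photometry-based reweighting can recover the $z_2$ objects here, which is the sense in which redshift-dependent incompleteness makes direct calibration unreliable.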
The probability of obtaining a successful redshift, $p^{\rm spec-z}$, within a particular spectroscopic survey will be subject to both intentional and unintended selection effects. \emph{Intentional} selections can include cuts on colors, magnitudes, or surface brightness; these are commonly used in deep spectroscopic samples (e.g., DEEP2 \citealt{Newman2013}, VIPERS \citealt{Guzzo2014}, and zCOSMOS-Deep \citealt{Lilly2007}). They can ensure high spectroscopic success rates for targets of interest within feasible exposure times (e.g., by preferentially selecting blue star-forming galaxies with strong emission features), or they can target galaxies in a particular redshift range of interest. The impact of a color space selection is illustrated in the top right panel of \autoref{fig:training_issues}. If equivalent photometric data are available for the spectroscopic samples used and the target sample one wishes to characterize, one can determine the intended $r_i$ for each object. However, in many cases its value may be zero for a subset of the target sample (even when there are no color cuts, one might wish to characterize redshift distributions for objects that go fainter than the spectroscopic samples available). Such galaxies would then have to be excluded from the target sample for direct spectroscopic characterization of their $n(z)$ to be possible. If uniform photometric data are \emph{not} available for both the spectroscopic and target samples, one commonly finds that a non-zero but unknown fraction of galaxies which pass any useful target sample selection are missing from the spectroscopic sample, making direct spectroscopic calibration fundamentally unreliable.
In the common case where one combines a variety of spectroscopic surveys, even if no subset of the target sample is excluded by \emph{all} spectroscopic selections, they would have to be correctly reweighted for the joint selection function to be correct; if even \emph{one} of the spectroscopic selections cannot be reproduced on the target sample's photometric data, the procedure becomes ill-defined. The value of $p^{\rm spec-z}$ can also be less than one for reasons that were \emph{not intended} by the spectroscopic survey design. At the faint end of deep spectroscopic samples, the rate at which highly-secure redshifts are measured for targeted galaxies is rarely above $75\%$ \citep{Newman2013,LeFevre2013,Bonnett2016}. Whether a secure redshift will be obtained from spectroscopy depends sensitively on the properties of a galaxy and the observations. When spectral features are weak, fall outside the wavelength coverage of a particular instrument, or are at wavelengths affected by atmospheric emission or absorption, the probability of obtaining a successful redshift measurement may be reduced or even eliminated entirely. This can cause galaxies at some redshifts to be missing entirely in training samples. The incompleteness of spectroscopic samples is likely worse in regions of high galaxy surface density where blending is more common, further biasing the redshift distributions recovered from direct characterization. The impact of systematic incompleteness on our ability to map the relationship of colors to redshift is illustrated in the bottom left panel of \autoref{fig:training_issues}, and is apparent in the $z>0.9$ tail of \autoref{fig:zdist}. All these causes for unintended incompleteness depend on the redshift of a galaxy and are complex to model, posing substantial challenges for redshift characterization. 
As can be seen in \autoref{fig:zsuccess}, the problem of incompleteness is further compounded if samples are limited to the most secure redshifts to prevent contamination of characterization by outliers. The effects of spectroscopic incompleteness on photo-$z$ characterization in few-band surveys are large compared to current and future requirements. Based upon both many-band template fitting photo-$z$'s \citep{Gruen2017} and simulated spectroscopic observations \citep{Hartley2020}, recent work has found biases in the mean redshift of up to $|\delta z|\approx0.05$ for direct calibration from the combination of intended and unintended spectroscopic selection effects. Similar levels of disagreement have been found by \citet{Hildebrandt2020} when comparing characterization of redshifts performed with spectroscopic samples with different selection functions using the more photometrically constraining KiDS-VIKING data set and by \citet{Joudaki2020} using mock analyses of few-band DES-like data that include intended spectroscopic sample selections. The impact of spectroscopic incompleteness can be reduced if many- and/or narrow-band photometric information is used to select photometric samples (or for reweighting spectroscopic samples, if there are not regions with zero probability due to unintended systematic effects; \citealt{Buchs2019,Myles2021}). Intended selection effects will lead to regions which are devoid of spectroscopic data in such spaces \citep{Masters2015,Hildebrandt2019}. Subsets of the target sample with poor spectroscopic information can be removed based on their position in the many-band color space, or dedicated surveys can be performed that could potentially fill in any gaps in coverage \citep{Stanford2021}. 
If at all points covered by a target sample in a high-dimensional color space the width of the redshift distribution is small and the redshift success rate is relatively uniform with $z$, there is little room for unintended selection to bias the characterization. However, the converse is also true. If there are regions of the many-dimensional space which correspond to multiple, significantly-separated redshift values and redshift success rates vary between the different possible solutions, direct characterization will remain biased. \ifarxiv \begin{marginnote} \entry{Take-away}{Mitigating the effects of systematic incompleteness in the deep spectroscopic samples used to train photo-$z$ algorithms and characterize redshift distributions will be a key challenge for upcoming surveys.} \end{marginnote} \fi \begin{figure}[t] \centering \includegraphics[width=\textwidth]{zsuccess_trim.pdf} \caption{ The fraction of targeted objects for which spectroscopic surveys of faint galaxies obtain secure redshifts varies with observed galaxy color, but in parameter spaces defined by the deepest optical bands it nowhere approaches 100\%. The panel at left shows the fraction of magnitude $R < 24.1$ galaxies targeted by the DEEP2 Galaxy Redshift Survey which delivered secure redshifts ($Q=3$, corresponding to $>95\%$ confidence, or $Q = 4$, corresponding to $>99\%$ confidence), as a function of observed optical $B-R$ and $R-I$ colors. The white line in each panel shows the color cut used to select targets for DEEP2 in three of the four survey fields, excluding the Extended Groth Strip whose data were used to produce this plot. At right is shown the fraction of objects which yielded the most secure, $Q=4$ redshifts; although incorrect-redshift rates as low as those achieved for this sample, or even lower, may be required for future cosmology applications, requiring purer redshift measurements only increases the challenge of systematic incompleteness in spectroscopic samples.
This figure is adapted from Figures 44 and 45 of \cite{Newman2013}. \textcopyright AAS. Reproduced with permission. } \label{fig:zsuccess} \end{figure} \subsubsection{Outliers and Biases in Calibration Redshifts} \label{sec:outliers} Training-based photometric redshift methods (as well as direct spectroscopic calibration techniques such as the ``SOM'' method used in \citealt{Hildebrandt2021}) typically assume that all redshifts in a training set are correct. However, real-world datasets do not fulfill this assumption. Deep spectroscopic surveys generally must use low-signal-to-noise spectra to measure redshifts, due to the very long exposure times needed for faint objects. Occasionally, features in a spectrum that are in fact due to noise (particularly in regions affected by night sky emission lines) may match templates at some false redshift, leading to the misattribution of the redshift of an object. When only a single emission line or spectral break has been clearly detected, misidentification of that feature will result in a systematically incorrect redshift. The rates at which these failures occur are substantial. Current deep surveys generally have assigned quality flags to spectroscopic redshifts based on visual inspection of redshift fits, with quality $Q=3$ corresponding to 95\% certainty that a redshift is correct and $Q=4$ corresponding to $>99\%$ certainty. In the DEEP2 Galaxy Redshift Survey, $\sim 17\%$ of secure redshifts were assigned $Q=3$, with the remainder receiving $Q=4$ (the high resolution of the DEEP2 spectroscopy enables splitting of the [OII] doublet, making most redshifts obtained unambiguous). More than 1000 DEEP2 objects were observed twice, allowing the repeatability of redshifts to be tested; for this sample, the best estimates of the failure rate are $0.75\%/0.15\%$ (with upper limits of $\sim 2.2\%/0.3\%$) for $Q=3/4$, respectively \citep{Newman2013}.
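A simplified binomial sketch shows how repeated observations constrain such failure rates (this toy calculation is illustrative only, not the actual analysis of \citealt{Newman2013}, and the counts are invented): if each measurement fails independently, the pair discrepancy rate is roughly twice the per-measurement failure rate.

```python
def failure_rate_from_repeats(n_pairs, n_discrepant):
    """Simplified estimate of the per-measurement redshift failure rate f
    from repeated observations: if each measurement fails independently
    with probability f, a pair of measurements of the same object
    disagrees with probability 1 - (1 - f)**2 ~ 2f for small f
    (neglecting the rare case of both failing in the same way)."""
    pair_discrepancy_rate = n_discrepant / n_pairs
    return pair_discrepancy_rate / 2.0

# Invented illustrative counts (NOT the actual DEEP2 tallies):
f_est = failure_rate_from_repeats(1000, 3)  # -> 0.0015
```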
For the zCOSMOS-bright survey, $Q=3$ redshifts dominate ($\sim 58\%$ of secure redshifts) due to the lower spectroscopic resolution used. Based on objects with repeated spectra, failure rates for zCOSMOS $Q=3$ and $Q=4$ are estimated to be 1\% and 0.2\%, respectively \citep{Lilly2007,zCOSMOSDR}. Similarly, many-band photo-$z$'s suffer from significant catastrophic outlier rates, particularly in regions of parameter space which are poorly characterized by training spectroscopy. Catastrophic outlier rates ranged from $\sim 0.6\%$ for quiescent objects with $i \sim 22.5$ to $>10\%$ for star-forming objects with $i \sim 24.5$ in the COSMOS2015 catalog of \cite{Laigle2016}. Even with perfect spectroscopy, part of this problem is irreducible due to blending of galaxies (see \S \ref{sec:blending}). Additionally, some fraction of objects will have photometry contaminated by artifacts or failures of measurement algorithms; both effects cause the mapping of color to redshift to be inappropriate for some portion of the training sample. \ifarxiv \begin{marginnote} \entry{Impact of biases in calibration redshifts}{\includegraphics[width=3.5cm]{impact422.pdf}} \end{marginnote} \fi Incorrect training data impact the performance of photo-$z$ algorithms by introducing an unphysical spread of redshift at given photometry, as illustrated in the bottom right panel of \autoref{fig:training_issues}. However, a greater problem is that we rely on spectroscopic redshifts (or very high-quality photometric redshifts) for characterization; if the redshifts used for this purpose are incorrect, the derived redshift distributions will be too. The impact can be large if not accounted for. A simple toy model is sufficient to assess the magnitude of this problem.
If a fraction $f_{\rm inc}$ of redshifts for a sample described by a Gaussian distribution of standard deviation $\sigma$ are systematically off from the true mean redshift by $\Delta$ (emulating the common situation where one spectral feature is mistaken for another, leading to an incorrect $z$), the obtained mean redshift for the sample will be shifted by $\delta \langle z\rangle = f_{\rm inc} \Delta$, and the standard deviation will be shifted by $$\delta \sigma = \sqrt{(1-f_{\rm inc})\sigma^2+ f_{\rm inc} \Delta^2} - \sigma \approx \frac{f_{\rm inc} \sigma}{2}\left(\frac{\Delta^2}{(1-f_{\rm inc})\sigma^2} -1\right) \approx \frac{f_{\rm inc} \Delta^2}{2 \sigma}$$ when $f_{\rm inc}$ is small and $\Delta$ is large compared to $\sigma$. For instance, for a sample (e.g., a redshift bin) described by a Gaussian redshift distribution with $\sigma=0.1$, a rate of one $\Delta=0.5(1+z)$ redshift error per thousand spectroscopic redshifts would shift the inferred mean redshift by 0.0005(1+z), smaller than LSST cosmology requirements of $\delta \langle z\rangle < 0.002(1+z)$, but the inferred $\sigma$ would be too large by 0.0024(1+z), in tension with the $\delta \sigma < 0.003(1+z)$ requirement \citep{DESC_SRD}. \autoref{fig:photoz_issues} illustrates the results from this model. When a fraction $f_{\rm inc}$ of the redshifts used to characterize the distribution of photo-$z$ errors is erroneous, the inferred spread $\sigma_z$ is biased by an amount $\Delta \sigma_z$ from the true value. We here have performed Monte Carlo simulations of scenarios where we measure the standard deviation of Gaussian-distributed photometric redshift errors from a training set where a fraction $f_{\rm inc}$ have their redshift systematically offset by $0.5(1+z)$, emulating the typical effect of line misidentification.
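A Monte Carlo of this kind is straightforward to sketch: the simulation below draws Gaussian redshift errors, offsets a fraction $f_{\rm inc}$ of them by $\Delta$, and can be compared against the analytic toy-model predictions $\delta\langle z\rangle = f_{\rm inc}\Delta$ and $\delta\sigma \approx f_{\rm inc}\Delta^2/(2\sigma)$ (parameter values taken from the text; the implementation details are ours, not the authors').

```python
import numpy as np

rng = np.random.default_rng(7)

def contaminated_moments(f_inc, sigma, delta, n=2_000_000):
    """Mean shift and standard deviation of redshift errors when a
    fraction f_inc of the calibration redshifts is offset by delta."""
    z_err = rng.normal(0.0, sigma, n)
    bad = rng.random(n) < f_inc      # randomly contaminate a fraction f_inc
    z_err[bad] += delta
    return z_err.mean(), z_err.std()

f_inc, sigma, delta = 1e-3, 0.1, 0.5
mean_shift, sigma_meas = contaminated_moments(f_inc, sigma, delta)

# Analytic toy-model predictions for comparison:
pred_mean_shift = f_inc * delta                    # = 5e-4
pred_sigma_shift = f_inc * delta**2 / (2 * sigma)  # ~ 1.25e-3
```

Even a one-per-thousand contamination rate inflates the inferred scatter by roughly a percent of its value, consistent with the estimates above.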
For this toy model, $f_{\rm inc}$ must be $\sim 10^{-3}$ or less -- an order of magnitude lower than in current deep samples -- for characterization at the level required for Rubin Observatory weak lensing cosmology measurements to be possible. The bias grows larger, and the requirements on $f_{\rm inc}$ are correspondingly more stringent, when photometric redshift errors are smaller. If the incorrect redshifts instead had a symmetric, Gaussian distribution about the true value, the inferred $\sigma_z$ would still be biased, though the curves in \autoref{fig:photoz_issues} would differ depending upon assumptions. Similar results hold if we consider a bias in the mean redshift rather than a bias in the spread. \begin{figure} \includegraphics[width=0.5\textwidth]{sigmaz.pdf} \caption{Impact of incorrect redshifts in spectroscopic sets used for direct calibration of photometric redshifts. The red and blue curves show the bias in recovering photo-$z$ uncertainties that results for samples with a true standard deviation $\sigma$ of 0.02$(1+z)$ (red solid) or 0.05$(1+z)$ (blue dashed) in Monte Carlo simulations where a fraction $f_{\rm inc}$ of redshifts are systematically off by $0.5(1+z)$, a typical value in real spectroscopic datasets. The shaded region shows the requirement from \cite{DESC_SRD}. Contamination of spectroscopic calibration sets by incorrect redshifts at the level seen in the best samples from current deep datasets, $\sim 1\%$, would still cause systematic errors in mean $z$ and in photo-$z$ uncertainties that are $\sim$7--10$\times$ larger than the science requirements for Rubin Observatory weak lensing measurements. Requirements for Euclid and Roman weak lensing should be similar. Figure provided by Brett Andrews (priv. comm.).} \label{fig:photoz_issues} \end{figure} We might consider three responses to this problem.
The first is to reduce the error rates in spectroscopic training/characterization datasets by roughly an order of magnitude, {which likely would require a combination of much longer integration times, broader wavelength coverage, and more restrictive manual quality control}. Even then, this only solves the problem if we can also reduce all other causes of mismatches between spectroscopic training sets and other objects (e.g., blending and photometric artifacts) to the same level. {One possible path could be to limit the use of spectroscopic redshifts to characterizing the relation between photometry and redshift in a high-dimensional, well-measured color space. If the latter tightly constrains true redshift at given color such that the same color cannot genuinely correspond to multiple redshifts and if the size of the training sample is sufficient, outliers can be identified confidently.} A second option is to rely on methods other than direct calibration via deep spectroscopic samples for characterization. For instance, it is possible to exploit cross-correlations with wide-field surveys of brighter galaxies and quasars, in which we can select only the most secure redshifts and still have a large sample to characterize the redshift distributions of photometric samples, as we discuss in \S \ref{sec:xcorr}. The third option is to develop methods for characterization that are robust to outliers in the training set, which can be done in a variety of ways. If the potential impact of outliers on the redshift distribution of galaxy bins can be described by a prior, combinations of correlation functions allow for partial self-calibration \citep[e.g.,][]{Zhang2010,Schaan2020}. In hierarchical Bayesian methods that use the full data, biased training sets can potentially be compensated for \citep{Sanchez2019}. 
In principle, hierarchical use of spectroscopic samples could allow for a parameter indicating whether the reported redshift of each spectroscopic galaxy is wrong, whose posterior is constrained by the combination of all photometric and spectroscopic data, allowing the impact of outliers on redshift characterization to be reduced. \ifarxiv \begin{marginnote} \entry{Take-away}{The rate of incorrect redshifts in current deep spectroscopic samples is high enough to compromise direct redshift characterization methods for future surveys; the problem is worse for redshifts from many-band photometry or low-resolution spectroscopy.} \end{marginnote} \fi \subsubsection{Impact of Sample Variance on Characterization} \label{sec:sample_variance} As is the case for any cosmological measurement, the limited volume over which a measurement has been (or even can be) made sets a lower bound on the uncertainty of any conclusions that can be drawn from it. In the case of characterizing redshift distributions with spectroscopy, this is fundamentally due to the fact that at a given set of photometric observables, the distribution of redshifts is still broad. When observed with a finite sample of spectra, one retrieves only a sampling of that distribution. When observed over a field of limited area, one retrieves a sampling of a version of that distribution that is modulated by variations in the mean matter (and thus galaxy) density as a function of redshift within that field. This is a strong effect in current deep samples, as can be seen in \autoref{fig:zdist}. This phenomenon is commonly referred to as sample variance or, in some works, as cosmic variance.
It is useful to consider sample variance in redshift calibration as a combination of three effects: \begin{itemize} \item A variation of the observed density $\hat{n}_{ij}$ of galaxies in the calibration field, for instance number per solid angle per color element up to some limiting magnitude, at positions in a high-dimensional color space which we have denoted by a two-dimensional index $i,j$ for notation and illustration purposes. \item A variation of the true mean redshift of galaxies of color $i,j$ in the calibration field relative to the true mean redshift of such galaxies over a very large area, due to matter density variations in the finite calibration volume, denoted by $\sigma^z_{\mathrm{CV}, ij}$. Even in the hypothetical case where there are a very large number of galaxies of that color available within a calibration field, their mean redshift would still deviate from the cosmic mean. \item An uncertainty in the observed mean redshift of galaxies of color $i,j$ in a calibration field relative to the former hypothetical value due to sampling with a finite number of galaxies. Given a scatter in redshift $\sigma^z_{ij}$ among the sample of galaxies, and a number of galaxies $N_{ij}$ observed, this uncertainty is given by $\sigma^z_{ij}/\sqrt{N_{ij}}$. \end{itemize} The amplitude of the former two uncertainties depends on galaxy type and luminosity (by means of their effect on galaxy clustering bias) and on the field size. The amplitude of the latter uncertainty is set by the intrinsic dispersion of the redshift of galaxies of color $i,j$ and the number of galaxies of that color observed. The magnitudes of the three effects depend on not just the size and sampling of the calibration field, but also the photometric information available. Sample variance can have the largest impact for surveys with only a few photometric bands observed, as illustrated in the extreme case of a single magnitude cut in \autoref{fig:zdist}. 
To see this quantitatively, consider two hypothetical surveys -- one where the full photometric information $ij$ is available for all galaxies (both in the calibration sample and in the target sample), and one where only a subset of the photometric bands, $i$, is available. Assume that in both setups, we would like to estimate the mean redshift of the subsample of galaxies with some given color $i$, $\hat{\langle z\rangle}_i$. \ifarxiv \begin{marginnote} \entry{Impact of Sample Variance}{\includegraphics[width=3.5cm]{impact423.pdf}} \end{marginnote} \fi In the former case of a many-band survey, one could write this as a sum over all additional colors $j$, \begin{equation} \hat{\langle z\rangle}_i = \frac{\sum_j n_{ij} \hat{z}_{ij}}{\sum_j n_{ij}} \;, \end{equation} where $\hat{z}_{ij}$ is the mean redshift of calibration galaxies of color $ij$, and $n_{ij}$ is the density of galaxies of that color measured over the full survey, essentially to re-weight the calibration sample as a function of color $j$. We can assume $n_{ij}$ to be essentially noiseless for a large-area photometric survey, which leads to an uncertainty of the mean redshift estimate of \begin{equation} \left(\sigma^{\langle z \rangle}_{i}\right)^2 = \frac{\sum_j n^2_{ij} \left[(\sigma^z_{\mathrm{CV}, ij})^2 + (\sigma^z_{ij})^2/N_{ij}\right]}{\left(\sum_j n_{ij}\right)^2} \; . \end{equation} Consider instead that the true density $n_{ij}$ is not known because the wide field survey measures only the color $i$, but not $j$. From the calibration sample one would estimate \begin{equation} \hat{\langle z\rangle}_i = \frac{\sum_j \hat{n}_{ij} z_{ij}}{\sum_j \hat{n}_{ij}} \;, \end{equation} using the densities of galaxies in the calibration field, $\hat{n}_{ij}$. Note that the r.h.s.~of the above equation is the value of the estimated mean redshift of galaxies of color $i$ regardless of whether or not $j$ is measured in the calibration field, due to the linearity of the expression.
The corresponding uncertainty is \begin{equation} \left(\sigma^{\langle z \rangle}_{i}\right)^2 = \frac{\sum_j n^2_{ij} \left[(\sigma^z_{\mathrm{CV}, ij})^2 + (\sigma^z_{ij})^2/N_{ij}\right] + \left[\sigma^{\hat{n}}_{ij} (z_{ij}-z_{i})\right]^2}{(\sum_j n_{ij})^2} \; . \end{equation} Note the additional term in the numerator, which is due to uncertainty $\sigma^{\hat{n}}_{ij}$ in the number density of galaxies of color $ij$ in the limited volume of the calibration field(s). The relative importance of the three effects discussed here depends on the survey design. \citet{Bordoloi2010} explore the trade-off between $(\sigma^z_{\mathrm{CV}, ij})^2$ and $(\sigma^z_{ij})^2/N_{ij}$, finding that a hypothetical fully-sampled redshift survey down to faint magnitudes with moderately broad redshift bins would be limited by the former CV term, not by the latter shot noise, and thus that observing a subset of galaxies spread out over a larger area is beneficial. \citet[][their Table 2]{Hoyle2018} confirm this is indeed so for the COSMOS-based calibration of DES Year 1 photometric redshifts, with about $7\times10^{-3}$ and $2\times10^{-3}$ uncertainty in mean redshift due to $(\sigma^z_{\mathrm{CV}, ij})^2$ and $(\sigma^z_{ij})^2/N_{ij}$, respectively. \citet[][their appendix~B]{Gruen2017} show that sample variance can change by an order of magnitude with the same calibration fields, depending on the number of photometric bands used for reweighting. Among current surveys, the KiDS-VIKING $ugriZYJK_s$ data allow this effect to be utilized most effectively. Re-weighting calibration samples over a high-dimensional color space greatly reduces sample variance and other uncertainties \citep{Hildebrandt2019,Wright2020}. Surveys for which fewer bands are measured over the wide field can still benefit from multi-band photometric deep fields in addition to spectroscopic calibration samples \citep{Buchs2019,Myles2021}.
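The reweighted estimate and its uncertainty in the many-band case (where the wide-field densities are effectively noiseless) can be evaluated directly from the expressions above; a minimal sketch with invented per-cell values, collapsing the index $ij$ to a single cell index $j$ at fixed color $i$:

```python
import numpy as np

# Hypothetical per-cell quantities at fixed color i, indexed by the
# additional color(s) j (all numbers invented for illustration):
n_j      = np.array([400.0, 300.0, 200.0, 100.0])  # wide-field density per cell
zbar_j   = np.array([0.35, 0.55, 0.80, 1.10])      # calibration mean z per cell
sig_cv_j = np.array([0.010, 0.010, 0.015, 0.020])  # sample-variance term per cell
sig_z_j  = np.array([0.05, 0.08, 0.10, 0.15])      # intrinsic z scatter per cell
N_j      = np.array([200, 150, 80, 30])            # calibration counts per cell

# Reweight the calibration cells by the wide-field densities n_j:
w = n_j / n_j.sum()
mean_z = np.sum(w * zbar_j)

# Uncertainty: sample-variance and shot-noise terms, propagated per cell
var_mean_z = np.sum(w**2 * (sig_cv_j**2 + sig_z_j**2 / N_j))
sigma_mean_z = np.sqrt(var_mean_z)
```

With these invented numbers the sample-variance and shot-noise terms contribute comparably; in practice their balance depends on the depth and area of the calibration fields, as the studies cited above quantify.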
\citet{Buchs2019} show that with $ugrizYJK_s$ photometry, the COSMOS field as a source of redshift calibration is sufficient in terms of galaxy number, but not fully sufficient in terms of volume, to reach total calibration uncertainties of $10^{-3}$ in mean redshift, if larger fields can be used to estimate the density of galaxies in that color space, $n_{ij}$. The limited area over which deep optical and near-infrared photometry is available has a dominant effect on the resulting sample variance. \ifarxiv \begin{marginnote} \entry{Take-away}{Deep spectroscopic and many-band photometric surveys cover only limited areas of sky; as a result, sample variance in redshift distributions is large and can limit direct redshift distribution characterization.} \end{marginnote} \fi \begin{figure}[ht] {\includegraphics[width=0.65\textwidth]{zdist.pdf}} {\caption{Redshift distributions of magnitude $i<22.5$ galaxies in the zCOSMOS-bright and DEEP2/DEEP3 surveys \citep{Lilly2007,Newman2013,Zhou2019}. Histograms show the $z$ distributions of galaxies with secure spectroscopic redshift measurements (confidence class or quality 3 or 4) from each survey, normalized by the total number of $i < 22.5$ objects in each sample. The $i$-band magnitude is obtained from HST F814W photometry for zCOSMOS and from extinction-corrected CFHTLS $i$-band photometry for the DEEP2 and DEEP3 selections. Even though the zCOSMOS-bright survey spanned 1.7 square degrees of sky, large fluctuations in the number of objects at a given redshift due to sample/cosmic variance are clearly apparent in its redshift distribution. For DEEP2 and DEEP3, we use only redshifts in the Extended Groth Strip (DEEP2 Field 1) which covered a total of 0.6 square degrees; excursions are even larger in this case. The redshift distributions differ at $z > 0.9$ largely because of the higher spectral resolution used for DEEP2 and DEEP3; this allows the [OII] 3727 Angstrom doublet to be resolved, enabling higher secure redshift success rates in that regime.
} \label{fig:zdist}} \vskip-0.15in \end{figure} \subsubsection{Biases from the Selection of Photometric Samples} \label{sec:photo_selection} In parallel to the selection biases in spectroscopic samples discussed in \S \ref{sec:incompleteness}, the samples of galaxies whose redshifts are to be estimated commonly are subject to selections that can bias the estimated redshift distribution at a given point in color space if not accounted for. Even the presence of Poissonian photometric noise, particularly when noise levels differ between the training sample and wide-field photometry, can have a significant impact when methods are not designed to account for it \citep[e.g.,][their table 3]{Wright2020}. The selection of galaxy samples is usually much more complex in reality due to the non-linear interplay between pixel-level noise, detection, star-galaxy separation, model-fitting photometry, and shape measurement for barely resolved galaxies. If these effects are not modeled -- e.g., if using magnitude limited spectroscopic redshift catalogs without further selection -- they introduce biases in redshift characterization of order 0.01 \citep{Gruen2017}. By measuring the detection probability and transfer function of galaxies with image injection \citep{Huang2017,Everett2021}, this bias can be greatly reduced, potentially to levels compatible with Stage IV requirements \citep{Myles2021}. \ifarxiv \begin{marginnote} \entry{Impact of selection in lensing sources}{\includegraphics[width=3.5cm]{impact424.pdf}} \end{marginnote} \fi \ifarxiv \begin{marginnote} \entry{Take-away}{Selections affecting photometric samples must be taken into account when characterizing photo-$z$'s.} \end{marginnote} \fi \begin{figure}[t] \centering \includegraphics[width=\textwidth]{selectioneffects_trim.pdf} \caption{Illustration of additional factors which limit the use of spectroscopic training sets to map out relations between color and redshift.
In these figures we continue to use the toy model mapping of galaxy color (or a dimensionality-reduced color space) to redshift that was employed in \autoref{fig:sparsesampling}. At left we again show the ideal case, where spectroscopic samples cover the color space both densely and uniformly; colors correspond to redshifts ranging from zero to one, as indicated in the color bar. The top middle panel illustrates the impact of the failure of deep spectroscopic training sets to include many objects at low redshift, as a small area of sky will include only a very limited volume at low $z$, and hence correspondingly few objects (cf. \S \ref{sec:lowz}). As a result, deep galaxy samples only sparsely cover color space in that domain, degrading photo-$z$ performance at low redshifts. The top right panel illustrates the impact of using spectroscopic samples which are restricted to a limited region of parameter space (\textit{intentional} selection effects, as described in \S \ref{sec:incompleteness}). This can provide dense sampling of the relationship between photometry and redshift, but only over a limited region; beyond that range the spectroscopy will have no constraining power. The bottom middle panel, conversely, shows the impact when spectroscopy systematically fails to obtain secure redshift measurements for objects at high $z$ (corresponding to the \textit{unintended} selection effects in \S \ref{sec:incompleteness}): this again causes gaps in color coverage, leading both to degraded photo-$z$ performance where training samples are lacking and to systematic biases in redshift characterization. Finally, the bottom right panel illustrates the impact when incorrect redshifts (here, drawn randomly from a uniform distribution) are assigned to a fraction of targets. As discussed in \S \ref{sec:outliers}, this will cause systematic biases in both any photo-$z$'s that are derived from a training set and in the characterization of redshift distributions.
} \label{fig:training_issues} \end{figure} \subsubsection{Blending} \label{sec:blending} Multiple galaxies can overlap on a sky image or contribute light to the same fiber or slit of a spectrograph, both because galaxies are intrinsically extended objects and due to blurring by the point-spread function. This blending of light occurs most easily for bright galaxies with intrinsically large angular size, but in those cases it is rarely problematic due to the relative faintness of any overlapping objects. However, blending is sufficiently common and severe even for fainter objects that its impact poses challenges. For instance, for more than half of the galaxies detected by Rubin Observatory, overlapping galaxies contribute at least 1\% of the total flux within their pixels (\citealt{Sanchez2021}; see also Figure 2 of \citealt{Melchior2021}). Blending will impact flux and color measurements for some objects within the samples of galaxies with spectroscopy used to train photometric redshift algorithms and characterize redshift distributions. This effect can account for roughly 20\% of persistent photometric redshift outliers in samples with deep many-band photometry \citep{Masters2019}. Worse, when a faint emission line galaxy is blended with another object, the resulting strong line features can dominate the determination of the spectroscopic redshift, even when the broadband colors are primarily determined by another object. This occurs in 1\% to 5\% of cases for faint sources \citep{Newman2013,Brinchmann2017,Masters2019}. At $z>1$, $>5\%$ of objects targeted in the DEEP2 Galaxy Redshift survey had multiple counterparts in Hubble Space Telescope imaging within 0.75 arcsec of their nominal position \citep{Newman2013}; at sufficiently small separations blends will not be detectable even from space.
This will set a floor level of systematic uncertainty in empirically estimated photometric redshifts unless such objects can be excluded with high confidence, e.g., through space-based observations or (in some cases) by visual inspection and re-analysis of spectroscopic data. Similarly, unless blends can be excluded entirely, blending will limit the performance of photometric redshift estimates for individual objects, even with template-based methods, as the photometry ascribed to an object may not correspond only to its intrinsic properties but rather be altered by contributions from objects at very different $z$. Blending will also affect cosmological studies by altering the effective $z$ distributions of the redshift bins used in an analysis. For instance, in the case of tomographic weak lensing, the measured shape of a dominant galaxy at redshift $z_0$ may be affected when a fainter, blended galaxy at $z_1$ is being gravitationally sheared. The observed signal is therefore due to a superposition of lensing of light at $z_0$ and $z_1$, much as if one were looking at a bin of galaxies that contained objects at both redshifts. This effect can be correctly described by modifying the $n(z)$ of each redshift bin accordingly. \citet{MacCrann2021} first formulated this effect and demonstrated it in image simulations, finding impacts $0.003<|\delta z|<0.012$ in the effective mean redshift for samples from DES three-year data. The solution to this issue will require extended work on realistic lensing image simulations.
\ifarxiv \begin{marginnote} \entry{Impact of blending}{\includegraphics[width=3.5cm]{impact425.pdf}} \end{marginnote} \fi \ifarxiv \begin{marginnote} \entry{Take-away}{Undetected blends between galaxies whose images overlap are unavoidable in ground-based data, but make interpretation of photo-$z$'s and spectroscopic samples more difficult.} \end{marginnote} \fi \subsubsection{Astrophysical Systematics on the Cross-correlation Characterization of Redshifts} \label{sec:xcorr} In the linear limit of large-scale-structure, the angular cross-correlation $w_{\rm phot,spec}(\theta)$ between objects in a photometric sample and spectroscopic objects of known redshift is proportional to the product of the large-scale-structure bias of each sample (which we will refer to as $b_{\rm phot}$ and $b_{\rm spec}$) and the probability that an object in the photometric sample is at the spectroscopic $z$, $p(z)$: i.e., $w_{\rm phot,spec}(\theta) \propto b_{\rm phot} b_{\rm spec} p(z)$. These cross-correlations therefore can be exploited to reconstruct the redshift distribution of any photometric sample \citep{Newman2008}, in concert with measurements of the autocorrelation of each sample. The resulting redshift distributions are sometimes referred to as ``clustering redshifts'' \citep{Rahman2015}. The cross-correlation method has the great advantage of providing accurate characterization of redshift distributions even if spectroscopic samples are systematically highly incomplete, as has been true of all deep surveys to date (cf. \S \ref{sec:incompleteness}). In fact, wide-area, shallow surveys of the distant universe such as those that DESI and 4MOST will provide \citep{DESI2016,4MOST2019} will yield much lower errors on cross-correlation measurements than deep, small-area surveys \citep{Matthews2010,Newman2015}. 
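As a toy illustration of the proportionality above (all quantities synthetic; measurement noise, bias uncertainties, and the autocorrelation measurements needed to constrain the biases are all omitted), the redshift distribution can be recovered by dividing out the biases once they are known:

```python
import numpy as np

# Toy setup: 15 spectroscopic redshift bins and a Gaussian true
# redshift distribution for the photometric sample (illustrative only).
z = np.linspace(0.1, 1.5, 15)
p_true = np.exp(-0.5 * ((z - 0.8) / 0.2) ** 2)
p_true /= p_true.sum()

# Assumed known, linear large-scale-structure biases of each sample.
b_phot = 1.2
b_spec = 1.5 * np.ones_like(z)

# In the linear limit, the cross-correlation amplitude in each
# spectroscopic bin scales as b_phot * b_spec * p(z).
w_cross = b_phot * b_spec * p_true

# Clustering-redshift estimate: divide out the biases and renormalize.
p_est = w_cross / (b_phot * b_spec)
p_est /= p_est.sum()
```

With real data, $b_{\rm phot}$ and $b_{\rm spec}(z)$ must themselves be constrained from the autocorrelations of the two samples, and noise and scale choices dominate the error budget.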
This makes such methods a promising route for characterizing redshift distributions for samples of faint objects for which it is difficult to obtain statistically complete spectroscopy. A variant of this idea is to measure the distribution of redshift differences, $|\Delta z|$, between pairs of objects which are near to each other on the sky \citep{Quadri2010,Huang2013}. That distribution will exhibit a peak near zero offsets, the width of which is set by the convolution of the errors on independent objects' photometric redshifts. This method can accurately characterize the average scale of the core of redshift errors, but does not allow reconstruction of catastrophic error rates or the shape of the redshift distribution of outliers due to the dependence of the measured $|\Delta z|$ distribution on both the strength of galaxy correlations and any evolution of the large-scale-structure bias with redshift. It is therefore not suitable for the high-precision characterization required for cosmological measurements. In principle, cross-correlations with upcoming wide-field spectroscopic samples will contain sufficient \emph{information} to reconstruct redshift distributions at the accuracy needed for the next generation of cosmological measurements \citep{Newman2008,Matthews2010}. However, it has not yet been demonstrated that this can actually be achieved in the presence of astrophysical \emph{systematics}. A number of areas of concern exist which will need to be addressed for these methods to reach the extreme accuracy needed for future imaging experiments \citep{DESC_SRD}. One issue that will need to be addressed is magnification of background galaxies due to gravitational lensing \citep{Newman2008,Matthews2014}. This magnification will cause an excess density of galaxies in regions around foreground objects. 
The cross-correlation signal depends on the derivative of the luminosity function of the objects being magnified; lensing has a null effect for a power-law slope of $-1$, as is typical for faint galaxies. As a result, the cross-correlation signal from magnification is primarily driven by lensing of intrinsically luminous \textit{spectroscopic} objects by fainter photometric objects in the foreground, rather than lensing of photometric objects by galaxies and quasars with spectroscopic redshift measurements \citep{Moessner1998,Newman2008}, as may be seen in \autoref{fig:magnification}. There are some hints of this signal in clustering redshift measurements in the literature \citep[e.g.,][]{Hildebrandt2021}, but this will be a much greater issue at the precision required for future experiments. The strength of this signal should be predictable, given estimates of the redshift distribution of the lensing objects and the mass associated with them, combined with the luminosity function of the lensed objects. For the dominant case of lensing by objects in the photometric sample, for instance, the lensing-contaminated redshift distribution inferred from cross-correlation measurements can provide an initial guess for $p(z)$, and CMB lensing measurements of the mass associated with that sample can provide the additional information required without imposing a dependence on the optical/IR weak lensing measurements for which we need to characterize the redshift distributions. \citet{Newman2008} has suggested combining these measurements with the luminosity function of the spectroscopic sample, which would enable the lensing magnification signal to be predicted and corrected for; this procedure could be iterated until convergence. However, there has not yet been a demonstration that this method can deliver redshift distributions with the exquisite accuracy required for future cosmological experiments.
\ifarxiv \begin{marginnote} \entry{Impact of astrophysical systematics on clustering redshifts}{\includegraphics[width=3.5cm]{impact426.pdf}} \end{marginnote} \fi A second question needing investigation is whether systematics can be avoided if cross-correlation information from pairs at small separations ($\lesssim 5$ Mpc) is utilized to constrain redshift distributions, and if so, how best to do so. The signal-to-noise ratio of cross-correlation measurements is maximized at small separations \citep{Newman2008}. As a result, many applications of clustering redshifts to date have not excluded information from close pairs in order to have the strongest possible constraints on redshift distributions \citep[e.g.,][]{Schmidt2013,Rahman2015}. However, on those scales structure evolves non-linearly, and the relationship between the clustering of galaxies and that of dark matter, or of one set of galaxies with another, can be complex. In contrast, at the large separations on which more theoretical work has focused \citep[e.g.,][]{Newman2008, McQuinn2013}, the relationship between the clustering of dark matter and that of galaxies is much simpler, and the assumption of linear biasing holds, so that the correlation function of galaxies, $\xi_{gg}$, is simply $b^2 \xi_{mm}$, where $b$ is a constant for a particular galaxy population and $\xi_{mm}$ is the two-point correlation function of matter. In that case, the intrinsic cross-correlation between the photometric and spectroscopic populations can be treated simply, with the cross-correlation bias determined as $b_{\rm phot,spec}^2 = b_{\rm spec} b_{\rm phot}$; at smaller scales, the relationship can be much more complicated.
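In the linear regime, the relations above can be checked mechanically; here is a toy verification using a made-up power-law matter correlation function (the separations, slope, and bias values are all illustrative assumptions):

```python
import numpy as np

# Linear-bias toy: given the matter correlation function xi_mm on a
# set of separations, the galaxy auto- and cross-correlations follow.
r = np.array([5.0, 10.0, 20.0, 40.0])   # separations in Mpc (toy values)
xi_mm = (r / 5.0) ** -1.8               # toy power-law matter correlation

b_spec, b_phot = 1.5, 1.2
xi_ss = b_spec ** 2 * xi_mm             # spectroscopic autocorrelation
xi_pp = b_phot ** 2 * xi_mm             # photometric autocorrelation
xi_sp = b_spec * b_phot * xi_mm         # cross-correlation

# The cross-correlation bias is the geometric mean of the two sample
# biases, so xi_sp**2 equals xi_ss * xi_pp in this regime.
check = np.allclose(xi_sp ** 2, xi_ss * xi_pp)
```

This consistency relation is exactly what breaks down at small separations, where scale-dependent biasing makes the cross term no longer a simple geometric mean.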
Performing cross-correlation measurements at smaller scales has been invaluable for detecting the signal with currently-available samples; however, the simple analysis methods which can be utilized at large scales may have systematics at unacceptable levels for future experiments when applied at smaller separations. One cause for concern is the quasar proximity effect \citep{Efstathiou1992}: the large amount of ionizing radiation released by quasars may affect the evolution of galaxies in some volume around each one. As a result, cross-correlation measurements that use quasars as a spectroscopic tracer of large-scale-structure could yield incorrect results if the scales where pairs between photometric objects and quasars are measured are those where galaxies have been prevented from developing by the proximity effect. This would not be an issue at larger separations, where the clustering between quasars and photometric objects is driven purely by the underlying web of dark matter. Given these concerns, it is important to test how well clustering redshifts can perform when small-separation pairs are exploited, in order to determine whether they can be used in a simple manner to characterize redshift distributions for next-generation experiments. If not, more complicated modeling of the relationship between both photometric and spectroscopic galaxies and dark matter halos will be necessary for accurate characterization to be possible. A third area where work remains to be done is the handling of redshift evolution of the clustering bias of the photometric galaxy sample. This includes the extreme case where photometric redshift outliers have very different bias from more typical objects, but also the more common type-redshift degeneracies of few-band data, which lead to cases where two populations of galaxies with different bias and redshift are indistinguishable photometrically \citep[e.g.,][]{Dunlop2007}.
Since the cross-correlation is proportional to the product of the bias and the fraction of objects at a given redshift, if differences in bias are not handled correctly, the redshift distribution inferred will be incorrect. Recent studies show the impact of bias evolution with current methods to be of order $|\Delta z|\approx0.01$ \citep{vandenBusch2020,Gatti2021}. However, it could be possible to address this issue by exploiting the astrophysical relationships between galaxy color and luminosity and the large-scale structure bias: both locally and at $z\sim 1$, the clustering strength of galaxies is primarily a monotonic function of color, with redder galaxies clustering more strongly, and a weaker function of luminosity \citep{Hogg2003,Cooper2006}. Although it may be difficult to determine whether an individual galaxy is at either one of two possible redshifts, it is straightforward to determine the restframe color and luminosity of a galaxy, and hence bias as predicted from analyses of broader populations, \textit{conditioned on} the redshift; that is, we can accurately determine $p(C_{\rm restframe}, L | P_{\rm observed}, z)$, where $C_{\rm restframe}$ is some restframe color, $L$ is luminosity, and $P_{\rm observed}$ is the measured photometry for an object. If we characterize the dependence of bias on restframe properties and redshift, $b(C_{\rm restframe}, L, z)$, either via auto- or cross-correlation analyses, then we have sufficient information to predict $b(z)$ for a sample. Again, this is a method that should in principle be effective, given the simple behavior of galaxy biasing; however, more work is needed to demonstrate that the level of characterization needed for future surveys can be achieved. 
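A sketch of that prediction step might look like the following; the bias-model coefficients and the mock conditional draws are entirely hypothetical stand-ins, not fits to any survey:

```python
import numpy as np

rng = np.random.default_rng(0)

def bias_model(c_rest, lum, z):
    """Hypothetical b(C_restframe, L, z): redder and more luminous
    galaxies cluster more strongly, with mild redshift evolution."""
    return 1.0 + 0.5 * c_rest + 0.2 * np.log10(lum) + 0.3 * z

def mock_conditional_draws(z, n=1000):
    """Stand-in draws from p(C_restframe, L | photometry, z)."""
    c_rest = rng.normal(0.5 + 0.1 * z, 0.2, n)
    lum = 10.0 ** rng.normal(10.0, 0.3, n)
    return c_rest, lum

def predict_b_of_z(z_grid):
    """Effective bias of a sample at each z: the bias model averaged
    over restframe properties consistent with the photometry."""
    b_eff = []
    for z in z_grid:
        c_rest, lum = mock_conditional_draws(z)
        b_eff.append(bias_model(c_rest, lum, z).mean())
    return np.array(b_eff)

z_grid = np.linspace(0.2, 1.2, 6)
b_of_z = predict_b_of_z(z_grid)
```

In a real analysis both the conditional distribution of restframe properties and the bias model would be calibrated empirically, e.g., from auto- and cross-correlation measurements of brighter samples.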
\ifarxiv \begin{marginnote} \entry{Take-away}{Cross-correlations between photometric and spectroscopic samples have the potential to characterize redshift distributions even if deep spectroscopy remains systematically incomplete, but a number of astrophysical systematics could limit their power.} \end{marginnote} \fi \begin{figure}[t] \centering \includegraphics[width=0.55\textwidth]{matthews_magnification.pdf} \caption{An illustration of the impact of weak lensing magnification on cross-correlation measurements that could be used to reconstruct redshift distributions. The black line shows the expected cross-correlation signal, $w_{sp}$, between a spectroscopic sample at redshift $z = z_s$ and a photometric sample having a Gaussian distribution in redshift, plotted as a function of $z_s$; it is this signal that can be used to reconstruct redshift distributions, for a simple toy model scenario (detailed in \citealt{Matthews2014}). The cross-correlation signals that result from objects in the photometric sample being lensed by objects in a given spectroscopic redshift bin are shown as blue lines corresponding to different values of the faint-end Schechter function slope $\alpha$. This signal is comparatively weak, as the lensing effect is null for the typical faint-end slope of $\alpha = -1$. Finally, red curves show the signal resulting from comparatively bright spectroscopic objects being lensed by the photometric objects; this effect is much stronger as the density of bright galaxies drops exponentially with increasing luminosity, making counts at the bright end sensitive to lensing contamination. Figure reproduced with permission from \cite{Matthews2014}.} \label{fig:magnification} \end{figure} \section{A VISION FOR THE FUTURE OF PHOTOMETRIC REDSHIFTS} \label{sec:vision} In the previous section, we have considered challenges that may affect applications of photometric redshifts to near-future projects.
Figure \ref{fig:char_error} summarizes the current state of the art on those challenges which will impact the characterization of redshift distributions, a major source of systematic uncertainty for dark energy experiments. Considerable progress will need to be made in many areas for future experiments to reach their full potential. \begin{figure}[htp!] {\includegraphics[width=0.95\textwidth]{summary.pdf}} {\caption{{Recent estimates of the impact of challenges on the characterization of state-of-the-art photometric redshifts, summarized as the resulting systematic uncertainty of the estimated mean redshift of ensembles of galaxies. Horizontal bars indicate the ranges of impacts consistent with a given study; points correspond to the best estimate of an impact. Grey vertical bands indicate the requirements for cosmological analyses with Year 1 or Year 10 Vera Rubin Observatory LSST data \citep{DESC_SRD}. At top left we show the impact of biases from the selection of spectroscopic redshift samples \citep{Gruen2017, Joudaki2020, Hartley2020}; cf. \S \ref{sec:incompleteness}. At top middle we show the impact of incorrect redshifts in spectroscopic or many-band photometric training sets \citep{Newman2013,Laigle2016,Myles2021}, as described in \S \ref{sec:outliers}. At top right is shown the impact of sample variance due to the limited volume of spectroscopic surveys \citep{Hildebrandt2020, Myles2021}; cf. \S \ref{sec:sample_variance}. At bottom left is shown the impact of biases in the selection of objects within photometric sample bins, as discussed in \S \ref{sec:photo_selection} \citep{Gruen2017,Wright2020,Myles2021}. In the bottom middle panel we show the impact of blending between multiple objects, either through its effect on the lensed galaxies \citep{MacCrann2021} or on the spectroscopic samples used for characterization \citep{Brinchmann2017, Masters2019}; cf. \S \ref{sec:blending}. 
At bottom right we show the estimated impact of a variety of astrophysical uncertainties on the characterization of redshift distributions via cross-correlations, also known as clustering redshifts \citep{vandenBusch2020,Gatti2021}, as described in \S \ref{sec:xcorr}. Here we consider only sources of uncertainty in the first moment of the redshift distribution, as those are best-studied to date; however, higher moments will need to be characterized to similarly stringent levels \citep{DESC_SRD}. \label{fig:char_error} } } } \vskip-0.15in \end{figure} In this section, we conclude by evaluating how we might expect photometric redshift algorithms to develop by the time of the final analyses of the next generation of imaging surveys, in the mid-2030s, to help address these challenges. At that time, we might hope that our understanding of the physics of galaxy evolution will have become developed enough that photometric redshift systematics will be inseparably linked to that science, not just to cosmology. We can also expect that some progress will be made on spectroscopic training samples for photo-$z$ analyses, but that completeness and sample sizes will remain limited; the sorts of large-aperture ($\gtrsim 8$m), highly-multiplexed, wide-field-of-view capabilities that are optimal for photometric redshift training samples are unlikely to be available much before that time \citep{Newman2019}. \subsection{Potential Developments in Photometric Redshift Algorithms} \label{sec:pz_vision} The issues of spectroscopic incompleteness, outliers and biases in calibration redshifts, and sample variance in calibration redshifts discussed in \S \ref{sec:characterization} limit traditional machine-learning photometric redshift approaches at levels significantly exceeding the requirements of Stage IV surveys.
At the same time, template-based photometric redshift methods have delivered poorer performance, and often poorer characterization, than machine learning methods within the range of coverage of training sets of spectroscopic redshifts. Ultimately, we might hope to unify the advantages of both techniques. A model for the ensemble of rest-frame galaxy SEDs and luminosity functions, as well as the redshift evolution thereof, would allow more meaningful interpolation between the sparse sampling of galaxies targeted by deep spectroscopic surveys in the presence of sample variance. Such a model could also be used to meaningfully extrapolate beyond the limits of the training data, e.g.,~to emulate intrinsically fainter galaxies. At present, however, no model derived from \emph{a priori} principles has achieved sufficient fidelity; improvements both to stellar population synthesis models and to our understanding of the underlying galaxy population are needed. Rather, there will have to be an interplay of empirical modeling based on artificial intelligence with physics-driven components, incorporating the notion of a rest-frame SED, parameterizations for spectral features that vary from galaxy to galaxy as well as for the characteristics of the underlying galaxy population, flexible models of reddening, and the incorporation of the characteristics of the instrument used for observations. Such a model will have to be refined using all available data. In this, spectroscopy and deep multi-band photometry will continue to play decisive roles due to their ability to break degeneracies that are irreducible in shallow broad-band photometry. Narrow-band photometry, if it can be obtained with much greater depths and areas than currently available, could be an important, highly constraining element.
The large impact of selection biases in both spectroscopic and photometric galaxy samples, and their complex and non-linear dependence on observing conditions and analysis algorithms, are additional concerns. They can be overcome only if such a scheme includes careful simulation or emulation of observations for interpreting the spectroscopic and photometric data. In principle such a model could be used to generate representative artificial training sets of arbitrary size and full redshift coverage as input for machine learning algorithms. Ultimately, it is more desirable that all data be interpreted simultaneously to characterize the likelihood distribution $p({\rm data} | {\rm model})$ -- i.e., the probability of the full set of photometric and spectroscopic observations obtained as a function of the parameters of the underlying model for the galaxy population and SEDs. The most universally useful outputs from this process would be a set of samples from the probability distribution of the parameters of the model (as would be obtained from probabilistic inference methods such as Markov Chain Monte Carlo sampling). Each such sample would then be associated with a set of posterior probability distributions for the redshift of each individual object and/or for ensembles of galaxies; by averaging the posteriors corresponding to different samples, one can obtain the posterior PDF marginalizing over all parameters of the model.
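That final averaging step can be sketched as follows, with the per-model-sample redshift posteriors replaced by stand-in Gaussians rather than outputs of a real hierarchical inference:

```python
import numpy as np

rng = np.random.default_rng(1)
z_grid = np.linspace(0.0, 2.0, 201)
dz = z_grid[1] - z_grid[0]

# Pretend we have 50 posterior samples of the population-model
# parameters; each one implies a slightly different redshift posterior
# for a given galaxy (here: Gaussians with jittered centers/widths).
centers = 0.9 + 0.02 * rng.standard_normal(50)
widths = 0.05 * (1.0 + 0.1 * rng.standard_normal(50))

posteriors = np.array([
    np.exp(-0.5 * ((z_grid - c) / w) ** 2) / (w * np.sqrt(2 * np.pi))
    for c, w in zip(centers, widths)
])

# Marginalize over the population model by averaging the conditional
# posteriors; the result is typically broader than a single
# conditional PDF, reflecting model uncertainty.
p_marginal = posteriors.mean(axis=0)
```

The same averaging applies to ensemble quantities such as $n(z)$ for a tomographic bin, so the model uncertainty propagates coherently into the cosmological analysis.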
\ifarxiv \begin{marginnote} \entry{Take-away}{Flexible, observationally-constrained models of the underlying population of galaxy spectral energy distributions could enable significant improvements in photometric redshift performance and characterization.} \end{marginnote} \fi \subsection{Potential Developments in Spectroscopic Training} \label{sec:spec_vision} As discussed in Sections \ref{sec:incompleteness} and \ref{sec:outliers}, sets of objects with spectroscopic redshifts at the full depth of future imaging surveys from the Rubin Observatory, Euclid, and Roman Space Telescope will be valuable for optimizing the performance of photo-$z$ algorithms and, if the impact of systematic incompleteness and outliers can be avoided, could potentially provide the exquisite characterization needed for precision cosmological measurements. If incompleteness and outlier rates are improved but do not reach the level required for direct characterization, simultaneous forward-modeling of photometric and spectroscopic survey data (e.g., in extension of \citealt{Fagioli2020}) may provide a path forward. However, such improved samples would still be highly expensive to obtain with instruments that currently exist or are in construction. It will thus be important both to develop new spectroscopic resources as well as to optimize how we will use existing ones. New telescopes and instruments optimized for wide-field, highly-multiplexed spectroscopy on $\gtrsim 8$m diameter apertures can vastly reduce the time required to obtain photometric redshift training samples. 
\citet{Newman2015} defined a fiducial photometric redshift spectroscopy sample consisting of 30,000 objects down to magnitude $i=25.3$ distributed over 15 fields of at least 20 arcminute diameter (in order to mitigate and characterize the effects of sample/cosmic variance), with sufficient depth per pointing to obtain redshifts for at least 75\% of targets, and sufficient spectral resolution to split the [OII] 3727 Angstrom doublet at $z \gtrsim 0.7$, where other emission-line spectral features are redshifted beyond optical wavelengths. \citet{Newman2019} found that the most efficient currently-available options, the DESI instrument at the Mayall Telescope \citep{DESIInstrument} and the DEIMOS spectrograph at Keck Observatory \citep{Faber2003}, would take more than 1800 dark nights to conduct this survey. In contrast, the upcoming Subaru/PFS \citep{Tamura2016} would take roughly 400 dark nights, while the proposed Maunakea Spectroscopic Explorer \citep{MSE2018}, ESO SpecTel project \citep{Spectel2019}, or the MANIFEST fiber-feed for the GMACS instrument on the Giant Magellan Telescope \citep{Lawrence2020} could potentially complete this fiducial survey in less than 200 nights. Given the large investment required for spectroscopic surveys in support of photometric redshift measurements, it would be desirable to optimize our use of such datasets in order both to minimize the resources needed and maximize the impact of the data that is obtained. Current photo-$z$ training surveys are already utilizing self-organizing maps to identify regions of parameter space where additional spectroscopy is needed \citep[e.g.,][]{Masters2019}. Future datasets could take advantage of developments in active learning algorithms, which in this application would identify those objects for which obtaining a spectroscopic redshift measurement would most improve the model \citep[e.g.,][]{Vilalta2017}.
Another area of potential gain is better integration of spectroscopic and many-band photometric surveys (or low-resolution grism surveys, which have similar properties). Obtaining photometry in large numbers of bands (as in COSMOS, \citealt{Laigle2016}), in narrow bands (as in PAU, \citealt{Alarcon2021}), or alternatively obtaining low-resolution spectroscopy (as used by PRIMUS, \citealt{Coil2011}) or slitless spectroscopy (as in 3D-HST, \citealt{Momcheva2016}) can provide redshift estimates for complete samples of galaxies, with larger uncertainties and catastrophic error rates than spectroscopic surveys, but lower errors than broadband photo-$z$'s. At fixed telescope etendue, survey time, and number of spectral features resolved, the error on redshifts will be proportional to $\frac{1}{\sqrt{R}}$, where the spectral resolution $R \equiv \frac{\lambda}{\Delta \lambda}$. Given this scaling, it is rarely feasible to greatly reduce redshift errors over a full imaging survey area down to the depths reached by broadband imaging, but deep campaigns covering tens or hundreds of square degrees to useful depths could be possible. Such multi-band data can boost the benefits gained from spectroscopic samples. To achieve this, full integration of the information gained from higher-resolution spectroscopic surveys and many-band photometric ones would be desirable (some steps toward that have been taken; e.g., in \citealt{Buchs2019}). The near-infrared grism spectroscopy capabilities of the Euclid mission and the Roman Space Telescope present an appealing opportunity here. Low-resolution spectroscopy at IR wavelengths is difficult from the ground due to high sky backgrounds, while the space-based missions have only limited optical capabilities.
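As a numerical illustration of the $1/\sqrt{R}$ scaling quoted above (the reference error and the representative resolutions below are illustrative assumptions, not measured values):

```python
import math

def relative_z_error(R, sigma_ref=0.02, R_ref=40):
    """Redshift error scaling as 1/sqrt(R) at fixed etendue, survey
    time, and number of resolved spectral features; the anchor point
    sigma_ref at R_ref is a made-up reference value."""
    return sigma_ref * math.sqrt(R_ref / R)

# Order-of-magnitude resolutions: broadband photometry behaves roughly
# like R ~ 5, narrow bands ~ 50, low-resolution grisms ~ 500.
errors = {R: relative_z_error(R) for R in (5, 50, 500)}
```

Quadrupling $R$ halves the error, so moving from broadband-like to grism-like resolution buys roughly an order of magnitude in redshift precision, at a corresponding cost in depth or area.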
Deep many-band optical imaging from the ground over the same areas covered deeply by space-based grisms would provide detailed SEDs over a broad wavelength range, yielding a rich dataset for photometric redshift training and characterization as well as for galaxy evolution studies. Deep multi-wavelength photometry can also help by breaking degeneracies between different redshift solutions that few-band optical colors cannot. We might hope to gain a deeper understanding of the range of galaxy SEDs at somewhat brighter magnitudes via spectroscopy, and extend that information to fainter galaxies via many-band and multi-wavelength photometry, greatly expanding the training sets that may be used for photometric redshifts (similar to suggestions in \citealt{LSSTScienceBook}). Ideally, this would again be used to help move us towards a full phenomenological model of galaxy spectral evolution. If we can mitigate the impact of sample/cosmic variance on photometric redshift characterization, this would further reduce the scale of the training sets required and make wider-field spectrographs (e.g., MegaMapper, \citealt{Schlegel2019}) more useful for this work. The intermediate-precision redshifts provided by many-band and multiwavelength surveys could play a role here, allowing the large-scale-structure fluctuations in the regions used for deep spectroscopic surveys to be measured and characterized. By surveying wider areas, such surveys could also enable rare populations to be identified at many different redshifts, rather than just those associated with the highest over- or under-densities in small spectroscopic survey fields. 
It would be even better to have a model for galaxy SEDs that can be tuned with the objects in spectroscopic fields, but continuously predicts colors at arbitrary redshift; again, the closer we can come to the ideal of a complete phenomenological model of galaxy spectral evolution (or, if one were even more ambitious, a full physical model), the better our understanding of photometric redshifts will be. \ifarxiv \begin{marginnote} \entry{Take-away}{In order to obtain optimal photometric redshift performance and characterization, large investments of observing time on wide-field spectrographs on large telescopes are needed, supported by deep narrow-band and multiwavelength imaging and shallower, wide-area spectroscopy.} \end{marginnote} \fi \vspace{5pt} Of course, the future of photometric redshifts could lie in different directions than we have speculated here. It is clear, however, that several independent effects are currently limiting their performance and characterization at levels that will be prohibitive for the advances that the data collection of Stage IV galaxy surveys in principle allows. There are thus many broad avenues of research -- both in areas discussed above and beyond -- that, if successful, will allow photometric redshift methods to be improved far beyond the current state of the art. If a substantial investment in both data-taking and method development is made, it is likely to pay off: photometric redshifts will be a critical tool for studies of both galaxy evolution and cosmology with the next generation of imaging surveys, and should continue to be valuable for extragalactic science for the foreseeable future. \ifarxiv \vskip 0.01in \else \begin{summary}[SUMMARY POINTS] \begin{enumerate} \item Distances based on photometric redshifts enable the inference of many properties from imaging data, key for studies in both galaxy evolution and cosmology.
\item The \emph{performance} of photo-$z$ algorithms is limited by having measurements in only a few, broad, noisy photometric bands -- but that trade-off enables studies of large samples of faint galaxies. For galaxy evolution studies and measurements of clustering, performance is frequently the most important factor determining the constraining power of photometric redshift-based studies. \item The scarcity of suitable deep spectroscopy, combined with stringent requirements, makes photometric redshift \emph{characterization} a leading challenge and source of systematic uncertainty. Cosmological studies frequently place only weak demands on performance, but require exquisite characterization of photo-$z$'s. Weak lensing, large-scale-structure, and cosmology studies with upcoming surveys all require moments of redshift distributions to be known to the 0.1\% level. \item Photometric redshift methods can be categorized by how they use prior information, including training samples and SED templates. Incomplete, incorrect, and/or inflexible models for the galaxy population currently limit template-fitting redshift performance at levels of $|\langle\Delta z\rangle/(1+z)|>0.01$. Insufficient training samples or analysis choices that do not match the science case commonly limit empirical redshift methods at a similar level. Methods that inform a model for the galaxy population with all collected data have promise for addressing current limitations of photo-$z$ algorithms, but are not yet widely used. \item There can be no photometric redshifts of high quality without spectroscopic redshifts of high quality \emph{and} quantity. Roughly 30,000 deep spectroscopic redshift measurements will be needed to optimize the \emph{performance} of photo-$z$ algorithms for near-future surveys. 
Either spectroscopic samples will need to reach unprecedentedly low or well-understood incompleteness and outlier rates, or characterization methods will need to greatly improve, in order for the stringent requirements for future dark energy studies to be met. \item Progress will need to be made in a variety of areas for photometric redshifts with next-generation imaging surveys to reach their full potential, as described in the summary of Future Issues. \item Flexible, observationally-constrained models of the underlying population of galaxy spectral energy distributions could enable significant improvements in both photometric redshift performance and characterization. Large investments of observing time on wide-field spectrographs on large telescopes will be needed for improving both performance and characterization, supported by deep narrow-band and multi-wavelength imaging as well as shallower, wide-area spectroscopy. \end{enumerate} \end{summary} \fi \ifarxiv \vskip 0.01in \else \begin{issues}[FUTURE ISSUES] \begin{enumerate} \item Current photo-$z$ codes optimized for predicting redshifts of individual objects fail to produce outputs that fulfill the frequentist definition of a PDF; Bayesian treatments have often also had limitations. Because photo-$z$ codes make analysis choices that fall short in different ways, combining redshift PDFs for an object that were derived via multiple methods can provide higher-performing results, but better techniques for combination are still needed. \item Photometry can be used to jointly constrain physical galaxy parameters along with redshift, but this will be computationally prohibitive with the large samples from upcoming surveys unless methods are improved. Additionally, the cost of storing joint many-dimensional PDFs for large samples is prohibitive with current methods.
\item Exploiting morphological information has yielded significant improvements to photometric redshift performance at low redshift; it remains to be seen whether similar gains can be achieved at higher $z$. Even when morphological information is included, current photo-$z$ methods tend to perform poorly at the very lowest redshifts ($z < 0.05$). \item Mitigating the effects of systematic incompleteness in the deep spectroscopic samples used to train photo-$z$ algorithms and characterize redshift distributions will be a key challenge for upcoming surveys. Additionally, the rate of incorrect redshifts in current deep spectroscopic samples is high enough to compromise direct redshift characterization methods for future surveys; the problem is worse for redshifts from many-band photometry or low-resolution spectroscopy. \item Deep spectroscopic and many-band photometric surveys cover only limited areas of sky; as a result, sample variance in redshift distributions is large and can limit direct redshift distribution characterization. \item Selections affecting photometric samples must be taken into account when characterizing photo-$z$'s, placing additional requirements on spectroscopic datasets. \item Undetected overlapping of galaxy images is unavoidable in ground-based data, but makes interpretation of photo-$z$'s and spectroscopic samples more difficult. \item Cross-correlations between photometric and spectroscopic samples have the potential to characterize redshift distributions even if deep spectroscopy remains systematically incomplete, but astrophysical systematics could limit their power. \end{enumerate} \end{issues} \fi \section*{DISCLOSURE STATEMENT} The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review. 
\section*{ACKNOWLEDGMENTS} We gratefully acknowledge Paulina Contreras, Biprateep Dey, Moritz Gammel, Elisa Legnani, Alex Malz, Yao-Yuan Mao, Justin Myles, Markus Rau, Samuel Schmidt, and Luca Tortorelli for a variety of helpful conversations and for their reviews of versions of this article. We thank Biprateep Dey, Alex Malz, and Yao-Yuan Mao for providing figures for use in this work. We also gratefully acknowledge many helpful interactions with members of the LSST Dark Energy Science Collaboration Photometric Redshifts Working Group. We also wish to thank Sandra Faber for her continued help, support, and feedback throughout the development of this review, as well as Robert Kennicutt for his assistance with the editorial process in the later stages of its development. This work was supported by grants DE-SC0007914 and DE-SC0020256 from the United States Department of Energy, Office of Science, Office of High Energy Physics, without which it would not have been possible.
\section{\label{sec:intro}Introduction} \input{./tex_files/intro.tex} \section{\label{sec:methods}Methods} \input{./tex_files/methods.tex} \section{\label{sec:results}Results and Discussion} \input{./tex_files/results.tex} \section{\label{sec:conslusions}Conclusions} \input{./tex_files/conclusion.tex} \section*{Conflict of interest} There are no conflicts to declare. \section*{\label{sec:acknowledgments}Acknowledgements} \input{./tex_files/acknowledgements.tex} \section*{Supplementary Material} See supplementary material for the single-molecule probability distribution in the model pore, the derivation of the ideal random distribution $p_{ref}(\theta)$, details of the setup and results of explorative metadynamics simulations, and additional analysis of the radial density profiles. \section*{\label{sec:references}References} \bibliographystyle{unsrt} \end{document}
\section{Introduction} Andreev's Theorem \cite{AND,AND2} provides a complete characterization of compact hyperbolic polyhedra having non-obtuse dihedral angles. See also \cite{ROE2,ROE} for an alternative exposition on the classical proof. Other approaches to Andreev's Theorem can be found by Rivin and Hodgson \cite{RH,H}, Thurston \cite{TH}, Marden and Rodin \cite{MR}, and Bowers and Stephenson \cite{STEVE}. In this paper we show that the classical proof from \cite{AND,AND2,ROE2,ROE} is constructive when combined with Newton's Method for solving nonlinear equations. Combinatorial descriptions of hyperbolic polyhedra that are relevant to Andreev's Theorem fall into three classes, {\em simple, truncated}, and {\em compound}, all defined later in this section. The proof in \cite{ROE2,ROE} provides an explicit continuous path in the space of polyhedra deforming a given simple polyhedron $P$ to one of two which are easily constructed by hand: the $N$-faced prism $Pr_N$ and the $N$-faced split prism $D_N$. We use Newton's method to follow such a path backwards deforming a computer realization of $Pr_N$ or $D_N$ to a computer realization of the desired polyhedron $P$. This technique, which has been well studied in the literature, is known as the homotopy method \cite{ALLGOWER,BLUM,SHUB1,SHUB2,SHUB3,SHUB4,BELTRAN}. We illustrate the construction of simple polyhedra in Sections \ref{SEC_WH},\ref{SEC_ALGORITHM}, and \ref{SEC_DIFFICULT}. A similar deformation, again using Newton's method, allows us to construct truncated polyhedra from simple polyhedra. We demonstrate this deformation in Section \ref{SEC_TRUNCATION}. In Section \ref{SEC_COMPOUND} we show how to construct a compound polyhedron as a gluing of two appropriate truncated polyhedra. In this way, our program graphically illustrates Andreev's proof of existence for explicit examples. 
In fact, writing this program and working through Andreev's proof for some specific examples led to the detection of an error in the proof of existence, which has been corrected in \cite{ROE2,ROE}. A further benefit of this program is the construction of polyhedra whose dihedral angles are proper integer sub-multiples of $\pi$. As a consequence of Poincar\'e's polyhedron theorem \cite{POINCARE}, the group $\Gamma$ generated by reflections in the faces of such a polyhedron is a discrete group of isometries of hyperbolic space. The quotient $\mathbb{H}^3/\Gamma$ is hence a compact hyperbolic 3-orbifold in which we study the hyperbolic volume and spectrum of closed geodesic lengths using SnapPea \cite{SNAPPEA}. Such orbifolds and their covering manifolds have been studied extensively \cite{LOBELL,VESNIN_LOB,HYPER_ELL,LOB_VOL,SMALL_COVERS,KELLERHALS,RENI}. In fact, the first example of a closed hyperbolic 3-manifold was obtained in this way in 1931 by L\"obell \cite{LOBELL}. One consequence of our study is a volume estimate for a ``hyperelliptic'' manifold considered in \cite{HYPER_ELL}. The reader should note that there are already excellent computer programs for experimentation with hyperbolic 3-manifolds. The program SnapPea \cite{SNAPPEA} constructs hyperbolic structures on knot and link complements, as well as on the hyperbolic Dehn surgeries on these complements. SnapPea provides for the computation of a variety of geometric invariants of the computed hyperbolic structure. (See also \cite{WEEKS}.) The program Snap \cite{SNAP,SNAP_PAPER} provides a way of computing arithmetic invariants of hyperbolic manifolds. Both of these programs are quite easy to use and have allowed for vast levels of experimentation, including a nice census of low-volume hyperbolic manifolds and orbifolds. The experimentation done in this paper with the hyperbolic orbifolds obtained from polyhedral reflection groups is very modest in comparison. 
However, it is an alternative way to construct hyperbolic structures on certain orbifolds (and, in the future, possibly on manifold covers of these orbifolds) in a way that these structures can nicely be studied by SnapPea (as well as Snap, and other software, in the future). \vspace{.1in} Let $E^{3,1}$ be $\mathbb{R}^4$ with the indefinite metric $\Vert {\bf x} \Vert^2 = -x_0^2+x_1^2+x_2^2+x_3^2$. In this paper, we work in the hyperbolic space $\mathbb{H}^3$ given by the component of the subset of $E^{3,1}$ given by $$\Vert {\bf x} \Vert^2 = -x_0^2+x_1^2+x_2^2+x_3^2 = -1$$ \noindent having $x_0 > 0$, with the Riemannian metric induced by the indefinite metric $$-dx_0^2+dx_1^2+dx_2^2+dx_3^2.$$ The hyper-plane orthogonal to a vector ${\bf v} \in E^{3,1}$ intersects $\mathbb{H}^3$ if and only if $\langle{\bf v},{\bf v}\rangle> 0$. Let ${\bf v} \in E^{3,1}$ be a vector with $\langle{\bf v},{\bf v}\rangle > 0$, and define \begin{eqnarray*} P_{\bf v} = \{{\bf w} \in \mathbb{H}^3 | \langle{\bf w},{\bf v}\rangle = 0\} \mbox{ and } H_{\bf v} = \{{\bf w} \in \mathbb{H}^3 | \langle{\bf w},{\bf v}\rangle \leq 0 \} \end{eqnarray*} \noindent to be the hyperbolic plane orthogonal to ${\bf v}$ and the corresponding closed half space. If one normalizes $\langle{\bf v},{\bf v}\rangle = 1$ and $\langle{\bf w},{\bf w}\rangle = 1$ the planes $P_{\bf v}$ and $P_{\bf w}$ in $\mathbb{H}^3$ intersect in a line if and only if $\langle{\bf v},{\bf w}\rangle^2 < 1$, in which case their dihedral angle is $\arccos(-\langle{\bf v},{\bf w}\rangle)$. They intersect in a single point at infinity if and only if $\langle{\bf v},{\bf w}\rangle^2 = 1$; in this case their dihedral angle is $0$. A {\it hyperbolic polyhedron} is an intersection $$P = \bigcap_{i=0}^n H_{\bf v_i}$$ \noindent having non-empty interior. 
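These formulas are simple to put on a machine. The following minimal Python sketch (an illustration only; the computations in this paper are done in Matlab/Octave) evaluates the indefinite inner product on $E^{3,1}$ and the dihedral angle $\arccos(-\langle{\bf v},{\bf w}\rangle)$ between two normalized intersecting planes.

```python
import math

def lorentz(x, y):
    # Indefinite inner product on E^{3,1}:
    # <x,y> = -x0*y0 + x1*y1 + x2*y2 + x3*y3.
    return -x[0] * y[0] + x[1] * y[1] + x[2] * y[2] + x[3] * y[3]

def dihedral_angle(v, w):
    # Assumes <v,v> = <w,w> = 1 and that P_v and P_w meet in a line,
    # i.e. <v,w>^2 < 1; the dihedral angle is then arccos(-<v,w>).
    return math.acos(-lorentz(v, w))

# Two planes through the "origin" (1,0,0,0) of the hyperboloid,
# meeting at dihedral angle pi/3:
v = [0.0, 1.0, 0.0, 0.0]
w = [0.0, -math.cos(math.pi / 3), math.sin(math.pi / 3), 0.0]
```

Note that the vectors here are normals, so the angle between the planes is read off from a single inner product, with no need to locate the line of intersection.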
We will often use the Poincar\'e ball model of hyperbolic space, given by the unit ball in $\mathbb{R}^3$ with the metric $$4\frac{dx_1^2+dx_2^2+dx_3^2}{(1 -\Vert {\bf x}\Vert^2)^2}$$ \noindent and the upper half-space model of hyperbolic space, given by the subset of $\mathbb{R}^3$ with $x_3 > 0$ equipped with the metric $$\frac{dx_1^2+dx_2^2+dx_3^2}{x_3^2}.$$ \noindent Both of these models are isometric to $\mathbb{H}^3$. Hyperbolic planes in these models correspond to Euclidean hemispheres and Euclidean planes that intersect the boundary perpendicularly. Furthermore, these models are conformally correct, that is, the hyperbolic angle between a pair of such intersecting hyperbolic planes is exactly the Euclidean angle between the corresponding spheres or planes. \subsection{Combinatorial polyhedra and Andreev's Theorem} A compact hyperbolic polyhedron $P$ is topologically a 3-dimensional ball, and its boundary a 2-sphere $\mathbb{S}^2$. The face structure of $P$ gives $\mathbb{S}^2$ the structure of a cell complex $C$ whose faces correspond to the faces of $P$. Considering only hyperbolic polyhedra with non-obtuse dihedral angles simplifies the combinatorics of any such $C$: \begin{prop} \label{TRIVALENT} (a) A vertex of a non-obtuse hyperbolic polyhedron $P$ is the intersection of exactly 3 faces. \newline (b) For such a $P$, we can compute the angles of the faces in terms of the dihedral angles; these angles are also $\leq \pi/2$. \end{prop} \noindent See \cite{ROE2,ROE}. The fundamental axioms of incidence place the following, obvious, further restrictions on the complex $C$: \begin{itemize} \item Every edge of $C$ belongs to exactly two faces. \item A non-empty intersection of two faces is either an edge or a vertex. \item Every face contains not fewer than three edges. \end{itemize} Any trivalent cell complex $C$ on $\mathbb{S}^2$ that satisfies the three conditions above is an {\it abstract polyhedron}. 
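Returning briefly to the models of hyperbolic space described above: to display a polyhedron computed in the hyperboloid model, one converts coordinates by the standard central projection from $(-1,0,0,0)$ onto the Poincar\'e ball. A minimal Python sketch of this conversion (our own helper, not taken from the paper's code):

```python
import math

def hyperboloid_to_ball(x):
    # Central projection from (-1,0,0,0): a point on the hyperboloid
    # -x0^2 + x1^2 + x2^2 + x3^2 = -1 with x0 > 0 lands in the open
    # unit ball of R^3.
    return [x[1] / (1 + x[0]), x[2] / (1 + x[0]), x[3] / (1 + x[0])]

# A point at hyperbolic distance r from (1,0,0,0) along the x1-axis
# maps to Euclidean distance tanh(r/2) from the center of the ball.
r = 1.0
p = [math.cosh(r), math.sinh(r), 0.0, 0.0]
b = hyperboloid_to_ball(p)
```

The identity $\sinh r/(1+\cosh r)=\tanh(r/2)$ confirms the familiar compression of hyperbolic distances near the boundary of the ball.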
Since $C$ must be a trivalent cell complex on $\mathbb{S}^2$, its dual, $C^*$, has only triangular faces and the three above conditions ensure that it is a simplicial complex on $\mathbb{S}^2$. The figure below shows an abstract polyhedron $C$ drawn in the plane (i.e. with one of the faces corresponding to the region outside of the figure.) The dual complex is also shown, in dashed lines. \vspace{.1in} \begin{center} \begin{picture}(0,0)% \epsfig{file=comb_poly.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4015,2919)(139,-2143) \end{picture}% \end{center} \vspace{.1in} We call a simple closed curve $\Gamma$ formed of $k$ edges of $C^*$ a {\it k-circuit} and if all of the endpoints of the edges of $C$ intersected by $\Gamma$ are distinct, we call such a circuit a {\it prismatic k-circuit}. The figure below shows the same abstract polyhedron as above, except this time the prismatic 3-circuits are dashed, the prismatic 4-circuits are dotted, and the dual complex is not shown. \vspace{.1in} \begin{center} \begin{picture}(0,0)% \epsfig{file=comb_poly2.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4152,2571)(80,-1853) \end{picture}% \end{center} \vspace{.1in} We say that a combinatorial polyhedron $C$ is {\em simple} if it has no prismatic $3$-circuits, {\em truncated} if $C$ has prismatic $3$-circuits and each surrounds on one side a single triangular face, and otherwise we call $C$ {\em compound}. The combinatorial polyhedron shown in the two above diagrams is compound. 
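Prismatic $3$-circuits can be detected directly from face adjacency: a $3$-circuit in the dual complex $C^*$ passes through three pairwise-adjacent faces of $C$, and it is prismatic exactly when these faces do not all meet at a vertex. The Python sketch below is our own illustration; its inputs mirror the adjacency-matrix and vertex-triple data used later in the paper to represent polyhedra.

```python
from itertools import combinations

def prismatic_3_circuits(adjacency, vertex_triples):
    # adjacency[i][j] is nonzero when faces i and j share an edge;
    # vertex_triples lists the triples of faces meeting at each vertex.
    vertices = {frozenset(t) for t in vertex_triples}
    found = []
    for i, j, k in combinations(range(len(adjacency)), 3):
        if adjacency[i][j] and adjacency[j][k] and adjacency[i][k]:
            # Three mutually adjacent faces with no common vertex give
            # a prismatic 3-circuit in the dual complex.
            if frozenset((i, j, k)) not in vertices:
                found.append((i, j, k))
    return found

# The triangular prism: faces 0,1 are the triangles, faces 2,3,4 the sides.
A = [[0, 0, 1, 1, 1],
     [0, 0, 1, 1, 1],
     [1, 1, 0, 1, 1],
     [1, 1, 1, 0, 1],
     [1, 1, 1, 1, 0]]
V = [(0, 2, 3), (0, 3, 4), (0, 2, 4), (1, 2, 3), (1, 3, 4), (1, 2, 4)]
# The only prismatic 3-circuit separates the two triangular faces: (2, 3, 4).
```

For the prism, the circuit through the three quadrilateral sides is prismatic, so the prism is not simple in the terminology above.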
\begin{thm} {\bf Andreev's Theorem} Let $C$ be an abstract polyhedron with more than 4 faces and suppose that non-obtuse angles ${\bf a}_i$ are given corresponding to edge $e_i$ of $C$. There is a compact hyperbolic polyhedron $P$ whose faces realize $C$ with dihedral angle ${\bf a}_i$ at each edge $e_i$ if and only if the following five conditions all hold: \setcounter{enumi}{0} \begin{enumerate} \item For each edge $e_i$, ${\bf a}_i > 0$. \item Whenever 3 distinct edges $e_i,e_j,e_k$ meet at a vertex, ${\bf a}_i+{\bf a}_j+{\bf a}_k > \pi$. \item Whenever $\Gamma$ is a prismatic 3-circuit intersecting edges $e_i,e_j,e_k$, ${\bf a}_i+{\bf a}_j+{\bf a}_k < \pi$. \item Whenever $\Gamma$ is a prismatic 4-circuit intersecting edges $e_i,e_j,e_k,e_l$, then ${\bf a}_i+{\bf a}_j+{\bf a}_k+{\bf a}_l < 2\pi$. \item Whenever there is a four sided face bounded by edges $e_1,$ $e_2,$ $e_3,$ and $e_4$, enumerated successively, with edges $e_{12}, e_{23}, e_{34}, e_{41}$ entering the four vertices (edge $e_{ij}$ connecting to the ends of $e_i$ and $e_j$), then: $${\bf a}_1 + {\bf a}_3 + {\bf a}_{12} + {\bf a}_{23} + {\bf a}_{34} + {\bf a}_{41} < 3\pi, \hspace{.2in} {\rm and}$$ $${\bf a}_2 + {\bf a}_4 + {\bf a}_{12} + {\bf a}_{23} + {\bf a}_{34} + {\bf a}_{41} < 3\pi.$$ \end{enumerate} Furthermore, this polyhedron is unique up to isometries of $\mathbb{H}^3$. \end{thm} \begin{cor} \label{EX3APR} If $C$ is simple, i.e. has no prismatic $3$-circuits, there exists a unique hyperbolic polyhedron realizing C with dihedral angles $2\pi/5$. \end{cor} \vspace{.1in} For a given $C$, let $E$ be the number of edges of $C$. The subset of $(0,\pi /2]^E$ satisfying these linear inequalities will be called the {\it Andreev Polytope}, $A_C$. Since $A_C$ is determined by linear inequalities, it is convex. Andreev's restriction to non-obtuse dihedral angles is emphatically necessary to ensure that $A_C$ be convex. 
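Since $A_C$ is cut out by finitely many linear inequalities, testing whether a proposed angle vector lies in $A_C$ amounts to a short list of checks. The following Python sketch is our own illustration (condition (5) on quadrilateral faces is omitted for brevity, and all names are hypothetical); it verifies conditions (1)-(4) together with non-obtuseness.

```python
import math

def in_andreev_polytope(angles, vertex_triples, prismatic_3, prismatic_4):
    # angles maps frozenset({i, j}) -> dihedral angle on the edge shared
    # by faces i and j.  Condition (5) for quadrilateral faces is omitted.
    e = lambda i, j: angles[frozenset((i, j))]
    if any(a <= 0 or a > math.pi / 2 for a in angles.values()):
        return False                                  # (1) and non-obtuseness
    for i, j, k in vertex_triples:                    # (2) vertex sums > pi
        if e(i, j) + e(j, k) + e(i, k) <= math.pi:
            return False
    for i, j, k in prismatic_3:                       # (3) circuit sums < pi
        if e(i, j) + e(j, k) + e(i, k) >= math.pi:
            return False
    for i, j, k, l in prismatic_4:                    # (4) faces in circuit order
        if e(i, j) + e(j, k) + e(k, l) + e(i, l) >= 2 * math.pi:
            return False
    return True

# Triangular prism (faces 0,1 triangles, 2,3,4 sides): right angles on the
# triangle edges and pi/4 on the three vertical edges satisfy (1)-(4).
angles = {frozenset((a, b)): math.pi / 2 for a in (0, 1) for b in (2, 3, 4)}
angles.update({frozenset((2, 3)): math.pi / 4,
               frozenset((3, 4)): math.pi / 4,
               frozenset((2, 4)): math.pi / 4})
V = [(0, 2, 3), (0, 3, 4), (0, 2, 4), (1, 2, 3), (1, 3, 4), (1, 2, 4)]
```

Raising any one of the vertical-edge angles to $\pi/2$ violates condition (3) on the prismatic $3$-circuit through the three sides, so the corresponding angle vector leaves $A_C$.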
Without this restriction, the corresponding space of dihedral angles, $\Delta_C$, of compact (or finite volume) hyperbolic polyhedra realizing a given $C$ is not convex \cite{DIAZ}. In fact, the recent work by D\'iaz \cite{DIAZ_ANDREEV} provides a detailed analysis of this space of dihedral angles $\Delta_C$ for the class of abstract polyhedra $C$ obtained from the tetrahedron by successively truncating vertices. Her work nicely illustrates the types of non-linear conditions that are necessary in a complete analysis of the larger space of dihedral angles $\Delta_C$. The work of Rivin \cite{RIV_IDEAL2, RIV_IDEAL1} shows that the space of dihedral angles for ideal polyhedra forms a convex polytope, {\em without the restriction to non-obtuse angles}. (See also \cite{GUE}.) Notice also that the hypothesis that the number of faces is greater than four is necessary because the space of non-obtuse dihedral angles for compact tetrahedra is not convex \cite{ROE_TET}. Conditions (1-5) remain necessary conditions for compact tetrahedra, but they are no longer sufficient. Bao and Bonahon \cite{BAO} prove a similar classification theorem for hyperideal polyhedra. Finally, the papers of Vinberg on discrete groups of reflections in hyperbolic space \cite{AVS,VIN,VINREFL,VINVOL,VS} are also closely related, as is the work of Bennett and Luo \cite{LUO} and Schlenker \cite{SCH2,SCH1,SCH3}. \vspace{.1in} Much attention has been focused on Andreev's Theorem from the viewpoint of circle packings and circle patterns. Given a polyhedron $P$ in the upper half-space model of $\mathbb{H}^3$, the planes supporting the faces of $P$ intersect the boundary at infinity $x_3=0$ in a pattern of circles (and straight lines), each with an orientation specifying ``on which side'' of it the polyhedron $P$ lies. Similarly, from such a pattern of circles and orientations one can re-construct a polyhedron $P$. 
The works of Thurston \cite{TH}, Marden and Rodin \cite{MR}, and Bowers and Stephenson \cite{STEVE} all follow this approach to Andreev's Theorem. In fact, there is a beautiful computer program known as Circlepack \cite{CIRCLE_PACK}, written by Ken Stephenson, that computes circle packings and patterns of circles with specified angles of overlap. All of the proofs from this point of view use the conformal structure of the Riemann sphere $\hat{\mathbb{C}} = \partial_\infty \mathbb{H}^3$ and use the correspondence between conformal automorphisms of $\hat{\mathbb{C}}$ with isometries of $\mathbb{H}^3$. Instead of using the conformal structure on $\partial_\infty \mathbb{H}^3$, in this paper we will work specifically with the metric structure of $\mathbb{H}^3$. (However, there is certainly some significant overlap with the results in \cite{TH,MR,STEVE} and with the capabilities of the computer program CirclePack \cite{CIRCLE_PACK}.) \vspace{.1in} We will now explain the implementation of a computer program whose input is the combinatorial polyhedron $C$ and a dihedral angle vector ${\bf a} \in A_C$ and whose output is a hyperbolic polyhedron realizing the pair $(C,{\bf a})$. \subsection{An example}\label{SEC_EXAMPLE} The following figure shows an explicit example of the data $(C,{\bf a})$ and the resulting polyhedron displayed in the conformal ball model using the computer program Geomview \cite{GEO}. 
\begin{figure}[htp]\label{BIG_FIG} \begin{center} \begin{picture}(0,0)% \epsfig{file=angles_spec.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4654,3849)(82,-3083) \put(878,-1751){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2626,-2761){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}Andreev's Theorem}% }}}} \put(581,-98){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1101,-1498){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(720,-1049){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1195,-729){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1389,211){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3142,-308){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2716,-281){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3253,416){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(2091,-81){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} 
\put(3892,-560){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(4381,-779){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3897,-1181){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3661,-1985){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(2411,-1816){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1786,-2248){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1060,-1963){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1141,-1696){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(945,-1578){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1018,-1263){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(706,-1454){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(615,-1271){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(400,-1976){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} 
\put(3387,-645){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(401,-1604){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1577,-309){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1953,-292){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(2436,-622){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2041,-862){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1374,-1660){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(225,-1275){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1697,694){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2661, 90){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(2407,360){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(402,-932){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2241,-1003){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{3}$}% }}}} 
\put(2086,-504){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{3}$}% }}}} \put(2972,-599){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1653,-1430){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{3}$}% }}}} \put(2219,-670){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2276,-451){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2569,-406){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{6}$}% }}}} \end{picture}% \vspace{.1in} \includegraphics[scale=.35]{comp.ps} \end{center} \caption{\hbox{ }} \end{figure} \subsection{Outline of the proof of Andreev's Theorem}\label{SEC_OUTLINE} In this section, we recall the major steps from the proof of Andreev's Theorem that were presented in \cite{ROE2,ROE}. Let $C$ be a trivalent abstract polyhedron with $N$ faces. We say that a hyperbolic polyhedron $P \subset \mathbb{H}^3$ {\it realizes $C$} if there is a cellular homeomorphism from $C$ to $\partial P$ (i.e., a homeomorphism mapping faces of $C$ to faces of $P$, edges of $C$ to edges of $P$, and vertices of $C$ to vertices of $P$). We will call each isotopy class of cellular homeomorphisms $\phi : C \rightarrow \partial P$ a {\it marking} on $P$. We defined ${\cal P}_C$ to be the set of pairs $(P,\phi)$ so that $\phi$ is a marking with the equivalence relation that $(P,\phi) \sim (P',\phi ')$ if there exists an automorphism $\rho: \mathbb{H}^3 \rightarrow \mathbb{H}^3$ such that $\rho(P) = P'$, and both $\phi '$ and $\rho \circ \phi$ represent the same marking on $P'$. 
\begin{prop} The space ${\cal P}_C$ is a manifold of dimension $3N-6$ (perhaps empty). \end{prop} \noindent The proof is relatively standard and can be found in \cite{ROE2,ROE}. Since the edge graph of $C$ is trivalent, the number $E$ of edges of $C$ is the same as the dimension of ${\cal P}_C$. Given any $P \in {\cal P}_C$, let $\alpha(P) = ({\bf a}_1,{\bf a}_2,{\bf a}_3,...)$ be the $E$-tuple consisting of the dihedral angles of $P$ at each edge (according to some fixed numbering of the edges of $C$). This map $\alpha$ is obviously continuous with respect to the topology on ${\cal P}_C$, which it inherits from its manifold structure. We let ${\cal P}_C^0$ be the subset of ${\cal P}_C$ consisting of polyhedra with non-obtuse dihedral angles. To establish Andreev's Theorem, we proved the following statement: \begin{thm} For every abstract polyhedron $C$ having more than four faces, the mapping $\alpha: {\cal P}_C^0 \rightarrow A_C$ is a homeomorphism. \end{thm} There were two major steps: \begin{prop} \label{NONEMPTYIMPLIESAND} If ${\cal P}^0_C \neq \emptyset$, then $\alpha : {\cal P}^0_C \rightarrow A_C$ is a homeomorphism. \end{prop} We checked that $\alpha({\cal P}_C^0) \subset A_C$ by showing that conditions (1)-(5) are necessary. There is an open subset ${\cal P}^1_C \subset {\cal P}_C$ containing ${\cal P}^0_C$ on which one can prove that $\alpha:{\cal P}^1_C \rightarrow \mathbb{R}^E$ is injective, using a modification of Cauchy's rigidity for Euclidean polyhedra. This gives the uniqueness part of Andreev's Theorem. Using invariance of domain, it also gives that $\alpha : {\cal P}^1_C \rightarrow \mathbb{R}^E$ is a local homeomorphism. Because ${\cal P}^0_C \subset {\cal P}^1_C$, $\alpha$ restricted to ${\cal P}^0_C$ is a local homeomorphism, as well. We then showed that $\alpha:{\cal P}_C^0 \rightarrow A_C$ is proper, which amounts to showing that if a sequence of polyhedra $P_i$ in ${\cal P}_C^0$ degenerate (i.e. 
leave ${\cal P}_C^0$) then the sequence $\alpha(P_i)$ tends to $\partial A_C$. The fact that $\alpha:{\cal P}_C^0 \rightarrow A_C$ is a proper local homeomorphism was sufficient to show that $\alpha({\cal P}_C^0)$ is open and closed in $A_C$. \begin{prop}\label{EXIST} If $A_C \neq \emptyset$, then ${\cal P}^0_C \neq \emptyset$. \end{prop} The second step was much more difficult because for each $C$ with non-empty $A_C$ one needed to construct some polyhedron realizing $C$ (with non-obtuse dihedral angles). In fact, the proof of Proposition \ref{EXIST} outlines a scheme for how to construct a polyhedron realizing $C$. The remainder of this paper outlines how to follow this scheme explicitly on the computer using Newton's Method and a homotopy method. \section{A method for constructing Andreev polyhedra} \subsection{Representing polyhedra on the computer} All of the constructions of polyhedra in this paper are done using Matlab \cite{MAT} or the Free Software Foundation alternative Octave \cite{OCTAVE}, and all of the polyhedra are displayed in Geomview \cite{GEO}. When doing calculations, we represent a hyperbolic polyhedron $P$ having $N$ faces by specifying $N$ outward-pointing normal vectors ${\bf v}_1,\cdots, {\bf v}_N$, each with $\langle {\bf v}_i, {\bf v}_i \rangle = 1$, so that $P = \bigcap_{i=1}^N H_{{\bf v}_i}$. Although such a list of $N$ vectors is sufficient to specify $P$, in order to avoid repeated computation of the combinatorial structure of $P$ from these vectors we additionally specify the adjacency matrix and a list of all plane triples meeting at a vertex. These three items are described in a Matlab struct $P$, with $P.{\rm faces}$, $P.{\rm adjacency}$, and $P.{\rm vert}$ holding the data mentioned above, respectively. 
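Vertex coordinates need not be stored separately: each vertex can be recovered from the three unit normals of the planes meeting there, as the Lorentz-orthogonal direction normalized to the hyperboloid. The Python sketch below illustrates one way to do this (by cofactor expansion, a four-dimensional analogue of the cross product); it is our own illustration, not a transcription of the Matlab code.

```python
import math

def det3(m):
    # Determinant of a 3x3 matrix by direct expansion.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def vertex_from_normals(v1, v2, v3):
    # The vertex w satisfies <w, vi> = 0 for i = 1,2,3 and <w,w> = -1,
    # w0 > 0.  Writing <w, v> = w . (Jv) with J = diag(-1,1,1,1), w spans
    # the kernel of the 3x4 matrix whose rows are J vi; a kernel vector is
    # produced by cofactor expansion along the missing column.
    rows = [[-v[0], v[1], v[2], v[3]] for v in (v1, v2, v3)]
    u = []
    for k in range(4):
        minor = [[row[c] for c in range(4) if c != k] for row in rows]
        u.append((-1) ** k * det3(minor))
    # Normalize to the hyperboloid: q = -<u,u> must be positive for a
    # finite vertex; flip the sign so that w0 > 0.
    q = u[0] ** 2 - u[1] ** 2 - u[2] ** 2 - u[3] ** 2
    s = math.copysign(1.0, u[0]) / math.sqrt(q)
    return [s * c for c in u]

# The three coordinate planes through (1,0,0,0) meet at that point:
w = vertex_from_normals([0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1])
```

If $q \leq 0$ the three planes do not meet at a finite vertex, which is also a convenient numerical test while deforming polyhedra.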
For example, the data for the polyhedron shown in Section \ref{SEC_EXAMPLE} is stored in Matlab as: \begin{multicols}{2} {\tiny \begin{verbatim} New_poly = vert: [28x3 double] faces: [16x4 double] adjacency: [16x16 double] New_poly.faces = 36.5078 -10.7624 -0.3090 -34.8983 4.9237 -1.5291 -1.2342 -4.6240 -0.0000 0.8660 -0.5000 -0.0000 4.5134 -2.1988 -2.3943 -3.2868 2.7290 -1.9854 -0.3091 -2.1000 13.3691 -5.0338 -0.3090 -12.4216 65.0863 -19.6363 -2.7939 -61.9987 35.9576 -9.6209 -1.5515 -34.6262 51.5713 -13.3145 -0.3090 -49.8320 5.8378 -0.5352 -0.3090 -5.8905 -0.0000 0.0000 1.0000 0.0000 1.2179 -1.4082 -0.7071 0.0000 -7.8692 2.1329 3.6943 6.6879 3.4039 -1.5744 -2.7269 -1.6344 -1.0781 0.5773 -0.0000 -1.3524 -2.1544 0.9964 1.7260 -1.2921 New_poly.adjacency = 1 0 0 0 0 1 1 1 1 0 1 0 0 0 0 0 0 1 1 1 1 1 1 1 0 1 0 0 1 0 0 0 0 1 1 1 0 0 0 0 0 1 1 1 1 1 1 1 0 1 1 1 1 0 0 0 0 0 0 1 1 1 0 0 0 1 0 1 1 1 0 0 0 0 1 1 0 0 0 0 1 1 0 0 1 1 1 0 0 0 1 0 0 0 0 0 1 1 0 0 0 1 1 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 1 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 1 1 1 0 0 0 0 0 1 0 1 0 1 1 0 0 1 1 1 1 0 0 1 0 0 0 1 1 1 0 0 0 0 0 1 1 0 1 1 1 0 1 1 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 0 0 1 1 0 0 1 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 1 1 New_poly.vert = 3 4 14 6 5 11 4 12 5 12 5 11 1 6 7 1 7 8 1 8 9 1 9 11 10 9 11 1 6 11 2 3 13 2 4 5 2 5 6 2 6 7 2 7 8 9 8 10 2 8 10 2 10 3 11 10 3 2 4 13 3 4 13 3 12 14 4 12 14 11 3 15 11 12 15 3 12 16 3 15 16 12 15 16 \end{verbatim} } \end{multicols} We display the polyhedra in Geomview using the hyperbolic mode and specifying the conformal ball model. 
The file format most convenient for our use is the Object File Format, ``New\_poly.off.'' The first line of an Object format file specifies the number of vertices, the number of faces, and the number of edges of $P$ in that order: ${\rm num\_vert}$ ${\rm num\_faces}$ ${\rm num\_edges}.$ The next block of data is a list of the coordinates of vertices as points in the unit ball. (In fact, these are the coordinates of points in the Projective Model for $\mathbb{H}^3$, not the Poincar\'e ball model which we describe in the introduction.) The last block of data is a list of the faces with each face given by ${\rm vertex}_1$ ${\rm vertex}_2$ ... ${\rm vertex}_n$ ${\rm colorspec}$, where the face is spanned by ${\rm vertex}_1$ ${\rm vertex}_2$ ... ${\rm vertex}_n$ and ${\rm colorspec}$ is an integer telling Geomview what color to assign to the face. \begin{multicols}{2} {\tiny \begin{verbatim} 28 16 42 0.093414 0.626297 -0.759378 0.668701 -0.423986 0.508660 0.480895 0.480927 -0.729094 0.533074 -0.046431 -0.835831 0.000602 -0.321164 0.944739 -0.109909 -0.298793 0.946284 -0.257482 -0.413626 0.871178 -0.241511 -0.517345 0.817366 -0.394737 -0.502851 0.762029 0.039860 -0.522084 0.841280 -0.198257 0.894806 -0.304826 0.473945 0.834475 -0.252208 0.632626 0.016126 0.705772 0.030462 -0.199457 0.975695 -0.112537 -0.193119 0.972142 -0.373893 -0.377061 0.844031 -0.376723 -0.130280 0.905450 -0.802208 0.544817 0.123196 -0.869841 -0.223571 -0.241933 0.160888 0.910483 -0.333052 0.007468 0.802257 -0.570580 0.104069 0.367146 -0.917226 0.301741 0.507113 -0.798909 -0.023848 -0.005394 -0.995609 0.158629 -0.006953 -0.985479 0.065661 0.192695 -0.976218 0.028183 0.094604 -0.992919 0.091162 0.094314 -0.989423 5 5 6 7 9 4 1 8 17 16 14 13 12 11 19 10 2 9 20 10 17 18 23 26 25 21 0 3 6 20 19 11 2 22 0 4 5 3 2 11 12 1 5 5 9 4 13 12 1 6 4 5 14 13 4 7 5 6 15 16 14 5 8 4 7 8 15 6 9 5 15 16 17 18 8 10 8 3 24 23 18 8 7 9 1 11 7 3 24 27 25 21 22 2 12 3 19 20 10 13 3 21 22 0 14 4 24 27 26 23 15 3 26 27 25 16 
\end{verbatim} } \end{multicols} Something is inevitably lost when viewing the polyhedra only through the two-dimensional images shown in this paper. To alleviate this difficulty, the Matlab and OFF files associated to each polyhedron that is constructed in this paper are included as supplementary materials on the website of Experimental Mathematics. See the website \cite{GEO} for full details on the use of Geomview. \subsection{The desired polyhedron as a solution to $4N$ quadratic equations in $4N$ unknowns} The proof of Andreev's Theorem gives that $\alpha_C: {\cal P}_C^0 \rightarrow A_C$ is a homeomorphism, so the problem of constructing a polyhedron $P$ realizing $(C,{\bf a})$ can be expressed as the problem of finding a solution $P$ to the equation $\alpha_C(P) = {\bf a}$. Instead of working in ${\cal P}_C^0$, we write the desired polyhedron as a solution of a system of $4N$ quadratic equations in $4N$ variables, where $N$ is the number of faces of $C$. Our solution is $N$ vectors $v_1,\cdots,v_N \in E^{3,1}$ satisfying \begin{itemize} \item{$\langle v_i,v_i \rangle = 1$ and} \item{$\langle v_i,v_j \rangle = -\cos({\bf a}_{i,j})$ if faces $i$ and $j$ are adjacent in $C$ and their common edge is assigned dihedral angle ${\bf a}_{i,j}$.} \end{itemize} These equations impose $E+N$ conditions on $4N$ variables, where $C$ has $N$ faces and $E$ edges. As mentioned in Section \ref{SEC_OUTLINE}, we have $E=3N-6$, so we have imposed $4N-6$ conditions on $4N$ variables. We impose $6$ additional conditions in order to have the same number of equations and unknowns. We normalize by requiring that a chosen vector $v_i$ perpendicular to one of the faces agree with some given $v$ (where $v$ is chosen so that $\langle v,v \rangle = 1$). We then require that one of the vertices on the face perpendicular to $v_i$ lie at a given point $w$ in the plane $P_{v}$ and that a vertex adjacent to this vertex lie on a given line $l$ in $P_{v}$ through $w$.
One can check that these normalizations provide $3$, $2$, and $1$ additional equations respectively. (Notice that the six equations for this normalization are each linear.) We denote the normalization by a triple $(v,w,l)$ and the resulting quadratic map by $F_{C,(v,w,l)}:\mathbb{R}^{4N} \rightarrow \mathbb{R}^{4N}$. Typically we will only mention the normalization when necessary. We denote the conditions described above for the right-hand side of the equation $F(x)=y$ by $({\bf a},0)$, where the ${\bf a}$ from this pair is shorthand for the conditions $\langle v_i,v_i \rangle = 1$ and $\langle v_i,v_j \rangle = -\cos({\bf a}_{i,j})$ if faces $i$ and $j$ are adjacent in $C$, and the $0$ represents the fact that the normalization $(v,w,l)$ is satisfied. Andreev's Theorem asserts that if ${\bf a} \in A_C$, there is a real solution to $F_{C,(v,w,l)}(x) = ({\bf a},0)$ corresponding to $N$ vectors ${\bf v}_1,\cdots,{\bf v}_N$ in $E^{3,1}$ so that $P = \bigcap_{i=1}^N H_{{\bf v}_i}$ realizes the pair $(C,{\bf a})$. \vspace{0.1in} There are many sensible ways to numerically solve a system with the same number of quadratic equations as unknowns, including the pre-packaged non-linear solvers in Matlab, Maple, and Mathematica, Newton's Method, Groebner basis techniques, and fancier quadratically constrained solvers. The difficulty is that with $4N$ quadratic equations in $4N$ unknowns, Bezout's Theorem allows for as many as $2^{4N}$ solutions. On their own, these solvers cannot easily be adapted to find the specific solution corresponding to a convex polyhedron without first finding all solutions (or at least all real solutions) and then examining each one to check whether it corresponds to the desired polyhedron. Since some solutions may be much harder to find than others, one could spend significant computation time pursuing solutions that are not of interest.
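To make the system concrete, the quadratic conditions above can be evaluated numerically as follows. This is only an illustrative sketch in Python (the computations in this paper are done in Matlab); the names `mink`, `residuals`, and the edge-to-angle dictionary are our own conventions, and the six linear normalization equations are omitted.

```python
import math

def mink(a, b):
    """Inner product of E^{3,1}: Euclidean on the first three
    coordinates, negative on the fourth."""
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] - a[3]*b[3]

def residuals(vs, angles):
    """Residuals of the quadratic conditions <v_i, v_i> = 1 and
    <v_i, v_j> = -cos(a_ij) for each adjacent pair of faces (i, j).

    vs     : list of 4-vectors, one outward normal per face
    angles : dict mapping an edge (i, j) to its dihedral angle a_ij
    """
    res = [mink(v, v) - 1.0 for v in vs]
    for (i, j), a in angles.items():
        res.append(mink(vs[i], vs[j]) + math.cos(a))
    return res
```

For instance, two faces with normals $(1,0,0,0)$ and $(0,1,0,0)$ meeting at dihedral angle $\pi/2$ make every residual vanish.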
One way to ensure that the solution found does correspond to a compact convex polyhedron is to use an iterative method, like Newton's Method, for which any initial condition sufficiently close to a given root is guaranteed to converge to that root, in combination with a homotopy that guarantees that the nearest root is always the one corresponding to a compact convex polyhedron. This is our approach, which we describe in greater detail in the next few sections of the paper. We are not entirely sure that this method is faster than finding all of the roots by ``brute force'' and then checking each solution to see if it is the desired one, but our approach has the additional benefit that it explicitly follows Andreev's proof of existence, providing insight into how this proof works for specific examples. \subsection{Newton's Method and Homotopy methods} Given two vector spaces $V$ and $W$ of the same dimension and a mapping $F: V \rightarrow W$, the associated Newton map $N_F: V \rightarrow V$ is given by the formula \begin{eqnarray}\label{NEWTON} N_F({\bf x}) = {\bf x} - [DF({\bf x})]^{-1}(F({\bf x})). \end{eqnarray} \noindent If the roots of $F$ are non-degenerate, i.e. $DF(r_i)$ is invertible for each root $r_i$ of $F$, then the roots of $F$ correspond bijectively to super-attracting fixed points of $N_F$. Kantorovich's Theorem \cite{KANT} gives a precise lower bound on the size of the basin of attraction for a root. \begin{thm} {\rm ({\bf Kantorovich's Theorem})}\label{KANT}. Let ${\bf a_0}$ be a point in $\mathbb{R}^n$, $U$ an open neighborhood of ${\bf a_0}$ in $\mathbb{R}^n$, and $F:U \rightarrow \mathbb{R}^n$ a differentiable mapping with $[DF({\bf a_0})]$ invertible. Let $U_0$ be the open ball of radius $\vert [DF({\bf a_0})]^{-1}F({\bf a_0}) \vert$ centered at ${\bf a_1} = N_F({\bf a_0})$.
If $U_0 \subset U$ and $[DF({\bf x})]$ satisfies the Lipschitz condition $\parallel DF({\bf u_1}) - DF({\bf u_2}) \parallel \leq M \vert {\bf u_1} - {\bf u_2} \vert$ for all ${\bf u_1, u_2} \in U_0$, and if the inequality \begin{eqnarray}\label{KANTOROWICH} \vert F({\bf a_0}) \vert \cdot \vert [DF({\bf a_0})]^{-1} \vert ^2 M \leq \frac{1}{2} \end{eqnarray} \noindent is satisfied, then the equation $F({\bf x}) = 0$ has a unique solution in $U_0$, and Newton's Method with initial guess ${\bf a_0}$ converges to it. \end{thm} For a proof of Kantorovich's Theorem see \cite{HBH} or the original source \cite{KANT}. While the dynamics near a fixed point are well understood via Kantorovich's Theorem, the global dynamics of Newton's Method can be very complicated, with loci of indeterminacy and critical curves where $DN_F$ is not injective. In fact, the dynamics of Newton's method for finding the common roots of a pair of quadratic polynomials in $\mathbb{C}^2$ is a field of active research \cite{HP,RNEWT}. We expect that the global dynamics of the Newton map for solving $F_{C,(v,w,l)}(x) = ({\bf a},0)$ are significantly more complicated than those in \cite{HP,RNEWT}. In particular, we have no reason to expect that a general initial condition in $\mathbb{R}^{4N}$ will converge under iteration of $N_F$ to any solution of $F_{C,(v,w,l)}(x) = ({\bf a},0)$, let alone to the specific solution representing a compact convex polyhedron $P$. An approach that can sometimes be used to avoid the difficulties with the global dynamics of Newton's Method is the homotopy method. Suppose that you want to solve $g(x)=y$. The idea is to replace this equation by a family that depends continuously on a single variable: \begin{eqnarray*} g_t(x_t) = y_t \end{eqnarray*} \noindent so that $g_1$ is the same function as $g$ and $y_1 = y$, while $g_0(x) = y_0$ is an equation for which you already know a solution $x_0$. Choose $k$ points $0=t_1,t_2,\ldots,t_k=1$.
If $k$ is sufficiently large, then $x_{t_1}$ may be in the basin of attraction of Newton's method for $g_{t_2}(x) = y_{t_2}$. In this case, you can solve for $x_{t_2}$ and can then attempt to solve for $x_{t_3}$ using Newton's Method for $g_{t_3}(x) = y_{t_3}$ with initial condition $x_{t_2}$. Repeating this procedure, if possible, leads to the solution $x_1 = x_{t_k}$. While this is obviously a very powerful method, there are many difficulties in choosing appropriate paths $g_t(x_t) = y_t$ and appropriate subdivisions $0=t_1,t_2,\ldots,t_k=1$. It is necessary to check that the conditions for Kantorovich's Theorem are satisfied by $x_{t_j}$ for the equation $g_{t_{j+1}}(x) = y_{t_{j+1}}$. The biggest difficulty is to avoid the situation where the derivative $\frac{\partial}{\partial x} g_t$ is singular for some $t$. Such points are said to lie in the {\em discriminant variety}, and choosing paths that avoid the discriminant variety is a major program of research. These difficulties are discussed extensively by many authors, including Shub and Smale, in \cite{BLUM,ALLGOWER,SHUB1,SHUB2,SHUB3,SHUB4,BELTRAN}. \vspace{.1in} The proof of Andreev's Theorem in \cite{ROE2,ROE} provides an explicit path that we can use for a homotopy method to construct any simple polyhedron $P$ as a continuous deformation of either the prism $Pr_N$ or the split prism $D_N$, both of which can easily be constructed ``by hand''. We will use this path for our homotopy method: repeatedly using a polyhedron realizing a point on the path as initial condition and solving for a polyhedron slightly further along the path, chosen so that the dynamics of Newton's method converges to the correct solution of $F$. With a similar path we can use the homotopy method again to construct any truncated polyhedron for which $A_C \neq \emptyset$.
We take a continuous deformation of a simple polyhedron until the vertices to be truncated pass $\partial_\infty \mathbb{H}^3$, and then add a finite number of additional triangular faces intersecting the appropriate triples of faces perpendicularly. Compound polyhedra are then constructed as gluings of a finite number of truncated polyhedra. \begin{prop} The quadratic map $F$ has a uniform Lipschitz constant on $\mathbb{R}^{4N}$ depending only on the combinatorics $C$. \end{prop} \noindent {\bf Proof:} The proof is merely the observation that $F$ is quadratic, so each of its second derivatives is constant. $\Box$ \vspace{.15in} While we have checked that $F$ is Lipschitz, we make no effort to bound the norm of the derivative $[DF]$ away from zero (hence avoiding the discriminant variety). In fact, for a typical problem this is very hard to do. Instead, we simply try the homotopy method with the path mentioned in the preceding paragraph and show that the method works for all of the constructions that we attempt. It may be interesting to provide a more rigorous basis for our use of Newton's Method and the current choice of path. \subsection{Deforming a given polyhedron using Newton's Method}\label{SEC_DEFORM} Given a polyhedron $P$ realizing $C$ with dihedral angles ${\bf a} \in A_C$, it is easy to use Newton's method to deform $P$ into a new polyhedron $P'$ having any other angles ${\bf a}' \in A_C$. Since $A_C$ is a convex polytope, we choose the line segment between ${\bf a}$ and ${\bf a}'$ and subdivide it into $K$ equally spaced points ${\bf a} = {\bf a}^0,{\bf a}^1,{\bf a}^2,\cdots,{\bf a}^{K-1}={\bf a}'$. We then use Newton's method with initial condition corresponding to $P$ to solve for a polyhedron $P_1$ with dihedral angles ${\bf a}^1$. We then repeat, using $P_1$ as initial condition for Newton's method to solve for a polyhedron $P_2$ with dihedral angles ${\bf a}^2$, and continue until reaching $P'$ realizing ${\bf a}'$.
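The deformation loop just described can be sketched on a toy problem. The following Python fragment is illustrative only: the real map is $F_{C,(v,w,l)}$, not this hypothetical two-variable system, and the function names are ours. It follows the solution of a small quadratic system along a subdivided parameter path, using the solution at each step as the initial guess for Newton's method at the next step.

```python
import math

def newton(F, J, x, steps=20):
    # Newton's method  x <- x - [DF(x)]^{-1} F(x)  for a 2x2 system.
    for _ in range(steps):
        f1, f2 = F(x)
        (a, b), (c, d) = J(x)
        det = a * d - b * c   # det = 0 is the toy analogue of the
                              # discriminant variety
        x = (x[0] - ( d * f1 - b * f2) / det,
             x[1] - (-c * f1 + a * f2) / det)
    return x

def deform(x, t0, t1, K=100):
    # Follow the solution of F_t(x) = 0 as t moves from t0 to t1
    # through K equally spaced steps -- the analogue of deforming
    # dihedral angles along a segment in A_C.
    for k in range(1, K + 1):
        t = t0 + (t1 - t0) * k / K
        F = lambda x, t=t: (x[0]**2 + x[1]**2 - 1.0, x[0] - math.cos(t))
        J = lambda x: ((2.0 * x[0], 2.0 * x[1]), (1.0, 0.0))
        x = newton(F, J, x)
    return x
```

Starting from the known solution $(\cos 0.3, \sin 0.3)$ at $t=0.3$ and deforming to $t=1.2$ recovers $(\cos 1.2, \sin 1.2)$; had the path crossed a point where the Jacobian is singular, the continuation would break down, just as discussed above.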
If the homotopy method has worked, each step of Newton's method converges; otherwise we can try a larger number of subdivisions $K$, or attempt to check whether the path has hit the discriminant variety. In all of the calculations in this paper, when deforming the angles of a given polyhedron $P$ within $A_C$, we use $K$ between $100$ and $300$ subdivisions, although this is sometimes significant overkill. We consider it sufficient to show how to use Newton's method to construct some polyhedron $P$ with non-obtuse dihedral angles for every $C$ that has $A_C \neq \emptyset$. From this $P$ one can construct any other $P' \in {\cal P}_C^0$ using the deformation described above. (The ease with which one can deform the angles of a given polyhedron is an additional benefit of our homotopy method.) In the next sections we will see how to connect individual paths in $A_{C_1},\cdots,A_{C_k}$ so as to construct compact polyhedra realizing $C_1$ as a sequence of deformations of a compact polyhedron realizing $C_k$. \subsection{Simple polyhedra and Whitehead moves}\label{SEC_WH} Recall that if $C$ is simple then $\left(\frac{2\pi}{5},\cdots,\frac{2\pi}{5}\right) \in A_C$. The goal of this section and the following is to demonstrate the construction of a polyhedron $P$ realizing any simple $C$ with these dihedral angles. Andreev's Theorem provides a sequence of elementary changes (Whitehead moves) reducing the combinatorics $C$ to one of the two combinatorial polyhedra $D_N$ or $Pr_N$ depicted below.
\vspace{.1in} \begin{center} \begin{picture}(0,0)% \includegraphics{prism_comb.ps}% \end{picture}% \setlength{\unitlength}{4144sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4224,2022)(1114,-1981) \put(1669,-1968){\makebox(0,0)[lb]{\smash{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}Prism with 10 faces}% }}} \put(3769,-1950){\makebox(0,0)[lb]{\smash{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}Split prism with 11 faces}% }}} \end{picture} \end{center} In this section we show how to create polyhedra realizing $D_N$ and $Pr_N$ and how to perform the Whitehead moves using Newton's method. \vspace{.1in} \begin{lem} \label{EXPRISM} Let $Pr_N$ and $D_N$ be the abstract polyhedra corresponding to the $N$-faced prism and the $N$-faced ``split prism'', as illustrated below. If $N > 4$, ${\cal P}^0_{Pr_N}$ is nonempty and if $N > 7$, ${\cal P}^0_{D_N}$ is nonempty. \end{lem} \noindent {\bf Construction:} Construct a regular polygon with $N-2$ sides in the disc model for $\mathbb{H}^2$. ($N-2 \geq 3$, since $N \geq 5$.) We can do this with the angles arbitrarily small. Now view $\mathbb{H}^2$ as the equatorial plane of $\mathbb{H}^3$, and consider the hyperbolic planes perpendicular to the equatorial plane containing the sides of the polygon. In Euclidean geometry these are hemispheres with centers on the boundary of the equatorial disc. The dihedral angles of these planes are the angles of the polygon. Consider two hyperbolic planes close to the equatorial plane, one slightly above and one slightly beneath, both perpendicular to the $z$-axis. These will intersect the previous planes at angles slightly smaller than $\pi/2$. The region defined by these $N$ planes bounds a hyperbolic polyhedron realizing the cell structure of the prism.
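The construction just given can be carried out numerically. The sketch below is our own illustrative Python (the distances $d$ and $h$ are assumptions chosen by hand so that the resulting angles are non-obtuse); it uses the fact that in $E^{3,1}$ a plane at hyperbolic distance $t$ from the origin in the unit direction $u \in \mathbb{R}^3$ has space-like unit normal $(u\cosh t, \sinh t)$.

```python
import math

def mink(a, b):
    # Inner product of E^{3,1}
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] - a[3]*b[3]

def dihedral(v, w):
    # Dihedral angle between two intersecting planes with
    # outward unit normals v and w:  cos(angle) = -<v, w>
    return math.acos(-mink(v, w))

def prism_normals(N, d, h):
    """Outward unit normals for the N-faced prism Pr_N:
    N-2 vertical planes through the sides of a regular (N-2)-gon at
    hyperbolic distance d from the origin, plus a top and a bottom
    plane at distance h, perpendicular to the z-axis."""
    k = N - 2
    vs = []
    for i in range(k):
        phi = 2.0 * math.pi * i / k
        vs.append((math.cos(phi) * math.cosh(d),
                   math.sin(phi) * math.cosh(d), 0.0, math.sinh(d)))
    vs.append((0.0, 0.0,  math.cosh(h), math.sinh(h)))   # top
    vs.append((0.0, 0.0, -math.cosh(h), math.sinh(h)))   # bottom
    return vs
```

For example, with $N=10$, $d=1.4$, $h=0.1$, the side--side dihedral angles come out to roughly $69^\circ$ and the side--top angles to roughly $79^\circ$, slightly smaller than $\pi/2$, as in the construction above; the top and bottom normals satisfy $\langle v,w \rangle < -1$, reflecting that those two faces do not meet.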
Note that our construction completes the proof of Proposition \ref{NONEMPTYIMPLIESAND}, for the special case $C = Pr_N$, $N \geq 5$. \begin{figure}[htp] \begin{center} \includegraphics[scale=1.0]{Pr10_and_D11_arxiv.ps} \end{center} \caption{\hbox{ }} \end{figure} For $N>7$ we will construct $D_N$ by cutting it into two prisms each with $N-1$ faces and the dihedral angles shown below. \begin{figure}[htp] \begin{center} \begin{picture}(0,0)% \epsfig{file=prism.ps}% \end{picture}% \setlength{\unitlength}{4144sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(1824,1711)(830,-1539) \put(1715,-1036){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/3$}% }}}} \put(2654,-710){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(2445,-1223){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(1172,-1356){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(1039,-140){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(1762, 88){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(2464,-159){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(1362,-729){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/3$}% }}}} \put(1457,-444){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/3$}% }}}} 
\put(2122,-710){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/3$}% }}}} \put(1457,-975){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/3$}% }}}} \put(1576,-109){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(1593,-1256){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(2122,-1211){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(1172,-402){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(830,-710){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(1187,-1070){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(2178,-187){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(1731,-1508){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/4$}% }}}} \put(2024,-938){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/3$}% }}}} \put(1976,-413){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/3$}% }}}} \put(2374,-856){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} \put(2385,-562){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/2$}% }}}} 
\put(1740,-299){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\pi/3$}% }}}} \end{picture}% \end{center} \caption{\hbox{ }} \end{figure} These angles satisfy Andreev's conditions (1) -- (5), so we can use Newton's method to deform the prism constructed in the previous paragraph to have these angles. Gluing this prism to its mirror image, the edges labeled $\pi/2$ on the outside disappear as edges, and the edges labeled on the outside by $\pi/4$ glue together to become a single edge with dihedral angle $\pi/2.$ Hence, we have constructed a polyhedron realizing $D_N$, assuming $N > 7$. Notice that when $N \leq 7$ the combinatorics of $D_N$ coincides with that of $Pr_N$. $\Box$ \vspace{.15in} Assume that the two vertices incident at an edge $e$ are trivalent. A Whitehead move $Wh(e)$ on edge $e$ is given by the local change of the abstract polyhedron described in the following diagram. The Whitehead move in the dual complex is drawn dashed. Often we will find it convenient to describe the Whitehead move entirely in terms of the dual complex, in which case we write $Wh(f)$.
\begin{figure}[htp] \begin{center} \begin{picture}(0,0)% \includegraphics{whitehead.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(2975,1424)(1339,-1412) \put(2157,-1381){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}Whitehead move on edge $e$}% }}} \put(3613,-72){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$e_1$}% }}} \put(4290,-83){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$e_2$}% }}} \put(4314,-1085){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$e_3$}% }}} \put(3614,-1096){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$e_4$}% }}} \put(4004,-687){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$e'$}% }}} \put(1799,-671){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$e$}% }}} \put(1374,-340){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$e_1$}% }}} \put(2426,-351){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$e_2$}% }}} \put(2483,-1051){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$e_3$}% }}} \put(1357,-1096){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$e_4$}% }}} \put(1992,-556){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$f$}% }}} 
\put(3793,-497){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$f'$}% }}} \put(2832,-476){\makebox(0,0)[lb]{\smash{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Wh(e)$}% }}} \end{picture} \end{center} \caption{\hbox{ }} \end{figure} The following lemma appears in \cite{ROE2}: \begin{lem} \label{WHITEHEAD} Let the abstract polyhedron $C'$ be obtained from the simple abstract polyhedron $C$ by a Whitehead move $Wh(e)$. Then if ${\cal P}^0_C$ is non-empty so is ${\cal P}^0_{C'}.$ \end{lem} The proof constructs a sequence of polyhedra realizing $C$ with dihedral angles chosen so that the edge $e$ converges to a single point at infinity. A carefully chosen small perturbation of this limiting configuration results in a compact polyhedron realizing $C'$ with non-obtuse dihedral angles. Suppose that we have a polyhedron realizing $C$ with all dihedral angles equal to $\frac{2\pi}{5}$, and choose a small $\epsilon > 0$. To implement a Whitehead move on the computer, we assign dihedral angle $\epsilon$ to the edge $e$ and dihedral angle $\frac{\pi}{2}$ to the four edges sharing an endpoint with $e$. Leaving the dihedral angles of the remaining edges the same, the resulting set of angles is in $A_C$, and hence we can use Newton's Method to deform $P$ into a polyhedron $P_1$ realizing $C$ with these new angles. If $\epsilon$ is chosen small enough, $P_1$ will be in the basin of attraction for a polyhedron realizing $C'$ with the edge $e'$ replacing $e$, the dihedral angle at $e'$ equal to $\epsilon$, and all other dihedral angles as in $P_1$. We call the resulting polyhedron $P_2$. Since $C'$ is simple, we can deform $P_2$ to have all dihedral angles $\frac{2\pi}{5}$, obtaining $P'$. The following diagram shows these four steps for a Whitehead move on one of the edges of the dodecahedron. Here and elsewhere in this paper we take $\epsilon$ approximately $\frac{\pi}{45}$.
(A smaller $\epsilon$ may be necessary when constructing polyhedra with a very large number of faces.) \begin{figure}[htp] \begin{center} \begin{picture}(0,0)% \epsfig{file=explicit_whitehead.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(5550,5550)(1201,-5911) \put(2326,-2086){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$e$}% }}}} \put(5239,-4653){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$e'$}% }}}} \end{picture}% \end{center} \caption{Whitehead move $Wh(e)$ on edge $e$ of the dodecahedron} \end{figure} \vspace{.1in} If we can find a sequence of combinatorial Whitehead moves reducing a given simple abstract polyhedron $C$ to either $Pr_N$ or $D_N$ via a sequence of simple abstract polyhedra $C_1,\cdots,C_N$, we can use Newton's method to perform this sequence of Whitehead moves in the reverse order, constructing geometric polyhedra that realize $C_N, C_{N-1},\cdots,C_1$, and finally $C$. Before explaining why such a sequence always exists, we demonstrate this process for the dodecahedron. 
\begin{figure}[htp] \begin{center} \scalebox{1.3}{ \begin{picture}(0,0)% \epsfig{file=dodec_example2.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(5259,5691)(1066,-6190) \put(2616,-3675){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $Wh(6,9)$}}% }}}} \put(1836,-1236){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \put(1543,-1612){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(1868,-726){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(1083,-1005){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(1918,-932){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(1904,-1462){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(2500,-1013){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(2144,-1176){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(2198,-1570){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(1424,-1051){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(1380,-1300){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} 
\put(5202,-5046){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(6159,-4447){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(5803,-4610){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(5857,-5004){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(5179,-4270){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(5738,-4340){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(4881,-4253){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(4708,-4443){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(5027,-4734){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(5407,-4256){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \put(5478,-5082){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(2595,-4738){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny Straighten}}% }}}} \put(2596,-4823){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny out diagram}}% }}}} \put(2475,-2826){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny Straighten}}% }}}} 
\put(2476,-2911){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny out diagram}}% }}}} \put(1066,-661){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny at infinity}}% }}}} \put(1652,-6132){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(2609,-5533){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(2253,-5696){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(2307,-6090){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(1629,-5356){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(2188,-5426){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(1331,-5339){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(1158,-5529){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(1477,-5820){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(1857,-5342){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \put(1928,-6168){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(2823,-5761){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small which is $D_{12}$}}% }}}} 
\put(1568,-3954){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(2525,-3355){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(2169,-3518){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(2223,-3912){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(1545,-3178){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(2104,-3248){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(1247,-3161){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(1074,-3351){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(1393,-3642){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(1865,-3582){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \put(3483,-3942){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(4440,-3343){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(4084,-3506){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(4138,-3900){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(3460,-3166){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} 
\put(4019,-3236){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(3162,-3149){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(2989,-3339){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(3308,-3630){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(3780,-3570){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \put(5223,-3922){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(6180,-3323){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(5824,-3486){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(5878,-3880){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(5200,-3146){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(5759,-3216){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(4902,-3129){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(4729,-3319){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(5048,-3610){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(5470,-3402){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} 
\put(1810,-3871){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(3730,-3864){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(5466,-3843){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(3483,-5057){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(4440,-4458){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(4084,-4621){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(4138,-5015){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(3460,-4281){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(4019,-4351){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(3162,-4264){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(2989,-4454){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(3308,-4745){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(3721,-5080){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(1610,-5023){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(2567,-4424){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} 
\put(2211,-4587){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(2265,-4981){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(1587,-4247){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(2146,-4317){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(1289,-4230){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(1116,-4420){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(1435,-4711){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(1857,-4503){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \put(1861,-4941){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(5538,-1204){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \put(5245,-1580){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(5570,-694){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(4785,-973){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(6202,-981){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(5846,-1144){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} 
\put(5900,-1538){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(5126,-1019){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(5082,-1268){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(1827,-2400){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \put(1534,-2776){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(1859,-1890){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(1074,-2169){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(2491,-2177){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(2135,-2340){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(2189,-2734){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(1415,-2215){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(1371,-2464){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(3754,-2387){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \put(3461,-2763){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(3001,-2156){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} 
\put(4418,-2164){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(4062,-2327){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(4116,-2721){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(3298,-2451){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(3523,-2067){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(5469,-2374){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \put(5176,-2750){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(5501,-1864){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(4716,-2143){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(6133,-2151){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(5777,-2314){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(5831,-2708){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(5013,-2438){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(5238,-2054){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(5488,-1498){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} 
\put(3708,-2684){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(5423,-2667){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(1777,-2694){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(3684,-1841){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(4022,-2065){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(2113,-2090){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(5719,-2052){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(3730,-4309){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \put(3769,-1225){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \put(3476,-1601){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(3801,-715){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(3016,-994){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(3837,-1451){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(4433,-1002){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(4077,-1165){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} 
\put(4131,-1559){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(3357,-1040){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(3313,-1289){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(4049,-897){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(5830,-876){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(2616,-1305){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $Wh(7,12)$}}% }}}} \put(2724,-1689){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $Wh(2,4)$}}% }}}} \put(2586,-2445){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $Wh(3,6)$}}% }}}} \put(4386,-2463){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $Wh(4,5)$}}% }}}} \put(4458,-3663){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $Wh(2,5)$}}% }}}} \put(2742,-4041){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $Wh(4,6)$}}% }}}} \put(4434,-4785){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $Wh(2,9)$}}% }}}} \put(3588,-5373){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $Wh(5,9)$}}% }}}} \put(4434,-1299){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $Wh(8,10)$}}% }}}} 
\put(1078,-571){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny vertex} $1$}% }}}} \end{picture}% } \end{center} \caption{Simplification of the dodecahedron to $D_{12}$ by a sequence of Whitehead moves.} \end{figure} We can then use Newton's Method to do the reverse sequence of Whitehead moves $Wh(8,11)$, $Wh(4,11)$, $Wh(1,2)$, $Wh(9,11)$, $Wh(2,4)$, $Wh(1,6)$, $Wh(7,11)$, $Wh(6,9)$, $Wh(1,5)$, and $Wh(1,4)$ geometrically, constructing the dodecahedron from $D_{12}$. The following diagram shows this process. \begin{figure}[htp] \begin{center} \includegraphics[scale=1.2]{dodec_construction_arxiv.ps} \end{center} \caption{Construction of the dodecahedron from $D_{12}$ using $10$ Whitehead Moves.} \end{figure} \subsection{A Lemma on Whitehead Moves}\label{SEC_ALGORITHM} The following lemma from \cite{AND} and \cite{ROE2} is necessary both to prove Andreev's Theorem and for our construction of simple polyhedra by doing Whitehead moves geometrically with Newton's Method. \begin{lem} {\bf Whitehead Sequence} \label{ROEDER} Let $C$ be a simple abstract polyhedron on $\mathbb{S}^2$ which is not a prism. If $C$ has $N > 7$ faces, $C$ can be simplified to $D_N$ by a finite sequence of Whitehead moves such that {\it all of the intermediate abstract polyhedra are simple.} \end{lem} Theorem 6 in Andreev's original paper contains our Lemma \ref{ROEDER}. Andreev's original proof of Theorem 6 provides an algorithm to produce the Whitehead moves needed for this lemma, but the algorithm {\it contains an error}, which was detected when the author tried to implement it as a computer program: the algorithm failed on its first test, the dodecahedron. Instead of using $Wh(6,9)$ for the fifth Whitehead move of the sequence described in the previous section, Andreev's algorithm uses either $Wh(2,6)$ or $Wh(2,5)$.
In both cases the result is an abstract polyhedron which has a prismatic 3-circuit, as shown below: \begin{figure}[htp] \begin{center} \scalebox{1.2}{ \begin{picture}(0,0)% \epsfig{file=./error.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(3556,879)(1074,-3976) \put(3792,-3867){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(1568,-3954){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(2525,-3355){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(2169,-3518){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(2223,-3912){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(1545,-3178){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(2104,-3248){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(1247,-3161){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(1074,-3351){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(1393,-3642){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(1865,-3582){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} 
\put(1810,-3871){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$5$}% }}}} \put(2616,-3675){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $Wh(2,6)$}}% }}}} \put(3550,-3950){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$10$}% }}}} \put(4507,-3351){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$12$}% }}}} \put(4151,-3514){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$9$}% }}}} \put(4205,-3908){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$8$}% }}}} \put(3527,-3174){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6$}% }}}} \put(4086,-3244){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4$}% }}}} \put(3229,-3157){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$7$}% }}}} \put(3056,-3347){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$3$}% }}}} \put(3375,-3638){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$11$}% }}}} \put(3847,-3578){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2$}% }}}} \end{picture}% } \end{center} \caption{The Whitehead move $Wh(2,6)$ produces a complex having a prismatic $3$-circuit.} \label{FIG_ERROR} \end{figure} \vspace{.1in} In combination with the computer-implemented Whitehead move described in the previous section, the sequence of Whitehead moves given in the proof of Lemma \ref{ROEDER} gives us the path that we will use for our homotopy method when constructing simple polyhedra. 
We provide an outline of the proof here that is sufficient to describe the sequence of Whitehead moves. Those who wish to see a complete proof may refer to \cite{ROE2,ROE}. \noindent {\bf Outline of the Proof of Lemma \ref{ROEDER}:} We assume that $C \neq Pr_N$ is a simple abstract polyhedron with $N>7$ faces. We will construct a sequence of Whitehead moves that change $C$ to $D_N$, so that no intermediate complex has a prismatic 3-circuit. Find a vertex $v_\infty$ of $C^*$ which is connected to the greatest number of other vertices. We will call the link of $v_\infty$, a cycle of $k$ vertices and $k$ edges, the {\it outer-polygon}. Most of the work is to show that we can do Whitehead moves to increase $k$ to $N-3$ without introducing any prismatic 3-circuits during the process. Once this is completed, it is easy to change the resulting complex to $D^*_N$ by additional Whitehead moves. Let us set up some notation. Draw the dual complex $C^*$ in the plane with the vertex $v_\infty$ at infinity, and the outer polygon $P$ surrounding the remaining vertices and triangles. We call the vertices inside of $P$ {\it interior vertices}. All of the edges inside of $P$ which do not have an endpoint on $P$ are called {\it interior edges}. Note that the graph of interior vertices and edges is connected, since $C^*$ is simple. An interior vertex which is connected to only one other interior vertex will be called an {\it endpoint}. 
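The combinatorial setup above translates directly into code. The following sketch is illustrative only: it assumes a hypothetical encoding of the dual complex $C^*$ as a list of triangles (each a frozenset of three vertex labels), and the function names and the small example complex are ours, not an implementation described in this paper.

```python
from collections import defaultdict

# Hypothetical encoding: the dual complex C* as a list of triangles,
# each triangle a frozenset of three vertex labels.
def adjacency(triangles):
    adj = defaultdict(set)
    for t in triangles:
        for v in t:
            adj[v] |= t - {v}
    return adj

def interior_data(triangles, v_inf):
    """Return the outer polygon P (the link of v_inf), the interior
    vertices, and the endpoints among them."""
    adj = adjacency(triangles)
    P = set(adj[v_inf])
    interior = set(adj) - P - {v_inf}
    # an endpoint is an interior vertex adjacent to exactly one
    # other interior vertex
    endpoints = {v for v in interior if len(adj[v] & interior) == 1}
    return P, interior, endpoints

# Small example: v_inf = 0 has a pentagonal link 1..5 enclosing two
# interior vertices 6 and 7, each an endpoint of the interior graph.
T = [frozenset(t) for t in
     [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5), (0, 5, 1),
      (1, 2, 6), (2, 3, 6), (3, 6, 7), (3, 4, 7), (4, 5, 7),
      (5, 1, 7), (1, 6, 7)]]
P, interior, endpoints = interior_data(T, 0)
```

In this toy complex the outer polygon has length $5$; the goal of the lemma is to lengthen $P$ to $N-3$ by Whitehead moves while keeping every intermediate complex simple.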
\begin{center} \begin{picture}(0,0)% \epsfig{file=fig2.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(3387,1956)(1189,-2220) \put(3578,-1771){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$F^2_v$}}% }}}} \put(1223,-751){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$F^1_w$}}% }}}} \put(1651,-1329){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$w$}}% }}}} \put(3270,-638){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$F^1_v$}}% }}}} \put(3256,-1179){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$v$}}% }}}} \put(4576,-661){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{endpoint}}% }}}} \end{picture}% \end{center} Throughout this proof we will draw $P$ and the interior edges and vertices of $C^*$ in black. The connections between $P$ and the interior vertices will be grey. Connections between $P$ and $v_{\infty}$ will be black, if shown at all. The link of an interior vertex $v$ intersects $P$ in a number of components $F_v^1,\cdots,F_v^n$ (possibly $n = 0$.) See the above figure. We say that $v$ is {\it connected to $P$ in these components.} Notice that since $C^*$ is simple, an endpoint is always connected to $P$ in exactly one such component. \begin{move} \label{SUBLEMMA1} Suppose that there is an interior vertex $A$ of $C^*$ which is connected to $P$ in exactly one component consisting of exactly two consecutive vertices $Q$ and $R$. 
The Whitehead move $Wh(QR)$ on $C^*$ increases the length of the outer polygon by one, and introduces no prismatic $3$-circuit. \end{move} \begin{center} \begin{picture}(0,0)% \epsfig{file=fig3.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4176,1741)(1537,-2354) \put(2316,-835){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$v_\infty$}}% }}}} \put(5542,-2022){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$E$}}% }}}} \put(4755,-769){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$v_\infty$}}% }}}} \put(3402,-1387){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Wh(QR)$}% }}}} \put(2828,-1223){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$R$}}% }}}} \put(5315,-1216){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$R$}}% }}}} \put(4509,-2236){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{interior stuff}}% }}}} \put(1704,-2243){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{other interior stuff}}% }}}} \put(2546,-2018){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$A$}}% }}}} \put(3106,-2041){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$E$}}% }}}} 
\put(1621,-2033){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$D$}}% }}}} \put(1789,-1207){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q$}}% }}}} \put(4267,-1214){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q$}}% }}}} \put(5057,-2019){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$A$}}% }}}} \put(4114,-2034){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$D$}}% }}}} \end{picture}% \end{center} \vspace{.05in} \begin{move} \label{SUBLEMMA2} Suppose that there is an interior vertex $A$ that is connected to $P$ in a component consisting of $M$ consecutive vertices $Q_1,\cdots,Q_M$ of $P$ (and possibly other components). \noindent (a) If $A$ is not an endpoint and $M > 2$, the sequence of Whitehead moves $Wh(AQ_M),\ldots,Wh(AQ_3)$ results in a complex in which $A$ is connected to the same component of $P$ in only $Q_1$ and $Q_2$. These moves leave $P$ unchanged, and introduce no prismatic 3-circuit. 
\vspace{.07in} \begin{center} \begin{picture}(0,0)% \epsfig{file=fig4.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(5193,1438)(1104,-1372) \put(1902,-1341){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$A$}% }}}} \put(2319,-30){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Q_{M-1}$}% }}}} \put(2797,-30){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Q_M$}% }}}} \put(4348,-1341){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$D$}% }}}} \put(5959,-1341){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$E$}% }}}} \put(5124,-1341){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$A$}% }}}} \put(4348,-30){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Q_1 Q_2 Q_3$}% }}}} \put(5481,-30){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Q_{M-1}$}% }}}} \put(5898,-30){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Q_M$}% }}}} \put(3035,-387){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Wh(AQ_M)$}% }}}} \put(1126,-1341){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$D$}% }}}} \put(2737,-1341){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$E$}% }}}} 
\put(1126,-30){\makebox(0,0)[lb]{\smash{{\SetFigFont{9}{10.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$Q_1 Q_2 Q_3$}% }}}} \end{picture}% \end{center} \vspace{.07in} \noindent (b) If $A$ is an endpoint and $M > 3$, the sequence of Whitehead moves \newline $Wh(AQ_M),\ldots,Wh(AQ_4)$ results in a complex in which $A$ is connected to the same component of $P$ in only $Q_1,Q_2$, and $Q_3$. These moves leave $P$ unchanged and introduce no prismatic 3-circuits. \vspace{.07in} \begin{center} \begin{picture}(0,0)% \epsfig{file=./fig8.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(5002,1664)(571,-1179) \put(4918,-1121){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_1$}}% }}}} \put(4801,-511){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$E$}}% }}}} \put(4471,-495){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$A$}}% }}}} \put(4171,171){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_{M-1}$}}% }}}} \put(4884,329){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_M$}}% }}}} \put(4051,-811){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_2$}}% }}}} \put(3721,-422){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_4$}}% }}}} \put(3796,-615){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_3$}}% }}}} \put(3751, 
14){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_{M-2}$}}% }}}} \put(2626,-286){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Wh(AQ_M)$}}% }}}} \put(1651,-511){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$E$}}% }}}} \put(1321,-495){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$A$}}% }}}} \put(901,-811){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_2$}}% }}}} \put(571,-422){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_4$}}% }}}} \put(1021,171){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_{M-1}$}}% }}}} \put(1734,329){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_M$}}% }}}} \put(646,-615){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_3$}}% }}}} \put(601, 14){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_{M-2}$}}% }}}} \put(1767,-1112){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Q_1$}}% }}}} \end{picture}% \end{center} \vspace{.07in} \end{move} Note: In both parts (a) and (b), each of the Whitehead moves $Wh(AQ_M)$ transfers the connection between $A$ and $Q_M$ to a connection between the neighboring interior vertex $E$ and $Q_{M-1}$. This is helpful in Case 2 later. \begin{move}\label{SUBLEMMA3} Suppose that there is an interior vertex $A$ whose link contains two distinct vertices $X$ and $Y$ of $P$. 
Then there are Whitehead moves that eliminate any component in which $A$ is connected to $P$, provided that the component contains neither $X$ nor $Y$. $P$ is unchanged, and no prismatic $3$-circuits will be introduced. \end{move} \begin{center} \vspace{.125in} \begin{center} \begin{picture}(0,0)% \includegraphics{fig5.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(2959,1013)(879,-1352) \put(1126,-811){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$A$}}% }}} \put(2994,-849){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$A$}}% }}} \put(2026,-511){\makebox(0,0)[lb]{\smash{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$X$}}% }}} \put(3826,-511){\makebox(0,0)[lb]{\smash{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$X$}}% }}} \put(2044,-1089){\makebox(0,0)[lb]{\smash{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Y$}}% }}} \put(3809,-1094){\makebox(0,0)[lb]{\smash{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Y$}}% }}} \end{picture} \end{center} \vspace{.125in} \end{center} Here $A$ is connected to $P$ in four components containing six vertices. We can eliminate the connections of $A$ to all of the components except for the single-point components $X$ and $Y$. The proof that this move does not introduce any new prismatic 3-circuit is rather technical and depends essentially on the fact that $A$ is connected to $P$ in at least two vertices, $X$ and $Y$, outside the components being eliminated. Andreev describes a nearly identical process to Move \ref{SUBLEMMA3} in his paper \cite{AND} on pages 333--334.
However, he merely assumes that $A$ is connected to $P$ in at least one component in addition to the components being eliminated. He does not require that $A$ is connected to $P$ in at least {\it two vertices} outside of the components being eliminated. Andreev then asserts: ``It is readily seen that all of the polyhedra obtained in this way are simple...'' In fact, the Whitehead move demonstrated in Figure \ref{FIG_ERROR} creates a prismatic 3-circuit. Under this stronger (and incorrect) version of Move \ref{SUBLEMMA3}, the remainder of Andreev's proof is relatively easy. Unfortunately, the situation pictured in Figure \ref{FIG_ERROR} is not uncommon, as we will see in Case 3 below. Restricted to the weaker hypotheses of Move \ref{SUBLEMMA3}, we will have to work a little harder. Using Moves \ref{SUBLEMMA1}, \ref{SUBLEMMA2}, and \ref{SUBLEMMA3}, we check that if the length of $P$ is less than $N-3$, then there is a sequence of Whitehead moves that increases the length of $P$ by one without introducing any prismatic $3$-circuits. \vspace{.1in} {\bf Case 1:} An interior vertex that is not an endpoint connects to $P$ in a component with two or more vertices, and possibly in other components. \vspace{.05in} Apply Move \ref{SUBLEMMA2}, decreasing this component to two vertices. We can then apply Move \ref{SUBLEMMA3} to eliminate all other components, since this component contains the two vertices that Move \ref{SUBLEMMA3} requires. Finally, apply Move \ref{SUBLEMMA1} to increase the length of the outer polygon by one. \vspace{.1in} {\bf Case 2:} An interior vertex that is an endpoint is connected to more than three vertices of $P$. \vspace{.05in} We may assume that each interior vertex that is not an endpoint is connected to $P$ in components consisting of single vertices; otherwise we are in Case 1. Let $A$ be the endpoint that is connected to more than three vertices of $P$. By Move \ref{SUBLEMMA2}, part (b), there is a Whitehead move that transfers one of these connections to the interior vertex $E$ next to $A$.
Now, one of the components in which $E$ is connected to $P$ has exactly two vertices. The vertex $E$ is not an endpoint, since $k < N-3$ implies that there are at least three interior vertices. Once this is done, we can apply Case 1. \vspace{.1in} {\bf Case 3:} Each interior vertex that is an endpoint is connected to exactly three vertices of $P$, and each interior vertex that is not an endpoint is connected to $P$ in components each consisting of a single vertex. \vspace{.05in} First, notice that if the interior vertices and edges form a line, the restriction on how interior vertices are connected to $P$ results in the prism, contrary to the assumption that $C$ is not the prism. However, there are many complexes satisfying the hypotheses of this case whose interior vertices and edges form a graph more complicated than a line: \vspace{.125in} \begin{center} \begin{picture}(0,0)% \includegraphics{fig9.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4724,1469)(879,-1283) \end{picture} \end{center} \vspace{.125in} For such complexes we need a very special sequence of Whitehead moves to increase the length of $P$. Pick an interior vertex that is an endpoint and label it $I_1$. Denote by $P_1$, $P_2$, and $P_3$ the three vertices of $P$ to which $I_1$ connects. $I_1$ will be connected to a sequence of interior vertices $I_2, I_3, \cdots, I_m$, $m \ge 2$, with $I_m$ the first interior vertex in the sequence that is connected to more than two other interior vertices. Vertex $I_m$ must exist by the assumption that the interior vertices do not form a line segment, the configuration that we ruled out above. By hypothesis, $I_2,\cdots,I_m$ can only connect to $P$ in components each consisting of a single vertex; hence each must be connected to $P_1$ and to $P_3$.
Similarly, there is an interior vertex (call it $X$) which connects both to $I_m$ and to $P_1$ and another vertex $Y$ which connects to $I_m$ and $P_3$. Vertex $I_m$ may connect to other vertices of $P$ and other interior vertices, as shown on the left side of the following diagram. \vspace{.125in} \begin{center} \begin{picture}(0,0)% \epsfig{file=fig10.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4185,1639)(16,-1844) \put(1861,-915){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_{m-1}$}}% }}}} \put(2851,-361){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_1$}}% }}}} \put(4201,-961){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_2$}}% }}}} \put(2701,-1786){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_3$}}% }}}} \put(1051,-361){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$X$}}% }}}} \put(1051,-1711){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Y$}}% }}}} \put(136,-961){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\tiny{other vertices}}% }}}} \put(1201,-833){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_m$}}% }}}} \put(3601,-916){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_1$}}% }}}} 
\put(2784,-909){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_3$}}% }}}} \put(3084,-908){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_2$}}% }}}} \put(2194,-914){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_{m-2}$}}% }}}} \end{picture}% \end{center} \vspace{.125in} Now we describe a sequence of Whitehead moves that can be used to connect $I_m$ to $P$ in only $P_1$ and $P_2$. This will allow us to use Move \ref{SUBLEMMA1} to increase the length of $P$ by one. First, using Move \ref{SUBLEMMA3} we can eliminate all possible connections of $I_m$ to $P$ in places other than $P_1$ and $P_3$. Next, we do the move $Wh(I_mP_3)$ so that $I_m$ connects to $P$ only in $P_1$. \vspace{.125in} \begin{center} \begin{picture}(0,0)% \epsfig{file=fig11.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4185,1639)(16,-1844) \put(1605,-1179){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_{m-1}$}}% }}}} \put(2851,-361){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_1$}}% }}}} \put(4201,-961){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_2$}}% }}}} \put(2701,-1786){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_3$}}% }}}} \put(3601,-886){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_1$}}% }}}} 
\put(3076,-886){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_2$}}% }}}} \put(2776,-886){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_3$}}% }}}} \put(1201,-886){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_m$}}% }}}} \put(1051,-361){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$X$}}% }}}} \put(1051,-1711){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Y$}}% }}}} \put(2228,-901){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_{m-2}$}}% }}}} \put(151,-961){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\tiny{other vertices}}% }}}} \end{picture}% \end{center} \vspace{.125in} Next, we must do the moves $Wh(I_{m-1}P_1),\ldots,Wh(I_1P_1)$, in that order (see the figure below).
\vspace{.125in} \begin{center} \begin{picture}(0,0)% \epsfig{file=./fig11b.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4215,3607)(-14,-3608) \put(2851,-2161){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_1$}}% }}}} \put(2723,-2919){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_k$}}% }}}} \put(1576,-2986){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_{m-1}$}}% }}}} \put(151,-2761){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\tiny{other vertices}}% }}}} \put(1051,-3511){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Y$}}% }}}} \put(1051,-2161){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$X$}}% }}}} \put(1201,-2611){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_m$}}% }}}} \put(4201,-2761){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_2$}}% }}}} \put(2191,-1816){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Wh(I_kP_1)$}}% }}}} \put(2851,-61){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_1$}}% }}}} \put(2723,-819){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_k$}}% }}}} 
\put(1576,-886){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_{m-1}$}}% }}}} \put(151,-661){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\tiny{other vertices}}% }}}} \put(1051,-1411){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Y$}}% }}}} \put(1051,-61){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$X$}}% }}}} \put(1201,-511){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_m$}}% }}}} \put(4201,-661){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_2$}}% }}}} \put(2701,-1486){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_3$}}% }}}} \put(3076,-586){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_{k-1}$}}% }}}} \put(3076,-2686){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_{k-1}$}}% }}}} \put(2414,-819){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_{k+1}$}}% }}}} \put(2701,-3586){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_3$}}% }}}} \put(2414,-2919){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_{k+1}$}}% }}}} \put(3587,-798){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_1$}}% }}}} \put(3592,-2893){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_1$}}% 
}}}} \end{picture}% \end{center} \vspace{.125in} After this sequence of Whitehead moves we obtain the first diagram below, with $I_m$ connected to $P$ exactly at $P_1$ and $P_2$, so that we can apply Move \ref{SUBLEMMA1} to increase the length of $P$ by the move $Wh(P_1P_2)$, below. \vspace{.125in} \begin{center} \begin{picture}(0,0)% \epsfig{file=./fig12.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4185,1639)(16,-1844) \put(3603,-1092){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_1$}}% }}}} \put(2851,-361){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_1$}}% }}}} \put(4201,-961){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_2$}}% }}}} \put(2701,-1786){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_3$}}% }}}} \put(1051,-361){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$X$}}% }}}} \put(1051,-1711){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Y$}}% }}}} \put(151,-961){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\tiny{other vertices}}% }}}} \put(3076,-1118){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_2$}}% }}}} \put(2723,-1119){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_3$}}% }}}} 
\put(2415,-1126){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_4$}}% }}}} \put(1201,-811){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_m$}}% }}}} \put(1576,-1186){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_{m-1}$}}% }}}} \end{picture}% \end{center} \vspace{.125in} \vspace{.125in} \begin{center} \begin{picture}(0,0)% \epsfig{file=fig13.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4185,1639)(16,-1844) \put(3596,-1094){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_1$}}% }}}} \put(2851,-361){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_1$}}% }}}} \put(2701,-1786){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_3$}}% }}}} \put(1051,-361){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$X$}}% }}}} \put(1051,-1711){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$Y$}}% }}}} \put(151,-961){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\tiny{other vertices}}% }}}} \put(1576,-1186){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_{m-1}$}}% }}}} \put(3076,-1118){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_2$}}% }}}} 
\put(2723,-1119){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_3$}}% }}}} \put(2415,-1126){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_4$}}% }}}} \put(1201,-811){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$I_m$}}% }}}} \put(4201,-961){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$P_2$}}% }}}} \end{picture}% \end{center} \vspace{.125in} This concludes Case 3. \vspace{.1in} Since $C^*$ must belong to one of these cases, we have seen that if the length of $P$ is less than $N-3$, we can perform Whitehead moves to increase it to $N-3$ without creating prismatic $3$-circuits. Hence we can reduce to the case of two interior vertices, both of which must be endpoints. Then we can apply Move \ref{SUBLEMMA2}, part (b), to decrease the number of connections between one of these two interior vertices and $P$ to exactly $3$. The result is the complex $D_N$, as shown on the right below.
\vspace{.125in} \begin{center} \begin{picture}(0,0)% \epsfig{file=fig14.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4074,1599)(439,-1798) \put(1189,-641){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$w$}}% }}}} \put(3751,-1186){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$v$}}% }}}} \put(1240,-1148){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$v$}}% }}}} \put(3684,-661){\makebox(0,0)[lb]{\smash{{\SetFigFont{6}{7.2}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}\small{$w$}}% }}}} \end{picture}% \end{center} \vspace{.125in} $\Box$ \vspace{.15in} \vspace{.2in} \subsection{Construction of a ``difficult'' simple polyhedron}\label{SEC_DIFFICULT} We illustrate the algorithm described in the previous section by constructing a hyperbolic polyhedron realizing an abstract polyhedron for which Case 3 from the proof of Lemma \ref{ROEDER} is necessary; the construction is therefore particularly difficult.
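Before tracing through the sequence of diagrams, it may help to record how a single Whitehead move acts combinatorially: as the figures above suggest, $Wh(ab)$ removes the edge $ab$ shared by the two triangles $abx$ and $aby$ and inserts the opposite diagonal $xy$. The following minimal Python sketch is our own illustration, not code from this paper; the representation of a complex as a set of frozensets and the function name are assumptions made for the example.

```python
# A sketch of the Whitehead move Wh(ab) on a triangulated 2-sphere,
# viewed as a diagonal flip.  Triangles are frozensets of vertex labels.

def whitehead_move(triangles, a, b):
    """Replace edge {a,b} by the opposite diagonal {x,y}, where
    {a,b,x} and {a,b,y} are the two triangles sharing the edge."""
    edge = frozenset({a, b})
    adjacent = [t for t in triangles if edge <= t]
    assert len(adjacent) == 2, "edge must be shared by exactly two triangles"
    (x,) = adjacent[0] - edge   # opposite vertex of the first triangle
    (y,) = adjacent[1] - edge   # opposite vertex of the second triangle
    return (set(triangles) - set(adjacent)) | {
        frozenset({a, x, y}),
        frozenset({b, x, y}),
    }

# Example: the bipyramid over a triangle, with apexes 'N', 'S' and
# equatorial vertices 1, 2, 3.
C = {frozenset(t) for t in
     [('N', 1, 2), ('N', 2, 3), ('N', 3, 1),
      ('S', 1, 2), ('S', 2, 3), ('S', 3, 1)]}

# Flipping the equatorial edge {1,2} replaces it by the diagonal {N,S}.
D = whitehead_move(C, 1, 2)
assert frozenset({'N', 'S', 1}) in D and frozenset({'N', 'S', 2}) in D
```

The sequences of moves in the proof, such as $Wh(I_{m-1}P_1),\ldots,Wh(I_1P_1)$, are simply compositions of this elementary flip; the substance of the lemma is choosing the order of flips so that no prismatic 3-circuit ever appears.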
\begin{center} \scalebox{1.1}{ \begin{picture}(0,0)% \epsfig{file=const_wh_3a.ps}% \end{picture}% \setlength{\unitlength}{4144sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(5242,8241)(852,-7843) \put(960,-7365){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small Case 3 from the proof of Lemma \ref{ROEDER} is done in sub-figures (1)--(7).}}% }}}} \put(960,-7578){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small Case 1 follows in sub-figures (7)--(9) and again (for a different edge of $P$)}}% }}}} \put(1532,-1358){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (1)}}% }}}} \put(3320,-1329){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (2)}}% }}}} \put(5193,-1372){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (3)}}% }}}} \put(1542,-3258){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (4)}}% }}}} \put(3346,-3244){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (5)}}% }}}} \put(5132,-3244){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (6)}}% }}}} \put(1523,-5116){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (7)}}% }}}} \put(3368,-5102){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (8)}}% }}}} 
\put(5142,-5102){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (9)}}% }}}} \put(1546,-7017){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (10)}}% }}}} \put(3349,-7031){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (11)}}% }}}} \put(5194,-7002){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (12)}}% }}}} \put(1731,-334){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(1726,-127){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(1996,-393){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(1726,-735){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(1494,-498){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(1740, 65){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(2193,-393){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(1735,-933){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(1282,-388){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(1278,-26){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} 
\put(1726,326){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(2092,-94){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(2445,-431){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(2107,-778){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(1750,-1155){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(1209,-841){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(881,-349){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(877,277){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $18$}}% }}}} \put(868,215){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny at infinity}}% }}}} \put(3515,-344){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(3519,-137){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(3519, 75){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(3543,287){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(3875,-94){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} 
\put(3775,-388){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(3978,-388){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(4224,-427){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(3872,-802){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(3514,-730){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(3519,-946){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(3539,-1168){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(2984,-827){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(3283,-490){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(3065,-393){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(2718,-359){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(3047,-46){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(4537,-374){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(4888,-50){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} 
\put(5361,297){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(5727,-85){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(6076,-436){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(5718,-816){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(5381,-1183){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(4815,-840){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(4912,-398){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(5188,-484){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(5348,-344){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(5366,-142){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(5371, 80){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(5617,-393){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(5829,-393){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(5371,-730){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} 
\put(5371,-937){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(876,-2278){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(1185,-2750){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(1278,-1921){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(1722,-1844){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(1726,-2042){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(1722,-2253){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(1726,-2644){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(1727,-2851){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(2086,-2707){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(2449,-2331){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(2178,-2297){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(1986,-2298){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(2097,-2009){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} 
\put(1727,-3078){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(1456,-2249){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(1278,-2278){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(1739,-1579){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(3886,-1976){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(4252,-2326){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(3866,-2693){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(3515,-3069){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(3525,-2833){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(3519,-2624){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(3987,-2289){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(3731,-2298){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(3520,-2235){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(3514,-2032){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} 
\put(3525,-1825){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(3057,-1936){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(3066,-2273){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(3254,-2239){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(2988,-2731){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(2656,-2292){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(5317,-1574){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(5650,-1961){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(6017,-2322){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(5679,-2683){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(5328,-3060){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(5319,-2823){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(5303,-2624){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(5777,-2289){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} 
\put(5507,-2293){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(5198,-2288){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(5308,-2027){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(5308,-1821){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(4860,-1936){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(4478,-2292){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(4801,-2720){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(5034,-2230){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(4860,-2278){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(852,-4143){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(1131,-4606){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(1694,-4954){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(1697,-4709){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(1696,-4504){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} 
\put(2052,-4573){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(2406,-4211){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(2154,-4157){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(1904,-4168){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(1592,-4158){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(2068,-3870){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(1437,-4100){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(1253,-4149){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(1693,-3894){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(1244,-3802){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(1700,-3454){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(3529,-1574){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(3542,-3449){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(3890,-3855){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} 
\put(4252,-4206){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(3894,-4573){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(3559,-4940){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(2972,-4606){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(3539,-4703){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(3535,-4490){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(3722,-4172){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(4001,-4148){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(3539,-3880){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(3047,-3797){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(3192,-4163){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(2956,-4158){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(2717,-4129){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(3429,-4154){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} 
\put(1688,-3706){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(3553,-3706){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(5288,-3474){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(5645,-3860){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(6021,-4236){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(5673,-4578){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(5295,-4973){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(5299,-4721){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(5289,-4510){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(5487,-4193){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(5756,-4187){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(5298,-3909){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(5298,-3711){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(5175,-4183){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} 
\put(4711,-4187){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(4913,-4177){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(4788,-3831){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(4444,-4187){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(4757,-4592){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(3524,-5307){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(3910,-5721){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(3538,-5576){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(3543,-5775){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(4267,-6115){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(3996,-6028){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(3752,-6038){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(3211,-6034){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(3434,-6039){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} 
\put(2704,-6034){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(2966,-6044){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(3057,-5692){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(3918,-6435){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(3539,-6386){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(3554,-6592){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(2993,-6496){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(3531,-6830){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(5385,-5249){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(5770,-5687){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(5375,-5508){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(4888,-5615){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(4511,-5956){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(4770,-5961){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} 
\put(5004,-5967){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(5257,-5956){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(5545,-5971){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(5820,-5955){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(6094,-6010){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(5746,-6348){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(5397,-6747){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(5381,-6496){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(5372,-6303){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(4825,-6399){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(1711,-5320){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(1729,-5580){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(2072,-5726){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(1735,-5775){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} 
\put(1239,-5701){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(875,-6048){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(1143,-6044){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(1374,-6048){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(1587,-6053){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(1910,-6047){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(2184,-6034){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(2449,-6086){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(2091,-6435){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(1694,-6830){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(1727,-6586){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(1727,-6386){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(1184,-6477){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(5372,-5703){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} 
\put(960,-7789){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small in sub-figures (9)--(12).}}% }}}} \end{picture}% } \end{center} \begin{center} \scalebox{1.1}{ \begin{picture}(0,0)% \epsfig{file=const_wh_3b.ps}% \end{picture}% \setlength{\unitlength}{4144sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(5514,8165)(886,-7543) \put(1417,-7489){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small figures (21) and (22), then Case 1 is done in (22)--(29)}.}% }}}} \put(6384,-3934){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(6132,-3900){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(5887,-3916){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(5626,-3992){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(1687,-4211){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(3610,-4100){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(5658,-4137){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(5459,-3946){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(5100,-3905){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny 
$9$}}% }}}} \put(1433,-7013){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small Case 1 is repeated three times in sub-figures (12)--(15), then (15)--(17),}}% }}}} \put(1449,-7246){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small and finally in (17)--(21). The figure is straightened out between}}% }}}} \put(2139,-6001){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(2144,-6310){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(1831,-6473){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(1820,-5059){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(2142,-5760){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(1459,-5159){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(1425,-5382){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(1424,-5568){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(1423,-5751){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(1420,-5936){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(1407,-6143){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} 
\put(1443,-6386){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(1752,-5302){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(1735,-6073){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(1877,-5710){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(2124,-5160){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(2148,-5363){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(3261,-5999){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(3266,-6308){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(2953,-6471){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(2943,-5057){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(3264,-5758){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(2581,-5157){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(2547,-5380){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(2546,-5566){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} 
\put(2545,-5749){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(2543,-5934){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(2529,-6141){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(2566,-6384){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(2874,-5301){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(2857,-6071){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(2999,-5708){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(3247,-5158){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(3271,-5361){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(4460,-6004){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(4465,-6313){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(4152,-6476){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(4142,-5062){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(4463,-5763){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} 
\put(3780,-5161){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(3746,-5385){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(3745,-5571){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(3744,-5753){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(3742,-5939){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(3728,-6146){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(3765,-6388){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(4073,-5305){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(4056,-6076){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(4198,-5713){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(4446,-5162){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(4470,-5366){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(5663,-6013){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(5668,-6322){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} 
\put(5355,-6485){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(5344,-5071){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(5666,-5773){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(4983,-5171){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(4949,-5395){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(4948,-5581){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(4947,-5763){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(4944,-5948){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(4931,-6155){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(4967,-6398){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(5276,-5315){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(5259,-6085){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(5401,-5723){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(5648,-5172){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} 
\put(5672,-5375){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(3490,-3011){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (17)}}% }}}} \put(1566,-1171){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (13)}}% }}}} \put(3434,-1156){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (14)}}% }}}} \put(5442,-1156){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (15)}}% }}}} \put(1636,-3025){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (16)}}% }}}} \put(5484,-3011){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (18)}}% }}}} \put(1650,-4894){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (19)}}% }}}} \put(3490,-4879){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (20)}}% }}}} \put(5499,-4865){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (21)}}% }}}} \put(1650,-6692){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (22)}}% }}}} \put(2816,-6692){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (23)}}% }}}} \put(4024,-6678){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (24)}}% }}}} \put(5232,-6678){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small 
(25)}}% }}}} \put(1653,-160){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(1953,-185){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(2197,-185){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(2451,-209){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(2124,-572){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(1757,-963){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(1737,-734){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(1732,-533){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(1263,-646){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(886,-264){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(1131,-195){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(1385,-190){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(1252,143){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(1702,540){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} 
\put(1732,289){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(1732, 75){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(2089,119){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(3586,550){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(3616,289){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(3617, 84){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(3969,128){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(4345,-209){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(4062,-185){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(3827,-190){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(3989,-577){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(3636,-973){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(3625,-734){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(3606,-508){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} 
\put(3518,-160){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(3249,-181){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(3113,-631){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(2976,-185){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(3112,138){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(2752,-264){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(5602,536){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(5602,285){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(5604, 79){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(5961,133){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(6336,-225){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(6073,-185){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(5828,-190){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(5510,-180){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} 
\put(5972,-582){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(5608,-489){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(5598,-739){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(5588,-983){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(5120,-646){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(4768,-269){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(4988,-190){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(5251,-181){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(5075,133){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(959,-2129){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(1317,-2501){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(1806,-2828){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(1805,-2609){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(2168,-2471){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} 
\put(2534,-2089){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(2257,-2046){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(2036,-2055){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(1693,-2054){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(1448,-2045){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(1203,-2060){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(1301,-1727){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(1795,-1585){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(1806,-1790){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(2182,-1790){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(1698,-2373){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(1795,-1340){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(3666,-2852){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(3646,-2623){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} 
\put(4024,-2462){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(4379,-2074){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(4101,-2055){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(3852,-2055){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(3563,-2382){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(3172,-2520){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(2794,-2129){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(3048,-2055){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(3308,-2045){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(3161,-1727){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(3650,-1580){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(3660,-1790){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(3635,-1336){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(4027,-1795){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} 
\put(5671,-1331){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(5667,-1575){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(6044,-1805){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(5671,-1785){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(5163,-1722){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(4801,-2143){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(5079,-2050){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(5315,-2049){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(5570,-2382){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(5163,-2492){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(5672,-2848){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(5662,-2608){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(6040,-2452){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(6400,-2080){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} 
\put(6127,-2046){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(5873,-2060){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(3616,-2153){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(5627,-2143){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(1781,-3166){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(2187,-3601){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(1815,-3425){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(1806,-3626){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(1335,-3548){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(1448,-3890){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(1751,-3968){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(1223,-3886){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(968,-3983){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(1307,-4322){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} 
\put(1781,-4401){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(2164,-4311){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(2528,-3934){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(1992,-3896){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(2261,-3886){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(1815,-4689){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(3636,-3166){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(3670,-3420){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(4032,-3640){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(3666,-3630){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(3200,-3519){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(3049,-3890){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(3294,-3890){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(2809,-3973){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} 
\put(3138,-4312){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(3655,-4689){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(3572,-4268){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(4023,-4302){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(4383,-3934){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(4111,-3881){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(3857,-3891){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(3625,-3973){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(5701,-3205){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(6048,-3620){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(5680,-3445){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(5677,-3640){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(5206,-3554){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(4815,-3997){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} 
\put(5574,-4283){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(5145,-4357){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(5686,-4689){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(6053,-4307){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \end{picture}% } \end{center} \begin{center} \scalebox{1.1}{ \begin{picture}(0,0)% \epsfig{file=const_wh_3c.ps}% \end{picture}% \setlength{\unitlength}{4144sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4604,4991)(1373,-5080) \put(1444,-4584){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small is done in sub-figures (30)--(33) so that one of the two interior}}% }}}} \put(1390,-2872){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(1373,-3053){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(1543,-3196){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(1382,-2576){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(1377,-2347){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(1448,-2134){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} 
\put(2226,-2656){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(2140,-3101){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(1942,-3219){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(1663,-2027){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(1900,-2029){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(2105,-2036){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(2214,-2209){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(2219,-2401){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(2218,-2932){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(2708,-2879){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(2691,-3060){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(2861,-3203){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(2700,-2583){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(2695,-2354){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} 
\put(2766,-2141){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(3544,-2663){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(3458,-3108){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(3260,-3226){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(2981,-2034){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(3218,-2036){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(3423,-2043){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(3532,-2216){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(3537,-2408){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(3536,-2939){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(3872,-2869){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(3855,-3050){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(4025,-3193){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(3864,-2573){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} 
\put(3859,-2344){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(3930,-2131){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(4708,-2653){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(4622,-3098){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(4424,-3216){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(4145,-2024){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(4382,-2026){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(4587,-2033){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(4696,-2206){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(4701,-2398){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(4700,-2929){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(1428,-4797){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small vertices is only connected to three points on the outer polygon.}}% }}}} \put(1428,-5022){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small This reduces the complex to $D_{18}^*$. 
}}% }}}} \put(1461,-3884){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small Continuing from the last page, sub-figures (22)--(29) are another}}% }}}} \put(3189,-163){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(3512,-316){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(3512,-503){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(3519,-880){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(3526,-1176){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(3509,-1493){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(3189,-1626){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(2805,-1517){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(3118,-1241){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(3209,-867){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(2757,-497){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(2754,-295){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} 
\put(2751,-715){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(2751,-882){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(2751,-1066){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(2757,-1303){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(3196,-424){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(4353,-231){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(4676,-384){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(4676,-571){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(4683,-948){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(4690,-1244){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(4673,-1561){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(4353,-1694){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(3969,-1585){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(4282,-1309){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} 
\put(4373,-935){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(3921,-565){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(3918,-363){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(3915,-783){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(3915,-950){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(3915,-1134){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(3921,-1371){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(4360,-492){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(1681,-3406){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (30)}}% }}}} \put(1741,-1846){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (26)}}% }}}} \put(4216,-1861){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (28)}}% }}}} \put(5446,-1831){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (29)}}% }}}} \put(3031,-3421){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (31)}}% }}}} \put(4201,-3421){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (32)}}% }}}} 
\put(5491,-3436){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (33)}}% }}}} \put(3046,-1816){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small (27)}}% }}}} \put(1859,-422){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(1852,-161){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(2175,-314){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(2175,-501){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(2182,-878){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(2189,-1174){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(2172,-1491){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(1852,-1624){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(1468,-1515){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(1781,-1239){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(1872,-865){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(1420,-495){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} 
\put(1417,-293){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(1414,-713){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(1414,-880){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(1414,-1064){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(1420,-1301){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(5602,-465){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(5595,-204){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(5918,-357){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(5918,-544){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(5925,-921){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(5932,-1217){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(5915,-1534){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(5595,-1667){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(5211,-1558){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} 
\put(5524,-1282){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(5615,-908){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(5163,-538){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(5160,-336){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(5157,-756){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(5157,-923){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} \put(5157,-1107){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(5163,-1344){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(1881,-2864){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(5141,-2869){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $9$}}% }}}} \put(5124,-3050){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $14$}}% }}}} \put(5294,-3193){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $13$}}% }}}} \put(5133,-2573){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $15$}}% }}}} \put(5128,-2344){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $16$}}% }}}} 
\put(5199,-2131){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $17$}}% }}}} \put(5977,-2653){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $4$}}% }}}} \put(5891,-3098){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $3$}}% }}}} \put(5693,-3216){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $1$}}% }}}} \put(5414,-2024){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $10$}}% }}}} \put(5651,-2026){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $11$}}% }}}} \put(5856,-2033){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $12$}}% }}}} \put(5965,-2206){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $2$}}% }}}} \put(5970,-2398){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $5$}}% }}}} \put(5969,-2929){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $8$}}% }}}} \put(3223,-2885){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(4391,-2890){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(5662,-2885){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $7$}}% }}}} \put(1908,-2388){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} 
\put(3231,-2414){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(4394,-2415){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(5664,-2440){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\tiny $6$}}% }}}} \put(1461,-4109){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small instance of Case 1. The figure has been straightened out between}}% }}}} \put(1445,-4334){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small sub-figures (29) and (30). A sequence of final Whitehead moves}}% }}}} \end{picture}% } \end{center} \vspace{.1in} Following the Whitehead moves backwards from $D_{18}^*$ to $R_{18}^*$, we find the following sequence of Whitehead moves. \vspace{.1in} \noindent {\footnotesize $Wh(6,13)$, $Wh(6,9)$, $Wh(3,6)$, $Wh(9,12)$, $Wh(6,15)$, $Wh(6,16)$, $Wh(6,17)$, $Wh(6,7)$, $Wh(6,8)$, $Wh(6,4)$, $Wh(8,18)$, $Wh(7,9)$, $Wh(9,14)$, $Wh(9,15)$, $Wh(3,18)$, $Wh(7,8)$, $Wh(4,18)$, $Wh(3,8)$, $Wh(8,9)$, $Wh(5,18)$, $Wh(4,9)$, $Wh(6,9)$, $Wh(2,18)$, $Wh(5,6)$, $Wh(1,18)$, $Wh(1,13)$, $Wh(1,3)$, $Wh(3,4)$, $Wh(4,5)$, $Wh(2,5)$. } \begin{figure}[p] \begin{center} \includegraphics[scale=1.2]{comb2_con_arxiv.ps} \end{center} \caption{Construction of $R_{18}$ from $D_{18}$ using $30$ Whitehead moves.}\label{FIG_COMB2} \end{figure} We performed this sequence of Whitehead moves geometrically, using Newton's method. The result, starting with $D_{18}$ and realizing $R_{18}$, is shown in Figure \ref{FIG_COMB2}. Each polyhedron is displayed in the conformal ball model. \subsection{Truncation of vertices}\label{SEC_TRUNCATION} We have seen an outline of how to construct simple polyhedra. 
We now show how to construct all truncated polyhedra, except for the triangular prism, which we have already constructed in Section \ref{SEC_WH}. \begin{lem} \label{PI3} If $A_C \neq \emptyset$, then there are points in $A_C$ arbitrarily close to \\ $(\pi/3,\pi/3,\ldots,\pi/3)$. \end{lem} \noindent {\bf Proof:} Simply check that if ${\bf a} \in A_C$, then the entire straight-line path to $(\pi/3,\pi/3,\ldots,\pi/3)$, excluding the final point, is in $A_C$. $\Box$ \vspace{.15in} Thus we can assume that ${\bf a}$ is arbitrarily close to $(\pi/3,\pi/3,\ldots,\pi/3)$, because once we have a polyhedron realizing $C$ with non-obtuse dihedral angles, we can deform it to have any dihedral angles in $A_C$, as described in Section \ref{SEC_DEFORM}. Specifically, choose some $0 < \delta < \frac{\pi}{18}$ and assume that each component of ${\bf a}$ is within $\delta$ of $\frac{\pi}{3}$. Let $\widetilde{C}$ be $C$ with each of the triangular faces $f_i^{\rm tr}$ replaced by a single vertex $v_i^{\rm tr}$. (Or, if $C$ is the truncated triangular prism, let $\widetilde{C}$ be the prism.) Let $\hat{{\bf a}}$ be the angles from ${\bf a}$ corresponding to the edges from $C$ that are in $\widetilde{C}$, and let $\beta = (\hat{{\bf a}}_1+2\delta,\hat{{\bf a}}_2+2\delta,\cdots)$. (If $\widetilde{C}$ is the prism, re-number the edges so that the three edges forming the prismatic cycle are the first three, and choose $\beta = (\hat{{\bf a}}_1,\hat{{\bf a}}_2,\hat{{\bf a}}_3,\hat{{\bf a}}_4+2\delta,\hat{{\bf a}}_5+2\delta,\cdots)$.) Note that $\delta$ was chosen so that $\beta \in A_{\widetilde{C}}$. Then, the straight-line path ${\bf a}(t)$ joining $\beta$ to $\hat{\bf a}$ (parameterized by $t \in [0,1]$) will remain in $A_{\widetilde{C}}$, except that the sum of the dihedral angles of edges meeting at each of the vertices $v_i^{\rm tr}$ will decrease past $\pi$ at some time $t_i \in (0,1)$. 
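Since each component of ${\bf a}(t)$ is linear in $t$, the sum of the dihedral angles meeting at a prospective truncation vertex $v_i^{\rm tr}$ is linear as well, so each crossing time $t_i$ has a closed form. A minimal Python sketch of this computation (the function name and the sample angle values are illustrative choices, not taken from the construction):

```python
from math import pi

def crossing_time(eta, gamma):
    # Each edge angle follows the straight-line path
    #   a_i(t) = eta_i * (1 - t) + gamma_i * t,
    # so the sum of the angles meeting at a vertex is linear in t, and the
    # time at which it decreases past pi can be solved for directly.
    s0, s1 = sum(eta), sum(gamma)
    if not (s0 > pi > s1):   # the sum must start above pi and end below it
        return None
    return (s0 - pi) / (s0 - s1)

# Three edges meeting at a truncation vertex, each deformed from 2*pi/5
# down to pi/4 (values chosen purely for illustration).
t_i = crossing_time([2 * pi / 5] * 3, [pi / 4] * 3)   # equals 4/9
```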
In \cite{ROE2,ROE} the authors use the path ${\bf a}(t)$ to construct a sequence of polyhedra $\widetilde{P} = P_0,P_1,\cdots,P_{N-1} = P$, where $\widetilde{P}$ realizes $\widetilde{C}$ and $P_i$ is obtained from $P_{i-1}$ by truncating the vertices that become ideal when $t=t_i$. (The truncation is done using a small perturbation.) Realizing $P$ proves that ${\cal P}_C^0 \neq \emptyset$, as needed for the proof of Andreev's Theorem. Because the proof from \cite{ROE2,ROE} gives us a priori knowledge that compact polyhedra exist realizing each of the intermediate combinatorial structures, we can use Newton's method to deform the planes forming $\widetilde{P}$ to realize the angles in the entire path ${\bf a}(t)$, without truncating each vertex once it meets $\partial_\infty \mathbb{H}^3$. We can then solve independently for the planes corresponding to the triangles in $C$ so that each intersects the three appropriate planes at the three appropriate angles. We illustrate this construction below for the following pair $(C,{\bf a})$, which has three truncations labeled $f_1^{\rm tr}$, $f_2^{\rm tr}$, and $f_3^{\rm tr}$. 
\begin{center} \begin{picture}(0,0)% \epsfig{file=truncation.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(3532,2581)(137,-1763) \put(3258,-581){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$f_3^{\rm tr}$}% }}}} \put(356,-617){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(609,-725){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1003,-460){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1163,319){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2615,-111){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2262,-89){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1745, 77){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3045,-1500){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(2010,-1360){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(958,-1261){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} 
\put(796,-1163){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(856,-902){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(522,-909){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2818,-390){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(1319,-112){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1630,-98){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(1513,-1120){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{3}$}% }}}} \put(1881,-729){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1139,-1242){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2471,-361){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1501,-1736){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(326,-1511){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(882,-1482){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} 
\put(695,-1321){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(336,-1217){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(476, 68){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2237,-352){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{6}$}% }}}} \put(1490,-559){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(190,-912){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1458,746){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2711,525){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(1856,-177){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(598,-1060){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(921,-1070){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2234,251){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(2401,-211){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$f_2^{\rm tr}$}% }}}} \put(1726,-511){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$f_1^{\rm tr}$}% 
}}}} \put(3119,-951){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3128,-165){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3669,-496){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2036,420){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \end{picture}% \end{center} The path ${\bf a}(t)$ described above works for any truncated $C$. For many $C$, such as the current one, a much easier path ${\bf a}(t)$ can be found satisfying conditions (1), (3), (4), and (5) from Andreev's Theorem for $\widetilde{C}$, but for which the sum of the dihedral angles of the edges meeting at each $v_i^{\rm tr}$ decreases past $\pi$ at some $t_i \in (0,1)$. Such a path is sufficient for our construction. For the current construction, $\widetilde{C}$ is shown below with edges labeled according to an appropriate path ${\bf a}(t)$. (Notation: ${\bf a}(t) = \beta$ for the edges labeled with a single angle $\beta$, whereas ${\bf a}(t) = \eta (1-t) + \gamma t$ for the edges $[\eta, \gamma]$.) 
\begin{center} \begin{picture}(0,0)% \epsfig{file=truncation_path.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(3452,2581)(137,-1763) \put(2769,-296){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\left[\frac{2\pi}{5},\frac{\pi}{4}\right]$}% }}}} \put(356,-617){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(609,-725){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1003,-460){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1163,319){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1745, 77){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(958,-1261){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(796,-1163){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(856,-902){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(522,-909){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1319,-112){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} 
\put(1139,-1242){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(326,-1511){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(882,-1482){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(695,-1321){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(336,-1217){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(476, 68){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(190,-912){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1458,746){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(598,-1060){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(921,-1070){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1764,398){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1922,-1317){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1501,-1736){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} 
\put(1583,-911){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\left[\frac{2\pi}{5},\frac{\pi}{3}\right]$}% }}}} \put(3589,-142){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$v_3^{\rm tr}$}% }}}} \put(2027,-482){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\left[\frac{2\pi}{5},\frac{\pi}{6}\right]$}% }}}} \put(1500,-563){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$v_1^{\rm tr}$}% }}}} \put(2391,-306){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$v_2^{\rm tr}$}% }}}} \put(3093,-982){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\left[\frac{2\pi}{5},\frac{\pi}{4}\right]$}% }}}} \put(2944,261){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\left[\frac{2\pi}{5},\frac{\pi}{4}\right]$}% }}}} \put(1698,-179){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\left[\frac{2\pi}{5},\frac{\pi}{4}\right]$}% }}}} \put(2271, 49){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\left[\frac{2\pi}{5},\frac{\pi}{4}\right]$}% }}}} \end{picture}% \end{center} We constructed a polyhedron $\widetilde{P}$ realizing the pair $(\widetilde{C},{\bf a}(0))$ and used Newton's method to deform the faces so that the dihedral angles follow the path ${\bf a}(t)$. After obtaining a non-compact polyhedron $\widetilde{P}_1$ realizing angles ${\bf a}(1)$ we truncated the vertices $v_1^{\rm tr},v_2^{\rm tr},v_3^{\rm tr}$. 
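This deformation is a form of numerical continuation: the polyhedron realizing ${\bf a}(t)$ serves as the Newton starting guess for the next parameter value along the path. A schematic Python sketch of the same strategy on a toy one-variable system (the system $f$ here is purely illustrative; the actual computation solves for the positions of the planes bounding the polyhedron):

```python
def follow_path(f, dfdx, x0, steps=100, tol=1e-12):
    # March the parameter t from 0 to 1 in small steps, re-solving
    # f(x, t) = 0 by Newton's method at each step, starting from the
    # solution found at the previous step.
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(50):          # Newton iterations at fixed t
            dx = f(x, t) / dfdx(x, t)
            x -= dx
            if abs(dx) < tol:
                break
    return x

# Toy system x**3 + x - t = 0, followed from the root x = 0 at t = 0.
x_final = follow_path(lambda x, t: x**3 + x - t,
                      lambda x, t: 3 * x**2 + 1,
                      x0=0.0)
```

Small steps keep each starting guess inside the basin of convergence of Newton's method, which is what makes following the whole path reliable.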
The final result is the polyhedron: \begin{center} \includegraphics[scale=1.0]{truncated_arxiv.ps} \end{center} which will be used in the next section to form part of a compound polyhedron. \subsection{Constructing compound polyhedra}\label{SEC_COMPOUND} Any compound polyhedron can be constructed by gluing together a finite number of truncated polyhedra. We illustrate this construction for the polyhedron shown in Section \ref{SEC_EXAMPLE}. \begin{figure}[h!] \begin{center} \begin{picture}(0,0)% \epsfig{file=angles_spec2.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(4654,3041)(82,-2275) \put(2476,-1261){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\gamma$}% }}}} \put(581,-98){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1101,-1498){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(720,-1049){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1195,-729){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1389,211){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3142,-308){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2716,-281){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} 
\put(3253,416){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(2091,-81){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(3892,-560){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(4381,-779){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3897,-1181){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3661,-1985){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(2411,-1816){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1786,-2248){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1060,-1963){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1141,-1696){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(945,-1578){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1018,-1263){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(706,-1454){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} 
\put(615,-1271){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(400,-1976){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(3387,-645){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(401,-1604){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1577,-309){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1953,-292){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(2436,-622){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2041,-862){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1374,-1660){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(225,-1275){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1697,694){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2661, 90){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(2407,360){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} 
\put(402,-932){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2241,-1003){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{3}$}% }}}} \put(2086,-504){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{3}$}% }}}} \put(2972,-599){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1653,-1430){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{3}$}% }}}} \put(2219,-670){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2276,-451){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2569,-406){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{6}$}% }}}} \put(878,-1751){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \end{picture}% \end{center} \caption{\hbox{ }} \end{figure} In general, one cuts along every prismatic $3$-circuit which does not correspond to a triangular face. Here there is one such circuit which is labeled $\gamma$. 
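Locating the prismatic $3$-circuits to cut along amounts to finding triples of pairwise-adjacent faces that share no common vertex. A Python sketch under an assumed adjacency-set encoding of the combinatorial polyhedron (the data layout and function name are illustrative, not from the text):

```python
from itertools import combinations

def prismatic_3_circuits(adjacent, vertices):
    # `adjacent` maps each face to the set of faces sharing an edge with it;
    # `vertices` lists, for each vertex, the set of faces meeting there.
    # A prismatic 3-circuit corresponds to three pairwise-adjacent faces
    # that do not all meet at a single vertex.
    found = []
    for f, g, h in combinations(sorted(adjacent), 3):
        pairwise = g in adjacent[f] and h in adjacent[f] and h in adjacent[g]
        if pairwise and not any({f, g, h} <= v for v in vertices):
            found.append((f, g, h))
    return found

# The triangular prism: faces 0, 1 are the triangles, faces 2, 3, 4 the sides.
adj = {0: {2, 3, 4}, 1: {2, 3, 4}, 2: {0, 1, 3, 4},
       3: {0, 1, 2, 4}, 4: {0, 1, 2, 3}}
verts = [{0, 2, 3}, {0, 3, 4}, {0, 2, 4}, {1, 2, 3}, {1, 3, 4}, {1, 2, 4}]
cuts = prismatic_3_circuits(adj, verts)   # the ring of side faces: [(2, 3, 4)]
```

In the compound case one would then discard the circuits that merely surround a triangular face and cut along each of the remaining ones, such as the circuit labeled $\gamma$ above.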
We cut along $\gamma$ obtaining two combinatorial polyhedra for which every prismatic $3$-circuit corresponds to a triangular face: \begin{center} \begin{picture}(0,0)% \epsfig{file=two_diagrams.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(5487,2581)(137,-1763) \put(2234,251){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(356,-617){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(609,-725){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1003,-460){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1163,319){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2615,-111){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(2262,-89){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1745, 77){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3236,-320){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3641,-501){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} 
\put(3240,-834){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(3045,-1500){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(2010,-1360){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(958,-1261){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(796,-1163){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(856,-902){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(522,-909){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2818,-390){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(1319,-112){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1630,-98){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(1513,-1120){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{3}$}% }}}} \put(1881,-729){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1139,-1242){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1758,-515){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$F$}% }}}} 
\put(4774,-496){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{3}$}% }}}} \put(4955,-148){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(4780,337){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(4330,-971){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{3}$}% }}}} \put(4595,-192){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{3}$}% }}}} \put(4514, 66){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{3}$}% }}}} \put(4800,-979){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\tilde{F}$}% }}}} \put(4705,-1116){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}{\small on the outside}}% }}}} \put(4894,-747){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(4846, 26){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(5192,-27){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{6}$}% }}}} \put(4144,-497){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(4558,-441){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(4727,-199){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% 
}}}} \put(2471,-361){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(1501,-1736){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(326,-1511){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(882,-1482){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(695,-1321){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(336,-1217){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(476, 68){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2237,-352){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{6}$}% }}}} \put(1490,-559){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} \put(190,-912){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(1458,746){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(2711,525){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{4}$}% }}}} \put(1856,-177){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{\pi}{2}$}% }}}} 
\put(598,-1060){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \put(921,-1070){\makebox(0,0)[lb]{\smash{{\SetFigFont{5}{6.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{2\pi}{5}$}% }}}} \end{picture}% \end{center} In this case, the diagram on the left is that for the polyhedron that we constructed in the previous section. The diagram on the right is that of the truncated triangular prism, which can also be easily constructed. We require that the new triangular faces $F$ and $\widetilde{F}$ obtained by cutting along $\gamma$ be perpendicular to each of the other faces that they intersect. Then each face angle equals the dihedral angle outside of $F$, or $\widetilde{F}$, that leads to that vertex. Because we obtained the two diagrams by cutting the original diagram along $\gamma$, the dihedral angles on the edges leading to $F$ and $\widetilde{F}$ are the same and we naturally obtain that $F$ and $\widetilde{F}$ have the same face angles, but are mirror images of each other. \begin{center} \begin{picture}(0,0)% \epsfig{file=two_poly.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(3375,2008)(1201,-2369) \put(3601,-2311){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\tilde{F}$}% }}}} \put(3226,-2311){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$F$}% }}}} \end{picture}% \end{center} These two polyhedra glue perfectly together to form a polyhedron realizing $(C,{\bf a})$ as shown in the following figure. 
\begin{center} \includegraphics[scale=1.0]{comp2_arxiv.ps} \end{center} \section{Applications to discrete groups and polyhedral orbifolds}\label{SEC_APPLICATIONS} Let $P$ be a finite volume hyperbolic polyhedron whose dihedral angles are each a proper integer sub-multiple of $\pi$. It is a well-known application of the Poincar\'e Polyhedron Theorem \cite{POINCARE} that the group generated by reflections in the faces of $P$ forms a discrete subgroup $\Gamma_P$ of $Isom(\mathbb{H}^3)$. Such groups have been extensively studied; see \cite{VINREFL} and the references therein. Given such a discrete reflection group $\Gamma_P$, we denote the corresponding orbifold by $O_P = \mathbb{H}^3/\Gamma_P$. We will use the term {\em polyhedral orbifolds} to describe orbifolds obtained in this way. (Note: often in the literature, the term ``polyhedral orbifold'' is used to describe the oriented double cover $\mathbb{H}^3/\Gamma^+_P$, where $\Gamma^+_P$ is the index two subgroup consisting of orientation-preserving elements of $\Gamma_P$.) See Thurston \cite[Chapter 13]{TH} and Reni \cite{RENI} for more details on polyhedral orbifolds. We use the computer program described in this paper to construct examples from three classes of polyhedral orbifolds: the Lambert cubes \cite{KELLERHALS}, the L\"obell orbifolds \cite{LOBELL,VESNIN_LOB,HYPER_ELL}, and a mysterious orbifold described by Mednykh and Vesnin \cite{HYPER_ELL} whose 16-fold cover is a ``hyperelliptic'' compact hyperbolic manifold. We output the generators of each reflection group as elements of $SO(3,1)$ into SnapPea \cite{SNAPPEA}, computing volumes and length spectra of these orbifolds. For details on how SnapPea calculates the length spectrum, refer to \cite{WEEKS_LENGTHS}.
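The linear algebra behind such generators is simple enough to sketch directly. The following Python snippet (our own illustration, not the paper's program; the sample normal vector used in the comments is arbitrary) builds the reflection of Minkowski space $\mathbb{R}^{3,1}$ fixing the hyperplane Minkowski-orthogonal to a spacelike vector $e$; each such matrix lies in $O(3,1)$:

```python
# Reflection generators for a polyhedral reflection group, as 4x4
# matrices preserving the Minkowski form diag(1,1,1,-1).
# (Illustrative sketch; any spacelike vector e can serve as a normal.)

J = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, -1]]          # the Minkowski form diag(1,1,1,-1)

def minkowski(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2] - u[3] * v[3]

def reflection(e):
    """Matrix of x |-> x - 2<x,e>/<e,e> e for a spacelike vector e."""
    c = minkowski(e, e)       # must be positive for a spacelike normal
    Je = [e[0], e[1], e[2], -e[3]]
    # R = I - (2/c) e (Je)^T
    return [[(1.0 if i == j else 0.0) - 2.0 * e[i] * Je[j] / c
             for j in range(4)] for i in range(4)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
```

Each such matrix is an involution that preserves the Minkowski form, which is the defining property of a reflection generator in matrix form.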
\subsection{Construction of Lambert Cubes} A Lambert cube is a compact polyhedron realizing the combinatorial type of a cube, with three non-coplanar edges chosen and assigned dihedral angles $\alpha$, $\beta$, and $\gamma$, and the remaining edges assigned dihedral angles $\frac{\pi}{2}$. It is easy to verify that if $0 < \alpha, \beta, \gamma < \frac{\pi}{2}$, then such an assignment of dihedral angles satisfies the hypotheses of Andreev's Theorem. The resulting polyhedron is called the $(\alpha,\beta,\gamma)$ Lambert Cube, which we will denote by $P_{\alpha,\beta,\gamma}$. Thus, there are discrete groups generated by reflections in the faces of a Lambert Cube when $\alpha = \frac{\pi}{p}$, $\beta = \frac{\pi}{q}$, and $\gamma = \frac{\pi}{r}$ for integers $p,q,r > 2$. We denote the corresponding orbifold by $O_{\rm Lambert}(p,q,r)$. In the following table, we present volumes and the lengths of the shortest geodesics for a sampling of Lambert Cubes for small $p,q$, and $r$: \vspace{.1in} \begin{tabular}{|l|l|l|} \hline $O_{\rm Lambert}(3,3,3)$ & Computed Volume: $0.324423$ & Theoretical Volume: $0.3244234492$ \\ \hline Short Geodesics & 6 mI $1.087070$ & 3 mI $1.087070 + i \cdot 2\pi/3$ \\ & 6 mI $1.400257$ & 6 mI $1.400257 + i \cdot \pi$ \\ & 3 mI $1.601733 + i \cdot 2.765750$ & 3 mI $1.790480 + i \cdot 0.762413$ \\ & 6 mI $1.864162$ & 3 mI $2.138622$ \\ & 4 mI $2.174140$ & 6 mI $2.199243 + i \cdot 2.436822$ \\ \hline \hline $O_{\rm Lambert}(3,4,5)$ & Computed Volume: $0.479079$ & Theoretical Volume: $0.4790790206$ \\ \hline Short Geodesics & 2 mI $0.622685$ & 1 mI $0.622685 + i \cdot 1.256637$ \\ & 1 mI $0.622685 + i \cdot 2.513274$ & 3 mI $0.883748$ \\ & 1 mI $0.883748 + i \cdot 1.570797$ & 1 mI $0.883748 + i \cdot 3.141592$ \\ & 1 mI $1.123387$ & 1 mI $1.123387 + i \cdot 3.141593$ \\ & 1 mI $1.245371$ & \\ \hline \hline $O_{\rm Lambert}(4,4,4)$ & Computed Volume: $0.554152$ & Theoretical Volume: $0.5382759501$ \\ \hline Short Geodesics & 2 mI $0.175240$ & 1 mI
$0.175240 + i \cdot 0.369599$ \\ & 1 mI $0.175240 + i \cdot 1.108797$ & 1 mI $0.175240 + i \cdot 0.739198$\\ & 1 mI $0.175240 + i \cdot 0.739198$ & 1 mI $0.175240 + i \cdot 1.478396$ \\ & 1 mI $0.175240 + i \cdot 1.847996$ & 1 mI $0.175240 + i \cdot 2.217595$ \\ & 1 mI $0.175240 + i \cdot 2.587194$ & 1 mI $0.175240 + i \cdot 2.956793$ \\ & 1 mI $0.350479$ & \\ \hline \hline $O_{\rm Lambert}(5,8,12)$ & Computed Volume: $0.768801$ & Theoretical Volume: $0.7688005863$ \\ \hline Short Geodesics & 3 mI $0.407809$ & 1 mI $0.407809 + i \cdot 0.523599$ \\ & 1 mI $0.407809 + i \cdot 1.047198$ & 1 mI $0.407809 + i \cdot 1.570797$ \\ & 1 mI $0.407809 + i \cdot 2.094396$ & 1 mI $0.407809 + i \cdot 2.617995$ \\ & 1 mI $0.407809 + i \cdot 3.141592$ & 2 mI $0.643110$ \\ & 1 mI $0.643110 + i \cdot 0.785398$ & 1 mI $0.643110 + i \cdot 1.570796$ \\ & 1 mI $0.643110 + i \cdot 2.356194$ & 1 mI $0.643110 + i \cdot 3.141593$ \\ \hline \end{tabular} \vspace{.1in} The format of the lists of geodesic lengths presented in this and the following tables is the same as that used by SnapPea. The first entry is the multiplicity of distinct geodesics having the same complex length. The second entry is either ``mI'', to indicate that the geodesic has the topological type of a mirrored interval, or is empty, if the geodesic has the topological type of a circle. The third entry is the complex length. Nearly all of the short geodesics that we present in these tables are mirrored intervals, because our orbifolds are mirrored polyhedra and because we have only listed rather short geodesics. Also notice that, while SnapPea provides many more digits of precision for the geodesic lengths, we have rounded to the first 6 decimal places in order to group geodesics that are likely to correspond to the same class but were not listed that way due to numerical imprecision. \vspace{.1in} The volumes of Lambert cubes have been explicitly calculated by R. Kellerhals \cite{KELLERHALS}.
If we write $\Delta(\eta,\xi) = \Lambda(\eta+\xi) - \Lambda(\eta-\xi)$, where $\Lambda$ is the well-known Lobachevskii function $\Lambda(x) = -\int_0^x \log \vert 2\sin(t) \vert dt$, then \begin{eqnarray}\label{LAM_VOL} {\rm Vol}(P_{\alpha,\beta,\gamma}) = \frac{1}{4}\left(\Delta(\alpha,\theta)+\Delta(\beta,\theta)+\Delta(\gamma,\theta)-2 \cdot \Delta\left(\frac{\pi}{2},\theta\right) -\Delta(0,\theta)\right), \end{eqnarray} where $\theta$, with $0 < \theta < \frac{\pi}{2}$, is the parameter defined by: \begin{eqnarray*} {\rm tan}^2(\theta) = p + \sqrt{p^2+L^2M^2N^2},\\ p = \frac{L^2+M^2+N^2+1}{2} \mbox{, and }\\ L = {\rm tan}\alpha, M = {\rm tan}\beta, N = {\rm tan}\gamma. \end{eqnarray*} The column in the above tables labeled ``Computed Volume'' gives the volume of $O_{\rm Lambert}(p,q,r)$ as computed using SnapPea, while the column labeled ``Theoretical Volume'' gives the volume of $O_{\rm Lambert}(p,q,r)$ computed using Equation \ref{LAM_VOL}. \subsection{Construction of L\"obell Orbifolds} For each $n \geq 5$, there is a radially symmetric combinatorial polyhedron having two $n$-sided faces and $2n$ faces with $5$ sides, which provides a natural generalization of the dodecahedron. This combinatorial polyhedron is depicted below for $n=8$. \begin{center} \begin{picture}(0,0)% \epsfig{file=./unlabelled_lobell.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(2333,2294)(3889,-4808) \end{picture}% \end{center} Andreev's Theorem provides the existence of a compact right-angled polyhedron $R_n$ realizing this abstract polyhedron because it contains no prismatic $3$-circuits or prismatic $4$-circuits. (In fact, the work of L\"obell predates Andreev by many years, and one can also verify the existence of $R_n$ as an appropriate truncation and gluing of tetrahedra.)
We denote the group generated by reflections in the faces of $R_n$ by $\Gamma_n$ and the corresponding orbifold by $O_{\rm L\ddot{o}bell}(n) = \mathbb{H}^3/\Gamma_n$. \vspace{0.05in} \noindent {\bf Historical Note:} \noindent While we restrict our attention to the orbifold $O_{\rm L\ddot{o}bell}(n)$ in this paper, the reader may wish to note that the first example of a closed hyperbolic manifold was constructed by L\"obell \cite{LOBELL} in 1931 by an appropriate gluing of $8$ copies of $R_5$. Generalizing this notion, Vesnin \cite{VESNIN_LOB} has described a convenient algebraic method to construct a torsion-free subgroup $\Gamma_n' \subset \Gamma_n$ of index 8. Then the {\em $n$-th L\"obell Manifold} is the compact, orientable, hyperbolic manifold $M_{\rm L\ddot{o}bell}(n) := \mathbb{H}^3/\Gamma'_n$. Naturally, $M_{\rm L\ddot{o}bell}(n)$ is an $8$-fold (orbifold) cover of $O_{\rm L\ddot{o}bell}(n)$. We refer the reader to the nice expositions in \cite{VESNIN_LOB,HYPER_ELL} for the details. The delightful paper by Reni \cite{RENI} provides further details on the construction of hyperbolic manifolds and orbifolds and finite covers of right-angled polyhedra. \vspace{0.05in} We include the following table of data computed using SnapPea for the $n=5,\cdots,8$ L\"obell orbifolds.
\vspace{.1in} \begin{tabular}{|l|l|l|} \hline $O_{\rm L\ddot{o}bell}(5)$ (Dodecahedron) & Computed Volume: $4.306208$ & Theoretical Volume: $4.3062076007$ \\ \hline Short Geodesics & 60 mI $2.122550$ & 60 mI $2.122550 + i \cdot \pi$ \\ & 60 mI $2.938703$ & 60 mI $2.938703 + i \cdot \pi$ \\ & 126 mI $3.233843$ & 60 mI $3.579641$ \\ & 60 mI $3.783112 + i \cdot 1.376928$ & 12 $3.835986$ \\ & 12 $3.835986 + i \cdot \pi$ & 60 mI $3.835986 + i \cdot \pi$ \\ & 60 mI $3.966774$ & 60 mI $4.0270318 + i \cdot 2.264758$ \\ \hline \hline $O_{\rm L\ddot{o}bell}(6)$ & Computed Volume: $6.023046$ & Theoretical Volume: $6.0230460200$ \\ \hline Short Geodesics & 36 mI $1.762747$ & 12 mI $1.762747 + i \cdot \pi$ \\ & 37 mI $2.292431$ & 12 mI $2.292431 + i \cdot \pi$ \\ & 48 mI $2.633916$ & 24 mI $2.633916 + i \cdot \pi$ \\ & 36 mI $2.887271$ & 24 mI $2.887271 + i \cdot \pi$ \\ & 48 mI $3.088970$ & 12 mI $3.154720 + i \cdot 1.312496$ \\ & 24 mI $3.256614$ & 36 mI $3.256614 + i \cdot \pi$ \\ \hline \hline $O_{\rm L\ddot{o}bell}(7)$ & Computed Volume: $7.563249$ & Theoretical Volume: $7.5632490914$ \\ \hline Short Geodesics & 42 mI $1.611051$ & 14 mI $1.611051 + i \cdot \pi$ \\ & 1 mI $1.823106$ & 42 mI $2.388409$ \\ & 14 mI $2.388409 + i \cdot \pi$ & 14 mI $2.512394$ \\ & 14 mI $2.512394 + i \cdot \pi$ & 14 mI $2.601666$ \\ & 70 mI $2.898149$ & 14 mI $2.898149 + i \cdot 1.280529$ \\ & 42 mI $2.898149 + i \cdot \pi$ & 14 mI $3.031090 + i \cdot \pi$ \\ \hline \hline $O_{\rm L\ddot{o}bell}(8)$ & Computed Volume: $9.019053$ & Theoretical Volume: $9.0190527274$ \\ \hline Short Geodesics & 49 mI $1.528571$ & 16 mI $1.528571 + i \cdot \pi$ \\ & 80 mI $2.448452$ & 32 mI $2.448452 + i \cdot \pi$ \\ & 16 mI $2.760884 + i \cdot 1.261789$ & 32 mI $2.914035$ \\ & 48 mI $2.914035 + i \cdot \pi$ & 160 mI $3.057142$ \\ & 32 mI $3.057142 + i \cdot \pi$ & 16 mI $3.461816 + i \cdot 2.650944$ \\ & 64 mI $3.553688$ & 32 mI $3.553688 + i \cdot \pi$ \\ \hline \end{tabular} \vspace{.1in} The column labeled
``Computed Volume'' gives the volume as computed in SnapPea, whereas ``Theoretical Volume'' provides the volume of $O_{\rm L\ddot{o}bell}(n)$ using the explicit formula from \cite{LOB_VOL}. (In fact, we have divided the volume formula presented in \cite{LOB_VOL} by $8$, because they study the volume of the $8$-fold cover $M_{\rm L\ddot{o}bell}(n)$.) If we let $\theta = \frac{\pi}{2} -\arccos\left(\frac{1}{2\cos(\pi/n)}\right)$, then \begin{eqnarray}\label{EQ_LOB_VOL} {\rm Vol}(O_{\rm L\ddot{o}bell}(n)) = \frac{n}{2}\left(2\Lambda(\theta)+\Lambda\left(\theta+\frac{\pi}{n}\right)+\Lambda\left(\theta-\frac{\pi}{n}\right) -\Lambda\left(2\theta+\frac{\pi}{2}\right)\right), \end{eqnarray} where $\Lambda$ is the Lobachevskii function. Notice that for each of the L\"obell orbifolds that we computed, the volume computed in SnapPea agrees perfectly (within the six digits of precision available) with that given by Equation \ref{EQ_LOB_VOL}. \subsection{An orbifold due to Mednykh and Vesnin} In a very similar way to the construction of the L\"obell manifolds, Mednykh and Vesnin describe in \cite{HYPER_ELL} a compact three-dimensional hyperbolic manifold $M_{MV}$ which forms a $2$-fold branched cover of the sphere $\mathbb{S}^3$. They call manifolds with such a covering property over $\mathbb{S}^3$ ``hyperelliptic,'' generalizing the classical notion of hyperelliptic Riemann surfaces. See also \cite{MEDNYKH,HYPER_ELL2,HYPER_ELL3}. The combinatorial polyhedron considered by Mednykh and Vesnin (and apparently originally due to Grinbergs) is depicted below.
\begin{center} \begin{picture}(0,0)% \epsfig{file=unlabelled_grinbergs.ps}% \end{picture}% \setlength{\unitlength}{3947sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(3325,2592)(1564,-3488) \end{picture}% \end{center} This abstract polyhedron has no prismatic $3$-circuits or prismatic $4$-circuits, so Andreev's Theorem guarantees the existence of a polyhedron $R_{MV}$ realizing it with $\pi/2$ dihedral angles. We denote the group generated by reflections in the faces of $R_{MV}$ by $\Gamma_{MV}$ and the orbifold by $O_{MV}$. Combinatorial details on the construction of $M_{MV}$ as a $16$-fold cover of $O_{MV}$ can be found in \cite{HYPER_ELL}. The following table contains invariants of the orbifold $O_{MV} = \mathbb{H}^3/\Gamma_{MV}$ obtained by entering an explicit list of generators for $\Gamma_{MV}$ into SnapPea. \vspace{.1in} \begin{tabular}{|l|l|l|} \hline $O_{MV}$ & Computed Volume: $15.608119$ & Theoretical Volume: unknown \\ \hline Short Geodesics & 9 mI $0.989308$ & 3 mI $0.989308 + i \cdot \pi$ \\ & 9 mI $1.183451$ & 3 mI $1.183451 + i \cdot \pi$ \\ & 18 mI $1.834468$ & 6 mI $1.834468 + i \cdot\pi$ \\ & 18 mI $1.859890$ & 6 mI $1.859890+ i \cdot\pi$ \\ & 27 mI $1.882318$ & 9 mI $1.882318 + i \cdot\pi$ \\ & 6 mI $1.978616$ & 3 mI $1.978616 + i \cdot\pi$ \\ & 9 mI $2.214787$ & 3 mI $2.214787 + i \cdot \pi$ \\ & 18 mI $2.252719$ & 6 mI $2.252719+ i \cdot \pi$ \\ & 6 mI $2.366902$ & 3 mI $2.366902 + i \cdot \pi$ \\ & 6 mI $2.433170$ & 6 mI $2.433170 + i \cdot \pi$ \\ & 6 mI $2.446977$ & 6 mI $2.446977 + i \cdot \pi$ \\ \hline \end{tabular} \vspace{.1in} As an application, we obtain the estimate ${\rm Vol}(M_{MV}) = 16\cdot 15.608119 = 249.729904$ using that $M_{MV}$ is a $16$-fold orbifold cover over $O_{MV}$.
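As a sanity check on the tabulated values, both closed-form volume formulas quoted above (Kellerhals' formula for the Lambert cubes, Equation \ref{LAM_VOL}, and the L\"obell volume formula, Equation \ref{EQ_LOB_VOL}) are straightforward to evaluate numerically. The following Python sketch is our own; the midpoint-rule approximation of the Lobachevskii function is an implementation choice, not the method used by SnapPea or in \cite{KELLERHALS,LOB_VOL}:

```python
import math

def lobachevsky(x, n=20000):
    """Lobachevskii function Lambda(x) = -int_0^x log|2 sin t| dt,
    approximated by a midpoint rule.  Lambda is odd and pi-periodic,
    so the argument is first reduced to [-pi/2, pi/2]."""
    x = x - math.pi * round(x / math.pi)
    if abs(x) < 1e-12:
        return 0.0
    if x < 0:
        return -lobachevsky(-x, n)
    h = x / n
    return -h * sum(math.log(2.0 * math.sin(h * (k + 0.5))) for k in range(n))

def lambert_cube_volume(a, b, c):
    """Kellerhals' formula (Equation LAM_VOL above) for Vol(P_{a,b,c})."""
    L, M, N = math.tan(a), math.tan(b), math.tan(c)
    p = (L * L + M * M + N * N + 1.0) / 2.0
    theta = math.atan(math.sqrt(p + math.sqrt(p * p + (L * M * N) ** 2)))
    D = lambda eta: lobachevsky(eta + theta) - lobachevsky(eta - theta)
    return 0.25 * (D(a) + D(b) + D(c) - 2.0 * D(math.pi / 2) - D(0.0))

def lobell_volume(n):
    """The Loebell orbifold volume formula (Equation EQ_LOB_VOL above)."""
    theta = math.pi / 2 - math.acos(1.0 / (2.0 * math.cos(math.pi / n)))
    return (n / 2.0) * (2.0 * lobachevsky(theta)
                        + lobachevsky(theta + math.pi / n)
                        + lobachevsky(theta - math.pi / n)
                        - lobachevsky(2.0 * theta + math.pi / 2))
```

With these, `lambert_cube_volume(math.pi/3, math.pi/3, math.pi/3)` and `lobell_volume(6)` agree with the tabulated theoretical volumes $0.3244234492$ and $6.0230460200$ to within the accuracy of the quadrature.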
\subsection{Spectral Staircases} For a given hyperbolic manifold or orbifold $M$, the ``spectral staircase'' is a plot of the number of closed geodesics of length less than $l$, which we denote $N(l)$, versus $l$. (In fact, it is much more common to plot $\log(N(l))$ due to the exponential growth predicted by (\ref{EQ_MARG}) below.) The spectral staircase provides both a nice way to graphically display the spectrum of $M$ and an illustration of the classical result of Margulis \cite{MARGULIS}, who proved the following universal formula for the asymptotics of $N(l)$: \begin{eqnarray}\label{EQ_MARG} N(l) \sim \frac{e^{\tau l}}{\tau l} \mbox{ as } l \rightarrow \infty \end{eqnarray} \noindent where the constant $\tau$ is the topological entropy, which for hyperbolic space $\mathbb{H}^d$ is given by $\tau = d-1$. For an exposition and some nice experimental work on spectral staircases, see \cite{INOUE} and the references therein. We compute the spectral staircases for $O_{\rm Lambert}(3,4,5)$, $O_{\rm L\ddot{o}bell}(6)$, and $O_{MV}$, displaying the results in Figure \ref{SPEC_STAIR}. (The data for $O_{\rm Lambert}(3,4,5)$ ends at roughly $l = 3.8$. SnapPea encounters an error when computing at this length, probably due to the comparatively small dihedral angles of $O_{\rm Lambert}(3,4,5)$.) \begin{figure} \includegraphics[scale = 0.90]{./spectral_stair.ps} \caption{Spectral staircases for $O_{\rm Lambert}(3,4,5)$ (solid line), $O_{\rm L\ddot{o}bell}(6)$ (dotted line), and $O_{MV}$ (dashed line).}\label{SPEC_STAIR} \end{figure} \section{Questions for further study} We present a non-comprehensive list of interesting questions for further study: \begin{itemize} \item[1] Determine if there is a faster way of computing Andreev Polyhedra. \item[1b] Related question: determine if CirclePack \cite{CIRCLE_PACK} can be used to construct compact hyperbolic polyhedra and their reflection groups. If so, it would provide a faster method of construction.
\item[2] Construct manifold covers of the polyhedral orbifolds that were considered in Section \ref{SEC_APPLICATIONS}, including the L\"obell Manifolds \cite{VESNIN_LOB}, the ``Small Covers of the Dodecahedron'' \cite{SMALL_COVERS}, and the Hyperelliptic Manifold \cite{HYPER_ELL}. Such a construction could potentially lead to computations of many additional interesting invariants of these manifolds using SnapPea, as well as to drilling and Dehn fillings on them (which would also be possible in SnapPea). \item[2b] Related question: use the program SNAP \cite{SNAP} to compute arithmetic invariants for these manifolds. \item[2c] Related question: using SNAP, or the ideas used in SNAP \cite{SNAP_PAPER}, study the arithmetic invariants of polyhedral reflection groups. \item[3] Perform a study of volumes of hyperbolic polyhedra corresponding to general angles in $A_C$. (While SnapPea computes volumes only for polyhedra with discrete reflection groups, the functions from the SnapPea kernel could probably be used for this more general study.) \end{itemize}
\section*{Introduction} In the beginning, \emph{$V$-enriched categories} were defined for $V$ a monoidal category (see for instance \cite{kelly}), showing that the compositional structure of a category could be expressed in settings where the collection $\Hom(a,b)$ of arrows from $a$ to $b$ is not (necessarily) a set but rather some other type of mathematical object. \begin{defn*} For $(V,\otimes,I)$ a monoidal category, a $V$-enriched category consists of \begin{itemize} \item a collection $A$ of objects \item for each $a,b \in A$ an object $\Homm(a,b)$ of $V$ \item for each $a \in A$ a morphism $I \to \Homm(a,a)$ in $V$ \item for each $a,b,c \in A$, a morphism $\Homm(a,b) \otimes \Homm(b,c) \to \Homm(a,c)$ in $V$ \end{itemize} subject to certain unit and associativity equations. \end{defn*} This is useful not only for including in category theory the common feature of a category's Hom-sets admitting the extra structure of a group or space or category or whatnot, but also for allowing the defining features of categories to apply in settings that would otherwise seem entirely unrelated: famously, Lawvere showed that a slight generalization of metric spaces can be defined simply as categories in which $\Homm(a,b)$ is not a set but a non-negative real number representing the distance from $a$ to $b$. Since then, the theory of enrichment has expanded in three directions, each seeking to answer one of the following questions: \begin{enumerate} \item What structures, in lieu of categories, can \emph{be enriched}? \item What structures, in lieu of monoidal categories, can a structure \emph{be enriched in}? In other words, what are the possible \emph{bases of enrichment}? \item What are the different ways in which one fixed structure can be enriched in another fixed structure?
\end{enumerate} To the first question, many higher category structures such as $n$-categories and multicategories can be enriched in a symmetric monoidal category $V$, replacing the sets of top-dimensional cells with fixed boundary by objects of $V$.\footnote{We have been unable to find references for enriched algebraic $n$-categories, though the idea is far from new. Enriched multicategories in this sense are discussed in \cite[Section 2]{ElmendorfMandell} among others, and specialize Leinster's earlier definition of enriched multicategories.} Leinster's theory defines enriched $T$-multicategories for any cartesian monad $T$ (see \cref{sec:multicat} for the definition of $T$-multicategories). In this paper, we define enriched $T$-algebras for any familial monad $T$, which includes enrichment of most algebraic categorical structures in common use (see \cref{sec:familial} for the definition of a familial monad). We take the position that the fundamental idea of enrichment is varying the nature of the upper-dimensional parts of any algebraic structure. The algebraic structures we are interested in are higher categories of various flavors, and in this framework any applicable structure is regarded as a type of higher category. Higher categories are typically defined using sets and functions, which can be replaced by objects and morphisms of a different category or category-like structure. Leinster's theory applies to a wide variety of higher categories which can be modeled as generalized multicategories: this includes categories, plain multicategories, and virtual double categories, but not $n$-categories, monoidal categories, or double categories. These latter three structures are among those we highlight as examples of enrichable familial monad algebras.
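Lawvere's metric-space example from above can be made entirely concrete: enriching in the poset $([0,\infty],\geq)$ with tensor $+$ and unit $0$, the unit morphism $I \to \Homm(a,a)$ amounts to the condition $0 \geq d(a,a)$, and the composition morphism amounts to the triangle inequality $d(a,b) + d(b,c) \geq d(a,c)$. A minimal computational check on a finite example (the distance matrix below is our own illustration, not taken from the text):

```python
# A finite Lawvere metric space, viewed as a category enriched in the
# monoidal poset ([0, oo], >=, +, 0).  Homm(a, b) is the number d[a][b];
# the unit morphism I -> Homm(a, a) is the condition 0 >= d[a][a], and
# the composition morphism Homm(a,b) (x) Homm(b,c) -> Homm(a,c) is the
# triangle inequality.  (The matrix is an arbitrary example of ours.)

d = [[0.0, 1.0, 3.0],
     [2.0, 0.0, 2.5],
     [3.0, 1.5, 0.0]]   # need not be symmetric: a generalized metric

def is_lawvere_metric(d):
    n = len(d)
    units = all(d[a][a] == 0.0 for a in range(n))        # identities
    comps = all(d[a][b] + d[b][c] >= d[a][c]             # composition
                for a in range(n) for b in range(n) for c in range(n))
    return units and comps
```

The check fails exactly when some composite "morphism" is missing, i.e. when the triangle inequality is violated somewhere.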
To the second question, categories enriched in a multicategory are defined in \cite{LambekEnrichment}, similar to classical enrichment but with the identity and composition morphisms replaced by 0-to-1 and 2-to-1 morphisms in a multicategory. Categories enriched in a bicategory are described in \cite{BenBicat} (as ``polyads'') and \cite{variation}, among others. Leinster defines $T$-multicategories enriched in $T^+$-multicategories, where $T^+$ is a new cartesian monad built from $T$ whose algebras are $T$-multicategories (see \cref{sec:enrichedmulticats} for a definition of $T^+$). In this paper, we define $T$-algebras enriched in any $T$-multicategory $V$. In the same way as Leinster's theory (which it generalizes only slightly), this type of enrichment base \emph{feels} like the most general possible structure in which $T$-algebras can be enriched such that the associativity equations hold up to equality. It would be difficult to precisely state, let alone prove, this kind of statement, but we hope that, as with Leinster's theory, this feeling comes to be shared by many others. There are also ``weak'' analogues of enrichment, such as categories enriched in a monoidal bicategory where the usual equations hold up to higher cells (called ``enriched bicategories'' in \cite{enrichedbicat}).\footnote{Note that this is orthogonal to the previous notion of categories enriched in a bicategory. The former treats monoidal categories as 1-object bicategories while the latter treats them as locally discrete bicategories.} While it is not the main focus of this paper, as it tends to work in mostly the same way across the various other types of enrichment definitions, we very briefly discuss in \cref{weakenrichment} how one could weaken our definitions of enrichment so that the relevant equations only hold up to higher isomorphisms.
While $T$-multicategories are the most general type of enrichment base for $T$-algebras we consider, we also emphasize a range of specializations of $T$-multicategories. We describe (in \cref{sec:multicat}) $T$-multicategories which are \emph{trivial}, \emph{discrete}, and/or \emph{representable} relative to some restriction of the cell shapes in $T$-algebras, and discuss how enrichment in these settings can preserve additional structure and/or recover existing notions of enrichment in the literature. For example, for $\fc$ the free category monad on graphs: \begin{itemize} \item $\fc$-multicategories are virtual double categories \item representable $\fc$-multicategories are pseudo-double categories \item 0-discrete representable $\fc$-multicategories are bicategories \item 0-trivial $\fc$-multicategories are plain multicategories \item 0-trivial representable $\fc$-multicategories are monoidal categories \end{itemize} A recurring theme in our examples is the prevalence of monoidal double categories as a convenient base of enrichment. Double categories arise as the representable $\fc$-multicategories, and monoidal double categories are the representable $\MC$-multicategories for $\MC$ the free monoidal category monad on graphs (see \cref{fmc}), so monoidal double categories are natural bases of enrichment for categories and monoidal categories. Monoidal double categories also provide examples of Leinster's $fm$-multicategories for $fm$ the free multicategory monad (\cref{enrichedplainmulticats}). We also show in \cref{verticallytrivialrep} and \cref{vertenricheddoublecats} that monoidal double categories are a natural base for a certain form of enrichment of double categories. We list again the various structures enrichable in a monoidal double category in \cref{app:mdcat}.
To the third question, a major emphasis in this paper is placed on the relevance, when enriching structures more complex than categories, of how much low-dimensional information in an enrichable structure should exist or be preserved in the enrichment base. In our recurring example, we emphasize this choice when enriching monoidal categories in a monoidal double category $V$: the underlying monoid structure on the objects can be preserved strictly, weakly, or laxly in $V$, with only lax preservation guaranteed by the most general definition of enrichment. Unlike many mathematical texts which introduce definitions in order to prove theorems, in this paper we prove results in order to introduce definitions. We believe that a general notion of enrichment for a variety of higher category structures is of independent interest, and so the provable claims we make are either to support the existence of the elements of that definition or show that in examples the definition recovers existing forms of enrichment. We leave the development of more theory for these enriched structures (along the lines of \cite{kelly}) to future work, though we have shown in \cite{mythesis} that in many circumstances the category of $T$-algebras enriched in the cartesian monoidal category of $T'$-algebras is equivalent to the Eilenberg-Moore category of yet another familial monad, which implies various convenient properties. We describe in \cite{adaptives} (joint with David Spivak) several examples of structures enriched in a monoidal double category, in the settings of machine learning and probability. The novel technical advances presented here are admittedly modest. Most of the supporting definitions we include (such as $T$-multicategories, triviality, discreteness, representability, $T^+$) are covered or alluded to in \cite{leinster} or \cite{LeinsterEnrichment}, as are many of the supporting propositions. 
Our main contribution is to extend to all algebraic higher categories Leinster's definition of enrichment for generalized multicategories by incorporating the idea of higher and lower dimensional cell shapes via a generalized construction of ``indiscrete'' higher dimensional structures from lower dimensional structures, and describing in detail how it works for various new examples (most notably monoidal categories and double categories). Our hope is that this self-contained presentation of enrichment, leveraging recent approaches to analyzing familial monads and covering old and new examples, will provide a useful reference for a wide variety of enrichment practitioners. \subsection*{Notation} $\one$ denotes the terminal category, while $\two$ denotes the category $0 \to 1$. For $\C$ a small category, $\ch$ denotes the category $\Set^{\C^{op}}$ of presheaves on $\C$. $*$ denotes the terminal presheaf in $\ch$. All of the familial monad algebras and enriched structures we describe here will be small in the set theoretic sense, which we will not specify each time. Large enriched structures can be defined similarly without much additional work, but we do not discuss that any further. \subsection*{Acknowledgements} We would like to thank David Spivak for many enlightening conversations, and Marcelo Aguiar for sharing his previous work on duoidal enrichment of monoidal/2-/double categories (\cite{AguiarEnrichment}). This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-20-1-0348. \section{Familial Monads}\label{sec:familial} \begin{defn}\label{def:familial} A \emph{familially representable} (or simply \emph{familial}) monad $T$ on a presheaf category $\ch$ is a cartesian monad with functor part $$TX_c = \coprod_{t \in T(1)_c} \Hom_\ch(T[t],X)$$ for each $c$ in $\C$, where $T(1) : \C^{op} \to \Set$ and $T[-] : \sint T(1) \to \ch$.
\end{defn} A $T$-algebra is a presheaf $A$ in $\ch$ equipped with a map $h : TA \to A$ that, for each $t \in T(1)_c$, sends $a : T[t] \to A$ to $h(a) \in A_c$. Hence $h$ can be viewed as ``composing'' the arrangement of cells $a$ into a single $c$-cell in $A$. Accordingly, we consider $t \in T(1)_c$ as an ``operation'' outputting a $c$-cell, and $T[t]$ as its ``arity.'' The monad structure on $T$ can be summarized by operations $\eta(c) \in T(1)_c$ for each $c$ in $\C$, and $\mu(t,f) \in T(1)_c$ for each $t \in T(1)_c$ and $f : T[t] \to T(1)$, with $$T[\eta(c)] \cong y(c) \qquad\qquad \textrm{and} \qquad\qquad T[\mu(t,f)] \cong \colim_{x \in T[t]} T[f(x)],$$ satisfying naturality, unit, and associativity equations (see \cite[Theorem 2.2]{representability}). \begin{ex}\label{fmc} Consider the category $\G_1 = \O \starrows \I$. $\widehat{\G_1}$ is the category of graphs $X$, where $X_0$ is the set of vertices and $X_1$ the set of edges, each edge equipped with a source and target vertex. The standard example of a familial monad is the free category monad $\fc$ on $\widehat{\G_1}$, which is the identity on vertices and has arities given by the walking length $n$ path graph for all $n \ge 0$ (see \cite[Example 1.2]{representability} for more details). Our main example of a familial monad, however, will be the free \emph{monoidal} category monad $\MC$ on graphs.\footnote{To be clear, this is the free strict monoidal category monad.
There are also free weak and/or symmetric monoidal category monads on graphs, but we find this one the most illustrative for our definitions of enrichment.} Combining in a sense (which can be made precise in several different ways) the free monoid monad on sets and the free category monad on graphs, we define $\MC$ via its representing family $(\MC(1),\MC[-])$ as follows: \begin{itemize} \item $\MC(1)_0 = \nats$, and $\MC(1)_1 = \coprod\limits_{n \in \nats} \nats^n$, with both the source and target maps $\MC(1)_1 \to \MC(1)_0$ sending $(n,m_1,...,m_n)$ to $n$ \item $\MC[n]$ is the graph with $n$ vertices and no edges, and $\MC[n;m_1,...,m_n]$ is the disjoint union of walking paths of length $m_i$ for $1 \le i \le n$. $\MC[n]$ includes into this graph as the $n$ source vertices or as the $n$ target vertices, completing the definition of $\MC[-] : \sint \MC(1) \to \widehat{\G_1}$ \item The monad structure on the functor $$\MC X_0 = \coprod_{n \in \MC(1)_0} \Hom(\MC[n],X), \qquad \MC X_1 = \coprod_{(n;m_1,...,m_n) \in \MC(1)_1} \Hom(\MC[n;m_1,...,m_n],X)$$ is described by identifying the operations $1 \in \MC(1)_0$ and $(1;1) \in \MC(1)_1$ with identities on the 0-cells and 1-cells of $X$, and by observing that replacing each edge in $\MC[n;m_1,...,m_n]$ with a disjoint union of paths in a compatible way (as in, adjacent edges are plugged with the same number of disjoint paths) yields a new, usually larger disjoint union of usually longer paths. \end{itemize} $\MC$-algebras are precisely (small) strict monoidal categories, in which objects can be composed as in a monoid and arrows can be composed either from paths, as in a category, or from potentially disjoint pairs as in a monoidal category. The unique operations for each disjoint union of paths of any length witness that any such arrangement has a unique composite, which encodes the strict unit, associativity, and interchange equations of a monoidal category.
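To make the multiplication concrete, here is one instance (a sketch of our own, not spelled out above):

```latex
% Take t = (1;2), composing a path of two edges into a single edge,
% and let f : \MC[1;2] \to \MC(1) send both edges to the tensoring
% operation (2;1,1) and all vertices to 2 \in \MC(1)_0. Then
\mu\big((1;2), f\big) = (2;2,2),
\qquad
\MC[(2;2,2)] \cong \colim_{x \in \MC[1;2]} \MC[f(x)],
% the disjoint union of two walking paths of length 2. On an
% \MC-algebra, uniqueness of the composite along this operation
% encodes the interchange equation
(g' \otimes h') \circ (g \otimes h) = (g' \circ g) \otimes (h' \circ h).
```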
\end{ex} \begin{ex} Let $\fdc$ denote the free double category monad on double graphs, which are presheaves over $\G_1 \times \G_1$. In a double graph $X$, $X_{0,0}$ is the set of vertices, $X_{1,0}$ is the set of horizontal edges, $X_{0,1}$ is the set of vertical edges, and $X_{1,1}$ is the set of squares between parallel pairs of horizontal and vertical edges. Each double graph has a ``horizontal underlying graph'' and a ``vertical underlying graph'' given by the vertices and horizontal or vertical edges, respectively. $\fdc$ acts as the free category monad on both $(X_{0,0},X_{1,0})$ and $(X_{0,0},X_{0,1})$, and inserts a square for every $n \times m$ grid of squares in $X$. In other words, $\fdc(1)_{i,j} = \nats^{i+j}$, $\fdc[() \in \fdc(1)_{0,0}]$ is a vertex, $\fdc[n \in \fdc(1)_{1,0}]$ is the string of $n$ composable horizontal edges, $\fdc[m \in \fdc(1)_{0,1}]$ is the string of $m$ composable vertical edges, and $\fdc[n,m]$ is the walking $n \times m$ grid of squares. \end{ex} \begin{ex} Recall that a \emph{duoid} is a set with two monoid structures (and no assumed compatibility between them). A \emph{duoidal category} is a category $\M$ equipped with two monoidal structures $(\diamond,I)$ and $(\star,J)$ such that $J : \one \to \M$ and $\star : \M \times \M \to \M$ are lax monoidal functors with respect to $(\diamond,I)$ and the coherence transformations are $(\diamond,I)$-monoidal. (We will assume the monoidal structures are strict, but there is an analogous definition when they are weak.)
This lax monoidality means that, for instance, there is a not-necessarily-invertible morphism $I \to J$, a morphism $J \diamond J \to J$, a morphism $I \to I \star I$, and the interchange equation holds only up to a morphism $$(a \star b) \diamond (c \star d) \to (a \diamond c) \star (b \diamond d).$$ Duoidal categories are algebras for a familial monad $\fduc$ on graphs, where $\fduc(1)_0$ is the free duoid on one object with arities sending each word to its set of variables, and $\fduc(1)_1$ is the set consisting of pairs of words in the free duoid on $\nats$ such that the second word is reachable by a finite sequence of the basic non-invertible duoidal structure maps of the four forms above. The arity of such a pair is the disjoint union of string graphs whose lengths are given by the natural numbers in the first word (which are the same as those in the second word, though perhaps in a different order). \end{ex} \subsection{Restriction of cell shapes} Some documented forms of enrichment for higher dimensional structures include not just sets of lower dimensional cells, as in classical enriched categories, but lower dimensional algebraic structures. Enriched 2-categories, for instance, could be defined as a 1-category with an object of some $V$ for each pair of adjacent arrows, equipped with horizontal and vertical identity and composite maps. To facilitate this sort of definition, we describe how to extract these lower dimensional algebraic structures from a familial monad $T$ and related constructions. \begin{defn} A subcategory $\D$ of $\C$ is a \emph{restriction of cell shapes} if it is full and downward closed, meaning that for any $i : c' \to c$ in $\C$, if $c$ is in $\D$ then so is $c'$. \end{defn} The goal of this definition is to codify the properties we will need for a meaningful distinction between ``lower dimensional'' and ``higher dimensional'' cell shapes.
Restrictions of cell shapes are sometimes called \emph{cribles} and are equivalent to functors $\C \to \two$ by taking the fiber over $0$. We will write $u : \D \to \C$ for the inclusion of such a subcategory. There is an adjunction between $\ch$ and $\dh$ where the right adjoint $u^\ast$ restricts a presheaf on $\C$ to its $d$-cells for $d$ in $\D$, and the left adjoint $u_!$ sends a presheaf on $\D$ to the presheaf on $\C$ with no $c$-cells for all $c \in \Ob(\C) \backslash \Ob(\D)$. We say a presheaf in the image of $u_!$ (meaning one with no $c$-cells for $c$ not in $\D$) ``arises from $\dh$.'' \begin{ex} $u : \G_0 \to \G_1$ is a restriction of cell shapes, where $\G_0$ is the terminal category containing the object 0 in $\G_1$. $\widehat{\G_0}$ is then the category of sets. $u^\ast$ sends a graph to its set of vertices, and $u_!$ sends a set to the graph with that set of vertices and no edges. \end{ex} \begin{ex}\label{endpoints} Generalizing the above example, an object of $\C$ is an \emph{endpoint object} if it has no non-identity outgoing morphisms. A collection of endpoint objects in $\C$ determines a restriction of cell shapes given by the full subcategory of $\C$ on all of its other objects. An endpoint object is a ``top dimensional cell shape'' in a higher category structure, such as the arrow in categories, the $n$-cell in $n$-categories, and the $n$-to-1 arrow in a multicategory for all $n$. \end{ex} \begin{defn} A familial monad on $\ch$ is \emph{$\D$-graded} for $\D$ a restriction of cell shapes if for all $d$ in $\D$ and $t \in T(1)_d$, $T[t]$ arises from $\dh$. \end{defn} \begin{prop}\label{restriction_monad} Given a $\D$-graded familial monad $T$ on $\ch$, there is a familial monad $T_\D$ on $\dh$ with $T_\D(1) = u^\ast T(1)$ and $$T_\D[-] : \sint u^\ast T(1) \to \sint T(1) \atol{T[-]} \ch \atol{u^\ast} \dh.$$ \end{prop} \begin{proof} This follows from the straightforward observation that for any $X$ in $\ch$, $T_\D u^\ast X = u^\ast TX$.
As $u^\ast$ is a right adjoint and $u^\ast u_! Y = Y$ for $Y$ in $\dh$, this shows that $T_\D Y = u^\ast T u_! Y$, so $T_\D$ is the transport of $T$ along the adjunction $u^\ast \vdash u_!$. \end{proof} \begin{ex} The monad $\MC$ on $\widehat{\G_1}$ is $\G_0$-graded, as all of the arities of operations in $\MC(1)_0$ have no edges. The monad $\MC_{\G_0}$ is easily seen to be the free monoid monad on sets, which we write as simply $\MC_0$. \end{ex} $u^\ast$ also has a right adjoint $u_\ast : \dh \to \ch$, where $u_\ast X_c = \Hom_\dh(u^\ast y(c), X)$. When $d$ is in $\D$, $u_\ast X_d \cong X_d$, and otherwise $u_\ast X_c$ has a single $c$-cell in every position where a $c$-cell could possibly be inserted into $u^\ast X$ (that is, for each map $u^\ast y(c) \to u^\ast X$). \begin{prop} When $T$ is $\D$-graded, $u^\ast : \ch \to \dh$ and $u_\ast : \dh \to \ch$ both lift to functors between $T$-algebras and $T_\D$-algebras. \end{prop} \begin{proof} As noted in the proof above, for a $T$-algebra $X$ we have $T_\D u^\ast X = u^\ast TX$, so the $T_\D$-algebra structure on $u^\ast X$ is given by applying $u^\ast$ to the structure map $TX \to X$ and the algebra equations follow easily. Given a $T_\D$-algebra $Y$, we need to define a structure map $Tu_\ast Y \to u_\ast Y$, which for each $c$ in $\C$ unwinds to $$\coprod_{t \in T(1)_c} \Hom_\ch(T[t],u_\ast Y) \to (u_\ast Y)_c.$$ Applying the adjunction $u_\ast \vdash u^\ast$ equates this map with one of the form $$\coprod_{t \in T(1)_c} \Hom_\dh(u^\ast T[t], Y) \to (u_\ast Y)_c.$$ When $c$ is in $\D$, each $u^\ast T[t] = T_\D[t]$ and $(u_\ast Y)_c = Y_c$, so this map can be chosen to be the $c$-component of the $T_\D$-structure map on $Y$. When $c$ is not in $\D$, then any map $u^\ast T[t] \to Y$ precomposes with maps of the form $T_\D[i^\ast t] = u^\ast T[i^\ast t] \to u^\ast T[t]$ for $i : d \to c$ in $\C$ and $d$ in $\D$.
This composite map $T_\D[i^\ast t] \to Y$ can be composed by the algebra structure into a $d$-cell in $Y$, and together (ranging over the different possible such $i$) these cells in $Y$ form the boundary of a potential $c$-cell in $u_\ast Y$. But there is exactly one $c$-cell in $u_\ast Y$ with that boundary, so the composite of the map $u^\ast T[t] \to Y$ in $(u_\ast Y)_c$ is uniquely determined. The algebra equations for this structure map then derive from those for the $T_\D$-algebra $Y$ and the uniqueness of $c$-cells in $u_\ast Y$ relative to their boundaries. \end{proof} \begin{rem} While $u^\ast$ as a functor between categories of algebras also has a left adjoint, it is not given by $u_!$ on the underlying presheaves unless there is no $t \in T(1)_c$ for $c$ not in $\D$ such that $T[t]$ arises from $\dh$. If such a $t$ exists, then for each such $t$ and map $T[t] \to Y$, a $c$-cell must be added to $u_! Y$ in order to define an algebra structure. \end{rem} \begin{ex} $u_\ast : \widehat{\G_0} \to \widehat{\G_1}$ sends a set to the indiscrete graph on that set, with a single edge in each direction between any pair of vertices. As functors on algebras for $\MC$ and $\MC_0$: \begin{itemize} \item $u^\ast$ sends a strict monoidal category to its underlying monoid of objects \item $u_\ast$ sends a monoid to the indiscrete category on its underlying set, with the unique monoidal structure induced by the monoid structure on the objects. In particular, for the unique morphisms $a \to b$ and $a' \to b'$, there is a unique morphism $a \otimes a' \to b \otimes b'$, so the choice of their product is determined. \end{itemize} \end{ex} \begin{ex} Similarly, $\fduc_0$ is the free duoid monad on sets, the restriction functor $u^\ast$ on algebras sends a duoidal category to its underlying duoid, and $u_\ast$ sends a duoid to the indiscrete category on its objects, which inherits a canonical duoidal structure (albeit one that arises from a symmetric monoidal groupoid).
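To sketch why this duoidal structure is canonical (our own elaboration): every required structure map is the unique morphism between its endpoints, for instance

```latex
% In u_* D for a duoid D, there is exactly one morphism between any
% two objects, so the duoidal coherence data
I \to J, \qquad J \diamond J \to J, \qquad I \to I \star I,
% and the interchange map
(a \star b) \diamond (c \star d) \to (a \diamond c) \star (b \diamond d)
% are each uniquely determined, and the duoidal equations hold
% automatically by uniqueness of morphisms.
```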
\end{ex} \begin{ex} Let $\G_1 \vee \G_1$ denote the full subcategory of $\G_1 \times \G_1$ spanned by $(0,0),(1,0),(0,1)$. The free double category monad $\fdc$ on double graphs is $(\G_1 \vee \G_1)$-graded, where presheaves on $\G_1 \vee \G_1$ are graphs with a single set of vertices but two distinct types of edges. This is because the operations in $\fdc(1)$ that output vertices or edges of either type do not have any squares in their arities, which are all strings of edges. The restriction of $\fdc$ to $\G_1 \vee \G_1$ from \cref{restriction_monad} is a monad whose algebras are pairs of categories with the same objects, and the restriction functor on algebras merely forgets the squares of a double category, retaining only the underlying pair of categories. \end{ex} \begin{ex} $\G_1 \times \G_1$ also has a full subcategory $0 \times \G_1$ spanned by $(0,0),(0,1)$, isomorphic to $\G_1$. For this subcategory the ``vertical'' restriction of $\fdc$ is the free category monad on graphs, and the restriction functor on algebras sends a double category to its vertical category. Given a category $A$, $u_\ast A$ is the double category with a unique horizontal arrow between each pair of objects and a unique square between each pair of (now vertical) arrows in $A$. \end{ex} \section{$T$-Multicategories}\label{sec:multicat} We now define $T$-multicategories and discuss their relationship with $T$-algebras. More details and references can be found in \cite{LeinsterEnrichment} and \cite{leinster}. Fix a familial monad $T$ on $\ch$.
\begin{defn}\label{def:multicat} A $T$-multicategory $V = (V_0,V_1)$ is a span in $\ch$ of the form \bctikz & \ar{dl}[swap]{dom} V_1 \ar{dr}{cod} \\ TV_0 & & V_0 \ectikz equipped with unit and multiplication maps (fixing $TV_0$ and $V_0$) from the following two spans \bctikz & \ar{dl}[swap]{\eta} V_0 \ar[equals]{dr} \\ TV_0 & & V_0 \ntikz & \ar{dl}[swap]{\mu \circ T(dom) \circ \pi_1} TV_1 \times_{TV_0} V_1 \ar{dr}{cod \circ \pi_2} \\ TV_0 & & V_0 \ectikz satisfying unit and associativity equations. \end{defn} Unwinding this definition, a $T$-multicategory consists of: \begin{itemize} \item $V_0$ in $\ch$, whose elements in $(V_0)_c$ can be regarded as $c$-cells \item for each $c$ in $\C$ and $t \in T(1)_c$, cells (in $(V_1)_c$) resembling ``arrows'' from a map $T[t] \to V_0$ to a $c$-cell in $V_0$, which we call $t$-arrows \item for each $i : c' \to c$ in $\C$ and $t \in T(1)_c$, every $t$-arrow restricts along $i$ to a $T(1)_i t$-arrow. There is hence a canonical map $V_1 \to T(1)$ in $\ch$ sending all $t$-arrows to $t \in T(1)_c$ \item for each $c$ in $\C$ and $a \in (V_0)_c$, an identity $\eta(c)$-arrow from $a$ to $a$ \item for each $c$ in $\C$, $t \in T(1)_c$, $f : T[t] \to T(1)$, $t$-arrow $\alpha$, and $\beta: T[t] \to V_1$ commuting over $T(1)$, a $\mu(t,f)$-arrow from the combined domains of $\beta$ in $V_0$ to the codomain of $\alpha$ \item these identities and composites satisfy unit and associativity equations \end{itemize} For convenience, for $c$ in $\C$ we write $V^c$ for $(V_0)_c$ and for $t \in T(1)_c$ we write $V^t$ for the subset of $(V_1)_c$ containing the $t$-arrows. $V$ can in fact be regarded as a presheaf over a category $\C^{+T}$ which is an algebra for a familial monad $T^+$ on $\widehat{\C^{+T}}$, which we discuss in \cref{sec:enrichedmulticats}. \begin{prop} For $u : \D \to \C$ a restriction of cell shapes, when $T$ is $\D$-graded a $T$-multicategory $V = (V_0,V_1)$ restricts to a $T_\D$-multicategory $u^\ast V = (u^\ast V_0, u^\ast V_1)$.
\end{prop} \begin{proof} This is a straightforward observation from the unwound definition above, as if $T$ is $\D$-graded the identities and composites of $d$-cells in $V$ and $t$-arrows for $d$ in $\D$ and $t \in T(1)_d$ are unaffected by forgetting the higher dimensional cell shapes and operations. \end{proof} \begin{ex} Recall (from, for instance, \cite[Section 2.1]{LeinsterEnrichment}) that an $\fc$-multicategory is what is sometimes called a virtual double category: a structure resembling a double category but without horizontal composition of horizontal morphisms or squares. Instead of horizontal composition, squares can have a string of adjacent horizontal morphisms in the top row as in \eqref{vdoub} on the right, and many such horizontally adjacent squares can be composed vertically above a single square: in \eqref{vdoub}, the arrangement on the left composes into a 3-to-1 square as on the right: \begin{equation}\label{vdoub}\begin{tikzcd}[column sep={40,between origins}] \mdot \dar \ar[slash]{r} & \mdot \ar[slash]{r} & \mdot \dar \ar[slash, ""{name=D, below}]{r} & \mdot \dar & & \mdot \ar[slash]{r} \ar{dd} & \mdot \ar[slash,""{name=F, below}]{r} & \mdot \ar[slash]{r} & \mdot \ar{dd} \\ \mdot \dar \ar[slash, ""{name=A, above}]{rr} & {\color{white}{\mdot}} \ar[phantom, ""{name=B, below}]{r} & \mdot \ar[slash, ""{name=E, above}]{r} & \mdot \dar \\ \mdot \ar[slash, ""{name=C, above}]{rrr} & & & \mdot & & \mdot \ar[slash,""{name=G, above}]{rrr} & & & \mdot \arrow[Rightarrow,shorten=4,from=1-2,to=A] \arrow[Rightarrow,shorten=5,from=B,to=C] \arrow[Rightarrow,shorten=5,from=D,to=E] \arrow[Rightarrow,shorten=5,from=F,to=G] \end{tikzcd}\end{equation} As an $\fc$-multicategory, the $\O$- and $\I$-cells are the objects and horizontal morphisms of the virtual double category, the $\eta(\O)$-arrows are the vertical morphisms, the $n$-arrows for $n \in \fc(1)_1$ are the $n$-to-1 squares, and the composites and identities agree with the composition and vertical
identities in a virtual double category. Under both descriptions of this structure, the unit and associativity equations are the same. \end{ex} \begin{rem} To make the choice of directions unambiguous when there are many different types of arrows present in a $T$-multicategory, we will always say that the $t$-arrows point in the ``forward'' direction. So in a virtual double category, the vertical morphisms and $n$-to-1 squares all point in the forward direction in this terminology. \end{rem} \begin{ex} An $\MC$-multicategory consists of the following: \begin{itemize} \item A graph $V_0$ with vertex set $V^\O$ and edge set $V^\I$ \item For each $n \in \MC(1)_0$, a set $V^n$ of arrows from a list of $n$ vertices in $V^\O$ to a single vertex. These arrows resemble the many-to-1 arrows in a multicategory \item For each $(n;m_1,...,m_n) \in \MC(1)_1$, a set $V^{n;m_1,...,m_n}$ of arrows from an arrangement of paths of length $m_i$ in $V_0$ for $1 \le i \le n$ to a single edge. These arrows have source and target $n$-to-1 arrows in $V^n$. When $n=1$, these resemble the many-to-1 squares in a virtual double category, and in general we call them ``many-to-1 multi-squares'' \item For each $x \in V^\O$, an identity arrow from $x$ to $x$ in $V^1$ \item For each edge $x \in V^\I$, an identity arrow from $x$ to $x$ in $V^{1;1}$ \item Compositions of many-to-1 arrows between vertices that form a multicategory \item Arrows from arrangements of paths to an edge have compositional structure similar to that of a virtual double category, which respects the multicategory structure on their sources and targets \end{itemize} In other words, it consists of a plain multicategory, a set of edges between the objects of that multicategory, and a virtual-double-category-like structure on those edges. When those edges have the structure of a category that extends to the ``squares,'' this looks like the double multicategories of \cite[Definition 3.10]{doublemulti}. 
Given such an $\MC$-multicategory $V$, $u^\ast V$ is its underlying multicategory on the vertices $V^\O$, which is precisely an $\MC_0$-multicategory (\cite[Example 4.2.7]{leinster}). \end{ex} \begin{ex} An $\fdc$-multicategory $V$ consists of: \begin{itemize} \item A double graph $V_0$ \item A virtual double category $V_h$ whose underlying graph is the horizontal underlying graph of $V_0$ \item A virtual double category $V_v$ whose underlying graph is the vertical underlying graph of $V_0$ \item ``(n,m)-arrows'' from an $n \times m$ grid of squares in $V_0$ to a single square in $V_0$, whose vertical source and target are squares in $V_h$ and whose horizontal source and target are squares in $V_v$ \item Identities and composites of these 3-dimensional arrows analogous to those in a virtual double category, respecting the identities and composites in $V_h,V_v$ and satisfying the usual unit and associativity axioms \end{itemize} It would also make sense to call this a ``virtual triple category.'' \end{ex} We now define $T$-multifunctors and transformations between them. \begin{defn} A \emph{$T$-multifunctor} is a map of spans determined by maps on $V_0$ and $V_1$, which commutes with identities and composites. \end{defn} Concretely, for $T$-multicategories $V,V'$, the map of spans amounts to maps $V^c \to V'^c$ and $V^t \to V'^t$ natural in $c$ and $t$. \begin{defn} For $T$-multifunctors $F,G : V \to V'$, a \emph{$T$-multinatural transformation} (or simply \emph{transformation}) is a natural assignment of, for all $t$-arrows in $V$ from $a : T[t] \to V_0$ to $b \in V^c$, a $t$-arrow in $V'$ from $Fa : T[t] \to V_0 \to V'_0$ to $Gb \in V'^c$. Here natural means that these arrows are closed under precomposition with arrows under $F$ and postcomposition with arrows under $G$. \end{defn} \subsection{Trivial $T$-multicategories} In \cref{sec:enrichment}, we define $T$-algebras enriched in a $T$-multicategory $V$.
In order to recover the classical definitions of enrichment of categories and other fundamental categorical structures, we describe in the next few subsections various specializations of the notion of $T$-multicategory that recover more familiar bases of enrichment, such as monoidal categories. This process is a generalization of \cite[Table 2.1]{LeinsterEnrichment}, where we provide definitions of the conditions ``vertically discrete'' (which we call 0-discrete), ``vertically trivial'' (which we call 0-trivial), ``representable,'' and ``uniformly representable'' listed in that table for the case when $T$ is familial. When $\fc$-multicategories are specialized to plain multicategories (see \cite[Example 2.1.1.v]{LeinsterEnrichment}), it is assumed that the sets of objects (or in our terminology, vertices) and vertical arrows (here $\eta(\O)$-arrows) have only one element. This reduces the dimension of the structure by allowing the set of horizontal arrows (here $\I$-cells), each of which has the unique object as its source and target, to instead be treated like objects and the many-to-one squares (here $n$-arrows) like many-to-one arrows. This process of imposing that there is only one cell and one arrow of certain lower-dimensional shapes in a $T$-multicategory applies broadly to specializing $T$-multicategories to more classical or classically-inspired bases of enrichment. \begin{defn} For $u : \D \to \C$ a restriction of cell shapes and $T$ a $\D$-graded familial monad on $\ch$, a $T$-multicategory $V$ is \emph{$\D$-trivial} if for each $d$ in $\D$, there is a unique $d$-cell and for each $t \in T(1)_d$, there is a unique $t$-arrow. When $V$ is $\C$-trivial, we say that $V$ is simply \emph{trivial}.
\end{defn} It is important to assume that $T$ is $\D$-graded, as otherwise for $t \in T(1)_d$ the arity $T[t]$ could contain $c$-cells for $c$ not in $\D$, so it would not be clear what the source diagram of the unique $t$-arrow in a $\D$-trivial $T$-multicategory should be. \begin{ex} For the restriction $\G_0 \to \G_1$, a $\G_0$-trivial $\fc$-multicategory, which we call simply 0-trivial, has a unique 0-cell and $\eta(\O)$-arrow. The only remaining pieces of data are the 1-cells and $n$-arrows, which form the objects and $n$-to-1 arrows of a plain multicategory as discussed above. \end{ex} \begin{ex} A 0-trivial $\MC$-multicategory is quite similar to a 0-trivial $\fc$-multicategory. In place of the $n$-to-1 squares in the latter that are interpreted as $n$-to-1 arrows in a plain multicategory, in a 0-trivial $\MC$-multicategory $V$ the set $V^{n;m_1,...,m_n}$ of $(m_1,...,m_n)$-to-1 multi-squares can be interpreted as $(m_1+\cdots+m_n)$-to-1 arrows. Unlike the multi-arrows in a plain multicategory however, the many 1-cells (interpreted as objects) are arranged in $n$ different rows rather than 1, and such a multi-arrow can only be precomposed with multi-arrows into each domain object such that the arrows into objects of the same row each have the same number of rows in their domains. \end{ex} \begin{ex} When $u : \D \to \C$ is a restriction of cell shapes away from a choice $e = \{e_k\}$ of endpoint objects (\cref{endpoints}) and $T$ is a $\D$-graded familial monad on $\ch$, $\D$-trivial $T$-multicategories are similar to plain multicategories: the objects are $e_k$-cells for all of the chosen endpoints $e_k$ in $\C$, and the many-to-1 arrows go from $e_k$-cells typed according to the $e$-cells in $T[t]$ to an $e_{k'}$-cell for $t \in T(1)_{e_{k'}}$. We say $T$ is \emph{finitary over $e$} if $T$ is $\D$-graded and for all $t \in T(1)_e$, $T[t]_e$ is finite.
By \cite[Corollary C.4.8]{leinster}, when $T$ is finitary over $e$ any symmetric multicategory forms a $\D$-trivial $T$-multicategory, albeit in a non-canonical way. \end{ex} \subsection{Discrete $T$-multicategories} For a $T$-algebra $h : TA \to A$, there is a corresponding $T$-multicategory $MA$ given by the span \bctikz & \ar[equals]{dl} TA \ar{dr}{h} \\ TA & & A. \ectikz This $T$-multicategory has a single $t$-arrow from $a$ to $h(a)$ for each $c$ in $\C$, $t \in T(1)_c$, and $a : T[t] \to A$, with identities and composites uniquely determined by their domains. This construction can be interpreted as replacing the algebraic structure of $A$ with more geometric ``witnesses'' in $MA$ in the form of the unique arrow from $a$ to $h(a)$ for each composable arrangement $a : T[t] \to A$. The algebraic structure in $MA$ is then comparatively simple, merely recognizing that witnesses to a nested composition in $A$ assemble into a witness of the corresponding total composition. $T$-multifunctors $MA \to V$, for $A$ a $T$-algebra, tend to behave like lax functors out of $A$ when $A$ is interpreted as a category-like structure. \begin{ex} Given a strict monoidal category $A$, $MA$ is the following $\MC$-multicategory: \begin{itemize} \item The vertices and edges are those of the underlying graph of $A$ \item There is an $n$-to-1 arrow from $a_1,...,a_n$ to $b$ precisely when $b = a_1 \otimes \cdots \otimes a_n$ \item There is an arrow from the paths $$a_{1,1} \to \cdots \to a_{1,m_1} \qquad \cdots \qquad a_{n,1} \to \cdots \to a_{n,m_n}$$ to the edge $b \to b'$ precisely when $b = a_{1,1} \otimes \cdots \otimes a_{n,1}$, $b' = a_{1,m_1} \otimes \cdots \otimes a_{n,m_n}$, and $b \to b'$ is the product of the composites of the paths \item Identities are given by the unique arrow from $a$ to $a$, where $a$ is any vertex or edge \item Each arrangement of paths has a unique outgoing arrow, which determines arrow composition.
This composition is possible because of the algebra structure on $A$ which ensures every arrangement has a composite \end{itemize} A $\MC$-multifunctor from $MA$ to $V$ then consists of a vertex $v_a$ of $V$ for each object of $A$, and $n$-to-1 cells from $v_{a_1},...,v_{a_n}$ to $v_b$ whenever $b = a_1 \otimes \cdots \otimes a_n$ (and similar structure for paths of morphisms). This is the sense in which this resembles a lax monoidal functor, as a statement like ``$v_b = v_{a_1} \otimes \cdots \otimes v_{a_n}$'' may not even make sense in $V$ but there can be an arrow from a list of objects to another object. Below we describe conditions on $V$ such that these statements actually do make sense. However, $\MC$-multifunctors $MA \to MA'$ precisely correspond to strict monoidal functors $A \to A'$, as the action on arrows ensures that the assignment on objects and morphisms respects composition and products. \end{ex} Just like the notion of triviality of a $T$-multicategory, $T$-algebras are characterized among $T$-multicategories by a uniqueness condition on the arrows of $V$. \begin{defn} For $u : \D \to \C$ a restriction of cell shapes, a $T$-multicategory $V$ is \emph{$\D$-discrete} if for each $d$ in $\D$, $t \in T(1)_d$, and $a : T[t] \to V_0$, there is a unique $t$-arrow with domain $a$. When $V$ is $\C$-discrete, we say that $V$ is simply \emph{discrete}. \end{defn} The name ``discrete'' corresponds to Leinster's notion of ``vertically discrete'' $\fc$-multicategories, emphasizing that the forward arrows to $\D$-cells in a discrete $T$-multicategory only encode $T_\D$-algebra structure (if there is any, unlike $\fc_0 = \id$), rather than additional data. \begin{prop} $V$ is $\D$-discrete if and only if $u^\ast V \cong MA$ for some $T_\D$-algebra $A$. 
\end{prop} \begin{proof} If $V$ is $\D$-discrete, $u^\ast V_1$ is isomorphic to $T_\D A$, and it is easy to see that such a $T_\D$-multicategory is isomorphic to one of the form $MA$, where $A = u^\ast V_0$ and the codomain map $$T_\D u^\ast V_0 \cong u^\ast V_1 \to u^\ast V_0$$ provides the algebra structure. The converse is immediate from the definition. \end{proof} \begin{ex} An $\MC$-multicategory $V$ is therefore discrete if it has the form $MA$ for $A$ a monoidal category, and $\G_0$-discrete (or simply 0-discrete) if there is a monoid structure on its vertices and it has only unique $n$-to-1 arrows from $a_1,...,a_n$ to the product $a_1 \cdots a_n$. \end{ex} \subsection{Representable $T$-multicategories} A $t$-arrow could be interpreted as a witness to the fact that its domain can compose into its codomain, as in the discrete $T$-multicategories, but in practice they often resemble something more like ``the composite of the domain maps into the codomain.'' We now make this latter interpretation precise by defining the corresponding conditions on a $T$-multicategory: weaker conditions in which each $a$ can be the domain of many $t$-arrows, but with one or more still distinguished as witness(es) to composition. \begin{defn} In a $T$-multicategory $V$, for $c$ in $\C$, $t \in T(1)_c$, and $a : T[t] \to V_0$, a \emph{universal $t$-arrow for $a$} is a $t$-arrow $h_a$ with domain $a$ such that every $t$-arrow with domain $a$ factors uniquely as $h_a$ composed with an $\eta(c)$-arrow with domain $cod(h_a)$. \end{defn} This definition, as well as the following one, should be compared with universal arrows in representable multicategories as in \cite{Hermida}, which resemble the arrows from $a_1,...,a_n$ to $a_1 \cdots a_n$ in a discrete $T$-multicategory but in a setting where there may still be additional arrows out of the list $a_1,...,a_n$, as in the classical underlying multicategory of a monoidal category.
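The discrete case just described admits a very concrete finite encoding. The following Python sketch (illustrative only, not part of the formal development; the choice of $T$ as the free monoid monad on sets and of the algebra $\mathbb{Z}/3$ is arbitrary) models the unique $n$-to-1 arrows of $MA$ together with the composition of witnesses forced by uniqueness:

```python
# Illustrative encoding (not from the paper): T = free monoid monad on sets,
# with T-algebra A = Z/3 under addition; h composes a word into its sum mod 3.
def h(word):
    return sum(word) % 3

# In the discrete T-multicategory MA there is exactly one n-to-1 arrow out of
# each word (a_1, ..., a_n): the witness (word, h(word)).
def arrows_out_of(word):
    return [(word, h(word))]

# Witnesses compose: witnesses for inner words combine with the witness for
# the word of their composites into the witness for the flattened word,
# mirroring that composites of arrows in MA are forced by uniqueness.
def compose(inner_words):
    flattened = tuple(a for w in inner_words for a in w)
    outer = tuple(h(w) for w in inner_words)
    assert h(outer) == h(flattened)  # the algebra structure makes this agree
    return (flattened, h(flattened))

# discreteness: every word has exactly one outgoing arrow
for word in [(0,), (1, 2), (2, 2, 2)]:
    assert len(arrows_out_of(word)) == 1
```

Uniqueness of the outgoing arrow is what makes the composition operation on witnesses trivial to define, exactly as in the proof above.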
\begin{defn} For $u : \D \to \C$ a restriction of cell shapes, a $T$-multicategory $V$ is \emph{$\D$-representable} if: \begin{itemize} \item for each $d$ in $\D$, $t \in T(1)_d$, and $a : T[t] \to V_0$, $a$ has a universal $t$-arrow \item universal arrows in $u^\ast V$ are closed under composition \end{itemize} $V$ is furthermore \emph{uniformly $\D$-representable} if each such $a$ is equipped with a choice of universal $t$-arrow $h_a$, closed under identities and composition. When $V$ is (uniformly) $\C$-representable, we say that $V$ is simply \emph{(uniformly) representable}. \end{defn} Note that identity arrows are automatically universal, and more generally universal arrows can be interpreted as a generalization of isomorphisms for $t$-arrows in $V$. Uniformly representable $T$-multicategories describe the situation of $t$-arrows corresponding to maps out of the composite of the domain, while representable $T$-multicategories provide a weaker version of this picture where an arrangement $a$ can have multiple valid composites, though all must be isomorphic. \begin{prop}\label{representable_has_algebra} A uniformly $\D$-representable $T$-multicategory $V$ induces a $T_\D$-algebra structure on $u^\ast V_0$. \end{prop} \begin{proof} For $t \in T(1)_d$, $h : T_\D u^\ast V_0 \to u^\ast V_0$ composes $a : T[t] \to V_0$ into the codomain of $h_a$, and these compositions respect $\eta$ and $\mu$ as the universal arrows $h_a$ are closed under identities and composition. \end{proof} \begin{ex}\label{representable_fc} As in \cite[Examples 2.1.1.i and 2.1.1.ii]{LeinsterEnrichment}, an $\fc$-multicategory $V$ is representable when there is a horizontal composition operation on 1-cells and squares making $V$ into a pseudo-double category, and uniformly representable when it is a strict double category. 
In the strict case, the choice of universal $n$-arrows for each string of $n$ adjacent 1-cells determines their composition, while horizontal composition of $n$ squares is induced by composing the codomain 1-cells in this manner and then factoring the resulting $n$-to-1 square through the universal $n$-arrow out of the domain 1-cells. When $V$ is merely representable, any choice of universal $n$-arrow for each $n$ adjacent 1-cells suffices to define the composition, which will be associative and unital up to (1-arrow) isomorphism as the universal $n$-arrows out of a fixed choice of $n$ adjacent 1-cells are unique up to isomorphism. \end{ex} \begin{ex} A uniformly representable $\MC$-multicategory is given by a strict monoidal double category: all of the $n$-to-1 arrows from $a_1,...,a_n$ to $b$ are given by vertical morphisms $a_1 \otimes \cdots \otimes a_n \to b$, and all of the arrows from an arrangement of paths to an edge are given by squares from the product of the composites of those paths to an edge. A representable $\MC$-multicategory corresponds to a weak monoidal pseudo-double category, where there may be many universal arrows out of a list of vertices $a_1,...,a_n$ but all are canonically isomorphic, with similar behavior for composites and products of edges. \end{ex} \begin{ex}\label{doublemulticats} The double multicategories of \cite[Definition 3.10]{doublemulti} don't fit neatly into these properties of a $\MC$-multicategory, because they are those which are ``uniformly representable in the category direction but not representable in the monoidal direction.'' In other words, they admit horizontal composition of 1-cells and multi-squares, but the multicategories of 0-cells and $n$-arrows, and of 1-cells and $(n;1,...,1)$-arrows do not arise from monoidal categories. However, double multicategories are precisely the uniformly representable $\fm$-multicategories for $\fm$ the free plain multicategory monad on graphs with many-to-1 arrows. 
As double multicategories are defined as categories internal to plain multicategories, this is a straightforward example of the following characterization of (uniformly) representable $T$-multicategories. \end{ex} \begin{prop}\label{representable_structured} Uniformly representable $T$-multicategories are equivalent to categories internal to $T$-algebras. \end{prop} This claim is analogous to the same result equating representable plain multicategories and monoidal categories in \cite[Section 9]{Hermida}. The proof is entirely analogous, but we sketch the correspondence below. The relationship between $T$-multicategories and categories internal to $T$-algebras (also called ``$T$-structured categories'') is discussed further in \cite[Section 6.6]{leinster}. \begin{proof} Given a $T$-structured category $(C_0,C_1)$, we can define its underlying uniformly representable $T$-multicategory as follows: \begin{itemize} \item its $c$-cells are the same as $C_0$ \item for $t \in T(1)_c$, its $t$-arrows from $a : T[t] \to C_0$ to $b \in (C_0)_c$ are the $c$-cells in $C_1$ with source the composite of $a$ and target $b$ \item identities and composites derive directly from the category structure on $(C_0,C_1)$, as a compatible arrangement of arrows corresponds to a $c$-cell in $C_1$ from the composite of $a : T[\mu(t,f)] \to C_0$ to the composite of $b : T[t] \to C_0$, which can be composed in the categorical (forward) direction with a $t$-arrow from the composite of $b$ to another $c$-cell \item the distinguished universal arrows are the $t$-arrows given by the identities in $C_1$ on the composite of $T[t] \to C_0$, which are closed under $T$-multicategory composition as identities are closed under composition in an internal category \end{itemize} Conversely, given a uniformly representable $T$-multicategory $V$, we define its corresponding $T$-structured category $(C_0,C_1)$ by: \begin{itemize} \item $C_0$ is precisely $V_0$, consisting of the $c$-cells of $V$, which forms a
$T$-algebra by \cref{representable_has_algebra} \item $C_1$ has $c$-cells given by the $\eta(c)$-arrows in $V$, with source and target $c$-cells in $C_0$ defined as the source and target of the arrow in $V$ \item the $T$-algebra structure on $C_1$ is defined as in \cref{representable_fc}, where $a : T[t] \to C_1$ is composed by first composing its target $t(a) : T[t] \to C_1 \to C_0$, where the distinguished universal arrow witnessing that composition composes with $a$ in $V$ into a $t$-arrow from $a$ to $t(a)$.\footnote{The notation $t(a)$ means the target of $a$; not to be confused with the operation $t \in T(1)_c$. We do not believe this unfortunate overload of notation occurs anywhere else in this paper.} This $t$-arrow then factors through the universal arrow out of $s(a)$ via an $\eta(c)$-arrow which we regard as the composite of $a$ \item identities are given by identity arrows in $V$ \item composition is given by composition in $V$ \end{itemize} \end{proof} Representable $T$-multicategories are equivalent to a weaker analogue of $T$-structured categories. This requires redefining the latter as algebras for a monad analogous to $T$ on the 2-category of $\Cat$-valued presheaves on $\C$. $T$-structured categories are the strict algebras for this 2-monad, while pseudo-algebras provide a notion of weak $T$-structured categories (as discussed in \cite[Section 6.6]{leinster}) which are equivalent to representable $T$-multicategories. When $T$ is the free monoid monad, this recovers the equivalence of monoidal categories and representable multicategories of \cite[Section 9]{Hermida}. \begin{ex} A uniformly representable $\fdc$-multicategory is by \cref{representable_structured} a strict triple category, and a representable $\fdc$-multicategory is a triple category which is weak in the horizontal and vertical directions, though strict in the forward direction. 
\end{ex} \begin{defn} For $\E \to \D$ a further restriction of cell shapes, a $T$-multifunctor between $\D$-representable $T$-multicategories is \emph{$\E$-strong} if it preserves universal arrows when restricted to $\E$, and a $T$-multifunctor between uniformly $\D$-representable $T$-multicategories is \emph{$\E$-strict} if it preserves the distinguished universal $t$-arrows $h_a$ when restricted to $\E$. We say such a $T$-multifunctor is simply \emph{strong} (resp. \emph{strict}) when it is $\D$-strong (resp. $\D$-strict). \end{defn} \begin{ex} For $V,V'$ weak monoidal pseudo-double categories, general $\MC$-multifunctors $V \to V'$ correspond to lax monoidal lax double functors, while strong $\MC$-multifunctors correspond to strong monoidal pseudo-double functors. When $V,V'$ are strict (in both senses), strict $\MC$-multifunctors correspond to strict monoidal double functors. Meanwhile, 0-strong (resp. 0-strict) $\MC$-multifunctors look like lax monoidal lax double functors which are strong (resp. strict) monoidal on objects. \end{ex} \subsection{Triviality plus representability as extra structure} The ultimate specialization of $\fc$-multicategories from \cite[Table 2.1]{LeinsterEnrichment} is the combination of ``vertically trivial'' and ``(uniformly) representable,'' which recovers (strict) monoidal categories and, ultimately, the classical definition of enriched categories. The combination of these two conditions generally produces lower-dimensional data (by $\D$-triviality) with extra algebraic structure (from representability) that often recovers bases of enrichment already present in the literature. \begin{ex} Completing the table in \cite[Table 2.1]{LeinsterEnrichment}, a (uniformly) representable 0-trivial $\fc$-multicategory is a (strict) monoidal category, and a (uniformly) representable 0-discrete $\fc$-multicategory is a bicategory (2-category). 
\end{ex} \begin{ex} A 0-trivial (uniformly) representable $\MC$-multicategory is a category with two compatible (strict) monoidal structures, which by a standard Eckmann-Hilton type argument is the same as a braided (strict symmetric) monoidal category. A 0-discrete representable $\MC$-multicategory is analogously a monoidal bicategory which is strict monoidal on objects, and a uniformly representable 0-discrete $\MC$-multicategory corresponds to a strict monoidal 2-category. \end{ex} \begin{ex} When $u : \D \to \C$ is a restriction of cell shapes away from a choice $e$ of endpoint objects (\cref{endpoints}) and $T$ is a $\D$-graded familial monad on $\ch$, $\D$-trivial (uniformly) representable $T$-multicategories are equivalent to (strict) $(T,e)$-structured categories in the sense of \cite[Section 8.1]{mythesis}. Again using \cite[Corollary C.4.8]{leinster}, when $T$ is finitary over $e$ this includes (albeit non-canonically) any symmetric monoidal category. \end{ex} \begin{ex} An example of a restriction of cell shapes away from an endpoint object is $\G_1 \vee \G_1$ inside $\G_1 \times \G_1$, and a $(\G_1 \vee \G_1)$-trivial representable $\fdc$-multicategory is a weak triple category with a unique object, vertical morphism, horizontal morphism, forward morphism, vertical-forward square, and horizontal-forward square. What remains are the vertical-horizontal squares and the cubes, which form the objects and morphisms of a category. This category has two weak monoidal structures given by horizontal and vertical composition which have isomorphic units and satisfy the interchange law, so by an Eckmann-Hilton type argument the two monoidal structures agree up to isomorphism and the resulting monoidal category is braided. Similarly a $(\G_1 \vee \G_1)$-trivial, uniformly representable $\fdc$-multicategory is a category with two strict monoidal structures satisfying interchange, resulting in a strict symmetric monoidal category.
\end{ex} \begin{ex} If we relax the definition of representability so that universal arrows need not be closed under composition, we see that a duoidal category $M$ (strict/weak) also provides a $(\G_1 \vee \G_1)$-trivial (uniformly/not uniformly) representable $\fdc$-multicategory: the squares are the objects of $M$, a composite of universal arrows goes from an $n \times m$ grid of objects in $M$ to any possible duoidal composite of that grid, and an $(n,m)$-arrow is then any composite of universal arrows followed by a morphism in $M$. Horizontal composition and identities are given by $(\star,J)$ and vertical composition and identities by $(\diamond,I)$. Every grid has a universal arrow to the vertical composite of its horizontal composites, and these are closed under vertical composition but not horizontal composition. \end{ex} \begin{ex}\label{verticallytrivialrep} A $(\G_1 \times 0)$-trivial (or \emph{vertically trivial}) representable $\fdc$-multicategory $V$ is a weak triple category with just one object, one vertical morphism, one forward morphism, and one vertical-forward square. This is precisely the data of a double category whose objects are the horizontal morphisms of $V$, vertical morphisms are the vertical-horizontal squares, forward morphisms are the horizontal-forward squares in $V$, and squares are the cubes in $V$. Furthermore, horizontal composition in $V$ makes this a monoidal double category. \end{ex} \section{Enrichment}\label{sec:enrichment} Throughout this section, $u : \D \to \C$ will denote a restriction of cell shapes, $T$ a $\D$-graded familial monad on $\ch$, and $V$ will denote a $T$-multicategory. We will also sometimes consider a further restriction of cell shapes $v : \E \to \D$. \begin{defn}\label{def:enrichment} A $(V,\D)$-enriched $T$-algebra consists of a $T_\D$-algebra $A$ and a $T$-multifunctor $\Homm : Mu_\ast A \to V$.
\end{defn} Unpacking this, an enriched $T$-algebra with the cell shapes of $\D$ regarded as lower-dimensional consists of: \begin{itemize} \item a $T_\D$-algebra $A$ \item for each $d$-cell in $A$, a $d$-cell in $V$ with an arrow in $V$ from any composable arrangement of these cells to their composite. These cells can be considered as ``book-keeping,'' merely recording the types of objects that will describe the higher dimensional cells in the enriched $T$-algebra \item for each $c$ in $\C$ but not in $\D$, and each possible $c$-cell position in $A$ (that is, each map $u^\ast y(c) \to A$ in $\dh$), a $c$-cell in $V$ whose boundary $d$-cells agree under $\Homm$ with the corresponding boundary cells in $A$. These are the ``Hom-objects'' of the enriched $T$-algebra, which are closest to the classical setting when the $c$-cells of $V$ are similar to sets with additional structure. The lower dimensional book-keeping for the $d$-cells in $A$ determines which $c$-cells are eligible to be a particular Hom-object \item for each $t \in T(1)_c$ and $a : u^\ast T[t] \to A$ which composes on its boundary into $b : u^\ast y(c) \to A$, a ``composition map'' $t$-arrow in $V$ from $\Homm(a) : Mu_\ast u^\ast T[t] \to Mu_\ast A \to V$ to $\Homm(b)$. This map from the many Hom-objects ($c$-cells) in $V$ making up $\Homm(a)$ to the single Hom-object $\Homm(b)$ is analogous to the map from $\Homm(x,y) \otimes \Homm(y,z)$ to $\Homm(x,z)$ in a classical enriched category. \end{itemize} \begin{defn} A map of $(V,\D)$-enriched $T$-algebras from $(A,\Homm)$ to $(A',\Homm')$ is a map of $T_\D$-algebras $A \to A'$ along with a transformation of $T$-multifunctors as below: \bctikz Mu_\ast A \ar{dr}[swap]{\Homm} \ar{rr} \ar[Rightarrow, shorten=16, shift right=5]{rr} & & Mu_\ast A' \ar{dl}{\Homm'} \\ & V \ectikz \end{defn} \begin{ex}\label{enriched_cats} A category ($\fc$-algebra) enriched in an $\fc$-multicategory $V$ is precisely as described in \cite[Section 2.2]{LeinsterEnrichment}.
It consists of: \begin{itemize} \item a set ($\fc_0$-algebra) $A$ \item vertices $\Homm(a)$ in $V$ for each element $a$ of $A$ \item edges $\Homm(a,b)$ in $V$ from $\Homm(a)$ to $\Homm(b)$ for each pair of objects $a,b$ in $A$ \item composition $n$-arrows from $\Homm(a_0,a_1),...,\Homm(a_{n-1},a_n)$ to $\Homm(a_0,a_n)$ for each $n \in \nats$ and $a_0,...,a_n \in A$, satisfying unit and associativity equations \end{itemize} \end{ex} \begin{ex} If $\D$ is the empty category $\varnothing$, $T_\varnothing$ is the identity monad on the terminal category $\widehat{\varnothing}$. For $A$ the unique $T_\varnothing$-algebra, $u_\ast A$ is the terminal $T$-algebra and $Mu_\ast A$ is the terminal $T$-multicategory. A $(V,\varnothing)$-enriched $T$-algebra then consists of: \begin{itemize} \item for each $c$ in $\C$, a $c$-cell $\Homm(c)$ in $V$ forming a map $\Homm_0 : * \to V_0$ in $\ch$ \item for each $t \in T(1)_c$, a single $t$-arrow from $\Homm_0 \circ ! : T[t] \to * \to V_0$ to $\Homm(c)$, natural in $c$ \item such that these arrows are closed under identities and composites \end{itemize} For instance a $(V,\varnothing)$-enriched category is then a horizontal monoid in the virtual double category $V$, meaning a horizontal endomorphism $m$ equipped with squares from the $n$th iterated composite of $m$ to $m$ for $n \ge 0$ closed under composition in $V$. \end{ex} \begin{ex} When $u : \D \to \C$ is a restriction of cell shapes away from a choice $e$ of endpoint objects (\cref{endpoints}), $T$ is a $\D$-graded familial monad on $\ch$, and $V$ is $\D$-trivial and representable (that is, a $(T,e)$-structured category as in \cite[Section 8.1]{mythesis}), a $(V,\D)$-enriched $T$-algebra is the same as a $V$-enriched $T$-algebra in the sense of \cite[Section 8.2]{mythesis}. \end{ex} \begin{ex}\label{enriched_moncats} We call a $(V,\G_0)$-enriched $\MC$-algebra simply a $V$-enriched monoidal category.
It consists of: \begin{itemize} \item A monoid $A$ \item For each $a \in A$ a vertex $\Homm(a)$ in $V_0$ \item For each $a,a' \in A$, an edge $\Homm(a,a')$ from $\Homm(a)$ to $\Homm(a')$ in $V$. This is because an edge in $u_\ast A$ is determined by its source and target \item For each $a_1,...,a_n \in A$, an $n$-to-1 arrow from $\Homm(a_1),...,\Homm(a_n)$ to $\Homm(a_1 \cdots a_n)$ in $V$. This includes a 0-to-1 arrow to $\Homm(e)$ for $e$ the unit of $A$ \item For each list of elements $a_{1,0},...,a_{1,m_1},...,a_{n,0},...,a_{n,m_n} \in A$, an arrow in $V$ from the arrangement of paths of the form $$\Homm(a_{i,0},a_{i,1}),...,\Homm(a_{i,m_i-1},a_{i,m_i})$$ to the edge $$\Homm(a_{1,0} \cdots a_{n,0},a_{1,m_1} \cdots a_{n,m_n}).$$ \item These arrows are closed under identities and composition in $V$ \end{itemize} This definition is rather complicated, but simplifies in the case when $V$ is representable, which is to say a monoidal double category (we will henceforth assume it is a weak monoidal pseudo-double category unless otherwise specified), where we can call the edges ``morphisms''.
In that case, a $V$-enriched monoidal category consists of: \begin{itemize} \item A monoid $A$ \item For each $a \in A$ an object $\Homm(a)$ in $V$ \item For each $a,a' \in A$, a morphism $\Homm(a,a')$ in $V$ from $\Homm(a)$ to $\Homm(a')$ \item A forward arrow $I \to \Homm(e)$ for $e$ the unit of $A$ and $I$ the unit of $V$ \item For each $a_1,a_2 \in A$, a forward arrow $\Homm(a_1) \otimes \Homm(a_2) \to \Homm(a_1 a_2)$ in $V$ \item These forward arrows satisfy the unit and associativity equations of a lax monoidal functor from the discrete monoidal category $A$ to the forward monoidal category of $V$ \item For each $a_1,a_2,a'_1,a'_2 \in A$, a square \bctikz[column sep=80] \Homm(a_1) \otimes \Homm(a_2) \ar[""{name=S, below}]{r}{\Homm(a_1,a'_1) \otimes \Homm(a_2,a'_2)} \dar & \Homm(a'_1) \otimes \Homm(a'_2) \dar \\ \Homm(a_1a_2) \ar[""{name=T, above}]{r}[swap]{\Homm(a_1a_2,a'_1a'_2)} & \Homm(a'_1a'_2) \arrow[Rightarrow,shorten=5,from=S,to=T] \ectikz \item For each $a,a',a'' \in A$, a square \bctikz \Homm(a) \rar{\Homm(a,a')} \dar[equals] & \Homm(a') \rar{\Homm(a',a'')} & \Homm(a'') \dar[equals] \\ \Homm(a) \ar[""{name=T, above}]{rr}[swap]{\Homm(a,a'')} & & \Homm(a'') \arrow[Rightarrow,shorten=5,from=1-2,to=T] \ectikz \item These squares satisfy unit and associativity equations with respect to both products and compositions, as well as an interchange equation \end{itemize} This data can be summarized as a monoid $A$ and a lax monoidal lax double functor from the monoidal category\footnote{Here $u_\ast A$ is regarded as a double category with only identity forward morphisms and squares} $u_\ast A$ to $V$. \end{ex} The book-keeping in $V$ for the lower dimensional cells has thus far only assumed a $t$-arrow in $V$ from $\Homm(a)$ to $\Homm(b)$ for $t \in T(1)_d$, $a : T[t] \to A$, and $b \in A_d$, meaning a lax monoidal functor to $V$ in the previous example. 
But when $V$ is representable or uniformly representable, we can impose universality conditions on these arrows that correspond to the book-keeping part of $\Homm$ weakly or strictly preserving $T$-algebra structure (so replacing this lax monoidality with strong or strict monoidality). \begin{defn}\label{strong_strict} When $V$ is (uniformly) $\E$-representable, a $(V,\D)$-enriched $T$-algebra $(A,\Homm)$ is $\E$-strong (resp. $\E$-strict) if $\Homm$ is $\E$-strong (resp. $\E$-strict). We say $(A,\Homm)$ is simply \emph{strong} (resp. \emph{strict}) when it is $\D$-strong (resp. $\D$-strict). \end{defn} \begin{ex} A $V$-enriched monoidal category $(A,\Homm)$, where $V$ is a monoidal double category, is 0-strong when the vertical morphisms $\Homm(a_1) \otimes \Homm(a_2) \to \Homm(a_1a_2)$ are isomorphisms. When $V$ is strict monoidal, $(A,\Homm)$ is 0-strict when these are in fact identities. This may seem unusual from the perspective of lax double functors (it is uncommon to consider functors which are strong monoidal on objects but lax monoidal on morphisms), but from an enrichment point of view it is fairly natural, as it is reasonable to expect the monoid $A$ of objects in an enriched monoidal category to maintain its form in $V$. In other words, the emphasis in enrichment is on modeling only the higher dimensional cells in $V$ using the $\Homm(a,a')$'s, so it is expected that there would be a non-invertible map $$\Homm(a_1,a'_1) \otimes \Homm(a_2,a'_2) \to \Homm(a_1a_2,a'_1a'_2),$$ describing how morphisms are tensored together. The objects $\Homm(a)$ do not carry the same interpretation as collections of cells, instead serving more of a book-keeping role. They merely allow the $\Homm(a,a')$'s to live in different categories when desirable, so there is no intuitive reason to expect that the maps $\Homm(a_1) \otimes \Homm(a_2) \to \Homm(a_1a_2)$ should not be isomorphisms or even identities.
\end{ex} \begin{ex} When $M$ is a duoidal category regarded as a representable $(\G_1 \vee \G_1)$-trivial $\fdc$-multicategory, an $(M,\G_1 \vee \G_1)$-enriched double category agrees with the definition of ++-enriched double category in \cite{AguiarEnrichment}, which amounts to a pair $A$ of 1-categories with the same objects, for each square boundary $\alpha$ in $A$ an object $\Homm(\alpha)$ in $M$, and morphisms in $M$ corresponding to identities and composition: \begin{itemize} \item when $\alpha$ has identities as its horizontal arrows, a morphism $J \to \Homm(\alpha)$ in $M$ \item when $\alpha$ has identities as its vertical arrows, a morphism $I \to \Homm(\alpha)$ in $M$ \item when $\alpha,\alpha'$ are horizontally adjacent with composite square boundary $\alpha''$, a morphism $\Homm(\alpha) \star \Homm(\alpha') \to \Homm(\alpha'')$ \item when $\alpha,\alpha'$ are vertically adjacent with composite square boundary $\alpha''$, a morphism $\Homm(\alpha) \diamond \Homm(\alpha') \to \Homm(\alpha'')$ \end{itemize} These morphisms must satisfy unit, associativity, and interchange laws analogous to those in a double category, including a unit interchange equation which says that when $\alpha$ has identities as both its horizontal and vertical arrows, $I \to J \to \Homm(\alpha)$ agrees with $I \to \Homm(\alpha)$. A double category enriched in a representable $\fdc$-multicategory $V$, namely a weak triple category, consists of forward-lax functors from the horizontal/vertical categories of a shared-object pair of categories $A$ to the horizontal/vertical categories of $V$, along with horizontal-vertical squares for each square boundary in $A$ and composition cubes. It is strong when these are pseudo-functors, and when $V$ is uniformly representable (a strict triple category) the enriched double category is strict when these are ordinary functors. 
\end{ex} \begin{ex}\label{vertenricheddoublecats} For $V$ a virtual triple category, $u : \G_1 \times 0 \to \G_1 \times \G_1$, and $A$ a category, a virtual triple functor $Mu_\ast A \to V$ amounts to suitable choices of objects and vertical arrows for those in $A$ with forward-vertical squares witnessing each composition in $A$, a horizontal arrow for each pair of objects in $A$, a square for each pair of arrows in $A$, and squares/cubes in the forward direction of $V$ for each composition map in this \emph{vertically $V$-enriched} double category. When $V$ is representable and vertically trivial, hence a monoidal double category (\cref{verticallytrivialrep}), a vertically $V$-enriched double category amounts to \begin{itemize} \item a category $A$ \item for each pair of objects $a,b$ in $A$, an object $\Homm(a,b)$ of $V$ \item for each pair of morphisms $f : a \to b,f' : a' \to b'$ in $A$, a vertical arrow $\Homm(f,f')$ of $V$ from $\Homm(a,b)$ to $\Homm(a',b')$ \item for each object $a$ in $A$, a forward arrow $I \to \Homm(a,a)$ \item for each triple of objects $a,b,c$ in $A$, a forward arrow $\Homm(a,b) \otimes \Homm(b,c) \to \Homm(a,c)$ \item for each morphism $f$ in $A$, a square from $\id_I$ to $\Homm(f,f)$ \item for each triple of morphisms $f,f',f''$ in $A$, a square from $\Homm(f,f') \otimes \Homm(f',f'')$ to $\Homm(f,f'')$ \item for each object $a$ in $A$, a forward-globular square from $\id_{\Homm(a,a)}$ to $\Homm(\id_a,\id_a)$ \item for $a \atol{f} b \atol{g} c$ and $a' \atol{f'} b' \atol{g'} c'$ in $A$, a forward-globular square from $\Homm(g,g') \circ \Homm(f,f')$ to $\Homm(g \circ f,g' \circ f')$ \item these composition maps satisfy unit, associativity, and interchange equations and commute with sources and targets in $A$ \end{itemize} This definition is in fact equivalent to the following: \begin{itemize} \item a category $A$ \item a category enriched in the forward monoidal category of $V$ with the same objects as $A$ \item a category enriched in 
the monoidal category of vertical arrows and squares in $V$ whose objects are the morphisms of $A$ \item these enriched categories respect sources and targets, and laxly respect identities and composites in $A$ \end{itemize} This definition is very nearly saying that ``a category internal to categories enriched in a category internal to monoidal categories is the same as a category internal to enriched categories.'' A statement like this could perhaps be made more formal given a suitable definition of morphism between enriched categories with varying enrichment bases, but we do not pursue this further. \end{ex}
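Several of the examples above rely on Eckmann-Hilton type arguments (for 0-trivial representable $\MC$-multicategories and for $(\G_1 \vee \G_1)$-trivial $\fdc$-multicategories). The set-level degenerate case of that argument can be checked exhaustively on a small carrier; the following Python sketch (illustrative only, not part of the formal development) verifies that any two unital binary operations on a 2-element set satisfying the interchange law share their unit, coincide, and are commutative:

```python
from itertools import product

# Exhaustive Eckmann-Hilton check on the 2-element carrier X.
X = (0, 1)

# all 16 binary operations on X, encoded as dicts {(a, b): c}
ops = [dict(zip(product(X, X), vals)) for vals in product(X, repeat=4)]

def unit(op):
    # return a two-sided unit for op, or None if there is none
    for e in X:
        if all(op[e, a] == a == op[a, e] for a in X):
            return e
    return None

def interchange(op1, op2):
    # (a op2 b) op1 (c op2 d) == (a op1 c) op2 (b op1 d)
    return all(op1[op2[a, b], op2[c, d]] == op2[op1[a, c], op1[b, d]]
               for a, b, c, d in product(X, repeat=4))

checked = 0
for op1, op2 in product(ops, repeat=2):
    if unit(op1) is None or unit(op2) is None or not interchange(op1, op2):
        continue
    checked += 1
    assert unit(op1) == unit(op2)   # the two units agree
    assert op1 == op2               # the two operations coincide
    assert all(op1[a, b] == op1[b, a] for a, b in product(X, X))  # commutative
```

Of course this only probes the underlying sets; the categorified statements used above additionally keep track of the coherence isomorphisms, which is why the weak case yields braidings rather than strict symmetry.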
\section{Introduction}\label{sec1} \bg Many identities that evaluate trigonometric sums in closed form can be found in the literature. For example, in a solution to a problem in SIAM Review \cite[p.157]{klam}, M. Fisher shows that \[ \sum_{k=1}^{p-1}\sec^2\left(\frac{k\pi}{2p}\right) =\frac{2}{3}\left(p^2-1\right),\quad \sum_{k=1}^{p-1}\sec^4\left(\frac{k\pi}{2p}\right) =\frac{4}{45}\left(2p^4+5p^2-7\right). \] General results giving closed forms for the power sums of secants $\sum_{k=1}^{p-1}\sec^{2n}(\frac{k\pi}{2p})$ and $\sum_{k=1}^{p}\sec^{2n}(\frac{k\pi}{2p+1})$, for many values of the positive integer $n$, can be found in \cite{chen} and \cite{grab}. Also, in \cite{kou2} the author proves that \[\sum_{k=1}^{p}\sec\left(\frac{2k\pi}{2p+1}\right) =\begin{cases} \phantom{-}p&\text{if $p$ is even,}\\ -p-1& \text{if $p$ is odd.} \end{cases} \] However, while there are many cases where closed forms for finite trigonometric sums can be obtained, it seems that there are no such formul\ae\, for the sums we are interested in. In this paper we study the trigonometric sums $I_p$ and $J_p$ defined for positive integers $p$ by the formul\ae: \begin{align} I_p&=\sum_{k=1}^{p-1}\frac{1}{\sin(k\pi /p)}=\sum_{k=1}^{p-1}\csc\left(\frac{k\pi}{p}\right)\label{E:I}\\ J_p&=\sum_{k=1}^{p-1}k\cot\left(\frac{k\pi}{p}\right)\label{E:J} \end{align} with empty sums interpreted as $0$. \bg To the author's knowledge there is no known closed form for $I_p$, and the same can be said about the sum $J_p$. Therefore, we will look for asymptotic expansions for these sums and will give some tight inequalities that bound $I_p$ and $J_p$. This investigation complements the work of H. Chen in \cite[Chapter 7]{chen2}, where it was asked, as an open problem, whether the inequality \[ I_p\le \frac{2p}{\pi}(\ln p+\gamma -\ln(\pi/2)) \] holds true for $p\ge3$ (here $\gamma$ is the so-called Euler--Mascheroni constant).
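Before developing the asymptotics, the identities and the conjectured bound above are easy to probe numerically. The following Python sketch (illustrative only; the value of $\gamma$ is hard-coded) checks Fisher's two closed forms and Chen's inequality for small $p$:

```python
import math

def I(p):
    # I_p = sum_{k=1}^{p-1} csc(k*pi/p)
    return sum(1 / math.sin(k * math.pi / p) for k in range(1, p))

def sec_power_sum(p, n):
    # sum_{k=1}^{p-1} sec^{2n}(k*pi/(2p))
    return sum(1 / math.cos(k * math.pi / (2 * p)) ** (2 * n) for k in range(1, p))

# Fisher's closed forms for the secant power sums
for p in range(2, 30):
    assert math.isclose(sec_power_sum(p, 1), 2 * (p * p - 1) / 3, rel_tol=1e-9)
    assert math.isclose(sec_power_sum(p, 2), 4 * (2 * p**4 + 5 * p * p - 7) / 45,
                        rel_tol=1e-9)

# Chen's conjectured bound I_p <= (2p/pi)(ln p + gamma - ln(pi/2)) for p >= 3
gamma = 0.5772156649015329  # Euler-Mascheroni constant
for p in range(3, 200):
    bound = (2 * p / math.pi) * (math.log(p) + gamma - math.log(math.pi / 2))
    assert I(p) < bound
```

Such a check is of course no substitute for the proof, which is given below with explicit correction terms quantifying how tight the bound is.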
\bg In fact, it will be proved that for every positive integer $p$ and every nonnegative integer $n$, we have \begin{align*} I_p&<\frac{2p}{\pi}(\ln p+\gamma-\ln(\pi/2))+ \sum_{k=1}^{2n}(-1)^{k}\frac{(2^{2k}-2)b_{2k}^2}{k\cdot(2k)!}\left(\frac{\pi}{p}\right)^{2k-1},\\ \noalign{\noindent\text{and}} I_p&>\frac{2p}{\pi}(\ln p+\gamma-\ln(\pi/2))+ \sum_{k=1}^{2n+1}(-1)^{k}\frac{(2^{2k}-2)b_{2k}^2}{k\cdot(2k)!}\left(\frac{\pi}{p}\right)^{2k-1}, \end{align*} where the $b_{2k}$'s are Bernoulli numbers (see Corollary \ref{cor94}). The corresponding inequalities for $J_p$ are also proved (see Corollary \ref{cor97}). Harmonic numbers play an important role in this investigation. Recall that the $n$th harmonic number $H_n$ is defined by $H_n=\sum_{k=1}^n1/k$ (with the convention $H_0=0$). In this work, a link between our trigonometric sums $I_p$ and $J_p$ and the sum of several series related to harmonic numbers is uncovered. Indeed, the well-known fact that $H_n=\ln n+\gamma+\frac{1}{2n}+\mathcal{O}\left(\frac{1}{n^2}\right)$ proves the convergence of the numerical series, \begin{align*} C_p&=\sum_{n=1}^\infty\left(H_{pn}-\ln(pn)-\gamma-\frac{1}{2pn}\right), \\ D_p&=\sum_{n=1}^\infty(-1)^{n-1}\left(H_{pn}-\ln(pn)-\gamma\right),\\ E_p&= \sum_{n=0}^\infty(-1)^{n}(H_{p(n+1)}-H_{pn}), \end{align*} for every positive integer $p$. An interplay between the considered trigonometric sums and the sum of these series will allow us to prove sharp inequalities for $I_p$ and $J_p$, and to find the expression of the sums $C_p$, $D_p$ and $E_p$ in terms of $I_p$ and $J_p$. The main tool will be the following formulation \cite[Corollary 8.2]{kou3} of the Euler-Maclaurin summation formula: \begin{theorem}\label{cor61} Consider a positive integer $m$, and a function $f$ that has a continuous $(2m-1)^{\text{st}}$ derivative on $[0,1]$.
If $f^{(2m-1)}$ is {\normalfont \text{decreasing}}, then \[ \int_0^1f(t)\,dt=\frac{f(1)+f(0)}{2} -\sum_{k=1}^{m-1}\frac{b_{2k}}{(2k)!}\,\delta f^{(2k-1)}+(-1)^{m+1}R_m \] with \[ R_m=\int_0^{1/2}\frac{\abs{B_{2m-1}(t)}}{(2m-1)!}\,\left(f^{(2m-1)}(t)-f^{(2m-1)}(1-t)\right)\,dt \] and \[ 0\leq R_m\leq\frac{6}{(2\pi)^{2m}}\left(f^{(2m-1)}(0)-f^{(2m-1)}(1)\right), \] where the $b_{2k}$'s are Bernoulli numbers, $B_{2m-1}$ is the Bernoulli polynomial of degree $2m-1$, and the notation $\delta g$ for a function $g:[0,1]\to\comp$ means $g(1)-g(0)$. \end{theorem} For more information on the Euler-Maclaurin formula, Bernoulli polynomials and Bernoulli numbers the reader may refer to \cite{abr, grad, hen,kou3,olv} and the references therein. This paper is organized as follows. In section 2 we find the asymptotic expansions of $C_p$ and $D_p$ for large $p$. In section 3, the trigonometric sums $I_p$ and $J_p$ are studied. \bg \section{The sum of certain series related to harmonic numbers}\label{sec8} \bn In the next lemma, the asymptotic expansion of $(H_n)_{n\in\nat}$ is presented. It can be found implicitly in \cite[Chapter 9]{knuth2}; we present a proof for the convenience of the reader. \bg \begin{lemma}\label{pr81} For every positive integer $n$ and every positive integer $m$, we have \[ H_n=\ln n+\gamma+\frac{1}{2n}-\sum_{k=1}^{m-1}\frac{b_{2k}}{2k}\cdot\frac{1}{n^{2k}}+(-1)^m R_{n,m}, \] with \[ R_{n,m}=\int_0^{1/2}\abs{B_{2m-1}(t)}\, \sum_{j=n}^\infty\left(\frac{1}{(j+t)^{2m}}-\frac{1}{(j+1-t)^{2m}} \right)\,dt. \] Moreover, $ 0<R_{n,m}<\dfrac{\abs{b_{2m}}}{2m\cdot n^{2m}}$. \end{lemma} \begin{proof} Note that for $j\geq1$ we have \[ \frac{1}{j}-\ln\left(1+\frac{1}{j}\right)=\int_0^1\left(\frac1j-\frac{1}{j+t}\right)\,dt=\int_0^1\frac{t}{j(j+t)}\,dt. \] Adding these equalities as $j$ varies from $1$ to $n-1$, we conclude that \[ H_n-\ln n-\frac{1}{n}=\int_0^1\left(\sum_{j=1}^{n-1}\frac{t}{j(j+t)}\right)\,dt.
\] Thus, letting $n$ tend to $\infty$, and using the Monotone Convergence Theorem, we conclude \[ \gamma=\int_0^1\left(\sum_{j=1}^{\infty}\frac{t}{j(j+t)}\right)\,dt. \] It follows that \begin{equation*} \gamma+\ln n-H_n+\frac{1}{n}=\int_0^1\left(\sum_{j=n}^\infty\frac{t}{j(j+t)}\right)\,dt. \end{equation*} So, let us consider the function $f_n:[0,1]\vers\reel$ defined by \[ f_n(t)=\sum_{j=n}^\infty\frac{t}{j(j+t)}. \] Note that $f_n(0)=0$, $f_n(1)=1/n$, and that $f_n$ is infinitely differentiable with \[ \frac{f_n^{(k)}(t)}{k!}=(-1)^{k+1}\sum_{j=n}^\infty\frac{1}{(j+t)^{k+1}},\quad\text{for $k\geq1$.} \] In particular, \[ \frac{f_n^{(2k-1)}(t)}{(2k-1)!}=\sum_{j=n}^\infty\frac{1}{(j+t)^{2k}},\quad\text{for $k\geq1$.} \] So, $f_n^{(2m-1)}$ is decreasing on the interval $[0,1]$, and \[ \frac{\delta f_n^{(2k-1)}}{(2k-1)!} =\sum_{j=n}^\infty\frac{1}{(j+1)^{2k}} -\sum_{j=n}^\infty\frac{1}{j^{2k}}=-\frac{1}{n^{2k}}. \] Applying Theorem \ref{cor61} to $f_n$, and using the above data, we get \[ \gamma+\ln n-H_n+\frac{1}{2n}= \sum_{k=1}^{m-1}\frac{b_{2k}}{2k\,n^{2k}}+(-1)^{m+1}R_{n,m} \] with \[ R_{n,m}=\int_0^{1/2}\abs{B_{2m-1}(t)}\, \sum_{j=n}^\infty\left(\frac{1}{(j+t)^{2m}}-\frac{1}{(j+1-t)^{2m}} \right)\,dt \] and \[ 0< R_{n,m}<\frac{6\cdot(2m-1)!}{(2\pi)^{2m}n^{2m}}. \] The important estimate is the lower bound, \textit{i.e.} $R_{n,m}>0$. In fact, considering separately the cases $m$ odd and $m$ even, we obtain, for every nonnegative integer $m'$: \begin{align*} H_n&<\ln n+\gamma+\frac{1}{2n}-\sum_{k=1}^{2m'}\frac{b_{2k}}{2k}\cdot\frac{1}{n^{2k}},\\ \noalign{\noindent\text{and}} H_n&>\ln n+\gamma+\frac{1}{2n}-\sum_{k=1}^{2m'+1}\frac{b_{2k}}{2k}\cdot\frac{1}{n^{2k}}. \end{align*} This yields the following more precise estimate for the error term: \begin{equation*} 0<(-1)^{m}\left(H_n-\ln n-\gamma-\frac{1}{2n}+ \sum_{k=1}^{m-1}\frac{b_{2k}}{2k\cdot n^{2k}} \right)<\frac{\abs{b_{2m}}}{2m\cdot n^{2m}}, \end{equation*} which is valid for every positive integer $m$.
\end{proof} \bg Now, consider the two sequences $(c_n)_{n\geq1}$ and $(d_n)_{n\geq1}$ defined by \[ c_n=H_n-\ln n-\gamma-\frac{1}{2n}\qquad\text{and}\qquad d_n=H_n-\ln n-\gamma. \] For a positive integer $p$, we know according to Lemma~\ref{pr81} that $c_{pn}=\mathcal{O}\left(\frac{1}{n^2}\right)$; it follows that the series $\sum_{n=1}^\infty c_{pn}$ is convergent. Similarly, since $d_{pn}=c_{pn}+\frac{1}{2pn}$ and the series $\sum_{n=1}^\infty(-1)^{n-1}/n$ is convergent, we conclude that $\sum_{n=1}^\infty(-1)^{n-1} d_{pn}$ is also convergent. In what follows we aim to find asymptotic expansions, for large $p$, of the following sums: \begin{align} C_p&=\sum_{n=1}^\infty c_{pn}=\sum_{n=1}^\infty\left(H_{pn}-\ln(pn)-\gamma-\frac{1}{2pn}\right)\label{E:Cp}\\ D_p&=\sum_{n=1}^\infty (-1)^{n-1}d_{pn} =\sum_{n=1}^\infty(-1)^{n-1}\left(H_{pn}-\ln(pn)-\gamma\right)\label{E:Dp}\\ E_p &=\sum_{n=0}^\infty(-1)^{n}(H_{p(n+1)}-H_{pn}).\label{E:Ep} \end{align} \begin{proposition}\label{pr82} If $p$ and $m$ are positive integers and $C_p$ is defined by \eqref{E:Cp}, then \[ C_p=-\sum_{k=1}^{m-1}\frac{b_{2k}\zeta(2k)}{2k\cdot p^{2k}} +(-1)^m\frac{\zeta(2m)}{2m\cdot p^{2m}}\eps_{p,m},\quad\text{with $0<\eps_{p,m}<\abs{b_{2m}}$}, \] where $\zeta$ is the well-known Riemann zeta function. \end{proposition} \begin{proof} Indeed, we conclude from Lemma \ref{pr81} that \[ H_{pn}-\ln(pn)-\gamma-\frac{1}{2pn}=-\sum_{k=1}^{m-1}\frac{b_{2k}}{2k\cdot p^{2k}}\cdot\frac{1}{n^{2k}} +\frac{(-1)^m}{2m\cdot p^{2m}}\cdot\frac{r_{pn,m}}{n^{2m}}, \] with $0<r_{pn,m}\leq\abs{b_{2m}}$. It follows that \[C_p=-\sum_{k=1}^{m-1}\frac{b_{2k}}{2k\,p^{2k}}\cdot\left(\sum_{n=1}^\infty \frac{1}{n^{2k}}\right)+\frac{(-1)^m}{2m\cdot p^{2m}}\cdot\tilde{r}_{p,m}, \] where $\tilde{r}_{p,m}=\sum_{n=1}^\infty\frac{r_{pn,m}}{n^{2m}}$.
\bg Hence, \[ 0<\tilde{r}_{p,m}=\sum_{n=1}^\infty \frac{r_{pn,m}}{n^{2m}}< \abs{b_{2m}} \, \sum_{n=1}^\infty \frac{1}{n^{2m}}=\abs{b_{2m}}\zeta(2m) \] and the desired conclusion follows with $\eps_{p,m}=\tilde{r}_{p,m}/\zeta(2m)$. \end{proof} \bg For example, taking $m=3$, we obtain \[ \sum_{n=1}^\infty\left(H_{pn}-\ln(pn)-\gamma-\frac{1}{2pn}\right) =-\frac{\pi^2}{72p^2}+\frac{\pi^4}{10800p^4}+\mathcal{O}\left(\frac{1}{p^6}\right). \] \bg In the next proposition we have the analogous result corresponding to $D_p$.\bg \begin{proposition}\label{pr83} If $p$ and $m$ are positive integers and $D_p$ is defined by \eqref{E:Dp}, then \[ D_p=\frac{\ln 2}{2p} -\sum_{k=1}^{m-1} \frac{b_{2k}\eta(2k)}{2k \cdot p^{2k}} +(-1)^m\frac{\eta(2m)}{2m\cdot p^{2m}}\eps'_{p,m},\quad\text{with $0<\eps'_{p,m}<\abs{b_{2m}}$,} \] where $\eta$ is the Dirichlet eta function \cite{wis}. \end{proposition} \begin{proof} Indeed, let us define $a_{n,m}$ by the formula \[ a_{n,m}=H_n-\ln n-\gamma-\frac{1}{2n}+\sum_{k=1}^{m-1}\frac{b_{2k}}{2k\cdot n^{2k}} \] with empty sum equal to 0. We have shown in the proof of Lemma \ref{pr81} that \[ (-1)^ma_{n,m}=\int_0^{1/2}\abs{B_{2m-1}(t)}g_{n,m}(t)\,dt \] where $g_{n,m}$ is the positive decreasing function on $[0,1/2]$ defined by \[ g_{n,m}(t)=\sum_{j=n}^\infty\left(\frac{1}{(j+t)^{2m}}-\frac{1}{(j+1-t)^{2m}}\right). \] Now, for every $t\in[0,1/2]$ the sequence $(g_{np,m}(t))_{n\geq1}$ is positive and decreasing to $0$. So, using the alternating series criterion \cite[Theorem~7.8, and Corollary~7.9]{aman} we see that, for every $N\geq1$ and $t\in[0,1/2]$, \[ \abs{\sum_{n=N}^\infty(-1)^{n-1}g_{np,m}(t)}\leq g_{Np,m}(t)\leq g_{Np,m}(0)=\frac{1}{(Np)^{2m}}. \] This proves the uniform convergence on $[0,1/2]$ of the series \[G_{p,m}(t)=\sum_{n=1}^\infty(-1)^{n-1}g_{np,m}(t). \] Consequently \[ (-1)^m\sum_{n=1}^\infty(-1)^{n-1}a_{pn,m}=\int_0^{1/2}\abs{B_{2m-1}(t)}G_{p,m}(t)\,dt. 
\] Now, using the properties of alternating series, we see that for $t\in(0,1/2)$ we have \[ 0<G_{p,m}(t)<g_{p,m}(t)<g_{p,m}(0)=\sum_{j=p}^\infty\left(\frac{1}{j^{2m}}-\frac{1}{(j+1)^{2m}}\right)=\frac{1}{p^{2m}}. \] Thus, \[ \sum_{n=1}^\infty(-1)^{n-1}a_{pn,m}=\frac{(-1)^m}{p^{2m}}\rho_{p,m} \] with $0<\rho_{p,m}<\int_0^{1/2}\abs{B_{2m-1}(t)}\,dt$. \bg On the other hand we have \begin{align*} \sum_{n=1}^\infty(-1)^{n-1}a_{pn,m}&=D_p -\frac{1}{2p} \sum_{n=1}^\infty\frac{(-1)^{n-1}}{n}+\sum_{k=1}^{m-1}\frac{b_{2k}}{2k\,p^{2k}}\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^{2k}}\\ &=D_p-\frac{\ln 2}{2p} +\sum_{k=1}^{m-1}\frac{b_{2k}\eta(2k)}{2k\cdot p^{2k}}. \end{align*} Thus \[ D_p=\frac{\ln 2}{2p}-\sum_{k=1}^{m-1}\frac{b_{2k}\eta(2k)}{2k\cdot p^{2k}} +\frac{(-1)^m}{p^{2m}}\rho_{p,m}. \] Now, the important estimate for $\rho_{p,m}$ is the lower bound, \textit{i.e.} $\rho_{p,m}>0$. In fact, considering separately the cases $m$ odd and $m$ even, we obtain, for every nonnegative integer $m'$: \begin{align*} D_p&<\frac{\ln 2}{2p}-\sum_{k=1}^{2m'}\frac{b_{2k}\eta(2k)}{2k\cdot p^{2k}},\\ \noalign{\text{and}} D_p&>\frac{\ln 2}{2p}-\sum_{k=1}^{2m'+1}\frac{b_{2k}\eta(2k)}{2k\cdot p^{2k}}. \end{align*} This yields the following more precise estimate for the error term: \[ 0<(-1)^{m}\left(D_p-\frac{\ln 2}{2p}+\sum_{k=1}^{m-1}\frac{b_{2k}\eta(2k)}{2k\,p^{2k}} \right)<\frac{\abs{b_{2m}}\eta(2m)}{2m\cdot p^{2m}}, \] and the desired conclusion follows. \end{proof} \bg The case of $E_p$, which is the sum of another alternating series \eqref{E:Ep}, is discussed in the next lemma, where it is shown that $E_p$ can easily be expressed in terms of $D_p$.\bg \begin{lemma}\label{lm84} For a positive integer $p$, we have \begin{equation*} E_p =\ln p+\gamma-\ln\left(\frac{\pi}{2}\right)+2D_p, \end{equation*} where $D_p$ is the sum defined by \eqref{E:Dp}.
\end{lemma} \begin{proof} Indeed \begin{align*} 2D_p&=d_p+\sum_{n=2}^\infty(-1)^{n-1}d_{pn}+\sum_{n=1}^\infty(-1)^{n-1}d_{pn}\\ &=d_p+\sum_{n=1}^\infty(-1)^{n}d_{p(n+1)}+\sum_{n=1}^\infty(-1)^{n-1}d_{pn}\\ &=d_p+\sum_{n=1}^\infty(-1)^{n-1}(d_{pn}-d_{p(n+1)})\\ &=d_p+\sum_{n=1}^\infty(-1)^{n}(H_{p(n+1)}-H_{pn})+\sum_{n=1}^\infty(-1)^{n-1}\ln\left(\frac{n+1}{n}\right)\\ &=-\ln p-\gamma+\sum_{n=0}^\infty(-1)^{n}(H_{p(n+1)}-H_{pn})+\sum_{n=1}^\infty(-1)^{n-1}\ln\left(\frac{n+1}{n}\right). \end{align*} Using Wallis' formula for $\pi$ \cite[Formula~0.262]{grad}, we have \begin{align*} \sum_{n=1}^\infty(-1)^{n-1}\ln\left(\frac{n+1}{n}\right)&= \sum_{n=1}^\infty\ln\left(\frac{2n}{2n-1}\cdot\frac{2n}{2n+1}\right)\\ &=-\ln\prod_{n=1}^\infty\left(1-\frac{1}{4n^2}\right)=\ln\left(\frac{\pi}{2}\right), \end{align*} and the desired formula follows. \end{proof} \section{Inequalities for trigonometric sums}\label{sec2} As we mentioned in the introduction, we are interested in the sum of cosecants $I_p$ defined by \eqref{E:I} and the sum of cotangents $J_p$ defined by \eqref{E:J}. Many other trigonometric sums can be expressed in terms of $I_p$ and $J_p$. The next lemma lists some of these identities. \bg \begin{lemma}\label{lm91} For a positive integer $p$ let \begin{alignat*}{2} K_p&=\sum_{k=1}^{p-1}\tan\left(\frac{k\pi}{2p}\right), &\qquad \widetilde{K}_p&=\sum_{k=1}^{p-1}\cot\left(\frac{k\pi}{2p}\right),\\ L_p&=\sum_{k=1}^{p-1}\frac{k}{\sin(k\pi/p)}, &\qquad M_p&=\sum_{k=1}^{p}(2k-1)\cot\left(\frac{(2k-1)\pi}{2p}\right). \end{alignat*} Then, $\ds \begin{matrix} \hfill i.&\hfill K_p=&\widetilde{K}_p=I_p.\hfill\label{lm911}\\ \hfill ii.&\hfill L_p=&(p/2)\,I_p.\hfill\label{lm912}\\ \hfill iii.&\hfill M_p=&J_{2p}-2J_p=-p\,I_p.\hfill\label{lm913}\\ \end{matrix} $ \end{lemma} \begin{proof} First, note that the change of summation variable $k\leftarrow p-k$ proves that $K_p=\widetilde{K}_p$.
So, using the trigonometric identity $\tan\theta+\cot\theta=2\csc(2\theta)$ we obtain $(i)$ as follows: \begin{equation*} 2K_p=K_p+\widetilde{K}_p=\sum_{k=1}^{p-1}\left(\tan\left(\frac{k\pi}{2p}\right)+\cot\left(\frac{k\pi}{2p}\right)\right) =2\sum_{k=1}^{p-1}\csc\left(\frac{k\pi}{p}\right)=2I_p. \end{equation*} Similarly, $(ii)$ follows from the change of summation variable $k\leftarrow p-k$ in $L_p$: \[ L_p=\sum_{k=1}^{p-1}\frac{p-k}{\sin(k\pi/p)}=pI_p-L_p. \] Also, \begin{align*} M_p&=\sum_{\substack{1\leq k<2p\\ k \text{ odd} }} k\cot\left(\frac{k\pi}{2p}\right)=\sum_{k=1}^{2p-1} k\cot\left(\frac{k\pi}{2p}\right)- \sum_{\substack{1\leq k<2p\\ k \text{ even} }} k\cot\left(\frac{k\pi}{2p}\right)\\ &=\sum_{k=1}^{2p-1} k\cot\left(\frac{k\pi}{2p}\right)-2 \sum_{k=1}^{p-1} k\cot\left(\frac{k\pi}{p}\right)=J_{2p}-2J_p. \end{align*} But \begin{align*} J_{2p}&=\sum_{k=1}^{p-1} k\cot\left(\frac{k\pi}{2p}\right)+\sum_{k=p+1}^{2p-1} k\cot\left(\frac{k\pi}{2p}\right)\\ &=\sum_{k=1}^{p-1} k\cot\left(\frac{k\pi}{2p}\right)-\sum_{k=1}^{p-1} (2p-k)\cot\left(\frac{k\pi}{2p}\right)\\ &=2\sum_{k=1}^{p-1} k\cot\left(\frac{k\pi}{2p}\right)-2p\widetilde{K}_p. \end{align*} Thus, using $(i)$ and the trigonometric identity $\cot(\theta/2)-\cot\theta=\csc\theta$ we obtain \begin{align*} M_p&=J_{2p}-2J_p=2\sum_{k=1}^{p-1} k\left(\cot\left(\frac{k\pi}{2p}\right)-\cot\left(\frac{k\pi}{p}\right)\right) -2pI_p\\ &=2\sum_{k=1}^{p-1}k\csc\left(\frac{k\pi}{p}\right)-2pI_p=2L_p-2pI_p=-pI_p. \end{align*} This concludes the proof of $(iii)$. \end{proof} \begin{proposition}\label{pr92} For $p\geq2$, let $I_p$ be the sum of cosecants defined by \eqref{E:I}. Then \begin{align*} I_p&=-\frac{2\ln 2}{\pi}+\frac{2p}{\pi}E_p\\ &=-\frac{2\ln 2}{\pi}+\frac{2p}{\pi}\left(\ln p+\gamma-\ln(\pi/2)\right)+\frac{4p}{\pi}D_p, \end{align*} where $D_p$ and $E_p$ are defined by formul\ae~ \eqref{E:Dp} and \eqref{E:Ep} respectively.
\end{proposition} \begin{proof} Indeed, our starting point will be the ``simple fractions'' expansion \cite[Chapter 5, \S 2]{ahl} of the cosecant function: \[ \frac{\pi}{\sin(\pi\alpha)}=\sum_{n\in\ent}\frac{(-1)^n}{\alpha-n}=\frac{1}{\alpha}+\sum_{n=1}^\infty(-1)^n\left(\frac{1}{\alpha-n}+ \frac{1}{\alpha+n}\right) \] which is valid for $\alpha\in\comp\setminus\ent$. Using this formula with $\alpha=k/p$ for $k=1,2,\ldots,p-1$ and adding, we conclude that \begin{align*} \frac{\pi}{p} I_p&=\sum_{k=1}^{p-1}\frac{1}{k}+\sum_{n=1}^\infty(-1)^n\sum_{k=1}^{p-1}\left(\frac{1}{k-np}+ \frac{1}{k+n p}\right)\\ &=\sum_{k=1}^{p-1}\frac{1}{k}+ \sum_{n=1}^\infty(-1)^n\left(-\sum_{j=p(n-1)+1}^{pn-1}\frac{1}{j}+ \sum_{j=pn+1}^{p(n+1)-1}\frac{1}{j}\right), \end{align*} and this result can be expressed in terms of the Harmonic numbers as follows \begin{align*} \frac{\pi}{p} I_p&=H_{p-1}+ \sum_{n=1}^\infty(-1)^n\left(- H_{pn-1}+H_{p(n-1)}+H_{p(n+1)-1}-H_{pn} \right)\\ &=H_{p-1}+ \sum_{n=1}^\infty(-1)^n\left(H_{p(n+1)}-2H_{pn}+H_{p(n-1)}\right) +\frac{1}{p}\sum_{n=1}^\infty(-1)^n\left(\frac{1}{n}-\frac{1}{n+1} \right)\\ &=H_{p-1}+\sum_{n=1}^\infty(-1)^n\left(H_{p(n+1)}-2H_{pn}+H_{p(n-1)}\right) +\frac{1}{p}\left(\sum_{n=1}^\infty\frac{(-1)^n}{n}+\sum_{n=2}^\infty\frac{(-1)^n}{n}\right)\\ &=H_{p}+\sum_{n=1}^\infty(-1)^n\left(H_{p(n+1)}-2H_{pn}+H_{p(n-1)}\right) -\frac{2}{p}\sum_{n=1}^\infty(-1)^{n-1}\frac{1}{n}\\ &=H_{p}-\frac{2\ln 2}{p}+\sum_{n=1}^\infty(-1)^n\left(H_{p(n+1)}-2H_{pn}+H_{p(n-1)}\right). \end{align*} Thus \begin{align*} \frac{\pi}{p} I_p+\frac{2\ln 2}{p}&= H_{p}+\sum_{n=1}^\infty(-1)^n\left(H_{p(n+1)}-H_{pn} \right) +\sum_{n=1}^\infty(-1)^n\left(H_{p(n-1)}-H_{pn}\right)\\ &=\sum_{n=0}^\infty(-1)^n\left(H_{p(n+1)}-H_{pn}\right) +\sum_{n=1}^\infty(-1)^n\left(H_{p(n-1)}-H_{pn}\right)\\ &=E_p+E_p=2E_p, \end{align*} and the desired formula follows according to Lemma~\ref{lm84}. 
\end{proof} Combining Proposition~\ref{pr92} and Proposition~\ref{pr83}, we obtain: \begin{proposition}\label{pr93} For $p\geq2$ and $m\geq 1$, we have \[ \pi I_p= 2p\ln p+2(\gamma-\ln( \pi/2))p -\sum_{k=1}^{m-1} \frac{2b_{2k}\eta(2k)}{ k \cdot p^{2k-1}} +(-1)^m\frac{2\eta(2m)}{ m\cdot p^{2m-1}}\eps'_{p,m} \] with $\ds 0<\eps'_{p,m}<\abs{b_{2m}}$. \end{proposition} Using the well-known result (\cite{ wis},\cite[Formula 9.542]{grad}): \[ \eta(2k)=(1-2^{1-2k})\zeta(2k)=\frac{(2^{2k-1}-1)\pi^{2k}(-1)^{k-1}b_{2k}}{(2k)!}, \] and considering separately the cases $m$ even and $m$ odd we obtain the following corollary. \begin{corollary}\label{cor94} For every positive integer $p$ and every nonnegative integer $n$, the sum of cosecants $I_p$ defined by \eqref{E:I} satisfies the following inequalities: \begin{align*} I_p&<\frac{2p}{\pi}(\ln p+\gamma-\ln(\pi/2))+ \sum_{k=1}^{2n}(-1)^{k}\frac{(2^{2k}-2)b_{2k}^2}{k\cdot(2k)!}\left(\frac{\pi}{p}\right)^{2k-1},\\ \noalign{\text{and}} I_p&>\frac{2p}{\pi}(\ln p+\gamma-\ln(\pi/2))+ \sum_{k=1}^{2n+1}(-1)^{k}\frac{(2^{2k}-2)b_{2k}^2}{k\cdot(2k)!}\left(\frac{\pi}{p}\right)^{2k-1}. \end{align*} \end{corollary} \bg \qquad As an example, for $n=0$ we obtain the following inequality, valid for every $p\geq1$: \[ \frac{2p}{\pi}(\ln p+\gamma-\ln(\pi/2))-\frac{\pi}{36p} <I_p<\frac{2p}{\pi}(\ln p+\gamma-\ln(\pi/2)). \] This answers positively the open problem proposed in \cite[Section 7.4]{chen2}. \bg \begin{remark} The asymptotic expansion of $I_p$ was proposed as an exercise in \cite[Exercise~13, p.~460]{hen}, and it was attributed to P. Waldvogel, but the result there is less precise than Corollary~\ref{cor94} because here we have inequalities valid in the whole range of $p$. \end{remark} \bg Now we turn our attention to the other trigonometric sum $J_p$. 
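Before doing so, we note that the double inequality above is easy to check numerically. The following Python sketch (an illustration with a hardcoded value of $\gamma$, not part of the proof) verifies the $n=0$ case of Corollary \ref{cor94} over a range of $p$:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler--Mascheroni constant

def I(p):
    """I_p = sum_{k=1}^{p-1} csc(k*pi/p), with I_1 = 0 (empty sum)."""
    return math.fsum(1.0 / math.sin(k * math.pi / p) for k in range(1, p))

# Corollary bound for n = 0:
#   main - pi/(36 p) < I_p < main,  where main = (2p/pi)(ln p + gamma - ln(pi/2))
for p in range(1, 301):
    main = 2 * p / math.pi * (math.log(p) + EULER_GAMMA - math.log(math.pi / 2))
    assert main - math.pi / (36 * p) < I(p) < main
```

The margin above the lower bound shrinks like $p^{-3}$ (it is controlled by the $k=2$ term of the expansion), which is why an accurate summation routine is used.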
The first step toward an analogue of Proposition~\ref{pr92} for the trigonometric sum $J_p$ is the next lemma, where an asymptotic expansion for $J_p$ is proved; it still contains a harmonic number as an undesired term, which will be removed later. \begin{lemma}\label{pr72} For every positive integer $p$, there is a real number $\theta_{p}\in(0,1)$ such that \[ \pi J_p=-p^2H_p+\ln(2\pi) p^2-\frac{p}{2}-\theta_p. \] \end{lemma} \begin{proof} Indeed, let $\vf$ be the function defined by \[\vf(x)=\pi x\cot(\pi x)+\frac{1}{1-x}.\] According to the partial fraction expansion formula for the cotangent function \cite[Chapter 5, \S 2]{ahl} we know that \[ \vf(x)=2+\frac{x}{x+1}+\sum_{n=2}^\infty\left(\frac{x}{x-n}+\frac{x}{x+n}\right). \] Thus, $\vf$ is defined and analytic on the interval $(-1,2)$. Let us show that $\vf$ is concave on this interval. Indeed, it is straightforward to check that, for $-1<x<2$ we have \begin{align*} \vf^{\prime\prime}(x)&=-\frac{2}{(1+x)^3}-2 \sum_{n=2}^\infty\left(\frac{n}{(n-x)^3}+\frac{n}{(n+x)^3}\right)<0. \end{align*} So, we can use Theorem \ref{cor61} with $m=1$ applied to the function $x\mapsto\vf\left(\frac{x+k}{p}\right)$ for $0\le k<p$ to get \[ 0< p\int_{k/p}^{(k+1)/p}\vf(x)dx-\frac{1}{2}\left(\vf\left(\frac{k+1}{p}\right)+\vf\left(\frac{k}{p}\right)\right)\le\frac{3}{2p\pi^2} \left(\vf'\left(\frac{k}{p}\right)-\vf'\left(\frac{k+1}{p}\right)\right). \] Adding these inequalities and noting that $\vf(0)=2$, $\vf'(0)=1$, $\vf(1)=1$ and $\vf'(1)=-\pi^2/3$, we get \[ 0< p\int_0^1\vf(x)dx-\frac{\pi}{p}J_p-pH_p-\frac{1}{2}\le\frac{3+\pi^2}{2\pi^2p}<\frac{1}{p}. \] Also, for $x\in[0,1)$, we have \[ \int_0^x\vf(t)\,dt=-\ln(1-x)+x\ln \sin(\pi x)-\int_0^x \ln\sin(\pi t)\,dt \] and, letting $x$ tend to $1$ we obtain \[ \int_0^1\vf(t)\,dt=\ln(\pi)-\int_0^1\ln\sin(\pi t)\,dt=\ln(2\pi), \] where we used the fact that $\int_0^1\ln\sin(\pi t)\,dt=-\ln2$ (see \cite[Formula~4.224.3]{grad}).
So, we have proved that \[ 0< p\ln(2\pi)-\frac{\pi}{p}J_p-pH_p-\frac{1}{2}<\frac{1}{p} \] which is equivalent to the desired conclusion. \end{proof} \bg The next proposition gives an analogous result to Proposition~\ref{pr92} for the trigonometric sum $J_p$. \bg \begin{proposition}\label{pr95} For a positive integer $p$, let $J_p$ be the sum of cotangents defined by \eqref{E:J}. Then \[ \pi J_p= -p^2\ln p+(\ln(2\pi)-\gamma)p^2 -p+2p^2C_p \] where $C_p$ is given by \eqref{E:Cp}. \end{proposition} \begin{proof} Recall that $c_{n}=H_n-\ln n-\gamma-\frac{1}{2n}$ satisfies $c_n=\mathcal{O}(1/n^2)$. Thus, both series \[ C_p=\sum_{n=1}^\infty c_{pn}\quad\text{ and }\quad \widetilde{C}_p=\sum_{n=1}^\infty(-1)^{n-1} c_{pn} \] are convergent. Further, we note that $\widetilde{C}_p=D_p-\frac{\ln 2}{2p}$ where $D_p$ is defined by \eqref{E:Dp}. According to Proposition~\ref{pr92} we have \begin{equation}\label{E:pr991} \widetilde{C}_p =\frac{\ln(\pi/2)-\gamma-\ln p}{2}+\frac{\pi}{4p}I_p. \end{equation} Now, noting that \begin{align*} C_p&=\sum_{\substack{n\geq1\\ n\,\text{odd}}}c_{pn} +\sum_{\substack{n\geq1\\ n\,\text{even}}}c_{pn} =\sum_{\substack{n\geq1\\ n\,\text{odd}}}c_{pn}+\sum_{n=1}^\infty c_{2pn}\\ \widetilde{C}_p&=\sum_{\substack{n\geq1\\ n\,\text{odd}}}c_{pn} -\sum_{\substack{n\geq1\\ n\,\text{even}}}c_{pn} =\sum_{\substack{n\geq1\\ n\,\text{odd}}}c_{pn}-\sum_{n=1}^\infty c_{2pn} \end{align*} we conclude that $C_p-\widetilde{C}_p=2C_{2p}$, or equivalently \begin{equation}\label{E:pr992} C_p-2C_{2p}=\widetilde{C}_p \end{equation} On the other hand, for a positive integer $p$ let us define $F_p$ by \begin{equation}\label{E:Fp} F_p=\frac{\ln p+\gamma-\ln(2\pi)}{2}+\frac{1}{2p}+\frac{\pi}{2p^2}J_p. 
\end{equation} It is easy to check, using Lemma~\ref{lm91} $(iii)$, that \begin{align}\label{E:pr994} F_p-2F_{2p}&=\frac{\ln(\pi/2)-\ln p-\gamma}{2} -\frac{\pi}{4p^2}(J_{2p}-2J_p)\notag\\ &=\frac{\ln(\pi/2)-\ln p-\gamma}{2} +\frac{\pi}{4p}I_p. \end{align} We conclude from \eqref{E:pr992} and \eqref{E:pr994} that $C_p-2C_{2p}=F_p-2F_{2p}$, or equivalently \[C_p-F_p=2(C_{2p}-F_{2p}).\] Hence, \begin{equation}\label{E:pr995} \forall\,m\geq1,\qquad C_p-F_p=2^m(C_{2^mp}-F_{2^mp}). \end{equation} Now, using Lemma~\ref{pr81} to replace $H_p$ in Lemma~\ref{pr72}, we obtain \begin{align*} \frac{\pi}{p^2}J_p&=\ln(2\pi)-H_p-\frac{1}{2p}+\mathcal{O}\left(\frac{1}{p^2}\right)\\ &=\ln(2\pi)-\ln p-\gamma-\frac{1}{p} +\mathcal{O}\left(\frac{1}{p^2}\right). \end{align*} Thus $F_p=\mathcal{O}\left(\frac{1}{p^2}\right)$. Similarly, from the fact that $c_n=\mathcal{O}\left(\frac{1}{n^2}\right)$ we conclude also that $C_p=\mathcal{O}\left(\frac{1}{p^2}\right)$. Consequently, there exists a constant $\kappa$ such that, for large values of $p$ we have $\abs{C_p-F_p}\leq \kappa/p^2$. So, from \eqref{E:pr995}, we see that for large values of $m$ we have \[ \abs{C_p-F_p}\leq\frac{\kappa}{2^mp^2}, \] and letting $m$ tend to $+\infty$ we obtain $C_p=F_p$, which is equivalent to the announced result. \end{proof} \bg \qquad Combining Proposition~\ref{pr95} and Proposition~\ref{pr82}, we obtain: \begin{proposition}\label{pr96} For $p\geq2$ and $m\geq 1$, we have \[ \pi J_p= -p^2\ln p+(\ln(2\pi)-\gamma)p^2 -p -\sum_{k=1}^{m-1}\frac{b_{2k}\zeta(2k)}{ k\cdot p^{2k-2}} +(-1)^m\frac{\zeta(2m)}{m\cdot p^{2m-2}}\eps_{p,m}, \] with $0<\eps_{p,m}<\abs{b_{2m}}$, where $\zeta$ is the well-known Riemann zeta function. \end{proposition} \qquad Using the values of the $\zeta(2k)$'s (see \cite[Formula 9.542]{grad}), and considering separately the cases $m$ even and $m$ odd we obtain the next corollary.
\begin{corollary}\label{cor97} For every positive integer $p$ and every nonnegative integer $n$, the sum of cotangents $J_p$ defined by \eqref{E:J} satisfies the following inequalities: \begin{align*} J_p&< \frac{1}{\pi}\left(-p^2\ln p+(\ln(2\pi)-\gamma)p^2 -p\right) +2\pi\sum_{k=1}^{2n}(-1)^k\frac{b_{2k}^2}{ k\cdot(2k)!} \left(\frac{2\pi}{ p}\right)^{2k-2},\\ \noalign{\text{and}} J_p&> \frac{1}{\pi}\left(-p^2\ln p+(\ln(2\pi)-\gamma)p^2 -p\right) +2\pi\sum_{k=1}^{2n+1}(-1)^k\frac{b_{2k}^2}{ k\cdot(2k)!} \left(\frac{2\pi}{ p}\right)^{2k-2}. \end{align*} \end{corollary} \bg As an example, for $n=0$ we obtain the following double inequality, which is valid for $p\geq1$: \[ 0 < \frac{1}{\pi}\left(-p^2\ln p+(\ln(2\pi)-\gamma)p^2 -p\right)-J_p<\frac{\pi}{36}. \] \bg \begin{remark} \label{rm98} Note that we have proved the following results. For a positive integer $p$: \begin{align*} \sum_{n=1}^\infty(-1)^{n-1}(H_{pn}-\ln(pn)-\gamma)&=\frac{\ln(\pi/2)-\gamma-\ln p}{2}+\frac{\ln 2}{2p} +\frac{\pi}{4p}\sum_{k=1}^{p-1}\csc\left(\frac{k \pi}{p}\right).\\ \sum_{n=0}^\infty(-1)^{n}(H_{p(n+1)}-H_{pn})&=\frac{\ln 2}{p} +\frac{\pi}{2p}\sum_{k=1}^{p-1}\csc\left(\frac{k \pi}{p}\right).\\ \sum_{n=1}^\infty\left(H_{pn}-\ln(pn)-\gamma-\frac{1}{2pn}\right)&= \frac{\ln p+\gamma-\ln(2\pi)}{2}+\frac{1}{2p}+\frac{\pi}{2p^2}\sum_{k=1}^{p-1}k\cot\left(\frac{k \pi}{p}\right). \end{align*} These results are to be compared with those in \cite{kou}, see also \cite{kou1}. \end{remark}
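The identities of Remark \ref{rm98} can also be checked numerically. The Python sketch below (an illustration; the number of terms and the partial-sum averaging are ad hoc choices used to accelerate the alternating series) tests the second identity, $\sum_{n=0}^\infty(-1)^n(H_{p(n+1)}-H_{pn})=\frac{\ln 2}{p}+\frac{\pi}{2p}I_p$:

```python
import math

def I(p):
    """I_p = sum_{k=1}^{p-1} csc(k*pi/p), with I_1 = 0 (empty sum)."""
    return math.fsum(1.0 / math.sin(k * math.pi / p) for k in range(1, p))

def E(p, terms=4000):
    """Partial sums of sum_{n>=0} (-1)^n (H_{p(n+1)} - H_{pn}); averaging two
    consecutive partial sums accelerates this slowly converging series."""
    s, prev = 0.0, 0.0
    for n in range(terms):
        block = math.fsum(1.0 / j for j in range(p * n + 1, p * (n + 1) + 1))
        prev, s = s, s + (-1) ** n * block
    return (s + prev) / 2.0

for p in (1, 2, 3, 5, 8):
    assert abs(E(p) - (math.log(2) / p + math.pi / (2 * p) * I(p))) < 1e-5
```

For $p=1$ the left-hand side reduces to the alternating harmonic series, so the identity specializes to $E_1=\ln 2$, consistent with $I_1=0$.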
\section{Introduction} The behavior of multi-point correlation functions and S-matrix amplitudes at large particle number is of interest for various applications. At tree level, there is an $N!$ enhancement of large $N$ $N$-point correlation functions in perturbative quantum field theory, as initially studied by Voloshin in \cite{Voloshin} and developed by many authors \cite{Browntree} \cite{Argyresetal} \cite{Sonetal} \cite{Khozeetal} \cite{RajuN}. For S-matrix amplitudes that produce $N$ outgoing quanta, this occurs because the contributions from low-order interaction vertices build up many tree diagrams, of order $N!$. For some regimes of couplings and kinematics, this enhancement is known to survive the sum over tree diagrams (which can be derived equivalently from the classical field configuration) and to persist in the presence of sufficiently small quantum corrections. The interaction probability -- obtained by squaring the amplitude and integrating over final particle momenta -- contains one factor of $N!$ in the denominator from the phase-space measure for identical particles, leaving a net enhancement. For example, in $\lambda\phi^4$ theory, the $1\to N$ amplitude near threshold is of order $\lambda^{N/2} N! + \text{loops}$, and the decay probability is of order $\lambda^N N! + \text{loops}$. In the setting of particle decays and scattering it is not clear to what extent this effect survives in the quantum theory when $\lambda N$ is not perturbatively small. As noted in \cite{Khozeetal}, if it did persist it would have dramatic implications for Higgs physics, leading to a large decay width for the Higgs: the Higgs would fail to be a good quasiparticle at a relatively low energy scale.
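To see the competition between coupling suppression and factorial growth in the $\lambda^N N!$ estimate above, a small arithmetic sketch (using an illustrative coupling value, not a physical one) locates the particle number at which this factor stops decreasing, $N\sim 1/\lambda$:

```python
import math

lam = 0.01  # illustrative coupling strength (an assumption for this sketch)

def log_factor(N):
    """Logarithm of lam**N * N!, computed via lgamma to avoid overflow."""
    return N * math.log(lam) + math.lgamma(N + 1)

# The ratio f(N+1)/f(N) = lam*(N+1) crosses 1 at N + 1 = 1/lam, so the net
# factor lam**N * N! is minimized near N = 1/lam and grows factorially beyond.
values = {N: log_factor(N) for N in range(1, 400)}
N_min = min(values, key=values.get)
assert 98 <= N_min <= 101
```

This simple ratio test is what makes the regime $\lambda N\sim 1$ the interesting one: below it the coupling suppression wins, above it the factorial growth does.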
More recent analyses \cite{criticalrefs}\ do not obtain such growth in somewhat similar quantities; still, by investigating this they uncover an interesting emergent 't~Hooft expansion arising from a semiclassical approximation, related to results in large-charge quantum field theory \cite{largecharge}. From this perspective, it seems interesting in contrast that we will find a positive result for factorial growth in the setting of early universe cosmology, where the required calculations are actually easier to control. In time-dependent backgrounds, such as that arising in early universe cosmology, we may ask a similar question for $0\to N$ processes. A prime example is the set of connected $N$-point in-in correlation functions relevant for studies of primordial non-Gaussianity, the moments of the probability distribution for scalar fluctuations. The main object of interest there is the wavefunction of primordial perturbations which seed the structure in the universe. We may write it schematically as \begin{equation} \Psi(\zeta(\mathbf{x}), \gamma(\mathbf{x}), \{\chi(\mathbf{x}) \}; \{ \lambda \}) \end{equation} where $\zeta$ and $\gamma$ are the scalar and tensor perturbations, $\{ \chi \}$ represents additional sectors of fields not directly observable, and $\{ \lambda \}$ denotes the parameters (couplings) of the theory that generates the perturbations. The probability distribution for the observables $\zeta$, $\gamma$ is derived from this by tracing over the $\chi$ sector, \begin{equation}\label{probdist} {\cal L}(\zeta(\mbf x), \gamma(\mbf x)| \{\lambda\})=\text{Tr}[\rho |\zeta\gamma\rangle \langle \zeta \gamma|] =\int D\chi |\Psi |^2, ~~~~\rho=\text{Tr}_\chi[|\Psi\rangle\langle\Psi|]. \end{equation} Observations indicate that this is at least approximately Gaussian \cite{Gdata}. A Gaussian distribution arises in free field theory when the system starts in its ground state (or any other Gaussian initial state).
In any other situation, the state is non-Gaussian at some level. For a perturbative quantum field theory, the non-Gaussianity vanishes in the limit of zero couplings $\{\lambda\}\to 0$. But for mildly perturbative couplings (such as those arising in particle physics at appropriate scales, with $\lambda\sim 10^{-2}$), the effects of interactions are not arbitrarily small, and it is interesting to compute them and constrain them with data as systematically as possible. In situations where the quantum fields in the early universe interact arbitrarily weakly, one can immediately characterize the non-Gaussianity via low-point correlation functions. These are already rich with different possible shapes in kinematic space \cite{shapes, Gdata}, which encode various aspects of the dynamics. However, even within the class of field theories with perturbative couplings $\lambda <1$, interaction effects can build up during inflation \cite{Starobinsky}\cite{GS}\cite{productive}\ and reheating \cite{BondBilliards}. This in turn can lead to non-Gaussianity that is not well captured by the lowest-point correlation function \cite{productive}\cite{BondBilliards}\cite{AndreiWeb}\cite{PBHpaper}\cite{PajerN}\cite{Bruno}.\footnote{See also \cite{outsideHNG}\ for an interesting analysis of multifield evolution beyond the observable horizon and its relation to multipoint correlators and local inferences.} The structure of the paper is as follows: After explaining qualitatively why the $N!$ enhancement is tractable in dS space in Section 2, we present the methods in detail in Section 3. We then present the enhancement for a toy theory that is fully solvable, and prove it for a large class of theories in Section 4. In Section 5, we investigate implications of this for primordial non-Gaussianity searches, and in Section 6 we summarize and mention directions for further research.
\section{Simplifications of dS space and local\\non-Gaussianity} It is perhaps surprising that we are able to derive a general $N!$ enhancement for correlation functions in an inflationary setting while no such result exists for Minkowski space. The results on Minkowski space are accessible in specific regimes of coupling and kinematics in the theory of interest. For example, \cite{Voloshin} focuses on $\lambda \phi^4$ and \cite{Sonetal} on the weak-coupling multi-particle limit $\lambda n\to \epsilon$, with $\lambda$ being the coupling, $n$ the number of particles produced and $\epsilon$ fixed. The simplicity of dS space comes from the freezing out of the modes after horizon crossing. In the multifield context, there remains meaningful dynamics outside the horizon, and the dilution of gradients enables the stochastic approach to inflation \cite{SalopekBond}\cite{Starobinsky}\ which descends from the full quantum theory as in \cite{GS}. Those approaches are able to resum some of the loop contributions by exploiting the fact that $-k\eta \ll 1$, where $\eta\sim - e^{-Ht}/H$ is the conformal time, which decays at late times exponentially in FRW time $t$. In QFT in Minkowski space, one has much less control over the loop effects that could spoil the tree-level enhancement of the correlation functions. That is why specific regimes of the phase space were enforced by hand in the initial investigations of the flat spacetime problem. In the cosmological case, the accelerated expansion itself restricts the phase space naturally. This is not the first time this phenomenon of greater simplicity in de Sitter than in flat spacetime has arisen. It has even made an appearance in rigorous mathematics (related to physics): the proof of stability of Kerr black holes \cite{Kerrstability}\ applies in de Sitter spacetime but not otherwise. This is for a similar reason, involving the dilution of perturbations from the accelerated expansion.
The multifield inflationary scenarios that generate local non-Gaussianity capture this simplicity. It enables us to analyze large tails of the primordial scalar perturbations in a controlled way \cite{PBHpaper}. Once the long modes have exited the horizon and frozen out, we mix the additional field sectors with the inflaton. As we will see, in the mixing, dS helps us again by suppressing the momentum conjugate to the inflaton by $a^3$ and enabling us to write the wavefunction evolved by the mixing Hamiltonian as a simple shift in field space. It would be interesting to explore whether single-field inflationary perturbations can produce the same $N!$-enhanced correlation functions that we find here\footnote{This goes beyond the low-point functions analyzed in e.g. \cite{Juan}\cite{DBI}\cite{EFT}, following early work including \cite{SalopekBond}\cite{EarlyNG}.}. There, correlation functions would go like (assuming the tree diagrams add up constructively) \begin{equation} \langle \zeta_1 \cdots \zeta_N \rangle \sim N! \lambda^{\alpha N} (1+c_1 \lambda^{\beta} N^2+\dots) \end{equation} with $\alpha$ and $\beta$ constants depending on the order of the interaction and $\lambda$ a dimensionless coupling constant. For example, for a cubic interaction, $\alpha = 1$ and $\beta = 2$, and for a quartic interaction, $\alpha = \frac{1}{2}$ and $\beta = 1$. The leading loop effects come from joining any two lines with a propagator, or any two lines at a point, respectively. These loop effects can be very large and require resummation. For $\lambda \phi^4$ in some regimes, previous work \cite{Sonetal} was able to resum the contributions controlled by $\lambda N^2$, relegating the question to the effect of those controlled by $\lambda N$. Even those may be calculable, although this case seems more similar to the particle physics case (\cite{criticalrefs}\ versus \cite{Khozeetal}), something that would be interesting to generalize to cosmological correlators.
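To get a feel for the numbers in this schematic estimate (ignoring the loop corrections just discussed, and assuming constructive addition of the tree diagrams), one can ask at which $N$ the tree-level factor $N!\,\lambda^{\alpha N}$ first exceeds unity; Stirling's formula gives $N\sim e\,\lambda^{-\alpha}$. A quick check for a cubic interaction, $\alpha = 1$:

```python
import math

def log_tree_estimate(N, lam, alpha=1.0):
    # log of the schematic tree-level factor N! * lam^(alpha N)
    return math.lgamma(N + 1) + alpha * N * math.log(lam)

lam = 1e-2            # a mildly perturbative coupling
N = 1
while log_tree_estimate(N, lam) < 0.0:
    N += 1
print(N, math.e / lam)   # crossing near the Stirling estimate e/lambda ~ 272
```

The factorial overwhelms the coupling suppression at $N$ of a few hundred for $\lambda\sim 10^{-2}$, which is why control over the $\lambda N^2$ and $\lambda N$ loop corrections is the essential question at such orders.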
We will leave this to future work. \section{General Setup and Methods} Observations of cosmological scalar\footnote{From now on we suppress the tensor perturbations, which have not been detected at least as of this writing. However, our analysis can be straightforwardly generalized to include tensor modes.} perturbations $\zeta(\mbf x)$ may be compared to those predicted by a theoretical probability distribution depending on some parameters $\{\lambda \}$. The likelihood, or probability of the data given the theory, is given by squaring and tracing over the non-observable fields as in (\ref{probdist}): \begin{equation}\label{Likgen} {\cal L}(\zeta(\mbf x) |\{\lambda\})= \int D\chi |\Psi(\zeta(\mbf x), \chi(\mbf x); \{\lambda\})|^2. \end{equation} Ideally we would compute this functional theoretically, and compare it to data directly. At CMB scales, we would evaluate it on the map; large scale structure may enable a volume's worth of data points, and in \cite{PBHpaper}\ we were led to shorter-scale probes. This determines whether, according to the theory, the data is higher-probability with null values $\{\lambda\}=0$ or for some nonzero values of the couplings (and with what significance). That is not always tractable, so it is useful to work with other quantities derived from the full likelihood. The set of connected correlation functions of $\zeta$ is a useful quantity, which is sometimes easier to compute than the full probability distribution. These are generated by $W(J)$, defined by \begin{equation}\label{WJ} e^{W(J)}= \int D\zeta e^{\int J\zeta} {\cal L}(\zeta | \{\lambda\}) \end{equation} by taking $N$ functional derivatives with respect to $J$: \begin{equation}\label{NpfW} \langle \zeta_{\mbf k_1}\dots\zeta_{\mbf k_N}\rangle_C = { \left.\frac{\delta^N}{\delta J_{\mbf k_1}\dots \delta J_{\mbf k_N}} W(J)\right|_{J_{\mbf k_i}=0}} \end{equation} setting $J$ to zero at the end. 
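The combinatorics relating moments (derivatives of $e^{W(J)}$) and connected correlators (derivatives of $W(J)$) can be checked in zero dimensions with the standard moment--cumulant recursion. A sketch (the exponential distribution here is just a convenient test case, with moments $m_n=n!$ and cumulants $\kappa_n=(n-1)!$ following from $W(J)=-\log(1-J)$):

```python
import math

def cumulants_from_moments(m, nmax):
    """kappa_n = m_n - sum_{k=1}^{n-1} C(n-1, k-1) kappa_k m_{n-k},
    i.e. repeated derivatives of W(J) = log of the moment generating function."""
    kappa = {}
    for n in range(1, nmax + 1):
        kappa[n] = m[n] - sum(
            math.comb(n - 1, k - 1) * kappa[k] * m[n - k] for k in range(1, n)
        )
    return kappa

# test case: exponential distribution with unit rate, m_n = n!,
# for which W(J) = -log(1 - J) and hence kappa_n = (n-1)!
m = {n: math.factorial(n) for n in range(1, 9)}
kappa = cumulants_from_moments(m, 8)
print([kappa[n] for n in range(1, 9)])   # [1, 1, 2, 6, 24, 120, 720, 5040]
```

The same recursion applies field point by field point, which is what we exploit below when extracting the factorial structure from one-point quantities.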
We will find that these connected correlators scale like $N!$ in a wide class of inflationary scenarios with at least one additional light field. In some cases, there is also an exponential enhancement $\sim \lambda_r^N$, with $\lambda_r$ a ratio of couplings in the model. To simplify the analysis, we will often work with another quantity derived from the full likelihood -- the histogram of temperature fluctuations, also known as the one-point probability density function. Given a realization of the field, we can count points with a given fluctuation $\hat\zeta$: \begin{equation}\label{histfromP} N_{\hat\zeta}= k_{\rm max}^3 \int d\mbf x' \delta(\zeta(\mbf x')-\hat\zeta) \end{equation} where $1/k_{\rm max}$ is the resolution of the survey, which for simplicity is assumed to be uniform. We can compare this to the average of the histogram according to the field-theoretic distribution (\ref{Likgen}), given by \begin{equation}\label{histfromPavg} \langle N_{\hat\zeta}\rangle=k_{\rm max}^3 \int d\mbf x' \int D\zeta(\mbf x) {\cal L}(\zeta(\mbf x)|\{\lambda\}) \delta(\zeta(\mbf x')-\hat\zeta) \end{equation} This is the probability of measuring a given value $\hat\zeta$ of $\zeta$ at one point, having traced out the field at all other points. It can also be used to calculate the $N$-point functions at a single point. In scenarios containing one or more additional light non-shift-symmetric fields present during inflation, this theoretical averaged histogram is determined by a combination of the stochastic methods of Starobinsky as in \cite{Starobinsky}, and the mixing between field sectors. In different regimes one or the other of these may be relevant. We will review this and make use of it below. \subsection{Local non-Gaussianity}\label{twofield} A standard form of non-Gaussianity with amplitude parameterized by $f_{\rm NL}^{\rm local}$ is sensitive to the presence of one or more additional fields $\chi$.
If these are light, they develop a variance during inflation similar to that of the inflaton perturbations $\delta\phi$. But unlike the inflaton field, their super-horizon interactions are not constrained by symmetries, and they may imprint nonlinearities on the scalar perturbations via a variety of mechanisms \cite{BondBilliards}\cite{LocalNGmechanisms}. Their evolution outside the horizon is ultralocal, as we will review shortly. At the level of the bispectrum, the local shape of non-Gaussianity, which contains correlations between long and short modes, can only be generated if at least one additional field is present \cite{Juan}\cite{EFT}. In this section, we will set up a class of models of this kind and determine the relative importance of the bispectrum versus other aspects of the distribution, including the power spectrum and higher point correlators. We will make some special choices in specifying the scenario in order to make the calculations as simple as possible. After deriving the factorial enhancement explicitly in a simple example, we will show that it extends to a much wider class of models. Consider a system with two fields, the inflaton $\phi$ and another scalar $\chi$. We denote the wave functional of the perturbations $\delta\phi(\mbf x)$ and $\chi(\mbf x)$ as $\Psi(\delta\phi(\mbf x), \chi(\mbf x), t)$; we will eventually trace out $\chi$ because { $\zeta\sim H\delta\phi/\dot\phi$ will be the directly observed scalar perturbation}. 
{We are interested { for simplicity} in cases where the scalar perturbation is dominated by the mostly-Gaussian fluctuations of $\delta\phi$, but where there is an additional, potentially very-non-Gaussian contribution, which will dominate in higher-$N$ $N$-point functions of $\delta\phi$.} {This is somewhat analogous to the cases in \cite{productive}, although the origin of the enhanced non-Gaussianity will be different (coming from factorial enhancements of connected correlation functions).} As above, the probability distribution at time $t_0$ will be given by the functional integral \begin{equation}\label{Probdelphi} P(\delta\phi)=\int D\chi |\Psi(\delta\phi, \chi, t_0)|^2 = \text{Tr} [\rho |\delta\phi\rangle\langle \delta\phi|] \end{equation} where $\rho = \int D\chi |\Psi\rangle\langle\Psi|$ is the density matrix obtained by tracing out $\chi$. There is a wide range of initial conditions that lead to inflation; see \cite{Eastetal,Kleban:2016sqm}\ for some recent developments. However, we will simply start from the Bunch-Davies vacuum. This is a conservative choice for our purposes, as it avoids introducing non-Gaussianity at the level of the initial state. We would like to understand the possible $N!$ enhancement of $0\to N$ processes in the time-dependent background, so we start in the vacuum, with no particles in the initial state. To separate issues we will prescribe various time-dependent couplings which can be mediated by fields that evolve outside the horizon, e.g. at reheating, as discussed extensively in the early literature on multifield inflation and non-Gaussianity such as \cite{LocalNGmechanisms}. In particular, we will introduce mixing between $\chi$ and $\delta\phi$ after they have evolved independently over $\sim N_e$ e-foldings. { To begin, {for each mode $k$, there is a time $t_{c,k}\sim \log(k/k_*)/H$ at which it has just exited {the horizon}.
Let us denote by $t_c$ the time at which all modes accessible in the CMB have exited the horizon.} At this time}, as just mentioned, we have a direct product state \begin{equation}\label{productstate} \Psi(\delta\phi,\chi, t_c) =\psi_G(\delta\phi, t_c)\psi_\perp (\chi, t_c) \end{equation} where we are neglecting slow-roll corrections and hence $\psi_G$ is the approximately Gaussian state of the inflaton fluctuations, of the form \begin{equation}\label{psiGform} \psi_G (f) \sim \sqrt{{\rm det}(C)} \exp(- f C^{-1} f) \end{equation} with covariance matrix \begin{equation}\label{CGauss} C \sim \delta(\mbf k+\mbf k') P_{\delta\phi}(k), ~~~~~ P_{\delta\phi}(k)\sim \frac{H^2}{k^3} \end{equation} encoding scale invariant perturbations. There are in principle many choices for the state of the transverse sector and its dynamics. We will consider a light field $\chi$, of mass $m_\chi\ll H$, starting in its ground state. For the full range of $\chi$, we take its potential energy $V(\chi)$ to be subdominant to the inflaton potential in sourcing inflation; the slow roll conditions are satisfied separately in the $\chi$ directions. The interactions in the $\chi$ sector build up over a large number of e-foldings $N_e$, with each mode outside the horizon affected by a stochastic distribution of shorter modes \cite{Starobinsky}\cite{SalopekBond}. In the next section, we will illustrate this buildup of nonlinearities. In some cases we may focus on the late-time limit, and its equilibrium 1-point probability distribution. Here each `point' is a patch whose size is the correlation length $R_S$ described below, and the distribution obtained by tracing over the other patches is the equilibrium solution to the appropriate Fokker-Planck equation \cite{Starobinsky}\ \begin{equation}\label{latedist} \int D\chi(x\ne x_0) |\Psi_\perp(\chi)|^2\to \rho_{eq}\sim {\cal N}_{eq} e^{- 4\pi^2 V(\chi (x_0))/3 H^4} \end{equation} where ${\cal N}_{eq}$ is a normalization factor.
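For instance, for a single field with $V(\chi)=\lambda\chi^4$, the equilibrium variance implied by (\ref{latedist}) is $\langle\chi^2\rangle = \sqrt{3/(4\pi^2\lambda)}\,\frac{\Gamma(3/4)}{\Gamma(1/4)}\,H^2$, which can be checked by direct quadrature. A minimal numerical sketch (units $H=1$; the value of $\lambda$, the cutoff and the grid size are illustrative choices):

```python
import math

lam, H = 0.1, 1.0
a = 4.0 * math.pi**2 * lam / (3.0 * H**4)      # rho_eq ~ exp(-a chi^4)

def simpson(f, lo, hi, n=20000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

# second moment of the one-point equilibrium distribution (symmetric, so
# integrate over chi > 0 only; the cutoff is far into the suppressed tail)
Z0 = simpson(lambda x: math.exp(-a * x**4), 0.0, 8.0)
Z2 = simpson(lambda x: x * x * math.exp(-a * x**4), 0.0, 8.0)
var_numeric = Z2 / Z0

# closed form via int_0^infty x^{s-1} e^{-a x^4} dx = a^{-s/4} Gamma(s/4)/4
var_exact = a**-0.5 * math.gamma(0.75) / math.gamma(0.25)
print(var_numeric, var_exact)    # ~ 0.2946 H^2 for lambda = 0.1
```

The $1/\sqrt{\lambda}$ scaling of the variance is the familiar statement that a flatter potential lets $\chi$ diffuse further before the drift balances the stochastic kicks.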
This was worked out in detail, with a focus on the $\lambda\chi^4$ theory, in \cite{Starobinsky}. Subleading corrections to this classical stochastic approximation and its derivation from the full quantum field theory were examined in \cite{GS}. { Similar results hold for multiple $\chi$ fields, and other potentials, but with an interesting subtlety. To explain what we mean by this, let us focus on potentials of the form \begin{equation}\label{Vpowers} V(\chi)=\mu^{4-p}|\chi|^p \end{equation} This is a particular family of models motivated by the potential-flattening effects of multiple, generically massive, fields as we review further below \cite{flattening}. The behavior at the origin in (\ref{Vpowers}) may be smoothed out by integrating in additional fields, but for the present discussion this will not be necessary and in fact the form (\ref{Vpowers}) leads to a very simple analysis. The Fokker-Planck equation for the one-point pdf of the long modes of $\chi$ takes the form \begin{equation}\label{FPone} \frac{\partial\rho_1}{\partial t}=\frac{H^3}{8\pi^2}\frac{\partial^2\rho_1}{\partial\chi^2}+\frac{1}{3 H}\frac{\partial}{\partial\chi}\left({V'(\chi)\rho_1}\right) \end{equation} The equilibrium solution arises from setting $\partial \rho_1/\partial t =0$. To capture the approach to equilibrium (when it pertains), we need more general solutions. It is useful to work, as reviewed in \cite{PBHpaper}, in a basis of eigenstates of the operator on the right hand side, which gives an analogue Schrodinger problem \cite{Starobinsky} \begin{equation}\label{StarSchrod} \left( -\frac{\partial^2}{\partial\chi^2} + [v'(\chi)^2-v''(\chi)]\right)\Phi_n(\chi)= \left(-\frac{\partial}{\partial\chi}+v'(\chi)\right) \left(\frac{\partial}{\partial\chi}+v'(\chi)\right) \Phi_n(\chi)=\frac{8\pi^2\Lambda_n}{H^3}\Phi_n(\chi) \end{equation} with $v(\chi)=4\pi^2 V(\chi)/3 H^4$.
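As a consistency check, $\Phi_0\propto e^{-v(\chi)}$ is annihilated by $(\partial_\chi + v'(\chi))$ and is therefore a zero mode of the operator $-\partial_\chi^2 + v'^2 - v''$, i.e. $\Lambda_0=0$. A finite-difference sketch for the smooth $p=4$ member, with $v(\chi)=c\,\chi^4$ (the constant $c$, the grid and the step size are illustrative choices):

```python
import math

c = 1.0                                    # v(chi) = c chi^4 (illustrative)
v  = lambda x: c * x**4
vp = lambda x: 4.0 * c * x**3              # v'
w  = lambda x: vp(x)**2 - 12.0 * c * x**2  # Schrodinger potential v'^2 - v''

phi = lambda x: math.exp(-v(x))            # candidate zero mode Phi_0

h = 1e-3
residual = 0.0
for i in range(1, 4000):
    x = -2.0 + i * h
    lap = (phi(x - h) - 2.0 * phi(x) + phi(x + h)) / h**2
    residual = max(residual, abs(-lap + w(x) * phi(x)))
print(residual)   # vanishes up to discretization error: Lambda_0 = 0
```

The same check applies to any smooth $v(\chi)$; for the cuspy members of (\ref{Vpowers}) the delta-function contribution to $v''$ at the origin is what binds the ground state, as discussed next.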
The effective potential $w(\chi)\equiv v'(\chi)^2-v''(\chi)$ in this problem leads to a vanishing lowest eigenvalue, $\Lambda_0=0$; this corresponds to the solution $\propto e^{-v(\chi)}$ (\ref{latedist}) as can be seen immediately from the middle form of (\ref{StarSchrod}). When the nonzero eigenvalues $\Lambda_{n>0}$ are gapped, as we approach equilibrium the non-equilibrium terms are suppressed exponentially, $\sim e^{-\Lambda_n (t-t_0)}$. In the family of models (\ref{Vpowers}), the effective Schrodinger potential $w(\chi)$ has a delta function potential well at the origin which holds the ground state (or a smoothed version with $\Lambda_*$ turned on). (This comes from the $v''(\chi)$ term, with the derivatives acting on the cusp at the origin.) For $p > 1$, $w(\chi)\to\infty$ as $|\chi|\to\infty$ and the energy levels are discrete. For $p=1$, $w(\chi)$ approaches a positive constant at large field values: there is a continuum above a gap, with $\Lambda_{gap}/H\sim (\mu/H)^6$. For $p<1$, $w(\chi)\to 0$ as $|\chi|\to\infty$, leading to an ungapped continuum of excited states. It is straightforward to verify in this formalism that the $p=0$ case reproduces free field theory fluctuations. } \subsection{Mixing with $\phi$ and the probability distribution for $\zeta$} Finally at a late time $t_0$, to convert $\chi$ to $\delta\phi$, we introduce a mixing interaction \begin{equation} \mathcal{S}_{mix} = \int dt \int d\mathbf{x}\, a(t)^3 F_{mix}(\chi) \dot{\phi}^2 \end{equation} with support between times $t_0$ and $t_0+\Delta t$. We can understand the effect of this interaction by noting that during inflation, $\dot{\phi} = \dot{\overline{\phi}}(t) + \delta \dot{\phi}(\mathbf{x}, t)$, where the first term is the leading homogeneous piece. 
Thus, the interaction is, to leading order \begin{equation} \mathcal{S}_{mix} \sim \int dt \int d\mathbf{x}\, \dot{\overline{\phi}} [a(t)^3 \delta \dot{\phi}] F_{mix}(\chi) \end{equation} We can write this in terms of the conjugate momentum to the inflaton fluctuation $\delta\phi$, $\Pi_{\delta\phi} = a(t)^3 \delta \dot{\phi}$ leading to a mixing Hamiltonian \begin{equation} H_{mix} = \int d\mathbf{x}\, \dot{\overline{\phi}}\,\Pi_{\delta\phi} F_{mix}(\chi) \end{equation} that dominates over the free Hamiltonian, as described in \cite{PBHpaper}; for completeness we briefly summarize the setup here. The operator $\Pi_{\delta\phi}$ is the generator of translation in field space and so the evolution over $\Delta t$ is just a shift of the wavefunction: \begin{equation} \Psi(\chi, \delta\phi, t_0 + \Delta t) = \Psi(\chi, \delta\phi + \dot{\overline{\phi}}\Delta t F_{mix}(\chi), t_0) \end{equation} Putting all this together, the likelihood for $\delta\phi\sim \zeta \dot\phi/H$ is then given to good approximation by \begin{equation}\label{mixed} {\cal L}(\delta\phi|\{\lambda\}, \kappa) =\int D\chi_0 ~ |\psi_\perp (\chi_0, t_0)|^2 ~ |\Psi_G(\delta\phi +\kappa F_{mix}(\chi_0))|^2 \end{equation} up to $1/N_e$ corrections. Here we have defined $\kappa \equiv \dot{\overline{\phi}}\Delta t$. After this step of evolution, we postulate that reheating quickly leads to a local thermal distribution, with $\delta\phi\sim \zeta\dot\phi/H$ distributed according to the likelihood (\ref{mixed}). Given this, $\zeta$ remains constant during the remaining evolution outside the horizon, and (\ref{mixed}) contains the primordial non-Gaussianity. 
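The net effect of (\ref{mixed}) is easy to simulate: draw $\chi_0$ from $|\psi_\perp|^2$, draw a Gaussian $\delta\phi$, and shift by $-\kappa F_{mix}(\chi_0)$. A Monte Carlo sketch with purely illustrative choices (a Gaussian $\psi_\perp$, the quadratic $F_{mix}(\chi)=\chi^2/M+\chi$ of the solvable example in section \ref{WJNpf}, and all parameters set to one); for these choices a short calculation gives the exact third cumulant $-(8a^3+6ab^2)=-14$, with $a=\kappa\sigma_\chi^2/M$ and $b=\kappa\sigma_\chi$:

```python
import random

random.seed(1)
kappa_mix, M, sigma_chi, sigma_phi = 1.0, 1.0, 1.0, 1.0
F = lambda chi: chi**2 / M + chi

# draw from the shifted-Gaussian likelihood: delta phi = g - kappa F(chi_0)
samples = []
for _ in range(200000):
    chi0 = random.gauss(0.0, sigma_chi)
    g = random.gauss(0.0, sigma_phi)
    samples.append(g - kappa_mix * F(chi0))

mean = sum(samples) / len(samples)
m3 = sum((s - mean) ** 3 for s in samples) / len(samples)
print(mean, m3)   # third cumulant near -(8 a^3 + 6 a b^2) = -14
```

Tracing out a Gaussian $\chi_0$ through a nonlinear $F_{mix}$ is thus already enough to imprint a large connected three-point function on $\delta\phi$.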
For the case where we reach the equilibrium distribution in the $\chi$ sector, the one-point pdf for $\delta\phi$, defined as in (\ref{histfromPavg}), is easily computed by Gaussian integration \begin{equation}\label{histapprox} \langle N_{\delta\hat\phi}\rangle=\int d\vec\chi_0 \,{\cal N}_{eq} \exp(-4\pi^2 V(\vec\chi_0)/3 H^4) \frac{\exp(-(\delta\hat\phi{+}\kappa F_{mix}(\vec\chi_0))^2/2\sigma^2)}{\sqrt{2\pi}\sigma} \ , \end{equation} where we used (\ref{latedist}), and we have allowed for the possibility of multiple $\chi$ fields. Here the width $\sigma$ is given by \begin{equation}\label{sighist} \frac{1}{2\sigma^2}= C^{-1}_{x', x'}+ 4 C^{-1}_{x', \perp}(C^{-1}_{\perp,\perp})^{-1}C^{-1}_{\perp, x'} \end{equation} where $C$ is the covariance matrix in position space, and $\perp$ denotes points not equal to $ x'$. This width is of order $H$. Again, the ${\delta\hat\phi}$ histogram (\ref{histapprox}) can be traded for the corresponding $\hat\zeta\sim H{\delta\hat\phi}/\dot\phi$ histogram. \subsection{Regime of applicability of the equilibrium distribution}\label{sec:equilibriumregime} { Let us now spell out the regime of applicability of the equilibrium distribution. This depends in part on the relative size of various relevant patches. In the derivation of the equilibrium distribution, following \cite{Starobinsky}\ let us denote the correlation length as \begin{equation} R_S \sim H^{-1}e^{H/\Lambda_1}. \end{equation} As reviewed in \cite{PBHpaper}, this can be read off from the two point correlation function. We must compare this to two other scales. The first of these is the size of the observable patch of the universe, \begin{equation}\label{Robs} R_{obs}=\frac{1}{H}e^{n_e} \end{equation} where $n_e<60$ is the total number of e-foldings of phenomenological inflation. The second is the scale of resolution of the CMB, or of some shorter-scale probe such as PBHs.
This we will parameterize as \begin{equation}\label{Rres} R_{res}\sim \frac{1}{H} e^{n_e-\Delta n_e} \end{equation} The number of independent patches is \begin{equation}\label{Np} N_P = \left(\frac{R_{obs}}{R_S}\right)^3 \end{equation} To have more than one patch in the observable universe, each of which is larger than the resolution, we need \begin{equation} e^{n_e-\Delta n_e} < R_S H < e^{n_e} \end{equation} In other words, the equilibrium distribution applies in a straightforward way for \begin{equation}\label{parameterwindow} \frac{1}{n_e}< \frac{\Lambda_1}{H} < \frac{1}{n_e-\Delta n_e} \end{equation} where the eigenvalues $\Lambda_n$ depend on the model parameters as determined by (\ref{StarSchrod}). For example, for the $p=1$ model $V(\chi) = \mu^3|\chi|$ we find $\frac{\Lambda_1}{H}\sim \frac{\mu^6}{H^6}$, while for the $p=4$ model $V(\chi)\sim \lambda \chi^4$ we have $\frac{\Lambda_1}{H}\sim \sqrt{\lambda}$ \cite{Starobinsky}. If we restrict attention to the CMB, then this particular scenario, with the $\chi$ sector reaching the equilibrium distribution, pertains for a rather particular value of the coupling in this family of models (although one which might arise in a rich potential landscape). Moreover, once it reaches equilibrium, the contribution $\chi$ makes to the fluctuations is very blue. For shorter scale probes, such as primordial black holes, there is a wide window of applicability as described in \cite{PBHpaper}. However, at least in that context the stochastic evolution of $\chi$ is only applicable to the leading observables if the potential drifts outward, for reasons explained in \cite{PBHpaper}.} As we will review further below, the mixing interaction itself can introduce strong non-Gaussianity associated with the tail of the distribution. \subsection{Flattened directions in field space and non-Gaussian tails} The effect of $\vec\chi$ on the histogram for ${\delta\hat\phi}$ can be understood analytically to some extent.
We will be particularly interested in the tails of the distribution. To see whether or not the Gaussian tail dominates for ${\delta\hat\phi}\gg H$, consider field configurations where the Gaussian suppression is canceled by the $\vec\chi_0$ field: \begin{equation}\label{tailcanc} F(\vec\chi_{0, tail})\simeq -{\delta\hat\phi}/\kappa. \end{equation} In that regime, the probability is suppressed by $\exp(-4\pi^2 V(\vec\chi_{0, tail})/3 H^4)$. If, in this direction (or directions) in field space, the potential $V(\vec\chi_{0, tail})$ is flatter than quadratic in ${\delta\hat\phi}$, then the non-Gaussian tail dominates over the Gaussian at sufficiently large ${\delta\hat\phi}$. In order to be potentially observable, the overall probability of this tail must be larger than $1/N_{P}$ where $N_P$ is the number of independent data points in the survey volume: roughly, \begin{equation}\label{Probcondition} \int_{tail} d{\delta\hat\phi} \frac{{\cal N}_{eq}}{\sqrt{2\pi}}\exp(-4\pi^2 V(\vec\chi_{0, tail}({\delta\hat\phi}))/3 H^4) > \frac{1}{N_{P}} \end{equation} Flattened potentials arise naturally from adjustments of heavier fields as in \cite{flattening}\ as well as for other reasons such as those studied in \cite{DBIStochastic}\cite{alpha}. In fact, constraining the non-Gaussian tail in our scenario gives us a new way to probe large field ranges, in the $\chi$ sector rather than the inflaton sector. The less efficient our conversion is (i.e. for smaller mixing $\kappa$), the larger the field range we probe.\footnote{This is somewhat reminiscent of observations in \cite{LocalNGmechanisms}.} Here, we probe the field range via the quantum (effectively stochastic) fluctuations of $\chi$ rather than the classical motion of $\chi$, and via non-Gaussianity rather than the tensor-to-scalar ratio. { Of course, the distributions differ in other ways than asymptotically on the tail.
In some cases, including an axionic $\chi$ field, the non-Gaussian histogram contains an intermediate region where it exceeds the Gaussian, before rejoining the Gaussian tail further out. When probability moves to a region away from the origin in ${\delta\hat\phi}$, this is compensated by a suppression of probability near the origin. Low-point moments are sensitive to the latter effect, and it is a quantitative question to determine which measurements best capture the difference between the two distributions. PBH formation is directly sensitive to the tail, as we analyzed in \cite{PBHpaper}. But other parts of the distribution may lead to other signals and constraints to take into account. We will comment on this briefly after deriving the factorial enhancement in a wide class of multifield models.} \section{The generating functional for connected $N$-point functions and $N!$ enhancement}\label{WJNpf} One tractable probe of the distribution is its moments, the $N$-point correlation functions. From a purely theoretical point of view as well, we would simply like to determine the fate of the factorial enhancement \cite{Voloshin}\ in our cosmological setting. We can estimate the $N$ dependence of the $N$-point functions by first extracting the connected ones by computing the generating functional \begin{equation}\label{WJgen} Z(J)=e^{-W[J]}=\int D{\delta\phi}\; {\cal L}[{\delta\phi}|\lambda, \kappa] \;e^{-\int J {\delta\phi}} \end{equation} with the connected $N$-point function given by \begin{equation}\label{Wderivs} {\left.\frac{\delta^N W}{\delta J_{\mbf k_1}\dots \delta J_{\mbf k_N}} \right|_{J_{\mbf k_i}=0}} \sim \langle {\delta\phi}_{\mbf k_1}\dots {\delta\phi}_{\mbf k_N} \rangle_{C}. \end{equation} and the disconnected diagrams obtained from a similar formula with $W(J)$ replaced with $Z(J)$.
For our case, the likelihood takes the special form (\ref{mixed}), so we get \begin{equation}\label{WJus} e^{-W[J]}=\int D\chi_0|\psi_\perp[\chi_0]|^2 \int D{\delta\phi} P_G[{\delta\phi}-\kappa F(\chi_0)] e^{-\int J {\delta\phi}} \end{equation} The path integral over ${\delta\phi}$ is a Gaussian, and gives \begin{equation}\label{WJnext} e^{-W[J]}\sim e^{J { C} J} \int D\chi_0 |\psi_\perp[\chi_0]|^2 e^{-\int \kappa J F(\chi_0)} \end{equation} where $C$ is the Gaussian covariance (\ref{CGauss}). We will discuss saddle point estimates for the $\chi_0$ integral below, working with the 1-point pdf (histogram). But first, we will derive $W(J)$ for a special choice of $\psi_\perp$ and $F(\chi)$ which is nontrivial but completely calculable in the full quantum field theory. For the transverse state, we will simply take a Gaussian $\psi_G(\chi_0)$ (\ref{psiGform}). For the mixing interaction, we consider \begin{equation}\label{quadchi} F(\chi_0) = \frac{\chi_0^2}{M} + \chi_0 \end{equation} This form arose from interesting (p)reheating dynamics in \cite{BondBilliards}. If the mass parameter $M$ is of order $H$, this is fully nonlinear; the effective coupling is $H/M$. \subsection{Full field theory calculation in a special case} For this case, the result is \begin{align}\label{WJquad} W[J] &\sim \int d\mbf k d\mbf k' \sqrt{P_{\delta\phi}(k)}J_\mbf k { \left[ \delta_{\mbf k,\mbf k'}+\kappa^2\left(\delta_{\mbf k\k'}+\kappa \frac{J_{-\mbf k+\mbf k'}}{M}\sqrt{P_{\delta\phi}(k)P_{\delta\phi}(k')}\right)^{-1} \right]}\sqrt{P_{\delta\phi}(k')}J_{\mbf k'} \nonumber\\ & ~~~~~~ -\frac{1}{2} {\rm Tr~ log} \left({\delta_{\mbf k \mbf k'}} + \kappa { \frac{J_{-\mbf k+\mbf k'}}{M}}\sqrt{P_{\delta\phi}(k)P_{\delta\phi}(k')} \right) + \text{const} \nonumber \\ \end{align} Evaluating the derivatives (\ref{Wderivs}) after expanding $W[J]$ in a power series in $J$ gives us the following result for the $N$-point function. 
From the top line of (\ref{WJquad}) we obtain, { for $N>2$,}~\footnote{The $N$-point functions for $\zeta$ are obtained by the rescaling $\zeta\sim H \delta\phi/\dot\phi$.} \begin{align}\label{Npftree} \langle\delta\phi^N\rangle|_0 &\sim \frac{\kappa^N }{M^{N-2}}\delta(\sum\mbf k) P_{\delta\phi}(k_1)P_{\delta\phi}(|\mbf k_1+\mbf k_2|) P_{\delta\phi}(|\mbf k_1+\mbf k_2+\mbf k_3|)\dots P_{\delta\phi}(|\mbf k_1+\dots +\mbf k_{N-2}|) P_{\delta\phi}(k_N)\nonumber\\ &+ {\rm permutations} \nonumber\\ &\sim \kappa^N N! \end{align} which has the structure of a tree diagram. From the second line we obtain \begin{align}\label{Npfloop} \langle\delta\phi^N\rangle|_1&\sim\frac{\kappa^N }{M^N} \delta(\sum\mbf k)\int d^3\mbf k P_{\delta\phi}(k)P_{\delta\phi}(|\mbf k_1+\mbf k|)P_{\delta\phi}(|\mbf k_2+\mbf k_1+\mbf k|)\dots P_{\delta\phi}(|\mbf k_{N-1}+\dots +\mbf k_1+\mbf k|) \nonumber\\ &+ {\rm permutations} \nonumber\\ &\sim \kappa^N N! \end{align} This has the structure of a loop diagram. These contributions both have the expected scaling with momenta for a nearly scale-invariant theory. The amplitude is enhanced by $N!$. {The overall level of non-Gaussianity of the map is naively of order $\frac{\langle\delta\phi(x)^N\rangle_c}{\langle\delta\phi(x)^2\rangle^{N/2}}\sim N! \kappa^N \left(\frac{H}{M}\right)^{N-2}$ {from (\ref{Npftree})}}, but this does not generally reflect the actual observable level. 
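The factorial growth in (\ref{Npftree})--(\ref{Npfloop}) is visible already in a zero-dimensional analogue of the quadratic mixing (\ref{quadchi}): for a Gaussian $\chi_0$ of unit variance, the connected moments (cumulants) of $\chi_0^2$ are $\kappa_n = 2^{n-1}(n-1)!$, following from $W(J)=-\frac{1}{2}\log(1-2J)$. A short exact-arithmetic sketch, using the standard moment--cumulant recursion and the Gaussian moments $\langle \chi_0^{2n}\rangle=(2n-1)!!$:

```python
import math

def double_factorial_odd(n):
    """(2n-1)!! = Gaussian moments <chi^(2n)> for unit variance."""
    out = 1
    for k in range(1, 2 * n, 2):
        out *= k
    return out

m = {n: double_factorial_odd(n) for n in range(1, 11)}
kappa = {}
for n in range(1, 11):
    kappa[n] = m[n] - sum(
        math.comb(n - 1, k - 1) * kappa[k] * m[n - k] for k in range(1, n)
    )
    # connected n-point function of chi^2: factorial growth
    assert kappa[n] == 2 ** (n - 1) * math.factorial(n - 1)
print([kappa[n] for n in range(1, 8)])   # [1, 2, 8, 48, 384, 3840, 46080]
```

The chain of propagators in the tree diagram of Figure \ref{diagramsNpf} is the field-theoretic counterpart of this combinatorial growth.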
\begin{figure}[htbp] \centering \begin{subfigure}[c]{0.49\textwidth} \centering \begin{tikzpicture} \draw [very thick] (0,0) -- (5,0) (0,0) -- (0,1) (1,0) -- (1,1) (4,0) -- (4,1) (5,0) -- (5,1); \node at (0,1.3) {$\vec{k}_1$}; \node at (1,1.3) {$\vec{k}_2$}; \node at (4,1.3) {$\vec{k}_{N-1}$}; \node at (5,1.3) {$\vec{k}_N$}; \node [scale=2] at (2.5,1.3) {$\dotsm$}; \end{tikzpicture} \end{subfigure} \begin{subfigure}[c]{0.49\textwidth} \centering \begin{tikzpicture} \draw [very thick] (0,0) circle (0.7) (0,0.7) -- (0,1.7) (0.7*0.71, 0.7*0.71) -- (1.7*0.71, 1.7*0.71) (0.7,0) -- (1.7, 0) (0.7*0.71, -0.7*0.71) -- (1.7*0.71, -1.7*0.71) (-0.7*0.71, 0.7*0.71) -- (-1.7*0.71, 1.7*0.71); \node at (0,2) {$\vec{k}_1$}; \node at (2*0.71, 2*0.71) {$\vec{k}_2$}; \node at (2, 0) {$\vec{k}_3$}; \node at (2*0.71, -2*0.71) {$\vec{k}_4$}; \node at (-2*0.71, 2*0.71) {$\vec{k}_N$}; \node [rotate=-45,scale=2] at (-0.9, -0.9) {$\dotsm$}; \end{tikzpicture} \end{subfigure} \caption{A diagrammatic representation of the two contributions to the $N$-point function described in the text, (\ref{Npftree}) on the left and (\ref{Npfloop}) on the right.} \label{diagramsNpf} \end{figure} For more general $\psi_\perp(\chi)$ and $F(\chi)$, we can obtain similar results, now using a saddle point approximation to the integral. Shortly we will see that for a rather generic (but not entire) nonlinear function, the order $N$ term in the expansion of $W[J]$ has no factorial suppression. Hence, $N$-point correlators obtained by the $N$th functional derivative will be factorially enhanced. To make this clear, we can study a 1d integral version of the problem, the histogram of scalar fluctuations defined above. We will show this in the next subsection. \subsection{More general theories and the factorial enhancement} We would like to understand how general the factorial enhancement is, given a more generic model than the one just analyzed.
Clearly small perturbations around the example above will not change the conclusion. More generally, we can analyze this by considering the histogram version of the integral (\ref{WJnext}). This suffices to capture the $N$-point functions at coincident points, and hence it is enough to determine the factorial structure. (However, it does not necessarily capture the strongest tails in the full quantum field theory.) For this exercise, let us consider the histogram arising from the equilibrium distribution \cite{Starobinsky}\ discussed above: \begin{equation}\label{WJnextoy} Z(J)=e^{-W(J)} \sim \int d\vec\chi_0 \exp\left(-\frac{4\pi^2 V(\vec\chi_0)}{3 H^4} - \kappa J F(\vec\chi_0) \right). \end{equation} We have not included the $e^{JCJ}$ term here, as it only contributes to the 2-point function, nor the normalization, since it drops out of the $N$-point function growth. The $N$-point functions are obtained by taking $N$ ordinary derivatives of $W(J)$, which still captures the combinatorial factors. These depend on the behavior of the coefficients in a series expansion of $W(J)$ (or $Z(J)$ for the disconnected diagrams). We can assess the combinatorial factor using the structure of the integrals that arise in the expansion with respect to $J$. The disconnected diagrams are generated by \begin{equation}\label{Zexpansion} Z(J) = \sum_n z_n \kappa^n J^n \end{equation} The corresponding disconnected $N$-point functions go like $z_N N!\kappa^N$. So these have a factorial enhancement if the coefficients $z_n$ are not suppressed by $1/n!$, in which case the series has a finite radius of convergence (possibly zero, meaning the series is only asymptotic). For that to be the case, the function $Z(J)$ should not be entire. One way that a function can fail to be entire is if it diverges somewhere in the complex $J$ plane. This in turn depends on whether the potential $V(\vec\chi_0)$ grows more quickly than $F(\vec\chi_0)$ in every direction in field space.
If not, then the disconnected diagrams have a factorial growth, and the distribution has a non-Gaussian tail at sufficiently large ${\delta\hat\phi}$, as discussed above. For the connected $N$-point functions derived from $W(J)$ we have a similar criterion for a factorial enhancement. These may have an enhancement even when the disconnected correlators do not (one can see an example of this simply from the fact that taking the logarithm leads to non-analyticity at the zeros of (\ref{WJnextoy})). We proceed by evaluating the integral by saddle point. The saddle point equation for $\chi_{0I}^*$ is: \begin{equation}\label{saddleqn} \frac{4\pi^2 \partial_IV(\vec\chi_0^*)}{3 H^4} + \kappa J \partial_I F(\vec\chi_{0}^*) = 0 \end{equation} It will be useful to express the function $F$ evaluated on the solution of this equation as a power series \begin{equation}\label{saddleqnsol} F(\vec\chi_{0}^{*}(J)) = \sum_{n=0}^\infty a_{n} J^n \end{equation} The saddle point value of $W(J)$ is \begin{equation}\label{Wchicstar} W(J) =\frac{4\pi^2 V(\vec\chi_0^*)}{3 H^4} + \kappa J F(\vec\chi_{0}^*). \end{equation} Differentiating $W(J)$ with respect to $J$ and using (\ref{saddleqn}) we obtain the differential equation \begin{equation}\label{Wdiffeqn} \frac{dW}{dJ} = \kappa F(\vec\chi_{0}^{*}(J)) \end{equation} which upon integration gives \begin{equation}\label{Wpowseries} W(J) = \kappa \sum_{n=0}^\infty \frac{a_n J^{n+1}}{n+1} \end{equation} {after we impose that $W(J=0)=0$.} The $N$-point functions are thus $\sim a_N N!$. The main question is then how the coefficients $a_N$ scale. If they cancel the $N!$ enhancement, then clearly the power expansion of $F(\vec\chi_0^{*}(J))$ must converge for all complex $J$ and thus the function must be entire. This is a very stringent requirement and is not true in general. To simplify the analysis, let us now specialize to the case of a single variable $\chi_0$. Equation (\ref{saddleqn}) can be viewed as an inversion problem.
We are given $J = g(\chi_0^*)$, where the function $g$ is \begin{equation}\label{gdef} g(x)= -\frac{4\pi^2 V'(x)}{3 H^4 \kappa F'(x)} \end{equation} and we want to show that $F\circ g^{-1}$ is not entire. Consider, for example, $F(x)\propto x$. Then a sufficient (but not necessary) condition is that $g'$ has roots at finite points in the complex plane, since that would imply that the derivative of $g^{-1}$ has a pole at the corresponding value of $J$. Furthermore, for this particular example of $F$, this reduces to the question of whether $V''(x)$ has roots in the complex plane. For instance, for $V(x)=\frac{1}{2}m^2 x^2+\frac{\lambda}{4}x^4$ (with $m^2,\lambda>0$), $g'$ vanishes at the finite points $x_*=\pm i\, m/\sqrt{3\lambda}$, so $F\circ g^{-1}$ has branch points at the finite values $J_*=g(x_*)$ and the expansion (\ref{saddleqnsol}) has a finite radius of convergence. \subsection{Example of non-analytic $W(J)$} It is possible for the saddle point equation to have a solution which is not Taylor expandable. A simple but important example is a potential of the form $V(\chi_0) = |\frac{\chi_0}{M}|^p$, with $p > 1$ and $F(\chi_0) = \chi_0$. The integral is then (neglecting all constants, as they can be absorbed into the definition of $J$) \begin{equation}\label{WJnonanalytic} Z(J) = e^{-W(J)} \sim \int_{-\infty}^\infty d\chi_0 \exp\left(-|\chi_0|^p - J\chi_0\right) = 2\int_0^\infty dx e^{-x^p}\cosh(Jx) \end{equation} To conclude that the connected correlation functions exhibit an $N!$ enhancement, it suffices to show that there exists a $J_0\in \mathbb{C}$ such that $Z(J_0) = 0$. Then $W$ has a logarithmic branch cut at $J = J_0$ and is thus not analytic everywhere. To that end, let $J$ be purely imaginary and define $J\equiv i y$. Then, \begin{equation} Z(iy) = 2\int_0^\infty dx e^{-x^p} \cos(y x) \end{equation} As a function of $y$, this is purely real and continuous. Therefore, if we can show that it takes a negative value, it follows that it also has a zero.
For concreteness, take $y=2\pi$ \begin{equation} Z(2\pi i) = 2\int_0^\infty dx e^{-x^p} (1-2\sin^2(\pi x)) = 2\Gamma\left(1+\frac{1}{p}\right) - 4\int_0^\infty dx e^{-x^p} \sin^2(\pi x) \end{equation} We can make a series of approximations to the final integral \begin{align} \int_0^\infty dx e^{-x^p} \sin^2(\pi x) &> \int_0^1 dx e^{-x^p} \sin^2(\pi x) > \int_0^1 dx (1-x^p)\sin^2(\pi x) \nonumber\\ &> \frac{1}{2} - \int_0^1 dx x^p \pi^2 (x-1)^2 = \frac{1}{2} - \frac{2\pi^2}{(p+1)(p+2)(p+3)} \end{align} to conclude that \begin{equation} Z(2\pi i) < 2\Gamma\left(1+\frac{1}{p}\right) - 2 + \frac{8\pi^2}{(p+1)(p+2)(p+3)} \end{equation} which is negative for $p > 6.4$ and goes as $-\frac{2\gamma}{p}$ for large $p$, with $\gamma$ the Euler-Mascheroni constant. For values close to $p=2$, we can numerically Taylor-expand $Z(2\pi i)$ around $p=2$. This gives \begin{equation} Z(2\pi i) \approx 9.17\times 10^{-5} - 3.85\times 10^{-2} (p-2) \end{equation} which is negative for $p > 2.003$. We can fill in the intermediate regime by taking more terms in the approximations above. The result is that $Z(2\pi i)$ is negative for all $p > 2.003$. \section{Comments on observational implications} It is interesting to apply these results to primordial non-Gaussianity searches. It sharpens the question of systematically mapping out the ideal probe of non-Gaussianity (low-point functions versus the histogram or higher moments).\footnote{As mentioned above, the dominance of higher-point functions has arisen previously in examples \cite{BondBilliards}\cite{AndreiWeb}\cite{PBHpaper}\cite{PajerN}\cite{productive}\cite{Bruno}. Another previous incarnation of this question led to a negative result in a different context, as explained in \cite{34confusion}.} In the present context, this may be model-dependent as a result of the exponential dependence of the tail of the distribution on the fields and parameters. In \cite{PBHpaper}\ we focused on primordial black hole production, which occurs on shorter scales than the CMB.
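As a quick numerical cross-check of these sign estimates (an illustrative script of our own, not part of the analysis above), one can evaluate $Z(2\pi i)=2\int_0^\infty dx\, e^{-x^p}\cos(2\pi x)$ directly by quadrature:

```python
import math

def Z_2pi_i(p, x_max=8.0, n=16000):
    """Evaluate Z(2*pi*i) = 2 * int_0^infty exp(-x**p) * cos(2*pi*x) dx by the
    composite trapezoidal rule; the integrand is negligible beyond x_max for
    the exponents p considered here."""
    h = x_max / n
    # Endpoint terms: the integrand is 1 at x = 0.
    total = 0.5 * (1.0 + math.exp(-x_max ** p) * math.cos(2 * math.pi * x_max))
    for i in range(1, n):
        x = i * h
        total += math.exp(-x ** p) * math.cos(2 * math.pi * x)
    return 2.0 * h * total

# p = 2: the exact value is sqrt(pi) * exp(-pi**2), small but positive.
print(Z_2pi_i(2.0))   # ~ 9.2e-5
# For p above ~2.003 the value turns negative, signaling a zero of Z(J).
print(Z_2pi_i(3.0))
print(Z_2pi_i(8.0))
```

The $p=2$ value matches the constant term in the Taylor expansion quoted above, and the sign flip at larger $p$ is consistent with the analytic bounds.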
In this section, we will consider the histogram (\ref{histfromPavg}), which might be applied to the CMB or large scale structure. As described above in section \ref{sec:equilibriumregime}, the applicability of the stochastic nonlinearities is limited to a narrow (but nonvanishing) window in coupling (\ref{parameterwindow}). However, the mixing itself introduces heavy tails in the distribution in appropriate cases, and in those examples there is no such limitation. \subsection{Signal to Gaussian noise formula and its limitations}\label{sec:SNprobs} In the collider physics version of this quantum field theory problem \cite{Voloshin}\cite{Browntree} \cite{Argyresetal} \cite{Sonetal} \cite{Khozeetal}\cite{criticalrefs}, the quantity of physical interest is the cross section (the squared $N$-point amplitude), which is factorially enhanced at tree level sufficiently close to threshold. The analogous squared quantity in our case, formally, would be the signal-to-noise estimate for an $N$-point function estimator. In all examples with a factorial enhancement, the ratio of the non-Gaussian mean to the Gaussian variance, which we review shortly, is similarly factorially enhanced. This by itself would naively indicate a generic new discovery window for non-Gaussianity. However, it is necessary to analyze the full distribution of the estimator to determine how likely such a discovery would be, and this turns out to be model-dependent. By working in the cosmic variance limited regime of CMB observations, we can focus on the noise introduced by the quantum fluctuations of the fields themselves. In general, this is highly nontrivial, with a covariance matrix \begin{equation}\label{Ncovar} C^{(N)}_{\{\mbf k_1,\dots,\mbf k_N\}, \{\mbf k'_1,\dots,\mbf k'_N\}}=\langle\zeta_{\mbf k_1}\dots\zeta_{\mbf k_N} \zeta_{\mbf k'_1}\dots\zeta_{\mbf k'_N}\rangle \end{equation} which is a $2N$-point function.
Including only the noise from Gaussian fluctuations, and including only connected contributions to the $N$-point functions, this matrix is diagonal and leads to a relatively simple expression \begin{eqnarray}\label{SNsq} &&(S/N)^2 =\int_{\{\mbf k\}\,,\{\mbf k'\}}\langle\zeta_1\dots\zeta_N\rangle_{C}^{ *}\; {C^{(N)}}{(\{\mbf k\},\{\mbf k'\})}^{-1}\; \langle\zeta_{1'}\dots\zeta_{N'}\rangle_{C}\\ \nonumber &&\qquad \to\quad \int_{\{ \mbf k \}} \frac{|\langle\zeta_{\mbf k_1}\dots\zeta_{\mbf k_N}\rangle_C|^2}{N! \prod P(k_i)} \equiv (S/N)^2_G \end{eqnarray} where \begin{equation}\label{Pzeta} P (k) \sim \frac{H^4}{\dot\phi^2 k^3} \end{equation} is the power spectrum for $\zeta$. Here the $N!$ in the denominator compensates for the unrestricted momentum integrals over the $N$ identical fields in the final state. This is similar to the $1/N!$ arising in the multiparticle density of states for scattering with identical final particles. The integrals over phase space are restricted to \begin{equation}\label{kminkmax} k_{\rm min}< \{ |{\bf{k}}|\} < k_{\rm max} \end{equation} where $k_{\rm min}\sim 1/L$ with $L$ the size of the survey, and $k_{\rm max}$ is the largest momentum scale we can probe. One can analyze this quantity, finding that it has an interesting enhancement related to the $N!$ growth of correlators. Nonetheless, the probability of a detection for a given $N_{pix}$ is model-dependent within this class. The reason that the nominal S/N is not a good guide is that the distribution of the estimator may be highly non-Gaussian. We see that explicitly below in figure \ref{fig:likelihoods}.
\subsection{Basic estimates of observational sensitivity} One diagnostic of the information available to distinguish the non-Gaussian probability distribution from the Gaussian one is the relative entropy (also known as the Kullback-Leibler divergence), an average of the log of the ratio of likelihoods at two values of some theoretical parameter $\lambda$: \begin{equation}\label{RelSdef} {\cal S}_{rel}\equiv \int D\zeta P(\zeta)\log\left(\frac{{\cal L}(\zeta(\mbf x) |\{\lambda\})}{{\cal L}(\zeta(\mbf x) |\{0\})}\right) \end{equation} Here, $P(\zeta)$ may be taken to be either of the two probability distributions; ${\cal S}_{rel}$ is not symmetric. The leading term in its Taylor expansion about $\{\lambda\}=0$ is quadratic in $\lambda$, with coefficient given by the Fisher metric $F_{\lambda\lambda}$. As we will see, in some cases the relative entropy (\ref{RelSdef}) is well approximated by this leading term, the constraint on $\lambda$ is well estimated by the inverse of the Fisher metric, and low-point correlation functions suffice to achieve this constraint. In other cases, this term is subdominant, and there is more information available (e.g. on the tail of the distribution). Moreover, certain observables (such as primordial black hole production \cite{PBHpaper}) are specifically sensitive to the tail. {The analysis above establishes factorial enhancement of $N$-point functions for the families of models parameterized in (\ref{powersVf}), and it is clear that this extends to many others. We note that the factorial enhancement of the connected diagrams is universal in this class, while that of the disconnected diagrams is model dependent. For example, we can parameterize a class of models by \begin{equation}\label{powersVf} V = \mu^{4-p} (\Lambda_*^2+\chi^2)^{p/2}, ~~~~~~~~~~ F(\chi)= H \left(\frac{\chi}{H}\right)^m \xrightarrow[m \to \infty]{} H e^{2{\chi}/M_*} \end{equation} The tail becomes stronger with larger $m/p$.
Small values of $p$ emerge from the flattening mechanism discussed in \cite{flattening}; moreover, with more generic kinetic terms, the possibilities proliferate, at least in some cases leading to a flatter distribution for different reasons \cite{DBIStochastic}\cite{alpha}. Large integer values of $m$ do not appear particularly well-motivated a priori, but the $m\to\infty$ limit leads to a Wilsonian-natural model of a hyperbolic field space \begin{equation}\label{hypmodel} F(\chi) \sim H e^{2{\chi}/M_*} \end{equation} similar to the structure of the kinetic terms considered in e.g. \cite{hyperbolic}. As mentioned in \cite{PBHpaper}, this has a very heavy tail compared to the Gaussian case. We can think of the first expression in (\ref{powersVf}) as an ad hoc parameterization of the slope of the potential in the direction giving the strongest contribution to the tail. In this class, the heavier-than-Gaussian tails only arise for $p<m$. So for example, the $\chi^4$ theory with mixing $m\le 4$ has a Gaussian tail asymptotically, but still has factorially enhanced connected $N$-point functions.} One can also analyze fields with an underlying periodicity, something also considered in \cite{Bruno}. In the case (\ref{hypmodel}), the dominant contribution to the non-Gaussianity is from the mixing interaction, liberating us from the condition (\ref{parameterwindow}) as anticipated above. { { \subsubsection{Corrections to the power spectrum ($N=2$)} Before considering the tail of the distribution, it is interesting to ask what the effect of the mixing is on the power spectrum. At order $\kappa^2$, we get a correction to the 2-point function.
First, we note that \begin{equation}\label{1pf} \langle\delta\phi\rangle = \kappa \langle F(\chi_0)\rangle + {\cal O}(\kappa^3) \end{equation} Let us shift away the unobservable zero mode, defining \begin{equation}\label{shiftphi} \delta\phi = f +\langle\delta\phi\rangle \end{equation} where, from (\ref{1pf}), $\langle \delta\phi \rangle \simeq \kappa \langle F(\chi_0)\rangle$. We then have a probability distribution \begin{equation}\label{Pf} {\cal L}(f|\kappa)= \int D\chi_0 |\psi_\perp[\chi_0]|^2P_G[f+\langle\delta\phi\rangle -\kappa F(\chi_0) ], \end{equation} the likelihood of measuring a fluctuation $f$ given $\kappa$. Let us define $P_{\chi_0}(k)$ by \begin{equation}\label{deftwo} \langle F(\chi_{0})_{\mbf k_1}F(\chi_{0})_{\mbf k_2} \rangle= \int D\chi_0 |\psi_\perp[\chi_0]|^2 F(\chi_{0})_{\mbf k_1}F(\chi_{0})_{\mbf k_2} \equiv P_{\chi_0}(k_1)\delta(\mbf k_1+\mbf k_2). \end{equation} Expanding the likelihood in $\kappa$, we find \begin{equation}\label{Pfexp} {\cal L}(f|\kappa)= {\cal L}(f|0)\left(1+ \frac{1}{2} \kappa^2 \int d\mbf k \frac{P_{\chi_0}(k)}{P_{\delta\phi}(k)^2}f_\mbf k f_{-\mbf k} +\dots\right) \end{equation} with \begin{equation}\label{Pf0} {\cal L}(f| 0) = \exp\left( -\frac{1}{2}\int d\mbf k \frac{1}{P_{\delta\phi}(k)}f_\mbf k f_{-\mbf k} \right) \end{equation} At order $\kappa^2$, this simply means \begin{equation}\label{powershift} P_{\delta\phi}(k)\to P_{\delta\phi}(k)+\kappa^2 P_{\chi_0}(k) \end{equation} The $\chi$ sector modifies the power spectrum at order $\kappa^2$. In the regime we are focused on, with couplings satisfying $\lambda n_e^2\ge 1$, the function $P_{\chi_0}(k)$ will have a fully nonlinear dependence on $\log(k)$. In other words, it will not be a simple perturbative expansion in tilt, running, etc., in contrast to minimal single-field slow roll inflationary models.
In the absence of non-Gaussianity, this could potentially provide an upper bound on $\kappa$ of order \begin{equation}\label{kapupper} \Delta\kappa|_{2pf}\sim \frac{1}{N_{P}^{1/4}}\sqrt{\frac{P_{\delta\phi} }{P_{\chi_0}}} \end{equation} which in itself is an improvement over the bound from the bispectrum constraint on $f_{\rm NL}^{\rm local}$, which scales like $N_{P}^{-1/6}$ in this regime. Conversely, there is a similar improvement in the discovery potential in the two-point function given (\ref{kapupper}). }} \subsubsection{Information in the tail for a family of models} Here we analyze the non-Gaussian histogram quantitatively for the family of models defined in (\ref{powersVf}). Although the factorial enhancement of connected $N$-point functions is general, the accessible information beyond the 2-point function is model-dependent. We will classify the regimes according to the behavior of the histogram and the various $N$-point functions. (Even the 2-point function is informative, especially for theories with $\lambda n_e^2>1$, as there is no suppression of the running versus the tilt and so on.) \smallskip {\bf{Analytic estimates for the size of the tail}} \smallskip Before getting into detailed analysis, we can estimate the size of the tail at the upper bound on $\kappa$ that could be inferred from a bound on corrections to the 2-point function. Let us consider the class of models described above in (\ref{powersVf}).
For these, we can write the histogram as \begin{eqnarray}\label{powersdist} \langle N_{\delta\hat\phi}\rangle &=& \int d\chi_0 \,{\cal N}_{eq} \exp(-4\pi^2 \mu^{4-p}|\chi_0|^p/3 H^4) \frac{\exp(-(\delta\hat\phi{+}\kappa H (\chi_0/H)^m)^2/2\sigma^2)}{\sqrt{2\pi}\sigma} \ , \nonumber\\ &=& \int d\tilde\chi_0 \,\tilde{\cal N}_{eq} \exp(-|\tilde\chi_0|^p) \frac{\exp(-(\delta\hat\phi{+}\tilde\kappa H \tilde\chi_0^m)^2/2\sigma^2)}{\sqrt{2\pi}\sigma} \nonumber\\ \end{eqnarray} where the only parameter that enters is \begin{equation}\label{kaptilde} \tilde\kappa = \frac{\kappa}{(\mu (4\pi^2/3)^{1/(4-p)}/H)^{m(4-p)/p}} \end{equation} To estimate the size of the tail, we use the relations \begin{equation}\label{tailrelations} \tilde\chi_{0, tail}^m\sim \frac{{\delta\hat\phi}/H}{\tilde\kappa} \sim \frac{\tilde\chi_{0, tail}^{p/2}}{\tilde\kappa} \end{equation} The first relation here is (\ref{tailcanc}), and the second is the crossover between the dominance of the Gaussian in ${\delta\hat\phi}$ and the dominance of the tail, $\sim\exp(-\tilde\chi^p)$. Putting these together, we have a suppression of the tail by a factor \begin{equation}\label{tailkappa} \exp(-\frac{1}{\tilde\kappa^{p/(m-p/2)}}) \end{equation} In this section, we will imagine that we have observational access to all $N$ point functions, and work out the information content of the tail versus low point correlators. In \cite{PBHpaper}\ we focused on an application to PBH formation, which is specifically sensitive to the tail (although even in that context, the variance can play a role as in \cite{SMBHmu}). 
In that spirit, if we evaluate $\tilde\kappa$ at the bound one could obtain from the 2-point function \begin{equation}\label{tilddap2pf} \tilde\kappa^2 \frac{\Gamma(\frac{1+2m}{p})}{\Gamma(\frac{1}{p})} < \frac{1}{\sqrt{N_{P}}} \end{equation} this scales like \begin{equation}\label{tailscale} \exp\{-N_{P}^{p/(4(m-p/2))}\left(\frac{\Gamma(\frac{1+2m}{p})}{\Gamma(\frac{1}{p})}\right)^{p/(2m-p)} \} \end{equation} For the special model described around (\ref{quadchi}), we effectively have $p=2, m=2$. (In this case, we are not working with the equilibrium Starobinsky distribution, but the model is equivalent to the one with these values of $p$ and $m$.) With $N_P=N_{pix}\sim 10^6$, this evaluates to $\exp(-\sqrt{N_{pix}}\Gamma(5/2)/\Gamma(1/2))\simeq 10^{-326}$, hence nowhere near observable in the CMB. But relatively small changes in parameters make a big difference; larger $m$ (e.g. of order 10) leads to much less suppression. Formally, smaller values of $p$ would also do this, but dialing $p$ in that way introduces the need to satisfy (\ref{parameterwindow}). For the analysis in this section, we will illustrate the information content by considering different ratios of $p/m$. This captures the effect of dialing up the parameter $m$, which is motivated by the fact that large $m$ matches onto the natural model (\ref{hypmodel}) on a hyperbolic field space geometry. \smallskip {\bf{Numerical analysis}} \smallskip Here, we construct realizations of the non-Gaussian distributions discussed in the above section. {We evaluate whether low-point correlation functions are in principle best for detecting them, or whether instead other aspects of the distribution, such as the tail or higher-point correlators, contain more information.
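The $10^{-326}$ estimate above is simple to reproduce numerically (a convenience check of our own):

```python
import math

# Tail suppression exp(-sqrt(N_pix) * Gamma(5/2) / Gamma(1/2)) for p = m = 2.
N_pix = 10 ** 6
ratio = math.exp(math.lgamma(2.5) - math.lgamma(0.5))  # Gamma(5/2)/Gamma(1/2) = 3/4
log10_suppression = -math.sqrt(N_pix) * ratio / math.log(10.0)
print(ratio)              # 0.75
print(log10_suppression)  # about -325.7, i.e. a suppression of order 10^-326
```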
} For the purposes of this section, we will use the following family of distributions: \begin{equation} P(-\infty < \phi < \infty) = \frac{1}{2\sqrt{2\pi}\Gamma\left(\frac{1}{p}+1\right)}\int_{-\infty}^\infty d\chi \exp\left(-|\chi|^p - \frac{(\phi - {k}^{\frac{1}{p}} \chi^m)^2}{2} \right) \end{equation} The relation between $k$ and $\tilde\kappa\propto k^{1/p}$ can be read off from (\ref{powersdist}) above. With this normalized distribution and a numerical analysis, we will check the estimates made above for models with accessible information on the tail. If we focus for simplicity on $N$-point functions, we can determine which $N$ would be best for detecting the non-Gaussianity. We generate a large number of Gaussian and non-Gaussian realizations (data sets), each containing $N_{P}$ points. We evaluate the even $N$-point function estimator on each simulated map. For each $N$, we find the range in which 90\% of the Gaussian results fall, starting from zero. In other words, for each correlation function, we find where the 90th percentile lies in the Gaussian realizations. We then compute the percentage of non-Gaussian realizations that are above that 90th percentile. For tail-dominated models, such as the hyperbolic model (\ref{hypmodel}), this can be a large percentage, as we will see in an example below. We also do this for the likelihood. To be more specific, we consider the following estimators for the $N$-point functions: \begin{equation} \hat{\mathcal{E}}_N = \frac{1}{N_{P}}\sum_{i=1}^{N_{P}} \phi_i^N \end{equation} where $\phi_i$ are the $N_{P}$ data points drawn from either a Gaussian or a non-Gaussian distribution.
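A minimal Monte Carlo sketch of this procedure (our own illustrative implementation; the Gamma-transform sampling of $e^{-|\chi|^p}$, the restriction to integer $m$, and the specific sample sizes and seeds are our choices, not the paper's):

```python
import math
import random

def sample_ng(n, p, m, k, rng):
    """Draw n points phi = k**(1/p) * chi**m + unit Gaussian noise, where chi
    has density proportional to exp(-|chi|**p): sample t ~ Gamma(1/p, 1) and
    set |chi| = t**(1/p), with a random sign (m is assumed to be an integer)."""
    c = k ** (1.0 / p)
    out = []
    for _ in range(n):
        chi = rng.gammavariate(1.0 / p, 1.0) ** (1.0 / p)
        if rng.random() < 0.5:
            chi = -chi
        out.append(c * chi ** m + rng.gauss(0.0, 1.0))
    return out

def estimator(phis, N):
    """Even N-point function estimator: (1/N_P) * sum of phi_i**N."""
    return sum(phi ** N for phi in phis) / len(phis)

def detection_fraction(N, p, m, k, n_points=1000, n_maps=200, seed=1):
    """Fraction of non-Gaussian realizations whose N-point estimator exceeds
    the 90th percentile of the Gaussian realizations' estimators."""
    rng = random.Random(seed)
    gauss = sorted(
        estimator([rng.gauss(0.0, 1.0) for _ in range(n_points)], N)
        for _ in range(n_maps))
    threshold = gauss[int(0.9 * n_maps)]
    hits = sum(
        estimator(sample_ng(n_points, p, m, k, rng), N) > threshold
        for _ in range(n_maps))
    return hits / n_maps
```

Scanning `detection_fraction` over even $N$ for, e.g., $p=0.7$, $m=3$, $k=1/6$ gives a Monte Carlo version of the comparison shown in figure \ref{fig:nptfns} (we have not tuned these illustrative settings to reproduce it quantitatively).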
We can also define an estimator by evaluating the log-likelihood on the map as follows: \begin{equation} \hat{\mathcal{E}}_L = \sum_{i=1}^{N_{P}} \log\left(\frac{P_{NG}(\phi_i)}{P_G(\phi_i)}\right) \end{equation} Since the $\phi_i$ are drawn independently, the expectation values of this estimator over the two distributions are $N_{P}$ times the corresponding relative entropy integrals: \begin{equation} \langle S \rangle_{NG} \equiv E[\hat{\mathcal{E}}_L]_{NG} = N_{P}\int dx P_{NG}(x) \log\left(\frac{P_{NG}(x)}{P_G(x)}\right) \end{equation} \begin{equation} \langle S \rangle_{G} \equiv -E[\hat{\mathcal{E}}_L]_G = -N_{P}\int dx P_{G}(x) \log\left(\frac{P_{NG}(x)}{P_G(x)}\right) \end{equation} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{nptfns.png} \caption{The horizontal axis shows the even $N$-point functions up to $N=20$ for the distribution with $m=3$ and $p=0.7$, with $N_{P}=1000$. The vertical axis is the fraction of samples that are above the 90th percentile of those in the Gaussian distribution, as described in the text. The dashed line is the likelihood. The low-point correlators are not optimal in this example. } \label{fig:nptfns} \end{figure} If we Taylor expand around $\kappa=0$ in our distributions, the first surviving term is of order $N_{P}\kappa^4$, matching the two-point function constraint. One can compare this to the full relative entropy, computed with respect to either the Gaussian or non-Gaussian probability. If these do not agree, then this indicates that the 2-point correlation function does not contain all the information. As was discussed in the above section, the dominance of the tail is very sensitive to model parameters. As one particular example, we expect the distribution with $m=3$ and $p=0.7$ to be tail dominated. Figure \ref{fig:nptfns} shows the results of the first 10 even $N$-point functions for that particular distribution, with $k=1/6$. The dashed line is the result of using the likelihood as our observable. Clearly, the 2-point function does not do a good job of detecting the non-Gaussianity.
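The relative-entropy integrals above can also be evaluated directly by numerical quadrature. The sketch below is our own illustration (computed per data point, i.e. without the factor of $N_{P}$; we take $P_G$ to be the unit Gaussian, the $k=0$ limit, and choose grid limits suited to fast-decaying tails such as $p=4$):

```python
import math

def p_ng(x, p, m, k, chi_lim=8.0, n=600):
    """Non-Gaussian density P_NG(x): trapezoidal quadrature over chi of
    exp(-|chi|**p - (x - k**(1/p) * chi**m)**2 / 2), normalized by
    1/(2*sqrt(2*pi)*Gamma(1/p + 1)). Assumes the chi integrand is
    negligible beyond chi_lim (true for fast-decaying tails, e.g. p = 4)."""
    c = k ** (1.0 / p)
    norm = 1.0 / (2.0 * math.sqrt(2.0 * math.pi) * math.gamma(1.0 / p + 1.0))
    h = 2.0 * chi_lim / n
    s = 0.0
    for i in range(n + 1):
        chi = -chi_lim + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-abs(chi) ** p - (x - c * chi ** m) ** 2 / 2.0)
    return norm * h * s

def relative_entropies(p, m, k, x_lim=8.0, n=600):
    """Return (<S>_G, <S>_NG) per data point: <S>_NG = int P_NG log(P_NG/P_G)
    and <S>_G = -int P_G log(P_NG/P_G), with P_G the unit Gaussian."""
    h = 2.0 * x_lim / n
    s_g = s_ng = 0.0
    for i in range(n + 1):
        x = -x_lim + i * h
        w = 0.5 if i in (0, n) else 1.0
        png = p_ng(x, p, m, k)
        pg = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
        logratio = math.log(png / pg)
        s_ng += w * png * logratio * h
        s_g -= w * pg * logratio * h
    return s_g, s_ng
```

At $k=0$ both entropies vanish (up to quadrature error), and both are nonnegative for $k\neq 0$, as expected of relative entropies.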
The optimal $N$-point functions are the 6th and the 8th in this case. { Conversely, in a non-tail-dominated model, the 2-point function would essentially lie on the likelihood line, with the successive $N$-point functions decreasing and plateauing at large $N$.} \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{likelihood1.png} \caption*{likelihood} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{2pt1.png} \caption*{2-point function} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{6pt1.png} \caption*{6-point function} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{likelihood_m3_p7.png} \caption*{likelihood} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{2pt_m3_p7.png} \caption*{2-point function} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{6pt_m3_p7.png} \caption*{6-point function} \end{subfigure} \caption{The likelihood and the distributions of the 2-point function and 6-point function estimators, for the Gaussian distribution (in yellow) and a non-Gaussian distribution (in blue). The first row is the model with $m=1$, $p=4$, $k=1/20$, $N_{P}=1000$, and the second row the model with $m=3$, $p = 0.7$, $k=1/6$, $N_{P}=1000$.} \label{fig:likelihoods} \end{figure} In Figure \ref{fig:likelihoods}, the distributions of the likelihood, 2-point function and 6-point function estimators are plotted. For the case with $m=1$ and $p=4$, the 2-point function distribution is essentially the same as the likelihood, while the 6-point function has a significantly heavier-tailed distribution. This non-Gaussian distribution of the estimator illustrates why the naive signal/noise analyzed in section \ref{sec:SNprobs}\ -- which is generically factorially enhanced -- is not by itself an indicator of observational sensitivity.
However, in the $m=3$ and $p=0.7$ model, the 2-point function is very different from the likelihood. This behavior is model-dependent, but applies to interesting models such as (\ref{hypmodel}). Our results for different parameters are summarized in Table \ref{table:modelnums}. The results for models with $m\neq 1$ and a given $p$ are similar to those with $m=1$ and $p^{\prime} = \frac{p}{m}$, and a large ratio of $m/p$ should be a good guide to the natural hyperbolic model (\ref{hypmodel}) \cite{PBHpaper}. \begin{table} \begin{center} \begin{tabular}{c | c | c || c | c | c} m & p & k & $\langle S \rangle_G$ & $\langle S \rangle_{NG}$ & Best $N$-pt \\\hline 1 & $1/16$ & $1/29$ & 0.86 & 73 & 8-12 \\ 1 & $1/8$ & $1/17$ & 0.86 & 35 & 6-10 \\ 1 & $1/4$ & $1/10$ & 1.1 & 5.8 & 4-6 \\ 1 & $1/2$ & $1/6$ & 1.9 & 2.5 & 2-4 \\ 1 & $1$ & $1/5$ & 1.4 & 1.5 & 2 \\ 1 & $4$ & $1/20$ & 1.3 & 1.4 & 2 \\\hline 3 & $0.7$ & $1/6$ & 1.0 & 13.7 & 6-8 \end{tabular} \end{center} \caption{Numerical results for different distributions with $N_{P} = 1000$ and $10000$ samples. $S$ is the relative entropy, computed either with respect to the Gaussian or non-Gaussian distribution as indicated in the columns, including the factor of $N_{P}$. We chose the couplings $k$ such that the Gaussian-weighted relative entropy is order 1, i.e. a barely detectable difference between the two distributions. {For tail-dominated models, we find a discrepancy between the two relative entropies, with a large relative entropy weighted with the non-Gaussian distribution.}} \label{table:modelnums} \end{table} \section{Conclusions and future directions} In this work, we showed that in the multifield inflationary context, factorial enhancement of $N$-point correlation functions survives quantum effects and applies in the regime of kinematic interest. This is a basic question in quantum field theory, motivated by the factorial enhancement known in particular parameter and kinematic regimes.
It is simpler to analyze more fully and exploit in the regime of physical interest in our cosmological setting than in collider physics (although in that context this question has stimulated a number of interesting results \cite{Voloshin}\cite{Browntree} \cite{Argyresetal} \cite{Sonetal} \cite{Khozeetal}\cite{criticalrefs}). The basic reason for this is the dilution of gradients, along with the calculably stochastic behavior of the system that applies in some regimes of couplings. Specifically, we derived and applied the enhanced amplitude of these large $N$-point functions in the study of primordial non-Gaussianity. We encountered some subtleties along the way, but were left with interesting model-dependent possibilities for substantially enhanced sensitivity beyond low-point correlators. It would be interesting to explore the phenomenological implications in more depth, beyond the enhanced primordial black hole production addressed recently in \cite{PBHpaper}. The models we analyzed in this work contain additional fields during inflation, which is reasonable given the multiple fields in the Standard Model as well as the hidden sectors that often arise in string theory. This enabled us to apply the theory of stochastic inflation for certain windows of couplings, and to include mixing interactions among field sectors in all cases. A natural question that arises is whether this effect persists when any additional fields are too heavy during inflation to have such effects, reducing the system effectively to a single-field model of the primordial perturbations. We leave these questions to future work, perhaps building on recent progress on the calculation of multipoint correlators in other areas of quantum field theory \cite{criticalrefs}.
In the present work, the kinematic simplicity and resulting calculability of the ultralocal multifield dynamics in early universe inflation enabled us to settle the factorial enhancement question in the affirmative in this context. \bigskip \noindent{\bf Acknowledgments} We thank Mehrdad Mirbabayi, Leonardo Senatore, and Matias Zaldarriaga for extensive discussions and collaboration on this subject. We also thank N. Arkani-Hamed, J. R. Bond, J. Cardy, P. Creminelli, R. Flauger, V. Gorbenko, Z. Komargodski, M. Munchmeyer, D. Murli, J. Polchinski, S. Shenker, K. Smith, D. Spergel, J. Thompson, and B. Wandelt for useful related discussions. This research was supported in part by the Simons Foundation Origins of the Universe Initiative (modern inflationary cosmology collaboration), and by a Simons Investigator award. GP and ES are grateful to the KITP for hospitality during part of this project.